\section{Introduction}\label{Sec:introduction}
CW~Leo (a.k.a. IRC\,+10216) is the nearest carbon-rich Asymptotic Giant Branch (AGB) star, at a distance of around 120 to 150\,pc \citep[][and references therein]{Menshchikov2001A&A...368..497M, Groenewegen2012A&A...543L...8G}. During the AGB phase, the evolution is determined by mass loss, with the mass-loss rate being significantly larger than the nuclear burning rate \citep{Lagadec2008MNRAS.383..399L}. The generally accepted picture is that mass loss proceeds via a two-step process: (1)\, pulsations lift photospheric layers to regions that are cool and dense enough for molecules to nucleate and form dust species, and (2)\, radiation pressure on the dust grains pushes the material outwards, generating a wind.
However, even for the nearest carbon-rich AGB star it is still not well established whether the mass loss is dominated by homogeneous, isotropic processes or whether small-scale irregularities and instabilities dictate the overall appearance.
Specifically for CW Leo, it has been shown that at \textit{milli-arcsecond scales} dust clumps formed in the dust-formation region move outwards with the expanding wind of the star, with clump velocities varying between 7.9 and 17.5\,km/s \citep{Tuthill2000ApJ...543..284T, Weigelt2002A&A...392..131W, Menut2007MNRAS.376L...6M}. Formation of new dust clumps at irregular time intervals might explain why the illumination of some dust features fades during some periods. At \textit{arcsecond scales}, a bipolar morphology is seen in optical data, with a bipolar axis at a position angle estimated between 8\deg\ \citep{Mauron2000A&A...359..707M} and 22\deg\ \citep{Leao2006A&A...455..187L} (with respect to north). This feature probably indicates that scattering of stellar photons is more efficient in the polar direction. Optical and near-infrared data provided evidence for an equatorial density enhancement in the form of a geometrically thick disk \citep{Dyck1987PASP...99...99D} or dusty
torus \citep{Skinner1998MNRAS.300L..29S, Murakawa2005A&A...436..601M, Jeffers2014} seen almost edge-on.
\textit{Beyond 1\hbox{$^{\prime\prime}$}}, multiple, almost concentric, shells (or arcs) are detected on top of the smooth extended envelope \citep{Mauron1999A&A...349..203M, Mauron2000A&A...359..707M, Leao2006A&A...455..187L, Dinh2008ApJ...678..303D, Fong2003ApJ...582L..39F, Decin2011A&A...534A...1D}. The nested shells are composed of thinner, elongated arcs, with a typical width of $\sim$1.6\hbox{$^{\prime\prime}$}\ and separated by $\sim$3\hbox{$^{\prime\prime}$}-20\hbox{$^{\prime\prime}$}, corresponding to $\sim$150--1000\,yr. These partial shells have a typical angular extent of $\sim$30\deg\ to 90\deg\ and are detected up to a distance of 320\hbox{$^{\prime\prime}$}\ \citep{Decin2011A&A...534A...1D}. Around 450\hbox{$^{\prime\prime}$}, Herschel data have revealed the presence of a bow shock \citep{Ladjal2010A&A...518L.141L}.
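The conversion from shell spacing to the quoted time intervals can be verified with a quick back-of-the-envelope sketch; the distance of 135\,pc and expansion velocity of 14.5\,km/s used below are representative assumed values, not fitted quantities:

```python
# Back-of-the-envelope: time for the wind to cross an angular shell separation.
# Assumed (illustrative) values: distance 135 pc, expansion velocity 14.5 km/s.
AU_M = 1.496e11   # astronomical unit [m]
YR_S = 3.156e7    # Julian year [s]

def shell_interval_yr(sep_arcsec, dist_pc=135.0, v_kms=14.5):
    # 1 arcsec at 1 pc subtends 1 AU, so sep["] * d[pc] gives the separation in AU
    sep_m = sep_arcsec * dist_pc * AU_M
    return sep_m / (v_kms * 1e3) / YR_S

print(round(shell_interval_yr(3.0)))   # ~130 yr
print(round(shell_interval_yr(20.0)))  # ~880 yr
```

The 3\hbox{$^{\prime\prime}$}--20\hbox{$^{\prime\prime}$}\ spacings thus bracket the $\sim$150--1000\,yr range quoted above.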
In this paper, we present the first ALMA band 9 observations of the inner envelope of CW~Leo at a spatial resolution of $\sim$0.2\hbox{$^{\prime\prime}$}\ \citep[$\sim$10\,\Rstar, with 1\Rstar$\sim 5 \times 10^{13}$\,cm;][]{DeBeck2012A&A...539A.108D}.
In Sec.~\ref{Sec:ALMA_data} we describe the ALMA observations and data reduction including imaging and image fidelity. Sec.~\ref{Sec:astronometry} reports on the astrometry and continuum properties. The ALMA spectra are described in Sec.~\ref{Sec:spectral_results} and the channel maps and PV-diagrams are shown in Sec.~\ref{Sec:data_results}. A qualitative interpretation of the ALMA $^{13}$CO J=6-5 channel maps is given in Sec.~\ref{Sec:qualitative}. Preliminary morpho-kinematical modeling of the ALMA data is presented in Sec.~\ref{Sec:Shape}. The results are discussed in Sec.~\ref{Sec:Discussion} and the conclusions are given in Sec.~\ref{Sec:conclusions}.
\section{ALMA observations and data reduction} \label{Sec:ALMA_data}
\subsection{ALMA observations}
IRC\,+10216 was observed by ALMA on 1 December 2012 for proposal code
2011.0.00277.S (Cycle~0). The total time on IRC\,+10216 was 17.5\,min spread over 35\,min. The
precipitable water vapour was 0.55\,mm. The 18 antennas provided good
data, with minimum and maximum baselines of 25\,m and 340\,m.
Four 1.875-GHz frequency bands (spectral windows, spw)
were used for science, centred at 643.6645, 646.4827, 658.0587 and
661.1205\,GHz (ALMA Band 9). Each spectral window was divided into 3840 channels of 488\,kHz. After Hanning smoothing,
the effective velocity resolution is 0.455\,km s$^{-1}$. The usual ALMA instrumental corrections were applied including
those derived from water vapour radiometry and system temperature
measurements. The Cycle 0 flux scale was derived with respect to Titan. The
flux density scale is formally accurate to 20\% in Band 9; the accuracy of our data might even be
better, as the flux densities derived for other calibration sources were consistent with values measured during
other observations. The compact, bright QSOs 3C279 and J0854+201 were
used as bandpass and phase-reference calibrators,
respectively; both were also used to calibrate the pointing.
\subsection{Data reduction} \label{Sec:data_reduction}
Normal data reduction procedures were performed in CASA \citep{McMullin2007ASPC..376..127M, 2011ascl.soft07013I}\footnote{Credit: International consortium of scientists based at the National Radio Astronomy Observatory (NRAO), the European Southern Observatory (ESO), the National Astronomical Observatory of Japan (NAOJ), the CSIRO Australia Telescope National Facility (CSIRO/ATNF), and the Netherlands Institute for Radio Astronomy (ASTRON) under the guidance of NRAO.}. Inspection of the phase reference source data showed that the phase of
the first few scans could be connected smoothly, without wrapping, but
there were apparent discontinuities in the second half of the data.
We applied the phase-reference solutions to the target data and
inspected the first half of the corrected data to identify the
line-free channels. We used these to make a continuum image, which provided a model for
self-calibration of all the target continuum data. The residual
scatter in the final self-calibration solutions was about 5\deg\ in
phase and 10\% in amplitude. Using the expressions from \citet{Perley1999ASPC..180..275P},
this corresponds to a dynamic range limitation of $\sim$150--300, depending
on whether the errors are correlated between scans. We applied the solutions to all the channels of the target data. Finally, we made a preliminary cube to refine the selection of
continuum channels, subtracted these and made the final line and
continuum images.
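As a rough illustration of how such dynamic-range numbers arise, the scaling with antenna number and residual phase error can be sketched as follows; this follows the general form of antenna-based error estimates (the exact prefactors in Perley 1999 differ, and the scan count used below is illustrative):

```python
import math

def dynamic_range(n_ant, sigma_phase_deg, n_scans=1):
    """Approximate dynamic range set by residual antenna-based phase errors.
    n_scans=1: errors fully correlated between scans; n_scans>1: independent."""
    sigma = math.radians(sigma_phase_deg)
    n_baselines = n_ant * (n_ant - 1) / 2
    return math.sqrt(n_baselines * n_scans) / sigma

print(dynamic_range(18, 5.0))             # ~140 for correlated errors
print(dynamic_range(18, 5.0, n_scans=4))  # ~280 for 4 independent scans (illustrative)
```

With 18 antennas and the $\sim$5\deg\ residual phase scatter, this reproduces the quoted $\sim$150--300 range depending on the assumed error correlation.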
We used CASA to Fourier transform
and clean each channel to produce image cubes for all spectral
windows. During this process, we corrected for the motion of the Earth relative to the Local Standard of Rest.
For the identified
lines, we extracted the channels covering that line and set the
velocity to zero at its rest frequency. The noise in the image cubes is 0.2 -- 0.3\,Jy beam$^{-1}$ per 488\,kHz
channel\footnote{The requested sensitivity of our Cycle~0 proposal was 0.04\,Jy beam$^{-1}$ per km\,s$^{-1}$, which would have required six times as long on-target.}. The noise is worst near the 658\,GHz atmospheric water line
and is also higher in bright channels due to dynamic range
limitations, as well as being slightly increased by continuum
subtraction and by residual calibration errors.
The short duration of the observations provided sparse visibility
plane coverage, most noticeable at the highest spectral resolution.
The full-resolution images were made with partial uniform weighting to
give a synthesized beam 0\farcs42$\times$0\farcs24 for a position angle of 117\deg\ (measured
anticlockwise from north). This is similar to the natural beam
position angle, but improves the resolution without increasing the
noise appreciably\footnote{Natural weighting simply takes each data
point at the sampled position in the visibility plane; uniform
weighting gives equal weight to each occupied cell of the visibility
plane (down-weighting densely sampled regions), and the intermediate
scheme used here adjusts the weight according to the local sample
density.}. A beam of
0\farcs4$\times$0\farcs4 was used for the 3-channel averaged images which have a resolution of $\sim$0.7\,km/s and a noise value of 0.13--0.18\,Jy beam$^{-1}$ (see Sect.~\ref{Sec:channel_maps}).
The shortest baselines in our observations impose a limit of
$\sim3$\hbox{$^{\prime\prime}$}\ on the size of any structure which can be imaged
accurately. Emission on scales 3\hbox{$^{\prime\prime}$}--6\hbox{$^{\prime\prime}$}\ is imperfectly sampled and
emission that is smoothly extended over $>6$\hbox{$^{\prime\prime}$}\ is undetectable, even
if it is much brighter than our limit ($\sim1$ Jy beam$^{-1}$ at full
resolution) for compact emission.
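These angular scales follow, to order of magnitude, from $\lambda/b$ at 650\,GHz for the extreme baselines; a sketch (the $\lambda/b$ rule-of-thumb is an approximation, and the exact limits depend on weighting and the uv coverage):

```python
C = 2.998e8              # speed of light [m/s]
RAD_TO_ARCSEC = 206264.8

def scale_arcsec(freq_hz, baseline_m):
    # lambda / b: characteristic angular scale probed by a baseline
    return C / freq_hz / baseline_m * RAD_TO_ARCSEC

print(scale_arcsec(650e9, 340.0))  # ~0.28": finest scale (longest baseline)
print(scale_arcsec(650e9, 25.0))   # ~3.8": largest well-sampled scale (shortest baseline)
```

The first value is consistent with the 0\farcs42$\times$0\farcs24 synthesized beam, the second with the $\sim$3\hbox{$^{\prime\prime}$}\ limit on accurately imaged structure.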
These limitations (dynamic range and sparse visibility plane coverage)
produce two kinds of artifacts. The dynamic range limitations are
apparent as residual sidelobes of the synthesized beam: the intensity is
proportional to the channel peak and it is placed symmetrically w.r.t.\ the channel peak at a constant (scaled
with frequency) spacing. We took care to check that any apparently symmetric
structures were not in fact co-incident with sidelobes. The effects of poorly sampled emission are
more complicated, being the convolution of the actual distribution
with the sidelobes. If the true distribution were, for example, a disc $>$6\hbox{$^{\prime\prime}$}\
across in a given channel, the inner $\sim$3\hbox{$^{\prime\prime}$}\ would be faithfully
imaged, but at larger separations negative and positive ring-like
artifacts (known as the `cereal bowl' effect) might be visible.
Nevertheless, smaller and larger rings (or patches of emission) seen in the channel maps, with sizes in the 3\hbox{$^{\prime\prime}$}--6\hbox{$^{\prime\prime}$}\ range,
result from actual structures of correspondingly smaller or larger size, which allows us to at least partly reconstruct the actual morphological structures.
\subsection{Position measurements} \label{Sec:pos_measurement}
We measured the positions and extents of emission by fitting 2D
Gaussian components to the continuum peak and to the zeroth moment of
each line (see Sect.~\ref{Sec:astronometry}). The noise-based uncertainty (for a
sparsely-filled snapshot) is given by the beam size divided by the
signal-to-noise ratio (before self-calibration), and gives the uncertainty
in comparing positions within the same data set. However, if
the actual flux distribution is non-Gaussian, the uncertainties are larger.
\section{Imaging results} \label{Sec:astronometry}
In this section, we discuss the astrometry and stellar and dust properties of CW~Leo as deduced from the ALMA images in band 9.
\subsection{Astrometry}
The position of the 650\,GHz continuum peak is 09:47:57.4553
+13:16:43.749 (J2000), epoch 2012.92. Four factors affect the
astrometry. Uncertainties in the antenna positions and the phase
reference source coordinates are less than a few mas. The noise-based
uncertainty (for a sparsely-filled snapshot) is given by the beam size
divided by the signal-to-noise ratio (before self-calibration),
$\sim$$0\farcs02$ for the continuum peak. The uncertainty in measuring the extent of
emission is $\sim$2 times the position uncertainty. The phase reference source is
14$^{\circ}$ from IRC\,+10216, and the timescale of phase changes due to
the atmosphere leads to the dominant uncertainty of $0\farcs25$.
\citet{Menten2012A&A...543A..73M} used the VLA in 2006.16 to measure the position and
proper motion of IRC\,+10216 which would give 09:47:57.4417
+13:16:43.896 in 2012.92 (uncertainty $0\farcs012$). The discrepancy
from the position measured by ALMA is ($0\farcs20$, $-0\farcs15$),
i.e.\ within the uncertainties.
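As a consistency check, the magnitude of this offset can be compared directly to the dominant atmospheric term:

```python
import math

# ALMA-minus-VLA position offset components in arcsec (from the comparison above)
offset = math.hypot(0.20, -0.15)
print(offset)  # 0.25 -- equal to the quoted 0.25" atmospheric phase uncertainty
```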
The position fitted to the zeroth moment of each line apart from
$^{13}$CO is within 30 mas (the combined position uncertainties) of
the continuum peak; the mean position of all these lines is within 6
mas. The $^{13}$CO peak is offset by ($-$67, $-$37) mas, but this is
probably due to the relatively poor fit of a Gaussian component to the
complex and extended CO emission (see Sec.~\ref{Sec:channel_maps}).
\subsection{Stellar and dust properties} \label{Sec:dust}
The total flux density within the $3\sigma$ contour is 5.66\,Jy, see Fig.~\ref{Fig:dust_continuum}.
The continuum peak is 4.362\,Jy beam$^{-1}$ with a noise uncertainty of
0.023\,Jy beam$^{-1}$ and a calibration uncertainty $<$0.87\,Jy\,beam$^{-1}$. The size (deconvolved from the beam) is $120\pm4$\,mas by
$108\pm5$\,mas at a position angle (PA) of $128\pm20^{\circ}$. The ALMA total flux density at 650\,GHz is slightly higher than, though consistent within the uncertainties with, the continuum flux density derived by
\citet{Young2004ApJ...616L..51Y} using the SMA at 680\,GHz: they found a compact unresolved component in their $\sim$2\hbox{$^{\prime\prime}$}\ full width at half maximum (FWHM) beam with a flux density of $3.9\pm1.2$\,Jy.
\begin{figure}[htp]
\includegraphics[width=0.48\textwidth]{IRC10216_CONT.pdf}
\caption{Dust continuum image of CW~Leo for a beam of $0\farcs42\times0\farcs24$ (shown as an ellipse in the bottom left corner). The contour levels are at [$-$1, 1, 2, 4, 8, 16, 32, 64, 128, 256] $\times$ 15\,mJy beam$^{-1}$ (3\,${\sigma_{\mathrm{rms}}}$).}
\label{Fig:dust_continuum}
\end{figure}
The stellar photospheric emission contributes part of the total flux density at 650\,GHz. Following \citet{Jenness2002MNRAS.336...14J}, the ALMA data were taken near maximum light (at phase $\varphi$\,=\,0.98), which would imply a stellar luminosity of $\sim$15\,600\,\Lsun\ according to \citet{DeBeck2012A&A...539A.108D}. For an effective temperature of 2330\,K \citep{DeBeck2012A&A...539A.108D}, this yields a stellar diameter of 48\,mas.
\citet{Groenewegen1997A&A...322L..21G} gives a stellar diameter of 70.2\,mas at 243\,GHz for an effective temperature
$T_{\mathrm{eff}}$ of 2000\,K. \citet{Menten2012A&A...543A..73M} found a diameter of 83\,mas at 43\,GHz. Thus, the star is likely to be unresolved at our spatial resolution.
We used two approaches to estimate the stellar flux contribution at 650\,GHz. First, using the IRAM Plateau de Bure Interferometer (PdBI), \citet{Lucas1999IAUS..191..305L} found a `point source' at 89\,GHz (3.5\,mm) and 242\,GHz (1.3\,mm) with flux densities of $65\pm7$\,mJy and $487\pm70$\,mJy, respectively. They identified the point source as the photospheric emission. Using a spectral index of $1.96$ ($\pm0.04$), as derived by \citet{Menten2006}, this yields a stellar contribution of 3.3\,Jy at 650\,GHz. The uncertainty on this value is estimated at around 20\% and arises from the variability of the pulsating star and the uncertainty on the spectral index.
Second, the stellar contribution at 650\,GHz can also be calculated from the effective temperature and stellar diameter, under the assumption of a blackbody spectrum. Using the values obtained by \citet{DeBeck2012A&A...539A.108D} at maximum luminosity (see previous paragraph), the stellar flux at 650\,GHz is 1.3\,Jy, while the values derived by \citet{Groenewegen1997A&A...322L..21G} yield a stellar flux of 2.3\,Jy.
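Both estimates can be reproduced from the numbers quoted above; a sketch (not the authors' code) combining the blackbody calculation with the spectral-index extrapolation of the first approach:

```python
import math

H, K_B, C = 6.626e-34, 1.381e-23, 2.998e8
MAS_TO_RAD = math.pi / (180 * 3600 * 1000)

def stellar_flux_jy(nu_hz, t_eff_k, diam_mas):
    """Blackbody flux density of a uniform stellar disk, in Jy."""
    x = H * nu_hz / (K_B * t_eff_k)
    planck = 2 * H * nu_hz**3 / C**2 / (math.exp(x) - 1)   # W m^-2 Hz^-1 sr^-1
    omega = math.pi / 4 * (diam_mas * MAS_TO_RAD) ** 2     # uniform-disk solid angle [sr]
    return planck * omega / 1e-26                          # 1 Jy = 1e-26 W m^-2 Hz^-1

print(stellar_flux_jy(650e9, 2330, 48.0))   # ~1.3 Jy (De Beck et al. parameters)
print(stellar_flux_jy(650e9, 2000, 70.2))   # ~2.3 Jy (Groenewegen parameters)

# First approach: extrapolating the 242 GHz `point source' with spectral index 1.96
print(0.487 * (650 / 242) ** 1.96)          # ~3.4 Jy, close to the ~3.3 Jy quoted
```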
The difference between the stellar flux estimates mainly arises from the uncertainties in the angular diameter; the uncertainty on the effective temperature only gives an uncertainty in the flux of 0.2\,Jy.
These values for the stellar contribution are a factor $\sim$1.4--2.5 lower than the value obtained by extrapolating the `point-source' values of \citet{Lucas1999IAUS..191..305L}. This might mean that part of the `point-source' flux densities at 89\,GHz (3.5\,mm) and 243\,GHz (1.3\,mm) does not come from the central star, resulting in too high an estimate for the stellar contribution at 650\,GHz when using the data of \citet{Lucas1999IAUS..191..305L}. Following \citet{Reid1997ApJ...476..327R, Groenewegen1997A&A...317..503G, Groenewegen1997A&A...322L..21G}, we suggest that free-free emission at wavelengths beyond 800\,$\mu$m might (at least partly) contribute to the high flux density values of \citet{Lucas1999IAUS..191..305L}.
The estimated stellar flux density at 650\,GHz of $\sim$2.3\,Jy \citep[adopting the parameters of][]{Groenewegen1997A&A...322L..21G} is significantly lower than the flux density of 5.66\,Jy obtained from the ALMA data. Since free-free emission is negligible at 650\,GHz \citep{Groenewegen1997A&A...317..503G}, we conclude that the extended emission seen in the ALMA data is due to dust emission alone.
We estimated the 650\,GHz emission from dust by subtracting a 2.3\,Jy point source, convolved with the natural synthesised beam, from the continuum image, leaving a residual of 3.36\,Jy (see Fig.~\ref{Fig:Flux-4}). The main sources of uncertainty are the 20\% uncertainty on the ALMA flux scale and the estimated stellar flux density. The residual emission is extended and clearly shows an asymmetric distribution, with a central peak flux density of $\sim$2\,Jy. The lowest $3\sigma$
contour in Fig.~\ref{Fig:Flux-4} corresponds to a position uncertainty of $\sim$1/3 of the beam size,
so although the apparent southern bifurcation may be unreliable, the
overall extension is real. The inner emission appears to be elongated in a direction similar to the $\sim$128\deg\ position angle of the equatorial
density enhancement detected in scattered light and polarimetry data \citep{Skinner1998MNRAS.300L..29S, Murakawa2005A&A...436..601M, Jeffers2014}. However, this is close to the direction of
elongation of the natural synthesised beam, and may be coincidence.
The faintest emission is, nonetheless, elongated in an almost
orthogonal direction, at position angles of $\sim$20\deg\ and 200\deg, corresponding to
the bicone detected by \citet{Skinner1998MNRAS.300L..29S, Mauron2000A&A...359..707M} (see Sect.~\ref{Sec:qualitative}).
\begin{figure}[htp]
\includegraphics[width=.48\textwidth]{IRC10216_CONT-2_3.pdf}
\caption{Flux density after subtracting a 2.3\,Jy star. The contour levels are at [$-$1, 1, 2, 4, 8, 16, 32, 64, 128] $\times$ 15\,mJy beam$^{-1}$ (3\,${\sigma_{\mathrm{rms}}}$). }
\label{Fig:Flux-4}
\end{figure}
The dust nucleation zone starts around 5\,\Rstar\ \citep[or 100--150\,mas,][]{Decin2010A&A...518L.143D}. This is comparable to the resolution of these images, but we cannot distinguish faint details so close to the strong stellar peak. The
outer radius of the dust detectable by ALMA is set by the radius at
which it becomes too diffuse or too cool to be detectable.
Subtracting any realistic estimate for the
stellar contribution (1.3--3.3\,Jy) leaves an approximately flat or centrally-peaked flux
distribution within the inner $\sim$160\,mas radius, suggesting that the dust could be close to optically thick. If the stellar contribution were $\ga$4\,Jy, the dust emission would be characterized by a central hollow.
Using the continuum image after subtracting a 2.3\,Jy star, and assuming optically thin dust with an opacity $\kappa_\lambda$ at 650\,GHz of 90\,cm$^2$/g \citep{Mennella1998ApJ...496.1058M} and the dust temperature distribution derived by \citet{Decin2010Natur.467...64D}, we obtain a dust mass within 0.8\hbox{$^{\prime\prime}$}\ of 3--6$\times$10$^{-7}$\,\Msun, corresponding to a dust mass-loss rate of 1.1--2.2$\times$10$^{-8}$\,\Msun/yr. This estimate for the dust mass-loss rate is a factor $\sim$4 lower than that derived by \citet{Decin2010Natur.467...64D} from modeling the spectral energy distribution. The large error margin is due to the flux scale uncertainty, the unknown stellar contribution, and the unknown dust composition and geometry in the inner regions. Assuming a constant dust velocity equal to the terminal velocity of 14.5\,km/s, this yields a dust optical depth of $\sim$0.1. Since the dust velocity might be overestimated in the wind acceleration region, the derived dust optical depth might be an underestimate by a factor of a few. Future multi-wavelength high-resolution ALMA observations will provide a spectral index map and thus a better measurement of the optical depth.
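The order of magnitude of the dust mass can be checked with the standard optically thin estimate $M_{\rm d} = F_\nu d^2 / (\kappa_\nu B_\nu(T_{\rm d}))$; the single representative dust temperature of 700\,K used below is an assumption for illustration, whereas the paper uses the full temperature profile of \citet{Decin2010Natur.467...64D}:

```python
import math

H, K_B, C = 6.626e-34, 1.381e-23, 2.998e8
PC_M, MSUN_KG = 3.086e16, 1.989e30

def planck(nu_hz, t_k):
    x = H * nu_hz / (K_B * t_k)
    return 2 * H * nu_hz**3 / C**2 / (math.exp(x) - 1)

def dust_mass_msun(flux_jy, dist_pc, kappa_cm2_g, t_dust_k, nu_hz=650e9):
    """Optically thin dust mass: M = F_nu d^2 / (kappa_nu B_nu(T_dust))."""
    f = flux_jy * 1e-26            # W m^-2 Hz^-1
    d = dist_pc * PC_M             # m
    kappa = kappa_cm2_g * 0.1      # cm^2/g -> m^2/kg
    return f * d**2 / (kappa * planck(nu_hz, t_dust_k)) / MSUN_KG

# Residual dust flux 3.36 Jy, d = 130 pc, kappa = 90 cm^2/g;
# T_dust = 700 K is a single illustrative value, not the paper's profile.
print(dust_mass_msun(3.36, 130, 90, 700))  # ~3e-7 Msun, within the quoted 3-6e-7 range
```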
\section{Spectral results} \label{Sec:spectral_results}
In this section, we present the ALMA spectra of CW Leo in band~9 around 650\,GHz (Sect.~\ref{Sec:ALMA_spectra}). For each detected emission line, we determine the line strength, line width and spatial extension. The line strengths are compared to the line strengths as deduced from Herschel/HIFI observations in the same frequency window (Sect.~\ref{Sec:Herschel_ALMA}). The line widths and spatial extension are used to determine the velocity structure of CW~Leo in the inner wind region, where the wind is accelerated from the sound velocity to the terminal velocity (Sect.~\ref{Sec:vgas}).
\subsection{ALMA spectra of CW~Leo in band 9} \label{Sec:ALMA_spectra}
\begin{figure*}[htp]
\begin{minipage}[t]{.48\textwidth}
\centerline{\resizebox{1.2\textwidth}{!}{\includegraphics[angle=180]{spectra_spw0.pdf}}}
\end{minipage}
\hfill
\begin{minipage}[t]{.48\textwidth}
\centerline{\resizebox{1.2\textwidth}{!}{\includegraphics[angle=180]{spectra_spw1.pdf}}}
\end{minipage}
\begin{minipage}[t]{.48\textwidth}
\vspace*{-1cm}
\centerline{\resizebox{1.2\textwidth}{!}{\includegraphics[angle=180]{spectra_spw2.pdf}}}
\end{minipage}
\hfill
\begin{minipage}[t]{.48\textwidth}
\vspace*{-1cm}
\centerline{\resizebox{1.2\textwidth}{!}{\includegraphics[angle=180]{spectra_spw3.pdf}}}
\end{minipage}
\vspace*{-2ex}
\caption{Integrated flux density of the four spectral setups, extracted within a circle of 1.2\hbox{$^{\prime\prime}$}\ in radius centred on the continuum peak. The molecular lines are indicated in color; unidentified lines are marked as `U'. Note that the noise around 658\,GHz is due to the strong atmospheric opacity at this frequency. The frequency values are corrected for the Earth's motion.}
\label{Fig:spectra_ALMA}
\end{figure*}
Fig.~\ref{Fig:spectra_ALMA} shows the spectra of the four spectral bands extracted using a circular aperture of 1.2\hbox{$^{\prime\prime}$}\ centered on the dust continuum peak. In the ALMA data, 25 molecular line transitions are detected, of which 7 are unidentified (see Table~\ref{Table:FWHM}). The identified emission lines arise from $^{13}$CO, C$^{18}$O, SiO, $^{29}$SiO, SiS, $^{29}$SiS, $^{30}$SiS, Si$^{33}$S, Si$^{34}$S, $^{29}$Si$^{34}$S, SiC$_2$, $^{13}$CS, and AlF.
To determine the velocity structure in the wind (from the half width of the emission lines at continuum level) and the central line frequency, each line has been fitted using the soft-parabola line profile \citep[Eq.~1 in][]{Olofsson1993ApJS...87..267O} (see Table~\ref{Table:FWHM}). The soft-parabola fit to the ALMA data yields a mean difference between the rest frequencies and the ALMA frequencies of $-$57.4\,MHz or $-26.41$\,km/s, in accordance with the local standard of rest velocity derived by \citet{DeBeck2012A&A...539A.108D} from the high-spectral resolution Herschel/HIFI data.
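For reference, a minimal sketch of a soft-parabola profile (one common parameterization; the exponent convention of Olofsson et al. 1993 may differ in detail), together with the frequency-offset-to-velocity conversion, where the mid-band frequency of 652\,GHz is an assumption:

```python
def soft_parabola(v, v_star, v_line, amp, gamma):
    """Soft-parabola line shape: amp * [1 - ((v - v_star)/v_line)^2]^(gamma/2)
    for |v - v_star| < v_line, zero outside (one common parameterization)."""
    u = (v - v_star) / v_line
    return amp * (1.0 - u * u) ** (gamma / 2.0) if abs(u) < 1.0 else 0.0

# Mean fitted frequency offset -> LSR velocity (mid-band ~652 GHz assumed)
C_KMS = 2.998e5
v_lsr = -57.4e6 / 652.0e9 * C_KMS
print(v_lsr)  # ~ -26.4 km/s
```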
The half width of the emission lines, $v_{\rm{line}}$, is determined by the expansion velocity, $v_{\rm{exp}}(r)$, and turbulent velocity, $v_{\rm{turb}}(r)$, with $r$ being the radial distance. The turbulent velocity in stellar winds is not well established. \citet{Keady1988ApJ...326..832K} propose a turbulent velocity in the inner wind region of CW~Leo of $\sim$5\,km/s and a terminal turbulent velocity of $\sim$1.0\,km/s; \citet{DeBeck2012A&A...539A.108D} derived a value of 1.5\,km/s from a fit to high-resolution Herschel/HIFI CO lines.
The derived half
line widths, $v_{\rm{line}}$, range between $\sim$3.5\,km/s and $\sim$15\,km/s. The velocity structure as derived from the ALMA data is discussed in detail in Sect.~\ref{Sec:vgas}.
To determine the spatial extension, we have computed the first moment for the inner 10\,km\,s$^{-1}$ of each line and have measured the spatial FWHM by fitting a 2-D Gaussian component and deconvolving the restoring beam (see Table~\ref{Table:FWHM}). We restrict the spatial fitting to the inner 10\,km/s to avoid line blending and because, for an expanding shell, the emission at line centre is the most spatially extended, whilst the extreme-velocity emission arises closer to the line of sight towards the star. All lines have a high enough signal-to-noise ratio to be spatially resolved, apart from the faintest unidentified line (`U' line) at 645.507\,GHz. They are all compact enough for a good fit except for $^{13}$CO, whose apparent spatial FWHM, $s_{\mathrm{FWHM}}$, represents only the inner peak. There is likely also extended C$^{18}$O emission below our detection threshold. Table~\ref{Table:FWHM} also lists the (not deconvolved) largest angular size (LAS) of the central emission brighter than 2\,Jy km/s per beam (approximately 3$\sigma_{\mathrm{rms}}$). In the case of $^{13}$CO J=6-5 there is also non-contiguous emission extended throughout the field of view (see Sect.~\ref{Sec:channel_maps}). As illustrated in Fig.~\ref{Fig:Eupper_FWHM}, the extension scales inversely with the upper state energy, $E_{\mathrm{upper}}$. The anomalies in this plot are due to blending and to the fact that the ALMA data resolve out a lot of the $^{13}$CO flux (see Sect.~\ref{Sec:Herschel_ALMA}).
\begin{table*}[htp]
\caption{Spectral and spatial extent of the detected lines. First column designates a number to each transition, second and third columns list the molecular line, fourth column the line rest frequency $\nu_0$, fifth column the upper energy level $E_{\rm{upper}}$, sixth column the integrated flux density of the fitted Gaussian component $S_{\rm{int}}$ and its uncertainty between parentheses, seventh column the spatial FWHM $s_{\rm{FWHM}}$ and its uncertainty, eighth column the LAS above 2\,Jy km/s per beam, ninth column the spectral FWHM, and the last column the half line width at continuum level $v_{\rm{line}}$ (indicative of the expansion and turbulent velocity) and its uncertainty.}
\label{Table:FWHM}
\begin{tabular}{llllcrccll}
\hline
\rule[0mm]{0mm}{5mm} & {molecule} & \multicolumn{1}{c}{transition} & \multicolumn{1}{c}{$\nu_0$} & $E_{\rm{upper}}$ & \multicolumn{1}{c}{$S_{\mathrm{int}}$} & $s_{\mathrm{FWHM}}$ & LAS & \multicolumn{1}{c}{FWHM} & \multicolumn{1}{c}{$v_{\rm{line}}$} \\
\rule[-3mm]{0mm}{3mm} & & & \multicolumn{1}{c}{[GHz]} & \multicolumn{1}{c}{[K]} & [Jy\,MHz] & [mas] & [arcsec] & \multicolumn{1}{c}{[km/s]} & \multicolumn{1}{c}{[km/s]} \\
\hline
\rule[0mm]{0mm}{5mm}1 & $^{29}$SiO &15-14 & 642.8080$^*$ & 246 & 413 (9) & 727 (15) & 2.0 & 25.78$^*$ & 14.36 (0.26)$^*$\\
2 &'U' & & 642.845$^*$ & -- & 167 (4) & 343 (14) & 1.2 & 25.78$^*$ & 14.36 (0.26)$^*$\\
3 & 'U' & & 642.96 & -- & 124 (2) & 341 (18) & 1.3 & 6.52 & 5.13 (0.70)\\
4 & 'U' & & 643.06 & -- & 75 (2) & 253 (26) & 1.2 & 10.00 & 7.92 (0.17)\\
5 & Si$^{33}$S & 36-35 & 643.2608 & 570 & 143 (2) & 386 (14) & 1.3 & 17.81 & 15.19 (0.80)\\
6 & 'U' & & 645.507 & -- & 39 (2) & $<$216 & 0.7 & \ \ 5.33 & \ \ 3.57 (0.92)\\
7 & 'U' & & 645.567 & -- & 64 (2) & 235 (36) & 0.9 & \ \ 7.60 & \ \ 4.95 (0.80)\\
8 & SiC$_2$ & 26,4,22-25,4,21 & 645.7526 & 441 & 287 (9) & 548 (18) & 2.0 & 20.66 & 14.63 (0.03)\\
9 & Si$^{34}$S & v=2 37-36 & 645.9198 & 2679 & 101 (2) & 275 (23) & 1.3 & 10.20 & \ \ 5.10 (0.18)\\
10 & SiS & v=2 36-35 & 646.1000 & 2724 & 133 (2) & 205 (19) & 1.1 & \ \ 7.89 & \ \ 3.94 (0.12)\\
11 & 'U'& & 646.31 & -- & 79 (2) & 282 (29) & 1.5 & \ \ 5.49 & \ \ 3.61 (0.98)\\
12 & SiC$_2$ & 27,6,22-26,6,21 & 646.4211$^*$ & 501 & 295 (6) & 512 (16) & 1.7 & 19.15$^*$ & 15.37 (0.28)$^*$\\
13 & SiO & v=1 15-14 & 646.4296$^*$ & 2018 & 300 (9) & 558 (18) & 2.0 & 19.15$^*$ & 15.37 (0.28)$^*$\\
14 & $^{30}$SiS & 37-36 & 646.7703 & 589 & 375 (6) & 486 (13) & 1.8 & 19.54 & 14.09 (0.17)\\
15 & $^{13}$CS & 14-13 & 647.0762 & 233 & 492 (13) & 675 (16) & 1.9 & 22.10 & 15.29 (0.08)\\
16 & 'U'& & 647.13 & -- & 71 (2) & 246 (39) & 1.1 & 10.80 & \ \ 7.46 (0.95)\\
17 & $^{29}$Si$^{34}$S & 38-37 & 657.1243$^*$ & 614 & 364 (13) & 593 (23) & 2.1 & 20.56$^*$ & 15.41 (0.18)$^*$\\
18 & SiC$_2$ & 31,0,31-30,0,30 & 657.1290$^*$ & 512 & 377 (13) & 568 (23) & 2.2 & 20.56$^*$ & 15.41 (0.18)$^*$\\
19 & SiC$_2$ & 30,2,29-29,2,28 & 657.6034 & 509 & 340 (13) & 551 (24) & 2.0 & 19.70 & 14.40 (0.09)\\
20 & $^{29}$SiS & 37-36 & 658.2241 & 599 & 486 (11) & 484 (13) & 2.1 & 16.73 & 14.74 (0.36)\\
21 & AlF & 20-19 & 658.5295$^*$ & 331 & 158 (6) & 424 (28) & 1.8 & 22.06$^*$ & 17.08 (0.46)$^*$\\
22 & C$^{18}$O & 6-5 & 658.5533$^*$ & 111 & 100 (9) & 526 (54) & 1.5 & 22.06$^*$ & 17.08 (0.46)$^*$\\
23 & SiC$_2$ & 29,2,27-28,2,26 & 658.7924 & 503 & 280 (13) & 579 (27) & 2.0 & 19.77 & 13.57 (0.04)\\
24 & Si$^{33}$S & 37-36 & 661.0665$^{\dagger}$ & 602 & 407 (13) & 590 (21) & 2.8 & 24.69$^{\dagger}$& 13.56 (0.01)$^{\dagger}$\\
25 & $^{13}$CO & 6-5 & 661.0673$^{\dagger}$ & 111 & 407 (13) & 590 (21) & 2.8 & 24.69 $^{\dagger}$& 13.56 (0.01)$^{\dagger}$\\
\hline
\end{tabular}
\tablefoot{
$^*$ blended within 2--20 km s$^{-1}$;
$^{\dagger}$ blended within 0.4 km s$^{-1}$
}
\end{table*}
\begin{figure}[htp]
\includegraphics[width=.48\textwidth]{SizeEnergyUI.pdf}
\caption{Spatial FWHM as a function of the upper state energy level. Lines are identified by the number given in the first column of Table~\ref{Table:FWHM}. The size of the symbols is proportional to the integrated intensity; unidentified lines are placed at an artificial upper state energy of 1500\,K. }
\label{Fig:Eupper_FWHM}
\end{figure}
\subsection{Comparison between ALMA and Herschel} \label{Sec:Herschel_ALMA}
The frequency range observed with ALMA was also observed by the Herschel/HIFI instrument \citep{deGraauw2010A&A...518L...6D}. A survey in all HIFI bands was carried out on 11--15 May 2010. The data reduction and first results are described in \citet{Cernicharo2010A&A...521L...8C}. The HIFI beam size around 640\,GHz is 33.2\hbox{$^{\prime\prime}$}\ and the main-beam antenna efficiency is 0.75 \citep{Roelfsema2012A&A...537A..17R}. The data were taken in double beam-switching mode at a frequency resolution of 1.1\,MHz. To increase the SNR of the weaker lines, some spectral regions were rebinned. The rms noise after averaging all scans is given in Table~\ref{Table_ALMA_Herschel}. To convert from antenna temperatures (in K) to flux densities (in Jy), we have used the point-source sensitivity of 466\,Jy/K \citep{Roelfsema2012A&A...537A..17R}. The source size of all transitions listed in Table~\ref{Table_ALMA_Herschel} can be approximated as a point source within the Herschel beam, with the exception of the $^{13}$CO J=6-5 line. Based on observations with the JCMT and CSO \citep{Crosas1997ApJ...483..913C} and our modeling of the $^{13}$CO emission presented in \citet{Decin2010A&A...518L.143D} and \citet{DeBeck2012A&A...539A.108D}, we derive a source size of $\sim$10\hbox{$^{\prime\prime}$}\ for the $^{13}$CO J=6-5 line, yielding a 3\% correction w.r.t.\ the point-source sensitivity.
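The antenna-temperature-to-flux-density conversion is then a simple scaling; for the $^{13}$CO J=6-5 peak, for example (the multiplicative form of the source-size correction shown here is illustrative):

```python
JY_PER_K = 466.0   # HIFI point-source sensitivity near 650 GHz

def hifi_flux_jy(t_a_k, size_corr=1.0):
    # Antenna temperature [K] -> flux density [Jy]; size_corr absorbs the small
    # correction for sources that are not point-like in the 33.2" beam.
    return t_a_k * JY_PER_K * size_corr

print(hifi_flux_jy(1.235))        # ~575 Jy peak for 13CO J=6-5
print(hifi_flux_jy(1.235, 1.03))  # with an illustrative 3% source-size correction
```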
\begin{table*}[htp]
\caption{Comparison between the Herschel and ALMA integrated line fluxes. The first and second columns list the molecule and its transition; the third, fourth and fifth columns give the Herschel antenna peak temperature, rms and SNR; the Herschel and ALMA integrated line fluxes are tabulated in the sixth and seventh columns; comments on blends as seen in the Herschel and ALMA data sets are given in the last two columns. The integrated line fluxes affected by weak line blends are given within parentheses. A `$-$' indicates cases of strong line blending where only detailed modeling can reveal the contribution of the individual components. The Herschel point-source sensitivity of 466\,Jy/K \citep{Roelfsema2012A&A...537A..17R} has been used for all lines. For $^{13}$CO J=6-5, the estimated source size of 10\hbox{$^{\prime\prime}$}\ \citep{Crosas1997ApJ...483..913C} was taken into account.}
\label{Table_ALMA_Herschel}
\setlength{\tabcolsep}{2mm}
{\small{
\begin{tabular}{lllllllll}
\hline \hline
\rule[0mm]{0mm}{5mm}Molecule &Transition&Herschel&Herschel&Herschel&Herschel&ALMA&comments&comments \\
&&$T_{\rm{a}}^{\rm{peak}}$&rms&SNR&$F_\nu\,d\nu$&$F_\nu\,d\nu$&Herschel &ALMA \\
\rule[-3mm]{0mm}{3mm}&&(K)&(mK)&&(Jy MHz)&(Jy MHz)&data&data \\
\hline
\rule[0mm]{0mm}{5mm}$^{29}$SiO& 15-14&0.065&9&7&(1408)&(834)&& blend `U' \\
Si$^{33}$S & 36-35&0.047&8.9&5&210&271&low SNR&\\
SiC$_{2}$& 26,4,22-25,4,21&0.054&5.5&10&1168&816&& \\
SiS v=2& 36-35&0.018&3.3&5&191&159&low SNR&\\
SiC$_{2}$& 27,6,22-26,6,21&$-$&4.5&$-$&$-$&$-$&blend SiO v=1& blend SiO v=1 \\
SiO v=1& 15-14&$-$&2.9&$-$&$-$&$-$&blend SiC$_2$& blend SiC$_2$ \\
$^{30}$SiS&37-36&0.055&4.9&11&1035&950&&\\
$^{13}$CS& 14-13&0.093&4.7&20&2012&1397&& weak blend `U'\\
$^{29}$Si$^{34}$S& 38-37&$-$&7&$-$&$-$&$-$&blend SiC$_2$& blend SiC$_{2}$ \\
SiC$_{2}$& 31,0,31-30,0,30&$-$&5.1&$-$&$-$&$-$&blend $^{29}$Si$^{34}$S& blend $^{29}$Si$^{34}$S \\
SiC$_{2}$&30,2,29-29,2,28&0.061&8.4&7&1297&883&& \\
$^{29}$SiS& 37-36&0.075&6.2&12&1452&1059&&\\
AlF& 20-19&$-$&6.4&$-$&$-$&$-$&blend C$^{18}$O & blend C$^{18}$O \\
C$^{18}$O& 6-5&$-$&9&$-$&$-$&$-$&blend AlF& blend AlF \\
SiC$_{2}$& 29,2,27-28,2,26&0.055&5.6&10&1281&735&&\\
$^{13}$CO& 6-5&1.235&7&176&32525&1366&weak blend Si$^{33}$S J=37-36& blend Si$^{33}$S J=37-36 \\
\hline
\end{tabular}
}}
\end{table*}
Table~\ref{Table_ALMA_Herschel} lists the integrated line intensities measured with Herschel and ALMA. Note that the ALMA integrated line intensities are higher than the values listed in Table~\ref{Table:FWHM} because the spatial fitting used for the latter was restricted to the inner 10\,km/s (see previous section).
As can be seen from Table~\ref{Table_ALMA_Herschel}, only a few lines are not plagued by blends with other molecular line transitions. The half line width at continuum level, $v_{\rm{line}}$, of all common unblended lines agrees between the two instruments. The integrated flux density of the non-blended lines measured from
the Herschel data is typically 1.2 -- 1.7 times larger than that
measured using ALMA (note that the overall ALMA flux
scale is uncertain by up to 20\%). The ratio is lower for the Si$^{33}$S J=36-35
transition but this line has a low signal-to-noise ratio in the Herschel data, and
the ratio is much higher for the $^{13}$CO J=6-5 line.
The 20--50\% additional flux detected by Herschel compared with ALMA can
be explained by two factors.
Firstly, there are observational differences. Herschel is
fundamentally more sensitive to large-scale emission. Its 5$\sigma$
sensitivity for these observations, per 0.7\,km s$^{-1}$, was $\sim$20\,Jy beam$^{-1}$, compared with $<$0.2\,Jy beam$^{-1}$ for ALMA. However, the Herschel beam is $\sim$$10^4$ times the area of the ALMA
synthesized beam. A 20\,Jy point source would be very bright in the
ALMA data but, spread out over an area more than 33\farcs2 in diameter,
the surface brightness per ALMA beam would be below the detection
limit. In addition, even brighter emission that is smooth on scales $>$3\hbox{$^{\prime\prime}$}\ would be poorly imaged by ALMA, and not detected at all, however bright, if smooth on scales $>$6\hbox{$^{\prime\prime}$}, as explained in Sect.~\ref{Sec:data_reduction}. Our
results are consistent with this, as the fraction of the Herschel flux
detected by ALMA is higher for higher excitation
temperature lines and lower for more spatially-extended species and we
detect a similar fraction of the flux with ALMA for similar
transitions. Specifically,
for the transitions listed in Table~\ref{Table_ALMA_Herschel}, the estimated emission
region for the SiS isotopologues (J=37-36 or J=36-35)
in the ground-vibrational state is $<6$\farcs5 \citep[based on modeling by][]{Decin2010A&A...518L.143D}. For the $^{13}$CS J=14-13 line the main
emission region is within $\sim7$\hbox{$^{\prime\prime}$}\ \citep[based on the $^{12}$CS J=7-6 and $^{13}$CS J=7-6 line observed by][]{Williams1992A&A...266..365W, Patel2011ApJS..193...17P} and the $^{12}$CS J=14-13 SMA data of \citet{Young2004ApJ...616L..51Y} even indicate an emission region within $\sim$4\hbox{$^{\prime\prime}$}. As mentioned
above, the main emission region for the $^{13}$CO J=6-5 transition
is $\sim10$\hbox{$^{\prime\prime}$}\ and the ALMA observations have resolved-out at least
95\% of the flux. No flux will be resolved out for the high excitation
SiS v=2 J=36-35 line with an upper state energy
of 2724\,K; the difference between ALMA and Herschel integrated line flux values is within the calibration uncertainties of both instruments.
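The beam-dilution argument above can be made quantitative. The sketch below adopts the 33\farcs2 HIFI beam and a representative ALMA synthesized beam of $\sim$0\farcs3 (an assumed value; the actual beam varies per image), so the numbers are illustrative only:

```python
import math

def gaussian_beam_area(fwhm_arcsec):
    """Solid angle [arcsec^2] of a circular Gaussian beam."""
    return math.pi * fwhm_arcsec ** 2 / (4.0 * math.log(2.0))

herschel_beam = gaussian_beam_area(33.2)  # HIFI beam near 640 GHz
alma_beam = gaussian_beam_area(0.3)       # assumed ALMA synthesized beam

# Beam-area ratio of order 10^4, as quoted in the text
area_ratio = herschel_beam / alma_beam

# A 20 Jy source smoothly filling the Herschel beam contributes only
# a few mJy per ALMA beam -- far below the ~0.2 Jy/beam limit.
flux_per_alma_beam_jy = 20.0 / area_ratio
```

Even a source bright enough to saturate the single-dish detection thus stays well below the interferometric surface-brightness limit once it is smooth over the larger beam.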
Secondly, thermal line emission varies due to the variability of the central star. Following \citet{Jenness2002MNRAS.336...14J}, the Herschel and ALMA data were taken at phase 0.46 and 0.98, respectively. The changes in the radiation field during the pulsation period can influence the molecular excitation. The infrared radiation field is expected to mainly affect molecules with strong vibrational transitions, with stronger modulations for lines excited close to the star. Using Herschel data, \citet{Teyssier2013} have recently shown that the $^{12}$CS J=14-13 line has a maximum amplification factor in the line intensity of $\sim$1.3, while for SiS J=37-36 and J=36-35 this is $\sim$1.2. The low-excitation CO lines seem quite insensitive to a change in the infrared radiation field.
Predicting whether a line will have a higher or lower amplification factor depending on the variability phase at the time of the observations is, however, complex. The ALMA data were taken near light maximum and the Herschel data near light minimum, which suggests that higher excitation levels might be more populated at the time of the ALMA observations. However, the exact effect depends on the transition probabilities of the connecting levels.
\subsection{Gas velocity structure in the inner wind region} \label{Sec:vgas}
\begin{figure*}[htp]
\begin{minipage}[t]{.48\textwidth}
\centerline{\resizebox{\textwidth}{!}{\includegraphics[angle=90]{vexp_LAS.pdf}}}
\end{minipage}
\hfill
\begin{minipage}[t]{.48\textwidth}
\centerline{\resizebox{\textwidth}{!}{\includegraphics[angle=90]{vexp_sFWHMdiv2.pdf}}}
\end{minipage}
\caption{Measured velocities [$v_{\rm{line}}$ in km/s] versus half of the LAS (representing the not-deconvolved maximum detectable scale of each transition) in the left panel and half of the spatial FWHM $s_{\rm{FWHM}}$ (representing the dominant line formation region) in the right panel. Note that the value for the LAS is restricted by the sensitivity limit of the observations and the fact that large-scale flux might be resolved out (see Sect.~\ref{Sec:data_reduction}). The blended lines are indicated with a squared box, the unblended lines with a cross (and include the error bars). In the right panel, the full black line shows the velocity structure as derived by \citet{Decin2010Natur.467...64D} from solving the momentum equation; the dashed line indicates the velocity structure as derived by \citet{Fonfria2008ApJ...673..445F} from modelling different $J$ ro-vibrational transitions of C$_2$H$_2$, the dash-dotted line the velocity structure derived by \citet{Keady1988ApJ...326..832K} from the analysis of
the near-infrared ro-vibrational CO spectrum, and the thick grey lines the acceleration derived from the ALMA data. The vertical dotted line indicates the dust nucleation region around 5.6\,\Rstar, where the sound velocity is $\sim$3\,km/s. }
\label{Fig:vexp_LAS_sFWHM}
\end{figure*}
Table~\ref{Table:FWHM} and Fig.~\ref{Fig:vexp_LAS_sFWHM} show that the expansion velocities of the lines fall into
two groups. All but two of the identified lines trace a velocity larger than 13.5\,km/s at a LAS radius (representing the not-deconvolved maximum detectable scale) beyond 600\,mas (or 30\,\Rstar) and a spatial FWHM (illustrating the main line formation region) beyond 200\,mas (10\,\Rstar), indicating that the wind has reached its terminal velocity within $\sim$10\,\Rstar\ from the central star, in agreement with the results of \citet{Keady1988ApJ...326..832K}. The other group has line width velocities $<$8\,km/s within an 800\,mas LAS radius (40\,\Rstar). It contains
the two identified lines in the v=2 excited vibrational state and all unblended `U' lines, suggesting that these also come from
high-excitation states. The narrowest line is the `U' line at 646.31\,GHz with a velocity of only 3.6\,km/s, slightly higher than the sound velocity of $\sim$3\,km/s at the base of the dust nucleation region.
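For reference, the angular scales quoted above can be converted to stellar radii and AU. The sketch below adopts a stellar angular radius of 20\,mas (implied by the 600\,mas\,=\,30\,\Rstar\ equivalence used in the text) and a distance of 130\,pc; both are assumptions at the quoted precision:

```python
R_STAR_MAS = 20.0    # stellar angular radius: 600 mas = 30 R*
DISTANCE_PC = 130.0  # adopted distance to CW Leo (120-150 pc range)

def mas_to_rstar(mas):
    """Angular scale [milli-arcsec] -> stellar radii."""
    return mas / R_STAR_MAS

def mas_to_au(mas):
    """Angular scale [milli-arcsec] -> AU (1" at d pc subtends d AU)."""
    return (mas / 1000.0) * DISTANCE_PC

rstar_scale = mas_to_rstar(600)  # -> 30.0
au_scale = mas_to_au(200)        # ~26 AU, the dominant line formation scale
```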
Plotting the derived velocities as a function of the spatial FWHM (see Fig.~\ref{Fig:vexp_LAS_sFWHM}) shows that the measured line width velocities increase rapidly from the sound velocity to $\sim$8\,km/s between $\sim$100 and 200\,mas (5--10\,\Rstar). Assuming that the turbulent velocity does not vary substantially in this small region, this indicates that carbonaceous material condenses around 5\,\Rstar, resulting in a rapid acceleration of the gas up to a velocity of $\sim$8\,km/s.
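The $\sim$3\,km/s sound velocity quoted at the base of the dust nucleation region is recovered for plausible conditions. In the sketch below, the temperature, mean molecular weight, and adiabatic index are assumed values, not quantities constrained by these observations:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant [J/K]
M_H = 1.6726e-27     # atomic mass unit (~proton mass) [kg]

def sound_speed_kms(T, mu=1.3, gamma=1.4):
    """Adiabatic sound speed [km/s] for gas of temperature T [K].

    mu and gamma are assumed values for partially molecular,
    carbon-rich gas; other reasonable choices shift the result by
    tens of percent.
    """
    return math.sqrt(gamma * K_B * T / (mu * M_H)) / 1.0e3

# For T ~ 1200 K this gives ~3 km/s, the order of the sound velocity
# quoted at the base of the dust nucleation region (~5.6 R*).
cs = sound_speed_kms(1200.0)
```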
In the four ALMA spectral windows, no lines are detected with a velocity between 8 and 13.5\,km/s, but the data suggest a rapid increase of the gas velocity from $\sim$8\,km/s to $\sim$14.5\,km/s around $\sim$10--12\,\Rstar.
This acceleration profile is not captured by the solution of the wind momentum equation derived by \citet{Decin2010Natur.467...64D}, who assumed instantaneous condensation of amorphous carbon, iron, silicon carbide and magnesium sulfide dust species at a radius of 5.6\,\Rstar. \citet{Keady1988ApJ...326..832K} suggested a two-step acceleration process from an analysis of high-resolution near-infrared ro-vibrational CO spectra. They suggested velocity increases around 3 and 11\,\Rstar, with the first increase, up to a velocity of 11\,km/s, being due to the condensation of carbon, and a second increase that might be linked to dust condensation involving Mg and/or S. This idea was used and adapted by \citet{Fonfria2008ApJ...673..445F}, who modeled a large
set of C$_2$H$_2$ lines. They suggested a strong velocity gradient from 5 to 11\,km/s around 4.7-5.7\,\Rstar\ and a second velocity increase to 14.5\,km/s around 20.75-21.75\,\Rstar\ (see Fig.~\ref{Fig:vexp_LAS_sFWHM}).
Based on the ALMA band 9 data, the wind acceleration profile can be further refined. The data can be fit assuming a linear velocity increase from around 2--3\,km/s at $\sim$5\,\Rstar\ to the terminal velocity around 11\,\Rstar\ (see full grey line in the right panel of Fig.~\ref{Fig:vexp_LAS_sFWHM}). Assuming a two-step acceleration process, we obtain
\begin{equation}
v(r) = \left\{
\begin{array}{rl}
2-3{\rm{\,km/s}}, & \textnormal{$1\le r/\textnormal{R$_*$} <5.6$}\\
3-8{\rm{\,km/s }}, & \textnormal{$5.6\le r/\textnormal{R$_*$} <8$}\\
8{\rm{\,km/s }}, & \textnormal{$8\le r/\textnormal{R$_*$} <10$}\\
8-14.5{\rm{\,km/s}}, & \textnormal{$10\le r/\textnormal{R$_*$} <11$}\\
14.5 {\rm{\,km/s}}, & \textnormal{$11\le r/\textnormal{R$_*$}$}\,,
\end{array}
\right.
\label{Eq:velocity}
\end{equation}
i.e.\ a first jump around 5.6\,\Rstar\ and a second one around 10\,\Rstar\ (see dotted grey line in the right panel of Fig.~\ref{Fig:vexp_LAS_sFWHM}). To improve upon the solution of the momentum equation as derived by \citet{Decin2010Natur.467...64D}, different possibilities exist. (1)~The dust absorption efficiencies could be larger in the near-infrared, where the bulk of the stellar photons are emitted. (2)~Larger grains ($\ga$0.2\,$\mu$m) could be formed more efficiently close to the star, enhancing the wind acceleration via scattering processes \citep{Hofner2008A&A...491L...1H}. (3)~The velocity at the start of the dust condensation region, which is now assumed to be the sound velocity \citep{Decin2006A&A...456..549D}, could be adapted. However, in that case, the dust-to-gas ratio would need to be decreased in order not to overshoot the observed terminal velocity.
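The piecewise velocity law of Eq.~(\ref{Eq:velocity}) is straightforward to implement. In the sketch below, velocity ranges such as `2--3\,km/s' are interpreted as a linear ramp across the corresponding radial interval; that interpolation is our reading of the ranges, not a statement in the text:

```python
def wind_velocity(r):
    """Gas velocity [km/s] at radius r [R*], following Eq. (1).

    Velocity ranges such as '2-3 km/s' are interpreted as a linear
    ramp across the corresponding radial interval (an assumption).
    """
    segments = [  # (r_in, r_out, v_in, v_out) in R* and km/s
        (1.0, 5.6, 2.0, 3.0),
        (5.6, 8.0, 3.0, 8.0),
        (8.0, 10.0, 8.0, 8.0),
        (10.0, 11.0, 8.0, 14.5),
    ]
    if r >= 11.0:
        return 14.5  # terminal velocity
    for r_in, r_out, v_in, v_out in segments:
        if r_in <= r < r_out:
            return v_in + (v_out - v_in) * (r - r_in) / (r_out - r_in)
    raise ValueError("radius below the stellar surface")
```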
\section{Channel maps and PV diagrams} \label{Sec:data_results}
In Sect.~\ref{Sec:channel_maps} and Sect.~\ref{Sec:PV}, we show and discuss the channel maps and position-velocity (PV) diagrams of one representative high-excitation line with an upper state energy (E$_{\rm upper}$) around 2700\,K, one medium-excitation line (E$_{\rm upper}\sim$450\,K), and one low-excitation line (E$_{\rm upper}\sim$100\,K). As will be seen, the vibrationally excited line traces the wind acceleration region, molecules present in the inner envelope with excitation energies around a few hundred K are slightly resolved with the ALMA beam of $\sim$0.2\hbox{$^{\prime\prime}$}, while the $^{13}$CO J=6-5 line shows a complex emission pattern.
\subsection{Channel maps and morphology} \label{Sec:channel_maps}
\begin{figure*}[htp]
\sidecaption
\includegraphics[width=12cm]{SISV2J36_3CH_5ASEC_1.pdf}
\caption{SiS v=2 J=36-35 channel map, averaged over 3 channels with a natural weighting and for a circular beam of 0.4\hbox{$^{\prime\prime}$}. North is up, East is left; the black cross indicates the center of the continuum map. The flux density units are Jy/beam. The contours are at ($-$1,1,2,4,8\ldots)$\times$0.5\,Jy/beam ($\sim$3\,$\sigma_{\rm{rms}}$).
This high-excitation line has a width of $\sim$9\,km/s and the line formation region is not resolved with the current ALMA beam. The contrast in the figure is best visible on screen.}
\label{Fig:SiSv2_J36_channel}
\end{figure*}
As expected, the channel map of a very high-excitation line of a molecule present in the inner wind region just above the photosphere, e.g.\ SiS v=2 J=36-35 (Fig.~\ref{Fig:SiSv2_J36_channel}), shows that the emission comes from the very inner regions of the envelope, close to the stellar surface, where the wind material has a velocity around 4\,km/s and the gas kinetic temperature is typically around 900--1300\,K. The morphology in the region $<$1\hbox{$^{\prime\prime}$}\ (or a diameter of 50\,\Rstar) is still more or less circular (for a circular beam of 0\farcs4).
\begin{figure*}[htp]
\sidecaption
\includegraphics[width=12cm]{SIC2J26_3CH_5ASEC_1.pdf}
\caption{SiC$_2$ 26(4,22)-25(4,21) channel map, averaged over 3 channels with a natural weighting and for a circular beam of 0.4\hbox{$^{\prime\prime}$}. North is up, East is left; the black cross indicates the center of the continuum map. The flux density units are Jy/beam. The contours are at ($-$1,1,2,4,8\ldots)$\times$0.5\,Jy/beam ($\sim$3\,$\sigma_{\rm{rms}}$). The ALMA instrument resolves the line formation region. At different velocities the flux densities display a non-spherical structure. The contrast in the figure is best visible on screen.}
\label{Fig:SiC2_J26_channel}
\end{figure*}
The channel maps of molecular transitions formed in the inner wind region and with upper state energies around a few hundred K, such as the SiC$_2$ 26(4,22)-25(4,21) line displayed in Fig.~\ref{Fig:SiC2_J26_channel}, show that the emission of this line is extended and displays a non-homogeneous distribution of the molecular density. The typical spatial FWHM of these lines is around a few hundred milli-arcsec (see Table~\ref{Table:FWHM}) and these lines are formed in a region of the wind where the velocity has already reached the terminal velocity.
\begin{figure*}[htp]
\sidecaption
\includegraphics[width=12cm]{13CO_6-5_3CH_5ASEC_1.pdf}
\caption{$^{13}$CO J=6-5 channel map, averaged over 3 channels with a natural weighting and for a circular beam of 0.4\hbox{$^{\prime\prime}$}. North is up, East is left; the black cross indicates the center of the continuum map. The flux density units are Jy/beam. The contours are at ($-$1,1,2,4,8\ldots)$\times$0.5\,Jy/beam ($\sim$3\,$\sigma_{\rm{rms}}$). Different arc-like features can be discerned. The contrast in the figure is best visible on screen.}
\label{Fig:13CO_channel}
\end{figure*}
The $^{13}$CO J=6-5 channel map (Fig.~\ref{Fig:13CO_channel}) displays a complex structure. At different velocities, parts of a spiral-like arm (arcs) can be distinguished: around $-$17.5\,km/s one can see an arc to the North at $\sim$1.5\hbox{$^{\prime\prime}$}\ offset from the central source, around $-$24.6\,km/s at 0.7\hbox{$^{\prime\prime}$}\ offset to the south-east, around $-$27.5\,km/s at 1\hbox{$^{\prime\prime}$}\ to the south-west, and around $-$37.4\,km/s at 2\hbox{$^{\prime\prime}$}\ to the south-west. Local density enhancements, the change in (projected) velocity for different arcs, gaps in the UV-coverage (see Sect.~\ref{Sec:data_reduction}) and the fact that we resolve out flux for scales larger than $\sim$3\hbox{$^{\prime\prime}$}\ can explain why the spiral arms are not complete. At, for example, $-$37.4\, km/s, the extended emission has a distinctive curve not related to
the beam sidelobe pattern. The negative artefact is due to the
``bowl'' effect mentioned in Sect.~\ref{Sec:data_reduction}, resulting from emission on scales
just larger than is sampled by the interferometer. Although the
detailed flux distribution at radii $\ga1\farcs5$ is unreliable, it
does represent the general shape of the brighter emission at each
velocity.
The local density enhancements/spiral arcs are seen best in the maps of $^{13}$CO, as this is the lowest-excitation transition of the most abundant species in our ALMA observations, mitigating the over-resolution of its extended emission. Some of the less-abundant, intermediate-excitation lines have a similar LAS for integrated emission in the ALMA data, but are too faint to resolve extended details, spectrally or spatially. Fig.~\ref{Fig:13CO_integrated_dust} compares the integrated $^{13}$CO J=6-5 emission (zeroth moment) and the dust emission. Bright arcs at 1\hbox{$^{\prime\prime}$}\ to the North and at 1\hbox{$^{\prime\prime}$}\ and 1.6\hbox{$^{\prime\prime}$}\ to the South-west can clearly be discerned in the $^{13}$CO emission, but the dust emission at 650\,GHz is below our detection threshold at radii beyond 1\hbox{$^{\prime\prime}$}. Notably, this is the first time we can confirm that the extended continuum and molecular emission are both centred on the continuum peak position.
\begin{figure}[htp]
\includegraphics[width=.48\textwidth]{13CO_dust.jpg}
\caption{\textit{Gray scale:} Integrated $^{13}$CO J=6-5 emission (cut-off level of 0.02 Jy/beam), with contour levels at [$-$1, 1, 2, 4, 8, 16]$\times$6.82\,Jy$\cdot$\,km/s per beam. \textit{Red contours:} dust emission with contours levels are at [$-1$, 1, 2, 3, 4, 8, 16, 28]$\times$15\,mJy beam$^{-1}$ (cf.\ Fig.~\ref{Fig:Flux-4}). In the bottom left corner, the beam ellipse for the integrated $^{13}$CO J=6-5 emission is shown in gray and for the dust continuum in red.}
\label{Fig:13CO_integrated_dust}
\end{figure}
\subsection{Position-velocity diagrams and morphology} \label{Sec:PV}
\begin{figure*}[htp]
\sidecaption
\begin{minipage}[t]{5.8cm}
\centerline{\resizebox{\textwidth}{!}{\includegraphics{PV_SiSv2J36_PA90_w101.pdf}}}
\end{minipage}
\begin{minipage}[t]{5.8cm}
\centerline{\resizebox{\textwidth}{!}{\includegraphics{PV_13CSJ14_PA90_w101.pdf}}}
\end{minipage}
\caption{Position-velocity diagram in right ascension for SiS v=2 J=36-35 at 646.100\,GHz \textit{(left)} and $^{13}$CS J=14-13 at 647.076\,GHz \textit{(right)}. Note that an offset to the East is negative, and to the West positive. Sidelobe effects are seen in the right panel, as is the blend of the $^{13}$CS J=14-13 line in the blue wing with a `U' line. }
\label{Fig:PV_other_excitation}
\end{figure*}
Using the {\sc CASA} task {\sc impv}, position-velocity diagrams were calculated for all transitions. In the standard set-up, 201 slits with a width of 1 pixel (\,=\,0.060\hbox{$^{\prime\prime}$}) and a length of 101 pixels (\,=\,6.06\hbox{$^{\prime\prime}$}) were taken. Each slit is centered perpendicular to the direction of the slice (i.e.\ in this case 3.03\hbox{$^{\prime\prime}$}\ above and 3.03\hbox{$^{\prime\prime}$}\ below the central pixel) and the average over each slit is taken. A PV diagram along the right ascension axis hence has PA=90\deg, and along the declination axis PA=0\deg. The strength of PV diagrams is that one can correlate structure at different spatial offsets, which is otherwise challenging from channel maps alone.
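The slit averaging described above can be mimicked with a few lines of array manipulation. The sketch below is our own nearest-neighbour illustration, not the {\sc CASA} {\sc impv} implementation; the sign convention of the rotation angle depends on how right ascension and declination map onto the array axes:

```python
import numpy as np

def pv_diagram(cube, pa_deg, slit_width_pix):
    """Position-velocity diagram from a (nchan, ny, nx) image cube.

    Follows the convention in the text: PA=90 deg is a slice along
    right ascension, PA=0 deg along declination. Nearest-neighbour
    sampling only; the angle sign is an assumption about the axis
    orientation.
    """
    nchan, ny, nx = cube.shape
    theta = np.deg2rad(pa_deg - 90.0)
    yc, xc = (ny - 1) / 2.0, (nx - 1) / 2.0
    rows, cols = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    x, y = cols - xc, rows - yc
    # Rotate the sampling grid so the slice direction lies along x
    xs = np.cos(theta) * x - np.sin(theta) * y + xc
    ys = np.sin(theta) * x + np.cos(theta) * y + yc
    # Nearest-neighbour sampling; edge pixels are clipped, not padded
    xi = np.clip(np.rint(xs).astype(int), 0, nx - 1)
    yi = np.clip(np.rint(ys).astype(int), 0, ny - 1)
    rot = cube[:, yi, xi]
    # Average perpendicular to the slice over the slit width
    half = slit_width_pix // 2
    ycen = ny // 2
    return rot[:, ycen - half:ycen + half + 1, :].mean(axis=1)
```

For PA=90\deg\ the rotation is the identity and the routine reduces to averaging image rows around the centre, which is the behaviour described for the standard set-up.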
As discussed in Sect.~\ref{Sec:data_reduction}, the PV diagrams are not free from artifacts. (1)~For lines with an integrated line peak flux larger than 0.4\,Jy/beam, sidelobe effects are clearly visible at $-$4\hbox{$^{\prime\prime}$}\ and $+$4\hbox{$^{\prime\prime}$}\ offset (see right panel in Fig.~\ref{Fig:PV_other_excitation}). (2)~The instrument resolves out some flux for most of the lines, which is especially true for the emission of the CO isotopologues (see Sect.~\ref{Sec:Herschel_ALMA}). In some PV diagrams, one can see emission extending beyond the $-10$ to $-40$\,km/s range (covering the radially outflowing wind), which is due to
blending with molecular line transitions at slightly different frequencies (see right panel in Fig.~\ref{Fig:PV_other_excitation}).
For all transitions, the channel maps and PV diagrams indicate that (some part of) the emission originates from the region just above the stellar photosphere, i.e.\ within $\sim$0.2\hbox{$^{\prime\prime}$}\ or 10\,\Rstar\ (see online Appendix). The ALMA data prove that SiO is formed in the inner wind region, indicating an active photochemical region where shock-driven chemistry \citep{Cherchneff2011A&A...526L..11C} and/or photodissociation of molecules in a clumpy medium \citep{Agundez2010ApJ...724L.133A} dictate the chemical balance.
\begin{figure*}[htp]
\begin{minipage}[t]{12cm}
\includegraphics[width=5.8cm]{PV_13COJ6_PA90_w101.pdf}
\includegraphics[width=5.8cm]{PV_13COJ6_PA0_w101.pdf}
\end{minipage}
\newline
\begin{minipage}[b]{12cm}
\includegraphics[width=5.8cm]{PV_13COJ6_PA10_w101.pdf}
\includegraphics[width=5.8cm]{PV_13COJ6_PA22_w101.pdf}
\end{minipage}
\hfill
\begin{minipage}[b]{6cm}
\vspace*{-2ex}
\caption{Position-velocity diagram of $^{13}$CO J=6-5 at 661.067\,GHz for a slit width of 101 pixels at a position angle (PA, measured from North to East) of (\textit{top left:}) 90\deg\ (i.e., along the right ascension; offset to the East is negative, and to the West positive), (\textit{top right:}) 0\deg\ (i.e., along the declination; offset to the North is negative and to the South positive), (\textit{bottom left:}) 10\deg, and (\textit{bottom right:}) 22.5\deg.}
\label{Fig:PV_13COJ6}
\end{minipage}
\end{figure*}
\begin{figure*}[htp]
\sidecaption
\includegraphics[width=12cm]{PV_overview_PA_new.jpg}
\caption{Position-velocity diagram of $^{13}$CO J=6-5 for a slit width of 51 pixels at different position angles, indicated in the top left corner of each panel. The same color wedge has been used as in Fig.~\ref{Fig:PV_13COJ6}. The position angles in the top and bottom rows differ by 90\deg. The offset (x-axis) ranges from $-$6\hbox{$^{\prime\prime}$}\ to 6\hbox{$^{\prime\prime}$}, the velocity (y-axis) from 0 to $-$55\,km/s.}
\label{Fig:PV_13COJ6_other_angles}
\end{figure*}
Fig.~\ref{Fig:PV_13COJ6} shows the PV diagrams of the $^{13}$CO J=6-5 channel maps for a direction slice in right ascension or in declination. These PV diagrams display a more complex morpho-kinematical structure than the PV diagrams in Fig.~\ref{Fig:PV_other_excitation}, or of any other line. This is not unexpected, as the $^{13}$CO J=6-5 line is the strongest in the frequency range we covered, as discussed in Sect.~\ref{Sec:channel_maps}. It is clear that the curved distribution of the $^{13}$CO PV maps represents the genuine morpho-kinematical shape, but that the actual distribution of the emission on scales larger than $\sim$3\hbox{$^{\prime\prime}$}\ is artificially fragmented due to the gaps in the UV coverage (see Sect.~\ref{Sec:data_reduction}). By changing the position angle of the PV diagram, one can diagnose changes in the morpho-kinematical structure. The most revealing $^{13}$CO J=6-5 PV diagram is obtained for position angles around
10-25\hbox{$^\circ$}\ to the North-East (see Fig.~\ref{Fig:PV_13COJ6}). To show and enhance the effect of the small-scale structure, we have also calculated these PV diagrams for a slit of 51 pixels (instead of 101 pixels in the standard setup); see bottom panels in Fig.~\ref{Fig:PV_13COJ6_other_angles}. In all panels of Fig.~\ref{Fig:PV_13COJ6_other_angles}, one can clearly distinguish correlated curved structures ranging from $-40$ to $-12$\,km/s and the distribution of the brightest emission around 0\hbox{$^{\prime\prime}$}\ offset displays an `{$S$}'-shape.
\section{Qualitative interpretation of the $^{13}$CO J=6-5 ALMA data} \label{Sec:qualitative}
The $^{13}$CO J=6-5 PV diagrams in Fig.~\ref{Fig:PV_13COJ6_other_angles} show a very characteristic shape with correlated structure at an interval of $\sim$1\hbox{$^{\prime\prime}$}\ (around the systemic velocity) for position angles around 10-20\deg. These structures are almost absent in the PV diagrams for PA$\sim$112.5\deg. In almost all PV diagrams a typical `{$S$}'-shape is seen around zero offset.
Based on the simulations of \citet{Kim2013ApJ...776...86K}, we postulate that this type of morphology hints towards the presence of a spiral structure induced by a binary companion. In contrast to R~Scl for which the ALMA data have recently shown the presence of a spiral seen almost face-on \citep{Maercker2012Natur.490..232M}, the ALMA data of CW~Leo are reminiscent of a spiral seen almost edge-on, with the orbital axis at an angle of $\sim$22.5\deg\ to the North-East (see Sect.~\ref{Sec:Similarities}).
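The $\sim$1\hbox{$^{\prime\prime}$}\ spacing between correlated structures allows a rough consistency check: for a spiral wound by orbital motion, one arm spacing corresponds to the wind travel distance per orbital period. The numbers below (a distance of 130\,pc and the terminal velocity of 14.5\,km/s) are adopted for illustration only, and the result is an order-of-magnitude estimate:

```python
AU_KM = 1.496e8   # kilometres per AU
YEAR_S = 3.156e7  # seconds per year

arm_spacing_arcsec = 1.0  # spacing of correlated structures (text)
distance_pc = 130.0       # adopted distance (assumption)
v_wind_kms = 14.5         # terminal wind velocity

# One arcsecond at d parsec subtends d AU, so the arm spacing in km is:
spacing_km = arm_spacing_arcsec * distance_pc * AU_KM

# If one winding is laid down per orbit, the implied period is the wind
# travel time across one arm spacing -- roughly 40 yr for these inputs.
period_yr = spacing_km / v_wind_kms / YEAR_S
```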
Our ALMA images capture fragments of the spiral arms, seen almost edge-on, with a typical width of $\sim$300\,mas. We cannot be sure whether the fragmentation is an instrumental artifact, but in the case of CIT~6 the 0\farcs7-resolution data analysed by \citet{Kim2013ApJ...776...86K}
show that the structure in the spiral arms is not smooth. High spatial resolution observations
\citep{Tuthill2000ApJ...543..284T, Weigelt2002A&A...392..131W, Menut2007MNRAS.376L...6M} and hydrodynamical models
\citep{Woitke2006A&A...452..537W} have shown that gas and dust clumps are created in the inner envelope by the wind formation mechanism. These density inhomogeneities will remain in the inner spiral-arm structures until they are eventually dispersed.
No direct evidence has yet been found for the presence of a binary companion. However, the current ALMA data in combination with a plethora of other observational diagnostics (schematized in Fig.~\ref{Fig:sketch}) suggest that CW Leo is part of a binary system. (1)~\citet{Guelin1993A&A...280L..19G} have suggested that CW~Leo is a binary since the clumpy shells observed in MgNC, C$_4$H, and other carbon-chain molecules and radicals appear off-center from the continuum source by 2--3\hbox{$^{\prime\prime}$}\ to the East. The cause of such a drift could be an acceleration of the star due to a companion. (2)~As mentioned in Sect.~\ref{Sec:introduction}, the central regions around CW~Leo show the presence of an axi-symmetric peanut- or bipolar-like structure with the major axis lying at a position angle of $\sim$8--22\deg\ \citep[see Fig.~\ref{Fig:sketch};][]{LeBertre1989Msngr..55...25L, Skinner1998MNRAS.300L..29S, Mauron2000A&A...359..707M, Kastner1994ApJ...434..719K}. This is indirect evidence for the possible existence of an equatorial density enhancement of dust and gas, with the equatorial plane located perpendicular to the major axis of the nebula, enhancing scattered starlight emission through the bi-conical openings.
A direct detection of an equatorial dust lane seen almost edge-on was recently presented by
\citet{Jeffers2014}, corroborating the results of \citet{Murakawa2005A&A...436..601M}. The inferred radius of the dust lane is $\sim$0.5--1\hbox{$^{\prime\prime}$}. The elongated East-West structure seen in our ALMA continuum images at a PA of $\sim$128\deg\ and with a length of 1.8\hbox{$^{\prime\prime}$}\ (Fig.~\ref{Fig:Flux-4} and Fig.~\ref{Fig:13CO_integrated_dust}) might be the signature of this equatorial density enhancement\footnote{Although we note that this direction is close to the direction of elongation of the natural synthesised beam, and may be a coincidence (see Sect.~\ref{Sec:dust}).}, while the extended emission at a PA of $\sim$20\deg\ and $\sim$200\deg\ might reflect the bi-conical cones. This type of morphology can be explained in terms of a binary system in which angular momentum provides a natural way to define the equatorial plane and polar axis. Interaction between the secondary and the expanding red giant primary will create a bipolar nebula, and the dissipative tidal interaction might eventually lead to
a merger of the two stars in a common envelope system \citep{Morris1987PASP...99.1115M}.
(3)~Optical and infrared data revealed the presence of multiple, almost concentric, shells (or arcs) out to 320\hbox{$^{\prime\prime}$}\ from the central star \citep{Mauron1999A&A...349..203M, Leao2006A&A...455..187L, Fong2003ApJ...582L..39F, Decin2011A&A...534A...1D}. The shells are incomplete and cover $\sim$30\deg--90\deg\ in azimuth. These shells might represent the limb-brightened spiral arms. The shell-intershell density contrast deduced from the observations is about a factor of 3--10, in agreement with recent hydrodynamical simulations for binary-induced spiral patterns as calculated by \citet{Kim2012ApJ...759...59K}.
(4)~Previous observations of molecular shells by \citet{Guelin1993A&A...280L..19G}, \citet{Lucas1999IAUS..191..305L}, and \citet{Dinh2008ApJ...678..303D} show a lack of molecular emission at position angles of $\sim$0\deg--30\deg. This position angle range aligns with our inferred rotational axis of the spiral structure (see Sect.~\ref{Sec:Similarities}). As shown by \citet{Mastrodemos1999ApJ...523..357M}, a binary-induced spiral structure has a lower density along the orbital axis, which might result in a lack of molecular emission along the inferred rotational axis.
\begin{figure*}[hbtp]
\sidecaption
\includegraphics[width=12cm]{cone.jpg}
\caption{Sketch of the inner wind region of CW~Leo indicating the different observations/results which signal the presence of a binary companion. From the modeling of the ALMA data, we infer an orbital axis with a PA of $\sim$20\deg\ (Sect.~\ref{Sec:Similarities}). The reflex motion of the primary AGB star results in a one-armed spiral structure that is seen almost edge-on and with an extent almost reaching the orbital axis (full red arcs). The dust lane with a radius of $\sim$0.5--1\hbox{$^{\prime\prime}$}\ situated in the orbital plane partly impedes the expansion of the spiral shock in the orbital plane direction (illustrated by the dashed red arcs).}
\label{Fig:sketch}
\end{figure*}
\citet{Mauron2000A&A...359..707M} argued against the binary scenario based on the irregular arc spacing seen in the optical and infrared images. However, one has to realize that the shells are projections in the plane of the sky of a complex 3D structure. A binary companion in a smooth envelope structure would indeed create a spiral structure with regular arm spacing. However, the wind and envelope creation does not occur in a homogeneous way, adding complexity to the morpho-kinematical structure. Magnetic cool spots during the active phase \citep{Soker1999MNRAS.307..993S} and/or large scale photospheric convective cells \citep{Freytag2008A&A...483..571F} can locally reduce the gas temperature, which might locally enhance the dust formation and thus lead to gas and dust clumps. The magnetic cool spots cover only a fraction of the stellar surface, and this inhomogeneity might explain the clumpiness seen on smaller spatial scales and in the arcs\footnote{The ALMA instrumental effects also result in a
superficial fragmentation of the spiral structure (see Sect.~\ref{Sec:PV_radial} and \citet{Maercker2012Natur.490..232M}).}. Star-spots have not yet been observed on AGB stars, although recent observations of
linear polarization in CO J=3-2, SiS J=19-18, and CS J=7-6 suggest a complex magnetic field configuration in the wind of CW~Leo, with a strength around 50--300\,mG at $\sim$3\hbox{$^{\prime\prime}$}\ offset from the central source \citep{Girart2012ApJ...751L..20G}, and recent interferometric data show strong evidence for inhomogeneities in the molecular photospheric layers of AGB stars \citep{Wittkowski2011A&A...532L...7W}. In addition, modulations in the density structure might be a natural outcome of the dust formation process, during which a complex non-linear interplay between gas-grain drift, grain nucleation, radiation pressure, and envelope hydrodynamics creates density irregularities. On top of that, flow instabilities (e.g., Rayleigh-Taylor instabilities) have time to fragment the outward moving density structures and can produce numerous small-scale cloud-like sub-structures \citep{Woitke2006A&A...452..537W}.
\section{Modeling a binary-induced spiral structure} \label{Sec:Shape}
A full reconstruction of the intensities, the 3D shell pattern and its kinematics using the ALMA $^{13}$CO channel maps and PV diagrams requires detailed hydrodynamical simulations and radiative transfer calculations. This is beyond the scope of the current paper, but will be presented in a forthcoming paper (Homan et al., in prep.).
In this section, we want to demonstrate how different kinds of density fluctuations impact a PV diagram and how the morphology in the current ALMA PV data points towards the influence of a binary companion in the shaping of the wind structure around CW~Leo. The simulations are obtained using the {\sc Shape} software developed by \citet{Steffen2006RMxAA..42...99S} and \citet{Steffen2011}.
In this section, we first shortly describe the impact of a binary companion on the wind envelope structure (Sect.~\ref{Sec:binary}). Then, we present {\sc Shape} simulations, where we gradually increase the complexity of the envelope structure (Sect.~\ref{Sec:Shape_models}). In Sect.~\ref{Sec:Similarities}, we demonstrate the similarities between the ALMA $^{13}$CO PV diagrams and PV diagrams based on spiral structures.
\subsection{Binary-induced spiral structures}\label{Sec:binary}
As shown in the seminal work of \citet{Mastrodemos1999ApJ...523..357M}, a binary companion might produce a spiral pattern in the circumstellar wind material. \citet{Kim2012ApJ...744..136K} and \citet{Kim2012ApJ...759...59K} have studied the separate effects of the orbital motions of the individual stars and have shown that two types of spiral patterns are created. Firstly, a flattened spiral pattern confined within a very limited height from the orbital plane is created by the companion's motion \citep{Kim2012ApJ...744..136K}. Secondly, a spiral-shell-shaped pattern is created by the orbital motion of the mass-losing star around the center of gravity \citep{Kim2012ApJ...759...59K}. These two spiral patterns differ in several ways. (1)~The companion's wake is attached to the companion, while the pattern due to the motion of the mass-losing star has a stand-off radius defined by the orbital and wind velocity \citep[see Eq.~7 in][]{Kim2012ApJ...759...59K}. (2)~The propagation speeds of the patterns, which determine their shape, are different; in the case of AGB stars, however, this difference is very small since the propagation speed of all patterns is close to the wind speed, which dominates over the orbital and sound speed. (3)~While the arc pattern due to the reflex motion of the mass-losing star nearly reaches the orbital axis and introduces an oblate-shaped flattening of the circumstellar envelope density, the direct effect of the companion results in a spiral structure confined toward the orbital plane. When both types of spiral structures are combined, the hydrodynamical models show the presence of clumpy structures within the vertical extension limit of the companion's wake due to shocks.
\citet{Kim2013ApJ...776...86K} presented PV diagrams at different inclinations for simulations including both types of spirals for the carbon star CIT~6. Note that a spherical central region of 2\hbox{$^{\prime\prime}$}\ was carved out in their simulations to mimic the central hole seen in the observed HC$_3$N data. The PV diagrams in their simulations are dominated by the spiral induced by the reflex motion. In Sect.~\ref{Sec:Shape_models}, we will show that for spiral structures quite confined toward the orbital plane and for position angles significantly different from the orbital axis or plane, an $S$-type feature appears around offset zero in a PV diagram when seen almost edge-on; a feature also seen in the ALMA data (see Fig.~\ref{Fig:PV_13COJ6}).
\subsection{Morpho-kinematical simulations}\label{Sec:Shape_models}
To understand the complex morpho-kinematical structure seen in the $^{13}$CO PV diagrams and link the ALMA data to a 3D shell pattern, we have used the {\sc Shape} modeling tool \citep{Steffen2006RMxAA..42...99S, Steffen2011}. {\sc Shape} is a flexible, publicly available, interactive 3D morpho-kinematical modeling application for astrophysics. By interactively defining 3D structures, one can calculate intensity maps, PV diagrams, channel maps and spectra. While Eulerian 3D grid-based hydrodynamic simulations are possible, in this work we use a purely mathematical description of the object structure (see below). A 3D mesh is constructed, which serves as a container for the emissivity and velocity field. For a radiative transfer computation, the information in the mesh is transferred to a regular Cartesian 3D grid, on which the radiative transfer is computed. The asset of {\sc Shape} is that it is computationally very fast, which facilitates a first broad screening of the huge 3D kinematical and morphological parameter space. The deduced model parameters can then be used as input for a more detailed (hydrodynamical) simulation. Recently, a non-LTE (non local-thermodynamic equilibrium) radiative transfer solver ({\sc Shapemol}) has been added to compute molecular excitation levels \citep[][Santander-Garc\'ia et al.\ (2014), submitted]{Santander2012A&A...545A.114S}. This solver is based on the well-known LVG \citep[Large Velocity Gradient,][]{Castor1970MNRAS.149..111C} approximation, which significantly simplifies the radiative transfer problem. The current version of {\sc Shapemol} still has limitations when calculating the molecular level populations in an AGB wind accurately: it only includes collisional rate constants for temperatures up to 1000\,K and H$_2$ densities between 1$\times$10$^{8}$ and 1$\times$10$^{13}$\,m$^{-3}$, rotational levels in the vibrationally excited states are not included, and the tabulated densities are sometimes too low for the high densities encountered in the inner wind\footnote{Tabulated H$_2$ densities in {\sc Shapemol} range from 1$\times$10$^{8}$ to 1$\times$10$^{13}$\,m$^{-3}$, while the wind of CW~Leo has an H$_2$ density around 1$\times$10$^{15}$\,m$^{-3}$ at 2.5\,\Rstar\ (or 0\farcs05), decreasing to 6.6$\times$10$^{10}$\,m$^{-3}$ at 6\hbox{$^{\prime\prime}$}. Since the interest of this paper is not to reproduce the $^{13}$CO intensity maps but to analyse the morpho-kinematical structure in the inner envelope, we have divided the overall density by a factor 100.}. Nonetheless, it can still be used for a first morpho-kinematical interpretation of the structure seen in the ALMA PV data of $^{13}$CO.
Based on the results of \citet{Decin2010A&A...518L.143D} and \citet{DeBeck2012A&A...539A.108D}, we chose as input for our basic models a distance of 150\,pc, a constant gas mass-loss rate of 1.5$\times$10$^{-5}$\,\Msun/yr, a fractional abundance [CO/H$_2$] of 6$\times$10$^{-4}$, an isotopologue ratio $^{12}$CO/$^{13}$CO of 30, a stellar radius of 4.55$\times$10$^{13}$\,cm (or 0.02\hbox{$^{\prime\prime}$}\ at 150\,pc) and a wind velocity of 14.5\,km/s. The gas kinetic temperature is assumed to follow a power law
\begin{equation}
T(r) = \Teff \left(\frac{R_{\star}}{r}\right)^{\zeta}\,,
\end{equation}
with \Teff\ the effective temperature of 2330\,K and $\zeta \approx 0.5$. The molecular line data (energy levels, Einstein A coefficients and collisional rates) have been taken from the LAMDA database \citep{Schoier2005A&A...432..369S}.
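As a quick consistency check on these model inputs, the H$_2$ number density implied by the constant mass-loss rate and the power-law temperature can be evaluated at the radii quoted in this section. The sketch below uses rounded physical constants, so the results agree with the quoted numbers only to within a few tens of percent; the function names are ours.

```python
import math

MSUN = 1.989e30      # kg
AU = 1.496e11        # m
YR = 3.156e7         # s
M_H2 = 3.35e-27      # kg, mass of one H2 molecule

mdot = 1.5e-5 * MSUN / YR   # kg/s, constant gas mass-loss rate
v_wind = 14.5e3             # m/s, terminal wind velocity
r_star = 4.55e11            # m (4.55e13 cm)
t_eff, zeta = 2330.0, 0.5   # K, and temperature power-law exponent

def n_h2(r):
    """H2 number density [m^-3] of a smooth r^-2 wind: Mdot/(4 pi r^2 v)/m_H2."""
    return mdot / (4.0 * math.pi * r**2 * v_wind) / M_H2

def t_gas(r):
    """Gas kinetic temperature [K]: T = Teff (R*/r)^zeta."""
    return t_eff * (r_star / r) ** zeta

# Values quoted in the footnote: ~1e15 m^-3 at 2.5 R*, ~6.6e10 m^-3 at 6"
print(f"n_H2(2.5 R*) = {n_h2(2.5 * r_star):.2e} m^-3")   # ~1.2e15
print(f"n_H2(6\")    = {n_h2(900 * AU):.2e} m^-3")        # 6" at 150 pc = 900 au
print(f"T(2.5 R*)    = {t_gas(2.5 * r_star):.0f} K")
```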
We produced models of the inner 6\hbox{$^{\prime\prime}$}\ (i.e., total width of 12\hbox{$^{\prime\prime}$}), at $\sim$47\,mas resolution, for
spectral channels separated by 0.4\,km/s. We simulated the effect of the ALMA setup using the CASA task {\sc simobserve}, with antenna
positions and atmospheric conditions matching those used for our
observations. In order to obtain the same total duration (and range of
$uv$ spacings sampled), the simulations have a slightly longer on-target duration.
The simulated model images and PV plots therefore provide sampling comparable to that of the original observations, including missing spacings. The noise in the simulated images is slightly lower, not only because of the longer time on-source, but also because, although atmospheric and instrumental noise affecting the target are included, the limitations due to other imperfections in phase calibration are not. The plots of simulated data do have similar dynamic-range limitations and artefacts due to missing flux (where the input model has large-scale structure) as the observed images. For example, the absolute value of the most negative emission in the simulated data in the right panels of Fig.~\ref{Fig:PV_radial_outflow} is $\sim30\%$ of the peak. The look-up tables in some plots have been curtailed in order to highlight the most relevant features for comparison. We used the same {\sc clean} parameters
as for our observations to produce image cubes and position-velocity
plots.
\subsubsection{PV diagram for a radially outflowing wind} \label{Sec:PV_radial}
We first simulate two simple examples for a radially outflowing wind at a constant expansion velocity of 14.5\,km/s. In the first case, the mass-loss rate is assumed to be constant (or density $\rho \propto r^{-2}$; see top left panel in Fig.~\ref{Fig:PV_radial_outflow}); while in the second example (bottom left and right panels in Fig.~\ref{Fig:PV_radial_outflow}) we demonstrate the effect of density-enhanced shells \citep{Cordiner2009ApJ...697...68C,Decin2011A&A...534A...1D} with a shell-intershell density contrast of a factor 10. The wind region close to the stellar surface is responsible for the bright bar at zero offset in the PV-diagrams, while the outer density-enhanced shells create well-distinguished correlated curved structures at larger offset in the PV-diagram. Including the ALMA instrumental effects results in a (virtual) fragmentation of the shells (see right panels in Fig.~\ref{Fig:PV_radial_outflow}). Due to the short hour-angle coverage of the simulated observations in
Fig.~\ref{Fig:PV_radial_outflow} (right panels), the simulated PV diagrams at different position angles are not identical\footnote{Note that for the real ALMA observations this effect is not so strong since they were spread out more in hour-angle due to the longer gaps between scans spent on the calibration sources.}.
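The density-enhanced-shell toy model of this subsection can be sketched as a simple radial density law. This is an illustrative sketch only: the shell width adopted below is an assumed parameter (the text only fixes the 1\hbox{$^{\prime\prime}$}\ interval and the factor-10 contrast), and the function name is ours.

```python
def shell_wind_density(r_arcsec, contrast=10.0, interval=1.0, width=0.2):
    """Relative density of an r^-2 wind with density-enhanced shells.

    contrast : shell-to-intershell density ratio (a factor 10 in the text)
    interval : radial spacing of the shells in arcsec (1" in the text)
    width    : assumed radial shell width in arcsec (not given in the text)
    """
    base = r_arcsec ** -2                 # smooth, constant mass-loss wind
    in_shell = (r_arcsec % interval) < width
    return base * (contrast if in_shell else 1.0)

# A shell is crossed just above each integer radius; between shells the
# density falls back to the smooth r^-2 wind value.
print(shell_wind_density(2.1))   # inside a shell: 10 * 2.1^-2
print(shell_wind_density(2.5))   # between shells: 2.5^-2
```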
The simulated PV diagrams for the density-enhanced shells, taking the ALMA instrumental effects into account, show some resemblance to the ALMA $^{13}$CO J=6-5 PV diagram for a position angle around 22\hbox{$^\circ$}\ to the North-East (Fig.~\ref{Fig:PV_13COJ6_other_angles}). However, a spherically symmetric geometry and isotropic wind density structure as applied in the current setup cannot explain the different morphologies seen in PV diagrams with position angles 90\deg\ apart, nor the asymmetry when reflecting around zero offset (Fig.~\ref{Fig:PV_13COJ6_other_angles}).
\begin{figure*}[htp]
\begin{minipage}[t]{.3\textwidth}
\centerline{\resizebox{\textwidth}{!}{\includegraphics{PV_radial.pdf}}}
\end{minipage}
\hfill
\begin{minipage}[t]{.69\textwidth}
\centerline{\resizebox{\textwidth}{!}{\includegraphics{Var_Mdot_1_PV.pdf}}}
\end{minipage}
\caption{Simulated $^{13}$CO J=6-5 PV diagrams for a beam of 0\farcs43$\times$0\farcs23.
\textit{Top left:} Model for a radially outflowing wind with a constant mass-loss rate ($\rho \propto r^{-2}$). \textit{Bottom left:} Model for a radially outflowing wind with varying wind density, with a shell-intershell density contrast of 10. The density-enhanced shells are placed at intervals of 1\hbox{$^{\prime\prime}$}. No instrumental effects were taken into account. \textit{Right:} Simulations of the PV diagram for the density-enhanced shells taking the ALMA instrumental effects into account: in the four sub-panels, the slit is varied to create a PV diagram along the right ascension coordinate, the declination coordinate, and 22.5\deg\ to the north-east (NNE) or to the north-west (NNW).
\label{Fig:PV_radial_outflow}
\end{figure*}
\subsubsection{PV-diagram for binary-induced spiral wind structure} \label{Sec:Shape_PV_binaries}
In this section, we simulate the effect of a spiral structure on the PV diagram. Both the spiral structure due to the companion's wake and that due to the reflex motion of the mass-losing AGB star are simulated. To do so, we follow the results described by \citet{Kim2012ApJ...744..136K} and \citet{Kim2012ApJ...759...59K} by defining an Archimedean spiral $r = a_p \theta$, with $r$ the radial coordinate, $\theta$ the longitudinal coordinate, and $a_p$ a constant controlling the distance between successive turnings.
The spiral structure is generated by a thin shell mesh with its position given as the position vector $\vec{r}$ in spherical coordinates $(r,\theta,\phi)$ as a mapping of the unit sphere (1,$\theta$,$\phi$), with $\phi$ the latitude ($\phi=90$\deg\ being the equator). The coordinates of the mesh vertices are given by
\begin{equation}
\vec{r}(r,\theta,\phi) = \left(a_p \theta, \theta, \phi \right)\,,
\label{Eq:spiral}
\end{equation}
with $r$ in units of metres, $\theta$ between 0 and 360\deg, and $\phi$ between 0 and 180\deg. The spacing between two successive turnings is hence $a_p \times 360 \equiv a$, with $a$ in units of arcsec for a distance of 150\,pc.
The overall radial density is taken $\propto r^{-p}$ \citep[with $p\sim2$,][]{Kim2012ApJ...759...59K}, which is then modified by the longitudinal and latitudinal dependencies. To simulate both types of spiral arms (confined toward the orbital plane and extending almost completely toward the orbital axis), we define the latitudinal dependence of the density as
\begin{equation}
\frac{w^2}{(\phi - 90\deg)^2 + w^2}\,,
\label{Eq:w}
\end{equation}
in which case small values of $w$ (in units of degrees) generate a spiral arm confined toward the orbital plane. The thickness $\beta$ of the spiral arms is given as a function of distance $r$ from the center as $\beta(r) = b r + c$, where $b$ and $c$ are parameters that determine the growth rate and the initial thickness of the spiral shell.
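The full parametric density prescription of this section --- the $r^{-p}$ fall-off, the Archimedean arm of spacing $a$, the latitudinal weight of Eq.~\ref{Eq:w}, and the arm thickness $\beta(r) = b r + c$ --- can be sketched as a single function. This is our own minimal re-implementation for illustration, not the actual {\sc Shape} mesh construction; the sharp thin-shell arm is a simplification.

```python
import math

def spiral_density(r, theta, phi, a=1.45, p=2.0, b=0.0, c=8e12,
                   w=50.0, au_per_arcsec=150.0):
    """Relative density at spherical coordinates (r [cm], theta, phi [deg]).

    a : arm spacing in arcsec (a_p * 360 deg); p : radial exponent;
    b, c : arm thickness beta(r) = b*r + c [cm]; w : latitudinal width [deg].
    phi = 90 deg is the orbital plane (as in the text).
    """
    AU_CM = 1.496e13
    spacing = a * au_per_arcsec * AU_CM          # arm spacing in cm
    # radial distance to the nearest turn of the spiral r = (spacing/360)*theta
    offset = (r - spacing * theta / 360.0) % spacing
    dist = min(offset, spacing - offset)
    beta = b * r + c                             # local arm thickness
    in_arm = 1.0 if dist < 0.5 * beta else 0.0   # thin-shell spiral arm
    lat = w**2 / ((phi - 90.0)**2 + w**2)        # Eq. (latitudinal weight)
    return in_arm * lat * r ** (-p)
```

With the default (basic-model) parameters, the density is non-zero only inside the arm and is reduced toward the poles, which is the behaviour the parameter study below explores.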
The different free parameters ($a, p, b, c$, inclination $i$, and $w$) have been varied to study their effect, with the aim of understanding the morpho-kinematical structure visible in the $^{13}$CO J=6-5 PV diagrams. A much more detailed study will be presented in Homan et al.\ (in prep.). Although limited by the short duration of the observations, we estimate a value for the spiral arm spacing $a$ in the range of 1--2\hbox{$^{\prime\prime}$}\ using Fig.~\ref{Fig:13CO_channel}.
\begin{figure}
\includegraphics[width=.48\textwidth]{Fig_PV_model_overview_grid_new.jpg}
\caption{$^{13}$CO J=6-5 PV diagram for different model setups for a spiral structure, with the x-axis going from $-$6 to 6\hbox{$^{\prime\prime}$} and the y-axis from $-20$ to $20$\,km/s: \textit{panel~1:}~radial outflowing wind with density enhanced shells, for a shell-intershell contrast of a factor 10, \textit{panel~2:}~basic model for the spiral wind structure, with parameters $a=1.45$, $p=2$, $b=0$, $c=8\times 10^{12}$\,cm (or 16\,\Rstar), and $w=50$ at a PA along the orbital plane, \textit{panels~3--8:} same as basic model but with one parameter changed, as indicated on the top of each panel. For each panel, the color wedge goes from zero to maximum intensity (in Jy/beam), with redder colors indicating a higher intensity.}
\label{Fig:PV_overview}
\end{figure}
As setup for our basic spiral model, we chose the free parameters $i=80$\deg\ \citep{Jeffers2014}, $a=1.45$\hbox{$^{\prime\prime}$}, $p=2$, $b=0$, $c=8\times 10^{12}$\,cm (or 16\,\Rstar), and $w=50$. Each of the parameters is then changed one after the other to demonstrate its effect (see Fig.~\ref{Fig:PV_overview}). The PV diagram for the basic model of a one-armed spiral at a position angle along the orbital plane (second panel in Fig.~\ref{Fig:PV_overview}) shows very regular structures due to the formation of higher-density arcs in a plane perpendicular to the orbital plane \citep[see also Fig.~11 in][]{Mastrodemos1999ApJ...523..357M}, although its signature is slightly more asymmetric compared with the PV diagram for the radially outflowing wind with density-enhanced shells (first panel in Fig.~\ref{Fig:PV_overview}). At position angles not aligned with the orbital axis or orbital plane, the difference with the radially outflowing wind is much more pronounced, as can be seen in the first panel of Fig.~\ref{Fig:diff_inc}.
At a PA along the orbital plane, the morphological difference between the models for inclinations $i=80$\deg\ and $i=20$\deg\ is mainly seen around zero offset, where a higher intensity contrast is reached for $i=20$\deg. For position angles not aligned with the orbital axis or orbital plane, a more distinct morphological difference is seen for different inclination angles (see Fig.~\ref{Fig:diff_inc}): at inclination angles $\ga$60\deg\ an '$S$-type' structure around zero offset is seen in the PV diagram. This $S$-type signature is also seen in the $^{13}$CO J=6-5 ALMA images of CW~Leo (see Fig.~\ref{Fig:PV_13COJ6}); a feature that cannot be reproduced by a model assuming a radially outflowing wind with density-enhanced shells (see Fig.~\ref{Fig:PV_radial_outflow}). The difference in PV diagrams for different position angles at different inclinations will be used to constrain the inclination angle of the spiral arms in CW~Leo (Sect.~\ref{Sec:Similarities}).
\begin{figure}
\includegraphics[width=0.48\textwidth]{Fig_overview_grid_PA337.jpg}
\caption{$^{13}$CO J=6-5 PV diagram for a PA of 145\deg\ for the basic model at $i=80$\deg\ (\textit{left}), at $i=20$\deg\ (\textit{middle}), and for $w=5$\deg (\textit{right}). }
\label{Fig:diff_inc}
\end{figure}
As demonstrated by \citet{Kim2012ApJ...759...59K}, the mean density in the binary-induced stellar wind is proportional to $r^{-p}$, with $p$$\sim$2. Fig.~\ref{Fig:PV_overview} shows the effect of increasing the value of $p$ from 2 to 2.4 (although one has to realize that mass is not conserved in this latter simulation). In the case of $p$$\sim$2.4, the density is more concentrated in the inner spiral arms, as is clearly visible in the PV diagram.
When the parameter $b$ is larger than zero, the thickness of the spiral arms increases with radial distance from the star, simulating the effect of dispersion. This leads to a lower mean density in the outer spiral arms. In contrast, lowering the initial thickness of the spiral shells from $8\times10^{12}$\,cm to $3\times10^{12}$\,cm results in a higher mean density in the complete spiral structure due to the conservation of mass. Decreasing the parameter $a$ leads to a decrease in the distance between the successive arm turnings of the spiral.
A value of the parameter $w$ around 50 results in a mean density that is rather uniformly distributed in latitude, while the much smaller value of $w=5$ concentrates the latitudinal density dependence toward the orbital plane, hence mimicking the effect of gravitational focusing. While at a PA along the orbital axis the difference is not so large, the latitudinal dependence is more pronounced at PA=145\deg\ (see Fig.~\ref{Fig:diff_inc}).
As discussed by \citet{Mastrodemos1999ApJ...523..357M}, the effect of a binary companion is to focus some fraction of the material into a spiral structure, the remainder still forming a quasi-spherical envelope. This smooth wind structure is clearly visible in the ALMA PV diagrams as the dominant maximum around zero spatial offset covering the full velocity range (see Fig.~\ref{Fig:wind}). As will be shown by Homan et al.\ (in prep.), the flux contrast between the correlated arcs and the bright inner bar can be used to deduce the density contrast between the smooth wind and the spiral structure.
\begin{figure}[htp]
\begin{minipage}[b]{.28\textwidth}
\vspace*{-.3cm}
\centerline{ \includegraphics[width=\textwidth]{Fig_wind_text.jpg}}
\end{minipage}
\hfill
\begin{minipage}[b]{.2\textwidth}
\centerline{\includegraphics[height=5ex]{blanco.pdf}}
\caption{PV diagram for a spiral structure with parameters $a=1.12$, $p=2$, $b=0$, $c=3\times 10^{12}$\,cm, $w=10\deg$, and $i=80$\deg. In the right panel only the spiral structure is modeled, while in the left panel a smooth wind is included with a density contrast of a factor 4.}
\label{Fig:wind}
\end{minipage}
\end{figure}
\section{Discussion}\label{Sec:Discussion}
\subsection{Morphological comparison between ALMA $^{13}$CO PV diagrams and the theoretical simulations} \label{Sec:Similarities}
With an on-source integration time of only 17\,minutes, an in-depth understanding of the $^{13}$CO J=6-5 channel maps (Fig.~\ref{Fig:13CO_channel}) is challenging. However, it is clear that the different PV diagrams (Fig.~\ref{Fig:PV_13COJ6} and Fig.~\ref{Fig:PV_13COJ6_other_angles}) display correlated structures which cannot be interpreted with a simple spherically symmetric wind model. While the wind model with density-enhanced shell structures shows some resemblance to the ALMA PV data, the asymmetry when reflecting around zero offset seen in the ALMA PV diagrams (Fig.~\ref{Fig:PV_13COJ6_other_angles}) cannot be explained by this simple model, nor can it explain the difference between PV diagrams 90\deg\ apart in PA. A possible scenario to explain the structures seen in the ALMA PV diagrams is a binary-induced spiral structure. This interpretation is corroborated by other data and analyses already presented in the literature (see Sect.~\ref{Sec:qualitative}). Using the simple mathematical prescription for a binary-induced spiral structure described in Sect.~\ref{Sec:Shape_models}, we derive from the ALMA PV diagrams at PA$\sim$10-20\deg\ that the spiral parameter $a$ is $\sim$1.1\hbox{$^{\prime\prime}$}. Different combinations of $p$, $b$, and $c$ give a good fit to the data, but the limited sensitivity and $uv$ coverage of the current ALMA data inhibit a strict constraint. We therefore opt to use $p$=2 (mass conservation) and $b$=0 (the thickness of the spiral arms stays constant). A good fit to the correlated structures in the ALMA PV data is obtained for the spiral parameters $a=1.12\hbox{$^{\prime\prime}$}$, $p=2$, $b=0$, $c=3\times 10^{12}$\,cm, $w=30\deg$, and $i=60-80$\deg\ (see Fig.~\ref{Fig:comparison_Shape_ALMA}). The best-constrained parameter is the spiral arm distance $a$, with an uncertainty of only $\sim$10\%. The general morphology is reproduced quite nicely, but the fit to the relative intensities can be improved. The latter is not unexpected, taking into account the simple approach for modeling the spiral structure. An in-depth study describing the results of a large parameter grid, including a variation in the radial temperature structure of the spiral, the density contrast, etc., will be presented in Homan et al.\ (in prep.).
It is clear that the {\sc Shape} simulations shown in Fig.~\ref{Fig:comparison_Shape_ALMA} do not capture the clumpiness seen in the ALMA data. Part of the fragmentation is artificial and comes from the poor $uv$ coverage of the data (see Fig.~\ref{Fig:PV_radial_outflow}). However, part of the clumpiness seems also inherent to the wind structure of CW~Leo and indicates a non-smooth gas (and dust) density distribution within the spiral arms. Magnetic cool spots, photospheric convective motions, flow instabilities, etc.\ may result in localized gas and dust clumps (see the discussion in Sect.~\ref{Sec:qualitative}). The fact that we see the arc structure extending toward the orbital axis (see the illustration in Fig.~\ref{Fig:sketch}) implies that the reflex motion of the primary AGB star (CW~Leo itself) acts as a cause of the arc-like signature detected in the ALMA PV, Herschel and Hubble Space Telescope images.
\begin{figure*}
\includegraphics[width=\textwidth]{Fig_compare_ALMA_model.jpg}
\caption{\textit{Left:} Simulated channel maps for a spiral wind structure with parameters $a=1.12$, $p=2$, $b=0$, $c=3\times 10^{12}$\,cm, $w=30\deg$, and $i=60$\deg, including a smooth wind structure with a spiral/wind density contrast of a factor 4. The central velocity of the panel at the top left is $-$16\,km/s (w.r.t. v$_{\rm{LSR}}$), at the bottom right corner +16\,km/s, hence $\Delta v$ is 1.33\,km/s for adjacent panels. The field-of-view is 5\hbox{$^{\prime\prime}$}$\times$5\hbox{$^{\prime\prime}$}. The ALMA instrumental effects are not taken into account in this simulation. \textit{Right:} Corresponding simulated PV diagrams (in grey-scale) for a slit along the declination axis (top) and along the right ascension axis (bottom). The contours of the simulated PV diagrams are shown in blue and for the ALMA data in red, each time at (0.05,0.1,0.2,0.4,0.6,0.8) times the maximum intensity. The shaded region in the bottom panel indicates the place where sidelobe effects deteriorate the quality of the ALMA PV data.}
\label{Fig:comparison_Shape_ALMA}
\end{figure*}
\subsection{Orbital period and binary separation} \label{Sec:period}
The model fitting of the ALMA $^{13}$CO J=6-5 PV data indicates a spiral arm spacing of $a\sim$1.12\hbox{$^{\prime\prime}$}, which is in agreement with the arc spacing seen in the HST wide-V band image of the inner 10\hbox{$^{\prime\prime}$}\ region of CW~Leo \citep[see Fig.~3 in][]{Mauron2000A&A...359..707M}.
The arm spacing is determined by the product of the binary orbital period $T_p$ and the pattern propagation speed in the orbital plane, which following \citet{Kim2012ApJ...759L..22K} is given by
\begin{equation}
\Delta r_{\rm{arm}} = \Big(\langle V_w \rangle + \frac{2}{3} V_p \Big) \times \frac{2 \pi r_p}{V_p}\,,
\label{Eq:Tp}
\end{equation}
with $\langle V_w \rangle$ the wind velocity, $V_p$ the orbital velocity, and $r_p$ the orbital radius of the primary; the orbital period $T_p$ is given by the last term ($2 \pi r_p/V_p$). The first term on the right-hand side of Eq.~\ref{Eq:Tp}, $\langle V_w \rangle + \frac{2}{3} V_p$, is the pattern propagation speed throughout the orbital plane, which is close to the wind speed if $V_w$ dominates over the orbital and sound speeds \citep[which is the case for almost all AGB binary simulations, see][and \textit{Kim priv.\ comm.}]{Kim2012ApJ...759...59K}. At a distance of 1\hbox{$^{\prime\prime}$}, the wind has already reached its terminal velocity of 14.5\,km/s (see Sect.~\ref{Sec:vgas}). The derived arm spacing hence results in an orbital period of $\sim$55\,yr.
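The quoted $\sim$55\,yr follows directly from Eq.~\ref{Eq:Tp} in the limit where the pattern propagation speed equals the wind speed. A minimal sketch of this arithmetic (variable names are ours; the result scales linearly with the adopted distance of 150\,pc):

```python
AU_KM = 1.496e8          # km per au
YR_S = 3.156e7           # s per yr

d_pc = 150.0             # adopted distance to CW Leo
arm_spacing_arcsec = 1.12
v_wind = 14.5            # km/s, terminal wind velocity

# Delta r_arm ~ <V_w> * T_p when V_w dominates over V_p and the sound speed
arm_spacing_km = arm_spacing_arcsec * d_pc * AU_KM   # 1" at 150 pc = 150 au
period_yr = arm_spacing_km / v_wind / YR_S
print(f"orbital period ~ {period_yr:.0f} yr")        # ~55 yr
```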
The derived orbital period is lower than the orbital periods derived for some other carbon-rich AGB stars for which a binary companion is thought to be the cause of the detected spiral arm structure. \citet{Maercker2012Natur.490..232M} derived for R~Scl a period of 350\,yr on the basis of CO J=3-2 ALMA data, for CIT~6 \citet{Dinh2009ApJ...701..292D} found a period of $\sim$600\,yr, and for AFGL~3068 an orbital period of 830\,yr was obtained by \citet{Mauron2006A&A...452..257M}. The longer orbital periods for these AGB stars indicate that the companion must be on a wide orbit around the AGB star, typically around 50--70\,au. Using Kepler's third law, and assuming a primary initial mass of 4\,\Msun\ for CW~Leo \citep{Guelin1995A&A...297..183G} and a secondary mass lower than that of the primary, the mean binary separation, $d$, for CW~Leo is $\sim$25$\pm$2\,au (or $\sim$8.2\,\Rstar=0.17\hbox{$^{\prime\prime}$}), i.e.\ a binary system that can reside within the inferred dust lane of $\sim$0.5--1\hbox{$^{\prime\prime}$}\ radius (see Fig.~\ref{Fig:sketch}). In the case that the primary has already lost some 2\,\Msun\ through its stellar wind \citep{Decin2011A&A...534A...1D}, the binary separation would reduce to $\sim$19\,au.
\subsection{Companion mass} \label{Sec:companion_mass}
According to the method outlined by \citet{Kim2012ApJ...759L..22K}, one needs the (projected) separation between the primary star and its companion to derive the binary mass ratio. However, despite an extensive search throughout the literature, no direct identification of the potential binary companion was possible, so we lack diagnostics to constrain the mass of the companion. Nevertheless, the fact that the ALMA PV data around PA$\sim$10-20\deg\ might be explained by a spiral shock caused by the reflex motion of the mass-losing primary star implies that the mass of the secondary cannot be very low, i.e.\ the secondary cannot be a Jupiter-like planet or a brown dwarf.
Although the simulations of \citet{Mastrodemos1999ApJ...523..357M} are not fine-tuned to the specific situation of CW~Leo (i.e., the radius of CW~Leo is larger than the input radius for the model simulations, and the estimated mass of CW~Leo is around 4\,\Msun, while the input model mass for the primary, $M_p$, is 1.5\,\Msun), we can use their results as a guideline to estimate the mass ratio, $M_s/M_p$, with $M_s$ the mass of the secondary. Their model sequence M10$\rightarrow$M17$\rightarrow$M18 shows that for a decreasing secondary mass (1\,\Msun$\rightarrow$0.5\,\Msun$\rightarrow$0.25\,\Msun) the morphology changes from bipolar$\rightarrow$elliptical$\rightarrow$quasi-spherical, the latter case resembling the signature in our ALMA data and the multiple shells seen in the optical and infrared images \citep{Mauron2000A&A...359..707M, Decin2011A&A...534A...1D}. This would imply a mass ratio around $1/6$, or a mass for the secondary around 0.6\,\Msun. Using Eq.~(2) in \citet{Eggleton1983ApJ...268..368E}, this implies that the effective radius of the Roche lobe, $r_L$, is 0.54 times the orbital separation, or $r_L \sim 13.5$\,au (4.5\,\Rstar).
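The factor 0.54 follows from the Eggleton (1983) approximation $r_L/d = 0.49\,q^{2/3}/\left[0.6\,q^{2/3} + \ln(1 + q^{1/3})\right]$, evaluated for the primary's lobe with $q = M_p/M_s$. A minimal sketch (function name is ours):

```python
import math

def eggleton_roche_fraction(q):
    """Eggleton (1983) effective Roche-lobe radius r_L/d for mass ratio q."""
    q13 = q ** (1.0 / 3.0)
    return 0.49 * q13**2 / (0.6 * q13**2 + math.log(1.0 + q13))

q = 4.0 / 0.6                       # M_p / M_s for CW Leo
frac = eggleton_roche_fraction(q)   # ~0.54
print(frac, frac * 25.0)            # fraction, and r_L in au for d = 25 au
```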
An alternative method is offered by \citet{Huggins2009MNRAS.396.1805H}, who used the models of \citet{Mastrodemos1999ApJ...523..357M} to develop a prescription for the observed envelope shapes in terms of the binary parameters. The companion modifies the mass loss by gravitationally focusing the wind towards the orbital plane, and thereby determines the shape of the envelope at large distances from the star. They define the envelope shape parameter, $K_n$, as being the density contrast between the pole and the equator
\begin{equation}
K_n - 1 = (n_{\rm{eq}} - n_{\rm{po}}) / n_{\rm{po}}\,,
\end{equation}
with $n_{\rm{eq}}$ and $n_{\rm{po}}$ being the density at the equator and pole, respectively. Using the results of \citet{Mastrodemos1999ApJ...523..357M}, \citet{Huggins2009MNRAS.396.1805H} developed an empirical relation between the envelope shape parameter $K_n$ and the binary parameters, being the masses of the primary and secondary ($M_p$ and $M_s$), the binary separation ($d$) and the velocity of the wind at the orbit of the secondary ($V_s$). From the numerical simulations, they derive the following equation (Eq.~(4) in their paper)
\begin{equation}
\log{(K_n - 1 )} = 5.19(\pm0.66)+1.40(\pm0.21) \log{(M_s/V_s^2 d)}\,,
\label{Eq:Kn_binary}
\end{equation}
independent of the mass of the primary $M_p$, and with $M_s$ in units of solar mass, $V_s$ in km/s and $d$ in astronomical units (au). Although the models are not fine-tuned to the specific situation of CW~Leo (i.e., the radius of CW~Leo is larger than the input radius for the model simulations), we can use this relation and the value of $w$, characterizing the latitudinal dependence (see Sect.~\ref{Sec:Shape_PV_binaries}), to obtain a first crude estimate of the companion mass.
In Sect.~\ref{Sec:Similarities} we derive a best-fit value for $w$ of $\sim$30\deg, resulting in a value of $K_n$ around 10 (using Eq.~\ref{Eq:w}). Using Eq.~\ref{Eq:Kn_binary}, we derive that
\begin{equation}
\frac{M_s}{V_s^2 d} = 0.8\times10^{-3}\,{\rm{\frac{\Msun}{(km/s)^{2}\,au}}}\,.
\label{Eq_Kn_applied}
\end{equation}
For a binary separation $d$ of 20\,au and a wind velocity at the orbit of the secondary, $V_s$, of $\sim$8\,km/s (see Fig.~\ref{Fig:vexp_LAS_sFWHM}), the derived mass of the secondary, $M_s$, is $\sim$1.1\,\Msun. However, the uncertainty on this estimate is significant, owing to the quadratic dependence on the highly uncertain value of $V_s$ and the uncertainty of $\sim$15\deg\ in the value of $w$.
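The chain of estimates in this subsection can be reproduced step by step. Note that the intermediate rounding in the text (quoting $0.8\times10^{-3}$ rather than the un-rounded $\sim$$0.9\times10^{-3}$) shifts the final mass by $\sim$10--20\%, well within the stated uncertainties; the function names below are ours.

```python
import math

def k_n(w_deg):
    """Equator-to-pole density contrast implied by the latitudinal weight
    w^2 / ((phi - 90)^2 + w^2), with weight 1 at the equator."""
    pole_weight = w_deg**2 / (90.0**2 + w_deg**2)   # phi = 0 (pole)
    return 1.0 / pole_weight

def companion_mass(kn, v_s, d_au):
    """Invert the Huggins et al. (2009) relation
    log10(K_n - 1) = 5.19 + 1.40 log10(M_s / (V_s^2 d));
    returns M_s in Msun for V_s in km/s and d in au."""
    x = (math.log10(kn - 1.0) - 5.19) / 1.40        # log10(M_s / (V_s^2 d))
    return 10.0**x * v_s**2 * d_au

kn = k_n(30.0)                        # = 10 for the best-fit w ~ 30 deg
m_s = companion_mass(kn, 8.0, 20.0)   # ~1-1.2 Msun for V_s = 8 km/s, d = 20 au
print(kn, m_s)
```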
The derived rough estimate for the companion mass puts the companion in the category of a white dwarf or an unevolved low-mass main sequence star. However, a binary with a white dwarf companion could result in a so-called 'dusty symbiotic system'. This kind of system harbours a very hot and active accretion disk, with clear signatures in the UV and the presence of many forbidden atomic lines of, e.g., [O\,III], [Ne\,III], and [Fe\,IV] in the optical spectrum. Since no signs of activity are seen in the optical/UV spectra of CW~Leo, we tentatively postulate that the companion of CW~Leo is an unevolved M-type dwarf.
\subsection{Other scenarios}
\begin{figure}[htp]
\begin{minipage}[b]{.28\textwidth}
\vspace*{-.3cm}
\centerline{ \includegraphics[width=\textwidth]{Fig_other_scenarios.jpg}}
\end{minipage}
\hfill
\begin{minipage}[b]{.2\textwidth}
\centerline{\includegraphics[height=5ex]{blanco.pdf}}
\caption{PV diagrams for enhanced mass-loss ejections at PA=90\deg. \textit{Left:} Simulations for localized mass-loss ejections covering $\sim$10\% of the surface at an interval of 1\hbox{$^{\prime\prime}$} \citep{Dinh2008ApJ...678..303D}. \textit{Right:} Simulations for latitudinal bands according to the model of \citet{LeBertre1989Msngr..55...25L}.}
\label{Fig:PV_extra}
\end{minipage}
\end{figure}
One might wonder if localized mass-loss ejections driven radially outward, without the action of a binary companion, might result in the type of PV diagrams seen in the ALMA data. \citet{Dinh2008ApJ...678..303D} suggested that the shells seen in the VLA data of HC$_3$N and HC$_5$N at a distance of $\sim$15\hbox{$^{\prime\prime}$}\ from the central star cover some 10\% of the stellar surface at the time of ejection. One can simulate this scenario using the simple parametric description for density enhanced shells (Sect.~\ref{Sec:PV_radial}), but randomly constraining the longitudinal and latitudinal coordinates for each of the five shells so that each segment covers only $\sim$10\% of the surface. This scenario leaves a lot of (parametric) freedom in modelling the wind of CW~Leo. An example of such a PV diagram is shown in the left panel of Fig.~\ref{Fig:PV_extra}. While this example shows some resemblance to the ALMA PV diagrams, this scenario would need a lot of fine-tuning to obtain a model showing almost symmetrical signatures as seen in the ALMA PV diagram for PA$\sim$10--20\deg\ (Fig.~\ref{Fig:PV_13COJ6_other_angles}), but for which almost no structure is left in a PV diagram taken at a PA 90\deg\ apart.
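As a side note (our own back-of-the-envelope check, not part of the original analysis), a surface segment covering $\sim$10\% of the sphere corresponds to a cone with a half-opening angle of about 37\deg:

```python
import math

# Fractional solid angle of a spherical cap with half-opening angle theta:
#   f = (1 - cos(theta)) / 2.  Invert for f = 0.10.
f = 0.10
theta_deg = math.degrees(math.acos(1.0 - 2.0 * f))
print(round(theta_deg, 1))  # 36.9
```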
A way to constrain the huge parameter space entailed by the model for the localized mass-loss ejections is offered by the model presented by \citet{LeBertre1989Msngr..55...25L}, who suggested that ejected material also moves due to the stellar rotation, creating latitudinal bands. This kind of model was recently used by \citet{Jeffers2014} to explain the ExPo images of CW~Leo. The {\sc Shape} simulations for this scenario are also based on the model for the density enhanced shells (Sect.~\ref{Sec:PV_radial}), but the latitudinal coordinates of each shell only cover $\sim$5\deg.
These simulations do not rule out the ejection of clumps with initial extent $\sim$10\% of the
stellar surface, nor the possible effects of stellar rotation, but
the addition of the effects of the companion in creating a spiral
provides a more robust explanation for the asymmetry in the PV diagrams when reflecting around zero-offset, for the differences seen in the PV diagrams 90\deg\ apart and for the other diagnostics discussed in Sect.~\ref{Sec:qualitative} and shown in Fig.~\ref{Fig:sketch}.
\subsection{Further evolution}
There are numerous similarities between CW~Leo and the young post-AGB star CRL\,2688 \citep[also called the Egg Nebula;][]{Sahai1998ApJ...493..301S}. Both objects are carbon-rich, suggesting an initial mass larger than 3\,\Msun. Although CRL\,2688 is further along in its evolution, both objects have an optically thick circumstellar core with biconical cavities, and shell-like density enhancements are observed in scattered light as incomplete arcs. In CRL\,2688 high-speed collimated jets are present with velocities around 300\,km/s, which seem to have been powered only during the last 200\,years. According to \citet{Soker1994ApJ...421..219S}, different mechanisms can explain the presence of the collimated outflows in CRL\,2688, each of them requiring the central star to be a binary, with the collimated outflow resulting from an accretion disk. The discovery of a spiral arm structure in the ALMA data of CW~Leo, which might be induced by the presence of a binary companion, supports the suggestion that CW~Leo will evolve into an object similar to CRL\,2688.
\section{Conclusions} \label{Sec:conclusions}
We have presented the first ALMA band~9 data of the inner wind region of CW~Leo at a spatial resolution of 0\farcs42$\times$0\farcs24 and with a sensitivity of 0.2--0.3\,Jy beam$^{-1}$ per 488\,kHz channel. We have detected 25 emission lines, all of them centered on the dust continuum peak, which has a total (star+dust) flux density within the 3$\sigma$ contour of 5.66\,Jy. The images prove that the vibrational lines are excited just above the stellar photosphere and that SiO is a parent molecule, formed close to the stellar surface, probably due to shock-induced non-equilibrium chemistry.
The velocity traced by the line width of the emission lines suggests a steep increase of the wind velocity starting around 5\,\Rstar\ reaching almost the terminal velocity at $\sim$11\,\Rstar.
Both the dust emission and the emission of the brightest lines show a clear asymmetric distribution. The position-velocity (PV) maps at different position angles of the $^{13}$CO J=6-5 line display correlated arc-like structures, which can be explained by a spiral arm. This spiral arm can be caused by the presence of a binary companion. Using the {\sc Shape} modeling tool we have modeled the $^{13}$CO J=6-5 PV diagrams. We deduce that the orbital axis lies at a position angle of $\sim$10--20\deg\ to the North-East and that the spiral arm spacing is $\sim$1.12\hbox{$^{\prime\prime}$}. At a distance of 150\,pc, this leads to an orbital period of 55\,yr and a binary separation of $\sim$20-25\,au (or $\sim$6--8\,\Rstar). We tentatively suggest that the companion is an unevolved low mass main sequence star.
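The orbital numbers quoted above can be reproduced from the spiral spacing with a short calculation. This is a sketch: successive windings are separated by the distance the wind travels in one orbital period, and the adopted terminal wind velocity (14.5\,km/s) and total system mass ($\sim$3.5\,\Msun) are assumed values for illustration, not fitted quantities.

```python
AU_PER_YR_IN_KMS = 4.74  # 1 au/yr expressed in km/s

spacing_arcsec = 1.12    # spiral arm spacing from the PV modeling
distance_pc = 150.0      # adopted distance
v_wind = 14.5            # assumed terminal wind velocity [km/s]
M_total = 3.5            # assumed total system mass [Msun]

spacing_au = spacing_arcsec * distance_pc           # 1" at D pc subtends D au
period_yr = spacing_au / (v_wind / AU_PER_YR_IN_KMS)  # wind travel time per winding
a_au = (M_total * period_yr**2) ** (1.0 / 3.0)        # Kepler's third law

print(round(period_yr), round(a_au))
```

This yields a period of $\sim$55\,yr and a separation of $\sim$22\,au, within the quoted 20--25\,au range; varying the assumed total mass between $\sim$2.5 and 4.5\,\Msun\ spans that range.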
The scenario of a binary system can explain (1)\,the spiral arm structure seen in the ALMA PV data, (2)\,the signature of a bipolar structure seen at arcsecond scales, and (3)\,the presence of multiple non-concentric shells detected in the outer wind, which might represent the limb-brightened edges of the spiral arms seen almost edge-on. The granted ALMA cycle~1 time will increase the signal-to-noise ratio and UV-coverage, hence facilitating the in-depth interpretation of the ALMA data.
\begin{acknowledgements}
This paper makes use of the following ALMA data:
ADS/JAO.ALMA\#2011.0.00277.S. ALMA is a partnership of ESO (representing
its member states), NSF (USA) and NINS (Japan), together with NRC
(Canada) and NSC and ASIAA (Taiwan), in cooperation with the Republic of
Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
W.S.\ acknowledges support by grant UNAM-PAPIIT 101014. The authors thank Nicholas Koning and Miguel Santander-Garc\'{\i}a for technical support with {\sc Shape}.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
The time structure of hadronic showers plays a key role when evaluating the timing capabilities of calorimeter systems. This is of particular interest in the context of the development of detector concepts for the Compact Linear Collider (CLIC) \cite{Lebrun:2012hj}, where time stamping of signals on the nanosecond level is of key importance to reject pile-up from hadrons produced in two-photon processes, and where tungsten is used as absorber material for the hadronic calorimeters \cite{Linssen:2012hp}. This choice of a heavy absorber for the hadron calorimeter is expected to result in a particularly rich time structure of the hadronic cascade. The sensitivity to neutrons in the later parts of the hadronic cascade also influences the spatial structure of the visible signal in the detector, and is thus of importance for the performance of particle flow algorithms \cite{Thomson:2009rp, Marshall:2012ry}. These algorithms, which are used for jet reconstruction with unprecedented precision in linear collider detectors, rely on two-particle separation in the calorimeters. The spatially resolved measurement of the time structure of showers and the comparison of detection media with different sensitivity to neutrons, such as plastic scintillators and gaseous detectors, is thus of high relevance for the development of calorimeter technologies for such a future collider. The comparison of the measurements to simulations provides a means of validating the modelling of the time structure of the showers by different GEANT4 \cite{Agostinelli:2002hh} physics lists. Due to limited previous experimental input and the expected larger effects of late shower components in tungsten-based calorimeters, this is particularly relevant for the case of tungsten absorbers.
\section{Experimental Setup}
\begin{figure}
\centering
\includegraphics[height=0.35\textwidth]{figures/T3BInstallation.jpg}
\hspace{0.5 cm}
\includegraphics[height=0.35\textwidth]{figures/FastRPC.jpg}
\caption{The T3B setup downstream of the CALICE analog tungsten HCAL (left) and the FastRPC setup downstream of the CALICE digital tungsten HCAL, showing the RPC and the readout board mounted to it (right).}
\label{fig:T3BInstallation}
\end{figure}
The average time structure of hadronic showers is measured by the T3B (Tungsten Timing Test Beam) and the FastRPC experiments, which took data together with the CALICE analog scintillator tungsten HCAL (WAHCAL) \cite{LucaciTimoce201388, Adloff:2010hb} and the CALICE digital RPC tungsten HCAL (WDHCAL) \cite{Bilki:2008df}; the active elements of both calorimeters, originally operated with steel absorbers, were installed in the same tungsten absorber structure in the CERN test beams. In addition, the T3B experiment also took data with the CALICE semi-digital HCAL (SDHCAL) \cite{Laktineh:2011zz} with steel absorbers to provide a comparison of the impact of different absorber materials. The T3B setup \cite{Soldner:2011np} consists of 15 scintillator cells with a size of $3\,\times\,3$ cm$^2$ and a thickness of 5 mm, with directly coupled Hamamatsu MPPCs following the scintillator tile design presented in \cite{Simon:2010hf}, while FastRPC uses a glass RPC \cite{Drake:2007zz} with pad readout with a geometry identical to that of T3B. The 15 cells/pads are arranged in one row extending from the center of the calorimeter out to one side of the detector, covering the full radial extent of the showers. Figure \ref{fig:T3BInstallation} shows the T3B and the FastRPC setups installed downstream of the CALICE WAHCAL and WDHCAL, respectively.
Both systems use the same readout chain, starting with a custom-designed pre-amplifier board which feeds the analog signals from the SiPMs and the RPC into a set of four 4-channel USB-oscilloscopes\footnote{PicoTech PicoScope 6403 (http://www.picotech.com/)} which provide a sampling rate of 1.25\,GSa/s on all channels and are therefore well suited for precise timing measurements in the nanosecond region. Long acquisition windows of \mbox{2.4 $\mu$s} per event are recorded to study the time structure of the energy deposits in the active medium in detail, providing information on the time structure of hadronic showers in the calorimeter.
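For illustration, the quoted acquisition parameters imply the following raw waveform size per channel (derived from the numbers above, not stated explicitly in the text):

```python
sampling_rate_hz = 1.25e9   # 1.25 GSa/s per channel
window_s = 2.4e-6           # 2.4 us acquisition window per event

# Number of waveform samples recorded per channel per event.
samples_per_channel = round(sampling_rate_hz * window_s)
print(samples_per_channel)  # 3000
```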
The small number of channels is insufficient for event-by-event measurements, but is used to measure the average time structure of showers in large data samples. In addition, the information from the main calorimeters can be used to reconstruct the position of the first inelastic hadronic interaction event by event, making it possible to measure the time structure of the shower at various depths with respect to the shower start and thus to obtain the averaged timing profile over the full longitudinal and lateral extent of the shower.
In addition to the measurements performed with tungsten absorbers, T3B also took data together with the CALICE semi-digital calorimeter with steel absorbers to provide a comparison of the two absorbers.
\section{Results - The Time Structure in Tungsten with Scintillators and RPCs}
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{figures/ScintRPC.pdf}
\caption{Comparison of the time of first hit with scintillator and RPC readout. For reference, the time distribution for muons in FastRPC is also shown.}
\label{fig:ScintRPC}
\end{figure}
For both experiments, a sophisticated calibration and reconstruction framework has been developed. In the case of scintillator readout, this system is capable of determining the arrival time of each photon on the photon sensor on the nanosecond level by iteratively subtracting single photon signals from the recorded waveform. Further analysis is then performed on the photon time distribution. With RPC readout, the recorded pulse is used directly in the analysis. To provide a robust base for comparison between the two systems and to eliminate effects from afterpulsing of the SiPMs, the time of first hit is studied, which is defined by the time of the first energy deposit corresponding to at least the equivalent of 0.3 minimum-ionizing particles within 9.6 ns in a given cell in an event. Due to the high granularity of the readout, the probability for multiple hits in one cell in one event is on the percent level. The use of the time of first hit rather than using all observed hits thus does not result in an appreciable bias.
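The first-hit definition above can be summarized in a few lines of Python. This is a sketch of the stated definition, not the actual T3B/FastRPC reconstruction code; hit energies are assumed to be already calibrated to MIP equivalents.

```python
def time_of_first_hit(hits, threshold_mip=0.3, window_ns=9.6):
    """hits: list of (time_ns, energy_mip) deposits in one cell in one event.
    Returns the start time of the earliest 9.6 ns window in which the summed
    energy reaches 0.3 MIP equivalents, or None if the cell never fires."""
    for t0, _ in sorted(hits):
        energy = sum(e for t, e in hits if t0 <= t < t0 + window_ns)
        if energy >= threshold_mip:
            return t0
    return None

print(time_of_first_hit([(5.0, 0.1), (7.0, 0.25)]))   # 5.0 (0.35 MIP in the window)
print(time_of_first_hit([(5.0, 0.1), (20.0, 0.25)]))  # None (never above threshold)
```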
Figure \ref{fig:ScintRPC} shows the distribution of the time of first hit, normalized to the number of events with at least one hit in the timing layer for both scintillator and RPC readout. This normalization also accounts for the difference in beam energy, which was found not to result in significant differences in this distribution. Here, the data sets with the highest statistics for the two experiments are used. For reference, the distribution for muons with RPC readout is also shown, indicating the distribution observed with an instantaneous signal. It is clearly apparent that hadronic showers lead to a substantial late signal component with both readout types, extending well beyond \mbox{200 ns}. The figure also shows a discrepancy of up to a factor of eight in the time region from 10 ns to 50 ns, a region where signals are expected to originate to a large extent from MeV-scale spallation neutrons. Due to the low density and the low hydrogen content in the RPCs, their sensitivity to this component is significantly reduced compared to plastic scintillator.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{figures/Radial.pdf}
\caption{Mean time of first hit in hadronic showers as a function of radial distance from the beam axis for scintillator and RPC readout.}
\label{fig:Radial}
\end{figure}
Figure \ref{fig:Radial} shows the radial distribution of the mean time of first hit, calculated up to 200 ns, again comparing gaseous and plastic scintillator readout. On the shower axis, the mean is close to zero in both cases due to the dominance of energy deposits from compact electromagnetic subshowers and from relativistic hadrons. The late components, which spread out more widely than the relativistic particles due to the diffusion of neutrons, gain in importance at larger radii. Since the sensitivity, in particular to the intermediate component, is suppressed for RPC readout, the mean stays low out to larger radii compared to scintillator readout, making the core of the shower more compact in time with gaseous readout.
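The truncated mean used here can be illustrated with a minimal sketch (the example values below are invented solely to show how a late neutron-induced tail pulls the mean, and are not measured data):

```python
def mean_first_hit_time(first_hit_times_ns, t_max=200.0):
    """Mean time of first hit, truncated at t_max (200 ns in the text) so
    that rare very late hits do not dominate the average."""
    selected = [t for t in first_hit_times_ns if t <= t_max]
    return sum(selected) / len(selected) if selected else float("nan")

print(mean_first_hit_time([0.8, 1.2, 2.0]))           # prompt, on-axis-like sample
print(mean_first_hit_time([1.0, 15.0, 40.0, 180.0]))  # sample with a late tail
```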
\section{Results - Comparison of T3B Data with GEANT4 Simulations}
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{figures/TungstenvsSteel.pdf}
\caption{Comparison of the time of first hit for muons (red) and 60 GeV pions in steel (green) and tungsten (blue) with the T3B plastic scintillator readout.}
\label{fig:TungstenvsSteel}
\end{figure}
For T3B, a sophisticated GEANT4-based simulation framework has been developed, which includes a detailed modelling of the detector response including detector effects on the time distribution of the signals. For FastRPC, a comparable simulation framework is not yet available, so we only discuss scintillator results here. Although the main emphasis of the T3B and Fast\-RPC programs is on the investigation of the time structure in a hadron calorimeter with tungsten absorbers, reference data with steel absorbers were also taken with scintillator readout. Figure \ref{fig:TungstenvsSteel} shows the comparison of the distribution of the time of first hit in tungsten and steel. In both cases, hadronic showers lead to a considerable late signal component compared to the muon reference. These late components are substantially more pronounced in tungsten than in steel, stressing the importance of a realistic modelling of the time structure when developing tungsten-based calorimeter systems for future collider detectors.
\begin{figure}
\centering
\includegraphics[width=0.495\textwidth]{figures/TofH_MCvsData-Tungsten.pdf}
\hfill
\includegraphics[width=0.495\textwidth]{figures/TofH_MCvsData-Steel.pdf}
\caption{Mean time of first hit in hadronic showers in tungsten (left) and steel (right) compared to GEANT4 simulations with three different physics lists. QGSP\_BERT\_HP and QBBC use specific models for low-energy neutrons.}
\label{fig:DataMCTimeDistribution}
\end{figure}
Figure \ref{fig:DataMCTimeDistribution} shows the distribution of the time of first hit observed in tungsten and steel compared to simulations with different physics models, which differ in particular in their treatment of low-energy neutrons \cite{Geant4:PhysicsLists}. While QGSP\_BERT\_HP and QBBC, which both have specialized neutron components, reproduce the observations in both tungsten and steel, QGSP\_BERT, which was the main production physics list for the LHC experiments and for the ILC and CLIC detector optimization studies, is only capable of describing the time structure in steel, while predicting significantly too much late shower activity in tungsten.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figures/MTofH_VS_ShowerDepth_CompareSimData_AtShowerCenter_For60GeVHadrons.pdf}
\caption{Mean time of first hit along the beam axis measured in the central tile of T3B as a function of depth in the hadronic shower, compared to Geant4 simulations with different physics lists.}
\label{fig:LongitudinalProfile}
\end{figure}
During data taking with the WAHCAL, the T3B data acquisition was synchronized with the main CALICE DAQ, which allows a combined analysis of the data in tungsten. Due to the low trigger rate of the first prototype run of the complete SDHCAL, such a synchronization was not practical for the case of steel. The combined analysis provides the possibility to determine the point of the first inelastic interaction (``shower start'') on an event-by-event basis. This provides a measurement of the depth of the T3B layer within the shower, which is used to reconstruct an average longitudinal time profile of hadronic showers in tungsten by combining the data of many events, similar to the radial time profiles discussed above, which are by design averaged over all shower starting positions. Figure \ref{fig:LongitudinalProfile} shows the longitudinal profile of the mean time of first hit in the shower core along the shower axis compared to the three GEANT4 physics lists also used above. The data are by construction corrected for the time-of-flight of an ultra-relativistic particle, since the T3B layer always has the same distance with respect to the trigger scintillators which set the event time. The measurement shows the dominance of prompt shower components in the front part of the cascade, driven by the instantaneous electromagnetic component and by relativistic hadrons, and some contributions of later energy deposits which gain in importance towards the rear of the shower. Only the models with a special treatment of low-energy neutrons are capable of reproducing the time structure of the deeper part of the shower, consistent with the observations made for the overall time distribution of the energy deposits discussed above.
\section{Summary and Conclusions}
With the two add-on experiments T3B and FastRPC, which were taking data together with the main hadron calorimeter prototypes, CALICE has extended its program to a spatially resolved study of the time structure of hadronic showers. The comparison of the observed time structure with plastic scintillator and with RPC readout in calorimeters with tungsten absorbers shows a substantially reduced sensitivity of the gaseous detectors to the intermediate time component from approximately 10 ns to 50 ns. This is consistent with the expectation that MeV-scale neutrons contribute strongly in this region, which are efficiently detected with plastic scintillator due to its high hydrogen content. This also results in differences in the radial timing profile, which is more dominated by prompt components in calorimeters with RPC readout. The comparison of the time structure observed with scintillators in a steel calorimeter with that observed with tungsten absorbers shows substantially increased late activity in the case of tungsten. While the time distributions in steel are generally well modelled by the standard QGSP\_BERT physics model in GEANT4, the reproduction of the tungsten results requires physics lists with high precision neutron treatment, such as QGSP\_BERT\_HP, further demonstrating the increased influence of low-energy neutrons in calorimeters with heavy absorber material.
\section{Introduction}
\subsection{General remarks}
The history of physics is full of situations in which experimental observations lead to deep mathematical results. The discovery of Yang-Mills (Y-M) fields in 1954 [1] falls outside this trend. Furthermore, if one believes that the theory of these fields makes sense, they should never be directly observed. To make sure that these fields do exist, it is necessary to resort to all kinds of indirect methods to probe them. Physically, the rationale for the Y-M fields is explained already in the original Yang and Mills paper [1]. Mathematically, such a field is easy to understand: it is a non-Abelian extension of Maxwell's theory of electromagnetism. In 1956 Utiyama [2] demonstrated that gravity, Y-M theory and electromagnetism can all be obtained from the general principle of local gauge invariance of the underlying Lagrangian. The explicit form of the Lagrangian is then fixed by assumptions about its symmetry. For instance, requiring invariance of such a Lagrangian with respect to the Abelian U(1) group yields the functional for the Maxwell field, while doing the same with the Lorentz group recovers the Einstein-Hilbert functional for the gravitational field. By employing the non-Abelian SU(2) gauge group the original Y-M result [1] is recovered.
Only Maxwell's electromagnetic field is reasonably well understood at both the classical and the quantum level. Due to their nonlinearity, the Y-M fields are much harder to study even at the semiclassical level. In particular, no classical solutions, e.g. solitons (or lumps), with finite action are known in Minkowski space-time. This result was proven by many authors, e.g. see [3-4] and references therein. The situation changes dramatically in Euclidean space, where the self-duality constraint allows one to obtain meaningful classical solutions [5,6]. These are helpful for the development of the theory of quantum Y-M fields. Such solutions are also useful in fields other than quantum chromodynamics (QCD), since the self-duality equations are believed to be at the heart of all exactly integrable systems [7]. Although the self-duality equations originate from the study of the Y-M functional, not all solutions [6] of these equations are relevant to QCD. In this paper we discuss the rationale behind the selection procedure. In QCD, solutions of the self-duality equations, known as \textsl{instantons}, describe tunneling between different QCD vacua [8]. It should be noted, though, that the treatment of instantons in the mathematics [9-11] and physics [8] literature is different. This fact is important, since one of the major tasks of nonperturbative QCD lies in developing a mathematically correct and physically meaningful description of these vacua. According to
a point of view existing in the physics literature, QCD has a countable infinity of topologically different vacua. Supposedly, the Faddeev-Skyrme (F-S) model is designed for the description of these vacua. If this model can be used for this purpose, then each vacuum state is expected to be associated with a particular knot (or link) configuration. Under these conditions the instantons are believed to be well-localized objects interpolating between different knotted/linked vacuum configurations [12-16]. These configurations upon quantization are expected to possess a tower of excited states. Whether such a tower has a gap in its spectrum or the spectrum is gapless is the essence of the millennium prize
problem\footnote{E.g. see http://www.claymath.org/millennium/Yang-Mills\_Theory/}. Originally, the
above results were obtained and discussed only for SU(2) gauge fields [17].
They were extended to the SU(N) case, $N\geq 2$, only quite recently [18]\footnote{In this work, in accord with experimental evidence, we demonstrate that $N\leq 3$.}. Although such a description of QCD vacua is in accord with general principles of instanton calculations [8], it is in \textsl{formal} disagreement with results known in mathematics [9-11]. Indeed, it is well
known that the complement of a particular knot in $S^{3}$ is a 3-manifold. Since instantons ``live'' in $\mathbf{R}^{4}$ (or any Riemannian 4-manifold allowing an anti-self-dual decomposition of the Y-M field, e.g. see Ref.[9], pages 38-39\footnote{In the physics literature, both anti-self-dual and self-dual instantons are allowed to exist, e.g. see Ref.[19], page 481.}), this means that all knots in $\mathbf{R}^{4}$ (or $S^{4}$) are trivial and one should talk about knotted spheres instead of knotted rings [20]. This known topological fact is in
apparent contradiction with the results of [13-15]. In this work we shall provide evidence that such a contradiction is only apparent and that, indeed, knotted configurations in $S^{3}$ are consistent with the notion of instantons as formulated in mathematics. This is achieved by using results by Floer [21]. It should be noted, though, that the ``proofs'' [22-24] of the existence of the mass gap in pure Y-M theory known to us, done at the physical level of rigor, ignore instanton effects altogether. Among these papers only Ref.[22] uses the F-S SU(2) model for such mass gap calculations. It should also be noted that the results of such a calculation depend sensitively upon the way the F-S model is quantized. For instance, in the work by Faddeev and Niemi [25], done for the SU(2) gauge group, the quantization produces a gapless spectrum. To fix the problem the same authors suggested extending the original model in an ad hoc fashion. Other authors, e.g. see Ref.[26], proposed a different ad hoc solution of the same problem.
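For the reader's convenience, the (anti-)self-duality equations referred to repeatedly above take the standard textbook form (quoted here for reference; this display is not taken from the cited works):

```latex
\begin{equation}
F_{\mu\nu} = \pm\tfrac{1}{2}\,\varepsilon_{\mu\nu\rho\sigma}F^{\rho\sigma},
\qquad
F_{\mu\nu} = \partial_{\mu}A_{\nu} - \partial_{\nu}A_{\mu} + [A_{\mu},A_{\nu}],
\end{equation}
```

with the plus sign corresponding to self-dual and the minus sign to anti-self-dual configurations in Euclidean signature.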
The above results are formally destroyed by the effects of gravity. Indeed, in 1988 Bartnik and McKinnon numerically demonstrated [27] that the combined Y-M and gravity fields lead to stable particle-like (solitonic) solutions, while neither source-free gravity nor pure Y-M fields are capable of producing such solutions\footnote{More accurately, neither pure Y-M fields nor pure gravity have nontrivial static globally regular (i.e. nonsingular, asymptotically flat) solitons.}. Such a situation has interesting cosmological ramifications\footnote{E.g. Einstein-Y-M hairy black holes.} [28], causing the disappearance of singularities in spacetime as shown by Smoller et al. [29]. In this work we do not discuss implications of these remarkable results. Instead, in the spirit of Floer's ideas [21], we argue that even without taking these results into account, the effects of gravity on processes of high energy physics are quite substantial.
\subsection{Statements of the problems to be solved}
In this paper several problems are posed and solved. In particular, we would like to investigate the physics and mathematics behind the gravity-Y-M correspondence discovered by Louis Witten [30] for SU(2) gauge fields. Is this correspondence accidental? If it is not accidental, how should it be related to the commonly shared opinion that the Standard Model (SM) of particle physics does not account for gravity? Can this correspondence be extended to other gauge fields, e.g. SU(N), $N>2$? If the answer is ``yes'', will such a correspondence be valid for all N's or just for a few? In the latter case, what does such a restriction mean physically? How does the noticed correspondence help to solve the gap problem? What role does the F-S model play in this solution? Is this model instrumental in solving the gap problem, or are there other aspects of this problem which the F-S model is unable to account for? How does this correspondence affect known string-theoretic and loop quantum gravity (LQG) results? What place do topology-changing (scattering) processes occupy in this correspondence? Is there any relevance of the results of this work to searches for the Higgs boson?
\subsection{Organization of the rest of the paper and summary of obtained
results}
Sections 2, 3 and 6 and Appendix A are devoted to a detailed investigation of the gravity-Y-M correspondence. Section 4 is devoted to a physics-style exposition of the works by Andreas Floer [11, 21] on Y-M theory, with the purpose of connecting his mathematical formalism for Y-M fields with the F-S model. In the same section we also consider the monopole and instanton solutions of the Y-M fields and their meaning and place within Floer's theory. Our exposition is based on the results of Sections 2 and 3. Section 5 is entirely devoted to the solution of the gap problem for pure Y-M fields. Although the solution depends on the results of the previous sections, numerous additional facts from statistical mechanics and nuclear physics are also used. In Section 6 we discuss various implications/corollaries of the obtained results, especially for the SM of particle physics. In Section 7 we discuss possible directions for further research based on the results presented in this paper. These include (but are not limited to): connections with LQG, the role and place of the Higgs boson, and the relationship between real space-time scattering processes of high energy physics and the processes of topology change associated with such scattering. Based on the results of this paper, we argue that this task can be accomplished with the help of the formalism developed by G. Perelman for his proof of the Poincar\'e and geometrization conjectures.
The major new results of this paper are summarized as follows.
1. In subsection 5.4.4, while solving the gap problem, we reproduced, by employing entirely different methods, the main results of the paper by Korotkin and Nicolai [31] on quantizing dimensionally reduced gravity. From these results it follows that for gravity and Y-M fields possessing the same symmetry the nonperturbative quantization proceeds essentially in the same way.
2. In subsection 6.3 we demonstrated that the gravity-Y-M correspondence discovered by L. Witten for the gauge group SU(2) can be extended \textsl{only} to the SU(3) gauge group. This group contains the SU(2)$\times$U(1) group as a subgroup. This fact allowed us to come up with the anticipated (but never proven!) conclusion about the symmetry of the SM. It is given by SU(3)$\times$SU(2)$\times$U(1). The obtained result is very rigid. It is deeply rooted in the not widely known/appreciated (discussed in Appendix A) properties of the gravitational field. It is these properties which ultimately determine the conditions of the gravity-Y-M correspondence.
3. The recent papers, Refs.[32-34], are aimed at reproducing the
classification scheme of particles and fields of the SM within the framework
of the LQG formalism. These results match perfectly with the results of our
paper because of the gravity-Y-M correspondence noticed and developed here. In
view of this correspondence, the results of Refs.[32-34] can be reproduced
with the help of the \textsl{minimal} gravity model described in subsections 3.2,
3.4, and 7.2. This minimal model has a differential-geometric/topological
meaning in terms of the dynamics of the extended Ricci flow [35,36]. Such a
flow is the minimal extension of the Ricci flow, now famous because of its
relevance to the proofs of the Poincar\'{e} and geometrization
conjectures.
4. The formalism developed in this paper explains why, using pure gravity, one
can talk about the particle/field content of the SM. Not only is this
compatible with the LQG results just mentioned, but also with those coming from
noncommutative geometry [37], where it is demonstrated that the use of pure
gravity (that is, the "minimal model") combined with a 0-dimensional internal
space is sufficient for a description of the SM.
\section{Emergence of the Ernst equation in pure gravity and Y-M fields}
\subsection{Some facts about the Ernst equation}
The study of static vacuum Einstein fields was initiated by Weyl in 1917. The
considerable progress made in later years is documented in Ref.[38]. To
develop the formalism of this paper we need to discuss some facts about these
static fields. Following Wald [39], a spacetime is considered to be \textsl{stationary} if there is a one-parameter group of isometries $\sigma _{t}$ whose orbits are timelike curves, e.g. see [40]. With such a group of
isometries one associates a timelike Killing vector field $\xi ^{i}.$ Furthermore,
a spacetime is \textsl{axisymmetric} if there exists a one-parameter group
of isometries $\chi _{\phi }$ whose orbits are closed spacelike
curves. Thus, the spacelike Killing vector field $\psi ^{i}$ has integral
curves which are closed. The spacetime is \textsl{stationary and
axisymmetric} if it possesses both of these symmetries, provided that
$\sigma _{t}\circ \chi _{\phi }=\chi _{\phi }\circ \sigma _{t}.$ If $\xi =(\frac{\partial }{\partial t})$
and $\psi =(\frac{\partial }{\partial \phi })$ so that $[\xi ,\psi ]=0,$ one can choose coordinates as follows: $x^{0}=t,x^{1}=\phi
,x^{2}=\rho ,x^{3}=z.$ Under such an identification, the metric tensor $g_{\mu
\nu }$ becomes a function of $x^{2}$ and $x^{3}$ only. Explicitly,
\begin{equation}
ds^{2}=-V(dt-wd\phi )^{2}+V^{-1}[\rho ^{2}d\phi ^{2}+e^{2\gamma }(d\rho
^{2}+dz^{2})], \tag{2.1}
\end{equation}
where the functions $V$, $w$ and $\gamma $ depend on $\rho $ and $z$ only. In
the case when $V=1,w=\gamma =0,$ the metric can be presented as
$ds^{2}=-\left( dt\right) ^{2}+\left( d\tilde{s}\right) ^{2}$, where
\begin{equation}
\left( d\tilde{s}\right) ^{2}=\rho ^{2}d\phi ^{2}+d\rho ^{2}+dz^{2}
\tag{2.2}
\end{equation}
is the standard flat 3-dimensional metric written in cylindrical
coordinates. The four-dimensional set of vacuum Einstein equations $R_{ij}=0$,
written with the help of the metric given by Eq.(2.2), acquires the following form
\begin{equation}
\mathbf{\nabla \cdot \{}V^{-1}\mathbf{\nabla }V+\rho ^{-2}V^{2}w\mathbf{\nabla }w\}=0 \tag{2.3a}
\end{equation}
and
\begin{equation}
\mathbf{\nabla \cdot \{}\rho ^{-2}V^{2}\mathbf{\nabla }w\}=0. \tag{2.3b}
\end{equation}
In these equations $\mathbf{\nabla \cdot }$ and $\mathbf{\nabla }$ are the
three-dimensional flat (that is, with the metric given by Eq.(2.2)) divergence
and gradient operators, respectively. In addition to these two equations,
there are another two needed for the determination of the factor $\gamma $ in the
metric, Eq.(2.1). They require knowledge of $V$ and $w$ as an input.
The solution of Eq.s(2.3) is described in great detail in the paper by Reina
and Treves [41], with the final result:
\begin{equation}
\left( \func{Re}\epsilon \right) \nabla ^{2}\epsilon =\mathbf{\nabla }\epsilon \cdot \mathbf{\nabla }\epsilon . \tag{2.4}
\end{equation}
This equation is known in the literature as the Ernst equation. The complex
potential $\epsilon $ is defined by $\epsilon =V+i\omega $, with $V$
defined as above and $\omega $ being an auxiliary potential whose explicit
form we do not need in this work. As recognized by Ernst [42,43], such
an equation can also be used for the description of the combined
Einstein-Maxwell fields. We shall exploit this fact in Section 6. \textsl{In Appendix A and in Section 6 we provide proofs that knowledge of
static vacuum solutions of the Ernst equation is necessary and
sufficient} for the restoration of static Einstein-Maxwell fields.\footnote{Surprisingly, upon changes of variables in these static solutions, exact
results for propagating gravitational waves can be obtained as well.}
Fields other than Y-M should also be restorable.\footnote{This is so because each of these fields is a source of the gravitational field
which, in turn, can be eliminated locally. See Appendix A.} To proceed, we
need to list several properties of the Ernst equation to be used below.
First, following [41] and using prolate spheroidal coordinates, the Ernst
equation reproduces the Schwarzschild metric, while with other choices of
coordinates it reproduces the Kerr and Taub-NUT metrics. Thus, the Ernst
equation is the most general equation describing physically interesting
vacuum spacetimes compatible with the Cauchy formulation of general
relativity [39,40,44,45]. Such a formulation is a convenient starting point
for the quantization of the gravitational field via the superspace formalism [39],
leading to the Wheeler-DeWitt equation, etc. \textsl{Since in this work we
advocate a different approach to the quantization of gravity, this topic is not
discussed further.} Second, following Ref.[38], page 283, a \textsl{stationary} solution of Einstein's field equations is called \textsl{static}
if the timelike Killing vector is orthogonal to the Cauchy surface. In such
a case, from Table 18.1 of the same reference it follows that the
Ernst potential $\epsilon $ is real. This observation allows us to simplify
Eq.(2.4) considerably. For the sake of notational comparison with Ref.[38]
we redefine the potential as $\epsilon =V+i\Phi .$ In the static case we have
$\epsilon \equiv -F\equiv -e^{2u}.$\footnote{The minus sign in front of $F$ is written in accord with the conventions of
Chapter 30.2 of the 1st edition of Ref.[38].} Using this result in
Eq.(2.4) produces
\begin{equation}
\Delta _{\rho ,z}u=0, \tag{2.5}
\end{equation}
where $\Delta _{\rho ,z}$ is the flat Laplacian written in the cylindrical
coordinates defined by the metric, Eq.(2.2).
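As a quick consistency check, the passage from Eq.(2.4) to Eq.(2.5) can be verified symbolically. The following sketch (written in Python with SymPy; an illustration added here, not part of Ernst's derivation) substitutes the real static potential $\epsilon =-e^{2u}$ into Eq.(2.4) in cylindrical coordinates:

```python
import sympy as sp

rho, z = sp.symbols('rho z', positive=True)
u = sp.Function('u')(rho, z)

# real static Ernst potential: epsilon = -F = -e^{2u}
eps = -sp.exp(2*u)

# flat axisymmetric Laplacian and gradient dot product in (rho, z)
lap = lambda f: sp.diff(f, rho, 2) + sp.diff(f, rho)/rho + sp.diff(f, z, 2)
dot = lambda f, g: sp.diff(f, rho)*sp.diff(g, rho) + sp.diff(f, z)*sp.diff(g, z)

# Eq.(2.4): (Re eps) lap(eps) = grad(eps).grad(eps); form the residual
residual = sp.expand(eps*lap(eps) - dot(eps, eps))

# the residual equals 2 e^{4u} times the flat Laplace equation for u, Eq.(2.5)
assert sp.simplify(residual - 2*sp.exp(4*u)*lap(u)) == 0
print("Eq.(2.4) with real epsilon = -e^{2u} reduces to Delta u = 0")
```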
\subsection{Isomorphism between the SU(2) self-dual gauge and vacuum Einstein field equations}
This isomorphism was discovered by Louis Witten in 1979 [30]. His work was
inspired by earlier works of Ernst [42] and Yang [46]. To our knowledge,
since the time Ref.[30] was published, this isomorphism has been left
undeveloped. In this paper we correct this omission in order to demonstrate
that when both fields are mathematically indistinguishable, their
quantization should proceed in the same way. A result analogous to that
discovered by Witten was obtained, using different arguments, a year later by
Forgacs, Horvath and Palla [47] and, in a simpler form, by Singleton [48].
These authors relied essentially on the paper by Manton, Ref.[49], in which it
was cleverly demonstrated that the 't Hooft-Polyakov monopole can be
obtained \textsl{without} actual use of the auxiliary Higgs field. Both
Refs.[47,48] and the original paper by Witten [30] make essential use of
the axial symmetry of either gravitational or Y-M fields. Only in this case can it
be shown that the axisymmetric version of the self-duality equations obtained
by Manton can be rewritten in the form of the Ernst equation. In the light
of the above information, following Ref.[5], we shall briefly discuss the
contributions of Yang and Witten. For this purpose, we need to consider
first the following auxiliary system of \textsl{linear} equations
\begin{equation}
\mathbf{\Psi }_{x}=\mathbf{X\Psi ;\Psi }_{t}=\mathbf{T\Psi }. \tag{2.6}
\end{equation}
Here $\mathbf{\Psi }_{x}=\frac{\partial }{\partial x}\mathbf{\Psi }$ and
$\mathbf{\Psi }_{t}=\frac{\partial }{\partial t}\mathbf{\Psi }.$ In this
system $\mathbf{X}$ and $\mathbf{T}$ are square matrices of the same
dimension and such that
\begin{equation}
\mathbf{X}_{t}-\mathbf{T}_{x}+[\mathbf{X},\mathbf{T}]=0. \tag{2.7}
\end{equation}
This result easily follows from the compatibility condition $\mathbf{\Psi }_{xt}=\mathbf{\Psi }_{tx}$. The matrices $\mathbf{X}$ and $\mathbf{T}$ can be
realized as
\begin{equation}
\mathbf{X}=\left(
\begin{array}{cc}
-i\zeta & q(x,t) \\
r(x,t) & i\zeta
\end{array}
\right) \text{ , \ }\mathbf{T}=\left(
\begin{array}{cc}
A & B \\
C & -A
\end{array}
\right) \tag{2.8}
\end{equation}
with $\zeta $ being a spectral parameter and $A,B$ and $C$ being some
Laurent polynomials in $\zeta .$ The above system can be extended to four
variables $x_{1},x_{2},t_{1},t_{2}$ in a simple-minded fashion as follows:
\begin{equation}
(\frac{\partial }{\partial x_{1}}+\zeta \frac{\partial }{\partial x_{2}})\mathbf{\Psi }=(\mathbf{X}_{1}+i\mathbf{X}_{2})\mathbf{\Psi }, \tag{2.9a}
\end{equation}
\begin{equation}
(\frac{\partial }{\partial t_{1}}+\zeta \frac{\partial }{\partial t_{2}})\mathbf{\Psi }=(\mathbf{T}_{1}+i\mathbf{T}_{2})\mathbf{\Psi }. \tag{2.9b}
\end{equation}
In the most general case, the matrices $\mathbf{X}_{1},\mathbf{X}_{2},\mathbf{T}_{1},\mathbf{T}_{2}$ are made of functions which "live" in
$\mathbf{C}^{4}.$ They are representatives of the Lie algebra $sl(n,\mathbf{C})$ of $n\times n$ trace-free matrices. The compatibility conditions for
this case are equivalent to the self-duality condition for the Y-M fields
associated with the algebra $sl(n,\mathbf{C}).$ It is instructive to illustrate
these general statements explicitly.
In $\mathbf{R}^{4}$ the (anti)self-duality condition for the Y-M curvature
reads $\ast F=\pm F$, so that for the self-dual case we obtain:
\begin{equation}
F_{01}=F_{23},F_{02}=F_{31},F_{03}=F_{12}. \tag{2.10}
\end{equation}
In the "light cone" coordinates $\sigma =\frac{1}{\sqrt{2}}(x_{1}+ix_{2}),\tau =\frac{1}{\sqrt{2}}(x_{0}+ix_{3})$ the Y-M field
one-form can be written as $A_{\mu }dx^{\mu }=A_{\sigma }d\sigma +A_{\tau
}d\tau +A_{\bar{\sigma}}d\bar{\sigma}+A_{\bar{\tau}}d\bar{\tau}$, with the
overbar labeling the complex conjugation. In such notations $A_{0}=\frac{1}{\sqrt{2}}(A_{\tau }+A_{\bar{\tau}}),$ $A_{1}=\frac{1}{\sqrt{2}}(A_{\sigma
}+A_{\bar{\sigma}}),$ $A_{2}=\frac{1}{\sqrt{2}}(A_{\sigma }-A_{\bar{\sigma}}),$ $A_{3}=\frac{1}{\sqrt{2}}(A_{\tau }-A_{\bar{\tau}}).$ In these
notations Eq.s(2.10) acquire the following form
\begin{equation}
F_{\sigma \tau }=0,\text{ }F_{\bar{\sigma}\bar{\tau}}=0\text{ and }F_{\sigma \bar{\sigma}}+F_{\tau \bar{\tau}}=0. \tag{2.11}
\end{equation}
They can be obtained as the compatibility condition for the isospectral linear
problem
\begin{equation}
(\partial _{\sigma }+\zeta \partial _{\bar{\tau}})\mathbf{\Psi }=(A_{\sigma }+\zeta A_{\bar{\tau}})\mathbf{\Psi }\text{ and }(\partial _{\tau }-\zeta \partial _{\bar{\sigma}})\mathbf{\Psi }=(A_{\tau }-\zeta A_{\bar{\sigma}})\mathbf{\Psi }, \tag{2.12}
\end{equation}
where the spectral parameter is $\zeta $ and $\mathbf{\Psi }$ is the local
section of the Y-M fiber bundle. The compatibility condition is the
requirement that the two operators in Eq.(2.12) commute on $\mathbf{\Psi }$:
$[\partial _{\sigma }+\zeta \partial _{\bar{\tau}}-A_{\sigma }-\zeta A_{\bar{\tau}},\partial _{\tau }-\zeta \partial _{\bar{\sigma}}-A_{\tau }+\zeta A_{\bar{\sigma}}]\mathbf{\Psi }=0,$ thus leading to
\begin{equation}
\lbrack F_{\sigma \tau }-\zeta (F_{\sigma \bar{\sigma}}+F_{\tau \bar{\tau}})+\zeta ^{2}F_{\bar{\sigma}\bar{\tau}}]\mathbf{\Psi }=0. \tag{2.13}
\end{equation}
This equation allows us to recover Eq.s(2.11). The first two equations of
Eq.s(2.11) can be used in order to represent the $A$-fields as follows:
$A_{\sigma }=\left( \partial _{\sigma }C\right) C^{-1},$ $A_{\tau }=\left( \partial _{\tau }C\right) C^{-1},$ $A_{\bar{\sigma}}=\left( \partial _{\bar{\sigma}}D\right) D^{-1}$ and $A_{\bar{\tau}}=\left( \partial _{\bar{\tau}}D\right) D^{-1},$ where both $C$ and $D$ are some matrices in the
Lie group $G$, e.g. $G=SU(2)$. By introducing the matrix $M=C^{-1}D\in G$,
the last of the equations in Eq.(2.11) becomes
\begin{equation}
\partial _{\bar{\sigma}}(M^{-1}\partial _{\sigma }M)+\partial _{\bar{\tau}}(M^{-1}\partial _{\tau }M)=0. \tag{2.14a}
\end{equation}
Thus, the self-duality conditions for the Y-M fields are equivalent to
Eq.(2.14a). For future use, following Yang [46], we notice that in this
formalism the gauge transformations for the Y-M fields are expressed through
$D\rightarrow DE$ and $C\rightarrow CE$, so that $F_{\sigma \bar{\sigma}}\rightarrow E^{-1}F_{\sigma \bar{\sigma}}E$ and $F_{\tau \bar{\tau}}\rightarrow E^{-1}F_{\tau \bar{\tau}}E$, with the matrix $E=E(\sigma ,\bar{\sigma},\tau ,\bar{\tau})\in SU(2)$ leaving the self-duality Eq.s(2.10) (or
(2.13)) unchanged.
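The chain of identifications Eq.(2.11)-(2.14a) can be verified symbolically. In the SymPy sketch below (an illustration of ours: the polynomial entries chosen for $C$ and $D$ are arbitrary test data, and the curvature sign convention is fixed so that the pure-gauge fields $A_{\sigma }=(\partial _{\sigma }C)C^{-1}$, $A_{\tau }=(\partial _{\tau }C)C^{-1}$ give $F_{\sigma \tau }=0$ identically), the remaining self-duality condition is shown to be conjugate to the left-hand side of Eq.(2.14a):

```python
import sympy as sp

# sigma, sigma-bar, tau, tau-bar treated as independent (Wirtinger) variables
s, sb, t, tb = sp.symbols('s sb t tb')

# arbitrary invertible test matrices C, D in place of generic group elements
C = sp.Matrix([[1 + s*t, sb], [tb, 2 + s*sb]])
D = sp.Matrix([[2 + t*tb, s], [sb*t, 1 + s*t]])

def F(Amu, Anu, xmu, xnu):
    # curvature F_{mu nu} = d_mu A_nu - d_nu A_mu - [A_mu, A_nu]; with this
    # sign the pure-gauge fields A = (dC) C^{-1} give F_{sigma tau} = 0
    return sp.diff(Anu, xmu) - sp.diff(Amu, xnu) - (Amu*Anu - Anu*Amu)

A_s, A_t = sp.diff(C, s)*C.inv(), sp.diff(C, t)*C.inv()      # unbarred sector
A_sb, A_tb = sp.diff(D, sb)*D.inv(), sp.diff(D, tb)*D.inv()  # barred sector

# first two self-duality conditions of Eq.(2.11) hold identically
assert F(A_s, A_t, s, t).applyfunc(sp.cancel) == sp.zeros(2, 2)

# the remaining condition is conjugate to the Yang (Ernst-type) Eq.(2.14a)
M = C.inv()*D
yang = sp.diff(M.inv()*sp.diff(M, s), sb) + sp.diff(M.inv()*sp.diff(M, t), tb)
lhs = F(A_s, A_sb, s, sb) + F(A_t, A_tb, t, tb)
assert (lhs - D*yang*D.inv()).applyfunc(sp.cancel) == sp.zeros(2, 2)
print("self-duality conditions are equivalent to Eq.(2.14a)")
```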
To connect Eq.(2.14a) with the Ernst equation, following L. Witten [30], it is
sufficient to assume that the matrix $M$ is a function of $\rho =\sqrt{x_{1}^{2}+x_{2}^{2}}$ and $z=x_{3}.$ In such a case it is useful to remember
that $\rho ^{2}=2\sigma \bar{\sigma}$ and $z=\frac{i}{\sqrt{2}}(\tau -\bar{\tau})$. With the help of these facts, Eq.(2.14a) can be rewritten as
\begin{equation}
\partial _{\rho }(\rho M^{-1}\partial _{\rho }M)+\rho \partial
_{z}(M^{-1}\partial _{z}M)=0. \tag{2.14b}
\end{equation}
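The passage from Eq.(2.14a) to Eq.(2.14b) is a chain-rule computation, which can be checked symbolically as well. In the SymPy sketch below (the concrete entries of $M$ are an arbitrary test choice of ours) the two sides are compared after expressing $\rho $ and $z$ through $\sigma ,\bar{\sigma},\tau ,\bar{\tau}$:

```python
import sympy as sp

s, sb = sp.symbols('s sb', positive=True)   # sigma, sigma-bar as independent variables
t, tb = sp.symbols('t tb')                  # tau, tau-bar
rho_, z_ = sp.symbols('rho_ z_')

# an arbitrary smooth test matrix M(rho, z) of the type used below
def Mat(rho, z):
    V, Phi = rho**2 + z + 2, rho**2*z
    return sp.Matrix([[1, Phi], [Phi, Phi**2 + V**2]]) / V

# left-hand side of Eq.(2.14a), with rho^2 = 2 sigma sigma-bar and
# z = i (tau - tau-bar)/sqrt(2)
rho = sp.sqrt(2*s*sb)
z = sp.I*(t - tb)/sp.sqrt(2)
M = Mat(rho, z)
lhs = sp.diff(M.inv()*sp.diff(M, s), sb) + sp.diff(M.inv()*sp.diff(M, t), tb)

# Eq.(2.14b) divided by 2 rho, written in (rho, z) and then substituted
N = Mat(rho_, z_)
rhs = (sp.diff(rho_*N.inv()*sp.diff(N, rho_), rho_)
       + rho_*sp.diff(N.inv()*sp.diff(N, z_), z_))/(2*rho_)
rhs = rhs.subs({rho_: rho, z_: z})

assert sp.simplify(lhs - rhs) == sp.zeros(2, 2)
print("Eq.(2.14a) reduces to Eq.(2.14b) for the axisymmetric ansatz")
```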
By assuming that the matrix $M$ is representable by an $SL(2,R)$-type
matrix, and writing it in the form
\begin{equation}
M=\frac{1}{V}\left(
\begin{array}{cc}
1 & \Phi \\
\Phi & \Phi ^{2}+V^{2}
\end{array}
\right) , \tag{2.15}
\end{equation}
Eq.(2.14b) is reduced to the pair of equations
\begin{equation*}
V\nabla ^{2}V=\nabla V\cdot \nabla V-\nabla \Phi \cdot \nabla \Phi \text{ and }V\nabla ^{2}\Phi =2\nabla V\cdot \nabla \Phi .
\end{equation*}
With the help of the Ernst potential $\epsilon =V+i\Phi $, these two equations
can be brought to the canonical form of the Ernst equation, Eq.(2.4). Below,
we shall provide sufficient evidence that such a reduction of the Ernst
equation is compatible with the analogous reduction in the instanton/monopole
calculations for the Y-M fields.
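This reduction can be verified symbolically: in the SymPy sketch below (an illustration added here, not part of the original argument) the ansatz Eq.(2.15) is substituted into Eq.(2.14b), and the resulting matrix expression is shown to vanish precisely on solutions of the pair of equations displayed above:

```python
import sympy as sp

rho, z = sp.symbols('rho z', positive=True)
V = sp.Function('V')(rho, z)
Phi = sp.Function('Phi')(rho, z)

M = sp.Matrix([[1, Phi], [Phi, Phi**2 + V**2]]) / V   # the ansatz, Eq.(2.15)

# left-hand side of Eq.(2.14b)
E = sp.diff(rho*M.inv()*sp.diff(M, rho), rho) + rho*sp.diff(M.inv()*sp.diff(M, z), z)

# the two scalar equations obtained from the substitution
lap = lambda f: sp.diff(f, rho, 2) + sp.diff(f, rho)/rho + sp.diff(f, z, 2)
dot = lambda f, g: sp.diff(f, rho)*sp.diff(g, rho) + sp.diff(f, z)*sp.diff(g, z)
eq1 = V*lap(V) - (dot(V, V) - dot(Phi, Phi))
eq2 = V*lap(Phi) - 2*dot(V, Phi)

# impose eq1 = eq2 = 0 by eliminating the second z-derivatives; E must vanish
vzz = sp.solve(eq1, V.diff(z, 2))[0]
pzz = sp.solve(eq2, Phi.diff(z, 2))[0]
assert sp.simplify(E.subs({V.diff(z, 2): vzz, Phi.diff(z, 2): pzz})) == sp.zeros(2, 2)
print("Eq.(2.14b) with the ansatz Eq.(2.15) is equivalent to the Ernst pair")
```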
\section{From analysis to synthesis}
\subsection{General remarks}
The results of the previous section demonstrate that for axially symmetric
fields both pure gravity and pure self-dual Y-M fields are described by the
same (Ernst) equation. In this section we reformulate these results in terms
of the nonlinear sigma model, with the purpose of using such a reformulation
later in the text. To do so we need to recall some results from our recent
works, Ref.s[50,51]. In particular, we notice that under the conformal
transformation $\hat{g}=e^{2u}g$ in $d$ dimensions the curvature scalar
$R(g)$ changes as follows:
\begin{equation}
\hat{R}(\hat{g})=e^{-2u}\{R(g)-2(d-1)\Delta _{g}u-(d-1)(d-2)\left\vert \bigtriangledown _{g}u\right\vert ^{2}\}. \tag{3.1}
\end{equation}
Since this equation is Eq.(2.11) of our Ref.[50], we shall be interested
only in transformations for which $\hat{R}(\hat{g})$ is a constant. This is
possible only if the total volume of the system is conserved. Under this
constraint we need to consider Eq.(3.1) for $d=3$ in more detail. Without
loss of generality we can assume that initially $R(g)=0$. For this case we
shall write $g=g_{0}$, so that Eq.(3.1) acquires the form
\begin{equation}
\hat{R}(\hat{g})=-2e^{-2u}[2\Delta _{g_{0}}u+\left\vert \bigtriangledown _{g_{0}}u\right\vert ^{2}] \tag{3.2}
\end{equation}
in which $\Delta _{g_{0}}$ is the flat-space Laplacian. Now we can formally
identify it with that in Eq.(2.5). Accordingly, we shall be interested in
such conformal transformations for which $\Delta _{g_{0}}u=0$ in Eq.(3.2).
If they exist, Eq.(3.2) can be rewritten as
\begin{equation}
e^{2u}\hat{R}(\hat{g})=-2\left( \vec{\bigtriangledown}_{g_{0}}u\right) \cdot \left( \vec{\bigtriangledown}_{g_{0}}u\right) . \tag{3.3}
\end{equation}
This allows us to interpret Eq.(3.3) and
\begin{equation}
\Delta _{g_{0}}u=0 \tag{3.4}
\end{equation}
as interdependent equations: solutions of Eq.(3.4) determine the scalar
curvature $\hat{R}(\hat{g})$ in Eq.(3.3). Clearly, under the conditions at
which these results were obtained, only those solutions of Eq.(3.4) should be
used which yield a constant scalar curvature $\hat{R}(\hat{g}).$
Eq.(3.3) also contains information about the Ricci tensor. To recover this
information we notice that $\hat{g}_{ij}=-e^{2u}\delta _{ij}$. Therefore we
obtain:
\begin{equation}
\hat{R}_{ij}(\hat{g})=2\nabla _{i}u\nabla _{j}u, \tag{3.5}
\end{equation}
in accord with Eq.(18.55) of Ref.[38], where this result was obtained by
employing entirely different arguments. From the same reference we find that
Eq.(3.4) comes as a result of the contracted Bianchi identities applied
to $\hat{R}_{ij}(\hat{g}).$\footnote{Ref.[38], page 283, bottom.} It is instructive to place the obtained
results into a broader context. This is accomplished in the next subsection.
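Before doing so, we note that the conformal rescaling formula Eq.(3.1), on which Eq.(3.2) rests, can be verified for $d=3$ by a direct curvature computation. The SymPy sketch below (our illustration, assuming a flat background $g_{0}=\delta _{ij}$) builds the Christoffel symbols and Ricci tensor of $e^{2u}\delta _{ij}$ and confirms Eq.(3.2):

```python
import sympy as sp

x = sp.symbols('x1 x2 x3')
u = sp.Function('u')(*x)
g = sp.exp(2*u)*sp.eye(3)          # conformally flat 3-metric e^{2u} delta_{ij}
ginv = g.inv()
n = 3

# Christoffel symbols Gamma^k_{ij}
Gamma = [[[sum(ginv[k, l]*(sp.diff(g[l, i], x[j]) + sp.diff(g[l, j], x[i])
                           - sp.diff(g[i, j], x[l]))/2 for l in range(n))
           for j in range(n)] for i in range(n)] for k in range(n)]

# Ricci tensor R_{ij} = d_k Gamma^k_{ij} - d_j Gamma^k_{ik} + quadratic terms
def ricci(i, j):
    r = sum(sp.diff(Gamma[k][i][j], x[k]) - sp.diff(Gamma[k][i][k], x[j])
            for k in range(n))
    r += sum(Gamma[k][k][l]*Gamma[l][i][j] - Gamma[k][j][l]*Gamma[l][i][k]
             for k in range(n) for l in range(n))
    return r

R = sum(ginv[i, j]*ricci(i, j) for i in range(n) for j in range(n))  # scalar curvature

lap = sum(sp.diff(u, xi, 2) for xi in x)
grad2 = sum(sp.diff(u, xi)**2 for xi in x)
# Eq.(3.2): R-hat = -2 e^{-2u} (2 lap(u) + |grad u|^2)
assert sp.simplify(R + 2*sp.exp(-2*u)*(2*lap + grad2)) == 0
print("Eq.(3.2) verified for d = 3")
```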
\subsection{Connection with the nonlinear sigma model}
Some time ago Neugebauer and Kramer (N-K), Ref.[38], obtained Eq.s(3.4) and
(3.5) using a variational principle. In a less general form this principle was
used earlier by Ernst [42], resulting in the now famous Ernst equation.
Neugebauer and Kramer proposed the Lagrangian, and the action
functional $S_{N-K}$ associated with it, producing upon minimization both Eq.s(3.4) and
(3.5). To describe these results, we also use some results by Gal'tsov [52].
The functional $S_{N-K}$ is given by
\begin{equation}
\mathcal{S}_{N-K}=\frac{1}{2}\int\limits_{M}\sqrt{\hat{g}}[\hat{R}(\hat{g})-\hat{g}^{ij}G_{AB}(\mathbf{\varphi })\partial _{i}\varphi ^{A}\partial _{j}\varphi ^{B}]d^{3}x, \tag{3.6}
\end{equation}
easily recognizable as a three-dimensional nonlinear sigma model coupled to
3-d Euclidean gravity. The number of components of the auxiliary field
$\mathbf{\varphi }$, as well as the metric tensor $G_{AB}(\mathbf{\varphi })$
of the target space, is determined by the problem in question. In our case,
upon variation of $S_{N-K}$ with respect to $\varphi ^{A}$ and $\hat{g}_{ij}$, we should be able to reobtain Eq.s(3.4) and (3.5). To do so,
following Ref.[53], we introduce the current
\begin{equation}
J_{i}=M^{-1}\partial _{i}M. \tag{3.7}
\end{equation}
In view of the results of subsection 2.2, we have to identify the
matrix $M$ with that defined by Eq.(2.15) and, taking into account
Eq.(2.14a), the index $i$ should take two values: $\sigma $ and $\tau .$
With such definitions we can replace the functional $\mathcal{S}_{N-K}$ by
\begin{equation}
\mathcal{S}=\frac{1}{2}\int\limits_{M}\sqrt{\hat{g}}[\hat{R}(\hat{g})-\hat{g}^{ik}\frac{1}{4}tr(J_{i}J_{k})]d^{3}x. \tag{3.8}
\end{equation}
The actual calculations with functionals of this type can be made
using the results of Ref.[53]. Thus, using this reference we obtain
\begin{equation}
\hat{R}_{ij}(\hat{g})=\frac{1}{4}tr(J_{i}J_{j}) \tag{3.9}
\end{equation}
and
\begin{equation}
\partial _{i}J_{i}=0. \tag{3.10}
\end{equation}
Evidently, by construction Eq.(3.10) coincides with Eq.(2.14a) and,
ultimately, with Eq.(3.4). It is also easy to check that Eq.(3.9)
does coincide with Eq.(3.5). For this purpose it is sufficient to notice
that
\begin{equation}
tr(J_{i}J_{j})=-tr(\partial _{i}M\partial _{j}M^{-1}). \tag{3.11}
\end{equation}
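The identity Eq.(3.11) follows from $\partial _{j}M^{-1}=-M^{-1}(\partial _{j}M)M^{-1}$ together with the cyclic property of the trace. A direct symbolic confirmation for a generic $2\times 2$ matrix (SymPy sketch, our illustration):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
# a generic 2x2 matrix of independent functions of two variables
M = sp.Matrix(2, 2, lambda i, j: sp.Function('m%d%d' % (i, j))(x1, x2))

# the currents of Eq.(3.7)
J = {v: M.inv()*sp.diff(M, v) for v in (x1, x2)}

# Eq.(3.11): tr(J_i J_j) = -tr(d_i M d_j M^{-1}) for all index pairs
for i in (x1, x2):
    for j in (x1, x2):
        lhs = (J[i]*J[j]).trace()
        rhs = -(sp.diff(M, i)*sp.diff(M.inv(), j)).trace()
        assert sp.simplify(lhs - rhs) == 0
print("Eq.(3.11) verified for a generic 2x2 matrix")
```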
To check the correctness of our calculations, the entries of the matrix $M$,
Eq.(2.15), can be restricted to $V$ (that is, we can put $\Phi =0).$ Since
$V=-F\equiv -e^{2u}$ (e.g. see the discussion prior to Eq.(2.5)), a simple
calculation indeed brings Eq.(3.9) back to Eq.(3.5), as required.
It is interesting and important to observe at this point that \textsl{the
equation of motion, Eq.(3.10), is formally not affected by the effects
of gravity}. This conclusion requires some explanation. From subsection 2.2,
especially from Eq.s(2.14) and (2.15), it should be clear that Eq.(3.10) is the
Ernst equation determining the gravitational field. Hence, it is physically
wrong to expect that it is going to be affected by the effects of gravity.
Eq.s(3.9) and (3.10) are the same as Eq.s(3.5) and (3.4), whose meaning was
explained in the previous subsection. Clearly, the functional, Eq.(3.8), can
be used for the coupling of \textsl{other} fields to gravity. This is indeed
demonstrated in Ref.[52], with the purpose of connecting results
for nonlinear sigma models with those for heterotic strings. We would
like to discuss this connection now, since it will be used later in the text.
\subsection{Connection with heterotic string models}
The functional $\mathcal{S}$, Eq.(3.8), is related to that of the
heterotic string model, e.g. see Ref.[54]. For such a model the sigma-model-like functional is obtained from the 10-dimensional supersymmetric
string model by means of a compactification scheme (ideologically similar to
that used in the Kaluza-Klein theory of gravity and electromagnetism) aimed
at bringing the effective dimensionality down to physically acceptable values
(e.g. 2, 3 or 4). For dimensionality $D<10$ such a compactified/reduced action
functional reads (e.g. see Ref.[54], Eq.(9.1.8)):
\begin{eqnarray}
S_{D}^{heterotic} &=&\int d^{D}x\sqrt{-\det G}e^{-2\phi }[R+4\partial ^{\mu }\phi \partial _{\mu }\phi -\frac{1}{12}\hat{H}^{\mu \nu \rho }\hat{H}_{\mu \nu \rho } \notag \\
&&-\frac{1}{4}(M^{-1})_{ij}F_{\mu \nu }^{i}F^{j\mu \nu }+\frac{1}{8}tr(\partial _{\mu }M\partial ^{\mu }M^{-1})]. \TCItag{3.12}
\end{eqnarray}
The compactification procedure is by no means unique. There are many ways to
make a compactified action look exactly like that given by Eq.(3.8) (e.g.
see [55]). Evidently, there should be a way to relate such actions
to each other, since they all have the same origin - the 10-dimensional
heterotic string action. Because of this, we would like to make some
comments on the action given by Eq.(3.12) by specializing to $D=3$, for reasons
explained in Refs.[68,69] and to be clarified below, in Section 6.
Under such conditions, if we require the dilaton $\phi $, the antisymmetric
$H$-field (associated with the string orientation) and the electromagnetic field
$F$ to vanish, the remaining action will coincide with that given by
Eq.(3.8). Because of this, the following steps can be made.
First, as explained in our work, Ref.[50], for closed 3-manifolds we
can/should drop the dilaton field $\phi $. Second, by properly selecting the
string model we can ignore the antisymmetric field $H$. Third, by taking
into account the results of Appendix A, we can also drop the electromagnetic
field, since it can always be restored from pure gravity. Thus, we end up
with the action functional $\mathcal{S}$, Eq.(3.8), which we shall call
"\textsl{minimal}". In Section 6 we shall provide evidence that its
minimality is deeply rooted in the gravity-Y-M correspondence, which does not
leave much room for the "improvements" abundant in the physics literature. We shall
begin explaining this fact immediately below and will complete our arguments in
Section 6.
\subsection{The extended Ricci flow}
Thus far, the use of the variational principle apparently has not brought us any
new results (at least at the classical level). The situation changes in the
light of the recent work by List [35]. Following Ref.s[35,36], it is
convenient to introduce the Perelman-like entropy functional $\mathcal{F}(\hat{g}_{ij},u,f)$:
\begin{equation}
\mathcal{F}(\hat{g}_{ij},u,f)=\int\limits_{M}(\hat{R}(\hat{g})-2\left\vert \nabla _{\hat{g}}u\right\vert ^{2}+\left\vert \nabla _{\hat{g}}f\right\vert ^{2})e^{-f}dv \tag{3.13}
\end{equation}
coinciding with Eq.(7.22b) of our work, Ref.[50], when $u=0$.\footnote{It should be noted that there is an obvious typographical error in
Eq.(7.22b): the term $\left\vert \nabla _{h}f\right\vert ^{2}$ is typed as
$\left\vert \nabla _{h}f\right\vert .$} Because of this observation, if
we formally make the replacement $\mathcal{R}(\hat{g};u)=\hat{R}(\hat{g})-2\left\vert \nabla u\right\vert ^{2}$ in Eq.(3.13), we are able to
identify Eq.(3.13) with Perelman's entropy functional, enabling us to follow
the same steps as were made in Perelman's papers aimed at the proof of the
geometrization and Poincar\'{e} conjectures. Such a program was
indeed completed in the PhD thesis by List [36]. Minimization of the entropy
functional $\mathcal{F}(\hat{g}_{ij},u,f)$ produces the following set of
equations
\begin{equation}
\frac{\partial }{\partial t}g_{ij}=-2(\hat{R}_{ij}+\nabla _{i}\nabla _{j}f)+4\nabla _{i}u\nabla _{j}u, \tag{3.14a}
\end{equation}
\begin{equation}
\frac{\partial }{\partial t}u=\Delta _{\hat{g}}u-\left( \nabla u\right) \cdot \left( \nabla f\right) , \tag{3.14b}
\end{equation}
and
\begin{equation}
\frac{\partial }{\partial t}f=-\hat{R}-\Delta _{\hat{g}}f+2\left\vert \nabla u\right\vert ^{2}, \tag{3.14c}
\end{equation}
coinciding with Eq.s(7.28a), (7.28b) of our work, Ref.[50], when $u=0.$ In
these equations $\left\vert \nabla _{\hat{g}}u\right\vert ^{2}=\hat{g}^{ij}\nabla _{i}u\nabla _{j}u$, etc. From the next section and the results
below it follows that physically we should be interested in closed 3-manifolds. For such manifolds one can use Lemma 2.13, proven by List [36],
which can be formulated as follows:
Let $\hat{g},u,f$ be a solution of Eq.s(3.14) for $t\in \lbrack 0,T)$ on a
closed manifold $M$. Then the evolution of the entropy is given by
\begin{equation}
\partial _{t}\mathcal{F}(\hat{g}_{ij},u,f)=2\int\limits_{M}[\left\vert \mathcal{R}_{ij}(\hat{g};u)+\nabla _{i}\nabla _{j}f\right\vert ^{2}+2(\Delta _{\hat{g}}u-\left( \nabla u\right) \cdot \left( \nabla f\right) )^{2}]e^{-f}dv\geq 0. \tag{3.15}
\end{equation}
Thus, the entropy is non-decreasing, with equality taking place if and only
if the solution of Eq.s(3.14) is a gradient soliton. This happens when the
following two conditions hold
\begin{equation}
\mathcal{R}_{ij}(\hat{g};u)+\nabla _{i}\nabla _{j}f=0\text{ and }\Delta _{\hat{g}}u-\left( \nabla u\right) \cdot \left( \nabla f\right) =0.
\tag{3.16}
\end{equation}
For $u=0$ the result of Perelman, Eq.(7.30) of Ref.[50], for the steady gradient
soliton is reobtained, as required. Since for closed compact manifolds
$f=const$, Eq.s(3.16) coincide with Eq.s(3.4) and (3.5), as anticipated.
\textsl{Thus, the existence of steady gradient solitons in the present context
is equivalent to the existence of solutions of the static Einstein equations for
pure gravity.} This fact alone could be mathematically interesting, but it
requires some reinforcement to be of physical interest. We initiate this
reinforcement process in the following subsection.
\subsection{Relationship between the F-S and the Ernst functionals}
The F-S functional was mentioned in the Introduction. In this subsection we
would like to initiate the study of its connection with the Ernst functional. We
begin with the following observation. In the steps leading to Eq.(2.14b) (or
(3.10)) the Euclidean time coordinate $x_{0}$ was eventually dropped,
implying that solutions of the self-duality equations for the Y-M fields, when
substituted back into the Y-M action functional, will produce physically
meaningless (divergent) results. While in subsection 4.4 we discuss a variety
of means for removing such an apparent divergence, in this subsection we notice
that already Ernst [42] suggested an action functional whose minimization
produces the Ernst equation. He gave two equivalent forms for such a
functional, now bearing his name. These are either
\begin{equation}
S_{E_{1}}[\epsilon ]=\int\limits_{M}dv\frac{\mathbf{\nabla }\epsilon \cdot \mathbf{\nabla }\epsilon ^{\ast }}{\left( \func{Re}\epsilon \right) ^{2}} \tag{3.17}
\end{equation}
or
\begin{equation}
S_{E_{2}}[\xi ]=\int\limits_{M}dv\frac{\mathbf{\nabla }\xi \cdot \mathbf{\nabla }\xi ^{\ast }}{\left( \xi \xi ^{\ast }-1\right) ^{2}}. \tag{3.18}
\end{equation}
Minimization of $S_{E_{1}}[\epsilon ]$ leads to Eq.(2.4), while the functional,
Eq.(3.18), is obtained from $S_{E_{1}}[\epsilon ]$ by means of the substitution
$\epsilon =(\xi -1)/(\xi +1).$ In both functionals $dv$ is the 3-dimensional
Euclidean volume element, so that apparently the manifold $M$ is just $E^{3}$
(or, with one-point compactification, $S^{3}).$ Evidently, both
$S_{E_{1}}[\epsilon ]$ and $S_{E_{2}}[\xi ]$ are functionals for the
nonlinear sigma model. If we drop the curvature term in Eq.(3.6), such a
truncated functional can be identified, for example, with $S_{E_{2}}[\xi ]$.
This explains why Eq.(3.10) is formally unaffected by gravity. In the
mathematics literature nonlinear sigma models are known as harmonic
maps. Since Reina [56] demonstrated that the functional $S_{E_{2}}[\xi ]$
describes the harmonic map from $S^{3}$ to $\mathbf{H}^{2},$ it is not too
difficult to write the analogous functional $S_{E_{3}}[\xi ]$ describing the
mapping from $S^{3}$ to $S^{2}.$ It is given by
\begin{equation}
S_{E_{3}}[\xi ]=\int\limits_{M}dv\frac{\mathbf{\nabla }\xi \cdot \mathbf{\nabla }\xi ^{\ast }}{\left( \xi \xi ^{\ast }+1\right) ^{2}} \tag{3.19}
\end{equation}
and is part of the F-S model. If needed, both $S_{E_{2}}[\xi ]$ and
$S_{E_{3}}[\xi ]$ can be supplemented by additional (topological) terms, which
in the simplest case are winding numbers. Thus, we shall be dealing either
with the truncated F-S model, Eq.(3.19), or with its hyperbolic analog,
Eq.(3.18). The choice between these models is nontrivial and will be
discussed in detail in Section 6. To facilitate this discussion, we
need to observe the following. In the static case we argued, e.g. see
Eq.(2.5), that $\epsilon =-F=-e^{2u}.$ Substitution of this result back into
$S_{E_{1}}[\epsilon ]$ produces (up to a constant) the following result
\begin{equation}
\tilde{S}_{E_{1}}[\epsilon ]=\int\limits_{M}dv\mathbf{\nabla }u\cdot \mathbf{\nabla }u \tag{3.20}
\end{equation}
leading to Eq.(2.5), as anticipated. At the same time, consider the following
H-E action functional
\begin{equation}
S_{H-E}[\hat{g}]=\int\limits_{M}dv\sqrt{\hat{g}}\hat{R}(\hat{g}), \tag{3.21}
\end{equation}
and take into account Eq.(3.3) and the fact that $\hat{g}_{ij}=-e^{2u}\delta _{ij}$. A straightforward calculation then leads us to the result (up
to a constant)
\begin{equation}
S_{H-E}[\hat{g}]=-\int\limits_{M}dv\mathbf{\nabla }u\cdot \mathbf{\nabla }u.
\tag{3.22}
\end{equation}
The minus sign in front of the integral is important and will be explained
below. Before doing so, we notice that the Ernst functional (in whatever
form) is essentially equivalent to the H-E functional! Since in the original
paper by Ernst $M$ is $E^{3}$ (or $S^{3}$), apparently such a functional
should be zero. This is surely not the case in general, but the explanation
is nontrivial. Suppose that minimization of the Ernst functional leads to
some knotted/linked structures.\footnote{We shall postpone a detailed discussion of this topic until Section 6.}
If such knots/links are hyperbolic then, by construction, the complements of
these knots/links in $S^{3}$ are $\mathbf{H}^{3}$ modulo some discrete
group. This conclusion is in accord with the properties of the Ernst equation
discovered by Reina and Treves [41]. Following this reference, we
introduce the complex space $\mathbf{C}\times \mathbf{C}=\mathbf{C}^{2}$, so
that $\forall \mathbf{z}=(u,v)\in \mathbf{C}^{2}$ the
scalar product $z_{\alpha }^{\ast }z^{\alpha }$ can be formed with the metric
$\pi _{\alpha \beta }=diag\{1,-1\}$. Furthermore, the Ernst Eq.(2.4) can be
rewritten with the help of the substitution $\epsilon =(u-v)/(u+v)$ as the set of
two equations
\begin{equation}
z^{\alpha }z_{\alpha }^{\ast }\nabla ^{2}z^{\beta }=2z_{\alpha }^{\ast }\mathbf{\nabla }z^{\alpha }\cdot \mathbf{\nabla }z^{\beta }. \tag{3.23}
\end{equation}
Such a system of equations is invariant with respect to transformations from
the unimodular group SU(1,1) which is equivalent to SL(2,\textbf{C}). But SL(2,\textbf{C}) is the group of isometries of the hyperbolic space $\mathbf{H}^{3}$,
as was discussed extensively in our work, Ref.[57]. Thus, minimization of
both the F-S and Ernst functionals should account for knotted/linked
structures. This conclusion is strengthened in the next subsection.
\subsection{Relationship between the Ernst and Chern-Simons functionals}
Even though we need this relationship mainly in anticipation of the results of the
next section, finding it also reveals some unexpected connections with the
previous subsection. For this purpose, we notice that
for $u=0$ the functional $\mathcal{F}(\hat{g}_{ij},u,f)$ introduced
earlier is just Perelman's entropy functional. As such, it was discussed in
our work, Ref.[50]. Evidently, both $\mathcal{F}$ and Perelman's functional
can be used for the study of topology of 3-manifolds. We believe, though, that
the use of Perelman's functional is more advantageous, as we would like to
explain now. For this purpose, it is convenient to introduce the Rayleigh
quotient $\lambda _{g}$ via
\begin{equation}
\lambda _{g}=\inf_{\varphi }\frac{\int\limits_{M}dV(4\left\vert \nabla
\varphi \right\vert ^{2}+R(g)\varphi ^{2})}{\int\limits_{M}dV\varphi ^{2}},
\tag{3.24}
\end{equation}
e.g. see Eq.(7.24) of [50], to be compared against the Yamabe quotient ($p=\frac{2d}{d-2}$ and $\alpha =4\frac{d-1}{d-2}$)
\begin{equation*}
Y_{g}=\frac{\int d^{d}x\sqrt{\hat{g}}\hat{R}(\hat{g})}{\left( \int d^{d}x\sqrt{\hat{g}}\right) ^{\frac{2}{p}}}=\left( \frac{1}{\int\nolimits_{M}d^{d}x\sqrt{g}\varphi ^{p}}\right) ^{\frac{2}{p}}\int\nolimits_{M}d^{d}x\sqrt{g}\{\alpha \left( \nabla _{g}\varphi \right) ^{2}+R(g)\varphi ^{2}\}\equiv \frac{E[\varphi ]}{\left\Vert \varphi \right\Vert _{p}^{2}}
\end{equation*}
also discussed in [50]. Because of the similarity of these two quotients the
question arises: can they be equal to each other? The affirmative answer to
this question is obtained in Ref.[58]. It can be formulated as
\textbf{Theorem} [58]. Suppose that $\gamma $ is a conformal class on $M$
which \textsl{does not} contain a metric of \textsl{positive scalar
curvature}. Then
\begin{equation}
Y_{\gamma }=\sup_{g\in \gamma }\lambda _{g}V(g)^{\frac{2}{d}}\equiv \bar{\lambda}(M), \tag{3.25a}
\end{equation}
where $\bar{\lambda}(M)$ is Perelman's $\bar{\lambda}$ invariant.
Equivalently,
\begin{equation}
\lambda _{g}V(g)^{\frac{2}{d}}\leq Y_{\gamma }, \tag{3.25b}
\end{equation}
where $V(g)=\int d^{d}x\sqrt{\hat{g}}$ is the volume.
The equality happens when $g$ is the Yamabe minimizer. It is the metric of unit
volume for a manifold $M$ of constant scalar curvature (which, according to the
theorem above, should be negative so that $M$ is a hyperbolic 3-manifold).
Only for hyperbolic 3-manifolds, with \textsl{Yamabe invariant} $\mathcal{Y}^{-}(M)=\sup_{\gamma }Y_{\gamma }$, is the gravitational Cauchy problem for the
source-free gravitational field well posed [45,46]. For $g$ which is the
Yamabe minimizer we have $S_{H-E}[\hat{g}]\leq Y_{\gamma }.$ This result can
be further extended by noticing that $\mathcal{N}S_{H-E}[\hat{g}]=CS(\mathbf{A}),$ where $\mathcal{N}$ is some constant whose value depends upon the
explicit form of the gauge field $\mathbf{A}$, and $CS(\mathbf{A})$
is the Chern-Simons invariant to be described in the next section.
To demonstrate that $\mathcal{N}S_{H-E}[\hat{g}]=CS(\mathbf{A})$ it is
sufficient to use some results from works by Chern and Simons [59] and by
Chern [60]. In [59] it was proven that: a) the Chern-Simons (C-S) functional
$CS(\mathbf{A})$ (to be defined in the next section) is a conformal invariant
of $M$ (Theorem 6.3 of [59]) and, b) the critical points of $CS(\mathbf{A})$ correspond to 3-manifolds which are (at least locally)
conformally flat (Corollary 6.14 of [59]). Subsequently, these results were
reobtained by Chern, Ref.[60], in a much simpler and more physically
suggestive way. In view of these facts, at least for Yamabe minimizers we
obtain, $CS(\mathbf{A})=\mathcal{N}S_{Y}[\varphi ],$ where $\mathcal{N}$ is
some constant (different for different gauge groups). That this is the case
should come as not too big of a surprise since for Lorentzian 2+1 gravity
Witten, Ref.[61], demonstrated the equivalence of the Hilbert-Einstein and
C-S functionals without reference to results of Chern and Simons just cited.
At the same time, Euclidean/Hyperbolic 3d gravity was discussed only
much more recently, for instance, in the paper by Gukov, Ref.[62]. To avoid
duplications we refer our readers to these papers for further details.
\section{Floer-style nonperturbative treatment of Y-M fields}
\subsection{Physical content of the Floer's theory}
The striking resemblance between the results of nonperturbative treatment of
4-dimensional Y-M fields and the two-dimensional nonlinear sigma model at the
classical level is well documented in Ref.[63]. Zero curvature
equations, e.g. Eq.(2.7), can be obtained either by using the
two-dimensional nonlinear sigma model or the three-dimensional C-S functional. As
discussed in the previous section, the self-duality condition for Y-M fields
also leads (upon reduction) to the zero curvature condition. Since the Ernst
equation describing static gravitational (and electrovacuum) fields is
obtainable both from the conditions of self-duality for the Y-M field and from
minimization of the 3-dimensional nonlinear sigma model, it follows that the 3-d
gravitational nonlinear sigma model, Eq.(3.8), contains nonperturbative
information about Y-M fields. Furthermore, in view of results of Appendix A,
it also should contain information about the static electromagnetic fields,
for the combined gravitational and electromagnetic waves and, with minor
modifications, for the combined gravitational, electromagnetic and neutrino
fields. The nonperturbative treatment of Y-M fields is usually associated
either with the instanton or monopole calculations. This observation leads
to the conclusion that, at least in some cases, zero curvature equation
should carry all nonperturbative information about Y-M fields. This point of
view is advocated and developed by Floer [11,21]. Below, we
shall discuss Floer's point of view in the language used in the physics
literature. For the sake of illustration, it is convenient to present our
arguments for Abelian Y-M (that is, electromagnetic) fields first.
The action functional $S$ in this case is given by\footnote{Up to an unimportant scale factor.}
\begin{equation}
S=\frac{1}{2}\int\limits_{0}^{t}dt\int\limits_{M}dv[\mathbf{E}^{2}-\mathbf{B}^{2}], \tag{4.1}
\end{equation}
where $\mathbf{B}=\mathbf{\nabla }\times \mathbf{A}$ and $\mathbf{E}=-\mathbf{\nabla }\varphi -\frac{\partial }{\partial t}\mathbf{A}$, $\varphi
\equiv \mathbf{A}_{0}.$ It is known that, at least for electromagnetic
waves, it is sufficient to put $\mathbf{A}_{0}=0$ (temporal gauge). In such
a case the above action can be rewritten as
\begin{equation}
S[\mathbf{A}]=\frac{1}{2}\int\limits_{0}^{t}dt\int\limits_{M}dv[\mathbf{\dot{A}}^{2}-\mathbf{(\nabla \times A)}^{2}], \tag{4.2}
\end{equation}
where $\mathbf{\dot{A}}=\frac{\partial }{\partial t}\mathbf{A}.$ From the
condition $\delta S/\delta \mathbf{A}=0$ we obtain $\frac{\partial \mathbf{E}}{\partial t}=\mathbf{\nabla }\times \mathbf{B}$. The definition of $\mathbf{B}$ guarantees the validity of the condition $\mathbf{\nabla }\cdot \mathbf{B}=0$, while from the definition of $\mathbf{E}$ we get another
Maxwell equation, $\frac{\partial \mathbf{B}}{\partial t}=-\mathbf{\nabla }\times \mathbf{E}$. The question arises: do these results imply the
remaining Maxwell equation $\mathbf{\nabla }\cdot \mathbf{E}=0$, essential
for the correct formulation of the Cauchy problem? If such a constraint is
satisfied at $t=0,$ naturally, it will be satisfied for $t>0$.
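Indeed, the preservation in time is immediate: taking the divergence of the evolution equation for $\mathbf{E}$ obtained above gives
\begin{equation*}
\frac{\partial }{\partial t}(\mathbf{\nabla }\cdot \mathbf{E})=\mathbf{\nabla }\cdot (\mathbf{\nabla }\times \mathbf{B})=0,
\end{equation*}
so that $\mathbf{\nabla }\cdot \mathbf{E}$ is a constant of motion.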
Unfortunately, for $t=0$ the existence of such a constraint does not follow
from the above equations and should be introduced independently. This
motivates the decomposition of the field $\mathbf{A}$ as $\mathbf{A}=\mathbf{A}_{\parallel }+\mathbf{A}_{\perp }$. Taking into account that $\mathbf{E}=-\frac{\partial }{\partial t}\mathbf{A},$ we obtain as well $\mathbf{\nabla }\cdot \mathbf{E}=\mathbf{\nabla }\cdot (\mathbf{E}_{\parallel }+\mathbf{E}_{\perp }).$ Then, by design,
$\mathbf{\nabla }\cdot \mathbf{E}_{\perp }=0,$ while $\mathbf{\nabla }\cdot
\mathbf{E}_{\parallel }$ remains to be defined by the initial and boundary
data. Because of this, it is always possible to choose $\mathbf{A}_{\parallel }=0$ and to use only $\mathbf{A}_{\perp }$ for description of
the field propagation [64]. Hence, the action functional $S$ can be finally
rewritten as
\begin{equation}
S[\mathbf{A}_{\perp }]=\frac{1}{2}\int\limits_{0}^{t}dt\int\limits_{M}dv[\mathbf{\dot{A}}_{\perp }^{2}-\mathbf{(\nabla \times A}_{\perp }\mathbf{)}^{2}]. \tag{4.3}
\end{equation}
In such a form it can be used as the action in path integrals, e.g. see
Ref.[64], page 152, describing the free electromagnetic field. Such a path
integral can be evaluated both in Minkowski and Euclidean spaces by the
saddle point method. There is, however, a closely related method more
suitable for our purposes. It is described in the monograph by Donaldson,
Ref.[11]. Following this reference, we replace the time variable $t$ by $-i\tau $
in the functional $S[\mathbf{A}_{\perp }]$. Consider now this replacement in
some detail. We have\footnote{We shall assume (without loss of generality) that $\mathbf{\dot{A}}_{\perp }$ is collinear with $\mathbf{A}_{\perp }.$}
\begin{eqnarray}
&&\frac{1}{2}\int\limits_{0}^{T}d\tau \int\limits_{M}dv[\mathbf{\dot{A}}_{\perp }^{2}+\mathbf{(\nabla \times A}_{\perp }\mathbf{)}^{2}] \notag \\
&=&\frac{1}{2}\int\limits_{0}^{T}d\tau \int\limits_{M}dv([\mathbf{\dot{A}}_{\perp }+\mathbf{(\nabla \times A}_{\perp }\mathbf{)}]^{2}-\frac{\partial }{\partial \tau }\mathbf{(A}_{\bot }\mathbf{\cdot \nabla \times A}_{\perp }\mathbf{)}). \tag{4.4}
\end{eqnarray}
Since the variation of $\mathbf{A}_{\perp }$ is fixed at the endpoints of the $\tau $
integration, the last term can be dropped so that we are left with the
condition
\begin{equation}
\frac{\partial }{\partial \tau }\mathbf{A}_{\perp }=-\mathbf{B}_{\perp }
\tag{4.5}
\end{equation}
extremizing the Euclidean action $S_{E}[\mathbf{A}_{\perp }].$ The above
results are transferable to the non-Abelian Y-M field by continuity and
complementarity. Since in the Abelian case the fields $\mathbf{E}$ and $\mathbf{B}$ are dual to each other, by applying the curl operator to both sides of
Eq.(4.5) (and removing the subscript $\perp )$ we obtain the equivalent
form of the self-duality equations in accord with those on page 33 of Ref.[6].
This calculation provides an independent check of Donaldson's method of
computation. Since the (anti)self-duality condition in the Abelian case can
be written as $\mathbf{B}=\mp \mathbf{E}$ [9], and since $\mathbf{E}=-\frac{\partial }{\partial \tau }\mathbf{A}$, we conclude that Eq.(4.5)
is the self-duality equation. This conclusion is immediately transferable to
the non-Abelian Y-M case where the analog of Eq.(4.5) is
\begin{equation}
\frac{\partial }{\partial \tau }\mathbf{A}=\ast \mathbf{F}(\mathbf{A}(\tau )), \tag{4.6}
\end{equation}
in accord with Floer. The symbol * denotes the Hodge star operation in 3
dimensions. Following Donaldson [11] this result should be understood as
follows. Introduce a connection matrix $\mathbf{A}=A_{0}d\tau
+\sum\limits_{i=1}^{3}A_{i}dx_{i}$ such that both $A_{0}$ and $A_{i}$ depend
upon all four variables $\tau ,x_{1},x_{2}$ and $x_{3}.$ In the temporal
gauge $A_{0}$ should be discarded so that $\tau $ becomes a parameter in the
remaining $A_{i}$'s. Evidently, it can be associated with the
spectral parameter (e.g. see the previous section). The Hodge star operator in
Eq.(4.6) is needed to make this an equation for one-forms. The
obtained results fit nicely into the Cauchy formulation of dynamics of both Y-M
fields and gravity. Indeed, under conditions analogous to those discussed in
[45,46] the space-time (4-manifold) is decomposable into the direct product
$M\times R$ (a trivial fiber bundle) in such a way that all differential
operations acting on the 4-manifold are projected down to the 3-manifold $M$.
This is an essential part of Floer's theory. Furthermore, since $\delta CS(\mathbf{A})/\delta \mathbf{A}=\mathbf{F}(\mathbf{A})$, the above Eq.(4.6) can
be equivalently rewritten as
\begin{equation}
\frac{\partial }{\partial \tau }\mathbf{A}=\ast \lbrack \delta CS(\mathbf{A})/\delta \mathbf{A}] \tag{4.7}
\end{equation}
so that the Chern-Simons functional plays the role of a "Hamiltonian" in
Eq.(4.7). From the theory of dynamical systems it follows then that the
dynamics takes place between the points of equilibria defined by the zero
curvature condition $\mathbf{F}(\mathbf{A})=0.$ At the same time, using our
work, Ref.[50], one easily recognizes Eq.(4.7) as an equation for a
gradient flow, e.g. see Eq.s(3.14). For the sake of space we shall not
discuss this topic any further. Interested readers are encouraged to consult
Ref.[65]. For supersymmetric Y-M fields participating in Seiberg-Witten
theory the gradient flow equations are discussed in detail in Ref.[66].
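A one-line computation makes the gradient flow interpretation explicit. Along a solution of Eq.(4.7) (signs and normalizations are convention-dependent)
\begin{equation*}
\frac{d}{d\tau }CS(\mathbf{A})=\int\limits_{M}tr\left( \frac{\delta CS}{\delta \mathbf{A}}\wedge \frac{\partial }{\partial \tau }\mathbf{A}\right) =\int\limits_{M}tr(\mathbf{F}\wedge \ast \mathbf{F})\propto \left\Vert \mathbf{F}\right\Vert ^{2},
\end{equation*}
so that $CS(\mathbf{A})$ changes monotonically along the flow and is stationary precisely at the flat connections $\mathbf{F}(\mathbf{A})=0$.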
The mechanical system described by Eq.(4.7) should be eventually quantized.
Since the quantization procedure is outlined in Ref.[67], to avoid
duplications, we shall concentrate the attention of our readers on aspects of
Floer's theory not covered in [67] but still relevant to this paper. To do
so, we follow Donaldson [11]. This is accomplished in several steps.
\textsl{First}, in the previous section we noticed that the axially
symmetric self-dual solution for Y-M fields does not depend on the $x_{0}$ (or
$\tau )$ variable. Therefore, if such a solution is substituted back into the Y-M
functional, it produces a divergent result. Although the cure for this issue
is discussed in subsection 4.4, in this subsection we provide needed
background. For this purpose, following Ref.[68] we consider the Y-M
action $S[\mathbf{F}]$ for the pure Y-M field\footnote{Strictly following notations of Ref.[68] we do not indicate that in general
the integration should be made over some 4-manifold $\mathcal{M}$. In the
physics literature, and in Eq.s(2.11), it is assumed that we are dealing
with \textbf{R}$^{4}$ (or $S^{4}$ upon compactification). In Floer's theory
it is essential that the 4-manifold is decomposable as $M\times R$. This
decomposition should be treated with care as described in Donaldson's
book [11].}
\begin{equation}
S[\mathbf{F}]=-\frac{1}{8}\int_{\mathbf{R}^{4}}d^{4}x\,tr(F_{\mu \nu }F_{\mu \nu }). \tag{4.8}
\end{equation}
The duality condition\footnote{With the convention that $\varepsilon _{1234}=-1.$} $\ast F_{\mu \nu }=\frac{1}{2}\varepsilon _{\mu \nu \alpha \beta }F_{\alpha \beta }$ allows us then
to rewrite this action as follows
\begin{equation}
S[\mathbf{F}]=-\frac{1}{16}\int_{\mathbf{R}^{4}}d^{4}x[tr((F_{\mu \nu }\mp
\ast F_{\mu \nu })(F_{\mu \nu }\mp \ast F_{\mu \nu }))\pm 2tr(F_{\mu \nu
}\ast F_{\mu \nu })] \tag{4.9}
\end{equation}
since $tr(F_{\mu \nu }F_{\mu \nu })=tr(\ast F_{\mu \nu }\ast F_{\mu \nu }).$
The winding number $N$ for the SU(2) gauge field is defined as\footnote{We follow notations of Ref.[68] in which \textbf{R}$^{4}$ is
actually standing for $S^{4}$.}
\begin{equation}
N=-\frac{1}{8\pi ^{2}}\int_{\mathbf{R}^{4}}d^{4}x\,tr(F_{\mu \nu }\ast F_{\mu \nu })\equiv -\frac{1}{8\pi ^{2}}\int_{\mathbf{R}^{4}}tr(F_{\mu \nu }\wedge F_{\mu \nu }) \tag{4.10}
\end{equation}
so that use of this definition in Eq.s(4.8),(4.9) produces
\begin{equation}
S[\mathbf{F}]\geq \pi ^{2}\left\vert N\right\vert \tag{4.11}
\end{equation}
with the equality taking place when the (anti)self-duality condition (e.g.
see Eq.(2.10)) holds. In such a case the saddle point action becomes
just the winding number (up to a constant).
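With the normalizations of Eq.s(4.8)-(4.10), the bound (4.11) follows from Eq.(4.9) in one line: for anti-Hermitian field strength the quadratic term is negative semi-definite, $-tr\,X^{2}\geq 0$, while the cross term is proportional to the winding number, so that
\begin{equation*}
S[\mathbf{F}]=-\frac{1}{16}\int_{\mathbf{R}^{4}}d^{4}x\,tr\left( (F_{\mu \nu }\mp \ast F_{\mu \nu })^{2}\right) \pm \pi ^{2}N\geq \pm \pi ^{2}N,
\end{equation*}
and choosing the sign in accordance with the sign of $N$ yields $S[\mathbf{F}]\geq \pi ^{2}\left\vert N\right\vert $.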
\textsl{Second}, if our space-time 4-manifold $\mathcal{M}$ can be
decomposed as $M\times \lbrack 0,1],$ the following identity can be used
[11]
\begin{equation}
\int_{M\times \lbrack 0,1]}tr(F_{\mu \nu }\wedge F_{\mu \nu })=\int_{M}tr(\mathbf{A}\wedge d\mathbf{A}+\frac{2}{3}\mathbf{A}\wedge \mathbf{A}\wedge \mathbf{A})\doteqdot CS(\mathbf{A}). \tag{4.12}
\end{equation}
Here the symbol $\doteqdot $ means "up to a constant". The decomposition
$M\times \lbrack 0,1]$ reflects the fact that the C-S functional is defined
up to mod \textbf{Z}. This ambiguity can be removed if we agree to consider the
C-S functional as taking values in the quotient \textbf{R}/\textbf{Z}. Accordingly, this allows
us to replace $M\times R$ by $M\times \lbrack 0,1].$ Details can be found in
Ref.[11]. Thus, one way or another the winding number $N$ in Eq.(4.10) can
be replaced by the Chern-Simons functional.
\textsl{Third}, since the equation of motion for the C-S functional is the zero
curvature condition $\mathbf{F}=0$, i.e.
\begin{equation}
d\mathbf{A}+\mathbf{A}\wedge \mathbf{A}=0, \tag{4.13}
\end{equation}
implying that the connection $\mathbf{A}$ is flat, we can use this result in
Eq.(4.12) in order to rewrite it as (e.g. for SU(2))
\begin{equation}
\frac{1}{8\pi ^{2}}\int_{M}tr(\mathbf{A}\wedge d\mathbf{A}+\frac{2}{3}\mathbf{A}\wedge \mathbf{A}\wedge \mathbf{A})=-\frac{1}{24\pi ^{2}}\int_{M}tr(\mathbf{A}\wedge \mathbf{A}\wedge \mathbf{A}). \tag{4.14}
\end{equation}
For other groups the prefactor and the domain of integration will be
different in general.
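The right-hand side of Eq.(4.14) is obtained by direct substitution of the flatness condition, Eq.(4.13), i.e. $d\mathbf{A}=-\mathbf{A}\wedge \mathbf{A}$, into the left-hand side:
\begin{equation*}
\int_{M}tr(\mathbf{A}\wedge d\mathbf{A}+\frac{2}{3}\mathbf{A}\wedge \mathbf{A}\wedge \mathbf{A})=\left( -1+\frac{2}{3}\right) \int_{M}tr(\mathbf{A}\wedge \mathbf{A}\wedge \mathbf{A})=-\frac{1}{3}\int_{M}tr(\mathbf{A}\wedge \mathbf{A}\wedge \mathbf{A}),
\end{equation*}
which, together with the prefactor $\frac{1}{8\pi ^{2}}$, reproduces the factor $-\frac{1}{24\pi ^{2}}$.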
\textsl{Fourth}, the zero curvature Eq.(4.13) involves connections which are
functions of three arguments and a spectral/time parameter. In such a
setting the minimization of the Y-M functional is not divergent in view of Eq.(4.11).
\textsl{Fifth}, the obtained result, Eq.(4.14), coincides with that known
for the winding number of SU(2) instantons in the physics literature [8,19],
where it was obtained with the help of entirely different arguments. It should
be noted, though, that in spite of the apparent simplicity of these results,
actual calculations of C-S functionals (invariants) for different
3-manifolds are, in fact, very sophisticated [69,70]. In accord with Floer
and Ref.[67], we conclude that nonperturbatively the 4-dimensional pure Y-M
quantum field theory is a topological field theory of C-S type.
\textsl{Sixth}, the isomorphism noticed by Louis Witten now acquires a
natural explanation. It becomes possible in view of the results just presented,
on one hand, and the fact that $\mathcal{N}S_{H-E}[\hat{g}]=CS(\mathbf{A})$
(previous section), on the other. For fields with axial symmetry, the equations of
motion, Eq.(4.13), for gravity and Y-M fields coincide.
\textsl{Seventh}, the instantons in Floer's theory are \textsl{not} the same
as those considered in the physics literature [8,19]. To understand this, we must
take into account that in Floer's theory the manifolds under consideration are
4-manifolds $\mathcal{M}$ with tubular ends. Such manifolds are complete
Riemannian manifolds with a finite number of tubular ends made of half tubes
$(0,\infty )$ so that locally each such manifold looks like
$U_{i}=L_{i}\times (0,\infty )$ with $L_{i}$ being a compact
3-manifold (called a "cross-section of a tube") and $i$ numbering the tubes.
The closure of $\mathcal{M\setminus }\bigcup\nolimits_{i=1}^{n}U_{i}$ is a
compact manifold with boundary. If the cross-section is $S^{3},$ then $U$ is
conformally equivalent to a punctured ball $B^{4}\setminus \{0\}.$ This
implies that a manifold $\mathcal{M}$ with tubular ends is conformally
equivalent to a punctured manifold $\mathcal{\tilde{M}}\setminus \{p_{1},...,p_{n}\}$ where $\mathcal{\tilde{M}}$ is compact. The instanton
moduli problem for $\mathcal{M}$ is equivalent to that for the punctured
manifold [11]. Recall that the moduli space of instantons is defined as the set
of solutions of the anti-self-duality equations modulo gauge equivalence.
Being armed with these definitions and taking into account the (anti)
self-duality Eq.(4.7), we can interpret the instanton as a path connecting
one flat connection $\mathbf{F}=0$ at "time" $\tau =-\infty $ with another
flat connection at "time" $\tau =\infty $ [11]. It is permissible for the
path to begin at one flat connection, to wind around a tube (modulo gauge
equivalence) and to end up at the same flat connection, Ref.[11], page 22.
Evidently, this case involves 4-manifolds with just one tubular end.
Physically, each flat connection $\mathbf{F}=0$ represents the vacuum state
so that the instantons discussed in the Introduction should be connecting
different vacua. In this sense there is a difference between the
interpretation of instantons in mathematics and physics literature. As in
the case of standard quantum mechanics, only imposition of some additional
physical constraints permits us to select between all possible solutions
only those which are physically relevant. In the present context it is known
that all exactly integrable systems are described by the zero curvature
equation $\mathbf{F}=0$ [5,6\textbf{]}. It is also known that differences
between these equations are caused in part by differences in a way the
spectral parameter enters into these equations. Since for the Floer's
instantons $\mathbf{F}\neq 0,$ it means that the curvature $\mathbf{F}$
should be parametrized in such a way that the "time" parameter \ should
become a spectral parameter when $\mathbf{F}=0.$ In this work we do not
spectral parameter when $\mathbf{F}=0.$ In this work we do not
investigate this problem\footnote{See Ref.[7] for an introduction to this topic.}. Instead, we shall focus our
attention on different vacua, that is, on different (knot-like) solutions
of the zero curvature equation $\mathbf{F}=0.$\footnote{A complement of each knot in $S^{3}$ is a 3-manifold. Floer's instantons are
in fact connecting various 3-manifolds. These 3-manifolds (with tubular
ends) should be glued together to form $\mathcal{M}.$ The gluing procedure is
an extremely delicate mathematical operation [11]. It is above the level of
rigor of this paper. To imagine the connected sum of knots [20] is much
easier than the connected sum of 3-manifolds. This sum has a physical meaning
discussed in Section 6.}
\subsection{The Faddeev-Skyrme model and vacuum states of the Y-M functional}
In the light of results just presented, we would like to argue that the F-S
model is indeed capable of representing the vacuum states of pure Y-M
fields. For this purpose it is sufficient to recall the key results of the
paper by Auckly and Kapitanski [71]. These authors were able to rewrite the
Faddeev functional
\begin{equation}
E[\mathbf{n}]=\int\limits_{S^{3}}dv\{\left\vert d\mathbf{n}\right\vert
^{2}+\left\vert d\mathbf{n}\wedge d\mathbf{n}\right\vert ^{2}\} \tag{4.15}
\end{equation}
in the equivalent form given by
\begin{equation}
E_{\varphi }[a]=\int\limits_{S^{3}}dv\{\left\vert D_{\mathbf{a}}\mathbf{\varphi }\right\vert ^{2}+\left\vert D_{\mathbf{a}}\mathbf{\varphi }\wedge D_{\mathbf{a}}\mathbf{\varphi }\right\vert ^{2}\}. \tag{4.16}
\end{equation}
In this expression, the covariant derivative $D_{a}\mathbf{\varphi }=d\mathbf{\varphi }+[\mathbf{a},\mathbf{\varphi }]$. Evidently, $E_{\varphi }[\mathbf{a}]$ acquires its minimum when $\mathbf{\varphi }=\mathbf{a}$ and
the connection becomes flat (that is, the covariant derivative becomes zero).
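In other words, both terms in Eq.(4.16) are manifestly non-negative, so that
\begin{equation*}
E_{\varphi }[\mathbf{a}]\geq 0,\qquad E_{\varphi }[\mathbf{a}]=0\Longleftrightarrow D_{\mathbf{a}}\mathbf{\varphi }=0,
\end{equation*}
i.e. the absolute minimum is attained precisely on covariantly constant configurations.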
Since this result is compatible with those discussed in the previous subsection,
it implies that, indeed, Faddeev's model can be used for the description of
vacuum states of pure Y-M fields. Only one question remains: is this model
the only model describing the QCD vacuum? In view of Eq.s (3.18),(3.19) it
should be clear that this is not the case. The full explanation is given
below, in Sections 5,6. In addition, the disadvantage of the F-S model as
such (that is, without modifications) lies in the absence of a gap upon its
quantization, as was recognized already by Faddeev and Niemi in Ref.[25].
In Sections 5,6 we shall eliminate this deficiency in a way
different from that described in the Introduction (e.g. in Ref.s[25,26]). In
the meantime, we would like to find the place for monopoles in our
calculations.
\subsection{Monopoles and the Ernst equation}
\subsubsection{Monopoles versus instantons}
To introduce notations and for the sake of uninterrupted reading, we
briefly describe an alternative point of view on the results of the previous
subsection. For this purpose, following Manton [49], we need to make a
comparison between the Lagrangians for the SU(2) Y-M and Y-M-Higgs fields,
described respectively by
\begin{equation}
\mathcal{L}_{Y-M}=-\frac{1}{4}tr(F_{\mu \nu }F^{\mu \nu }) \tag{4.17}
\end{equation}
and
\begin{equation}
\mathcal{L}_{Y-M-H}=-\frac{1}{4}tr(\mathbf{F}_{\mu \nu }\mathbf{F}^{\mu \nu })-\frac{1}{2}tr\left( D_{\mu }\mathbf{\Phi }\cdot D^{\mu }\mathbf{\Phi }\right) -\frac{\lambda }{2}(1-\mathbf{\Phi }\cdot \mathbf{\Phi })^{2}
\tag{4.18}
\end{equation}
with the covariant derivative for the Higgs field defined as $D_{\mu }\mathbf{\Phi }=\mathbf{\partial }_{\mu }\mathbf{\Phi }+[\mathbf{A}_{\mu },\mathbf{\Phi }]$ and with the
connection $\mathbf{A}_{\mu }$ used to define the Y-M curvature tensor
$\mathbf{F}_{\mu \nu }=\mathbf{\partial }_{\mu }\mathbf{A}_{\nu }-\mathbf{\partial }_{\nu }\mathbf{A}_{\mu }+\mathbf{[A}_{\mu },\mathbf{A}_{\nu }],$
provided that $\mathbf{\Phi }=\Phi ^{a}t^{a}$, $\mathbf{A}_{\mu }=A_{\mu
}^{a}t^{a},$ and $[t^{a},t^{b}]=-2\varepsilon _{abc}t^{c}.$ Now the
self-duality condition $\mathbf{F}=\ast \mathbf{F}$ can be equivalently
rewritten as $\mathbf{F}_{ij}=-\varepsilon _{ijk}\mathbf{F}_{k0}$ with the
indices $i,j,k$ running over 1,2,3. Incidentally, in the temporal gauge this
result is equivalent to Floer's Eq.(4.6). Consider now the limit $\lambda
\rightarrow 0$ in Eq.(4.18). In the Minkowski spacetime the field equations
originating from the Y-M-Higgs Lagrangian can be solved by using the
Bogomolny ansatz equations $\mathbf{F}_{ij}=-\varepsilon _{ijk}D_{k}\mathbf{\Phi }$ in which $\mathbf{A}_{0}=0$ (temporal gauge). Instead of
imposing the temporal gauge condition, we can identify the Higgs field
$\mathbf{\Phi }$ with $\mathbf{A}_{0}$ so that the Bogomolny equations now read
as follows:
\begin{equation}
\mathbf{F}_{ij}=-\varepsilon _{ijk}D_{k}\mathbf{A}_{0}. \tag{4.19}
\end{equation}
Bogomolny demonstrated that the Prasad-Sommerfield monopole solution can be
obtained using Eq.(4.19). Thus, any static (that is, time-independent)
solution of the self-duality equations leads to the
Bogomolny-Prasad-Sommerfield (BPS) monopole solution of the Y-M fields,
provided that we interpret the component $\mathbf{A}_{0}$ as the Higgs
field. Suppose now that there is an axial symmetry. Forgacs, Horvath and
Palla [72] (FHP) demonstrated the equivalence of the set of axially symmetric
Bogomolny Eq.s (4.19) to the Ernst equation. The static monopole solution is a
time-independent self-dual gauge field. Because of this time independence,
its four-dimensional action is \textsl{infinite} (because of the
time translational invariance) while that for instantons \textsl{is finite}.
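The divergence is elementary: for a static configuration the four-dimensional action factorizes into the (finite) monopole energy times the infinite extent of the time integration,
\begin{equation*}
S_{4}=\int\limits_{-T/2}^{T/2}dt\,E_{\text{monopole}}=T\,E_{\text{monopole}}\underset{T\rightarrow \infty }{\longrightarrow }\infty ,
\end{equation*}
while the instanton is localized in all four directions and its action stays finite.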
Furthermore, the boundary conditions for monopoles and instantons are
different. The infinity problem for monopoles can be cured somehow by
considering the monopole dynamics [68], but this topic at the moment "is
more art than science", e.g. read [68], page 309. For the same reason we
avoid in this section talking about dyons (pseudoparticles having both
electric and magnetic charge). Hence, we would like to conclude our
discussion with a description of more mathematically rigorous treatments. By
doing so we shall establish connections with the results presented in previous
sections.
\subsection{Calorons}
Calorons are instantons on $\mathbf{R}^{3}\times S^{1}.$ From this definition it
follows that, physically, these are just instantons at finite temperature\footnote{This explains the word "caloron".}. Calorons are related to both instantons
on \textbf{R}$^{4}$ (or $S^{4})$ and monopoles on \textbf{R}$^{3}$ (or
$S^{3}).$ Heuristically, the large period calorons are instantons while the
small period calorons are monopoles [73,74]. These results do not yet
account for the fact that both the Y-M action and the self-duality
equations are conformally invariant. Atiyah [75] noticed that the Euclidean
metric can be represented either as
\begin{equation}
ds_{E}^{2}=\left( dx^{1}\right) ^{2}+\left( dx^{2}\right) ^{2}+\left(
dx^{3}\right) ^{2}+\left( dx^{4}\right) ^{2} \tag{4.20a}
\end{equation}
or as
\begin{equation}
ds^{2}=\frac{r^{2}}{R^{2}}[R^{2}(\frac{\left( dx^{1}\right) ^{2}+\left(
dx^{2}\right) ^{2}+\left( dr\right) ^{2}}{r^{2}})+R^{2}d\varphi ^{2}]
\tag{4.20b}
\end{equation}
with $R$ being some constant. The above representation involves polar
$r,\varphi $ coordinates in the $(x^{3},x^{4})$ plane, thus implying some kind
of axial symmetry. Since the self-duality equations are conformally invariant,
the prefactor $\frac{r^{2}}{R^{2}}$ can be dropped so that the Euclidean
space \textbf{R}$^{4}$ becomes conformally equivalent to the product
\textbf{H}$^{3}\times S^{1}.$ For such a manifold the constant scalar curvature of the
hyperbolic 3-space \textbf{H}$^{3}$ is $-1/R^{2}.$ Furthermore, the
remaining term represents the metric on a circle of radius $R$. Following
Ref.[73], let $(x^{1},x^{2},x^{3})$ be coordinates for the hyperbolic ball
model of \textbf{H}$^{3}$ so that the radial coordinate is $\mathcal{R}=\sqrt{\left( x^{1}\right) ^{2}+\left( x^{2}\right) ^{2}+\left( x^{3}\right)
^{2}}$. Let $0\leq \mathcal{R}\leq R,$ and let $\tau $ be a coordinate on
$S^{1}$ with period $\beta ;$ then the metric on \textbf{H}$^{3}\times S^{1}$
can be represented as
\begin{equation}
ds_{H}^{2}=d\tau ^{2}+\Lambda ^{2}(d\mathcal{R}^{2}+\mathcal{R}^{2}d\Omega
^{2}), \tag{4.21a}
\end{equation}
where $\Lambda =(1-\mathcal{R}/R)^{-1}$ and $d\Omega ^{2}$ is the metric on the
2-dimensional sphere. If we introduce an auxiliary coordinate $\mu
=(R/2)\,\mathrm{arctanh}(\mathcal{R}/R)$ and the complex coordinate $z=\mu +i\tau ,$
the above hyperbolic metric can be rewritten as
\begin{equation}
ds_{H}^{2}=d\tau ^{2}+d\mu ^{2}+\Xi ^{2}d\Omega ^{2} \tag{4.21b}
\end{equation}
with $\Xi =(R/2)\sinh (2\mu /R).$ By analogy with the transition from Eq.(4.20a)
to (4.20b) we can proceed as follows. Let $r=\sqrt{\left( y^{1}\right)
^{2}+\left( y^{2}\right) ^{2}+\left( y^{3}\right) ^{2}}$ with $(y^{1},y^{2},y^{3},y^{0})$ being coordinates on \textbf{R}$^{4}.$ By letting
$t=y^{0}$ the Euclidean metric can be written as usual, i.e.
\begin{equation}
ds_{E}^{2}=dt^{2}+dr^{2}+r^{2}d\Omega ^{2} \tag{4.22a}
\end{equation}
so that
\begin{equation}
ds_{H}^{2}=\xi ^{2}ds_{E}^{2}, \tag{4.22b}
\end{equation}
with $\xi =(R/2)[\cosh (2\mu /R)+\cos (2\tau /R)].$ This correspondence
between \textbf{R}$^{4}\smallsetminus \mathbf{R}^{2}$ and \textbf{H}$^{3}\times S^{1}$ is made with the help of the mapping $w=\tanh (z/R)$ (with
$w=r+it$ and $\beta =\pi R).$ Let $\mathcal{M}=$\textbf{H}$^{3}\times S^{1}$
(or \textbf{H}$^{3}\times \mathbf{R}$); then, in view of conformal invariance, we can
rewrite Eq.(4.8) as
\begin{equation}
S[\mathbf{F}]=-\frac{1}{8}\int_{\mathcal{M}}tr(F_{\mu \nu }F^{\mu \nu })\Xi
^{2}d\tau d\mu d\Omega . \tag{4.23}
\end{equation}
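The step from Eq.(4.8) to Eq.(4.23) uses only the conformal invariance of the four-dimensional Y-M action: under $\hat{g}_{\mu \nu }=\xi ^{2}g_{\mu \nu }$ one has
\begin{equation*}
\sqrt{\hat{g}}\,\hat{g}^{\mu \alpha }\hat{g}^{\nu \beta }=\xi ^{4}\sqrt{g}\,\xi ^{-2}g^{\mu \alpha }\,\xi ^{-2}g^{\nu \beta }=\sqrt{g}\,g^{\mu \alpha }g^{\nu \beta },
\end{equation*}
so the action may be computed in any metric of the conformal class; the factor $\Xi ^{2}$ in Eq.(4.23) is just the volume density of the metric, Eq.(4.21b).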
We have to rewrite the winding number, Eq.(4.10), accordingly. Since it is
a topological invariant, this means that the self-duality equations must be
adjusted accordingly. For instance, for the \textsl{hyperbolic calorons}
on \textbf{H}$^{3}\times S^{1}$ the self-duality equation reads
\begin{equation}
F_{0i}=\frac{1}{2\Lambda }\varepsilon _{ijk}F_{jk}. \tag{4.24}
\end{equation}
The action $S[\mathbf{F}]$ is now finite, with $tr(F_{\mu \nu }F^{\mu \nu
})\rightarrow 0$ as $\mu \rightarrow \infty .$ For hyperbolic instantons
we have finite action with $tr(F_{\mu \nu }F^{\mu \nu })\rightarrow 0$ as
$\mu ^{2}+\tau ^{2}\rightarrow \infty .$
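The two-dimensional part of the correspondence between Eq.s (4.21b) and (4.22a) can be verified numerically. The following Python sketch is our addition (it assumes NumPy and checks only the $(\mu ,\tau )\leftrightarrow (r,t)$ sector): it confirms that $|dz/dw|$ for the mapping $w=\tanh (z/R)$ reproduces the conformal factor $\xi $ of Eq.(4.22b).

```python
import numpy as np

R = 2.0
rng = np.random.default_rng(0)

# Sample points z = mu + i*tau and push them through w = tanh(z/R).
mu, tau = rng.uniform(0.1, 1.0, 5), rng.uniform(0.1, 1.0, 5)
z = mu + 1j * tau

# Conformal factor xi = (R/2)[cosh(2 mu/R) + cos(2 tau/R)] from Eq.(4.22b)
xi = (R / 2) * (np.cosh(2 * mu / R) + np.cos(2 * tau / R))

# |dz/dw| = R |cosh(z/R)|^2 should reproduce xi, so that the (mu, tau)
# part of ds_H^2 equals xi^2 times the (t, r) part of ds_E^2.
dz_dw = R * np.abs(np.cosh(z / R)) ** 2
assert np.allclose(dz_dw, xi)
```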
The results just described match nicely with the results by Witten [76] on
Euclidean SU(2) instantons invariant under the action of SO(3) Lie group.
His results will be discussed in detail in the next section. Notice that the
Euclidean metric, Eq.(4.22a), becomes that for \textbf{H}$^{2}\times S^{2}$
if we rewrite it as
\begin{equation}
ds_{E}^{2}=r^{2}(\frac{dt^{2}+dr^{2}}{r^{2}}+d\Omega ^{2}) \tag{4.25a}
\end{equation}
and, as before, we drop the conformal factor $r^{2}$ so that it becomes
\begin{equation}
ds_{H}^{2}=\frac{dt^{2}+dr^{2}}{r^{2}}+d\Omega ^{2}. \tag{4.25b}
\end{equation}
Interestingly enough, the results by Witten initially developed for
\textbf{H}$^{2}\times S^{2}$ can also be used without change for \textbf{H}$
^{3}\times S^{1}$ and \textbf{H}$^{3}\times \mathbf{R}$ since the action of
SO(3) pulls back to these manifolds [73]. \textsl{This fact is of importance
since such an} \textsl{extension makes his results compatible with both
Floer's method of calculations for Y-M fields and with} \textsl{results of
Section 3.} Omitting all details, the action, Eq.(4.23), is reduced to that
known for the two-dimensional Abelian Ginzburg-Landau (G-L) model "living" on
the hyperbolic 2-manifold $X$ coordinatized by $\mu $ and $\tau $ with the
metric
\begin{equation}
ds_{H}^{2}=\frac{d\mu ^{2}+d\tau ^{2}}{\Xi ^{2}}. \tag{4.26}
\end{equation}
Explicitly, such a G-L action functional $S_{G-L}$ is given by [73]
\begin{equation}
S_{G-L}=\frac{\pi }{2}\int\limits_{X}d\tau d\mu \lbrack \Xi ^{2}(\mathbf{
\nabla }\times \mathbf{A})^{2}+2\left\vert (\mathbf{\nabla }+i\mathbf{A})
\mathbf{\phi }\right\vert ^{2}+\Xi ^{-2}(1-\left\vert \mathbf{\phi }
\right\vert ^{2})^{2}] \tag{4.27}
\end{equation}
with $\mathbf{A}$ and $\mathbf{\phi }$ being respectively the Abelian gauge
field and the Higgs field, $\mathbf{\phi }=\phi _{1}+i\phi _{2}$ so that
$\left\vert \mathbf{\phi }\right\vert ^{2}=\phi _{1}^{2}+\phi _{2}^{2}.$ This
functional is obtained upon substitution of the solution of the self-duality
equations into the Y-M action functional, Eq.(4.23). We refer our readers to
the original paper, Ref.[73], for details. In the limit $\beta \rightarrow
\infty $ the above functional coincides with that obtained by Witten [76].
The self-duality equations obtained by Witten describe instantons which lie
along a fixed axis, while Corrigan, Fairlie, 't Hooft, and Wilczek [77]
developed an ansatz (CFtHW ansatz) for the self-duality equations
producing instantons at arbitrary locations. Manton [78] demonstrated that
Witten's and the CFtHW multi-instanton solutions are gauge equivalent, while
Harland [73,74] demonstrated how these instantons and monopoles
can be obtained from calorons in various limits. The obtained results
provide the needed background information for the solution of the gap
problem. This solution is discussed in the next section.
\section{Solution of the gap problem}
\subsection{ Idea of \ the proof}
By cleverly using the symmetry of the problem, Witten [76] reduced the
non-Abelian Y-M action functional to that for the Abelian G-L model "living"
in the hyperbolic plane. This is one of the examples of the Abelian reduction
of QCD discussed in our paper, Ref.[79]. Vortices existing in the G-L model
could be visualized as made of some two-dimensional surfaces (closed
strings) living in the ambient space-time. These are known as Nambu-Goto
strings. Their treatment by Polyakov [80] allowed them to exist in spaces of
higher dimensionality. In order for them to be useful for QCD, Polyakov
suggested modifying the string action by adding an extra (rigidity) term to the
string action functional. By doing so, the problem was created of reproducing
the Polyakov rigid string model from the QCD action functional. The latest proposal
by Polyakov [81] involves consideration of spin chain models while that by
Kondo [82] involves the F-S model derived directly from QCD action
functional. As explained in [79], in the case of scattering
processes of high energy physics one is confronted essentially with the
same combinatorial problems as were encountered at the birth of quantum
mechanics. In Ref.[83] we explained in detail why Heisenberg's
(combinatorial) method of developing the quantum mechanical formalism is
superior to that of Schr\"{o}dinger. In Ref.[74], using these general results,
we demonstrated how the combinatorial analysis of scattering data leads to
spin chain models as microscopic models describing the excitation spectrum of
QCD. Thus, the mass gap problem can be considered as already solved in
principle. Nevertheless, in [94] such a solution is obtained
"externally", just based on the rules of combinatorics. As with quantum
mechanics, where the atomic model is used to test Heisenberg's ideas, there is
a need to reproduce this combinatorial result "internally" by using a
microscopic model of QCD. For this purpose, we shall use the G-L
functional, Eq.(4.27). By analogy with the flat case, we expect that it can
be rewritten in terms of interacting vortices. In the present case, in
view of Eq.(4.26), vortices "live" not in the Euclidean plane but in 3+1
Minkowski space-time. This is easy to understand if we recall the SO(3)$
\rightleftarrows $SU(2) correspondence and take into account the analogous
correspondence between SU(1,1) and SO(2,1).
Within such a picture it is sufficient to look at the evolution dynamics of an
individual vortex. Typically, it is well described by the dynamics of the
continuous Heisenberg spin chain model [84,85] in Euclidean space. In the
present case, this formalism should be extended to Minkowski space and,
eventually, to hyperbolic space (that is, to the case of the Abelian model
discovered by Witten). Details of such an extension are summarized in
Appendix B. After that, the next task lies in connecting these results with
the Ernst equation. In the next subsection we initiate this study.
\subsection{Heisenberg spin chain model and the Ernst equation}
For the sake of space, this subsection is written under the assumption that our
readers are familiar with the book "Hamiltonian methods in the theory of
solitons" [86] (or its equivalent), where all the needed details can be found.
The continuous XXX Heisenberg spin chain is described with the help of the spin
vector\footnote{
In compliance with [86] we suppress the time-dependence.} $\vec{S}
(x)=(S_{1}(x),S_{2}(x),S_{3}(x))$ restricted to live on the unit sphere
$S^{2}:$
\begin{equation}
\vec{S}^{2}(x)=\sum\limits_{i=1}^{3}S_{i}^{2}(x)=1 \tag{5.1}
\end{equation}
while obeying the equation of motion
\begin{equation}
\frac{\partial }{\partial t}\vec{S}=\vec{S}\times \frac{\partial ^{2}}{
\partial x^{2}}\vec{S} \tag{5.2}
\end{equation}
known as the Landau-Lifshitz (L-L) equation. By introducing the matrices $U(
\lambda )$ and $V(\lambda )$ via
\begin{equation}
U(\lambda )=\frac{\lambda }{2i}S\text{, }V(\lambda )=\frac{i\lambda ^{2}}{2}
S+\frac{\lambda }{2}S\frac{\partial }{\partial x}S,\text{ }S=\vec{S}\cdot \vec{\sigma
} \tag{5.3}
\end{equation}
where the $\sigma _{i}$ are the Pauli spin matrices and $\lambda $ is
the spectral parameter, and requiring that $S^{2}=I,$ where $I$ is the unit
matrix, the zero curvature condition
\begin{equation}
\frac{\partial }{\partial t}U-\frac{\partial }{\partial x}V+[U,V]=0
\tag{5.4}
\end{equation}
is obtained. With account of the constraint $S^{2}=I$ it can be converted
into the equation
\begin{equation}
\frac{\partial }{\partial t}S=\frac{1}{2i}[S,\frac{\partial ^{2}}{\partial
x^{2}}S] \tag{5.5}
\end{equation}
equivalent to Eq.(5.2). The correspondence between Eq.s (5.2) and (5.5) can
be made for $S(x,t)$ matrices of arbitrary dimension.
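The equivalence of Eq.s (5.2) and (5.5) rests on the Pauli-matrix identity $[\vec{a}\cdot \vec{\sigma},\vec{b}\cdot \vec{\sigma}]=2i(\vec{a}\times \vec{b})\cdot \vec{\sigma}$. The following Python check is our addition (it assumes NumPy; the vector $u$ stands in for $\partial _{x}^{2}\vec{S}$ at a point):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = np.array([sx, sy, sz])

rng = np.random.default_rng(1)
s, u = rng.normal(size=3), rng.normal(size=3)  # s ~ S, u ~ d^2 S/dx^2

S = np.einsum("i,ijk->jk", s, pauli)
U = np.einsum("i,ijk->jk", u, pauli)

# (1/2i)[S, U] should equal (s x u).sigma, i.e. Eq.(5.5) reproduces Eq.(5.2)
lhs = (S @ U - U @ S) / (2j)
rhs = np.einsum("i,ijk->jk", np.cross(s, u), pauli)
assert np.allclose(lhs, rhs)
```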
Having in mind Witten's result [76], we now want to extend these Euclidean
results to the case of the noncompact Heisenberg spin chain model "living"
either in Minkowski or hyperbolic space. In doing so we follow, in part,
Ref.[56] and Appendix B. For this purpose we need to remind our
readers of some facts about the Lie group SU(1,1). Since this group is
related to SO(2,1), very much like SU(2) is related to SO(3), we can proceed
by employing the noticed analogy. In particular, since $S=\vec{S}\cdot
\vec{\sigma},$ we can preserve this relation by writing now $S=\vec{S}\cdot
\vec{\tau}.$ Using this result we obtain
\begin{equation}
S=\left(
\begin{array}{cc}
S^{z} & iS^{-} \\
iS^{+} & -S^{z}
\end{array}
\right) \in su(1,1),\text{ }S^{\pm }=S^{x}\pm iS^{y}, \tag{5.6}
\end{equation}
where the form of the matrices generating the su(1,1) Lie algebra is similar
to that of the Pauli matrices. This time, however, $\det S=-1$ even though
$S^{2}=I $. Explicitly, $\left( S^{z}\right) ^{2}-\left( S^{x}\right)
^{2}-\left( S^{y}\right) ^{2}=1,$ that is, the motion takes place on the
unit pseudosphere $S^{1,1}.$ The matrices $\tau _{i}$ generating $su(1,1)$
are fully characterized by the following two properties
\begin{equation}
tr(\tau _{\alpha }\tau _{\beta })=2g_{\alpha \beta }\text{ , }[\tau _{\alpha
},\tau _{\beta }]=2if_{\alpha \beta \gamma }\tau _{\gamma }\text{; }
g_{\alpha \beta }=diag(-1,-1,1);\ \alpha ,\beta ,\gamma =1,2,3 \tag{5.7}
\end{equation}
with $f_{\alpha \beta \gamma }$ being the structure constants of the $su(1,1)$
algebra. An analog of the equation of motion, Eq.(5.5), now reads
\begin{equation}
\frac{\partial }{\partial t}S^{\alpha }=\sum\limits_{\beta ,\gamma
}f^{\alpha \beta \gamma }S_{\beta }\frac{\partial ^{2}}{\partial x^{2}}
S_{\gamma }. \tag{5.8}
\end{equation}
If we define the Poisson brackets as $\{S^{\alpha }(x),S^{\beta
}(y)\}=-f^{\alpha \beta \gamma }S^{\gamma }(x)\delta (x-y),$ then the above
equation of motion can be rewritten in the Hamiltonian form
\begin{equation}
\frac{\partial }{\partial t}S^{\alpha }=\{H,S^{\alpha }\}, \tag{5.9}
\end{equation}
provided that the Hamiltonian $H$ is given by
\begin{equation}
H=\frac{1}{2}\int\limits_{-\infty }^{\infty }dx\left( \nabla _{x}S^{\alpha
}\right) g_{\alpha \beta }\left( \nabla _{x}S^{\beta }\right) \equiv \frac{1
}{4}tr\int\limits_{-\infty }^{\infty }dx\left( \nabla _{x}S\right) ^{2}.
\tag{5.10}
\end{equation}
Since the motion now takes place on the pseudosphere $\check{S}^{2}$, it is
convenient to introduce pseudospherical coordinates by analogy with the
spherical ones, e.g.
\begin{equation}
S^{x}(x,t)=\sinh \theta (x,t)\cos \varphi (x,t),\ S^{y}(x,t)=\sinh
\theta (x,t)\sin \varphi (x,t),\ S^{z}(x,t)=\cosh \theta (x,t). \tag{5.11}
\end{equation}
Also, by analogy with the spherical case, we can use the stereographic
projection from the pseudosphere to the hyperbolic plane. Recall [102] that in
the case of the sphere $S^{2}$ the inverse stereographic projection from the
complex plane $\mathbf{C}$ to $S^{2}$ is given by
\begin{equation}
S^{+}=\frac{2z}{1+\left\vert z\right\vert ^{2}},S^{-}=\frac{2z^{\ast }}{
1+\left\vert z\right\vert ^{2}},S^{z}=\frac{1-\left\vert z\right\vert ^{2}}{
1+\left\vert z\right\vert ^{2}}. \tag{5.12}
\end{equation}
The mapping from \textbf{C} to \textbf{H}$^{2}$ is obtained with the help of
Eq.(5.12) in a straightforward way as
\begin{equation}
S^{+}=\frac{2\xi }{1-\left\vert \xi \right\vert ^{2}},S^{-}=\frac{2\xi
^{\ast }}{1-\left\vert \xi \right\vert ^{2}},S^{z}=\frac{1+\left\vert \xi
\right\vert ^{2}}{1-\left\vert \xi \right\vert ^{2}}. \tag{5.13}
\end{equation}
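Both projections can be checked directly: Eq.(5.12) should land on the unit sphere and Eq.(5.13) on the unit pseudosphere. A minimal Python sketch (our addition, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)

# Sphere: Eq.(5.12), inverse stereographic projection C -> S^2
z = rng.normal() + 1j * rng.normal()
Sp = 2 * z / (1 + abs(z) ** 2)
Sm = 2 * np.conj(z) / (1 + abs(z) ** 2)
Sz = (1 - abs(z) ** 2) / (1 + abs(z) ** 2)
# S^+ S^- = (S^x)^2 + (S^y)^2, so the image lies on the unit sphere
assert np.isclose(Sp * Sm + Sz ** 2, 1.0)

# Pseudosphere: Eq.(5.13), mapping C -> H^2 (take |xi| < 1)
xi = 0.3 + 0.4j
Tp = 2 * xi / (1 - abs(xi) ** 2)
Tm = 2 * np.conj(xi) / (1 - abs(xi) ** 2)
Tz = (1 + abs(xi) ** 2) / (1 - abs(xi) ** 2)
# (S^z)^2 - (S^x)^2 - (S^y)^2 = 1: the image lies on the unit pseudosphere
assert np.isclose(Tz ** 2 - Tp * Tm, 1.0)
```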
Using this correspondence, the equations of motion, Eq.(5.8), rewritten in
terms of the $\xi $ and $\xi ^{\ast }$ variables (while keeping in mind that
they are parametrized by $x$ and $t)$, are given by
\begin{equation}
i\frac{\partial }{\partial t}\xi +\frac{\partial ^{2}}{\partial x^{2}}\xi +
\frac{2\xi ^{\ast }}{1-\left\vert \xi \right\vert ^{2}}\left( \frac{\partial
}{\partial x}\xi \right) ^{2}=0. \tag{5.14}
\end{equation}
In the static ($t$-independent) case the above equation reduces to
\begin{equation}
(\left\vert \xi \right\vert ^{2}-1)\nabla _{x}^{2}\xi =2\xi ^{\ast }\left(
\nabla _{x}\xi \right) ^{2} \tag{5.15}
\end{equation}
easily recognizable as the Ernst equation. In his paper, Ref.[42], Ernst
used a variational principle applied to the functional, Eq.(3.18). From
Appendix B we know that both the L-L equation and its hyperbolic version
describe the motion of a (possibly knotted) vortex filament. Because of this,
the functional, Eq.(3.18), should undergo the same reduction as was made in
going from Eq.(B1.a) to (B1.b). Explicitly, this means that the functional,
Eq.(3.18), should be reduced in such a way that the Hamiltonian, Eq.(5.10),
is replaced by
\begin{equation}
H=-2\int\limits_{-\infty }^{\infty }dx\frac{\left\vert \nabla _{x}\xi
\right\vert ^{2}}{(1-\left\vert \xi \right\vert ^{2})^{2}}, \tag{5.16}
\end{equation}
where the sign in front is chosen in accord with Ref.[87] and our
Eq.(3.22). The Hamiltonian equation of motion, Eq.(5.9), reproducing
Eq.(5.14), can be obtained if the Poisson bracket is defined by $\{\xi
(x),\xi ^{\ast }(y)\}=(1-\left\vert \xi \right\vert ^{2})^{2}\delta (x-y).$
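As a quick sanity check of Eq.(5.15), one may verify that the real profile $\xi (x)=\tanh x$ (a trial solution of our choosing, not taken from Ref.[42]) satisfies the static equation identically. In Python with SymPy:

```python
import sympy as sp

x = sp.symbols("x", real=True)
xi = sp.tanh(x)  # simple real trial profile; our assumption, not from the text

# Static Ernst equation (5.15): (|xi|^2 - 1) xi'' = 2 xi* (xi')^2.
# For a real xi, conjugation is trivial and |xi|^2 = xi^2.
lhs = (xi**2 - 1) * sp.diff(xi, x, 2)
rhs = 2 * xi * sp.diff(xi, x) ** 2
assert sp.simplify(lhs - rhs) == 0
```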
The obtained results set the stage for quantization, which will be discussed
in subsection 5.4. In the meantime, we need to connect the results of Witten's
work, Ref.[76], with those we have just obtained.
\subsection{From Abelian Higgs to Heisenberg spin chain model}
\subsubsection{The Abelian Higgs model}
The work by Witten [76] was further analyzed in the paper by Forgacs
and Manton [88]. The major outcome of their work lies in the demonstration of
the uniqueness of the self-duality ansatz proposed by Witten. The self-duality
equations obtained in Witten's work reduce to a system of three
coupled equations describing the interaction between the Abelian Y-M and Higgs
fields
\begin{equation}
\partial _{0}\varphi _{1}+A_{0}\varphi _{2}=\partial _{1}\varphi
_{2}-A_{1}\varphi _{1}, \tag{5.17a}
\end{equation}
\begin{equation}
\partial _{1}\varphi _{1}+A_{1}\varphi _{2}=-(\partial _{0}\varphi
_{2}-A_{0}\varphi _{1}), \tag{5.17b}
\end{equation}
\begin{equation}
r^{2}(\partial _{0}A_{1}-\partial _{1}A_{0})=1-\varphi _{1}^{2}-\varphi
_{2}^{2}. \tag{5.17c}
\end{equation}
To analyze these equations, we recall that the original self-duality
equations for the Y-M fields are conformally invariant. We can take
advantage of this fact now by temporarily dropping the conformal factor
$r^{2}$ in Eq.(5.17c). Then, the above equations become the Bogomolny
equations for the flat space Abelian Higgs model, i.e. for the model
described by the action functional, Eq.(4.27), with the conformal factor
$\Xi =1$ [89]. The equations thus obtained contain all information about the
Abelian Higgs model and, hence, they are equivalent to this model. It is of
importance for us to demonstrate this explicitly for both Euclidean and
hyperbolic spaces. For this purpose we introduce the covariant derivative
$D_{\mu }=\partial _{\mu }-iA_{\mu }$, $\mu =0,1,$ and the complex field
$\phi =\phi _{1}+i\phi _{2}$. Consider the Bogomolny equation following [68]
\begin{equation}
D_{0}\phi +iD_{1}\phi =0. \tag{5.18}
\end{equation}
Using the above definitions, a straightforward computation reproduces
Eq.s (5.17a,b). These equations can be used to obtain
\begin{equation}
r^{2}\left( D_{0}-iD_{1}\right) (D_{0}+iD_{1})\phi =0 \tag{5.19}
\end{equation}
implying
\begin{equation}
r^{2}(D_{0}D_{0}+D_{1}D_{1})\phi =-ir^{2}[D_{0},D_{1}]\phi =-r^{2}\left(
\partial _{0}A_{1}-\partial _{1}A_{0}\right) \phi =-(1-\varphi
_{1}^{2}-\varphi _{2}^{2})\phi , \tag{5.20}
\end{equation}
where the last equality was obtained with the help of Eq.(5.17c). Evidently,
the equation
\begin{equation}
(D_{0}D_{0}+D_{1}D_{1})\phi +\frac{1}{r^{2}}(1-\varphi _{1}^{2}-\varphi
_{2}^{2})\phi =0 \tag{5.21}
\end{equation}
is one of the equations of "motion" for the G-L model on \textbf{H}$^{2}$,
e.g. see Ref.[89] (Eq.(11.3), page 98). The second is Ampere's equation
\begin{equation}
\varepsilon _{\mu \nu }\partial _{\mu }(r^{2}B)=i(\phi \bar{D}_{\nu }\bar{
\phi}-\bar{\phi}D_{\nu }\phi ) \tag{5.22}
\end{equation}
with the "magnetic field" $B=\partial _{0}A_{1}-\partial _{1}A_{0}.$ Details
of the derivation are given in Ref.[68], pages 198-199. Eq.(5.22) also
coincides with that given in the book by Jaffe and Taubes, Ref.[105]
(Eq.(11.3), page 98).
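The curvature computation used in Eq.(5.20), $[D_{0},D_{1}]\phi =-i(\partial _{0}A_{1}-\partial _{1}A_{0})\phi $, can be confirmed symbolically. A short SymPy sketch (our addition; the symbol names are placeholders):

```python
import sympy as sp

x0, x1 = sp.symbols("x0 x1", real=True)
A0 = sp.Function("A0")(x0, x1)
A1 = sp.Function("A1")(x0, x1)
phi = sp.Function("phi")(x0, x1)

# Covariant derivatives D_mu = d_mu - i A_mu, as in the text
D0 = lambda f: sp.diff(f, x0) - sp.I * A0 * f
D1 = lambda f: sp.diff(f, x1) - sp.I * A1 * f

# [D0, D1] phi = -i (d0 A1 - d1 A0) phi, the field-strength term of Eq.(5.20)
comm = sp.expand(D0(D1(phi)) - D1(D0(phi)))
expected = sp.expand(-sp.I * (sp.diff(A1, x0) - sp.diff(A0, x1)) * phi)
assert sp.simplify(comm - expected) == 0
```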
\textbf{Corollary 1}. \textsl{Since both equations can be obtained by
minimization of the functional,} \textsl{Eq.}(4.27)\textsl{, they are
equivalent to the Abelian Higgs model which, in turn, is the reduced form of
the Y-M functional for pure gauge fields.}
We continue with the discussion of Witten's treatment of Eq.s (5.17),
since we shall need his results later on. First, he selects the physically
convenient gauge condition $\partial _{\mu }A_{\mu }=0.$ This leads to
the choice $A_{\mu }=\varepsilon _{\mu \nu }\partial _{\nu }\psi $ (for
some scalar $\psi )$. With such a choice for $A_{\mu }$, the first two of
Eq.s (5.17) can be rewritten as
\begin{equation}
(\partial _{0}-\partial _{0}\psi )\varphi _{1}=(\partial _{1}-\partial
_{1}\psi )\varphi _{2}, \tag{5.23a}
\end{equation}
\begin{equation}
(\partial _{1}-\partial _{1}\psi )\varphi _{1}=-(\partial _{0}-\partial
_{0}\psi )\varphi _{2}. \tag{5.23b}
\end{equation}
Let now $\varphi _{1}=e^{\psi }\chi _{1}$ and $\varphi _{2}=e^{\psi }\chi
_{2}.$ Then the above equations reduce to the Cauchy-Riemann-type
equations: $\partial _{0}\chi _{1}=\partial _{1}\chi _{2}$ and $\partial
_{1}\chi _{1}=-\partial _{0}\chi _{2}.$ Introduce the function $f=\chi
_{1}-i\chi _{2}$. Then, the last of Eq.s (5.17) acquires the form
\begin{equation}
-r^{2}\nabla ^{2}\psi =1-ff^{\ast }e^{2\psi }. \tag{5.24}
\end{equation}
Notice that $-r^{2}\nabla ^{2}=-r^{2}(\frac{\partial ^{2}}{\partial t^{2}}+
\frac{\partial ^{2}}{\partial r^{2}})$ is the hyperbolic Laplacian [90].
Eq.(5.24) is still gauge invariant in the sense that by changing
$f\rightarrow fh$ and $\psi \rightarrow \psi -\frac{1}{2}\ln (hh^{\ast })$ in
this equation we observe that it preserves its original form. This is so
because $\nabla ^{2}\ln (hh^{\ast })=0$ for any analytic function which does
not have zeros. If $h$ does have zeros for $r>0$, then substitution of $\psi
\rightarrow \psi -\frac{1}{2}\ln (hh^{\ast })$ into Eq.(5.24) produces
isolated singularities at these zeros. After these remarks, Eq.(5.24) can be
simplified further. For this purpose, let $\psi =\ln r-\frac{1}{2}\ln
(ff^{\ast })+\rho $, provided that $\nabla ^{2}\ln (ff^{\ast })=0$ for any
analytic function $f$ which does not have zeros\footnote{
In case it does, the treatment is still possible, as explained by
Witten. Following his work, we shall temporarily ignore this option.}. Under
such conditions we end up with the Liouville equation
\begin{equation}
\nabla ^{2}\rho =e^{2\rho }. \tag{5.25}
\end{equation}
It is of major importance for what follows.
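A standard solution of Eq.(5.25) is the conformal factor of the Poincar\'{e} disk, $\rho =\ln [2/(1-x^{2}-y^{2})]$ for $x^{2}+y^{2}<1$; this choice of check is ours, not Witten's. A SymPy verification:

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
r2 = x**2 + y**2

# Candidate solution of the Liouville equation (5.25): the conformal factor
# of the Poincare-disk metric, rho = ln(2/(1 - x^2 - y^2)), valid for r < 1.
rho = sp.log(2 / (1 - r2))

lap = sp.diff(rho, x, 2) + sp.diff(rho, y, 2)  # flat Laplacian of rho
assert sp.simplify(lap - sp.exp(2 * rho)) == 0
```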
\subsubsection{The Heisenberg spin chain model}
The results of Appendix B imply that the L-L Eq.(5.2) (or its hyperbolic
equivalent, Eq.(5.8)) can be interpreted in terms of equations for the
Serret-Frenet moving triad. A treatment along these lines suitable for
immediate applications is given in the papers by Lee and Pashaev [91] and
Pashaev [92]. Below we combine their results with those of our work,
Ref.[84], to achieve our goals.
We begin with definitions. A collection of smooth vector fields \textbf{n}$
_{\mu }(x,t)$, $\mu =0,1,2$, forming an orthogonal basis is called the "moving
frame". If $x\in \mathcal{S}$, where $\mathcal{S}$ is some two-dimensional
surface, then let \textbf{n}$_{0}(x,t)$ and \textbf{n}$_{1}(x,t)$ form a
basis for the tangent plane to $\mathcal{S}$ $\forall x\in \mathcal{S}$.
Then, the Gauss map (that is, the map from $\mathcal{S}$ to the two-dimensional
sphere $S^{2}$ or pseudosphere $S^{1,1})$ is given by \textbf{n}$_{2}(x,t)
\equiv \mathbf{s}$. By design, it should obey Eq.(5.1). This observation
provides the needed link between the spin and the moving frame vectors.
Details are given in [91,92] and Appendix B\textsl{. }It should be clear that,
since one can draw curves on surfaces, both formalisms should involve the
same elements. The restriction for the curve to lie on the surface causes
additional complications, which are, however, nonessential in the present case.
Next, we introduce the combinations $\mathbf{n}_{\pm }=\mathbf{n}_{0}\pm
\mathbf{n}_{1}$ possessing the following properties
\begin{equation}
(\mathbf{n}_{+},\mathbf{n}_{+})=(\mathbf{n}_{-},\mathbf{n}_{-})=0\text{ , }(
\mathbf{n}_{+}\text{,}\mathbf{n}_{-})=2/\kappa ^{2}, \tag{5.26}
\end{equation}
where $\kappa ^{2}=1$ for $S^{2}$ and $\kappa ^{2}=-1$ for $S^{1,1}$ and
\textbf{H}$^{2}.$ Furthermore, $(..,..)$ denotes the scalar product (in
Euclidean or pseudo-Euclidean space). Also,
\begin{equation}
\mathbf{n}_{+}\times \mathbf{s=}i\mathbf{n}_{+},\mathbf{n}_{-}\times \mathbf{
s=-}i\mathbf{n}_{-},\mathbf{n}_{-}\times \mathbf{n}_{+}\mathbf{=}2i\kappa
^{2}\mathbf{s.} \tag{5.27}
\end{equation}
In addition, we shall use the vectors
\begin{equation}
q_{\mu }=\frac{\kappa ^{2}}{2}(\partial _{\mu }\mathbf{s},\mathbf{n}_{+})
\text{ and }\bar{q}_{\mu }=\frac{\kappa ^{2}}{2}(\partial _{\mu }\mathbf{s},
\mathbf{n}_{-}) \tag{5.28}
\end{equation}
in terms of which the equations of motion for the moving frame vectors look
as follows
\begin{equation}
D_{\mu }\mathbf{n}_{+}=-2\kappa ^{2}q_{\mu }\mathbf{s,} \tag{5.29}
\end{equation}
\begin{equation}
\partial _{\mu }\mathbf{s=}q_{\mu }\mathbf{n}_{-}+\bar{q}_{\mu }\mathbf{n}
_{+}, \tag{5.30}
\end{equation}
with the covariant derivative $D_{\mu }=\partial _{\mu }-\frac{i}{2}V_{\mu }$
and $V_{\mu }=-2\kappa ^{2}(\mathbf{n}_{1},\partial _{\mu }\mathbf{n}_{0}).$
Consider now Eq.(5.30) for $\mu =1.$ Apply to it the operator $\partial _{1}$
and use the equations of motion and \ the definitions just introduced in
order to obtain
\begin{equation}
\partial _{1}^{2}\mathbf{s=}\left( D_{1}q_{1}\right) \mathbf{n}_{-}+\left(
\bar{D}_{1}\bar{q}_{1}\right) \mathbf{n}_{+}-\frac{4}{\kappa ^{2}}\left\vert
q_{1}\right\vert ^{2}\mathbf{s.} \tag{5.31}
\end{equation}
It can be shown that $q_{0}=iD_{1}q_{1}.$ In view of this, Eq.(5.30) for
$\mu =0$ acquires the following form
\begin{equation}
\partial _{0}\mathbf{s=}iD_{1}q_{1}\mathbf{n}_{-}-i\bar{D}_{1}\bar{q}_{1}
\mathbf{n}_{+}. \tag{5.32}
\end{equation}
This equation happens to be of major importance because of the following.
Multiply (from the left) Eq.(5.31) by $\mathbf{s}\times $ and use
Eq.s (5.27). Then (depending on the signature of $\kappa ^{2})$ the result
obtained is equivalent to the L-L Eq.(5.2) or its pseudo-Euclidean version,
Eq.(5.8). Furthermore, for this to happen the fields $V_{\mu }$ and $q_{\mu
} $ must be subject to the following constraint equations, obtainable
directly from Eq.s (5.29) and (5.30):
\begin{equation}
D_{\mu }q_{\nu }=D_{\nu }q_{\mu }, \tag{5.33a}
\end{equation}
\begin{equation}
\lbrack D_{\mu },D_{\nu }]=-2\kappa ^{2}(\bar{q}_{\mu }q_{\nu }-\bar{q}_{\nu
}q_{\mu }). \tag{5.33b}
\end{equation}
We now demonstrate that these equations are equivalent to
Eq.s (5.17) obtained by Witten.
We begin with the following observation. Let the indices $\mu $ and $\nu $ be
respectively 1 and 0. Then, taking into account that $q_{0}=iD_{1}q_{1}$, we
can rewrite Eq.(5.33b) as
\begin{equation}
F_{10}=B_{1}=-2\kappa ^{2}i(\bar{q}_{1}D_{1}q_{1}-q_{1}\bar{D}_{1}\bar{q}
_{1}). \tag{5.34}
\end{equation}
Surely, by symmetry, we could equally well use $q_{1}=-iD_{0}q_{0}$. This would
give us an equation similar to Eq.(5.34). Take now the case $\kappa ^{2}=-1$
(that is, consider $S^{1,1}$) in these equations and compare them with
Ampere's law, Eq.(5.22). We notice that these equations are not the same.
However, since the G-L model was originally designed for a phenomenological
(thermodynamical) description of superconductivity (as explained in detail
in our work, Ref.[84]), we know that the underlying equations (obtainable
from the G-L functional) contain the London equation, which reads\footnote{
This is not the form of the London equation one can find in textbooks. But
in our work, Ref.[84], we demonstrated that Eq.(5.35) is equivalent to the
London equation.}
\begin{equation}
\mathbf{\nabla }\times \mathbf{B}=C\mathbf{B} \tag{5.35}
\end{equation}
with $C$ being some constant (determined by physical considerations).
Evidently, in view of the London equation (5.35), Eq.s (5.22) and (5.34)
become equivalent. Consider now Eq.(5.33a). To understand this equation
better, it is useful to rewrite Eq.(5.18) as follows
\begin{equation}
D_{0}\phi =-iD_{1}\phi \text{ or }D_{0}\phi _{1}=D_{1}\phi _{0}, \tag{5.36}
\end{equation}
where $\phi _{1}=\phi $ and $\phi _{0}=-i\phi .$ Take into account now that
$\phi =a+ib$ and identify $\phi _{1}$ with $q_{1}$ and $\phi _{0}$ with
$q_{0}.$ Then, Eq.(5.33b) acquires the following form $(\kappa ^{2}=-1):$
\begin{equation}
(\partial _{0}V_{1}-\partial _{1}V_{0})=-i4(\bar{\phi}_{0}\phi _{1}-\phi _{0}
\bar{\phi}_{1})=-4(a^{2}+b^{2}). \tag{5.37}
\end{equation}
Looking at Eq.(5.17c), we can make the following identifications:
$V_{1}=A_{1},V_{0}=A_{0},\pm 2a=\varphi _{1},\pm 2b=\varphi _{2}.$ Then,
comparison between Eq.s (5.17c) and (5.37) indicates that we are still
missing a factor of $r^{2}$ on the l.h.s. and 1 on the r.h.s. Looking at
Witten's derivation of the Liouville Eq.(5.25), we realize that these two
factors are interdependent. By a clever choice of the function $\psi $ they
can be made to disappear. This makes physical sense since locally the
underlying surface is almost flat. This observation makes Eq.s (5.37) and
(5.24) (or (5.17c)) equivalent.
\textbf{Corollary 2}.\textsl{\ The L-L and 2-dimensional G-L models are
essentially} \textsl{equivalent in the sense just described, both in
Euclidean and in Minkowski spaces}.
\textbf{Corollary 3}.\ \textsl{The "hyperbolic" L-L Eq.}(5.14)\textsl{\ or
its Euclidean analog should be identified with Floer's Eq.}(4.6)\textsl{.}
These results play an important role in the rest of this work and, in
particular, in the next subsection.
\subsection{The proof (implementation)}
\subsubsection{General remarks}
In Ref.[79], we demonstrated how the treatment of combinatorial data
associated with real scattering experiments leads to the restoration of the
underlying quantum mechanical model reproducing the meson spectrum. It was
established that the underlying microscopic model is the Richardson-Gaudin
(R-G) XXX spin chain model, originally designed for the description of the
spectrum of excitations in the Bardeen-Cooper-Schrieffer (BCS) model of
superconductivity. Subsequently, the same model was used for the description
of the spectra of atomic nuclei. Since the energy spectrum of the BCS model
has the famous gap between the ground and the first excited state, the
problem emerges:
\textsc{Can spectral properties of nonperturbative quantum Y-M field theory
be described by the R-G model}?
To answer this question affirmatively, the "equivalence principle" discovered
by L. Witten is very helpful. Using it, we can proceed with the quantization of
pure Y-M fields by using the results of Korotkin and Nicolai, Ref.[31], for
gravity. By comparing the main results of our paper, Ref.[79], done for QCD,
with those of Ref.[31], done for gravity, we found complete agreement.
In particular, the Knizhnik-Zamolodchikov Eq.s (4.14),(4.15) and the R-G
Eq.(4.29) of Ref.[79] coincide respectively with Eq.s (4.27),(4.26) and
(4.50) of Ref.[31], even though the methods of deriving these equations are
entirely different! Neither Ref.[79] nor Ref.[31] reveals the underlying
physics sufficiently deeply, though. In the remainder of this section we
shall explain why this is indeed so and demonstrate ways in which this
deficiency can be corrected. Experimentally, the challenge lies in designing
scattering experiments providing clean information about the spectrum of
glueballs. Thus far this task has been accomplished only in lattice
calculations done for an unphysically large number of colors, e.g.
$N_{c}\rightarrow \infty $ [23].
When it comes to interpreting \textsl{real} experiments (always having only
three colors to consider\footnote{
E.g. read Section 6\textbf{.}}), the situation is even less clear, e.g. see
Ref.[93]. Hence, the gap problem is full of challenges for both theory and
experiment. Fortunately, at least theoretically, the problem does admit a
physically meaningful solution, as we have already explained. We continue
with ramifications in the next subsection.
\subsubsection{From Landau-Lifshitz to Gross-Pitaevskii equation via
Hashimoto map}
Since the F-S model is believed to be capable of describing QCD vacua and
is also capable of describing knotted/linked structures [17], two
questions arise: a) Is this the only model capable of describing QCD vacua?
b) To what extent does it matter that the F-S model is also capable of
describing knots and links? The negative answer to the first question
follows from Corollary 3, implying that, in principle, both the Euclidean and
hyperbolic versions of the L-L equation are capable of describing QCD vacua:
different vacua correspond to different steady-state solutions of the L-L
equations. The negative answer to the second question can be found in a
review, Ref.[85], by Annalisa Calini. From this reference it follows that,
besides the F-S model, knotted/linked structures can also be obtained by
using the standard (that is, Euclidean) L-L equation, e.g. see Eq.(B.4) of
Appendix B. This fact still does not explain why knots/links are of
importance for QCD. We address the above issues in more detail in Section 6.
In view of what was said above, whether or not the hyperbolic version of the
L-L equation is capable of describing knotted structures is not immediately
important for us. Far more important is the connection between the
hyperbolic L-L and the Ernst equation. Only with this connection is it
possible to reproduce the results of Korotkin and Nicolai [31].
Eq.(3.19) is just the F-S functional without the winding number term. When the
commutation relations for su(1,1) introduced in subsection 5.2 are
replaced by those for su(2), this leads to the standard L-L equation (instead
of Eq.(5.14)). This replacement causes us to abandon the connection with the
Ernst equation and, ultimately, with the results of Ref.[31]. In such a case
the gap problem should be investigated from scratch. In Ref.[25] Faddeev and
Niemi indicated that, unless some amendments to the F-S model are made, it
is gapless. At the same time, from Appendix B it is known that the L-L
equation associated with the F-S model can be transformed into the NLSE
with the help of the Hashimoto map. Recently, Ding [94] and Ding and Inoguchi
[95] were able to find analogs of the Hashimoto map for vortex
filaments in hyperbolic, de Sitter and anti-de Sitter spaces. It is helpful
to describe their findings using terminology familiar from the physics
literature [96]. This leads us to a discussion of the properties of the
Gross-Pitaevskii equation, known in mathematics as the NLSE. In the system of
units in which $\hslash =1$ and $m=1/2$ this equation can be written as [86]
\begin{equation}
i\psi _{t}=-\psi _{xx}+2\kappa \left( \left\vert \psi \right\vert
^{2}-c^{2}\right) \psi . \tag{5.38}
\end{equation}
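For the repulsive sign $\kappa >0$, the static version of Eq.(5.38) requires $-\psi ^{\prime \prime }+2\kappa (|\psi |^{2}-c^{2})\psi =0$, which admits the familiar dark-soliton profile $\psi (x)=c\tanh (cx)$. The following Python sketch (our addition, assuming NumPy and taking $\kappa =1$; the profile is a standard choice, not quoted from Ref.s [97,98]) verifies this:

```python
import numpy as np

# Static, repulsive case (kappa = 1): check that psi(x) = c*tanh(c*x)
# satisfies  -psi'' + 2(|psi|^2 - c^2) psi = 0  pointwise.
c = 1.7
x = np.linspace(-5, 5, 2001)
psi = c * np.tanh(c * x)

# Second derivative of c*tanh(c*x) in closed form
psi_xx = -2 * c**3 * np.tanh(c * x) / np.cosh(c * x) ** 2

residual = -psi_xx + 2 * (psi**2 - c**2) * psi
assert np.max(np.abs(residual)) < 1e-12
```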
Zakharov and Shabat [97,98] performed detailed investigation of this
equation for both positive and negative values of the coupling constant
\kappa .$ For $\kappa <0$ the above equation is used for description of
knots/links [85]. The standard Hashimoto map brings the L-L equation
associated with the truncated F-S model to the NLSE with $\kappa <0$ [94,
95]. \ From the same references it can be found that the Hashimoto-like map
brings the (hyperbolic) L-L-like equation to the NLSE for which $\kappa >0.$
Zakharov and Shabat studied in detail differences in treatments of the NLSE
for both negative and positive coupling constants. This difference is caused
by the difference in the underlying physics which in both cases can be explained in
terms of the properties of a non-ideal Bose gas [99,100]. The attentive reader
might have noticed at this point that Eq.(5.38) apparently contains no
information about the number of particles in such a gas. This parameter, in
fact, is hidden in the constant $c$ (the chemical potential) or it can be
obtained self-consistently with help of Eq.(5.38) (from which $c$ is removed
in a way described in Appendix B) as explained in Ref.[100]. With this
information at our disposal we are ready to make the next step.
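Before moving on, it is instructive to recall the simplest $\kappa >0$ solution relevant for the repulsive gas: the static dark soliton $\psi (x)=c\tanh (\sqrt{\kappa }\,cx)$, which satisfies the time-independent form of Eq.(5.38), $\psi _{xx}=2\kappa (\left\vert \psi \right\vert ^{2}-c^{2})\psi $. The following minimal numerical sketch verifies this by finite differences (the values $\kappa =c=1$ are our own test choices, not taken from the references):

```python
import math

kappa, c = 1.0, 1.0            # defocusing coupling and background amplitude (test values)
a = c * math.sqrt(kappa)       # inverse soliton width fixed by the equation

def psi(x):
    # static dark-soliton profile interpolating between -c and +c
    return c * math.tanh(a * x)

h = 1e-3                       # finite-difference step

def residual(x):
    # central-difference second derivative minus the r.h.s. of the
    # stationary equation psi_xx = 2*kappa*(psi^2 - c^2)*psi
    psi_xx = (psi(x + h) - 2.0 * psi(x) + psi(x - h)) / h**2
    return psi_xx - 2.0 * kappa * (psi(x) ** 2 - c ** 2) * psi(x)

max_res = max(abs(residual(0.1 * n)) for n in range(-50, 51))
print(max_res)  # O(h^2), i.e. tiny
```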
\subsubsection{From non-ideal Bose gas to Richardson-Gaudin equations}
Even though statistical mechanics of the 1-d interacting Bose gas was considered
in detail by Lieb and Liniger [101], the solid state physics literature is full
of refinements of their results up to the moment of this writing. These
refinements have been inspired by experimental and theoretical advancements
in the theory of Bose condensation [96]. Among this literature we selected
Ref.s[102,103] as the most relevant to our needs.
Following [102], the Hamiltonian \ for $N$ interacting bosons moving on the
circle of length $L$ is given by
\begin{equation}
H=-\sum\limits_{i=1}^{N}\frac{\partial ^{2}}{\partial x_{i}^{2}}+2\check{c}\sum\limits_{1\leq i<j\leq N}\delta (x_{i}-x_{j}) \tag{5.39}
\end{equation}
with constant $2\check{c}$ coinciding with $2\kappa $ in the system of units
$\hslash =1$ and $m=1/2.$ The case $\check{c}>0$ (repulsive Bose gas)
corresponding to the L-L equation in the hyperbolic plane/space happens to
be of immediate relevance. \textsl{Only for this case is it possible to
establish the connection with the work by Korotkin and Nicolai} [31]!
We begin by noticing that in the standard BCS theory of superconductivity
electrons are paired into singlets (Cooper pairs) with zero centre of mass
momentum. The pairing interaction term in this theory accounts only for
pairs of attractive electrons with opposite spin and momenta so that the
degeneracy for each energy state is a doublet, with level degeneracy $\Omega
=2$\footnote{We use here the same notations as in our work, Ref.[\textbf{94}].}. In the
interacting \textsl{repulsive} Bose gas model by Richardson [104], to mimic
this pairing, he coupled two bosons with opposite momenta $\pm k_{j}$ into
one (pseudo) Cooper pair. An assembly of pairs formed in this way constitutes a
\textsl{repulsive} Bose gas which in the simplest case is described by the
Hamiltonian, Eq.(5.39). \textsl{Hence, the fermionic BCS-type model with
strong attractive pairing interaction can be mapped onto the repulsive
bosonic model proposed by Richardson.} Although the idea of such a mapping
looks very convincing, its actual implementation in Ref.[102] has some
flaws. Because of this, we shall use results of this reference selectively.
For this purpose, first of all we need to make an explicit connection between
the repulsive Bose gas model described by Eq.(5.39) and the model proposed
by Richardson. In the weak coupling limit $\check{c}L\ll 1$ the Bethe ansatz
equations for the repulsive Bose gas model described by the Hamiltonian,
Eq.(5.39), acquire the following form
\begin{equation}
k_{i}=\frac{2\pi d_{i}}{L}+\frac{2\check{c}}{L}\sum\limits_{\substack{ j=1
\\ \left( j\neq i\right) }}^{N}\frac{1}{k_{j}-k_{i}},i=1,...,N. \tag{5.40}
\end{equation}
Here $d_{i}=0,\pm 1,\pm 2,...$ denote the excited states for fixed $N$. To
link this result with Richardson's (repulsive boson) model, consider the
case of an even number of bosons and set $N=2M$. Next, consider the ground
state of this model first. To the first order in $\check{c}$, it is clear
that we can write $k_{i}=\pm \sqrt{E_{i}}$. Specifically, let $k_{1,2}=\pm
\sqrt{E_{1}},k_{3,4}=\pm \sqrt{E_{2}},...,k_{2M-1,2M}=\pm \sqrt{E_{M}}.$
Using these results in Eq.(5.40), with the accuracy just stated, the Bethe
ansatz equations after some algebra are converted into the following form:
\begin{equation}
\frac{L}{2\check{c}}+\sum\limits_{\substack{ j=1 \\ \left( j\neq i\right) }}^{\tilde{M}}\frac{2}{E_{j}-E_{i}}=\frac{1}{2E_{i}},\ i=1,...,M;\text{ }\tilde{M}\leq M. \tag{5.41}
\end{equation}
To analyze these equations, we expect that our readers\ are familiar with
works of both Richardson-Sherman, Ref.[105], and Richardson, Ref.[104]. In
[105] diagonalization of the pairing force Hamiltonian describing the
BCS-type superconductivity was made. Such a Hamiltonian is given by
\begin{equation}
H=\sum\limits_{f}2\varepsilon _{f}\hat{N}_{f}-g\sum\nolimits_{f}^{\prime
}\sum\nolimits_{f^{\prime }}^{\prime }b_{f}^{\dag }b_{f^{\prime }},
\tag{5.42}
\end{equation}
where $\hat{N}_{f}=\frac{1}{2}(a_{f+}^{\dag }a_{f+}+a_{f-}^{\dag
}a_{f-}),b_{f}=a_{f-}a_{f+}$, with $a_{f\sigma }^{\dag }$ and $a_{f\sigma }$
being \textsl{fermion} creation and annihilation operators obeying usual
anticommutation relations $[a_{f\sigma },a_{f^{\prime }\sigma ^{\prime
}}^{\dag }]_{+}=\delta _{\sigma \sigma ^{\prime }}\delta _{ff^{\prime }}$,
where $\sigma =\pm $ denotes states conjugate under time reversal. The above
Hamiltonian is diagonalized along with the seniority operators (taking care
of the number of unpaired fermions at each level $f$) defined by
\begin{equation}
\hat{\nu}_{f}=a_{f+}^{\dag }a_{f+}-a_{f-}^{\dag }a_{f-}. \tag{5.43}
\end{equation}
By construction, $[H,\hat{N}_{f}]=[H,\hat{\nu}_{f}]=0.$ The classification
of the energy levels is done in such a way that the eigenvalues $\nu _{f}$
of the operator $\hat{\nu}_{f}$ ($0$ and $\sigma $) are appropriate for the
case when $g=0.$ This observation allows us to subdivide the Hamiltonian
into two parts: $H_{1}$, i.e. that which does not contain Cooper pairs (for
which $\nu _{f}=\sigma $) and $H_{2}$, i.e. that which may contain such pairs
(for which $\nu _{f}=0$). The matrix elements of $H_{2}$ are calculated with
help of the \textsl{bosonic-type} commutation relation
\begin{equation}
\lbrack b_{f},\hat{N}_{f^{\prime }}]=\delta _{ff^{\prime }}b_{f}\text{ \ and }[b_{f}\text{ , }b_{f^{\prime }}^{\dag }]=\delta _{ff^{\prime }}(1-2\hat{N}_{f^{\prime }}). \tag{5.44}
\end{equation}
These commutators are bosonic but nontraditional. In the traditional case we
have $[b_{f}$ , $b_{f^{\prime }}^{\dag }]=\delta _{ff^{\prime }}.$ \ We
refer our readers to Ref.[105] for details of how this commutator difficulty
is resolved. In the light of this resolution, in Ref.[104] Richardson
proposed to deal with the interacting bosons model from the beginning.
\textsl{Supposedly, such a bosonic model can be designed to reproduce the
results of the fermionic pairing model of} Ref.[105]. An attempt to do
just this was made in Ref.[102]. In the repulsive boson model by Richardson
the "pairing" Hamiltonian is given by\footnote{
To avoid ambiguities, our coupling constant $\frac{g}{2}$ is chosen exactly
the same as in [104].}
\begin{equation}
H=\sum\limits_{l}2\varepsilon _{l}\hat{n}_{l}+\frac{g}{2}\sum\nolimits_{f}^{\prime }\sum\nolimits_{f^{\prime }}^{\prime }A_{f}^{\dag }A_{f^{\prime }}
\tag{5.45}
\end{equation}
in which $\hat{n}_{l}$ and $A_{f^{\prime }}$ are bosonic analogs of $\hat{N}_{f}$ and $b_{f}.$ It is essential that the sign of the coupling constant $g$
is nonnegative (repulsive bosons). Upon diagonalization, the total energy $E$
is given by
\begin{equation}
E=\sum\limits_{l=1}^{n}\varepsilon _{l}\nu _{l}+\sum\limits_{j=1}^{m}E_{j}
\tag{5.46}
\end{equation}
so that summation in the first sum takes place over the unpaired bosons
while in the second over the paired bosons whose energies $E_{j}$ are
determined from Richardson's equation (Eq.(2.29) of Ref.[104])\footnote{
Since Gaudin's equation is obtained in the limit $\left\vert g\right\vert
\rightarrow \infty $ from Eq.(5.47), the spin-like model described by this
equation is known as the Richardson-Gaudin (R-G) model.}
\begin{equation}
\frac{1}{2g}+\sum\limits_{l=1}^{n}\frac{d_{l}}{2\varepsilon _{l}-E_{k}}+\sum\limits_{\substack{ i=1 \\ i\neq k}}^{m}\frac{2}{E_{i}-E_{k}}=0,\ k=1,...,m \tag{5.47}
\end{equation}
in which $n$ is the total number of single particle (unpaired) levels, $m$
is the total number of pairs, $d_{l}=\frac{1}{2}(2\nu _{l}+\Omega _{l}).$
From [104] it follows that for\ the bosonic model to mimic the BCS-type
pairing model the degeneracy factor $\Omega _{l}=1$ and $\nu _{l}=0.$ It
should be noted though that such an identification is not of much help in
comparing the repulsive bosonic model with the attractive BCS-type fermionic
model (contrary to claims made in Ref.[102]). This can be easily seen by
comparison between Eq.(5.47) (that is Eq.(2.29) of Ref.[104]) with such
chosen $\Omega _{l}$ and $\nu _{l}$ with Eq.(3.24) of Ref.[105]. By
replacing $g$ in Eq.(5.47) by $-g$ we still will not obtain the analog of
the key Eq.(3.24) of Ref.[105]! This fact has group-theoretic origin to be
explained in the next subsection. In the meantime, Eq.(5.47) still can be
used to connect it with Eq.(5.41) originating from a different bosonic model
described by the Hamiltonian, Eq.(5.39). To do so we follow a path
different from that suggested in Ref.[102]. Instead, following the original
Richardson's paper [104], we let $n=1$ in Eq.(5.47); then, without losing
generality, we can put $\varepsilon _{1}=0$ so that Eq.(5.47) acquires the
following form
\begin{equation}
\frac{1}{E_{k}}=\frac{1}{2g}+\sum\limits_{\substack{ i=1 \\ i\neq k}}^{M}\frac{2}{E_{i}-E_{k}},\text{ }k=1,...,M. \tag{5.48}
\end{equation}
The rationale for replacing $m$ by $M$ is given on page 1334 of [104].
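As a quick consistency check of Eq.(5.48) (our own illustration; $g=1$ is a test value), for $M=2$ adding and subtracting the two equations gives $E_{1}+E_{2}=8g$ and $(E_{1}-E_{2})^{2}=4E_{1}E_{2}$, so that $E_{1,2}=(4\mp 2\sqrt{2})g$. These closed-form roots indeed satisfy Eq.(5.48):

```python
import math

g = 1.0                                   # coupling constant (test value)
# closed-form roots of Eq.(5.48) for M = 2 pairs
E = [(4.0 - 2.0 * math.sqrt(2.0)) * g, (4.0 + 2.0 * math.sqrt(2.0)) * g]

def residual(k):
    # l.h.s. minus r.h.s. of Eq.(5.48): 1/E_k - 1/(2g) - sum_{i != k} 2/(E_i - E_k)
    s = sum(2.0 / (E[i] - E[k]) for i in range(len(E)) if i != k)
    return 1.0 / E[k] - 1.0 / (2.0 * g) - s

res = [abs(residual(k)) for k in range(len(E))]
print(res)  # both residuals vanish to machine precision
```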
Evidently, Eq.s (5.41) and (5.48) are identical. This observation allows us
to use the Richardson model instead of that described by Eq.(5.39). \ At
first sight such an identification looks a bit artificial. To convince our
readers that it does make sense, we would like to use the work by Dhar and
Shastry [106,107] on excitation spectrum of the ferromagnetic Heisenberg
spin chain. By analogy with Eq.(5.41) these authors derived a similar
equation obtained by reducing the Bethe ansatz equations for the Heisenberg
ferromagnetic chain. It reads\footnote{
The physical meaning of the constants entering this equation is not important
for us. It is given in Ref.[106].}
\begin{equation}
\frac{1}{E_{l}}=\pi d-\frac{d}{n}\sum\limits_{\substack{ i=1 \\ i\neq l}}\frac{2}{E_{i}-E_{l}}. \tag{5.49}
\end{equation}
Even though Eq.s(5.48) and (5.49) look almost the same, they are not the
same! \ The crucial difference lies in the signs in front of the second term
in the r.h.s. of these equations. Because of this difference Heisenberg's
ferromagnetic spin chain model is mapped onto the Bose gas model with \textsl{
attractive} interaction in complete accord with what was said immediately
after Eq.(5.38). Regrettably, this result is still not the same as for the
BCS-type model \ investigated in Richardson-Sherman's paper, Ref.[105]. This
fact was recognized and discussed in some detail already by Richardson
[104]. For completeness, we mention that the problem of the BCS-Bose-Einstein
condensation (BEC) crossover, which follows exactly the qualitative picture
just described, was made quantitative only very recently in Ref.[108].
Fortunately, it is possible to by-pass this result as explained in the next
subsection.
\subsubsection{From Richardson-Gaudin to Korotkin-Nicolai equations}
In Ref.[109] bosonic and fermionic formalism for pairing models discussed in
the previous subsection was developed. This formalism happens to be the most
helpful for investigation of the gap problem. Indeed, define three operators
$\hat{n}_{l}=\sum\nolimits_{m}a_{lm}^{\dag }a_{lm},$ $A_{l}^{\dag }=\left(
A_{l}\right) ^{\dag }=\sum\nolimits_{m}a_{lm}^{\dag }a_{l\bar{m}}^{\dag }$. They can be used for construction of operators $K_{l}^{0}=\frac{1}{2}\hat{n}_{l}\pm \frac{1}{4}\Omega _{l}$ and $K_{l}^{+}=\frac{1}{2}A_{l}^{\dag }=\left( K_{l}^{-}\right) ^{\dag }$ such that they obey the
following commutator algebra
\begin{equation}
\lbrack K_{l}^{0},K_{l^{\prime }}^{+}]=\delta _{ll^{\prime }}K_{l}^{+},\text{ }[K_{l}^{+},K_{l^{\prime }}^{-}]=\mp 2\delta _{ll^{\prime }}K_{l}^{0}. \tag{5.50}
\end{equation}
In this algebra as well as in the preceding expressions, the upper sign
corresponds to bosons and the lower to fermions. In Ref.[79], we discussed
such an algebra for the fermionic case only, e.g. see Eq.s (4.31) of [79].
These results can be extended now for the bosonic case. In fact, such an
extension is already developed in Ref.[109]. Unlike [79], where we used the
$sl(2,\mathbf{C})$ Lie algebra, only its compact version, that is $su(2)$,
was used in [109] for representing fermions. For the bosonic case the
commutation relations, Eq.(5.50), are those for the $su(1,1)$ Lie algebra.
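For readers who wish to see the bosonic (upper-sign) case of Eq.(5.50) in action, it is just the $su(1,1)$ algebra $[K^{0},K^{\pm }]=\pm K^{\pm }$, $[K^{+},K^{-}]=-2K^{0}$. Below is a minimal check using the standard non-unitary $2\times 2$ representation (our own illustration, not taken from Ref.[109]); the fermionic lower-sign case would use the compact $su(2)$ generators instead:

```python
# 2x2 matrix helpers for checking the commutators of Eq.(5.50)
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def comm(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

# non-unitary 2x2 representation of su(1,1)
K0 = [[0.5, 0.0], [0.0, -0.5]]
Kp = [[0.0, 1.0], [0.0, 0.0]]
Km = [[0.0, 0.0], [-1.0, 0.0]]

print(comm(K0, Kp))  # equals K+
print(comm(Kp, Km))  # equals -2*K0, i.e. [[-1, 0], [0, 1]]
```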
Incidentally, in the paper by Korotkin and Nicolai, Ref.[31], exactly the
same Lie algebra was used. Furthermore, in the same paper it was argued that
it is permissible to replace $su(1,1)$ by $sl(2,\mathbf{R})$ Lie algebra
while constructing the K-Z-type equations, e.g. read p.428 of this
reference. Since in [79] the $sl(2,\mathbf{C})$ Lie algebra \ was used, that
is a complexified version of $sl(2,\mathbf{R}),$ this allows us to use many
results from our work. Thus, in this subsection we shall discuss only those
results of [109] which are absent in our Ref.[79]. In particular, following
this reference the set of Gaudin-like commuting Hamiltonians written in
terms of operators $K_{l}^{0},K_{l}^{+}$ and $K_{l}^{-}$ is given by
\begin{equation}
H_{l}=K_{l}^{0}+2g\{\sum\limits_{l^{\prime }(\neq l)}\frac{X_{ll^{\prime }}}{2}(K_{l}^{+}K_{l^{\prime }}^{-}+K_{l}^{-}K_{l^{\prime }}^{+})\mp
Y_{ll^{\prime }}K_{l}^{0}K_{l^{\prime }}^{0}\}. \tag{5.51}
\end{equation}
Here $X_{ll^{\prime }}=Y_{ll^{\prime }}=(\varepsilon _{l}-\varepsilon
_{l^{\prime }})^{-1}.$ For $g\rightarrow \infty $ the first term can be
ignored and the remainder can be used in the K-Z-type equations. \ The
semiclassical treatment of these equations discussed in detail in [79]
results in the following set of Bethe (or R-G) ansatz equations
\begin{equation}
\sum\limits_{l=1}^{n}\frac{d_{l}}{2\varepsilon _{l}-E_{k}}\pm \sum\limits
_{\substack{ i=1 \\ i\neq k}}^{m}\frac{2}{E_{i}-E_{k}}=0,\text{ }k=1,...,m
\tag{5.52}
\end{equation}
to be compared with Eq.(5.47). Unlike Eq.(5.47), in the present case $d_{l}=$
$\frac{1}{2}(2\nu _{l}\pm \Omega _{l}).$ The bosonic version of Eq.(5.52)
corresponding to the $su(1,1)$ Lie algebra coincides with Eq.(4.50) of the Korotkin
and Nicolai paper, Ref.[31], provided that the following identifications are
made: $d_{l}\rightleftarrows s_{l}$, $2\varepsilon _{l}\rightleftarrows
\gamma _{j}$. Unlike Ref.[31], where Eq.(5.52) was obtained by a standard
mathematical protocol, in this work it is obtained based on the underlying
physics. Because of this, it is appropriate to extend our physics-style
analysis by considering the case of finite $g^{\prime }s.$ Then, Eq.(5.52)
should be replaced by
\begin{equation}
\frac{1}{2g}\pm \sum\limits_{l=1}^{n}\frac{d_{l}}{2\varepsilon _{l}-E_{k}}\pm \sum\limits_{\substack{ i=1 \\ i\neq k}}^{m}\frac{2}{E_{i}-E_{k}}=0,\ k=1,...,m. \tag{5.53}
\end{equation}
In Ref.[31] the gap problem was discussed in detail for the fermionic case
when the coupling constant $g$ is negative (BCS pairing Hamiltonian), e.g.
see Eq.s (4.43)-(4.45) of Ref.[31]. In the present case we are dealing with
the bosonic case for which the coupling constant is positive. Hence the gap
problem should be re-analyzed. For this purpose, it is convenient to
consider both positive and negative coupling constants \ in parallel for
reasons which will become apparent upon reading.
\subsubsection{Emergence of the gap and the gap dilemma}
Eq.s(5.53) cannot be solved without some physical input. \ Initially, such
an input \ was coming from nuclear physics (e.g. read [110-112] for general
information on nuclear physics). \ \ Indeed, Richardson's papers [104,105]
were written having applications to nuclear physics in mind. Given this, the
question arises about the place of the R-G model among other models
describing nuclear spectra and nuclear properties. We need an answer to this
question to finish the proof of the gap's existence in QCD.
Looking at the Gaudin-like Hamiltonian, Eq.(5.51), and comparing it with the
Hamiltonian, Eq.(6), in Ref.[113]\footnote{
Published in 1961!} it is easy to notice that they are almost the same!
Because of this, it becomes possible to transfer the methodology of
Ref.[113] to the present case. Thus, it makes sense to recall briefly the
circumstances under which the gap emerges in nuclear physics.
As is well known, the nuclei are made of protons and neutrons. One can talk
about the number $\mathcal{N}$ of nucleons, the number Z of protons and the
number N of neutrons in a given nucleus. Nuclear and atomic properties
happen to be interrelated. For instance, in analogy with atomic physics one
can think of some effective nuclear potential in which nucleons can move
"independently". This assumption leads to the \textsl{shell model} of
nuclei. Use of Pauli principle guides fillings of shells \ the same way as
it guides these fillings in atomic physics. This leads to emergence of magic
numbers 2, 8, 20, 28, 50, 82 and 126 for either protons or neutrons for the
totally filled shells. Accordingly, the most stable are the doubly magic
(for both protons and neutrons) nuclei. It is of interest to ask what kinds
of excitations are possible in such shell models. The simplest of these is
when some nucleon is moved from a closed shell to an empty shell, thus
forming \textsl{a hole}. When the number of nucleons increases, the question
about the validity of the shell model emerges, again in analogy with atomic
physics. As in atomic physics, one can think about the Hartree-Fock (H-F)
and other many-body computational schemes, including that developed by
Richardson-Sherman and Gaudin. For our purposes, it is sufficient to use
only the Tamm-Dancoff (T-D) approximation to the H-F equations described,
for example, in Ref.[112]. The essence of this approximation lies in
restricting the particle-hole interactions to nucleons lying in the same
shell. \textsl{The T-D approximation is obtainable from the R-G
Eq.s}(5.53)\textsl{\ when the last term (effectively taking care of the Pauli
principle) in these equations is dropped}. The T-D approximation was
successfully applied for description of the giant nuclear dipole resonance
[110-112]. At the classical level the physics of this resonance was
explained in the paper by Goldhaber and Teller [114]. The resonance is
caused by two nuclear vibrational modes: one, when protons and neutrons move
in the opposite directions, and another, when they move in the same
direction. Upon quantization of such a classical model and taking into account
the isotopic spin of nucleons, the truncated Eq.s(5.53) are obtained in
which both signs for the coupling constant are allowed since the nucleon
system is expected to be in two isospin states: $T=1$ and $T=0$\footnote{
This can be easily understood based on the fact that isospin for both
particles and holes is equal to 1/2 [110-112].}. Details of these
calculations are given in Ref.[112], page 221. Solutions of \ the T-D
equations can be obtained graphically in complete analogy with that
described in our work, Ref.[79]. These graphical solutions reflect the
particle-hole duality built into the T-D approximation. Because of this
duality, the magnitude of the gap in both cases should be the same. To
demonstrate this, the \textsl{seniority} scheme described in [110-112] is
helpful. The seniority operator was defined by Eq.(5.43). It determines the
number of unpaired particles in the nuclear system. Since it commutes with
the Hamiltonian, the many-body states can be classified with help of its
eigenvalues $\nu _{f}.$ Suppose at first that all single particle energies
$\varepsilon _{f}$ are the same (that is $\varepsilon _{f}=\varepsilon $) so
that all seniority eigenvalues $\nu _{f}$ are $\nu .$ Let then $\mathcal{N}$
be the total number of nucleons. Thus, the state for which $\nu =0$ contains
only pairs; analogously, the state $\nu =1$ contains just one unpaired
nucleon, the state $\nu =2$ has 2 unpaired nucleons (and $\mathcal{N}$ should be even),
and so on. So, states $\nu =0,\nu =2,\nu =4,...$ can exist only in even
nuclei. For such nuclei the gap is nonzero. To see this, we follow
Refs.[110-112] which we would like now to superimpose with the results of
the Richardson-Sherman paper, Ref.[105]. Specifically, on page 231 of this
reference one can find the following result for the ground state ($\nu =0$)
energy
\begin{equation}
E_{\nu =0}(N)=2N\varepsilon -gN(\Omega -N+1) \tag{5.54}
\end{equation}
where $N$ is the number of pairs. To connect this result with that in
Refs.[110-112], let $N=\mathcal{N}/2$ and consider the difference
\begin{equation}
\mathcal{E}_{\nu =0}(\mathcal{N})=E_{\nu =0}(\mathcal{N}/2)-\mathcal{N}\varepsilon =-\frac{g}{4}\mathcal{N}(2\Omega -\mathcal{N}+2). \tag{5.55}
\end{equation}
The obtained result coincides with Eq.(11.14) of Ref.[112] as required. To
obtain \ states of seniority $\nu =2n$ we use Eq.(3.2) of \ Ref.[105]. It
reads
\begin{equation}
E_{\nu =2n}(N)=2N\varepsilon -g\left( N-n\right) (\Omega -N-n+1),\text{ }n=0,...,N. \tag{5.56}
\end{equation}
Repeating the same steps as in the $\nu =0$ case we obtain,
\begin{equation}
\mathcal{E}_{\nu }(\mathcal{N})=-\frac{g}{4}\left( \mathcal{N}-\nu \right) (2\Omega -\mathcal{N}-\nu +2). \tag{5.57}
\end{equation}
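Explicitly, this repeats the $\nu =0$ computation: substituting $N=\mathcal{N}/2$ and $n=\nu /2$ into Eq.(5.56) and subtracting $\mathcal{N}\varepsilon $ gives
\[
\mathcal{E}_{\nu }(\mathcal{N})=E_{\nu =2n}(\mathcal{N}/2)-\mathcal{N}\varepsilon =-g\left( \frac{\mathcal{N}-\nu }{2}\right) \left( \Omega -\frac{\mathcal{N}+\nu }{2}+1\right) ,
\]
which reduces to Eq.(5.57) upon multiplying out the factors of $1/2$.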
Finally, consider the difference
\begin{equation}
\mathcal{E}_{\nu }(\mathcal{N})-\mathcal{E}_{\nu =0}(\mathcal{N})=\frac{g}{4}\nu (2\Omega -\nu +2). \tag{5.58}
\end{equation}
This result is in accord with Eq.(11.22) of Ref.[112]. Since the obtained
difference is $\mathcal{N}$-independent it can be used both ways: a) for
calculations in the thermodynamic limit $\mathcal{N}\rightarrow \infty $ and
b) for making accurate calculations in the opposite limit of very small
number of nucleons. In the simplest case we should consider only one shell
and the first excited state of seniority 2 for this shell. Initially (the
ground state) we have just one pair while finally (the first excited state)
we have two independent particles occupying single particle levels.
Looking at Eq.s(5.53) and letting there $m=1$ (one pair) we recognize that
the second sum in this set of equations disappears. Thus, by design, we are
left with the T-D approximation. Using Eq.(5.58) for $\nu =2$ we obtain the
following value of the gap $\Delta $:
\begin{equation}
\Delta =\mathcal{E}_{2}(\mathcal{N})-\mathcal{E}_{0}(\mathcal{N})=g\Omega . \tag{5.59a}
\end{equation}
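As a quick numerical cross-check (the values $g=0.7$, $\Omega =12$ below are our own test choices, not taken from the references), the gap computed from Eq.(5.57) indeed equals $g\Omega $ for every admissible even $\mathcal{N}$:

```python
g, Omega = 0.7, 12           # pairing strength and level degeneracy (test values)

def E_seniority(N_tot, nu):
    # Eq.(5.57): E_nu(N) = -(g/4) (N - nu) (2*Omega - N - nu + 2)
    return -(g / 4.0) * (N_tot - nu) * (2 * Omega - N_tot - nu + 2)

# gap Delta = E_2 - E_0 for a range of even nucleon numbers N <= Omega
gaps = [E_seniority(N_tot, 2) - E_seniority(N_tot, 0)
        for N_tot in range(2, Omega + 1, 2)]
print(gaps)  # every entry equals g*Omega, independently of N
```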
Notice that since $\Omega $ is the degeneracy, there could be no more than
$\mathcal{N}=\Omega $ particles at the single particle level. Thus, in
general we should have $\mathcal{N}\leq \Omega .$ Because of the
particle-hole duality, it is permissible to look also at the situation for
which $\mathcal{N}\geq \Omega .$ This is equivalent to changing the sign in
front of the coupling constant. Repeating again all steps leads to the final
result for the gap
\begin{equation}
\Delta =\mathcal{E}_{2}(\mathcal{N})-\mathcal{E}_{0}(\mathcal{N})=\left\vert g\right\vert \Omega . \tag{5.59b}
\end{equation}
It is demonstrated in Ref.s [110-112] that in the limit $\mathcal{N}
\rightarrow \infty ,$ when the continuum approximation (replacing summation
by integration) can be used, leading to a more familiar BCS-type equation for
the gap, the result just obtained survives. Indeed, in Ref.[103] the
BCS-type result is obtained in the continuum approximation for the \textsl{
attractive} Bose gas. In view of the results just obtained, it should be
clear that such a result should hold for both attractive and repulsive Bose
gases. This conclusion is in accord with accurate recent Bethe ansatz
calculations done in Ref.[115] for systems of finite size. Thus, we just
arrived at the issue which we shall call \textsl{the gap dilemma}. While
the results obtained above strongly favor use of the repulsive Bose gas
model (\textsl{not linked with the F-S model}), the results obtained in
this subsection indicate that, after all, the F-S model (\textsl{linked
with the attractive Bose gas model}) can also be used for description of
the ground and excited states of pure Y-M fields. \textsc{The essence of
the dilemma lies in deciding which of these results should actually be
used}.
While the answer is provided in the next section, we are not yet done with
the gap discussion. This is so because the seniority model is applicable
only to the case when all single-particle levels have the same energy. This
is too simplistic. We would like now to discuss a more realistic case.
Before doing so, a few comments are appropriate. In particular, with all the
successes of nuclear physics models, these models are much less convincing
than those in atomic physics. Indeed, all nuclei are made of hadrons which
are made of quarks and gluons. Thus the excitations in nuclei are in fact
the excitations of quark-gluon plasma. This observation qualitatively
explains why the R-G equations work well both in nuclear and particle
physics. Some attempts to look at the processes in nuclear physics from
the standpoint of hadron physics can be found in Refs.[116,117].
Now we can return to the discussion of the T-D equations. Fortunately, a
detailed analytical study of these equations was recently made in Ref.[118].
The same authors extended these results to the case of two pairs in [119].
Since the results obtained in [119] are in qualitative agreement with those
obtained in Ref.[118], we shall focus the attention of our readers only on
the results of Ref.[118]. Thus, we need to find some kind of analytic solution
of the following T-D equation
\begin{equation}
\sum\limits_{i=1}^{L}\frac{\Omega _{i}}{2\varepsilon _{i}-E}=\frac{1}{g}. \tag{5.60}
\end{equation}
For different $\varepsilon _{i}^{\prime }s$ normally it should have \ $L$
eigenvalues $E_{\mu }$ $(1\leq \mu \leq L).$ Since we are interested in
finding the gap, the above equation is written for just one nucleon pair.
Thus the seniority $\nu =0.$ It is of interest to check first what happens
when all $\varepsilon _{i}^{\prime }s$ coalesce. In such a case we obtain,
\begin{equation}
\frac{\Omega }{2\bar{\varepsilon}-E}=\frac{1}{g}, \tag{5.61}
\end{equation}
where $\Omega =\sum\nolimits_{i}\Omega _{i}$ and $\varepsilon _{i}=\bar{\varepsilon}$ $\forall i=1,...,L.$ Eq.(5.61) can be equivalently rewritten
as
\begin{equation}
E_{0}=2\bar{\varepsilon}-\Omega g. \tag{5.62}
\end{equation}
This result for the ground state is in agreement with Eq.(5.54) for $N=1$.
The first excited state is made of one broken pair so that the pairing
disappears and the energy $E_{\nu =2}=2\bar{\varepsilon}$. From here, the
value of the gap is obtained as $E_{\nu =2}-E_{0}=g\Omega $ in
agreement with Eq.(5.59). If now we make all energy levels different, then
one can see that solutions to Eq.(5.60) are subdivided into those lying in
between the single particle levels (\textsl{trapped solutions}) and those \
which lie outside these levels (\textsl{collectivized solutions}). For
$\left\vert g\right\vert $ sufficiently large the solution, Eq.(5.61), is the
leading term (in the sense described below) representing the collectivized
solution. Since the trapped solutions represent corrections to energies of
single particle states, they do not contribute directly to the value of the
gap. They do contribute to this value indirectly. Indeed, following
Ref.[118] we rewrite Eq.(5.60) as
\begin{equation}
\sum\limits_{i=1}^{L}\frac{\Omega _{i}}{2\varepsilon _{i}-E}=\frac{1}{2\bar{\varepsilon}-E}\sum\limits_{i}\frac{\Omega _{i}}{1+2\frac{\varepsilon _{i}-\bar{\varepsilon}}{2\bar{\varepsilon}-E}}=\frac{1}{g} \tag{5.63}
\end{equation}
and expand the denominator of Eq.(5.63) in a power series. As a result, the
following expansion
\begin{equation}
\frac{E-2\bar{\varepsilon}}{g\Omega }=-1-\alpha ^{2}+\gamma \alpha
^{3}+O(\alpha ^{4}) \tag{5.64}
\end{equation}
is obtained in which $\bar{\varepsilon}=\frac{1}{\Omega }\sum\limits_{i}\Omega _{i}\varepsilon _{i}$, $\alpha =\frac{2\sigma }{g\Omega },\sigma =\sqrt{\frac{1}{\Omega }\sum\limits_{i}\Omega _{i}(\varepsilon _{i}-\bar{\varepsilon})^{2}}$ and $\gamma $ is related to the higher order moments
(details are in Ref.s[118,119]). Using these results, the gap is obtained in
the same way as before.
The quality of computations in Ref.[118] was tested for the 3-dimensional
harmonic oscillator (by adjusting the dimensionality of this oscillator it can
be thought of as a "closed string model" representing both the shell model
for the atomic nucleus and the gluonic ring for the Y-M fields) for which
$\varepsilon _{i}=(i+3/2)$ (in the system of units in which $\hbar \omega
=1$) and $\Omega _{i}=(i+1)(i+2)/2.$ For this 3-dimensional oscillator
corrections to the collectivized energy, Eq.(5.64), become negligible
already for $\left\vert g\right\vert \geq 0.2,$ provided that $L\geq 8.$
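The collectivized root just described can be verified numerically. The sketch below (our own illustration; $g=1$ and $L=8$ are test choices within the range quoted above) solves Eq.(5.60) for the 3-dimensional oscillator by bisection below the lowest pair energy $2\varepsilon _{1}$ and compares the root with the first two terms of the expansion, Eq.(5.64):

```python
import math

L, g = 8, 1.0                                    # number of levels and coupling (test values)
eps = [i + 1.5 for i in range(L)]                # eps_i = i + 3/2 (units with hbar*omega = 1)
Om = [(i + 1) * (i + 2) // 2 for i in range(L)]  # Omega_i = (i+1)(i+2)/2

def f(E):
    # l.h.s. minus r.h.s. of Eq.(5.60)
    return sum(Om[i] / (2.0 * eps[i] - E) for i in range(L)) - 1.0 / g

# the collectivized root lies below the lowest pair energy 2*eps_0 = 3
lo, hi = -10.0 * g * sum(Om), 2.0 * eps[0] - 1e-9
for _ in range(200):                             # bisection
    mid = 0.5 * (lo + hi)
    if f(mid) > 0.0:
        hi = mid
    else:
        lo = mid
E_num = 0.5 * (lo + hi)

# first two terms of the expansion Eq.(5.64): (E - 2*eps_bar)/(g*Omega) = -1 - alpha^2
Omega = sum(Om)
eps_bar = sum(o * e for o, e in zip(Om, eps)) / Omega
sigma = math.sqrt(sum(o * (e - eps_bar) ** 2 for o, e in zip(Om, eps)) / Omega)
alpha = 2.0 * sigma / (g * Omega)
E_approx = 2.0 * eps_bar - g * Omega * (1.0 + alpha ** 2)
print(E_num, E_approx)  # the numerical root and the truncated expansion agree closely
```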
The results obtained allow us to close this section at this point. These results
are of no help in solving the gap dilemma though. This task is
accomplished in the next section.
\section{Resolution of the gap dilemma}
\subsection{Motivation}
In the previous section we provided evidence linking the gap problem for Y-M
fields with the problem about the excitation spectrum of the repulsive Bose
gas. The gap equation, Eq.(5.59), is also used in nuclear physics where it
is known to produce the same value for the gap for both signs of the
coupling constant $g$. Since both options are realizable in Nature in the
case of nuclear physics, the question arises about such a possibility in the
present case. In nuclear physics the realization (giant nuclear dipole
resonance) of both options for the coupling constant is experimentally
testable. Thus, in the present case we have to find some
alternative physical evidence. If, indeed, such evidence could be found,
this would allow us to bring back into play the well studied F-S model which
microscopically is essentially equivalent to the XXX 1d Heisenberg
ferromagnet, as results of Appendix B and subsections 3.5 and 5.2 indicate.
The next subsection supplies us with the alternative physical evidence.
\subsection{Some facts about harmonic maps and their uses in general
relativity}
Suppose we are interested in a map from an $m$-dimensional Riemannian manifold
$\mathcal{M}$ with coordinates $x^{a}$ and metric $\gamma _{ab}(\mathbf{x})$
to an $n$-dimensional Riemannian manifold $\mathcal{N}$ with coordinates
$\varphi ^{A}$ and metric $G_{AB}(\varphi )$. A map $\mathcal{M}
\rightarrow \mathcal{N}$ is called \textsl{harmonic} if $\varphi ^{A}(x^{a})$
satisfies the Euler-Lagrange (E-L) equations originating from
minimization of the following Lagrangian
\begin{equation}
\mathcal{L}=\sqrt{\gamma }G_{AB}(\mathbf{\varphi })\gamma ^{ab}(\mathbf{x})\varphi _{,a}^{A}\varphi _{,b}^{B} \tag{6.1}
\end{equation}
in which $\gamma =\det (\gamma _{ab}).$ Since the Lagrangian so defined is
part of the Lagrangian given by Eq.(3.6), the E-L equations for Eq.(6.1), in
fact, coincide with Eq.s(3.10). In the most general form they can be
written as [38]\footnote{
We use the 1st edition of Ref.[38] for writing this equation. This
means that we have to define $\Gamma _{BC}^{A}$ as $\Gamma _{BC}^{A}=\frac{1}{2}G^{AD}\{\frac{\partial }{\partial \varphi ^{C}}G_{BD}+\frac{\partial }{\partial \varphi ^{B}}G_{CD}-\frac{\partial }{\partial \varphi ^{D}}G_{BC}\}.$}
\begin{equation}
\varphi _{,a}^{A;a}+\Gamma _{BC}^{A}\varphi _{,a}^{B}\varphi ^{C,a}=0.
\tag{6.2}
\end{equation}
In such a form we can look at transformations $\varphi ^{A^{\prime
}}=\varphi ^{A^{\prime }}(\varphi ^{B})$ keeping $\mathcal{L}$
form-invariant. To find such transformations, following Neugebauer and
Kramer [38], we introduce the auxiliary Riemannian space defined by the
metric
\begin{equation}
dS^{2}=G_{AB}(\mathbf{\varphi })d\varphi ^{A}d\varphi ^{B}. \tag{6.3}
\end{equation}
Use of the above metric allows us to investigate the invariance of $\mathcal{L}$
with the help of standard methods of Riemannian geometry. In the present
case, this means that one should study Killing's equations in spaces with
metric $G_{AB}$. Specifically, let us consider the Lagrangian for
source-free Einstein-Maxwell fields admitting at least one non-null Killing
vector $\xi $. To design such a Lagrangian we begin with the Ernst equation,
Eq.(2.4), for pure gravity and replace the Ernst potential $\epsilon
=-F+i\omega $\footnote{Recall that $-F=V$ according to notations introduced in connection with
Eq.(2.4).} by two complex potentials $\mathcal{E}$ and $\Phi $. Then, by
symmetry, the equations for stationary Einstein-Maxwell fields can be
written as follows [38]
\begin{equation}
F\mathcal{E}_{,a}^{;a}+\gamma ^{ab}\mathcal{E}_{,a}(\mathcal{E}_{,b}+2\Phi
_{,b}\bar{\Phi})=0,\qquad F\Phi _{,a}^{;a}+\gamma ^{ab}\Phi _{,a}(\mathcal{E}_{,b}+2\Phi _{,b}\bar{\Phi})=0. \tag{6.4}
\end{equation}
These equations are obtained by minimization of the Lagrangian
\begin{equation}
\mathcal{L}=\sqrt{\gamma }[\hat{R}_{ab}+2F^{-1}\gamma ^{ab}\Phi _{,a}\Phi
_{,b}+\frac{1}{2}F^{-2}\gamma ^{ab}(\mathcal{E}_{,a}+2\bar{\Phi}\Phi _{,a})(\mathcal{E}_{,b}+2\bar{\Phi}\Phi _{,b})], \tag{6.5}
\end{equation}
i.e. from the equations $\frac{\delta \mathcal{L}}{\delta \gamma ^{ab}}=0$, $\frac{\delta \mathcal{L}}{\delta \Phi }=0$ and $\frac{\delta \mathcal{L}}{\delta
\mathcal{E}}=0$. Taking these results into account, the auxiliary metric,
Eq.(6.3), can now be written as
\begin{equation}
dS^{2}=2F^{-1}d\Phi d\bar{\Phi}+\frac{1}{2}F^{-2}\left\vert d\mathcal{E}+2\bar{\Phi}d\Phi \right\vert ^{2}. \tag{6.6}
\end{equation}
The analysis done by Neugebauer and Kramer [38] shows that there are eight
independent Killing vectors leading to the following finite transformations
\begin{equation}
\begin{array}{cc}
\mathcal{E}^{\prime }=\alpha \bar{\alpha}\mathcal{E}, & \Phi ^{\prime
}=\alpha \Phi ; \\
\mathcal{E}^{\prime }=\mathcal{E}+ib, & \Phi ^{\prime }=\Phi ; \\
\mathcal{E}^{\prime }=\mathcal{E}(1+ic\mathcal{E})^{-1}, & \Phi ^{\prime
}=\Phi (1+ic\mathcal{E})^{-1}; \\
\mathcal{E}^{\prime }=\mathcal{E}-2\bar{\beta}\Phi -\beta \bar{\beta}, &
\Phi ^{\prime }=\Phi +\beta ; \\
\mathcal{E}^{\prime }=\mathcal{E}(1-2\bar{\gamma}\Phi -\gamma \bar{\gamma}\mathcal{E})^{-1}, & \Phi ^{\prime }=(\Phi +\gamma \mathcal{E})(1-2\bar{\gamma}\Phi -\gamma \bar{\gamma}\mathcal{E})^{-1}.
\end{array}
\tag{6.7}
\end{equation}
The complex parameters $\alpha ,\beta ,\gamma $ as well as the real parameters $b$
and $c$ are connected with these eight symmetries. Evidently, the solutions
$\mathcal{E}^{\prime }$, $\Phi ^{\prime }$ are also solutions of Eq.s(6.4),
provided that $\gamma ^{ab}$ stays the same. Therefore, if we choose
some vacuum solution as a "seed", we obtain an electrovacuum
solution in accord with Appendix A. Incidentally, the electrovacuum
solutions obtained by Bonnor (Appendix A) cannot be obtained with the help of the
transformations given by Eq.s(6.7). They are considered separately below.
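The invariance of the line element, Eq.(6.6), under the transformations Eq.(6.7) can be checked numerically at a point. The sketch below is ours (the helper names `F` and `dS2` and the random sampling are illustrative assumptions, not notation from Ref.[38]); it verifies three of the five transformations, using $F=-\frac{1}{2}(\mathcal{E}+\mathcal{\bar{E}}+2\Phi \bar{\Phi})$ and treating the differentials as independent complex increments:

```python
# Numeric check that dS^2 = 2 F^{-1} dPhi dPhibar
#   + (1/2) F^{-2} |dE + 2 Phibar dPhi|^2  (Eq.(6.6))
# is invariant under three of the transformations of Eq.(6.7).
import random

def F(E, P):
    return -0.5 * (E + E.conjugate() + 2 * P * P.conjugate()).real

def dS2(E, P, dE, dP):
    f = F(E, P)
    term1 = 2 * (dP * dP.conjugate()).real / f
    term2 = 0.5 * abs(dE + 2 * P.conjugate() * dP) ** 2 / f**2
    return term1 + term2

random.seed(1)
rnd = lambda: complex(random.uniform(-1, 1), random.uniform(-1, 1))
E, P, dE, dP = rnd(), rnd(), rnd(), rnd()
s0 = dS2(E, P, dE, dP)

alpha = rnd()                      # E' = a abar E,  Phi' = a Phi
s1 = dS2(alpha * alpha.conjugate() * E, alpha * P,
         alpha * alpha.conjugate() * dE, alpha * dP)

b = random.uniform(-1, 1)          # E' = E + ib,   Phi' = Phi
s2 = dS2(E + 1j * b, P, dE, dP)

beta = rnd()                       # E' = E - 2 bbar Phi - b bbar, Phi' = Phi + b
s3 = dS2(E - 2 * beta.conjugate() * P - beta * beta.conjugate(),
         P + beta,
         dE - 2 * beta.conjugate() * dP, dP)

print(s0, s1, s2, s3)  # all four values agree
```

The remaining two transformations (the Ehlers- and Harrison-type maps) also transform the differentials and can be checked the same way with a little more bookkeeping.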
These observations allow us to reduce the Lagrangian $\mathcal{L}$ to the
absolute minimum without loss of information. In 1973 Kinnersley [38] found
that the group of symmetry transformations for the Einstein-Maxwell
equations with a non-null Killing vector is the group SU(2,1), which has eight
independent generators. In view of the above-mentioned reduction of
$\mathcal{L}$ it is sufficient to replace the metric in Eq.(6.6) by a
collection of much simpler metrics related to each other by the transformations
Eq.(6.7). All the possibilities are described in Table 34.1 of Ref.[38].
For our needs we focus only on three of these (much simpler/reduced) metrics
listed in this table. These are
\begin{equation}
dS^{2}=\frac{2d\xi d\bar{\xi}}{(1-\xi \bar{\xi})^{2}},\qquad \mathcal{E}=\frac{1-\xi }{1+\xi }, \tag{6.8}
\end{equation}
\begin{equation}
dS^{2}=\frac{2d\Phi d\bar{\Phi}}{(1-\Phi \bar{\Phi})^{2}}, \tag{6.9}
\end{equation}
and
\begin{equation}
dS^{2}=\frac{-2d\Phi d\bar{\Phi}}{(1+\Phi \bar{\Phi})^{2}}. \tag{6.10}
\end{equation}
\bigskip
The first and the second of these metrics correspond to the vacuum state,
respectively with $\Phi =0$ and $\mathcal{E}=-1$, of pure gravity
associated with the subgroup SU(1,1) of SU(2,1). The third metric,
Eq.(6.10), corresponds to the subgroup SU(2). It is related to the
electrostatic fields ($\mathcal{E}=1$) such that the space-time becomes
asymptotically flat for $\mathcal{E}\rightarrow 0$. It is important
that the metric, Eq.(6.10), is related to the vacuum metrics,
Eq.s(6.8),(6.9), via transformations either listed in Eq.(6.7) or related to
these transformations. In particular, the related transformations can be
obtained as follows. Following Ref.[38], it is convenient to make the parameters
$b$ and $c$ in Eq.s(6.7) complex and to consider all eight complex
parameters as independent of their complex conjugates. Under such
conditions the metric given by Eq.(6.10) is related to that given by
Eq.(6.8) by the simplest complex transformation: $\Phi ^{\prime }=i\xi $
and $\bar{\Phi}^{\prime }=i\bar{\xi}$. These transformations indicate that,
starting with a real vacuum solution for pure gravity as a seed, the above
transformations are capable of reproducing some electrovacuum solutions.
Additional details are discussed below.
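The parametrization in Eq.(6.8) can be verified directly against the vacuum reduction of Eq.(6.6): for $\Phi =0$ one has $dS^{2}=\frac{1}{2}F^{-2}\left\vert d\mathcal{E}\right\vert ^{2}$ with $F=-\frac{1}{2}(\mathcal{E}+\mathcal{\bar{E}})$, and substituting $\mathcal{E}=(1-\xi )/(1+\xi )$ reproduces the Poincar\'{e}-disk form. A small numeric sketch (ours, with a random sample point $\left\vert \xi \right\vert <1$):

```python
# Check: E = (1-xi)/(1+xi), Phi = 0 turns the vacuum part of Eq.(6.6),
#   dS^2 = (1/2) F^{-2} |dE|^2,  F = -(1/2)(E + Ebar),
# into dS^2 = 2 dxi dxibar / (1 - xi xibar)^2, i.e. Eq.(6.8).
import random

random.seed(2)
xi = complex(random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5))
dxi = complex(random.uniform(-1, 1), random.uniform(-1, 1))

E = (1 - xi) / (1 + xi)
dE = -2 * dxi / (1 + xi) ** 2        # differential of E(xi)
F = -0.5 * (E + E.conjugate()).real

lhs = 0.5 * abs(dE) ** 2 / F**2                    # vacuum part of Eq.(6.6)
rhs = 2 * abs(dxi) ** 2 / (1 - abs(xi) ** 2) ** 2  # Eq.(6.8)
print(lhs, rhs)  # the two values coincide
```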
These results can be interpreted as follows. While the Ernst functional,
Eq.(3.18), represents pure axially symmetric gravity, the F-S-type
functional, Eq.(3.19), should describe some special case of electrovacuum
(Einstein-Maxwell) gravity. In view of the results of Appendix C, it is possible
to use these transformations in reverse (see below), that is, to obtain the
results for pure gravity from those for electrovacuum. This peculiar
"duality" property of gravitational fields provides a physically motivated
resolution of the gap dilemma and, in addition, allows us to obtain many
new results.
\subsection{Resolution of the gap dilemma and SU(3)$\times $SU(2)$\times $U(1) symmetry of the Standard Model}
The original F-S-type model thus far is limited only to SU(2) gauge theory.
SU(2) gauge theory is known to be used for the description of electroweak
interactions where, in fact, one has to use the gauge group SU(2)$\times $U(1) [19]. Hadron physics (that is, QCD) requires us to use the
gauge group SU(3). This is caused by the fact that the quark model of hadrons
uses flavors (e.g. u, d, s, c, b, t) labeling quarks of different masses. Each
of these quarks can be in three different colors (r, g, b) standing for "red",
"green" and "blue". The presence of different colors leads to fractional charges
for quarks. Far from the target the scattering products are always
colorless. The gauge group SU(3) is used for the description of these colors.
Although theoretically the number of colors can be greater than three, experimentally this
number is strictly three [19]. The results of this work allow
us to reproduce this number of colors. For this purpose we have to be able
to provide the answer to the following \textsl{fundamental question}:

\textsc{Can the equivalence between gravity and Y-M fields (for the SU(2) gauge
group) discovered by Louis Witten be extended to the group SU(3)?}
Very fortunately, this can be done! For the sake of space, we shall be
brief whenever details can be found in the literature, e.g. see Refs.[120-122].
To proceed, we first have to go back to Eq.s(2.14),(2.15) and to modify
these equations in such a way that instead of the Ernst Eq.(2.4) for the
vacuum (gravity) field we obtain Eq.s(6.4) for
electrovacuum. In the limit $\Phi =0$ the obtained set of equations should
be reducible to Eq.(2.4). As was noticed by G\"{u}rses and Xanthopoulos
[120], in general this task cannot be accomplished. Indeed, these authors
demonstrated that the self-duality Eq.s(2.14) for the SU(2) and SU(3) Lie
groups look exactly the same for axially symmetric fields. Nevertheless, in
the latter case, upon explicit computation, instead of the vacuum Ernst
Eq.(2.4) one gets the electrovacuum equations (e.g. see Eq.s(6.4)) which,
following Ernst [43], can be explicitly written as
\begin{equation}
\left( \func{Re}\mathcal{E}+\left\vert \Phi \right\vert ^{2}\right) \nabla
^{2}\mathcal{E}=(\mathbf{\nabla }\mathcal{E}+2\bar{\Phi}\mathbf{\nabla }\Phi
)\cdot \mathbf{\nabla }\mathcal{E}, \tag{6.11a}
\end{equation}
\begin{equation}
\left( \func{Re}\mathcal{E}+\left\vert \Phi \right\vert ^{2}\right) \nabla
^{2}\Phi =(\mathbf{\nabla }\mathcal{E}+2\bar{\Phi}\mathbf{\nabla }\Phi
)\cdot \mathbf{\nabla }\Phi . \tag{6.11b}
\end{equation}
These equations are obtained if, instead of the matrix $M$ given by
Eq.(2.15), one uses
\begin{equation}
M=f^{-1}
\begin{bmatrix}
1 & \sqrt{2}\Phi & -\frac{i}{2}(\mathcal{E}-\mathcal{\bar{E}}-2\Phi \bar{\Phi}) \\
\sqrt{2}\bar{\Phi} & -\frac{i}{2}(\mathcal{E}+\mathcal{\bar{E}}-2\Phi \bar{\Phi}) & -i\sqrt{2}\bar{\Phi}\mathcal{E} \\
\frac{i}{2}(\mathcal{\bar{E}}-\mathcal{E}-2\Phi \bar{\Phi}) & i\sqrt{2}\mathcal{\bar{E}}\Phi & \mathcal{E}\mathcal{\bar{E}}
\end{bmatrix}
\tag{6.12}
\end{equation}
in which, instead of the one complex potential $\epsilon =-F+i\omega $ used
for the solution of the vacuum Ernst Eq.(2.4), two complex potentials
$\mathcal{E}$ and $\Phi $ are being used. In this expression the overbars
denote complex conjugation and $f=-\frac{1}{2}(\epsilon +\bar{\epsilon}+2\Phi \bar{\Phi})$. Since the Einstein-Maxwell Eq.s(6.4) (or (6.11)) are
invariant with respect to the transformations given by Eq.s(6.7), there should
be a matrix $A$ with constant coefficients such that $M^{\prime
}=AMA^{\dag }$ will have primed potentials $\mathcal{E}$ and $\Phi $ taken
from those listed in the set Eq.(6.7). The authors of [120] found the explicit form
of such $A$-matrices. However, when instead of the matrix $M$ we substitute the
matrix $M^{\prime }$ into the self-duality Eq.s(2.14), the combination
$M^{\prime -1}\partial M^{\prime }$ loses this information. As a result, we
are left with the following situation: while on the gravity side the matrix
$M^{\prime }=AMA^{\dag }$ does allow us to obtain new and physically
meaningful solutions from the old ones, on the Y-M side all this information
is lost. Thus, the one-to-one correspondence discovered by L. Witten for
SU(2) is \textsl{apparently} lost for SU(3). Very fortunately, this
happens only apparently! This is so because the Neugebauer-Kramer (N-K)
transformations described by Eq.s(6.7) do not exhaust all possible
transformations which can be applied to the matrix $M$, Eq.(6.12). Among
those which are not accounted for by the N-K transformations are those by Bonnor
[3,123] whose work is mentioned in Appendix A. These are given by
\begin{equation}
\mathcal{E}=\epsilon \bar{\epsilon};\qquad \Phi =\frac{1}{2}(\epsilon -\bar{\epsilon})=i\omega , \tag{6.13}
\end{equation}
where $\epsilon =-F+i\omega $ is a solution of the Ernst Eq.(2.4). In view of
the results of Appendix A one can be sure that the potentials $\mathcal{E}$
and $\Phi $ satisfy Eq.s(6.11). This means that one can use these
(Bonnor's) potentials in the matrix $M$ to reproduce Eq.s(6.11). This time,
there is a one-to-one correspondence between the self-duality Y-M and the
Einstein-Maxwell equations. Even though this is true, the question
immediately arises about the relevance of such solutions to the solution of the
gap problem discussed in Section 5. In Section 5 the Ernst Eq.(2.4) was used
essentially for this purpose, while Eq.s(6.11) are seemingly different from
Eq.(2.4). Again, fortunately, the difference is only apparent.
From the definition of the Bonnor transformations, Eq.(6.13), it follows that
the potential $\mathcal{E}$ is real. Also, from the same definition it
follows that $\left\vert \Phi \right\vert ^{2}=\omega ^{2}$. Let us now introduce
the new potential $Z=\mathcal{E}+\omega ^{2}$. For it, we obtain
\begin{equation}
\mathbf{\nabla }Z=\mathbf{\nabla }(\mathcal{E}+\omega ^{2})=\mathbf{\nabla }\mathcal{E}+2\omega \mathbf{\nabla }\omega =\mathbf{\nabla }\mathcal{E}+2\bar{\Phi}\mathbf{\nabla }\Phi . \tag{6.14}
\end{equation}
Using this result, Eq.s(6.11) can be rewritten as follows
\begin{equation}
\left( Z\nabla ^{2}-\mathbf{\nabla }Z\cdot \mathbf{\nabla }\right) \left(
\begin{array}{c}
\mathcal{E} \\
\omega
\end{array}
\right) =0. \tag{6.15}
\end{equation}
Furthermore, consider the related equation
\begin{equation}
\left( Z\nabla ^{2}-\mathbf{\nabla }Z\cdot \mathbf{\nabla }\right) \omega ^{2}=0. \tag{6.16}
\end{equation}
Evidently, if it can be solved, then the equation $\left( Z\nabla ^{2}-\mathbf{\nabla }Z\cdot \mathbf{\nabla }\right) \omega =0$ can be solved as
well. This being the case, the system of Eq.s(6.15) will be solved if the
Ernst-type vacuum equation
\begin{equation}
Z\nabla ^{2}Z=\mathbf{\nabla }Z\cdot \mathbf{\nabla }Z \tag{6.17}
\end{equation}
of the same type as Eq.(2.4) is solved. The obtained result is opposite to
that derived by Bonnor, described in Appendix A (see also the works by Hauser
and Ernst [124] and by Ivanov [125]). This means that, at
least in some cases (having physical significance), the self-dual Y-M fields
for both the SU(2) and SU(3) gauge groups are obtainable as solutions of the
Ernst Eq.(2.4). Consequently, all results of Section 5 obtained for SU(2)
go through for the gauge group SU(3).
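The reduction just described is purely algebraic and can be checked pointwise. In the sketch below (ours; the sampled numbers stand for the field values and their derivatives at a single point, tied together by the chain rule), the left-hand side of Eq.(6.11a) under Bonnor's substitution Eq.(6.13) coincides with the combination appearing in Eq.(6.15):

```python
# Pointwise check: with E = eps*epsbar (real), Phi = i*omega and
# Z = E + omega^2, the combination
#   (Re E + |Phi|^2) Lap(E) - (grad E + 2 Phibar grad Phi) . grad E
# from Eq.(6.11a) equals Z Lap(E) - grad Z . grad E from Eq.(6.15).
import random

random.seed(3)
u = lambda: random.uniform(-1, 1)

Fv, om = u() + 2.0, u()                  # sample values of F and omega
gF, gom = [u(), u()], [u(), u()]         # their gradients at the point
lapE = u()                               # a free value for Lap(E)

E = Fv**2 + om**2                        # E = eps*epsbar, eps = -F + i om
Phi = 1j * om                            # Bonnor: Phi = i*omega
gE = [2*Fv*gF[k] + 2*om*gom[k] for k in range(2)]   # chain rule
gPhi = [1j * g for g in gom]
Z = E + om**2                            # new potential, Eq.(6.14)
gZ = [gE[k] + 2*om*gom[k] for k in range(2)]        # chain rule

dot = lambda a, b: sum(x * y for x, y in zip(a, b))

ernst_em = (E + abs(Phi)**2) * lapE - dot(
    [gE[k] + 2*Phi.conjugate()*gPhi[k] for k in range(2)], gE)
reduced = Z * lapE - dot(gZ, gE)
print(ernst_em, reduced)  # agree; imaginary part of the first is 0
```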
With these results at our disposal we would like to discuss their
applications to the Standard Model [19,126]. From Ref.[120] it is
known that the matrix $M\in $ SU(3) has subgroups which belong to SU(2). In
particular, one such subgroup is obtained if we let $\Phi =0$ in
Eq.(6.12). Then, in view of Eq.(6.17), it is permissible to replace
$\mathcal{E}$ by $\epsilon $ of Eq.(2.4). The obtained matrix $M$ is then
decomposable as $M=M_{1}+M_{2}$, where the matrix $M_{1}$ is given by
\begin{equation}
M_{1}=f^{-1}
\begin{bmatrix}
1 & 0 & \omega \\
0 & 0 & 0 \\
\omega & 0 & \epsilon \bar{\epsilon}
\end{bmatrix}
, \tag{6.18}
\end{equation}
in agreement with the matrix $M$ defined by Eq.(2.15), since in this case
$f=-\frac{1}{2}(\epsilon +\bar{\epsilon})=F$. At the same
time, the matrix $M_{2}$ is given by
\begin{equation}
M_{2}=
\begin{bmatrix}
0 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 0
\end{bmatrix}
. \tag{6.19}
\end{equation}
Using elementary operations with matrices we can represent the matrix $M$ in
the form
\begin{equation}
\tilde{M}=
\begin{bmatrix}
0 & 0 & 1 \\
a & b & 0 \\
b & c & 0
\end{bmatrix}
\tag{6.20}
\end{equation}
where $a=1/F$, $b=\omega /F$ and $c=(F^{2}+\omega ^{2})/F$. Such a
form of the matrix $\tilde{M}$ is typical for the semidirect product of
groups (when group elements are represented by matrices). In the general case
one should replace $\tilde{M}$ by
\begin{equation*}
\tilde{M}=
\begin{bmatrix}
0 & 0 & 1 \\
a & b & \alpha _{1} \\
b & c & \alpha _{2}
\end{bmatrix}
.
\end{equation*}
Since the 2$\times $2 submatrix belongs to SU(2) (because its determinant
is 1), normally describing a rotation in 3d space (in view of the SU(2)$\rightleftarrows $SO(3) correspondence), the parameters $\alpha _{1}$ and
$\alpha _{2}$ are responsible for translation. In this more general case
the matrix $M$ describes the Galilean transformations, that is, a combination
of translations and rotations. If the translational motion is one-dimensional,
it can be compactified to a circle, in which case we obtain the
centralizer of SU(3) as SU(2)$\times $U(1). At the level of the Lie algebra
su(3) this result was obtained in Ref.[127], pages 232 and 267. Its physical
interpretation discussed in this reference is essentially the same as ours.
The obtained centralizer is the symmetry group of the Weinberg-Salam model
(the part of the Standard Model describing electroweak interactions).
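The unimodularity claim for the 2$\times $2 block is elementary: with $a=1/F$, $b=\omega /F$, $c=(F^{2}+\omega ^{2})/F$ one has $ac-b^{2}=1$ identically. A trivial numeric confirmation (ours):

```python
# Check that a*c - b^2 = 1 for a = 1/F, b = omega/F,
# c = (F^2 + omega^2)/F, for any F != 0.
import random

random.seed(4)
for _ in range(5):
    F = random.uniform(0.5, 3.0)
    om = random.uniform(-2.0, 2.0)
    a, b, c = 1/F, om/F, (F**2 + om**2)/F
    det = a*c - b*b
    print(round(det, 12))  # 1.0 each time
```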
All these arguments were meant only to demonstrate that the F-S-type model,
Eq.(3.19), should be used for the description of electroweak interactions. For
the description of strong interactions, in accord with Ref.[120], we claim that
the matrix $M$ given by Eq.(6.12), in which $\mathcal{E}$ and $\Phi $ are
taken from Bonnor's Eq.s(6.13), is \textsl{intrinsically} of SU(3) type. That
is, \textsl{it cannot be obtained from the matrix} $M$ (in which
$\Phi =0$) by applications of the N-K transformations, i.e. there are no
transformations of the type $M^{\prime }(\Phi )=AM(\Phi =0)A^{\dag }$.
Therefore, this type of SU(3) matrix should be associated with the QCD part of
the SM. Hence we have to use the Ernst functional, Eq.(3.18), instead of the
F-S-type functional, Eq.(3.19). These results provide the resolution of the gap
dilemma. \textsl{Evidently, this resolution is equivalent to the
statement that the symmetry group of the SM is} SU(3)$\times $SU(2)$\times $U(1). This result should be taken into account in designing all possible
grand unified theories (GUT). In the next subsection we shall discuss the
rigidity of this result.
\subsubsection{Remarkable rigidity of symmetries of the Standard Model and
the extended Ricci flow}
In addition to Bonnor's transformations there are many other transformations
from vacuum to electrovacuum. In particular, in Appendix A we mentioned the
transformations discovered by Herlt. By looking at Eq.s(A.5)-(A.7)
describing these transformations and comparing them with those by Bonnor,
Eq.(6.13), it is an easy exercise to check that all arguments leading from
Eq.s(6.11) to (6.17) go through unchanged. By using superpositions of N-K
transformations and those either by Bonnor or by Herlt it is possible to
generate a countable infinity of vacuum-to-electrovacuum transformations
such that they can be brought back to the vacuum Ernst solution,
Eq.(6.17), using the results of the previous subsection. This property of the
Einstein and Einstein-Maxwell equations we shall call "rigidity". In view of
the results of the previous subsection, this rigidity explains the remarkable
empirical rigidity of the symmetries of the SM. Indeed, suppose that the color
subgroup SU(3) can be replaced by SU(N), $N>3$. In such a case it
is appropriate again to pose a question: Can the self-dual Y-M fields-gravity
correspondence discovered by L. Witten for SU(2) be extended to SU(N),
$N>3$? In Ref.[128] G\"{u}rses demonstrated that, indeed, this is
possible but only under nonphysical conditions. Indeed, this correspondence
requires the SU(n+1) self-dual Y-M fields to be in correspondence with a
set of $n-1$ Einstein-Maxwell fields. Since the $n=1$ and $n=2$ cases have
already been described, we only need to worry about $n>2$. In such a case we
shall have a many-to-one correspondence between the replicas of electrovacuum
and vacuum Einstein fields which, while permissible mathematically, is not
permissible physically, since the Bonnor-type transformations require a
one-to-one correspondence between the vacuum and electrovacuum fields.
Herrera-Aguilar and Kechkin, Ref.[129], found a way of transforming the
compactified fields of the heterotic string (e.g. see Eq.(3.12)) into
Einstein--multi-Maxwell fields of exactly the same type as discussed in the
paper by G\"{u}rses [128]. While in the paper by G\"{u}rses these replicas
of Maxwell's fields needed to be postulated, in [129] their stringy origin
was found explicitly. From here it follows that the results obtained in this
subsection make the minimal functional, Eq.(3.8), and the associated
Perelman-like functional, Eq.(3.13), universal. The universality of the
associated Ricci flow, Eq.s(3.14), has physical significance to
be discussed below.
\section{Discussion}
\subsection{Connections with loop quantum gravity}
A large portion of this paper was spent on the justification, extension and
exploitation of the remarkable correspondence between gravity and self-dual
Y-M fields noticed by Louis Witten. Such a correspondence is achievable
only nonperturbatively. In a different form it was emphasized in the paper
by Mason and Newman [130], inspired by the work of Ashtekar, Jacobson and Smolin
[131]. It is not too difficult to notice that, in fact, papers [130,131] are
compatible with Witten's result, since reobtaining Nahm's equations in the
context of gravity is the main result of Ref.[131]. In this context the Nahm
equations are just equations for a moving triad on some 3-manifold. Since
the connection of Nahm's equations with monopoles can be found in Ref.[68]
and with instantons in Ref.[132], the link with Witten's results can be
established, in principle. Since the authors of [131] are the main
proponents of loop quantum gravity (LQG), such refinements might be helpful
for developments in the field of LQG. We shall continue our discussion of
LQG in the next subsection.
\subsection{Topology changing processes, the extended Ricci flow and the
Higgs boson}
According to the existing opinion, the SM does not account for effects of
gravity. At the same time, in the Introduction we mentioned that in recent
works by Smolin and collaborators [32-34] it was shown that "topological
features of certain quantum gravity theories\footnote{That is, LQG.} can be interpreted as particles, matching known fermions and
bosons of the first generation in the Standard Model". Similar results were
also independently obtained in works by Finkelstein, e.g. see Ref.[133] and
references therein. In particular, Finkelstein recognized that all quantum
numbers describing the basic building blocks (=particles) of the SM can be neatly
organized with the help of numbers used for the description of knots. More
precisely, with projections of these knots onto some plane. It happens that
for the description of all particles of the electroweak portion of the SM the
numbers describing the trefoil knot are sufficient. The task of a
topological/knotty description of the entire SM was accomplished to some
extent in Ref.[33]. This reference, as well as Ref.s[32-34], is in addition
capable of describing particle dynamics/transformations. All these works
share one common feature: the calculations \textsl{do not} require the Higgs boson.
This fact is consistent with the results discussed in subsection 4.3.1.
The question arises: Is this feature a serious deficiency of these
topological methods, or are these methods so superior to others that the
Higgs boson should be looked upon as an artifact of the previously existing
perturbative methods used in SM calculations? Answering this self-imposed
question requires several steps.
First, we recall that according to the existing opinion the SM does not
account for effects of gravity. If this were so, all the above results would
have nothing in common with the SM, which is not true.
Second, the results obtained in this paper indicate that the knots/links/braids
mentioned above have not only a virtual (combinatorial/topological) but also a
differential-geometric description (Appendix B). Because of this, the
topological description should be looked upon as complementary to that
obtainable with the help of the F-S-type models.
Third, it is known that the knot/link-describing Faddeev model can be converted
into the Skyrme model [134]. It is also known that the Skyrme-type models
\textsl{do not account for quarks explicitly}, Ref.[68], page 349. This is
not a serious drawback, as we shall explain momentarily.
Fourth, much more important for us is the fact that the Skyrme model can be
used both in nuclear [135] and in high-energy [136] physics, where it
is used for the description of both QCD (\textsl{nicely describing the
entire known hadron spectra}) and electroweak interactions.
To account for quarks one has to go back to the Faddeev-type models capable
of describing knots/links and to make a connection between these \textsl{physical} knots/links and the \textsl{topological/combinatorial} knots/links
discussed in Refs.[32-34,133]. This is still insufficient! It is
insufficient because Floer's Eq.(4.7) connects different vacua, each being
described by the zero curvature condition, Eq.(4.13). It is always
possible to look at such a condition as describing some knot/link
differential-geometrically. With each knot, say in S$^{3}$, some 3-manifold
is associated. Furthermore, such a manifold should be hyperbolic (subsection
3.6), that is, either associated with hyperbolic-type knots/links [20,137] in
S$^{3}$ or with knots/links "living" in a hyperboloid embedded in the
Minkowski spacetime. Such a restriction is absent in Ref.s[32-34,133]. At
the same time the Y-M functional, Eq.(4.12), is defined for a particular
3-manifold whose construction is quite sophisticated. Eq.(4.7) describes
processes of topology change by connecting different vacua. Such changes
formally are not compatible with the fact that we are dealing with one and
the same 3-manifold M$\times $[0,1]. From the mathematical standpoint [11]
no harm is done if one considers just this 3-manifold, e.g. read Ref.[11],
page 22, bottom. Since particle dynamics is encoded in the dynamics of
transformations between knots/links, it causes us to consider transitions
between different 3-manifolds. These 3-manifolds should be carefully glued
together as described in Ref.[11]. In this picture particle dynamics
involving particle scattering/transformation is synonymous with processes
involving topology change. These are carried out naturally by instantons.
Such processes can be equivalently, and more physically, described in terms
of the properties of the (extended) Ricci flow (subsection 3.4),
following the ideas of Perelman's proof of the Poincar\'{e} conjecture.
Indeed, experimentally there is only a finite number of stable particles.
Without exception, the end products of all scattering processes involve
only stable particles. This observation matches perfectly the
irreversibility of Ricci flow processes involving changes in topology: from
more complex to less complex 3-manifolds. Such a Ricci flow model, upon
development, could provide mathematical justification for the otherwise rather
vague statement by Finkelstein that "more complicated knots (particles)
can therefore dynamically decay to trefoils (stable particles)", Ref.[133],
page 10, bottom.
\subsection{Elementary particles as black holes}
In the paper [138] by Reina and Treves and also in [139] by Ernst it was
found that for asymptotically flat Einstein-Maxwell fields generated from
the vacuum fields by means of transformations of the type described above in
Section 6, the gyromagnetic factor is $g=2$. For the sake of space, we refer
our readers to a recent review by Pfister and King [140] for definitions of
$g$ and many historical facts and developments. In [140] it was noticed
that such a value of $g$ is typical for most of the stable particles of the SM. In
view of the quantum gravity-Y-M correspondence promoted in this paper, the
interpretation of elementary particles as black holes makes sense,
especially in view of the following excerpt from Ref.[38], page 526:
"There is one-to-one correspondence between stationary vacuum fields with
sources characterized by masses and angular momenta and stationary
Einstein-Maxwell fields with purely electromagnetic sources, i.e. charges
and currents."
\bigskip
\textbf{Appendix A}
\textbf{Peculiar interrelationship between gravitational, electromagnetic and other fields}
Unification of gravity and electromagnetism was initiated by Nordstr\"{o}m
in 1913, before general relativity was formulated by Einstein. Almost
immediately after Einstein's formulation, Kaluza, in 1921, and Klein, in
1926, proposed a unification of electromagnetism and gravity by embedding
Einstein's 4-dimensional theory into a 5-dimensional space in which the 5th
dimension is a circle. These results and their generalizations (up to 1987)
can be found in the collection of papers compiled by Applequist, Chodos and
Freund [141]. Regrettably, this collection does not contain alternative
theories of unification. Since such alternative theories are much less
known among string and gravity theoreticians, here we provide a
brief representative sketch of them.
The first unified Einstein-Maxwell theory in 4-dimensional space-time was
proposed and solved by Rainich in 1925. It was discussed in great detail by
Misner and Wheeler [142]. After Rainich there appeared many other works on
exact solutions of Einstein-Maxwell fields [38]. The most striking outcome
of these more recent works is the fact that a multitude of exact solutions
of the \textsl{combined} Einstein-Maxwell equations can be obtained from
solutions of the \textsl{vacuum} Einstein equations.
In 1961 Bonnor [123] obtained the following remarkable result (e.g. read his
Theorem 1). Suppose solutions of the vacuum Einstein equations are known.
Using these solutions, it is possible to obtain a certain class of solutions
of the Einstein-Maxwell equations.
In Section 6 we obtained the reverse result: Einstein's solutions for pure
gravity were obtained from solutions of the Einstein-Maxwell equations.
Without doing extra work, the electrovacuum solution obtained by Bonnor
can be converted into one describing the propagation of combined
cylindrical gravitational and electromagnetic waves. With some additional
effort one can use the obtained results as an input for results describing
combined gravitational, electromagnetic and neutrino wave propagation
[143,144].
The results by Bonnor comprise only a small portion of the results connecting
static gravity fields with electromagnetic and neutrino fields. The next
example belongs to Herlt [38,145]. It provides a flavor of how this
can be achieved. We begin with Eq.(2.5). When written explicitly, this
equation reads
\begin{equation}
\left( \partial _{\rho }^{2}+\frac{1}{\rho }\partial _{\rho }+\partial
_{z}^{2}\right) u=0. \tag{A.1}
\end{equation}
This type of equation is the result of using the matrix $M$, Eq.(2.15), in
Eq.(2.14b). Nakamura [146] demonstrated that there is another matrix $Q$,
given by
\begin{equation}
Q=\left(
\begin{array}{cc}
f & f\omega \\
f\omega & f^{2}\omega ^{2}-\rho ^{2}f^{-1}
\end{array}
\right) \tag{A.2}
\end{equation}
and the associated analog of Eq.(2.14b)
\begin{equation}
\partial _{\rho }(\rho \partial _{\rho }Q\cdot Q^{-1})+\partial _{z}(\rho
\partial _{z}Q\cdot Q^{-1})=0 \tag{A.3}
\end{equation}
leading to the equation analogous to Eq.(A.1), that is
\begin{equation}
\left( \partial _{\rho }^{2}-\frac{1}{\rho }\partial _{\rho }+\partial
_{z}^{2}\right) \tilde{u}=0. \tag{A.4}
\end{equation}
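Both radial operators are easy to sanity-check on simple polynomial solutions (the examples are ours, not Herlt's): $u=2z^{2}-\rho ^{2}$ solves Eq.(A.1), while $\tilde{u}=\rho ^{2}z$ solves Eq.(A.4). A finite-difference sketch:

```python
# Check sample solutions of the two radial operators in (A.1), (A.4).
def op_a1(f, r, z, h=1e-4):          # d_rr + (1/r) d_r + d_zz
    frr = (f(r+h, z) - 2*f(r, z) + f(r-h, z)) / h**2
    fr = (f(r+h, z) - f(r-h, z)) / (2*h)
    fzz = (f(r, z+h) - 2*f(r, z) + f(r, z-h)) / h**2
    return frr + fr / r + fzz

def op_a4(f, r, z, h=1e-4):          # d_rr - (1/r) d_r + d_zz
    frr = (f(r+h, z) - 2*f(r, z) + f(r-h, z)) / h**2
    fr = (f(r+h, z) - f(r-h, z)) / (2*h)
    fzz = (f(r, z+h) - 2*f(r, z) + f(r, z-h)) / h**2
    return frr - fr / r + fzz

u = lambda r, z: 2*z*z - r*r         # sample solution of Eq.(A.1)
ut = lambda r, z: r*r*z              # sample solution of Eq.(A.4)
print(op_a1(u, 1.3, 0.7), op_a4(ut, 1.3, 0.7))  # both ~ 0
```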
Nakamura demonstrated that the solution $\tilde{u}$ is obtainable from a
solution of Eq.(A.1), and vice versa. Thus, instead of the Ernst Eq.(2.4) we
can use Eq.(A.4). This fact plays a crucial role in Herlt's work. In it, he
uses Eq.(A.4) to obtain $u$ in Eq.(A.1) as follows
\begin{equation}
\exp (2u)=\left( \tilde{u}^{-1}+G\right) ^{2} \tag{A.5}
\end{equation}
with $G$ given by
\begin{equation}
G=\tilde{u}_{,\rho }[\rho (u_{,\rho }^{2}+u_{,z}^{2})-\tilde{u}\tilde{u}_{,\rho }]^{-1}. \tag{A.6}
\end{equation}
These results allow him to introduce a potential $\chi $ via
\begin{equation}
\chi =\tilde{u}^{-1}-G. \tag{A.7}
\end{equation}
Using the original work of Ernst [43] as well as Ref.[38], we find that the
solution of the static axially symmetric coupled Einstein-Maxwell equations
is given in terms of the complex potentials $\mathcal{\epsilon }$ and $\Phi $.
In particular, in the purely electrostatic case one has $\mathcal{\epsilon }=\mathcal{\bar{\epsilon}}=e^{2u}-\chi $ and $\Phi =\bar{\Phi}=\chi $, while the
magnetostatic case is obtained from the electrostatic one by requiring $-\Phi
\bar{\Phi}=\psi $ and $\mathcal{\epsilon }=\mathcal{\bar{\epsilon}}=e^{2u}-\psi $. In
this case $\psi $ is just a relabeled $\chi $. Ref.[38] contains many other
examples of the coupled Einstein-Maxwell equations obtained from the vacuum
solutions of the Einstein equations.
The above results should be looked upon from the standpoint of the fundamental
problem of energy-momentum conservation in general relativity, requiring
the introduction (in the simplest case) of the Landau-Lifshitz (L-L)
energy-momentum pseudotensor. The description of more complicated
pseudotensors (incorporating that by L-L) can be found in the monograph by
Ortin [147]. To this one should add the problem of the positivity of mass
in general relativity. The difficulties with these concepts stem from the
very basic observation, lying at the heart of general relativity, that at
any given point of space-time the gravity field can be eliminated by moving in
an appropriately chosen accelerating frame (the equivalence principle).
This fact leaves unexplained the origin of the tidal forces, whose
observation requires the motion of at least two test particles separated by some
nonzero distance. The explanation of this phenomenon within the general
relativity framework is nontrivial. It can be found in [148]. In turn, it
leads to speculations about the limiting procedure leading to the elimination of
gravity at a given point\footnote{The abundance of available energy-momentum pseudotensors is a result of these
speculations.}. Apparently, this problem is still not solved
rigorously [147]. An outstanding collection of rigorous results on general
relativity can be found in the recent monograph by Choquet-Bruhat [149],
while [150] discusses the peculiar relationship between the Newtonian and
Einsteinian gravities at the scale of our Solar system.
Conversely, one can think of other fields at the point/domain where gravity
is absent as subtle manifestations of gravity. Interestingly enough, such an
idea was originally put forward by Rainich already in 1925! The recent status
of these ideas is given in the paper by Ivanov [125]. From such a standpoint,
the functional given by Eq.(3.13) (that is, the Perelman-like entropy
functional) is sufficient for the description of all fields with integer spin.
With minor modifications (e.g. involving either the Newman-Penrose formalism
[143,144] or the supersymmetric formalism used in the calculation of Seiberg-Witten
invariants [66]), it can be used for the description of all known fields in
nature.
\smallskip
\textbf{Appendix B}
\textbf{Some facts about integrable dynamics of knotted vortex filaments}
\smallskip
B.1 \textsl{Connection with the Landau-Lifshitz equation}
Following Ref.[85], we discuss the motion of a vortex filament in an
incompressible fluid. Some historical facts relating this problem to string
theory are given in our recent work, Ref.[84]. Let $\mathbf{u}$ be a
velocity field in the fluid such that div$\mathbf{u}=0$. Therefore, we can
write $\mathbf{u}=\mathbf{\nabla }\times \mathbf{A}$. Next, we define the
vorticity $\mathbf{w}=\mathbf{\nabla }\times \mathbf{u}$ so that eventually
\begin{equation}
\mathbf{u}=-\frac{1}{4\pi }\int d^{3}x^{\prime }\frac{(\mathbf{x}-\mathbf{x}^{\prime })\times \mathbf{w}(\mathbf{x}^{\prime })}{\left\Vert \mathbf{x}-\mathbf{x}^{\prime }\right\Vert ^{3}}. \tag{B.1a}
\end{equation}
This expression can be simplified by assuming that there is a \textsl{line}
vortex which is modelled by a tube with a cross-sectional area $dA$ and
such that the vorticity \textbf{w} is everywhere tangent to the line vortex
and has a constant magnitude w. Let then $\Gamma =\int wdA$ so that
\begin{equation}
\mathbf{u}=-\frac{\Gamma }{4\pi }\oint \frac{(\mathbf{x}-\mathbf{x}^{\prime })\times d\mathbf{\gamma }}{\left\Vert \mathbf{x}-\mathbf{x}^{\prime }\right\Vert ^{3}} \tag{B.1b}
\end{equation}
with $d\mathbf{\gamma }$ being an infinitesimal line segment along the
vortex. Such a model of a vortex closely resembles the model used for the
description of the dynamics of ring polymers [84]. Because of this, it is
convenient to make the following identification: $\mathbf{u}(\mathbf{\gamma }(s,t))=\frac{\partial \mathbf{\gamma }}{\partial t}(s,t)$, with $s$ being a
position along the vortex contour and $t$ the time. This allows us to write
\begin{equation}
\frac{\partial \mathbf{\gamma }}{\partial t}(s^{\prime },t)=-\frac{\Gamma }{4\pi }\oint \frac{(\mathbf{\gamma }(s^{\prime },t)-\mathbf{\gamma }(s,t))}{\left\Vert \mathbf{\gamma }(s^{\prime },t)-\mathbf{\gamma }(s,t)\right\Vert ^{3}}\times \frac{\partial \mathbf{\gamma }}{\partial s}ds \tag{B.1c}
\end{equation}
and to make a Taylor series expansion in order to rewrite Eq.(B.1c) as
\begin{equation}
\frac{\partial \mathbf{\gamma }}{\partial t}=\frac{\Gamma }{4\pi }[\frac{\partial \mathbf{\gamma }}{\partial s^{\prime }}\times \frac{\partial ^{2}\mathbf{\gamma }}{\partial s^{\prime 2}}\int \frac{ds}{\left\vert s-s^{\prime }\right\vert }+...]. \tag{B.1d}
\end{equation}
In this expression only the leading order result is written explicitly. By
introducing a cut off $\varepsilon $ such that $\left\vert s-s^{\prime
}\right\vert \geq \varepsilon $ and by rescaling time: $t\rightarrow \frac{\Gamma }{4\pi }t\ln (\varepsilon ^{-1})$ one finally arrives at the basic
vortex filament equation
\begin{equation}
\frac{\partial \mathbf{\gamma }}{\partial t}=\frac{\partial \mathbf{\gamma }}{\partial s^{\prime }}\times \frac{\partial ^{2}\mathbf{\gamma }}{\partial s^{\prime 2}}. \tag{B.2}
\end{equation}
Introduce now the Serret-Frenet frame made of vectors $\mathbf{B}$, $\mathbf{T}$ and $\mathbf{N}$ so that $\mathbf{B}=\mathbf{T}\times \mathbf{N}$, $\check{\kappa}\mathbf{N}=\frac{d\mathbf{T}}{ds}$, $\mathbf{T}=\frac{\partial \mathbf{\gamma }}{\partial s}$, where $\check{\kappa}$ is the curvature of $\mathbf{\gamma }$.
Then, Eq.(B.2) can be equivalently rewritten as
\begin{equation}
\frac{\partial \mathbf{\gamma }}{\partial t}=\check{\kappa}\mathbf{B}
\tag{B.3}
\end{equation}
or, as
\begin{equation}
\frac{\partial \mathbf{T}}{\partial t}=\mathbf{T}\times \mathbf{T}_{xx}.
\tag{B.4}
\end{equation}
In the last equation the replacement $s\leftrightharpoons x$ was made so
that the obtained equation coincides with the Landau-Lifshitz (L-L) equation
describing the dynamics of 1d Heisenberg ferromagnets [86].
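As an illustrative numerical check (our sketch, not from Ref.[85]): for a circular filament of radius $R$, Eq.(B.2) predicts the uniform binormal velocity $(0,0,1/R)$, i.e. the familiar "smoke ring" translating along its axis without deforming. A finite-difference evaluation on a discretized circle confirms this.

```python
import math

# Discretize a circular filament of radius R and evaluate Eq. (B.2),
#   gamma_t = gamma_s x gamma_ss,
# at a node using periodic central differences in the arc-length parameter s.
R, N = 2.0, 400
ds = 2 * math.pi * R / N
gamma = [(R * math.cos(2 * math.pi * k / N),
          R * math.sin(2 * math.pi * k / N), 0.0) for k in range(N)]

def velocity(k):
    p, m = gamma[(k + 1) % N], gamma[(k - 1) % N]
    c = gamma[k]
    g_s = tuple((p[i] - m[i]) / (2 * ds) for i in range(3))
    g_ss = tuple((p[i] - 2 * c[i] + m[i]) / ds**2 for i in range(3))
    # cross product g_s x g_ss
    return (g_s[1] * g_ss[2] - g_s[2] * g_ss[1],
            g_s[2] * g_ss[0] - g_s[0] * g_ss[2],
            g_s[0] * g_ss[1] - g_s[1] * g_ss[0])

v = velocity(0)
print(v)  # approximately (0, 0, 1/R): a purely binormal, uniform velocity
```

Analytically, for $\gamma (s)=(R\cos (s/R),R\sin (s/R),0)$ one finds $\gamma _{s}\times \gamma _{ss}=(0,0,1/R)$, independent of $s$, so the ring moves rigidly along its axis.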
\medskip
B.2 \textsl{Hashimoto map and the Gross-Pitaevskii equation}
Hashimoto [85] found an ingenious way to transform the L-L equation into the
nonlinear Schr\"{o}dinger equation (NLSE), which is also widely known in the
condensed matter physics literature as the Gross-Pitaevskii (G-P) equation
[96]. Because of its uses in nonlinear optics and in condensed matter
physics for the description of Bose-Einstein condensation (BEC), the theory of
this equation is well developed. Some facts from this theory are discussed
in the main text. Here we provide a sketch of how Hashimoto arrived at his
result.
Let $\mathbf{T}$,$\mathbf{U}$ and $\mathbf{V}$ be another triad such that
\begin{equation}
\mathbf{U}=\cos (\int\limits^{x}\tau ds)\mathbf{N}-\sin (\int\limits^{x}\tau ds)\mathbf{B},\text{ \ }\mathbf{V}=\sin (\int\limits^{x}\tau ds)\mathbf{N}+\cos (\int\limits^{x}\tau ds)\mathbf{B} \tag{B.5}
\end{equation}
in which $\tau $ is the torsion of the curve. Introduce new curvatures
$\kappa _{1}$ and $\kappa _{2}$ in such a way that
\begin{equation*}
\kappa _{1}=\check{\kappa}\cos (\int\limits^{x}\tau ds)\text{ and }\kappa
_{2}=\check{\kappa}\sin (\int\limits^{x}\tau ds),
\end{equation*}
then, it can be shown that
\begin{equation}
\frac{\partial \mathbf{\gamma }}{\partial t}=-\kappa _{2}\mathbf{U}+\kappa
_{1}\mathbf{V} \tag{B.6a}
\end{equation}
and
\begin{equation}
\frac{\partial ^{2}\mathbf{\gamma }}{\partial x^{2}}=\kappa _{1}\mathbf{U}+\kappa _{2}\mathbf{V}. \tag{B.6b}
\end{equation}
Using these equations and taking into account that $\mathbf{U}_{t}\cdot \mathbf{V}=-\mathbf{U}\cdot \mathbf{V}_{t}$, after some algebra one obtains the following equation
\begin{equation}
i\psi _{t}+\psi _{xx}+[\frac{1}{2}\left\vert \psi \right\vert ^{2}-A(t)]\psi
=0 \tag{B.7}
\end{equation}
in which $\psi =\kappa _{1}+i\kappa _{2}$ and $A(t)$ is some arbitrary
x-independent function. By replacing $\psi $ with $\psi \exp
(-i\int\limits^{t}dt^{\prime }A(t^{\prime }))$ in this equation we arrive at
the canonical form of the NLSE, which is also known as the focusing cubic NLSE:
\begin{equation}
i\psi _{t}+\psi _{xx}+\frac{1}{2}\left\vert \psi \right\vert ^{2}\psi =0
\tag{B.8}
\end{equation}
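As a hedged illustration (ours, not from Ref.[85]): Eq.(B.8) admits the bright-soliton solution $\psi (x,t)=2B\,\mathrm{sech}(Bx)\,e^{iB^{2}t}$ for any $B>0$. Substituting the analytic derivatives shows that the residual of Eq.(B.8) vanishes identically, which the following script checks numerically:

```python
import cmath, math

# Check that psi(x,t) = 2B sech(Bx) exp(i B^2 t) solves the focusing NLSE (B.8):
#   i psi_t + psi_xx + (1/2)|psi|^2 psi = 0.
B = 0.7

def residual(x, t):
    sech = 1.0 / math.cosh(B * x)
    phase = cmath.exp(1j * B**2 * t)
    psi = 2 * B * sech * phase
    psi_t = 1j * B**2 * psi
    # d^2/dx^2 sech(Bx) = B^2 (sech(Bx) - 2 sech^3(Bx))
    psi_xx = 2 * B * B**2 * (sech - 2 * sech**3) * phase
    return 1j * psi_t + psi_xx + 0.5 * abs(psi) ** 2 * psi

worst = max(abs(residual(x, t)) for x in (-1.0, 0.0, 0.5) for t in (0.0, 1.3))
print(worst)  # zero up to floating-point rounding
```

Indeed, $i\psi _{t}=-B^{2}\psi $, $\psi _{xx}=B^{2}\psi -4B^{3}\mathrm{sech}^{3}(Bx)e^{iB^{2}t}$ and $\frac{1}{2}|\psi |^{2}\psi =4B^{3}\mathrm{sech}^{3}(Bx)e^{iB^{2}t}$, so the terms cancel pairwise.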
It can be shown that its solution allows us to restore the shape of the
curve/filament $\mathbf{\gamma }(s,t)$. The G-P equation can be identified
with Eq.(B.7) if we make $A(t)$ time-independent. In its canonical form it
is written as (in the system of units in which $\hslash =1$, $m=1/2$) [86]
\begin{equation}
i\psi _{t}=-\psi _{xx}+2\kappa \left( \left\vert \psi \right\vert
^{2}-c^{2}\right) \psi . \tag{B.9}
\end{equation}
In general, the sign of the coupling constant $\kappa $ can be both
positive and negative. In view of Eq.(B.8), when the motion of the vortex
filament takes place in Euclidean space, the sign of $\kappa $ is negative.
This is important if one is interested in the dynamics of \textsl{knotted} vortex
filaments [85]. For the purposes of this work it is also of interest to study
the motion of vortex filaments in the Minkowski and related (hyperbolic, de
Sitter) spaces. This should be done with some caution since the transition
from Eq.(B.1a) to (B.2) is specific to Euclidean space. Thus, the study can be
made at the level of Eqs.(B.3) and (B.4). Fortunately, such a study was
performed quite recently [94,95]. The summary of results obtained in these
papers can be made with the help of the following definitions. Introduce a
vector $n=\{n_{1},n_{2},n_{3}\}$ so that the unit sphere $S^{2}$ is defined
by
\begin{equation}
S^{2}:n_{1}^{2}+n_{2}^{2}+n_{3}^{2}=1. \tag{B.10}
\end{equation}
Respectively, the de Sitter space $S^{1,1}$ (or unit pseudosphere in
\textbf{R}$^{2,1}$) is defined by
\begin{equation}
S^{1,1}:n_{1}^{2}+n_{2}^{2}-n_{3}^{2}=1, \tag{B.11}
\end{equation}
while the hyperbolic space \textbf{H}$^{2}$ (or hyperboloid embedded in
\textbf{R}$^{2,1}$) is defined by
\begin{equation}
\mathbf{H}^{2}:n_{1}^{2}+n_{2}^{2}-n_{3}^{2}=-1,n_{3}>0. \tag{B.12}
\end{equation}
Using these definitions, it was proven in [94,95] that: a) for both the de
Sitter and \textbf{H}$^{2}$ spaces there are analogs of the L-L equation
(e.g. those discussed in the main text, in subsection 5.2); b) the Hashimoto
map can be extended to these spaces so that the respective L-L equations
are transformed into the same NLSE (or G-P) equation, in which $\kappa $ is
\textsl{positive}.
\smallskip \pagebreak
\bigskip
\textbf{References}
\bigskip
[1] \ \ \ C.N.Yang, R.Mills, Conservation of isotopic spin and isotopic gauge
\ \ \ \ \ \ \ invariance, Phys.Rev. 96 (1954) 191-195. \
[2] \ \ \ R.Utiyama, Invariant theoretical interpretation of interaction,
\ \ \ \ \ \ \ Phys.Rev. 101 (1956) 1597-1607.
[3] \ \ \ S.Coleman, There are no classical glueballs, Comm.Math.Phys.
\ \ \ \ \ \ \ 55 (1977) 113-116.
[4] \ \ \ H.Heseng, On the classical lump of Yang-Mills fields,
\ \ \ \ \ \ \ Lett.Math.Phys. 22 (1991) 267-275.
[5] \ \ \ M.Ablowitz, S.Chakravarty, R.Halburd, Integrable systems
\ \ \ \ \ \ \ and reductions of the self-dual Yang-Mills equations,
\ \ \ \ \ \ \ J.Math.Phys. 44 (2003) 3147-3173.
[6] \ \ \ L.Mason, N.Woodhouse, Integrability, Self-Duality,
\ \ \ \ \ \ \ and Twistor Theory, Clarendon Press, Oxford, 1996.
[7] \ \ \ N.Nekrasov, S.Shatashvili, Quantization of integrable systems and
\ \ \ \ \ \ \ \ four dimensional gauge theories, arXiv:0908.4052.
[8] \ \ \ \ T.Sch\"{a}fer, E.Shuryak, Instantons in QCD,
\ \ \ \ \ \ \ \ Rev.Mod.Phys. 70 (1998) 323-425.
[9] \ \ \ \ S.Donaldson, P.Kronheimer, The Geometry of Four Manifolds,
\ \ \ \ \ \ \ \ Clarendon Press, Oxford, 1990.
[10] \ \ D.Freed, K.Uhlenbeck, Instantons and Four Manifolds, 2nd Edition,
\ \ \ \ \ \ \ \ Springer-Verlag, New York, 1991.
[11] \ \ \ S.Donaldson, Floer Homology Groups in Yang-Mills Theory,
\ \ \ \ \ \ \ \ Cambridge University Press, Cambridge, 2002.
[12] \ \ E.Langmann, A.Niemi, Towards a string representation
\ \ \ \ \ \ \ \ of infrared SU(2) Yang-Mills theory, Phys.Lett.B 463\ (1999)
252-256.
[13] \ \ \ P.van Baal, A.Wipf, Classical gauge vacua as knots,
\ \ \ \ \ \ \ \ \ Phys.Lett.B 515 (2001) 181-184.
[14] \ \ \ T. Tsurumaru, I.Tsutsui, A.Fujii, Instantons, monopoles and the
\ \ \ \ \ \ \ \ \ flux quantization in the Faddeev-Niemi decomposition,
\ \ \ \ \ \ \ \ \ Nucl.Phys. B 589 (2000) 659-668.
[15] \ \ \ O.Jahn, Instantons and monopoles in general Abelian gauges,
\ \ \ \ \ \ \ \ \ J.Phys. A 33 (2000) 2997-3019.
[16] \ \ \ Y.Cho, Knot topology of classical QCD vacuum,
\ \ \ \ \ \ \ \ \ Phys.Lett.B 644 (2007) 208-211.
[17] \ \ \ L.Faddeev, Knots as possible excitations of the quantum Yang-Mills
\ \ \ \ \ \ \ \ \ fields, arXiv: 0805.1624.
[18] \ \ \ K-I.Kondo, T.Shinohara, T.Murakami, Reformulating SU(N)
\ \ \ \ \ \ \ \ \ Yang-Mills theory based on change of variables, arXiv:
0803.0176.
[19] \ \ \ T.Cheng, L.Li, Gauge Theory of Elementary Particle Physics,
\ \ \ \ \ \ \ \ \ Oxford U.Press, Oxford, 1984.
[20] \ \ \ C.Adams, The Knot Book, W.H.Freeman and Co., New York, 1994.
[21] \ \ \ \ A.Floer, An instanton-invariant for 3-manifolds,
\ \ \ \ \ \ \ \ \ Comm.Math.Phys. 118 (1988) 215-240.
[22] \ \ \ \ K-I Kondo, A.Ono, A.Shibata,T.Shinobara,T.Murakami,
\ \ \ \ \ \ \ \ \ Glueball mass from\ quantized knot solitons and
gauge-invariant
\ \ \ \ \ \ \ \ \ gluon mass, J.Phys. A 39 (2006) 13767-13782.
[23] \ \ \ \ L. Freidel, R.Leigh, D. Minic, A.Yelnikov, On the spectrum of
pure
\ \ \ \ \ \ \ \ \ Yang-Mills theory, arxiv: 0801.1113.
[24] \ \ \ H.Meyer, Glueball Regge trajectories, PhD thesis, Oxford University,
\ \ \ \ \ \ \ \ \ 2004; arXiv: hep-lat/0508002.
[25] \ \ \ L.Faddeev, A.Niemi, Aspects of electric and magnetic variables in
\ \ \ \ \ \ \ \ \ SU(2) Yang-Mills theory, arXiv: hep-th/0101078.
[26] \ \ \ A.Wereszczynski, Integrability and Hopf solitons in models with
\ \ \ \ \ \ \ \ \ explicitly broken O(3) symmetry,
\ \ \ \ \ \ \ \ \ European Phys. Journal C 38 (2004) 261-265.
[27] \ \ \ R.Bartnik and J.McKinnon, Particle-like solutions of the
\ \ \ \ \ \ \ \ \ Einstein-Yang-Mills equations, Phys.Rev. Lett. 61 (1988)
141-144.
[28] \ \ \ M.Volkov, D.Gal'tsov, Gravitating non-Abelian solitons and black
\ \ \ \ \ \ \ \ \ holes with Yang-Mills fields, Phys.Reports 319 (1999) 1-83.
[29] \ \ \ J.Smoller, A.Wasserman, S-T. Yau, J.McLeod, Smooth static
\ \ \ \ \ \ \ \ \ solutions of the Einstein-Yang-Mills equations,
\ \ \ \ \ \ \ \ \ Comm.Math.Phys. 143 (1991) 115-147.
[30] \ \ \ L.Witten, Static axially symmetric solutions of self-dual SU(2)
gauge
\ \ \ \ \ \ \ \ \ fields in Euclidean four-dimensional space,
\ \ \ \ \ \ \ \ \ Phys.Rev.D 19 (1979) 718-720.
[31] \ \ \ D.Korotkin, H.Nicolai, Isomonodromic quantization of
\ \ \ \ \ \ \ \ \ dimensionally reduced gravity, Nucl.Phys.B 475 (1996)
397-439.
[32] \ \ \ S.Bilson-Thompson, F.Markopoulou, L.Smolin, Quantum
\ \ \ \ \ \ \ \ \ gravity and the standard model, Class.Quant.Gravity 24
(2007)
\ \ \ \ \ \ \ \ \ 3975-3993.
[33] \ \ \ S.Bilson-Thompson, J.Hackett, L.Kauffman, L.Smolin,
\ \ \ \ \ \ \ \ \ Particle identification from symmetries of braided ribbon
\ \ \ \ \ \ \ \ \ network invariants, arXiv: 0804.0037.
[34] \ \ \ S.Bilson-Thompson, J.Hackett, L.Kauffman, Particle topology,
\ \ \ \ \ \ \ \ \ braids and braided belts, arXiv: 0903.137.
[35] \ \ \ B.List, Evolution of an extended Ricci flow system,
\ \ \ \ \ \ \ \ \ Communications in Analysis and Geometry 16 (2008)
1007-1048.
[36] \ \ \ B.List, Evolution of an extended Ricci flow system, PhD thesis,
\ \ \ \ \ \ \ \ \ Freie University of Berlin, 2005.
[37] \ \ \ A.Chamseddine, A.Connes, Why the Standard Model,
\ \ \ \ \ \ \ \ \ JGP 58 (2008) 38-47.
[38] \ \ \ H.Stephani, D.Kramer, M.MacCallum, C. Hoenselaers, E.Herlt,
\ \ \ \ \ \ \ \ \ Exact Solutions of Einstein's Field Equations,
\ \ \ \ \ \ \ \ \ 2nd Edition, Cambridge University Press, Cambridge, UK,
2006.
[39] \ \ \ R.Wald, General Relativity, The University of Chicago Press,
\ \ \ \ \ \ \ \ \ Chicago, IL,1984.
[40] \ \ \ B.O'Neill, Semi-Riemannian Geometry, Academic Press,
\ \ \ \ \ \ \ \ \ New York, 1983.
[41] \ \ \ C.Reina, A.Trevers, Axisymmetric gravitational fields,
\ \ \ \ \ \ \ \ \ Gen.Relativ.Gravit. 7 (1976) 817-838.
[42] \ \ \ F.Ernst, New formulation of the axially symmetric gravitational
\ \ \ \ \ \ \ \ \ field problem, \ Phys.Rev. 167 (1968) 1175-1178.
[43] \ \ \ F.Ernst, New formulation of the axially symmetric gravitational
field
\ \ \ \ \ \ \ \ \ problem. II, Phys.Rev. 168 (1968) 1415-1417.
[44] \ \ \ J.Isenberg, Parametrization of the space of solutions of
Einstein's
\ \ \ \ \ \ \ \ \ equations, Physical Rev.Lett. 59 (1987) 2389-2392.
[45] \ \ \ J.Isenberg, Constant mean curvature solutions of the Einstein's
\ \ \ \ \ \ \ \ \ constraint \ equations on closed manifolds,
\ \ \ \ \ \ \ \ \ Class.Quantum Grav.12 (1995) 2249-2274.
[46] \ \ \ C.N.Yang, \ Condition of self-duality for SU(2) gauge fields on
\ \ \ \ \ \ \ \ \ Euclidean four-dimensional space,
\ \ \ \ \ \ \ \ \ Phys.Rev.Lett. 38 (1977) 1377-1379.
[47] \ \ \ P.Forgacs, Z.Horvath, L.Palla, Generating the
\ \ \ \ \ \ \ \ \ Bogomolny-Prasad-Sommerfeld one monopole solution by a
\ \ \ \ \ \ \ \ \ B\"{a}cklund transformation, Phys.Rev.Lett. 45 (1980)
505-508.
[48] \ \ \ \ \ D.Singleton, Axially symmetric solutions for SU(2) Yang-Mills
theory,
\ \ \ \ \ \ \ \ \ J.Math.Phys. 37 (1996) 4574-4583.
[49] \ \ \ N.Manton, Complex structure of monopoles,
\ \ \ \ \ \ \ \ \ Nucl.Phys.B135 (1978) 319-332.
[50] \ \ \ \ A.Kholodenko, Towards physically motivated proofs of the
Poincare$^{\prime }$
\ \ \ \ \ \ \ \ \ and geometrization conjectures, J.Geom.Phys. 58 (2008)
259-290.
[51] \ \ \ A.Kholodenko, E. Ballard, From Ginzburg-Landau to
\ \ \ \ \ \ \ \ \ Hilbert-Einstein via Yamabe, Physica A 380 (2007) 115-162.
[52] \ \ \ D.Gal'tsov, Integrable systems in stringy gravity,
\ \ \ \ \ \ \ \ \ Phys.Rev.Lett. 74 (1995) 2863-2866.
[53] \ \ \ P.Breitenlohner,D.Maison, G.Gibbons, 4-dimensional black
\ \ \ \ \ \ \ \ \ holes from Kaluza-Klein theories,
\ \ \ \ \ \ \ \ \ Comm.Math.Phys.120 (1988) 295-333.
[54] \ \ \ E.Kiritsis, String Theory in a Nutshell, Princeton U.Press,
\ \ \ \ \ \ \ \ \ Princeton, 2007.
[55] \ \ \ A.Sen, Strong-weak coupling duality in three-dimensional
\ \ \ \ \ \ \ \ \ string theory, arXiv:hep-th/9408083.
[56] \ \ \ C.Reina, Internal symmetries of the axisymmetric gravitational
\ \ \ \ \ \ \ \ \ fields, J.Math.Phys. 20 (1979) 303-304.
[57] \ \ \ \ A.Kholodenko, Boundary conformal field theories,
\ \ \ \ \ \ \ \ \ limit sets of Kleinian groups and holography,
\ \ \ \ \ \ \ \ \ J.Geom.Phys.35 (2000) 193-238.
[58] \ \ \ K.Akutagawa, M.Ishida, C.Le Brun, Perelman's invariant,
\ \ \ \ \ \ \ \ \ Ricci flow and the Yamabe invariants of smooth manifolds,
\ \ \ \ \ \ \ \ \ Arch.Math. 88 (2007) 71-75.
[59] \ \ \ S.S. Chern, J.Simons, Characteristic forms and geometric
\ \ \ \ \ \ \ \ \ invariants, Ann.Math. 99 (1974) 48-69.
[60] \ \ \ S.S.Chern, On a conformal invariant of three-dimensional
\ \ \ \ \ \ \ \ \ manifolds, in Aspects of Mathematics and its Applications,
\ \ \ \ \ \ \ \ \ pp.245-252, Elsevier Science Publishers, \ Amsterdam, 1986.
[61] \ \ \ E.Witten, 2+1 dimensional gravity as an exactly solvable model,
\ \ \ \ \ \ \ \ \ Nucl.Phys. B311 (1988/89) 46-78.
[62] \ \ \ S.Gukov, Three-dimensional quantum gravity, Chern-Simons theory
\ \ \ \ \ \ \ \ \ and the A-polynomial, Comm.Math.Phys. 255 (2005) 577-627.
[63] \ \ \ J.Zinn-Justin, Quantum Field Theory and Critical Phenomena,
\ \ \ \ \ \ \ \ \ Clarendon Press, Oxford, UK, 1989.
[64] \ \ \ K.Huang, Quarks, Leptons and Gauge Fields, World Scientific,
\ \ \ \ \ \ \ \ \ Singapore, 1982.
[65] \ \ \ M-C.Hong, G.Tian, Global existence of m-equivariant
\ \ \ \ \ \ \ \ \ Yang-Mills flow in four dimensional spaces,
\ \ \ \ \ \ \ \ \ Comm.Analysis and Geometry 12 (2004) 183-211.
[66] \ \ \ \ P.Kronheimer, T.Mrowka, Monopoles and Three-Manifolds,
\ \ \ \ \ \ \ \ \ Cambridge U.Press, Cambridge, 2007.
[67] \ \ \ E.Frenkel, A.Losev, N.Nekrasov, Instantons beyond topological
\ \ \ \ \ \ \ \ \ theory II, arXiv:0803.3302.
[68] \ \ \ \ N.Manton, P.Sutcliffe, Topological Solitons,
\ \ \ \ \ \ \ \ \ Cambridge U.Press, Cambridge, 2004.
[69] \ \ \ \ D.Auckly, Topological methods to compute Chern-Simons
invariants,
\ \ \ \ \ \ \ \ \ Math.Proc.Camb.Phil. Soc.115 (1994) 220-251.
[70] \ \ \ P.Kirk, E.Klassen, Chern-Simons invariants of 3-manifolds and
\ \ \ \ \ \ \ \ \ representation spaces of knot groups, Math.Ann.287 (1990)
343-367.
[71] \ \ \ D.Auckly, L.Kapitanski, Analysis of S$^{2}$-valued maps and
\ \ \ \ \ \ \ \ \ Faddeev's model, Comm.Math.Phys. 256 (2005) 611-620.
[72] \ \ P.Forgacs, Z.Horvath, L.Palla, Soliton-theoretic framework
\ \ \ \ \ \ \ \ for generating multimonopoles, Ann.Phys. 136 (1981) 371-396.
[73] \ D.Harland, Hyperbolic calorons, monopoles and instantons,
\ \ \ \ \ \ \ Comm.Math.Phys. 280 (2008) 727-735.
[74] \ D.Harland, Large scale and large period limits of symmetric
\ \ \ \ \ \ \ calorons, arXiv: 0704.3695.
[75] \ M.Atiyah, Magnetic monopoles in hyperbolic space, in
\ \ \ \ \ \ \ Vector Bundles on Algebraic Varieties, pages 1-33, Tata
Institute,
\ \ \ \ \ \ \ Bombay, 1984.
[76] \ \ E.Witten, Some exact multipseudoparticle solutions of classical
\ \ \ \ \ \ \ Yang-Mills theory, PRL 38 (1977) 121-124.
[77] \ R.Rajaraman, Solitons and Instantons, North-Holland,
\ \ \ \ \ \ \ Amsterdam, 1982.
[78] \ N.Manton, Instantons on the line, Phys. Lett.B 76 (1978) 111-112.
[79] \ A.Kholodenko, Veneziano amplitudes, spin chains and
\ \ \ \ \ \ \ Abelian reduction of QCD, J.Geom.Phys.59 (2009) 600-619.
[80] \ A.Polyakov, Gauge Fields and Strings, Harwood Academic
\ \ \ \ \ \ \ Publishers, New York, 1987.
[81] \ A.Polyakov, Supermagnets and sigma models,
\ \ \ \ \ \ \ arXiv:hep-th/0512310.
[82] \ \ K.-I.Kondo, Magnetic monopoles and center vortices as
\ \ \ \ \ \ \ gauge-invariant topological defects simultaneously responsible
\ \ \ \ \ \ \ for confinement, J.Phys.G 35 (2008) 085001.
[83] \ \ A.Kholodenko, Heisenberg honeycombs solve Veneziano puzzle,
\ \ \ \ \ \ \ Int.Math.Forum 4 (2009) 441-509.
[84] \ A.Kholodenko, E.Ballard, Topological character of hydrodynamic
\ \ \ \ \ \ \ screening in suspensions of hard spheres: \ An example of the
\ \ \ \ \ \ \ universal phenomenon, Physica A 388 (2009) 3024-3062.
[85] \ \ A.Calini, Integrable dynamics of knotted vortex filaments, in
\ \ \ \ \ \ \ \ Geometry, Integrability and Quantization, pages 11--50,
\ \ \ \ \ \ \ \ Softex, Sofia, 2004.
[86] \ \ L.Faddeev, L.Takhtajan, Hamiltonian Methods in Theory of Solitons,
\ \ \ \ \ \ \ \ Springer-Verlag, Berlin, 1987.
[87] \ \ O.Pashaev, S.Sergeenkov, Nonlinear sigma model with noncompact
\ \ \ \ \ \ \ \ symmetry group in the theory of a nearly ideal Bose gas,
\ \ \ \ \ \ \ \ Physica A, 137 (1986) 282-294.
[88] \ \ P.Forgacs, N.Manton, Space-time symmetries in gauge theories,
\ \ \ \ \ \ \ \ Comm.Math.Phys. 72 (1980) 15-35.
[89] \ \ A.Jaffe, C.Taubes, Vortices and Monopoles, Birkh\"{a}user, Boston,
1980.
[90] \ \ G. Landweber, Singular instantons with SO(3) symmetry,
\ \ \ \ \ \ \ \ arXiv:math/0503611.
[91] \ \ J.-H. Lee, O.Pashaev, Moving frames hierarchy and B-F theory,
\ \ \ \ \ \ \ \ J.Math.Phys. 39 (1998) 102-123.
[92] \ \ \ O.Pashaev, The Lax pair by dimensional reduction of Chern-Simons
\ \ \ \ \ \ \ \ gauge theory, J.Math.Phys. 37 (1996) 4368-4387.
[93] \ \ \ L.Faddeev, A.Niemi, U.Wiedner, Glueballs, closed fluxtubes and
\ \ \ \ \ \ \ \ \ $\eta (1440)$, arXiv: hep-ph/0308240.
[94] \ \ \ Q.Ding, A note on the NLS and Schr\"{o}dinger flow of maps,
\ \ \ \ \ \ \ \ \ Phys.Lett.A 248 (1998) 49-56.
[95] \ \ \ Q.Ding, J.-I. Inoguchi, Schr\"{o}dinger flows, binormal motion
\ \ \ \ \ \ \ \ \ for curves and second AKNS hierarchies, Chaos, Solitons and
\ \ \ \ \ \ \ \ \ fractals, 21 (2004) 669-677.
[96] \ \ \ F.Dalfovo, S.Giorgi, L.Pitaevskii, S.Stringari, Theory of
Bose-Einstein
\ \ \ \ \ \ \ \ condensation in trapped gases, Rev.Mod.Phys. 71 (1999)
463-512.
[97] \ \ \ V.Zakharov, A.Shabat, Exact theory of two-dimensional
self-focusing
\ \ \ \ \ \ \ \ \ and one-dimensional self-modulation of waves in nonlinear
media,
\ \ \ \ \ \ \ \ \ Sov.Phys. JETP 34 (1972) 62-69.
[98] \ \ \ V.Zakharov, A.Shabat, Interaction between solitons in a stable
\ \ \ \ \ \ \ \ \ medium, Sov.Phys. JETP 37 (1973) 823-828.
[99] \ \ \ Y.Castin, C.Herzog, Bose-Einstein condensates in symmetry
\ \ \ \ \ \ \ \ \ breaking states, arXiv: cond-mat/0012040.
[100] \ \ J.Maki and T.Kodama, Phenomenological quantization scheme in a
\ \ \ \ \ \ \ \ \ nonlinear Schr\"{o}dinger equation, PRL 57 (1986)
2097-2100.
[101] \ \ E.Lieb, W.Liniger, Exact analysis of an interacting Bose gas I.
\ \ \ \ \ \ \ \ \ The General solution and the ground state,
\ \ \ \ \ \ \ \ \ Phys.Rev.130 (1963) 1605-1616.
[102] \ \ M.Batchelor, X.Guan, J.McGuirre, Ground state of 1d bosons
\ \ \ \ \ \ \ \ \ with delta interaction: link to the BCS model, J.Phys. A
37 (2004)
\ \ \ \ \ \ \ \ \ L497-504.
[103] \ \ A.Ovchinnikov, On exactly solvable pairing models for bosons,
\ \ \ \ \ \ \ \ \ J.Stat.Mechanics: Theory and Experiment (2004) P07004.
[104] \ \ R.Richardson, Exactly solvable many-boson model,
\ \ \ \ \ \ \ \ \ J.Math.Physics 9 (1968) 1327-1343.
[105] \ \ R.Richardson, N. Sherman, Exact eigenstates of the
\ \ \ \ \ \ \ \ \ \ pairing-force hamiltonian, Nucl.Phys. 52 (1964) 221-238.
[106] \ \ \ A.Dhar, B.Shastry, Bloch walls and macroscopic string states
\ \ \ \ \ \ \ \ \ \ in Bethe solution of the Heisenberg ferromagnetic linear
chain,
\ \ \ \ \ \ \ \ \ \ PRL 85 (2000) 2813-2816.
[107] \ \ \ A.Dhar, B.Shastry, Solution of a generalized Stiltjes problem,
\ \ \ \ \ \ \ \ \ \ J.Phys. A 34 (2001) 6197-6208.
[108] \ \ \ J.Fuchs, A.Recati, W. Zwerger, An exactly solvable model of
\ \ \ \ \ \ \ \ \ \ the BCS-BEC crossover, PRL 93 (2004) 090408.
[109] \ \ \ J.Dukelsky,C.Esebbag, P.Schuck, Class of exactly solvable
\ \ \ \ \ \ \ \ \ \ pairing models, PRL 87 (2001) 066403.
[110] \ \ \ P.Ring, P.Schuck, The Nuclear Many-Body Problem,
\ \ \ \ \ \ \ \ \ \ Springer-Verlag, Berlin, 1980.
[111] \ \ \ J.Eisenberg, W.Greiner, Nuclear Theory, Vol.3,
\ \ \ \ \ \ \ \ \ \ Microscopic Theory of the Nucleus,
\ \ \ \ \ \ \ \ \ \ North-Holland, Amsterdam, 1972.
[112] \ \ \ D.Rowe, Nuclear Collective Motion,
\ \ \ \ \ \ \ \ \ \ Methuen and Co.Ltd., London, 1970.
[113] \ \ \ A.Kerman, R.Lawson, M.Macfarlane, Accuracy of the
\ \ \ \ \ \ \ \ \ \ superconductivity approximation for pairing forces in
\ \ \ \ \ \ \ \ \ \ nuclei, Phys.Rev.124 (1961) 162-167.
[114] \ \ \ M.Goldhaber, E.Teller, On nuclear dipole vibrations,
\ \ \ \ \ \ \ \ \ \ Phys.Rev.74 (1948) 1046.
[115] \ \ \ A.Sykes, P.Drummond, M.Davis, Excitation spectrum of
\ \ \ \ \ \ \ \ \ \ bosons in a finite one-dimensional circular waveguide via
\ \ \ \ \ \ \ \ \ \ \ Bethe ansatz, Phys.Rev. A 76 (2007) 06320.
[116] \ \ \ \ D.Dean,M.Hjorth-Jensen, Pairing in nuclear systems:
\ \ \ \ \ \ \ \ \ \ \ from neutron stars to finite nuclei,
\ \ \ \ \ \ \ \ \ \ \ Rev.Mod.Phys. 75 (2003) 607-654.
[117] \ \ \ \ P.Braun-Munzinger, J.Wambach, \ Phase diagram of strongly
\ \ \ \ \ \ \ \ \ \ \ interacting \ matter, Rev.Mod.Phys. 81 (2009)
1031-1050.
[118] \ \ \ \ M.Barbaro, R.Cenni, A.Molinari, M.Quaglia, Analytic solution of
\ \ \ \ \ \ \ \ \ \ \ the pairing problem: one pair in many levels,
\ \ \ \ \ \ \ \ \ \ \ Phys.Rev.C 66 (2002) 03410.
[119] \ \ \ \ M.Barbaro, R.Cenni, A.Molinari, M.Quaglia, The many levels
\ \ \ \ \ \ \ \ \ \ \ pairing Hamiltonian for two pairs, The European Phys.J.
\ \ \ \ \ \ \ \ \ \ \ A22 (2004) 377-390.
[120] \ \ \ \ M.G\"{u}rses, B.Xanthopoulos, Axially symmetric, static
self-dual
\ \ \ \ \ \ \ \ \ \ \ SU(3) gauge fields and stationary Einstein-Maxwell
metrics,
\ \ \ \ \ \ \ \ \ \ \ Phys.Rev.D 26 (1982) 1912-1915.
[121] \ \ \ \ M.G\"{u}rses, Axially symmetric, static self-dual Yang-Mills
and
\ \ \ \ \ \ \ \ \ \ \ stationary Einstein-gauge field equations, Phys.Rev.D
30 (1984)
\ \ \ \ \ \ \ \ \ \ \ 486-488.
[122] \ \ \ \ P.Mazur, A relationship between the electrovacuum Ernst
\ \ \ \ \ \ \ \ \ \ \ equations and nonlinear $\sigma -$model, \
\ \ \ \ \ \ \ \ \ \ \ Acta Phys.Polonica B14 (1983) 219-234.
[123] \ \ \ \ W.Bonnor, Exact solutions of the Einstein-Maxwell equations,
\ \ \ \ \ \ \ \ \ \ \ Z.Phys.161 (1961) 439-444.
[124] \ \ \ \ I.Hauser,F.Ernst, SU(2,1) generation of electrovacs from
\ \ \ \ \ \ \ \ \ \ \ Minkowski space, J.Math.Phys. 20 (1979) 1041-1055.
[125] \ \ \ \ B.Ivanov, Purely electromagnetic spacetimes,
\ \ \ \ \ \ \ \ \ \ \ Phys.Rev.D 77 (2008) 044007.
[126] \ \ \ \ W.Cottingham, D.Greenwood, An Introduction to the
\ \ \ \ \ \ \ \ \ \ \ Standard Model of Particle Physics,
\ \ \ \ \ \ \ \ \ \ \ Cambridge U.Press, Cambridge, 2007.
[127] \ \ \ J.Charap, R.Jones, P.Williams, Unitary symmetry,
\ \ \ \ \ \ \ \ \ \ Rep.Progr.Phys. 30 (1967) 227-283.
[128] \ \ \ M.G\"{u}rses, Axially symmetric, static self-dual Yang-Mills and
\ \ \ \ \ \ \ \ \ \ stationary Einstein-gauge field equations,
\ \ \ \ \ \ \ \ \ \ Phys.Rev.D 30 (1984) 486-488.
[129] \ \ \ A.Herrera-Aguillar, O.Kechkin, String theory extensions of
\ \ \ \ \ \ \ \ \ \ Einstein-Maxwell fields: The stationary case,
\ \ \ \ \ \ \ \ \ \ J.Math.Physics 45 (2004) 216-229.
[130] \ \ \ L.Mason, E.Newman, A connection between the Einstein
\ \ \ \ \ \ \ \ \ \ \ and Yang-Mills equations, Comm.Math.Phys. 121 (1989)
659-668.
[131] \ \ \ A. Ashtekar, T.Jacobson, L.Smolin, A new characterization of
\ \ \ \ \ \ \ \ \ \ Half-flat solutions to Einstein's equation,
\ \ \ \ \ \ \ \ \ \ Comm.Math.Phys. 115 (1988) 631-648.
[132] \ \ \ T. Ivanova, Self-dual Yang-Mills connections and generalized
\ \ \ \ \ \ \ \ \ \ Nahm equations, in Tensor and Vector Analysis, pages
57-70,
\ \ \ \ \ \ \ \ \ \ Gordon and Breach, Amsterdam, 1998.
[133] \ \ \ R.Finkelstein, A.Cadavid, Masses and interactions of
\ \ \ \ \ \ \ \ \ \ q-fermionic knots, Int.J.Mod.Phys. A21 (2006) 4269-4302,
\ \ \ \ \ \ \ \ \ \ arXiv: hep-th/0507022.
[134] \ \ \ \ F.Lin, Y.Yang, Analysis on Faddeev knots and Skyrme solitons:
\ \ \ \ \ \ \ \ \ \ Recent progress and open problems, in Perspectives in
Nonlinear
\ \ \ \ \ \ \ \ \ \ Partial Differential Equations, pp 319-343, AMS
Publishers,
\ \ \ \ \ \ \ \ \ \ Providence, RI, 2007.
[135] \ \ \ N.Manton, S.Wood, Light nuclei as quantized Skyrmions:
\ \ \ \ \ \ \ \ \ \ Energy spectra and form factors, arXiv: 0809.350.
[136] \ \ \ J.Schechter, H.Weigel, The Skyrme model for baryons,
\ \ \ \ \ \ \ \ \ \ arXiv: hep-ph/990755.
[137] \ \ \ F.Bonahon, Low-Dimensional Geometry.
\ \ \ \ \ \ \ \ \ \ From Euclidean Surfaces to Hyperbolic Knots.
\ \ \ \ \ \ \ \ \ \ AMS Publishers, Providence,RI. 2009.
[138] \ \ \ C.Reina, A.Treves, Gyromagnetic ratio of Einstein-Maxwell fields,
\ \ \ \ \ \ \ \ \ \ \ Phys.Rev.D 11 (1975) 3031-3032.
[139] \ \ \ \ F.Ernst, Charged version of Tomimatsu-Sato spinning mass field,
\ \ \ \ \ \ \ \ \ \ \ Phys.Rev.D 7 (1973) 2520-2521.\ \ \
[140] \ \ \ \ H.Pfister,M.King, The gyromagnetic factor in electrodynamics,
\ \ \ \ \ \ \ \ \ \ \ quantum theory and general relativity,
\ \ \ \ \ \ \ \ \ \ \ Class.Quant.Gravity 20 (2003) 205-213.
[141] \ \ \ \ T.Appelquist, A.Chodos, P.Freund, Modern Kaluza-Klein
\ \ \ \ \ \ \ \ \ \ \ Theories, Addison-Wesley Publ.Co., Reading, MA, 1987.
[142] \ \ \ \ C.Misner, J.Wheeler, Classical physics as geometry,
\ \ \ \ \ \ \ \ \ \ \ Ann.Phys. 2 (1957) 525-603.
[143] \ \ \ \ N.Sibgatullin, Oscillations and Waves in Strong Gravitational
\ \ \ \ \ \ \ \ \ \ \ and Electromagnetic Fields, Springer-Verlag, Berlin,
1991.
[144] \ \ \ \ J.Griffiths, Colliding Plane Waves in General Relativity,
\ \ \ \ \ \ \ \ \ \ \ Clarendon Press, Oxford, 1991.
[145] \ \ \ \ E.Herlt, Static and stationary axially symmetric gravitational
\ \ \ \ \ \ \ \ \ \ \ fields of bounded sources. I. Solutions obtainable from
the
\ \ \ \ \ \ \ \ \ \ \ van Stockum metric, Gen.Relativ.Gravit. 9 (1978)
711-719.
[146] \ \ \ \ Y.Nakamura, Symmetries of stationary axially symmetric
\ \ \ \ \ \ \ \ \ \ \ vacuum Einstein equations and the new family of exact
\ \ \ \ \ \ \ \ \ \ \ solutions, JMP 24 (1983) 606-609.
[147] \ \ \ \ T.Ortin, Gravity and Strings, Cambridge U. Press,
\ \ \ \ \ \ \ \ \ \ \ Cambridge, 2004.
[148] \ \ \ \ \ A.Besse, Einstein Manifolds, Springer-Verlag,
\ \ \ \ \ \ \ \ \ \ \ Berlin,1987.
[149] \ \ \ \ \ Y. Choquet-Bruhat, General Relativity and the
\ \ \ \ \ \ \ \ \ \ \ Einstein Equations, Oxford U.Press, Oxford, 2009.
[150] \ \ \ \ A.Kholodenko, Newtonian limit of Einsteinian gravity
\ \ \ \ \ \ \ \ \ \ \ and dynamics of Solar System, arXiv: 1006.4650.
\ \ \ \ \ \ \ \ \ \ \
\end{document}
\section{Introduction}
\vspace*{-0.1in}
In this paper, we consider the following stochastic optimization problem:
\begin{align}\label{eqn:stocn}
\min_{\mathbf{x}\in\mathbb{R}^d}f(\mathbf{x})=\mathbb{E}_{\xi}[f(\mathbf{x}; \xi)],
\end{align}
where $f(\mathbf{x}; \xi)$ is a random function but not necessarily convex.
The above formulation plays an important role for solving many machine learning problems, e.g., deep learning~\cite{goodfellow2016deep}.
A prevalent algorithm for solving the problem is stochastic gradient descent (SGD)~\cite{ghadimi2013stochastic}. However, SGD can only guarantee convergence to a first-order stationary point (i.e., $\|\nabla f(\mathbf{x})\|\leq\epsilon_1$, where $\|\cdot\|$ denotes the Euclidean norm) for non-convex optimization, which could be a saddle point.
A potential solution to address this issue is to find a nearly second-order stationary point $\mathbf{x}$ such that $\|\nabla f(\mathbf{x})\|\leq \epsilon_1\ll 1$, and $-\lambda_{\text{min}}(\nabla^2 f(\mathbf{x}))\leq \epsilon_2\ll 1$, where $\lambda_{\text{min}}(\cdot)$ denotes the smallest eigenvalue. When the objective function is non-degenerate (e.g., strict saddle~\cite{pmlr-v40-Ge15} or whose Hessian at all saddle points has a negative eigenvalue), an approximate second-order stationary point is close to a local minimum.
Although a number of algorithms have emerged for finding a nearly second-order stationary point of non-convex optimization with a deterministic function~\cite{nesterov2006cubic,conn2000trust,Cartis2011,Cartis2011b,DBLP:conf/stoc/AgarwalZBHM17,DBLP:journals/corr/CarmonDHS16,royer2017complexity}, results for stochastic non-convex optimization are still limited. There are three closely related works~\cite{pmlr-v40-Ge15, DBLP:conf/colt/ZhangLC17,natasha2}. A summary of the algorithms in these works and their convergence results is presented in Table~\ref{tab:2}. It is notable that Natasha2, which involves switching between several sub-routines including SGD, a degenerate version of Natasha1.5 for finding a first-order stationary point, and an online power method (i.e., Oja's algorithm~\cite{oja1982simplified}) for computing the negative curvature (i.e., the eigenvector corresponding to the minimum eigenvalue) of the Hessian matrix, is more complex than noisy SGD and SGLD.
\begin{table*}[t]
\caption{Comparison with existing stochastic algorithms for achieving an $(\epsilon_1, \epsilon_2)$-second-order stationary solution to~(\ref{eqn:stocn}), where $p$ is a number at least $4$, IFO (incremental first-order oracle) and ISO (incremental second-order oracle) are terminologies borrowed from~\cite{reddi2017generic}, representing $\nabla f(\mathbf{x}; \xi)$ and $\nabla^2 f(\mathbf{x}; \xi)\mathbf{v}$ respectively, $T_h$ denotes the runtime of ISO and $T_g$ denotes the runtime of IFO. The proposed algorithms SNCG have two variants with different time complexities, where the result marked with $*$ has a practical improvement detailed later. }
\centering
\label{tab:2}
\begin{small}\begin{tabular}{l|lll}
\toprule
algo.& oracle & second-order guarantee in &time complexity\\
&& expectation or high probability&\\
\midrule
Noisy SGD~\cite{pmlr-v40-Ge15} &IFO&$(\epsilon, \epsilon^{1/4})$, high probability&$\widetilde O\left(T_gd^p\epsilon^{-4}\right)$\\
\midrule
SGLD~\cite{DBLP:conf/colt/ZhangLC17} &IFO&$(\epsilon, \epsilon^{1/2})$, high probability&$\widetilde O\left(T_gd^p\epsilon^{-4}\right)$\\
\midrule
Natasha2~\cite{natasha2} &IFO + ISO&$(\epsilon, \epsilon^{1/2})$, expectation&$\widetilde O\left( T_g\epsilon^{-3.5}+T_h\epsilon^{-2.5} \right)$\\
\midrule
SNCG&IFO + ISO&$(\epsilon, \epsilon^{1/2})$, high probability&$\widetilde O\left(T_g\epsilon^{-4} + T_h\epsilon^{-3}\right)^*$\\
&&&$\widetilde O\left(T_g\epsilon^{-4} + T_h\epsilon^{-2.5}\right)$\\
\bottomrule
\end{tabular}
\end{small}
\vspace*{-0.2in}
\end{table*}
In this paper, we propose new stochastic optimization algorithms for solving~(\ref{eqn:stocn}). Similar to several existing algorithms, we also use the negative curvature to escape from saddle points. The key difference is that we compute a noisy negative curvature based on a proper mini-batch of sampled random functions. A novel updating step is proposed that follows a stochastic gradient or the noisy negative curvature depending on which decreases the objective value most. Building on this step, we present two algorithms that have different time complexities. A summary of our results and comparison with previous similar results are presented in Table~\ref{tab:2}. To the best of our knowledge, the proposed algorithms are the first for stochastic non-convex optimization with a second-order convergence in {\it high probability} and a time complexity that is {\it almost linear} in the problem's dimensionality. It is also notable that our result is much stronger than the mini-batch SGD analyzed in~\cite{Ghadimi:2016:MSA:2874819.2874863} for stochastic non-convex optimization in that (i) we use the same number of IFO as in~\cite{Ghadimi:2016:MSA:2874819.2874863} but achieve the second-order convergence using a marginal number of ISO; (ii) our high probability convergence is for a solution from a single run of the proposed algorithms instead of from multiple runs and using a boosting technique as in~\cite{Ghadimi:2016:MSA:2874819.2874863}.
Before moving to the next section, we would like to remark that stochastic algorithms with second-order convergence guarantees were recently proposed for solving a finite-sum problem~\cite{reddi2017generic}; they alternate between a first-order sub-routine (e.g., stochastic variance reduced gradient) and a second-order sub-routine (e.g., Hessian descent). Since full gradients are computed occasionally, they are not applicable to the general stochastic non-convex optimization problem~(\ref{eqn:stocn}) and hence are excluded from comparison. Nevertheless, the idea behind the proposed NCG-S step, which lets negative curvature descent compete with gradient descent, can be borrowed to reduce the number of stochastic Hessian-vector products in their Hessian descent. We will elaborate on this point later.
\section{Preliminaries and Building Blocks}
\vspace*{-0.1in}
Our goal is to find an $(\epsilon_1, \epsilon_2)$-second order stationary point $\mathbf{x}$ such that
$\|\nabla f(\mathbf{x})\|\leq \epsilon_1$, and $\lambda_{\min}(\nabla^2 f(\mathbf{x}))\geq -\epsilon_2$.
To this end, we make the following assumptions regarding~(\ref{eqn:stocn}).
\begin{ass}\label{ass:1} (i) Every random function $f(\mathbf{x}; \xi)$ is twice differentiable, and it has Lipschitz continuous gradient, i.e., there exists $L_1>0$ such that $\|\nabla f(\mathbf{x}; \xi) - \nabla f(\mathbf{y}; \xi)\|\leq L_1\|\mathbf{x} - \mathbf{y}\|$, (ii) $f(\mathbf{x})$ has Lipschitz continuous Hessian, i.e., there exists $L_2>0$ such that $\|\nabla^2 f(\mathbf{x}) - \nabla^2 f(\mathbf{y})\|_2\leq L_2\|\mathbf{x} - \mathbf{y}\|$, (iii) given an initial point $\mathbf{x}_0$, there exists $\Delta<\infty$ such that $f(\mathbf{x}_0) - f(\mathbf{x}_*)\leq \Delta$, where $\mathbf{x}_*$ denotes the global minimum of $f(\mathbf{x})$; (iv) there exists $G>0$ such that $\mathbb{E}[\exp(\|\nabla f(\mathbf{x}; \xi) - \nabla f(\mathbf{x})\|/G)]\leq \exp(1)$ holds.
\end{ass}
\vspace*{-0.1in}
{\bf Remark:} The first three assumptions are standard assumptions for non-convex optimization in order to establish second-order convergence. The last assumption is standard in stochastic optimization and is necessary for the high probability analysis.
The proposed algorithms require noisy first-order information at each iteration and, at some iterations, noisy second-order information. We first discuss approaches to computing this information, which will lead us to the updating step NCG-S. To compute noisy first-order information, we use an incremental first-order oracle (IFO) that takes $\mathbf{x}$ as input and returns $\nabla f(\mathbf{x}; \xi)$. In particular, at a point $\mathbf{x}$ we sample a set of random variables $\mathcal{S}_1 = \{\xi_1, \xi_2, \ldots\}$ and compute a stochastic gradient $\mathbf{g}(\mathbf{x}) = \frac{1}{|\mathcal S_1|}\sum_{\xi_i\in\mathcal{S}_1}\nabla f(\mathbf{x}; \xi_i)$ such that $\|\mathbf{g}(\mathbf{x}) - \nabla f(\mathbf{x})\|\leq \epsilon_4\leq \min(\frac{1}{2\sqrt{2}}\epsilon_1, \epsilon_2^2/(24L_2))$ holds with high probability. This is guaranteed by the following lemma.
\begin{lemma}
\label{lem:gc}
Suppose {\bf Assumption 1} (iv) holds. Let $\mathbf{g}(\mathbf{x}) = \frac{1}{|\mathcal S_1|}\sum_{\xi_i\in\mathcal{S}_1}\nabla f(\mathbf{x}; \xi_i)$. For any $\epsilon_4,\delta\in(0,1)$, $\mathbf{x}\in\mathbb{R}^d$, when
$|\mathcal{S}_1|\geq\frac{4G^2(1+3\log^2(1/\delta))}{\epsilon_4^2}$,
we have
$ \Pr(\|\mathbf{g}(\mathbf{x})-\nabla f(\mathbf{x})\|\leq\epsilon_4)\geq 1-\delta.$
\end{lemma}
The lemma can be proved by using a large deviation theorem for vector-valued martingales (e.g., see~\cite{Ghadimi:2016:MSA:2874819.2874863}[Lemma 4]).
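As a concrete illustration (not part of the analysis), the mini-batch size prescribed by Lemma~\ref{lem:gc} and the averaged gradient estimator can be sketched in Python; the toy random function $f(\mathbf{x};\xi)=\frac{1}{2}\|\mathbf{x}-\xi\|^2$ and all numeric constants below are assumptions chosen for illustration.

```python
import numpy as np

def batch_size_lemma1(G, eps4, delta):
    """Mini-batch size from Lemma 1: |S_1| >= 4 G^2 (1 + 3 log^2(1/delta)) / eps4^2."""
    return int(np.ceil(4.0 * G**2 * (1.0 + 3.0 * np.log(1.0 / delta) ** 2) / eps4**2))

def minibatch_gradient(grad_f, x, xis):
    """g(x) = (1/|S_1|) sum_i grad f(x; xi_i), averaged over the sampled mini-batch."""
    return np.mean([grad_f(x, xi) for xi in xis], axis=0)

# Toy random function: f(x; xi) = 0.5 ||x - xi||^2, so grad f(x; xi) = x - xi
# and the true gradient is x - E[xi] = x when xi ~ N(0, I).
rng = np.random.default_rng(0)
d = 5
grad_f = lambda x, xi: x - xi
n = batch_size_lemma1(G=1.0, eps4=0.1, delta=0.05)
xis = rng.normal(size=(n, d))
x = np.ones(d)
g = minibatch_gradient(grad_f, x, xis)
err = np.linalg.norm(g - x)   # deviation from the exact gradient; well below eps4
```

With $G=1$, $\epsilon_4=0.1$, $\delta=0.05$, the lemma prescribes on the order of $10^4$ samples, and the empirical deviation $\|\mathbf{g}(\mathbf{x})-\nabla f(\mathbf{x})\|$ is indeed far below $\epsilon_4$.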
To compute noisy second-order information, we calculate a noisy negative curvature of a stochastic Hessian that is sufficiently close to the true Hessian. In particular, at a point $\mathbf{x}$ we sample a set of random variables $\mathcal{S}_2 = \{\xi'_1, \xi'_2, \ldots, \}$ and compute a noisy negative curvature $\mathbf{v}$ of the stochastic Hessian $H(\mathbf{x}) = \frac{1}{|\mathcal S_2|}\sum_{\xi'_i\in\mathcal{S}_2}\nabla^2 f(\mathbf{x}; \xi'_i)$, where $|\mathcal{S}_2|$ is sufficiently large such that $\|H(\mathbf{x}) - \nabla^2 f(\mathbf{x})\|_2\leq \epsilon_3\leq \epsilon_2/24$ holds with high probability, where $\|\cdot\|_2$ denotes the spectral norm of a matrix. This can be guaranteed according to the following lemma.
\begin{lemma}
\label{lem:Hc}
Suppose {\bf Assumption 1} (i) holds. Let $H(\mathbf{x}) = \frac{1}{|\mathcal S_2|}\sum_{\xi_i\in\mathcal{S}_2}\nabla^2 f(\mathbf{x}; \xi_i)$. For any $\epsilon_3,\delta\in(0,1)$, $\mathbf{x}\in\mathbb{R}^d$, when $|\mathcal{S}_2|\geq\frac{16L_1^2}{\epsilon_3^2}\log(\frac{2d}{\delta})$, we have
$ \Pr(\|H(\mathbf{x})-\nabla^2 f(\mathbf{x})\|_2\leq\epsilon_3)\geq 1-\delta.$
\end{lemma}
The above lemma can be proved by using matrix concentration inequalities. Please see \cite{peng16inexacthessian}[Lemma 4] for a proof. To compute a noisy negative curvature of $H(\mathbf{x})$, we can leverage approximate PCA algorithms~\cite{DBLP:conf/nips/ZhuL16,DBLP:conf/icml/GarberHJKMNS16} using the incremental second-order oracle (ISO) that can compute $\nabla^2 f(\mathbf{x}; \xi)\mathbf{v}$.
\begin{lemma}\label{lem:approxPCA}
Let $H = \frac{1}{m}\sum_{i=1}^mH_i$ where $\|H_i\|_2\leq L_1$. There exists a randomized algorithm $\mathcal A$ such that with probability at least $1- \delta$, $\mathcal A$ produces a unit vector $\mathbf{v}$ satisfying $\lambda_{\min}(H)\geq \mathbf{v}^{\top}H\mathbf{v} - \varepsilon$ with a time complexity of $\widetilde O(T_h\max\{m, m^{3/4}\sqrt{L_1/\varepsilon}\})$, where $T_h$ denotes the time of computing $H_i\mathbf{v}$ and $\widetilde O$ suppresses a logarithmic term in $d, 1/\delta, 1/\varepsilon$.
\end{lemma}
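The guarantee of Lemma~\ref{lem:approxPCA} is achieved by the randomized PCA methods cited above. As a simple illustrative stand-in (with worse iteration complexity than those methods), one can run power iteration on the shifted operator $L_1 I - H$, which is positive semi-definite since $\|H\|_2\le L_1$, using only Hessian-vector products (ISO calls); the toy Hessian below is an assumption.

```python
import numpy as np

def negative_curvature(hvp, d, L1, iters=500, seed=0):
    """Approximate the eigenvector of lambda_min(H) by power iteration on
    L1*I - H; `hvp(v)` returns H v, i.e. one ISO call per iteration."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=d)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = L1 * v - hvp(v)          # apply the shifted operator (L1 I - H) to v
        v = w / np.linalg.norm(w)
    return v, float(v @ hvp(v))      # unit direction and its Rayleigh quotient

# Toy Hessian with a negative curvature direction: lambda_min = -0.5.
H = np.diag([1.0, 0.3, -0.5])
v, vHv = negative_curvature(lambda u: H @ u, d=3, L1=1.0)
```

In SNCG-1 below, the accuracy $\varepsilon$ of this sub-routine is set adaptively to $\max(\epsilon_2,\|\mathbf{g}(\mathbf{x}_j)\|^\alpha)/2$ rather than to the target accuracy $\epsilon_2$.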
\textbf{NCG-S: the updating step.} With the approaches for computing noisy first-order and second-order information, we present a novel updating step called NCG-S in Algorithm \ref{alg:sgnc}, which uses a competing idea: it takes a step along the noisy negative gradient direction or the noisy negative curvature direction, depending on which decreases the objective value more. One striking feature of NCG-S is that the noise level in computing a noisy negative curvature of $H(\mathbf{x})$ is set to a free parameter $\varepsilon$ instead of the target accuracy level $\epsilon_2$ as in many previous works~\cite{DBLP:conf/stoc/AgarwalZBHM17,DBLP:journals/corr/CarmonDHS16,peng16inexacthessian}, which allows us to design an algorithm with a much reduced number of ISO calls in practice. The following lemma establishes the sufficient decrease in objective value achieved by each NCG-S step.
\begin{lemma}
\label{lemma:ncg-s}
Suppose Assumption 1 holds.
Conditioned on the event $\mathcal A=\{\|H(\mathbf{x}_j) - \nabla^2 f(\mathbf{x}_j)\|_2\leq \epsilon_3\} \cap \{\|\mathbf{g}(\mathbf{x}_j) - \nabla f(\mathbf{x}_j)\|\leq \epsilon_4\}$ where $\epsilon_3\leq \epsilon_2/24$ and $\epsilon_4 \leq \min(\frac{1}{2\sqrt{2}}\epsilon_1, \epsilon_2^2/(24L_2))$, the update $\mathbf{x}_{j+1}=\text{NCG-S}(\mathbf{x}_j,\varepsilon,\delta,\epsilon_1,\epsilon_2)$ satisfies
$ f(\mathbf{x}_j) - f(\mathbf{x}_{j+1})\geq \max\left(\frac{1}{4L_1}\|\mathbf{g}(\mathbf{x}_j)\|^2 - \frac{\epsilon_1^2}{8L_1}, \frac{-\epsilon_2^2\mathbf{v}_j^\top H(\mathbf{x}_j)\mathbf{v}_j}{2L_2^2} - \frac{11\epsilon_2^3}{48L_2^2}\right).$
\end{lemma}
\setlength\floatsep{0.1\baselineskip plus 3pt minus 1pt}
\setlength\textfloatsep{0.1\baselineskip plus 1pt minus 1pt}
\setlength\intextsep{0.1\baselineskip plus 1pt minus 1 pt}
\begin{algorithm}[t]
\caption{The stochastic NCG step: $(\mathbf{x}^+, \mathbf{v}^\top H(\mathbf{x})\mathbf{v})=\text{NCG-S}(\mathbf{x}, \varepsilon, \delta,\epsilon_1, \epsilon_2)$}\label{alg:sgnc}
\textbf{Input}: $\mathbf{x}$, $\varepsilon$, $\delta$, $\epsilon_1, \epsilon_2$\;
let $\mathbf{g}(\mathbf{x})$ and $H(\mathbf{x})$ be a stochastic gradient and Hessian according to Lemma~\ref{lem:gc} and~\ref{lem:Hc}\;
Find a unit vector $\mathbf{v}$ such that $\lambda_{\min}(H(\mathbf{x}))\geq \mathbf{v}^{\top}H(\mathbf{x})\mathbf{v} - \varepsilon$
according to Lemma~\ref{lem:approxPCA}\;
\If{$-\frac{\epsilon_2^2}{2L_2^2}\mathbf{v}^\top H(\mathbf{x})\mathbf{v}-\frac{11\epsilon_2^3}{48L_2^2}>\frac{\|\mathbf{g}(\mathbf{x})\|^2}{4L_1} - \frac{\epsilon_1^2}{8L_1}$}{
Compute $\mathbf{x}^+ = \mathbf{x} - \frac{\epsilon_2}{L_2}\text{sign}(\mathbf{v}^{\top}\mathbf{g}(\mathbf{x}))\mathbf{v}$\;
}
\Else{
Compute $\mathbf{x}^+ = \mathbf{x} - \frac{1}{L_1}\mathbf{g}(\mathbf{x})$\;
}
return $\mathbf{x}^+, \mathbf{v}^\top H(\mathbf{x})\mathbf{v}$
\end{algorithm}
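The branch in Algorithm~\ref{alg:sgnc} can be sketched as follows, assuming the stochastic gradient $\mathbf{g}(\mathbf{x})$, the unit vector $\mathbf{v}$, and the Rayleigh quotient $\mathbf{v}^\top H(\mathbf{x})\mathbf{v}$ have already been formed; the numeric inputs are illustrative only.

```python
import numpy as np

def ncg_s_update(x, g, v, vHv, L1, L2, eps1, eps2):
    """One NCG-S update: follow the noisy negative curvature v or the
    stochastic gradient g, whichever promises the larger predicted decrease."""
    curv_gain = -eps2**2 / (2 * L2**2) * vHv - 11 * eps2**3 / (48 * L2**2)
    grad_gain = np.dot(g, g) / (4 * L1) - eps1**2 / (8 * L1)
    if curv_gain > grad_gain:
        x_new = x - (eps2 / L2) * np.sign(np.dot(v, g)) * v   # curvature step
    else:
        x_new = x - g / L1                                    # gradient step
    return x_new

# Near a saddle (tiny gradient, strong negative curvature): curvature step.
x = np.zeros(2)
g = np.array([1e-4, 1e-5])
v = np.array([0.0, 1.0])
x1 = ncg_s_update(x, g, v, vHv=-0.8, L1=1.0, L2=1.0, eps1=1e-3, eps2=0.1)

# Away from saddles (large gradient): plain stochastic gradient step.
g2 = np.array([1.0, 0.0])
x2 = ncg_s_update(x, g2, v, vHv=-0.01, L1=1.0, L2=1.0, eps1=1e-3, eps2=0.1)
```

In the first call the curvature term wins and the iterate moves a distance $\epsilon_2/L_2$ along $-\mathrm{sign}(\mathbf{v}^\top\mathbf{g})\mathbf{v}$; in the second, the ordinary step $-\mathbf{g}/L_1$ is taken.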
\section{The Proposed Algorithms: SNCG}
\vspace*{-0.1in}
In this section, we present two variants of the proposed algorithms based on the NCG-S step shown in Algorithm~\ref{alg:sgncA} and Algorithm~\ref{alg:SGSNCG}. The differences between the two variants are (i) SNCG-1 uses NCG-S at every iteration to update the solution, while SNCG-2 only uses NCG-S when the approximate gradient's norm is small; (ii) the noise level $\varepsilon$ for computing the noisy negative curvature (as in Lemma~\ref{lem:approxPCA}) in SNCG-1 is set to $\max(\epsilon_2, \|\mathbf{g}(\mathbf{x}_j)\|^\alpha)/2$, adaptive to the magnitude of the stochastic gradient, where $\alpha\in(0,1]$ is a parameter that characterizes $\epsilon_2 = \epsilon_1^\alpha$. In contrast, the noise level $\varepsilon$ in SNCG-2 is simply set to $\epsilon_2/2$. These differences lead to different time complexities of the two algorithms.
\setlength\floatsep{0.1\baselineskip plus 3pt minus 2pt}
\setlength\textfloatsep{0.1\baselineskip plus 1pt minus 2pt}
\setlength\intextsep{0.1\baselineskip plus 1pt minus 2 pt}
\begin{algorithm}[t]
\DontPrintSemicolon
\caption{SNCG-1: $(\mathbf{x}_0, \epsilon_1, \alpha, \delta)$}\label{alg:sgncA}
\textbf{Input}: $\mathbf{x}_0$, $\epsilon_1, \alpha$, $\delta$\;
Set $\mathbf{x}_1=\mathbf{x}_0$, $\epsilon_2 = \epsilon_1^\alpha$, $\delta' = \delta /(1+\max\left(\frac{48L_2^2}{\epsilon_2^3}, \frac{8L_1}{\epsilon_1^2}\right)\Delta)$\;
\For{$j=1,2,\ldots,$}{
$(\mathbf{x}_{j+1}, \mathbf{v}_j^\top H(\mathbf{x}_j)\mathbf{v}_j) = \text{NCG-S}(\mathbf{x}_j, \max(\epsilon_2, \|\mathbf{g}(\mathbf{x}_j)\|^\alpha)/2, \delta', \epsilon_1, \epsilon_2)$\;
\If{ $\mathbf{v}_j^{\top}H(\mathbf{x}_j)\mathbf{v}_j> -\epsilon_2/2$ and $\|\mathbf{g}(\mathbf{x}_j)\|\leq \epsilon_1$}
{return $\mathbf{x}_j$}
}
\end{algorithm}
\begin{algorithm}[t]
\DontPrintSemicolon
\caption{SNCG-2: $(\mathbf{x}_0, \epsilon_1, \delta)$}\label{alg:SGSNCG}
\textbf{Input}: $\mathbf{x}_0$, $\epsilon_1, \delta$\;
Set $\mathbf{x}_1=\mathbf{x}_0$, $\delta' = \delta /(1+\max\left(\frac{48L_2^2}{\epsilon_2^3}, \frac{8L_1}{\epsilon_1^2}\right)\Delta)$\;
\For{$j=1,2,\ldots,$}{
Compute $\mathbf{g}(\mathbf{x}_j)$ according to Lemma~\ref{lem:gc}\;
\If{$\|\mathbf{g}(\mathbf{x}_j)\|\geq\epsilon_1$}{
compute $\mathbf{x}_{j+1}=\mathbf{x}_j-\frac{1}{L_1}\mathbf{g}(\mathbf{x}_j)$// SG step\; }
\Else{
compute $(\mathbf{x}_{j+1},\mathbf{v}_j^\top H(\mathbf{x}_j)\mathbf{v}_j)=\text{NCG-S}(\mathbf{x}_j,\epsilon_2/2,\delta',\epsilon_1, \epsilon_2)$\;
\If{$\mathbf{v}_j^\top H(\mathbf{x}_j)\mathbf{v}_j>-\epsilon_2/2$}
{
return $\mathbf{x}_j$\;
}
}
}
\end{algorithm}
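The overall SNCG-2 loop can be sketched deterministically, with exact gradients and Hessians standing in for the mini-batch estimates of Lemmas~\ref{lem:gc}--\ref{lem:Hc} and an exact eigendecomposition standing in for the randomized PCA of Lemma~\ref{lem:approxPCA}; the toy objective $f(x,y)=\frac{1}{4}x^4-\frac{1}{2}x^2+\frac{1}{2}y^2$, with a saddle at the origin and minima at $(\pm 1,0)$, is an assumption for illustration.

```python
import numpy as np

def sncg2(grad, hess, x, L1, L2, eps1, eps2, max_iter=10_000):
    """Deterministic sketch of SNCG-2: SG steps while the gradient is large,
    NCG-S curvature steps near saddles, stop at a second-order stationary point."""
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) >= eps1:
            x = x - g / L1                       # SG step
            continue
        lam, V = np.linalg.eigh(hess(x))         # exact stand-in for Lemma 3
        v, vHv = V[:, 0], lam[0]
        if vHv > -eps2 / 2:
            return x                             # approximate second-order point
        s = np.sign(v @ g)
        s = s if s != 0 else 1.0                 # tie-break when v is orthogonal to g
        x = x - (eps2 / L2) * s * v              # NCG-S curvature step
    return x

# f(x, y) = 0.25 x^4 - 0.5 x^2 + 0.5 y^2: saddle at (0, 0), minima at (+-1, 0).
grad = lambda p: np.array([p[0] ** 3 - p[0], p[1]])
hess = lambda p: np.array([[3 * p[0] ** 2 - 1.0, 0.0], [0.0, 1.0]])
x_star = sncg2(grad, hess, np.zeros(2), L1=2.0, L2=6.0, eps1=1e-3, eps2=0.1)
```

Started exactly at the saddle, the gradient test fails, a single curvature step breaks the symmetry, and the subsequent SG steps converge to one of the two minima, where the loop detects $\lambda_{\min}>-\epsilon_2/2$ and returns.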
\begin{theorem}
\label{cor:SGSNCG}
Suppose Assumption~\ref{ass:1} holds,
$\epsilon_3\leq \epsilon_2/24$ and $\epsilon_4 \leq \min(\frac{1}{2\sqrt{2}}\epsilon_1, \epsilon_2^2/(24L_2))$.
With probability $1-\delta$, SNCG-1 terminates with at most $[1+\max\left(\frac{48L_2^2}{\epsilon_2^3},\frac{8L_1}{\epsilon_1^2}\right)\Delta]$ NCG-S steps,
and furthermore, each NCG-S step requires time in the order of $
\widetilde O\left(T_h|\mathcal S_2| + T_h|\mathcal S_2|^{3/4} \frac{\sqrt{L_1}}{\max(\epsilon_2, \|\mathbf{g}(\mathbf{x}_j)\|^\alpha)^{1/2}} + |\mathcal S_1|T_g\right)$;
SNCG-2 terminates with at most $\frac{8L_1}{\epsilon_1^2}\Delta$ SG steps and at most $(1+\frac{48L_2^2}{\epsilon_2^3})\Delta$ NCG-S steps, and each NCG-S step requires time in the order of $
\widetilde O\left(T_h|\mathcal S_2| + T_h|\mathcal S_2|^{3/4} \frac{\sqrt{L_1}}{\epsilon_2^{1/2}} + |\mathcal S_1|T_g\right)$.
Upon termination, with probability $1-3\delta$, both algorithms return a solution $\mathbf{x}_{j_*}$ such that $\|\nabla f(\mathbf{x}_{j_*})\|\leq2\epsilon_1 $ and $\lambda_{\text{min}}\left(\nabla^2f(\mathbf{x}_{j_*})\right)\geq -2\epsilon_2.$
\end{theorem}
\vspace*{-0.1in}
{\bf Remark:} To analyze the time complexity, we can plug in the order of $|\mathcal{S}_1|$ and $|\mathcal{S}_2|$ as in Lemma~\ref{lem:gc} and Lemma~\ref{lem:Hc}. It is not difficult to show that when $\epsilon_2=\sqrt{\epsilon_1}$, the worst-case time complexities of these two algorithms are given in Table~\ref{tab:2}, where the result marked by $^*$ corresponds to SNCG-1. However, this worst-case result is computed by simply bounding $T_h/\sqrt{\max(\epsilon_2, \|\mathbf{g}(\mathbf{x})\|^\alpha)}$ by $T_h/\sqrt{\epsilon_2}$. In practice, before reaching a saddle point (i.e., $\|\mathbf{g}(\mathbf{x}_j)\|\geq \epsilon_1$), the number of ISO calls for each NCG-S step in SNCG-1 can be less than that of each NCG-S step in SNCG-2. In addition, the NCG-S step in SNCG-1 can be faster than the SG step in SNCG-2 before reaching a saddle point. More importantly, the idea of competing between gradient descent and negative curvature descent and the adaptive noise parameter $\varepsilon$ for computing the noisy negative curvature can also be useful in other algorithms. For example, in~\cite{reddi2017generic} the Hessian descent (also known as negative curvature descent) can adopt the competing idea and use an adaptive noise level for computing a noisy negative curvature.
\section{Conclusion}
\vspace*{-0.1in}
In this paper, we have proposed new algorithms for stochastic non-convex optimization with a strong high-probability second-order convergence guarantee. To the best of our knowledge, the proposed stochastic algorithms are the first with a second-order convergence guarantee in {\it high probability} and a time complexity that is almost linear in the problem's dimensionality.
{
\bibliographystyle{abbrv}
\section{Introduction}\label{introduction}
For Poynting-dominated jets, where field lines tie the black hole to
large distances, the energy flux is determined by the Blandford-Znajek
(BZ) process \citep{bz77} (for a review see
\citealt{rw75,rees82,begelman84,bk00,punsly2001,mg04,lev05}).
The BZ effect depends on the magnetic field strength near the black
hole and the Kerr black hole spin parameter $a/M$, where $-1\le a/M\le
1$. Self-consistent production of a relativistic Poynting jet likely
requires a rotating black hole accreting a thick disk with a disk
height ($H$) to radius ($R$) ratio of $H/R\gtrsim 0.1$
\citep{ga97,lop99,meier2001}. As discussed below, a rapidly rotating
($a/M\sim 0.5-0.95$) black hole accreting a thick ($H/R\gtrsim 0.1$)
disk is probably common for jet systems.
However, prior estimates of the BZ power output assume the presence of
an infinitely thin ($H/R\sim 0$) disk, only apply for $a/M\sim 0$, and
do not self-consistently determine the magnetic field strength or
field geometry. The purpose of this paper is to provide useful
formulae for the total and jet Blandford-Znajek power for arbitrarily
rapidly rotating black holes accreting a realistic disk. We also
discuss the dominance of the BZ effect over other relativistic jet
mechanisms.
\section{Black Hole Accretion Systems}
The accretion of a thick disk around a rapidly rotating black hole is
often invoked as the engine to power GRBs
\citep{narayan1992,w93,pac98,mw99,narayan2001,brod04}. Typical GRB
models invoke a relatively thick ($H/R\sim 0.1-0.9$) disk
\citep{mw99,pwf99,kohri2005}. During the GRB event, the black hole
forms with $a/M\sim 0.5-0.75$ and evolves to a rapidly rotating state
with $a/M\sim 0.9-0.95$ \citep{narayan1992,mw99,shap02,shib02}. GRB
models based upon internal shocks require an ultrarelativistic jet
with a typical Lorentz factor of $\Gamma\sim 100-1000$ in order to
overcome the compactness problem \citep{ls01,piran2005}, while Compton
drag models require $\Gamma\sim 20-100$
\citep{ghis00,lazzati2004,brod04}. Direct observations of GRB
afterglow show evidence for relativistic motion
\citep{goodman1997,taylor2004a,taylor2004b}. Large Lorentz factors
require a relatively large jet energy flux, which could be BZ-driven
and Poynting-dominated rather than neutrino-annihilation-driven and
enthalpy-dominated
\citep{mr97,pwf99,dimat02,mckinney2005a,mckinney2005b}. Core-collapse
of a rapidly rotating star leads to an inner disk with a strong
uniform (perhaps net) poloidal field.
The accretion of a relatively thick ($H/R\sim 0.9$) disk around a
rapidly rotating black hole is probably the engine that powers jets
from AGN and some black hole x-ray binaries. Both radio-loud AGN and
x-ray binaries in the low-hard state show a correlation between radio
and x-ray emission, which is consistent with radio synchrotron
emission and hard x-ray emission generated from Comptonization through
a thick disk \citep{merloni2003}. This suggests the disk is
geometrically thick when a system produces a jet, where the disk is
probably ADAF-like with $H/R\sim 0.9$ \citep{ny95}.
Based on Soltan-type arguments, AGN each probably harbor a rapidly
rotating ($a/M\sim 0.9-0.95$) black hole
\citep{up95,erz02,gsm04,shapiro2005}. AGN are observed to have jets
with $\Gamma\lesssim 10$ \citep{up95,biretta99}, even $\Gamma\sim 30$
\citep{brs94,gc01,jorstad01}, while some observations imply
$\Gamma\lesssim 200$ \citep{ghis93,kraw02,kono03}. For example, the
jet in M87 shows a large-scale opening angle of $10^\circ$ with
$\Gamma\sim 6$ \citep{junor99,biretta2002}. AGN probably accrete a
uniform field from solar wind capture or the ISM
\citep{narayan2003,pu05}.
Black hole x-ray binaries might have $a/M\sim 0.5-0.95$ \citep{gsm04},
while some may have $a/M\lesssim 0.5$ \citep{gd04}. X-ray binary
systems produce outflows and jets \citep{mr99,mcclintock2003}. For
example, black hole x-ray binary GRS 1915+105 has a jet with
apparently superluminal motion with $\Gamma\sim 1.5-3$
\citep{mr94,mr99,fb04,kaiser04}. Solar-wind capture x-ray binaries
probably accrete a uniform field \citep{narayan2003}.
\section{The Blandford-Znajek Effect}\label{jets}
Most authors estimate the BZ power based upon the \citet{bz77} model
of a {\it slowly} spinning black hole threaded by a {\it
monopole}-based magnetic field and accreting an {\it infinitely thin
disk}, which gives
\begin{equation}\label{pbz}
P_{BZ,old} \approx P_0 \left(B^r[{\rm G}]\right)^2
\left(\Omega_H^2/c\right) r_g^4 ,
\end{equation}
where $B^r$ is the radial field strength, $r_g\equiv GM/c^2$,
$\Omega_H=ac/(2Mr_H)$ is the rotation frequency of the hole,
$r_H=r_g(1+\sqrt{1-(a/M)^2})$ is the radius of the horizon for angular
momentum $J=a GM/c$, and the dimensionless Kerr parameter has $-1\le
a/M\le 1$. The parameter $P_0=0.01$ -- $0.1$, where the uncertainty
in $P_0$ arises because the strength of the magnetic field is not
self-consistently determined (see, e.g.,
\citealt{mt82,tm82,membranebook}). Force-free numerical models agree
with the above BZ model \citep{kom01}. GRMHD numerical models of
slowly spinning, accreting black holes mostly agree with the BZ model
for the nearly force-free funnel region of the Poynting-dominated jet
\citep{mg04}.
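For reference, the classic slow-spin estimate above can be evaluated numerically. The following Python sketch is an illustration only: the CGS constants, the example $10^{15}\,$G field and $a/M=0.9$ spin, and the choice $P_0=0.05$ (the midpoint of the quoted $0.01$--$0.1$ range) are assumptions.

```python
import numpy as np

# CGS constants (assumed values)
G, c, Msun = 6.674e-8, 2.998e10, 1.989e33

def r_g(M):        return G * M / c**2                       # gravitational radius [cm]
def r_H(M, a):     return r_g(M) * (1 + np.sqrt(1 - a**2))   # horizon radius, a = a/M
def Omega_H(M, a): return a * c / (2 * r_H(M, a))            # hole rotation frequency [1/s]

def P_BZ_old(M, a, Br, P0=0.05):
    """Classic slow-spin BZ estimate: P ~ P0 (B^r)^2 (Omega_H^2 / c) r_g^4 [erg/s]."""
    return P0 * Br**2 * Omega_H(M, a) ** 2 / c * r_g(M) ** 4

# A 3 Msun hole with a/M = 0.9 threaded by a 10^15 G radial field.
P = P_BZ_old(3 * Msun, 0.9, 1e15)
```

Note that $\Omega_H$ vanishes for a non-rotating hole and $r_H\to 2r_g$ in that limit, so the estimate correctly gives zero BZ power at $a/M=0$.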
The force-free solution for the monopole BZ flux is $\propto
\sin^2{\theta}$ \citep{bz77}, but the accretion of a relatively thick
disk diminishes total BZ power output substantially \citep{mg04}.
This is because the electromagnetic energy accreted as a disk
dominates the energy extracted. Some black hole spin energy does
escape into the diffuse part of the corona, so the coronal outflow has
more Poynting flux for faster spinning holes
\citep{mg04,krolik05}. For rapidly rotating black holes, the field is
no longer monopolar and a significant amount of flux is generated
closer to the nearly force-free poles.
\begin{figure}
\includegraphics[width=3.33in,clip]{f1}
\caption{Top panel: total (triangles: data, dashed line: fit) and jet
(squares: data, solid line: fit) efficiency. Open points represent
negative efficiencies. Middle panel: coefficient ($\tilde{\eta}$)
least squares fit to $\eta$ formulae. Bottom panel: normalized field
(in Gauss) squared.}
\label{BZplot}
\end{figure}
HARM \citep{gmt03} was used to evolve a series of otherwise identical
GRMHD models with spin $a/M=-0.999 - 0.999$ and $H/R=0.1,0.2,0.5,0.9$
to determine the total BZ power ($P_{tot}$), jet BZ power ($P_{jet}$),
and field strength ($B^r$). A similar series of models were studied
in \citet{mg04}, and they give a description of the model setup,
limitations, and related results. The evolved field geometry is
relevant to most black hole systems and corresponds to a turbulent
disk field with a self-consistently generated large-scale flux
threading the black hole.
Figure~\ref{BZplot} shows the data and fits described below for
$H/R=0.2$. There is a weak dependence on $H/R\gtrsim 0.1$ for the
{\it jet} BZ power or efficiency since the thicker the disk, the less
solid angle available to the jet, but the field strength there is
larger in compensation. The non-jet results for other $H/R$ will be
presented in a separate paper.
The total (disk+corona+jet) BZ power efficiency in terms of the mass
accretion rate ($\dot{M}$) for $a/M>0.5$ is well fit by
\begin{equation}\label{EMTOTEFF}
\eta_{tot}=\frac{P_{tot}}{\dot{M}c^2} \approx 14.8\% \left(\frac{\Omega_H}{\Omega_H[a/M=1]}\right)^4 ,
\end{equation}
where the coefficient ($\tilde{\eta}$) is obtained by a least
squares fit. Net electromagnetic energy is {\it accreted} for
$a/M\lesssim 0.4$ (including retrograde) when an accretion disk is
present. This fact surprisingly agrees with the thin disk study of
\citet{li00}, who find that $\eta_{tot}>0$ only if $a/M\gtrsim
0.36$, corresponding to $\Omega_H>\Omega_K[{\rm ISCO}]$, the
Keplerian angular velocity at the inner-most stable circular orbit
(ISCO). The sparse spin study of \citet{krolik05} is in basic
agreement with our $\eta_{tot}$. Notice that the spin dependence of
our $\eta_{tot}$ is consistent with \citet{mg04} for their fit (equation
61) to the data shown in their figure 11, where they found
$\eta_{tot}\propto (2-r_H)^2\propto (a/M)^4$. Their fit coefficient
of $6.8\%$ was inaccurate for $a/M\approx 1$ since they had no model
beyond $a/M=0.97$ and they included points with $a/M\lesssim 0.5$
that do not fit well.
The nearly force-free jet region contains field lines that tie the
black hole (not the disk) to large distances, leading to the BZ-effect
and a jet. For $a/M> 0.5$,
\begin{equation}\label{EMJETEFF}
\eta_{jet}=\frac{P_{jet}}{\dot{M}c^2} \approx 6.8\% \left(\frac{\Omega_H}{\Omega_H[a/M=1]}\right)^5
\end{equation}
over both polar jets. If $a/M\approx 0.9$, then $\approx 1\%$ of the
accreted rest-mass energy is emitted back as a Poynting jet.
The horizon value of $B^r\equiv {^{^*}\!\!F}^{rt}$, where ${^{^*}\!\!F}$ is dual of the
Faraday, determines the black hole power output \citep{mg04}. For all
$a/M\ge 0$, the total and jet fields are
\begin{eqnarray}
\label{BRSQTOT}
\frac{(B^r_{tot}[{\rm G}])^2}{\rho_{0,disk}c^2} & \approx & 0.6+ 20\left(\frac{\Omega_H}{\Omega_H[a/M=1]}\right)^4,\\
\label{BRSQJET}
\frac{(B^r_{jet}[{\rm G}])^2}{\rho_{0,disk}c^2} & \approx & 1.0+ 81\left(\frac{\Omega_H}{\Omega_H[a/M=1]}\right)^5 ,
\end{eqnarray}
where the equipartition field satisfies $(B^r[{\rm
G}])^2/(8\pi)=\rho_{0,disk}c^2$ with $\rho_{0,disk}\equiv
\dot{M}t_g/r_g^3$ and $t_g\equiv GM/c^3$. Hence,
\begin{eqnarray}
\label{PTOT}
P_{tot} & \approx & 7.4\times 10^{-3}((B^r_{tot}[{\rm G}])^2 r_g^2 c - 0.6\dot{M}c^2)
,\\\nonumber\\
\label{PJET}
P_{jet} & \approx & 8.4\times 10^{-4}((B^r_{jet}[{\rm G}])^2 r_g^2 c - 1.0\dot{M}c^2) ,
\end{eqnarray}
where since the field is determined self-consistently, no explicit
spin dependence appears. This demonstrates the competition between
electromagnetic energy extraction and accretion. Notice that no
direct comparison can be cleanly made to equation~\ref{pbz} due to the
presence of two ambiguities $P_0$ and $B^r$, while our formulae have
no ambiguities. We suggest using the power output as given by
equations~\ref{EMTOTEFF} and~\ref{EMJETEFF}. The black hole mass
accretion rate must be determined independently of $a/M$ using a
model-dependent study.
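The fitting formulae above can be packaged as functions of the dimensionless spin alone, using $\Omega_H/\Omega_H[a/M{=}1]=(a/M)\,r_g/r_H=(a/M)/(1+\sqrt{1-(a/M)^2})$. The Python sketch below is an illustration; the CGS constants are assumed values.

```python
import numpy as np

c, Msun = 2.998e10, 1.989e33   # CGS constants (assumed values)

def omega_ratio(a):
    """Omega_H / Omega_H[a/M = 1] for dimensionless spin a = a/M."""
    return a / (1 + np.sqrt(1 - a**2))

def eta_tot(a):   # total BZ efficiency fit, valid for a/M > 0.5
    return 0.148 * omega_ratio(a) ** 4

def eta_jet(a):   # jet BZ efficiency fit (summed over both polar jets), a/M > 0.5
    return 0.068 * omega_ratio(a) ** 5

def P_jet(a, Mdot):
    """BZ jet power [erg/s] from the efficiency fit; Mdot in g/s."""
    return eta_jet(a) * Mdot * c**2

eff = eta_jet(0.9)              # ~0.0066 over both jets
```

For $a/M=0.9$ this gives $\eta_{jet}\approx 0.7\%$ and, for $\dot M=0.1{\rm M_{\odot}}\,{\rm s}^{-1}$, a jet power of order $10^{51}{\rm\,erg~s^{-1}}$, consistent with the collapsar estimate below.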
The coefficient in the formulae above depends on the type of accreted
field geometry. As an extreme example, the accretion of a net
vertical field leads to an increase in the net electromagnetic
efficiency by a factor of five \citep{mg04}. Also, the accretion of a
net toroidal field leads to negligible energy extraction \citep{dv05}.
Future studies should focus on the physical relevance, stability, and
long-temporal evolution of accreting net toroidal and vertical fields
with realistic perturbations rather than exact symmetries. Accretion
of a net vertical field has been studied with nonrelativistic MHD
simulations \citep{igumenshchev2003}.
\section{BZ Jet Power for Collapsar Model}
For the collapsar model with black hole mass $M\sim 3{\rm M_{\odot}}$ feeding at
an accretion rate of $\dot{M}=0.1{\rm M_{\odot}}/s$, the magnetic field at the
poles of an $a/M\sim 0.9$ black hole is $B^r_{jet}\approx 10^{16}{\rm
G}$ and $\rho_{0,disk}\approx 3.4\times 10^{10}{\rm g}{\rm\,cm}^{-3}$. This
gives a per polar axis Poynting flux jet energy of $P_{jet}\approx
10^{51}{\rm\,erg~s^{-1}}$. Notice that the neutrino annihilation jet luminosity
for such a collapsar model gives $L_{\nu\bar{\nu},ann,jet}\sim 10^{50}
- 10^{51}{\rm\,erg~s^{-1}}$ \citep{pwf99}, so these processes are likely both
important. However, realistic models suggest that Poynting flux
dominates neutrino-annihilation energy flux
\citep{mckinney2005a,mckinney2005b}. Similar estimates can be made for
AGN and x-ray binaries.
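The collapsar numbers quoted above follow directly from the $B^r_{jet}$ and $P_{jet}$ fits of the previous section; the short Python check below reproduces them (the CGS constants are assumed values).

```python
import numpy as np

G, c, Msun = 6.674e-8, 2.998e10, 1.989e33   # CGS constants (assumed values)

M, Mdot, a = 3 * Msun, 0.1 * Msun, 0.9      # collapsar: Mdot = 0.1 Msun per second
rg = G * M / c**2                            # gravitational radius [cm]
tg = rg / c                                  # gravitational time [s]
rho_disk = Mdot * tg / rg**3                 # fiducial disk density [g/cm^3]

ratio = a / (1 + np.sqrt(1 - a**2))          # Omega_H / Omega_H[a/M = 1]
Br_jet = np.sqrt(rho_disk * c**2 * (1.0 + 81 * ratio**5))        # jet field [G]
P_jet = 8.4e-4 * (Br_jet**2 * rg**2 * c - 1.0 * Mdot * c**2)     # jet power [erg/s]
```

The computed values, $\rho_{0,disk}\approx 3.4\times10^{10}{\rm g\,cm^{-3}}$, $B^r_{jet}\approx 1.6\times10^{16}$ G, and $P_{jet}\approx 1.2\times10^{51}{\rm\,erg~s^{-1}}$, match the estimates in the text.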
\section{Dominance of Blandford-Znajek Effect}
\begin{figure}
\includegraphics[width=3.33in,clip]{f2}
\caption{Field type \#9 dominates in GRMHD numerical models and is
associated with the Blandford-Znajek effect. Types \#1, 2, 3, 5, and 6 are
dynamically important. Type \#4 is transient. Types \#7 and 8 are not
dynamically stable.}
\label{fieldtypes}
\end{figure}
Figure~\ref{fieldtypes} shows the possible types of field geometries
in the disk (see also, e.g., \citealt{blandford02,hirose04}). Field
type \#1 corresponds to the Balbus-Hawley instability \citep{bh91},
which is present in our simulations.
Field type \#2 corresponds to models for which the field ties material
inside the ISCO to the outer disk \citep{gam99,krolik99}. As they
predicted, and unlike in the $\alpha$-viscosity model, there is neither a feature
at the ISCO nor a direct plunge into the black hole
\citep{mg04,krolik05}, which impacts any radiative model of the
inner-radial accretion disk. Field type \#3 corresponds to models
that consider the role of the black hole and disk on the disk
efficiency \citep{gam99,krolik99}. They suggested that efficiencies
of order unity or higher could be achieved by extracting energy from
the hole. This field type is present, but the disk efficiency is near
the thin disk efficiency \citep{mg04}. Thus, surprisingly, the
magnetic field and disk thickness play little role in modifying the
disk efficiency. In contrast, the angular momentum accreted is
reduced by magnetic field lines that tie from the disk to the hole
\citep{mg04,krolik05}.
Field types \#4 and \#5 correspond to surface reconnections. Type \#4
geometries are temporary and type \#5 are common. Thus, reconnection
efficiently removes large loops that tie the disk to itself. Field
type \#6 corresponds to the Blandford-Payne type model \citep{bp82}.
We find that the lab-frame $|B|\propto r^{-5/4}$ as in their model,
but the lab-frame $\rho\propto r^{0.0}$ instead of their $\rho\propto
r^{-3/2}$; moreover, few such field lines are present in a stable
configuration because the inner-radial corona is convectively
unstable and unstable to magnetic buoyancy. Thin disks
likely have more stable surfaces that might allow for a stable wind.
Field type \#7 corresponds to coronal outflows or ergospheric-driven
winds \citep{pc90a,pc90b}. There are no dynamically stable field
lines that tie the inner-radial disk to large distances. Even for
$a/M=0.999$, no additional Poynting flux is created in the ergosphere
and the electromagnetic energy at infinity completely dominates the
hydrodynamic energy at infinity associated with the MHD Penrose
process, in basic agreement with the results of \citet{kom05} and
counter to the results of \citet{koide2002,punsly2005}.
\citet{koide2002} evolved for much too short a time.
\citet{punsly2005} used the 3D GRMHD near-horizon results of
\citet{krolik05}, but their near-horizon results could have numerical
artifacts associated with their use of Boyer-Lindquist coordinates.
However, there is a convectively and magnetically unstable,
self-consistently mass-loaded, collimated, mildly relativistic
($v/c\lesssim 0.95$) coronal outflow \citep{mg04,dv05}. A rotating
black hole is not required for nonrelativistic ($v/c\lesssim 0.6$)
coronal outflows \citep{mg02,mg04}.
Field type \#8 corresponds to \citet{uzdensky2005} type models. These
field geometries appear rarely and do not transfer a significant
amount of energy or angular momentum. Such geometries may be more
important for thin disks.
Field type \#9, the dominant feature, is associated with the
Blandford-Znajek model. Since the magnetic field confines the disk
matter away from the polar region, the rest-mass flux there is
arbitrarily low. The large BZ flux to low rest-mass flux ratio can
translate into an arbitrarily fast jet. The mass-loading of this jet
is considered in \citet{mckinney2005a,mckinney2005b}.
Notice that for accretion models with a net vertical field, the
resulting structure is essentially identical \citep{mg04}.
Reconnection efficiently erases the initial geometrical differences.
\section{Conclusions}
Typical BZ power output estimates assume an infinitely thin disk, a
slowly rotating black hole, and do not self-consistently determine the
magnitude or geometry of the magnetic field. We use GRMHD numerical
models to self-consistently determine the total and jet BZ efficiency
when a disk is present. There is a significantly stronger dependence
on black hole spin than prior estimates suggest.
Near the rotating black hole, the field geometry of the accretion
system is dominated by the field that leads to the BZ-effect. Since
the polar region is magnetically confined against disk material and
the BZ power is large, the jet Lorentz factor can be arbitrarily
large. This is unlike disk-related jet mechanisms that are directly
loaded by disk material.
\section*{Acknowledgments}
This research was supported by NASA-ATP grant NAG-10780 and a Harvard
CfA ITC fellowship.
\section{Introduction}
{A change-point model is a mixture-type model used to infer changes in a time series subject to random shifts in its characteristics.} This means that the data can be broken down into segments, and each segment follows a statistical model with different parameters. The time when a segment ends is called a \emph{change point} and the segment is often referred to as a \emph{regime} or \emph{state}. Inference based on change-point models focuses on two major issues: i) the estimation of the number and locations of the change points; ii) the choice of the best statistical model for each segment.
The change-point literature, starting from \cite{Quandt1958} and \cite{chernoff1964}, is by now fairly extensive in both frequentist \citep{BHATTACHARYA1987183,Hawkins2001} and Bayesian framework \citep{Carlin1992,Giordani2008,Chaturvedi2015}.
In the former, model estimation can be difficult since the likelihood function rapidly becomes intractable as the number of change points increases \citep[for a discussion see][]{elliott2003}. On the other hand, in the more recently developed Bayesian models, the estimation procedures, generally based on Markov chain Monte Carlo (MCMC) algorithms, are always feasible, which has drawn attention to this modelling approach. Among the existing Bayesian models, the most commonly used is the one proposed by \cite{Chib1998} \cite[see for example][]{Pastor2001,Chang2004,koop2004,ko2015}.
In \cite{Chib1998} a time series is modelled introducing a latent realization of a discrete time series, that denotes the regime membership, with temporal evolution ruled by a first order Markov process. The change-point model is then obtained assuming a transition matrix constrained so that regimes are visited in a non-reversible sequence; the model can then be seen as a constrained hidden Markov model (HMM) \citep[for an extensive introduction on the HMM, see][]{zucchini2009b}.
In \cite{Chib1998}, the Bayes factor is used to assess the number of segments through an off-line procedure.
Information criteria, such as the Bayes factor, AIC and BIC, have been criticized \citep{Dziak} since they often suggest different models and it is not always clear which one is the most trustworthy. In a Bayesian setting, we can replace the information criteria with a fully probabilistic on-line model choice, which can be based on the reversible jump \citep{GREEN1995} or the Dirichlet process (DP) \citep{Ferguson1973}.\\
The reversible-jump is a Markov chain Monte Carlo (MCMC) algorithm that
simulates from posterior distributions defined on spaces of varying dimensions
and it can be used to perform model choice. Its implementation requires a mapping function between model parameters that is not always straightforward to define and it has a great impact on the ability of the MCMC to explore the target distribution \citep{Brooks2003}.
On the other hand, the DP can be used as a prior for an infinite set of parameters, it allows
model choice to be performed in mixture-based models \citep{teh2010} and, generally, it leads to MCMC algorithms that are straightforward to implement.
In this work we propose a semi-parametric extension of \cite{Chib1998} based on the DP, which addresses issue i) in a fully probabilistic setting, allowing an on-line model choice, while the second issue is left to future developments and is considered outside the scope of this work.
{Prior to this work,} \cite{Kozumi2000} and \cite{ko2015} proposed DP-based extensions of \cite{Chib1998}. Both of them have flaws that make their use problematic.
In \cite{Kozumi2000}, as also noted by \cite{ko2015}, {no temporal evolution in the latent allocation dynamics is considered, a regime can always be revisited and the model reduces to a mixture. In \cite{ko2015} there is no clear and rigorous formalization of the underlying DP, some full conditionals are computed incorrectly and the proposed MCMC algorithm updates the latent allocations in a way that easily leads to the identification of the wrong number of regimes (more details on these issues are given in the Appendix).} The model of \cite{ko2015} is close to our proposal and therefore, together with that of \cite{Chib1998}, it is considered our main competitor.
In this paper we explain how to use the DP to build a semi-parametric extension of \cite{Chib1998}, giving a rigorous formalization of the entire procedure. Semi-parametric HMMs based on the DP have been previously proposed, see for example \cite{Teh2006} and \cite{fox2011}, but here, due to the peculiar transition matrix, those approaches cannot be used. We propose to use the DP to obtain countably infinite distributions, each one with only two possible outcomes and with outcome probabilities related to the stick-breaking weights \citep{sethuraman:stick}.
This approach {allows us} to treat the number of segments as random and to estimate it during model fitting. {Our specification of the model induces issues in the regime labeling that are solved by using a collapsed Gibbs sampler \citep{Liu1994} that marginalizes over the DP weights; the sampling algorithm is partially based on a birth-and-death MCMC.}
Our proposal is applied to simulated datasets and two real ones.
The former are used to show how the proposed MCMC
algorithm is able to recover the model parameters and the number and positions of the change points. Our results are compared with those
of \cite{Chib1998} and \cite{ko2015}, and we show that a great improvement in terms of change-point identification is achieved. The models are then applied to one of the most widely used test datasets in change-point studies, i.e. the coal-mining disasters data \cite[see for example][]{JARRETT1979,Carlin1992}. The results we obtain are consistent with those of \cite{Chib1998}, but, under our model, we are able to give a measure of uncertainty on the number of latent change points. In the last example a time series of Italian indoor radon measurements \citep{Nicol2015} is analyzed.
{Radon emissions are characterized by a non-stationary temporal pattern with periodic components \citep{Baykut2010} at different time scales \citep{barbosa2010} and by
changes in mean, variability and trend.
Radon concentration is considered a possible earthquake precursor \citep{Woith2015} since it has been observed that, prior to strong earthquakes, abrupt changes in the time series characteristics occur. The segmentation of radon data is a first step towards understanding its connection with geodynamic activity.
To the best of our knowledge, no model-based method to segment a radon time series has been proposed in the literature, while, for example,
wavelet transformations \citep{Barbosa2007} and
testing procedures \citep{barbosa2010} have been exploited.
We show that our model identifies reasonable change points, with sojourn times in a regime of about a day, as also observed in previous studies \citep[see for example][]{barbosa2010}, proving that
change-point models can be used to infer changes in a radon time series. }
%
The paper is organized as follows. In Section \ref{sec:DP} we introduce the DP. In Section \ref{sec:chib} we formalize the model of Chib and in Section \ref{sec:model} we show our proposal. The MCMC algorithm is shown in Section \ref{sec:mcmc} while Section \ref{sec:ex} contains the simulated and real data examples. The paper ends with a discussion in Section \ref{sec:disc}. In the Appendix
we
{highlight} what we believe are the problematic aspects and unclear points of the model and MCMC implementation proposed by \cite{ko2015}.
\section{The semi-parametric change-point model} \label{sec:cp}
Before the model specification, we introduce the DP.
\subsection{The Dirichlet process} \label{sec:DP}
The DP is a stochastic process defined over a measurable space $(\Theta,\mathcal{B})$ \citep{Ferguson1973} and it is a random probability
measure on a space of distribution functions, i.e. a draw from a DP is a random discrete distribution. It
depends on a \emph{scaling parameter} $\beta>0$ and a \emph{base distribution} $H$ over $\Theta$; the density of $H$ will be indicated with $h(\cdot)$. By definition $G$ is DP distributed with parameters $(\beta,H)$, i.e. $G|\beta,H \sim DP(\beta,H) $, if for any finite partition $\{A_k\}_{k =1}^K$ of $\Theta$ such that $\cup_{k=1}^K A_k \equiv \Theta$ and $A_k \cap A_{k^{\prime}}= \emptyset$ if $ k\neq k^{\prime}$,
we have
\begin{align} \label{eq:g}
&(G(A_1),G(A_2), \dots G(A_K))^{\prime}|\beta,H \sim \\ &\phantom{s}Dir(\beta H(A_1), \beta H(A_2),\dots , \beta H(A_K)),
\end{align}
where $Dir(\cdot,\cdot, \dots,\cdot)$ indicates the Dirichlet distribution.
Since
\begin{align} \label{eq:beta}
&(G(A),1-G(A))^{\prime}|\beta,H \sim\\& \phantom{s} Dir(\beta H(A), \beta(1-H(A)) )\equiv B(\beta H(A), \beta(1-H(A))),
\end{align}
where $B(\cdot,\cdot)$ is the beta distribution, mean and variance of $G(A)$ can be easily computed:
\begin{equation} \label{eq:meanvar}
E(G(A)) = H(A) , \quad Var(G(A))= \frac{H(A)(1-H(A))}{\beta+1}.
\end{equation}
From \eqref{eq:meanvar} we see that
$H$ is the expected shape of $G$ while $\beta$ controls the degree of variability.
\cite{sethuraman:stick} gives an explicit representation of $G$, called the \emph{stick-breaking process} or \emph{stick-breaking representation}: if
\begin{equation} \label{eq:discG}
G= \sum_{k \in \mathbb{N}} \tau_k \delta_{\boldsymbol{\theta}_k},
\end{equation}
is DP distributed, then
\begin{align}
\pi_{k} &\sim B(1, \beta),\label{eq:piw}\\
\tau_k& = \pi_{k}\prod_{l=1}^{k-1}(1-\pi_l)\label{eq:pi},\\
\boldsymbol{\theta}_k &\sim H \label{eq:piH},
\end{align}
where $\delta_{\cdot}$ is a point mass function, $\{\tau_k\}_{k \in \mathbb{N}}$ is the set of weights and $\{\boldsymbol{\theta}_k\}_{k \in \mathbb{N}}$ the set of \emph{atoms} of the DP.
Notice that $\tau_k>0$, $\sum_{k \in \mathbb{N}}\tau_k = 1 $ and $G$ is then a discrete distribution. Sets
$\{\tau_k\}_{k \in \mathbb{N}}$ and $\{\pi_k\}_{k \in \mathbb{N}}$ are often written as $\{\tau_{\boldsymbol{\theta}_k}\}_{k \in \mathbb{N}}$ and $\{\pi_{\boldsymbol{\theta}_k}\}_{k \in \mathbb{N}}$ to stress their connection with the DP atoms $\{\boldsymbol{\theta}_k\}_{k \in \mathbb{N}}$.
For computational purposes \cite[see for example][]{neal2000,gelfand2005} a draw from a DP is frequently parametrized using $\{\tau_k, \boldsymbol{\theta}_k\}_{k=1}^{\infty}$.
%
The discrete nature of $G$, with its countably infinite atoms and weights, makes the DP convenient for building semi-parametric extensions of mixture-based models, where the atom--weight couples are potential sets of parameters (the atoms) and mixture probabilities (the weights);
details can be found in \cite{antoniak1974}, \cite{Mac1998}, \cite{teh2010} or \cite{fox2011}.
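As a concrete illustration of equations \eqref{eq:piw}--\eqref{eq:piH}, the following Python sketch draws a truncated approximation of $G$; the truncation level and the standard normal base distribution $H$ are arbitrary choices made only for the example.

```python
import random

def stick_breaking(beta, base_sampler, trunc=100):
    """Truncated stick-breaking draw from DP(beta, H).

    The truncation level is a computational device: the true G has
    countably infinitely many atoms.
    """
    taus, atoms, remaining = [], [], 1.0
    for _ in range(trunc):
        pi_k = random.betavariate(1.0, beta)   # pi_k ~ B(1, beta)
        taus.append(pi_k * remaining)          # tau_k = pi_k * prod_{l<k}(1 - pi_l)
        remaining *= 1.0 - pi_k
        atoms.append(base_sampler())           # theta_k ~ H
    return taus, atoms

# standard normal base distribution H: an illustrative choice
taus, atoms = stick_breaking(beta=2.0, base_sampler=lambda: random.gauss(0.0, 1.0))
```

Larger values of $\beta$ spread the mass over more atoms, matching the interpretation of $\beta$ as controlling the variability of $G$ around $H$.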
\subsection{The model of \cite{Chib1998}} \label{sec:chib}
{
In this section we introduce the hierarchical model of \cite{Chib1998}.
Let $\mathbf{y}=\{y_t\}_{t=1}^T$ be an observed time series. At the first level the conditional density of $y_{t}|\{y_j\}_{j=1}^{t-1}$\footnote{We assume $\{ y_j \}_{j=1}^0 \equiv \{ \emptyset \}$.} is assumed to depend on a vector of parameters $\boldsymbol{\theta}_{s_t} \in \Theta$, indexed by a discrete latent random variable $s_t \in \{1,2,\dots,K^*\}$ that indicates the regime membership, i.e. if $s_t=k$ then $y_t$ belongs to the $k^{th}$ regime; notice that $\boldsymbol{\theta}_{s_t} \equiv \boldsymbol{\theta}_{k}$ if $s_{t}=k$.
At the second level $\{s_t\}_{t=1}^T$ is a Markov process, with starting point $s_1=1$, ruled by a $K^* \times K^*$ constrained \emph{one-step ahead} transition matrix
\begin{equation}
P= \left(
\begin{array}{ccccccc}
\pi_{1} & 1-\pi_1 & 0 & 0 & \cdots & 0 \\
0 & \pi_2 & 1-\pi_2 &0& \cdots & 0\\
0 & 0 & \pi_3 & 1-\pi_3 & \cdots & 0\\
0 & 0 & 0 & \pi_4 & \cdots & 0 \\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots\\
0 & 0 & 0 & 0 & \cdots & 1
\end{array}
\right).\label{eq:tra}
\end{equation}
Since the lower-diagonal elements of $P$ are zero, a regime, once left, cannot be visited again.}
Let $f(\cdot)$ indicate a density function and $I(\cdot,\cdot)$ the indicator function; we can then write
\begin{align}
&f(s_t|s_{t-1}=k, \{ \pi_k \}_{k=1 }^{K^*} ) =\\ & \phantom{s} \pi_k I(s_t,k)+(1-\pi_k) I(s_t,k+1),\, t=2,\dots , T,\\
&s_1=1,
\end{align}
where it is assumed that $s_t \in\{k,k+1\}$ if $s_{t-1}=k$.
\cite{Chib1998} assumes beta distributions with the same set of parameters for all the elements of $P$ and then, letting $H$ be a prior distribution, the model can be written as
\begin{align}
&f(\mathbf{y}|\{\boldsymbol{\theta}_k \}_{k=1}^{K^*},\{s_t\}_{t=1}^T) = \prod_{t=1}^T \prod_{k =1}^{K^*} f(y_t|\{ y_j \}_{j=1}^{t-1},\boldsymbol{\theta}_{k})^{I(s_t,k)},\\
& f(s_t|s_{t-1}=k, \{ \pi_k \}_{k=1 }^{K^*} ) = \\
&\phantom{s}\pi_k I(s_t,k)+(1-\pi_k) I(s_t,k+1), \, t=2,\dots ,T, \label{eq:qqq}\\
& s_1=1,\label{eq:qqq1}\\
& \pi_{k} \sim B(\alpha,\beta),\, k=1,\dots K^*,\\
& \boldsymbol{\theta}_k \sim H,\, k=1,\dots K^*.
\end{align}
The model described above can be seen as an HMM with constrained transition matrix.
The number of
rows of $P$, which is equal to the number of regimes, must be set \emph{a priori} (see equation \eqref{eq:tra}) and an off-line procedure is needed to assess the value of $K^*$. With our proposal we extend the model of \cite{Chib1998}, allowing an on-line model choice.
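The generative mechanism of the model can be made concrete with a short simulation. The Python sketch below draws $(y_t, s_t)$ with Gaussian emissions $y_t|s_t=k \sim N(\mu_k,\sigma_k^2)$; the regime parameters and self-transition probabilities are illustrative values, not quantities estimated in the paper.

```python
import random

def simulate_chib(T, thetas, pis):
    """Simulate (y_t, s_t) from the change-point model of Chib (1998).

    thetas: (mu_k, sigma_k) per regime; pis: self-transition probabilities
    pi_k (the last regime is absorbing). All values here are illustrative.
    """
    s, y = [1], []
    for t in range(T):
        k = s[-1]
        y.append(random.gauss(*thetas[k - 1]))          # y_t | s_t = k
        if t < T - 1:
            if k < len(thetas) and random.random() > pis[k - 1]:
                s.append(k + 1)                         # move to regime k+1
            else:
                s.append(k)                             # stay in regime k
    return y, s

# three Gaussian regimes with shifting means (illustrative parameters)
y, s = simulate_chib(300, thetas=[(0.0, 1.0), (5.0, 1.0), (-3.0, 1.0)],
                     pis=[0.99, 0.99, 1.0])
```

By construction the simulated path $s_t$ is non-decreasing, reflecting the constrained transition matrix \eqref{eq:tra}.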
\subsection{The semi-parametric extension} \label{sec:model}
Our extension starts with the introduction of an equivalent specification of the model of Chib that is obtained by substituting the latent process $\{s_t\}_{t=1}^T $
with $\{\boldsymbol{\psi}_t \in \Theta\}_{t=1}^T$, assuming the following time evolution:
\begin{align}
&f( \boldsymbol{\psi}_t|\boldsymbol{\psi}_{t-1}= \boldsymbol{\theta}_k, \{ \pi_k \}_{k=1 }^{K^*} ) = \\ &\phantom{s} \pi_k I(\boldsymbol{\psi}_t,\boldsymbol{\theta}_k)+(1-\pi_k) I(\boldsymbol{\psi}_t,\boldsymbol{\theta}_{k+1}), \, t=2,\dots , T,\label{eq:sss}\\
& \boldsymbol{\psi}_1 = \boldsymbol{\theta}_1,\label{eq:sss1}
\end{align}
assuming $\boldsymbol{\psi}_t \in \{\boldsymbol{\theta}_k, \boldsymbol{\theta}_{k+1} \}$ if $\boldsymbol{\psi}_{t-1}= \boldsymbol{\theta}_k$.
Notice that equations \eqref{eq:qqq} and \eqref{eq:qqq1} are equivalent to equations \eqref{eq:sss} and \eqref{eq:sss1} since $f(\boldsymbol{\psi}_t=\boldsymbol{\theta}_k|\boldsymbol{\psi}_{t-1}=\boldsymbol{\theta}_{k^{\prime}}) = f(s_t=k|s_{t-1}=k^{\prime})$ and
$\boldsymbol{\psi}_t=\boldsymbol{\theta}_k$ if and only if $s_t=k$.
{Semi-parametric extensions of mixture-based models are generally defined by taking $K^*\rightarrow \infty $ and assuming a DP-based prior for the probability structure of $\{\boldsymbol{\psi}_t\}_{t=1}^T$ \citep[see for example][]{Escobar1995,Teh2006,Johnson2013}.
Here we propose the following. First notice that each row of $P_{\theta}$, the transition matrix in the $\boldsymbol{\psi}$-parametrization, sums to 1, i.e. it is a vector of probabilities with only two non-zero values.} We assume $G= \sum_{k=1}^{\infty}\tau_k \delta_{\theta_k} \sim DP(\beta,H)$ and
we define distributions $G_{\theta_k}$'s, with $k \in \mathbb{N}$, as follows:
\begin{equation} \label{eq:Gthetaw}
G_{\theta_{k}} = \frac{\tau_{k}}{1-\sum_{l=1}^{k-1}\tau_{l}}\delta_{\theta_k}+\left( 1-\frac{\tau_{k}}{1-\sum_{l=1}^{k-1}\tau_{l}} \right)\delta_{\theta_{k+1}},\, k \in \mathbb{N}.
\end{equation}
{In our model $G_{\theta_k}$ is used as the distribution of the $k^{th}$ row of $P_{\theta}$. Notice that $\frac{\tau_{k}}{1-\sum_{l=1}^{k-1}\tau_l}$ is equal to the beta-distributed weight $\pi_k$ (see equation \eqref{eq:pi}), and then \eqref{eq:Gthetaw} can be written equivalently as}
\begin{equation} \label{eq:Gtheta}
G_{\theta_{k}} = \pi_{k}\delta_{\theta_k}+(1-\pi_{k})\delta_{\theta_{k+1}},\, k \in \mathbb{N} ,
\end{equation}
where by definition, see Section \ref{sec:DP},
\begin{align}
\pi_k & \sim B(1,\beta), \label{eq:qw}\\
\theta_k & \sim H. \label{eq:qw2}
\end{align}
We have then distributions based on the DP, one for each row of the infinite-dimensional transition matrix $P_{\theta}$. Notice that the atoms in the regimes are tied by construction, i.e. the atom of $\left[P_{\theta}\right]_{i,i+1}$ is equal to the one of $\left[P_{\theta}\right]_{i+1,i+1}$. We can then write
\begin{align}
&\boldsymbol{\psi}_t|\boldsymbol{\psi}_{t-1}= \theta_k, \{\boldsymbol{\theta}_k, \pi_k\}_{k \in \mathbb{N}} \sim G_{\theta_{k}} \label{eq:dd}
, \, t=2,\dots ,T,\\
& \boldsymbol{\psi}_1 = \boldsymbol{\theta}_1.
\end{align}
{ The model is then }
%
%
\begin{align}
& f(\mathbf{y}|\{\boldsymbol{\psi}_t \}_{t=1}^T) = \prod_{t=1}^T f(y_t|\{ y_j \}_{j=1}^{t-1},\boldsymbol{\psi}_{t}),\\
& \boldsymbol{\psi}_t|\boldsymbol{\psi}_{t-1}= \boldsymbol{\theta}_k, \{\boldsymbol{\theta}_k, \pi_k\}_{k \in \mathbb{N}} \sim G_{\theta_{k}}
,\, t=2,\dots ,T,\\
& \boldsymbol{\psi}_1 = \boldsymbol{\theta}_1,\\
& \pi_{k}|\beta \sim B(1, \beta), \, k \in \mathbb{N} ,\\
& \boldsymbol{\theta}_k|H \sim H, \, k \in \mathbb{N},
\end{align}
or, introducing the discrete time series $\{s_t\}_{t=1}^T$, it can be equivalently stated as
\begin{align}
& f(\mathbf{y}|\{\boldsymbol{\theta}_k \}_{k\in \mathbb{N}},\{s_t\}_{t=1}^T) = \prod_{t=1}^T \prod_{k \in \mathbb{N}} f(y_t|\{ y_j \}_{j=1}^{t-1},\boldsymbol{\theta}_{k})^{I(s_t,k)},\\
& f( s_t|s_{t-1}=k, \{ \pi_k \}_{k \in \mathbb{N}} ) = \\ & \phantom{s} \pi_k I(s_t,k)+(1-\pi_k) I(s_t,k+1), \, t=2,\dots ,T,\\
& s_1=1,\\
& \pi_{k}|\beta \sim B(1, \beta),\, k \in \mathbb{N},\\
& \boldsymbol{\theta}_k|H \sim H,\, k \in \mathbb{N}.
\end{align}
This model is an infinite-dimensional extension of the one shown at the end of Section \ref{sec:chib}.\\
As in standard DP-based mixture models, the number $K$ of unique values that $\boldsymbol{\psi}$ (or $s$) assumes is used as an estimate of the number of segments of the observed time series.
Notice that $H$ acts as the prior distribution of $\boldsymbol{\theta}_k$.
\section{The MCMC algorithm} \label{sec:mcmc}
From equation \eqref{eq:Gtheta} and matrix $P_{\theta}$ we see that regimes are visited in increasing order, i.e. after regime $k$, regime $k+1$ is visited, and this can produce an
inefficient MCMC algorithm.
Then,
to avoid the problem, we marginalize over the vector of DP weights. This strategy is often adopted \citep[see for example][]{neal2000,Teh2006,Blei2010} since the resulting process defines a prior over a partition of the data that no longer depends on the labels.
Let $n_{i}^{j:j^{\prime}}= \sum_{t=j}^{j^{\prime}-1} I(s_t,i)I(s_{t+1},i)$, that is, the number of self-transitions in the $i^{th}$ regime between times $j$ and $j^{\prime}$. After marginalization {we obtain the following dynamics for } $s_t$:
\begin{align}
&f(s_{t}=i|s_{t-1}=k,s_{t-2},\dots , s_{1},\beta)= \\
& \phantom{} \begin{cases}
\frac{n_{k}^{1:(t-1)}+1}{n_{k}^{1:(t-1)}+1+\beta} & \mbox{if } i=k,\\
\frac{\beta}{n_{k}^{1:(t-1)}+1+\beta} & \mbox{if }
i=k+1,\\
\end{cases}t=2,\dots , T,
\label{eq:s_t}\\
& s_1 =1.
\end{align}
We want to remark that regimes are now visited in increasing order only to simplify the notation, but any relabeling of the regimes is equivalent. \\
{The conditional distribution of $s_{t}$ depends on the count $n_{k}^{1:(t-1)}$ and on parameter $\beta$, and the process $s_t$ is no longer Markovian.}
{The probability of $s_t=k|s_{t-1}=k,s_{t-2},\dots , s_{1},\beta$, i.e. that $s_t$ takes the same value as $s_{t-1}$, increases with $n_{k}^{1:(t-1)}$, meaning that if at time $t$ an observation is allocated to the previously observed regime $k$, then at time $t+1$ the probability of belonging to the same regime increases; i.e. the process has the \emph{self-reinforcement} property \citep{pemantle2007}. Parameter $\beta$ can be interpreted by noticing that, when there is only one observation in the $k^{th}$ regime, i.e. $n_{k}^{1:(t-1)}=0$, the odds of moving to a new regime at time $t+1$ are $\beta$.}
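The marginalized transition rule \eqref{eq:s_t} depends on the latent path only through the self-transition count, and can be evaluated directly; a minimal Python sketch:

```python
def predictive_probs(s_path, beta):
    """P(s_t = k) and P(s_t = k+1) given s_1,...,s_{t-1} and beta.

    s_path is the latent path observed so far; k is its last state.
    """
    k = s_path[-1]
    # self-transitions in regime k along the path so far
    n_k = sum(1 for a, b in zip(s_path, s_path[1:]) if a == k and b == k)
    stay = (n_k + 1) / (n_k + 1 + beta)
    return stay, 1.0 - stay

stay, move = predictive_probs([1, 1, 1, 2, 2], beta=0.5)
```

For the path above the current regime is $k=2$ with one self-transition, so the stay probability is $(1+1)/(1+1+0.5)=0.8$, showing the self-reinforcement effect numerically.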
{The model is then}
\begin{align}
& f(\mathbf{y}|\{\boldsymbol{\theta}_k \}_{k\in \mathbb{N}},\{s_t\}_{t=1}^T) = \prod_{t=1}^T \prod_{k \in \mathbb{N}} f(y_t|\{y_j\}_{j=1}^{t-1},\boldsymbol{\theta}_{k})^{I(s_t,k)},\\
& f(s_{t}=i|s_{t-1}=k,s_{t-2},\dots , s_{1},\beta)= \\ & \phantom{s}
\begin{cases}
\frac{n_{k}^{1:(t-1)}+1}{n_{k}^{1:(t-1)}+1+\beta} & \mbox{if } i=k,\\
\frac{\beta}{n_{k}^{1:(t-1)}+1+\beta} & \mbox{if }
i=k+1,\\
\end{cases} t=2,\dots,T,\\
&s_1 = 1,\\
& \boldsymbol{\theta}_k|H \sim H, \, k \in \mathbb{N}.
\end{align}
Under this setting the MCMC updates of $\beta$ and $\boldsymbol{\theta}_k$ are simple and we show, in the next paragraphs, how to implement them. The update of $\{s_t\}_{t=1}^T$ will be discussed in more detail since it needs to be implemented more carefully to obtain an efficient algorithm.
\paragraph{The update of $\beta$}
Let $f(\beta)$ be a prior distribution, then
the full conditional of $\beta$ is proportional to
$
f(s_1,\dots,s_T|\beta) f(\beta).
$
Using \eqref{eq:s_t} we can find that
\begin{align}
& f(s_1,\dots,s_T|\beta)=\\ & \phantom{s} \left[ \prod_{i=1}^{K-1} \frac{ \beta \prod_{j=0}^{n_{i}^{1:T}-1}(1+j) } { \prod_{j=0}^{n_{i}^{1:T}}(1+\beta+j) } \right] \frac{ \prod_{j=0}^{n_{K}^{1:T}-1}(1+j) } { \prod_{j=0}^{n_{K}^{1:T}-1}(1+\beta+j) } , \label{eq:prod}
\end{align}
and using relation $a(a+1)\dots (a+m-1)= \frac{\Gamma(a+m)}{\Gamma(a)}$, \eqref{eq:prod} can be expressed as
\begin{equation} \label{eq:S}
\beta^{K-1}\prod_{i=1}^{K} \frac{ \Gamma(\beta+1) \Gamma(n_{i}^{1:T}+1 )}{\Gamma(n_{i}^{1:T}+1+\beta+1 -I(i,K) )}.
\end{equation}
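Expression \eqref{eq:S} can be checked numerically against the sequential product \eqref{eq:prod}: the Python sketch below computes $f(s_1,\dots,s_T|\beta)$ both ways for an admissible path (the chosen path and the values of $\beta$ are only for illustration).

```python
import math

def seq_prob(s, beta):
    """p(s_1,...,s_T | beta) built term by term from the predictive rule."""
    p = 1.0
    for t in range(1, len(s)):
        k = s[t - 1]
        # self-transitions in regime k up to time t-1
        n = sum(1 for j in range(t - 1) if s[j] == k and s[j + 1] == k)
        p *= (n + 1) / (n + 1 + beta) if s[t] == k else beta / (n + 1 + beta)
    return p

def closed_form_prob(s, beta):
    """p(s_1,...,s_T | beta) from the closed-form gamma-function expression."""
    K = max(s)
    counts = [sum(1 for j in range(len(s) - 1) if s[j] == i and s[j + 1] == i)
              for i in range(1, K + 1)]
    lp = (K - 1) * math.log(beta)
    for i, n in enumerate(counts, start=1):
        last = 1 if i == K else 0
        lp += (math.lgamma(beta + 1) + math.lgamma(n + 1)
               - math.lgamma(n + 1 + beta + 1 - last))
    return math.exp(lp)

path = [1, 1, 1, 2, 2, 3]   # an admissible non-decreasing path starting at 1
```

The two evaluations agree to machine precision, e.g. for the path above and $\beta=1$ both give $1/72$.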
The full conditional of $\beta$ is then
\begin{equation} \label{eq:fullbeta}
\beta^{K-1}\prod_{i=1}^{K} \frac{ \Gamma(\beta+1) \Gamma(n_{i}^{1:T}+1 )}{\Gamma(n_{i}^{1:T}+1+\beta+1 -I(i,K) )} f(\beta).
\end{equation}
{To the best of our knowledge, there is no prior distribution $f(\beta)$ that lets us express \eqref{eq:fullbeta} in a closed form from which sampling is easy, and a sample of $\beta$ must therefore be drawn using a Metropolis-Hastings step. }
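A possible implementation of this Metropolis-Hastings step is a random walk on $\log\beta$; in the Python sketch below the half-normal prior (as used in Section \ref{sec:ex}) and the proposal scale are illustrative choices, and terms of \eqref{eq:fullbeta} that are constant in $\beta$ are dropped.

```python
import math
import random

def log_full_cond_beta(beta, counts, sigma2=1.0):
    """Log of the beta full conditional, up to an additive constant.

    counts: self-transition counts n_i^{1:T} for the K current regimes.
    A half-normal prior with variance sigma2 is assumed (illustrative).
    """
    if beta <= 0:
        return -math.inf
    K = len(counts)
    lp = (K - 1) * math.log(beta) - beta ** 2 / (2.0 * sigma2)
    for i, n in enumerate(counts, start=1):
        last = 1 if i == K else 0
        lp += math.lgamma(beta + 1) - math.lgamma(n + 1 + beta + 1 - last)
    return lp

def mh_update_beta(beta, counts, step=0.3):
    """One random-walk Metropolis-Hastings step on log(beta)."""
    prop = beta * math.exp(step * random.gauss(0.0, 1.0))
    # acceptance ratio includes the Jacobian of the log transform
    log_ratio = (log_full_cond_beta(prop, counts)
                 - log_full_cond_beta(beta, counts)
                 + math.log(prop) - math.log(beta))
    return prop if math.log(random.random()) < log_ratio else beta

beta = 0.5
for _ in range(100):
    beta = mh_update_beta(beta, counts=[10, 4, 7])
```

The log-scale proposal keeps $\beta$ positive by construction, which avoids wasted proposals outside the support of the half-normal prior.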
\paragraph{The update of $\boldsymbol{\theta}_k$}
The full conditional of $\boldsymbol{\theta}_k$ is proportional to
\begin{equation} \label{eq:fulltheta}
\prod_{t=1}^{T} f(y_{t}|\{y_j\}_{j=1}^{t-1}, \boldsymbol{\theta}_{k}) ^{I(s_t,k)} h(\boldsymbol{\theta}_k).
\end{equation}
The functional form of \eqref{eq:fulltheta} depends on how we specify $f(y_{t}|\{ y_j \}_{j=1}^{t-1}, \boldsymbol{\theta}_{k})$ and $H$. As an example, if $f(y_{t}|\{ y_j \}_{j=1}^{t-1}, \boldsymbol{\theta}_{k})\equiv f(y_{t}| \boldsymbol{\theta}_{k})$, with $Y_t |\boldsymbol{\theta}_{k} \sim N(\mu_k,\sigma_k^2)$ and $\boldsymbol{\theta}_k= \{ \mu_k,\sigma_k^2 \}$, then if $H$ is the product of a normal distribution over $\mu_k$ and inverse gamma over $\sigma_k^2$, likelihood and prior are conjugate and the full conditional is normal-inverse gamma \citep{Gelman2003}.
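For the Gaussian example just described, sampling from the normal-inverse gamma full conditional can be sketched as follows; the hyperparameters of $H$ are illustrative values.

```python
import random

def sample_theta_k(y_k, m0=0.0, k0=0.01, a0=2.0, b0=1.0):
    """Draw (mu_k, sigma2_k) from the normal-inverse gamma full conditional.

    y_k: observations currently allocated to regime k; the hyperparameters
    (m0, k0, a0, b0) of the conjugate base distribution H are illustrative.
    """
    n = len(y_k)
    ybar = sum(y_k) / n
    ss = sum((y - ybar) ** 2 for y in y_k)
    kn = k0 + n
    mn = (k0 * m0 + n * ybar) / kn
    an = a0 + 0.5 * n
    bn = b0 + 0.5 * ss + 0.5 * k0 * n * (ybar - m0) ** 2 / kn
    sigma2 = bn / random.gammavariate(an, 1.0)        # sigma2 ~ IG(an, bn)
    mu = random.gauss(mn, (sigma2 / kn) ** 0.5)       # mu | sigma2 ~ N(mn, sigma2/kn)
    return mu, sigma2

mu_k, sigma2_k = sample_theta_k([4.9, 5.2, 5.1, 4.8, 5.0])
```

Conjugacy makes this a direct draw, so no Metropolis step is needed for $\boldsymbol{\theta}_k$ in this specification.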
\subsection{The update of $\{s_t\}_{t=1}^T$}
It is generally preferable to update jointly as many random variables as possible \citep{Robert2005}.
Unfortunately, we are unable to find a way to sample from the joint full conditional of $\{s_t\}_{t=1}^T$ and then a different approach must be used.
A simple solution is the univariate update of each component $s_t$ but, experimenting with simulated data, we noticed that {this} leads to unsatisfactory results in terms of MCMC chain mixing since, for example, redundant states with similar $\boldsymbol{\theta}_k$'s are created and the distribution of $K$ is generally {entirely} concentrated on a single value. We solve the aforementioned problems by combining the univariate update with other updates:
\begin{itemize}
\item the split update (or birth move) - we propose a new change point at time $t$;
\item the merge update (or death move) - we propose to merge consecutive regimes.
\end{itemize}
{At each MCMC iteration only one of them is performed, chosen randomly with assigned probabilities. } We assume that, before the MCMC update of $s_t$ is performed, it has value $k$ and that, to simplify notation, after each MCMC step regimes are relabelled so that $s_1=1$ and $s_{t} \in \{s_{t-1}, s_{t-1}+1 \}$.
\paragraph{The single-component update}
Let $n_{i}^{-t}= n_{i}^{1:(t-1)} +n_{i}^{(t+1):T}$, i.e. the number of self-transitions in the $i^{th}$ regime without taking into account transitions that involve $s_{{t}}$, and let $*$ indicate a new regime.
We have to sample $s_t$ only if $s_t\neq s_{t-1}$ or $s_t\neq s_{t+1}$, otherwise $s_t = s_{t+1}=s_{t-1}$ with probability 1, then
\begin{itemize}
\item with probability proportional to $$\frac{\beta}{\beta+1}f(y_{t}|\{ y_{j}\}_{j=1}^{t-1}, \boldsymbol{\theta}_{*})$$
$s_t=*$;
\item if $t \neq 1$, $s_t$ can be equal to $s_{t-1}$ with probability proportional to
\begin{equation}
\frac{n_{s_{t-1}}^{-t}+1}{n_{s_{t-1}}^{-t}+1+\beta+1} f(y_{t}|\{ y_{j}\}_{j=1}^{t-1},\boldsymbol{\theta}_{s_{t-1}});
\end{equation}
\item if $t \neq T$, $s_t$ can be equal to $s_{t+1}$ with probability proportional to
\begin{equation}
\frac{n_{s_{t+1}}^{-t}+1}{n_{s_{t+1}}^{-t}+1+\beta+1-I(s_{t+1},K)} f(y_{t}|\{ y_{j}\}_{j=1}^{t-1}, \boldsymbol{\theta}_{s_{t+1}});
\end{equation}
\item
if $n_{k}^{1:T}=0$, then $s_{t-1} \neq k \neq s_{t+1}$, and $s_t$ can be equal to $k$ with full conditional
\begin{equation}
\propto \frac{\beta}{\beta+1}f(y_{t}|\{ y_{j}\}_{j=1}^{t-1}, \boldsymbol{\theta}_{k}).
\end{equation}
\end{itemize}
\paragraph{The split update}
Let $S_{-} = \{s_{t'}:s_{t'}=k, t^{\prime}<t \}$ and $S^{+} = \{s_{t'}:s_{t'}=k, t^{\prime}\geq t \}$, let $n_{S_{-}}$ and $n_{S^{+}}$ be the numbers of self-transitions in the two subsets and let
\begin{equation}
\gamma_{c}(n) = \frac{\Gamma(\beta+1)\Gamma( n+1 )}{\Gamma(n+1+\beta+1 -c)},
\end{equation}
then
\begin{itemize}
\item $s_t=k$ for all $s_t \in S_{-}\cup S^+$ with probability
\begin{equation}
\propto
\gamma_{I(k,K)}(n_{k}^{1:T})
\prod\limits_{t: s_t \in S_{-}\cup S^+ } f(y_{t}|\{ y_{j}\}_{j=1}^{t-1}, \boldsymbol{\theta}_{k}) ;
\end{equation}
\item $s_t = *$ for all $s_t \in S_{-}$ and $s_t = k$ for all $s_t \in S^+ $ with probability
\begin{align}
&\propto\beta
\gamma_{0}(n_{S_{-}})
\gamma_{I(k,K)}(n_{S^{+}})\times \\
& \phantom{s}\prod\limits_{t: s_t \in S_{-} } f(y_{t}|\{ y_{j}\}_{j=1}^{t-1}, \boldsymbol{\theta}_{*})
\prod\limits_{t: s_t \in S^+ } f(y_{t}|\{ y_{j}\}_{j=1}^{t-1}, \boldsymbol{\theta}_{k});
\end{align}
\item $s_t = k$ for all $s_t \in S_{-}$ and $s_t = *$ for all $s_t \in S^+$ with probability
\begin{align}
&\propto\beta
\gamma_{0}(n_{S_{-}})
\gamma_{I(k,K)}(n_{S^{+}}) \times \\
&\phantom{s} \prod\limits_{t: s_t \in S_{-} } f(y_{t}|\{ y_{j}\}_{j=1}^{t-1}, \boldsymbol{\theta}_{k})
\prod\limits_{t: s_t \in S^+ } f(y_{t}|\{ y_{j}\}_{j=1}^{t-1}, \boldsymbol{\theta}_{*}).
\end{align}
\end{itemize}
\paragraph{Merge update}
Let $S_{j}=\{ s_t: s_t=j\}$, then, for $k=1,\dots , K$:
\begin{itemize}
\item $s_t = *$ for all $s_t$ in $S_{k}$ with probability
\begin{align}
& \beta \gamma_{0}({n}_{k-1}^{1:T})^{1-I(k,1)} \gamma_0({n}_{k}^{1:T}) \gamma_{I(k,K)}({n}_{k+1}^{1:T})^{1-I(k,K)}\times \\
&\phantom{s} \prod \limits_{t: s_t \in S_k } f(y_{t}|\{ y_{j}\}_{j=1}^{t-1}, \boldsymbol{\theta}_{*});
\end{align}
\item if $k \neq 1$, then $s_t= k-1$ for all $s_t$ in $S_{k}$
with probability
\begin{align}
& \gamma_{0}({n}_{k-1}^{1:T}+{n}_{k}^{1:T}+1) \gamma_{I(k,K)}({n}_{k+1}^{1:T})^{1-I(k,K)} \times \\
&\phantom{s} \prod \limits_{t: s_t \in S_k } f(y_{t}|\{ y_{j}\}_{j=1}^{t-1}, \boldsymbol{\theta}_{k-1});
\end{align}
\item if $k \neq K$, then $s_t= k+1$ for all $s_t$ in $S_{k}$
with probability
\begin{align}
& \gamma_{0}({n}_{k-1,k-1}^{1:T})^{1-I(k,1)} \gamma_{I(k,K)}({n}_{k,k}^{1:T}+{n}_{k+1,k+1}^{1:T}+1)\times \\
&\phantom{s} \prod \limits_{t: s_t \in S_k } f(y_{t}|\{ y_{j}\}_{j=1}^{t-1}, \boldsymbol{\theta}_{k+1});
\end{align}
\item $s_t = k$ for all $s_t$ in $S_{k}$ with probability
\begin{align}
& \beta \gamma_{0}({n}_{k-1,k-1}^{1:T})^{1-I(k,1)} \gamma_0({n}_{k,k}^{1:T}) \times \\
&\phantom{s}\gamma_{I(k,K)}({n}_{k+1,k+1}^{1:T})^{1-I(k,K)} \prod \limits_{t: s_t \in S_k } f(y_{t}|\{ y_{j}\}_{j=1}^{t-1}, \boldsymbol{\theta}_{k}).
\end{align}
\end{itemize}
MCMC mixing is improved if the univariate and split updates are performed, choosing at random, either from the first time to the last or from the last to the first.
\section{Examples} \label{sec:ex}
In this section we compare the results of our model with those of \cite{ko2015} and \cite{Chib1998} on simulated and real datasets.
Using simulated datasets we test the ability of the models to recover the right number of latent regimes and their parameters; the models are then estimated on a standard change-point problem, namely the number of coal-mining disasters, and on the radon data.
%
We implement the model of \cite{Chib1998}, introduced in Section \ref{sec:chib}, assuming prior \eqref{eq:qw} for the transition probabilities and the same priors over $\beta$ and the likelihood parameters as those of our proposal; model choice is performed using BIC.
The model of \cite{ko2015} starts from a different specification of the latent process $s_t$ with respect to our approach (for details see the Appendix), with
\begin{align}
& f(s_{t}=i|s_{t-1}=k,s_1,\dots , s_{t-1},\beta, \alpha)= \\
&\phantom{s} \begin{cases}
\frac{n_{k}^{1:(t-1)}+\alpha}{n_{k}^{1:(t-1)}+\alpha+\beta} & \mbox{if } i=k,\\
\frac{\beta}{n_{k}^{1:(t-1)}+\alpha+\beta} & \mbox{if }
i=k+1,\\
\end{cases}
\label{eq:s_t2s}
\end{align}
that reduces to our specification if $\alpha=1$, see equation \eqref{eq:s_t}. In this section the model of \cite{ko2015} is therefore implemented using the MCMC algorithm proposed by the authors, assuming $\alpha=1$ and under the same priors over $\beta$ and the likelihood parameters as our proposal. In all the examples posterior inference is carried out using 130\,000 iterations, with a burn-in of 80\,000 and a thinning of 10, retaining 5000 posterior samples for inferential purposes, and assuming a half-normal prior for $\beta$ with variance parameter $\sigma_{\beta}^2$.
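The forward dynamics in \eqref{eq:s_t2s} are straightforward to simulate; the following sketch (a hypothetical helper, tracking $n_k^{1:(t-1)}$ as a running counter of self-transitions in the current regime) illustrates the role of $\beta$ as the parameter governing the creation of new regimes:

```python
import random

def simulate_states(T, beta, alpha=1.0, seed=0):
    # draw s_1,...,s_T: stay in regime k with probability
    # (n_k + alpha) / (n_k + alpha + beta), otherwise move to regime k+1
    random.seed(seed)
    s, n = [1], 0          # n = self-transitions observed in the current regime
    for _ in range(T - 1):
        if random.random() < (n + alpha) / (n + alpha + beta):
            s.append(s[-1]); n += 1
        else:
            s.append(s[-1] + 1); n = 0
    return s
```

With $\beta=0$ the chain never leaves the first regime, while larger $\beta$ produces more change points, consistently with the role of $\beta$ in the prior over $K$.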
We choose between the single-component, split and merge updates with probabilities equal to 0.5, 0.25 and 0.25, respectively, while with probability 0.5 we perform the univariate or split update starting from the first time to the last. We indicate our proposal as Model1, the model of \cite{Chib1998} as Model2 and the one of \cite{ko2015} as Model3.
The \emph{R} \citep{rcran} source code that can be used to replicate the results of the simulated and coal-mining disasters examples is available online in a GitHub repository\footnote{\url{https://github.com/GianlucaMastrantonio/Change-Point} }
while, due to confidentiality issues, only the \emph{R} functions used to analyse the radon example are available.
\subsection{Simulated data} \label{sec:simex}
\begin{figure*}[t]
\centering
{\includegraphics[scale=0.45]{Sim1.pdf}}
\caption{Simulated example - first scheme: one of the simulated times series. The vertical dashed lines separate the regimes.}\label{fig:exy1}
\end{figure*}
\begin{figure*}[t]
\centering
{\includegraphics[scale=0.45]{Sim2.pdf}}
\caption{Simulated example - second scheme: one of the simulated times series. The vertical dashed lines separate the regimes.}\label{fig:exy12}
\end{figure*}
\begin{figure*}[t]
\centering
{\includegraphics[scale=0.45]{K1.pdf}}
\caption{Simulated example - first scheme: posterior distribution of $K$.}\label{fig:K1}
\end{figure*}
\begin{figure*}[t]
\centering
{\includegraphics[scale=0.45]{K2.pdf}}
\caption{Simulated example - second scheme: posterior distribution of $K$.}\label{fig:K2}
\end{figure*}
\begin{table}[t]
\centering
\begin{tabular}{cc|cccccccccc}
\hline \hline
&&$\hat{\mu}_k$& $\hat{\sigma}_k^2$ & $\hat{\xi}_k$\\
&& (CI) & (CI)& (CI)\\
\hline
&1 & 0.1 &0.711 & 50 \\
& & [-0.139 0.341] & [0.491 1.092]& [50 50]\\
&2 &5.005 &1.968 & 250 \\
& & [4.806 5.206] & [1.636 2.403]& [250 251]\\
&3& 1.985&1.081 & 650\\
& & [1.881 2.088] & [0.945 1.245]& [650 650]\\
&4 &-2.084 &0.628 &749 \\
& & [-2.249 -1.919] & [0.481 0.842]& [747 753]\\
&5 &-0.002 &1.132 & 1000\\
& & [-0.134 0.132] & [0.954 1.359]& [998 1000]\\
&6 &1.956 &3.017 &1400 \\
& & [1.791 2.125] & [2.644 3.448]& [1400 1400]\\
&7 &10.184 & 5.361& \\
& & [ 9.714 10.640] & [4.126 7.088]& \\
\hline \hline
\end{tabular}
\caption{Simulated example - first scheme - Model1: posterior means $ (\,\hat{ }\, )$ and credible intervals (CI) of $\mu_k$, $\sigma_k^2$ and $\xi_k$ computed using the subset of posterior samples that has $K=7$. } \label{tab:ex}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{cc|cccccccccc}
\hline \hline
&&$\hat{\mu}_k$& $\hat{\sigma}_k^2$ & $\hat{\xi}_k$\\
&& (CI) & (CI)& (CI)\\
\hline
&1 &0.1 & 0.713& 50\\
& & [-0.136 0.342] & [0.494 1.079]&[50 50] \\
&2 &5.003 &1.978 & 250 \\
& & [4.809 5.200] & [1.629 2.416]& [250 251] \\
&3 & 1.986& 1.083& 650 \\
& & [1.882 2.088] & [0.948 1.249]& [650 650] \\
&4 & -2.084& 0.633&749 \\
& & [-2.243 -1.921] & [0.485 0.851]& [747 753] \\
&5 & 0.001&1.129 & 1000\\
& & [-0.135 0.135] & [0.953 1.360]& [998 1000 ] \\
&6 &1.957 &3.018 &1400 \\
& & [1.786 2.133] & [2.633 3.483]& [1400 1400] \\
&7 &10.18 &5.349 & \\
& & [9.725 10.632] & [4.125 7.167]& \\
\hline \hline
\end{tabular}
\caption{Simulated example - first scheme - Model2: posterior means $ (\,\hat{ }\, )$ and credible intervals (CI) of $\mu_k$, $\sigma_k^2$ and $\xi_k$. } \label{tab:ex2}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{cc|cccccccccc}
\hline \hline
&&$\hat{\mu}_k$& $\hat{\sigma}_k^2$ & $\hat{\xi}_k$\\
&& CI & CI& CI\\
\hline
&1 &0.101 &0.715 & 50\\
& & [-0.138 0.336] & [0.493 1.095]& [50 50]\\
&2 & 5.008&1.974 &250\\
& & [4.816 5.199] & [1.627 2.420]& [250 251]\\
&3 & 1.972& 1.123&900\\
& & [1.891 2.050] & [1.014 1.257]& [900 903]\\
&4& 2.08& 0.163&957\\
& & [1.966 2.188] & [0.114 0.245]& [950 959]\\
&5 & 1.935& 1.118&1101\\
& & [1.758 2.107] & [0.896 1.432]& [1098 1101]\\
&6 & 1.887 & 14.833&1400\\
& & [1.448 2.332] & [12.638 17.546]& [1399 1400]\\
&7 & 10.171& 5.376& \\
& & [9.720 10.625] & [4.088 7.161]& \\
\hline \hline
\end{tabular}
\caption{Simulated example - second scheme - Model1: posterior means $ (\,\hat{ }\, )$ and credible intervals (CI) of $\mu_k$, $\sigma_k^2$ and $\xi_k$ computed using the subset of posterior samples that has $K=7$. } \label{tab:ex3}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{cc|cccccccccc}
\hline \hline
&&$\hat{\mu}_k$& $\hat{\sigma}_k^2$ & $\hat{\xi}_k$\\
&& CI & CI& CI\\
\hline
&1 &0.099 &0.712 &50\\
& & [-0.138 0.340] & [0.496 1.071]& [50 50] \\
&2& 5& 1.978&250\\
& & [4.808 5.199] & [1.637 2.419]& [250 251] \\
&3 & 1.973& 1.123&900\\
& & [1.895 2.058] & [1.008 1.257]& [899 905] \\
&4 &2.077 & 0.162&956\\
& & [1.963 2.190] & [0.111 0.238]& [947 959] \\
&5 & 1.937&1.115 &1101 \\
& & [1.767 2.113] & [0.89 1.42]& [1096 1101]\\
&6 &1.897 & 14.808&1400\\
& & [1.458 2.341] & [12.699 17.392]& [1399 1400] \\
&7 & 10.159&5.386 &\\
& & [ 9.693 10.623] & [4.079 7.218]& \\
\hline \hline
\end{tabular}
\caption{Simulated example - second scheme - Model2: posterior means $ (\,\hat{ }\, )$ and credible intervals (CI) of $\mu_k$, $\sigma_k^2$ and $\xi_k$. } \label{tab:ex4}
\end{table}
We simulate datasets under two schemes, both with $T=1500$ and 7 regimes, assuming conditional independence between the $y_t$'s with $Y_t|\boldsymbol{\theta}_k \sim N(\mu_k,\sigma_k^2)$.
In the first scheme the change points are $\boldsymbol{\xi}=\{\xi_k\}=\{50, 250, 650, 750, 1000, 1400, 1500\}$, and the parameters are $\boldsymbol{\mu}$ $=\{\mu_k\}_{k=1}^7$ $=\{0, 5, 2, -2, 0, 2, 10\}$ and $\boldsymbol{\sigma}^2=\{\sigma_k^2\}_{k=1}^7=\{1, 2, 1, 0.5, 1, 3, 5\}$, while in the second $\boldsymbol{\xi}=\{50, 250, 900, 950, 1100, 1400, 1500\}$ with $\boldsymbol{\mu}=\{0,5,2,2,2,2,10\}$ and $\boldsymbol{\sigma}^2=\{1,2,1,0.1,1,15,5\}$.
For each scheme 100 datasets are simulated; two examples of simulated time series, one for each scheme, are plotted in Figures \ref{fig:exy1} and \ref{fig:exy12}.
The parameters of the first scheme are chosen so as to have regimes of short (1 and 4) and long (3 and 6) duration, overlapping distributions in adjacent regimes (2--3 and 4--5--6), well-separated ones (1--2 and 3--4) and different values of variability. In the second scheme we are mainly interested in evaluating how the models behave when a short regime (the fourth) lies between two regimes (the third and fifth) that share the same density parameters.
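As an illustration, one dataset from the first scheme can be generated with a few lines (a sketch using the standard library; any random seed will do):

```python
import random

random.seed(1)
xi = [50, 250, 650, 750, 1000, 1400, 1500]   # change points xi_1, ..., xi_7
mu = [0, 5, 2, -2, 0, 2, 10]                 # regime means
sig2 = [1, 2, 1, 0.5, 1, 3, 5]               # regime variances

bounds = [0] + xi
y = [random.gauss(mu[k], sig2[k] ** 0.5)
     for k in range(7)
     for _ in range(bounds[k + 1] - bounds[k])]
```

The seven segments have lengths 50, 200, 400, 100, 250, 400 and 100, summing to $T=1500$.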
We assume a normal prior for $\mu_k$ with 0 mean and variance 1000, while the prior over $\sigma_k^2$ is inverse gamma with shape and rate parameters both equal to 1.
Here $\sigma_{\beta}^2$ is set to 1000 and, through simulation, we verified that it induces a prior over $K$ that puts the central 90\% of the probability mass between 741 and 1477. For Model2, we estimated models with $K^* \in\{ 4,5,\dots ,10 \}$.
\paragraph{First scheme}
Under our proposal the maximum a posteriori (MAP) estimate of $K$ is 7 in 96 datasets, 8 in 3 and 9 in 1. We measure the agreement between the true partition and the one found by our model, i.e.~the MAP classification, through the Rand Index (RI) \citep{Hubert1985}, an index ranging between 0 and 1, where 0 indicates that the two partitions do not agree on any pair of points and 1 indicates perfect agreement \citep[for details see][]{Hubert1985}. Across datasets, the minimum value reached by the RI is 0.986, and in 27 of them it is exactly 1. Model2 identifies 7 regimes in 99 datasets and 8 in 1, with a minimum RI of 0.969, lower than the one found by our model, and it is exactly 1 in 27 datasets. On the other hand,
Model3 always identifies 1 regime; we give a justification of this result in the Appendix.
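For concreteness, the RI used above can be computed by enumerating pairs of points (a sketch; for long series the $O(T^2)$ pair enumeration is better replaced by contingency-table formulas):

```python
from itertools import combinations

def rand_index(a, b):
    # fraction of point pairs on which the two partitions agree:
    # both together in a and b, or both apart in a and b
    pairs = list(combinations(range(len(a)), 2))
    agree = sum((a[i] == a[j]) == (b[i] == b[j]) for i, j in pairs)
    return agree / len(pairs)
```

Note that the RI is invariant to relabelling of the regimes, which is what makes it suitable for comparing a MAP classification with the true segmentation.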
For Model1 and Model2 we show in Tables \ref{tab:ex} and \ref{tab:ex2} the posterior estimates and credible intervals (CIs) for the simulated dataset depicted in Figure \ref{fig:exy1}. Results under Model1 and Model2 are quite similar and both estimate the parameters well, i.e.~the true values are inside the associated CIs, but
only with our proposal do we obtain an estimate of the posterior distribution of $K$, shown in Figure \ref{fig:K1}, which allows us to evaluate the uncertainty on the estimated number of regimes. The CI of $\beta$ is [0.076 0.366] for Model1 and [0.073 0.352] for Model2.
\paragraph{Second scheme}
Under Model1, the MAP estimate of $K$ is 7 in 75 datasets, 8 in 6, 9 in 1, 6 in 1 and 5 in 17. As in the first scheme, Model3 always identifies 1 regime, while Model2 estimates 7 regimes in 23 datasets and 5 in 66. As expected, when a posterior sample of $K$ is 5, regimes 3, 4 and 5 are generally merged.\\
In terms of RI, Model1 and Model2 have similar minimum and maximum values, i.e.~for both the maximum is 0.999, while Model1 has minimum 0.717 and Model2 0.714. The main difference is in the distribution of the RI across simulated datasets: our proposal has a median RI of 0.995 while the model of \cite{Chib1998} has a median of 0.725, showing that our proposal tends to perform better.
For the simulated dataset plotted in Figure \ref{fig:exy12}, we show in Tables \ref{tab:ex3} and \ref{tab:ex4} the parameter estimates for Model1 and Model2, where we can see good agreement between the posterior CIs; for both models the values of $\sigma_{3}^2$ and $\sigma_{4}^2$ are the only ones not inside the associated CIs. $\beta$ has CI [0.0778 0.370] in Model1 and [0.076 0.369] under Model2. The posterior distribution of $K$ is shown in Figure \ref{fig:K2}.
\subsection{Coal-mining disasters}
\begin{figure*}[t]
\centering
{{\includegraphics[scale=0.48]{coal.pdf}}}
\caption{Coal-mining disasters data: the height of the bar represents the count in the associated year. The vertical dashed line separates the regimes identified by the MAP estimate of Model1.}\label{fig:exy2}
\end{figure*}
\begin{figure*}[t]
\centering
{\includegraphics[scale=0.45]{Kcoal.pdf}}
\caption{Coal-mining disasters data: posterior distribution of $K$.}\label{fig:coalK}
\end{figure*}
Our first real-data application is devoted to the analysis of one of the most analysed datasets in the change-point literature \citep[see][]{JARRETT1979,Carlin1992}: the annual number of coal-mining disasters in Britain during the period 1851--1962. Here $y_t \in \mathbb{N}$, $\boldsymbol{\theta}_k = \lambda_k$ and, following \cite{Chib1998}, we assume $f(y_{t}|\{ y_j \}_{j=1}^{t-1}, \lambda_{k})=f(y_{t}| \lambda_{k})$
with $Y_t| \lambda_{k}\sim Pois(\lambda_{k})$. The data are plotted in Figure \ref{fig:exy2}.
We assume $\lambda_k\sim G(2,1)$, while the variance parameter of the half-normal prior on $\beta$ is set to 0.1; again through simulation, we verified that this induces a prior over $K$ that puts the central 90\% of the probability mass between 1 and 10. On this dataset Model2 is tested with $K$ between 1 and 4.
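With this conjugate choice, the full conditional of $\lambda_k$ given the counts currently assigned to regime $k$ is again a Gamma distribution; a minimal sketch of the update (shape--rate parametrisation, with the helper name chosen here for illustration):

```python
def lambda_full_conditional(counts, a=2.0, b=1.0):
    # Gamma(a, b) prior + Poisson likelihood for the n_k counts in regime k
    # gives the full conditional Gamma(a + sum y_t, b + n_k)
    return a + sum(counts), b + len(counts)
```

For instance, three years with 3, 1 and 2 disasters assigned to regime $k$ update the $G(2,1)$ prior to a $G(8,4)$ full conditional.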
The MAP estimate of $K$ is 1 under Model3, i.e.~there is no segmentation, while Model2 chooses $K=2$, which is also the value found in \cite{Chib1998}. Our proposal has a MAP estimate of $K$ equal to 2 and the associated posterior distribution is shown in Figure \ref{fig:coalK}. We point out that, since our proposal has a non-parametric specification of the latent allocation structure, it is not surprising that with little data, as in this example, the posterior of $K$ has a lot of variability.
For Model1, the posterior mean estimates of $\lambda_{1}$ and $\lambda_{2}$ are $\hat{\lambda}_1=3.045$ and $\hat{\lambda}_2=0.923$ with CIs [2.544 3.648] and [0.711 1.166], respectively; under Model2 they are $\hat{\lambda}_1=3.084$ and $\hat{\lambda}_2=0.933$ with CIs [2.587 3.688] and [0.7223 1.186], respectively. In both models the CI of $\xi_1$ is [1886 1896] with MAP estimate 1890, while the CI of $\beta$ is [0.053 1.017] for Model1 and [0.025 0.45] for Model2.
\subsection{Indoor radon data} \label{sec:radon}
\begin{figure*}[t]
\centering
{{\includegraphics[scale=0.46]{Radon1.pdf}}}
\caption{Indoor radon concentration data: the vertical dashed line separates the regimes identified by the MAP estimate of Model1.}\label{fig:radon}
\end{figure*}
\begin{table*}[t]
\centering
\begin{tabular}{cc|ccccc}
\hline \hline
&&$\hat{\mu}_{0,k}$&$\hat{\mu}_{1,k}$& $\hat{\sigma}_{k}^2$ & $\hat{\xi}_k$\\
&& CI & CI& CI&CI\\
\hline
&1 & -0.19 &0 &0.027& 367\\
& & [-0.225 -0.155] & [0.000 0.001]&[0.024 0.032] &[366 371 ] & \\
&2 & 1.429 &-0.003 &0.039& 470\\
& & [0.856 1.987] & [-0.004 -0.002]&[0.030 0.052] &[468 471] & \\
&3 & 5.824&-0.011 &0.032&614 \\
& & [5.448 6.218] & [-0.012 -0.011]&[0.026 0.041] &[604 627] & \\
&4 &-1.136 & 0&0.021&881 \\
& & [-1.311 -0.953] & [-1.311 -0.953]&[0.018 0.025] &[865 893] & \\
Regime &5 &-9.571 &0.01 &0.027&1022 \\
& & [-10.360 -8.943] & [0.009 0.010]&[0.022 0.035] &[1004 1027] & \\
&6 & -2.042 & 0.002&0.041&1183 \\
& & [-2.862 -1.243] & [0.001 0.003]&[0.034 0.051] &[1181 1187] & \\
&7 & -18.73 &0.016 &0.069&1315 \\
& & [-20.302 -17.178] & [0.015 0.018]&[0.055 0.089] &[1309 1325] & \\
&8 & 31.758 & -0.022&0.046& \\
& & [30.886 32.641] & [-0.022 -0.021]&[0.037 0.056] & & \\
\hline \hline
\end{tabular}
\caption{Indoor radon concentration data: posterior means $ (\,\hat{ }\, )$ and credible intervals (CI) of $\mu_{0,k}$, $\mu_{1,k}$, $\sigma_k^2$ and $\xi_k$ computed using the subset of posterior samples that has $K = 8$. } \label{tab:radon}
\end{table*}
\begin{table*}[t]
\centering
\begin{tabular}{cc|ccccc}
\hline \hline
&&$\hat{\mu}_{0,k}$&$\hat{\mu}_{1,k}$& $\hat{\sigma}_{k}^2$ & $\hat{\xi}_k$\\
&& CI & CI& CI&CI\\
\hline
&1 & -0.758 & 0.448 &1.495& 1 \\
& & [-44.558 44.088] & [-42.921 44.201]&[0.262 39.986] &[1 2] & \\
&2 & -0.23 &0.001 &0.03&477 \\
& & [-0.263 -0.197] & [0.001 0.001]&[0.027 0.034] &[469 521] & \\
&3 & 5.94 & -0.012&0.033&613 \\
& & [5.245 6.525] & [-0.013 -0.010]&[0.026 0.046] &[604 629] & \\
&4 & -1.142& 0&0.021&883 \\
& & [-1.318 -0.952] & [-1.318 -0.952]&[0.018 0.025] &[865 895] & \\
Regime &5 &-9.61 &0.01 &0.028&1021 \\
& & [-10.584 -8.911] & [0.009 0.011]&[0.022 0.037] &[988 1027] & \\
&6 & -2.017& 0.002&0.041&1183 \\
& & [-2.912 -1.253] & [0.001 0.003]&[0.032 0.052] &[ 1180 1188] & \\
&7 &-18.712 & 0.016&0.068&1317 \\
& & [-20.190 -17.031] & [0.015 0.018]&[0.053 0.087] &[1308 1334] & \\
&8 & 31.776 &-0.022 &0.046& \\
& & [30.865 32.631] & [-0.022 -0.021]&[0.038 0.056] & & \\
\hline \hline
\end{tabular}
\caption{Indoor radon concentration data. Posterior means $ (\,\hat{ }\, )$ and credible intervals (CI) of $\mu_{0,k}$, $\mu_{1,k}$, $\sigma_k^2$ and $\xi_k$. } \label{tab:radon2}
\end{table*}
Radon is a colorless and odorless inert noble gas generated by the radioactive decay of Radium (Ra226) in the decay chain of Uranium (U238) \citep{Hauksson1981}.
A time series of radon concentration is characterized by daily and annual periodic components,
with almost daily changes in mean level, variance and temporal trend \citep{barbosa2010}.
Radon concentration is considered a proxy of geodynamic activity, since many authors \citep[see][and references therein]{monnini1997,Woith2015} have shown that, prior to a powerful geodynamic event such as an earthquake, a radon time series can display abrupt, out-of-the-ordinary changes \citep{Steinitz2003,Kawada2007}; this connection between radon anomalies and geodynamic events makes the understanding of radon time-series dynamics relevant.
In this example we use the radon data owned by the \emph{International Association for Research Seismic Precursors} (iAReSP) \citep{Nicol2015,Nicoli2016a}. The iAReSP, through the Tellus project \citep{TELLUS}, which consists of a network of radon recording stations,
aims to understand what happens to the radon concentration during the phase preceding an earthquake. At the present moment we have data only from one station and on a limited time window. More precisely,
our data are the mean radon counts over ten minutes, observed between November 18$^{th}$ 2015, 8:00, and November 28$^{th}$ 2015, 17:50, yielding 1500 observations.
Data are recorded in central Italy, in the town of Pizzoli, close to the city of L'Aquila, 803~m above sea level, using an ionization chamber with continuous measurement of the alpha particles produced by the decay of the radon isotope $^{222}$Rn \citep{Nicol2015}. In the observational period no major earthquakes were observed.
Here we show some preliminary results that demonstrate the ability of the model to segment a radon time series. As said at the beginning of this section, radon data present a daily periodicity that is stable in time \citep[see][]{barbosa2010}; in other words, changes in the time series do not affect its amplitude. This characteristic of the radon data cannot be expressed in our model, which assumes that all parameters change between regimes. Therefore, prior to model fitting, we decompose the time series into seasonal, trend and irregular components using the approach of \cite{cleveland90}, implemented in the \emph{stl} function of \emph{R}, and the (daily) seasonal component is subtracted to eliminate the periodicity.
The resulting time series has mean $\approx$5080.515 and variance
$\approx$1124047 and, to avoid possible numerical stability problems that such large numbers may raise,
we standardize the data;
the resulting time series is plotted in Figure \ref{fig:radon}.
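The preprocessing just described (removal of the daily seasonal component followed by standardisation) can be sketched as follows; for illustration only, the \emph{stl} decomposition of \cite{cleveland90} is replaced here by a simple periodic-mean subtraction over the 144 ten-minute slots of a day:

```python
def deseason_and_standardize(y, period=144):
    # subtract the mean daily profile (10-minute data: 144 points per day),
    # then standardize the residuals to zero mean and unit variance
    prof = [sum(y[i::period]) / len(y[i::period]) for i in range(period)]
    r = [v - prof[t % period] for t, v in enumerate(y)]
    m = sum(r) / len(r)
    s = (sum((v - m) ** 2 for v in r) / len(r)) ** 0.5
    return [(v - m) / s for v in r]
```

This crude stand-in ignores the trend and loess smoothing of \emph{stl}, but conveys the two operations applied before model fitting.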
To take into account changes in mean level, variance and temporal trend, we assume
$Y_t| \boldsymbol{\theta}_k \sim N(\mu_{0,k}+\mu_{1,k} t, \sigma_{k}^2) $. \\
Parameters $\mu_{0,k}$ and $\mu_{1,k}$ have normal priors with mean 0 and variance 1000, while $\sigma_k^2\sim IG(1,1)$. In this example $T$ is 1500, as in the simulated ones, and we therefore use the same prior on $\beta$.
\emph{A posteriori}, Model3 estimates only 1 regime, while our proposal puts 99\% of the probability mass on $K=8$ and the remainder on $K=9$; the posterior means and CIs of the parameters and change points are reported in Table \ref{tab:radon}, while Figure \ref{fig:radon} shows the MAP classification. Model2 estimates 8 regimes. The main differences between our proposal and Model2 can be seen in the estimates of the parameters and change points of the first two regimes, see Tables \ref{tab:radon} and \ref{tab:radon2}.
Posterior mean estimates, CIs and change points of the other regimes are similar. The CI of $\beta$ is [0.078 0.331] under Model1 and [0.082 0.368] under Model2.
Model1 and Model2 find a clear and reasonable segmentation of the data, showing that there are almost daily changes in the radon emission features. This last finding is coherent with previous studies, see for example \cite{barbosa2010}.
\section{Discussion} \label{sec:disc}
In this work we proposed a semi-parametric formalization of the standard change-point model of \cite{Chib1998}. In our extension, the first order latent Markov process, ruled by a constrained one-step transition matrix, is substituted by a stochastic process based on the stick-breaking representation of the DP. We suggested to draw samples from the posterior distribution using a marginalized version of the proposed model and we showed how to compute the full conditionals needed to implement the MCMC algorithm. \\
To assess the ability of the model to recover the number and locations of change points, we used simulated examples. We then made inference on one of the most analysed datasets in the literature and on a new one, showing that our proposal outperforms those of
\cite{ko2015} and \cite{Chib1998} in terms of change-point estimates.\\
In future work we will enrich the model by including covariate information and by modelling multiple time series subject to individual and concurrent shifts in their features. We will also use the model to analyse a longer time series of radon data, to possibly acquire an early signal of major earthquake events.
\section*{Acknowledgements}
The author wishes to thank Giovanna Jona Lasinio and Antonello Maruotti for assistance and comments that greatly improved the manuscript.
\bibliographystyle{natbib}
\section{Introduction}
\label{Introduction}
After several decades from the pioneering work
of~\citet[][]{Oemler1974,Davis&Geller1976,Sandage&Visvanathan1976},
the role of environment in driving galaxy evolution still represents a research
frontier.
Several correlations have been observed between the
place in which galaxies reside and their own properties
\citep[see e.g.][for a review]{Blanton2009},
but the mechanisms responsible for them remain
poorly understood. Even the well-established morphology-density relation
\citep{Dressler1980,Postman&Geller1984} has a number of
contrasting interpretations
\citep[cf][]{Thomas2010,vanderWel2010,Cappellari2011}.
Many pieces of evidence
suggest that the environment
has a fundamental influence \citep[e.g.][]{
Cooper2008,Cucciati2010,Burton2013}.
In particular, some
of the processes halting the production of new stars
(the so-called ``galaxy quenching'') should be related to the
dense intergalactic medium (e.g., ram pressure stripping)
and/or interactions with nearby galaxies \citep[for more details,
see e.g.][]{Boselli&Gavazzi2006,Gabor2010,Woo2012}.
On the contrary,
other authors consider the galaxy stellar mass ($\mathcal{M}$)
or the halo mass ($\mathcal{M}_\mathrm{h}$)
the main evolutionary drivers,
with a secondary -- or even negligible --
contribution by their environment \citep[e.g.][]{
Pasquali2009,Thomas2010,Grutzbauch2011}.
Classical discussions contrast a scenario in which the
fate of a galaxy is determined primarily by physical processes
coming into play after the galaxy has become part of a group or
of a cluster (``nurture''), to one in which the observed environmental
trends are established before these events,
and primarily determined by internal physical processes (``nature'').
However, this dichotomy is simplistic,
as stellar mass and environment are inter-related.
In fact, the parametrisation of the latter is
often connected to the gravitational
mass of the hosting halo, which is also physically
coupled to galaxy stellar mass.
Therefore, in a scenario of
hierarchical accretion, it is expected that
most massive galaxies show a correlation with overdensities
\citep{Kauffmann2004,Abbas2005,Scodeggio2009}.
For this reason,
it is misleading
to contrast stellar mass and
environment as two separate aspects of galaxy evolution
\citep[see the discussion in][]{DeLucia2012}.
Another crucial point is how the environment is defined.
One possibility is to identify high-density regions as
galaxy groups and clusters, in contrast to a
low-density ``field'', sometimes ambiguously defined.
When halo mass estimates are used, the
classification is more tightly related to the underlying distribution
of dark matter, with galaxies often divided in satellite and central
objects \citep[][]{vandenBosch2008a}.
Other methods, involving galaxy counts,
can identify a broad range of densities
with a resolution
from a few Megaparsecs down to $\sim100\,\mathrm{kpc}$;
they are based on the two-point clustering \citep[e.g.][]{Abbas2005},
Voronoi's tessellation \citep[e.g.][]{Marinoni2002},
or the computation of the galaxy number density inside a
window function, which is the approach used in the present work.
In general, different methods probe galaxy surroundings on
different scales
\citep[][]{Muldrew2012}. Nonetheless, the method we adopt here
(based on the fifth nearest neighbourhood)
is expected to be in overall good agreement with other
robust estimators as Delaunay's or Voronoi's tessellations
\citep[see][]{Darvish2015}.
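As a concrete illustration of the nearest-neighbour approach, the projected density around a galaxy can be estimated from the distance to its fifth nearest neighbour as $\Sigma_5 = 5/(\pi d_5^2)$ (a simplified sketch of the general idea; the VIPERS analysis of \citealp{Cucciati2014a} works with density contrasts measured on the full spectroscopic sample, as described in Sect.~\ref{Data}):

```python
import math

def sigma5(distances):
    # projected density from the distance to the 5th nearest neighbour:
    # Sigma_5 = 5 / (pi * d5^2); `distances` lists projected separations
    # from the target galaxy to its neighbours
    d5 = sorted(distances)[4]
    return 5.0 / (math.pi * d5 ** 2)
```

Smaller $d_5$ means a denser local environment; the choice of the fifth neighbour trades resolution against shot noise.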
In this kind of research,
the galaxy stellar mass function
(GSMF) is one of the most effective tools.
Especially when computed inside a specific
environment, the GSMF requires excellent data,
derived from the observation of wide fields
or a large number of clusters/groups.
For this reason, only few studies on the GSMF
consider the environmental dependence aspects
\citep[e.g.][]{Baldry2006,Bundy2006,Bolzonella2010,Vulcani2012,
Giodini2012,Annunziatella2014,Hahn2015,Mortlock2015}.
Although still fragmentary,
an interesting picture is emerging from these pieces of work.
In the local Universe, \citet[][SDSS data]{Baldry2006}
observe a correlation between the turn-off mass of the
GSMF ($\mathcal{M_\star}$)
and the local density (which they estimate
as an average between the fourth and fifth nearest neighbour).
In the lowest densities, they estimate
$\log(\mathcal{M_\star/M_\odot})\simeq10.6$, reaching values of
about 11.0 in the densest environment.
Probing a larger redshift range (from $z\sim1$ to $\sim0.1$)
\citet{Bolzonella2010} detect
environmental effects for zCOSMOS \citep[][]{Lilly2007} galaxies:
the passive population grows more rapidly
inside regions of high density (recovered by counting the fifth nearest neighbour
of each galaxy).
The authors find
this trend by studying the redshift evolution
of $\mathcal{M}_\mathrm{cross}$,
i.e.~the value of stellar mass at which the active and passive GSMFs
intersect each other
\citep[see also][]{Bundy2006,Peng2010,Annunziatella2014}.
Recent studies indicate that already at $z\sim1$
the assembly of massive galaxies is faster
in overdensities \citep{Mortlock2015}.
Using a slightly different classification with respect to
\citeauthor{Bolzonella2010}, i.e.~relying on the third nearest
neighbour, \citet{Bundy2006} search for environmental
effects in the stellar mass function of DEEP2 galaxies,
from $z=0.4$ to 1.4.
The evolution they find shows a mild
dependence on local environment,
such that \citeauthor{Bundy2006} quantify it
as a secondary driver with respect to stellar mass.
Other studies compare the stellar mass functions
of clusters, groups, and isolated (or ``field'') galaxies.
\citet{Kovac2010b}, using the 10k zCOSMOS sample,
confirm the trend noted by \citet{Bolzonella2010}:
massive galaxies preferentially reside inside groups.
\citet{Annunziatella2014} find that the passive galaxy stellar
mass function
in a cluster of the CLASH-VLT survey
has a steeper slope
in the core of the cluster than in the
outskirts \citep[see also][]{Annunziatella2015}.
On the other hand,
\citet{Calvi2013} and \citet{Vulcani2012,Vulcani2013}
compare galaxy clusters and general field up to
$z\simeq0.8$,
without detecting any significant difference
in the respective GSMFs. Also
\citet{vanderBurg2013}
find similar shapes for active/passive mass functions
in each environment, although
the total GSMFs differ from each other because of
the different percentage of passive galaxies in their
clusters at $0.86<z<1.34$
with respect to the field.
We note however that these analyses are based on
different kinds of datasets: \citeauthor{Vulcani2013} and
\citeauthor{vanderBurg2013} use samples of several
clusters, while \citeauthor{Annunziatella2014} focus on
one system but with deeper and wider observations.
We aim to provide new clues in this context, exploiting the
VIMOS Public Extragalactic Redshift Survey
\citep[VIPERS,][]{Guzzo2014} to search for environmental
effects between $z\simeq1$ and $z\simeq0.5$.
As shown in a previous paper of this series
\citep[][hereafter D13]{Davidzon2013}, the VIPERS data allow
robust measurement of the GSMF.
The accurate VIPERS redshifts are also the cornerstone to estimate
the local density contrast around each galaxy;
this task has been carried out
in \citet[][]{Cucciati2014a} and will be further
developed in another study focused on the colour-density relation
(Cucciati et al., in prep.).
In the present paper, Sect.~\ref{Data}
contains a general description of the survey.
The computation of local density contrast
is summarised in the same Section, along with
the derivation of other fundamental galaxy quantities
(in particular galaxy stellar mass).
In Sect.~\ref{Classifications} we present our classification of
environment and galaxy types.
After posing these definitions, we are able to estimate the GSMF
in low- and high-density regions of VIPERS, also considering
the passive and active galaxy samples separately
(Sect.~\ref{Gsmf results}).
The interpretation of our results is discussed in Sect.~\ref{Discussion},
while conclusions are in Sect.~\ref{Conclusions}.
Our measurements assume a flat $\Lambda$CDM cosmology in which
$\Omega_m = 0.25$, $\Omega_\Lambda= 0.75$, and $h_{70}=H_0 /
(70\,\mathrm{km\,s^{-1}\,Mpc^{-1}})$, unless specified otherwise.
Magnitudes are in the AB system \citep{Oke1974}.
\section{Data}
\label{Data}
Since 2008, VIPERS has probed
a volume of $\sim1.5 \times 10^8\,\mathrm{Mpc}^3\,h_{70}^{-3}$
between $z=0.5$ and $1.2$, providing the largest spectroscopic galaxy
catalogue at intermediate redshifts.
The final public release, expected in 2016, will cover $24$\,deg$^2$,
including about 90\,000 galaxies and
AGNs in the magnitude range of $17.5\leqslant i \leqslant 22.5$.
The first public data release (PDR1),
consisting of 57\,204 spectroscopic measurements,
has been presented in \citet{Garilli2014} and is now
available on the survey database\footnote{\url{http://vipers.inaf.it/rel-pdr1.html}}.
From a cosmological perspective, the main goal of VIPERS is
measuring the growth rate of structure \citep{delaTorre2013b}.
Additional science drivers also concern extragalactic research fields,
investigating a wide range of galaxy properties at an epoch when
the Universe was about half its current age
(\citealp{Marchetti2013,Malek2013}; D13; \citealp{Fritz2014}).
In addition, in the context of the present study, it is worth mentioning the VIPERS
papers that describe the relation between baryons and dark matter
through the galaxy bias factor \citep{Marulli2013,DiPorto2014,Cappi2015,Granett2015}.
Both the galaxy density field and the galaxy bias, if the latter is measured
as a function of stellar mass and/or luminosity, are intimately linked to clustering
and the total matter distribution.
We refer the reader to \citet{Guzzo2014} and \citet{Garilli2014}
for further details on the survey construction and the
scientific investigations being carried out by the VIPERS collaboration.
\subsection{Photometry}
\label{Photometry}
The spectroscopic survey is
associated with photometric ancillary data
obtained from both public surveys and dedicated observations.
The VIPERS targets have been selected within two fields of the
Canada-France-Hawaii Telescope Legacy Survey Wide
(CFHTLS-Wide\footnote{\url{http://www.cfht.hawaii.edu/Science/CFHLS/}}),
namely W1 and W4.
The CFHTLS optical magnitudes were derived by the Terapix
team\footnote{Data available at \url{http://www.terapix.iap.fr}
(T0005 data release).}
by means of \texttt{SExtractor}
\citep[MAG\_AUTO in double image mode, see][]{Bertin&Arnouts1996}
in the filters
$u^\ast$, $g^\prime$, $r^\prime$, $i^\prime$, and $z^\prime$.
Photometric redshifts ($z_\mathrm{phot}$) have been estimated by using
such magnitudes, following the procedure described in \citet{Coupon2009};
their uncertainty is $\sigma_\mathrm{zphot}=0.035(1+z_\mathrm{phot})$.
This photometric catalogue is limited at $i\leqslant22.5$, and we refer to it as
the ``parent sample'' of VIPERS.
Sources whose
quality was deemed insufficient
for our analysis (e.g.~because of nearby stars)
have been excluded by means of angular masks.
Beyond optical data, a $K_s$-band follow-up
added information in the near-infrared (NIR) range.
Data were collected
by means of the WIRCam instrument at CFHT,
setting an optimised depth
to match the brightness of the spectroscopic sources
(Moutard et al., in prep.).
These observations almost completely cover the W4 field,
while in W1 a $1.6\times0.9$\,deg$^2$ area is missing
(see D13, Fig.~1).
At $K\leqslant22.0$ (that is the $5\sigma$ limiting magnitude),
96\% of the VIPERS objects in W4 have a
WIRCam counterpart, as compared to
80\% in W1.
When estimating galaxy stellar masses by fitting galaxy
spectral energy distributions (SEDs),
NIR photometry can be critical,
e.g.~to avoid age underestimates
\citep[see][]{Lee2009}.
For this reason, $K_\mathrm{WIRCam}$ has been
complemented by the UKIDSS data\footnote{\url{http://www.ukidss.org},
note that Petrosian magnitudes in the database are in Vega system,
but conversion factors to AB system are provided
by the UKIDSS team on the reference website.
}.
The sky region that WIRCam did not observe in W1
is fully covered by the UKIDSS-DXS survey,
which has been used also in W4
-- together with the shallower UKIDSS-LAS --
for less than $300$ sparse objects not matched
with $K_\mathrm{WIRCam}$.
After that, the fraction of our spectroscopic sample with a
$K$-band magnitude rises to $97$\% in both W1 and W4;
in the absence of a $K$ magnitude, we use (where possible)
the $J$ band from UKIDSS.
The comparison between $K_\mathrm{WIRCam}$ and
$K_\mathrm{UKIDSS}$ was
performed in D13: the two surveys are in
good agreement, and we can safely combine them.
In addition, about 32\% of the spectroscopic targets in W1
lie in the XMM-LSS field and have been associated with
infrared (IR) sources observed by SWIRE\footnote{\url{http://swire.ipac.caltech.edu/swire}}.
Since our SED fitting (Sect.~\ref{Sed fitting})
uses models of stellar population synthesis
that do not reproduce the re-emission from dust,
we only considered magnitudes in the $3.6\,\mu$m and
$4.5\,\mu$m bands (it should also be noticed that
beyond those wavelengths SWIRE is
shallower, and source detection is very sparse).
\subsection{Spectroscopy}
\label{Dataspec}
We extract our galaxy sample from the
same spectroscopic catalogue used
in D13.
That catalogue includes 53\,608 galaxy spectra
with $i \leqslant i_\mathrm{lim} \equiv22.5$.
Along with the limiting magnitude,
an additional criterion for target selection,
based on $(g-r)$ and $(r-i)$ colours,
was successfully applied
to enhance the probability of observing
galaxies at $z>0.5$ \citep[see][]{Guzzo2014}.
Spectra were observed at VLT using the VIMOS multi-object
spectrograph \citep{LeFevre2003a}
with the LR-Red grism ($R=210$)
in a wavelength range of 5500--9500\,\AA.
The four quadrants of the VIMOS instrument, separated
by gaps $2\arcmin$ wide, produce a characteristic
footprint that we have accounted for
by means of spectroscopic masks.
Besides gaps, a few quadrants are missing
in the survey layout \citep[][Fig.~10]{Guzzo2014}
because of technical issues in the spectrograph set-up.
After removing the vignetted parts of each pointing,
the effective area covered by the survey is
5.34 and 4.97\,deg$^2$, in W1 and W4 respectively.
To maximise the number of targets,
we used short slits as proposed in \citet{Scodeggio2009}.
As a result, we targeted $\sim45\%$ of available sources
in a single pass.
A description of the VIPERS data reduction
can be found in \citet{Garilli2012a}. At the end of
the pipeline, a validation process was carried out
by team members, who checked
the measured redshifts
and assigned a quality flag ($z_\mathrm{flag}$) to each of them.
Such a flag corresponds to the confidence level (CL) of
the measurement, according to the same scheme
adopted by previous surveys like
VVDS \citep{LeFevre2005} and zCOSMOS.
The sample we will use
includes galaxies with
$2 \leqslant z_\mathrm{flag} \leqslant 9$, corresponding to
95\% CL.
It comprises 34\,571 spectroscopic measurements
between $z=0.5$ and 0.9, i.e.~the redshift range of the present analysis.
We estimate the error in the $z_\mathrm{spec}$ measurements
from repeated observations. It is
$\sigma_z = 0.00047(1+z_\mathrm{spec})$,
corresponding to a velocity uncertainty of $\sim140\,\mathrm{km\,s}^{-1}$
\citep[][]{Guzzo2014}.
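The quoted velocity uncertainty follows directly from the redshift error: since $\sigma_v = c\,\sigma_z/(1+z)$, the normalisation 0.00047 maps to a constant velocity error at all redshifts. A minimal illustration (not part of the VIPERS pipeline):

```python
# Quick sanity check: a redshift error sigma_z = 0.00047 (1 + z)
# corresponds to a fixed rest-frame velocity uncertainty,
# sigma_v = c * sigma_z / (1 + z) = 0.00047 c.
C_KM_S = 299_792.458  # speed of light in km/s

def velocity_uncertainty(sigma_z_norm=0.00047):
    """Rest-frame velocity error for sigma_z = sigma_z_norm * (1 + z)."""
    return C_KM_S * sigma_z_norm

print(round(velocity_uncertainty()))  # ~141 km/s, consistent with the quoted ~140 km/s
```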
We provide each object with a statistical weight $w(i,z)$
to make this sample representative of all the photometric galaxies at
$i<22.5$ in the survey volume.
We estimate weights by considering three
selection functions: the target sampling rate (TSR),
the spectroscopic success rate (SSR), and the
colour sampling rate (CSR).
Further details about TSR, SSR, and CSR
are provided in \citet{Fritz2014}, \citet{Guzzo2014},
and \citet{Garilli2014}. The overall sampling rate,
i.e.~$\mathrm{TSR}\times\mathrm{SSR}\times\mathrm{CSR}$
is on average 35\%.
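The weighting scheme can be sketched as follows. The function below is purely illustrative (the actual TSR, SSR, and CSR are tabulated per object by the VIPERS team as functions of magnitude, redshift, and quadrant), but it shows how the three selection functions combine into a single inverse-probability weight:

```python
def statistical_weight(tsr, ssr, csr):
    """Illustrative weight w = 1 / (TSR * SSR * CSR): a galaxy observed
    with, e.g., all three rates equal to 0.7 stands in for ~2.9 galaxies
    of the parent photometric sample. (Hypothetical helper, not the
    VIPERS tabulated weights.)"""
    rate = tsr * ssr * csr
    if not 0.0 < rate <= 1.0:
        raise ValueError("sampling rates must be in (0, 1]")
    return 1.0 / rate

w = statistical_weight(0.7, 0.7, 0.7)  # inverse of an overall ~34% sampling rate
```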
\subsection{SED fitting estimates}
\label{Sed fitting}
We estimate several quantities,
in particular galaxy stellar masses and absolute magnitudes,
by means of SED fitting,
in a similar way to D13.
Through this technique, physical
properties of a given galaxy can be derived
from the template
(i.e., the synthetic SED) that
best reproduces its multi-band
photometry (after redshifting the template
to $z=z_\mathrm{spec}$ or $z_\mathrm{phot}$).
To this purpose, we use the
code \textit{Hyperzmass}, a modified version of
\textit{Hyperz} \citep{Bolzonella2000} developed by
\citet{Bolzonella2010}.
The software selects the best-fit template as the one that
minimises the $\chi^2$.
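The fitting step can be illustrated with a toy sketch (this is not the \textit{Hyperzmass} code; array names are ours): for each template, the scaling amplitude that minimises $\chi^2$ has a closed-form solution, so only the discrete template grid needs to be scanned. The amplitude carries the normalisation from which the stellar mass follows.

```python
import numpy as np

def best_fit_template(flux_obs, flux_err, templates):
    """Toy chi^2 template fit. For each template T, the amplitude A
    minimising chi2(A) = sum w (f - A T)^2, with w = 1/sigma^2, is
    A = sum(w f T) / sum(w T^2); we then pick the template with the
    smallest chi^2.
    flux_obs, flux_err : (n_bands,) observed fluxes and errors
    templates          : (n_templates, n_bands) model fluxes at the
                         galaxy redshift
    """
    w = 1.0 / flux_err**2
    amp = (templates @ (w * flux_obs)) / ((templates**2) @ w)
    resid = flux_obs - amp[:, None] * templates
    chi2 = (resid**2 * w).sum(axis=1)
    best = int(np.argmin(chi2))
    return best, amp[best], chi2[best]
```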
To build our library of galaxy templates we
start from the simple stellar populations
modelled by \citet[][hereafter BC03]{Bruzual2003}.
The BC03 model assumes a universal initial mass function (IMF)
and a single non-evolving metallicity ($Z$)
for the stars belonging to a given simple stellar population (SSP).
Many SSPs are combined and integrated in order to reproduce
a galaxy SED.
As in D13,
we choose SSPs with \citet{Chabrier2003} IMF,
having metallicity either
$Z=Z_\odot$ or $Z=0.2Z_\odot$
to sample the metallicity range observed
for galaxies at $z\sim0.8$ \citep{Zahid2011}.
We adopt only two values to limit
degeneracy with other parameters such as the age.
Synthetic galaxy SEDs are derived
by evolving the SSPs in agreement with a given
star formation history (SFH).
We assume eleven SFHs: one with a constant SFR,
and ten having an exponentially declining
profile, i.e.~$\mathrm{SFR}\propto\exp(-t/\tau)$
with values of $\tau$ ranging
between 0.1 and $30\,\mathrm{Gyr}$.
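The SFH grid can be written down explicitly. Note that only the range of $\tau$ (0.1--30 Gyr) is quoted above, so the logarithmic spacing in the sketch below is an assumption made for illustration:

```python
import numpy as np

def sfh_grid(n_tau=10, tau_min=0.1, tau_max=30.0):
    """Build one constant SFH plus n_tau exponentially declining SFHs,
    SFR(t) proportional to exp(-t/tau), with tau spaced logarithmically
    between tau_min and tau_max (in Gyr). The log spacing is an
    assumption; the text only quotes the range of tau."""
    taus = np.logspace(np.log10(tau_min), np.log10(tau_max), n_tau)
    # constant-SFR template first, then the declining ones
    sfhs = [lambda t: np.ones_like(np.asarray(t, dtype=float))]
    for tau in taus:
        sfhs.append(lambda t, tau=tau: np.exp(-np.asarray(t, dtype=float) / tau))
    return taus, sfhs

taus, sfhs = sfh_grid()  # 11 star formation histories in total
```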
The formation redshift of our galaxy templates
is not fixed, but the ages allowed in the fitting procedure
range from 0.09\,Gyr to the age of the Universe at
the redshift of the fitted galaxy.
Composite SFHs could be considered
by adding random bursts of star formation to
the exponential (or constant) SFR, as in
\citet{Kauffmann2003c}.
However, in D13 we
checked that replacing smooth SFHs with
more complex ones has a critical impact on the
stellar mass estimate (i.e., more than
0.3\,dex difference)
only for a small fraction ($<10$\%) of the VIPERS
galaxies, while for the majority of the sample the
change is less than $\sim0.1$\,dex
\citep[see also][]{Pozzetti2007}.
Similar conclusions are drawn by \citet{Mitchell2013},
who find that
the exponential decrease
is a reasonable approximation
of the true (i.e., composite) SFH of their simulated galaxies: their
SED fitting estimates show small scatter and no systematics
with respect to the stellar masses obtained from the
theoretical model
\citep[see also][whose results do not change significantly
when using either composite or ``delayed'' SFHs]{Ilbert2013}.
Attenuation by dust is modelled
by assuming either \citet{Calzetti2000} or Pr\'evot-Bouchet
\citep{Prevot1984,Bouchet1985} extinction law.
For both, we let the $V$-band attenuation
vary from $A_V=0$ (i.e., no dust) to 3\,mag, with steps of 0.1.
No prior is implemented to discriminate between the two
extinction laws:
for each galaxy the model is chosen that
minimises the $\chi^2$.
We exclude from our library those
templates having an unphysical SED,
according to observational evidence.
Galaxy models with $\mathrm{age}/\tau > 4$ and $A_V > 0.6$ are not
used in the fitting procedure, since they represent old galaxies
with an excessive amount of dust compared to what is observed
in the local Universe
\citep[cf][Fig.~3]{Brammer2009}.
We also rule out best-fit solutions
representing early-type
galaxies (ETGs) with too young ages, i.e.~models
with $\tau \leqslant 0.6\,\mathrm{Gyr}$
and redshift of formation $z_\mathrm{form}< 1$
\citep[evidence of high $z_\mathrm{form}$ of ETGs can be found e.g.~in][]{Fontana2004,
Thomas2010}. Any other combination of parameters within the ranges
mentioned above is allowed.
Considering all these parameters and their allowed ranges,
our SED fitting should provide us with
stellar mass estimates with
an uncertainty of $\lesssim0.3$\,dex, according to
\citet{Conroy2009b}.
Moreover, we emphasise that the lack of IR photometry
for a small part of the VIPERS sample (see Sect.~\ref{Photometry})
does not introduce significant bias, as already tested in
D13.
In addition to stellar mass,
we estimate absolute magnitudes
in several bands
from the same best-fit SED.
To minimise the model dependency,
we take the apparent magnitude
in the closest filter to
the rest-frame wavelength of interest,
and apply a k- and colour-correction
based on the SED shape.
In this way, the outcome is only weakly sensitive
to the chosen template, relying mainly
on the observations. Typical uncertainties of this
procedure, when applied to optical/NIR filters,
are about 0.05\,mag at $0.5<z<0.9$
\citep[][]{Fritz2014}.
\subsection{Galaxy density contrast}
\label{Density contrast}
To characterise the different environments in which galaxies
live (Sect.~\ref{Environment definition}) we rely on
the galaxy density contrast ($\delta$). This quantity is related to the local
concentration of galaxies (i.e.~the galaxy density field $\rho$) and the mean
galaxy density ($\bar{\rho}$) such that
$\delta=(\rho-\bar{\rho})/\bar{\rho}$.
Although $\rho$ is a point field
indirectly connected to matter density,
it is a good proxy of the
underlying matter distribution: through various smoothing schemes
(including the one described here) it is possible to recover the latter
from the former with a scale-independent bias factor
\citep{Amara2012,DiPorto2014}.
The procedure adopted here is thoroughly described in a
companion paper \citep{Cucciati2014a}.
To derive the local density of a given galaxy,
we count objects inside a filter centred on it.
Those objects that trace $\rho$ are part of
a ``volume-limited'' sample that includes
both spectroscopic
and photometric galaxies. The latter ones come from
the photometric parent catalogue, which contains CFHTLS
sources with the same $i$-band cut of VIPERS
(see Sect.~\ref{Photometry}).
To build such a sample, we select galaxies in W1 and W4
with $M_B < -20.4 -Qz$. The factor
$Q$ takes into account the evolution in redshift of the
characteristic galaxy luminosity, as determined by
$M_B^\star$ in the galaxy luminosity
functions
\citep[see more details e.g.~in][]{Moustakas2013}.
We set
$Q=1$ according to the zCOSMOS luminosity function
\citep[which encompasses $z\sim0.2$ to 0.9, see][]{Zucca2009}.
The volume-limited sample is complete up to $z=0.9$,
and traces the underlying cosmic structure
avoiding the strongly evolving
bias that a flux-limited sample would instead produce
\citep[cf][]{Amara2012}.
We will refer to this volume-limited sample
as the sample of
``tracers'' (to be distinguished from
the VIPERS sample for
which we will compute $\delta$).
Among those tracers, 14\,028 objects have a $z_\mathrm{spec}$
with $z_\mathrm{flag}\in[2,9]$ while more than $100\,000$ have
a $z_\mathrm{phot}$.
The large number of spectroscopic redshifts -- and
their accuracy -- is crucial to robustly determine the
density field in the 3-dimensional (redshift) volume:
generally, when using
photometric redshifts only, the reconstruction along the
line of sight is prevented by their larger photo-$z$ errors
\citep[e.g.][]{Cooper2005,Scoville2013}.
We will compute $\delta$ for galaxies beyond $z=0.51$,
to avoid the steep decrease of $N(z)$
at $z \lesssim 0.5$ \citep[see][Fig.~13]{Guzzo2014}
that could affect our density estimates.
Thanks to the photometric redshifts, there is a sufficient number of
(photometric) tracers
also in the gaps produced by the footprint of VIMOS and
in the missing quadrants.\footnote{
Nevertheless, \citet{Cucciati2014a} demonstrate that
the major source of uncertainties in the procedure
is not the presence of gaps
but the incompleteness of the spectroscopic sample
(i.e.~the $\sim35$\% sampling rate). Besides that, we emphasise that
the $z_\mathrm{phot}$ sample is crucial to avoid
any environmental bias caused by the slit assignment.
In fact, the VIMOS sampling rate could be slightly smaller
in crowded regions, because of the minimum distance required
between two nearby slits. }
In the absence of a secure spectroscopic measurement,
we apply a modified version of the method described
by \citet{Kovac2010a}. The key idea of the method
is that galaxy clustering along
the line of sight,
recovered by using spectroscopic redshifts,
provides information about the radial position
of a photometric object, i.e.~it is likely to lie
where the clustering is higher.
Thus, to each photometric tracer
we assign a distribution of $z_\mathrm{peak}$ values,
together with an ensemble of
statistical weights.
For each value of $z_\mathrm{peak}$,
the associated weight $w_\mathrm{peak}$ represents
the relative probability for the object to be
at that given redshift
(the sum of weights is normalised to unity).
In other words,
the $z_\mathrm{peak}$ values are the ``most likely''
radial positions of a photometric tracer.
In detail, to determine $z_\mathrm{peak}$ and $w_\mathrm{peak}$,
we proceed as follows.
We start from the probability distribution function (PDF) of
the measured
$z_\mathrm{phot}$, assumed to be
a Gaussian
with rms equal to $\sigma_\mathrm{zphot}$.
We also determine $N(z)$, that is
the galaxy distribution along the line of sight
of the target. To do that, we take
all the objects of the spectroscopic sample lying
inside a cylinder with
$7.1\,h_{70}^{-1}\,\mathrm{Mpc}$ comoving radius\footnote{
This value corresponds to a radius equal to
$5\,\mathrm{Mpc}$ if one assumes
$H_0=100\,\mathrm{km\,s^{-1}\,Mpc^{-1}}$ \citep[as in][]{Cucciati2014a}
}
and half-depth of $\pm3\sigma_\mathrm{zphot}$; the cylinder is
centred on the coordinates (RA, Dec, $z_\mathrm{phot}$) of the
considered galaxy. The
desired $N(z)$ distribution is obtained from those objects,
using their $z_\mathrm{spec}$ values (without errors)
in bins of $\Delta z =0.003$.
Then, we multiply the PDF of $z_\mathrm{phot}$
by $N(z)$ and renormalise the resulting function.
In this way we obtain a new PDF whose peaks
represent the desired set of
$z_\mathrm{peak}$ values. Their respective $w_\mathrm{peak}$
are provided according to the relative height of each peak
(the sum of them being equal to one).
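The procedure can be condensed into a short sketch, simplified with respect to the actual pipeline: a Gaussian photo-$z$ PDF sampled on a redshift grid is multiplied by the binned $N(z)$, renormalised, and its local maxima are returned as the $z_\mathrm{peak}$ values with weights proportional to peak height.

```python
import numpy as np

def peak_redshifts(z_phot, sigma, z_grid, n_of_z):
    """Toy z_peak / w_peak estimator. Multiply the Gaussian photo-z PDF
    by the local spectroscopic N(z) (both sampled on z_grid), then take
    the local maxima of the reweighted PDF as z_peak, with weights
    w_peak proportional to the peak heights (weights sum to 1)."""
    pdf = np.exp(-0.5 * ((z_grid - z_phot) / sigma) ** 2)
    pdf = pdf * n_of_z
    if pdf.sum() == 0.0:
        # no spectroscopic neighbours: fall back to the photo-z itself
        return np.array([z_phot]), np.array([1.0])
    pdf /= pdf.sum()
    # interior local maxima of the reweighted PDF
    interior = (pdf[1:-1] > pdf[:-2]) & (pdf[1:-1] >= pdf[2:])
    idx = np.flatnonzero(interior) + 1
    if idx.size == 0:
        idx = np.array([int(np.argmax(pdf))])
    w = pdf[idx] / pdf[idx].sum()
    return z_grid[idx], w
```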
Thus, we compute the local density $\rho$
for each of the 33\,952 galaxies of the VIPERS
(flux-limited) catalogue from $z=0.51$ to 0.9.
Given the galaxy coordinates
$r_g=(\mathrm{RA}_g,\mathrm{Dec}_g)$
and redshift $z_g$,
$\rho(r_g,R_{\mathrm{5NN}})$
is equal to the number of tracers
inside a cylindrical filter centred on $r_g$;
the cylinder has half-depth
$\Delta v = \pm 1000\,\mathrm{km}\,\mathrm{s}^{-1}$
and radius equal to $R_{\mathrm{5NN}}$, i.e.~the projected distance of
the fifth closest tracer
(or fifth nearest neighbour, hereafter 5NN).
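A toy version of the adaptive 5NN estimator is given below, assuming tracer positions have already been converted to transverse comoving coordinates (the variable names and the coordinate handling are illustrative, and the weighting of photometric tracers is omitted):

```python
import numpy as np

C_KM_S = 299_792.458  # speed of light in km/s

def density_5nn(gal_xy, gal_z, tracer_xy, tracer_z,
                n_neighbours=5, dv_max=1000.0):
    """Toy 5NN density: keep tracers within +/- dv_max km/s of the
    galaxy along the line of sight, take the projected distance of the
    n-th nearest one as the cylinder radius R_5NN, and return
    (rho, R_5NN), where rho counts the tracers inside the cylinder.
    gal_xy, tracer_xy : transverse comoving coordinates in Mpc
    gal_z, tracer_z   : redshifts (the galaxy itself is assumed not to
                        be in the tracer list)"""
    dv = C_KM_S * (tracer_z - gal_z) / (1.0 + gal_z)
    in_depth = np.abs(dv) <= dv_max
    d_proj = np.hypot(*(tracer_xy[in_depth] - gal_xy).T)
    d_sorted = np.sort(d_proj)
    if d_sorted.size < n_neighbours:
        raise ValueError("not enough tracers inside the cylinder depth")
    r5 = d_sorted[n_neighbours - 1]        # 5th nearest neighbour distance
    rho = int(np.count_nonzero(d_proj <= r5))
    return rho, r5
```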
It should be noticed that
such an estimate depends on the absolute magnitude of
the tracers. By using fainter tracers
(e.g., limited at $M_B<-19.5-z$) the object identified as
5NN would change and $R_\mathrm{5NN}$ would be generally smaller.
However, although the absolute value of $\delta$
varies as a function of tracer luminosity,
we are interested in a relative classification
that divides galaxies in
under- and over-densities (see Sect.~\ref{Environment definition}).
Therefore, a different cut in $M_B$
would not alter our findings, as we verified that
the galaxy ranking in density contrast
would be on average preserved.
On the other hand, fainter tracers
would be incomplete already at
lower redshifts (for example by assuming $M_B<-19.5-z$
we would restrict our analysis at $z<0.7$).
The density contrast is defined
on scales that
differ
from one galaxy to another. Namely, in our reconstruction $R_\mathrm{5NN}$
ranges from $\sim\!2.8$ to $8.6\,h_{70}^{-1}\,\mathrm{Mpc}$
while moving from the densest regions toward galaxies
with the lowest $\rho$.
Probing a non-uniform scale
does not impair our analysis, because we are interested in a
relative classification of different environments
(see Sect.~\ref{Environment definition}).
The 5NN estimator leads to the desired ranking.
We adopt the 5NN because
it is an adaptive estimator that efficiently samples
a broad range of densities.
Using, instead, a fixed radius of
$\sim\!3\,h_{70}^{-1}\,\mathrm{Mpc}$
(i.e., comparable to the 5NN distance
in the highest densities),
the reconstruction
would have been highly affected by shot noise
in the VIPERS regions with medium/low density.
In those regions, the number of
tracers inside a filter with small fixed aperture
is very low: considering e.g.~that at $z\simeq0.7$ the mean surface density of
tracers is about 85 objects per $\mathrm{deg}^2$,
only three tracers are expected on average
within a cylinder having
$R\simeq3\,h_{70}^{-1}\,\mathrm{Mpc}$.
The use of cylinders, instead of e.g.~spherical filters,
minimises the impact of redshift-space
distortions \citep{Cooper2005}.
The depth along the line of sight ($2\,000\,\mathrm{km\,s}^{-1}$)
is optimal not only for spectroscopic redshifts, but also for
photometric ones after multiplying their PDF by $N(z)$
as described above.
The reconstruction of the density field through the procedure
described here
is extensively tested in \citet{Cucciati2014a},
but using spherical filters with
$R_\mathrm{fixed}=7.1$ and $11.4\,h_{70}^{-1}\,\mathrm{Mpc}$
(5 and 8\,Mpc if
$H_0=100\,\mathrm{km\,s^{-1}\,Mpc^{-1}}$).
We verified
that the outcomes do not change
when replacing spheres with cylinders (Cucciati et al.~in prep.).
For a detailed comparison among different filters
(spheres or cylinders, fixed or adaptive apertures, etc.)
we refer to \citet{Kovac2010a} and \citet{Muldrew2012}.
The local density contrast of a given galaxy
is
\begin{equation}
\delta(r_g,z_g,R_{\mathrm{5NN}})=
\frac{\rho (r_g,z_g,R_{\mathrm{5NN}}) -
\bar{\rho}(z_g) }
{\bar{\rho}(z_g) },
\label{delta}
\end{equation}
where we estimate $\bar{\rho}(z_g)$
as a function of
redshift by smoothing the spectroscopic
distribution $N(z)$ with the $V_\mathrm{max}$ statistical
approach, in a similar way to \citet{Kovac2010a}.
For galaxies near the survey edges we correct $\delta$
as done in \citet{Cucciati2006}, i.e.~by rescaling the measured
density by the fraction of the cylinder volume within the
survey borders. We notice however that the scatter
in the density field reconstruction is mainly due
to the survey strategy (e.g.,~the sampling rate).
The impact of border effects is much smaller,
and becomes significant only when most of
the cylinder volume ($>50\%$) is outside the survey area.
When it happens,
we prefer to discard the object from the sample.
We also remove galaxies for which the cylinder
is inside the survey borders, but less than 60\% is included
in the spectroscopically
observed volume
(e.g., when more than 40\% of it
falls in gaps or in a missing VIMOS quadrant).
In that case the density contrast should rely mostly on photometric neighbours,
and our estimate would be less accurate.
With these two constraints, we excluded about 9\% of
the objects (almost all located on the edges of the survey).
\section{Environment and galaxy type classification}
\label{Classifications}
A fundamental step in this work is to identify the
galaxies residing in two opposite environments,
i.e.~regions
of low density (LD) and high density (HD).
Broadly speaking,
the former ones are regions
without a pervasive presence of
cosmic structure,
whereas the latter are
associated with the highest peaks of the matter distribution.
Ideally, one would like to link these definitions to the total matter density.
However, since the dark matter component is not directly
observed, any classification has to rely on
some proxy of the overall density field. Our classification
relies on the galaxy density contrast evaluated in Sect.~\ref{Density contrast}.
In addition to this, we divide galaxies by type,
to work in each environment with
active and passive objects separately.
\subsection{Galaxy environments}
\label{Environment definition}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{D1D4lim.eps}
\caption{\textit{Upper panel:} galaxy density contrast
of a mass-limited sample having
$\log(\mathcal{M/M_\odot})>10.86$.
Galaxies from the W1 field are marked with plus signs,
from W4 with open circles.
For each redshift bin, galaxies below the 25th (above the 75th)
percentile of the $1+\delta$ distribution
are enclosed by orange (violet) rectangles
(dotted lines for W1, dashed lines for W4).
The two thresholds that define LD and HD,
as discussed in Sect.~\ref{Environment definition},
are marked by an arrow on the left side of the plot.
\textit{Lower panel:} combining the two fields together,
histograms represent the redshift distribution
of the LD and HD sub-sample, in orange and violet respectively.
}
\label{d1d4lim}
\end{figure}
In our analysis, we discriminate
LD from HD environments
by means of the
local density contrast.
We include in the LD (HD) sample galaxies
that have a density contrast smaller (larger) than
a certain value of $\delta$. These thresholds
can be fixed according to some physical prescription
\citep[e.g.~to match detections of galaxy groups or clusters, as in][]{Kovac2010a},
or determined in a relative way, e.g.~by considering the extreme tails
of the $1+\delta$ distribution.
Following the latter approach,
\citet[][zCOSMOS 10k sample]{Bolzonella2010}
assume as reference for low and high densities
the 25th and 75th percentile (i.e., first and third quartile)
of the $1+\delta$ distribution,
respectively. The authors compute the distribution
in each of their redshift bins, independently;
however, we notice that the thresholds they
estimate in bins between $z=1$ and 0.5 are almost constant
\citep[see also][Fig.~9]{Peng2010}.
Similarly to \citet{Bolzonella2010},
we compute the distribution of $1+\delta$
(distinctly in W1 and W4)
within three redshift bins:
$0.51<z\leqslant0.65$, $0.65<z\leqslant0.8$, $0.8<z\leqslant0.9$.
We choose this partition to probe a volume sufficiently large
in each bin ($\gtrsim7\times10^{6}\,h_{70}^{-3}\,\mathrm{Mpc}^{3}$).
Moreover, the resulting median redshifts
($\langle z \rangle$, see Table~\ref{tabmlim})
correspond to nearly equally-spaced time steps (0.6--0.7\,Gyr).
We will adopt the same redshift bins
to estimate the
mass functions in Sect.~\ref{Gsmf results}.
Here we take into account only galaxies
with $\log(\mathcal{M/M_\odot})>10.86$,
to work with a complete sample
in all the $z$-bins. Indeed,
such a value corresponds to the stellar mass
limit of the passive population at $z\simeq0.9$
(see Sect.~\ref{Gsmf estimate} and
Table \ref{tabmlim}).
The resulting 25th and 75th percentiles
($\delta_\mathrm{LD}$ and $\delta_\mathrm{HD}$)
vary among the three $z$-bins and the two fields
by less than $\sim20\%$,
namely
$\delta_\mathrm{LD}$ assumes values
between 0.55 and
0.79, while
$3.84<\delta_\mathrm{HD}<5.87$.
These changes do not represent a monotonic
increase as a function of redshift,
but rather
random variations between one $z$-bin and another,
and between one field and the other (see Fig.~\ref{d1d4lim}).
In Appendix \ref{Appendix} we confirm, by means of
cosmological simulations, that
the small fluctuations of the percentile thresholds do
not reflect an evolution in $z$: they
are mainly due to sample variance,
rather than to the growth of structure over cosmic time.
Moreover, we verified that there is no bias introduced by the VIPERS
selection effect.
Therefore, we can safely use constant
thresholds to classify LD and HD environments in VIPERS:
we consider
galaxies with $1+\delta<1.7$
as belonging to LD, and
galaxies with $1+\delta>5$ to HD.
These limits, applied from $z=0.9$ to 0.51,
are the mean of 25th and 75th
percentiles computed above
(see Fig.~\ref{d1d4lim}).
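The relative classification can be sketched as follows (illustrative only; in practice the percentiles are computed separately per field and per $z$-bin and then averaged into the fixed cuts $1+\delta<1.7$ and $1+\delta>5$):

```python
import numpy as np

def environment_labels(delta, low_pct=25.0, high_pct=75.0):
    """Relative environment classification: tag galaxies below the
    low_pct percentile of (1 + delta) as 'LD', above the high_pct
    percentile as 'HD', and the rest as intermediate. Returns the
    labels together with the two percentile thresholds."""
    one_plus = 1.0 + np.asarray(delta, dtype=float)
    lo, hi = np.percentile(one_plus, [low_pct, high_pct])
    labels = np.where(one_plus < lo, "LD",
                      np.where(one_plus > hi, "HD", "mid"))
    return labels, lo, hi
```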
Despite the name we chose for the sake of clarity,
we note that the HD regions in VIPERS
have actually intermediate
densities in absolute terms.
Very concentrated structures, such as massive galaxy clusters,
typically have $1+\delta\simeq15$--20 \citep{Kovac2010a}
and should approximately match the upper 5\% of environmental
density. However the HD environment we defined,
although on average less extreme,
is certainly interesting to study, since it has evolved more recently
than typical clusters \citep[][]{Smith2005,Fritz2005}.
As stated above, with the 5NN we tend to probe 3--8$\,h_{70}^{-1}\,
\mathrm{Mpc}$.
Hence, if a certain environmental mechanism were efficient
at smaller scale, its trace could be ``diluted'',
or even vanish, in our analysis.
However, this is not the case, as we will show in the following.
Environmental dependencies at large scales
have already been measured e.g.~in
\citet{Cucciati2006} \citep[see also][]{Bassett2013,Hearin2015}.
These findings
can be due to physical mechanisms operating at distances larger
than the halo virial radius
\citep[e.g.][]{LuT2012}. Another possibility is that a connection
between large-scale environment and halo properties
preserves the small-scale signal
even when working with lower resolutions.
Supporting the latter argument,
\citet{Haas2012} demonstrated that
estimators based on a number of neighbours $2\leqslant N \leqslant10$
correlate equally well with host halo mass.
Further details about the estimate of the density field
are given in Appendix \ref{Appendix}. Among the tests
described there,
we also evaluate the
purity and completeness of our LD and HD samples.
By working on mock galaxy catalogues, we show that
our method is not hindered
by the VIPERS selection function: more than
70\% of LD/HD galaxies are expected to be assigned
to the correct environment, while a small tail
of objects ($<8\%$) end up in the opposite one
as interlopers.
\subsection{Classification of galaxy types}
\label{Galaxy type classification}
\begin{figure}
\includegraphics[width=0.99\columnwidth]{NrKdiag2.eps}
\caption{The $\mathrm{NUV}rK$ diagram of the VIPERS galaxies
between $z=0.51$ and 0.9, in the LD
environment (top panel) and in HD (bottom panel).
According to our classification,
passive galaxies lie above the solid line (defined in Eq.~\ref{nrkcut}),
while the dashed line (Eq.~\ref{nrkcut2}) divides galaxies with
intense star formation (bottom part of the diagram)
from those having low-sSFR (see text).
In the remainder of the paper, the two classes will be treated as
a whole sample of active objects.
Their number (and the number of passive galaxies) in each environment
is shown in the top-left corner of the plots.
Each colour-coded pixel represents the median sSFR
of the galaxies inside it,
estimated by means of SED fitting.
\citet{Arnouts2013} find that in this diagram the sSFR increases
as moving along the direction
$[(r-K),(\mathrm{NUV}-r)]=[(r-K)_0+\sin(54\degr),
(\mathrm{NUV}-r)_0-\cos(54\degr)]$,
identified by the bottom-right vector $NrK_\mathrm{sSFR}$
(note that the different scales of the $x$- and $y$-axes distort the angles).
}
\label{nrkdiag}
\end{figure}
In order to separate active and passive galaxies, we apply the
method described in \citet{Arnouts2013}, based on the
$(\mathrm{NUV}-r)$ vs $(r-K)$ diagram ($\mathrm{NUV}rK$ in the following).
With this method, recent star formation
on a scale of $10^6$--$10^8$\,yr
is traced by the $(\mathrm{NUV}-r)$ colour \citep[][]{Salim2005},
while $(r-K)$ is sensitive to
the inter-stellar medium (ISM) absorption.
The absolute magnitudes used here
have been estimated as detailed in Sect.~\ref{Sed fitting},
through the filters $\mathrm{NUV}$, $r$, and $Ks$
of GALEX, MegaCam, and WIRCam respectively.
It should be noticed that our redshift range ($0.51<z\leqslant0.9$) is
fully within the interval $0.2 < z < 1.3$ used by
\citeauthor{Arnouts2013} in their analysis.
Their diagram is similar to the $(U-V)$ vs $(V-J)$ plane proposed by
\citet{Williams2009}, but by sampling more extreme wavelengths
it results in a sharper separation between quiescent and
star-forming galaxies \citep[cf~also][]{Ilbert2013}.
Moreover,
the position of a galaxy in the $\mathrm{NUV}rK$ plane
correlates well with its infrared excess (IRX, i.e.~the
$L_\mathrm{IR}/L_\mathrm{NUV}$ ratio)
and specific SFR
($\mathrm{sSFR}\equiv\mathrm{SFR}/\mathcal{M}$),
at least when $\log(\mathcal{M/M_\odot})\geqslant 9.3$
\citep[for further details, see][]{Arnouts2013}.
It should also be emphasised that
with classification methods based on a single-colour bimodality,
the passive sample is partially contaminated by star-forming
galaxies reddened by dust, as shown e.g.~by \citet{Moresco2013}.
With the $\mathrm{NUV}rK$, the simultaneous use of two colours disentangles
those different populations.
As illustrated in Fig.~\ref{nrkdiag} (solid line), we regard a galaxy
as passive when
\begin{equation}
\begin{array}{l}
(\mathrm{NUV} - r) > 3.75 \quad \mathrm{and}\\
(\mathrm{NUV} - r) > 1.37 (r - K) + 3.2 \quad \mathrm{and}\\
(r - K) < 1.3 \, .
\end{array}
\label{nrkcut}
\end{equation}
Active galaxies are located in
the complementary region of the diagram (i.e.~below the solid line
in Fig.~\ref{nrkdiag}).
With respect to the definition of \citet{Arnouts2013}
we added a further cut, namely $(r-K)<1.3$.
In this way we take into account the
geometrical effect they observe after including the
dust prescription of \citet{Chevallard2013} in their
analysis. According to that study, the reddest $(r-K)$ colours
can be reached only by edge-on disc galaxies with
a flat attenuation curve.
We also verified through a set
of BC03 templates that passive
galaxies ($\mathrm{age}/\tau>4$)
have $(r-K)<1.15$.
This result, considering the typical
uncertainties in magnitude estimates,
justifies the third
condition in Eq.~(\ref{nrkcut}). With a similar argument,
\citet{Whitaker2011} modify the passive \textit{locus}
of the \citet{Williams2009} diagram.
In the $\mathrm{NUV}rK$, sSFR increases as galaxies move along a
preferential direction,
identified in Fig.~\ref{nrkdiag} by the vector $NrK_\mathrm{sSFR}$.
Therefore, lines orthogonal to that direction
work as a cut in sSFR: for instance,
the diagonal boundary we defined for the passive \textit{locus}
roughly corresponds to
$\mathrm{sSFR}<10^{-11}\,\mathrm{yr}^{-1}$.
We prefer to use $\mathrm{NUV}rK$ instead of selecting
directly through the sSFR distribution,
since the SED fitting estimates of SFR are
generally less reliable than colours \citep[][]{Conroy2009b},
especially when far-IR data are not available.
Nevertheless, it is worth noticing that the sSFR values
we obtained from \textit{Hyperzmass} are on average in good
agreement with the $\mathrm{NUV}rK$ classification,
providing an additional confirmation
of its robustness (see Fig.~\ref{nrkdiag}).
Among the galaxies we have classified as $\mathrm{NUV}rK$-passive,
about 95\% have a (SED fitting derived) sSFR lower than
$10^{-11}\,\mathrm{yr}^{-1}$ \citep[which is the typical cut used
e.g.~in][]{Pozzetti2010}.
We also tested another boundary in the colour-colour space
(the dashed line in Fig.~\ref{nrkdiag}), namely
\begin{equation}
\begin{array}{l}
(\mathrm{NUV} - r) > 3.15 \quad \mathrm{and} \\
(\mathrm{NUV}- r) > 1.37 (r - K) + 2.6 \quad \mathrm{and} \\
(r - K) < 1.3 \, .
\end{array}
\label{nrkcut2}
\end{equation}
In this way we can delimit a region in the
$\mathrm{NUV}rK$ plane likely corresponding to the ``green valley'':
galaxies in between Equations (\ref{nrkcut}) and (\ref{nrkcut2})
are probably shutting off their star formation,
having
$\mathrm{sSFR} \simeq 10^{-10}\,\mathrm{yr}^{-1}$
according to their SED fitting estimates
\citep[Fig.~\ref{nrkdiag}, but see also][]{Arnouts2013}.
We include these galaxies in the active sample, although
they are expected to be in transition towards the passive
\textit{locus}. We verified that removing them from the
active sample does not modify our conclusions. The typical features of these
``intermediate'' galaxies will be explored in a future work.
\section{Stellar mass functions in different environments}
\label{Gsmf results}
We now derive the stellar mass function of VIPERS galaxies within the
environments described in Sect.~\ref{Environment definition},
also separating active and passive subsamples.
The chosen redshift bins are those already adopted there (also reported in Table \ref{tabmlim}).
We describe our results and compare them to what has been found by previous
surveys.
\subsection{Methods}
\label{Gsmf estimate}
First, we determine the threshold $\mathcal{M}_\mathrm{lim}$
above which our data can be considered complete in stellar mass.
As explained below, $\mathcal{M}_\mathrm{lim}$
depends on the flux limit of VIPERS ($i_\mathrm{lim}$), redshift,
and galaxy type.
Such a limit excludes stellar mass bins with large fractions of undetected objects.
The estimate of $\mathcal{M}_\mathrm{lim}$ is complicated
by the wide range of $\mathcal{M}/L$.
To estimate such a limit
we apply the technique of \citet{Pozzetti2010}, which
takes into account typical $\mathcal{M}/L$
of the faintest observed galaxies
(see also the discussion in D13, Sect.~3.1).
We keep the active sample separate
from the passive one,
since $\mathcal{M}/L$ depends on galaxy type.
For each population we select
the 20\% faintest objects
inside each redshift bin.
We rescale their stellar masses at the
limiting magnitude:
$\log(\mathcal{M}(i\!=\!i_\mathrm{lim})/\mathcal{M_\odot}) \equiv
\log(\mathcal{M}/\mathcal{M_\odot})+0.4(i-i_\mathrm{lim})$.
For the active and passive sample respectively,
we define $\mathcal{M}_\mathrm{lim}^\mathrm{act}$ and
$\mathcal{M}_\mathrm{lim}^\mathrm{pass}$
to be equal to the 98th and 90th percentile of the corresponding
$\mathcal{M}(i\!=\!i_\mathrm{lim})$ distributions.
We choose a higher percentile level for active galaxies
to take into account the larger scatter they have in $\mathcal{M}/L$.
Results are reported in Table \ref{tabmlim}.
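The completeness estimate described above can be sketched as follows. This is a minimal illustration of the \citet{Pozzetti2010} technique; the function interface and variable names are our own:

```python
import numpy as np

def mass_completeness(log_mass, i_mag, i_lim=22.5, faint_frac=0.20, pct=90.0):
    """Rescale the masses of the faintest galaxies to the survey flux
    limit and return the requested percentile of the distribution.

    log_mass : log10(M/Msun) of the galaxies in a redshift bin
    i_mag    : their apparent i-band magnitudes
    pct      : 90 for the passive sample, 98 for the active one
    """
    log_mass = np.asarray(log_mass, dtype=float)
    i_mag = np.asarray(i_mag, dtype=float)
    # select the faint_frac faintest objects of the (sub)sample
    faint = i_mag >= np.quantile(i_mag, 1.0 - faint_frac)
    # log M(i = i_lim) = log M + 0.4 (i - i_lim)
    rescaled = log_mass[faint] + 0.4 * (i_mag[faint] - i_lim)
    return np.percentile(rescaled, pct)
```

Applied separately to the active and passive subsamples in each redshift bin, this yields $\mathcal{M}_\mathrm{lim}^\mathrm{act}$ and $\mathcal{M}_\mathrm{lim}^\mathrm{pass}$.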
The increase of the limiting mass toward higher redshifts
is due to dimming,
while $\mathcal{M}_\mathrm{lim}^\mathrm{act}$ is always smaller
than $\mathcal{M}_\mathrm{lim}^\mathrm{pass}$
because passive SEDs have on average larger $\mathcal{M}/L$.
For the total GSMF we will use
$\mathcal{M}_\mathrm{lim}^\mathrm{pass}$
as a conservative threshold;
a direct estimate by applying the technique of \citet{Pozzetti2010}
to the whole sample would result in lower
values by about 0.2--0.3 dex,
because of the mixing of galaxy types
\citep[cf D13;][]{Moustakas2013}.
We then estimate the stellar mass functions
by means of two methods, namely the $1/V_\mathrm{max}$ method
\citep{Schmidt1968} and the one devised by
\citet[][hereafter STY]{Sandage1979}.
The former is non-parametric, whereas the latter
assumes the GSMF to be modelled by the
\citet[][]{Schechter1976} function
\begin{equation}
\Phi(\mathcal{M})\mathrm{d}\mathcal{M} =
\Phi_\star \left( \frac{\mathcal{M}}{\mathcal{M}_\star} \right)^{\alpha}
\exp\left( - \frac{\mathcal{M}}{\mathcal{M}_\star} \right)
\frac{\mathrm{d}\mathcal{M}}{\mathcal{M}_\star} \: .
\label{schfun}
\end{equation}
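For reference, expressed per logarithmic mass interval (the form in which GSMFs are usually plotted), Eq.~(\ref{schfun}) becomes $\Phi(\log\mathcal{M}) = \ln(10)\,\Phi_\star\,x^{\alpha+1}e^{-x}$ with $x=\mathcal{M/M_\star}$. A small illustrative sketch (naming is ours):

```python
import numpy as np

def schechter_per_dex(log_mass, log_m_star, phi_star, alpha):
    """Schechter (1976) function of Eq. (schfun), per dex:
    Phi(logM) = ln(10) * Phi_star * x^(alpha+1) * exp(-x),  x = M/M_star."""
    x = 10.0 ** (np.asarray(log_mass, dtype=float) - log_m_star)
    return np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)
```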
%
Both of them are
implemented in the software package ALF \citep{Ilbert2005}.
The $1/V_\mathrm{max}$ method
gives the comoving galaxy density
in a certain stellar mass bin (e.g., between $\mathcal{M}$
and $\mathcal{M}+\mathrm{d}\mathcal{M}$):
\begin{equation}
\Phi(\mathcal{M})\mathrm{d}\mathcal{M}=
\sum_{j=1}^N \frac{w_j}{V_{\mathrm{max}\,j}} \;,
\end{equation}
where $V_{\mathrm{max}\,j}$ is the comoving
volume in which the $j$-th galaxy (out of the $N$ detected in the
given bin) would be observable, and $w_j$ is
the statistical weight described in Sect.~\ref{Dataspec}.
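Schematically, the weighted sum above amounts to the following (an illustrative sketch; in practice the computation is performed within the ALF package):

```python
import numpy as np

def vmax_estimator(log_mass, weight, vmax, bin_edges):
    """1/Vmax estimate of the GSMF: in each log-mass bin, sum the
    statistical weights w_j divided by the visibility volumes V_max,j."""
    log_mass = np.asarray(log_mass, dtype=float)
    contrib = np.asarray(weight, dtype=float) / np.asarray(vmax, dtype=float)
    phi = np.zeros(len(bin_edges) - 1)
    for j, (lo, hi) in enumerate(zip(bin_edges[:-1], bin_edges[1:])):
        phi[j] = contrib[(log_mass >= lo) & (log_mass < hi)].sum()
    return phi  # comoving number density per bin [Mpc^-3]
```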
Usually, to measure $V_\mathrm{max}$
one needs to know
the sky coverage of the survey, and
the minimum and maximum redshifts
at which the object drops out of the magnitude range of detection.
However,
considering the whole surveyed area is not formally correct
when dealing with HD/LD galaxies -- as well as galaxies in
clusters or groups -- because those objects have no chance
(by definition) to be observed outside their environment.
In other words, we need to reconstruct
the comoving volumes occupied by the HD/LD regions
and take them into
account, instead of
the total VIPERS volume, to estimate the $V_\mathrm{max}$ values.
This new approach is described in detail in Appendix \ref{Voronoi}.
It allows us to properly normalise the stellar mass functions
in Fig.~\ref{mfcfvmax}.
In the same Appendix we also describe how we estimated the
uncertainty due to cosmic variance, by means of galaxy mocks.
We include this uncertainty
in the error budget of the total GSMFs, along with Poisson noise,
while for the active and passive samples
we compute errors assuming Poisson statistics only.
In plotting each GSMF, the $1/V_\mathrm{max}$ points are centred at
the median stellar mass of their bin.
We evaluate the error on this position,
i.e.~the error bar on the $x$-axis,
by considering 100 Monte Carlo realisations of the sample,
in which the uncertainty of $\log(\mathcal{M/M_\odot})$
is randomly drawn from a
Gaussian with a width of 0.2\,dex. After binning those samples,
the median stellar mass
within each bin shows a scatter on average
smaller than 0.05\,dex, which is negligible in our analysis.
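This Monte Carlo test can be reproduced schematically as follows (an illustrative sketch with our own naming; it assumes the bins remain well populated after perturbation):

```python
import numpy as np

def median_mass_scatter(log_mass, bin_edges, n_mc=100, sigma=0.2, seed=42):
    """Perturb each log-mass with a Gaussian of width `sigma` dex,
    re-bin, and return the scatter of the per-bin median masses."""
    log_mass = np.asarray(log_mass, dtype=float)
    rng = np.random.default_rng(seed)
    medians = np.empty((n_mc, len(bin_edges) - 1))
    for i in range(n_mc):
        pert = log_mass + rng.normal(0.0, sigma, size=log_mass.size)
        for j, (lo, hi) in enumerate(zip(bin_edges[:-1], bin_edges[1:])):
            sel = pert[(pert >= lo) & (pert < hi)]
            medians[i, j] = np.median(sel) if sel.size else np.nan
    return np.nanstd(medians, axis=0)
```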
\begin{table}[h]
\caption{Stellar mass completeness: thresholds
for active and passive galaxies in the redshift bins adopted in this work.
These limits are valid both in LD and HD regions;
$\mathcal{M}_\mathrm{lim}^\mathrm{pass}$ is used also for the whole
galaxy sample. In addition, the median redshift of each bin is
reported in the second column.}
\label{tabmlim}
\setlength{\extrarowheight}{1ex}
\centering
\begin{tabular}{lccc}
\hline \hline
redshift range & $\langle z \rangle$
& $\log(\mathcal{M_\mathrm{lim}^\mathrm{act}}/\mathcal{M_\odot})$ &
$\log(\mathcal{M_\mathrm{lim}^\mathrm{pass}}/\mathcal{M_\odot})$ \\ [1ex]
\hline
$0.51<z \leqslant0.65$ & 0.60 & 10.18 & 10.39 \\
$0.65<z \leqslant0.8$ & 0.72 & 10.47 & 10.65 \\
$0.8<z \leqslant0.9$ & 0.84 & 10.66 & 10.86 \\ [1ex]
\hline
\end{tabular}
\end{table}
The STY method determines the
parameters $\alpha$ and $\mathcal{M}_\star$
of Eq.~(\ref{schfun})
through a maximum-likelihood approach.
The associated uncertainties come from the confidence
ellipsoid at 1$\sigma$ level.
In the highest redshift bin, i.e.~$0.8<z<0.9$, we are limited
to $\log\mathcal{M/M_\odot}\gtrsim10.7$ and therefore
we prefer to keep $\alpha$ fixed to the value found in the
previous $z$-bin.
The third parameter ($\Phi_\star$) is computed independently,
to recover the galaxy number density after integrating the
Schechter function \citep[see][]{Efstathiou1988,Ilbert2005}.
Also in this case we consider the comoving volumes occupied by the
two environments (Appendix \ref{Voronoi}).
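Conceptually, the normalisation step fixes $\Phi_\star$ so that the integral of the Schechter fit above the completeness limit reproduces the observed comoving number density. A sketch of this idea (not the ALF implementation; naming is ours):

```python
import numpy as np

def phi_star_from_density(n_gal, volume, m_star, alpha, m_lim):
    """Choose Phi_star such that the integral of Eq. (schfun) above
    m_lim matches the observed comoving density n_gal / volume."""
    # integral of x^alpha exp(-x) dx from m_lim/m_star up to ~100 M_star,
    # evaluated with a trapezoidal rule on a log-spaced grid
    x = np.logspace(np.log10(m_lim / m_star), 2.0, 4096)
    y = x ** alpha * np.exp(-x)
    unit_integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return (n_gal / volume) / unit_integral
```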
The STY estimates, along with their uncertainties,
are listed in Table \ref{tab_mstar}.
Complementary to the $1/V_\mathrm{max}$ estimator in
many aspects, this method is unbiased
with respect to density inhomogeneities
\citep[see][]{Efstathiou1988}.
We verified that the $1/V_\mathrm{max}$ estimates are
reliable by comparing them not only with the STY results but
also with another non-parametric estimator \citep[i.e., the
stepwise maximum-likelihood method of][]{Efstathiou1988}.
These multiple estimates strengthen our results, as
the different methods turn out to be in good agreement (Fig.~\ref{gsmf1}).
In particular, this fact validates the completeness limits
we have chosen, because the estimators would diverge at
$\mathcal{M>M}_\mathrm{lim}$
if some galaxy population were missing
\citep[see][]{Ilbert2004}.
\begin{table*}
\caption{GSMF in low- and high-density regions: Schechter parameters resulting from the STY method, when applied to different galaxy populations at different redshifts. Before fitting data at $0.8<z<0.9$, $\alpha$ has been fixed to the value of the previous $z$-bin.}
\label{tab_mstar}
\setlength{\extrarowheight}{1ex}
\centering
\begin{tabular}{lcccccc}
\hline\hline
galaxy sample & $\alpha$ & $\log\mathcal{M}_\star$ & $\Phi_\star$ & $\alpha$ & $\log\mathcal{M}_\star$ & $\Phi_\star$\\
& & $[h_{70}^{-2}\,\mathcal{M}_\odot]$ & $[10^{-3}\,h_{70}^3\,\mathrm{Mpc}^{-3}]$ & & $[h_{70}^{-2}\,\mathcal{M}_\odot]$ & $[10^{-3}\,h_{70}^3\,\mathrm{Mpc}^{-3}]$ \\
[1ex] \hline
$ 0.51<z<0.65 $ & \multicolumn{3}{c}{low density } & \multicolumn{3}{c}{high density} \\
total & $-0.95^{ + 0.16}_{ -0.16}$ & $10.77^{ + 0.06}_{ -0.05}$ & $ 1.27^{ + 0.17}_{ -0.19}$ & $-0.76^{ + 0.14}_{ -0.13}$ & $11.01^{ + 0.06}_{ -0.06}$ & $ 4.60^{ + 0.59}_{ -0.63} $ \\
passive & $-0.49^{ + 0.20}_{ -0.20}$ & $10.76^{ + 0.06}_{ -0.06}$ & $ 0.73^{ + 0.06}_{ -0.08}$ & $-0.00^{ + 0.18}_{ -0.18}$ & $10.89^{ + 0.06}_{ -0.05}$ & $ 3.51^{ + 0.16}_{ -0.16} $ \\
active & $-0.87^{ + 0.20}_{ -0.19}$ & $10.51^{ + 0.06}_{ -0.06}$ & $ 1.18^{ + 0.16}_{ -0.19}$ & $-0.93^{ + 0.19}_{ -0.18}$ & $10.77^{ + 0.08}_{ -0.08}$ & $ 2.71^{ + 0.55}_{ -0.57} $ \\
[1ex] \hline
$ 0.65<z<0.80 $ & \multicolumn{3}{c}{low density } & \multicolumn{3}{c}{high density} \\
total & $-0.52^{ + 0.32}_{ -0.31}$ & $10.72^{ + 0.07}_{ -0.06}$ & $ 1.14^{ + 0.07}_{ -0.11}$ & $-0.80^{ + 0.23}_{ -0.22}$ & $10.99^{ + 0.08}_{ -0.07}$ & $ 3.83^{ + 0.55}_{ -0.69} $ \\
passive & $-0.14^{ + 0.40}_{ -0.39}$ & $10.73^{ + 0.09}_{ -0.08}$ & $ 0.51^{ + 0.03}_{ -0.04}$ & $-0.40^{ + 0.28}_{ -0.27}$ & $10.97^{ + 0.09}_{ -0.07}$ & $ 2.42^{ + 0.18}_{ -0.32} $ \\
active & $-1.26^{ + 0.32}_{ -0.31}$ & $10.69^{ + 0.10}_{ -0.09}$ & $ 0.79^{ + 0.20}_{ -0.24}$ & $-0.91^{ + 0.31}_{ -0.30}$ & $10.78^{ + 0.10}_{ -0.09}$ & $ 2.54^{ + 0.51}_{ -0.65} $ \\
[1ex] \hline
$ 0.80<z<0.90 $ & \multicolumn{3}{c}{low density } & \multicolumn{3}{c}{high density} \\
total & $-0.52 $ & $10.64 ^{ + 0.05 }_{ -0.04}$ & $ 1.16^{ + 0.08}_{ -0.08}$ & $-0.80 $ & $10.85 ^{ + 0.05 }_{ -0.04}$ & $ 4.59^{ + 0.33}_{ -0.33} $ \\
passive & $-0.14 $ & $10.66 ^{ + 0.06 }_{ -0.05}$ & $ 0.36^{ + 0.04}_{ -0.04}$ & $-0.40 $ & $10.88 ^{ + 0.06 }_{ -0.05}$ & $ 1.76^{ + 0.18}_{ -0.18} $ \\
active & $-1.26 $ & $10.70 ^{ + 0.07 }_{ -0.07}$ & $ 0.85^{ + 0.05}_{ -0.05}$ & $-0.91 $ & $10.75 ^{ + 0.07 }_{ -0.06}$ & $ 3.35^{ + 0.25}_{ -0.25} $ \\
\hline
\end{tabular}
\end{table*}
\subsection{Results}
\label{Results}
\begin{figure*}
\includegraphics[width=0.9\textwidth]{mfcfvmax.eps}
\caption{Evolution of the GSMF in the different
VIPERS environments. Total, passive, and active samples are in
black, red, and blue colours respectively. Each shaded area is obtained
from the $1/V_\mathrm{max}$ estimates adding the corresponding Poissonian
uncertainty (see Sect.~\ref{Gsmf estimate} and
Appendix \ref{Voronoi} for details); only estimates above the
stellar mass completeness limit are considered. }
\label{mfcfvmax}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.99\textwidth]{MF3col_pap_ok.eps}
\caption{Stellar mass functions of galaxies at low density (orange symbols)
and high density (violet symbols) in three different redshift bins, namely
$0.51<z\leqslant0.65$, $0.65<z\leqslant0.8$, and $0.8<z\leqslant0.9$.
Right-side panels show the GSMFs
of active galaxies, while central panels refer to passive ones. The GSMFs
of the whole sample in the same $z$-bins are shown on the left.
In each plot, filled (open) circles represent the $1/V_\mathrm{max}$
points above (below) the completeness mass $\mathcal{M}_\mathrm{lim}$
(vertical dot line),
with error bars (shown only above $\mathcal{M}_\mathrm{lim}$)
that account for the Poisson uncertainty.
In the total GSMFs, the uncertainty due to cosmic variance is also
added to the error bars (note that in some cases the error bar is smaller
than the size of the points).
Solid lines represent the Schechter functions estimated through the STY method,
with the 1$\sigma$ uncertainty highlighted by a shaded area.
With this estimator all the Schechter parameters are free, except at
$0.8<z\leqslant0.9$, where $\alpha$ is fixed to the value found in the
previous $z$-bin (see Table \ref{tab_mstar}).
To compare the shape of mass functions in LD and HD, we renormalise them
in such a way that their number density ($\rho_N$) is equal to unity when we
integrate the GSMF at $\mathcal{M}>\mathcal{M}_\mathrm{lim}$.
}
\label{gsmf1}
\end{figure*}
\begin{figure}
\includegraphics[width=0.99\columnwidth]{sch-ellis_pap_ok.eps}
\caption{\citet{Schechter1976} parameters (filled symbols)
of the GSMFs at redshift
$0.51<z<0.65$ and $0.65<z<0.8$, where $\alpha$ was left free during
the STY fitting (cf Fig.~\ref{gsmf1}).
The solid- and dashed-line contours represent
respectively the 68.3\% and 90\% confidence levels. Orange lines and downward triangles
are the estimates for galaxies in the LD regions, violet lines and
upward triangles are used for the HD ones.
Each panel concerns a different sample (total, passive, and active
galaxies from left to right). All the values are obtained by using the
algorithms contained in the ALF suite \citep{Ilbert2005}.
}
\label{schellips}
\end{figure}
The GSMFs computed in this Section are shown
in Fig.~\ref{mfcfvmax} and \ref{gsmf1}.
In the former, to show their evolution,
we superimpose the mass functions
at different redshifts, namely
$0.51<z\leqslant0.65$, $0.65<z\leqslant0.8$,
and $0.8<z\leqslant0.9$ (median redshift $\tilde{z}\sim 0.6$, 0.72, 0.84).
On the other hand, in Fig.~\ref{gsmf1}, we renormalise the GSMFs
in such a way that their number density is equal to unity when we
integrate the GSMF at $\mathcal{M}>\mathcal{M}_\mathrm{lim}$.
With this kind of rescaling we can directly
compare the shape that the GSMF has in the two environments.
In both Figures, the mass functions of different galaxy types
(total, passive, and active samples) are plotted in distinct columns.
Our results are particularly intriguing in the high-mass regime, where VIPERS
benefits from its large number statistics.
Figure \ref{mfcfvmax} shows a different growth of stellar mass in LD and HD
environments. Regarding the total galaxy sample, there is
a mild increase of the HD high-mass tail over cosmic time
(bottom-left panel), an increase that
is observed neither in LD (top-left panel)
nor in the GSMF of the whole VIPERS
volume (D13). This trend seems to be due to the passive
population (central panels) and will be investigated in Sect.~\ref{Discussion}.
Also looking at the shape of the GSMFs, there is a remarkable difference
between LD and HD galaxies (Fig.~\ref{gsmf1}).
At $z\leqslant0.8$, a large fraction of massive galaxies inhabits
the densest regions, resulting in a higher exponential tail of
the HD mass function with respect to the LD environment.
At higher redshifts this difference
becomes less evident.
Quantitatively,
the difference is well described by the Schechter
parameter $\mathcal{M}_\star$, which is larger in
the HD regions (see the likelihood contours for $\alpha$
and $\mathcal{M}_\star$ shown in Fig.~\ref{schellips}).
For the total sample, in the first and second
redshift bin, $\Delta\mathcal{M}_\star\equiv
\log(\mathcal{M}_{\star,\mathrm{HD}}/\mathcal{M}_{\star,\mathrm{LD}})
=0.24\pm0.12$ and $0.27\pm0.15\,\mathrm{dex}$ respectively.
A similar deviation appears at $0.8<z\leqslant0.9$
($\Delta\mathcal{M}_\star=0.21\pm0.11\,\mathrm{dex}$)
although in that case
the formal $\mathcal{M}_\star$ uncertainty
has been reduced by keeping $\alpha$ fixed in the fit.
The behaviour seen for the whole sample is also evident in
the GSMFs divided by galaxy type (Fig.~\ref{gsmf1},
middle and right panels).
At intermediate masses, our analysis becomes less stringent.
Given the completeness limit of VIPERS,
it is difficult to constrain the power-law slope of the GSMF.
We find that $\alpha_\mathrm{HD}$ and $\alpha_\mathrm{LD}$ are
compatible within the errors, with the exception of the passive sample
at $0.51<z\leqslant0.65$, for which
the stellar mass function is steeper in the LD regions.
\subsection{Comparison with previous work}
\label{Gsmf comparison}
The comparison with other authors is not always
straightforward, given the different definitions of
environment and galaxy types. Besides that, the selection function
(and the completeness) also changes from one survey to another.
An analysis with an approach
very similar to ours is that of \citet{Bolzonella2010}.
In that paper, low- and high-density
galaxies in the zCOSMOS survey ($0.1<z<1.0$) are classified
by means of the galaxy density contrast (derived
from the 5NN, as in our case).\footnote{
For the sake of
simplicity, we use our notation (LD and HD) also when referring to the
low-/high-density galaxies of \citet{Bolzonella2010}, which are
named D1 and D4 in the original paper.}
\citeauthor{Bolzonella2010} observe a higher fraction
of massive galaxies in overdense regions, although within
the uncertainties of the GSMF estimates.
In the lower redshift range not probed by VIPERS
($0.1<z<0.5$) they also find
an upturn of the high-density GSMF
below $\log \mathcal{M/M_\odot}\lesssim10$.
In Fig.~\ref{zcos-vip} we directly compare our GSMFs
to those of
\citet{Bolzonella2010} in a redshift bin
that is similar in the two analyses ($0.5<z<0.7$ in their paper,
$0.51<z<0.65$ in ours). We find a good agreement
for both passive and active
galaxies.\footnote{When considering
the next bin of \citeauthor{Bolzonella2010}, i.e.~$0.7<z<1$,
we also found a fairly good agreement with
our data at $0.65<z<0.9$.
However we preferred to show the $z$-bin
where the stellar mass limit
is lower.}
With respect to the active sample,
better agreement is reached when considering only high-sSFR
galaxies, i.e.~when we remove the ``intermediate''
objects that lie
between the borders (\ref{nrkcut}) and (\ref{nrkcut2}) of
the $\mathrm{NUV}rK$ diagram.
This improvement is probably
due to the fact that
the high-sSFR subsample is
more similar to the late-type galaxies
of \citet{Bolzonella2010}, which they identify using
an empirical set of
galaxy templates. We note that a difference
between the LD and HD mass functions is also visible, but not
significant, in \citet[][Fig.~11]{Bundy2006}.
\citet{Mortlock2015},
with a combination of photometric redshift samples,
conduct a study of environmental effects up to $z\sim2.5$.
Their analysis suggests
that massive galaxies at $z<1$ favour denser environments.
When they derive the GSMF in this environment
they also observe a flatter low-mass end, in agreement with our findings.
On the contrary, at $z>1$ the
GSMFs in low and high densities become very similar.
\begin{figure}
\includegraphics[width=0.99\columnwidth]{cf_zcosmos.eps}
\caption{VIPERS (this work) and zCOSMOS \citep{Bolzonella2010}
stellar mass functions of galaxies in LD/HD regions (orange/violet
and grey/black colours,
see the legend in the top-right corner of each panel).
The
comparison is restricted to a single
redshift bin that is similar in the two surveys
($0.5<z<0.7$ in zCOSMOS, $0.51<z<0.65$ in VIPERS).
All the GSMFs are rescaled in order to have
$\rho_N(\mathcal{M}>\mathcal{M}_\mathrm{lim})=1$,
as in Fig.~\ref{gsmf1}.
In both panels, solid lines
represent the STY estimates for the various galaxy samples,
with a shaded area encompassing the 1$\sigma$ uncertainty
(the line is dashed below the stellar mass limit).
Filled circles and diamonds are the $1/V_\mathrm{max}$
determinations of the GSMFs of zCOSMOS
(LD and HD respectively).
The \textit{upper panel} includes the stellar mass functions
of star-forming galaxies, classified by \citet{Bolzonella2010}
according to their photometric types (T2, i.e.~late-type galaxies),
and by means of the $\mathrm{NUV}rK$ diagram in our analysis.
We also show with dot-dashed lines
the stellar mass function of the VIPERS galaxies having high sSFR
(i.e., those remaining after removing the
$\mathrm{NUV}rK$-intermediate objects from the active sample).
In the \textit{lower panel}, the VIPERS passive
sample and the zCOSMOS early-type galaxies
(i.e., T1 spectrophotometric types) are considered.
}
\label{zcos-vip}
\end{figure}
In contrast, other studies find no environmental dependency
in the stellar mass function of galaxy clusters
\citep{Calvi2013,Vulcani2012,Vulcani2013,vanderBurg2013}.
The lack of differences in the
field vs cluster comparison
can be due
to the various (local) environments embraced in the
broad definition of ``field'' (i.e., a sky region without clusters)
that can include single galaxies, pairs, and even galaxy groups.
Simulations of \citet{McGee2009} indicate that the majority
of cluster members have been
accreted through galaxy groups. Other models, such as those
used in \citet{DeLucia2012}, similarly show that a large fraction of
cluster galaxies previously
belonged to smaller groups, and were ``pre-processed'' in that environment.
Therefore, inasmuch as galaxy groups also
contribute to the stellar mass function in the field,
the high-mass end is expected to be similar in the
two environments.
Indeed, when \citet{Calvi2013}
consider only isolated galaxies, they obtain a stellar
mass function that differs from the others.
The presence of structures in the field can thus be crucial
in this kind of analysis.
Also the (global) environment represented by a galaxy cluster
includes regions with different local conditions.
We note that in
\citet{Vulcani2012}
the local galaxy density
spans a wide range of values
also within clusters.
The issue is discussed also
by \citet{Annunziatella2014}, who
analyse a cluster from the CLASH-VLT survey.
They find that the stellar
mass function of passive galaxies in the core
shows a steeper decrease at low masses,
in comparison with passive galaxies in the outskirts of the cluster.
In addition, we emphasise that VIPERS is better designed
than current cluster surveys to probe $\mathcal{M}>\mathcal{M_\star}$.
For instance, \citet{vanderBurg2013} have
12 spectroscopic members in their 10 GCLASS clusters
with $11.2<\log(\mathcal{M/M_\odot})<11.6$
and no detection at higher masses; instead, our HD regions contain
a few hundred (spectroscopic) galaxies above
$\log(\mathcal{M/M_\odot})=11.2$.
To summarise, the comparison illustrates the advancement
VIPERS represents with respect to previous surveys like zCOSMOS
or DEEP2:
we are now able to
robustly discriminate the LD and HD mass functions,
finding differences
that were not statistically significant before. We emphasise
that VIPERS has also more statistical power than current
cluster surveys to probe the massive end of the GSMF.
Besides that, the fact that our results disagree e.g.~with \citet{Vulcani2012}
is related to the different definition of environment.
On the other hand, the sample used in this paper
spans only $\sim\!2.3\,\mathrm{Gyr}$
of the history of the universe, whereas zCOSMOS and DEEP2 have a larger
redshift range. Future spectroscopic surveys will
combine high statistics
and large cosmic time intervals
thanks to next-generation
facilities \citep[especially PFS, the Subaru Prime Focus Spectrograph,][]{Takada2014}. They could
confirm whether the environmental effects on the GSMF at $z\lesssim1$
(i.e., the enhancement of the high-mass end and the flattening of the
power-law slope)
vanish at higher redshifts, as suggested by \citet{Mortlock2015}.
We can also compare the VIPERS mass functions with
those measured in the local universe. In particular,
\citet[][hereafter P10]{Peng2010}
define the environment of SDSS galaxies
as in \citet{Bolzonella2010},
i.e.~in a way similar to ours. They find:
\begin{enumerate}[i.]
\item values of $\alpha$ and $\mathcal{M}_\star$ for
active GSMFs are the same in the LD and HD regions;
\item in LD,
the stellar mass function of passive galaxies has the
same $\mathcal{M}_\star$ of the active one;
\item comparing the passive GSMF in LD and HD regions, the latter
has a larger value of $\mathcal{M}_\star$.\footnote{
In P10, the passive GSMFs are fitted with a double Schechter function.
Here we refer only to what concerns the primary (most massive)
component.
}
\end{enumerate}
Thanks to the large volume of the VIPERS sample,
and to the high precision of the redshift measurements,
we can verify whether these findings extend to intermediate redshifts.
We emphasise that at $z>0$ the environmental signatures
(i)--(iii) have not been confirmed yet: several studies provided
contrasting clues \citep[cf][]{Bolzonella2010,Vulcani2012,
Giodini2012,vanderBurg2013,
Annunziatella2014}.
With respect to the passive mass functions, the STY estimator yields
larger
$\mathcal{M}_\star$ values in the regions of higher density,
as stated in (iii).
We find such a trend in all three redshift bins
(see Table \ref{tab_mstar}).
This feature, as we will discuss
in Sect.~\ref{Evolution}, can be associated with
dry major mergers, which are more likely to happen
in overdense regions.
Turning to the active GSMFs,
we observe (i) and (ii) at $z>0.65$. Indeed,
the shape of the active GSMF is similar
in the two VIPERS environments,
since
$\alpha$ and $\mathcal{M}_\star$
computed in LD/HD regions are
compatible within the errors (note that at $z\sim0.84$
we can compare only $\mathcal{M}_\star$ because
$\alpha$ is fixed \textit{a priori}).
Moreover, $\mathcal{M}_\star^\mathrm{act,LD}$ is
consistent with $\mathcal{M}_\star^\mathrm{pass,LD}$.
At $0.51<z\leqslant0.65$,
the features (i) and (ii)
are not observed any longer.
We argue that the difficulty in assessing clearly
(i) and (ii) is due to the GSMF parametrisation of
the active sample, which here is
a single Schechter function (Eq.~\ref{schfun}).
Recent work suggests that this is not the optimal choice.
For instance, \citet[][GAMA survey]{Baldry2012} observe
an excess of blue galaxies
at $\mathcal{M}>10^{10}\,\mathcal{M}_\odot$
with respect to their best (single Schechter) fit,
with the magnitude of the deviation
depending on the colour adopted for the classification.
We find that, by adopting a double-Schechter model for
the active mass function at $z\sim0.6$,
the STY fit produces $\alpha$ and $\mathcal{M}_\star$
that satisfy relations (i) and (ii). However, the uncertainties in
this case are larger: given the stellar mass
limit of VIPERS, the slope of the secondary component is not
well constrained.
In the next Section we will discuss the origin of
these GSMF features, already observed in the local universe and
now confirmed at $0.5\lesssim z \lesssim1$.
\section{Discussion}
\label{Discussion}
The shape of the passive GSMFs is different in the LD
and HD environments, and this difference increases going to
lower $z$ (see Fig.~\ref{gsmf1}).
This can be the result of an environment-dependent quenching
mechanism, but may also be explained by a different halo mass
distribution, or a different assembly history for haloes
of similar mass but residing in different regions
\citep[see discussion in][]{DeLucia2012}.
A similar perspective, looking at the halo environment,
has been adopted
by \citet{Hearin2015} to explain the so-called ``galactic conformity''
\citep{Weinmann2006}, which is the tendency of satellite galaxies
to be in the same state (star forming or passive)
as the central galaxy well beyond the virial radius.
Such a sSFR correlation could be linked to the tidal
forces experienced by
haloes evolving in the same large-scale environment.
\subsection{Comparison with semi-analytical models}
\label{Millennium}
\begin{figure}
\includegraphics[width=0.49\textwidth]{hmf_ref_mill2.eps}
\caption{Halo mass function derived from the simulation described in Sect.~\ref{Millennium},
restricted to galaxies in the HD and LD environment (violet and orange symbols,
respectively). Different symbols correspond to the three redshift bins quoted in the
bottom-left corner of the plot, with error bars obtained from the variance among the 10
mock catalogues. The mass function of haloes in the entire box
($714\,h_{70}^{-1}\,\mathrm{Mpc}$ side), at snapshots consistent with our
redshift bins, is shown as reference
with solid lines (darker colour at lower $z$).
}
\label{mill_hmf}
\end{figure}
We make use of galaxy simulations to
investigate in more detail the two environments we defined.
In VIPERS we can exploit
a set of 10 light cones, built
from the Millennium simulation \citep{Springel2005}.
To derive mock catalogues,
dark-matter haloes are populated with galaxies
by means of the semi-analytic model
(SAM) of \citet[][hereafter DLB07]{DeLucia&Blaizot2007}.
For each mock galaxy, rest-frame and apparent magnitudes
have been estimated in the same filter used in the real survey,
and the
same magnitude cut of VIPERS ($i \leqslant 22.5$) is applied to
the simulated catalogue.
We add an error to each redshift
to emulate observational measurements, either spectroscopic or photometric
depending on whether the object is chosen to be a VIMOS pseudo-target
by the slit positioning algorithm.\footnote{The sampling rate is defined
as the ratio between the number of spectroscopic pseudo-targets and the
whole mock galaxy sample, in bins of redshift. It is
very similar to the TSR of the real survey,
while the SSR is 100\%. The statistical weighting factor is
therefore $1/\mathrm{TSR(z)}$.}
In Appendix \ref{Appendix} and \ref{Voronoi} we use these mock catalogues
to test our reconstruction of the
density field,
together with another set realised through the halo
occupation distribution (HOD) technique
\citep[see][]{delaTorre2013b}.
The HOD mock galaxies
better reproduce VIPERS-PDR1: they cover the same area as
the real survey and have the colour pre-selection applied.
The SAM catalogues were prepared at an earlier stage of the survey, so
in each of the 10 realisations the effective area is $4.5\,\mathrm{deg}^2$.
The decline of $N(z)$ at $z\sim0.5$ due to the VIPERS selection function
is reproduced by removing objects randomly,
irrespective of their $(g-r)$ and $(r-i)$ colours.
Nevertheless, the SAM catalogues are better suited
to the goal of this Section, since they contain more physical
information.
Indeed, the DLB07 model predicts
galaxy properties such as stellar mass, SFR, colours
at different redshifts,
in addition to the apparent magnitudes mentioned above;
by contrast, galaxy stellar mass and SFR
are not available in the HOD catalogues.
In these Millennium light-cones we identify HD and LD regions
by means of the same method used with real data
(see Sect.~\ref{Environment definition} and Appendix \ref{Voronoi}).
In principle, this means that environmental effects predicted by DLB07 can
be straightforwardly compared to those found in VIPERS.
However, the LD/HD environments in the simulation may correspond
only roughly to the regions delimited
in the real survey, for several reasons.
First, the volume-limited ($M_B<-20.4-z$)
tracers used to estimate $\delta$ in the simulation
may have different number density
and clustering. As highlighted in
\cite{Cucciati2012b}, at intermediate redshifts
the $B$-band luminosity function
shows an excess of bright late-type galaxies
in the DLB07 model with respect to VVDS data,
while early-type galaxies at $M_B<M_B^\star$ are underpredicted.
Moreover, we are aware that for the most luminous and massive galaxies
the two-point correlation function of VIPERS is slightly higher than DLB07
on scales $\gtrsim 7\,h_\mathrm{70}^{-1}\,\mathrm{Mpc}$ \citep{Marulli2013}.
This is expected, as the $\sigma_8$ parameter, set
by the first-year analysis of the Wilkinson Microwave Anisotropy Probe
\citep[WMAP1,][]{Spergel2003} and adopted in the Millennium simulation,
is larger than more recent measurements from WMAP9 and Planck-2015
\citep[][]{Hinshaw2013,Planck2015-13}.
We discuss these differences also
in Appendix \ref{Voronoi}.
Further investigations have been carried out in Cucciati et al.~(in prep.).
Overall, those tests show that
structures (and voids) in the Millennium simulations grow earlier
than those in the observed universe,
and the volume occupied by the HD (LD) regions is smaller (larger).
Nevertheless, the under- and over-densities in our light cones
still represent two opposite environments that we can contrast,
e.g.~by looking at their underlying
dark matter content.
Figure~\ref{mill_hmf} shows
the mass distribution of haloes hosting either LD or HD galaxies.
In all the redshift bins, the number density of HD haloes is higher
than that of the LD ones. The distribution of the former has a flatter slope,
with a higher fraction of massive haloes:
those with $\mathcal{M}_\mathrm{halo} \gtrsim 10^{13.5}\,\mathcal{M}_\odot$
are not found in the opposite, low-density environment. This excess
is a clear indication that our environment
reconstruction classifies rich galaxy groups and galaxy
clusters as HD regions.
These results are in agreement with \citet{Fossati2015}, who find
similar correlations between local galaxy density and halo mass in
a thorough study of galaxy environment. We also highlight that the
halo number density starts to be higher in HD at masses of
$10^{12}$--$10^{12.5}\,\mathcal{M}_\odot$. Haloes in this bin should include
almost 50\% of galaxies with $\mathcal{M}>10^{11}\,\mathcal{M}_\odot$, as
found by \citet{Popesso2015}.\footnote{We note that both
\citet{Fossati2015} and \citet{Popesso2015} use SAMs from the same
``family'' of DLB07, implemented on a new run of the Millennium simulation.}
The difference observed between LD and HD
in the high-mass end of the GSMF (Fig.~\ref{gsmf1}) can be
interpreted, at least partly, as a reflection of the mass
segregation of dark matter.
In hierarchical models, massive haloes
preferentially populate the densest regions
\citep[e.g.][]{Mo&White1996}, and the correlation between
halo mass and galaxy stellar mass
produces in turn a concentration of
massive galaxies in the HD environment
\citep[cf.][]{Abbas2005,Abbas2006,Scodeggio2009,delaTorre2010}.
This illustrates how the intrinsic properties of galactic systems
are entangled, via halo mass, with the classification of
their local environment, without resolving
the ``nature'' vs ``nurture'' dilemma.
This picture is consistent with the mass segregation
observed by \citet{vanderBurg2013} in the GCLASS clusters at
$z\simeq1$. They normalise their stellar mass function
by estimating the total mass (baryons and dark matter)
contained within the virial radius of each cluster. On the other hand,
their GSMF in the UltraVISTA field is normalised by multiplying its
volume by the average matter density of the Universe.
After such rescaling, the authors find that the stellar
mass function is higher in the clusters than in the field
\citep[see][Fig.~8]{vanderBurg2013}.
We can also derive the stellar mass function of SAM galaxies
in LD and HD environments.
We already know (see D13) that the DLB07 model
overestimates the low-mass end
of the VIPERS GSMF, and
shows minor tension at higher masses.
The same weaknesses are present in more recent SAMs
\citep[see e.g.][]{Fontanot2009,Cirasuolo2010,Guo2011,Maraston2013,Lu2014}
and also in hydrodynamical simulations.
Furthermore, discrepancies arise
because of the error sources in the observations
\citep[e.g.~systematics in stellar mass estimates, see][]{Marchesini2009,Bernardi2013}.
Most importantly, the LD and HD regions traced in the simulation, although
defined in the same way as the real ones, differ e.g.~in terms of occupied volume
(see discussion above). For this reason we renormalise each GSMF to unit
number density (as previously done in Fig.~\ref{gsmf1}).
\begin{figure}
\includegraphics[width=0.99\columnwidth]{gsmf_resc_tstmill.eps}
\caption{Stellar mass functions of mock galaxies built from the Millennium simulation
through the semi-analytical model of \citet{DeLucia&Blaizot2007}. The 10 mock
realisations correspond to the solid lines (orange and violet for LD and
HD regions respectively) while symbols with error bars show the
GSMF of VIPERS in the two environments (the same as Fig.~\ref{gsmf1}).
All the mass functions are plotted starting from the completeness limit
($\mathcal{M}_\mathrm{lim}$) at that redshift. They are
obtained by means of the $1/V_\mathrm{max}$ method,
rescaled to have the same number density $\rho_N$ when integrating
$\Phi(\mathcal{M})$ at $\mathcal{M}>\mathcal{M}_\mathrm{lim}$.
}
\label{mill_smf}
\end{figure}
The shapes of the different GSMFs are compared in Fig.~\ref{mill_smf}.
In both environments, at each redshift bin,
the shape of the mock GSMF is similar to the observed one
after convolving SAM stellar masses with a Gaussian of dispersion 0.2\,dex,
to reproduce observational uncertainties. The 0.2\,dex width
has been chosen as an arbitrary value representing the typical scatter in
the SED fitting estimates \citep[see e.g.][]{Mobasher2015}. We note that a different value
\citep[e.g.~0.25\,dex, as in][]{Guo2011} may result in a worse agreement
with data. Aware of this potential bias, we note that it would not
remove the difference emerging between HD and LD regions in the
simulation. Indeed, the main finding in this Section is that mock GSMFs
show the same increase
of the high-mass end in the densest environment, as found in VIPERS.
In addition to this, the model hints at how the low-mass slope changes
as a function of environment, at least for the GSMF at $0.51<z\leqslant0.65$, where the
mass range probed is the largest.
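The convolution of the SAM masses with observational uncertainties can be sketched as below; the input mass distribution and sample size are arbitrary placeholders, while the 0.2\,dex dispersion is the value adopted in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical SAM log stellar masses (illustrative, not the DLB07 output)
logm = rng.uniform(9.5, 11.5, size=20000)

# Emulate observational uncertainties: perturb each log-mass by a Gaussian
# of dispersion 0.2 dex before building the mass function
sigma = 0.2
logm_obs = logm + rng.normal(0.0, sigma, size=logm.size)

bins = np.arange(9.0, 12.1, 0.2)
phi_true, _ = np.histogram(logm, bins=bins)       # intrinsic mass function
phi_conv, _ = np.histogram(logm_obs, bins=bins)   # "observed" mass function
```

Because of the steep massive end, the scatter moves more galaxies into the high-mass bins than out of them (an Eddington-type bias), which is why the adopted dispersion affects the comparison with data.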
Looking at the central galaxies (as defined according to the merger tree) we note that
about half of those living in the HD regions are centrals of a sub-halo already inside a larger structure,
while in the LD regions most of them are ``isolated'' centrals.
Also the number of satellite galaxies, i.e.~those
embedded in the halo of another galaxy,
increases as a function of $\delta$:
the HD satellite fraction is a factor $\sim2$ higher than the one in LD, reaching about 20\% at
$\log(\mathcal{M/M}_\odot)\sim 10.6$ and going down to zero at
$\log(\mathcal{M/M}_\odot) > 11$.
Also the fraction of recent mergers (i.e., mergers between two consecutive
timesteps) is $\sim2$ times larger in the HD regions.
This can explain the flatter profile of the GSMF with respect to the LD regions.
The relevance of mergers is discussed, with a different approach, also in the next Section.
\subsection{An empirical approach}
\label{Evolution}
We use VIPERS data to test the empirical description
of galaxy evolution proposed by P10, in which
the galaxy number density changes as a function of
$\mathcal{M}$, SFR, and environment.
Three observational facts are fundamental in P10:
\begin{enumerate}[1)]
%
\item the stellar mass function of star-forming galaxies
has the same shape at different redshifts
\citep[i.e., $\alpha$ and $\mathcal{M_\star}$
are nearly constant, see e.g.][]{Ilbert2010}, with
little increase in normalisation moving towards lower
redshifts;
%
\item there is a tight relation between SFR and stellar mass for
star-forming galaxies (the so-called ``main sequence'')
with $\mathrm{SFR}\propto\mathcal{M}^{1+\beta}$
\citep[e.g.][]{Noeske2007,Elbaz2007,Daddi2007};
\item average sSFR can be parametrised with respect to
stellar mass and redshift/cosmic time
\citep[][and references therein]{Speagle2014},
while it is independent of environment
\citep[P10;][]{Muzzin2012,Wetzel2012}.
\end{enumerate}
In spite of the large
consensus in the literature, we caution that
these three findings have been established only recently:
new data may be at odds with them,
calling into question the basis of the \citeauthor{Peng2010} work.
For instance, \citet{Ilbert2015} show that
$\log(\mathrm{SFR})\propto -\mathcal{M}\log(\mathcal{M})$ is a better
parametrisation than 2), at least for their $24\,\mu\mathrm{m}$-selected sample.
The keystone of P10 description is that
two mechanisms can regulate the decline
of star formation; they are named
\textit{mass quenching} and \textit{environment quenching}
as they depend respectively on $\mathcal{M}$ and $\delta$.
To a first approximation, the evolution of the GSMF can be
parametrised by these two mechanisms only.
As we shall show below, other processes
are needed in the HD regions.
Using data from the local Universe
\citep[SDSS-DR7,][]{Abazajian2009} and at $z\sim1$
\citep[zCOSMOS,][]{Lilly2007} the authors argue that
mass and environment quenching are fully separable.
The effect of both can be expressed
analytically; in particular the mass quenching rate is
\begin{equation}
\lambda_m = \frac{\mathrm{SFR}}{\mathcal{M_\star}}
= \mu \mathrm{SFR} \:,
\label{massquen}
\end{equation}
where $\mathcal{M_\star}$, namely the Schechter parameter of the
star-forming mass function, is constant ($\mathcal{M_\star} \equiv
\mu^{-1} \simeq 10^{10.6}\,\mathcal{M_\odot}$, according to
observations).
Equation~(\ref{massquen}) can be regarded as
the probability per unit time that a galaxy becomes passive via mass quenching.
This is the simplest analytical form that satisfies 1)-3) but alternative, more complex
formulations cannot be excluded.
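A minimal numerical transcription of Eq.~(\ref{massquen}) is given below. The sSFR parametrisation is a hypothetical stand-in for the \citet{Pannella2009} relation used by P10; only the structure of the calculation is meant to be illustrative:

```python
import numpy as np

M_STAR = 10**10.6          # Schechter M* of the star-forming GSMF [M_sun]
MU = 1.0 / M_STAR          # mu = 1/M*, as in Eq. (massquen)

def ssfr(z, m):
    """Hypothetical sSFR(z, M) in 1/Gyr (illustrative placeholder,
    not the Pannella et al. 2009 relation adopted by P10)."""
    return 0.1 * (1.0 + z)**3

def mass_quenching_rate(z, m):
    # lambda_m = mu * SFR = mu * M * sSFR(z, M)   [1/Gyr]
    return MU * m * ssfr(z, m)

def p_quench(z, m, dt):
    """Probability of becoming passive over a timestep dt (Gyr),
    treating lambda_m as a Poisson rate."""
    return 1.0 - np.exp(-mass_quenching_rate(z, m) * dt)
```

At $\mathcal{M}=\mathcal{M}_\star$ the quenching rate equals the sSFR itself, so galaxies around and above the Schechter mass are quenched within roughly a star-formation timescale.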
The empirical laws of P10
do not shed light on the physical processes
responsible for quenching but
describe its characteristics.
In \citet{Peng2012} mass and environment quenching are linked
to halo occupation. In this view, central galaxies are subjected to the former,
which is analogous to the ``internal quenching'' described in
other papers \citep[e.g.][and references therein]{Gabor2010,Woo2012},
while environment quenching is the preferred channel of satellite galaxies.
This distinction however is not clear-cut because satellite galaxies can
spend a significant fraction of their life as centrals, before being accreted
into another halo \citep[see e.g.][]{DeLucia2012}.
Moreover \citet{Knobel2015}, using the same SDSS group catalogue
of \citet{Peng2012}, show that the central vs satellite
dichotomy disappears when excluding isolated galaxies from the
sample of central galaxies (i.e., central galaxies in groups are
affected by the environment in the same way as satellites).
With these simple prescriptions,
it is possible to
reproduce several statistics of galaxies across cosmic time.
In P10, the authors
generate a galaxy sample
at $z=10$, with a primordial stellar mass function
that follows a power law,
and they evolve it
down to $z=0$.
That mock sample has very simple
features, e.g.~active galaxies
form stars at a constant level
that is given by the
sSFR$(z,\mathcal{M})$ parametrisation
of \citet{Pannella2009}.
At any epoch, a fraction of blue galaxies become
red, in proportion to the mass and environment quenching
rates.
This picture does not include the birth of
new galaxies.
Here, we do not make use of mock galaxies, rather we
start from the
observed stellar mass function in
a given $z$-bin
and ``evolve'' it to a lower redshift following the prescriptions of P10.
Then, we compare such an ``empirical prediction'' of the
GSMF with our data.
In the LD regions, the fraction
of VIPERS active galaxies that migrate into the passive mass function
is assumed $\propto \lambda_\mathrm{m}$, i.e.~it is
determined by mass quenching only.
To evaluate the fraction of new quenched galaxies, one has to insert a
functional form of the
specific SFR, generally speaking sSFR$(z,\mathcal{M})$, into
Eq.~\ref{massquen}. The function chosen by P10 (their Eq.~1)
comes from \citet{Pannella2009}.
From such a definition of quenching rate,
it follows that, in a given mass bin centred on $\mathcal{M}_\mathrm{b}$,
the galaxy number density evolution is
\begin{align}
\Phi_\mathrm{pass}(z_2) \,=&\, \Phi_\mathrm{pass}(z_1) +
\int_{t(z_1)}^{t(z_2)} \Phi_\mathrm{act}(z)
\lambda_\mathrm{m} \,\mathrm{d}t \nonumber \\
=&\, \Phi_\mathrm{pass}(z_1) + \tilde{\Phi}_\mathrm{act} \,
\mu \int_{z_1}^{z_2} \mathcal{M}_\mathrm{b} \,
\mathrm{sSFR}(z,\mathcal{M}_\mathrm{b})
\,\frac{\mathrm{d}t}{\mathrm{d}z}\,\mathrm{d}z \;.
\label{phievo}
\end{align}
In this equation, the GSMF of the active sample
is assumed constant ($\tilde{\Phi}_\mathrm{act}$)
between $z_1$ and $z_2<z_1$, regardless of the
environment in which it is computed.
This assumption is supported both by our data (see Fig.~\ref{mfcfvmax})
and other studies \citep[e.g.][]{Pozzetti2010,Ilbert2013};
$\tilde{\Phi}_\mathrm{act}$ is determined by averaging the
$\Phi_\mathrm{act}$ estimates at $z_1$ and $z_2$.
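The time integral in Eq.~(\ref{phievo}) can be evaluated numerically as sketched here. The cosmology, the sSFR parametrisation, and the input densities are assumed placeholder values (not those of P10 or VIPERS); converting the time integral to redshift uses the Jacobian $\mathrm{d}t/\mathrm{d}z$ of the adopted cosmology:

```python
import numpy as np

# Flat LambdaCDM time-redshift relation (H0 = 70 km/s/Mpc, Om = 0.3; assumed)
H0 = 70.0 * 1.022e-3            # Hubble constant converted to 1/Gyr
OM = 0.3

def dt_dz(z):
    """|dt/dz| in Gyr per unit redshift for flat LCDM."""
    e_z = np.sqrt(OM * (1.0 + z)**3 + 1.0 - OM)
    return 1.0 / (H0 * (1.0 + z) * e_z)

M_STAR = 10**10.6               # M* of the star-forming GSMF [M_sun]
MU = 1.0 / M_STAR

def ssfr(z, m):
    # Hypothetical sSFR(z, M) in 1/Gyr, a stand-in for the P10 choice
    return 0.1 * (1.0 + z)**3

def evolve_phi_pass(phi_pass_z1, phi_act_mean, m_b, z1, z2, n=2000):
    """Mass-quenching growth of the passive GSMF in one mass bin,
    from z1 down to z2 < z1 (linearised version of Eq. phievo)."""
    z = np.linspace(z2, z1, n)
    integrand = MU * m_b * ssfr(z, m_b) * dt_dz(z)
    # trapezoidal rule over the redshift grid
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z))
    return phi_pass_z1 + phi_act_mean * integral

phi_z2 = evolve_phi_pass(1e-3, 2e-3, 10**10.8, z1=0.85, z2=0.60)
```

With these assumptions the elapsed time between $z=0.85$ and $z=0.60$ is about 1.4\,Gyr, in line with the interval quoted in the text.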
We apply Eq.~(\ref{phievo}) in the LD environment,
evolving data at $0.8<z\leqslant0.9$ down to
$\langle z\rangle=0.72$ and
$\langle z\rangle=0.6$.
The resulting passive GSMFs, built under the action of
mass quenching only, are consistent with those
observed in the corresponding redshift bins
(see Fig.~\ref{mfev}, upper panels).
We repeat the procedure
starting from $0.65<z<0.8$, finding a good agreement
at $\langle z\rangle=0.6$ (this comparison is
not shown in the Figure).
The major uncertainty in this technique is related to
the SFR--$\mathcal{M}$ relation. To quantify the
impact of different parametrisations, we also used,
instead of the equation provided in P10, the ``concordance function''
obtained by \citet{Speagle2014} fitting
data of 25 studies from the literature (see their Eq.~28).
We also estimate the uncertainty related to
$\tilde{\Phi}_\mathrm{act}$ by replacing it
with upper and lower values of
$\Phi_\mathrm{act}(z_1)$ and $\Phi_\mathrm{act}(z_2)$
respectively. We note that keeping
the active mass function fixed
introduces a much smaller uncertainty
with respect to
the sSFR$(z,\mathcal{M})$ parametrisation.
Another approximation in the procedure is
that galaxies do not change
environment as time goes by.
This assumption is appropriate
in the time interval we probe, as we verified following the
evolution of mock galaxies in the simulations of Sect.~\ref{Millennium}.
\begin{figure}
\includegraphics[width=0.97\columnwidth]{evolv_pap.eps}
\caption{Comparison between the GSMFs constructed with the
P10 recipe and the VIPERS data.
In each panel, red filled circles are the $1/V_\mathrm{max}$ points
(with Poissonian errors) of the VIPERS
passive mass function, in the redshift bin and environment indicated
in the legend;
lines and shaded area represent the evolution
of the GSMF observed at $0.8<z<0.9$,
down to the same redshift of the plotted data points.
Applying the
quenching description of P10, we obtain two different estimates
if we use the original sSFR$(z,\mathcal{M})$ parametrisation
of P10 (solid line), or the function provided in
\citet[][dashed line]{Speagle2014};
a further error is introduced to account for the uncertainties in the
integration (see Eq.~\ref{phievo}), giving the final width of the
shaded area.
}
\label{mfev}
\end{figure}
We apply Eq.~(\ref{phievo}) also in the HD regions.
We emphasise that in this case there should be a combined effect of
both mass and environment quenching mechanisms. However, P10 show that
the latter is more effective
at $\log(\mathcal{M/M_\odot})<10.5$, and
therefore negligible
in the VIPERS stellar mass range. The main difference
with respect to LD, instead, is that
after becoming passive, galaxies in the overdensities
have a higher chance of merging.
We will show that such dry mergers are crucial to modify the shape of
the passive GSMF.
In fact, a description which accounts for
mass quenching only
does not reproduce well
the passive mass function of HD galaxies
(Fig.~\ref{mfev}, lower panels).
Dry mergers produce a redistribution of the stellar mass
in the simulated GSMF, which is now more consistent
with the observed one (Fig.~\ref{mfevdry}).
We add this `post-quenching' ingredient (i.e.~dry merging)
through the scheme described below.
P10 assume a simple model in which
part of the passive population
merges with 1:1 mass ratio.
Similar prescriptions are used also in the ``backward
evolutionary model'' of
\citet{Boissier2010}.
Both P10 and \citet{Boissier2010} find that
dry major mergers enhance the exponential tail
of the passive GSMF, and make $\mathcal{M}_\star$
increase with respect to the LD environment.
They also consider minor
mergers fully negligible in the GSMF evolution, at least at
$\mathcal{M}\geqslant 10^{10}\,\mathcal{M}_\odot$
\citep[see also][]{Lopez-Sanjuan2011,Ferreras2014}.
In our analysis, we introduce dry (major) mergers in the evolution of
$\Phi_\mathrm{pass,HD}$, assuming that two
objects
in the same bin of $\log\mathcal{M}$ can merge together
without triggering relevant episodes of star formation
\citep[e.g.][]{DiMatteo2005,Karman2015}.
We set the fraction of galaxies
undergoing a 1:1 merger
to be equal to $f_\mathrm{dry}(z)$,
with no dependence on the stellar mass of the
initial pair \citep[cf.][]{Xu2012a}.
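The bookkeeping of this 1:1 merger prescription can be sketched as follows; the bin width, number densities, and $f_\mathrm{dry}$ below are illustrative placeholders:

```python
import numpy as np

def apply_dry_mergers(logm_bins, phi, f_dry):
    """Redistribute a binned GSMF under 1:1 dry mergers.

    A fraction f_dry of the galaxies in each log-mass bin merge in
    equal-mass pairs; each pair is replaced by one remnant of twice
    the mass (log10(2) ~ 0.3 dex higher).  Illustrative bookkeeping
    only; assumes the 0.3 dex shift spans at least one bin.
    """
    dlogm = logm_bins[1] - logm_bins[0]
    shift = int(round(np.log10(2.0) / dlogm))   # bins spanned by +0.3 dex
    phi_new = phi * (1.0 - f_dry)               # merging galaxies leave their bin
    remnants = 0.5 * phi * f_dry                # two progenitors -> one remnant
    phi_new[shift:] += remnants[:-shift]        # remnants enter higher bins
    return phi_new
```

Each pair of progenitors is replaced by a single remnant about 0.3\,dex more massive, so the prescription depletes intermediate-mass bins and enhances the massive end, at the cost of lowering the total number density.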
An estimate of $f_\mathrm{dry}(z)$ is
inferred by \citet{Man2014} by counting galaxy pairs
with stellar mass ratio less than $1:4$.\footnote{\citeauthor{Man2014}
show that their merger rate is
suitable to study dry mergers, e.g.~it is
consistent with that of gas-poor galaxies
in the simulations of \citet[][]{Hopkins2010a}.
Moreover
the authors, performing their analysis
on the COSMOS field,
can be compared to several previous studies \citep[e.g.][]{deRavel2011,
Xu2012a,Lopez-Sanjuan2012}, with which they are in
fairly good agreement.
P10
use the merger rate derived by \citet{deRavel2011}
for the zCOSMOS galaxies.}
The merger rate of \citet{Man2014} leads to a merger
fraction $f_\mathrm{dry}=5_{-2}^{+3}\%$
from $\langle z\rangle=0.84$ to 0.72,
and $f_\mathrm{dry}=10_{-4}^{+6}\%$
from $\langle z\rangle =0.84$ to 0.6.
Since they are averaged over the general COSMOS field,
these values can be $\sim$2--3 times
higher in HD environments
\citep[][see also \citealp{Lin2010,Lotz2013}]{Kampczyk2013}.
For this reason we test a range of $f_\mathrm{dry}$ values:
from 5 to 15\% in the time span from $\langle z\rangle=0.84$ to
0.72 ($\sim$0.7\,Gyr)
and 10--30\%
from redshift $0.84$ to 0.6 (i.e., across $\sim$1.4\,Gyr).
As stressed above, dry merging is the key element to reconcile the simulated GSMF
with the observed one (Fig.~\ref{mfevdry}).
Nevertheless, a 1$\sigma$ difference remains at
$\log(\mathcal{M/M_\odot})\simeq10.4$.
Together with the ($<1\sigma$) difference at the high-mass
end, this overestimate may suggest that the impact of mergers in the
densest regions of VIPERS could be even larger than what we assumed.
At the same time, we cannot exclude that the explanation for these (minor) tensions
resides in the simplicity of our parametrisation. Indeed the result depends on the
model used to describe the evolution of these massive galaxies.
For example, central ones could grow significantly by
(multiple) accretion of satellites. Since our sample does not
distinguish between satellite and central galaxies, we could not
test this scenario.
\begin{figure}
\includegraphics[width=\columnwidth]{evolv_dry_pap.eps}
\caption{Evolution of the passive mass function in the HD environment,
including
dry mergers. The solid line in each panel is the
predicted GSMF in the HD environment,
as in Fig.~\ref{mfev},
assuming mass quenching only and the sSFR parametrisation of P10;
yellow shaded area is the GSMF modified by dry mergers, whose
percentage ranges from 5--10\% (triple-dot-dashed line)
to 15--30\% (dot-dashed line) depending
on the redshift bin. In each $z$-bin,
red circles are the $1/V_\mathrm{max}$
estimates (with Poissonian errors)
of the stellar mass function of the VIPERS passive galaxies
(symbols are filled above the
completeness limit $\mathcal{M}_\mathrm{lim}^\mathrm{pass}$). }
\label{mfevdry}
\end{figure}
\section{Conclusions}
\label{Conclusions}
The large volume probed, along with the accuracy
of the redshift measurements, makes VIPERS an ideal survey
for studying environmental effects at intermediate redshifts.
We reconstruct the local density field (\citealp[][]{Cucciati2014a}; Cucciati et al., in prep.)
and identify galaxies embedded in under- and over-densities.
We estimate the volumes occupied by
such LD and HD regions, finding that they represent
nearly $50$ and $7\%$ of the total comoving volume of the survey.
Thanks to the volume reconstruction, we can properly compute
the number density of galaxies in these two opposite environments,
and compare the GSMFs
at different epochs.
The stellar mass function of LD galaxies is nearly constant
in the redshift range $0.51<z<0.9$, while a significant
evolution is observed in the HD regions.
Moreover, we find that the VIPERS stellar mass function
has a shape that depends on the environment, with
a higher fraction of massive objects in the over-densities.
Interestingly, our approach is complementary to the other VIPERS studies
that show
the increase of the galaxy bias as a function of $\mathcal{M}$
\citep[e.g.][]{Marulli2013}.
Despite our completeness limit
($\mathcal{M}_\mathrm{lim}\gtrsim 10.4$
at $z\sim 0.6$) we also find hints that the
low-mass end of the GSMF is flatter in the HD regions,
with a particular decrement of the passive sample.
This marginal effect could be robustly assessed once the final VIPERS
catalogues ($\sim$90\,000 spectra) are available.
The LD vs HD variance is quantitatively
described by the \citet{Schechter1976} parameters:
the $\alpha$--$\mathcal{M}_\star$ likelihood contours from the STY fit
show a significant difference between the two environments.
In particular, the enhancement of the GSMF massive end
is well described by $\mathcal{M}_\star$, which increases
by $\sim0.25$\,dex in the HD regions
(namely $0.24\pm0.12$, $0.27\pm0.15$, and $0.21\pm0.11$\,dex at $z\sim0.6$,
$0.72$, and $0.84$ respectively).
Such a difference
remains visible when considering the
active or passive sample only.
An environmental imprint in the stellar mass function has already
been observed in the local Universe
\citep{Baldry2006,Peng2010}. With VIPERS, it becomes evident
for the first time also at $z\gtrsim0.5$.
We investigate these environmental trends
by using 10 mock catalogues
derived from the Millennium simulation. Galaxies are
simulated following the prescriptions of
\citet{DeLucia&Blaizot2007} and the survey design is reproduced
to make these catalogues similar to VIPERS.
In this way we were able to define galaxy environments
as done in the real survey.
The different slope of the low-mass end is observed
also in the mock GSMF,
and can be associated with a larger
number of merger events where the local galaxy density is higher.
Looking at the exponential tail of the mock GSMF,
the higher number density of $\mathcal{M}>\mathcal{M}_\star$
galaxies in the HD regions is linked to a large amount of
haloes with $\mathcal{M}_\mathcal{h}>10^{13}\,\mathcal{M}_\odot$.
Such massive haloes are absent in the LD sample.
As a result, both satellite and merger fractions increase when
selecting denser environments.
To summarise, our classification based on the galaxy density contrast
corresponds to a discrimination in halo properties, highlighting the
ambiguity of the ``mass vs environment'' dichotomy \citep[see][]{DeLucia2012}.
We find that the difference between LD and HD
mass functions decreases
from $\langle z \rangle =0.60$ to $0.84$.
The trend is expected to continue at higher redshifts,
where the massive haloes that characterise our densest
environment have not collapsed yet.
We can connect our results to
the analysis of \citet{Mortlock2015}, in which the GSMF at
$1<z<1.5$ does not change when computed in either
high or low densities (even though the large uncertainties
could hide some minor environmental effect).
This difference can be linked to the different conditions of cosmological structures
in the earlier stages of the universe, with the group environment being
more effective at $z<1$.
We also tested the
empirical description of \citet{Peng2010},
in which the stellar mass function of passive galaxies evolves
under the combined effect of mass and environment quenching.
Unlike other studies, we use this approach in a self-consistent way:
we evolve the observed mass function at each redshift bin considered
in our study, and compare the expectation to the GSMF observed at the lower redshift bin.
Our results show that the measured evolution of the GSMF in low density regions
is consistent with a model in which galaxy evolution is dominated by internal physical
processes only (``mass quenching'' in the formalism by \citeauthor{Peng2010}).
For high density regions, however, additional processes have to be considered
to explain the evolution of the massive end of the GSMF.
In particular, we demonstrate that the observed evolution can be explained
by including the effect of dry mergers.
We stress that our survey has the capability to shed light
on the role of mergers in shaping the GSMF, e.g.~tackling the
problem of sample variance highlighted by \citet{Keenan2014}.
Moreover, in the redshift range
of our survey, merging events are more frequent than in the local universe
\citep[][but see also outcomes from state-of-the-art simulations in
\citealp{Rodriguez-Gomez2015}]{Lopez-Sanjuan2012}.
In our study,
both semi-analytic modelling and empirical approach highlighted the
importance of mergers in the large-scale dense environment.
Future analyses relying on the
final $24\,\mathrm{deg}^2$ release of VIPERS shall complement the
present results, providing further details about galaxy-galaxy interactions.
\begin{acknowledgements}
We acknowledge the crucial contribution of the ESO staff for the management of service observations. In particular, we are deeply grateful to M. Hilker for his constant help and support of this program. Italian participation to VIPERS has been funded by INAF through PRIN 2008 and 2010 programs.
OC acknowledges the support from grants ASI-INAF I/023/12/0 ``Attivit\`a relative alla fase B2/C per la missione Euclid''.
LG and BRG acknowledge support of the European Research Council through the Darklight ERC Advanced Research Grant (\# 291521).
OLF acknowledges support of the European Research Council through the EARLY ERC Advanced Research Grant (\# 268107). AP, KM, and JK have been supported by the National Science Centre (grants UMO-2012/07/B/ST9/04425 and UMO-2013/09/D/ST9/04030), the Polish-Swiss Astro Project (co-financed by a grant from Switzerland, through the Swiss Contribution to the enlarged European Union). RT acknowledges financial support from the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement n. 202686.
EB, FM and LM acknowledge the support from grants ASI-INAF I/023/12/0 and PRIN MIUR 2010-2011. LM also acknowledges financial support from PRIN INAF 2012. YM acknowledges support from CNRS/INSU (Institut National des Sciences de l’Univers) and the Programme National Galaxies et Cosmologie (PNCG). Research conducted within the scope of the HECOLS International Associated Laboratory, supported in part by the Polish NCN grant DEC-2013/08/M/ST9/00664.
\end{acknowledgements}
\begin{appendix}
\section{Tests on the $1+\delta$ distribution}
\label{Appendix}
In Sect.~\ref{Environment definition} we associated
VIPERS galaxies with LD or HD environments by means of
their density contrast $\delta$. Specifically,
galaxies with $\delta<0.7$ are
assumed to be in the LD region, while
HD galaxies are those with $\delta>4$.
For the sake of clarity
we dub these thresholds $\delta_\mathrm{LD}$ and
$\delta_\mathrm{HD}$.
Their respective values correspond to the 25th and
75th percentiles of the
$\delta$ distribution, which can be computed
at various redshifts ($0.51<z\leqslant0.65$, $0.65<z\leqslant0.8$,
$0.8<z\leqslant0.9$) and
in W1 and W4 separately.
The final thresholds we adopted
($\delta_\mathrm{LD}=0.7$,
$\delta_\mathrm{HD}=4$)
are obtained by averaging the percentiles
obtained in each bin.
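The construction of the thresholds can be sketched as below; the $\delta$ values are drawn from a hypothetical log-normal distribution, not from the actual VIPERS density field:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical density contrasts delta > -1 (illustrative log-normal 1+delta)
delta = np.expm1(rng.normal(0.5, 1.0, size=5000))

# 25th / 75th percentiles of the delta distribution define the LD / HD cuts
delta_ld, delta_hd = np.percentile(delta, [25, 75])

is_ld = delta < delta_ld    # low-density galaxies
is_hd = delta > delta_hd    # high-density galaxies
```

In the real analysis the two percentiles are computed in each redshift bin and field separately, and then averaged to give the fixed values $\delta_\mathrm{LD}=0.7$ and $\delta_\mathrm{HD}=4$.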
In this Appendix, we justify the choice of
using constant values
despite the small
variations among
different redshifts and fields
(see Fig.~\ref{d1d4lim}).
First of all, we verify the
absence of selection effects in the computation.
Even though the selection of our spectroscopic targets,
described through TSR, SSR, and CSR
(Sect.~\ref{Dataspec}), does vary with redshift, this is not
the case for the mass-selected sample
($\log(\mathcal{M/M_\odot})>10.86$) we use as a proxy of the
density field. The statistical weights of these galaxies are
nearly constant from $z=0.51$ to 0.9.
Some variation of $\delta_\mathrm{LD}$
and $\delta_\mathrm{HD}$ should be
due to statistical fluctuations,
since we are sampling a nearly-Gaussian distribution
\citep[][]{DiPorto2014}
with a limited number of objects.
In fact, each $z$-bin
contains only galaxies that were spectroscopically observed,
and the $\delta$ ranking is sensitive to this incompleteness.
From this perspective, the survey selection introduces
some amount of scatter: datasets
drawn from the same
galaxy parent population
can yield different quartile values
just because they populate the tails
of the original density distribution in different ways.
To verify this hypothesis,
we perform a Monte Carlo simulation.
First, we divide the VIPERS sample in the
three $z$-bins mentioned above,
keeping W1 and W4 separate.
In each bin, and for both fields individually,
we derive a PDF from the observed $\delta$ distribution.
We extract 100\,000
times the same number of objects as
observed in VIPERS,
and assign to these fake galaxies
a density contrast according to the reconstructed PDF.
In other words, this task consists in reproducing many times
the plot shown in Fig.~\ref{d1d4lim}, as it would appear
if we targeted different galaxies from the parent photometric
sample (every time with the same sampling rate).
The quartiles resulting from each realisation have
a scatter of the
order of 10--15\% around the mean value.
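The test can be reproduced in essence with the following sketch, where the parent $1+\delta$ distribution is a hypothetical log-normal (the actual test draws from the observed PDF, with 100\,000 realisations per bin and field):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical parent distribution of 1 + delta in one z-bin (illustrative)
one_plus_delta = np.exp(rng.normal(0.5, 1.0, size=800))

n_real = 2000
q25 = np.empty(n_real)
q75 = np.empty(n_real)
for i in range(n_real):
    # redraw the same number of objects as observed, from the parent PDF
    sample = rng.choice(one_plus_delta, size=one_plus_delta.size, replace=True)
    q25[i], q75[i] = np.percentile(sample, [25, 75])

scatter_25 = q25.std() / q25.mean()   # relative scatter of the LD threshold
scatter_75 = q75.std() / q75.mean()   # relative scatter of the HD threshold
```

The relative scatter of the recovered quartiles depends on the sample size and on the shape of the distribution; with the placeholder numbers above it is at the few-per-cent level, of the same order as the 10--15\% found for the VIPERS samples.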
Another reason for the fluctuations of $\delta_\mathrm{LD}$
and $\delta_\mathrm{HD}$
could be cosmic variance. In this case, it is not the subsample
of observed objects to vary but the density field itself,
e.g.~because of field-to-field variations in large-scale
clustering \citep[][and references therein]{Moster2011}.
In VIPERS, thanks to its large volume,
this effect is generally small, as shown in D13 and \citet{Fritz2014}.
To estimate the impact of cosmic variance on our definition
of environment, we use two sets of simulations, each one
consisting of 10 independent mock galaxy catalogues.
The first set originates from the halo occupation distribution (HOD)
modelled by \citet[][see also the description in \citealp{Cucciati2014a}]{
delaTorre2013b}.\footnote{The other mock catalogues,
built according to the semi-analytical model (see Sect.~\ref{Millennium}),
are not used here because they cover
a single sky region of $7\times1\,\mathrm{deg}^2$.}
To do so, we start from mock catalogues that have
a 100\% sampling rate, no masked areas, and
galaxy redshifts without observational errors (i.e., they are
cosmological redshifts perturbed by peculiar velocities).
We refer to them as ``reference'' mock catalogues.
We manipulate them
to reproduce the VIMOS footprint, and add redshift
measurement errors to obtain the correct percentages of
$z_\mathrm{phot}$ and $z_\mathrm{spec}$
(``VIPERS-like'' mock catalogues).
We estimate galaxy density contrast
(through the projected 5NN, as described in Sect.~\ref{Density contrast})
and consequently its distribution, in the three $z$-bins used in
this work.
Among the 10 ``VIPERS-like'' realisations using HOD,
the 25th (75th) percentile
that determines the LD (HD) environment has
$\sim5\%$ ($\sim10\%$) scatter.
This outcome implies that the
LD and HD thresholds in real data also vary
because of cosmic variance. In the HOD mock catalogues
the galaxy luminosity in the B band is available. Assuming an average
$\mathcal{M}/L_\mathrm{B}$ ratio, we estimated the fractional error
due to cosmic variance in each bin of stellar mass of
the total GSMF shown in Fig.~\ref{gsmf1}.
In conclusion, the percentiles we estimated for VIPERS,
in its two fields and
within three different $z$-bins,
spread over a range comparable
to the one resulting in simulations.
Undersampling of the $\delta$ distribution
and cosmic variance are the main
causes of these fluctuations, which
are small enough not to invalidate
our choice of keeping fixed $\delta_\mathrm{LD}$
and $\delta_\mathrm{HD}$.
We also verified
that the galaxy
density field does not evolve
significantly from $z=0.9$ down to 0.5
(i.e., we can safely compare
results obtained at different redshifts).
In fact,
the values of the density thresholds
at the 25th and 75th
percentile do not show a
dependence on $z$.
Moreover, by means of cosmological simulations
based on the Millennium Simulation
\citep[the same used in][]{DiPorto2014} we check
that the PDF of the underlying matter density field
is almost constant between $z=1$ and 0.5.
These tests confirm that we can safely classify
galaxies by using the same thresholds
($\delta_\mathrm{LD}$
and $\delta_\mathrm{HD}$) in different $z$-bins.
Besides that, we can estimate purity and completeness
of the LD and HD samples
by means of the HOD simulation already
used to test cosmic variance effects.
We parametrise galaxy environments as done with data,
in both the VIPERS-like and the reference mocks,
and classify the LD and HD environments.
The comparison indicates that our method is not harmed
by the effects of the VIPERS design: in each VIPERS-like mock
the classification is in good agreement with the one obtained
in the reference (i.e.~working without the limitations
of the observational strategy).
About 70\%
of galaxies for which $\delta$ is
below the 25th (above the 75th) percentile in the reference mocks,
remain in the LD (HD) environment also in the
VIPERS-like ones.
For the purity, we consider the interlopers that should have
been assigned to LD or HD (according to the
reference estimate) but erroneously
fall in the opposite environment. We find that
less than 8\% of low-density galaxies in the reference
are misclassified as high-density in the VIPERS-like mocks,
and a similar percentage
of HD galaxies become LD interlopers.
\section{Volumes occupied by HD and LD galaxies}
\label{Voronoi}
In this Appendix we describe the technique
to evaluate the comoving volumes where we recover
the low- and high-density regions.
Also in this case we rely on
the volume-limited sample introduced
in Sect.~\ref{Density contrast}, i.e.~those objects
with $M_B\leqslant -20.4 - z$
that have been used
to estimate the galaxy density contrast $\delta$.
Such a sample,
contrary to a flux-limited one, has
uniform characteristics from $z=0.5$ to 0.9 and
should not introduce any redshift-dependent bias
\citep{Cucciati2014a}.
We already know the local density
contrast of these bright galaxies (Sect.~\ref{Density contrast}),
so we can identify
the ones that belong to LD or HD
environments (Sect.~\ref{Environment definition}).
We fill the whole VIPERS volume
with random particles homogeneously distributed with
a comoving density equal to $2\,h_{70}^3\,\mathrm{Mpc}^{-3}$.
We associate each random particle to the nearest galaxy among
the volume-limited sample. Particles linked
to LD (HD) galaxies are then used to
estimate the volume occupied by the LD (HD) regions,
which is the fraction of particles in the given
environment multiplied by the total VIPERS volume.
Namely, this is a Monte Carlo integration in
comoving coordinates \citep[see e.g.][]{Weinzierl2000}.
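A minimal sketch of this Monte Carlo integration, assuming galaxy positions have been rescaled to a unit box (the brute-force nearest-neighbour search is for illustration only; a tree-based search would be used on real catalogues):

```python
import random

def mc_environment_volumes(galaxies, labels, total_volume,
                           n_random=5000, seed=1):
    """Monte Carlo estimate of the volume occupied by each environment:
    fill a unit box (rescaled to `total_volume`) with uniform random
    particles, attach each particle to its nearest galaxy, and turn
    the particle counts into volumes."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_random):
        p = (rng.random(), rng.random(), rng.random())
        # brute-force nearest neighbour; a k-d tree would be used at scale
        nearest = min(range(len(galaxies)),
                      key=lambda i: sum((a - b) ** 2
                                        for a, b in zip(p, galaxies[i])))
        lab = labels[nearest]
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: total_volume * n / n_random for lab, n in counts.items()}
```

Particles falling in masked regions would simply be discarded before the nearest-galaxy association, which is what makes this approach robust to the VIMOS gaps.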
We compare this estimate to an alternative technique,
based on the Voronoi decomposition \citep[e.g.][and references therein]{Marinoni2002}.
Around a chosen galaxy (belonging to the volume-limited sample),
a Voronoi polyhedron
is unambiguously defined as
the set of points closer to that object than to any other.
Once such a partition of the VIPERS space is realised, we add together
the polyhedra of LD/HD galaxies
to estimate the volume of the two environments.
This sum overestimates the previous result
by $\sim20\%$,
because a few Voronoi
polyhedra exceed the effective volume of VIPERS, i.e.~they expand
into the VIMOS gaps.
On the other hand, the Monte Carlo integration does not
suffer from this problem, because
we can easily remove
random particles that fall outside the
spectroscopic area. We verified that, for galaxies
far from the survey gaps, the two techniques
are in excellent agreement.
Once the low- and high-density regions are delimited
in the 3-D space, we plot the LD/HD
volumes ($V_\mathrm{LD}$ and $V_\mathrm{HD}$)\footnote{In the following we will refer
to these volumes also with the general term $V_\mathrm{env}$.}
enclosed between $z=0.5$
and a certain $z_\mathrm{up}$.
This upper boundary
runs from 0.5 to 0.9 with steps of $\Delta z=0.002$.
That is,
\begin{equation}
V_\mathrm{env}(z_\mathrm{up})= \frac{N_\mathrm{env}(0.5,z_\mathrm{up})}
{N(0.5,z_\mathrm{up})} V(0.5,z_\mathrm{up}) \;,
\end{equation}
where $N_\mathrm{env}/N$ is the fraction of random particles -- in the
range $[0.5,z_\mathrm{up}]$ -- associated with the given environment,
while $V$ is the comoving volume of the whole survey in the same redshift slice
(see Fig.~\ref{vmax_z}). As said before, $V$ is computed
considering only the effective (i.e., spectroscopically observed) area of VIPERS;
random particles outside of it are discarded.
We linearly
interpolate $V_\mathrm{env}(z_\mathrm{up})$
between consecutive values of $z_\mathrm{up}$
to get a continuous function $V_\mathrm{env}(z)$, shown in the upper panel of Fig.~\ref{vmax_z}.
When computing the GSMF (Sect.~\ref{Gsmf estimate})
we use $V_\mathrm{env}(z)$ to determine the
$V_\mathrm{max}$ volume. Each VIPERS galaxy is detectable
between redshift $z_\mathrm{min}$ and $z_\mathrm{max}$,
i.e.~the redshifts at which the object becomes respectively
brighter and fainter than the flux limits of the survey.
In some cases $z_\mathrm{min}$ and/or $z_\mathrm{max}$
fall outside the $z$-bin in which the GSMF
is measured. If so, we replace $z_\mathrm{min}$ ($z_\mathrm{max}$)
with the lower (upper) limit of the bin.
Once its redshift interval of observability is established,
the $V_\mathrm{max}$ of a given galaxy is equal to
$V_\mathrm{env}(z_\mathrm{max}) -
V_\mathrm{env}(z_\mathrm{min})$, as illustrated in
Fig.~\ref{vmax_z}.
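The $V_\mathrm{env}(z)$ interpolation and the clipped $V_\mathrm{max}$ computation can be sketched as follows (illustrative names; the tabulated grid stands in for the Monte Carlo estimates described above):

```python
from bisect import bisect_left

def make_v_env(z_grid, v_grid):
    """Return a linear interpolator for the cumulative environment
    volume V_env(z), tabulated on a grid of upper boundaries z_up."""
    def v_env(z):
        z = min(max(z, z_grid[0]), z_grid[-1])
        j = min(max(bisect_left(z_grid, z), 1), len(z_grid) - 1)
        t = (z - z_grid[j - 1]) / (z_grid[j] - z_grid[j - 1])
        return v_grid[j - 1] * (1 - t) + v_grid[j] * t
    return v_env

def v_max(v_env, z_min, z_max, bin_lo, bin_hi):
    """Vmax of one galaxy: the environment volume enclosed between its
    observability limits, clipped to the z-bin of the GSMF estimate."""
    lo, hi = max(z_min, bin_lo), min(z_max, bin_hi)
    return v_env(hi) - v_env(lo) if hi > lo else 0.0
```

The clipping of $(z_\mathrm{min}, z_\mathrm{max})$ to the bin boundaries reproduces the replacement rule stated in the text.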
This approach is a variation of the method of \citet{Schmidt1968},
accounting for the spatial segregation of
the sample. Indeed, the ``classical'' computation of
$1/V_\mathrm{max}$ is based on
the area of the whole survey, while here
we assume that galaxies contributing to the
LD/HD stellar mass function cannot be observed
outside their environment.
With the exception of the first $\sim130\,h_{70}^{-1}\,\mathrm{Mpc}$
along the line of sight (between $z=0.51$ and 0.55) the fraction of the total volume
occupied by the HD and LD structures is nearly constant, i.e.~about 7 and
50\% respectively (Fig.~\ref{vmax_z}, lower panel).
\begin{figure}
\includegraphics[width=0.99\columnwidth]{vol_vs_z.eps}
\caption{\textit{Upper panel:} the comoving volume, between redshift 0.51 and $z$,
filled by either HD or LD regions (violet and orange lines). The function is evaluated by means of
a Monte Carlo integration as described in the text. To find the $V_\mathrm{max}$ of a galaxy,
one has to consider the volume between its minimum and maximum allowed redshift
($z_\mathrm{min}$ and $z_\mathrm{max}$, see the vertical dashed lines as an example).
\textit{Lower panel:} the fraction of the total volume (between $z=0.5$ and the given redshift)
occupied by HD and LD regions (violet and orange lines). }
\label{vmax_z}
\end{figure}
This technique could also be applied to the semi-analytic mock samples
(Sect.~\ref{Millennium}), but in the present work we do not use it because
of a few systematics that make the comparison to real data more difficult.
One reason is that the cosmological parameters of the Millennium simulation
\citep[based on WMAP1,][]{Spergel2003}
could be different from the ones of the observed universe.
In particular, the amplitude of matter fluctuations on
$8\,h^{-1}\,\mathrm{Mpc}$ scale should be overestimated in the simulation
(where $\sigma_8=0.9$) compared to more recent measurements ($\sigma_8\simeq0.8$).
Also $\Omega_\Lambda$, $\Omega_\mathrm{m}$, and
the spectral index of the primordial perturbation field are slightly different in WMAP1 from
what found by WMAP9 \citep[][]{Hinshaw2013} and \textit{Planck}
\citep{Planck2015-13}.
In view of these facts, the HD/LD thresholds in the model
may not agree with data.
Compared to VIPERS, low-density regions should be more extended, while the overdensities
should be concentrated in a smaller volume, as expected in a more clustered universe.
These differences will be verified in future work.
\citet{Wang2008} investigated some consequences of varying cosmological parameters in a simulation.
They ran the same SAM \citep{DeLucia&Blaizot2007} several times,
but changing cosmology from WMAP1 to WMAP3 \citep{Spergel2007}.
The variations due to the new parameters mostly cancel out at
$z\sim0$, while they are significant at $z\gtrsim1$. This is especially evident by looking at the GSMF
\citep[][Fig.~14]{Wang2008}, which starts to over-predict the observations already at $z=0.5$ when
WMAP1 parameters are assumed.
The luminosity function is less affected by these systematics \citep[][Fig.~13]{Wang2008}.
We also notice that modifications of the galaxy formation model should
have a smaller impact than cosmology on the GSMF.
We identify low- and high-density galaxies in the \citeauthor{Wang2008} boxes
($125\,h^{-1}\,\mathrm{Mpc}$ comoving size), those based on WMAP1
as well as the boxes with WMAP3 cosmology.
We observe that the
distribution of the density contrast has a higher tail at large values of $\delta$ when
WMAP1 is the reference. Thus, the two thresholds to divide HD and LD regions are more
``extreme'' (Fig.~\ref{wang_hist}), mainly because structures form earlier
in the WMAP1 case.\footnote{Similar results are found by \citet{Guo2013}
comparing WMAP1 and WMAP7 parameters.
For example, at a fixed cosmic time
massive haloes ($>10^{12.5}\,\mathcal{M}_\odot$)
are more abundant with a WMAP1 cosmology \citep[][Fig.~1]{Guo2013}.}
\begin{figure}
\includegraphics[width=0.99\columnwidth]{wang_delta_distr.eps}
\caption{Distribution of $1+\delta$ in two cosmological boxes at $z=0.75$. In both simulations, galaxies
evolve according to \citet{DeLucia&Blaizot2007} prescriptions. The cosmological parameters used as input
are not the same, being taken from WMAP1 (red histogram) or WMAP3 (blue histogram). In this case, since
we are not restricted to projected coordinates, we evaluated the density contrast using the 5NN in the 3-d space. }
\label{wang_hist}
\end{figure}
The systematic effects are even more severe when comparing our mock samples (which are based on WMAP1) to
observations: they are due not only to cosmology (especially $\sigma_8$) but also to differences
between modelled galaxies and real ones, because of both theoretical and observational limitations.
For example, the luminosity function predicted by \citet{DeLucia&Blaizot2007} at $z\sim0.7$
has a characteristic magnitude
($M_\mathrm{B}^\star\simeq-20.5$) about 0.2\,dex brighter than the one measured in VIPERS \citep{Fritz2014}.
This means that galaxies with $M_B\leqslant -20.4-z$, used to define the 5NN, have a higher number density and
should trace the environment on slightly smaller scales.
As an aside, we note that these outcomes suggest another possible use of our dataset:
the reconstructed volumes from observations, since they are sensitive to cosmological parameters,
can be used to devise a new kind of cosmological test.
\end{appendix}
\section{Introduction}
The Navier-Stokes (NS) equations for an incompressible viscous
fluid are the fundamental governing equations of fluid mechanics.
In many cases, exact solutions can be constructed to these
equations \cite{DR06} and spectral and nonlinear stability of
these exact solutions can be analyzed \cite{DR81}. Our work
addresses stability of exact solutions for the NS equations in
spherical coordinates.
The three-dimensional NS equations in a thin rotating spherical
shell describe large-scale atmospheric dynamics that plays an
important role in the global climate control and weather
prediction \cite{LTW92,LTW92b} (see also review in \cite{Gill}).
It was rigorously proved by Temam \& Ziane \cite{TZ97} that the
average of the longitudinal velocity in the radial direction
converges to the strong solution of the two-dimensional NS
equation on a sphere as the thickness of the spherical shell goes
to zero. The latter model has been used in geophysical fluid
dynamics since the middle of the last century \cite{Russian_Text}.
The treatment of the geometric singularity in spherical coordinates
has for many years been a difficulty in the development of numerical
simulations for oceanic and atmospheric flows around the Earth.
Blinova \cite{Blinova1,Blinova2} represented solutions in the
inviscid case by the eigenfunction expansions in spherical
harmonics. Vorticity equations were considered by Ben-Yu with the
spectral method \cite{GB}. More recent work of Furnier et al.
\cite{FB} applied the spectral-element method to axisymmetric
solutions (see \cite{JB,MC,W} for other applications of the spectral
methods in spherical coordinates). Finally, point vortex motion on a
sphere was modeled by ordinary differential equations for vortex
centers in Boatto \& Cabral \cite{B06} and Crowdy \cite{Crowdy}.
We address the three-dimensional NS equations for an incompressible
viscous fluid,
\begin{equation}
\label{1.1} \left\{ \begin{array}{lll} & \frac{\partial
\mathbf{u}}{\partial t}+\left( \mathbf{u\cdot \nabla }\right)
\mathbf{u}-\nu \Delta \mathbf{u}+\nabla p=0, \quad & {\bf x} \in
\Omega, \; t \in \mathbb{R}_+, \\ & \mathbf{\nabla }\cdot
\mathbf{u}=0, \quad & {\bf x} \in \Omega, \; t \in \mathbb{R}_+, \\
& {\bf u} |_{t = 0} = {\bf u}_0, \quad & {\bf x} \in \Omega,
\end{array} \right.
\end{equation}
in a thin spherical shell $\Omega = \{ {\bf x} \in \mathbb{R}^3 :
1 < |{\bf x}| < 1 + \varepsilon \}$ with $\varepsilon \to 0$,
subject to the boundary conditions
\begin{equation}
\label{1.3} {\bf u} \cdot {\bf n} = 0, \qquad \nabla {\bf u} \times
{\bf n} = {\bf 0}, \qquad {\bf x} \in \partial \Omega.
\end{equation}
Here $\mathbf{u} : \Omega \times \mathbb{R}_+ \mapsto
\mathbb{R}^3$ is the velocity vector, $p : \Omega \times
\mathbb{R}_+ \mapsto \mathbb{R}$ is the ratio of the pressure to
constant density, $\nu$ is the kinematic viscosity, ${\bf n}$ is
the normal vector to the boundary $\partial \Omega$ of the
spherical shell $\Omega$ and $\mathbf{u}_{0} : \Omega \mapsto
\mathbb{R}^3$ is a given initial condition. Although Coriolis and
gravity forces may be dynamically significant in oceanographic
applications, our model is considered in a non-rotating reference
frame and without external forces. The effects of rotation and
gravity can be included into the model but they do not
substantially alter the physical picture that emerges from the NS
equations (\ref{1.1}).
We employ the spherical coordinates $(r,\theta,\phi)$ with the
velocity vector ${\bf u} = u_r {\bf e}_r + u_{\theta} {\bf
e}_{\theta} + u_{\phi} {\bf e}_{\phi}$, where $({\bf e}_r,{\bf
e}_{\theta},{\bf e}_{\phi})$ are the orthonormal basis vectors along
the spherical coordinates. For completeness, we reproduce the
three-dimensional NS equations (\ref{1.1}) in spherical
coordinates \cite{Bachelor}:
\begin{eqnarray*}
&& \frac{\partial u_r}{\partial t} + u_r \frac{\partial
u_r}{\partial r} + \frac{u_{\theta}}{r} \frac{\partial
u_r}{\partial \theta} + \frac{u_{\phi}}{r \sin \theta}
\frac{\partial u_r}{\partial \phi} - \frac{u_{\theta}^2 +
u_{\phi}^2}{r} = - \frac{\partial p}{\partial r} + \nu \left(
\Delta u_r - \frac{2 u_r}{r^2} - \frac{2}{r^2} \frac{\partial
u_{\theta}}{\partial \theta} - \frac{2 u_{\theta} \cot \theta}{r^2} -
\frac{2}{r^2 \sin \theta} \frac{\partial u_{\phi}}{\partial \phi} \right), \\
\nonumber && \frac{\partial u_{\theta}}{\partial t} + u_r
\frac{\partial u_{\theta}}{\partial r} + \frac{u_{\theta}}{r}
\frac{\partial u_{\theta}}{\partial \theta} + \frac{u_{\phi}}{r
\sin \theta} \frac{\partial u_{\theta}}{\partial \phi} + \frac{u_r
u_{\theta}}{r} - \frac{u_{\phi}^2 \cot \theta}{r} = - \frac{1}{r}
\frac{\partial p}{\partial \theta} \\
&& \phantom{texttexttexttexttexttext} + \nu \left( \Delta
u_{\theta } + \frac{2}{r^2} \frac{\partial u_r}{\partial \theta} -
\frac{u_{\theta }}{r^2 \sin^2 \theta} - \frac{2\cos \theta}{r^2
\sin ^2 \theta } \frac{\partial u_{\phi }}{\partial \phi} \right),
\\ \nonumber && \frac{\partial u_{\phi}}{\partial t} +
u_r \frac{\partial u_{\phi}}{\partial r} + \frac{u_{\theta}}{r}
\frac{\partial u_{\phi}}{\partial \theta} + \frac{u_{\phi}}{r \sin
\theta} \frac{\partial u_{\phi}}{\partial \phi} + \frac{u_r
u_{\phi}}{r} + \frac{u_{\theta} u_{\phi} \cot \theta}{r} = -
\frac{1}{r \sin \theta} \frac{\partial p}{\partial \phi} \\
&& \phantom{texttexttexttexttexttext} + \nu \left( \Delta u_{\phi
} + \frac{2}{r^2 \sin \theta} \frac{\partial u_r}{\partial \phi} +
\frac{2\cos \theta }{r^2 \sin^2 \theta } \frac{\partial u_{\theta
}}{\partial \phi }-\frac{u_{\phi }}{r^2 \sin^{2}\theta }\right), \\
&& \frac{1}{r^2} \frac{\partial}{\partial r} \left( r^2 u_r
\right) + \frac{1}{r \sin \theta} \frac{\partial }{\partial \theta
}\left( \sin \theta u_{\theta} \right) + \frac{1}{r \sin \theta}
\frac{\partial u_{\phi }}{\partial \phi } = 0,
\end{eqnarray*}
where
\begin{eqnarray*}
\Delta = \frac{1}{r^2} \frac{\partial }{\partial r} \left( r^2
\frac{\partial}{\partial r} \right) + \frac{1}{r^2 \sin \theta}
\frac{\partial}{\partial \theta} \left( \sin \theta
\frac{\partial}{\partial \theta} \right) + \frac{1}{r^2 \sin^2
\theta} \frac{\partial^2}{\partial \phi^2}
\end{eqnarray*}
is the Laplacian in spherical coordinates and the initial and
boundary conditions are not written. One can check by direct
differentiation that there exists an exact stationary solution to
the three-dimensional NS equations in spherical coordinates:
\begin{equation}
\label{exact-solution} u_r = 0, \quad u_{\theta} = \frac{\alpha}{r
\sin \theta}, \quad u_{\phi} = 0, \quad p = \beta -
\frac{\alpha^2}{2 r^2 \sin^2 \theta},
\end{equation}
where $(\alpha,\beta)$ are arbitrary parameters. The stationary
solution (\ref{exact-solution}) describes fluid motion tangential
to a sphere of any given radius $r$. The stationary flow has two
pole singularities at $\theta = 0$ and $\theta = \pi$. The
singularities correspond to the source and sink of the velocity
vector at the North and South poles of the spherical shell
$\Omega$: the fluid is injected at the North pole from an external
source and it leaks out at the South pole to an external sink.
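The claim that (\ref{exact-solution}) is an exact solution can be verified numerically, e.g.~for the $\theta$-momentum balance with $u_r=u_\phi=0$ (a finite-difference check; the viscosity value, step size, and test points below are arbitrary choices of ours):

```python
import math

def theta_momentum_residual(r, theta, alpha=1.0, nu=0.7, h=1e-4):
    """Residual of the stationary theta-momentum equation for the flow
    u_r = u_phi = 0, u_theta = alpha/(r sin theta),
    p = beta - alpha^2 / (2 r^2 sin^2 theta); it should vanish."""
    u = lambda rr, tt: alpha / (rr * math.sin(tt))
    p = lambda rr, tt: -alpha ** 2 / (2.0 * rr ** 2 * math.sin(tt) ** 2)
    du_r = (u(r + h, theta) - u(r - h, theta)) / (2 * h)
    du_t = (u(r, theta + h) - u(r, theta - h)) / (2 * h)
    dp_t = (p(r, theta + h) - p(r, theta - h)) / (2 * h)
    advect = u(r, theta) / r * du_t
    # scalar Laplacian of u_theta (no phi dependence)
    lap = ((u(r + h, theta) - 2 * u(r, theta) + u(r - h, theta)) / h ** 2
           + 2.0 / r * du_r
           + (u(r, theta + h) - 2 * u(r, theta) + u(r, theta - h))
             / (h ** 2 * r ** 2)
           + math.cos(theta) / math.sin(theta) * du_t / r ** 2)
    visc = nu * (lap - u(r, theta) / (r * math.sin(theta)) ** 2)
    return advect + dp_t / r - visc
```

The advective term and the pressure gradient cancel exactly, and the viscous term vanishes on its own, so the residual is limited only by the finite-difference error.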
In the limit $\varepsilon \to 0$, the non-stationary
three-dimensional fluid flow is confined on a sphere $S$ of unit
radius parameterized by the polar (latitude) angle $\theta$ and
azimuthal (longitude) angle $\phi$,
\begin{equation}
\label{domain-complete} S =\left\{ \left( \theta ,\phi \right)
,\text{ \ }0\leqslant \theta \leqslant \pi ,\text{ }0\leqslant \phi
<2\pi \right\}.
\end{equation}
Since the velocity vector ${\bf u}$ and the pressure $p$ in the NS
equations (\ref{1.1}) are coupled together by the
incompressibility constraint $\nabla \cdot {\bf u} = 0$, it is
difficult to analyze the full set of three-dimensional equations.
A common approach to simplify the problem is to use artificial
methods such as pressure stabilization and projection
\cite{Shen}. The error estimates of the pressure stabilization and
projection methods are not, however, mathematically precise. Instead,
we shall use the result of Theorem B in \cite{TZ97}, which
states that provided the function ${\bf u}_0(r,\theta,\phi)$ is
smooth enough, the strong global solution ${\bf
u}(r,\theta,\phi,t)$ of the three-dimensional NS equations
converges as $\varepsilon \to 0$ to the strong unique global
solution ${\bf v}(\theta,\phi,t)$ of the two-dimensional NS
equations on the sphere, where
$$
{\bf v}(\theta,\phi,t) = \lim\limits_{\varepsilon \to 0}
\frac{1}{\varepsilon} \int_1^{1+\varepsilon} r {\bf
u}(r,\theta,\phi,t) dr = (0,v_{\theta},v_{\phi}).
$$
The vector ${\bf v}(\theta,\phi,t)$ is interpreted as the average
velocity with respect to the radial coordinate $r$. The
two-dimensional NS equations on a sphere $S$ in spherical angles
$(\theta,\phi)$ are written explicitly as follows \cite{TZ97}:
\begin{eqnarray*}
&& \frac{\partial v_{\theta}}{\partial t} + v_{\theta}
\frac{\partial v_{\theta}}{\partial \theta} + \frac{v_{\phi}}{\sin
\theta} \frac{\partial v_{\theta}}{\partial \phi} - v_{\phi}^2
\cot \theta = - \frac{\partial p}{\partial \theta} + \nu \left(
\Delta_S v_{\theta } - \frac{v_{\theta }}{\sin^2 \theta} -
\frac{2\cos \theta}{\sin ^2 \theta } \frac{\partial v_{\phi
}}{\partial \phi} \right), \\ && \frac{\partial
v_{\phi}}{\partial t} + v_{\theta} \frac{\partial
v_{\phi}}{\partial \theta} + \frac{v_{\phi}}{\sin \theta}
\frac{\partial v_{\phi}}{\partial \phi} + v_{\theta} v_{\phi} \cot
\theta = - \frac{1}{\sin \theta} \frac{\partial p}{\partial \phi}
+ \nu \left( \Delta_S v_{\phi } + \frac{2\cos \theta }{\sin^2
\theta } \frac{\partial v_{\theta
}}{\partial \phi} - \frac{v_{\phi }}{\sin^{2}\theta }\right), \\
&& \frac{1}{\sin \theta} \frac{\partial }{\partial \theta }\left(
\sin \theta v_{\theta} \right) + \frac{1}{\sin \theta}
\frac{\partial v_{\phi }}{\partial \phi } = 0,
\end{eqnarray*}
where $\Delta_S$ is the Laplace-Beltrami operator in spherical
angles
\begin{eqnarray*}
\Delta_S =\frac{1}{\sin \theta }\frac{\partial }{\partial \theta
}\left( \sin \theta \frac{\partial }{\partial \theta }\right) +\frac{1}{\sin ^{2}\theta }%
\frac{\partial ^{2}}{\partial \phi ^{2}}.
\end{eqnarray*}
Note that no boundary conditions are specified for the vector
${\bf v}(\theta,\phi,t)$ on sphere $S$, while the initial
condition ${\bf v}|_{t = 0} = {\bf v}_0$ on $S$ is not written.
For the purposes of our work, we rewrite the two-dimensional NS
equations on the sphere $S$ in an equivalent form:
\begin{eqnarray}
&& \frac{\partial v_{\theta }}{\partial t} - \frac{v_{\phi}
\omega}{\sin \theta} + \frac{\partial q}{\partial \theta } = \nu
\left( \Delta_S v_{\theta} - \frac{v_{\theta}}{\sin^2\theta }
-\frac{2\cos \theta }{\sin ^{2}\theta} \frac{\partial v_{\phi }}{\partial \phi} \right),
\label{2.1} \\
&& \frac{\partial v_{\phi}}{\partial t} + \frac{v_{\theta}
\omega}{\sin \theta} + \frac{1}{\sin \theta} \frac{\partial
q}{\partial \phi } = \nu \left( \Delta_S v_{\phi }+\frac{2\cos
\theta }{\sin ^{2}\theta } \frac{\partial v_{\theta
}}{\partial \phi }-\frac{v_{\phi }}{\sin^{2}\theta }\right), \label{2.2} \\
&& \frac{\partial }{\partial \theta }\left( \sin \theta v_{\theta}
\right) + \frac{\partial v_{\phi }}{\partial \phi } = 0, \label{2.3}
\end{eqnarray}
where $q$ is the total (stagnation) pressure and $\omega$ is the
vorticity:
\begin{equation}
\label{vorticity-renorm} q = p + \frac{1}{2} \left( v_{\theta}^2 +
v_{\phi}^2 \right), \qquad \omega = \frac{\partial }{\partial
\theta }\left( \sin \theta \; v_{\phi } \right) -\frac{\partial
v_{\theta }}{\partial \phi}.
\end{equation}
The stationary solution (\ref{exact-solution}) corresponds to the
exact stationary solution of the two-dimensional NS equations
(\ref{2.1})--(\ref{2.3}) on the unit sphere $S$:
\begin{equation}
\label{exact-solution-2D} v_{\theta} = \frac{\alpha}{\sin \theta},
\qquad v_{\phi} = 0, \qquad q = \beta,
\end{equation}
where $(\alpha,\beta)$ are arbitrary parameters.
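One can check numerically that the profile $v_{\theta} = \alpha/\sin\theta$ is annihilated by the viscous operator in (\ref{2.1}): with $v_\phi = 0$ and constant $q$, the viscous term is the only potentially non-vanishing one (a finite-difference sketch; step size and evaluation points are arbitrary):

```python
import math

def viscous_residual(theta, alpha=1.0, h=1e-5):
    """Residual of  Delta_S v - v/sin^2(theta)  for the stationary
    profile v(theta) = alpha/sin(theta) (phi-independent), evaluated
    with nested central finite differences; it should vanish."""
    v = lambda t: alpha / math.sin(t)
    # Delta_S v = (1/sin t) d/dt ( sin t  dv/dt )  for phi-independent v
    g = lambda t: math.sin(t) * (v(t + h) - v(t - h)) / (2 * h)
    lap = (g(theta + h) - g(theta - h)) / (2 * h * math.sin(theta))
    return lap - v(theta) / math.sin(theta) ** 2
```

Analytically, $\Delta_S (\alpha/\sin\theta) = \alpha/\sin^3\theta$, which cancels the $v_\theta/\sin^2\theta$ term exactly.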
We shall also consider the situation where the external source and
sink singularities at $\theta = 0$ and $\theta = \pi$ are excluded
from the domain of the NS equations (\ref{2.1})--(\ref{2.3}). For
instance, we shall consider the truncated domain in the form of the
spherical layer
\begin{equation}
\label{domain-truncated} S_0 = \left\{ (\theta,\phi) : \quad
\theta_0 \leq \theta \leq \pi - \theta_0, \;\; 0 \leq \phi \leq
2\pi \right\},
\end{equation}
where $0 < \theta_0 < \frac{\pi}{2}$. Without loss of generality,
the spherical layer $S_0$ is truncated symmetrically at the two
rings located in the Northern and Southern semi-spheres such that
the stationary flow (\ref{exact-solution-2D}) is free of pole
singularities in $S_0$. In other words, without dipping into
details on how the fluid flow is injected on the sphere and is
collected from the sphere in a neighborhood of the North and South
poles, we will study how the fluid leaks from the Northern
semi-sphere to the Southern semi-sphere along the spherical layer
(\ref{domain-truncated}). In this context, the stationary solution
(\ref{exact-solution-2D}) is interpreted as the mass conservation
law which is obtained by integrating the free divergence condition
(\ref{2.3}).
We are interested in spectral stability of the stationary fluid flow
(\ref{exact-solution-2D}). In the case of $S$ when the singularities
are included, we prove analytically that the linearized NS equations
(\ref{2.1})--(\ref{2.3}) about the stationary solution
(\ref{exact-solution-2D}) are asymptotically stable. In the case of
$S_0$ when the singularities are excluded, the asymptotic
stability of the stationary flow can only be proved for the case
$\nu = \infty$, that is in the limit of zero Reynolds numbers. By
using the power series expansions, we approximate solutions
numerically and show that the stationary flow remains asymptotically
stable for all Reynolds numbers.
Our paper is structured as follows. Section 2 introduces the
linearization of the two-dimensional NS equations
(\ref{2.1})--(\ref{2.3}) at the stationary solution
(\ref{exact-solution-2D}) and discusses boundary conditions for the
perturbation vector. Analytical results on location of the spectrum
of the linearized problem are reported in Section 3 for
symmetry-breaking ($\phi$-dependent) perturbations and in Section 4
for symmetry-preserving ($\phi$-independent) perturbations.
Numerical results on computations of eigenvalues of the linearized
problem are described in Section 5 for symmetry-breaking
perturbations and in Section 6 for symmetry-preserving
perturbations. Section 7 discusses applications.
\section{Linearized equations and separation of variables}
Without loss of generality, we consider the stationary solution
(\ref{exact-solution-2D}) with $\alpha = 1$ and $\beta = 0$. The
presence of arbitrary parameters ($\alpha,\beta$) introduces
time-independent (neutral) modes of the linearized equations,
which we will also account for in this section. We consider
infinitesimal time-dependent perturbations of the stationary flow
with $\alpha = 1$ and $\beta = 0$ in the form
\begin{equation}
v_{\theta} = \frac{1}{\sin \theta} + U(\theta,\phi) e^{\lambda t},
\qquad v_{\phi} = V(\theta,\phi) e^{\lambda t}, \qquad q =
Q(\theta,\phi) e^{\lambda t}, \label{2.6}
\end{equation}
where $\lambda \in \mathbb{C}$ is a parameter, such that
perturbations with ${\rm Re}(\lambda) > 0$ imply spectral
instability of the stationary flow. If ${\rm Re}(\lambda) < 0$ for
all perturbations, the stationary flow is asymptotically stable,
while if ${\rm Re}(\lambda) = 0$ for some perturbations and ${\rm
Re}(\lambda) < 0$ for all other perturbations, the stationary flow
is stable in the sense of Lyapunov.
By neglecting the quadratic terms of the perturbation, we linearize
the NS equations (\ref{2.1})--(\ref{2.3}) with the expansion
(\ref{2.6}) to the form:
\begin{eqnarray}
&& \lambda U + \frac{\partial Q}{\partial \theta } = \nu \left(
\Delta_S U - \frac{U}{\sin ^{2}\theta} - \frac{2\cos \theta }{\sin^2
\theta } \frac{\partial V}{\partial \phi }\right), \label{2.7} \\
&& \lambda V + \frac{1}{\sin^2 \theta} \left(
\frac{\partial}{\partial \theta} \left( \sin \theta \; V \right) -
\frac{\partial U}{\partial \phi} \right) + \frac{1}{\sin \theta}
\frac{\partial Q}{\partial \phi } = \nu \left( \Delta_S V +
\frac{2\cos \theta }{\sin^2 \theta }\frac{\partial U}{\partial \phi}
-\frac{V}{\sin ^{2}\theta }\right) , \label{2.8} \\
&& \frac{\partial}{\partial \theta} \left( \sin \theta \; U \right)
+ \frac{\partial V}{\partial \phi }=0. \label{2.9}
\end{eqnarray}
Perturbation terms of the velocity vector must satisfy some boundary
conditions in the domains $S_0$ or $S$. It is natural to assume
that the velocity vector is periodic with respect to the angle
$\phi$:
\begin{equation}
\label{bc-periodic} U(\theta,\phi+2\pi) = U(\theta,\phi), \quad
V(\theta,\phi+2\pi) = V(\theta,\phi).
\end{equation}
Therefore, we look for Fourier series solutions of the system
(\ref{2.7})--(\ref{2.9}):
\begin{equation}
U(\theta,\phi) = \sum_{k \in \mathbb{Z}} U_k(\theta) e^{i k \phi},
\quad V(\theta,\phi) = \sum_{k \in \mathbb{Z}} V_k(\theta) e^{i k
\phi}, \quad Q(\theta,\phi) = \sum_{k \in \mathbb{Z}} Q_k(\theta)
e^{i k \phi}.
\end{equation}
We also require that the components $(U_k,V_k)$ of the velocity
vector be square integrable in $S_0$ or $S$ with respect to the
spherical weight:
\begin{equation}
\label{square-integrable} \int_{\theta_0}^{\pi-\theta_0} \left(
|U_k|^2 + |V_k|^2 \right) \sin \theta d \theta < \infty,
\end{equation}
where $0 \leq \theta_0 < \pi/2$. When the domain is the truncated
spherical shell $S_0$, we require that the components of the velocity
vector vanish at the regular end points of the domain:
\begin{equation}
\label{bc-dirichlet-0} U_k(\theta_0) = U_k(\pi - \theta_0) =
V_k(\theta_0) = V_k(\pi-\theta_0) = 0.
\end{equation}
The complete sphere $S$ with the singular end points will be
considered in the limit $\theta_0 \to 0$. We require that the
components of the vorticity in (\ref{vorticity-renorm}) vanish at
the singular end points of the domain:
\begin{equation}
\label{bc-dirichlet} \lim_{\theta \to 0} U_k(\theta) =
\lim_{\theta \to \pi} U_k(\theta) = \lim_{\theta \to 0} \sin
\theta V_k(\theta) = \lim_{\theta \to \pi} \sin \theta V_k(\theta)
= 0.
\end{equation}
It will be clear later that separation of variables is different
between the cases $k = 0$ and $k \neq 0$. We say that the
correction terms with $k = 0$ represent {\em symmetry-preserving}
perturbations of the stationary flow (\ref{2.6}), while the
correction terms with $k \neq 0$ represent {\em symmetry-breaking}
perturbations.
{\bf Case $k \neq 0$:} It follows from the divergence-free condition
(\ref{2.9}) that one can introduce the stream function
$\Psi_k(\theta)$ for the velocity vector $(U_k,V_k)$ as follows:
\begin{equation}
U_k = \frac{i k}{\sin \theta} \Psi_k(\theta), \qquad V_k =
-\Psi'_k(\theta). \label{stream-function-representation}
\end{equation}
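The representation (\ref{stream-function-representation}) satisfies the divergence-free condition (\ref{2.9}) identically for any $\Psi_k$, since $\frac{d}{d\theta}(\sin\theta\, U_k) + ik V_k = ik\Psi_k' - ik\Psi_k' = 0$. A quick numerical check with an arbitrary smooth test stream function (the test $\Psi$ below is our own choice):

```python
import math

def continuity_residual(theta, k=2, h=1e-5):
    """Residual of  d/dtheta( sin(theta) U_k ) + i k V_k  for
    U_k = i k Psi/sin(theta), V_k = -Psi', with an arbitrary smooth
    test stream function Psi; the identity holds for any Psi."""
    psi = lambda t: math.sin(t) ** 2 * math.cos(3 * t)    # test choice
    dpsi = lambda t: (psi(t + h) - psi(t - h)) / (2 * h)
    U = lambda t: 1j * k * psi(t) / math.sin(t)
    V = lambda t: -dpsi(t)
    d_sinU = (math.sin(theta + h) * U(theta + h)
              - math.sin(theta - h) * U(theta - h)) / (2 * h)
    return d_sinU + 1j * k * V(theta)
```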
The system of linearized equations (\ref{2.7})--(\ref{2.8}) reduces
to the coupled ODE system for $\Psi_k(\theta)$ and $Q_k = i k
P_k(\theta)$:
\begin{eqnarray}
\frac{d}{d\theta} P_k & = & \frac{1}{\sin \theta} \left( \nu
\Delta_k \Psi_k - \lambda \Psi_k \right), \label{system-1} \\
\frac{k^2}{\sin \theta} P_k & = & \frac{d}{d\theta}\left( \nu
\Delta_k \Psi_k - \lambda \Psi_k \right) - \frac{1}{\sin \theta}
\Delta_k \Psi_k,\label{system-2}
\end{eqnarray}
where
\begin{equation}
\label{Laplacian-k} \Delta_k = \frac{d^2}{d \theta^2} + \frac{\cos
\theta}{\sin \theta} \frac{d}{d\theta} - \frac{k^2}{\sin^2 \theta}.
\end{equation}
Let $\Phi_k = \Delta_k \Psi_k$ be a new variable. Then, the
variable $P_k$ can be excluded from the system
(\ref{system-1})--(\ref{system-2}), such that the system reduces
to a closed second-order ODE:
\begin{equation}
\label{eigenvalue-equation} \nu \Delta_k \Phi_k -
\frac{\Phi_k'}{\sin \theta} = \lambda \Phi_k.
\end{equation}
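The elimination of $P_k$ behind (\ref{eigenvalue-equation}) is routine but error-prone; it can be confirmed with a short symbolic computation (a SymPy sketch of our own, not part of the analysis). Below, $g$ stands for $\nu \Delta_k \Psi_k - \lambda \Psi_k$ and $f$ for $\Phi_k = \Delta_k \Psi_k$; solving (\ref{system-2}) for $P_k$ and substituting into (\ref{system-1}) leaves exactly the relation $\Delta_k g = f'/\sin \theta$, which is (\ref{eigenvalue-equation}) once $g$ is expanded:

```python
import sympy as sp

th, k = sp.symbols('theta k', positive=True)
g = sp.Function('g')(th)    # stands for nu*Delta_k Psi_k - lambda*Psi_k
f = sp.Function('f')(th)    # stands for Phi_k = Delta_k Psi_k

# Solve (system-2) for P_k and substitute into (system-1):
P = sp.sin(th)/k**2*(sp.diff(g, th) - f/sp.sin(th))
residual = sp.diff(P, th) - g/sp.sin(th)

# (system-1) then holds if and only if Delta_k g = f'/sin(theta):
Delta_g = sp.diff(g, th, 2) + sp.cos(th)/sp.sin(th)*sp.diff(g, th) \
          - k**2/sp.sin(th)**2*g
target = sp.sin(th)/k**2*(Delta_g - sp.diff(f, th)/sp.sin(th))

assert sp.simplify(residual - target) == 0
```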
Besides the relations (\ref{system-1}) and (\ref{system-2}) between
the pressure $P_k$, the stream function $\Psi_k$ and the vorticity
$\Phi_k$, we note another relation between these components:
\begin{equation}
\label{second-equation} \Phi_k = \Delta_k \Psi_k = \sin^2 \theta
\Delta_k P_k.
\end{equation}
Due to the boundary conditions (\ref{bc-dirichlet-0}) and the
representation (\ref{stream-function-representation}), the
solution $\Psi_k(\theta)$ for the truncated spherical layer $S_0$
is defined on a closed interval $\theta_0 \leq \theta \leq \pi -
\theta_0$ for $0 < \theta_0 < \pi/2$ subject to the boundary
conditions
\begin{equation}
\Psi_k(\theta_0) = \Psi_k'(\theta_0) = \Psi_k(\pi-\theta_0) =
\Psi_k'(\pi-\theta_0) = 0. \label{X0}
\end{equation}
Since $\theta = 0$ and $\theta = \pi$ are singular points of the
system (\ref{eigenvalue-equation})--(\ref{second-equation}) when
$\theta_0 \to 0$, the solution $\Psi_k(\theta)$ for the complete
sphere $S$ is defined on an open interval $0 < \theta < \pi$
satisfying the boundary conditions from (\ref{bc-dirichlet}) and
(\ref{stream-function-representation}):
\begin{equation}
\lim_{\theta \to 0} \Psi_k(\theta) = \lim_{\theta \to 0} \sin \theta
\Psi_k'(\theta) = \lim_{\theta \to \pi} \Psi_k(\theta) =
\lim_{\theta \to \pi} \sin \theta \Psi_k'(\theta) = 0. \label{X}
\end{equation}
{\bf Case $k = 0$:} It follows from the divergence-free condition
(\ref{2.9}) that
$$
U_0 = \frac{\alpha}{\sin \theta},
$$
where $\alpha \in \mathbb{R}$. This solution resembles the neutral
eigenmode generated by the arbitrary constant $\alpha$ in the
stationary solution (\ref{exact-solution-2D}). Since the eigenmode
violates the boundary conditions (\ref{X0}) on $S_0$ and has pole
singularities on $S$, we set $\alpha = 0$. In this case, the first
equation (\ref{2.7}) admits a solution $Q_0 = \beta$, where $\beta
\in \mathbb{R}$. It is also a neutral eigenmode generated by the
arbitrary constant $\beta$ in the stationary solution
(\ref{exact-solution-2D}). Since it is a trivial eigenmode (the
pressure term is defined with accuracy to an addition of an
arbitrary constant), we can set $\beta = 0$.
When $\alpha = \beta = 0$, the representation for $U_0 = Q_0 = 0$
matches the previous representation for $U_k$ and $Q_k$ with $k =
0$. Using the representation (\ref{stream-function-representation}),
we introduce $V_0 = -\Psi_0'(\theta)$ and rewrite the second
equation (\ref{2.8}) as follows:
\begin{equation}
\label{zero-equation} \frac{d}{d\theta}\left( \nu \Delta_0 \Psi_0 -
\lambda \Psi_0 \right) - \frac{1}{\sin \theta} \Delta_0 \Psi_0 = 0,
\end{equation}
where $\Delta_0$ is defined by (\ref{Laplacian-k}) with $k = 0$.
Letting $\Phi_0 = \Delta_0 \Psi_0$ and taking one more derivative
in $\theta$, one can convert the non-trivial equation
(\ref{zero-equation}) to the previous form
(\ref{eigenvalue-equation}) with $k = 0$. Therefore, all solutions
of (\ref{zero-equation}) are also solutions of
(\ref{eigenvalue-equation}) with $k = 0$, while the converse
statement is not true. It follows from (\ref{bc-dirichlet-0}) and
(\ref{bc-dirichlet}) that the stream function $\Psi_0(\theta)$
satisfies the Neumann boundary conditions
\begin{equation}
\Psi_0'(\theta_0) = \Psi_0'(\pi-\theta_0) = 0 \label{X0-zero}
\end{equation}
in the case of $S_0$ and the boundary conditions
\begin{equation}
\lim_{\theta \to 0} \sin \theta \Psi_0'(\theta) = \lim_{\theta \to
\pi} \sin \theta \Psi_0'(\theta) = 0 \label{X-zero}
\end{equation}
in the case of $S$. Stability analysis of the linearized system
(\ref{eigenvalue-equation})--(\ref{second-equation}) with $k \neq
0$ is developed separately from that of the linearized equation
(\ref{zero-equation}) with $k = 0$. Our main results on
eigenvalues of the linearized systems
(\ref{eigenvalue-equation})--(\ref{second-equation}) and
(\ref{zero-equation}) are summarized in Table 1. The remainder of
this article is devoted to the proofs and numerical verifications
of results described in Table 1.
\vspace{0.1cm}
\begin{tabular}{|p{1.5cm}|p{2cm}|p{2cm}|p{4cm}|p{4cm}|}
\hline
Index $k$ & Viscosity $\nu$ & Cut-off $\theta_0$ & Eigenvalues & Results \\
\hline
$k \neq 0$ & $0 < \nu \leq \infty$ & $\theta_0 = 0$ & real negative & Proposition 2 \\
\hline $k \neq 0$ & $\nu = \infty$ & $0 < \theta_0 <
\frac{\pi}{2}$ & real
negative & Propositions 3 and 4 \\
\hline $k \neq 0$ & $0 < \nu < \infty$ & $0 < \theta_0 <
\frac{\pi}{2}$ & real or complex & Section 5 \\
\hline $k = 0$ & $0 < \nu \leq \infty$ & $\theta_0 = 0$ & real
negative or absent & Proposition 8 \\
\hline $k = 0$ & $0 < \nu \leq \infty$ & $0 < \theta_0 < \frac{\pi}{2}$ & real negative & Propositions 9 and 10 \\
\hline $k = 0$ & $0 < \nu < \infty$ & $0 < \theta_0 < \frac{\pi}{2}$ & real negative & Section 6 \\
\hline
\end{tabular}
{\bf Table 1:} Summary of main results.
\section{Stability analysis for $k \neq 0$}
We rewrite the coupled system
(\ref{eigenvalue-equation})--(\ref{second-equation}) for
$(\Psi_k,\Phi_k)$ by using the variable $x = \cos \theta$:
\begin{equation}
\label{stability-problem} L_k \Psi_k = \Phi_k, \qquad L_k \Phi_k +
\epsilon \Phi_k' = \mu \Phi_k,
\end{equation}
where $\epsilon = 1/\nu$ is the Reynolds number of the basic flow,
$\mu = \lambda/\nu$ is a rescaled eigenvalue, and $L_k$ is the
Sturm--Liouville operator for associated Legendre functions
\begin{equation}
L_k = \frac{d}{d x} \left[ (1-x^2) \frac{d}{dx} \right] -
\frac{k^2}{1-x^2}. \label{Sturm-Liouville}
\end{equation}
The system (\ref{stability-problem}) is defined on the symmetric
interval $-x_0 \leq x \leq x_0$, where $x_0 = \cos \theta_0$. The
spherical layer $S_0$ corresponds to the case $0 < x_0 < 1$, while
the complete sphere $S$ corresponds to the limit $x_0 \to 1$. In
the latter case, the interval $x \in [-1,1]$ connects two singular
points $x = \pm 1$ of the Sturm--Liouville operator
(\ref{Sturm-Liouville}). The case $\epsilon = 0$ corresponds to
the infinitely viscous fluid, while the case $\epsilon = \infty$
corresponds to the inviscid fluid.
Using the representation (\ref{stream-function-representation})
and the transformation $x = \cos \theta$ with $\Psi_k'(\theta) = -
\sqrt{1 - x^2} \Psi_k'(x)$, we rewrite the condition
(\ref{square-integrable}) as the norm on function space ${\cal
H}_k$, which is used throughout our work:
\begin{equation}
\label{energy-norm} \| \Psi_k \|^2_{{\cal H}_k} = \int_{-x_0}^{x_0}
\left[ (1-x^2) |\Psi_k'(x)|^2 + \frac{k^2}{1 - x^2} |\Psi_k(x)|^2
\right] dx < \infty.
\end{equation}
We shall denote ${\cal H}_k([-x_0,x_0])$ when $0 < x_0 < 1$ and
${\cal H}_k([-1,1])$ when $x_0 = 1$. When $0 < x_0 < 1$, the
linearized system (\ref{stability-problem}) is defined on function
space
\begin{equation}
X_0 = \left\{ \Psi_k \in {\cal H}_k([-x_0,x_0]) : \quad \Psi_k(\pm
x_0) = \Psi_k'(\pm x_0) = 0 \right\}, \label{bc1}
\end{equation}
where the boundary conditions (\ref{X0}) are taken into account.
When $x_0 = 1$, the linearized system (\ref{stability-problem}) is
defined in function space
\begin{equation}
\label{bc2} X = \left\{ \Psi_k \in {\cal H}_k([-1,1]) : \quad
\lim_{x \to \pm 1} \Psi_k(x) = \lim_{x \to \pm 1} (1-x^2) \Psi'_k(x)
= 0 \right\},
\end{equation}
where the boundary conditions (\ref{X}) are taken into account. We
note that the boundary conditions in the definition of $X$ are
redundant, since the norm (\ref{energy-norm}) is finite on $x \in
[-1,1]$ only if the boundary conditions in (\ref{bc2}) are
satisfied. Nevertheless, we write these redundant boundary
conditions according to the standard formalism of the singular
Sturm--Liouville problems \cite{Strauss}.
The Sturm--Liouville operator $L_k$ in (\ref{Sturm-Liouville}) is
self-adjoint with respect to the boundary conditions in $X_0$ and
$X$, such that $(\Psi_k, L_k \Psi_k) = -\| \Psi_k \|^2_{{\cal
H}_k} < 0$ is finite and real-valued for $\Psi_k \in {\cal
H}_k([-x_0,x_0])$. Therefore, the kernel of $L_k$ is trivial in
$X_0$ and $X$. Because the smallest eigenvalue of $L_k$ is bounded
away from zero, the operator $L_k$ is invertible and ${\rm
range}(L_k)$ is dense in the space of square integrable functions
on $x \in [-x_0,x_0]$ for any $0 < x_0 \leq 1$. Therefore, as
follows from the first equation of the system
(\ref{stability-problem}), the component $\Phi_k \in {\rm
range}(L_k)$ is square integrable on $x \in [-x_0,x_0]$ but does
not satisfy any specific boundary conditions at the end points $x =
\pm x_0$.
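The identity $(\Psi_k, L_k \Psi_k) = -\| \Psi_k \|^2_{{\cal H}_k}$ can be illustrated with a short SymPy computation (our own sanity check; the sample function, $k = 1$ and $x_0 = 1$ are our choices):

```python
import sympy as sp

x = sp.symbols('x')
k = 1
Psi = 1 - x**2                 # sample function vanishing at x = +/- 1

# L_k Psi and the quadratic form (Psi, L_k Psi):
LPsi = sp.diff((1 - x**2)*sp.diff(Psi, x), x) - k**2/(1 - x**2)*Psi
lhs = sp.integrate(Psi*LPsi, (x, -1, 1))

# The squared H_k norm of Psi from (energy-norm):
norm2 = sp.integrate((1 - x**2)*sp.diff(Psi, x)**2
                     + k**2/(1 - x**2)*Psi**2, (x, -1, 1))

assert lhs == -norm2           # (Psi, L_k Psi) = -||Psi||^2
assert lhs == sp.Rational(-12, 5)
```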
The eigenvalue problem (\ref{stability-problem}) in $X_0$ and $X$
has two continuous parameters $0 < x_0 \leq 1$ and $\epsilon \geq 0$
and one integer parameter $k \in \mathbb{Z} \backslash \{0\}$, while
$(\mu,\Psi_k)$ is the eigenvalue-eigenfunction pair that defines
spectral stability of the stationary flow. The following results
characterize the spectrum of the eigenvalue problem in the cases:
(i) $x_0 = 1$ and $\epsilon \geq 0$; (ii) $0 < x_0 < 1$ and
$\epsilon = 0$; and (iii) in the limit $x_0 \to 1$ when $\epsilon =
0$. Based on these results, we prove the following theorem:
\begin{theorem}
When $x_0 = 1$ and $\epsilon \geq 0$ or $0 < x_0 \leq 1$ and
$\epsilon = 0$, the stationary flow (\ref{exact-solution-2D}) is
asymptotically stable with respect to symmetry-breaking
perturbations in the sense that the spectrum of the linearized
problem (\ref{stability-problem}) in $X_0$ or $X$ consists of a
set of isolated eigenvalues $\mu$ of finite multiplicities, where
$\mu \in \mathbb{R}_-$ is bounded away from zero.
\label{theorem-1}
\end{theorem}
The proof of Theorem \ref{theorem-1} consists of the proofs of
three individual propositions.
\begin{proposition}
\label{proposition-x0-1} A complete spectrum of the eigenvalue
problem (\ref{stability-problem}) with $x_0 = 1$ and $\epsilon \geq
0$ in $X$ consists of simple isolated eigenvalues at $\mu = \mu_n$,
\begin{equation}
\label{eigenvalues-1} \mu_n = -s_n(s_n+1), \qquad s_n = \sigma + n,
\end{equation}
where $\sigma = \sqrt{k^2 + \epsilon^2/4} > 0$ and $n \geq 0$ is an
integer.
\end{proposition}
\begin{proof}
Let $\mu = -s(s+1)$ and
\begin{equation}
\label{transformation-1} \Phi_k(x) = \left( \frac{1-x}{1+x}
\right)^{\epsilon/4} \varphi(x).
\end{equation}
The second equation of the system (\ref{stability-problem})
transforms to the associated Legendre equation
\begin{equation}
\label{associated-Legendre} \frac{d}{d x} \left[ (1-x^2) \frac{d
\varphi}{dx} \right] - \frac{\sigma^2}{1-x^2} \varphi + s(s+1)
\varphi = 0, \qquad -1 < x < 1,
\end{equation}
where $\sigma = \sqrt{k^2 + \epsilon^2/4} > 0$. Since the linear ODE
(\ref{associated-Legendre}) has no singular points on $-1 < x < 1$,
there exists a set of two linearly independent twice continuously
differentiable solutions in any compact subset of $x \in (-1,1)$
\cite{CL55}. Singularity analysis of the ODE
(\ref{associated-Legendre}) as $x \to \pm 1$ shows that the solution
$\varphi(x)$ either has a singular (unbounded) behavior like $(1
\mp x)^{-\sigma/2}$ as $x \to \pm 1$ or a regular (vanishing)
behavior like $(1 \mp x)^{\sigma/2}$ as $x \to \pm 1$.
Let $\varphi(x)$ be a regular solution of
(\ref{associated-Legendre}) on $x \in [-1,1]$, such that $\varphi(x)
\sim (1\mp x)^{\sigma/2}$ and $\Phi_k(x) \sim (1 \mp x)^{\pm
\epsilon/4 + \sigma/2}$ as $x \to \pm 1$. Since the Sturm--Liouville
operator $L_k$ is invertible on $\Phi_k \in L^2([-1,1])$ for $k \neq
0$, the first equation of the system (\ref{stability-problem})
admits a solution $\Psi_k(x)$ that behaves like $(1 \mp x)^{1 \pm
\epsilon/4 + \sigma/2}$ as $x \to \pm 1$. Since $\pm \epsilon +
\sqrt{\epsilon^2 + 4 k^2} \geq 0$ for any $k \in \mathbb{Z}$ and
$\epsilon \geq 0$, the function $\Phi_k(x)$ is bounded on $x \in
[-1,1]$ such that $\Phi_k \in L^2([-1,1])$ while the function
$\Psi_k(x)$ belongs to the function space $X$ in (\ref{bc2}).
Therefore, if $\varphi(x)$ is a regular solution of
(\ref{associated-Legendre}), then $\Psi_k(x)$ is an eigenfunction of
the eigenvalue problem (\ref{stability-problem}) in $X$.
Let $\varphi(x)$ be a singular solution of
(\ref{associated-Legendre}), such that $\varphi(x) \sim (1 \mp
x)^{-\sigma/2}$ and $\Phi_k(x) \sim (1 \mp x)^{\pm \epsilon/4 -
\sigma/2}$ in at least one limit $x \to \pm 1$. Since $\pm
\epsilon - \sqrt{\epsilon^2 + 4 k^2} \leq - 2 |k| \leq -2$ for $k
\neq 0$ and $\epsilon \geq 0$, the function $\Phi_k$ does not
belong to $L^2([-1,1])$ and hence $\Psi_k(x)$ cannot be in $X$.
By Theorem 10 on p. 1441 in \cite{DS}, the essential spectrum of
the formally self-adjoint operator (\ref{associated-Legendre}) is
void. Therefore, the complete spectrum of the linearized system
(\ref{stability-problem}) in $X$ consists of isolated eigenvalues
$\mu$, which correspond to {\em regular} solutions $\varphi(x)$ of
the associated Legendre equation (\ref{associated-Legendre}).
Let $\varphi(x)$ be a regular solution of
(\ref{associated-Legendre}) and write $\varphi(x) =
(1-x^2)^{\sigma/2} F(x)$, where $F(x)$ is bounded as $x \to \pm 1$.
This substitution transforms the associated Legendre equation
(\ref{associated-Legendre}) to the hypergeometric equation
\begin{equation}
\label{hypergeometric-equation} z(1-z) F''(z) + \left( \gamma -
(\alpha + \beta + 1)z\right) F'(z) - \alpha \beta F(z) = 0,
\end{equation}
where
\begin{equation}
\label{hypergeometric-parametrization} z = \frac{1-x}{2}, \quad
\alpha = \sigma - s, \quad \beta = \sigma + s + 1, \quad \gamma =
\sigma + 1.
\end{equation}
The only solution of the ODE (\ref{hypergeometric-equation}) which
is bounded as $x \to 1$ ($z \to 0$) is the hypergeometric function
$F(z;\alpha,\beta,\gamma)$, which admits the power series at $z = 0$
(see 9.100 on p. 995 in \cite{GR}):
\begin{equation}
\label{hypergeometric} F(z;\alpha,\beta,\gamma) = 1 + \frac{\alpha
\beta}{\gamma 1!} z + \frac{\alpha (\alpha+1) \beta (\beta +
1)}{\gamma (\gamma + 1) 2!} z^2 + \cdots
\end{equation}
The hypergeometric series (\ref{hypergeometric}) converges for $|z|
< 1$ but it diverges as $z \to 1$ since $\alpha + \beta - \gamma =
\sigma > 0$ unless the truncation of the power series to a
polynomial in $z$ occurs (see 9.101--9.102 on p. 995 in \cite{GR}).
The latter case is the only case when the solution of the ODE
(\ref{hypergeometric-equation}) is bounded in both limits $x \to 1$
($z \to 0$) and $x \to -1$ ($z \to 1$). It is easy to see that the
truncation occurs when either $\alpha = -n$ or $\beta = -m$ with
non-negative integers $n$ and $m$. The two cases are in fact
equivalent to each other since $\mu = -s(s+1) = (\alpha - \sigma)
(\beta - \sigma)$ and $\alpha + \beta = 1 + 2 \sigma$. Let $\alpha =
-n$, such that $s = \sigma + n$, $\beta = 2\sigma + n + 1$ and
$\gamma = \sigma + 1$. In this case, the function
$F\left(z;-n,n+1+2\sigma,1+\sigma\right) \equiv F_n(x)$ is a
polynomial of degree $n$, e.g.
\begin{equation}
F_0 = 1, \quad F_1 = x, \quad F_2 = \frac{(2\sigma+3) x^2 -
1}{2(1+\sigma)}, \quad F_3 = \frac{(5 + 2 \sigma) x^3 - 3
x}{2(1+\sigma)}, \label{polynomials-Legendre}
\end{equation}
while the simple eigenvalues $\mu = \mu_n$ are given by the
expression (\ref{eigenvalues-1}). When $\sigma = 0$, the polynomials
$F_n(x)$ coincide with the Legendre polynomials $P_n(x)$ in 8.91 on
p. 973 of \cite{GR}.
\end{proof}
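The truncation argument can be checked numerically with mpmath (our own sketch; the sample values $k = 1$, $\epsilon = 1$, $x = 0.3$ and the choice $n = 2$ are ours): the hypergeometric function with $\alpha = -n$ reproduces the polynomial $F_2$ in (\ref{polynomials-Legendre}), and $\varphi_n = (1-x^2)^{\sigma/2} F_n$ satisfies the associated Legendre equation (\ref{associated-Legendre}) with eigenvalue $s_n(s_n+1)$, $s_n = \sigma + n$:

```python
import mpmath as mp

mp.mp.dps = 30
k, eps = 1, 1                        # sample values (our choice)
sigma = mp.sqrt(k**2 + mp.mpf(eps)**2/4)

def F(n, x):
    # hypergeometric function with alpha = -n: a polynomial of degree n
    return mp.hyp2f1(-n, n + 1 + 2*sigma, 1 + sigma, (1 - x)/2)

x = mp.mpf('0.3')
# F_2 agrees with the explicit polynomial in (polynomials-Legendre):
F2 = ((2*sigma + 3)*x**2 - 1)/(2*(1 + sigma))
assert abs(F(2, x) - F2) < mp.mpf('1e-20')

# phi_n = (1-x^2)^{sigma/2} F_n solves the associated Legendre equation
# with eigenvalue -s_n(s_n+1), s_n = sigma + n (checked here for n = 2):
n = 2
s = sigma + n
phi = lambda t: (1 - t**2)**(sigma/2)*F(n, t)
res = (1 - x**2)*mp.diff(phi, x, 2) - 2*x*mp.diff(phi, x) \
      - sigma**2/(1 - x**2)*phi(x) + s*(s + 1)*phi(x)
assert abs(res) < mp.mpf('1e-15')
```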
\begin{proposition}
\label{proposition-epsilon-0} A complete spectrum of the eigenvalue
problem (\ref{stability-problem}) with $0 < x_0 < 1$ and $\epsilon =
0$ in $X_0$ consists of isolated eigenvalues $\mu$, which are (i)
real and strictly negative and (ii) either simple or double with
linearly independent eigenfunctions.
\end{proposition}
\begin{proof}
We first show that no zero eigenvalue $\mu = 0$ exists in the
eigenvalue problem (\ref{stability-problem}) with $0 < x_0 < 1$ and
$\epsilon = 0$ in $X_0$. Let $\Psi_k(x)$ be a $C^4([-x_0,x_0])$
solution of the fourth-order ODE $L_k^2 \Psi_k = 0$ in function
space $X_0$. Then,
$$
(\Psi_k,L_k^2 \Psi_k) = (1-x^2) \left[ \Psi_k (L_k \Psi_k)' -
\Psi_k' (L_k \Psi_k) \right] |_{x=-x_0}^{x=x_0} + (L_k \Psi_k, L_k
\Psi_k) = (L_k \Psi_k, L_k \Psi_k).
$$
Since $L_k^2 \Psi_k = 0$, this identity yields $\| L_k \Psi_k \|^2 =
0$, so that $\Psi_k(x)$ is in fact a solution of the second-order
ODE $L_k \Psi_k = 0$. The boundary conditions in $X_0$ admit the
only solution $\Psi_k(x) \equiv 0$, such that the eigenvalue problem
(\ref{stability-problem}) contains no eigenvalue $\mu = 0$ in $X_0$.
When $\mu \neq 0$ and $\epsilon = 0$, the system
(\ref{stability-problem}) admits a general solution in the form
$$
\Psi_k(x) = \frac{\phi(x)}{\mu} + \psi(x), \qquad \Phi_k(x) =
\phi(x),
$$
where $\psi(x)$ and $\phi(x)$ are general solutions of the
homogeneous second-order ODEs
$$
L_k \psi = 0, \qquad L_k \phi = \mu \phi.
$$
Since the operator $L_k$ is invariant with respect to the inversion
symmetry $x \mapsto - x$, each homogeneous second-order ODE has
linearly independent symmetric (even) and anti-symmetric (odd)
solutions denoted by subscripts $+$ and $-$ respectively. Therefore,
we obtain the decomposition
\begin{eqnarray*}
\Psi_k(x) & = & d_+ \frac{\phi_+(x)}{\mu} + c_+ \psi_+(x) + d_-
\frac{\phi_-(x)}{\mu} + c_- \psi_-(x), \\
\Phi_k(x) & = & d_+ \phi_+(x) + d_- \phi_-(x),
\end{eqnarray*}
where $(c_+,c_-,d_+,d_-)$ are constants and the functions
$\phi_{\pm}(x)$ and $\psi_{\pm}(x)$ are uniquely normalized by the
initial values at $x = 0$ (e.g. $\phi_+(0) = 1$, $\phi_+'(0) = 0$
and $\phi_-(0) = 0$, $\phi_-'(0) = 1$). We note that either
$\psi_{\pm}(x_0) \neq 0$ or $\psi_{\pm}'(x_0) \neq 0$ (since
$\psi_{\pm}(x) \equiv 0$ otherwise). By using the boundary
conditions in (\ref{bc1}), we decompose the boundary-value problems
into two uncoupled systems with
$$
d_{\pm} \phi_{\pm}(x_0) + \mu c_{\pm} \psi_{\pm}(x_0) = 0, \qquad
d_{\pm} \phi_{\pm}'(x_0) + \mu c_{\pm} \psi_{\pm}'(x_0) = 0,
$$
such that a non-zero solution for $(c_+,c_-,d_+,d_-)$ exists
provided
$$
\phi_{\pm}'(x_0) \psi_{\pm}(x_0) = \phi_{\pm}(x_0) \psi'_{\pm}(x_0).
$$
The functions $\psi_{\pm}(x)$ are independent of $\mu$, while
$\phi_{\pm}(x)$ depend on $\mu$. We have thus obtained that the
functions $\phi_{\pm}(x)$ solve the {\em closed} eigenvalue problem
\begin{equation}
\label{equivalent-problem-1} L_k \phi_{\pm} = \mu \phi_{\pm}, \qquad
-x_0 \leq x \leq x_0,
\end{equation}
defined on the function space
\begin{equation}
\label{equivalent-problem-2} H_0 = \left\{ \phi_{\pm} \in {\cal
H}_k([-x_0,x_0]) : \;\; \psi_{\pm}(x_0) \phi'_{\pm}(x_0) -
\psi_{\pm}'(x_0) \phi_{\pm}(x_0) = 0, \;\; \phi_{\pm}(-x) = \pm
\phi_{\pm}(x) \right\}.
\end{equation}
The $\mu$-independent boundary values in
(\ref{equivalent-problem-2}) are Robin boundary conditions when
$\psi_{\pm}(x_0)$ and $\psi_{\pm}'(x_0)$ are both non-zero,
Dirichlet boundary conditions when $\psi_{\pm}(x_0) = 0$ and Neumann
boundary conditions when $\psi'_{\pm}(x_0) = 0$. The associated
Legendre operator $L_k$ is self-adjoint in $H_0$ with respect to any
of these boundary conditions \cite{Strauss}. Therefore, all
eigenvalues $\mu$ of the eigenvalue problem
(\ref{equivalent-problem-1}) in $H_0$ are real-valued and isolated,
while the corresponding eigenfunctions $\phi_{\pm}(x)$ are
real-valued. Moreover, all eigenvalues of
(\ref{equivalent-problem-1}) are simple since the Wronskian of any
two solutions of (\ref{equivalent-problem-1}) with boundary
conditions in (\ref{equivalent-problem-2}) is zero. Since $\Psi_k
\in X_0$ and $\Phi_k \in H_0$, we obtain that
$$
(\phi,\phi) = (L_k \Psi_k, \phi) = (\Psi_k, L_k \phi) = \mu
(\Psi_k,\phi) = (\phi,\phi) + \mu (\psi,\phi),
$$
such that $(\psi,\phi) = 0$ for $\mu \neq 0$. By using the above
identity, we obtain that
\begin{equation}
\label{Green-identity} \frac{1}{\mu} (\phi,\phi) = (\Psi_k,\phi) =
(\Psi_k, L_k \Psi_k) = - \| \Psi_k \|^2_{{\cal H}_k} < 0,
\end{equation}
such that $\mu < 0$ for {\em each} eigenvalue with $\Psi_k \neq 0$
and $\phi \neq 0$. By construction, eigenvalues are at most
double. The case of double eigenvalues corresponds to the
situation when the eigenvalue problems
(\ref{equivalent-problem-1})--(\ref{equivalent-problem-2}) admit
two linearly independent (even and odd) eigenfunctions for the
same value of $\mu$.
\end{proof}
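As a numerical illustration of Proposition \ref{proposition-epsilon-0} (our own sketch, not part of the proof), one can discretize $L_k$ on $[-x_0,x_0]$ by symmetric finite differences; Dirichlet conditions are used below as a simplification of the $\mu$-independent boundary conditions in (\ref{equivalent-problem-2}), and the sample parameters are our choices. The discrete spectrum is real and strictly negative:

```python
import numpy as np

# Symmetric finite-difference discretization of L_k on [-x0, x0]:
k, x0, N = 1, 0.8, 400
x = np.linspace(-x0, x0, N + 2)
h = x[1] - x[0]
xi = x[1:-1]                      # interior nodes
pm = 1 - ((x[:-1] + x[1:])/2)**2  # p(x) = 1 - x^2 at half-grid points

A = np.zeros((N, N))
for i in range(N):
    A[i, i] = -(pm[i] + pm[i + 1])/h**2 - k**2/(1 - xi[i]**2)
    if i + 1 < N:
        A[i, i + 1] = A[i + 1, i] = pm[i + 1]/h**2

mu = np.linalg.eigvalsh(A)        # symmetric matrix: real spectrum
assert mu.max() < 0               # all eigenvalues strictly negative
```

The negativity of the discrete spectrum reflects the negative definiteness of the quadratic form associated with $L_k$, which is exactly the mechanism behind the sign identity (\ref{Green-identity}).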
\begin{proposition}
Let $\{ \mu_n \}_{n \geq 0}$ be isolated eigenvalues of the
eigenvalue problem (\ref{stability-problem}) in $X_0$ with $0 < x_0
< 1$ and $\epsilon = 0$ ordered as
$$
\mu_0 \geq \mu_1 \geq ... \geq \mu_n \geq ...
$$
Then,
$$
\lim_{x_0 \to 1} \mu_n = - s_n(s_n+1), \qquad s_n = |k| + n,
$$
where $n \geq 0$. \label{proposition-continuity}
\end{proposition}
\begin{proof}
Consider even and odd solutions of the second-order ODE $L_k
\psi_{\pm} = 0$ in the limit $x_0 \to 1$. Since the kernel of $L_k$
admits no eigenfunctions in ${\cal H}_k$ for $k \neq 0$ and $0 < x_0
\leq 1$, the boundary values $\psi_{\pm}(x_0)$ must diverge as $x_0 \to
1$. Singularity analysis as $x \to \pm 1$ suggests that the solution
$\psi_{\pm}(x)$ grows like $(1 \mp x)^{-|k|/2}$ as $x \to \pm 1$,
such that $\lim\limits_{x_0 \to 1} \psi_{\pm}(x_0)/\psi'_{\pm}(x_0)
= 0$. Therefore, eigenfunctions $\phi_{\pm}(x)$ of the auxiliary
eigenvalue problem (\ref{equivalent-problem-1}) for $0 < x_0 < 1$
satisfy in the limit $x_0 \to 1$ the singular eigenvalue problem
\begin{equation}
\label{equivalent-problem-3} L_k \phi_{\pm} = \mu \phi_{\pm}, \qquad
-1 < x < 1
\end{equation}
defined on the function space
\begin{equation}
\label{equivalent-problem-4} H = \left\{ \phi_{\pm} \in {\cal
H}_k([-1,1]) : \quad \lim_{x \to \pm 1} \phi_{\pm}(x) = \lim_{x \to
\pm 1} (1-x^2) \phi_{\pm}'(x) = 0 \right\}.
\end{equation}
Again, the boundary conditions in $H$ are redundant due to
convergence of the integral in ${\cal H}_k([-1,1])$. A complete
spectrum of the eigenvalue problem
(\ref{equivalent-problem-3})--(\ref{equivalent-problem-4}) is
constructed in the proof of Proposition \ref{proposition-x0-1}:
eigenvalues are given by (\ref{eigenvalues-1}) with $\epsilon = 0$
and eigenfunctions are $\phi_{\pm}(x) = (1-x^2)^{|k|/2} F_n(x)$,
where $F_n(x)$ are associated Legendre polynomials
(\ref{polynomials-Legendre}) with $\sigma = |k|$. Convergence and
uniqueness of continuations from eigenvalues of
(\ref{equivalent-problem-1}) in $H_0$ for $x_0 < 1$ to eigenvalues
of (\ref{equivalent-problem-3}) in $H$ for $x_0 = 1$ are proved in
two steps. Theorem 5.3 of \cite{zettle} guarantees convergence and
uniqueness of continuations from the singular Sturm--Liouville
problem (\ref{equivalent-problem-3}) in $H$ to the regular
Dirichlet problem for the Sturm--Liouville operator
(\ref{equivalent-problem-1}) on $-x_0 \leq x \leq x_0$. The
Dirichlet problem is generally different from the Robin
boundary-value problem in $H_0$ by the terms $\psi_{\pm}(x_0)
\phi_{\pm}'(\pm x_0)/\psi_{\pm}'(x_0)$ in the boundary conditions
in $H_0$. However, these terms are small in the limit $x_0 \to 1$.
Unique continuation of simple eigenvalues of the Dirichlet problem
to the simple eigenvalues of the Robin problem (separately for
$\phi_+(x)$ and $\phi_-(x)$) follows by standard perturbation
theory of eigenvalues of self-adjoint Sturm--Liouville operators
in Lemma VIII 1.24 of \cite{Kato}.
\end{proof}
\begin{remark}
{\rm Theorem \ref{theorem-1} does not cover the case $0 < x_0 < 1$
and $\epsilon > 0$. Eigenvalues of the linearized problem
(\ref{stability-problem}) in this case will be computed in Section 5
numerically. } \label{remark-k}
\end{remark}
\section{Stability analysis for $k = 0$}
We rewrite the linearized equation (\ref{zero-equation}) in the
variable $x = \cos \theta$:
\begin{equation}
\label{zero-equation-stability} L_0 \Psi_0 = \Phi_0, \qquad \Phi'_0
+ \frac{\epsilon}{1 - x^2} \Phi_0 = \mu \Psi_0',
\end{equation}
where $L_0$ is the Sturm--Liouville operator for Legendre functions
\begin{equation}
L_0 = \frac{d}{d x} \left[ (1-x^2) \frac{d}{dx} \right]
\label{Sturm-Liouville-0}
\end{equation}
and $\Phi_0(x)$ is introduced similarly to the system
(\ref{stability-problem}). Incorporating the boundary conditions
(\ref{X0-zero}) and (\ref{X-zero}) in new variables, we introduce
the function spaces $X_0$ and $X$ for the eigenvalue problem
(\ref{zero-equation-stability}). When $0 < x_0 < 1$, the function
space $X_0$ is
\begin{equation}
X_0 = \left\{ \Psi_0 \in {\cal H}_0([-x_0,x_0]) : \quad \Psi_0'(\pm
x_0) = 0 \right\}. \label{bc1-zero}
\end{equation}
When $x_0 = 1$, the function space $X$ is
\begin{equation}
X = \left\{ \Psi_0 \in {\cal H}_0([-1,1]) : \quad \lim_{x \to \pm 1}
(1-x^2) \Psi_0'(x) = 0 \right\}, \label{bc2-zero}
\end{equation}
where the boundary conditions are redundant due to convergence of
the integral in ${\cal H}_0([-1,1])$. No boundary conditions on
$\Psi_0(x)$ are set at $x = \pm x_0$. Moreover, the system
(\ref{zero-equation-stability}) defines the function $\Psi_0(x)$
up to an arbitrary additive constant. Therefore, the constant
function $\Psi_0(x) \equiv {\rm const}$ is always an eigenfunction
of the system (\ref{zero-equation-stability}) with $\Phi_0(x)
\equiv 0$.
\begin{lemma}
The eigenvalue $\mu = 0$ of the linearized system
(\ref{zero-equation-stability}) in either $X_0$ or $X$ is
algebraically and geometrically simple. \label{lemma-zero}
\end{lemma}
\begin{proof}
Integrating the first equation in the system
(\ref{zero-equation-stability}) on $x \in [-x_0,x_0]$ for
$\Psi_0(x)$ in either $X_0$ or $X$, we obtain the Fredholm
Alternative condition
\begin{equation}
\label{Fredholm} \int_{-x_0}^{x_0} \Phi_0(x) dx = 0,
\end{equation}
where $0 < x_0 \leq 1$. Integrating the second equation in the
system (\ref{zero-equation-stability}), we obtain a general solution
for $\mu = 0$:
$$
\Phi_0 = c_0 \left(\frac{1 - x}{1+x}\right)^{\epsilon/2},
$$
where $c_0$ is constant. Since $\Phi_0(x)$ does not satisfy the
Fredholm Alternative condition (\ref{Fredholm}), we have to set $c_0
= 0$. Then, $\Psi_0(x)$ satisfies the second-order ODE $L_0 \Psi_0 =
0$, which admits only one eigenfunction $\Psi_0(x) \equiv {\rm
const}$ in either $X_0$ or $X$. Similarly, one can prove that the
Jordan block of the zero eigenvalue with the eigenfunction
$\Psi_0(x) \equiv {\rm const}$ and $\Phi_0(x) \equiv 0$ has
length one.
\end{proof}
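The two facts used in the proof can be confirmed with a SymPy sketch (the sample values are our choices): $\Phi_0 = c_0 \left((1-x)/(1+x)\right)^{\epsilon/2}$ solves the homogeneous second equation of (\ref{zero-equation-stability}) with $\mu = 0$, and it has a fixed sign on $(-1,1)$, so the Fredholm condition (\ref{Fredholm}) forces $c_0 = 0$:

```python
import sympy as sp

x = sp.symbols('x')
eps, c0 = sp.symbols('epsilon c_0', positive=True)
Phi0 = c0*((1 - x)/(1 + x))**(eps/2)

# Phi0 solves Phi0' + eps/(1-x^2) Phi0 = 0, i.e. the second equation
# of (zero-equation-stability) with mu = 0:
assert sp.simplify(sp.diff(Phi0, x) + eps/(1 - x**2)*Phi0) == 0

# Phi0 is strictly positive on (-1, 1), so its integral cannot vanish
# unless c0 = 0 (sampled with eps = 1, c0 = 1):
for xv in (sp.Rational(-1, 2), 0, sp.Rational(1, 2)):
    assert Phi0.subs({eps: 1, c0: 1, x: xv}) > 0
```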
We will extend results of Section 3 to the linearized problem
(\ref{zero-equation-stability}) with $\mu \neq 0$ in $X_0$ and
$X$. Neglecting the only zero eigenvalue $\mu = 0$ with the
trivial eigenfunction $\Psi_0(x) \equiv {\rm const}$, we prove the
following theorem.
\begin{theorem}
The stationary flow (\ref{exact-solution-2D}) is asymptotically
stable with respect to symmetry-preserving perturbations in the
sense that all eigenvalues $\mu$ (excluding the trivial zero) of the
linearized problem (\ref{zero-equation-stability}) with $0 < x_0
\leq 1$ and $\epsilon \geq 0$ in $X_0$ or $X$ are real and strictly
negative. \label{theorem-2}
\end{theorem}
In order to develop the analysis of eigenvalues for $\mu \neq 0$, we
shall use two equivalent reformulations of the third-order ODE
system (\ref{zero-equation-stability}) as second-order
eigenvalue problems associated with formally self-adjoint operators.
In the first reformulation, we exclude $\mu \Psi_0'(x)$ from the
system (\ref{zero-equation-stability}) and find a closed equation
for $\Phi_0(x)$,
\begin{equation}
\label{stability-zero-problem} L_0 \Phi_0 + \epsilon \Phi_0' = \mu
\Phi_0.
\end{equation}
By introducing a new dependent variable $\varphi(x)$ via
\begin{equation}
\label{transformation-2} \Phi_0(x) = \left( \frac{1-x}{1+x}
\right)^{\epsilon/4} \varphi(x),
\end{equation}
the linearized equation (\ref{stability-zero-problem}) is
transformed to the self-adjoint form given by the associated
Legendre equation
\begin{equation}
\label{associated-Legendre-zero} \frac{d}{d x} \left[ (1-x^2)
\frac{d \varphi}{dx} \right] - \frac{\epsilon^2}{4(1-x^2)} \varphi =
\mu \varphi, \qquad -x_0 < x < x_0.
\end{equation}
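A quick SymPy sanity check of the transformation (\ref{transformation-2}) (our own sketch; the sample value of $\epsilon$ and the test function are our choices): with $\Phi_0 = w \varphi$ and $w = \left((1-x)/(1+x)\right)^{\epsilon/4}$, the left-hand side of (\ref{stability-zero-problem}) equals $w$ times the left-hand side of (\ref{associated-Legendre-zero}) identically:

```python
import sympy as sp

x = sp.symbols('x')
eps = sp.Rational(17, 10)          # sample value of epsilon (our choice)
f = sp.exp(x)                      # arbitrary smooth test function

w = ((1 - x)/(1 + x))**(eps/4)     # weight in (transformation-2)
Phi = w*f

L0 = lambda u: sp.diff((1 - x**2)*sp.diff(u, x), x)
# L0 Phi + eps Phi' must equal w*(L0 f - eps^2/(4(1-x^2)) f) identically:
diff_expr = L0(Phi) + eps*sp.diff(Phi, x) \
            - w*(L0(f) - eps**2/(4*(1 - x**2))*f)

for xv in (sp.Rational(-3, 5), sp.Rational(1, 4), sp.Rational(4, 5)):
    assert abs(sp.N(diff_expr.subs(x, xv), 30)) < 1e-20
```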
By using the second equation of the system
(\ref{zero-equation-stability}), we obtain the first-order ODE for
the function $\Psi_0(x)$:
\begin{equation}
\label{Psi-0-coupling} \mu \Psi_0'(x) = \left( \frac{1 -
x}{1+x}\right)^{\epsilon/4} \left( \frac{d \varphi}{dx} +
\frac{\epsilon}{2(1-x^2)} \varphi \right).
\end{equation}
While the linearized equation (\ref{stability-zero-problem})
coincides with the second equation of the system
(\ref{stability-problem}) for $k = 0$, the present role of this
equation is different. In order to find $\Psi_0(x)$ from a
solution $\Phi_0(x)$ of the closed equation
(\ref{stability-zero-problem}), we can solve the first-order ODE
(\ref{Psi-0-coupling}) in either $X_0$ or $X$ with $\mu \neq 0$.
Therefore, as opposed to the case $k \neq 0$, we do not have to
solve the first equation of the system
(\ref{zero-equation-stability}) and the Fredholm Alternative
condition (\ref{Fredholm}) can be ignored in this approach.
In the second reformulation of the third-order ODE system
(\ref{zero-equation-stability}), we introduce a new dependent
variable $\chi(x)$ via
\begin{equation}
\label{transformation-3} \Psi_0'(x) = \left( \frac{1-x}{1+x}
\right)^{\epsilon/4} \frac{\chi(x)}{\sqrt{1-x^2}}.
\end{equation}
By using the first equation of the system
(\ref{zero-equation-stability}), we express the function $\Phi_0(x)$
in terms of $\chi(x)$:
\begin{equation}
\label{Phi-0-coupling} \Phi_0(x) = \left( \frac{1 -
x}{1+x}\right)^{\epsilon/4} \left( \sqrt{1-x^2} \frac{d \chi}{dx} -
\frac{\epsilon + 2x}{2\sqrt{1-x^2}} \chi \right).
\end{equation}
The second equation of the system (\ref{zero-equation-stability})
transforms then to the self-adjoint form:
\begin{equation}
\label{self-adjoint-form} \frac{d}{d x} \left[ (1-x^2) \frac{d
\chi}{dx} \right] - \frac{\epsilon^2 + 4 + 4 \epsilon x}{4(1-x^2)}
\chi = \mu \chi, \qquad -x_0 < x < x_0.
\end{equation}
Although the second-order ODE (\ref{self-adjoint-form}) is more
complicated than the associated Legendre equation
(\ref{associated-Legendre-zero}), the eigenfunction $\chi(x)$ is
more directly related to the function $\Psi_0(x)$ than the
eigenfunction $\varphi(x)$. In particular, when $x_0 = 1$ and $\Psi_0 \in X$, the
eigenfunction $\chi(x)$ satisfies the conditions:
\begin{equation}
\label{conditions-chi-X} \int_{-1}^1 \left( \frac{1-x}{1+x}
\right)^{\epsilon/2} \chi^2(x) dx < \infty, \qquad \lim_{x \to \pm
1} \left( \frac{1-x}{1+x} \right)^{\epsilon/4} \sqrt{1-x^2} \chi(x)
= 0.
\end{equation}
When $0 < x_0 < 1$ and $\Psi_0 \in X_0$, the eigenfunction $\chi(x)$
is any classical solution of the second-order ODE
(\ref{self-adjoint-form}) on $x \in [-x_0,x_0]$ with the Dirichlet
boundary conditions $\chi(\pm x_0) = 0$. There exists a pair of
Darboux--B\"{a}cklund transformations between the Sturm--Liouville
problems (\ref{associated-Legendre-zero}) and
(\ref{self-adjoint-form}):
\begin{eqnarray}
\label{backlund-1} \varphi(x) & = & \sqrt{1 - x^2} \chi'(x) -
\frac{\epsilon + 2 x}{2 \sqrt{1 - x^2}} \chi(x), \\
\label{backlund-2} \mu \chi(x) & = & \sqrt{1 - x^2} \varphi'(x) +
\frac{\epsilon}{2 \sqrt{1 - x^2}} \varphi(x),
\end{eqnarray}
where $\mu \neq 0$ is assumed. By Friedrichs' theorems (see,
e.g., Theorem 10 on p. 1441 or Theorem 67 on p. 1501 of \cite{DS}),
the essential spectrum of the formally self-adjoint operators
(\ref{associated-Legendre-zero}) and (\ref{self-adjoint-form}) is
void. Therefore, the spectrum of these operators consists of a
sequence of isolated eigenvalues of finite multiplicities, which
we identify in three individual propositions.
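As a consistency check (ours, not part of the argument), the composition of the maps (\ref{backlund-1}) and (\ref{backlund-2}) applied to an arbitrary smooth function reproduces the operator on the left-hand side of (\ref{self-adjoint-form}); on an eigenfunction this is exactly $\mu \chi$. A numerical sketch of this check, with illustrative function names:

```python
from math import sqrt, exp, sin, cos

def check_backlund(eps, x, chi, dchi, d2chi, h=1e-5, tol=1e-6):
    """Check that applying (backlund-1) and then (backlund-2) to an
    arbitrary smooth chi gives the self-adjoint operator of
    (self-adjoint-form) acting on chi, at a point x in (-1, 1)."""
    S = lambda t: sqrt(1 - t*t)
    # backlund-1: phi = sqrt(1-x^2) chi' - (eps + 2x)/(2 sqrt(1-x^2)) chi
    phi = lambda t: S(t)*dchi(t) - (eps + 2*t)/(2*S(t))*chi(t)
    dphi = (phi(x + h) - phi(x - h))/(2*h)   # central finite difference
    # backlund-2 applied to phi
    lhs = S(x)*dphi + eps/(2*S(x))*phi(x)
    # left-hand side of (self-adjoint-form): d/dx[(1-x^2) chi'] - q chi
    rhs = ((1 - x*x)*d2chi(x) - 2*x*dchi(x)
           - (eps*eps + 4 + 4*eps*x)/(4*(1 - x*x))*chi(x))
    assert abs(lhs - rhs) < tol

check_backlund(1.3, 0.4, exp, exp, exp)
check_backlund(0.7, -0.2, sin, cos, lambda t: -sin(t))
```

On an actual eigenfunction the right-hand side equals $\mu \chi(x)$, which is the relation (\ref{backlund-2}).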
\begin{proposition}
\label{proposition-x0-1-k-0} A complete spectrum of the eigenvalue
problem (\ref{zero-equation-stability}) with $x_0 = 1$ and $0 \leq
\epsilon < 2$ in $X$ consists of simple isolated eigenvalues at $\mu
= \mu_n$, where
\begin{equation}
\label{eigenvalues-1-zero} \mu_n = - n(n+1), \qquad n \geq 0.
\end{equation}
No non-zero eigenvalues of the eigenvalue problem
(\ref{zero-equation-stability}) with $x_0 = 1$ and $\epsilon \geq 2$
exist in $X$.
\end{proposition}
\begin{proof}
Let $\mu = -s(s+1) \neq 0$ and $\varphi(x) = (1-x^2)^{\epsilon/4}
F(x)$ and consider the associated Legendre equation
(\ref{associated-Legendre-zero}) with $x_0 = 1$. Then, the
function $F(x)$ satisfies the hypergeometric equation
(\ref{hypergeometric-equation}) under parametrization
(\ref{hypergeometric-parametrization}) with $\sigma = \epsilon/2$.
In order to identify solutions $F(x)$ of the hypergeometric
equations in the function space $\Psi_0 \in X$, we shall rewrite
the relation (\ref{Psi-0-coupling}) as follows:
\begin{equation}
\label{Psi-0-equation} -s(s+1) \Psi_0'(x) = (1 - x)^{\epsilon/2}
\left( F'(x) + \frac{\epsilon}{2(1+x)} F(x) \right).
\end{equation}
Also recall that $\Phi_0(x) = (1-x)^{\epsilon/2} F(x)$. When
$\epsilon = 0$, we find that $\Phi_0(x) = F(x)$ and $\Psi_0(x) =
-\frac{1}{s(s+1)} F(x) + {\rm const}$, such that $\Psi_0 \in X$ if
and only if $F(x) \in X$. The only set of eigenfunctions of the
Legendre equation (\ref{associated-Legendre-zero}) with $\epsilon =
0$ in $X$ is the set of Legendre polynomials $F = P_n(x)$ for $s =
n$ with $n \geq 0$ (see 8.91 on p. 973 in \cite{GR}). This set
corresponds to the eigenvalues (\ref{eigenvalues-1-zero}). Although
the zero eigenvalue $(s = n = 0)$ is excluded from the approach
above, it is still added to the spectrum by Lemma \ref{lemma-zero}.
When $\epsilon > 0$, the eigenfunction $\Psi_0(x)$ belongs to $X$
only if $F(x)$ has a regular behavior at $x = 1$ ($z = 0$). The
only solution of the hypergeometric equation
(\ref{hypergeometric-equation}) which is bounded as $x \to 1$ is
the hypergeometric function $F(z;\alpha,\beta,\gamma)$. (Indeed,
by 9.153 on p. 1001 of \cite{GR}, the other linearly independent
solution $F(x)$ has a singular behavior like $F(x) \sim
(1-x)^{-\epsilon/2}$ as $x \to 1$, which results in the divergence
$\Psi'_0(x) \sim (1-x)^{-1}$ as $x \to 1$, such that $\Psi_0
\notin X$.) By the identity 9.131 on p.998 of \cite{GR}, the
hypergeometric function $F(z;\alpha,\beta,\gamma)$ admits the
following behavior at the other singular point $x = -1$ ($z = 1$):
\begin{eqnarray}
\nonumber F(z;\alpha,\beta,\gamma) = \frac{\Gamma(\gamma)
\Gamma(\gamma-\alpha-\beta)}{\Gamma(\gamma-\alpha)
\Gamma(\gamma-\beta)} F(1-z;\alpha,\beta,\alpha+\beta-\gamma+1)\\
\label{hypergeometric-expansion} + (1-z)^{\gamma-\alpha-\beta}
\frac{\Gamma(\gamma) \Gamma(\alpha+\beta-\gamma)}{\Gamma(\alpha)
\Gamma(\beta)}
F(1-z;\gamma-\alpha,\gamma-\beta,\gamma-\alpha-\beta+1),
\end{eqnarray}
where $\Gamma(z)$ is the Gamma function and
$$
z = \frac{1-x}{2}, \quad \alpha = \frac{\epsilon}{2} - s, \quad
\beta = \frac{\epsilon}{2} + s + 1, \quad \gamma =
\frac{\epsilon}{2} + 1.
$$
Since $\alpha + \beta - \gamma + 1 = \gamma$ and $\gamma - \alpha -
\beta + 1 = 1 - \frac{\epsilon}{2}$, the relation
(\ref{hypergeometric-expansion}) can be used only for $\epsilon < 2$
(the hypergeometric function $F(z;\alpha,\beta,\gamma)$ diverges for
$\gamma = -n$ with $n \geq 0$ integer).
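The connection formula (\ref{hypergeometric-expansion}) can be verified numerically by summing the Gauss series directly. A sketch with illustrative sample values $\epsilon = 1$, $s = 0.25$, $z = 0.3$, chosen so that $\gamma - \alpha - \beta = -\epsilon/2$ is not an integer; the helper `hyp2f1` is ours:

```python
from math import gamma

def hyp2f1(a, b, c, z, terms=400):
    """Partial sum of the Gauss hypergeometric series (|z| < 1)."""
    total = term = 1.0
    for n in range(terms):
        term *= (a + n)*(b + n)/((c + n)*(n + 1))*z
        total += term
    return total

# Parametrization of the text with eps = 1, s = 0.25:
eps, s, z = 1.0, 0.25, 0.3
al, be, ga = eps/2 - s, eps/2 + s + 1, eps/2 + 1
lhs = hyp2f1(al, be, ga, z)
rhs = (gamma(ga)*gamma(ga - al - be)/(gamma(ga - al)*gamma(ga - be))
       * hyp2f1(al, be, al + be - ga + 1, 1 - z)
       + (1 - z)**(ga - al - be)
       * gamma(ga)*gamma(al + be - ga)/(gamma(al)*gamma(be))
       * hyp2f1(ga - al, ga - be, ga - al - be + 1, 1 - z))
assert abs(lhs - rhs) < 1e-9
```

Both sums converge since $|z| < 1$ and $|1 - z| < 1$; `math.gamma` handles the negative non-integer arguments $\gamma - \alpha - \beta$ and $\gamma - \beta$.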
It follows from (\ref{Psi-0-equation}) that the first term in
(\ref{hypergeometric-expansion}) leads to the singular behavior
$\Psi_0'(x) \sim (1+x)^{-1}$ as $x \to -1$ ($z \to 1$) if $\epsilon
\neq 0$, while the second term in (\ref{hypergeometric-expansion})
leads to the singular behavior $\Psi_0'(x) \sim (1+x)^{-\epsilon/2}$
as $x \to -1$ ($z \to 1$) if $\mu \neq 0$. Therefore, the
eigenfunction $\Psi_0(x)$ belongs to $X$ only if the first term in
(\ref{hypergeometric-expansion}) is removed which is only possible
if $\gamma - \alpha = 1 + s = -n$ or $\gamma - \beta = -s = -m$ with
integers $n,m \geq 0$. Both choices define the same set of
eigenvalues (\ref{eigenvalues-1-zero}) in the parametrization $\mu =
-s(s+1)$. Using another identity 9.131 on p.998 of \cite{GR},
\begin{equation}
\label{identity-hypergeometric} F(z;\alpha,\beta,\gamma) =
(1-z)^{\gamma-\alpha-\beta} F(z;\gamma-\alpha,\gamma-\beta,\gamma),
\end{equation}
we set $s = -1-n$ with $n \geq 1$, such that
$$
F\left(z;\frac{\epsilon}{2}+1+n,\frac{\epsilon}{2}-n,\frac{\epsilon}{2}
+1 \right) = (1-z)^{-\epsilon/2} F\left(z;-n,n+1, \frac{
\epsilon}{2}+1\right),
$$
where $F\left(z;-n,n+1,1 + \epsilon/2\right) \equiv \tilde{F}_n(x)$
is a polynomial of degree $n$, e.g.
\begin{eqnarray*}
\tilde{F}_0 = 1, \; \tilde{F}_1 = \frac{x + \sigma}{1 + \sigma}, \;
\tilde{F}_2 = \frac{3 x^2 + 3 \sigma x + \sigma^2
-1}{(1+\sigma)(2+\sigma)}, \; \tilde{F}_3 = \frac{15 x^3 + 15 \sigma
x^2 + (6 \sigma^2 - 9) x + \sigma (\sigma^2 -
4)}{(1+\sigma)(2+\sigma)(3+\sigma)},
\end{eqnarray*}
with $\sigma = \epsilon /2$. When $\epsilon = 0$ ($\sigma = 0$),
polynomials $\tilde{F}_n$ coincide with Legendre polynomials
$P_n(x)$ in 8.91 on p. 973 of \cite{GR}. The zero eigenvalue $(n =
0)$ is excluded from the construction but added to the spectrum by
Lemma \ref{lemma-zero}. When $\epsilon < 2$, the resulting
eigenfunction $\Psi_0(x)$ belongs to $X$. Since $\Psi_0'(x) \sim
(1+x)^{-\epsilon/2}$ as $x \to -1$, the resulting eigenfunction
$\Psi_0(x)$ does not belong to $X$ for $\epsilon \geq 2$.
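The listed polynomials can be reproduced by summing the terminating series $F(z;-n,n+1,1+\sigma)$ with $z = (1-x)/2$ directly (a numerical sketch; the function name is ours):

```python
def F_tilde(n, sigma, x):
    """Terminating hypergeometric series F(z; -n, n+1, 1+sigma),
    evaluated at z = (1-x)/2; a polynomial of degree n in x."""
    z = (1 - x)/2
    total = term = 1.0
    for j in range(n):   # the series terminates after n+1 terms
        term *= (-n + j)*(n + 1 + j)/((1 + sigma + j)*(j + 1))*z
        total += term
    return total

# Compare with the listed polynomials at sample values:
sigma, x = 0.6, 0.3
p1 = (x + sigma)/(1 + sigma)
p2 = (3*x**2 + 3*sigma*x + sigma**2 - 1)/((1 + sigma)*(2 + sigma))
p3 = ((15*x**3 + 15*sigma*x**2 + (6*sigma**2 - 9)*x
       + sigma*(sigma**2 - 4))
      /((1 + sigma)*(2 + sigma)*(3 + sigma)))
assert abs(F_tilde(1, sigma, x) - p1) < 1e-12
assert abs(F_tilde(2, sigma, x) - p2) < 1e-12
assert abs(F_tilde(3, sigma, x) - p3) < 1e-12
```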
We shall prove that no non-zero eigenvalues exist in $X$ for
$\epsilon \geq 2$. Using the identity
(\ref{identity-hypergeometric}), we transform the solution $F(x)$ to
the equivalent form $F(x) = (1+x)^{-\epsilon/2} \tilde{F}(x)$, where
$\tilde{F}(x)$ satisfies the hypergeometric equation
(\ref{hypergeometric-equation}) with new parameters
$$
z = \frac{1-x}{2}, \quad \tilde{\alpha} = \gamma - \alpha = 1 + s,
\quad \tilde{\beta} = \gamma - \beta = -s, \quad \tilde{\gamma} =
\gamma = \frac{\epsilon}{2} + 1.
$$
Up to a constant factor, $\tilde{F}(x)$ is represented by the
hypergeometric function $F(z;1+s,-s,1+\epsilon/2)$. It follows from
the ODE (\ref{Psi-0-equation}) that the eigenfunction $\Psi_0(x)$ is
related to $\tilde{F}(x)$ by
$$
-s(s+1) \Psi_0'(x) = \left( \frac{1 - x}{1+x} \right)^{\epsilon / 2}
\tilde{F}'(x).
$$
Since $\tilde{\alpha} + \tilde{\beta} - \tilde{\gamma} = -
\epsilon/2 < 0$ for $\epsilon > 0$, the hypergeometric series for
the function $F(z;1+s,-s,1+\epsilon/2)$ converges absolutely on
the entire interval $x \in [-1,1]$ ($z \in [0,1]$) (see 9.102 on
p.995 of \cite{GR}). Therefore, $\tilde{F}(x) \in C^2$ on $x \in
[-1,1]$ and $\tilde{F}'(-1)$ is well-defined. We shall prove that
$\tilde{F}'(-1) \neq 0$ for any $s \neq 0$ and $\epsilon \geq 2$.
It follows from the hypergeometric equation
(\ref{hypergeometric-equation}) with
$(\tilde{\alpha},\tilde{\beta},\tilde{\gamma})$ at $z = 1$ that
$$
s(s+1) \tilde{F}(-1) + \left( \frac{\epsilon}{2} - 1 \right)
\tilde{F}'(-1) = 0.
$$
If $\tilde{F}'(-1) = 0$, then $\tilde{F}(-1) = 0$ for any $s \neq
0$ and $\epsilon \geq 2$, and the only regular solution of the
hypergeometric equation (\ref{hypergeometric-equation}) is
$\tilde{F}(x) \equiv 0$. Therefore, $\tilde{F}'(-1) \neq 0$, and
therefore, $\Psi_0 \notin X$ for $\epsilon \geq 2$.
\end{proof}
\begin{proposition}
\label{proposition-epsilon-0-k-0} A complete spectrum of the
eigenvalue problem (\ref{zero-equation-stability}) with $0 < x_0 <
1$ and $\epsilon \geq 0$ in $X_0$ consists of simple isolated
eigenvalues $\mu$ with $\mu \in \mathbb{R}_-$.
\end{proposition}
\begin{proof}
When $\Psi_0 \in X_0$, the eigenfunction $\varphi(x)$ of the
associated Legendre equation (\ref{associated-Legendre-zero})
satisfies the Robin boundary conditions
$$
2(1-x_0^2) \varphi'(\pm x_0) + \epsilon \varphi(\pm x_0) = 0,
$$
while the eigenfunction $\chi(x)$ of the second-order ODE
(\ref{self-adjoint-form}) satisfies the Dirichlet boundary
conditions $\chi(\pm x_0) = 0$. Each eigenvalue problem is
self-adjoint with respect to these boundary conditions
\cite{Strauss}. Therefore, all eigenvalues $\mu$ of the regular
boundary-value problems are real-valued and isolated. Moreover,
these eigenvalues are negative due to the Green's identity
\cite{Strauss}:
\begin{equation}
\label{Green-identity-11} \mu \int_{-x_0}^{x_0} \varphi^2(x) dx =
- \int_{-x_0}^{x_0} (1-x^2) \left( \varphi'(x)\right)^2 dx -
\frac{\epsilon^2}{4} \int_{-x_0}^{x_0} \frac{\varphi^2(x)}{1-x^2}
dx < 0.
\end{equation}
These eigenvalues are also simple, since the Wronskian of any two
solutions with the Robin or Dirichlet boundary conditions is zero.
\end{proof}
\begin{proposition}
Let $\{ \mu_n \}_{n \geq 0}$ be isolated simple eigenvalues of the
eigenvalue problem (\ref{self-adjoint-form}) with $0 < x_0 < 1$
and Dirichlet boundary conditions $\chi(\pm x_0) = 0$ ordered as
$$
0 > \mu_0 > \mu_1 > ... > \mu_n > ...
$$
Then, $\lim\limits_{x_0 \to 1} \mu_n = - s_n(s_n+1)$, where
$$
s_n = 1 + n, \;\; \mbox{for} \;\; 0 \leq \epsilon \leq 2 \qquad
\mbox{and} \qquad s_n = \frac{\epsilon}{2} + n, \;\; \mbox{for} \;\;
\epsilon \geq 2
$$
with $n \geq 0$. \label{proposition-continuity-zero}
\end{proposition}
\begin{proof}
Singularity analysis of the second-order ODE
(\ref{self-adjoint-form}) shows that the solution $\chi(x)$ behaves
as
$$
\chi \to c_1^+ (1 - x)^{(\epsilon+2)/4} + c_2^+
(1-x)^{-(\epsilon+2)/4}, \quad \mbox{as} \;\; x \to 1
$$
and
$$
\chi \to c_1^- (1+x)^{(\epsilon-2)/4} + c_2^- (1+x)^{-(\epsilon
-2)/4}, \quad \mbox{as} \;\; x \to -1
$$
The ODE (\ref{self-adjoint-form}) admits a bounded (regular)
solution $\chi(x)$ on $x \in [-1,1]$ only if the singular
components are removed. This leads to the constraints $c_2^+ = 0$
and either $c_1^- = 0$ for $0 \leq \epsilon < 2$ or $c_2^- = 0$
for $\epsilon > 2$. It is explained in Proposition
\ref{proposition-x0-1-k-0} that the set $c_2^+ = 0$ and $c_1^- =
0$ for $0 \leq \epsilon < 2$ is equivalent to $s = m$ with $m \geq
0$, when the first term in the relation
(\ref{hypergeometric-expansion}) is removed and the hypergeometric
function $F(z;\tilde{\alpha},\tilde{\beta},\tilde{\gamma})$ is a
polynomial. Note that the zero eigenvalue $s = 0$ ($m = 0$) of the
problem (\ref{associated-Legendre-zero}) is excluded from the
spectrum of the problem (\ref{self-adjoint-form}), such that $s =
s_n = 1 + n$ with $n \geq 0$. On the other hand, the set $c_2^+ =
0$ and $c_2^- = 0$ for $\epsilon > 2$ is equivalent to $s = s_n =
\epsilon/2 + n$ with $n \geq 0$, when the second term in the
relation (\ref{hypergeometric-expansion}) is removed and the
hypergeometric function $F(z;\alpha,\beta,\gamma)$ is a
polynomial. Note that the first Darboux--Backlund transformation
(\ref{backlund-1}) implies that if $c_2^+ = 0$, then
$$
\varphi \to c_1^+ (1 - x)^{\epsilon/4}, \quad \mbox{as} \;\; x \to 1
$$
and
$$
\varphi \to c_1^- (1+x)^{\epsilon/4} + c_2^- (1+x)^{-\epsilon/4},
\quad \mbox{as} \;\; x \to -1.
$$
Recall that $\varphi(x) = (1-x^2)^{\epsilon/4} F(x)$. When $c_1^-
= 0$ ($0 \leq \epsilon < 2$), $F(x)$ is singular like $F(x) \to
(1+x)^{-\epsilon/2}$ as $x \to -1$ in accordance with the relation
(\ref{identity-hypergeometric}). When $c_2^- = 0$ ($\epsilon >
2$), $F(x)$ is bounded as $x \to -1$. The marginal case $\epsilon
= 2$ corresponds to the case when $\chi(x)$ has bounded and
logarithmically growing components as $x \to -1$. The logarithmic
growth is excluded if $s_n = 1 + n$ with $n \geq 1$, which is the
border between the two spectra at $\epsilon = 2$.
When $s = s_n$ and $\epsilon \neq 2$, the eigenfunction $\chi(x)$
of the formally self-adjoint problem (\ref{self-adjoint-form})
satisfies the Dirichlet boundary conditions $\lim_{x \to \pm 1}
\chi(x) = 0$. When $\epsilon = 2$, the eigenfunction $\chi(x)$ is
bounded at $x = -1$ and zero at $x = 1$. In either case,
convergence and uniqueness of continuations from eigenvalues of
the regular Dirichlet problem (\ref{self-adjoint-form}) with $x_0
< 1$ to eigenvalues of the singular boundary-value problem
(\ref{self-adjoint-form}) with $x_0 = 1$ is proved by Theorem 5.3
of \cite{zettle}.
\end{proof}
\begin{remark}
{\rm Bounded (for $\epsilon = 2$) and decaying (for $\epsilon
> 2$) eigenfunctions $\chi(x)$ of the self-adjoint problem
(\ref{self-adjoint-form}) with $x_0 = 1$ for eigenvalues $\mu =
-s_n(s_n+1)$ with $s_n = \epsilon/2 + n$ violate the conditions
(\ref{conditions-chi-X}). Indeed, one can check that the limit in
(\ref{conditions-chi-X}) as $x \to -1$ is non-zero (proportional
to $c_1^-$) and the integral in (\ref{conditions-chi-X}) hence
diverges. Therefore, the eigenvalues of the self-adjoint problem
(\ref{self-adjoint-form}) for $\epsilon \geq 2$ do not correspond
to eigenvalues of the original problem
(\ref{zero-equation-stability}) in space $\Psi_0 \in X$, in
agreement with Proposition \ref{proposition-x0-1-k-0}. We also
note that if one considers the generalized condition for $\chi(x)$
with
\begin{equation}
\left| \lim_{x \to \pm 1} \left( \frac{1-x}{1+x}
\right)^{\epsilon/4} \sqrt{1-x^2} \chi(x) \right| < \infty,
\end{equation}
the spectrum of the self-adjoint problem (\ref{self-adjoint-form})
is not defined since $c_2^+ \neq 0$ and the eigenvalue problem is
not complete. }
\end{remark}
\begin{remark}
{\rm Theorem \ref{theorem-2} covers the entire parameter domain $0
< x_0 \leq 1$ and $\epsilon \geq 0$. However, there is an
interesting problem with convergence of eigenvalues of the
associated Legendre equation (\ref{associated-Legendre-zero}) in
the limit $x_0 \to 1$. While the eigenvalues with $0 < x_0 < 1$
are expected to converge to the eigenvalues in
(\ref{eigenvalues-1-zero}) for $0 \leq \epsilon < 2$, no
eigenvalues with the eigenfunctions $\Psi \in X$ exist for
$\epsilon \geq 2$. Convergence of eigenvalues of the linearized
problem (\ref{zero-equation-stability}) as $x_0 \to 1$ will be
studied numerically in Section 6. }
\end{remark}
\section{Numerical computations of eigenvalues for $k \neq 0$}
In order to illustrate the distribution of eigenvalues in Propositions
\ref{proposition-x0-1}, \ref{proposition-epsilon-0} and
\ref{proposition-continuity} and to investigate eigenvalues in the
domain $0 < x_0 < 1$ and $\epsilon > 0$ in Remark \ref{remark-k},
we develop a numerical method based on power series expansions.
Since $x = 0$ is an ordinary point and $x = \pm 1$ are regular
singular points of the system (\ref{stability-problem}), the power
series expansions of the functions $\Psi_k(x)$ and $\Phi_k(x)$ in
powers of $x$ converge uniformly and absolutely for $|x| < 1$. The
numerical method is based on truncation of the power series.
Let $\mu \in \mathbb{C}$ be parameterized by $\mu = -s(s+1)$, $s
\in \mathbb{C}$. Due to the symmetry, it is sufficient to consider
the domain $\{ s \in \mathbb{C} : \; {\rm Re}(s) \geq -\frac{1}{2}
\}$. The stability domain ${\rm Re}(\mu) < 0$ corresponds to the
domain
\begin{equation}
\label{stability-boundary} \left\{ s \in \mathbb{C} : \quad |{\rm
Im}(s)| < \sqrt{{\rm Re}(s)( {\rm Re}(s) + 1)}, \;\; {\rm Re}(s) >
0 \right\}.
\end{equation}
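Indeed, for $\mu = -s(s+1)$ with $s = \sigma + i \tau$ one has ${\rm Re}(\mu) = \tau^2 - \sigma(\sigma+1)$, so that ${\rm Re}(\mu) < 0$ is equivalent to membership in (\ref{stability-boundary}). A quick randomized check of this equivalence (ours):

```python
import random
from math import sqrt

random.seed(1)
for _ in range(1000):
    sig = random.uniform(-0.5, 3.0)    # Re(s) >= -1/2
    tau = random.uniform(-3.0, 3.0)    # Im(s)
    mu = -complex(sig, tau)*(complex(sig, tau) + 1)
    stable = mu.real < 0
    # the domain (stability-boundary); `and` short-circuits the sqrt
    in_domain = sig > 0 and abs(tau) < sqrt(sig*(sig + 1))
    assert stable == in_domain
```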
Consider the power series with separated even and odd terms:
\begin{eqnarray}
\label{Psi-series-k} \Psi_k(x) & = & \sum_{m \geq 0} c_m x^{2m} +
\sum_{m \geq 0} d_m x^{2m+1}, \\
\label{Phi-series-k} \Phi_k(x) & = & \sum_{m \geq 0} a_m x^{2m} +
\sum_{m \geq 0} b_m x^{2m+1},
\end{eqnarray}
where the starting coefficients $(a_0,b_0,c_0,d_0)$ are parameters.
Substituting (\ref{Phi-series-k}) into the second equation of the
system (\ref{stability-problem}) we find that $(a_1,b_1)$ are
defined separately as
\begin{eqnarray}
\label{recurrence1-app} a_1 & = & \frac{(k^2 - s(1+s)) a_0 -
\epsilon b_0}{2}, \\
\label{recurrence2-app} b_1 & = & \frac{(k^2 + 2 - s(s+1)) b_0 -
2 \epsilon a_1}{6},
\end{eqnarray}
while the coefficients $\{ a_m,b_m \}_{m \geq 2}$ are defined
uniquely from the recurrence equations:
\begin{eqnarray}
\nonumber a_{m+2} & = & \frac{(k^2 - s(s+1) + 2(2m+2)^2) a_{m+1} +
(s(s+1)-2m (2m+1)) a_m}{(2m+4)(2m+3)} \\
\label{recurrence1-app-2} & \phantom{t} & \phantom{texttext} + \frac{-
\epsilon (2m+3) b_{m+1} + \epsilon (2m+1) b_m}{(2m+4)(2m+3)}, \\
\nonumber b_{m+2} & = & \frac{(k^2 - s(s+1) + 2 (2m+3)^2) b_{m+1} +
(s(s+1)- (2m+2) (2m+1)) b_m}{(2m+5)(2m+4)} \\
\label{recurrence2-app-2} & \phantom{t} & \phantom{texttext} +
\frac{-\epsilon (2m+4) a_{m+2} + \epsilon (2m+2)
a_{m+1}}{(2m+5)(2m+4)}.
\end{eqnarray}
We note that the initial equations
(\ref{recurrence1-app})--(\ref{recurrence2-app}) follow from the
recurrence equations
(\ref{recurrence1-app-2})--(\ref{recurrence2-app-2}) for $m = -1$
with $a_{-1} = b_{-1} = 0$.
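The recurrences can be marched directly; with the convention $a_{-1} = b_{-1} = 0$ the initial values (\ref{recurrence1-app})--(\ref{recurrence2-app}) come out of the same loop. A sketch (the function name is ours; complex $s$ is handled transparently by complex arithmetic):

```python
def phi_coeffs(a0, b0, s, k, eps, M):
    """Coefficients {a_m, b_m}, m = 0..M, of the power series
    (Phi-series-k), marched with the a_{-1} = b_{-1} = 0 convention."""
    a = [0j]*(M + 1); b = [0j]*(M + 1)
    a[0], b[0] = a0, b0
    L = s*(s + 1)
    for m in range(-1, M - 1):
        am = a[m] if m >= 0 else 0.0   # a_{-1} = 0 convention
        bm = b[m] if m >= 0 else 0.0
        a[m + 2] = (((k*k - L + 2*(2*m + 2)**2)*a[m + 1]
                     + (L - 2*m*(2*m + 1))*am
                     - eps*(2*m + 3)*b[m + 1] + eps*(2*m + 1)*bm)
                    / ((2*m + 4)*(2*m + 3)))
        b[m + 2] = (((k*k - L + 2*(2*m + 3)**2)*b[m + 1]
                     + (L - (2*m + 2)*(2*m + 1))*bm
                     - eps*(2*m + 4)*a[m + 2] + eps*(2*m + 2)*a[m + 1])
                    / ((2*m + 5)*(2*m + 4)))
    return a, b

# The m = -1 step reproduces (recurrence1-app)-(recurrence2-app):
a, b = phi_coeffs(1.0, 1.0, 0.5, 2, 0.3, 10)
assert abs(a[1] - ((2*2 - 0.5*1.5)*1.0 - 0.3*1.0)/2) < 1e-14
assert abs(b[1] - ((2*2 + 2 - 0.5*1.5)*1.0 - 2*0.3*a[1])/6) < 1e-14
```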
Substituting (\ref{Psi-series-k}) into the first equation of the
system (\ref{stability-problem}) we find that the coefficients $\{
c_m,d_m \}_{m \geq 2}$ are defined from the coefficients $\{
a_m,b_m \}_{m \geq 0}$ by the recurrence equations:
\begin{eqnarray}
\label{recurrence3-app-2} c_{m+2} & = & \frac{(k^2 + 2(2m+2)^2)
c_{m+1} - 2m (2m+1) c_m + a_{m+1} - a_m}{(2m+4)(2m+3)} \\
\label{recurrence4-app-2} d_{m+2} & = & \frac{(k^2 + 2 (2m+3)^2)
d_{m+1} -(2m+2)(2m+1) d_m + b_{m+1} - b_m}{(2m+5)(2m+4)}.
\end{eqnarray}
The initial equations for $(c_1,d_1)$ follow from the recurrence
equations (\ref{recurrence3-app-2})--(\ref{recurrence4-app-2}) for
$m = -1$ with $a_{-1} = b_{-1} = c_{-1} = d_{-1} = 0$.
The boundary conditions in (\ref{bc1}) lead to the equations
\begin{eqnarray} \label{boundary-condition-1-k} \sum_{m \geq 0}
c_m x_0^{2m} = 0, \quad \sum_{m \geq 0} d_m x_0^{2m} = 0, \quad
\sum_{m \geq 0} (2m) c_m x_0^{2m} = 0, \quad \sum_{m \geq 0}
(2m+1) d_m x_0^{2m} = 0.
\end{eqnarray}
There exists a linear map from $(a_0,b_0,c_0,d_0) \in
\mathbb{C}^4$ parametrized by $s \in \mathbb{C}$ to the sequence
$\{ a_m,b_m,c_m,d_m \}_{m \in \mathbb{N}}$. Therefore, the
boundary conditions (\ref{boundary-condition-1-k}) are equivalent
to the homogeneous system $A_k(s) {\bf x} = {\bf 0}$, where ${\bf
x} = (a_0,b_0,c_0,d_0)^T \in \mathbb{C}^4$ and $A_k(s)$ is a
$4$-by-$4$ matrix computed from the entries of
(\ref{boundary-condition-1-k}). The matrix $A_k(s)$ depends on $s
\in \mathbb{C}$ and $k \in \mathbb{N}$, as well as parameters
$x_0$ and $\epsilon$. If the power series are truncated at the
$M$-th term, the matrix $A_k(s)$ depends also on $M$. Eigenvalues
$\mu = -s(s+1)$ of the system (\ref{stability-problem}) in
(\ref{bc1}) are {\em equivalent} to roots $s$ of the determinant
equation
\begin{equation}
F_k(s;x_0,\epsilon,M) = {\rm det}(A_k(s)).
\end{equation}
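Since all recurrence coefficients are real, $F_k(s;x_0,\epsilon,M)$ is real for real $s$, so real roots can be located by bracketing sign changes on a grid and bisecting. A generic sketch of such a root finder (the text does not specify the algorithm actually used; complex roots, which appear for larger $\epsilon$, require a complex-plane method such as Newton iteration instead):

```python
def real_roots(F, s_min, s_max, n_grid=400, tol=1e-10):
    """Bracket sign changes of a real-valued F on [s_min, s_max] and
    refine each bracket by bisection; exact grid zeros are taken directly."""
    roots = []
    h = (s_max - s_min)/n_grid
    s_prev, F_prev = s_min, F(s_min)
    for i in range(1, n_grid + 1):
        s_cur = s_min + i*h
        F_cur = F(s_cur)
        if F_cur == 0.0:               # landed exactly on a root
            roots.append(s_cur)
        elif F_prev*F_cur < 0:         # sign change: a root in between
            lo, hi, f_lo = s_prev, s_cur, F_prev
            while hi - lo > tol:
                mid = 0.5*(lo + hi)
                f_mid = F(mid)
                if f_lo*f_mid <= 0:
                    hi = mid
                else:
                    lo, f_lo = mid, f_mid
            roots.append(0.5*(lo + hi))
        s_prev, F_prev = s_cur, F_cur
    return roots

# Sanity check on a function with known roots:
r = real_roots(lambda s: (s - 1.0)*(s - 3.0), 0.0, 4.0)
assert len(r) == 2 and abs(r[0] - 1.0) < 1e-8 and abs(r[1] - 3.0) < 1e-8
```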
Numerical results of computations of roots of the function
$F_k(s;x_0,\epsilon,M)$ are shown on Figures 1--5. Figure
\ref{fig-s-x0a} show first few roots $s$ of
$F_k(s;x_0,\epsilon,M)$ with $k = 1, 3, 5$ versus $x_0$ for
$\epsilon = 0$ and $M = 150$. In agreement with Proposition
\ref{proposition-continuity}, the roots converge as $x_0 \to 1$ to
the values $s_n = \sigma + n$ with $\sigma = |k|$ and $n \geq 0$.
We can see that the convergence is excellent for $k = 3$ and $k =
5$ but it is worse for $k = 1$ in the sense that the roots at $x_0
= 0.99$ are still far from the values $s_n$. This feature is
explained by the decay of the eigenfunctions $(\Phi_k,\Psi_k)$ of
the system (\ref{stability-problem}) on $x \in [-1,1]$. Indeed, it
follows from Proposition \ref{proposition-x0-1} that $\Phi_k \sim
(1 - x^2)^{\sigma/2}$ and $\Psi_k \sim (1-x^2)^{1 + \sigma/2}$ as
$x \to \pm 1$ for $\epsilon = 0$ and $|k| \geq 1$. Therefore, the
derivative of $\Phi_k(x)$ is bounded as $x \to \pm 1$ for $|k|
\geq 2$ and unbounded for $|k| = 1$. In the latter case, the
power series expansions (\ref{Psi-series-k})--(\ref{Phi-series-k})
diverge in the limit $x_0 \to 1$ and the numerical approximation
is not accurate for $x_0$ close to $1$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=8cm]{s_x0a.eps}
\end{center}
\caption{First few roots $s$ of $F_k(s;x_0,\epsilon,M)$ versus
$x_0$ for $\epsilon = 0$ and $M = 150$: $k = 1$ (circles), $k = 3$
(stars) and $k = 5$ (dots).} \label{fig-s-x0a}
\end{figure}
Figure \ref{fig-s-M} shows the first few roots $s$ with $k = 1,5$
versus $M$ for $\epsilon = 0$ and $x_0 = 0.9$. We can see that the
roots quickly converge to constant values, which are taken as
approximations of real roots when $M = 150$ in the remainder of
the figures. The numerical error for large values of $M$ consists
of three sources: truncation of the power series, the root-finding
algorithm, and rounding of the entries of the matrix $A_k(s)$ when a
number $x_0$ with $x_0 < 1$ is raised to a large power $x_0^M$.
While the first two sources can be reduced to any desired degree,
the last source represents an irremovable obstacle to getting
accurate approximations when $M$ gets large.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=8cm]{s_M.eps}
\end{center}
\caption{Convergence of roots $s$ versus $M$ for $\epsilon = 0$
and $x_0 = 0.9$: $k = 1$ (circles) and $k = 5$ (dots). }
\label{fig-s-M}
\end{figure}
Figure \ref{fig-s-k} shows the first six roots $s$ versus $k$ for
$\epsilon = 0$, $x_0 = 0.9$, and $M = 150$. We observe two
properties from this figure: the values of $s$ become larger for
larger values of $k$ (i.e. the eigenvalues $\mu$ become more and
more negative), and the roots $s$ approach the integer values
for larger values of $k$ even when $x_0 = 0.9$ is not close to
$x_0 = 1$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=8cm]{s_k.eps}
\end{center}
\caption{First six roots $s$ versus $k$ for $\epsilon = 0$, $x_0 =
0.9$, and $M = 150$. } \label{fig-s-k}
\end{figure}
Figure \ref{fig-s-eps} shows the first few roots $s$ with $k = 1,3$
versus $\epsilon$ for $x_0 = 0.9$ and $M = 150$. Although the
roots are real for small values of $\epsilon$, in agreement with
Proposition \ref{proposition-epsilon-0}, they coalesce for larger
values of $\epsilon$. After two roots merge, they split into the
complex domain, and complex values of $s$ are not shown in Figure
\ref{fig-s-eps}. It is seen from this figure that the roots with
larger values of $k$ coalesce for larger values of $\epsilon$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=8cm]{s_eps.eps}
\end{center}
\caption{First few roots $s$ versus $\epsilon$ for $x_0 = 0.9$ and
$M = 150$: $k = 1$ (bolded curve) and $k = 3$ (thin curve).}
\label{fig-s-eps}
\end{figure}
Figure \ref{fig-eps-compl} shows the spectrum of complex roots $s$
with $k = 1,3$ for $x_0 = 0.9$, $M = 150$, and different values of
$0 \leq \epsilon \leq 12$. The boundary of the stability domain
(\ref{stability-boundary}) is shown by the dotted curve. We can
see that roots $s$ remain in the stability domain after they
bifurcate off the real axis.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=6cm]{Eig_complk1.eps}
\includegraphics[height=6cm]{Eig_complk3.eps}
\end{center}
\caption{Complex roots $s$ for $k = 1$ (left) and $k = 3$ (right),
$x_0 = 0.9$ and $M = 150$ as the parameter $\epsilon$ traverses
the interval $0 \leq \epsilon \leq 12$. The dotted curve shows the
boundary of the stability domain (\ref{stability-boundary}).}
\label{fig-eps-compl}
\end{figure}
\section{Numerical computations of eigenvalues for $k = 0$}
We approximate eigenvalues of the system
(\ref{zero-equation-stability}) with power series solutions
explained in Section 5. The solution for $\Psi_0(x)$ and
$\Phi_0(x)$ is represented by the power series
(\ref{Psi-series-k})--(\ref{Phi-series-k}), where the starting
coefficients $(a_0,b_0,c_0,d_0)$ are parameters, while the
coefficients $\{ a_m,b_m,c_m,d_m \}_{m \in \mathbb{N}}$ are
defined uniquely from the recurrence equations. It follows from
the ODE (\ref{stability-zero-problem}) that the set $\{ a_m,b_m
\}_{m \in \mathbb{N}}$ is uncoupled from the other coefficients
but it is defined by the unknown value of the parameter $s$:
\begin{eqnarray}
\label{recurrence1} a_{m+1} & = & \frac{(2m-s)(2m+1+s) a_m -
\epsilon (2m+1) b_m}{(2m+2)(2m+1)}, \\
\label{recurrence2} b_{m+1} & = & \frac{(2m+1-s)(2m+2+s) b_m -
\epsilon (2m+2) a_{m+1}}{(2m+3)(2m+2)}.
\end{eqnarray}
Given $(a_0,b_0)$ and the value for $s$, the recurrence equation
(\ref{recurrence1}) gives the value of $a_1$ and then the recurrence
equation (\ref{recurrence2}) defines the value of $b_1$, and so on.
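A sketch of this marching procedure (the function name is ours). For $\epsilon = 0$ the $a$-series decouples and terminates at even integer values of $s$; for instance, $s = 4$ reproduces a multiple of the Legendre polynomial $P_4(x)$:

```python
def ab_coeffs_k0(a0, b0, s, eps, M):
    """March (recurrence1)-(recurrence2): a_1, b_1, a_2, b_2, ..."""
    a, b = [a0], [b0]
    for m in range(M):
        a.append(((2*m - s)*(2*m + 1 + s)*a[m]
                  - eps*(2*m + 1)*b[m]) / ((2*m + 2)*(2*m + 1)))
        b.append(((2*m + 1 - s)*(2*m + 2 + s)*b[m]
                  - eps*(2*m + 2)*a[m + 1]) / ((2*m + 3)*(2*m + 2)))
    return a, b

# eps = 0, s = 4: the even series terminates, giving
# Phi_0(x) = 1 - 10 x^2 + (35/3) x^4 = (8/3) P_4(x).
a, b = ab_coeffs_k0(1.0, 0.0, 4, 0.0, 4)
assert abs(a[1] + 10) < 1e-12 and abs(a[2] - 35/3) < 1e-12
assert abs(a[3]) < 1e-12 and abs(a[4]) < 1e-12
```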
It follows from the first equation of the system
(\ref{zero-equation-stability}) that the set $\{ c_m,d_m \}_{m \in
\mathbb{N}}$ is defined by the set $\{ a_m,b_m \}_{m \in
\mathbb{N}}$ but it is independent of $s$:
\begin{eqnarray}
\label{recurrence3} c_{m+1} & = & \frac{(2m)(2m+1) c_m + a_m}{(2m+2)(2m+1)}, \\
\label{recurrence4} d_{m+1} & = & \frac{(2m+1)(2m+2) d_m +
b_m}{(2m+3)(2m+2)}.
\end{eqnarray}
Finally, it follows from the second equation of the system
(\ref{zero-equation-stability}) that there exist two initial
equations:
\begin{eqnarray*}
b_0 + \epsilon a_0 & = & -s(s+1) d_0, \\
2 a_1 + \epsilon b_0 & = & -2 s(s+1) c_1
\end{eqnarray*}
in addition to the system (\ref{recurrence1})--(\ref{recurrence2}).
When $s \neq 0$, we can solve the initial equations as
$$
b_0 = -\epsilon a_0 - s(s+1) d_0, \qquad c_1 = \frac{a_0}{2},
$$
such that the only independent parameters are $(a_0,d_0)$. We also
note that the parameter $c_0$ is trivial since $\Psi_0(x)$ is
defined up to the addition of an arbitrary constant.
The boundary conditions in (\ref{bc1-zero}) lead to the equations:
\begin{eqnarray}
\label{boundary-condition-1} \sum_{m \geq 0} (2m) c_m x_0^{2m} = 0,
\qquad \sum_{m \geq 0} (2m+1) d_m x_0^{2m} = 0.
\end{eqnarray}
There exists a linear map from $(a_0,d_0) \in \mathbb{C}^2$
parameterized by $s \in \mathbb{C}$ to the sequence $\{
a_m,b_m,c_m,d_m \}_{m \in \mathbb{N}}$. Therefore, the boundary
conditions (\ref{boundary-condition-1}) are equivalent to the
homogeneous system $A_0(s) {\bf x} = {\bf 0}$, where ${\bf x} =
(a_0,d_0)^T \in \mathbb{C}^2$ and $A_0(s)$ is a $2$-by-$2$ matrix
which depends on $s \in \mathbb{C}$, parameters $x_0$ and
$\epsilon$, and integer $M$ for truncation of power series.
Eigenvalues $\mu = -s(s+1)$ of the system
(\ref{zero-equation-stability}) in (\ref{bc1-zero}) are {\em
equivalent} to roots $s$ of the determinant equation
\begin{equation}
F_0(s;x_0,\epsilon,M) = {\rm det}(A_0(s)).
\end{equation}
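The whole construction can be sketched in a few lines (the function names are ours, not from the text; for $\epsilon = 0$ the even and odd series decouple, which gives a simple internal check):

```python
def A0_matrix(s, x0, eps, M):
    """2x2 matrix A_0(s) mapping (a0, d0) to the two boundary sums
    (boundary-condition-1), with the power series truncated at M."""
    cols = []
    for a0, d0 in ((1.0, 0.0), (0.0, 1.0)):
        b0 = -eps*a0 - s*(s + 1)*d0           # first initial equation
        a, b, c, d = [a0], [b0], [0.0], [d0]  # c0 is arbitrary (set to 0)
        for m in range(M):
            a.append(((2*m - s)*(2*m + 1 + s)*a[m]
                      - eps*(2*m + 1)*b[m]) / ((2*m + 2)*(2*m + 1)))
            b.append(((2*m + 1 - s)*(2*m + 2 + s)*b[m]
                      - eps*(2*m + 2)*a[m + 1]) / ((2*m + 3)*(2*m + 2)))
            c.append((2*m*(2*m + 1)*c[m] + a[m]) / ((2*m + 2)*(2*m + 1)))
            d.append(((2*m + 1)*(2*m + 2)*d[m] + b[m]) / ((2*m + 3)*(2*m + 2)))
        cols.append((sum(2*m*c[m]*x0**(2*m) for m in range(M + 1)),
                     sum((2*m + 1)*d[m]*x0**(2*m) for m in range(M + 1))))
    (A11, A21), (A12, A22) = cols
    return ((A11, A12), (A21, A22))

def F0(s, x0, eps, M):
    (A11, A12), (A21, A22) = A0_matrix(s, x0, eps, M)
    return A11*A22 - A12*A21

# For eps = 0 the even and odd series decouple, so the off-diagonal
# entries of A_0(s) vanish identically:
A = A0_matrix(1.7, 0.8, 0.0, 30)
assert abs(A[0][1]) < 1e-14 and abs(A[1][0]) < 1e-14
```

Note that the recurrence (\ref{recurrence3}) at $m = 0$ automatically yields $c_1 = a_0/2$, consistent with the initial condition above.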
Figure \ref{fig-k0-1} represents the first ten eigenvalues $s$
versus $x_0$ for $\epsilon = 1$ and $M = 100$. In agreement with
Proposition \ref{proposition-x0-1-k-0}, the roots converge to the
integer values in the limit $x_0 \to 1$. Since the convergence of
power series becomes slower with $M$ for $x_0 \neq 1$, there is a
gap between the last numerical data and the value $x_0 = 1$. We
also note that the numerical accuracy of the limiting eigenvalues
(\ref{eigenvalues-1-zero}) becomes worse for larger eigenvalues.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=8cm]{Figk01.eps}
\end{center}
\caption{First ten eigenvalues of the problem
(\ref{zero-equation-stability}) for $\epsilon = 1$ and $M = 100$.}
\label{fig-k0-1}
\end{figure}
Figure \ref{fig-k0-2} represents the first ten eigenvalues $s$
versus $\epsilon$ for $x_0 = 0.9$ and $M = 100$. The eigenvalues
remain real, in agreement with Proposition
\ref{proposition-epsilon-0-k-0}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=8cm]{Figk02.eps}
\end{center}
\caption{First ten eigenvalues of the problem
(\ref{zero-equation-stability}) for $x_0 = 0.9$ and $M = 100$.}
\label{fig-k0-2}
\end{figure}
Figure \ref{fig-k0-3} represents the first seven eigenvalues $s$
versus $x_0$ for $\epsilon = 4$ and two values of $M = 100$
(dashed curves) and $M = 1000$ (solid curves). In agreement with
Proposition \ref{proposition-continuity-zero}, the roots converge
to their limiting values which are not eigenvalues of the problem
(\ref{zero-equation-stability}) in space (\ref{bc2-zero}). We also
note limitations of the numerical methods based on truncation of
the power series. The true limits can only be recovered if a very
large number of terms of the power series is taken into account,
which leads to long computational times and large round-off errors.
The effects of slow convergence and truncation of the power series
lead to spurious coalescence of real eigenvalues and their
splitting into the complex plane, which is not observed if the
values of $M$ are large enough.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=8cm]{Figk03.eps}
\end{center}
\caption{Convergence of eigenvalues of the problem
(\ref{zero-equation-stability}) for $\epsilon = 4$ and two values
of $M = 100$ (dashed curve) and $M = 1000$ (solid curves). }
\label{fig-k0-3}
\end{figure}
\section{Discussions}
We have shown analytically that the stationary flow on the sphere
is asymptotically stable for any value of the Reynolds number.
This result is relevant for the flow of a viscous fluid (e.g. oil)
over a sphere (e.g. a metal ball). We have also found that the
linearized operator for symmetry-preserving perturbations has void
spectrum in the energy space for sufficiently large Reynolds
numbers. One can show by direct analysis that the full system
(\ref{2.1})--(\ref{2.3}) reduces to a scalar linear equation for
symmetry-preserving ($\phi$-independent) solutions:
\begin{equation}
\label{linear-equation-longitudinal} \frac{\partial
v_{\phi}}{\partial t} + \frac{1}{\sin \theta} \Delta_0 v_{\phi} =
\nu \frac{\partial}{\partial \theta} \Delta_0 v_{\phi },
\end{equation}
where $\Delta_0$ is given by (\ref{Laplacian-k}) for $k = 0$. When
$v_{\theta}(\theta,t) = - \Psi_0'(\theta) e^{\lambda t}$, the
linear equation (\ref{linear-equation-longitudinal}) reduces to
the linear eigenvalue problem (\ref{zero-equation}) which has no
eigenvalues in the space of square integrable functions
$\int_0^{\pi} \left( \Psi_0'(\theta)\right)^2 \sin \theta d \theta
< \infty$ when $\nu \leq \frac{1}{2}$ ($\epsilon \geq 2$).
Implications of this result for the well-posedness of the Cauchy
problem for the linear time-dependent equation
(\ref{linear-equation-longitudinal}) with $\nu \leq \frac{1}{2}$
remain unclear.
We have also shown analytically and numerically that the
stationary flow on the truncated spherical layer is asymptotically
stable and all isolated eigenvalues are real for small Reynolds
numbers and complex for large Reynolds numbers. The eigenvalues
are always real for symmetry-preserving perturbations. The
truncated spherical layer can be used to model the ice melting in
the Arctic due to global warming, when the near-stationary flow of
ocean water moves from the Arctic to the Antarctic. We note however that
the model of the two-dimensional Navier--Stokes equations on the sphere
considered in this paper does not include the Earth's rotation,
the gravity force, and the location of continents.
{\bf Acknowledgement.} The authors thank Marina Chugunova and
Bartosz Protas for useful discussions and remarks. The work was
supported by the PREA and NSERC Discovery grants.
\section{Introduction} \label{sec:intro}
There is mounting evidence for the presence of massive galaxies with suppressed star formation
at $z>2$ (e.g., \citealt{2005ApJ...626..680D, 2008ApJ...677L...5V}).
These galaxies are known to be remarkably compact and dense compared to local ones
(e.g., \citealt{2006MNRAS.373L..36T,2007ApJ...671..285T, 2008ApJ...677L...5V, 2014ApJ...788...28V,2017MNRAS.469.2235K}).
The size evolution of these massive quiescent galaxies (QGs) can be parameterized as
$r\propto(1+z)^{\beta}$ where $\beta\sim-1.5$ which is steeper than $\beta \sim -1$ of
star forming galaxies (SFGs; e.g., \citealt{2014ApJ...788...28V,2015ApJS..219...15S}).
The remarkable compactness and early formation of massive QGs
pose a challenge to the standard picture of galaxy formation
in which galaxies grow hierarchically and become more massive with time.
Gas rich major mergers (e.g., \citealt{2008ApJS..175..356H,2015MNRAS.449..361W})
and infall of giant clumps formed via disk instability
(e.g., \citealt{2008ApJ...688...67E, 2009ApJ...703..785D})
can trigger a nuclear starburst and increase
the central density in galaxies to form a compact remnant.
Discoveries of compact starburst galaxies at $z>2$ may support these scenarios
\citep{2014ApJ...782...68T, 2014ApJ...791...52B, 2015ApJ...810..133I,2016ApJ...827L..32B, 2017ApJ...849L..36I, 2017ApJ...840...47B,2018ApJ...856..121G}.
On the other hand, massive QGs at high redshift
need to grow several to ten times in size, but much less in stellar mass,
to evolve into giant elliptical galaxies today.
Dry minor mergers (e.g., \citealt{2009ApJ...697.1290B,2009ApJ...699L.178N}),
adiabatic expansion \citep{2008ApJ...689L.101F, 2014ApJ...791...45V}
and size evolution of newly quenched galaxies
with redshift \citep{2013ApJ...773..112C, 2013ApJ...777..125P, 2015ApJ...799..206B}
have been proposed as the driver of this steep size growth.
Massive QGs at $z\sim4$ have now been found photometrically \citep{2014ApJ...783L..14S}
and confirmed spectroscopically
($z_{spec}=3.717$; \citealt{2017Natur.544...71G, 2018A&A...611A..22S}).
The {\it Hubble Space Telescope (HST)} has been the main workhorse
in the field of galaxy morphologies at high redshift, but
it cannot probe the rest-frame optical wavelength regime of galaxies at $z>3$
due to its wavelength cutoff of $\sim1.7~\mu m$.
In this study, we select galaxies with a prominent
Balmer break feature at $z\sim4$ photometrically
from the Subaru XMM-Newton Deep Survey (SXDF; \citealt{2008ApJS..176....1F})
and investigate their rest-frame optical morphologies using deep $K'$-band images obtained with
adaptive optics (AO) on the Subaru Telescope.
This paper is organized as follows:
in Section 2 we describe our sample selection of massive galaxies with suppressed star formation,
in Section 3 we describe the observation and data reduction procedure,
in Section 4 we describe the size measurement method and possible errors,
and in Section 5 we show the results.
We discuss their stellar mass surface density and size-stellar mass evolution in Section 6.
Throughout the paper, we adopt a $\Lambda$CDM cosmology with $H_0=70$ km s$^{-1}$ Mpc$^{-1}$,
$\Omega_{\Lambda} = 0.7$ and $\Omega_m = 0.3$,
and magnitudes are given in the AB system.
\section{Sample construction}
\label{sec:sample_construction}
\subsection{Multi-band Catalog}
We base our analysis on a multi-band photometric catalog in the Subaru
XMM-Newton Deep Field (SXDF; \citealt{2008ApJS..176....1F}).
SXDF has deep optical imaging from Suprime-Cam of the Subaru Telescope
in $BVRiz$-bands \citep{2008ApJS..176....1F}.
The UKIRT Infrared Deep Sky Survey (UKIDSS; \citealt{2007MNRAS.379.1599L})
is centered on the same field and we use the Data Release
10 to complement the optical data.
Furthermore, the $u$-band photometry from CFHT Megacam and {\it Spitzer} photometry
from the {\it Spitzer} UKIDSS Ultra Deep Survey (SpUDS; \citealt{2007sptz.prop40021D})
are available, allowing us to cover the entire optical
and IR wavelengths up to $24\mu m$ over a wide area. It is an excellent
field to search for faint, rare objects at high redshifts.
We first register all the optical images to the WCS grid of the UKIDSS images.
The seeing is different from band to band, and we apply a Gaussian kernel
to homogenize the seeing to $\sim0.82$ arcsec.
We run {\sf SExtractor} \citep{1996A&AS..117..393B} on the $K$-band image to detect
sources. We then measure sources in the other optical and near-IR bands
using the dual image mode.
We perform photometry within a circular aperture of 2.0 arcsec in all the bands.
Because we miss a fraction of the total light in this aperture, we measure the Kron
fluxes of objects in the $K$-band and estimate the aperture correction, assuming
the Kron flux is the total flux
(hereafter we refer to the Kron magnitude as the total magnitude).
We apply the aperture correction to the 2.0 arcsec
aperture photometry in all the bands so that our photometry is closer to the total
light while keeping accurate colors.
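The aperture-correction step described above can be sketched as follows; the flux values are hypothetical and the function name is illustrative only, not part of our pipeline:

```python
import numpy as np

def aperture_correct(aper_fluxes, kron_flux_k, aper_flux_k):
    """Scale fixed-aperture fluxes in every band by the K-band
    Kron-to-aperture ratio; colors (flux ratios) are preserved."""
    correction = kron_flux_k / aper_flux_k  # assumed achromatic
    return {band: f * correction for band, f in aper_fluxes.items()}

# Hypothetical 2.0 arcsec aperture fluxes (arbitrary units)
aper = {"B": 1.0, "z": 3.0, "K": 5.0}
corrected = aperture_correct(aper, kron_flux_k=6.0, aper_flux_k=5.0)
# The corrected K-band flux equals the Kron (total) flux,
# while the B-z flux ratio is unchanged.
```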
Because of the relatively large PSF sizes of the {\it Spitzer}/IRAC images,
objects are often blended with nearby objects, and we choose to perform
the {\it Spitzer} photometry separately from the optical-nearIR bands.
We use {\sf T-PHOT} \citep{2015A&AS..582..15M} version 1.5.11 to fit two-dimensional profiles
of objects in the IRAC images, taking the object blending into account using
the $K$-band image as a prior.
For objects detected in the $K$-band high-resolution image (HRI), small image cutouts
of the same region are generated in order to model the IRAC low-resolution image (LRI).
The cutouts are convolved with a kernel constructed from the LRI and HRI PSFs, both of
which are built from point sources selected in the HRI, to homogenize the PSF.
The optimization is then performed by scaling the fluxes of the objects
in the PSF-matched HRI to match the LRI via $\chi^{2}$ minimization.
We process the IRAC images in all channels from 3.6$\mu$m to 8.0$\mu$m in the same way,
and we use the total magnitude of each object from the best-fit model flux.
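The flux optimization at the core of this procedure reduces to a linear least-squares problem. A minimal sketch with two hypothetical, overlapping one-dimensional templates (not the actual {\sf T-PHOT} implementation):

```python
import numpy as np

# Two hypothetical PSF-matched source templates on a 1-d pixel grid
x = np.arange(32)
t1 = np.exp(-0.5 * ((x - 12) / 3.0) ** 2)
t2 = np.exp(-0.5 * ((x - 18) / 3.0) ** 2)

# Noiseless "low-resolution image" built from known fluxes
true_fluxes = np.array([5.0, 2.0])
lri = true_fluxes[0] * t1 + true_fluxes[1] * t2

# chi^2 minimization: the best-fit amplitudes solve a linear
# least-squares problem, even when the templates overlap (blending)
A = np.column_stack([t1, t2])
fit, *_ = np.linalg.lstsq(A, lri, rcond=None)
```

With real data, per-pixel weights and a background term enter the same linear system.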
In the final catalog, we have about $10^5$ objects over $\sim0.7$ deg$^2$
with coverage in all the filters. Table \ref{tab:depth} summarizes the depth
in each band.
\begin{table}
\centering
\caption{
$5\sigma$ limiting magnitudes within 2 arcsec apertures for each filter.
}
\begin{tabular}{ccc}
filter & instrument & depth\\
\hline
$u$ & Megacam & 26.8\\
$B$ & Suprime-Cam & 27.6\\
$V$ & Suprime-Cam & 27.3\\
$R$ & Suprime-Cam & 27.1\\
$i$ & Suprime-Cam & 27.0\\
$z$ & Suprime-Cam & 26.0\\
$J$ & WFCAM & 25.2\\
$H$ & WFCAM & 24.6\\
$K$ & WFCAM & 25.0\\
ch1 & IRAC & 24.8\\
ch2 & IRAC & 24.3\\
ch3 & IRAC & 22.6\\
ch4 & IRAC & 22.5\\
\hline
\label{tab:depth}
\end{tabular}
\end{table}
\subsection{Target Selection}
We run a custom photometric redshift code \citep{2015ApJ...801...20T}
on the multi-band catalog.
This is a template-fitting code and we use templates generated using
the \citet{2003MNRAS.344.1000B} stellar population synthesis code.
We adopt the following assumptions in the models: exponentially declining
star formation history, solar metallicity, \citet{1994ApJ...429..582C}
attenuation curve, and \citet{2003PASP..115..763C} initial mass function (IMF).
As we know the SFR and attenuation of each template, we add emission lines due to star formation using the emission line intensity ratios by \citet{2011MNRAS.415.2920I} (see \citealt{2015ApJ...801...20T} for details).
The code infers redshifts and physical
properties of galaxies such as stellar mass in a self-consistent manner
and the uncertainties on the physical properties quoted in the paper have been
estimated by marginalizing over all the other parameters, including redshift.
As we have a large number of filters spanning a wide wavelength range, the data
has a strong constraining power on the overall SED shapes. We therefore
choose to apply flat priors in the fitting. We have confirmed that our results do not significantly change if we apply the full priors.
Using some of the publicly available spectroscopic redshifts
(\citealt{2013MNRAS.433..194B,2013MNRAS.428.1088M}, Simpson et al. in prep),
we achieve a normalized dispersion of $\sigma(\Delta z/(1+z))=0.029$ and
an outlier rate of 4.8\%, where the outliers are defined in the conventional
way (i.e., those with $|\Delta z/(1+z)|>0.15$; \citealt{2017arXiv170405988T}). However,
the spectroscopic sample is heterogeneous and the numbers here should not
be over-interpreted.
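The photo-$z$ quality metrics above can be computed as sketched below; the spec-$z$/photo-$z$ pairs are hypothetical, and excluding outliers from the dispersion is one common convention:

```python
import numpy as np

def photz_metrics(z_spec, z_phot, outlier_cut=0.15):
    """Normalized residuals dz/(1+z_spec): their dispersion
    (excluding outliers) and the conventional outlier fraction."""
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    outliers = np.abs(dz) > outlier_cut
    return np.std(dz[~outliers]), outliers.mean()

# Hypothetical spec-z / photo-z pairs
z_spec = np.array([0.5, 1.0, 2.0, 3.7, 4.0])
z_phot = np.array([0.52, 0.98, 2.05, 3.65, 5.0])
sigma, f_out = photz_metrics(z_spec, z_phot)
# The last pair is a catastrophic outlier (|dz/(1+z)| = 0.2 > 0.15)
```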
We exclude objects with unreliable photo-$z$'s using a reduced chi-square cut,
$\chi_\nu>4$. Poor chi-squares are often due to poor photometry (e.g., halos
around bright stars and object blending). For the purpose of this paper,
we do not need a complete sample of evolved galaxies at high redshift and
this cut does not introduce any bias.
We then select galaxies at $3.5<z_{phot}<4.5$. Fig. \ref{fig:sfr_smass} shows
star formation rate (SFR) against stellar mass of the $z\sim4$ galaxies.
Both SFR and stellar mass are from the SED fit. There is a clear sequence of
SFGs and also a population of massive galaxies with suppressed
star formation.
These two populations can be separated very well at specific SFR (sSFR)
of $10^{-9.5}\rm yr^{-1}$.
To be conservative, we choose galaxies whose $1\sigma$ upper limit
of their sSFR is lower than $10^{-9.5}\rm yr^{-1}$
as the targets for the near-IR follow-up imaging with AO.
The red filled points in Fig. \ref{fig:sfr_smass} satisfy this condition.
We note that there is some ambiguity in the definition of QGs
in the literature, but when we refer to QGs in what follows,
we mean galaxies with suppressed star formation as defined in Fig.~\ref{fig:sfr_smass}.
The $UVJ$ diagram is often used to define QGs, but it is calibrated at $z\lesssim2$
\citep{2005ApJ...624L..81L,2009ApJ...691.1879W}, and it is not clear whether it can be applied
to $z\sim4$ galaxies. For this reason, we adopt the sSFR-based definition.
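The sSFR-based selection can be sketched as follows, with a hypothetical catalog of log sSFR values and their uncertainties:

```python
import numpy as np

# Hypothetical catalog: log sSFR [1/yr] and its 1 sigma uncertainty
log_ssfr = np.array([-8.6, -9.7, -10.2, -9.4])
log_ssfr_err = np.array([0.1, 0.3, 0.2, 0.2])

# Conservative cut: even the 1 sigma upper limit on the sSFR
# must fall below 10^-9.5 / yr for an object to be kept as a QG
is_qg = (log_ssfr + log_ssfr_err) < -9.5
# Only the third object passes this cut
```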
In addition to the sSFR constraint, further practical constraints come
from the availability of tip-tilt stars for the AO-assisted observations.
Since AO188 requires tip-tilt stars brighter than $R=16.5$ in NGS mode ($R<19$ in LGS mode),
the available targets are further limited.
We have conducted the near-IR follow-up imaging with AO
for five of the brightest QGs with suitable tip-tilt stars as shown by the stars
in Fig. \ref{fig:sfr_smass} (hereafter ID1-5).
Fig. \ref{fig:individual_seds} shows their SEDs.
All of them are located around $z_{phot}\sim4$.
As can be seen, all the objects show a prominent
Balmer break, indicative of an evolved stellar population.
ID1 and ID2 have a faint UV continuum and are consistent with passively evolving galaxies
(SED-based SFR is consistent with zero).
The others have a brighter UV continuum, but the break feature is still prominent.
To further characterize our targets,
we compare the mean SED of SFGs
with that of QGs in Fig. \ref{fig:mean_seds}.
SFGs have a very blue UV continuum with a strong Lyman break.
On the other hand, the SEDs of our targets are clearly distinct; they have a suppressed
UV continuum with a clear Balmer break. This break cannot be due to dust extinction
because it does not introduce a sudden break at 3650\AA\ while keeping the continuum
at longer wavelengths blue. This is due to abundant A-type stars in these galaxies.
The observed targets are consistent with the mean SED shown by the red shading,
which suggests that they are representative of the evolved population at this redshift.
We note that a part of our survey area is observed
in the FourStar Galaxy Evolution Survey (ZFOURGE) \citep{2016ApJ...830...51S}.
\citet{2014ApJ...783L..14S} select QGs at $z\sim4$ using
the rest-frame $UVJ$ colors and photometric redshifts from ZFOURGE.
We briefly compare our sample of QGs with those in \citet{2014ApJ...783L..14S}.
We find that the QGs identified in SXDF (UDS) in \citet{2014ApJ...783L..14S} all satisfy
sSFR$<10^{-9.5}\rm yr^{-1}$ based on our catalog.
On the other hand, two of our targets, ID3 and ID5, are in the ZFOURGE field.
ID3 is also identified as a QG in ZFOURGE, whereas ID5 is not.
The rest-frame colors of ID5 are $U-V=0.95\pm0.04$ and $V-J=0.86\pm0.02$ \citep{2016ApJ...830...51S},
slightly bluer than the color criteria for QGs adopted in \citet{2014ApJ...783L..14S},
but its SED fit suggests sSFR $<10^{-9.5}\rm yr^{-1}$ at $z_{\rm phot}\approx4$,
satisfying our criterion for QGs.
Overall, our QG selection is broadly compatible with that of \citet{2014ApJ...783L..14S}.
It is noteworthy that most of the QGs in their sample are fainter than $K=23$.
Thanks to the wider area coverage, most of our targets are brighter
and better suited for detailed structural studies.
\begin{figure}
\centering
\includegraphics[width=80mm]{sfr_smass.eps}
\caption{
SFR vs. stellar mass of galaxies at $z\sim4$.
The open circles are SFGs.
The filled circles are QG candidates with a $1\sigma$ upper limit
of the sSFR lower than $10^{-9.5}\rm\ yr^{-1}$.
Objects with SFRs smaller than 1$\rm M_\odot\ yr^{-1}$ are shown at SFR=$1\rm M_\odot\ yr^{-1}$
only for illustrative purposes.
The stars indicate the targets observed with IRCS+AO188 (see \S
\ref{sec:observation_and_data_reduction}).
The open star is ID3, the target
also classified as quiescent by \citet{2014ApJ...783L..14S}.
}
\label{fig:sfr_smass}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=80mm]{SXDS1_10017.eps}
\includegraphics[width=80mm]{SXDS5_10158.eps}\\\vspace{0.5cm}
\includegraphics[width=80mm]{SXDS2_24848.eps}
\includegraphics[width=80mm]{SXDS8_19538.eps}\\\vspace{0.5cm}
\includegraphics[width=80mm]{SXDS6_11101.eps}
\caption{
{\it Top:}
The SEDs of our targets, ID1 to ID5. The spectrum is the best-fitting template spectrum
and the points are the observed photometry. Some of the relevant quantities such as
age and star formation timescale of the template are also indicated.
{\it Bottom:}
Redshift probability distribution function. The arrow shows the median redshift.
}
\label{fig:individual_seds}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=80mm]{mean_seds.eps}
\caption{
Rest-frame mean SEDs of SFGs (blue) and QGs (red) normalized in
the $V$-band. The shaded areas encompass the central 68\% of the distribution.
The objects that we observed are shown as the solid lines.
They show the prominent Balmer break.
}
\label{fig:mean_seds}
\end{figure}
\section{Observation and data reduction}
\label{sec:observation_and_data_reduction}
We observed the five targets selected in \S 2
with IRCS \citep{1998SPIE.3354..512T, 2000SPIE.4008.1056K}+AO188
\citep{2008SPIE.7015E..10H, 2010SPIE.7736E..0NH} on the Subaru Telescope
on the 25th and 26th of September 2016. We used the $K'$ filter with
the 52 mas pixel scale. The observing conditions were fair; the sky
was clear on both nights with reasonably good seeing ($\sim0.2$ arcsec with AO),
though it fluctuated occasionally. We observed in both NGS and
LGS modes due to occasional poor seeing and satellite crossings.
We reject the worst $\sim10$\% of frames with bad seeing.
After rejecting these bad PSF frames, the variation of PSF sizes of the frames on
each target is less than 0.05 arcsec.
We reduced the data using the {\sf IRAF} data reduction tasks
following the data reduction manual for
the IRCS\footnote{http://www.subarutelescope.org/Observing/DataReduction\\/Cookbooks/IRCSimg\_2010jan05.pdf}.
We first mask bad pixels and then apply the flat field, which was constructed from dithered
science exposures with objects masked out.
The sky background is the median value in the whole area
of each frame, $\sim54$ arcsec on a side.
We estimate the telescope offsets between pointings
from the relative positions of bright stars within the field of view.
Finally, we combine the frames with $3\sigma$ clipping.
Magnitude zero-points are calibrated by using the $K$-band images of UKIDSS.
We estimate the $K-K'$ (i.e., WFCAM $-$ IRCS) color as a function of $J-K$ color using
the stellar library from \citet{1998PASP..110..863P}.
We set the zero points of the IRCS-AO $K'$-band images by matching the fluxes
of bright ($K<21$) but not saturated stars
with those measured on the UKIDSS $K$-band images after applying the $K-K'$ color term.
The $K-K'$ colors of the stars used as the standard stars here range from $-0.12$ to $-0.10$.
Since the observing conditions were stable during the nights,
we use the average magnitude zero-point of each night:
$25.41$ for 25 Sep (ID2 \& 3) and $25.43$ for 26 Sep (ID1, 4 \& 5).
We summarize the details of the coadd images in Table \ref{tab:targetdiscription}.
The total exposure time of each target ranges from 18 to 54 minutes.
The FWHM PSF sizes measured on the PSF reference stars range from $0.15$ to $0.23$ arcsec.
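The zero-point calibration step can be sketched as below; the star magnitudes and the linear color-term coefficients are hypothetical, chosen only to illustrate the procedure:

```python
import numpy as np

def zero_point(inst_mag_kprime, ukidss_k, j_minus_k, color_coeffs):
    """Zero point of instrumental K' magnitudes from stars with
    calibrated UKIDSS K photometry and a K-K' color term in J-K."""
    a, b = color_coeffs                    # K - K' = a + b*(J - K), assumed linear
    k_minus_kprime = a + b * j_minus_k
    kprime_true = ukidss_k - k_minus_kprime
    return np.median(kprime_true - inst_mag_kprime)

# Hypothetical bright, unsaturated stars
inst = np.array([-3.0, -2.5, -4.1])        # instrumental K' magnitudes
k = np.array([22.3, 22.8, 21.2])           # UKIDSS K magnitudes
jk = np.array([0.4, 0.6, 0.5])             # J - K colors
zp = zero_point(inst, k, jk, color_coeffs=(-0.14, 0.05))
```

The median over the per-star estimates makes the result robust to an occasional bad star.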
\begin{deluxetable*}{lccccccc}
\tablecaption{Summary of observations \label{tab:targetdiscription}}
\tablecolumns{8}
\tablewidth{0pt}
\tablehead{
\colhead{ID} &
\colhead{R.A.} &
\colhead{Dec} & \colhead{EXPTIME} & \colhead{ZEROPOINT} & \colhead{depth\tablenotemark{a}}& \colhead{separation\tablenotemark{b}} & \colhead{FWHM PSF\tablenotemark{c}}\\
\colhead{} & \colhead{(h:m:s)} & \colhead{(d:m:s)} & \colhead{(min)} & \colhead{(mag)} & \colhead{(mag)} & \colhead{(arcsec)} & \colhead{(arcsec)}
}
\startdata
1 & 02:19:01.511 & -05:18:29.07 & 33 & $25.43$ & 24.7 & 72(33) & 0.17\\
2 & 02:17:59.073 & -05:09:39.89 & 18 & $25.43$ & 24.6 & 53(34) & 0.21\\
3 & 02:17:22.781 & -05:17:33.34 & 35 & $25.41$ & 24.9 & 48(16) & 0.15\\
4 & 02:17:19.833 & -04:43:34.75 & 43 & $25.43$ & 25.0 & 41(38) & 0.23\\
5 & 02:16:58.232 & -05:08:35.21 & 54 & $25.41$ & 25.0 & 37(13) & 0.19\\
\enddata
\tablenotetext{a}{5$\sigma$ limiting magnitudes measured with $0.3$ arcsec diameter aperture.}
\tablenotetext{b}{The separation between the tip-tilt stars and the targets.
The numbers in the parentheses are the separations between the tip-tilt stars and the PSF reference stars. }
\tablenotetext{c}{FWHM of the PSF reference stars.}
\end{deluxetable*}
\section{Size measurement}
\subsection{Flux completeness}
We first examine the flux completeness of our targets on the IRCS-AO $K'$-band images
by comparing the flux measured on the IRCS-AO $K'$-band and UKIDSS $K$-band images.
The S/N of our $K'$-band images is lower than that of the UKIDSS $K$-band images.
If our targets have morphologies dominated by low surface brightness components,
a large fraction of the flux detectable on the UKIDSS $K$-band images
may fall below the detection limit of our IRCS-AO $K'$-band images.
Also, the AO-corrected PSF tends to have an extended wing, which also
introduces a diffuse component in the observed profiles. These effects can
result in underestimated sizes and fluxes.
We compare the $K'$-band total magnitudes measured
on our IRCS-AO $K'$-band images ($K'_{total, IRCS-AO}$)
with the UKIDSS $K$-band magnitudes corrected
for the $K-K'$ color term using the best-fit SED model
($K'_{total, synth}$) in order to evaluate the missing flux
(Fig. \ref{fig:magauto} and Table \ref{tab:result}).
Overall, we tend to underestimate the fluxes in the IRCS-AO images as expected.
For ID4 and ID5, we underestimate only by 10\% and the missing light probably
does not affect our size measurements significantly.
However, we miss $25-50$\% of the light for the other targets.
Though care is needed when interpreting individual galaxies, the stacked galaxy
(open circle, see \S 4.4)
does not show a significant amount of missing flux,
suggesting that its size can be robustly measured.
We estimate the effects
of the missing light on the size measurements in Section~\ref{sec:galfit_fitting_errors},
where we reproduce the amount of missing flux with a simulation and evaluate the limitations imposed by the PSF.
\begin{figure}
\centering
\includegraphics[width=85mm]{magauto.180410.eps}
\caption{
Synthetic $K'$-band magnitude ($K'_{total, synth}$) plotted
against IRCS $K'$-band magnitude ($K'_{total, IRCS-AO}$) of our targets.
The red filled circles show the individual objects.
The red open circle shows the stack of ID$1$ to ID$4$.
}
\label{fig:magauto}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=85mm]{sxds1-err-re-med.eps}
\includegraphics[width=85mm]{stack-err-re-med.eps}
\caption{
{\it Top:} Sizes measured by {\sf GALFIT} ($r_{e,maj,fit}$) as a function
of input size ($r_{e,maj,model}$) in our simulation for ID1.
The solid line shows $r_{e,maj,model}=r_{e,maj,fit}$.
The dashed curve and gray shaded regions show the median
and 1$\sigma$ range of the $r_{e,maj,fit}$ at the given $r_{e,maj,model}$.
{\it Bottom:} Similar to the {\it top} panel but for the stacked galaxy.
}
\label{fig:galfiterror}
\end{figure}
\begin{deluxetable*}{lcccccccc}
\tablecaption{Properties of the observed objects \label{tab:result}}
\tablecolumns{10}
\tablewidth{0pt}
\tablehead{
\colhead{ID} &
\colhead{$z_{\rm phot}$ } &
\colhead{$K_{tot}$\tablenotemark{a}} & \colhead{$K'_{tot, synth}\tablenotemark{b}$}
& \colhead{$K'_{tot, observed}$\tablenotemark{c}} & \colhead{$r_{e,maj}$\tablenotemark{d}}
& \colhead{ $M_{\star}$} \\
\colhead{} & \colhead{} & \colhead{(mag)}& \colhead{(mag)} & \colhead{(mag)} & \colhead{(kpc)} & \colhead{($10^{11}~M_{\odot}$)}
}
\startdata
1 & 4.07 & $22.47\pm0.05$& $22.60^{+0.05}_{-0.05}$ & $23.26\pm0.06$
& $0.92\pm0.31$ & 1.58\\
2 & 3.83 & $22.54\pm0.05$& $22.69^{+0.06}_{-0.05}$ & $23.43\pm0.08$
& $0.22\pm0.21$ & 1.09\\
3 & 3.70 & $22.55\pm0.05$& $22.65^{+0.05}_{-0.05}$ & $23.07\pm0.04$
& $0.63\pm0.18$ & 1.04\\
4 & 4.24 & $22.61\pm0.05$& $22.73^{+0.05}_{-0.05}$ & $22.85\pm0.03$
& $0.50\pm0.21$ & 1.83\\
5 & 4.09 & $23.35\pm0.09$& $23.38^{+0.10}_{-0.04}$ & $23.52\pm0.05$
& $1.70\pm0.71$ & 1.13\\
STACK & ... & $22.54\pm0.03$ & $22.67^{+0.03}_{-0.03}$ & $22.78\pm0.03$
& $0.52\pm0.18$ & 1.38 \\
\enddata
\tablenotetext{a}{Kron magnitudes measured on the UKIDSS $K$-band images.}
\tablenotetext{b}{Expected $K'$-band total magnitudes from the SED fits.}
\tablenotetext{c}{Kron magnitudes measured on the IRCS-AO $K'$-band images.}
\tablenotetext{d}{Median and standard deviation of the $r_{e,maj}$ measured with fixed $n=0.5,1,2,3,4 \&5$. }
\end{deluxetable*}
\begin{deluxetable}{lccc}
\tablecaption{{\sf GALFIT} fittings of ID5 with the IRCS-AO $K'$ and WFC3 $H$-band images \label{tab:k-candels}}
\tablecolumns{4}
\tablewidth{0pt}
\tablehead{
\colhead{Band} & \colhead{mag} &\colhead{$r_{e,maj}$} &\colhead{$n$} \\
\colhead{} & \colhead{(mag)} & \colhead{(kpc)} & \colhead{}}
\startdata
$K'$ & $23.52\pm0.05$ & $1.70\pm0.71$ &$0.79_{0.5}^{2.99}$\\
$H$ & $24.84\pm0.02$ & $1.19\pm0.03$ &$0.73\pm0.03$ \\
\enddata
\end{deluxetable}
\subsection{GALFIT fitting}
The sizes of our targets are measured by fitting S\'ersic profiles \citep{1968adga.book.....S} to their $K'$-band images
using {\sf GALFIT} \citep{2002AJ....124..266P, 2010AJ....139.2097P}.
{\sf GALFIT} fits two-dimensional analytical functions
convolved with a PSF to an observed galaxy image.
Here we adopt a physical scale of $6.951$ kpc/arcsec
at $z=4$ for all the targets.
We use the nearest star in the field of view, or a star taken before and after the science exposures, as the PSF reference star.
We first fit S\'ersic models over the ranges of effective radius
$r_e=0.2-12$ kpc and S\'ersic index $n=0.5-10$.
The fits are performed using an image cutout of 3.0 arcsec on a side for each object.
The background values are estimated in an annulus between
$2.9$ to $3.0$ arcsec from each object before S\'ersic model fittings.
As an initial guess, we use total magnitudes measured by {\sf SExtractor} \citep{1996A&AS..117..393B},
$r_e =1$ kpc and $n=1.4$.
The results are not sensitive to this initial guess.
In order to compare our results with \citet{2014ApJ...788...28V},
who measured galaxy sizes out to $z\sim3$,
we use the effective radius along the semi-major axis ($r_{e,maj}$).
\begin{figure*}
\centering
\includegraphics[width=175mm]{galfitimage180829.eps}
\caption{
The observed images and {\sf GALFIT} result for ID1 to ID5 and stacked galaxy.
The observed images, best-fit S\'ersic models, and residuals are shown from left to right.
The sizes of the images are $2.1$ arcsec $\times$ $2.1$ arcsec.
The yellow filled circles show the FWHM of the PSF size on each image.
}
\label{fig:stamp}
\end{figure*}
\subsection{GALFIT fitting errors}
\label{sec:galfit_fitting_errors}
\citet{2017MNRAS.469.2235K} studied
the morphologies of galaxies at $z\approx3$
using deeper $K'$-band images taken with the same instrument as ours
and discussed the errors of {\sf GALFIT} fitting on those images.
We here discuss the possible errors in our size measurements following that work.
Let us start with the limitation imposed by the PSF.
Our targets are barely resolved
even in our high-resolution images.
We note that our results may be only upper limits,
since the reduced $\chi^2$ of the S\'ersic model fits
improves only marginally ($\Delta \chi^2\sim 0.01$) over that of PSF model fits.
In addition, fits with models of different S\'ersic indices, $n=0.5-5$,
are equally good.
We therefore adopt the median $r_{e,maj}$
of the {\sf GALFIT} fits with fixed $n=0.5,1,2,3,4~\&~5$
as the best-fit value.
In addition, there can be errors originating from a slight PSF inconsistency.
We would ideally evaluate the PSF at the positions of the targets,
but that is difficult in practice.
We use a single PSF reference star either
within the field of view or taken before/after the science exposures.
Even though the target and the PSF reference star are taken in the same frame,
as shown in Table \ref{tab:targetdiscription},
the distance between the tip-tilt star and the target,
and that between the tip-tilt star and the PSF reference star are not the same.
In case of our targets, we expect the PSF difference of $\lesssim0.03$ arcsec
according to the performance of AO188\footnote{https://www.subarutelescope.org/Observing/Instruments/AO\\/performance.html}.
However, since galaxies at $z\sim4$ are very small, this may not be negligible.
The separation between the PSF reference star and the tip-tilt star is always smaller
than that between the target and the tip-tilt star, i.e., the PSF we use
in the fits is likely smaller than the real PSF at the object position.
This leads us to over-estimate the sizes, so our estimates are likely conservative.
\citet{2017MNRAS.469.2235K} reported that
this level of PSF inconsistency does not affect the measured sizes,
but it does significantly affect the measured S\'ersic indices,
which we do not discuss in this paper.
Next, we test the accuracy of the {\sf GALFIT} measurement
by generating mock galaxy images following \citet{2017MNRAS.469.2235K}.
We investigate the typical fitting errors by inserting artificial objects
into the sky of the observed image, measuring their sizes,
and comparing the input and output structural parameters.
We here use the coadd image of ID1 as the representative case of our sample.
We generate artificial sources over a range of parameters;
$K'=22.6$ ($\approx K'_{tot, synth}$ of ID1),
$r_{e,maj}<4$ kpc, S\'ersic indices $n=0-8$, and various axis ratios and position angles.
They are convolved with the PSF reference star for ID1
and added to the sky of ID1 image.
By repeating this simulation, we find that
the median and standard deviation of the measured total magnitudes
are $23.0\pm0.6$ for $r_{e,maj,model}=0.5-2$ kpc,
which is consistent with the observed $K'$-band total magnitude of ID1.
In other words, the simulation reproduces the missing flux in the observation,
suggesting that our simulation is reasonably realistic.
We show the $r_{e,maj}$ of the mock galaxies ($r_{e,maj,model}$)
vs. those measured with {\sf GALFIT} ($r_{e,maj,fit}$) in Fig. \ref{fig:galfiterror}.
Naively, we expect the sizes of small ($\la 1$ kpc) objects to be overestimated
due to the limited resolution, while those of large objects are underestimated
since their outer profiles are buried in noise.
We see this tendency, although weakly.
The standard deviation of S\'ersic indices is $\sigma(n)=2.3$ (plot not shown).
This again suggests that the S\'ersic indices are hardly constrained with our data.
Though we input various axis ratios, the measured axis ratios tend to be lower than 0.4;
that is, more elongated models are favored as the best fits.
This may be caused by an asymmetric distribution of noisy pixels around the source,
since this tendency is softened at the depth of the stacked image.
The best-fit models of ID1 and ID5 in Fig. \ref{fig:stamp} look elongated;
however, it is not clear whether these are real features.
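The mock-galaxy generation can be sketched as follows; this is a minimal, unconvolved S\'ersic model with a truncated analytic approximation for $b_n$, not the exact code used in our simulation:

```python
import numpy as np

def sersic_image(n, r_e_pix, mag, zp, size=64, q=1.0):
    """Ideal (unconvolved) Sersic profile on a pixel grid,
    normalized to the total counts implied by mag and zero point.
    b_n uses the leading terms of the Ciotti & Bertin expansion."""
    b_n = 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r = np.hypot(x, y / q)                 # elliptical radius, axis ratio q
    img = np.exp(-b_n * ((r / r_e_pix) ** (1.0 / n) - 1.0))
    total = 10.0 ** (-0.4 * (mag - zp))    # desired total counts
    return img * total / img.sum()

mock = sersic_image(n=1.0, r_e_pix=3.0, mag=22.6, zp=25.43)
# In the actual simulation such models are PSF-convolved and added
# to blank-sky regions of the observed frame before refitting.
```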
Finally, we compare the $r_{e,maj}$ measured
on the {\it HST}/WFC3 $H$-band image with that from our $K'$-band image.
Among our sample, only ID5 is within the Cosmic Assembly
Near-IR Deep Extragalactic Legacy Survey (CANDELS;
\citealt{2011ApJS..197...35G,2011ApJS..197...36K}).
We summarize the comparison in Table \ref{tab:k-candels}.
The two measurements need not be identical
due to the wavelength difference, but they provide a useful comparison.
The size estimates on these images are broadly consistent with each other, but
our size estimate is slightly larger as expected from the simulation above.
This may also imply that there is no strong rest-frame UV to optical color gradient
due to age and/or metallicity gradients of the stellar population, or to attenuation by dust.
The stellar population of these galaxies may be relatively simple.
On the other hand, the uncertainty in the S\'ersic index is large for $K'$,
which is again consistent with the above indications.
Taking all the tests together, there is a small bias in our size measurements
for individual objects, in the sense that we tend to over-estimate the sizes
by $\sim30$\%.
We do not correct for this bias, to be conservative.
Our estimates can thus be considered as reasonable upper limits.
\subsection{Stacking analysis}
We stack our targets to gain S/N and measure their average size.
We exclude ID5 from the stacking because it is fainter than the others.
We smooth the single-exposure images of ID1 to ID4
to a common seeing of $0.23$ arcsec (that of ID4) by convolving with a Gaussian,
and then median-stack them.
The total $K'$-band magnitude measured on the stacked image shows only a small amount
of missing flux (10\%, Fig.~\ref{fig:magauto}).
We repeat the same {\sf GALFIT} simulation
using the stacked image and PSF reference star for ID4 (Fig. \ref{fig:galfiterror}, {\it bottom}).
Similar to the individual galaxies, the reduced $\chi^2$ values of S\'ersic model fitting
only marginally improves from that with the PSF model fitting.
The $r_{e,maj}$ errors are greatly reduced compared to the simulation for ID1.
The bias in the size measurement changes only marginally depending on the adopted PSF.
This gives us confidence in the size measured for the QGs on our stacked image.
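The PSF homogenization and stacking can be sketched as follows, assuming Gaussian PSFs so that kernel widths add in quadrature; the frames below are pure noise for illustration, with FWHM values roughly matching our 0.15-0.23 arcsec range at the 52 mas pixel scale:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def homogenize_and_stack(images, fwhm_pix, target_fwhm_pix):
    """Smooth each image to a common Gaussian FWHM, then median-stack.
    Gaussian widths add in quadrature: sig_kern^2 = sig_tgt^2 - sig_img^2."""
    f2s = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> sigma
    smoothed = []
    for img, fwhm in zip(images, fwhm_pix):
        sig_kern = f2s * np.sqrt(max(target_fwhm_pix**2 - fwhm**2, 0.0))
        smoothed.append(gaussian_filter(img, sig_kern))
    return np.median(smoothed, axis=0)

# Hypothetical toy frames; FWHMs in pixels (52 mas each)
rng = np.random.default_rng(1)
frames = [rng.normal(0.0, 1.0, (32, 32)) for _ in range(4)]
stack = homogenize_and_stack(frames, [2.9, 4.0, 3.7, 4.4], 4.4)
```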
\section{Results}
The results of {\sf GALFIT} fitting are shown
in Fig. \ref{fig:stamp} and summarized in Table \ref{tab:result}.
The $r_{e,maj}$ of our targets range from $0.2$ to $1.7$ kpc, with
the median and standard deviation both being 0.6 kpc.
Our results indicate that massive QGs at $z\sim4$ are indeed compact.
As discussed above, the individual size estimates may suffer from the flux
incompleteness (we are missing diffuse light), but we obtain a consistent result
for the stacked galaxy; the $r_{e,maj}$ measured on the stack is $0.52\pm0.18$ kpc,
providing further support for the compact sizes.
We also note that the possible systematic errors from the PSF inconsistency
are not included in our size estimate errors;
however, as mentioned above, they are unlikely to be significant.
Figure \ref{fig:masssize} shows the stellar mass vs. $r_{e,maj}$ diagram
of the QGs at $z\sim4$.
For comparison, we plot the size-stellar mass relation of
QGs at $z=2.75$ measured at rest-frame optical \citep{2014ApJ...788...28V}
and at $z\sim3.7$ measured at rest-frame UV
(\citealt{2015ApJ...808L..29S}, using the catalog of \citealt{2014ApJ...783L..14S} described above).
Both studies select QGs with the photometric redshifts and $UVJ$ colors,
and measure the size on {\it HST}/WFC3 $H$-band images from CANDELS.
The QGs at $z\sim4$ lie below the size-stellar
mass relation of QGs at $z=2.75$, suggesting that they have
smaller physical sizes than their lower redshift counterparts.
The size measured on the stack, shown with the open square, confirms this trend.
The QGs at $z\sim3.7$ have a somewhat large dispersion in size, but our targets have
sizes consistent with the smallest objects in their sample.
There are a few objects with a large size of $r_{e,maj}\sim4$ kpc among QGs at $z\sim3.7$
which are more consistent with the typical sizes
of SFGs at $z=2.75$ \citep{2014ApJ...788...28V}.
This might indicate contamination by SFGs
in their $UVJ$-selected QGs.
In Fig. \ref{fig:sizeredshift}, we show the rest-frame optical size-redshift relation
of galaxies with $10^{11}~M_{\odot}\leq M_{\star}\leq 10^{11.5}~M_{\odot}$.
The $r_{e,maj}$ at $z=0$, $0.75\leq z\leq2.75$ and $z\sim 3.7$
are the median sizes of QGs with $10^{11}~M_{\odot}\leq M_{\star}\leq 10^{11.5}~M_{\odot}$
from \citet{2009MNRAS.398.1129G}, \citet{2014ApJ...788...28V}
and \citet{2015ApJ...808L..29S}, respectively.
The $z=3.1$ point is from \citet{2017MNRAS.469.2235K},
who measured the size of a QG with $M_{\star}\approx 2\times10^{11}~M_{\odot}$
in a protocluster in the $K'$-band using IRCS/AO188.
Our stacked galaxy is shown as the open square.
We extend the size-redshift relation of QGs out to $z\sim4$ for the first time.
The figure shows that the sizes of massive QGs continuously decrease with redshift out to $z=4$,
corresponding to an order-of-magnitude size evolution between $z=4$ and $z=0$.
This is a surprisingly strong evolution.
Note that the size-stellar mass relation in \citet{2014ApJ...788...28V} shows an upturn at $z=2.75$.
This could be caused by contamination from SFGs.
In their $UVJ$ color diagram,
the dispersions of the color sequences of QGs and SFGs increase with redshift
due to observational errors and possibly changes in the SEDs of galaxies.
The fraction of contaminants among galaxies selected as QGs is therefore expected to increase with redshift.
\citet{2015ApJ...808L..29S} also use a rest-frame $UVJ$ color selection,
but since they exclude galaxies near the border of the selection criteria,
such contaminants may be reduced in their sample.
The size-redshift relation is often parameterized in the form $r_e/{\rm kpc}=A(1+z)^{\beta}$.
\citet{2014ApJ...788...28V} find $A=11.2_{-2.1}^{+2.6}$ and $\beta=-1.32\pm0.21$
for QGs with $10^{11}~M_{\odot}\leq M_{\star}\leq 10^{11.5}~M_{\odot}$
(dashed line in Fig. \ref{fig:sizeredshift}).
Adding the results at $z>3$ and fitting at $0.75\leq z \leq 4$,
we find $A=18.8\pm3.0$ and $\beta=-1.9\pm0.2$ (solid line),
though it is hard to fit the whole redshift range with this form.
Our results support a stronger size evolution of QGs compared to SFGs
with $\beta \sim-1$ (e.g., \citealt{2014ApJ...788...28V,2015ApJS..219...15S,2015ApJ...808L..29S}) up to $z=4$.
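As a concrete illustration, a fit of this parameterization can be reproduced with a standard least-squares routine. The sketch below is schematic: the data points are generated from the quoted best-fit relation itself ($A=18.8$, $\beta=-1.9$) rather than from the actual measurements, so it only demonstrates the fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def size_model(z, A, beta):
    # r_e/kpc = A * (1 + z)**beta, the parameterization used in the text
    return A * (1.0 + z) ** beta

# Synthetic points generated from the quoted best-fit relation
# (A = 18.8, beta = -1.9); these are NOT the measured sizes.
z = np.array([0.75, 1.25, 1.75, 2.25, 2.75, 3.7, 4.0])
r_e = size_model(z, 18.8, -1.9)

# Recover the parameters with non-linear least squares
(A_fit, beta_fit), _ = curve_fit(size_model, z, r_e, p0=(10.0, -1.0))
```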
\begin{figure}
\centering
\includegraphics[width=85mm]{mass-size180821.eps}
\caption{
Stellar mass vs. $r_{e,maj}$.
The filled squares, filled circles and blue open square
show ID 1-3, ID 4-5 and the stack of ID 1-4, respectively.
The solid line and the shaded area show the size-stellar mass relation for QGs at $z=2.75$
from \citet{2014ApJ...788...28V}.
The open circles show QGs at $z\sim3.7$ whose sizes are measured
at rest-frame UV in \citet{2015ApJ...808L..29S}.
\label{fig:masssize} }
\end{figure}
\begin{figure}
\centering
\includegraphics[width=85mm]{size-redshift180821.eps}
\caption{
Size evolution of QGs with stellar mass $10^{11}~M_{\odot}\leq M_{\star}\leq 10^{11.5}~M_{\odot}$ up to $z=4$.
The blue open square shows the stack of QGs at $z\sim4$ in this study.
The red open circle, filled circles and cross show the median
$r_{e,maj}$ of QGs at $z=0$ from \citet{2009MNRAS.398.1129G},
at $0.75\leq z\leq2.75$ from \citet{2014ApJ...788...28V}
and $z\sim 3.7$ from \citet{2015ApJ...808L..29S}, respectively.
The error bar of \citet{2009MNRAS.398.1129G} shows the $\sim0.1$ dex
difference between \citet{2003MNRAS.343..978S} and \citet{2009MNRAS.398.1129G}.
The blue open triangle shows the QG with $M_{\star}\approx2\times10^{11}~M_{\odot}$
at $z=3.1$ in \citet{2017MNRAS.469.2235K}.
The black solid curve shows $r_{e,maj}=A(1+z)^{\beta}$ fit in this study.
The gray dashed and dotted lines show those for QGs and SFGs
in \citet{2014ApJ...788...28V}, respectively.
\label{fig:sizeredshift}}
\end{figure}
\section{Discussion}
In this study, we measure the size of massive QGs
at $z\sim4$ in the rest-frame optical wavelength
for the first time based on the AO-assisted imaging using a ground-based telescope.
There are a few possible uncertainties in our results.
One is contamination of AGNs
which could make galaxies look compact.
However, as shown in Fig.~\ref{fig:individual_seds},
the overall SEDs of our targets are dominated by evolved
stellar populations as indicated by the strong Balmer break,
which suggests that the continuum is dominated by stars.
Thus, the AGN contamination, if any, is unlikely to significantly alter our results.
Our targets are not detected in X-ray \citep{2008ApJS..179..124U} or MIR \citep{2007sptz.prop40021D}.
Although only very active AGNs are detectable at the depth of these data at $z\sim4$,
this lends further support to the absence of significant AGN contamination.
Another question concerns the quiescence of our targets.
Although the SED fits indicate that these galaxies are not actively forming stars,
their quiescence should be further confirmed by other means.
\citet{2017arXiv170302207G} detected
significant far-IR fluxes from $BzK$ and $UVJ$-selected QGs,
suggesting that optical--near-IR selection does not always give a clean sample of QGs.
Multi-wavelength follow-up observations of our targets are essential to fully confirm their quiescence.
Efforts in this direction are underway.
Although we should further address these possible uncertainties in the future,
it is interesting to discuss the origin and evolution
of these extremely compact massive QGs at $z\sim4$.
In this section, we first discuss the extremely high stellar mass surface density
of them and then focus on their size evolution on the evolving stellar mass track.
\subsection{Extremely high stellar mass surface density}
\begin{figure}
\centering
\includegraphics[width=85mm]{surfacedensity_M180821.eps}
\caption{
Surface stellar mass densities within effective radii vs. stellar mass.
The large blue diamond shows the massive QGs at $z=4$.
We also show their evolution track found in \S 6.2.
The crosses show dispersion-supported systems in local Universe
from \citet{2011AJ....142..199B}
(GC=globular cluster; cE=compact elliptical; E=early-type galaxy; dE=dwarf elliptical).
The red open circle shows the densest UCD reported in \citet{2013ApJ...775L...6S}.
\label{fig:surfacedensity} }
\end{figure}
It has been known that massive QGs at high redshift
have extremely high stellar mass surface densities
(e.g., \citealt{2008ApJ...677L...5V}).
We compare the mean stellar mass surface densities
within the effective radii of massive QGs at $z=4$
and dispersion-supported stellar systems in the local Universe
\citep{2011AJ....142..199B} in Fig. \ref{fig:surfacedensity}.
The quantities in \citet{2011AJ....142..199B} are originally given in $V$-band luminosity.
We convert the $V$-band luminosity into stellar mass adopting $M_{\star}/L_V=3$,
which corresponds to a simple stellar population model with an age of $\sim10$ Gyr
and a \citet{2003PASP..115..763C} IMF.
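The conversion and the density computation can be sketched as follows. This is a minimal illustration: the factor of $1/2$ assumes half the stellar mass lies within the effective radius, and the example numbers are illustrative, not the measured values of our targets.

```python
import math

M_L_V = 3.0  # adopted stellar mass-to-light ratio M*/L_V (see text)

def stellar_mass_from_LV(L_V):
    """Convert a V-band luminosity (L_sun) to a stellar mass (M_sun)."""
    return M_L_V * L_V

def mean_surface_density(M_star, r_e_kpc):
    """Mean stellar mass surface density within r_e (M_sun / kpc^2),
    assuming half the stellar mass lies within the effective radius."""
    return 0.5 * M_star / (math.pi * r_e_kpc ** 2)

# Illustrative (not measured) values: a 10^11 M_sun QG with r_e = 0.5 kpc
sigma = mean_surface_density(1e11, 0.5)  # ~6.4e10 M_sun / kpc^2
```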
Note that $M_{\star}/L_V$ can depend on the object type.
We also show the densest ultra compact dwarf (UCD) reported in \citet{2013ApJ...775L...6S}
using its stellar mass from \citet{2014Natur.513..398S}.
It is interesting that high-z QGs and globular clusters (GCs),
consisting of the oldest stars of the Milky Way and thought to form at high redshift,
both have extremely high stellar mass surface densities,
even though their typical masses differ by several orders of magnitude.
\citet{2017ApJ...843...78J} and \citet{2017MNRAS.467.4304V}
find very low-mass ($M_{\star}=$ a few $\times10^6~M_{\odot}$),
extremely dense galaxies at $z=2-6$ using strong lensing.
Although these objects are more massive than GCs,
their existence implies that such ultra-dense objects were commonly formed at high redshift.
Given the high density and high gas fraction in the early Universe,
we naturally expect that gas rich major mergers
are one of the channels to form such extremely compact objects.
In addition, cosmological numerical simulations predict that
high-z galaxies are fed by streams of smooth gas and merging clumps from the cosmic web;
they then undergo violent disc instabilities and end up as dense objects
through dissipative compaction of gas and a subsequent starburst
\citep{2014MNRAS.438.1870D, 2015MNRAS.450.2327Z}.
We remark that at $z > 4$, dusty SFGs (DSFGs)
display very compact far-IR emitting regions
that pinpoint the ongoing starburst and serve as a good proxy for the subsequent stellar remnant
(e.g., \citealt{2015ApJ...810..133I, 2017arXiv170904191O, 2018ApJ...856..121G}).
In a sample of six DSFGs at $z \sim 4.5$ with evidence of minor mergers,
\citet{2018ApJ...856..121G} measured a median stellar mass
of $\log (M_{\star}/M_{\odot}) = 10.49 \pm 0.32$ and far-IR sizes of $r_{e} = 0.70 \pm 0.29$ kpc.
They expect the starburst to be completed in $\sim 50$ Myr, faster than the anticipated timescale for the observed mergers of $\sim 500$ Myr.
The massive QGs at $z\sim4$ studied here may have stopped star formation earlier ($z>5$)
than these DSFGs; nevertheless, they demonstrate that massive stellar cores
could be built up and quenched quickly at such high redshift.
Further detailed studies of DSFGs with ALMA are awaited.
Finally, we note \citet{2010MNRAS.401L..19H},
who report that the maximum stellar surface
densities of GCs and high-z compact QGs
lie at the global stellar mass surface density limit
regardless of their masses, and propose that this limit is set
by feedback from young massive stars when star formation reaches the Eddington limit.
Their results also imply that the densest objects
are formed in the extreme situation which may be only achievable in the early Universe.
\begin{figure}
\centering
\includegraphics[width=87mm]{size-mass-ccd180821.eps}
\caption{
Size-stellar mass growth from massive QGs at $z=4$ taking the stellar mass evolution
into account based on \citet{2014ApJ...794...65M}.
The $r_{e,maj}$ at each point are that extrapolated from \citet{2014ApJ...788...28V} (red filled circles),
median and the 25-75\% interval of the $r_{e,maj}$
of galaxies with $M_{\star}=10^{11.8}~M_{\odot}$
at $z=0$ from \citet{2009MNRAS.398.1129G} (red open circle),
and observed values (others).
The black dotted curve shows the best-fit curve.
The gray solid and dashed curves show the toy models of size-stellar mass growth
in cases of minor mergers ($r_{e,maj}\propto M_{\star}^2$)
and major mergers ($r_{e,maj}\propto M_{\star}$), respectively.
\label{fig:size-mass-ccd}}
\end{figure}
\subsection{Size evolution on the evolving stellar mass track}
The size-redshift relation of massive QGs in Fig. \ref{fig:sizeredshift} is
measured at a fixed stellar mass.
Galaxies grow with time, and the evolving cumulative number density
determined with a semi-empirical abundance-matching approach
has been used to find the progenitors of particular descendants
(e.g., \citealt{2013ApJ...777L..10B}).
\citet{2014ApJ...794...65M} track the progenitors
of ultra-massive galaxies today
($M_{\star}\approx10^{11.8}~M_{\odot}$) by this method
and find their stellar mass evolution as a function of redshift, $\log (M_{\star}/M_{\odot}) = A + Bz + Cz^2$
where $A=11.801\pm0.038$, $B=-0.304\pm0.054$ and $C=0.039\pm0.014$.
This relation is based on the observations at $z<3$
but if we extrapolate it to $z\sim4$, we find that
massive QGs at $z\sim4$ in this study are on this evolutionary track,
i.e., they plausibly evolve into ultra-massive galaxies today.
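Evaluating this relation at $z=4$ with the quoted coefficients makes the extrapolation explicit; the snippet below is a simple arithmetic check of the published polynomial.

```python
# Coefficients of log10(M*/M_sun) = A + B*z + C*z^2 quoted in the text
A, B, C = 11.801, -0.304, 0.039

def log_mstar(z):
    """log10 stellar mass of the progenitors of today's ultra-massive galaxies."""
    return A + B * z + C * z ** 2

# Extrapolation to z = 4 (the relation itself is calibrated at z < 3)
log_m4 = log_mstar(4.0)  # ~11.21, i.e. M* ~ 1.6e11 M_sun
```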
Taking the stellar mass evolution into account
using the stellar mass-redshift relation in \citet{2014ApJ...794...65M},
we show the size-stellar mass evolution from massive QGs at $z=4$ in Fig. \ref{fig:size-mass-ccd}.
The $r_{e,maj}$ are median and the 25-75\% interval of the $r_{e,maj}$
of galaxies with $M_{\star}=10^{11.8}~M_{\odot}$
from \citet{2009MNRAS.398.1129G} at $z=0$,
and extrapolated from the size-stellar mass relation
at each redshift in \citet{2014ApJ...788...28V} at $0.25\leq z\leq 2.75$.
The $r_{e, maj}$ at $z>3$ are the observed values
since their stellar masses are on the stellar mass-redshift relation of \citet{2014ApJ...794...65M}.
The point at $z=3.1$ \citep{2017MNRAS.469.2235K} is not included in the fit;
it is shown only for reference.
The {\it top} and {\it bottom} panels of Fig. \ref{fig:sizeredshift-constatnt}
show the size evolution as functions of redshift and cosmic time, respectively.
The size-redshift relation is fitted with the form $r_{e,maj}/{\rm kpc}=A\times(1+z)^{B}$
where $A=44.1\pm6.1$ and $B=-2.6\pm0.2$, or $r_{e,maj}/{\rm kpc}=A\times B^{-(1+z)}$
where $A=69.7\pm7.7$ and $B=2.7\pm0.1$.
If we fit the size-time relation with the form $\log(r_{e,maj}/{\rm kpc})= A+B \log(t/\rm Gyr)$,
we obtain $A=-0.56\pm0.07$ and $B=1.91\pm0.09$.
We find that they grow in size by a factor of $\sim10$ in the first few Gyr
and finally acquire, by $z=0$, a size $\sim30$ times larger than that of a massive QG at $z=4$.
They have also evolved significantly in the stellar mass surface density (Fig. \ref{fig:surfacedensity}).
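For reference, the three quoted parameterizations can be evaluated directly. The snippet below is purely an arithmetic check of the best-fit forms at the quoted parameter values, not a re-derivation of the fits.

```python
import math

def r_powerlaw(z):       # r_e/kpc = 44.1 * (1+z)**(-2.6)
    return 44.1 * (1.0 + z) ** -2.6

def r_exponential(z):    # r_e/kpc = 69.7 * 2.7**(-(1+z))
    return 69.7 * 2.7 ** (-(1.0 + z))

def r_time(t_gyr):       # log10(r_e/kpc) = -0.56 + 1.91*log10(t/Gyr)
    return 10 ** (-0.56 + 1.91 * math.log10(t_gyr))

# Both redshift forms give sub-kpc sizes at z = 4
r4_pl, r4_exp = r_powerlaw(4.0), r_exponential(4.0)
```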
\begin{figure}
\centering
\includegraphics[width=85mm]{size-redshift-constant180821.eps}
\includegraphics[width=85mm]{size-time-constant180821.eps}
\caption{
{\it Top:} Size-redshift relation of massive QGs taking the stellar mass evolution into account.
The data points are the same as those in Fig. \ref{fig:size-mass-ccd}.
The black solid curve and dashed line show the best-fit curves
in forms of $r_{e,maj}/{\rm kpc}=A\times(1+z)^{B}$ and $r_{e,maj}/{\rm kpc}=A\times B^{-(1+z)}$, respectively.
{\it Bottom:} Similar to the {\it top} panel but scaled in cosmic time
and the black solid curve shows the best-fit curve in a form of $\log(r_{e,maj}/{\rm kpc})= A+B \log(t/\rm Gyr)$.
\label{fig:sizeredshift-constatnt}}
\end{figure}
In order to constrain physical processes driving this rapid evolution,
we compare the size-stellar mass growth
to the two toy models shown in Fig. \ref{fig:size-mass-ccd}.
We show size-stellar mass growth models
via minor mergers (gray solid curve) and major mergers (gray dashed curve),
which follow $r\propto M^2$ and $r\propto M$,
respectively \citep{2009ApJ...697.1290B,2009ApJ...699L.178N}.
The observed size-stellar mass growth closely follows that of minor mergers:
it is fitted in a form $r_{e,maj}/{\rm kpc}=A\times (M_{\star}/10^{11}~M_{\odot})^B + C$
where $A=1.0\pm0.4$, $B=1.9\pm0.2$ and $C=-1.3\pm0.6$ (black dotted curve).
Similarly, \citet{2010ApJ...709.1018V} evaluate
the size-stellar mass evolution of massive galaxies
with $M_{\star}\approx10^{11.45}~M_{\odot}$ at $z=0$,
taking the stellar mass evolution into account with the constant number density method.
Although the samples and methods differ,
they report an $r_e\propto M^{2.08}$ evolution, similar to our result.
Our result also agrees with the prediction of the stellar mass and size growth
of massive-end QGs with $M_{\star}\approx10^{11.8}~M_{\odot}$
in \citet{2018MNRAS.474.3976G} based on the IllustrisTNG simulation.
Taking all these results together, we conclude that
the evolution of massive-end galaxies from $z=4$
is likely to be driven by minor mergers.
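The two toy models can be written as simple scaling laws. The sketch below makes the contrast explicit: doubling the stellar mass doubles the size under the major-merger scaling but quadruples it under the minor-merger scaling.

```python
def size_growth(mass_growth_factor, exponent):
    """Size growth factor under a power-law toy model r_e ∝ M**exponent."""
    return mass_growth_factor ** exponent

# Doubling the stellar mass:
major = size_growth(2.0, 1.0)  # major mergers, r ∝ M   -> size doubles
minor = size_growth(2.0, 2.0)  # minor mergers, r ∝ M^2 -> size quadruples
```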
Note that lower-mass galaxies may not necessarily
follow the size growth found in this study.
Such mass-dependent evolution has been predicted by cosmological numerical simulations;
for example, \citet{2018MNRAS.474.3976G} predict
a more moderate size growth for lower-mass galaxies.
The continual addition of massive galaxies to the quiescent population,
the so-called progenitor bias, may also contribute
to the observed size growth (e.g., \citealt{2013ApJ...773..112C,2013ApJ...777..125P}),
though it alone may not be sufficient \citep{2015ApJ...799..206B}.
Several studies have reported that the observed merger rate
cannot account for the size growth of high-z compact ellipticals
\citep{2011ApJ...738L..25W,2012ApJ...746..162N,2016ApJ...830...89M};
on the other hand, in-situ star formation in satellites
before mergers can enhance the size growth
achievable via minor mergers \citep{2016ApJ...816...87M}.
The environment of the most massive galaxies may also be special:
the massive compact elliptical at $z=3.09$ in \citet{2017MNRAS.469.2235K}
resides in a dense group of massive galaxies, sufficient to drive at least a tenfold size growth.
Further studies of not only compact massive quiescent galaxies themselves
but also their environment are needed to understand
what physical processes govern the size-stellar mass growth.
\section{Conclusion}
We have measured the rest-frame optical sizes of massive galaxies
with suppressed star formation at $z\sim4$
with IRCS and AO188 on the Subaru telescope.
Although our measurements on individual galaxies are noisy, the more robust size measurement on the stacked object reveals that these galaxies have smaller physical sizes than their lower-redshift counterparts.
This is the first measurement of the rest-frame optical sizes of QGs at $z\sim4$.
Their mean stellar mass surface density is similar to that of GCs,
among the densest stellar systems in the Universe,
although their masses differ by several orders of magnitude.
This implies that the densest galaxies
originate from the high density and high gas fraction of the early Universe.
If we take the stellar mass evolution into account,
they plausibly evolve into the most massive galaxies today
and their stellar mass-size evolution is consistent with a scenario in which minor dry mergers drive the size growth.
We have shown that massive QGs at $z\sim4$ are compact,
but we have pushed the ability of current facilities close to the limit.
Deeper and higher-resolution imaging
at $> 2~\mu$m with AO on ground-based large(r) telescopes
and the James Webb Space Telescope ({\it JWST}) are needed to make a leap from here.
\acknowledgments
K.Y. was supported by JSPS KAKENHI Grant No. JP16K17659 and JP18K13578.
M.T. acknowledges support by JSPS KAKENHI Grant No. 15K17617.
MS and S.T. acknowledge support from the European Research Council (ERC) Consolidator Grant funding scheme (project ConTExt, grant number 648179). The Cosmic Dawn Center is funded by the Danish National Research Foundation.
This work is based on data collected at the Subaru Telescope, which is operated by
the National Astronomical Observatory of Japan.
We thank the anonymous referee for the useful report, which helped improve the paper.
\section{Introduction}
Clustering is inherently subjective \cite{Caruana06metaclustering,ScienceOrArt}: different users often require very different clusterings of the same dataset, depending on their prior knowledge and goals.
Constraint-based (or semi-supervised) clustering methods are able to deal with this subjectivity by taking a limited amount of user feedback into account.
Often, this feedback is given in the form of pairwise constraints \cite{Wagstaff01constrainedk-means}. The algorithm has no direct access to the cluster labels in a target clustering, but it can perform pairwise queries to answer the question: \textit{do instances $i$ and $j$ have the same cluster label in the target clustering?}
A must-link constraint is obtained if the answer is yes, a cannot-link constraint otherwise.
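Such a query interface can be emulated by a simple oracle over a hidden target clustering. The sketch below (with a hypothetical six-instance labeling) illustrates the mechanism assumed throughout this setting: the algorithm never sees the labels, only the yes/no answers.

```python
def make_oracle(target_labels):
    """Build a pairwise-query oracle over a hidden target clustering.
    query(i, j) returns True for a must-link answer, False for cannot-link."""
    def query(i, j):
        return target_labels[i] == target_labels[j]
    return query

# Hypothetical target clustering over six instances
oracle = make_oracle([0, 0, 1, 1, 2, 2])
```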
An effective constraint-based clustering system should satisfy three requirements.
First, it should allow for an iterative clustering process. In each iteration the user answers several pairwise queries, resulting in pairwise constraints. The clustering system uses these to improve the current clustering.
This process is repeated until the user is satisfied with the given clustering.
Second, it should produce high-quality solutions given only a limited number of pairwise queries.
This motivates the use of active query selection in clustering, in which the clustering system tries to determine the most informative queries.
Third, the process should be fast.
The workflow described above is inherently interactive: the user repeatedly answers pairwise queries and inspects the updated clustering.
For this to work in practice, both producing the clusterings and deciding on which pairs to query next should be fast.
None of the existing constraint-based clustering systems fulfills all of these requirements.
First, most of them assume that all pairwise constraints are given prior to running the clustering algorithm \cite{xing2002distance,probframework,Bilenko2004,Mallapragada2008}, which makes them non-iterative.
Second, traditional systems typically query random pairs \cite{xing2002distance,Bilenko2004}, which might not be the most informative ones and result in low-quality solutions. Several active constraint-based clustering methods have been proposed \cite{basu:sdm04,Mallapragada2008} that outperform random query selection, but most of them are non-iterative. NPU \cite{Xiong2014} is an example of a clustering framework that does satisfy the first two requirements. However, it does not satisfy the third as it requires re-clustering the entire dataset after every few constraints, which becomes prohibitively slow for large datasets.
The approach closest to fulfilling the requirements outlined above is COBRA \cite{COBRA}, a recently introduced method based on the concept of super-instances.
A super-instance is a set of instances that are assumed to belong to the same cluster in the unknown target clustering.
COBRA consists of two steps: it first over-clusters the data using K-means to construct these super-instances and then merges them into clusters based on pairwise constraints.
COBRA was shown to produce high-quality clusterings at fast run times. However, a fixed number of super-instances has to be specified prior to the clustering process.
Using a small number of super-instances results in good high-level clusterings using few queries, but these clusterings cannot be refined as more queries are answered.
If the number of super-instances is large, more fine-grained structure can be found as more queries are answered, but at the cost of lower quality clusterings early on in the clustering process. Hence, more queries are needed before a good result is obtained.
In this work, we introduce COBRAS (for Constraint-based Repeated Aggregation and Splitting), an active clustering system satisfying all the requirements outlined above.
In contrast to COBRA, it does not need a fixed set of super-instances.
Instead, it combines the \textit{bottom-up procedure} of merging super-instances with an incremental \textit{top-down search} for good super-instances.
By doing this it largely mitigates the trade-off present in COBRA: in the beginning the number of super-instances is small which allows getting a reasonable coarse-grained clustering using few queries; as more queries are answered these super-instances are refined, allowing to capture more fine-grained structure.
The remainder of this paper is structured as follows. In the next section, we describe existing work on constraint-based clustering. Next, we discuss COBRAS in more detail and give an algorithmic description. In our experimental section we demonstrate that COBRAS is the most suitable clustering method to be used in the iterative workflow described above, as it produces high-quality clusterings at fast run times.
\section{Related work}
The most common way to develop a constraint-based clustering method is to extend an existing unsupervised method.
One can either adapt the clustering procedure to take the pairwise constraints into account \cite{Wagstaff01constrainedk-means,Rangapuram2012,Wang2014}, or use the existing procedure with a new similarity metric that is learned based on the constraints \cite{xing2002distance,Davis:2007:IML:1273496.1273523}.
Alternatively, one can also modify both the similarity metric and the clustering procedure \cite{Bilenko2004,probframework}.
Traditional constraint-based clustering methods assume that a set of constraints is given, and in practice this set is often obtained by querying random pairs. Basu et al.\ \shortcite{basu:sdm04} introduce active constraint selection and show that selecting a set of informative queries can outperform querying random pairs.
In their method, the entire set of constraints is queried prior to a single run of the constraint-based clustering algorithm.
Xiong et al.\ \shortcite{Xiong2014} introduce NPU, an active selection procedure in which the data is clustered multiple times and each resulting clustering is used to determine which pairs to query next based on the principle of uncertainty sampling.
COBS \cite{COBS} is quite different from the previously discussed methods: it uses pairwise constraints to select and tune an unsupervised clustering algorithm. COBS generates a large set of clusterings by varying the hyperparameters of several unsupervised clustering algorithms, and selects the clustering from the resulting set that satisfies the most pairwise constraints.
COBRA \cite{COBRA} is a recently proposed method that is inherently active: deciding which pairs to query is part of its clustering procedure. First, COBRA uses K-means to cluster the data into super-instances. The number of super-instances, denoted as $N_S$, is an input parameter. Initially, each of the super-instances forms its own cluster. In the second step COBRA repeatedly queries the pairwise relation between the closest pair of (partial) clusters between which the relation is not known yet and merges clusters if necessary, until all relations between clusters are known.
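The merging step can be sketched as follows on 1-D toy data. This is a simplified illustration rather than the actual COBRA implementation: super-instances are reduced to their medoids, and cluster distances are taken as the minimum medoid distance.

```python
def cobra_merge(medoids, query):
    """Simplified sketch of COBRA's merging step on 1-D toy data.
    medoids[i] is the medoid of super-instance i; query(a, b) answers
    whether medoids a and b share a cluster in the target clustering."""
    clusters = [{i} for i in range(len(medoids))]  # one cluster per super-instance
    cannot = set()                                 # cannot-links between super-instances

    def known_cl(c1, c2):
        return any(frozenset((a, b)) in cannot for a in c1 for b in c2)

    def dist(c1, c2):
        return min(abs(medoids[a] - medoids[b]) for a in c1 for b in c2)

    while True:
        # closest pair of clusters whose pairwise relation is not known yet
        unknown = [(dist(c1, c2), i, j)
                   for i, c1 in enumerate(clusters)
                   for j, c2 in enumerate(clusters)
                   if i < j and not known_cl(c1, c2)]
        if not unknown:                 # all relations known: done
            return clusters
        _, i, j = min(unknown)
        a, b = min(((a, b) for a in clusters[i] for b in clusters[j]),
                   key=lambda p: abs(medoids[p[0]] - medoids[p[1]]))
        if query(a, b):                 # must-link: merge the two clusters
            clusters[i] |= clusters[j]
            del clusters[j]
        else:                           # cannot-link: remember it
            cannot.add(frozenset((a, b)))

# Toy run: four super-instances, target clustering {0,1} vs. {2,3}
labels = [0, 0, 1, 1]
result = cobra_merge([0.0, 1.0, 5.0, 6.0],
                     lambda a, b: labels[a] == labels[b])
```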
\begin{figure}[ht]
\centering
\centering
\includegraphics[width=1.0\linewidth]{example}
\caption{A: The starting situation of COBRA with 10 super-instances (COBRA-10). Initially, each cluster consists of a single super-instance. B: final result of COBRA-10. Each of the clusters is represented as a set of super-instances. The final clustering is not correct, as $S_7$ contains instances from two actual clusters. C: The initial solution of COBRA-100, which is highly over-clustered. D: the final clustering of COBRA-100. E: the clustering produced by COBRAS after 5 queries. The first two queries are used to determine the initial splitting level (which was $k=2$), the next three for determining the pairwise relation between the first three super-instances. F: after 36 queries, COBRAS produces the correct clustering. }
\label{fig:example}
\end{figure}%
The results of COBRA were found to be strongly dependent on the number of super-instances $N_S$. A small value of $N_S$ has the advantage that it gives clusterings of reasonably good quality given few pairwise queries, but lacks the possibility of obtaining more fine-grained results. A large value of $N_S$ typically results in higher quality clusterings, but these clusterings only appear after answering a relatively high number of queries. This is illustrated for a toy dataset in Figure \ref{fig:example}a-d: 10 super-instances are not enough to get a correct clustering (the incorrectly clustered part is marked with a red ellipse), 100 super-instances result in a correct clustering, but only after 103 queries are answered. Note that this problem cannot be solved by tuning $N_S$: there is no value of $N_S$ that produces both a decent high-level clustering given few constraints, and a fine-grained clustering given more constraints.
\section{COBRAS: Constraint-based Repeated Aggregation and Splitting}
The key problem when running COBRA with a small $N_S$ is that super-instances often contain instances from different clusters (this happens e.g.\ for $S_7$ in Figure \ref{fig:example}a). COBRA cannot assign all of these instances to the correct clusters, as each super-instance is treated as a single unit.
COBRAS solves this problem by allowing super-instances to be refined. It starts with a single super-instance that contains all instances, and repeatedly refines this super-instance until a satisfactory clustering is obtained. More specifically, each iteration of COBRAS consists of two steps. First, it removes the largest super-instance from its cluster and splits it into two new super-instances. Second, it determines the relation of these two new super-instances to the existing clusters by running the merging step of COBRA on the new set of super-instances. By using this procedure of refining super-instances, COBRAS uses a small number of super-instances in the beginning of the clustering process, and a larger number as more queries are answered. This allows it to both produce reasonable high-level clusterings early on, and more fine-grained ones later. Panels (e) and (f) in Figure \ref{fig:example} illustrate the initial and final clusterings produced by COBRAS, and show that it indeed performs well for both a small and larger number of queries.
\subsection{Algorithmic description}
COBRAS is described in Algorithm \ref{algo:cobras}. In this algorithm a super-instance $S$ is a set of instances, a cluster $C$ is a set of super-instances, and a clustering $\mathcal{C}$ is a set of clusters. COBRAS starts with a single super-instance $S$ that contains all instances, which constitutes the only cluster $C$ (line 2). As long as the user keeps answering queries, COBRAS keeps refining the set of super-instances and the corresponding clustering (lines 3-10). In each iteration it selects the largest super-instance (line 4) and determines an appropriate splitting level for it (line 5; this is detailed in Algorithm \ref{algo:determine_k}, which is discussed in the next subsection). COBRAS splits the selected super-instance $S_{split}$ into $k$ new super-instances by clustering the instances in $S_{split}$ using K-means (line 6). $S_{split}$ is then removed from its original cluster (line 7), and a new cluster is added for each of the newly created super-instances (line 8). Finally, in the last step of each iteration, COBRA is used to determine the pairwise relations between the newly added clusters (each consisting of a single super-instance) and the existing ones.
\begin{algorithm}[ht]
\caption{COBRAS}
\label{algo:cobras}
\begin{algorithmic}[1]
\REQUIRE $\mathcal{X}$: a dataset\\
\ \ \ \ \ \ \ \ \ \ \ $q$: a query limit
\ENSURE $\mathcal{C}$: a clustering of $\mathcal{X}$
\STATE $ML = \emptyset, CL = \emptyset$
\STATE $S = \{\mathcal{X}\}, C = \{S\}, \mathcal{C} = \{C\}$
\WHILE {$|ML| + |CL| < q$}
\STATE {$S_{split}, C_{origin} = \argmax_{S \in C, C \in \mathcal{C}}{|S|}$}
\STATE {\small {$k, ML, CL = \texttt{determineSplitLevel}(S_{split}, ML, CL)$} }
\STATE {$S_{new_1}, \ldots, S_{new_k} = \texttt{K-means}(S_{split},k)$}
\STATE {$C_{origin} = C_{origin} \setminus \{S_{split}\}$}
\STATE {$\mathcal{C} = \mathcal{C} \cup \{ \{ S_{new_1} \}, \ldots, \{ S_{new_k} \} \}$}
\STATE {$\mathcal{C}, ML, CL = \texttt{COBRA}(\mathcal{C},ML,CL)$}
\ENDWHILE
\RETURN $\mathcal{C}$
\end{algorithmic}
\end{algorithm}
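To make the control flow concrete, here is a minimal Python sketch of Algorithms \ref{algo:cobras} and \ref{algo:determine_k}. It is an illustrative simplification, not the authors' implementation: the oracle answers queries from ground-truth labels instead of asking a user, constraints are not cached across iterations, and the merging step queries cluster medoid pairs in index order rather than by distance.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def medoid(X, idx):
    """Index (into X) of the point minimising summed squared distance within idx."""
    P = X[idx]
    D = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    return idx[int(D.sum(axis=1).argmin())]

class Oracle:
    """Answers pairwise queries from ground-truth labels; a stand-in for the user."""
    def __init__(self, y):
        self.y, self.queries = y, 0

    def must_link(self, i, j):
        self.queries += 1
        return self.y[i] == self.y[j]

def split(X, idx, k):
    """Split a super-instance into k new ones with K-means."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X[idx])
    return [idx[km.labels_ == c] for c in range(k) if (km.labels_ == c).any()]

def determine_split_level(X, idx, oracle):
    """Algorithm 2, simplified: split in two until a must-link is obtained."""
    d = 0
    while len(idx) > 1:
        s1, s2 = split(X, idx, 2)
        if oracle.must_link(medoid(X, s1), medoid(X, s2)):
            return 2 ** max(d, 1)        # split into at least two super-instances
        idx = s1 if rng.random() < 0.5 else s2
        d += 1
    return 2

def cobras(X, oracle, budget):
    # a clustering is a list of clusters; a cluster is a list of super-instances
    clusters = [[np.arange(len(X))]]
    while oracle.queries < budget:
        ci, si = max(((a, b) for a, c in enumerate(clusters) for b in range(len(c))),
                     key=lambda p: len(clusters[p[0]][p[1]]))
        s = clusters[ci].pop(si)                       # largest super-instance
        k = determine_split_level(X, s, oracle)
        clusters = [c for c in clusters if c] + [[n] for n in split(X, s, min(k, len(s)))]
        merged = True                                  # COBRA-style merging step
        while merged and oracle.queries < budget:
            merged = False
            for a in range(len(clusters)):
                for b in range(a + 1, len(clusters)):
                    ma = medoid(X, np.concatenate(clusters[a]))
                    mb = medoid(X, np.concatenate(clusters[b]))
                    if oracle.must_link(ma, mb):
                        clusters[a] += clusters.pop(b)
                        merged = True
                        break
                if merged:
                    break
    labels = np.empty(len(X), dtype=int)
    for ci, c in enumerate(clusters):
        labels[np.concatenate(c)] = ci
    return labels
```

On two well separated blobs this sketch recovers the ground-truth partition within a handful of queries; the real system additionally reuses all previously obtained constraints and orders merge queries by cluster distance.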
\begin{algorithm}[ht]
\caption{\emph{determineSplitLevel}}
\label{algo:determine_k}
\begin{algorithmic}[1]
\REQUIRE $\mathcal{S}$: a set of instances that is to be split\\
\ \ \ \ \ \ \ \ \ \ \ $ML$: the must-link constraints obtained so far\\
\ \ \ \ \ \ \ \ \ \ \ $CL$: the cannot-link constraints obtained so far
\ENSURE $k$: an appropriate splitting level\\
\ \ \ \ \ \ \ \ $ML$: the updated must-link constraints\\
\ \ \ \ \ \ \ \ $CL$: the updated cannot-link constraints
\STATE $d = 0$
\WHILE {no must-link obtained}
\STATE{$\mathcal{S}_1$, $\mathcal{S}_2$ = k-means($\mathcal{S}$,2)}
\IF{must-link(medoid($\mathcal{S}_1$), medoid($\mathcal{S}_2$))}
\STATE {add (medoid($\mathcal{S}_1$), medoid($\mathcal{S}_2$)) to $ML$}
\STATE {$d = \max(d,1)$}
\RETURN $2^d$, $ML$, $CL$
\ELSE
\STATE {add (medoid($\mathcal{S}_1$), medoid($\mathcal{S}_2$)) to $CL$}
\STATE $\mathcal{S} =$ pick between $\mathcal{S}_1$ and $\mathcal{S}_2$ randomly
\STATE {$d\texttt{++}$}
\ENDIF
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\begin{figure*}[ht]
\centering
\centering
\includegraphics[width=0.752\linewidth]{attempt5}
\caption{(a) COBRAS decides to split the initial super-instance $S_1$ into 4 new ones, as discussed in Section \ref{sec:splittinglevel}. (b) $S_1$ was removed from the set of clusters (rendering it empty), and a new cluster was added for each of the newly created super-instances. This is the starting situation for the first bottom-up COBRA run. (c) The results of the COBRA run. COBRA queries the relations between the closest pair of clusters between which the relation is not known yet, until all relations are known. For example, it started by querying the relation between $S_4$ and $S_5$ by querying the pairwise relationship between their medoids. This resulted in a must-link constraint, and in the merging of $C_4$ and $C_5$ into $C_6$. (d) At the beginning of a new iteration, COBRAS selects the largest super-instance ($S_3$ in this case) to refine it further. In this case, $S_3$ is split into 2 new super-instances. (e) This results in two new clusters containing these super-instances before the start of a new COBRA merging step. (f) The pairwise relation between these two new clusters and the existing ones is determined using COBRA, leading to a new clustering. This illustrates the situation at the end of the second COBRAS iteration. }
\label{fig:algo_steps}
\end{figure*}%
\subsection{Determining the splitting level $k$}
\label{sec:splittinglevel}
Algorithm \ref{algo:determine_k} describes the procedure that COBRAS uses to determine the splitting level $k$ for a super-instance $S$. The procedure searches for a $k$ such that the new super-instances will be pure w.r.t.\ the unknown target clustering. To check the purity of $S$, COBRAS splits it into two new (temporary) super-instances, and queries the relation between their medoids. Obtaining a must-link constraint indicates that the super-instance was pure, and that we are at an appropriate level of granularity. Obtaining a cannot-link constraint, on the other hand, indicates that the original super-instance contained instances that should be in different clusters, and that further splitting is warranted. In this case, one of the two new super-instances is split further, until a must-link constraint is obtained.
We illustrate this procedure based on the example given in Figure \ref{fig:algo_steps}(a), in which the splitting level for the initial super-instance $S_1$ is determined. We first split the super-instance into two new sets of instances, in this case $S_1$ is split into $S_{t1}$ and $S_{t2}$. The $t$ subscript indicates that these sets of instances are only temporary, i.e.\ they are only created in the process of finding the splitting level and are afterwards discarded. Next, the pairwise relation between the two newly created super-instances is queried. In this case, querying the relation between $S_{t1}$ and $S_{t2}$ results in a cannot-link constraint (indicated by constraint 1 in Figure \ref{fig:algo_steps}(a)). This cannot-link constraint indicates that it is indeed useful to split $S_1$ into smaller super-instances, as it contains elements that should be in different clusters. We repeat this process for $S_{t1}$, which in this case results in a cannot-link constraint between $S_{t3}$ and $S_{t4}$. Again, this indicates the usefulness of further splitting $S_{t1}$ into smaller super-instances. We finally repeat the process for $S_{t3}$, and obtain a must-link constraint between $S_{t5}$ and $S_{t6}$. This indicates that $S_{t3}$ was at an appropriate level of granularity. The algorithm assumes that this level of granularity is also appropriate for the remainder of the instances in $S_1$ (and not only for the single branch that we followed to $S_{t3}$), and determines $k=4$ to be an appropriate splitting level ($S_{t3}$ was at the second level of the tree, hence we split into $2^2$ new super-instances). Line 6 in Algorithm \ref{algo:determine_k} ensures that a super-instance is split into at least two new ones.
The remainder of Figure \ref{fig:algo_steps} illustrates two iterations of the entire COBRAS clustering process.
\section{Experimental evaluation}
\label{sec:results}
In this section, we discuss the experimental evaluation of COBRAS.
\subsection*{Existing Constraint-based Algorithms}
We compare COBRAS to the following state-of-the-art constraint-based clustering algorithms:
\begin{itemize}
\item \textbf{COBS} \cite{COBS} uses constraints to select and tune an unsupervised clustering algorithm. We use the active variant in our experiments.
\item \textbf{COBRA} \cite{COBRA} is the algorithm that is most related to COBRAS, as discussed earlier in this paper. We run it with 10, 25 and 50 super-instances.
\item \textbf{NPU} \cite{Xiong2014} is an active constraint selection framework that can be used with any non-active constraint-based clustering method. It constructs neighborhoods of points that are connected by must-link constraints, with cannot-link constraints between the different neighborhoods. It repeatedly selects the most informative instance, and queries its neighborhood membership by means of pairwise constraints. NPU is an iterative method: after neighborhood membership is determined, the data is re-clustered and the obtained clustering is used to determine the next pairwise queries. NPU can be used with any constraint-based clustering algorithm, and we use it with the following two:
\begin{itemize}
\item \textbf{MPCKMeans} \cite{Bilenko2004} is an extension of K-means that exploits constraints through metric learning and a modified objective. We use the implementation in the WekaUT package\footnote{{\scriptsize \url{http://www.cs.utexas.edu/users/ml/risc/code/}}}.
\item \textbf{COSC} (for Constrained Spectral Clustering) \cite{Rangapuram2012} is an extension of spectral clustering optimizing for a modified objective. We use the code provided by the authors\footnote{{\scriptsize \url{http://www.ml.uni-saarland.de/code/cosc/cosc.htm}}}.
\end{itemize}
\end{itemize}
COSC-NPU and MPCKMeans-NPU require the number of clusters $K$ to be known prior to clustering, and in our experiments this true $K$ (as indicated by the class labels) is given to these algorithms. Note that in practice this number is often not known in advance, and that this constitutes a clear advantage of these algorithms over the others in the experimental evaluation.
\subsection*{Datasets}
We use the same datasets as those used in the evaluation of COBRA \cite{COBRA}. These include the following 15 UCI datasets: iris, wine, dermatology, hepatitis, glass, ionosphere, optdigits389, ecoli, breast-cancer-wisconsin, segmentation, column\_2C, parkinsons, spambase, sonar and yeast. These were selected because of their repeated use in earlier work on constraint-based clustering (for example, \cite{Bilenko2004,Xiong2014}). Optdigits389 contains digits 3, 8 and 9 of the UCI handwritten digits data \cite{Bilenko2004,Mallapragada2008}. Duplicate instances are removed from all of these datasets, and the data is normalized between 0 and 1. Further, we use the CMU faces dataset, containing 624 images of 20 persons with different poses and expressions, with and without sunglasses. This dataset has four natural clustering targets: identity, pose, expression and sunglasses. A 2048-value feature vector is extracted for each of the images using the pre-trained Inception-V3 network \cite{inceptionnet}. Further, two clustering tasks are included for the 20 newsgroups text dataset: clustering documents from 3 newsgroups on related topics (the target clusters are comp.graphics, comp.os.ms-windows and comp.windows.x, as in \cite{basu:sdm04,Mallapragada2008}), and clustering documents from 3 newsgroups on very different topics (alt.atheism, rec.sport.baseball and sci.space, as in \cite{basu:sdm04,Mallapragada2008}). To extract features from the text documents we apply tf-idf, followed by latent semantic indexing (as in \cite{Mallapragada2008}) to reduce the dimensionality to 10. In summary, 18 datasets are used in our experiments, for which 21 clustering tasks are defined (15 UCI datasets, 4 target clusterings for the CMU faces data, and 2 subsets of the newsgroups data).
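The preprocessing steps described above can be sketched with scikit-learn as follows. This is an illustration under assumptions: the helper names are ours, and the exact tokenisation and LSI settings used in the original experiments are not specified beyond what the text states.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

def preprocess_uci(X):
    """Remove duplicate instances and rescale every feature to [0, 1]."""
    X = np.unique(X, axis=0)                       # drop duplicate rows
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

def preprocess_newsgroups(documents, dim=10):
    """tf-idf followed by latent semantic indexing (truncated SVD)."""
    tfidf = TfidfVectorizer().fit_transform(documents)
    return TruncatedSVD(n_components=dim, random_state=0).fit_transform(tfidf)
```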
\begin{figure*}
\centering
\subfigure[]{\label{fig:rank_comparison}\includegraphics[width=0.48\textwidth]{aligned_rank}}
\hspace*{25px}
\subfigure[]{\label{fig:ari_comparison}\includegraphics[width=0.41\textwidth]{average_ari}}
\caption{(a) Aligned rank for all methods over all clustering tasks. (b) Average ARI of COBRAS and the COBRA instantiations over all clustering tasks. The average ARI of the other competitors is omitted to avoid cluttering the figure.}
\end{figure*}
\subsection*{Experimental methodology}
We perform 10-fold cross-validation 10 times (similar to e.g.\ \cite{basu:sdm04} and \cite{Mallapragada2008}), and report averaged results. The algorithms always cluster the full dataset, but can only query the relations between pairs of instances that are both in the training set. The quality of the resulting clustering is evaluated by computing the Adjusted Rand Index (ARI, \cite{ARI}) on the instances in the test set only. The ARI measures the similarity between the produced clustering and the ground truth indicated by the class labels: a score of 0 corresponds to a random clustering, and a score of 1 to a clustering that is identical to the ground truth. The score for an algorithm on a particular dataset is given by the average ARI over the 10 repetitions of 10-fold cross-validation.
We make sure that COBRAS and COBRA do not query any test instances during clustering by only using training instances to compute the medoids of the super-instances. For NPU, pairs involving an instance from the test set are simply excluded from selection.
In each while iteration of COBRAS, a super-instance is split and COBRA is run on the resulting new set of clusters. If the user stops answering pairwise queries before the end of the COBRA run (which happens frequently in the experiments, as we consider the intermediate clusterings after each query), COBRAS returns the clustering as it was at the beginning of the while iteration. The returned clustering is only updated after the COBRA run, which prevents us from returning clusterings for which the merging step was not finished yet. This holds for all COBRA runs except the first one, as in that case there is no prior clustering at the beginning of the iteration.
COBRA is not able to handle an unlimited number of pairwise queries: once all the relations between super-instances are known, the clustering process naturally stops. In our experiments, we assume that COBRA simply keeps returning its final clustering after this point, which allows us to compare all algorithms for the same number of pairwise queries.
\subsection*{Clustering quality}
Figure \ref{fig:rank_comparison} shows the aligned ranks for COBRAS and all competitors over all clustering tasks\footnote{For COSC-NPU we set a timeout of 24h for each run of 250 queries on spambase; typically it only got to around 40 queries in that time. We consider the last clustering produced within 24h to be the final one, and use it for all remaining queries when producing the graphs. }. In contrast to the regular rank, the aligned rank \cite{alignedrank,GARCIA20102044} takes the relative differences between algorithms on individual datasets into account. The first step in computing it is to calculate, for each dataset, the average ARI achieved over all algorithms. Then, for each algorithm, the difference between its ARI and this average is calculated, and the resulting $kn$ differences ($k$ the number of algorithms, $n$ the number of datasets) are jointly ranked from $1$ to $kn$. The aligned rank of an algorithm is then the average of the ranks of its entries.
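The aligned rank computation described above can be sketched in a few lines; this is a small illustrative helper of our own, using scipy.stats.rankdata, which assigns averaged ranks to ties.

```python
import numpy as np
from scipy.stats import rankdata

def aligned_ranks(ari):
    """Aligned ranks from an (n_datasets x k_algorithms) matrix of ARI scores.

    Lower is better: rank 1 goes to the entry that most exceeds its
    dataset's average ARI over all algorithms."""
    diffs = ari - ari.mean(axis=1, keepdims=True)          # per-dataset centring
    ranks = rankdata(-diffs.ravel()).reshape(diffs.shape)  # jointly rank 1..kn
    return ranks.mean(axis=0)                              # average rank per algorithm
```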
The figure shows that COBRAS is clearly the best choice for the iterative clustering scenario that was outlined at the beginning of the paper. Only if the user knows in advance how many queries she will answer and does not care about the quality of intermediate results is COBRA the preferred algorithm. In particular, COBRA-10 outperforms COBRAS for 10 queries, COBRA-25 for 25 queries, and COBRA-50 for 50 queries. In many practical applications of clustering, however, the query budget is not known in advance and the quality of intermediate clusterings does matter. None of the COBRA instantiations are suited for this scenario. For example, COBRA-10 performs well for a very small number of queries, but lacks the ability to keep refining clusterings which results in a large performance gap with COBRAS for larger numbers of queries. COBRA-50, on the other hand, is clearly outperformed by COBRAS for a small number of pairwise queries (as it starts from heavily over-clustered solutions).
A similar argument can be made in the comparison of COBRAS to the other competitors (MPCKMeans-NPU, COSC-NPU and COBS). Furthermore, it is important to realize that COSC-NPU and MPCKMeans-NPU are given the true number of clusters prior to clustering, which explains their good relative performance for a very small number of queries.
\begin{figure}[ht]
\centering
\centering
\includegraphics[width=1.0\linewidth]{speedups}
\caption{Ratio of the run time of COBRAS to that of its competitors over all clustering tasks. For COBRA we only include the run times of COBRA-25 to avoid cluttering the graph; the run times of COBRA-10 and COBRA-50 are also typically lower than those of all other methods.}
\label{fig:runtimes}
\end{figure}%
Figure \ref{fig:ari_comparison} shows the average ARI of COBRAS and the COBRA instantiations over all clustering tasks. We only show the comparison to the COBRA instantiations to avoid having an overly cluttered figure. The aligned rank comparison has the advantage over the average ARI that it does not depend on direct comparisons between ARIs on different datasets, but the disadvantage that it does not reflect the actual magnitude of the differences in ARI between the methods. Figure \ref{fig:ari_comparison} confirms the conclusion drawn from Figure \ref{fig:rank_comparison}: it shows that COBRAS is preferable to each individual COBRA instantiation. It also puts the performance gap between COBRA-25 and COBRAS that Figure \ref{fig:rank_comparison} suggests for 25 constraints into perspective: the aligned rank indicates that COBRA-25 systematically outperforms COBRAS for 25 queries, but Figure \ref{fig:ari_comparison} shows that it only does so by a small amount, as the average difference in ARI is small.
\subsubsection*{Conclusion on clustering quality}
From Figures \ref{fig:rank_comparison} and \ref{fig:ari_comparison} we conclude that COBRAS is the best choice in terms of clustering quality for the iterative clustering process that was outlined at the beginning of the paper.
\subsection*{Runtime}
Figure \ref{fig:runtimes} shows the ratio of the run time of COBRAS to the run times of its competitors for all clustering tasks after performing 100 queries. It illustrates that COBRA is typically the fastest algorithm, which is not surprising as it requires only a single run of K-means. COBRAS requires multiple K-means runs, rendering it slower than COBRA. Compared to the other competitors, however, COBRAS is still fast. In particular, for the largest dataset COBRAS still requires less than 10 seconds for the 100 queries, meaning that runtime will not be a limitation for the user while answering queries. MPCKMeans-NPU is significantly slower since it relies on a more expensive constraint-based variant of K-means, and requires re-clustering the entire dataset after every few queries. In contrast, COBRAS only re-clusters the parts of the dataset that are being refined. The high runtimes of COBS are explained by the fact that it generates a large number of unsupervised clusterings prior to querying the user. Once this set of clusterings is generated, however, selecting a clustering is fast, which means that COBS should not be disregarded for interactive settings.
\section{Conclusion}
We introduce COBRAS, an active clustering system based on the concept of super-instances. With its top-down strategy of constructing and refining super-instances it aims to produce high-quality clusterings in all stages of the clustering process. COBRAS is fast, since its most expensive step consists of performing K-means clustering on ever smaller parts of the dataset. Our experiments confirm that COBRAS compares favorably to competitors in terms of both clustering quality and runtime, making it the preferred solution for constraint-based clustering in many settings.
\section*{Acknowledgements}
Toon Van Craenendonck is supported by the Agency for Innovation by Science and Technology in Flanders (IWT). This research is supported by Research Fund KU Leuven (GOA/13/010) and FWO (G079416N). This work has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No [694980] SYNTH: Synthesising Inductive Data Models).
\bibliographystyle{named}
\section{Introduction}
We prove that any two birational Mori fibre spaces are connected by a sequence of
elementary transformations, known as Sarkisov links:
\begin{theorem}\label{t_main} Suppose that $\phi\colon\map X.S.$ and $\psi\colon\map
Y.T.$ are two Mori fibre spaces with $\mathbb{Q}$-factorial terminal singularities.
Then $X$ and $Y$ are birational if and only if they are related by a sequence of Sarkisov
links.
\end{theorem}
Recall the following:
\begin{conjecture}\label{c_mori} Let $(Z,\Phi)$ be a kawamata log terminal pair.
Then we may run $f\colon\rmap Z.X.$ the $(K_Z+\Phi)$-MMP such that either
\begin{enumerate}
\item $(X,\Delta)$ is a log terminal model, that is $K_X+\Delta$ is nef, or
\item there is a Mori fibre space $\phi\colon\map X.S.$, that is $\rho(X/S)=1$ and
$-(K_X+\Delta)$ is $\phi$-ample,
\end{enumerate}
where $\Delta=f_*\Phi$.
\end{conjecture}
We will refer to the log terminal model $X$ and the Mori fibre space $\phi$ as the output
of the $(K_Z+\Phi)$-MMP. If $h\colon\rmap Z.X.$ is any sequence of divisorial
contractions and flips for the $(K_Z+\Phi)$-MMP then we say that $h$ is the result of
running the $(K_Z+\Phi)$-MMP. In other words if $h$ is the result of running the
$(K_Z+\Phi)$-MMP then $X$ does not have to be either a log terminal model or a Mori fibre
space.
By \cite{BCHM06} the only unknown case of \eqref{c_mori} is when $K_Z+\Phi$ is
pseudo-effective but neither $\Phi$ nor $K_Z+\Phi$ is big. Unfortunately the output is
not unique in either case. We will call two Mori fibre spaces $\phi\colon\map X.S.$ and
$\psi\colon\map Y.T.$ \textit{Sarkisov related} if $X$ and $Y$ are outcomes of running the
$(K_Z+\Phi)$-MMP, for the same $\mathbb{Q}$-factorial kawamata log terminal pair
$(Z,\Phi)$. This defines a category, which we call the Sarkisov category, whose objects
are Mori fibre spaces and whose morphisms are the induced birational maps $\rmap X.Y.$
between two Sarkisov related Mori fibre spaces. Our goal is to show that every morphism
in this category is a product of Sarkisov links. In particular a Sarkisov link should
connect two Sarkisov related Mori fibre spaces.
\begin{theorem}\label{t_sarkisov} If $\phi\colon\map X.S.$ and $\psi\colon\map Y.T.$
are two Sarkisov related Mori fibre spaces then the induced birational map
$\sigma\colon\rmap X.Y.$ is a composition of Sarkisov links.
\end{theorem}
Note that if $X$ and $Y$ are birational and have $\mathbb{Q}$-factorial terminal
singularities, then $\phi$ and $\psi$ are automatically the outcome of running the
$K_Z$-MMP for some projective variety $Z$, so that \eqref{t_main} is an easy
consequence of \eqref{t_sarkisov}.
It is proved in \cite{BCHM06} that the number of log terminal models is finite if either $\Phi$
or $K_Z+\Phi$ is big, and it is conjectured that in general the number of log terminal models
is finite up to birational automorphisms. Moreover Kawamata, see \cite{Kawamata07}, has
proved:
\begin{theorem}\label{t_minimal} Suppose that $\sigma\colon\rmap X.Y.$ is a birational
map between two $\mathbb{Q}$-factorial varieties which is an isomorphism in codimension
one.
If $K_X+\Delta$ and $K_Y+\Gamma$ are kawamata log terminal and nef and $\Gamma$ is the
strict transform of $\Delta$ then $\sigma$ is the composition of $(K_X+\Delta)$-flops.
\end{theorem}
Note that if the pairs $(X,\Delta)$ and $(Y,\Gamma)$ both have $\mathbb{Q}$-factorial
terminal singularities then the birational map $\sigma$ is automatically an isomorphism in
codimension one.
We recall the definition of a Sarkisov link. Suppose that $\phi\colon\map X.S.$ and
$\psi\colon\map Y.T.$ are two Mori fibre spaces. A Sarkisov link $\sigma\colon\rmap X.Y.$
between $\phi$ and $\psi$ is one of four types:
\begin{align*}
&\begin{diagram}
& \text{I} & \\
X' & \rDashto & Y \\
\dTo & & \dTo_{\psi}\\
X & & T \\
\dTo^{\phi} & \ldTo & \\
S & &
\end{diagram}
&&
\begin{diagram}
& \text{II} & \\
X' & \rDashto & Y' \\
\dTo & & \dTo \\
X & & Y \\
\dTo^{\phi}& & \dTo_{\psi} \\
S & = & T
\end{diagram}
&&
\begin{diagram}
& \text{III} & \\
X & \rDashto & Y'\\
\dTo^{\phi} & & \dTo \\
S & & Y \\
& \rdTo & \dTo_{\psi} \\
& & T
\end{diagram}
&&
\begin{diagram}
& & \text{IV} & & \\
X & & \rDashto & & Y \\
\dTo^{\phi} & & & & \dTo_{\psi} \\
S & & & & T \\
& \rdTo & & \ldTo & \\
& & R. & &
\end{diagram}
\end{align*}
There is a divisor $\Xi$ on the space $L$ on the top left (be it $L=X$ or $L=X'$) such
that $K_L+\Xi$ is kawamata log terminal and numerically trivial over the base (be it $S$,
$T$, or $R$). Every arrow which is not horizontal is an extremal contraction. If the
target is $X$ or $Y$ it is a divisorial contraction. The horizontal dotted arrows are
compositions of $(K_L+\Xi)$-flops. Links of type IV break into two types, IV${}_m$ and
IV${}_s$. Denote by $s\colon\map S.R.$ and $t\colon\map T.R.$ the two maps to the base. For
a link of type IV${}_m$ both $s$ and $t$ are Mori fibre spaces. For a link
of type IV${}_s$ both $s$ and $t$ are small birational contractions. In this case $R$ is
not $\mathbb{Q}$-factorial; for every other type of link all varieties are
$\mathbb{Q}$-factorial. Note that there is an induced birational map $\sigma\colon\rmap
X.Y.$ but not necessarily a rational map between $S$ and $T$.
The Sarkisov program has its origin in the birational classification of ruled surfaces.
A link of type I corresponds to the diagram
\begin{diagram}
\Hz 1. & = & \Hz 1. \\
\dTo & & \dTo_{\psi}\\
\pr 2. & & \pr 1. \\
\dTo^{\phi} & \ldTo & \\
\text{pt.} & &
\end{diagram}
Note that there are no flops for surfaces so the top horizontal map is always the
identity. The top vertical arrow on the left is the blow up of a point in $\pr 2.$ and
$\psi$ is the natural map given by the pencil of lines. A link of type III is the same
diagram, reflected in a vertical line,
\begin{diagram}
\Hz 1. & = & \Hz 1. \\
\dTo^{\phi} & & \dTo \\
\pr 1. & & \pr 2. \\
& \rdTo & \dTo_{\psi} \\
& & \text{pt.}
\end{diagram}
A link of type II corresponds to the classical elementary transformation between ruled
surfaces,
\begin{diagram}
X' & = & Y' \\
\dTo & & \dTo \\
X & & Y \\
\dTo^{\phi} & & \dTo_{\psi} \\
S & = & T. \\
\end{diagram}
The birational map $\map X'.X.$ blows up a point in one fibre and the birational map $\map
Y'.Y.$ blows down the old fibre. Finally a link of type IV corresponds to switching
between the two ways to project $\pr 1.\times \pr 1.$ down to $\pr 1.$,
$$
\begin{diagram}
\pr 1.\times\pr 1. & &= & & \pr 1.\times \pr 1. \\
\dTo^{\phi} & & & & \dTo_{\psi} \\
\pr 1. & & & & \pr 1. \\
& \rdTo & & \ldTo & \\
& & \text{pt.} & &
\end{diagram}
$$
It is a fun exercise to factor the classical Cremona transformation $\sigma\colon\rmap
{\pr 2.}.{\pr 2.}.$, $\map [X:Y:Z].[X^{-1}:Y^{-1}:Z^{-1}].$ into a product of Sarkisov
links. Indeed one can use the Sarkisov program to give a very clean proof that the
birational automorphism group of $\pr 2.$ is generated by this birational map $\sigma$ and
$\operatorname{PGL}(3)$. More generally the Sarkisov program can sometimes be used to calculate the
birational automorphism group of Mori fibre spaces, especially Fano varieties. With this
said, note that the following problem seems quite hard:
\begin{question}\label{q_three} What are generators of the birational automorphism group
of $\pr 3.$?
\end{question}
Note that a link of type IV${}_s$ only occurs in dimension four or more. For an example
of a link of type IV${}_s$ simply take $\rmap S.T.$ to be a flop between threefolds, let
$\map S.R.$ be the base of the flop and let $X=S\times \pr 1.$ and $Y=T\times \pr 1.$ with
the obvious maps down to $S$ and $T$. It is conceivable that one can factor a link of
type IV${}_s$ into links of type I and III. However given any positive integer $k$ it is
easy to write down examples of links of type IV which cannot be factored into fewer than
$k$ links of type I, II or III.
Let us now turn to a description of the proof of \eqref{t_sarkisov}. The proof is based
on the original ideas of the Sarkisov program (as explained by Corti and Reid
\cite{Corti95}; see also \cite{BM97a}). We are given a birational map $\sigma\colon\rmap
X.Y.$ and the objective is to factor $\sigma$ into a product of Sarkisov links. In the
original proof one keeps track of some subtle invariants and the idea is to prove:
\begin{itemize}
\item the first Sarkisov link $\sigma_1$ exists,
\item if one chooses $\sigma_1$ appropriately then the invariants improve, and
\item the invariants cannot increase infinitely often.
\end{itemize}
Sarkisov links arise naturally if one plays the $2$-ray game. If the relative Picard
number is two then there are only two rays to contract and this gives a natural way to
order the steps of the minimal model program. One interesting feature of the original
proof is that it is a little tricky to prove the existence of the first Sarkisov link,
even if we assume existence and termination of flips. In the original proof one picks a
linear system on $Y$ and pulls it back to $X$. There are then three invariants to keep
track of; the singularities of the linear system on $X$, as measured by the canonical
threshold, the number of divisors of log discrepancy one (after rescaling to the canonical
threshold) and the pseudo-effective threshold. Even for threefolds it is very hard to
establish that these invariants satisfy the ascending chain condition.
Our approach is quite different. We do not consider any linear systems, nor do we try to
keep track of any invariants. Instead we use one of the main results of \cite{BCHM06},
namely finiteness of ample models for kawamata log terminal pairs $(Z,A+B)$. Here $A$ is
a fixed ample $\mathbb{Q}$-divisor and $B$ ranges over a finite dimensional affine space
of Weil divisors. The closure of the set of divisors $B$ with the same ample model is a
disjoint union of finitely many polytopes and the union of all of these polytopes
corresponds to divisors in the effective cone.
Now if the space of Weil divisors spans the N\'eron-Severi group then one can read off
which ample model admits a contraction to another ample model from the combinatorics of
the polytopes, \eqref{t_polytope}. Further this property is preserved on taking a general
two dimensional slice, \eqref{c_polytope}. Sarkisov links then correspond to points on
the boundary of the effective cone which are contained in more than two polytopes,
\eqref{t_two}. To obtain the required factorisation it suffices to simply traverse the
boundary. In other words instead of considering the closed cone of curves and playing the
$2$-ray game we look at the dual picture of Weil divisors and we work inside a carefully
chosen two dimensional affine space. The details of the correct choice of underlying
affine space are contained in \S 4.
To illustrate some of these ideas, let us consider an easy case. Let $S$ be the blow up
of $\pr 2.$ at two points. Then $S$ is a toric surface and there are five invariant
divisors: the two exceptional divisors, $E_1$ and $E_2$, the strict transform $L$ of the
line which meets $E_1$ and $E_2$, and finally the strict transforms $L_1$ and $L_2$ of two
lines, where $L_i$ meets $E_i$. Then the cone of effective
divisors is spanned by the invariant divisors and according to \cite{HK00} the polytopes
we are looking for are obtained by considering the chamber decomposition given by the
invariant divisors. Since $L_1=L+E_2$ and $L_2=L+E_1$ the cone of effective divisors is
spanned by $L$, $E_1$ and $E_2$. Since $-K_S$ is ample, we can pick an ample
$\mathbb{Q}$-divisor $A$ such that $K_S+A \sim_{\mathbb{Q}} 0$ and $K_S+A+E_1+E_2+L$ is
divisorially log terminal. Let $V$ be the real vector space of Weil divisors spanned by
$E_1$, $E_2$ and $L$. In this case projecting $\mathcal{L}_A(V)$ from the origin we get
\includegraphics[bbllx=146,bblly=500,bburx=415,bbury=662]{pic1}
We have labelled each polytope by the corresponding model. Imagine going around the
boundary clockwise, starting just before the point corresponding to $L$. The point $L$
corresponds to a Sarkisov link of type IV${}_m$, the point $L+E_2$ a link of type II, the
point $E_2$ a link of type III, the point $E_1$ a link of type I and the point $L+E_1$
another link of type II.
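To make the example concrete, let us record the divisor classes involved (a standard computation; here $H$ denotes the pullback of a line and we take $L_i$ to be the strict transform of the line through the point blown up to $E_i$):
$$
L = H - E_1 - E_2, \qquad L_1 = H - E_1 = L + E_2, \qquad L_2 = H - E_2 = L + E_1,
$$
so that
$$
-K_S = 3H - E_1 - E_2 = L + L_1 + L_2 + E_1 + E_2,
$$
the sum of the five invariant divisors, as expected for a toric surface.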
\section{Notation and conventions}
\label{s-notation}
We work over the field of complex numbers $\mathbb{C}$. An $\mathbb{R}$-Cartier divisor
$D$ on a variety $X$ is \textit{nef} if $D\cdot C\geq 0$ for any curve $C\subset X$. We
say that two $\mathbb{R}$-divisors $D_1$, $D_2$ are $\mathbb{R}$-linearly equivalent
($D_1\sim _{\mathbb{R}} D_2$) if $D_1-D_2=\sum r_i(f_i)$ where $r_i\in \mathbb{R}$ and
$f_i$ are rational functions on $X$. We say that an $\mathbb{R}$-Weil divisor $D$ is
\textit{big} if we may find an ample $\mathbb{R}$-divisor $A$ and an $\mathbb{R}$-divisor
$B\geq 0$, such that $D \sim _{\mathbb{R}} A+B$. A divisor $D$ is
\textit{pseudo-effective}, if for any ample divisor $A$ and any rational number $\epsilon
>0$, the divisor $D+\epsilon A$ is big. If $A$ is a $\mathbb Q$-divisor, we say that $A$
is a \textit{general ample $\mathbb{Q}$-divisor} if $A$ is ample and there is a
sufficiently divisible integer $m>0$ such that $mA$ is very ample and $mA\in |mA|$ is very
general.
A \textit{log pair} $(X,\Delta)$ is a normal variety $X$ and an $\mathbb{R}$-Weil divisor
$\Delta\geq 0$ such that $K_X+\Delta$ is $\mathbb{R}$-Cartier. We say that a log pair
$(X,\Delta)$ is \textit{log smooth}, if $X$ is smooth and the support of $\Delta$ is a
divisor with global normal crossings. A projective birational morphism $g\colon \map
Y.X.$ is a \textit{log resolution} of the pair $(X,\Delta )$ if $Y$ is smooth and the
strict transform $\Gamma$ of $\Delta$ union the exceptional set $E$ of $g$ is a divisor
with normal crossings support. If we write
$$
K_Y+\Gamma+E=g^*(K_X +\Delta)+\sum a_iE_i,
$$
where $E=\sum E_i$ is the sum of the exceptional divisors then the log discrepancy
$a(E_i,X,\Delta)$ of $E_i$ is $a_i$. By convention the log discrepancy of any divisor $B$
which is not exceptional is $1-b$, where $b$ is the coefficient of $B$ in $\Delta$. The
log discrepancy $a$ is the infimum of the log discrepancy of any divisor.
A pair $(X,\Delta)$ is \textit{kawamata log terminal} if $a>0$. We say that the pair
$(X,\Delta)$ is \textit{log canonical} if $a\geq 0$. We say that the pair $(X,\Delta)$ is
\textit{terminal} if the log discrepancy of any exceptional divisor is greater than one.
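As a simple illustration of these definitions (a standard example, not taken from the text above), let $g\colon \map Y.X.$ be the blow up of a point on a smooth surface $X$ with exceptional curve $E$. If $\Delta=0$ then $K_Y=g^*K_X+E$, so
$$
K_Y+E=g^*K_X+2E
$$
and $a(E,X,0)=2$, consistent with the standard fact that smooth varieties are terminal. If instead $\Delta=cC$ for a curve $C$ which is smooth at the blown up point, then $g^*C$ is the strict transform of $C$ plus $E$, whence
$$
K_Y+c\,g^{-1}_*C+E=g^*(K_X+cC)+(2-c)E,
$$
so $a(E,X,cC)=2-c$.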
We say that a rational map $\phi\colon\rmap X.Y.$ is a \textit{rational contraction} if
there is a resolution $p\colon\map W.X.$ and $q\colon\map W.Y.$ of $\phi$ such that $p$
and $q$ are contraction morphisms and $p$ is birational. We say that $\phi$ is a
\textit{birational contraction} if $q$ is in addition birational and every $p$-exceptional
divisor is $q$-exceptional. If in addition $\phi^{-1}$ is also a birational contraction,
we say that $\phi$ is a \textit{small birational map}. We refer the reader to
\cite{BCHM06} for the definitions of negative and non-positive rational contractions
and of log terminal models.
If $\mathcal{C}$ is a closed convex set in a finite dimensional real vector space then
$\mathcal{C}^*$ denotes the dual convex set in the dual real vector space.
\section{The combinatorics of ample models}
We fix some notation. $Z$ is a smooth projective variety, $V$ is a finite dimensional
affine subspace of the real vector space $\operatorname{WDiv}_{\mathbb{R}}(Z)$ of Weil divisors on $Z$,
which is defined over the rationals, and $A\geq 0$ is an ample $\mathbb{Q}$-divisor on
$Z$. We suppose that there is an element $\Theta_0$ of $\mathcal{L}_A(V)$ such that
$K_Z+\Theta_0$ is big and kawamata log terminal.
We recall some definitions and notation from \cite{BCHM06}:
\begin{definition}\label{d_ample} Let $D$ be an $\mathbb{R}$-divisor on $Z$.
We say that $f\colon\rmap Z.X.$ is the \textbf{ample model} of $D$, if $f$ is a rational
contraction, $X$ is a normal projective variety and there is an ample divisor $H$ on $X$
such that if $p\colon\map W.Z.$ and $q\colon\map W.X.$ resolve $f$ and we write
$p^*D\sim_{\mathbb{R}}q^*H+E$, then $E\geq 0$ and for every $B\sim_{\mathbb{R}} p^*D$ if
$B\geq 0$ then $B\geq E$.
\end{definition}
Note that if $f$ is birational then $q_*E=0$.
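For orientation, we note two elementary examples, which are immediate from the definition: if $D$ is ample then the identity map of $Z$ is the ample model of $D$, as we may take $p=q=\operatorname{id}$, $H=D$ and $E=0$. More generally, if $D$ is a semiample $\mathbb{Q}$-divisor, so that $D=h^*H$ for a contraction $h\colon\map Z.X.$ and an ample $\mathbb{Q}$-divisor $H$ on $X$, then $h$ is the ample model of $D$, again with $E=0$.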
\begin{definition}\label{d_cones} Let
\begin{align*}
V_A &=\{\,\Theta \,|\, \Theta=A+B, B\in V \,\}, \\
\mathcal{L}_A(V)&=\{\,\Theta=A+B\in V_A \,|\, \text{$K_Z+\Theta$ is log canonical and $B\geq 0$} \,\}, \\
\mathcal{E}_A(V) &=\{\,\Theta\in \mathcal{L}_A(V) \,|\, \text{$K_Z+\Theta$ is pseudo-effective} \,\}.
\end{align*}
Given a rational contraction $f\colon\rmap Z.X.$, define
$$
\mathcal{A}_{A,f}(V)=\{\, \Theta\in \mathcal E_A(V) \,|\, \text{$f$ is the ample model of $(Z,\Theta)$} \,\}.
$$
\end{definition}
In addition, let $\mathcal{C}_{A,f}(V)$ denote the closure of $\mathcal{A}_{A,f}(V)$.
\begin{theorem}\label{t_polytope} There are finitely many rational contractions
$f_i\colon\rmap Z.X_i.$, $1\leq i\leq m$, with the following properties:
\begin{enumerate}
\item $\displaystyle{\{\, \mathcal{A}_i=\mathcal{A}_{A,f_i}\,|\, 1\leq i\leq m \,\}}$ is a partition
of $\mathcal{E}_{A}(V)$. $\mathcal{A}_i$ is a finite union of interiors of rational
polytopes. If $f_i$ is birational then $\mathcal{C}_i=\mathcal{C}_{A,f_i}$ is a rational
polytope.
\item If $1\leq i\leq m$ and $1\leq j\leq m$ are two indices such that
$\mathcal{A}_j\cap\mathcal{C}_i\neq\varnothing$ then there is a contraction morphism
$f_{i,j}\colon\map X_i.X_j.$ and a factorisation $f_j=f_{i,j}\circ f_i$.
\end{enumerate}
Now suppose in addition that $V$ spans the N\'eron-Severi group of $Z$.
\begin{enumerate}
\setcounter{enumi}{2}
\item Pick $1\leq i\leq m$ such that a connected component $\mathcal{C}$ of
$\mathcal{C}_i$ intersects the interior of $\mathcal{L}_A(V)$. The following are
equivalent:
\begin{itemize}
\item $\mathcal{C}$ spans $V$.
\item If $\Theta\in \mathcal{A}_i\cap \mathcal{C}$ then $f_i$ is a log terminal model of
$K_Z+\Theta$.
\item $f_i$ is birational and $X_i$ is $\mathbb{Q}$-factorial.
\end{itemize}
\item If $1\leq i\leq m$ and $1\leq j \leq m$ are two indices such that $\mathcal{C}_i$
spans $V$ and $\Theta$ is a general point of $\mathcal{A}_j\cap \mathcal{C}_i$ which is
also a point of the interior of $\mathcal{L}_A(V)$ then $\mathcal{C}_i$ and $\ccone
X_i/X_j.^*\times \mathbb{R}^k$ are locally isomorphic in a neighbourhood of $\Theta$, for
some $k\geq 0$. Further the relative Picard number of $f_{i,j}\colon\map X_i.X_j.$ is
equal to the difference in the dimensions of $\mathcal{C}_i$ and $\mathcal{C}_j\cap
\mathcal{C}_i$.
\end{enumerate}
\end{theorem}
\begin{proof} (1) is proved in \cite{BCHM06}.
Pick $\Theta\in\mathcal{A}_j\cap\mathcal{C}_i$ and $\Theta'\in\mathcal{A}_i$ so that
$$
\Theta_t=\Theta+t(\Theta'-\Theta)\in \mathcal{A}_i \qquad \text{if} \qquad t\in (0,1].
$$
By finiteness of log terminal models, cf. \cite{BCHM06}, we may find a positive constant
$\delta>0$ and a birational contraction $f\colon\rmap Z.X.$ which is a log terminal model
of $K_Z+\Theta_t$ for $t\in (0,\delta]$. Replacing $\Theta'=\Theta_1$ by
$\Theta_{\delta}$ we may assume that $\delta=1$. If we set
$$
\Delta_t=f_*\Theta_t,
$$
then $K_X+\Delta_t$ is kawamata log terminal and nef, and $f$ is
$(K_Z+\Theta_t)$-non-positive for $t\in [0,1]$. As $\Delta_t$ is big the base point free theorem implies
that $K_X+\Delta_t$ is semiample and so there is an induced contraction morphism
$g_i\colon\map X.X_i.$ together with ample divisors $H_{1/2}$ and $H_1$ such that
$$
K_X+\Delta_{1/2}=g_i^*H_{1/2} \qquad \text{and} \qquad K_X+\Delta_1=g_i^*H_1.
$$
If we set
$$
H_t=(2t-1)H_1+2(1-t)H_{1/2},
$$
then
\begin{align*}
K_X+\Delta_t &= (2t-1)(K_X+\Delta_1)+2(1-t)(K_X+\Delta_{1/2}) \\
&= (2t-1)g_i^*H_1+2(1-t)g_i^*H_{1/2} \\
&= g_i^*H_t,
\end{align*}
for all $t\in [0,1]$. As $K_X+\Delta_0$ is semiample, it follows that $H_0$ is semiample
and the associated contraction $f_{i,j}\colon\map X_i.X_j.$ is the required morphism.
This is (2).
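The convex combination above is justified by the fact that $t\mapsto\Theta_t$ is affine, and hence so is $t\mapsto\Delta_t=f_*\Theta_t$: the coefficients satisfy $(2t-1)+2(1-t)=1$ and $(2t-1)\cdot 1+2(1-t)\cdot\tfrac12=t$, so that
$$
\Delta_t=(2t-1)\Delta_1+2(1-t)\Delta_{1/2}, \qquad t\in[0,1],
$$
which is the identity used in the display computing $K_X+\Delta_t$.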
Now suppose that $V$ spans the N\'eron-Severi group of $Z$. Suppose that $\mathcal{C}$
spans $V$. Pick $\Theta$ in the interior of $\mathcal{C}\cap \mathcal{A}_i$. Let
$f\colon\rmap Z.X.$ be a log terminal model of $K_Z+\Theta$. It is proved in
\cite{BCHM06} that $f=f_j$ for some index $1\leq j\leq m$ and that $\Theta\in
\mathcal{C}_j$. But then $\mathcal{A}_i\cap \mathcal{A}_j\neq\varnothing$ so
that $i=j$.
If $f_i$ is a log terminal model of $K_Z+\Theta$ then $f_i$ is birational and $X_i$ is
$\mathbb{Q}$-factorial.
Finally suppose that $f_i$ is birational and $X_i$ is $\mathbb{Q}$-factorial. Fix
$\Theta\in \mathcal{A}_i$. Pick any divisor $B\in V$ such that $-B$ is ample,
$K_{X_i}+f_{i*}(\Theta+B)$ is ample and $\Theta+B\in \mathcal{L}_A(V)$. Then $f_i$ is
$(K_Z+\Theta+B)$-negative and so $\Theta+B\in \mathcal{A}_i$. But then $\mathcal{C}_i$
spans $V$. This is (3).
We now prove (4). Let $f=f_i$ and $X=X_i$. As $\mathcal{C}_i$ spans $V$, (3) implies
that $f$ is birational and $X$ is $\mathbb{Q}$-factorial so that $f$ is a
$\mathbb{Q}$-factorial weak log canonical model of $K_Z+\Theta$. Suppose that $\llist
E.k.$ are the divisors contracted by $f$. Pick $B_i\in V$ numerically equivalent to
$E_i$. If we let $E_0=\sum E_i$ and $B_0=\sum B_i$ then $E_0$ and $B_0$ are numerically
equivalent. As $\Theta$ belongs to the interior of $\mathcal{L}_A(V)$ we may find
$\delta>0$ such that $K_Z+\Theta+\delta E_0$ and $K_Z+\Theta+\delta B_0$ are both kawamata
log terminal. Then $f$ is $(K_Z+\Theta+\delta E_0)$-negative and so $f$ is a log terminal
model of $K_Z+\Theta+\delta E_0$ and $f_j$ is the ample model of $K_Z+\Theta+\delta E_0$.
But then $f$ is also a log terminal model of $K_Z+\Theta+\delta B_0$ and $f_j$ is also the
ample model of $K_Z+\Theta+\delta B_0$. In particular $\Theta+\delta B_0\in
\mathcal{A}_j\cap \mathcal{C}_i$. As we are supposing that $\Theta$ is general in
$\mathcal{A}_j\cap \mathcal{C}_i$, in fact $f$ must be a log terminal model of
$K_Z+\Theta$. In particular $f$ is $(K_Z+\Theta)$-negative.
Pick $\epsilon>0$ such that if $\Xi\in V$ and $\|\Xi-\Theta\|<\epsilon$ then $\Xi$ belongs
to the interior of $\mathcal{L}_A(V)$ and $f$ is $(K_Z+\Xi)$-negative. Then the condition
that $\Xi\in \mathcal{C}_i$ is simply the condition that $K_X+\Delta=f_*(K_Z+\Xi)$ is nef.
Let $W$ be the affine subspace of $\operatorname{WDiv}_{\mathbb{R}}(X)$ given by pushing forward the
elements of $V$ and let
$$
\mathcal{N}=\{\, \Delta\in W \,|\, \text{$K_X+\Delta$ is nef} \,\}.
$$
Given $(\llist a.k.)\in \mathbb{R}^k$ let $B=\sum a_iB_i$ and $E=\sum a_iE_i$. If
$\|B\|<\epsilon$ then, as $\Xi+B$ is numerically equivalent to $\Xi+E$, $K_X+\Delta\in
\mathcal{N}$ if and only if $K_X+\Delta+f_*B\in \mathcal{N}$. In particular
$\mathcal{C}_i$ is locally isomorphic to $\mathcal{N}\times \mathbb{R}^k$.
But since $f_j$ is the ample model of $K_Z+\Theta$, in fact we can choose $\epsilon$
sufficiently small so that $K_X+\Delta$ is nef if and only if $K_X+\Delta$ is nef over
$X_j$, see \S 3 of \cite{BCHM06}. There is a surjective affine linear map from
$W$ to the space of Weil divisors on $X$ modulo numerical equivalence over $X_j$
and this induces an isomorphism
$$
\mathcal{N}\simeq \ccone X/X_j.^*\times \mathbb{R}^l,
$$
in a neighbourhood of $f_*\Theta$.
Note that $K_X+f_*\Theta$ is numerically trivial over $X_j$. As $f_*\Theta$ is big and
$K_X+f_*\Theta$ is kawamata log terminal we may find an ample $\mathbb{Q}$-divisor $A'$
and a divisor $B'\geq 0$ such that
$$
K_X+A'+B' \sim_{\mathbb{R}} K_X+f_*\Theta,
$$
is kawamata log terminal. But then
$$
-(K_X+B') \sim_{\mathbb{R}} -(K_X+f_*\Theta)+A',
$$
is ample over $X_j$. Hence $f_{i,j}\colon\map X.X_j.$ is a Fano fibration and
so by the cone theorem
$$
\rho(X_i/X_j)=\operatorname{dim} \mathcal{N}.
$$
This is (4). \end{proof}
\begin{corollary}\label{c_polytope} If $V$ spans the N\'eron-Severi group of $Z$ then
there is a Zariski dense open subset $U$ of the Grassmannian $G(\alpha,V)$ of real affine
subspaces of dimension $\alpha$ such that if $[W]\in U$ and it is defined over the
rationals then $W$ satisfies (1-4) of \eqref{t_polytope}.
\end{corollary}
\begin{proof} Let $U\subset G(\alpha,V)$ be the set of real affine subspaces $W$ of $V$ of
dimension $\alpha$, which contain no face of any $\mathcal{C}_i$ or $\mathcal{L}_A(V)$.
In particular the interior of $\mathcal{L}_A(W)$ is contained in the interior of
$\mathcal{L}_A(V)$. \eqref{t_polytope} implies that (1-2) always hold for $W$ and (1-4)
hold for $V$ and so (3) and (4) clearly hold for $W\in U$. \end{proof}
From now on in this section we assume that $V$ has dimension two and satisfies (1-4) of
\eqref{t_polytope}.
\begin{lemma}\label{l_easy-cases} Let $f\colon\rmap Z.X.$ and $g\colon\rmap Z.Y.$
be two rational contractions such that $\mathcal{C}_{A,f}$ is two dimensional and
$\mathcal{O}=\mathcal{C}_{A,f}\cap \mathcal{C}_{A,g}$ is one dimensional. Assume that
$\rho(X)\geq \rho(Y)$ and that $\mathcal{O}$ is not contained in the boundary of
$\mathcal{L}_A(V)$. Let $\Theta$ be an interior point of $\mathcal{O}$ and let
$\Delta=f_*\Theta$.
Then there is a rational contraction $\pi\colon\rmap X.Y.$ which factors $g=\pi\circ f$
and either
\begin{enumerate}
\item $\rho(X)=\rho(Y)+1$ and $\pi$ is a $(K_X+\Delta)$-trivial morphism, in which case,
either
\begin{enumerate}
\item $\pi$ is birational and $\mathcal{O}$ is not contained in the boundary of
$\mathcal{E}_A(V)$, in which case, either
\begin{enumerate}
\item $\pi$ is a divisorial contraction and $\mathcal{O}\neq\mathcal{C}_{A,g}$, or
\item $\pi$ is a small contraction and $\mathcal{O}=\mathcal{C}_{A,g}$, or
\end{enumerate}
\item $\pi$ is a Mori fibre space and $\mathcal{O}=\mathcal{C}_{A,g}$ is contained in the
boundary of $\mathcal{E}_A(V)$, or
\end{enumerate}
\item $\rho(X)=\rho(Y)$, in which case, $\pi$ is a $(K_X+\Delta)$-flop and
$\mathcal{O}\neq \mathcal{C}_{A,g}$ is not contained in the boundary of
$\mathcal{E}_A(V)$.
\end{enumerate}
\end{lemma}
\begin{proof} By assumption $f$ is birational and $X$ is $\mathbb{Q}$-factorial. Let
$h\colon\rmap Z.W.$ be the ample model corresponding to $K_Z+\Theta$. Since $\Theta$ is
not a point of the boundary of $\mathcal{L}_A(V)$ if $\Theta$ belongs to the boundary of
$\mathcal{E}_A(V)$ then $K_Z+\Theta$ is not big and so $h$ is not birational. As
$\mathcal{O}$ is a subset of both $\mathcal{C}_{A,f}$ and $\mathcal{C}_{A,g}$ there are
morphisms $p\colon\map X.W.$ and $q\colon\map Y.W.$ of relative Picard number at most one.
There are therefore only two possibilities:
\begin{enumerate}
\item $\rho(X)=\rho(Y)+1$, or
\item $\rho(X)=\rho(Y)$.
\end{enumerate}
Suppose we are in case (1). Then $q$ is the identity and $\pi=p\colon\map X.Y.$ is a
contraction morphism such that $g=\pi\circ f$. Suppose that $\pi$ is birational. Then
$h$ is birational and $\mathcal{O}$ is not contained in the boundary of
$\mathcal{E}_A(V)$. If $\pi$ is divisorial then $Y$ is $\mathbb{Q}$-factorial and so
$\mathcal{O}\neq\mathcal{C}_{A,g}$. If $\pi$ is a small contraction then $Y$ is not
$\mathbb{Q}$-factorial and so $\mathcal{C}_{A,g}=\mathcal{O}$ is one dimensional. If
$\pi$ is a Mori fibre space then $\mathcal{O}$ is contained in the boundary of
$\mathcal{E}_A(V)$ and $\mathcal{O}=\mathcal{C}_{A,g}$.
Now suppose we are in case (2). By what we have already proved $\rho(X/W)=\rho(Y/W)=1$.
$p$ and $q$ are not divisorial contractions as $\mathcal{O}$ is one dimensional. $p$ and
$q$ are not Mori fibre spaces as $\mathcal{O}$ cannot be contained in the boundary of
$\mathcal{E}_A(V)$. Hence $p$ and $q$ are small and the rest is clear. \end{proof}
\begin{lemma}\label{l_negative} Let $f\colon\rmap W.X.$ be a birational contraction between
projective $\mathbb{Q}$-factorial varieties. Suppose that $(W,\Theta)$ and $(W,\Phi)$
are both kawamata log terminal.
If $f$ is the ample model of $K_W+\Theta$ and $\Theta-\Phi$ is ample then $f$ is the
result of running the $(K_W+\Phi)$-MMP.
\end{lemma}
\begin{proof} By assumption we may find an ample divisor $H$ on $W$ such that $K_W+\Phi+H$
is kawamata log terminal and ample and a positive real number $t<1$ such that $tH
\sim_{\mathbb{R}} \Theta-\Phi$. Note that $f$ is the ample model of $K_W+\Phi+tH$. Pick
any $s<t$ sufficiently close to $t$ so that $f$ is $(K_W+\Phi+sH)$-negative and yet $f$ is
still the ample model of $K_W+\Phi+sH$. Then $f$ is the unique log terminal model of
$K_W+\Phi+sH$. In particular if we run the $(K_W+\Phi)$-MMP with scaling of $H$ then,
when the value of the scalar is $s$, the induced rational map is $f$. \end{proof}
We now adopt some more notation for the rest of this section. Let $\Theta=A+B$ be a point
of the boundary of $\mathcal{E}_A(V)$ in the interior of $\mathcal{L}_A(V)$. Enumerate
$\llist \mathcal{T}.k.$ the polytopes $\mathcal{C}_i$ of dimension two which contain
$\Theta$. Possibly re-ordering we may assume that the intersections $\mathcal{O}_0$ and
$\mathcal{O}_k$ of $\mathcal{T}_1$ and $\mathcal{T}_k$ with the boundary of
$\mathcal{E}_A(V)$ and $\mathcal{O}_i=\mathcal{T}_i\cap \mathcal{T}_{i+1}$ are all one
dimensional. Let $f_i\colon\rmap Z.X_i.$ be the rational contractions associated to
$\mathcal{T}_i$ and $g_i\colon\rmap Z.S_i.$ be the rational contractions associated to
$\mathcal{O}_i$. Set $f=f_1\colon\rmap Z.X.$, $g=f_k\colon\rmap Z.Y.$, $X'=X_2$,
$Y'=X_{k-1}$. Let $\phi\colon\map X.S=S_0.$, $\psi\colon\map Y.T=S_k.$ be the induced
morphisms and let $\rmap Z.R.$ be the ample model of $K_Z+\Theta$.
\includegraphics[bbllx=90,bblly=550,bburx=487,bbury=666]{pic2}
\begin{theorem}\label{t_two} Suppose $\Phi$ is any divisor such that $K_Z+\Phi$ is
kawamata log terminal and $\Theta-\Phi$ is ample.
Then $\phi$ and $\psi$ are two Mori fibre spaces which are outputs of the $(K_Z+\Phi)$-MMP
which are connected by a Sarkisov link if $\Theta$ is contained in more than two
polytopes.
\end{theorem}
\begin{proof} We assume for simplicity of notation that $k\geq 3$. The case $k\leq 2$ is
similar and we omit it. The incidence relations between the corresponding polytopes yield
a commutative heptagon,
$$
\begin{diagram}
X' & & \rDashto & & Y' \\
\dDashto^p & & & & \dDashto_q \\
X & & & & Y \\
\dTo^{\phi} & & & & \dTo_{\psi} \\
S & & & & T \\
& \rdTo(2,2)_s & & \ldTo(2,2)_t & \\
& & R & &
\end{diagram}
$$
where $p$ and $q$ are birational maps. $\phi$ and $\psi$ are Mori fibre spaces by
\eqref{l_easy-cases}. Pick $\Theta_1$ and $\Theta_k$ in the interior of $\mathcal{T}_1$
and $\mathcal{T}_k$ sufficiently close to $\Theta$ so that $\Theta_1-\Phi$ and
$\Theta_k-\Phi$ are ample. As $X$ and $Y$ are $\mathbb{Q}$-factorial, \eqref{l_negative}
implies that $\phi$ and $\psi$ are possible outcomes of the $(K_Z+\Phi)$-MMP. Let
$\Delta=f_*\Theta$. Then $K_X+\Delta$ is numerically trivial over $R$.
Note that there are contraction morphisms $\map X_i.R.$ and that $\rho(X_i/R)\leq 2$. If
$\rho(X_i/R)=1$ then $\map X_i.R.$ is a Mori fibre space. By \eqref{t_polytope} there is a
facet of $\mathcal{T}_i$ which is contained in the boundary of $\mathcal{E}_A(V)$ and so
$i=1$ or $k$. Thus $\rmap X_i.X_{i+1}.$ is a flop, $1<i<k-1$. Since $\rho(X'/R)=2$ it
follows that either $p$ is a divisorial contraction and $s$ is the identity or $p$ is a
flop and $s$ is not the identity. We have a similar dichotomy for $q\colon\rmap Y'.Y.$
and $t\colon\map T.R.$.
There are then four cases. If $s$ and $t$ are the identity then $p$ and $q$ are
divisorial extractions and we have a link of type II.
If $s$ is the identity and $t$ is not then $p$ is a divisorial extraction and $q$ is a
flop and we have a link of type I. Similarly if $t$ is the identity and $s$ is not then
$q$ is a divisorial extraction and $p$ is a flop and we have a link of type III.
Finally suppose neither $s$ nor $t$ is the identity. Then both $p$ and $q$ are flops.
Suppose that $s$ is a divisorial contraction. Let $F$ be the divisor contracted by $s$
and let $E$ be its inverse image in $X$. Since $\phi$ has relative Picard number one
$\phi^*(F)=mE$, for some positive integer $m$. Then $K_X+\Delta+\delta E$ is kawamata log
terminal for any $\delta>0$ sufficiently small and $E=\mathbf{B}(K_X+\Delta+\delta E/R)$.
If we run the $(K_X+\Delta+\delta E)$-MMP over $R$ then we end with a birational
contraction $\rmap X.W.$, which is a Mori fibre space over $R$. Since $\rho(X/R)=2$,
$W=Y$ and we have a link of type III, a contradiction. Similarly $t$ is never a
divisorial contraction. If $s$ is a Mori fibre space then $R$ is $\mathbb{Q}$-factorial
and so $t$ must be a Mori fibre space as well. This is a link of type IV${}_m$. If $s$
is small then $R$ is not $\mathbb{Q}$-factorial and so $t$ is small as well. Thus we have
a link of type IV${}_s$. \end{proof}
\section{Proof of \eqref{t_sarkisov} }
\begin{lemma}\label{l_perturb} Let $\phi\colon\map X.S.$ and $\psi\colon\map Y.T.$ be two
Sarkisov related Mori fibre spaces corresponding to two $\mathbb{Q}$-factorial kawamata
log terminal projective varieties $(X,\Delta)$ and $(Y,\Gamma)$.
Then we may find a smooth projective variety $Z$, two birational contractions
$f\colon\rmap Z.X.$ and $g\colon\rmap Z.Y.$, a kawamata log terminal pair $(Z,\Phi)$, an
ample $\mathbb{Q}$-divisor $A$ on $Z$ and a two dimensional rational affine subspace $V$
of $\operatorname{WDiv}_{\mathbb{R}}(Z)$ such that
\begin{enumerate}
\item if $\Theta\in \mathcal{L}_A(V)$ then $\Theta-\Phi$ is ample,
\item $\mathcal{A}_{A,\phi\circ f}$ and $\mathcal{A}_{A,\psi\circ g}$ are not
contained in the boundary of $\mathcal{L}_A(V)$,
\item $V$ satisfies (1-4) of \eqref{t_polytope},
\item $\mathcal{C}_{A,f}$ and $\mathcal{C}_{A,g}$ are two dimensional, and
\item $\mathcal{C}_{A,\phi\circ f}$ and $\mathcal{C}_{A,\psi\circ g}$ are one
dimensional.
\end{enumerate}
\end{lemma}
\begin{proof} By assumption we may find a $\mathbb{Q}$-factorial kawamata log terminal
pair $(Z,\Phi)$ such that $f\colon\rmap Z.X.$ and $g\colon\rmap Z.Y.$ are both outcomes of
the $(K_Z+\Phi)$-MMP.
Let $p\colon\map W.Z.$ be any log resolution of $(Z,\Phi)$ which resolves the
indeterminacy of $f$ and $g$. We may write
$$
K_W+\Psi=p^*(K_Z+\Phi)+E',
$$
where $E'\geq 0$ and $\Psi\geq 0$ have no common components, $E'$ is exceptional and
$p_*\Psi=\Phi$. Pick $-E$ ample over $Z$ with support equal to the full exceptional locus
such that $K_W+\Psi+E$ is kawamata log terminal. As $p$ is $(K_W+\Psi+E)$-negative,
$K_Z+\Phi$ is kawamata log terminal and $Z$ is $\mathbb{Q}$-factorial, the
$(K_W+\Psi+E)$-MMP over $Z$ terminates with the pair $(Z,\Phi)$ by \eqref{l_negative}.
Replacing $(Z,\Phi)$ with $(W,\Psi+E)$, we may assume that $(Z,\Phi)$ is log smooth and
$f$ and $g$ are morphisms.
Pick general ample $\mathbb{Q}$-divisors $A, \llist H.k.$ on $Z$ such that $\llist H.k.$
generate the N\'eron-Severi group of $Z$. Let
$$
H=A+\alist H.+.k..
$$
Pick sufficiently ample divisors $C$ on $S$ and $D$ on $T$ such that
$$
-(K_X+\Delta)+\phi^*C \qquad \text{and} \qquad -(K_Y+\Gamma)+\psi^*D,
$$
are both ample. Pick a rational number $0<\delta<1$ such that
$$
-(K_X+\Delta+\delta f_*H)+\phi^*C \qquad \text{and} \qquad -(K_Y+\Gamma+\delta g_*H)+\psi^*D,
$$
are both ample and $K_Z+\Phi+\delta H$ is both $f$ and $g$-negative. Replacing $H$ by
$\delta H$ we may assume that $\delta=1$. Now pick a $\mathbb{Q}$-divisor $\Phi_0\leq
\Phi$ such that $A+(\Phi_0-\Phi)$,
$$
-(K_X+f_*\Phi_0+ f_*H)+\phi^*C \quad \text{and} \quad -(K_Y+g_*\Phi_0+ g_*H)+\psi^*D,
$$
are all ample and $K_Z+\Phi_0+ H$ is both $f$ and $g$-negative.
Pick general ample $\mathbb{Q}$-divisors $F_1\geq 0$ and $G_1\geq 0$ such that
$$
F_1\sim_{\mathbb{Q}} -(K_X+f_*\Phi_0+ f_*H)+\phi^*C \quad \text{and} \quad G_1 \sim_{\mathbb{Q}} -(K_Y+g_*\Phi_0+ g_*H)+\psi^*D.
$$
Then
$$
K_Z+\Phi_0+ H+F+G,
$$
is kawamata log terminal, where $F=f^*F_1$ and $G=g^*G_1$.
Let $V_0$ be the affine subspace of $\operatorname{WDiv}_{\mathbb{R}}(Z)$ which is the translate by
$\Phi_0$ of the vector subspace spanned by $\llist H.k.,F,G$. Suppose that $\Theta=A+B\in
\mathcal{L}_A(V_0)$. Then
$$
\Theta-\Phi=(A+\Phi_0-\Phi)+(B-\Phi_0),
$$
is ample, as $B-\Phi_0$ is nef by definition of $V_0$. Note that $\Phi_0+F+H\in
\mathcal{A}_{A,\phi\circ f}(V_0)$, $\Phi_0+G+H\in \mathcal{A}_{A,\psi\circ g}(V_0)$, and
$f$, respectively $g$, is a weak log canonical model of $K_Z+\Phi_0+F+H$, respectively
$K_Z+\Phi_0+G+H$. \eqref{t_polytope} implies that $V_0$ satisfies (1-4) of
\eqref{t_polytope}.
Since $\llist H.k.$ generate the N\'eron-Severi group of $Z$ we may find constants $\llist
h.k.$ such that $G$ is numerically equivalent to $\sum h_iH_i$. Then $\Phi_0+F+\delta
G+H-\delta(\sum h_iH_i)$ is numerically equivalent to $\Phi_0+F+H$ and if $\delta>0$ is
small enough $\Phi_0+F+\delta G+H-\sum \delta h_iH_i\in \mathcal{L}_A(V_0)$. Thus
$\mathcal{A}_{A,\phi\circ f}(V_0)$ is not contained in the boundary of
$\mathcal{L}_A(V_0)$. Similarly $\mathcal{A}_{A,\psi\circ g}(V_0)$ is not contained in
the boundary of $\mathcal{L}_A(V_0)$. In particular $\mathcal{A}_{A,f}(V_0)$ and
$\mathcal{A}_{A,g}(V_0)$ span $V_0$ and $\mathcal{A}_{A,\phi\circ f}(V_0)$ and
$\mathcal{A}_{A,\psi\circ g}(V_0)$ span affine hyperplanes of $V_0$, since
$\rho(X/S)=\rho(Y/T)=1$.
Let $V_1$ be the translate by $\Phi_0$ of the two dimensional vector space spanned by
$F+H-A$ and $F+G-A$. Let $V$ be a small general perturbation of $V_1$, which is defined
over the rationals. Then (2) holds. (1) holds, as it holds for any two dimensional
subspace of $V_0$, (3) holds by \eqref{c_polytope} and this implies that (4) and (5)
hold. \end{proof}
\begin{proof}[Proof of \eqref{t_sarkisov}] Pick $(Z,\Phi)$, $A$ and $V$ given by
\eqref{l_perturb}. Pick points $\Theta_0\in \mathcal{A}_{A,\phi\circ f}(V)$ and
$\Theta_1\in\mathcal{A}_{A,\psi\circ g}(V)$ belonging to the interior of
$\mathcal{L}_A(V)$. As $V$ is two dimensional, removing $\Theta_0$ and $\Theta_1$ divides
the boundary of $\mathcal{E}_A(V)$ into two parts. The part which consists entirely of
divisors which are not big is contained in the interior of $\mathcal{L}_A(V)$. Consider
tracing this boundary from $\Theta_0$ to $\Theta_1$. Then there are finitely many
points $\Theta_i$, $2\leq i\leq l$, which are contained in more than two polytopes
$\mathcal{C}_{A,f_i}(V)$. \eqref{t_two} implies that for each such point there is a
Sarkisov link $\sigma_i\colon\rmap X_i.Y_i.$ and $\sigma$ is the composition of these
links. \end{proof}
\bibliographystyle{hamsplain}
\section{Introduction}
The utility of deriving galaxy redshifts from photometric data has
long been known (\cite{baum62,koo85,loh86}). Recently,
\cite{connolly95} developed an empirical approach as opposed to the
previous model fitting methods. Utilizing photographic data, they were
able to estimate a redshift out to $z \sim 0.5$ with a measured
dispersion of $\delta z < 0.05$. The uncertainties in that result
were dominated by the photometric errors. Simulations indicated
that with improved photometry the dispersion within the relationship
could be significantly reduced. As a result, we have embarked on an
observational program designed to obtain deep CCD multi-band
photometry in existing redshift fields.
In this paper we present the first results of this survey by extending
the previous approach using CCD photometry. Section two outlines the
basic reduction steps taken in the preparation of the sample for this
work. Section three discusses the actual fitting techniques and
investigates the intrinsic dispersion. We conclude this letter with a
discussion of the ramifications of this work and possible future
directions.
\section{Data}
The photometric data used in this analysis were taken using the PFCCD
camera with the standard {$U, B, R, \&\ I \ $} filters on the Mayall 4m at Kitt Peak
National Observatory on the nights of March 31 $-$ April 3, 1995,
March 18$-$20, 1996, and May 14$-$16, 1996. This camera uses a
$2048^2$ CCD ({\em T2KB}) with a $0.47\arcsec$/pixel
scale and a read noise of $4 e^{-}$/pixel. The gain used for these
observations was $5.4\ e^{-}/ADU$, a value which resulted from a
tradeoff between maximizing the available dynamic range and minimizing
the effects of the charge depletion problem with the CCD
electronics. These observations were chosen to coincide with the
published 14 hour redshift field of the Canada-France Redshift Survey
(CFRS). A complete discussion of the observational program including
an analysis of the custom reduction software is beyond the scope of
this letter and will be published elsewhere (\cite{myThesis}).
All three runs were reduced separately using the standard IRAF
routines. The images were initially debiased and flat fielded using
dome flats. Illumination corrections were created by stacking the
image frames in each filter, with high and low rejection to remove
objects, and then boxcar smoothing the stacked image. The individual
fields were registered to a common position in each filter, and then
stacked using a weighted average. The weights were determined by
measuring the signal to noise for several randomly chosen stars on
each frame. The stacked images for each filter were then registered to
a common image to simplify the photometric measurement in matched
apertures. The final images for each of the three runs were then
registered and stacked again using the weighted average.
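The weighted-average stacking described above can be sketched as follows. This is an illustrative reconstruction, not the survey's pipeline: the text does not state the exact weighting function, so weights proportional to each frame's measured signal-to-noise are assumed, and the function name is ours.

```python
import numpy as np

def weighted_stack(frames, snr):
    """Combine registered frames with a weighted average.

    frames : sequence of 2-D arrays, already registered to a common grid
    snr    : per-frame signal-to-noise measured from reference stars;
             weights proportional to snr are an illustrative assumption
    """
    frames = np.asarray(frames, dtype=float)
    w = np.asarray(snr, dtype=float)
    w = w / w.sum()                         # normalise weights to sum to 1
    return np.tensordot(w, frames, axes=1)  # sum_k w_k * frames[k]
```

With equal S/N this reduces to a straight mean; a frame with three times the S/N of another contributes three times the weight.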
Object detection and photometry were performed using a custom
pipeline. The object detection was done separately in each filter
using the SExtractor package (\cite{bertin96}). The separate
detections in each filter were then matched using a growing annulus
technique in the order of $B$, $U$, $R$, then $I$ and a master
detection list was produced. Using this list, photometry was
determined in both SExtractor's modified Kron (\cite{kron80}) aperture
and a $10 \arcsec$ diameter aperture matched in each band. The actual
photometry algorithm used involved a modification to SExtractor in
both the background calculation and pixel assignment within the
aperture of interest. The detections were photometrically calibrated
using published standards (\cite{landolt92}) which were measured at
similar airmasses to the object frames. The photometric zeropoint was
then adjusted to the AB system (\cite{okeGunn83}) using published
transformations (\cite{fukugita95}): $U_{AB} = U + 0.69$, $B_{AB} = B
- 0.14$, $R_{AB} = R + 0.17$, and $I_{AB} = I + 0.44$.
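Since these zeropoint shifts are purely additive, they are trivial to apply in software; the snippet below simply encodes the offsets quoted above (the dictionary and function names are ours):

```python
# Offsets to the AB system quoted in the text (Fukugita et al. values).
AB_OFFSET = {"U": 0.69, "B": -0.14, "R": 0.17, "I": 0.44}

def to_ab(band, mag):
    """Return the AB magnitude for a calibrated magnitude in `band`."""
    return mag + AB_OFFSET[band]
```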
Astrometric transformations were determined from the {\it HST\ } Guide Star
Catalog II (\cite{gsc2}), after which the redshifts in the CFRS 14 hour
field were matched to our
photometric sample. The measured dispersion between the CFRS $I_{AB}$
isophotal magnitudes and our $I_{AB}$ automatic magnitudes was $\sigma
\approx 0.13$ to $I \sim 23$ with no evident systematic deviations
from a linear relationship.
\section{Analysis}
In an effort to minimize the dispersion in our relationship, we
imposed two conditions on the data used in this analysis. First, we
restricted the photometric sample such that all object magnitudes were
below the appropriate magnitude limit at which a typical galaxy had a
measured $1 \sigma_{rms}$ magnitude error of approximately $0.1$
magnitudes. Second, we required that only the most reliable
spectroscopic identifications be incorporated into the fitting
procedure. This involved pruning the CFRS catalog such that only
non-stellar objects with redshifts having a confidence greater than
95\% were retained. This was accomplished by restricting the redshifts
used to the following quality classes: 3,4,8,93,94,98 ({\em cf.\ }
\cite{CFRS2}).
The final sample contained 89 redshifts with the following
distribution: 40 redshifts in the range (0.0, 0.4] and 49 redshifts in
the range (0.4, 0.8]. For these two subsets, we fit a second order
polynomial in {\em U B R I\ }, {\em U B R\ }, and {\em B R I\ } to the measured photometry and
published redshifts. In each region, the degrees of freedom remained a
substantial fraction of the original data (a second order fit in four
variables requires 15 parameters). This technique is a simple approach
designed to quantify the accuracy of our method for estimating
redshifts and is not the optimal parameterization of the topology of
the galaxy distribution, which is the subject of ongoing work.
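The fit itself is ordinary least squares on a design matrix holding every monomial of the magnitudes up to degree two; for four bands this gives the 15 parameters mentioned above (1 constant, 4 linear, and 10 quadratic terms). A sketch with synthetic data (NumPy assumed available; the magnitudes and coefficients are invented for illustration):

```python
import itertools
import numpy as np

def quadratic_design(mags):
    """Design matrix of all degree <= 2 monomials of the columns of `mags`."""
    n_obj, n_band = mags.shape
    cols = [np.ones(n_obj)]                                  # constant term
    cols += [mags[:, i] for i in range(n_band)]              # linear terms
    cols += [mags[:, i] * mags[:, j]                         # quadratic terms
             for i, j in itertools.combinations_with_replacement(range(n_band), 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
mags = rng.uniform(18, 24, size=(200, 4))    # synthetic U, B, R, I magnitudes
X = quadratic_design(mags)                   # 200 x 15 design matrix
true_coeffs = rng.normal(size=X.shape[1])
z = X @ true_coeffs                          # synthetic "redshifts"
coeffs, *_ = np.linalg.lstsq(X, z, rcond=None)
```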
\placetable{TableOne}
The redshift intervals were not chosen randomly. This technique has
been previously shown to be more sensitive to broad spectroscopic
continuum features (primarily the break in the continuum spectra at
around 4000 \AA, which moves between the $B$ and $R$ bands at $z
\approx 0.4$, and between the $R$ and $I$ bands at $z \approx 0.8$) than
to specific absorption/emission features (\cite{connolly95}). This
is clearly demonstrated in Table \ref{TableOne} where the standard
deviation in the redshift range (0.0,0.4] is only slightly higher when
the $I$ band is not included in the fit. On the other hand, when the
$U$ band is excluded from the fit, the standard deviation more than
doubles. This reflects the fact that the $I$ band is sampling the same
flat region of the spectrum as the $R$ band within this redshift
range, and is thereby providing predominantly redundant information to
the fit. In the second redshift range, the continuum break is moving
from the $B$ band into the $R$ band, which is reflected in the lower
significance of the $U$ band information. We also show the expected
intrinsic dispersion in this relationship from simulations using all
four bands ({\em cf.\ } \cite{connolly95}), which clearly shows that our
measured dispersion is completely dominated by the intrinsic scatter
within the relationship.
\placefigure{FigureOne}
The relative importance of the different bands in the individual
redshift intervals reflects the curvature inherent within the
distribution of galaxies in the multi-dimensional magnitude space. In
a given redshift range, the curvature is accurately parameterized by a
second order polynomial. Between redshift intervals, however, the
distribution displays a higher order curvature term ({\em cf.\ } the previous
discussion concerning the continuum break), which requires
the use of a piecewise second-order parameterization. As a result, the
application of these results requires an iterative approach. First, a
third order global photometric redshift relation is used to determine
an approximate redshift. From this initial redshift estimate, the
appropriate second-order relationship can then be used. If the initial
estimate is on the border between two subsets ($z \in [0.35,0.45]$),
both relationships should be applied and the mean of the two results
used.
With the introduction of the four-vector $C = (U,B,R,I)$, the
second-order photometric-redshift relationships can be summarized
in the following manner:
\begin{displaymath}
z = Z_{\alpha} + C \cdot V_{\alpha} + C \cdot M_{\alpha} \cdot C^{T}
\end{displaymath}
where the scalar $Z_{\alpha}$, vector $V_{\alpha}$, and matrix
$M_{\alpha}$ components are listed in Table \ref{TableTwo} for the two
different redshift regimes. The parameters for the third order fit are
listed in Table \ref{TableThree}.
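The two-step application described above is straightforward to code. In the sketch below the coefficient sets are hypothetical placeholders standing in for the entries of Tables \ref{TableTwo} and \ref{TableThree}; only the control flow (global estimate, then regime-specific relation, with averaging near the boundary) follows the text:

```python
def quadratic_z(C, Z, V, M):
    """Evaluate z = Z + C . V + C M C^T for the four-vector C = (U, B, R, I)."""
    linear = sum(c * v for c, v in zip(C, V))
    quad = sum(C[i] * M[i][j] * C[j] for i in range(4) for j in range(4))
    return Z + linear + quad

def estimate_z(C, global_fit, fit_low, fit_high):
    """Iterative estimate: a global third-order relation first, then the
    second-order relation for the indicated redshift regime."""
    z0 = global_fit(C)                       # rough global estimate
    z_low = quadratic_z(C, *fit_low)         # relation for z in (0.0, 0.4]
    z_high = quadratic_z(C, *fit_high)       # relation for z in (0.4, 0.8]
    if 0.35 <= z0 <= 0.45:                   # border region: mean of both
        return 0.5 * (z_low + z_high)
    return z_low if z0 <= 0.4 else z_high
```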
\section{Discussion}
We have shown that using a simple iterative process, redshifts can be
reliably estimated for objects from broadband photometry out to $z
\sim 0.8$. A comparison of our measured dispersion with the published
intrinsic dispersion from simulations (\cite{connolly95}) indicates
that we have approached the inherent scatter within the
photometric-redshift relationship. These simulations provide an
absolute lower limit to the intrinsic scatter, as they only assumed an
evolved (15 Gigayear) SED. Since the additional effects of metallicity,
dust, and stellar histories can only increase the scatter within the
relationship, excluding them preserves this estimate as a lower limit on the
intrinsic scatter within the photometric-redshift relation.
Thus it is quite remarkable that we measure such a small scatter as
compared to the simple, single age, solar metallicity, and dust free
galaxies produced in the simulations. Actual galaxies are clearly more
complex, spanning a wide range of star formation histories, ages,
metallicities, and dust content, all of which would be expected to
significantly increase the measured dispersion. We see that this is
not the case, which leads us to two related conclusions. First, this
technique is extremely dependent on the $4000$ \AA\ break which is
present in nearly all galaxies. Second, metallicity, dust, and age
variations have similar effects in this multidimensional space, albeit
almost orthogonal to the redshift vector (\cite{koo86}). We plan on
improving our understanding of the multidimensional nature of the
observed galaxy population through the use of SED modeling. This will
allow us to quantify the importance of the metallicity, dust, and
different stellar histories and explore any possible degeneracies.
The photometric-redshift estimation technique can be considered to be
the equivalent of a low resolution (4 element) spectrograph. By using
more filters that are increasingly narrow, we increase the spectral
resolution of this technique. Taken to the extreme, however, this
approach will emulate a spectrograph, losing the observing efficiency
that is the primary advantage of this technique. From a comparison
between the dispersion from the three band and the four band quadratic
fits, it is clear that a marginal gain is achieved by adding a fourth
band within a given redshift regime. As a result, we believe that the
benefits achieved by adding more bands to this approach are more than
offset by the loss of observational efficiency. The simulations used
the standard {$U, B, R, \&\ I \ $} filters in order to be reliably compared to our
observations.
We are working to extend this analysis in two principal areas. First,
we are now focusing on improving our understanding of the distribution
of galaxies in this multi-dimensional flux space. This requires the
use of a stratified sampling strategy to obtain redshifts throughout
the photometric sample. These additional redshifts are primarily being
obtained using the Keck telescope within the DEEP collaboration. In
addition, we are incorporating additional physical parameters ({\em e.g.\ }
surface brightness and shape parameters) via {\it HST\ } WFPC2 imaging to
quantify the different morphological tracts within the cumulative
galaxy distribution.
Second, we are extending this work to higher redshifts. For redshifts
below $z \sim 1.2$, we are working to increase the size and
stratification of our redshift sample. This involves increasing our
photometry-redshift catalog through the addition of published
redshifts and our participation in several spectroscopic surveys. In
the redshift region $1.2 \leq z \leq 2.8$, we are adding near-infrared
photometry to our catalog in order to sample the continuum features
our technique requires. Until a large quantity of reliable spectra can
be obtained within this region (the arrival of the blue camera on LRIS
will help alleviate this quandary), we will use our understanding of
the $z < 1.2$ regime and the published high $z$ work of others
(\cite{steidel96}) as boundary conditions. We can then use SED models
to extrapolate into this region, while maintaining the boundary
condition requirements at both redshift ends. As spectra in this area
become available, we will add them into the fitting procedure.
Eventually this work will allow for the estimation of not only the
redshift, but also the spectral type of an object solely from
broadband photometry.
\acknowledgments
First we wish to acknowledge Gyula Szokoly for assistance in obtaining
the data. We also would like to thank the referee for useful comments,
and Barry Lasker, Gretchen Greene, and Brian McLean for allowing us
access to an early version of the GSC II. We also wish to acknowledge
useful discussions with Mark Dickinson, Mark Subbarao, and David
Koo. RJB would like to acknowledge support from the National
Aeronautics and Space Administration Graduate Student Researchers
Program. AJC acknowledges partial support from NASA grant
AR-06394.01-95A. ASZ has been supported by the NASA LTSA program.
\section{Introduction}
The game of Cops and Robbers (defined at the end of this section) is usually
studied in the context of the minimum number of cops needed to have a
winning strategy, or \emph{cop number}. The cop number (written $c(G)$ for a graph $G$) is a challenging
graph parameter for a variety of reasons, and establishing upper bounds for
this parameter is the focus of Meyniel's conjecture: the cop number of a
connected $n$-vertex graph is $O(\sqrt{n}).$ For additional background on
Cops and Robbers and Meyniel's conjecture, see the recent book~\cite{bonato}.
The following elegant upper bound was given in \cite{joret}:
\begin{equation}
c(G)\leq tw(G)/2+1, \label{first}
\end{equation}%
where $tw(G)$ is the treewidth of $G.$ The bound (\ref{first}) is
tight if the graph has small treewidth (up to treewidth $5).$ Further, it
gives a simple proof that outerplanar graphs have cop number at most $2$ (first proved in \cite{clarke8}).
For many families of graphs, however, the bound (\ref{first}) is far from
tight; for example, for a positive integer $n$, a clique $K_{n}$ has treewidth $n-1,$ but is cop-win.
Similarly, Cartesian $n\times n$ grids $P_{n}\square P_{n}$ have cop number $%
2,$ but have treewidth $n.$
In this short note, we give a new bound on the cop number that exploits tree decompositions, and in some cases improves on (\ref{first}). The idea of the proof
of (\ref{first}) is to guard bags and use isometric paths to move cops from one bag to another.
We modify this approach, and our main tool is the notion of a retract, and a retract cover of a graph. See
Theorems~\ref{main1}, \ref{i}, and \ref{main2}. Besides giving the correct bounds for various families (such as
grids, cliques, and $k$-trees), our results give a new approach to bounding the cop number by exploiting properties of tree decompositions.
\subsection{Definitions and notation}
We consider only finite, reflexive, undirected graphs in the paper. For
background on graph theory, the reader is directed to \cite{diestel,west}.
The game of \emph{Cops and Robbers} was independently introduced in~\cite{nw,q} and the cop number was introduced in~\cite{af}. The game is played on a reflexive
graph; that is,
vertices each have at least one loop. Multiple edges are allowed, but make
no difference to the game play, so we always assume there is exactly one
edge between adjacent vertices. There are two players consisting of a set of
\emph{cops} and a single \emph{robber}. The game is played over a sequence
of discrete time-steps or \emph{rounds}, with the cops going first in round $%
0$ and then playing alternate time-steps. The cops and robber occupy
vertices; for simplicity, we often identify the player with the vertex they
occupy. We refer to the set of cops as $C$ and the robber as $R.$ When a
player is ready to move in a round they must move to a neighbouring vertex.
Because of the loops, players can \emph{pass}, or remain on their own
vertex. Observe that any subset of $C$ may move in a given round. The cops
win if after some finite number of rounds, one of them can occupy the same
vertex as the robber (in a reflexive graph, this is equivalent to the cop
landing on the robber). This is called a \emph{capture}. The robber wins if
he can evade capture indefinitely.
If we place a cop at each vertex, then the cops are guaranteed to win.
Therefore, the minimum number of cops required to win in a graph $G$ is a
well-defined positive integer, named the \emph{cop number} (or \emph{%
copnumber}) of the graph $G.$ We write $c(G)$ for the cop number of a graph $%
G$. In the special case $c(G)=1,$ we say $G$ is \emph{cop-win}.
An induced subgraph $H$ of $G$ is a \emph{retract} if there is a
homomorphism $f$ from $G$ onto $H$ so that $f(x)=x$ for $x\in V(H);$ that
is, $f$ is the identity on $H.$ The map $f$ is called a \emph{retraction}.
For example, each isometric path is a retract (as shown first in \cite{af}), as is each clique. Each
retract $H$ with retraction $f$ can be \emph{guarded} by a set of cops in
the following sense: if the robber is on $x$, then the cops play in $H$ as
if the robber were on $f(x)$. If there are a sufficient number of cops to
capture the image of the robber, then, after finitely many rounds, if the
robber entered $H$ he would be immediately caught. We denote the minimum
number of cops needed to guard $H$ by $\mathrm{guard}(H)$. Note this
parameter is well-defined, as each vertex of $H$ can be guarded. In the case
of an isometric path $P$, it was shown in \cite{af} that $\mathrm{guard}(P)=1.$
\section{Tree decompositions}
In a tree decomposition, each vertex of the graph is represented by a
subtree, such that vertices are adjacent only when the corresponding
subtrees intersect. Formally, given a graph $G = (V, E)$, a \textit{tree
decomposition} is a pair $(X, T)$, where $X = \{X_1, \ldots, X_n\}$ is a
family of subsets of $V$ called \emph{bags}, and $T$ is a tree whose vertices are the subsets $%
X_i$, satisfying the following three properties.
\begin{enumerate}
\item $V=\bigcup_{i=1}^nX_i.$ That is, each graph vertex is associated with
at least one tree vertex.
\item For every edge $(v, w)$ in the graph, there is a subset $X_i$ that
contains both $v$ and $w$. That is, vertices are adjacent in $G$ only when
the corresponding subtrees have a vertex in common.
\item If $X_i$, $X_j$ and $X_k$ are nodes, and $X_k$ is on the path from $%
X_i$ to $X_j,$ then $X_i\cap X_j \subseteq X_k$.
\end{enumerate}
Item (3) is equivalent to the fact that for each vertex $x$, the bags containing $x$
form a subtree of $T.$ The \textit{width} of a tree decomposition is the size of its largest set $%
X_i$ minus one. The \textit{treewidth} of a graph $G,$ written $tw(G),$ is
the minimum width among all possible tree decompositions of $G$. For more on treewidth,
see \cite{bod,diestel}.
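The three defining properties are mechanical to verify for a concrete decomposition; a small self-contained check (pure Python; the example graph, a $4$-cycle, is ours for illustration):

```python
from collections import deque

def is_tree_decomposition(vertices, edges, bags, tree_edges):
    """Verify properties (1)-(3); `bags` are sets indexed 0..len(bags)-1 and
    `tree_edges` are pairs of bag indices (T itself is assumed to be a tree)."""
    # (1) every graph vertex lies in at least one bag
    if set(vertices) != set().union(*bags):
        return False
    # (2) every graph edge is contained in some bag
    if not all(any({u, v} <= bag for bag in bags) for u, v in edges):
        return False
    # (3) the bags containing any fixed vertex induce a connected subtree of T
    adj = {i: set() for i in range(len(bags))}
    for a, b in tree_edges:
        adj[a].add(b)
        adj[b].add(a)
    for x in vertices:
        holding = {i for i, bag in enumerate(bags) if x in bag}
        seen, queue = {min(holding)}, deque([min(holding)])
        while queue:
            for j in adj[queue.popleft()] & holding:
                if j not in seen:
                    seen.add(j)
                    queue.append(j)
        if seen != holding:
            return False
    return True

# A 4-cycle 1-2-3-4-1 with a width-2 decomposition into two bags.
C4 = ([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)])
```

For the $4$-cycle, the bags $\{1,2,4\}$ and $\{2,3,4\}$ joined by a single tree edge form a valid decomposition of width $2$.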
Given an induced subgraph $H$ of $G,$ a \emph{cover} of $H$ in $G$, written
$\mathcal{C}_G(H)$, is a set of induced subgraphs $\{H_{i}:i\in I\}$ of $G$ whose
union contains $H$ (note that the subgraphs $H_i$ need not be disjoint). A
\emph{retract cover} of $H$ in $G$ is a cover where each $H_i$ is a retract; we write $\mathcal{C}_{R,G}(H)$ to denote a
retract cover.
See Figure~\ref{zero} for an example.
\begin{figure} [h!]
\begin{center}
\epsfig{figure=zero,width=1.5in,height=1in}
\caption{The subgraph induced by the white and gray vertices forms a retract cover of the subgraph $H$ induced by the white vertices.}\label{zero}
\end{center}
\end{figure}
Define the \emph{retract cover cop number} of $H$ by
\begin{equation*}
\mathrm{rcc}_G(H)=\min_{\mathcal{C}_{R,G}(H)}\sum\limits_{H_{i}\in \mathcal{C}_{R,G}(H)}\mathrm{guard}%
(H_{i}),
\end{equation*}
where the minimum ranges over all retract covers $\mathcal{C}_{R,G}(H)$ of $H$ in $G.$ For example, $\mathrm{rcc}_G(H)=1$ in the
graph in Figure~\ref{zero}, while $\mathrm{rcc}_H(H)=2$. A retract cover which achieves this minimum is called a \emph{minimal retract cover} of $H$. Note that if $\mathrm{rcc}_G(H)$-many cops are available, then after finitely many rounds,
the cops can be positioned so that if the robber entered $H,$ he would be
immediately caught: for each retract in a retract cover of $H,$ the appropriate number of cops guard that retract.
This is an essential observation used to prove the following theorem. For a bag $B$, we use the
notation $\left\langle B\right\rangle$ for the subgraph induced by $B.$
\begin{theorem}
\label{main1}If $G$ is a graph, then%
\begin{equation*}
c(G)\leq 2\min_{T}\{\max_{B\in T}\mathrm{rcc}_G(\left\langle B\right\rangle )\},
\end{equation*}
where the minimum ranges over all tree decompositions $T$ of $G$.
\end{theorem}
For example, if $G$ is the $n\times n$ Cartesian grid $P_{n}\square P_{n}$,
then the following is a tree decomposition of $G$ into isometric paths.
Label the vertices as $(i,j)$, where $1\le i,j \le n.$ For $1\le i \le n-1$ and
$1\le j \le n,$ consider the path $$B_{i,j} = \{(i,k): j \le k \le n\} \cup \{(i + 1, k): 1 \le k \le j \}.$$
See Figure~\ref{one} for an illustration of one such path in the case $n=5$.
\begin{figure} [h!]
\begin{center}
\epsfig{figure=grid,width=1.5in,height=1.5in}
\caption{The path with white vertices is the bag $B_{2,3}.$}\label{one}
\end{center}
\end{figure}
As the retract cover number of an isometric path is $1,$ this tree decomposition and Theorem~\ref{main1} gives an upper bound of $2$
for the cop number of grids (and of course, $2$ is the correct value).
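That each bag $B_{i,j}$ is an isometric path is easy to verify computationally: consecutive vertices of the path must be neighbours in the grid, and the distance along the path between any two of its vertices must equal their distance in $P_n\square P_n$, which is just the Manhattan distance. A short check (pure Python):

```python
def bag_path(i, j, n):
    """B_{i,j} listed in path order: (i, n), ..., (i, j), (i+1, j), ..., (i+1, 1)."""
    return ([(i, k) for k in range(n, j - 1, -1)] +
            [(i + 1, k) for k in range(j, 0, -1)])

def is_isometric_grid_path(path):
    manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    # consecutive vertices are grid neighbours ...
    steps_ok = all(manhattan(a, b) == 1 for a, b in zip(path, path[1:]))
    # ... and distance along the path equals grid distance for every pair
    pairs_ok = all(t - s == manhattan(path[s], path[t])
                   for s in range(len(path)) for t in range(s, len(path)))
    return steps_ok and pairs_ok
```

Running this over all $1 \le i \le n-1$, $1 \le j \le n$ confirms that every bag is an isometric path; for instance, $B_{2,3}$ in the $5\times 5$ grid is the path of Figure~\ref{one}.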
\begin{proof}
Let $m=\min_{T}\{\max_{B\in T}\mathrm{rcc}_G(\left\langle B\right\rangle )\},$ and let $%
T$ be a fixed tree decomposition of $G$ realizing the minimum. Place $2m$ cops arbitrarily in a fixed
bag $B$ of $T$ (any bag will do, or the cops can all move to a bag in the centre of the tree to
speed up capture). We call one team of $m$ cops $X$ and the other $X^{\prime }$. Match each cop $C$ from $X$ with a unique cop $C^{\prime }$ from $%
X^{\prime },$ so that the cops $C$ and $C^{\prime }$ share each other's
positions. (In particular, at this phase of the cops' strategy, there are at least two cops at any position occupied by the cops).
After a finite number of rounds, the cops can position themselves on a minimal retract cover of $\left\langle B\right\rangle $ so
that $\left\langle B\right\rangle $ is guarded. Hence, $R$ cannot enter $B.$
Let $B^{\prime }$ be the unique bag adjacent to $B$ in $T$ which is on a shortest path connecting the
bag $B$ to a bag containing the robber. (Note that since the set
of bags containing the robber is a subtree of $T$, the robber is not necessarily in a unique bag. However, as $T$
is a tree, there is a unique bag neighbouring $B$ which has shortest distance to the subtree containing the robber.) The cops
would like to move to $B^{\prime }$ in such a way that $R$ cannot enter $B.$
The team $X$ of cops remains in the minimal retract cover of $\left\langle B\right\rangle $ and continue to guard it; the team $X^{\prime }$
moves to a minimal retract cover of $\left\langle B^{\prime }\right\rangle$, and, after finitely many rounds guards $\left\langle B^{\prime }\right\rangle$.
Note that $B\cap B^{\prime }$ remains guarded throughout the transition. The cops in $X$ are now free to move without
concern that $R$ will enter $B.$
Now the cops in team $X$ can then move to a minimal retract cover $\mathcal{C}_{R,G}(H)$ of $\left\langle B''\right\rangle $ where $B''$ is the unique bag adjacent to $B'$ which is on a shortest path in $T$ to a bag containing the robber. Team $X$ then guards $\mathcal{C}_{R,G}(H)$. Note that we may now swap the roles of $X$ and $X'$, and we have moved the cops closer to the robber in the tree $T.$ We call the process of moving $X$ from a minimal retract cover of $B$ to a minimal retract cover of $B''$ a \emph{leap step}, as the cops in $X$ move through $B'$ and onwards toward $B''$. See Figure~\ref{leap}.
\begin{figure} [h!]
\begin{center}
\epsfig{figure=leap}
\caption{A leap step}\label{leap}
\end{center}
\end{figure}
By the definition of tree decomposition, the bags containing $R$ form a
subtree $T^{\prime }$ of $T.$ By an iterated application of leap steps (after each such step, we swap the roles of teams $X$ and $X'$) for each bag on a shortest path connecting the cops' bag to the robber's, the cops move closer
to $T^{\prime }$, ensuring the robber will never enter bags previously guarded by the cops. By induction on the number of vertices of $T$, the cops may capture the robber.
\end{proof}
Observe that the proof of Theorem~\ref{main1} gives an algorithm for capturing the robber. Further, we can estimate
the length of the game using this algorithm. To be more precise, the
\emph{length} of a game is the number of rounds it takes (not including the
initial or $0$th round) to capture the robber. We say that a play of the game with $c(G)$
cops is \emph{optimal} if its length is the minimum over all possible strategies for the cops, assuming the robber is trying to evade capture for as
long as possible. There may be many optimal plays possible (for example, on $
P_{4},$ the cop may start on either vertex of the centre), but the length of
an optimal game is an invariant of $G.$ We denote this invariant $\mathrm{capt}(G),$
which we call the \emph{capture time} of $G.$ For a bound on the capture time in terms of the strategy in the proof of Theorem~\ref{main1}, note that the cops move to a bag in the centre of $T$, guard that bag, then move
towards the robber's bag. Given a tree decomposition $T$, let the number of rounds it takes to guard a minimal retract cover of any bag in $T$ be at most $g_T$. Let $tr_T$ be the number of rounds it takes to move from a minimal retract cover of a bag $B$ to a minimal retract cover of a bag which is at most distance two from $B$ in $T$ (for instance, as in a leap step). Since the capture time of $T$ is $\left\lceil\mathrm{diam}(T)/2\right\rceil$ and the worst case is that the cops will need
to guard each bag and transition along each edge of a path with that length, a bound on the capture time of $G$ is then
\begin{equation}\label{two}
\mathrm{capt}(G) \le \min_T\{g_T(\left\lceil\mathrm{diam}(T)/2\right\rceil +1)+tr_T(\left\lceil\mathrm{diam}(T)/2\right\rceil)\},
\end{equation}
where the minimum ranges over all tree decompositions $T$ of $G.$
The bound (\ref{two}) may be far from tight, as it depends on the values of the functions $g_T$ and $tr_T$. We can make a minor improvement on (\ref{two}) in the case where $\mathrm{diam} (T)$ is odd (which implies that the centre of $T$ consists of two vertices). Start each of the two teams of cops on a minimal retract cover of a different one of the two bags in the centre of $T$. After each of these bags is guarded, the cops may proceed with leap steps. Using this algorithm, a bound on the number of rounds needed to capture the robber is found by replacing the ceiling functions in (\ref{two}) with floor functions. Nevertheless, (\ref{two}) represents the first estimate on the capture time that we are aware of which applies to diverse families of graphs, such as outerplanar graphs.
We note that if each bag is a clique, then the idea of the proof shows a
strengthened bound.
\begin{theorem}\label{i}
If $G$ has a tree decomposition with each bag a clique, then the graph $G$
is cop-win.
\end{theorem}
\begin{proof}
The proof is analogous to that of Theorem~\ref{main1}, but only one cop is needed to guard a given
bag. That cop can move to $B\cap B^{\prime }$ without concern that $R$ will
enter $B.$
\end{proof}
Theorem~\ref{i} gives an alternative proof that chordal graphs are cop-win, as
chordal graphs are precisely those graphs with tree decompositions where
each bag induces a clique; see~\cite{bod}. Note also that $k$-trees are chordal, and have
treewidth $k;$ in particular, the bound (\ref{first}) is linear in $k,$
while Theorem~\ref{i} requires only one cop.
We finish with a bound in the case where there are conditions on the intersection of bags.
\begin{theorem}
\label{main2}If $G$ is a graph with a tree decomposition $T$ with the property that any two bags intersect in a clique, then%
\begin{equation*}
c(G)\leq \max_{B\in T}\mathrm{rcc}_G(\left\langle B\right\rangle )+1.
\end{equation*}
\end{theorem}
\begin{proof}
The proof is analogous to the proof of Theorem~\ref{main1}, except the team $%
X^{\prime }$ consists of one additional cop $C^{\prime }$. Using the
notation of the proof of Theorem~\ref{main1}, $m$-many cops guard $\left\langle
B\right\rangle ,$ while the cop $C^{\prime }$ moves to $B\cap B^{\prime }$.
Hence, $\left\langle B\right\rangle $ is guarded. The cops $X$ can move to a minimal retract cover of
$\left\langle B^{\prime }\right\rangle$ and $C^{\prime }$ ensures that $R$ never enters $
B $ (note that $B\cap B^{\prime }$ is a cut-set of $G).$ The cops may now iterate this procedure and, by induction
on the number of vertices of the tree decomposition, eventually capture the robber.
\end{proof}
\section{Acknowledgements}
The authors would like to thank BIRS where part of the research for this
paper was conducted.
\section{Introduction}
Complex Machine Learning (ML) models have been adopted in many real-world decision-making tasks, either to support humans or even to substitute for them. Despite their superior performance, the black-box nature of these models necessitates interpretability methods that explain their automated decisions to the individuals who are subject to these decisions. Among the large body of literature on interpretable ML \citep{burkart2021survey,tjoa2020survey}, Counterfactual (CF) explanations have shown promise for practitioners. A CF explanation contains one or more CF instances. A CF instance is a perturbed version of the original instance that flips the black-box model's prediction. By comparing a CF instance with the original instance, a human user receives hints on what changes to the current situation would have resulted in an alternative decision, i.e., ``If $\bm{X}$ was $\bm{X}'$, the outcome would have been $y'$ rather than $y$.''
Generating CF explanations that are useful in the real world is still challenging. First, CF explanations generated by many existing works do not take into account causal relationships among features. This results in CF instances that are not actionable in the real world. Take a loan application as an example: a CF explanation approach that does not adopt causal relationships among features could suggest changing the present employment type from ``newbie''\footnote{In the German credit dataset, this qualitative value is defined as a person who has less than $1$ year of experience in his/her current job.} to ``senior''\footnote{Between $4$ and $7$ years of experience according to the German credit dataset.} while the age is unchanged. Second, CF explanations are subjective and should be personalized, while existing works do not take into account constraints from the end-users of the ML models.
Back to the loan application example: one user might find it feasible to change the housing type, while another might instead prefer changing, e.g., the number of installments (duration in months).
To bridge these gaps, we propose a new approach to generate CF explanations for any differentiable classifier via feasible perturbations. For this, we extend \citep{mothilal2019explaining} by formulating an objective function for generating CF instances that takes into account two types of feasibility constraints:
\begin{itemize}[label=\textbullet]
\item \textbf{Global feasibilities}: unary and binary monotonic causal constraints extracted from a domain expert,
\item \textbf{Local feasibilities}: constraints in the form of feature perturbation difficulty values, given by the end-users.
\end{itemize}
The objective function is optimized using gradient descent, and feasibility constraints are satisfied during the optimization by rejecting gradient steps that do not satisfy them. It is important to note that here we differentiate between the end-user and the domain expert. An end-user is the individual who is subject to the decision of the ML model, e.g., a bank customer whose loan application is rejected. A domain expert, on the other hand, knows the data and the application. We believe domain experts are naturally able to give feedback on causal relationships among (at least) some features, without necessarily knowing the exact functional relationships.
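The step-rejection idea can be sketched on a toy problem. In the sketch below the loss, the learning rate, and the constraints (feature $0$, say age, may only increase, and by at most one unit, the latter standing in for a user-supplied difficulty cap) are all invented for illustration; this is not the authors' exact procedure:

```python
def descend_with_constraints(x0, grad, feasible, lr=0.1, n_steps=50):
    """Gradient descent in which any step leaving the feasible set is rejected."""
    x = list(x0)
    for _ in range(n_steps):
        candidate = [xi - lr * gi for xi, gi in zip(x, grad(x))]
        if feasible(x0, candidate):     # keep the step only if it is feasible
            x = candidate
    return x

# Toy squared loss pulling x toward `target`.
target = [5.0, 3.0]
grad = lambda x: [2 * (xi - ti) for xi, ti in zip(x, target)]
# Feature 0 may only increase, and by at most 1.0 from its original value.
feasible = lambda orig, cand: orig[0] <= cand[0] <= orig[0] + 1.0

cf = descend_with_constraints([2.0, 0.0], grad, feasible)
```

Note that once the cap on feature $0$ binds, the whole candidate step is rejected and feature $1$ freezes too; masking only the offending coordinates instead of rejecting the entire step is a natural refinement of this sketch.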
The same feasibility constraints were also considered in \cite{mahajan2019preserving} for CF generation. They propose a generative model based on an encoder-decoder framework, where the encoder projects features into a latent space and the decoder generates CF instances from the latent space. Their approach, however, requires complete information about the structural causal model, including the causal graph and the structural equations. This assumption severely restricts the applicability of the method in real-world applications. To cope with this issue, \cite{mahajan2019preserving} proposed a data-driven approach to approximate unary and binary monotonic causal constraints and adopt the approximated relationships in the CF generation. For local feasibility constraints, they considered \textit{implicit} user preferences, i.e., given a pair of an original instance and a CF instance, $\left(\bm{x}, \bm{x}'\right)$, the user outputs $1$ if the CF instance is locally feasible and $0$ otherwise. However, since there is no access to the $\left(\bm{x}, \bm{x}'\right)$ query pairs a priori, they approximate the user by first asking for user preferences on some pairs $\left(\bm{x}, \bm{q}\right)$, where $\bm{q}$ are sample CF instances generated by a CF generator without considering user preferences, and then learning a model that generates a score for each pair that mimics the user's preferences.
Our approach is different from \cite{mahajan2019preserving} in several aspects:
\begin{itemize}
\item in \cite{mahajan2019preserving}, for approximating each binary constraint, the model learns $2$ extra parameters. This hinders the scalability of the method. Furthermore, these approximated binary constraints could be imprecise as they are learned from the data, while in our approach we rely on domain experts to provide such constraints, which is more reliable,
\item local feasibility constraints are incorporated via implicit feedback that is approximated using a function. This feedback is not directly related to the final CF instances to be generated, which could result in undesirable CF instances that do not satisfy the user's constraints. In our approach, on the other hand, we incorporate explicit user feedback directly into the optimization function,
\item the type of user feedback considered in \cite{mahajan2019preserving} for local feasibility is difficult to provide and restrictive. It is difficult to provide since the user must compare the CF instance with the original instance to find out whether the perturbations are locally feasible or not. It is restrictive because the approach provides no tool for the user to state the level of local infeasibility.
As an example, assume a CF instance is generated by perturbing more than one feature of the original instance, where all but one perturbation satisfy the user's feasibility constraints; a single binary feedback would then reject the entire CF instance. In our approach, user feedbacks are ``feature level'' and they are not restricted to $\{0,1\}$,
\item last but not least, \cite{mahajan2019preserving} did not test their approach in a real user study, and it is not evident from the paper how a real user could be involved in the loop to obtain desirable CF explanations.
\end{itemize}
To explore the effectiveness of our explanations, we design user studies where users are asked to rank CF instances generated under different conditions. Through these studies, we found that users tend to give significantly better ranks to CF instances generated by considering global feasibility constraints compared to the case where such constraints are not considered. Furthermore, CF instances generated by adopting both local and global feasibility constraints are better than those generated by only considering global feasibility constraints. However, their difference is not statistically significant.
In summary, we make the following contributions:
\begin{itemize}[label=\textbullet]
\item we propose a novel method to generate CF explanations that preserve both local and global feasibility constraints extracted from end-users and domain experts, respectively. This is obtained via an optimization task rather than relying on heuristics \citep{karimi2020model,tolomei2017interpretable},
\item we design and conduct user studies to demonstrate the quality of generated CF instances. Our studies confirm that CF instances generated by capturing causal relationships are more favorable for end-users compared to those generated without causal constraints. Adding local feasibility constraints to the CF generation can further improve user satisfaction from CF instances.
\end{itemize}
\section{Counterfactual Generation With Local Feasibility via User-Defined Metrics}
Assume we want to explain the undesirable prediction $y$, of a binary black-box classifier $f$, for an instance $\bm{x}\in \mathbb{R}^{p}$. Throughout the paper, we assume that the black-box model is differentiable and does not change over time. In their seminal paper, Mothilal et al.\ \citep{mothilal2019explaining} proposed the following optimization function to generate a set of $K$ diverse CF instances to explain the prediction $f\left(\bm{x}\right)$:
\begin{equation}
\arg\min_{\bm{c}_1,\ldots,\bm{c}_K} \frac{1}{K}\sum_{k=1}^{K} \mathcal{L}\left(f(\bm{c}_k),y'\right)+\frac{\lambda_1}{K}\sum_{k=1}^{K}d\left(\bm{c}_k,\bm{x}\right)-\lambda_2\mbox{d\_div}\left(\bm{c}_1,\ldots,\bm{c}_K\right),
\label{CF_div_gen}
\end{equation}
where $\mathcal{L}$ is the loss function that pushes the predictions of $f$ for CF instances toward the desirable prediction $y'$, $d\left(.,.\right)$ is a distance measure to keep the CF instances close to $\bm{x}$, $\mbox{d\_div}\left(.,.\right)$ is the diversity metric among CF instances and $\lambda_1$ and $\lambda_2$ are regularizers that control the relative importance of diversity among the CF instances and proximity of CF instances to $\bm{x}$.
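As a concrete reference, the objective in Eq.~\eqref{CF_div_gen} can be evaluated numerically. The snippet below is an illustrative Python/NumPy sketch for a fixed set of candidate CF instances, not the official DiCE implementation; a squared-error loss stands in for the hinge loss defined later, and \texttt{f} is assumed to return the probability of the desirable class.

```python
import numpy as np

def dice_objective(cfs, x, f, y_prime, lam1=1.0, lam2=1.0):
    """Evaluate the objective of Eq. (CF_div_gen) for K candidate CFs.

    cfs: (K, p) array of counterfactual candidates; x: (p,) original
    instance; f: model returning the probability of the desirable class;
    y_prime: desirable label. Illustrative sketch, not the DiCE code:
    a squared-error loss stands in for the hinge loss used in the paper.
    """
    # Loss term: push model outputs toward the desirable prediction y'.
    loss = np.mean([(f(c) - y_prime) ** 2 for c in cfs])
    # Proximity term: mean L1 distance of the CFs to the original instance.
    prox = np.mean([np.abs(c - x).sum() for c in cfs])
    # Diversity term: determinant of K_ij = 1 / (1 + d(c_i, c_j)).
    D = np.array([[np.abs(ci - cj).sum() for cj in cfs] for ci in cfs])
    div = np.linalg.det(1.0 / (1.0 + D))
    return loss + lam1 * prox - lam2 * div
```

In a full implementation, this scalar objective would be minimized over the candidates $\bm{c}_1,\ldots,\bm{c}_K$ by gradient descent.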
Of particular importance is the choice of the distance measure $d\left(.,.\right)$ and the diversity metric $\mbox{d\_div}\left(.,.\right)$. These metrics are subjective. When generating CF instances, a user might prefer to keep the distance between some features of the CF instance and the original instance zero, i.e., the user does not tolerate any change to the values of these features, while other features are adjustable. The same argument is valid for the diversity measure. To take these subjective constraints into account, we assume each user provides her preferences for perturbing features during CF generation via feature perturbation difficulty values. These values are exploited in the CF generation process via a similarity metric used when computing proximities.
\subsection{Proximity}
The closeness of a set of CF instances to their original instance is equal to the mean of their negative distances. Following \citep{wachter2017counterfactual,mothilal2019explaining}, we define separate distance metrics for continuous and categorical features.
\textbf{Continuous Features.} The distance between each continuous feature of a CF instance to the corresponding feature in the original instance can be formulated as the Mahalanobis distance with a user defined metric $\bm{A}$:
\begin{equation}
d_{\bm{A}}^{\mbox{cont}}\left(\bm{c}_k,\bm{x}\right) = \left(\bm{c}_k-\bm{x}\right)^T\bm{A}_{\bm{\gamma}}\left(\bm{c}_k-\bm{x}\right) = \sum_{j=1}^{p_{cont}}\gamma_j\left(\bm{c}_{kj}-\bm{x}_j\right)^2
\label{eq:mahalanobis_dist}
\end{equation}
where $\bm{A}_{\bm{\gamma}}\in \mathbb{R}^{p_{cont}\times p_{cont}}$ is a diagonal matrix with diagonal elements being encoded in the vector $\bm{\gamma}$, $\gamma_j$ is the perturbation difficulty value of the $j$\textsuperscript{th} feature, and $p_{cont}$ is the total number of continuous features. Users are restricted to assigning only positive values to $\gamma_j$ to ensure positive semi-definiteness of the Mahalanobis distance metric $\bm{A}_{\bm{\gamma}}$.
Each $\gamma_j$ indicates how influential the feature is in determining the distance between the CF instance and the original instance; a large value for a feature implies that the feature is very difficult for the user to perturb, while decreasing the value toward $1$ means that it is increasingly easier to change \citep{afrabandpey2019human}. To cancel out the effect of different feature ranges, feature-wise distances can be divided by the standard deviation of the feature.
It has been shown in \citep{wachter2017counterfactual} that Manhattan distance normalized by the median absolute deviation of features has desirable properties. This metric can also adopt $\{\gamma_j\}_{j=1}^{p_{cont}}$ as multipliers for feature-wise distances.
\textbf{Categorical Features.} For categorical features, we use the overlap distance with feature perturbation difficulty values as:
\begin{equation}
d_{\bm{A}}^{\mbox{cat}}\left(\bm{c}_k,\bm{x}\right)=\sum_{j=1}^{p_{cat}}\gamma_jI\left(\bm{c}_{kj}\neq\bm{x}_j\right),
\label{eq:cat_dist}
\end{equation}
where $I(.)$ is the indicator function that returns $1$ if the condition inside it is true and $0$ otherwise, and $p_{cat}$ is the total number of categorical features. Based on this metric, when $\gamma_j$ is large, a mismatch between the values of two categorical features results in a high cost, and vice versa.
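The two proximity terms above can be combined into a single distance. A minimal Python sketch, assuming \texttt{gamma\_cont} and \texttt{gamma\_cat} hold the user-provided perturbation difficulty values and \texttt{cont\_idx}/\texttt{cat\_idx} index the continuous and categorical features of an instance:

```python
def weighted_proximity(c, x, gamma_cont, gamma_cat, cont_idx, cat_idx):
    """Distance of a CF instance c to the original instance x, using
    user-supplied perturbation difficulty values gamma. Illustrative
    sketch of the continuous (diagonal Mahalanobis) and categorical
    (gamma-weighted overlap) distances defined in the text."""
    # Continuous features: sum_j gamma_j * (c_j - x_j)^2
    d_cont = sum(g * (float(c[j]) - float(x[j])) ** 2
                 for g, j in zip(gamma_cont, cont_idx))
    # Categorical features: sum_j gamma_j * I(c_j != x_j)
    d_cat = sum(g for g, j in zip(gamma_cat, cat_idx) if c[j] != x[j])
    return d_cont + d_cat
```

A large $\gamma_j$ makes any change to feature $j$ expensive, steering the optimizer toward CF instances that leave that feature untouched.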
\subsection{Diversity}
Following \citep{mothilal2019explaining}, we use the determinant of the kernel matrix of CF instances:
\begin{equation}
\mbox{d\_div}=\left|\mathbf{K}\right|,
\end{equation}
where $\mathbf{K}_{ij}=\frac{1}{1+d\left(\mathbf{c}_i,\mathbf{c}_j\right)}$, and $d\left(\mathbf{c}_i,\mathbf{c}_j\right)$ is the distance between two counterfactual instances $i$ and $j$ as defined in the previous subsection.
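A small sketch of this diversity term (here with the $L_1$ distance inside the kernel) shows why it rewards distinct CF instances: duplicated candidates make two rows of the kernel matrix identical, so the determinant drops to zero.

```python
import numpy as np

def diversity(cfs):
    """d_div = |K| with K_ij = 1 / (1 + d(c_i, c_j)); L1 distance for d."""
    D = np.array([[np.abs(ci - cj).sum() for cj in cfs] for ci in cfs])
    return float(np.linalg.det(1.0 / (1.0 + D)))
```

Maximizing this determinant therefore pushes the $K$ CF instances away from each other.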
\subsection{Loss Function}
We use hinge loss defined as:
\begin{equation}
\max\left(0, 1-z\times \mbox{logit}\left(f\left(\bm{c}\right)\right)\right),
\label{eq:loss_function}
\end{equation}
where $z=-1$ for $y=0$ and $z=1$ for $y=1$, and $\mbox{logit}\left(f\left(\bm{c}\right)\right)$ is the output of the model before entering the softmax layer (the unscaled output).
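A direct transcription of this loss (illustrative; \texttt{logit} denotes the unscaled model output and \texttt{y} the desired class, following the sign convention above):

```python
def hinge_loss(logit, y):
    """Hinge loss of Eq. (eq:loss_function): z = +1 for y = 1 and
    z = -1 for y = 0, applied to the unscaled model output (logit)."""
    z = 1.0 if y == 1 else -1.0
    return max(0.0, 1.0 - z * logit)
```

The loss vanishes once the logit is on the correct side of the decision boundary with margin at least $1$, and grows linearly otherwise.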
\section{Modeling Global Feasibility Constraints}
We consider the case where the structural causal model of the observed data is unknown; however, there is a domain expert who is able to provide unary and binary causal constraints, at least for some features. Unary constraints stipulate whether a feature can only increase or only decrease. For example, a unary constraint on ``Age'' restricts it to only increase, since in the real world it is not possible to decrease one's age. Binary constraints capture the causal relationship between two features. An example of a binary causal relationship between ``education'' and ``age'' states that increasing ``education'' results in an increase of ``age''. We focus on monotonic binary constraints, where increasing (decreasing) an upstream feature causes an increase (decrease) in its child.
Given unary and binary causal constraints provided by the domain expert, our goal is to exploit them when generating CF instances. To model binary constraints, we define a vector $\bm{b}^i\in\mathbb{R}^p$ for each feature $i\in\{1,2,\cdots,p\}$ that represents the causal relationships in which $x_i$ is the upstream variable,
\begin{equation}
\bm{b}^i_j = \begin{cases}
-1 & \text{if } \left(x_{i}\rightarrow x_{j}\right) \mbox{ OR } \left(i==j\right)\\
\;0 & \text{otherwise}
\end{cases},
\end{equation}
where $x_{i}\rightarrow x_{j}$ refers to the binary causal constraint where any change (increase/decrease) in the value of the $i$\textsuperscript{th} feature monotonically changes the value of the $j$\textsuperscript{th} feature.
Let $\mathbb{L}$ be the function to be minimized (using gradient descent) to generate CF instances. At each iteration of the optimization,
$\nabla_{\bm{c}} \mathbb{L} = \left[\frac{\partial \mathbb{L}}{\partial \bm{c}_{1}}, \frac{\partial \mathbb{L}}{\partial \bm{c}_{2}}, \cdots, \frac{\partial \mathbb{L}}{\partial \bm{c}_{p}}\right]^T$ where the $i$\textsuperscript{th} element of the gradient vector, $\frac{\partial \mathbb{L}}{\partial \bm{c}_{i}}$, determines the direction (sign of $\frac{\partial \mathbb{L}}{\partial \bm{c}_{i}}$) and the size (magnitude of $\frac{\partial \mathbb{L}}{\partial \bm{c}_{i}}$) of the step to be taken w.r.t.\ the $i$\textsuperscript{th} feature to reach the minimum of $\mathbb{L}$. The combination of $\nabla_{\bm{c}} \mathbb{L}$ and $\bm{b}^i$ tells whether or not the gradient update violates existing binary causal constraints. This is obtained by
\begin{equation}
\bm{v}^i = \left(\bm{b}^i\circ \mbox{sgn}\left(\nabla_{\bm{c}} \mathbb{L}\right)\right) + \left(\mbox{sgn}\left(\frac{\partial \mathbb{L}}{\partial \bm{c}_{i}}\right)\cdot\mathbf{1}\right), \;\;\; \forall i\in\{1,\cdots,p\}
\label{eq:gradient_validity}
\end{equation}
where $\circ$ is the element-wise multiplication, $\mbox{sgn}\left(.\right)$ is the sign function, and $\mathbf{1}\in \mathbb{R}^p$ is the vector of all ones. Elements of $\bm{v}^i \in \mathbb{R}^p$ can take values in $\{0,\pm1, \pm2\}$ with the following interpretation:
\begin{itemize}[label=\textbullet]
\item when $v^i_j=0$, it is either because there is no (monotonic) causal relationship between the two features, or because the gradient signs satisfy the causal relationship, or because $i=j$,
\item $v^i_j=\pm2$ happens when there is a causal relationship between the two features, but the gradient signs for the two features violate the causal relationship,
\item finally, $v^i_j=\pm1$ happens only when the gradient of the loss function w.r.t.\ one of the features is zero. If the gradient w.r.t.\ the downstream feature, i.e., $x_j$, is zero, then the causal relationship is violated, since the upstream feature changes while the downstream feature remains constant. Otherwise, the update is valid.
\end{itemize}
With the above interpretation and given the vectors $\{\bm{v}^i\}_{i=1}^p$, one accepts or rejects the gradient update w.r.t.\ the $i$\textsuperscript{th} feature as follows:
\begin{equation}
\frac{\partial \mathbb{L}}{\partial \bm{c}_{i}} = \begin{cases}
0 & \text{if } \exists j\in\{1,\cdots,p\}: \left(v^i_j = \pm2\right) \vee \left(v^i_j=\pm1 \wedge \frac{\partial \mathbb{L}}{\partial \bm{c}_{j}} = 0\right)\\
\frac{\partial \mathbb{L}}{\partial \bm{c}_{i}} & \text{otherwise}
\end{cases}
\label{eq:gradient_update}
\end{equation}
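The masking rule of Eqs.~\eqref{eq:gradient_validity} and \eqref{eq:gradient_update} can be sketched as follows. The check is restricted to the features causally linked to $x_i$, following the interpretation given in the text; this is an illustrative reading, not the authors' reference code.

```python
import numpy as np

def mask_gradient(grad, B):
    """Reject gradient updates that violate binary monotonic constraints.

    grad: (p,) gradient of the CF loss; B: (p, p) matrix whose row i is
    b^i, i.e. B[i, j] = -1 if x_i -> x_j or i == j, and 0 otherwise.
    The violation check follows Eqs. (gradient_validity)/(gradient_update),
    restricted to the features causally linked to x_i. Sketch only."""
    grad = np.asarray(grad, dtype=float)
    B = np.asarray(B, dtype=float)
    sg = np.sign(grad)
    masked = grad.copy()
    p = len(grad)
    for i in range(p):
        v = B[i] * sg + sg[i]                       # v^i of Eq. (gradient_validity)
        linked = (B[i] != 0) & (np.arange(p) != i)  # j such that x_i -> x_j
        violated = np.any(np.abs(v[linked]) == 2) or \
                   np.any((np.abs(v[linked]) == 1) & (grad[linked] == 0.0))
        if violated:
            masked[i] = 0.0                         # reject the update on x_i
    return masked
```

For instance, with the single constraint $x_0\rightarrow x_1$, a gradient whose components have opposite signs zeroes the update on the upstream feature, while an all-positive gradient passes through unchanged.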
To apply unary constraints, we define two hypothetical features denoted by $U^+$ and $U^-$. These can only be used as upstream features; $U^+\rightarrow x_k$ determines that $x_k$ can only increase, while $U^-\rightarrow x_k$ states that the feature can only decrease. We define vectors $\bm{b}^{+}\in\mathbb{R}^p$ and $\bm{b}^{-}\in\mathbb{R}^p$ to model unary constraints, where each element of these vectors determines whether or not the corresponding feature is a downstream feature of $U^+$ or $U^-$, respectively. With this definition, we have $\bm{v}^{+} = \bm{b}^{+}\circ\mbox{sgn}\left(\nabla_{\bm{c}}\mathbb{L}\right)$ and $\bm{v}^{-} = \bm{b}^{-}\circ\mbox{sgn}\left(\nabla_{\bm{c}}\mathbb{L}\right)$. Both vectors $\bm{v}^{+}$ and $\bm{v}^{-}$ have values in $\{0,\pm1\}$. When $\bm{v}^{+}_j = -1$, the unary constraint is violated by the update to the $j$\textsuperscript{th} feature, while $\bm{v}^{+}_j=0$ means that the update is zero and $\bm{v}^{+}_j=+1$ represents that the gradient update satisfies the constraint. For $\bm{v}^{-}$, if the value of an element equals $+1$, the constraint is violated.
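A verbatim sketch of the unary check, using the sign convention stated above ($\bm{v}^{+}_j=-1$ or $\bm{v}^{-}_j=+1$ marks a violation):

```python
import numpy as np

def apply_unary_constraints(grad, b_plus, b_minus):
    """Reject gradient updates that violate unary constraints.

    b_plus[j] = 1 if x_j may only increase (U+ -> x_j), else 0;
    b_minus[j] = 1 if x_j may only decrease (U- -> x_j), else 0.
    Verbatim sign convention from the text: v+_j = -1 and v-_j = +1
    mark violations, while v_j = 0 corresponds to a zero update."""
    grad = np.asarray(grad, dtype=float)
    sg = np.sign(grad)
    v_plus = np.asarray(b_plus) * sg
    v_minus = np.asarray(b_minus) * sg
    masked = grad.copy()
    masked[(v_plus == -1) | (v_minus == 1)] = 0.0   # zero out violating updates
    return masked
```

Unconstrained features pass through untouched, so unary and binary masking can be applied one after the other within each gradient step.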
\section{Empirical Evaluation}\label{Sec:experiments}
We conduct two user studies, as explained in the following subsections, to verify how good the explanations generated using our approach are when compared to DiCE \cite{mothilal2019explaining}.
\subsection{Datasets and Model}\label{subsec:datasets}
The datasets used in the experiments are the Adult and German Credit datasets. We adopted the pre-processed version of the Adult dataset based on \cite{haojun2016predicting} with $8$ features, namely, age, work class, education level, marital status, occupation, race, gender, and hours per week. For German Credit, we consider $9$ demographic and socio-economic features, including: duration in month, credit history, credit amount, present employment since unemployed, sex, age, job, and number of people liable to provide maintenance for. For both datasets, categorical features are one-hot-encoded and continuous features are scaled between $0$ and $1$. Datasets are divided into $80\%-20\%$ train and test sets.
The black-box model is a single layer neural network trained for $20$ epochs with learning rate $0.01$ using ADAM optimizer \cite{kingma2015adam} to minimize cross entropy loss. Accuracies of the trained neural network on held-out test set are $83\%$ and $73\%$ for Adult and German Credit, respectively. In all experiments, we followed \cite{mothilal2019explaining} and used Manhattan distance normalized by median absolute deviation of features as distance metric for continuous features when generating counterfactual instances using DiCE and our approach. Both regularizers in Eq. \ref{CF_div_gen}, i.e., $\lambda_1$ and $\lambda_2$, are set to $1$ in the implementation. All experiments were run on a $1.9$ GHz CPU with $8$ GB RAM.
\subsection{Baselines}
We focus on DiCE \cite{mothilal2019explaining} as the baseline, because our implementation is based on the DiCE code and therefore the comparison is easier. In the following subsections, for the sake of brevity, we call our proposed method C-DiCE, i.e., DiCE that also takes into account Causal constraints.
\subsection{Feasibility Constraints and User Satisfaction}\label{user_study}
Most of the existing approaches for counterfactual explanation are validated without user experiments. A key limitation of this practice is that predefined metrics, such as those adopted in \cite{mahajan2019preserving,mothilal2019explaining,sharma2019certifai}, do not precisely capture human cognition when evaluating subjective criteria such as the desirability of CF instances. To address this, we develop experiments with real users to study the goodness of CF explanations under the following conditions:
\begin{itemize}[label=\textbullet]
\item \textbf{C1}: CF instances generated by DiCE, i.e., without feasibility constraints;
\item \textbf{C2}: CF instances generated by C-DiCE by taking into account only global feasibility constraints;
\item \textbf{C3}: CF instances generated by C-DiCE considering both global and local feasibility constraints.
\end{itemize}
Users are asked to rank CF instances generated with DiCE, C-DiCE:C2, and C-DiCE:C3 for several real instances based on three criteria: (i) validity, (ii) feasibility, and (iii) desirability. Validity determines whether or not the generated CF instance flips the outcome of the original instance. Feasibility states whether or not the values assigned to each feature or combination of features are feasible considering real-world constraints. Finally, desirability reflects users' satisfaction with CF instances. Our hypotheses were:
\begin{itemize}[label=\textbullet]
\item \textbf{H1}: CF instances generated by C-DiCE:C2 have, on average, better ranks in terms of the above-mentioned criteria than those generated by DiCE;
\item \textbf{H2}: C-DiCE:C3 generates CF instances with better average ranks compared to both DiCE and C-DiCE:C2 in terms of the introduced criteria.
\end{itemize}
We conduct two user studies: in the first one users were asked to rank CF instances generated by DiCE and C-DiCE:C2 using Adult dataset, and in the second study users were asked to rank CF instances generated by C-DiCE:C2 and C-DiCE:C3 using German credit dataset.
Since we do not have the true causal constraints, we asked $4$ data scientists who were familiar with the datasets to determine unary and binary causal constraints. The extracted constraints were highly consistent, with an agreement rate above $90\%$ on average over all experts; however, we only adopted constraints that all experts agreed on. The constraints defined for the Adult dataset are: $education \rightarrow age$, $U^+ \rightarrow age$, and $U^+ \rightarrow education$.
For the German Credit dataset, the feasibility constraints are: $present\_employment\_since\_unemployed \rightarrow age$, $job \rightarrow age$, and $U^+ \rightarrow age$.
From the test set of each dataset, we randomly assign $5$ samples with undesirable output
to each participant. These samples are drawn randomly without replacement to ensure that participants are assigned unique samples. For each sample, we generate $10$ CF instances; in the first experiment, $5$ using DiCE and $5$ using C-DiCE:C2, and in the second experiment, $5$ using C-DiCE:C2 and $5$ using C-DiCE:C3. These CF instances are shuffled and shown to the participant together with their original instance; however, the participant is unaware of the condition under which each CF instance was generated. CF instances are visualized to the user in a spreadsheet. Developing a more user-friendly tool for visualization and feedback collection is out of the scope of this paper. The participant is asked to rank these CF instances from $1$ to $10$, where $1$ is the best rank and $10$ is the worst. Participants are not allowed to assign the same rank to several CF instances unless they are identical. At the end of the user study, we conduct an interview with the participants to discuss their ranks and ensure they understood the task correctly.
\begin{figure}[t!]
\centering
\begin{tabular}[b]{c}
\includegraphics[scale=.32]{user_specific_adult.png} \\ (a)
\end{tabular} \hspace{-3pt}
\begin{tabular}[b]{c}
\includegraphics[scale=.32]{all_users_adult.png} \\ (b)
\end{tabular} \\
\begin{tabular}[b]{c}
\includegraphics[scale=.32]{user_specific_german.png} \\ (c)
\end{tabular} \hspace{-3pt}
\begin{tabular}[b]{c}
\includegraphics[scale=.32]{all_users_german.png} \\ (d)
\end{tabular}
\caption{Results of the user studies for comparing CF instances generated by (a,b) DiCE and C-DiCE:C2 and (c,d) C-DiCE:C2 and C-DiCE:C3: (a,c) mean of the ranks provided by each user, (b,d) mean and standard deviation of the ranks provided by all users. In (a) and (b), the results of User 3 are discarded.}
\label{fig:T1Results}
\end{figure}
We adopted a between-subjects study: $18$ participants for the first experiment ($10$ male)\footnote{We hired $19$ participants for this study, but one of them (user $3$) was discarded as s/he provided incorrect input. This was pointed out during the interview.}, and $16$ participants for the second one ($9$ male). Participants were from different backgrounds, including computer science ($13$), engineering fields ($8$), mathematics ($5$), social sciences ($5$), and management science ($3$). They were also at different education levels, including Bachelor ($6$), Masters ($8$), PhD student in 2\textsuperscript{nd} or 3\textsuperscript{rd} year ($10$), and PhD graduate ($10$), and were aged $20$-$45$. Each participant was first introduced to the concept and the task using a toy example and then asked to complete the test task. To generate CF instances using C-DiCE:C3, we first ask users to provide their local feasibility constraints by assigning feature perturbation difficulty values $\gamma_j$ to each feature. In our implementation, we use $\gamma_j \in \left[1,5\right]$. Each experiment took $\sim 1$ hour and a movie ticket was awarded.
Figure \ref{fig:T1Results}.(a,b) demonstrates the results of the first user study with the Adult dataset. Figure \ref{fig:T1Results}.a compares the average ranks provided by each participant to each method, DiCE and C-DiCE:C2. All users provided better ranks to CF instances generated by C-DiCE:C2. Figure \ref{fig:T1Results}.b compares the mean and standard deviation of the ranks over all users. Note that the average rank over all users is a value in the range $\left[3,8\right]$, since each method contributes $5$ of the $10$ ranked CF instances per sample.
To further assess the superiority of the CF instances generated by C-DiCE:C2 over DiCE, Table \ref{T1:best-k} reports the ratio of the top-$k$ CF instances generated by DiCE and C-DiCE:C2. According to the table, CF instances generated by C-DiCE:C2 were ranked as the top-$1$ CF instance almost twice as often as those generated by DiCE ($64.4\%$ compared to $35.6\%$). The difference is larger for the top-$2$ and top-$3$ CF instances.
\begin{table}[t!]
\centering
\caption{Ratio (\%) of top-$k$ CF instances generated by each method to all CF instances for $k=1$, $2$, $3$ in Adult dataset. Values are averaged over $18$ users.}
\begin{tabular}{cc|c|c}
\cline{2-4}
& Top-1 & Top-2 & Top-3 \\ \hline
\multicolumn{1}{c|}{C-DiCE:C2} & $\bm{64.4}$ & $\bm{70.6}$ & $\bm{74.2}$ \\ \hline
\multicolumn{1}{c|}{DiCE} & $35.6$ & $29.4$ & $25.8$ \\ \hline
\end{tabular}
\label{T1:best-k}
\end{table}
Figure \ref{fig:T1Results}.(c,d) shows the results of the second experiment. Figure \ref{fig:T1Results}.c demonstrates that $10$ out of $16$ users gave better ranks to the CF instances generated by C-DiCE:C3, where the differences between the average ranks provided by User 27 and User 35 are very marginal. Figure \ref{fig:T1Results}.d shows that the average ranks over all users for the two approaches are very close to each other.
Similar to the first experiment, Table \ref{T2:best-k} demonstrates the ratio of the top-$k$ ranked CF instances generated by C-DiCE:C3 compared to those generated by C-DiCE:C2. The table demonstrates that users prefer CF instances generated by C-DiCE:C3; however, the differences in the top-$k$ values are not as large as those shown in Table \ref{T1:best-k}. In this experiment, there were many identical CF instances generated by C-DiCE:C2 and C-DiCE:C3. Participants assigned the same ranks to identical CF instances; consequently, the values in the corresponding columns of Table \ref{T2:best-k} do not sum to $100\%$.
\begin{table}[t!]
\centering
\caption{Ratio (\%) of top-$k$ CF instances generated by C-DiCE under two different conditions to all CF instances for $k=1$, $2$, $3$ in German credit dataset. Values are averaged over $16$ users.}
\begin{tabular}{cc|c|c}
\cline{2-4}
& Top-1 & Top-2 & Top-3 \\ \hline
\multicolumn{1}{c|}{C-DiCE:C2} & $55$ & $50.7$ & $53.5$ \\ \hline
\multicolumn{1}{c|}{C-DiCE:C3} & $\bm{68.7}$ & $\bm{59.3}$ & $\bm{58}$ \\ \hline
\end{tabular}
\label{T2:best-k}
\end{table}
We use the Bayesian T-test \cite{kruschke2013bayesian} to assess the significance of the differences between the average ranks provided by the participants to CF instances generated by different approaches. The Bayesian T-test constructs a distribution for the mean and standard deviation of the group of ranks given by the users to the CF instances generated by each approach. Then, it constructs a probability distribution over the differences between the group-specific distributions using MCMC estimation. Figure \ref{fig:sig_test} demonstrates these distributions for the (a) first experiment, where $\mu_1$ and $\mu_2$ refer to the average ranks of the CF instances generated by C-DiCE:C2 and DiCE, respectively, and (b) second experiment, where $\mu_1$ and $\mu_2$ refer to the average ranks given by the participants to CF instances generated by C-DiCE:C3 and C-DiCE:C2, respectively. Each distribution includes the mean credible value as the best guess of the actual difference and the $95\%$ Highest Density Interval (HDI) as the range where the actual difference lies with $95\%$ credibility. If the $95\%$ HDI includes zero, the difference between the average ranks is not significant. Figure \ref{fig:sig_test}.a, Figure \ref{fig:T1Results}, and Table \ref{T1:best-k} confirm that CF instances generated by C-DiCE:C2 are significantly better than those generated by DiCE. This difference is due to the adoption of global feasibility constraints. On the other hand, in Figure \ref{fig:sig_test}.b, the $95\%$ HDI contains zero, which states that the difference between the average ranks of CF instances generated by C-DiCE:C2 and C-DiCE:C3 is not statistically significant.
\begin{figure}[t]
\centering
\begin{tabular}[b]{c}
\includegraphics[scale=0.3]{BEST_Test_adults.png} \\ (a)
\end{tabular} \hspace{-2pt}
\begin{tabular}[b]{c}
\includegraphics[scale=0.3]{BEST_Test_german.png} \\ (b)
\end{tabular}
\caption{Results of the Bayesian \textit{t}-test for the (a) first experiment, where $\mu_1$ and $\mu_2$ refer to the average ranks given to the CF instances generated by C-DiCE:C2 and DiCE, respectively, and (b) second experiment, where $\mu_1$ and $\mu_2$ demonstrate the average ranks of the CF instances generated by C-DiCE:C3 and C-DiCE:C2, respectively. If the $95\%$ HDI contains zero, then the difference between the average ranks of the CF instances generated by the two approaches is not statistically significant.}
\label{fig:sig_test}
\end{figure}
\section{Conclusion and Discussion}\label{conclusion}
Building upon prior work on CF explanations, we introduce a novel approach to generate feasible and desirable CF explanations. We consider two levels of feasibility, i.e., global and local. The former is defined in terms of causal constraints among variables and is extracted from domain experts, while the latter captures end-user-defined constraints. For global feasibility, the current work accounts for unary and binary monotonic causal constraints as the two most common types of constraints and leaves more complicated constraints for future work. User studies demonstrate the effectiveness of the proposed approach in increasing user satisfaction with CF explanations. Designing a tool to help domain experts provide global feasibility constraints, especially in datasets with a large number of features, is considered as future work.
\bibliographystyle{abbrvnat}
\section{Quantum statistical mechanics}
\label{sec:zubarev:quant_stat_mech}
The aim of statistical mechanics is the description of systems which are known solely by the specification of some global variables.
Typically these systems are macroscopic, in the sense that they involve a great number of degrees of freedom, and in quantum mechanical language, the given macroscopic data are expectation values of operators.
At the microscopic level, however, there are in general several possible states producing the same values for the macroscopic variables, therefore the
state is not completely known: all we can give is the probability that the system is described by one state or another.
Therefore, such a system is represented by the set $\{|\psi_i\rangle,p_i\}$ of the possible states $|\psi_i\rangle$ compatible with the given macroscopic data each with its own probability of occurrence $p_i\in [0,1]$ such that $\sum_ip_i=1$.
The set $\{|\psi_i\rangle,p_i\}$ is called a \textit{statistical ensemble}, and each state $|\psi_i\rangle$ is an element of that ensemble.
Thus, such a system embodies two probabilities of different origin: the one is quantum mechanical and due to the $|\psi_i\rangle$, the other is classical and ascribed to our incomplete knowledge of the state represented by the $p_i$.
If the system is exactly in one of the $|\psi_i\rangle$ it is said to be in a \textit{pure state}, otherwise it is in a \textit{mixed state}.
For a pure state, all the $p_i$ vanish except one, which is equal to 1.
On the contrary, for a maximally mixed state, all the $|\psi_i\rangle$ have the same probability, which is $p_i=1/N$ with $N$ the dimension of the Hilbert space the $|\psi_i\rangle$ belong to.
For a system described by the statistical ensemble $\{|\psi_i\rangle,p_i\}$, the expectation value of an operator $\widehat{O}$ is
\begin{equation}\label{zubeq19}
\langle \widehat{O}\rangle=\sum_ip_i\langle \psi_i|\widehat{O}|\psi_i\rangle=\mathrm{tr}(\widehat{\rho}\widehat{O}),
\end{equation}
where $\widehat{\rho}$ such that $\mathrm{tr}(\widehat{\rho})=1$ is called the \textit{density operator} and is defined as
\begin{equation}\label{zubeq08}
\widehat{\rho}\equiv \sum_ip_i|\psi_i\rangle \langle \psi_i|.
\end{equation}
The density operator embodies in a compact form the statistical nature of the macroscopic system and allows us to calculate expectation values of operators, therefore it is the natural object used in quantum statistical mechanics to describe a statistical ensemble.
Our knowledge of the system is more or less complete: clearly our information is maximum when we can make predictions with full certainty, and it is larger when the system is in a pure state than in a mixed one.
Moreover, the system is better known when the number of possible
states is small or when the probability for one of them is close to unity than when there is a large number of possible
states with all approximately the same probability.
The idea is then to identify the missing information with a quantitative measure for the degree of disorder existing in a system whose preparation has a random nature.
This quantity is the \textit{von Neumann entropy}
\begin{equation}\label{zubeq01}
S\equiv -\mathrm{tr}(\widehat{\rho}\log \widehat{\rho}),
\end{equation}
which is, in fact, maximum for a maximally mixed state and minimum and equal to zero for a pure state, as desired.
The von Neumann entropy satisfies a number of other properties, but to our intents and purposes it suffices to say that it is a good measure of our missing information on a system.
Recall that the thermodynamic entropy is the von Neumann entropy multiplied by the Boltzmann constant, which in this work is set to $k_{\rm B}=1$, so we regard them as the same quantity which we indicate as $S$ and simply refer to as ``entropy''.
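As a quick numerical illustration of these limiting cases, the sketch below (Python/NumPy) computes $S$ from the eigenvalues of $\widehat{\rho}$, using the fact that $x\log x\to 0$ for vanishing eigenvalues.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -tr(rho log rho), from the eigenvalues of rho (x log x -> 0)."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]          # zero eigenvalues contribute nothing
    return float(-np.sum(w * np.log(w)))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])   # pure state |0><0|
mixed = np.eye(2) / 2.0                      # maximally mixed state, N = 2
```

As expected, the pure state gives $S=0$ and the maximally mixed state gives $S=\log N=\log 2$.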
In order to actually be able to make predictions on a system prepared in some given way, we must know how to assign it a density operator representing the physical configuration that we want to describe.
If we know nothing about the system, the answer is simple: we must assume that all the possible
states are equally probable, any other choice would arbitrarily introduce an order for which there are no reasons of believing that it exists.
This is but the maximally mixed state, namely the one that maximizes the
entropy.
If the system is partially known, meaning that we are given some data, the same kind of philosophy applies: among all the possible
states, we must select those compatible with the given data and assign them equal probabilities.
In other words, we must look for the maximum of the
entropy while reproducing the known data.
We assume therefore that the best guess for $\widehat{\rho}$ is provided by the following prescription: amongst all the density operators compatible with the available data, we must represent the system by the one which has the largest value of the
entropy.
This is called the \textit{maximum entropy principle}.
The states determined accordingly are said to be at \textit{thermodynamic equilibrium}, all other states are called of \textit{non-equilibrium}.
Mathematically, the available information is represented by a set of expectation values $\{\langle \widehat{O}_i\rangle\}$ of a set of observables $\{\widehat{O}_i\}$.
The density operator must reproduce the data, therefore it must fulfill that constraints
\begin{equation}
\mathrm{tr}(\widehat{\rho}\widehat{O}_i)=\langle \widehat{O}_i\rangle,\qquad \forall i.
\end{equation}
For each constraint, we introduce a Lagrange multiplier $\lambda_i$ and maximize the functional expression
\begin{equation}\label{zubeq02}
-\mathrm{tr}(\widehat{\rho}\log \widehat{\rho})-\sum_i\lambda_i\left(\mathrm{tr}(\widehat{\rho}\widehat{O}_i)-\langle \widehat{O}_i\rangle \right)
\end{equation}
with respect to $\widehat{\rho}$.
The solution has the well-known form
\begin{equation}\label{zubeq03}
\widehat{\rho}=\frac{1}{Z}\exp \left[-\sum_i\lambda_i\widehat{O}_i\right],\qquad
Z=\mathrm{tr} \left(\exp \left[-\sum_i\lambda_i\widehat{O}_i\right]\right),
\end{equation}
where $Z$ is the \textit{partition function} enforcing the normalization $\mathrm{tr}(\widehat{\rho})=1$.
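Equation \eqref{zubeq03} can be checked numerically for a simple system. The sketch below builds the maximum-entropy state of a single qubit with one constraint, the energy: the Lagrange multiplier plays the role of an inverse temperature, giving the canonical ensemble $\widehat{\rho}={\rm e}^{-\beta\widehat{H}}/Z$ for a toy Hamiltonian.

```python
import numpy as np

def gibbs_state(ops, lams):
    """rho = exp(-sum_i lam_i O_i) / Z, Eq. (zubeq03), for Hermitian O_i."""
    A = -sum(l * O for l, O in zip(lams, ops))
    w, V = np.linalg.eigh(A)                 # spectral decomposition of A
    M = (V * np.exp(w)) @ V.conj().T         # matrix exponential exp(A)
    Z = np.trace(M).real                     # partition function
    return M / Z

# Single qubit with H = diag(0, 1): the energy constraint with multiplier
# beta = 1 yields the canonical ensemble rho = exp(-beta H) / Z.
H = np.diag([0.0, 1.0])
rho = gibbs_state([H], [1.0])
```

The resulting populations are the Boltzmann weights ${\rm e}^{-\beta E_n}/Z$, and the state is correctly normalized to $\mathrm{tr}(\widehat{\rho})=1$.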
Depending on which set of observables is chosen, a different statistical ensemble is obtained.
We conclude this review with a remark on the time evolution.
In the Schr\"odinger picture, the evolution of the
states $|\psi_i\rangle$ is governed by the Schr\"odinger equation, whose solution is a unitary transformation of $|\psi_i\rangle$
\begin{equation}
i\frac{{\rm d}}{{\rm d} t}|\psi_i\rangle=\widehat{H}|\psi_i\rangle
\qquad \Rightarrow \qquad
|\psi_i(t)\rangle=\widehat{U}(t)|\psi_i(0)\rangle,\qquad
\widehat{U}(t)={\rm e}^{-i\widehat{H}t}
\end{equation}
with $\widehat{H}$ the Hamiltonian.
As a consequence, the evolution of the density operator is governed by the von Neumann equation, which is similar to the Heisenberg equation but with opposite sign.
Its solution is again a unitary transformation of $\widehat{\rho}$
\begin{equation}\label{zubeq14}
\frac{{\rm d} \widehat{\rho}}{{\rm d} t}=-i[\widehat{H},\widehat{\rho}]
\qquad \Rightarrow \qquad
\widehat{\rho}(t)=\widehat{U}(t)\widehat{\rho}(0)\widehat{U}^{\dagger}(t).
\end{equation}
A known property of the
entropy, which can be checked by using the definition \eqref{zubeq01}, is its invariance under unitary transformations of the density operator; therefore, it is in fact stationary in the Schr\"odinger picture.
This is more obvious in the Heisenberg picture, where the
states $|\psi_i\rangle$ are constant in time, and so is the density operator.
We must conclude that, regardless of the picture, the
entropy is always stationary in quantum statistical mechanics.
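This stationarity is easy to verify numerically. The following sketch (generic random matrices of ours, not any specific physical system) evolves a mixed state according to the solution of the von Neumann equation \eqref{zubeq14} and checks that the von Neumann entropy does not change.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
dim = 4

def entropy(rho):
    """Von Neumann entropy S = -tr(rho log rho), from the eigenvalues."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

# A random mixed state: rho0 = sum_i p_i |v_i><v_i| with orthonormal v_i.
probs = rng.dirichlet(np.ones(dim))
V = np.linalg.qr(rng.normal(size=(dim, dim))
                 + 1j * rng.normal(size=(dim, dim)))[0]
rho0 = (V * probs) @ V.conj().T

# A random Hermitian Hamiltonian and the solution of the von Neumann
# equation: rho(t) = U(t) rho(0) U(t)^dagger with U(t) = exp(-i H t).
B = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (B + B.conj().T) / 2
U = expm(-1j * H * 2.7)
rho_t = U @ rho0 @ U.conj().T

s0, st = entropy(rho0), entropy(rho_t)   # equal: the entropy is stationary
```

The equality $s_0 = s_t$ holds for any Hamiltonian and any evolution time, as the entropy depends only on the eigenvalues of the density matrix, which are preserved by unitary conjugation.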
However, it is well-known that, in classical physics, irreversible processes that produce entropy can definitely occur, as we discussed also in Chapter \ref{chapter:relhydro}.
From a kinetic theory perspective, for instance, if a system is initially out of thermodynamic equilibrium, the collisions between particles make the entropy increase until it reaches a maximum corresponding to a state of thermodynamic equilibrium.
Thus, in analogy with the classical theory, is there a way to describe, within the quantum theory, irreversible processes where the entropy can actually increase?
This question leads us to the next Section.
\section{The Zubarev approach}
\label{sec:zubarev_zub_approach}
The aim of D.\ N.\ Zubarev was actually considerably broader than just the description of quantum processes with entropy production.
With the development of relativistic astrophysics, cosmology, and the hydrodynamic theory of multiparticle production, in the late 70's it became necessary to go beyond the framework of the phenomenological linear relativistic hydrodynamics.
Thus, his purpose back then was rather to derive systematically the equations of non-linear quantum relativistic hydrodynamics, the transport relations and the transport coefficients without relying, in the general case, on the assumption that the state of the system is close to thermodynamic equilibrium.
Dealing with full non-equilibrium and including quantum effects, his formulation appears to be
completely general
and also well suited to describe, among other things, the entropy production of quantum processes.
\subsection{Local thermodynamic equilibrium density operator}
The core of the Zubarev formulation of the quantum statistical foundations of relativistic hydrodynamics is the so-called \textit{local thermodynamic equilibrium density operator}.
This is defined, according to the maximum entropy principle, as the operator $\widehat{\rho}_{\rm LE}$ that maximizes the functional expression $-\mathrm{tr}(\widehat{\rho}_{\rm LE}\log \widehat{\rho}_{\rm LE})$ with constraints of given energy, momentum and possible charge densities.
This definition does not assume any underlying kinetic theory, and it is also completely unambiguous in the non-relativistic theory, as it only demands that the densities vary significantly over distances much larger than the characteristic microscopic scale \cite{balian2006microphysics, Becattini:2014yxa}.
Its relativistic extension, however, requires a little more effort due to the stronger frame-dependence of the densities.
The first step towards a covariant expression of $\widehat{\rho}_{\rm LE}$ is the foliation of the spacetime with a 1-parameter family of 3-dimensional spacelike hypersurfaces $\{ \Sigma(\tau)\}$.
Indeed the timelike unit vector field $n^{\mu}$ normal to the hypersurfaces defines worldlines of observers, but $\tau$ does not in general coincide with the proper time of comoving observers.
By the Frobenius theorem, in order for the foliation to be well-defined, the normal unit vector must fulfill the so-called \textit{vorticity-free condition}
\begin{equation}\label{zubeq07}
\epsilon_{\mu \nu \rho \sigma}n^{\mu}(\partial^{\nu}n^{\rho}-\partial^{\rho}n^{\nu})=0,
\end{equation}
with $\epsilon_{\mu \nu \rho \sigma}$ the Levi-Civita symbol.
Such a foliation, also known as the Arnowitt-Deser-Misner (ADM) foliation, will hereafter be assumed to exist for the geometry at hand.
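The vorticity-free condition can be checked numerically for a candidate $n^{\mu}$. The sketch below (our own toy fields in Minkowski spacetime with signature $(+,-,-,-)$) evaluates the contraction $\epsilon_{\mu \nu \rho \sigma}n^{\mu}\partial^{\nu}n^{\rho}$, which is equivalent to \eqref{zubeq07} up to a factor 2, using finite-difference derivatives: a $z$-dependent boost field passes the test, while a rigidly rotating field fails it.

```python
import numpy as np
from itertools import permutations

# Levi-Civita symbol with eps[0,1,2,3] = +1.
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps[p] = round(np.linalg.det(np.eye(4)[list(p)]))

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric (+,-,-,-)

def vorticity(n_field, x, h=1e-5):
    """omega_sigma = eps_{mu nu rho sigma} n^mu d^nu n^rho at the point x."""
    dn = np.zeros((4, 4))                # dn[nu, rho] = partial_nu n^rho
    for nu in range(4):
        step = np.zeros(4); step[nu] = h
        dn[nu] = (n_field(x + step) - n_field(x - step)) / (2 * h)
    dn_up = eta @ dn                     # raise the derivative index
    return np.einsum('mnrs,m,nr->s', eps, n_field(x), dn_up)

# Hypersurface-orthogonal unit field: a z-dependent boost along z.
boost = lambda x: np.array([np.cosh(0.3 * x[3]), 0.0, 0.0, np.sinh(0.3 * x[3])])

# Rigidly rotating unit field: known NOT to admit an orthogonal foliation.
def rotating(x, Om=0.2):
    v = np.array([1.0, -Om * x[2], Om * x[1], 0.0])
    return v / np.sqrt(1.0 - Om**2 * (x[1]**2 + x[2]**2))

pt = np.array([0.0, 0.4, 0.3, 0.5])
w_boost = vorticity(boost, pt)           # ~ 0: a foliation exists
w_rot = vorticity(rotating, pt)          # nonzero: the condition fails
```

The rotating field is the standard counterexample: its kinematic vorticity obstructs the existence of hypersurfaces everywhere orthogonal to the flow.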
Whatever the expression of $\widehat{\rho}_{\rm LE}$ is, the renormalized energy-momentum and charge densities on a hypersurface $\Sigma(\tau)$ of the foliation calculated with $\widehat{\rho}_{\rm LE}$ are respectively
$n_{\mu}\mathrm{tr}(\widehat{\rho}_{\rm LE}\widehat{T}^{\mu \nu})_{\rm ren}$ and $n_{\mu}\mathrm{tr}(\widehat{\rho}_{\rm LE}\widehat{j}^{\mu})_{\rm ren}$.
These are constrained to equal the actual values, that is
\begin{equation}\label{zubeq04}
n_{\mu}\mathrm{tr}(\widehat{\rho}_{\rm LE}\widehat{T}^{\mu \nu})_{\rm ren}=n_{\mu}T^{\mu \nu},\qquad
n_{\mu}\mathrm{tr}(\widehat{\rho}_{\rm LE}\widehat{j}^{\mu})_{\rm ren}=n_{\mu}j^{\mu}.
\end{equation}
In the following, these will be referred to as the \textit{constraint equations}.
The quantities on the right-hand side are the actual, finite ones, however they are known or defined.
In other words, they are what we called the given data in the previous Section.
The plain expectation values of $\widehat{T}^{\mu \nu}$ and $\widehat{j}^{\mu}$ are divergent in general, therefore it is important to emphasize that the quantities on the left-hand side must be suitably renormalized in order to match the actual ones.
The renormalization procedure depends on the details of the underlying Quantum Field Theory.
Throughout the rest of this work, we will be concerned with the simplest case of free field theories, therefore renormalization is most readily established by subtraction of the vacuum expectation value
\begin{subequations}
\begin{align}
\mathrm{tr}(\widehat{\rho}_{\rm LE}\widehat{T}^{\mu \nu})_{\rm ren}=&
\mathrm{tr}(\widehat{\rho}_{\rm LE}\widehat{T}^{\mu \nu})-\langle 0|\widehat{T}^{\mu \nu}|0\rangle=
\mathrm{tr}(\widehat{\rho}_{\rm LE}:\widehat{T}^{\mu \nu}:)=\langle :\widehat{T}^{\mu \nu}:\rangle_{\rm LE}\\
\mathrm{tr}(\widehat{\rho}_{\rm LE}\widehat{j}^{\mu})_{\rm ren}=&
\mathrm{tr}(\widehat{\rho}_{\rm LE}\widehat{j}^{\mu})-\langle 0|\widehat{j}^{\mu}|0\rangle=
\mathrm{tr}(\widehat{\rho}_{\rm LE}:\widehat{j}^{\mu}:)=\langle :\widehat{j}^{\mu}:\rangle_{\rm LE},
\end{align}
\end{subequations}
which is tantamount to normally ordering the creation and annihilation operators because the conserved currents are quadratic in the fields.
In the last step, we have used the notation $\langle \widehat{O}\rangle_{\rm LE}\equiv \mathrm{tr}(\widehat{\rho}_{\rm LE}\widehat{O})$ for the expectation value of an operator $\widehat{O}$ calculated with $\widehat{\rho}_{\rm LE}$, much in the same way as \eqref{zubeq19}.
Thus, in analogy with the non-relativistic case \eqref{zubeq02}, we introduce a set of Lagrange multipliers and maximize the following functional expression with respect to $\widehat{\rho}_{\rm LE}$
\begin{equation}
-\mathrm{tr}(\widehat{\rho}_{\rm LE}\log \widehat{\rho}_{\rm LE})-\int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\left[
\left(\mathrm{tr}(\widehat{\rho}_{\rm LE}\widehat{T}^{\mu \nu})_{\rm ren}-T^{\mu \nu}\right)\beta_{\nu}-\zeta \left(\mathrm{tr}(\widehat{\rho}_{\rm LE}\widehat{j}^{\mu})_{\rm ren}-j^{\mu}\right)\right],
\end{equation}
where ${\rm d} \Sigma_{\mu}\equiv {\rm d} \Sigma \,n_{\mu}$ with ${\rm d} \Sigma$ the measure on $\Sigma(\tau)$.
Once again in analogy with \eqref{zubeq03}, the solution of the variational problem is
\begin{equation}\label{zubeq05}
\widehat{\rho}_{\rm LE}=\frac{1}{Z_{\rm LE}}\exp \left[-\int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\left(\widehat{T}^{\mu \nu}\beta_{\nu}-\zeta \widehat{j}^{\mu}\right)\right]
\end{equation}
\begin{equation}\label{zubeq06}
Z_{\rm LE}=\mathrm{tr} \left(\exp \left[-\int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\left(\widehat{T}^{\mu \nu}\beta_{\nu}-\zeta \widehat{j}^{\mu}\right)\right]\right)
\end{equation}
where the partition function $Z_{\rm LE}$ ensures $\mathrm{tr}(\widehat{\rho}_{\rm LE})=1$.
It is important to notice that it can be kept in this simple form, without subtraction of the vacuum expectation value, because the latter is a non-operator term which would appear also in the partition function, hence cancelling out in the ratio.
In formulae
\begin{equation}\label{zubeq24}
\begin{split}
&\frac{\exp \left[-\int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\left(:\widehat{T}^{\mu \nu}:\beta_{\nu}-\zeta :\widehat{j}^{\mu}:\right)\right]}{\mathrm{tr} \left(\exp \left[-\int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\left(:\widehat{T}^{\mu \nu}:\beta_{\nu}-\zeta :\widehat{j}^{\mu}:\right)\right]\right)}\\
&=\frac{\exp \left[-\int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\left((\widehat{T}^{\mu \nu}-\langle 0|\widehat{T}^{\mu \nu}|0\rangle)\beta_{\nu}-\zeta (\widehat{j}^{\mu}-\langle 0|\widehat{j}^{\mu}|0\rangle)\right)\right]}{\mathrm{tr} \left(\exp \left[-\int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\left((\widehat{T}^{\mu \nu}-\langle 0|\widehat{T}^{\mu \nu}|0\rangle)\beta_{\nu}-\zeta (\widehat{j}^{\mu}-\langle 0|\widehat{j}^{\mu}|0\rangle)\right)\right]\right)}\\
&=\frac{\exp \left[-\int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\left(\widehat{T}^{\mu \nu}\beta_{\nu}-\zeta \widehat{j}^{\mu}\right)\right]}{\mathrm{tr} \left(\exp \left[-\int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\left(\widehat{T}^{\mu \nu}\beta_{\nu}-\zeta \widehat{j}^{\mu}\right)\right]\right)}
=\widehat{\rho}_{\rm LE}.
\end{split}
\end{equation}
In other words, $\widehat{\rho}_{\rm LE}$ is invariant under addition or subtraction of a non-operator term such as a vacuum expectation value.
This property will come in handy later on.
Equation \eqref{zubeq05} is the fully covariant expression of the local thermodynamic equilibrium density operator, and it holds in a general curved spacetime.
It was first obtained by Zubarev \cite{Zubarev:1979} and van Weert \cite{Vanweert1982133}, and has been recently reworked \cite{Becattini:2014yxa, Hayata:2015lga}.
In order to enforce the constraint equations \eqref{zubeq04}, we introduced the Lagrange multipliers $\beta^{\mu}$ and $\zeta$: their physical meaning is that of thermodynamic fields, and they are determined as solutions of the constraint equations themselves with $\widehat{\rho}_{\rm LE}$ given by \eqref{zubeq05}.
Now, it is clear that the local thermodynamic equilibrium density operator depends first on the chosen foliation and then on the particular hypersurface $\Sigma(\tau)$, hence on the vector field $n^{\mu}$.
In turn, the thermal expectation values on the left-hand side of the constraint equations also depend on $\Sigma(\tau)$.
Thus, explicitly, the constraint equations read
\begin{equation}
n_{\mu}\mathrm{tr}(\widehat{\rho}_{\rm LE}[\beta^{\mu},\zeta,n^{\mu}]\widehat{T}^{\mu \nu})_{\rm ren}=n_{\mu}T^{\mu \nu},\qquad
n_{\mu}\mathrm{tr}(\widehat{\rho}_{\rm LE}[\beta^{\mu},\zeta,n^{\mu}]\widehat{j}^{\mu})_{\rm ren}=n_{\mu}j^{\mu}
\end{equation}
where the dependence of $\widehat{\rho}_{\rm LE}$ on $\beta^{\mu}$, $\zeta$ and $n^{\mu}$ can be functional.
For any given spacetime foliation, these are 5 equations in 5 unknowns, and can be solved in principle to determine the thermodynamic fields.
In the general case this is a non-trivial task, however a great simplification is achieved if $n^{\mu}$ is taken to be the direction of $\beta^{\mu}$, that is $n^{\mu}=\beta^{\mu}/\sqrt{\beta^2}$, which is possible as $\beta^{\mu}$ is known to be timelike, provided it satisfies the vorticity-free condition \eqref{zubeq07}.
Nevertheless, in \cite{Becattini:2014yxa} it is argued that a definition of $n^{\mu}$ based on $\beta^{\mu}$ can be put forward also in the vorticous case, at least in Minkowski spacetime.
With this choice, the number of independent variables is reduced, and the above equations become
\begin{equation}
\beta_{\mu}\mathrm{tr}(\widehat{\rho}_{\rm LE}[\beta^{\mu},\zeta]\widehat{T}^{\mu \nu})_{\rm ren}=\beta_{\mu}T^{\mu \nu},\qquad
\beta_{\mu}\mathrm{tr}(\widehat{\rho}_{\rm LE}[\beta^{\mu},\zeta]\widehat{j}^{\mu})_{\rm ren}=\beta_{\mu}j^{\mu}.
\end{equation}
Thus, although it is true that there is an overall ambiguity due to the choice of the hypersurface $\Sigma(\tau)$, the latter can in fact be chosen consistently by using $\beta^{\mu}$ itself.
This choice is known as the $\beta$-\textit{frame}, or the \textit{thermodynamic frame}, represented by the velocity field defined as
\begin{equation}\label{zubeq11}
u^{\mu}\equiv \frac{\beta^{\mu}}{\sqrt{\beta^2}},\qquad
T\equiv \frac{1}{\sqrt{\beta^2}}.
\end{equation}
The Lorentz scalar $T$ defines a \textit{proper temperature}, namely the one measured by an ideal thermometer comoving with the system, and is in general different from the one marked by a fixed ideal thermometer that sees the system passing by, that is $T_{\rm fix}=1/\beta^0$.
By virtue of this interpretation, $\beta^{\mu}$ is called the \textit{four-temperature} vector field.
Finally, $\zeta$, sometimes called the \textit{fugacity}, is the ratio between the chemical potential $\mu$ associated to the charged current $\widehat{j}^{\mu}$ and the proper temperature, $\zeta \equiv \mu/T$, as we saw in Chapter \ref{chapter:relhydro}.
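A short numerical example of the decomposition \eqref{zubeq11} (the numbers are arbitrary, chosen only for illustration): for a fluid at proper temperature $T_0$ boosted with velocity $v$, the comoving thermometer reads $T = T_0$, while the fixed one reads $T_{\rm fix}=1/\beta^0=T_0/\gamma$.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])       # Minkowski metric (+,-,-,-)

def beta_frame(beta):
    """Velocity and proper temperature from the four-temperature."""
    b2 = beta @ eta @ beta                   # beta^2, positive for timelike beta
    return beta / np.sqrt(b2), 1.0 / np.sqrt(b2)

# Four-temperature of a fluid at proper temperature T0, boosted along x.
T0, v = 0.15, 0.6
gamma = 1.0 / np.sqrt(1.0 - v**2)
beta = (1.0 / T0) * np.array([gamma, gamma * v, 0.0, 0.0])

u, T = beta_frame(beta)
T_fix = 1.0 / beta[0]                        # thermometer at rest in the lab
# T recovers T0, while T_fix = T0/gamma < T0 for the moving fluid.
```

This also shows in which sense the four-temperature can be treated as the fundamental field: both $u^{\mu}$ and $T$ are recovered from $\beta^{\mu}$ alone.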
Much more could actually be said about the $\beta$-frame and its properties, but, to all intents and purposes of this work, we would like the takeaway to be just the following.
First of all, it is true that the local thermodynamic equilibrium density operator depends on the choice of the foliation and then on the spacelike hypersurface onto which the densities are given.
Actually, this is not completely new, as in the usual thermal field theory it is likewise necessary to assign initial values on a Cauchy hypersurface.
However, the four-temperature can in fact help fixing this ambiguity.
And second, while in relativistic hydrodynamics the field $\beta^{\mu}$ is defined starting from the velocity field and the temperature, here we showed that the other way round is equally possible, namely we regarded the four-temperature as a fundamental quantity from which the velocity field and proper temperature were derived \cite{Van:2013sma}.
For a more thorough discussion of the $\beta$-frame and the reasons why it should be regarded as a privileged frame, see \cite{Becattini:2014yxa}.
\subsection{From local equilibrium to global and non-equilibrium}
\label{sec:LE_to_GE_and_NE}
The key observation we now want to point out is that the local thermodynamic equilibrium density operator \eqref{zubeq05} is not stationary in general.
The reason is that the quantum operators $\widehat{T}^{\mu \nu}$ and $\widehat{j}^{\mu}$ appearing in it are built from the quantum fields, which are $\tau$-dependent, hence so is $\widehat{\rho}_{\rm LE}$ through the choice of the hypersurface $\Sigma(\tau)$.
This implies that $\widehat{\rho}_{\rm LE}$ cannot be a state of the system, for it is not in the Heisenberg picture.
Recall that it was worked out starting from the quantum operators instead of the microscopic states, therefore it cannot be expressed in general in the form \eqref{zubeq08} as a state would.
The actual, stationary state in the Heisenberg picture $\widehat{\rho}$ remains unknown, however $\widehat{\rho}_{\rm LE}$ contains information about it in the thermodynamic fields $\beta^{\mu}$ and $\zeta$.
In fact, on the right-hand side of the constraint equations \eqref{zubeq04}, which define the thermodynamic fields, there appear the actual densities, determined as renormalized expectation values calculated with the actual state $\widehat{\rho}$.
The non-stationarity of $\widehat{\rho}_{\rm LE}$ is such an important property that it is convenient to emphasize it by making it explicit, therefore from now on we will write $\widehat{\rho}_{\rm LE}(\tau)$.
If the local thermodynamic equilibrium density operator is not even a state of the system, why did we put so much effort in defining it?
Are we missing anything?
The answer, which is an amendment of Zubarev's original idea, is remarkably simple: if at some initial $\tau_0$ the system is known to be at local thermodynamic equilibrium, the actual state is $\widehat{\rho}=\widehat{\rho}_{\rm LE}(\tau_0)$, and it remains such at any $\tau$ by virtue of stationarity.
In other words, the actual state is $\widehat{\rho}$ at any time, but if $\widehat{\rho}_{\rm LE}(\tau)$ is designed in such a way that at some $\tau_0$ we have $\widehat{\rho}_{\rm LE}(\tau_0)=\widehat{\rho}$, then the system is said to be at local thermodynamic equilibrium at $\tau_0$.
That is to say that the state still depends on the choice of the foliation, but the only hypersurface it really depends on is the one where the initial data, namely the energy-momentum and charge densities, is given, that is $\Sigma(\tau_0)$.
Note that this hypothesis does not discriminate between thermodynamic equilibrium and non-equilibrium, the actual state we are referring to can be either.
Note also that, although $\widehat{\rho}$ might be equal to $\widehat{\rho}_{\rm LE}(\tau_0)$ at some $\tau_0$, their derivatives are different: the derivatives of $\widehat{\rho}$ vanish identically because of stationarity, while those of $\widehat{\rho}_{\rm LE}(\tau)$ at $\tau_0$ do not in general.
Then, how do we check if a system is at local thermodynamic equilibrium at a given $\tau_0$?
The local thermodynamic equilibrium density operator is determined by fixing the expectation values of energy-momentum and charge densities equal to the actual ones.
The expectation value is a first moment.
We could consider, for instance, the second moments, namely the variances, and check if they agree with the measured values within the experimental errors at $\tau_0$.
If they do, we still do not know what the actual state of the system is, but it is reasonable to assume it to be $\widehat{\rho}_{\rm LE}(\tau_0)$, an assumption which becomes more and more legitimate as one checks moments of higher and higher order.
This procedure can be far from easy to do in practice, but there is no conceptual fallacy in principle.
If at $\tau_0$ the system is at local thermodynamic equilibrium in the above sense, we can obtain the relation between $\widehat{\rho}$ and $\widehat{\rho}_{\rm LE}(\tau)$ by means of Gauss' theorem.
Let us consider a volume $\Omega$ of spacetime enclosed by two spacelike hypersurfaces of the foliation: one $\Sigma(\tau_0)$ at the initial $\tau_0$ and the other $\Sigma(\tau)$ at the ``present'' $\tau$.
Let also $\Gamma(\tau_0,\tau)$ be the timelike boundary at infinity joining $\Sigma(\tau_0)$ and $\Sigma(\tau)$.
This configuration is shown in Figure \ref{fig01}.
\begin{figure}
\begin{center}
\includegraphics[width=10cm]{figures/foliation.eps}
\caption{The spacetime volume $\Omega$ enclosed between the two spacelike hypersurfaces of the foliation, $\Sigma(\tau_0)$ and $\Sigma(\tau)$, where $\tau_0$ is the time at which the system is at local thermodynamic equilibrium.
The timelike boundary at infinity joining them is indicated as $\Gamma(\tau_0,\tau)$, and $n^{\mu}$ is the timelike unit vector field orthogonal to the foliation.}
\label{fig01}
\end{center}
\end{figure}
Assuming that the quantum fields from which the conserved currents are built satisfy boundary conditions such that the flux through $\Gamma(\tau_0,\tau)$ vanishes \cite{Becattini:2012tc}, by Gauss' theorem we have
\begin{equation}
\int_{\Omega}{\rm d}\Omega \left(\widehat{T}^{\mu \nu}\nabla_{\mu}\beta_{\nu}-\widehat{j}^{\mu}\nabla_{\mu}\zeta \right)=\int_{\Sigma(\tau)}{\rm d}\Sigma_{\mu}\left(\widehat{T}^{\mu \nu}\beta_{\nu}-\zeta \widehat{j}^{\mu}\right)-\int_{\Sigma(\tau_0)}{\rm d}\Sigma_{\mu}\left(\widehat{T}^{\mu \nu}\beta_{\nu}-\zeta \widehat{j}^{\mu}\right),
\end{equation}
where the conservation of the energy-momentum tensor and charged current operators were used.
Thus, the actual state is
\begin{equation}\label{zubeq09}
\begin{split}
\widehat{\rho}=&\widehat{\rho}_{\rm LE}(\tau_0)=
\frac{1}{Z_{\rm LE}(\tau_0)}\exp \left[-\int_{\Sigma(\tau_0)}{\rm d}\Sigma_{\mu}\left(\widehat{T}^{\mu \nu}\beta_{\nu}-\zeta \widehat{j}^{\mu}\right)\right]\\
=&
\frac{1}{Z_{\rm LE}(\tau_0)}\exp \left[-\int_{\Sigma(\tau)}{\rm d}\Sigma_{\mu}\left(\widehat{T}^{\mu \nu}\beta_{\nu}-\zeta \widehat{j}^{\mu}\right)+\int_{\Omega}{\rm d}\Omega \left(\widehat{T}^{\mu \nu}\nabla_{\mu}\beta_{\nu}-\widehat{j}^{\mu}\nabla_{\mu}\zeta \right)\right],
\end{split}
\end{equation}
where in the first line it is understood that not only the hypersurface but also the integrand is evaluated at $\tau_0$.
Note that the first integral in the second line is the one appearing in $\widehat{\rho}_{\rm LE}(\tau)$.
Although hardly useful at a practical level, this expression is exact and represents the actual, stationary state of the system, whether equilibrium or non-equilibrium.
However, at thermodynamic equilibrium we can take it a step further.
At thermodynamic equilibrium, the entropy is maximized with constrained values of energy-momentum and charge densities at any $\tau$.
Since $\widehat{\rho}_{\rm LE}(\tau)$ is also obtained by maximizing the entropy, thermodynamic equilibrium is realized if $\widehat{\rho}_{\rm LE}(\tau)=\widehat{\rho}_{\rm LE}(\tau')$ for any $\tau$ and $\tau'$.
This implies that the volume term in \eqref{zubeq09} must vanish, which happens for any volume $\Omega$ if the argument of the integral vanishes
\begin{equation}
\widehat{T}^{\mu \nu}\nabla_{\mu}\beta_{\nu}-\widehat{j}^{\mu}\nabla_{\mu}\zeta=\frac{1}{2}\widehat{T}^{\mu \nu}\left(\nabla_{\mu}\beta_{\nu}+\nabla_{\nu}\beta_{\mu}\right)-\widehat{j}^{\mu}\nabla_{\mu}\zeta=0,
\end{equation}
where we used the symmetry of the energy-momentum tensor operator.
Since the thermodynamic fields $\beta^{\mu}$ and $\zeta$ are independent of each other, the condition for thermodynamic equilibrium is
\begin{equation}\label{zubeq10}
\nabla_{\mu}\beta_{\nu}+\nabla_{\nu}\beta_{\mu}=0,\qquad
\nabla_{\mu}\zeta=0
\end{equation}
namely $\zeta$ must be constant and $\beta^{\mu}$ a Killing vector \cite{Chrobok:2006rr}.
Taking into account the definition \eqref{zubeq11} of the four-temperature, this is exactly the same result as \eqref{eqrelhydro22}.
However, the latter was obtained in the context of the Navier-Stokes theory, which is a first-order theory, while \eqref{zubeq10} was obtained in the Zubarev theory, which is not truncated to any order in the derivatives of the thermodynamic fields.
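The condition \eqref{zubeq10} is easy to verify for the general Killing field of Minkowski spacetime, $\beta_{\mu}(x)=b_{\mu}+\varpi_{\mu \nu}x^{\nu}$ with constant $b_{\mu}$ and constant antisymmetric $\varpi_{\mu \nu}$. The sketch below (random coefficients of ours, finite-difference derivatives) checks the Killing equation for such a field and shows that a generic linear field fails it.

```python
import numpy as np

rng = np.random.default_rng(2)

# beta_mu(x) = b_mu + varpi_{mu nu} x^nu, with varpi constant, antisymmetric.
b = rng.normal(size=4)
W = rng.normal(size=(4, 4))
varpi = (W - W.T) / 2                        # antisymmetric part of W
beta = lambda x: b + varpi @ x               # lower-index components beta_mu

def killing_defect(field, x, h=1e-5):
    """max |d_mu beta_nu + d_nu beta_mu| at x (central differences)."""
    d = np.zeros((4, 4))
    for mu in range(4):
        step = np.zeros(4); step[mu] = h
        d[mu] = (field(x + step) - field(x - step)) / (2 * h)
    return np.abs(d + d.T).max()

x = rng.normal(size=4)
ok = killing_defect(beta, x)                 # ~ 0: the Killing equation holds
bad = killing_defect(lambda y: W @ y, x)     # generic linear field: it fails
```

Indeed, $\partial_{\mu}\beta_{\nu}=\varpi_{\nu \mu}$, so the symmetrized derivative vanishes identically precisely because $\varpi_{\mu \nu}$ is antisymmetric.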
This particular stationary configuration is called a \textit{global thermodynamic equilibrium} state
\begin{equation}\label{zubeq28}
\widehat{\rho}=\frac{1}{Z}\exp \left[-\int_{\Sigma}{\rm d}\Sigma_{\mu}\left(\widehat{T}^{\mu \nu}\beta_{\nu}-\zeta \widehat{j}^{\mu}\right)\right],
\end{equation}
\begin{equation}
Z=\mathrm{tr} \left(\exp \left[-\int_{\Sigma}{\rm d}\Sigma_{\mu}\left(\widehat{T}^{\mu \nu}\beta_{\nu}-\zeta \widehat{j}^{\mu}\right)\right]\right).
\end{equation}
The $\tau$-dependence is omitted, as in fact there is none, $\widehat{\rho}$ being a stationary state in the Heisenberg picture.
The above expression is formally analogous to the one of the local thermodynamic equilibrium density operator \eqref{zubeq05}, with the important difference that it is an actual state and the thermodynamic fields fulfill the conditions \eqref{zubeq10}.
This means that we have much more information on a system at global thermodynamic equilibrium than out of equilibrium, for in the former case the thermodynamic fields are constrained to be a Killing vector and a constant, whereas in the latter they can be arbitrary as long as they solve the constraint equations.
This is one of the reasons that make systems at local thermodynamic equilibrium harder to study, as we will have the chance to convince ourselves in the next Chapters.
However, not all is lost if local thermodynamic equilibrium proves to be too hard to deal with.
We will not go into details here, but in \cite{Becattini:2014yxa} it is shown, at least in Minkowski spacetime, that in the $\beta$-frame $\widehat{\rho}_{\rm LE}(\tau)$ can be expanded in the derivatives of the thermodynamic fields by using linear response theory around a configuration of global thermodynamic equilibrium.
The idea is that, if at local thermodynamic equilibrium the derivatives of the thermodynamic fields are small compared to their global thermodynamic equilibrium values, the so-called \textit{hydrodynamic limit}, the main contribution to thermal expectation values comes from global thermodynamic equilibrium.
This is what makes global thermodynamic equilibrium interesting even for systems at local thermodynamic equilibrium, as we shall see in the next Chapter.
We conclude this part with the following remark.
Suppose that the actual state $\widehat{\rho}=\widehat{\rho}_{\rm LE}(\tau_0)$ has some symmetry, namely it commutes with some unitary representation of a subgroup ${\rm G}$ of transformations of the proper orthochronous Poincar\'e group ${\rm IO}(1,3)^{\uparrow}_+$.
Let $\widehat{U}(g)$ be this representation in the Hilbert space.
Then we have
\begin{equation}
\begin{split}
\widehat{U}(g)\widehat{\rho}\widehat{U}^{-1}(g)=&
\frac{1}{Z}\exp \left[-\int_{\Sigma(\tau_0)}{\rm d} \Sigma_{\mu}(x)\left(\widehat{U}(g)\widehat{T}^{\mu \nu}(x)\widehat{U}^{-1}(g)\beta_{\nu}(x)\right.\right.\\
&\left.\left.-\zeta(x)\widehat{U}(g)\widehat{j}^{\mu}(x)\widehat{U}^{-1}(g)\right)\right]\\
=&\frac{1}{Z}\exp \left[-\int_{\Sigma(\tau_0)}{\rm d} \Sigma_{\mu}(x)\left({D(g^{-1})^{\mu}}_{\rho}{D(g^{-1})^{\nu}}_{\sigma}\widehat{T}^{\rho \sigma}(g(x))\beta_{\nu}(x)\right.\right.\\
&\left.\left.-\zeta(x){D(g^{-1})^{\mu}}_{\rho}\widehat{j}^{\rho}(g(x))\right)\right].
\end{split}
\end{equation}
By setting $y\equiv g(x)$ and using the fact that ${\rm d} \Sigma_{\mu}(x)={D(g)^{\nu}}_{\mu}{\rm d} \Sigma_{\nu}(y)$ we get
\begin{equation}
\begin{split}
\widehat{U}(g)\widehat{\rho}\widehat{U}^{-1}(g)=&
\frac{1}{Z}\exp \left[-\int_{g(\Sigma(\tau_0))}{\rm d} \Sigma_{\rho}(y)\left(\widehat{T}^{\rho \sigma}(y){D(g^{-1})^{\nu}}_{\sigma}\beta_{\nu}(g^{-1}(y))\right.\right.\\
&\left.\left.-\zeta(g^{-1}(y))\widehat{j}^{\rho}(y)\right)\right],
\end{split}
\end{equation}
so, if the hypersurface is invariant under the transformation $g$ and if
\begin{equation}\label{zubeq12}
{D(g^{-1})^{\nu}}_{\sigma}\beta_{\nu}(g^{-1}(y))=\beta_{\sigma}(y),\qquad
\zeta(g^{-1}(y))=\zeta(y),
\end{equation}
then the density operator is invariant under the unitary transformation, $\widehat{U}(g)\widehat{\rho}\widehat{U}^{-1}(g)=\widehat{\rho}$.
Equations \eqref{zubeq12} specify the symmetry conditions on the thermodynamic fields in order for $\widehat{\rho}$ to be invariant.
An invariance of $\widehat{\rho}$ has consequences on the thermal expectation values of operators.
For instance, for the energy-momentum tensor
\begin{equation}\label{zubeq13}
\begin{split}
T^{\mu \nu}=&
\mathrm{tr}(\widehat{\rho}\widehat{T}^{\mu \nu}(x))=
\mathrm{tr}(\widehat{\rho}\widehat{U}^{-1}(g)\widehat{T}^{\mu \nu}(x)\widehat{U}(g))\\
=&{D(g)^{\mu}}_{\rho}{D(g)^{\nu}}_{\sigma}\mathrm{tr}(\widehat{\rho}\widehat{T}^{\rho \sigma}(g^{-1}(x)))=
{D(g)^{\mu}}_{\rho}{D(g)^{\nu}}_{\sigma}T^{\rho \sigma}(g^{-1}(x)).
\end{split}
\end{equation}
If we consider a 1-parameter subgroup of transformations $g_{\phi}$, for instance the rotations around some axis, then, by equations \eqref{zubeq12} and \eqref{zubeq13}, the Lie derivative of the field under consideration along the vector field $X(x)={\rm d} g_{\phi}/{\rm d} \phi$ vanishes
\begin{equation}
{\cal L}_X(\beta)^{\mu}=0,\qquad
{\cal L}_X(T)^{\mu \nu}=0.
\end{equation}
Now the following question arises: if ${\rm G}$ is a symmetry subgroup for the actual state, is it so also for the local thermodynamic equilibrium density operator at any time?
In formulae, does the following implication hold at any $\tau$?
\begin{equation}
\widehat{U}(g)\widehat{\rho}\widehat{U}^{-1}(g)=\widehat{\rho}
\qquad \Rightarrow \qquad
\widehat{U}(g)\widehat{\rho}_{\rm LE}(\tau)\widehat{U}^{-1}(g)=\widehat{\rho}_{\rm LE}(\tau).
\end{equation}
It can be shown that if ${\rm G}$ transforms $\Sigma(\tau_0)$ into itself and the thermodynamic fields fulfill \eqref{zubeq12}, then the answer is yes.
Recall that, by definition, $\widehat{\rho}_{\rm LE}(\tau)$ is determined by maximizing the entropy while satisfying the constraint equations.
Clearly, both the entropy and the constraint equations are invariant under unitary transformations of $\widehat{\rho}_{\rm LE}(\tau)$, therefore, if $\widehat{\rho}_{\rm LE}(\tau)$ is a solution of the variational problem, so is $\widehat{U}(g)\widehat{\rho}_{\rm LE}(\tau)\widehat{U}^{-1}(g)$.
This implies that either $\widehat{U}(g)\widehat{\rho}_{\rm LE}(\tau)\widehat{U}^{-1}(g)$ coincides with $\widehat{\rho}_{\rm LE}(\tau)$, or it is an entirely different solution.
In either case, it is possible to generate a symmetric solution under the subgroup ${\rm G}$ by taking a particular solution $\widehat{\rho}_{\rm LE}^{(0)}(\tau)$ and summing over all the elements of ${\rm G}$
\begin{equation}
\widehat{\rho}_{\rm LE}(\tau)=\frac{1}{M({\rm G})}\sum_{g\in {\rm G}}\widehat{U}(g)\widehat{\rho}_{\rm LE}^{(0)}(\tau)\widehat{U}^{-1}(g),
\end{equation}
where $M({\rm G})$ is the volume of ${\rm G}$.
Thus, a sufficient condition for $\widehat{\rho}_{\rm LE}(\tau)$ to be symmetric under ${\rm G}$ at any $\tau$, is that the thermodynamic fields fulfill \eqref{zubeq12} at any $\tau$.
This will be important especially in Chapter \ref{chapter:boost}.
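The group-averaging argument can be made concrete in a finite-dimensional toy model (our own construction, with the cyclic group $\mathbb{Z}_4$ represented by basis shifts): averaging a generic density matrix over the group produces a state invariant under every $\widehat{U}(g)$.

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 4

# Unitary representation of Z_4: powers of the cyclic basis shift |m> -> |m+1>.
U = np.roll(np.eye(dim), 1, axis=0)
group = [np.linalg.matrix_power(U, k) for k in range(dim)]

# A generic (non-symmetric) density matrix as the particular solution rho^(0).
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
rho0 = A @ A.conj().T
rho0 /= np.trace(rho0)

# Group average: rho_sym = (1/|G|) sum_g U(g) rho0 U(g)^dagger.
rho_sym = sum(g @ rho0 @ g.conj().T for g in group) / len(group)

# rho_sym is still a unit-trace, positive matrix, now invariant under G.
defect0 = max(np.abs(g @ rho0 @ g.conj().T - rho0).max() for g in group)
defect = max(np.abs(g @ rho_sym @ g.conj().T - rho_sym).max() for g in group)
```

The invariance is exact by construction: conjugating the average by any group element merely reindexes the sum over ${\rm G}$.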
\subsection{Entropy production rate}
\label{sec:entr_prod_rate}
We conclude this part with an argument linking back to the end of the previous Section, namely the entropy production of irreversible quantum processes.
We will follow the steps of \cite{Becattini:2019dxo} with a few important adjustments concerning the renormalization.
As it should be clear by now, the local thermodynamic equilibrium density operator is not a state of the system for it is not stationary.
However, it still is an operator, so we can take traces of it and do all the operations we do on operators.
Thus, if we pretend it is a state and we treat it as such, the corresponding entropy is in fact $\tau$-dependent
\begin{equation}\label{zubeq15}
S(\tau)=-\mathrm{tr}(\widehat{\rho}_{\rm LE}(\tau)\log \widehat{\rho}_{\rm LE}(\tau)).
\end{equation}
That is because $\widehat{\rho}_{\rm LE}(\tau)$ is built from the quantum operators as in \eqref{zubeq05}, not the microscopic states as in \eqref{zubeq08}, therefore its $\tau$-dependence is governed by $\Sigma(\tau)$ instead of the unitary transformation \eqref{zubeq14}.
In other words, the invariance of the entropy under unitary transformations of the state does not prevent it from being $\tau$-dependent this time: since $\widehat{\rho}_{\rm LE}(\tau)$ is not a state, its time evolution is controlled not by a unitary transformation but by $\Sigma(\tau)$.
Thus, pending more convincing proposals, equation \eqref{zubeq15} is conventionally taken as the definition of a non-stationary entropy, despite $\widehat{\rho}_{\rm LE}(\tau)$ not being a state.
The next step is finding an expression for the entropy production rate, that is ${\rm d} S(\tau)/{\rm d} \tau$.
Recall that, by definition, the entropy \eqref{zubeq15} can be expressed in the form of \eqref{eqrelhydro19} as
\begin{equation}
S(\tau)=\int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}s^{\mu}.
\end{equation}
Let us emphasize that out of equilibrium, since $\nabla_{\mu}s^{\mu}>0$, the entropy is a frame-dependent quantity as it varies with the integration hypersurface $\Sigma(\tau)$.
The derivative of the entropy with respect to $\tau$ can be computed by exploiting a general formula for the variation of an integral between two infinitesimally close hypersurfaces, which we do not prove here
\begin{equation}\label{zubeq16}
\frac{{\rm d} S(\tau)}{{\rm d} \tau}=
\int_{\Sigma(\tau)}{\rm d} \Sigma_{\nu}U^{\nu}\nabla_{\mu}s^{\mu}+
\frac{1}{2}\int_{\partial \Sigma(\tau)}{\rm d} \tilde{S}_{\mu \nu}(s^{\mu}U^{\nu}-s^{\nu}U^{\mu}),
\end{equation}
where $\partial \Sigma(\tau)$ is the 2-dimensional boundary of $\Sigma(\tau)$, ${\rm d} \tilde{S}_{\mu \nu}=-\frac{1}{2}\epsilon_{\mu \nu \rho \sigma}{\rm d} S^{\rho \sigma}$ is the dual of the measure ${\rm d} S^{\rho \sigma}\equiv {\rm d} x^{\rho}\wedge {\rm d} x^{\sigma}$ and $U^{\mu}\equiv {\rm d} x^{\mu}/{\rm d} \tau$ \cite{Becattini:2019dxo}.
Assuming that the boundary term vanishes, we are left with
\begin{equation}\label{zubeq18}
\frac{{\rm d} S(\tau)}{{\rm d} \tau}=
\int_{\Sigma(\tau)}{\rm d} \Sigma_{\nu}U^{\nu}\nabla_{\mu}s^{\mu}.
\end{equation}
Now we want to repeat the same steps with \eqref{zubeq15} and compare with the above result.
As we are going to exploit the invariance of $\widehat{\rho}_{\rm LE}(\tau)$ under the subtraction of a vacuum expectation value, let us first define, for convenience, the following notation
\begin{equation}
:Z_{\rm LE}(\tau):\,\equiv \mathrm{tr} \left(\exp \left[-\int_{\Sigma}{\rm d} \Sigma_{\mu}\left(:\widehat{T}^{\mu \nu}:\beta_{\nu}-\zeta :\widehat{j}^{\mu}:\right)\right]\right).
\end{equation}
By using the expression \eqref{zubeq05} of $\widehat{\rho}_{\rm LE}(\tau)$ and the above mentioned invariance, we find
\begin{equation}
\begin{split}
S(\tau)=&
-\mathrm{tr} \left(\widehat{\rho}_{\rm LE}(\tau)\log \frac{\exp \left[-\int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\left(\widehat{T}^{\mu \nu}\beta_{\nu}-\zeta \widehat{j}^{\mu}\right)\right]}{Z_{\rm LE}(\tau)}\right)\\
=&-\mathrm{tr} \left(\widehat{\rho}_{\rm LE}(\tau)\log \frac{\exp \left[-\int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\left(:\widehat{T}^{\mu \nu}:\beta_{\nu}-\zeta :\widehat{j}^{\mu}:\right)\right]}{:Z_{\rm LE}(\tau):}\right)\\
=&\log :Z_{\rm LE}(\tau):+\mathrm{tr} \left(\widehat{\rho}_{\rm LE}(\tau)\int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\left(:\widehat{T}^{\mu \nu}:\beta_{\nu}-\zeta :\widehat{j}^{\mu}:\right)\right)\\
=&\log :Z_{\rm LE}(\tau):+\int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\left(\langle :\widehat{T}^{\mu \nu}:\rangle_{\rm LE}\beta_{\nu}-\zeta \langle :\widehat{j}^{\mu}:\rangle_{\rm LE}\right)\\
=&\log :Z_{\rm LE}(\tau):+\int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\left(T^{\mu \nu}\beta_{\nu}-\zeta j^{\mu}\right).
\end{split}
\end{equation}
In the last step, we used the definition ${\rm d} \Sigma_{\mu}={\rm d} \Sigma \,n_{\mu}$ and the constraint equations \eqref{zubeq04} to replace the renormalized energy-momentum and charge densities at local thermodynamic equilibrium with the actual ones.
By deriving with respect to $\tau$, using the formula \eqref{zubeq16} and assuming once again that the boundary term vanishes, we obtain
\begin{equation}\label{zubeq17}
\frac{{\rm d} S(\tau)}{{\rm d} \tau}=\frac{{\rm d} \log :Z_{\rm LE}(\tau):}{{\rm d} \tau}+\int_{\Sigma(\tau)}{\rm d} \Sigma_{\lambda}U^{\lambda}\left(T^{\mu \nu}\nabla_{\mu}\beta_{\nu}-j^{\mu}\nabla_{\mu}\zeta \right),
\end{equation}
where the conservation law of the energy-momentum tensor and charged current, namely the hydrodynamic equations \eqref{eqrelhydro02} and \eqref{eqrelhydro03}, were also used.
The derivative of the logarithm of the partition function requires a little more effort.
Let us start by defining the operator $\widehat{\Upsilon}(\tau)$, which will appear again later on, as
\begin{equation}\label{zubeq23}
Z_{\rm LE}(\tau)=\mathrm{tr} \left({\rm e}^{-\widehat{\Upsilon}(\tau)}\right),\qquad
\widehat{\Upsilon}(\tau)\equiv \int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\left(\widehat{T}^{\mu \nu}\beta_{\nu}-\zeta \widehat{j}^{\mu}\right),
\end{equation}
hence the derivative
\begin{equation}
\begin{split}
\frac{{\rm d} \log :Z_{\rm LE}(\tau):}{{\rm d} \tau}=&
\frac{1}{:Z_{\rm LE}(\tau):}\frac{{\rm d} :Z_{\rm LE}(\tau):}{{\rm d} \tau}=
\frac{1}{:Z_{\rm LE}(\tau):}\frac{{\rm d}}{{\rm d} \tau}\mathrm{tr} \left({\rm e}^{-:\widehat{\Upsilon}(\tau):}\right)\\
=&-\frac{1}{:Z_{\rm LE}(\tau):}\mathrm{tr} \left({\rm e}^{-:\widehat{\Upsilon}(\tau):}\frac{{\rm d} :\widehat{\Upsilon}(\tau):}{{\rm d} \tau}\right)=
-\left\langle \frac{{\rm d} :\widehat{\Upsilon}(\tau):}{{\rm d} \tau}\right\rangle_{\rm LE}.
\end{split}
\end{equation}
We use again the formula \eqref{zubeq16} assuming that the boundary term vanishes, and with the conservation laws of the energy-momentum tensor and charged current operators we get
\begin{equation}
\frac{{\rm d} :\widehat{\Upsilon}(\tau):}{{\rm d} \tau}=
\int_{\Sigma(\tau)}{\rm d} \Sigma_{\lambda}U^{\lambda}\left(:\widehat{T}^{\mu \nu}:\nabla_{\mu}\beta_{\nu}-:\widehat{j}^{\mu}:\nabla_{\mu}\zeta \right),
\end{equation}
hence
\begin{equation}
\frac{{\rm d} \log :Z_{\rm LE}(\tau):}{{\rm d} \tau}=-\int_{\Sigma(\tau)}{\rm d} \Sigma_{\lambda}U^{\lambda}\left(\langle :\widehat{T}^{\mu \nu}:\rangle_{\rm LE}\nabla_{\mu}\beta_{\nu}-\langle :\widehat{j}^{\mu}:\rangle_{\rm LE}\nabla_{\mu}\zeta \right),
\end{equation}
and plugging into \eqref{zubeq17} we finally obtain
\begin{equation}
\frac{{\rm d} S(\tau)}{{\rm d} \tau}=
\int_{\Sigma(\tau)}{\rm d} \Sigma_{\lambda}U^{\lambda}\left[\left(T^{\mu \nu}-\langle :\widehat{T}^{\mu \nu}:\rangle_{\rm LE}\right)\nabla_{\mu}\beta_{\nu}-\left(j^{\mu}-\langle :\widehat{j}^{\mu}:\rangle_{\rm LE}\right)\nabla_{\mu}\zeta \right].
\end{equation}
Comparing with \eqref{zubeq18} and taking into account that the equation should hold at any $\tau$, we arrive at the following result for the entropy production rate
\begin{equation}\label{zubeq30}
\nabla_{\mu}s^{\mu}=\left(T^{\mu \nu}-\langle :\widehat{T}^{\mu \nu}:\rangle_{\rm LE}\right)\nabla_{\mu}\beta_{\nu}-\left(j^{\mu}-\langle :\widehat{j}^{\mu}:\rangle_{\rm LE}\right)\nabla_{\mu}\zeta,
\end{equation}
which was first found in \cite{Zubarev:1979} and later reworked in \cite{Becattini:2019dxo}.
As we can see, the entropy production rate depends on the deviation of the actual values of the conserved currents from those at local thermodynamic equilibrium and, as expected, on the derivatives of the thermodynamic fields, which are essentially the dissipative terms.
By the second law of thermodynamics, the above quantity must always be non-negative.
At global thermodynamic equilibrium the thermodynamic fields fulfill the conditions \eqref{zubeq10}, so the entropy production rate vanishes as expected.
It is important to emphasize that equation \eqref{zubeq30} represents the entropy production rate at local thermodynamic equilibrium, which is the underlying assumption of relativistic hydrodynamics and contains global thermodynamic equilibrium as a special case.
However, no analogous formula is known for the entropy production rate of relativistic quantum fluids fully out of thermodynamic equilibrium.
Let us seize the opportunity of having mentioned the divergencelessness of the entropy current at global thermodynamic equilibrium to make the following geometrical remark.
Since $\nabla_{\mu}s^{\mu}=0$, the entropy current can be expressed as the Hodge dual of a closed 3-form, which, if the domain is topologically contractible, can in turn be written as the exterior derivative of a 2-form.
This eventually amounts to stating that the original entropy current can be cast into the divergence of an antisymmetric tensor field
\begin{equation}
\nabla_{\mu}s^{\mu}=0\qquad \Rightarrow \qquad
s^{\mu}=\nabla_{\nu}\varsigma^{\mu \nu},\qquad
\varsigma^{\mu \nu}=-\varsigma^{\nu \mu}.
\end{equation}
We call this $\varsigma^{\mu \nu}$ a \textit{potential} of the entropy current.
Whence, by using Stokes' theorem
\begin{equation}\label{zubeq31}
S=\int_{\Sigma}{\rm d} \Sigma_{\mu}s^{\mu}=
\int_{\Sigma}{\rm d} \Sigma_{\mu}\nabla_{\nu}\varsigma^{\mu \nu}=
\frac{1}{2}\int_{\partial \Sigma}{\rm d} \tilde{S}_{\mu \nu}\varsigma^{\mu \nu}=
-\frac{1}{4}\int_{\partial \Sigma}{\rm d} S^{\rho \sigma}\sqrt{|g|}\epsilon_{\mu \nu \rho \sigma}\varsigma^{\mu \nu},
\end{equation}
where $\partial \Sigma$ is the 2-dimensional boundary of $\Sigma$, ${\rm d} S^{\rho \sigma}={\rm d} x^{\rho}\wedge {\rm d} x^{\sigma}$ is the measure on $\partial \Sigma$, and ${\rm d} \tilde{S}_{\mu \nu}=-\frac{1}{2}\sqrt{|g|}\epsilon_{\mu \nu \rho \sigma}{\rm d} S^{\rho \sigma}$ is its dual.
In other words, the entropy can be expressed not only as a 3-dimensional integral on $\Sigma$, but also as a 2-dimensional integral on the boundary of $\Sigma$.
This fact is known in the literature \cite{Wald:1993nt, Eling:2012xa, Majhi:2011ws, Majhi:2012tf} and has been reworked in \cite{Becattini:2019poj} including a practical example of calculation, which we will present in Subsection \ref{subsec:gte_entropy}.
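The mechanism behind \eqref{zubeq31} can be illustrated in a lower-dimensional Euclidean analogue: a divergence-free vector field is the curl of a potential, and its flux through a surface reduces to a line integral of the potential along the boundary, in the spirit of Stokes' theorem. The following sketch checks this symbolically with a hypothetical potential $A$ (not taken from the text):

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

# In 3D the antisymmetric potential is encoded by a vector A: the analogue of
# s^mu = nabla_nu varsigma^{mu nu} is s = curl A, automatically divergence-free.
A = sp.Matrix([y * z, x**2, x * y])
s = sp.Matrix([sp.diff(A[2], y) - sp.diff(A[1], z),
               sp.diff(A[0], z) - sp.diff(A[2], x),
               sp.diff(A[1], x) - sp.diff(A[0], y)])

# div s = 0 identically
assert sp.simplify(sum(sp.diff(s[i], v) for i, v in enumerate((x, y, z)))) == 0

# Flux of s through the unit square Sigma = {0 <= x, y <= 1, z = 0} ...
flux = sp.integrate(sp.integrate(s[2].subs(z, 0), (x, 0, 1)), (y, 0, 1))

# ... equals the circulation of A along the boundary of Sigma (Stokes' theorem)
edges = [sp.Matrix([t, 0, 0]), sp.Matrix([1, t, 0]),
         sp.Matrix([1 - t, 1, 0]), sp.Matrix([0, 1 - t, 0])]
line = sum(sp.integrate((A.subs({x: r[0], y: r[1], z: r[2]}).T * r.diff(t))[0],
                        (t, 0, 1)) for r in edges)
assert sp.simplify(flux - line) == 0
```

The 3-dimensional integral of the divergence is thus traded for a lower-dimensional boundary integral of the potential, exactly as in \eqref{zubeq31}.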
Although elegantly concise and quite straightforward in its meaning, equation \eqref{zubeq30} can only provide us information on the divergence of the entropy current, not the entropy current itself.
In fact, in principle, there are infinitely many different entropy currents sharing the same divergence.
But as we saw in the last Chapter, the structure of the entropy current is crucial in defining the nature of the relativistic hydrodynamic theory, so it would be far more meaningful to know the expression of the entropy current instead of its divergence.
In the next Section we will put forward a general method to derive the entropy current directly in terms of the conserved currents.
\section{The entropy current method}
\label{sec:zubarev_entropy_current_method}
In this Section we retrace the steps of \cite{Becattini:2019poj}, where we provided, to the best of our knowledge, the first proof of the existence of the entropy current at local thermodynamic equilibrium and put forward a general method to calculate it.
The crucial point will be the fact that the logarithm of the partition function is also extensive, meaning that it can be expressed as an integral of a vector field on a hypersurface of the foliation, which is usually assumed without proof.
Let us start our journey by noticing that the entropy straight out of the local thermodynamic equilibrium density operator \eqref{zubeq05} reads
\begin{equation}
S(\tau)=\log Z_{\rm LE}(\tau)+\int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\left(\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}\beta_{\nu}-\zeta \langle \widehat{j}^{\mu}\rangle_{\rm LE}\right).
\end{equation}
By definition of the entropy current \eqref{eqrelhydro19}, this is also equal to
\begin{equation}
S(\tau)=\int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}s^{\mu}.
\end{equation}
We readily understand that, in order for the entropy to be extensive and so for the entropy current to exist, the logarithm of the partition function must also be extensive, namely there must be a vector field $\phi^{\mu}$, called the \textit{thermodynamic potential current}, such that
\begin{equation}
\log Z_{\rm LE}(\tau)\equiv \int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\phi^{\mu}.
\end{equation}
This way, we have the following expression for the entropy current
\begin{equation}
s^{\mu}=\phi^{\mu}+\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}\beta_{\nu}-\zeta \langle \widehat{j}^{\mu}\rangle_{\rm LE}.
\end{equation}
The existence of the thermodynamic potential current and, in turn, of the entropy current is usually assumed.
This was made clear in Chapter \ref{chapter:relhydro}, when the entropy current was postulated in order to express the second law of thermodynamics in a local covariant fashion so that it could be enforced by relativistic hydrodynamics.
In this Section we provide the first general proof of this usually tacitly assumed hypothesis and we put forward a method to calculate the thermodynamic potential current, whence the entropy current, at local thermodynamic equilibrium.
The first step is a modification of the local thermodynamic equilibrium density operator by the introduction of a dimensionless parameter $\lambda$ in the following way
\begin{equation}\label{zubeq21}
\widehat{\rho}_{\rm LE}(\tau,\lambda)\equiv
\frac{1}{Z_{\rm LE}(\tau,\lambda)}\exp \left[-\lambda \int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\left(\widehat{T}^{\mu \nu}\beta_{\nu}-\zeta \widehat{j}^{\mu}\right)\right],
\end{equation}
\begin{equation}\label{zubeq22}
Z_{\rm LE}(\tau,\lambda)\equiv
\mathrm{tr} \left(\exp \left[-\lambda \int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\left(\widehat{T}^{\mu \nu}\beta_{\nu}-\zeta \widehat{j}^{\mu}\right)\right]\right).
\end{equation}
This is purposely designed in such a way as to recover the original density operator \eqref{zubeq05} and partition function \eqref{zubeq06} for $\lambda=1$.
By taking the derivative with respect to $\lambda$ of the logarithm of this new $\lambda$-dependent partition function, we obtain
\begin{equation}\label{zubeq20}
\frac{\partial \log Z_{\rm LE}(\tau,\lambda)}{\partial \lambda}=
-\int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\left(\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}(\lambda)\beta_{\nu}-\zeta \langle \widehat{j}^{\mu}\rangle_{\rm LE}(\lambda)\right),
\end{equation}
where by the symbol $\langle \widehat{O}\rangle_{\rm LE}(\lambda)$ we mean the expectation value of $\widehat{O}$ calculated with $\widehat{\rho}_{\rm LE}(\tau,\lambda)$, that is $\langle \widehat{O}\rangle_{\rm LE}(\lambda)\equiv \mathrm{tr}(\widehat{\rho}_{\rm LE}(\tau,\lambda)\widehat{O})$.
Note that the $\lambda$-dependence affects only the expectation values and not the thermodynamic fields, for it stems from the density operator with which the expectation values are calculated.
If the above result looks confusing at a glance, let us just point out that it is simply the same as the following known formula, which is obtained from \eqref{zubeq03} and can be easily checked
\begin{equation}
Z=\mathrm{tr} \left(\exp \left[-\sum_i\lambda_i\widehat{O}_i\right]\right)
\qquad \Rightarrow \qquad
\frac{\partial \log Z}{\partial \lambda_k}=-\langle \widehat{O}_k\rangle.
\end{equation}
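This last formula holds even when the $\widehat{O}_i$ do not commute, thanks to the cyclicity of the trace. As a sanity check, the finite-dimensional version can be verified numerically; the sketch below uses hypothetical random Hermitian matrices standing in for the operators $\widehat{O}_i$ and compares a finite difference of $\log Z$ with the ensemble average:

```python
import numpy as np

rng = np.random.default_rng(0)

def hermitian(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

def expm_h(H):
    # matrix exponential of a Hermitian matrix via its eigendecomposition
    w, V = np.linalg.eigh(H)
    return (V * np.exp(w)) @ V.conj().T

O1, O2 = hermitian(4), hermitian(4)   # toy stand-ins for the operators O_i

def logZ(l1, l2):
    return np.log(np.trace(expm_h(-l1 * O1 - l2 * O2)).real)

def mean(O, l1, l2):
    rho = expm_h(-l1 * O1 - l2 * O2)
    rho = rho / np.trace(rho)
    return np.trace(rho @ O).real

l1, l2, eps = 0.7, 0.3, 1e-6
# central finite difference of log Z with respect to lambda_1
dlogZ = (logZ(l1 + eps, l2) - logZ(l1 - eps, l2)) / (2 * eps)
assert abs(dlogZ + mean(O1, l1, l2)) < 1e-6   # d log Z / d lambda_1 = -<O_1>
```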
Let us now integrate both sides of \eqref{zubeq20} with respect to $\lambda$ from some $\lambda_0$ to $\lambda=1$
\begin{equation}
\log Z_{\rm LE}(\tau)-\log Z_{\rm LE}(\tau,\lambda_0)=-\int_{\lambda_0}^1{\rm d} \lambda \int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\left(\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}(\lambda)\beta_{\nu}-\zeta \langle \widehat{j}^{\mu}\rangle_{\rm LE}(\lambda)\right),
\end{equation}
where we used the fact that the $\lambda$-modification is such that $\log Z_{\rm LE}(\tau,\lambda=1)=\log Z_{\rm LE}(\tau)$.
If we can exchange the order of the $\lambda$-integration and the $\Sigma(\tau)$ one, we have
\begin{equation}
\log Z_{\rm LE}(\tau)-\log Z_{\rm LE}(\tau,\lambda_0)=\int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu} \int_1^{\lambda_0}{\rm d} \lambda \left(\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}(\lambda)\beta_{\nu}-\zeta \langle \widehat{j}^{\mu}\rangle_{\rm LE}(\lambda)\right).
\end{equation}
Thus, if there exists a particular $\lambda_0$ such that $\log Z_{\rm LE}(\tau,\lambda_0)=0$, we have both the proof of the extensivity of the logarithm of the partition function and a formula for the thermodynamic potential current
\begin{equation}
\log Z_{\rm LE}(\tau)=\int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\phi^{\mu},\qquad
\phi^{\mu}=\int_1^{\lambda_0}{\rm d} \lambda \left(\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}(\lambda)\beta_{\nu}-\zeta \langle \widehat{j}^{\mu}\rangle_{\rm LE}(\lambda)\right).
\end{equation}
The key quantity for the determination of such a special $\lambda_0$ is the operator $\widehat{\Upsilon}(\tau)$ already defined in \eqref{zubeq23}, in particular its spectrum.
Suppose that $\widehat{\Upsilon}(\tau)$ is bounded from below, namely there exists a lowest eigenvalue $\Upsilon_0(\tau)$ and a corresponding eigenvector $|0_{\Upsilon}(\tau)\rangle$, and also let the latter be non-degenerate.
In the following, we will omit the $\tau$-dependence in the operator $\widehat{\Upsilon}(\tau)$, in the eigenvalues $\Upsilon_i(\tau)$ and in the eigenvector $|0_{\Upsilon}(\tau)\rangle$ in order to ease the notation and highlight the $\lambda$-dependence instead, but it should be kept in mind that they are all $\tau$-dependent in principle.
Under this hypothesis, the eigenvalues can be arranged in ascending order as $\Upsilon_0<\Upsilon_1\leq \Upsilon_2\leq \cdots$ and the partition function can be written as
\begin{equation}\label{zubeq25}
Z_{\rm LE}(\tau,\lambda)={\rm e}^{-\lambda \Upsilon_0}\left(1+{\rm e}^{-\lambda(\Upsilon_1-\Upsilon_0)}+{\rm e}^{-\lambda(\Upsilon_2-\Upsilon_0)}+\cdots \right).
\end{equation}
Then, if $\Upsilon_0=0$ and we let $\lambda \to +\infty$, we notice that
\begin{equation}
\lim_{\lambda \to +\infty}Z_{\rm LE}(\tau,\lambda)=1
\qquad \Rightarrow \qquad \lim_{\lambda \to +\infty}\log Z_{\rm LE}(\tau,\lambda)=0,
\end{equation}
therefore $\lambda_0=+\infty$ would be our candidate; however, we might expect $\Upsilon_0$ to be different from zero in general.
So, in order for these wheels to spin the way we want, we exploit the above mentioned property \eqref{zubeq24} of the local thermodynamic equilibrium density operator, namely the invariance under addition or subtraction of non-operator terms in the argument of the exponential function, provided the same is done in the partition function.
Recall that the entropy depends only on the density operator, therefore this is also an invariance of the entropy.
Thus, with the replacement
\begin{equation}
\widehat{\rho}_{\rm LE}(\tau,\lambda)=\frac{{\rm e}^{-\lambda \widehat{\Upsilon}(\tau)}}{Z_{\rm LE}(\tau,\lambda)}
\qquad \mapsto \qquad
\widehat{\rho}'_{\rm LE}(\tau,\lambda)\equiv \frac{{\rm e}^{-\lambda \widehat{\Upsilon}'(\tau)}}{Z'_{\rm LE}(\tau,\lambda)}
\end{equation}
where
\begin{equation}
\begin{split}
\widehat{\Upsilon}'\equiv&
\widehat{\Upsilon}-\Upsilon_0
=\widehat{\Upsilon}-\langle 0_{\Upsilon}|\widehat{\Upsilon}|0_{\Upsilon}\rangle \\
=&\int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\left[\left(\widehat{T}^{\mu \nu}-\langle 0_{\Upsilon}|\widehat{T}^{\mu \nu}|0_{\Upsilon}\rangle \right)\beta_{\nu}-\zeta \left(\widehat{j}^{\mu}-\langle 0_{\Upsilon}|\widehat{j}^{\mu}|0_{\Upsilon}\rangle\right)\right],
\end{split}
\end{equation}
the entropy is in fact unchanged.
In other words, if this is useful to find an expression for the entropy current, we are free to make such a tweak and rest assured that the entropy stays the same.
The rationale behind this transformation is that the new operator $\widehat{\Upsilon}'$ has a vanishing lowest eigenvalue, $\Upsilon'_0=\Upsilon_0-\Upsilon_0=0$, therefore the new partition function
\begin{equation}
Z'_{\rm LE}(\tau,\lambda)=\mathrm{tr} \left(\exp \left[-\lambda \int_{\Sigma(\tau)}{\rm d} \Sigma_{\mu}\left[\left(\widehat{T}^{\mu \nu}-\langle 0_{\Upsilon}|\widehat{T}^{\mu \nu}|0_{\Upsilon}\rangle \right)\beta_{\nu}-\zeta \left(\widehat{j}^{\mu}-\langle 0_{\Upsilon}|\widehat{j}^{\mu}|0_{\Upsilon}\rangle\right)\right]\right]\right),
\end{equation}
which, in analogy with \eqref{zubeq25} and taking into account that $\Upsilon'_i=\Upsilon_i-\Upsilon_0$, can also be written as
\begin{equation}
Z'_{\rm LE}(\tau,\lambda)=1+{\rm e}^{-\lambda(\Upsilon_1-\Upsilon_0)}+{\rm e}^{-\lambda(\Upsilon_2-\Upsilon_0)}+\cdots,
\end{equation}
does have a vanishing logarithm for $\lambda \to +\infty$, therefore $\lambda_0=+\infty$ is indeed the sought solution.
So, by repeating the above steps with $Z_{\rm LE}(\tau,\lambda)$ replaced by $Z'_{\rm LE}(\tau,\lambda)$, the thermodynamic potential current and the entropy current read respectively
\begin{equation}\label{zubeq26}
\phi^{\mu}=\int_1^{+\infty}{\rm d} \lambda \left[\left(\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}-\langle 0_{\Upsilon}|\widehat{T}^{\mu \nu}|0_{\Upsilon}\rangle\right)(\lambda)\beta_{\nu}-\zeta \left(\langle \widehat{j}^{\mu}\rangle_{\rm LE}-\langle 0_{\Upsilon}|\widehat{j}^{\mu}|0_{\Upsilon}\rangle \right)(\lambda)\right],
\end{equation}
\begin{equation}\label{zubeq27}
s^{\mu}=\phi^{\mu}+\left(\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}-\langle 0_{\Upsilon}|\widehat{T}^{\mu \nu}|0_{\Upsilon}\rangle \right)\beta_{\nu}-\zeta \left(\langle \widehat{j}^{\mu}\rangle_{\rm LE}-\langle 0_{\Upsilon}|\widehat{j}^{\mu}|0_{\Upsilon}\rangle \right).
\end{equation}
In summary, a sufficient condition for the existence of the thermodynamic potential current and the entropy current at local thermodynamic equilibrium is that the operator $\widehat{\Upsilon}$ be bounded from below with a non-degenerate lowest eigenvalue.
In this case, the two ingredients necessary to calculate the entropy current are:
\begin{enumerate}
\item the expectation values of the conserved currents at local thermodynamic equilibrium, i.e.\ $\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}$ and $\langle \widehat{j}^{\mu}\rangle_{\rm LE}$, and
\item the eigenvector $|0_{\Upsilon}\rangle$ corresponding to the lowest eigenvalue $\Upsilon_0$ of the operator $\widehat{\Upsilon}$.
\end{enumerate}
The quantum operators of the conserved currents $\widehat{T}^{\mu \nu}$ and $\widehat{j}^{\mu}$ are built by using the quantum field expansion, which in turn is obtained by solving the quantum field equations of motion of the Quantum Field Theory at hand, so the preliminary step of the method is to do that first.
Then, the algorithm for the calculation is the following:
\begin{enumerate}
\item Take the expectation values at local thermodynamic equilibrium $\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}$ and $\langle \widehat{j}^{\mu}\rangle_{\rm LE}$ and subtract from them the expectation values in $|0_{\Upsilon}\rangle$, namely $\langle 0_{\Upsilon}|\widehat{T}^{\mu \nu}|0_{\Upsilon}\rangle$ and $\langle 0_{\Upsilon}|\widehat{j}^{\mu}|0_{\Upsilon}\rangle$.
This must be done using $\widehat{\rho}_{\rm LE}(\tau,\lambda)$, so the result will be $\lambda$-dependent.
\item Multiply by the corresponding thermodynamic fields, which are not $\lambda$-dependent, and integrate with respect to $\lambda$ from $\lambda=1$ to $\lambda=+\infty$ in order to obtain the thermodynamic potential current $\phi^{\mu}$ according to \eqref{zubeq26}.
\item Plug the result into \eqref{zubeq27} and obtain the entropy current $s^{\mu}$.
\end{enumerate}
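As a toy illustration of the $\lambda$-machinery underlying this algorithm, the sketch below assumes a hypothetical discrete spectrum for $\widehat{\Upsilon}$ (the full field-theoretical calculation is of course beyond a snippet). It verifies both that $\log Z'_{\rm LE}(\tau,\lambda)\to 0$ for $\lambda \to +\infty$ after the shift by $\Upsilon_0$, and that the $\lambda$-integration from $1$ to $+\infty$ reproduces $\log Z'_{\rm LE}(\tau)$, the 0-dimensional analogue of the thermodynamic potential current:

```python
import numpy as np

# Hypothetical discrete spectrum of Upsilon (toy stand-in for the field theory)
ups = np.array([2.0, 2.5, 3.1, 4.0])
ups_sh = ups - ups.min()      # Upsilon' = Upsilon - Upsilon_0: lowest eigenvalue 0

def logZ(lam):
    return np.log(np.sum(np.exp(-lam * ups_sh)))

# After the shift, Z' -> 1 and log Z' -> 0 as lambda -> +infinity,
# so lambda_0 = +infinity is the sought value
assert abs(logZ(200.0)) < 1e-12

# <Upsilon'>(lambda) on a fine grid, then trapezoidal lambda-integration
# from 1 up to an effective infinity (the integrand decays exponentially)
lam = np.arange(1.0, 60.0, 1e-3)
w = np.exp(-lam[:, None] * ups_sh[None, :])
mean_ups = (w * ups_sh).sum(axis=1) / w.sum(axis=1)
integral = np.sum((mean_ups[1:] + mean_ups[:-1]) / 2 * np.diff(lam))

# d log Z'/d lambda = -<Upsilon'>(lambda)  =>  log Z'(1) = int_1^inf <Upsilon'> d lambda
assert abs(integral - logZ(1.0)) < 1e-5
```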
Let us emphasize once again that the $\tau$-dependence has been omitted from $\widehat{\Upsilon}$, $\Upsilon_i$ and $|0_{\Upsilon}\rangle$ in order to ease the notation, but they are all $\tau$-dependent quantities in principle.
Note also that, although the operator $\widehat{\Upsilon}$ can be shown to reduce to the Hamiltonian in the special case of global thermodynamic equilibrium in Minkowski spacetime with $\zeta=0$ and $\beta^{\mu}=(1/T)(1,\mathbf{0})$, it is not so in general.
Thus, $|0_{\Upsilon}\rangle$ is not the vacuum state of the theory, and the expectation values in \eqref{zubeq26} and \eqref{zubeq27} do not coincide in general with the physically normally ordered ones.
If we wish not to be misled, we should therefore regard $|0_{\Upsilon}\rangle$ simply as the lowest eigenvector of some operator instead of some vacuum state.
Moreover, we stress once more that the result \eqref{zubeq27} is the entropy current at local thermodynamic equilibrium, which is the underlying assumption of relativistic hydrodynamics and includes global thermodynamic equilibrium as a special case.
However, no analogous expression is known for the entropy current for relativistic quantum fluids fully out of thermodynamic equilibrium.
Note that the expression for the entropy current we just derived fulfills the feature of being divergenceless at global thermodynamic equilibrium.
First of all, at global thermodynamic equilibrium we have $\nabla_{\mu}\langle \widehat{O}\rangle=\langle \nabla_{\mu}\widehat{O}\rangle$, because $\nabla_{\mu}\widehat{\rho}=0$ thanks to stationarity.
The same holds for $\nabla_{\mu}\langle 0_{\Upsilon}|\widehat{O}|0_{\Upsilon}\rangle=\langle 0_{\Upsilon}|\nabla_{\mu}\widehat{O}|0_{\Upsilon}\rangle$, as the non-degenerate state $|0_{\Upsilon}\rangle$ is stationary at global thermodynamic equilibrium.
Thus, for the thermodynamic potential current we have
\begin{equation}
\begin{split}
\nabla_{\mu}\phi^{\mu}=&
\nabla_{\mu}\int_1^{+\infty}{\rm d} \lambda \left[\left(\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}-\langle 0_{\Upsilon}|\widehat{T}^{\mu \nu}|0_{\Upsilon}\rangle\right)(\lambda)\beta_{\nu}-\zeta \left(\langle \widehat{j}^{\mu}\rangle_{\rm LE}-\langle 0_{\Upsilon}|\widehat{j}^{\mu}|0_{\Upsilon}\rangle \right)(\lambda)\right]\\
=&\int_1^{+\infty}{\rm d} \lambda \left[\left(\langle \nabla_{\mu}\widehat{T}^{\mu \nu}\rangle-\langle 0_{\Upsilon}|\nabla_{\mu}\widehat{T}^{\mu \nu}|0_{\Upsilon}\rangle \right)(\lambda)\beta_{\nu}-
\zeta \left(\langle \nabla_{\mu}\widehat{j}^{\mu}\rangle-\langle 0_{\Upsilon}|\nabla_{\mu}\widehat{j}^{\mu}|0_{\Upsilon}\rangle \right)(\lambda)\right.\\
&+\left.\left(\langle \widehat{T}^{\mu \nu}\rangle-\langle 0_{\Upsilon}|\widehat{T}^{\mu \nu}|0_{\Upsilon}\rangle \right)(\lambda)\nabla_{\mu}\beta_{\nu}-
\nabla_{\mu}\zeta \left(\langle \widehat{j}^{\mu}\rangle-\langle 0_{\Upsilon}|\widehat{j}^{\mu}|0_{\Upsilon}\rangle \right)\right]=0.
\end{split}
\end{equation}
The second line vanishes because of the conservation laws of the quantum operators of the conserved currents, and the last line vanishes because of the geometric conditions \eqref{zubeq10} for global thermodynamic equilibrium.
For the exact same reasons, we have
\begin{equation}
\begin{split}
\nabla_{\mu}s^{\mu}=&
\nabla_{\mu}\phi^{\mu}+
\nabla_{\mu}\left(\langle \widehat{T}^{\mu \nu}\rangle-\langle 0_{\Upsilon}|\widehat{T}^{\mu \nu}|0_{\Upsilon}\rangle \right)\beta_{\nu}-
\zeta \nabla_{\mu}\left(\langle \widehat{j}^{\mu}\rangle-\langle 0_{\Upsilon}|\widehat{j}^{\mu}|0_{\Upsilon}\rangle \right)\\
&+\left(\langle \widehat{T}^{\mu \nu}\rangle-\langle 0_{\Upsilon}|\widehat{T}^{\mu \nu}|0_{\Upsilon}\rangle \right)\nabla_{\mu}\beta_{\nu}-
\nabla_{\mu}\zeta \left(\langle \widehat{j}^{\mu}\rangle-\langle 0_{\Upsilon}|\widehat{j}^{\mu}|0_{\Upsilon}\rangle \right)=0.
\end{split}
\end{equation}
\section{Summary and outlook}
Over the years, relativistic hydrodynamics has proven to be a powerful and versatile theory for describing, among others, astrophysical, cosmological and nuclear phenomena.
At some point in time, it became necessary to both go beyond the usual first or second order theory and also implement effects of quantum origin in the hydrodynamic framework.
An example of the latter is the entropy production of irreversible quantum processes.
It is in this atmosphere that the study of the quantum statistical foundations of relativistic hydrodynamics started to take hold, and it still is to date.
For these purposes, in the late 1970s D.\ N.\ Zubarev put forward an approach to relativistic quantum statistical mechanics based on a covariant expression of the density operator at local thermodynamic equilibrium, $\widehat{\rho}_{\rm LE}(\tau)$, built according to the maximum entropy principle in terms of the quantum operators of the conserved currents.
After a quick review of quantum statistical mechanics, we started this Chapter by introducing this very object and presenting its most important and interesting features.
A fundamental one is that this operator is not stationary, therefore it cannot be a state of the system in the Heisenberg picture.
Notwithstanding, if at some $\tau_0$ the system is known to be at local thermodynamic equilibrium, the actual stationary state of the system $\widehat{\rho}$ is in fact $\widehat{\rho}_{\rm LE}(\tau_0)$, whether at or out of thermodynamic equilibrium.
We also took this a step further by showing that stationarity at global thermodynamic equilibrium translates into geometrical conditions on the thermodynamic fields, specifically the timelike four-temperature $\beta^{\mu}$ must be a Killing vector field and the fugacity $\zeta$ must be constant.
These are a prerogative of global thermodynamic equilibrium: at local thermodynamic equilibrium and at full non-equilibrium, the thermodynamic fields can be arbitrary as long as they solve the constraint equations, which makes such systems much harder to study.
Concerning the problem of entropy production, which has indeed an important place in this work, we showed that an equation for the entropy production rate at local thermodynamic equilibrium can be derived directly from the expression of the density operator.
However, as elegant and clear as this result may be, it does not provide us in fact with any information on the very structure of the entropy current, which, from a relativistic hydrodynamic standpoint, would be desirable to know.
In this respect, in the last Section we showed that a sufficient condition for the existence of the entropy current at local thermodynamic equilibrium is that the operator $\widehat{\Upsilon}(\tau)$ should be bounded from below with non-degenerate lowest eigenvalue.
In this case, we proved that the logarithm of the partition function is extensive and, as a consequence, that an entropy current exists at local thermodynamic equilibrium, making it the first general proof of this usually tacitly understood hypothesis.
With our method, we derived an expression for the entropy current at local thermodynamic equilibrium based on two main ingredients: the expectation values of the conserved currents at local thermodynamic equilibrium and the eigenvector corresponding to the lowest eigenvalue of $\widehat{\Upsilon}(\tau)$.
In Chapters \ref{chapter:gteacceleration} and \ref{chapter:boost}, we will use our method and the Zubarev approach in general to study two different kinds of systems.
As we shall see, aside from their entropy currents, they will be interesting in their own right, both theoretically and phenomenologically.
\section{Global thermodynamic equilibrium with acceleration}
\label{sec:gte_gte_with_acceleration}
The first step in order to make any thermal field theory is the determination of the density operator representing the state at hand, which will then allow for the calculation of thermal expectation values.
In this Section, we will present the state of global thermodynamic equilibrium with acceleration, a non-trivial instance of global thermodynamic equilibrium in Minkowski spacetime.
\subsection{Global thermodynamic equilibrium in Minkowski spacetime}
As we showed in Chapter \ref{chapter:zubarev}, in order for a state to be of global thermodynamic equilibrium, the thermodynamic fields must fulfill the geometric conditions \eqref{zubeq10}, that is the four-temperature $\beta^{\mu}$ must be a (timelike) Killing vector field and the fugacity $\zeta$ must be a constant.
The existence of global timelike Killing vector fields, whence of global thermodynamic equilibrium states, is a prerogative of so-called \textit{stationary} spacetimes, of which Minkowski is an example.
The solution of the Killing equation \eqref{zubeq10} in Minkowski spacetime has the form
\begin{equation}\label{gteq05}
\beta_{\mu}=b_{\mu}+\varpi_{\mu \nu}x^{\nu},
\end{equation}
where $b_{\mu}$ is a constant vector field and $\varpi_{\mu \nu}$ a constant tensor given by the exterior derivative of the four-temperature,
\begin{equation}\label{gteq06}
\varpi_{\mu \nu}=-\frac{1}{2}\left(\partial_{\mu}\beta_{\nu}-\partial_{\nu}\beta_{\mu}\right),
\end{equation}
called the \textit{thermal vorticity} \cite{Becattini:2012tc}.
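The statement that \eqref{gteq05} solves the Killing equation, and that \eqref{gteq06} recovers the thermal vorticity from it, can be verified symbolically. The following sketch uses Cartesian coordinates in Minkowski spacetime, where covariant derivatives reduce to partial ones:

```python
import sympy as sp

x = sp.symbols('x0:4')   # Cartesian coordinates x^mu
b = sp.symbols('b0:4')   # constant vector b_mu

# constant antisymmetric thermal vorticity varpi_{mu nu}
w = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'w{i}{j}') if i < j else 0)
w = w - w.T

# beta_mu = b_mu + varpi_{mu nu} x^nu
beta = [b[m] + sum(w[m, n] * x[n] for n in range(4)) for m in range(4)]

# Killing equation: d_mu beta_nu + d_nu beta_mu = 0
for m in range(4):
    for n in range(4):
        assert sp.simplify(sp.diff(beta[n], x[m]) + sp.diff(beta[m], x[n])) == 0

# the exterior-derivative formula gives back the thermal vorticity:
# varpi_{mu nu} = -(1/2)(d_mu beta_nu - d_nu beta_mu)
for m in range(4):
    for n in range(4):
        rec = -sp.Rational(1, 2) * (sp.diff(beta[n], x[m]) - sp.diff(beta[m], x[n]))
        assert sp.simplify(rec - w[m, n]) == 0
```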
This can be further decomposed, in general, in a way similar to how the electromagnetic tensor is decomposed into the comoving electric and magnetic fields \cite{Buzzegoli:2017cqy}
\begin{equation}\label{gteq07}
\varpi_{\mu \nu}=\alpha_{\mu}u_{\nu}-\alpha_{\nu}u_{\mu}+\epsilon_{\mu \nu \rho \sigma}w^{\rho}u^{\sigma},
\end{equation}
where $\alpha^{\mu}$ and $w^{\mu}$ are two spacelike vector fields defined as
\begin{equation}
\alpha_{\mu}\equiv \varpi_{\mu \nu}u^{\nu},\qquad
w_{\mu}\equiv -\frac{1}{2}\epsilon_{\mu \nu \rho \sigma}\varpi^{\nu \rho}u^{\sigma}.
\end{equation}
By expressing them in terms of the four-temperature and the proper temperature, one can show that their physical meaning is that of the acceleration and vorticity fields, respectively, each divided by the proper temperature
\begin{equation}\label{gteq24}
\alpha_{\mu}=\frac{A_{\mu}}{T},\qquad w_{\mu}=\frac{\omega_{\mu}}{T},
\end{equation}
where the acceleration field was defined in \eqref{eqrelhydro23} while the \textit{vorticity} field is defined as $\omega_{\mu}\equiv -\frac{1}{2}\sqrt{\beta^2}\epsilon_{\mu \nu \rho \sigma}\partial^{\rho}u^{\nu}u^{\sigma}$.
Thus, we understand that the thermal vorticity accounts, in some sense, for thermal effects due to the acceleration and rotation of the fluid.
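The decomposition \eqref{gteq07} can also be checked numerically in the local rest frame. The sketch below assumes the convention $\epsilon_{0123}=+1$ and the mostly-minus metric ${\rm diag}(1,-1,-1,-1)$, takes a random antisymmetric $\varpi_{\mu \nu}$, computes $\alpha_{\mu}$ and $w_{\mu}$ from their definitions, and reconstructs the thermal vorticity:

```python
import numpy as np
from itertools import permutations

# Levi-Civita symbol with eps[0,1,2,3] = +1 (a convention assumed here)
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    m = np.zeros((4, 4))
    m[range(4), p] = 1.0
    eps[p] = round(np.linalg.det(m))

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # mostly-minus Minkowski metric
u = np.array([1.0, 0.0, 0.0, 0.0])       # u^mu in the local rest frame

rng = np.random.default_rng(1)
a = rng.normal(size=(4, 4))
varpi = a - a.T                          # random antisymmetric varpi_{mu nu}
varpi_up = eta @ varpi @ eta             # varpi^{mu nu}

alpha = varpi @ u                        # alpha_mu = varpi_{mu nu} u^nu
w = -0.5 * np.einsum('mnrs,nr,s->m', eps, varpi_up, u)   # w_mu

u_lo = eta @ u                           # u_mu
w_up = eta @ w                           # w^mu
# varpi_{mu nu} = alpha_mu u_nu - alpha_nu u_mu + eps_{mu nu rho sigma} w^rho u^sigma
recon = (np.outer(alpha, u_lo) - np.outer(u_lo, alpha)
         + np.einsum('mnrs,r,s->mn', eps, w_up, u))

assert np.allclose(recon, varpi)
```

Since $\alpha_{\mu}$ carries the three electric-like components $\varpi_{i0}$ and $w_{\mu}$ the three magnetic-like components $\varpi_{ij}$, the reconstruction exhausts all six independent entries of the thermal vorticity.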
To get an idea of their order of magnitude, it is convenient to go to the local rest frame and express the above vector fields in terms of the local acceleration ${\bf a}$ and local angular velocity $\boldsymbol{\omega}$, that is $\alpha^{\mu}=(0,{\bf a}/T)$ and $w^{\mu}=(0,\boldsymbol{\omega}/T)$.
Restoring the natural constants required for the terms of the decomposition \eqref{gteq07} to be dimensionless, the appropriate combinations read
\begin{equation}\label{gteq25}
|\alpha^{\mu}|=\frac{\hbar}{ck_{\rm B}}\frac{|{\bf a}|}{T},\qquad
|w^{\mu}|=\frac{\hbar}{k_{\rm B}}\frac{|\boldsymbol{\omega}|}{T},
\end{equation}
so they are terms of relativistic and/or quantum origin possibly appreciable at low temperature.
For typical systems on Earth, they are in fact negligible both because of the suppression coming from the natural constants and the room temperature being large with respect to the magnitude of the acceleration and rotation.
However, although possessing a very high temperature ($k_{\rm B}T\simeq 200\,{\rm MeV}$), the quark-gluon plasma produced in heavy-ion collisions has such large values of acceleration ($\hbar |{\bf a}|/c\simeq 10\,{\rm MeV}$) and vorticity ($\hbar|\boldsymbol{\omega}|\simeq 12\,{\rm MeV}$) that these terms are in fact meaningful ($|\alpha^{\mu}|\simeq 0.05$, $|w^{\mu}|\simeq 0.06$), enough so to generate observable effects such as the polarization of $\Lambda$ hyperons \cite{STAR:2017ckg, Adam:2018ivw}.
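With the quoted figures, the dimensionless estimates follow from simple arithmetic; a one-line check (all quantities expressed in MeV, so the natural constants drop out):

```python
# Order-of-magnitude check of the dimensionless thermal acceleration and
# vorticity for the quark-gluon plasma, using the representative values
# quoted in the text (all in MeV, natural constants already absorbed).
kT = 200.0   # k_B T           [MeV]
ha = 10.0    # hbar |a| / c    [MeV]
hw = 12.0    # hbar |omega|    [MeV]

alpha_mag = ha / kT   # |alpha^mu| = (hbar / c k_B) |a| / T
w_mag = hw / kT       # |w^mu|     = (hbar / k_B) |omega| / T

print(alpha_mag, w_mag)  # 0.05 0.06
```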
With equations \eqref{gteq05} and \eqref{gteq06}, the density operator at global thermodynamic equilibrium \eqref{zubeq28} readily becomes
\begin{equation}
\begin{split}
\widehat{\rho}=&
\frac{1}{Z}\exp \left[-\int_{\Sigma}{\rm d} \Sigma_{\mu}\left(\widehat{T}^{\mu \nu}\left(b_{\nu}+\varpi_{\nu \lambda}x^{\lambda}\right)-\zeta \widehat{j}^{\mu}\right)\right]\\
=&\frac{1}{Z}\exp \left[-b_{\nu}\int_{\Sigma}{\rm d} \Sigma_{\mu}\widehat{T}^{\mu \nu}+\frac{1}{2}\varpi_{\nu \lambda}\int_{\Sigma}{\rm d} \Sigma_{\mu}\left(x^{\nu}\widehat{T}^{\mu \lambda}-x^{\lambda}\widehat{T}^{\mu \nu}\right)+\zeta \int_{\Sigma}{\rm d} \Sigma_{\mu}\widehat{j}^{\mu}\right],
\end{split}
\end{equation}
where $b_{\mu}$, $\varpi_{\mu \nu}$ and $\zeta$ are taken out of the integral as they are constant.
Here we recognize the generators of the Poincaré group
\begin{equation}\label{gteq43}
\widehat{P}^{\nu}=\int_{\Sigma}{\rm d} \Sigma_{\mu}\widehat{T}^{\mu \nu},\qquad
\widehat{J}^{\nu \lambda}=\int_{\Sigma}{\rm d} \Sigma_{\mu}\left(x^{\nu}\widehat{T}^{\mu \lambda}-x^{\lambda}\widehat{T}^{\mu \nu}\right)
\end{equation}
the former being the four-momentum operator and the latter the generators of Lorentz transformations, and also the conserved charge associated to the current $\widehat{j}^{\mu}$
\begin{equation}
\widehat{Q}=\int_{\Sigma}{\rm d} \Sigma_{\mu}\widehat{j}^{\mu},
\end{equation}
therefore we find
\begin{equation}\label{gteq16}
\widehat{\rho}=\frac{1}{Z}\exp \left[-b_{\mu}\widehat{P}^{\mu}+\frac{1}{2}\varpi_{\mu \nu}\widehat{J}^{\mu \nu}+\zeta \widehat{Q}\right].
\end{equation}
In other words, aside from a constant fugacity associated to the conserved charge ascribed to internal symmetries, the most general expression of the density operator at global thermodynamic equilibrium in Minkowski spacetime is given by a linear combination of the generators of the Poincaré group with ten constant coefficients.
Depending on how these coefficients are specified, different kinds of global thermodynamic equilibrium are obtained.
In the simplest case of a vanishing thermal vorticity $\varpi_{\mu \nu}=0$, the four-temperature is just $\beta_{\mu}=b_{\mu}$, and the following familiar density operator is recovered
\begin{equation}\label{gteq11}
\widehat{\rho}=\frac{1}{Z}\exp \left[-b_{\mu}\widehat{P}^{\mu}+\zeta \widehat{Q}\right]=\frac{1}{Z}\exp \left[-\frac{\widehat{H}}{T_0}+\zeta \widehat{Q}\right],
\end{equation}
where the last expression is obtained by setting $\beta_{\mu}=b_{\mu}=(1/T_0)(1,{\bf 0})$ with $T_0$ a constant thermodynamic parameter with the dimensions of a temperature and $\widehat{H}$ the Hamiltonian.
This state is called \textit{homogeneous global thermodynamic equilibrium}.
As for a non-vanishing thermal vorticity, there are two possible cases.
The first one is that of a thermal vorticity with only a transverse (or ``magnetic'') component
\begin{equation}
b_{\mu}=\frac{1}{T_0}(1,{\bf 0}),\qquad
\varpi_{\mu \nu}=\frac{\omega}{T_0}\left(g_{1\mu}g_{2\nu}-g_{1\nu}g_{2\mu}\right),
\end{equation}
where $\omega$ is a constant thermodynamic parameter with the dimensions of an angular velocity.
The four-temperature then reads
\begin{equation}
\beta_{\mu}=\frac{1}{T_0}(1,\omega y,-\omega x,0),
\end{equation}
hence the density operator
\begin{equation}\label{gteq03}
\widehat{\rho}=\frac{1}{Z}\exp \left[-\frac{\widehat{H}}{T_0}+\frac{\omega}{T_0}\widehat{J}_z+\zeta \widehat{Q}\right],
\end{equation}
with $\widehat{J}_z$ the angular momentum operator along $z$.
As it is known, this expression represents a fluid at thermodynamic equilibrium rigidly rotating around the $z$ direction with constant angular velocity $\omega$ \cite{landau2013statistical}, therefore it is called \textit{global thermodynamic equilibrium with rotation}.
This was used for instance in \cite{Vilenkin:1980zv, Becattini:2011ev} to study quantum effects in rotating relativistic matter and reworked more thoroughly in \cite{Becattini:2014yxa}.
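Both statements can be verified symbolically: the rotating four-temperature satisfies the Killing equation, and its exterior derivative reproduces the stated thermal vorticity. A minimal sympy sketch (covariant components in Cartesian coordinates $(t,x,y,z)$; in this form the check is independent of the metric signature):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
T0, omega = sp.symbols('T_0 omega', positive=True)
coords = (t, x, y, z)

# covariant components of the rotating four-temperature: (1/T0)(1, omega*y, -omega*x, 0)
beta = [sp.S(1)/T0, omega*y/T0, -omega*x/T0, sp.S(0)]

# Killing equation in Cartesian coordinates: d_mu beta_nu + d_nu beta_mu = 0
killing = [sp.simplify(sp.diff(beta[n], coords[m]) + sp.diff(beta[m], coords[n]))
           for m in range(4) for n in range(4)]
assert all(k == 0 for k in killing)

# thermal vorticity: varpi_{mu nu} = -(d_mu beta_nu - d_nu beta_mu)/2
varpi = sp.Matrix(4, 4, lambda m, n: -(sp.diff(beta[n], coords[m])
                                       - sp.diff(beta[m], coords[n]))/2)
print(varpi[1, 2])  # omega/T_0, the only independent non-vanishing component
```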
The last and less known case is that of a thermal vorticity with only a longitudinal (or ``electric'') component
\begin{equation}
b_{\mu}=\frac{1}{T_0}(1,{\bf 0}),\qquad
\varpi_{\mu \nu}=\frac{a}{T_0}\left(g_{0\nu}g_{3\mu}-g_{0\mu}g_{3\nu}\right),
\end{equation}
where $a$ is a thermodynamic parameter with the dimensions of an acceleration.
The four-temperature then reads
\begin{equation}\label{gteq02}
\beta_{\mu}=\frac{1}{T_0}\left(1+az,0,0,-at\right),
\end{equation}
hence the density operator
\begin{equation}\label{gteq01}
\widehat{\rho}=\frac{1}{Z}\exp \left[-\frac{\widehat H}{T_0}+\frac{a}{T_0}\widehat{K}_z+\zeta \widehat{Q}\right],
\end{equation}
with $\widehat{K}_z$ the boost operator along $z$.
As we shall shortly see, this expression represents a fluid at thermodynamic equilibrium with acceleration field of constant magnitude along the flow, therefore it is called \textit{global thermodynamic equilibrium with acceleration}.
This was studied in detail in \cite{Becattini:2017ljh}, where it was shown that the Unruh effect is indeed recovered, and partly also in \cite{Becattini:2019poj}, where the calculation of the thermal expectation value of the energy-momentum tensor was completed and the entropy current calculated.
Equations \eqref{gteq03} and \eqref{gteq01} are the only two independent global thermodynamic equilibria in Minkowski spacetime with non-vanishing thermal vorticity, all other cases are combinations thereof.
For a discussion on the general case with both rotation and acceleration, see \cite{Korsbakken:2004bv}.
We conclude this part with the following remark.
As we mentioned in Chapter \ref{chapter:zubarev}, the symmetries of the density operator constrain the form of the thermal expectation values calculated with it.
In general, by looking at equation \eqref{gteq16}, the thermal expectation value of any local operator $\widehat{O}$ can depend on $b_{\mu}$, $\varpi_{\mu \nu}$, $x_{\mu}$ and $g_{\mu \nu}$.
Concerning the dependence on $x_{\mu}$, let us indicate the translation operator as $\widehat{\sf T}(x)\equiv \exp[-ix_{\mu}\widehat{P}^{\mu}]$, hence
\begin{equation}\label{gteq17}
\begin{split}
\langle \widehat{O}(x)\rangle=&
\mathrm{tr} \left(\widehat{\rho}\widehat{\sf T}(x)\widehat{O}(0)\widehat{\sf T}^{-1}(x)\right)=
\mathrm{tr} \left(\widehat{\sf T}^{-1}(x)\widehat{\rho}\widehat{\sf T}(x)\widehat{O}(0)\right)\\
=&\frac{1}{Z}\mathrm{tr} \left(\exp \left[-b_{\mu}\widehat{P}^{\mu}+\frac{1}{2}\varpi_{\mu \nu}\widehat{\sf T}^{-1}(x)\widehat{J}^{\mu \nu}\widehat{\sf T}(x)\right]\widehat{O}(0)\right)\\
=&\frac{1}{Z}\mathrm{tr} \left(\exp \left[-\beta_{\mu}(x)\widehat{P}^{\mu}+\frac{1}{2}\varpi_{\mu \nu}\widehat{J}^{\mu \nu}\right]\widehat{O}(0)\right)=
\langle \widehat{O}(0)\rangle_{\beta_{\mu}(x)},
\end{split}
\end{equation}
where the known relation of the Poincaré algebra was used
\begin{equation}
\widehat{\sf T}^{-1}(x)\widehat{J}_{\mu \nu}\widehat{\sf T}(x)=\widehat{J}_{\mu \nu}-x_{\nu}\widehat{P}_{\mu}+x_{\mu}\widehat{P}_{\nu}.
\end{equation}
Equation \eqref{gteq17} implies that the dependence on the spacetime point of the thermal expectation value of any quantum operator cannot be arbitrary, but it can be only through the four-temperature.
This property will come in handy later on.
\subsection{Global thermodynamic equilibrium with acceleration}
Let us now focus on the configuration of global thermodynamic equilibrium with acceleration.
From here on, we will set $\zeta=0$ for simplicity and without loss of generality, namely we will disregard the conserved current and consider only the energy-momentum tensor.
The expression of the density operator is then
\begin{equation}\label{gteq04}
\widehat{\rho}=\frac{1}{Z}\exp \left[-\frac{\widehat{H}}{T_0}+\frac{a}{T_0}\widehat{K}_z\right].
\end{equation}
From the general discussion in the previous Chapter, we recall that in the argument of the exponential function there appear the quantum operators corresponding to the conserved charges.
One particular feature of \eqref{gteq04} is that the Hamiltonian $\widehat{H}$ and the boost operator $\widehat{K}_z$ do not commute with each other, and yet they are both conserved.
This occurs because the boost operator is explicitly time-dependent
\begin{equation}
\widehat{K}_z=\widehat{J}_{30}=t\widehat{P}_z-\int {\rm d}^3{\rm x}\,z\widehat{T}^{00},
\end{equation}
so its Heisenberg equation reads
\begin{equation}
i\frac{{\rm d} \widehat{K}_z}{{\rm d} t}=[\widehat{K}_z,\widehat{H}]+i\frac{\partial \widehat{K}_z}{\partial t}=-i\widehat{P}_z+i\widehat{P}_z=0.
\end{equation}
Although such a state is possible at a mathematical level, the question arises whether it can be physically realized.
As we mentioned in the last Chapter, the global thermodynamic equilibrium density operator is in fact an approximation of the local thermodynamic equilibrium one at first order in the derivatives of the thermodynamic fields \cite{Becattini:2014yxa}.
This makes \eqref{gteq04} a reasonable approximation for describing fluids at local thermodynamic equilibrium with non-vanishing acceleration (and vorticity), such as the quark-gluon plasma, in the hydrodynamic limit.
In order to work out the kinematic quantities, it is convenient to shift the origin of the $z$ coordinate by defining $z'\equiv z+1/a$, so that the four-temperature in \eqref{gteq02} simply reads
\begin{equation}\label{gteq08}
\beta^{\mu}=\frac{a}{T_0}(z',0,0,t).
\end{equation}
Then, according to \eqref{zubeq11}, the velocity field and proper temperature defined by its direction and inverse magnitude are respectively given by
\begin{equation}\label{gteq09}
u^{\mu}=\frac{1}{\sqrt{{z'}^2-t^2}}(z',0,0,t),\qquad
T=\frac{T_0}{a\sqrt{{z'}^2-t^2}}.
\end{equation}
It is easily realized that the flow lines, namely the curves with $u^{\mu}$ as tangent vector field, are hyperbolae of constant values of ${z'}^2-t^2$, and the proper temperature is constant along them.
This should come as no surprise: it is a general feature of Killing vectors to have constant magnitude along their own integral curves.
Now, by using the definition \eqref{eqrelhydro23} of the acceleration field, we readily obtain the expression
\begin{equation}\label{gteq37}
A^{\mu}=\frac{1}{{z'}^2-t^2}(t,0,0,z'),
\end{equation}
which is clearly orthogonal to the four-temperature and whose magnitude is constant along the flow lines.
In other words, the density operator \eqref{gteq04} represents a fluid at global thermodynamic equilibrium with acceleration field of constant magnitude along the flow, hence the name ``global thermodynamic equilibrium with acceleration''.
Furthermore, referring to the general decomposition \eqref{gteq07} of the thermal vorticity, in our specific case at hand the vorticity field is identically null, so we are actually left with
\begin{equation}\label{gteq18}
\varpi_{\mu \nu}=\alpha_{\mu}u_{\nu}-\alpha_{\nu}u_{\mu},
\end{equation}
where $\alpha^{\mu}$ has constant magnitude in the whole spacetime
\begin{equation}\label{gteq23}
\alpha^2=\frac{A^2}{T^2}=-\frac{a^2}{T_0^2}.
\end{equation}
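As a numerical cross-check of equations \eqref{gteq09}, \eqref{gteq37} and \eqref{gteq23}, one can sample points along one flow line and verify the stated properties with the mostly-minus Minkowski product; a minimal sketch (the values of $a$, $T_0$ and of the flow-line parameter are arbitrary):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])      # mostly-minus Minkowski metric
def dot(v, w):                            # Minkowski scalar product v.w
    return v @ g @ w

a, T0 = 2.0, 3.0                          # arbitrary positive constants
rho = 1.5                                 # flow line z'^2 - t^2 = rho^2

for tau in (-0.7, 0.0, 1.3):              # three points on the same flow line
    t, zp = rho*np.sinh(tau), rho*np.cosh(tau)
    s2 = zp**2 - t**2                     # = rho^2, constant on the hyperbola
    u = np.array([zp, 0.0, 0.0, t]) / np.sqrt(s2)
    T = T0 / (a*np.sqrt(s2))              # proper temperature
    A = np.array([t, 0.0, 0.0, zp]) / s2  # acceleration field
    assert np.isclose(dot(u, u), 1.0)     # u is a unit timelike vector
    assert np.isclose(dot(A, u), 0.0)     # A orthogonal to the flow
    assert np.isclose(T, T0/(a*rho))      # T constant along the flow line
    assert np.isclose(dot(A, A)/T**2, -(a/T0)**2)  # alpha^2 = -a^2/T0^2
print("all checks passed")
```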
As for the thermodynamics, the key point is that the four-temperature \eqref{gteq08} is not globally timelike, for it has a bifurcated Killing horizon at $|z'|=|t|$ splitting Minkowski spacetime into four different subspaces as shown in Figure \ref{fig:wedges}:
\begin{itemize}
\item $|t|<z'$ is called the \textit{Right Rindler Wedge} (RRW),
\item $|t|<-z'$ is called the \textit{Left Rindler Wedge} (LRW),
\item $t>|z'|$ is called the \textit{Expanding Degenerate Kasner Universe} (EDK),
\item $t<-|z'|$ is called the \textit{Contracting Degenerate Kasner Universe} (CDK).
\end{itemize}
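The classification of a point of the $(t,z')$ plane into these four subspaces amounts to comparing $|t|$ with $\pm z'$; a minimal sketch (region labels as in the list above):

```python
def rindler_region(t, zp):
    """Classify a point of the (t, z') plane into the four subspaces
    separated by the bifurcated Killing horizon |z'| = |t|."""
    if abs(t) < zp:
        return "RRW"   # right Rindler wedge
    if abs(t) < -zp:
        return "LRW"   # left Rindler wedge
    if t > abs(zp):
        return "EDK"   # expanding degenerate Kasner universe
    if t < -abs(zp):
        return "CDK"   # contracting degenerate Kasner universe
    return "horizon"   # |z'| = |t|: the horizon itself

print(rindler_region(0.0, 1.0), rindler_region(2.0, 0.5))  # RRW EDK
```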
In particular, in order to make hydrodynamics and thermodynamics out of $\beta^{\mu}$, we have to figure out in which subspace it is both timelike and future-oriented globally, so that the interpretation of $u^{\mu}$ and $T$ as a velocity field and a proper temperature respectively makes sense.
By looking at Figure \ref{fig:wedges}, we can tell that the subspace we are looking for is the right Rindler wedge, so that is where we must restrict.
\begin{figure}
\begin{center}
\includegraphics[width=0.745\textwidth]{figures/wedges.eps}
\caption{2-dimensional $(t,z)$ section of Minkowski spacetime.
The hyperbolae are the flow lines of the four-temperature, the latter being the tangent vector field.
The red lines crossing at $z'=0$, i.e.\ $z=-1/a$, are the bifurcated Killing horizon of $\beta^{\mu}$ splitting the spacetime into four different subspaces: the right Rindler wedge (RRW), the left Rindler wedge (LRW), the expanding degenerate Kasner universe (EDK) and the contracting degenerate Kasner universe (CDK).
The four-temperature is globally timelike and future-oriented only in the right Rindler wedge.
The blue line through $z'=0$ and contained in the two Rindler wedges is a hyperplane $\Sigma$ orthogonal to $\beta^{\mu}$: such hyperplanes can be used to foliate the subspaces where the four-temperature is timelike.}
\label{fig:wedges}
\end{center}
\end{figure}
An observer moving along a constant-acceleration path, that is along the flow, is called a \textit{Rindler observer}.
Note that the expression of the proper temperature in equation \eqref{gteq09} is but an instance of Tolman's law \cite{Buchholz:2015fqa}, but it is most naturally obtained in the Zubarev approach by demanding the four-temperature to be a Killing vector field.
\subsection{Factorization of the density operator}
In \cite{Becattini:2017ljh}, it is shown that the density operator \eqref{gteq04} has an important factorization property.
Rather than \eqref{gteq04}, it is convenient to consider the general expression \eqref{zubeq28}, which still holds of course, and rewrite it by defining $\widehat{\Pi}$ as
\begin{equation}\label{gteq10}
\widehat{\rho}=\frac{1}{Z}\exp \left[-\frac{\widehat{\Pi}}{T_0}\right],\qquad
\widehat{\Pi}\equiv T_0\int_{\Sigma}{\rm d} \Sigma_{\mu}\widehat{T}^{\mu \nu}\beta_{\nu}=\widehat{H}-a\widehat{K}_z.
\end{equation}
At global thermodynamic equilibrium, this is independent of the choice of the spacelike hypersurface $\Sigma$ of the foliation.
To define the foliation, we can exploit the fact that the four-temperature \eqref{gteq08} fulfills the vorticity-free condition \eqref{zubeq07}, that is
\begin{equation}
\epsilon^{\mu \nu \rho \sigma}\beta_{\sigma}\partial_{\nu}\beta_{\rho}=0.
\end{equation}
In particular, the family of 3-dimensional spacelike hypersurfaces orthogonal to it is given by the hyperplanes through $z'=0$ and entirely contained in the right and left Rindler wedges, as shown in Figure \ref{fig:wedges}, so let $\Sigma$ be any of them.
Thus, let us decompose the hyperplane as $\Sigma=\Sigma_{\rm R}\cup \Sigma_{\rm L}$ with $\Sigma_{\rm R}$ and $\Sigma_{\rm L}$ entirely contained in the right and left Rindler wedge respectively, namely $\Sigma_{\rm R}$ being the $z'>0$ part of $\Sigma$ and $\Sigma_{\rm L}$ the $z'<0$ part.
This way, $\widehat{\Pi}$ is given by the sum of two terms, each depending only on the field degrees of freedom in one of the two Rindler wedges
\begin{equation}
\widehat{\Pi}=\widehat{\Pi}_{\rm R}-\widehat{\Pi}_{\rm L}
\end{equation}
\begin{equation}
\widehat{\Pi}_{\rm R}\equiv T_0\int_{\Sigma_{\rm R}}{\rm d} \Sigma_{\mu}\widehat{T}^{\mu \nu}\beta_{\nu},\qquad
\widehat{\Pi}_{\rm L}\equiv -T_0\int_{\Sigma_{\rm L}}{\rm d} \Sigma_{\mu}\widehat{T}^{\mu \nu}\beta_{\nu}.
\end{equation}
The minus sign in the definition of $\widehat{\Pi}_{\rm L}$ is due to the fact that $n^{\mu}$ orthogonal to $\Sigma_{\rm L}$ has opposite orientation with respect to $\beta^{\mu}$ in the left Rindler wedge.
Now the key point is the following.
The only contribution to the commutator $[\widehat{\Pi}_{\rm R},\widehat{\Pi}_{\rm L}]$ stems from $z'=0$, which is the only point where the quantum field operators in the energy-momentum tensor have non-vanishing commutators.
However, the four-temperature vanishes there, as can be realized by noticing that the proper temperature \eqref{gteq09} diverges there, or simply by looking at Figure \ref{fig:wedges}; therefore the commutator is in fact null.
This implies that the density operator is factorized as
\begin{equation}
\widehat{\rho}=
\frac{1}{Z}\exp \left[-\frac{\widehat{\Pi}_{\rm R}-\widehat{\Pi}_{\rm L}}{T_0}\right]=
\frac{1}{Z}\exp \left[-\frac{\widehat{\Pi}_{\rm R}}{T_0}\right]\exp \left[\frac{\widehat{\Pi}_{\rm L}}{T_0}\right].
\end{equation}
Formally, since $\widehat{\Pi}_{\rm R}$ and $\widehat{\Pi}_{\rm L}$ involve only field degrees of freedom each in its corresponding Rindler wedge, in the Hilbert space of the field states we actually have $\widehat{\Pi}_{\rm R}\equiv \widehat{\Pi}_{\rm R}\otimes \widehat{I}_{\rm L}$ and $\widehat{\Pi}_{\rm L}\equiv \widehat{I}_{\rm R}\otimes \widehat{\Pi}_{\rm L}$, with $\widehat{I}_{\rm R}$ and $\widehat{I}_{\rm L}$ the identities in the right and left Rindler wedge respectively.
In turn, this implies that the partition function is factorized as well
\begin{equation}
Z=\mathrm{tr} \left(\exp \left[-\frac{\widehat{\Pi}_{\rm R}-\widehat{\Pi}_{\rm L}}{T_0}\right]\right)=
\mathrm{tr}_{\rm R}\left(\exp \left[-\frac{\widehat{\Pi}_{\rm R}}{T_0}\right]\right)
\mathrm{tr}_{\rm L}\left(\exp \left[\frac{\widehat{\Pi}_{\rm L}}{T_0}\right]\right)\equiv Z_{\rm R}Z_{\rm L},
\end{equation}
where by $\mathrm{tr}_{\rm R}$ and $\mathrm{tr}_{\rm L}$ we mean that the trace is calculated only on the Hilbert space spanned by the field degrees of freedom in the right and left Rindler wedge respectively, which is called a \textit{partial trace}.
In summary, the density operator becomes factorized into two density operators commuting with each other
\begin{equation}\label{gteq41}
\widehat{\rho}=\widehat{\rho}_{\rm R}\otimes \widehat{\rho}_{\rm L},\qquad
[\widehat{\rho}_{\rm R},\widehat{\rho}_{\rm L}]=0
\end{equation}
each involving the field degrees of freedom in either of the Rindler wedges
\begin{equation}\label{gteq12}
\widehat{\rho}_{\rm R}=
\frac{1}{Z_{\rm R}}\exp \left[-\frac{\widehat{\Pi}_{\rm R}}{T_0}\right]=
\frac{1}{Z_{\rm R}}\exp \left[-\int_{\Sigma_{\rm R}}{\rm d} \Sigma_{\mu}\widehat{T}^{\mu \nu}\beta_{\nu}\right],
\end{equation}
\begin{equation}
\widehat{\rho}_{\rm L}=
\frac{1}{Z_{\rm L}}\exp \left[\frac{\widehat{\Pi}_{\rm L}}{T_0}\right]=
\frac{1}{Z_{\rm L}}\exp \left[-\int_{\Sigma_{\rm L}}{\rm d} \Sigma_{\mu}\widehat{T}^{\mu \nu}\beta_{\nu}\right].
\end{equation}
They are obtained by taking the partial trace of the whole density operator on the opposite wedge, so, referring to the terminology of Quantum Information Theory, they are called \textit{reduced density operators}
\begin{equation}\label{gteq42}
\widehat{\rho}_{\rm R}=\mathrm{tr}_{\rm L}(\widehat{\rho}),\qquad
\widehat{\rho}_{\rm L}=\mathrm{tr}_{\rm R}(\widehat{\rho}).
\end{equation}
Thus, the two Rindler wedges are completely disconnected.
As a consequence, the thermal expectation value of an operator $\widehat{O}(x)$ with $x$ in the right Rindler wedge, for instance, will only depend on the reduced density operator $\widehat{\rho}_{\rm R}$, regardless of the field states in the left Rindler wedge
\begin{equation}
\langle \widehat{O}(x)\rangle=\mathrm{tr}(\widehat{\rho}\widehat{O}(x))=\mathrm{tr}_{\rm R}(\widehat{\rho}_{\rm R}\widehat{O}(x)).
\end{equation}
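The factorization \eqref{gteq41} and the partial traces \eqref{gteq42} can be illustrated in a finite-dimensional toy model: two commuting generators acting on separate factors of a small Hilbert space. The diagonal matrices below are arbitrary stand-ins for $\widehat{\Pi}_{\rm R}$ and $\widehat{\Pi}_{\rm L}$, not the actual field-theoretic operators:

```python
import numpy as np

T0 = 1.0
# toy "wedge" generators: diagonal matrices on two 2-dimensional factors
Pi_R = np.diag([0.0, 1.3])          # acts on the right-wedge factor
Pi_L = np.diag([0.0, 0.7])          # acts on the left-wedge factor
I2 = np.eye(2)

# Pi = Pi_R (x) I - I (x) Pi_L ; the two terms commute, so exp factorizes
M = np.kron(Pi_R, I2) - np.kron(I2, Pi_L)
rho = np.diag(np.exp(-np.diag(M)/T0))       # exp of a diagonal matrix
rho /= np.trace(rho)

# reduced density operators via partial trace over the opposite factor
rho4 = rho.reshape(2, 2, 2, 2)              # indices (R, L, R', L')
rho_R = np.einsum('iljl->ij', rho4)         # trace over the left factor
rho_L = np.einsum('ijik->jk', rho4)         # trace over the right factor

# the full density operator is the tensor product of the reduced ones
assert np.allclose(rho, np.kron(rho_R, rho_L))
print("factorization verified")
```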
Equation \eqref{gteq10} defining the operator $\widehat{\Pi}$ is reminiscent of the homogeneous global thermodynamic equilibrium density operator \eqref{gteq11}.
Indeed, by calculating the commutator with the quantum field operator, one can show that $\widehat{\Pi}$ is the generator of translations along the flow in Minkowski spacetime \cite{Korsbakken:2004bv}, same for $\widehat{\Pi}_{\rm R}$ and $\widehat{\Pi}_{\rm L}$ in their respective Rindler wedges \cite{Becattini:2017ljh}.
\section{Thermal expectation values and the Unruh effect}
\label{sec:gte_tev_unruh}
With the density operator, we are now in a position to calculate thermal expectation values of quantum operators.
In particular, we are interested in that of the energy-momentum tensor, for it is both of physical concern in its own right and also necessary for the calculation of the entropy current.
As already mentioned, the quantum operators are built with the quantum field operators, and as such they depend on the Quantum Field Theory underlying the hydrodynamic theory.
In this work, we will consider the simple case of a free real scalar field in the right Rindler wedge.
As we shall see, the Unruh effect will emerge from thermal expectation values in a natural way.
\subsection{Free scalar field theory in the right Rindler wedge}
The problem of a free real scalar field in the right Rindler wedge is indeed a well-known one in Physics.
Here, we just present a short summary highlighting the characteristics most salient for this work, for more details we refer to the review \cite{Crispino:2007eb}.
In order to build the energy-momentum tensor operator and calculate its thermal expectation value, we first need the expression of the quantum field operator, which is obtained as a solution of its equation of motion.
The Lagrangian density $\widehat{\cal L}$ of a free real scalar field $\widehat{\psi}$ of mass $m$ in Minkowski spacetime is
\begin{equation}\label{eq19}
\widehat{\cal L}=\frac{1}{2}\left(g^{\mu \nu}\partial_{\mu}\widehat{\psi} \partial_{\nu}\widehat{\psi}-m^2\widehat{\psi}^2\right)
\end{equation}
and the corresponding equation of motion is the Klein-Gordon equation
\begin{equation}
\left(\Box+m^2\right)\widehat{\psi}=0.
\end{equation}
Provided initial data, the solutions of the Klein-Gordon equation can be uniquely determined in Minkowski spacetime, for it is globally hyperbolic with constant-time hypersurfaces as Cauchy surfaces.
Given any two solutions $\phi_1$ and $\phi_2$, we can build the \textit{Klein-Gordon inner product} to define orthogonality and normalization of the solutions
\begin{equation}
(\phi_1,\phi_2)_{\rm KG}\equiv i\int_{\Sigma}{\rm d}\Sigma_{\mu}\left(\phi_1^*\partial^{\mu}\phi_2-\phi_2\partial^{\mu}\phi_1^*\right),
\end{equation}
with $\Sigma$ a spacelike hypersurface with future-oriented unit orthogonal vector field.
This is independent of $\Sigma$, and also independent of time if $\Sigma$ is a constant-time hypersurface.
Let $\{u_i\}$ be a set of solutions orthonormalized according to the above inner product, that is
\begin{equation}
(u_i,u_j)_{\rm KG}=\delta_{ij}=-(u_i^*,u_j^*)_{\rm KG},\qquad (u_i^*,u_j)_{\rm KG}=0=(u_i,u_j^*)_{\rm KG}.
\end{equation}
The general solution of the Klein-Gordon equation can then be expanded as
\begin{equation}
\widehat{\psi}=\sum_i\left(u_i\widehat{a}_i+u_i^*\widehat{a}_i^{\dagger}\right),
\end{equation}
where $\widehat{a}_i^{\dagger}$ and $\widehat{a}_i$ are the creation and annihilation operators respectively.
Using the orthonormality of the solutions $u_i$, one can show that
\begin{equation}
\widehat{a}_i=(u_i,\widehat{\psi})_{\rm KG},\qquad \widehat{a}_i^{\dagger}=-(u_i^*,\widehat{\psi})_{\rm KG},
\end{equation}
which, together with the canonical commutation relations, imply as expected
\begin{equation}
[\widehat{a}_i,\widehat{a}_j^{\dagger}]=\delta_{ij},\qquad [\widehat{a}_i,\widehat{a}_j]=[\widehat{a}_i^{\dagger},\widehat{a}_j^{\dagger}]=0.
\end{equation}
The vacuum state is defined by requiring it to be annihilated by all the annihilation operators, and the Fock space is built by applying the creation operators to it.
The annihilation operators, in turn, are determined by the solutions $u_i$, so it is actually their choice that determines the vacuum.
Such a choice is not unique, and neither, in principle, is the vacuum; Minkowski spacetime, however, is \textit{static}.
In a static spacetime it is natural for the solutions $u_i$ to have a time-dependence of the form ${\rm e}^{-i\omega_it}$, where $\omega_i$ are positive constants interpreted as the energies of the particles with respect to the future-oriented Killing vector $\partial_t$.
For this reason, the solutions $u_i$ are called \textit{positive-frequency modes}, and the $u_i^*$ \textit{negative-frequency modes}.
In Minkowski spacetime, the quantum field expansion usually reads
\begin{equation}\label{gteq27}
\widehat{\psi}(x)=\int_{-\infty}^{+\infty}{\rm d}^3{\rm p}\left(v_{\bf p}\widehat{a}_{\bf p}+v^*_{\bf p}\widehat{a}^{\dagger}_{\bf p}\right),
\end{equation}
where the positive-frequency modes are the plane waves
\begin{equation}\label{gteq49}
v_{\bf p}=\frac{1}{\sqrt{(2\pi)^32\omega_{\bf p}}}{\rm e}^{-i(\omega_{\bf p} t-{\bf p}\cdot {\bf x})}.
\end{equation}
If a spacetime is both static and globally hyperbolic, the choice of positive-frequency modes with time-dependence of the form ${\rm e}^{-i\omega_it}$ leads to a natural vacuum state that preserves time-translation symmetry, called the \textit{static vacuum}.
The static vacuum in Minkowski spacetime is called the \textit{Minkowski vacuum}, indicated as $|0_{\rm M}\rangle$, that is the state annihilated by all the annihilation operators $\widehat{a}_{\bf p}$.
The right Rindler wedge is also a static and globally hyperbolic spacetime, and it possesses a timelike future-oriented Killing vector given by the boost generator $z'\partial_t+t\partial_{z'}$ playing the role of time-translations generator, so we can solve the Klein-Gordon equation therein as well.
To do so, it is convenient to introduce a set of hyperbolic coordinates called the \textit{Rindler coordinates} $(\tau,{\bf x}_{\rm T},\xi)$, where the ``transverse coordinates'' ${\bf x}_{\rm T} \equiv (x,y)$ are the same as Minkowski's, and $(\tau,\xi)$ are related to $(t,z')$ by
\begin{equation}
\tau \equiv \frac{1}{2a}\log \left(\frac{z'+t}{z'-t}\right),\qquad \xi \equiv \frac{1}{2a}\log \left[a^2\left({z'}^2-t^2\right)\right]
\end{equation}
whose inverse read
\begin{equation}
t=\frac{{\rm e}^{a\xi}}{a}\sinh(a\tau),\qquad
z'=\frac{{\rm e}^{a\xi}}{a}\cosh(a\tau)
\end{equation}
spanning indeed the right Rindler wedge, as shown in Figure \ref{fig:rindlercoord}.
\begin{figure}
\begin{center}
\includegraphics[width=0.75\textwidth]{figures/rindler_coord.eps}
\caption{2-dimensional section of the right Rindler wedge in the $(t,z)$ plane, spanned by the Rindler coordinates $(\tau,\xi)$.
The straight lines through $z=-1/a$, i.e.\ $z'=0$, are $\tau={\rm const.}$ hypersurfaces, while the hyperbolae are $\xi={\rm const.}$ hypersurfaces.}
\label{fig:rindlercoord}
\end{center}
\end{figure}
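The coordinate maps above are straightforward to implement and check for consistency; a minimal sketch (the value of $a$ is illustrative, natural units):

```python
import math

a = 2.0  # constant with the dimensions of an acceleration (natural units)

def to_rindler(t, zp):
    """(t, z') -> (tau, xi), valid in the right Rindler wedge |t| < z'."""
    tau = math.log((zp + t)/(zp - t)) / (2*a)
    xi = math.log(a*a*(zp*zp - t*t)) / (2*a)
    return tau, xi

def to_minkowski(tau, xi):
    """(tau, xi) -> (t, z'), the inverse map."""
    return (math.exp(a*xi)/a * math.sinh(a*tau),
            math.exp(a*xi)/a * math.cosh(a*tau))

tau, xi = 0.4, -0.3
t, zp = to_minkowski(tau, xi)
tau2, xi2 = to_rindler(t, zp)
print(abs(tau - tau2) < 1e-12 and abs(xi - xi2) < 1e-12)  # True
```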
Plugging them into the Klein-Gordon equation, the positive-frequency modes are obtained
\begin{equation}\label{gteq30}
u_{\omega,{\bf k}_{\rm T}}(\tau,{\bf x}_{\rm T},\xi)\equiv \sqrt{\frac{1}{4\pi^4a}\sinh \left(\frac{\pi \omega}{a}\right)}{\rm K}_{i\frac{\omega}{a}}\left(\frac{m_{\rm T}{\rm e}^{a\xi}}{a}\right){\rm e}^{-i(\omega \tau-{\bf k}_{\rm T} \cdot {\bf x}_{\rm T})},
\end{equation}
where ${\rm K}_{i\frac{\omega}{a}}$ are modified Bessel functions of the second kind, $m_{\rm T}$ is the \textit{transverse mass}
\begin{equation}
m_{\rm T}=\sqrt{{\bf k}_{\rm T}^2+m^2},
\end{equation}
${\bf k}_{\rm T} \equiv (k_x,k_y)$ is the ``transverse momentum'' and $\omega \ge 0$ is the frequency.
Together with the negative-frequency modes $u_{\omega,{\bf k}_{\rm T}}^*$, they are orthonormalized according to the Klein-Gordon inner product.
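Note that ${\rm K}_{i\omega/a}(x)$ is real for real positive argument, since ${\rm K}_{\nu}={\rm K}_{-\nu}$ and complex conjugation flips the sign of the imaginary order; the mode \eqref{gteq30} is therefore a real envelope in $\xi$ times a plane-wave phase in $(\tau,{\bf x}_{\rm T})$. A quick numerical check with mpmath, which, unlike scipy.special.kv, supports complex order (the sample values are illustrative):

```python
import mpmath as mp

omega, a, mT = 1.5, 2.0, 1.0   # sample frequency, acceleration, transverse mass
xi = 0.3

# K_{i omega/a}(m_T e^{a xi} / a): modified Bessel function of imaginary order
K = mp.besselk(1j*omega/a, mT*mp.e**(a*xi)/a)
print(abs(mp.im(K)) < 1e-10)   # True: K_{i nu}(x) is real for real nu, x > 0
```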
Thus, the field can be expanded as
\begin{equation}\label{gteq13}
\widehat{\psi}(\tau,{\bf x}_{\rm T},\xi)=\int_0^{+\infty}{\rm d} \omega \int_{-\infty}^{+\infty}{\rm d}^2{\rm k_T}\left(u_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm R}_{\omega,{\bf k}_{\rm T}}+u^*_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}\right),
\end{equation}
where $\widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}$ and $\widehat{a}^{\rm R}_{\omega,{\bf k}_{\rm T}}$ are the creation and annihilation operators respectively, given by
\begin{equation}
\widehat{a}^{\rm R}_{\omega,{\bf k}_{\rm T}}=(u_{\omega,{\bf k}_{\rm T}},\widehat{\psi})_{\rm KG},\qquad \widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}=-(u^*_{\omega,{\bf k}_{\rm T}},\widehat{\psi})_{\rm KG}.
\end{equation}
With the canonical commutation relations, the above equations imply the usual algebra
\begin{equation}\label{gteq15}
\begin{split}
[\widehat{a}^{\rm R}_{\omega,{\bf k}_{\rm T}},\widehat{a}^{\rm R\dagger}_{\omega',{\bf k}_{\rm T}'}]=&\delta(\omega-\omega')\,\delta^2({\bf k}_{\rm T}-{\bf k}_{\rm T}'),\\
[\widehat{a}^{\rm R}_{\omega,{\bf k}_{\rm T}},\widehat{a}^{\rm R}_{\omega',{\bf k}_{\rm T}'}]=&0=[\widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}},\widehat{a}^{\rm R\dagger}_{\omega',{\bf k}_{\rm T}'}].
\end{split}
\end{equation}
In the left Rindler wedge, the coordinates $t\equiv a^{-1}{\rm e}^{a\bar{\xi}}\sinh(a\bar{\tau})$ and $z'\equiv -a^{-1}{\rm e}^{a\bar{\xi}}\cosh(a\bar{\tau})$ are introduced in order to solve the Klein-Gordon equation.
An expansion analogous to \eqref{gteq13} holds there, with the important difference that the role of the positive- and negative-frequency modes is interchanged as a consequence of the fact that the boost generator, acting as a time-translations generator, is past-oriented
\begin{equation}\label{gteq26}
\widehat{\psi}(\bar{\tau},{\bf x}_{\rm T},\bar{\xi})=\int_0^{+\infty}{\rm d} \omega \int_{-\infty}^{+\infty}{\rm d}^2{\rm k_T}\left(u_{\omega,{\bf k}_{\rm T}}^*\widehat{a}^{\rm L}_{\omega,{\bf k}_{\rm T}}+u_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm L\dagger}_{\omega,{\bf k}_{\rm T}}\right),
\end{equation}
where $u_{\omega,{\bf k}_{\rm T}}$ has the same expression as \eqref{gteq30} with $(\tau,\xi)$ replaced by $(\bar{\tau},\bar{\xi})$.
The static vacuum in the Rindler wedges, called the \textit{Rindler vacuum} and indicated as $|0_{\rm R}\rangle$, is the state annihilated by all $\widehat{a}^{\rm R}_{\omega,{\bf k}_{\rm T}}$ and $\widehat{a}^{\rm L}_{\omega,{\bf k}_{\rm T}}$, namely $\widehat{a}^{\rm R}_{\omega,{\bf k}_{\rm T}}|0_{\rm R}\rangle=0=\widehat{a}^{\rm L}_{\omega,{\bf k}_{\rm T}}|0_{\rm R}\rangle$.
As for the energy-momentum tensor, we consider the canonical one
\begin{equation}\label{gteq19}
\widehat{T}^{\mu \nu}=\frac{1}{2}\left(\partial^{\mu}\widehat{\psi}\partial^{\nu}\widehat{\psi}+\partial^{\nu}\widehat{\psi}\partial^{\mu}\widehat{\psi}\right)-g^{\mu \nu}\widehat{\cal L},
\end{equation}
but this is not the only possible choice in principle.
Plugging the quantum field expansion into it, and then the result into \eqref{gteq12}, the following expressions of the reduced density operators in the Rindler wedges are obtained
\begin{equation}\label{gteq28}
\widehat{\rho}_{\rm R}=\frac{1}{Z_{\rm R}}\exp \left[-\frac{\widehat{\Pi}_{\rm R}}{T_0}\right],\qquad
\widehat{\Pi}_{\rm R}=\int_0^{+\infty}{\rm d} \omega \int_{-\infty}^{+\infty}{\rm d}^2{\rm k_T}\,\omega \widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm R}_{\omega,{\bf k}_{\rm T}}
\end{equation}
\begin{equation}\label{gteq29}
\widehat{\rho}_{\rm L}=\frac{1}{Z_{\rm L}}\exp \left[\frac{\widehat{\Pi}_{\rm L}}{T_0}\right],\qquad
\widehat{\Pi}_{\rm L}=\int_0^{+\infty}{\rm d} \omega \int_{-\infty}^{+\infty}{\rm d}^2{\rm k_T}\,\omega \widehat{a}^{\rm L\dagger}_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm L}_{\omega,{\bf k}_{\rm T}}.
\end{equation}
They are diagonal in the Rindler creation and annihilation operators, so they share the vacuum of the quantum field expansions \eqref{gteq13} and \eqref{gteq26}, namely the Rindler vacuum $|0_{\rm R}\rangle$.
Moreover, their diagonal form is very convenient, as it allows thermal expectation values to be calculated with standard methods.
\subsection{Thermal expectation values}
Operators of physical interest, such as the energy-momentum tensor, are quadratic in the quantum field, therefore their thermal expectation values are given in terms of thermal expectation values of products of creation and annihilation operators.
These can be calculated in the Rindler wedges by using standard methods, thanks to the reduced density operators \eqref{gteq28} and \eqref{gteq29} being diagonal.
In particular, in the right Rindler wedge, which is the subspace of our interest, they are \cite{Becattini:2017ljh}
\begin{subequations}
\begin{align}
\langle \widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm R}_{\omega',{\bf k}_{\rm T}'}\rangle=&n_{\rm B}\,\delta(\omega-\omega')\,\delta^2({\bf k}_{\rm T}-{\bf k}_{\rm T}')\label{gteq14a}\\
\langle \widehat{a}^{\rm R}_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm R\dagger}_{\omega',{\bf k}_{\rm T}'}\rangle=&\left(n_{\rm B}+1\right)\delta(\omega-\omega')\,\delta^2({\bf k}_{\rm T}-{\bf k}_{\rm T}')\label{gteq14b}\\
\langle \widehat{a}^{\rm R}_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm R}_{\omega',{\bf k}_{\rm T}'}\rangle=&0=\langle \widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm R\dagger}_{\omega',{\bf k}_{\rm T}'}\rangle \label{gteq14c},
\end{align}
\end{subequations}
where $n_{\rm B}$ is the Bose-Einstein distribution
\begin{equation}
n_{\rm B}=\frac{1}{{\rm e}^{\omega/T_0}-1}.
\end{equation}
We prove the first one as an example.
For the sake of simplicity, let us temporarily introduce $\beta_0\equiv 1/T_0$ as a new parameter unrelated to the 0-component of $\beta_{\mu}$, and define
\begin{equation}\label{gteq50}
\widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}(\beta_0)\equiv {\rm e}^{-\beta_0\widehat{\Pi}_{\rm R}}\widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}{\rm e}^{\beta_0\widehat{\Pi}_{\rm R}}.
\end{equation}
The derivative with respect to $\beta_0$ reads
\begin{equation}
\begin{split}
\frac{\partial}{\partial \beta_0}\widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}(\beta_0)=&
-[\widehat{\Pi}_{\rm R},\widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}(\beta_0)]=
-{\rm e}^{-\beta_0\widehat{\Pi}_{\rm R}}[\widehat{\Pi}_{\rm R},\widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}]{\rm e}^{\beta_0\widehat{\Pi}_{\rm R}}\\
=&-\omega {\rm e}^{-\beta_0\widehat{\Pi}_{\rm R}}\widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}{\rm e}^{\beta_0\widehat{\Pi}_{\rm R}}=
-\omega \widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}(\beta_0),
\end{split}
\end{equation}
whose solution is
\begin{equation}
\widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}(\beta_0)={\rm e}^{-\beta_0\omega }\widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}.
\end{equation}
By exploiting the cyclic property of the trace, we have
\begin{equation}
\begin{split}
\langle \widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm R}_{\omega',{\bf k}_{\rm T}'}\rangle=&
\mathrm{tr}_{\rm R}\left(\widehat{\rho}_{\rm R}\widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm R}_{\omega',{\bf k}_{\rm T}'}\right)=
\mathrm{tr}_{\rm R}\left(\frac{{\rm e}^{-\beta_0\widehat{\Pi}_{\rm R}}}{Z_{\rm R}}\widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm R}_{\omega',{\bf k}_{\rm T}'}\right)\\
=&\frac{1}{Z_{\rm R}}\mathrm{tr}_{\rm R}\left(\widehat{a}^{\rm R}_{\omega',{\bf k}_{\rm T}'}{\rm e}^{-\beta_0\widehat{\Pi}_{\rm R}}\widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}{\rm e}^{\beta_0\widehat{\Pi}_{\rm R}}{\rm e}^{-\beta_0\widehat{\Pi}_{\rm R}}\right)\\
=&\mathrm{tr}_{\rm R}\left(\frac{{\rm e}^{-\beta_0\widehat{\Pi}_{\rm R}}}{Z_{\rm R}}\widehat{a}^{\rm R}_{\omega',{\bf k}_{\rm T}'}\widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}(\beta_0)\right)=
{\rm e}^{-\beta_0\omega}\mathrm{tr}_{\rm R}\left(\widehat{\rho}_{\rm R}\widehat{a}^{\rm R}_{\omega',{\bf k}_{\rm T}'}\widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}\right)\\
=&{\rm e}^{-\beta_0\omega}\langle \widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm R}_{\omega',{\bf k}_{\rm T}'}+[\widehat{a}^{\rm R}_{\omega',{\bf k}_{\rm T}'},\widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}]\rangle \\
=&{\rm e}^{-\beta_0\omega}\left(\langle \widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm R}_{\omega',{\bf k}_{\rm T}'}\rangle+\delta(\omega-\omega')\,\delta^2({\bf k}_{\rm T}-{\bf k}_{\rm T}')\right),
\end{split}
\end{equation}
thus
\begin{equation}\label{gteq51}
\langle \widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm R}_{\omega',{\bf k}_{\rm T}'}\rangle \left(1-{\rm e}^{-\beta_0\omega}\right)={\rm e}^{-\beta_0\omega}\,\delta(\omega-\omega')\,\delta^2({\bf k}_{\rm T}-{\bf k}_{\rm T}'),
\end{equation}
hence the result \eqref{gteq14a}.
Equations \eqref{gteq14b} and \eqref{gteq14c} are proven in the same way.
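As a numerical aside (not part of the derivation; the frequency and temperature values below are arbitrary), the single-mode analogue of this result is easy to verify: for one bosonic oscillator, the Boltzmann-weighted average of the number operator over Fock states reproduces the Bose-Einstein factor. A minimal sketch:

```python
import math

# Single-mode analogue of the KMS-style result above: for one bosonic
# oscillator of frequency omega at temperature T0 (natural units), the
# Boltzmann-weighted average of the number operator over Fock states
# reproduces the Bose-Einstein occupation 1/(e^{omega/T0} - 1).
def mean_occupation(omega, T0, nmax=2000):
    weights = [math.exp(-n * omega / T0) for n in range(nmax)]
    Z = sum(weights)                       # truncated partition function
    return sum(n * w for n, w in enumerate(weights)) / Z

omega, T0 = 1.0, 0.5
n_B = 1.0 / math.expm1(omega / T0)         # Bose-Einstein distribution
```

The truncation at `nmax` is harmless here, since the weights decay geometrically.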
The $+1$ term in \eqref{gteq14b} stems from the commutation relations \eqref{gteq15} and gives rise to divergences, so some renormalization will be needed at some point.
More on this will be said shortly; for now we just point out that the expectation values in the Rindler vacuum coincide with the above thermal expectation values evaluated at $T_0=0$.
This means that renormalization in the Rindler vacuum is but the subtraction of the $T_0=0$ contribution, which is tantamount to canceling the $+1$ term.
The structure of thermal expectation values depends on the symmetries of the density operator and on the quantities at our disposal.
At global thermodynamic equilibrium, these are $b_{\mu}$, $\varpi_{\mu \nu}$, $x_{\mu}$ and $g_{\mu \nu}$, but as shown in \eqref{gteq17}, the dependence on $x_{\mu}$ can only be through $\beta_{\mu}$, which also includes $b_{\mu}$.
As for the thermal vorticity $\varpi_{\mu \nu}$, in our case of global thermodynamic equilibrium with acceleration, this is decomposed in terms of $u_{\mu}$ and $\alpha_{\mu}$ as in \eqref{gteq18}, and the dependence on $u_{\mu}$ is the same as on $\beta_{\mu}$.
The only non-vanishing component of the first derivative of $\beta_{\mu}$ is $\varpi_{\mu \nu}$, which is constant, therefore derivatives of higher order are identically null.
In summary, the most general structure of the thermal expectation value of the energy-momentum tensor is
\begin{equation}
\langle \widehat{T}^{\mu \nu}\rangle=F_1\beta^{\mu}\beta^{\nu}+F_2g^{\mu \nu}+F_3\alpha^{\mu}\alpha^{\nu}+F_4(\alpha^{\mu}\beta^{\nu}+\alpha^{\nu}\beta^{\mu}).
\end{equation}
The only Lorentz scalars we can build with $\beta^{\mu}$ and $\alpha^{\mu}$ are $\beta^2=1/T^2$ and $\alpha^2=-a^2/T_0^2$, since $\beta^{\mu}$ and $\alpha^{\mu}$ are mutually orthogonal.
Thus, the scalar functions $F_1$ and $F_2$ will depend on $\beta^2$ and $\alpha^2$, while $F_3$ and $F_4$ are expected to be directly proportional to $\alpha^2$ only as the ideal form of $\langle \widehat{T}^{\mu \nu}\rangle$ ought to be recovered for vanishing acceleration field.
On the other hand, in our case of free real scalar field theory, the Hamiltonian $\widehat{H}$ in \eqref{gteq04} is even under time-reversal, thus so is the density operator.
In formulae, $\widehat{\Theta}\widehat{\rho}\widehat{\Theta}^{-1}=\widehat{\rho}$, where $\widehat{\Theta}$ is the time-reversal operator.
Care must be taken that $\widehat{\Theta}$ depends on the hypersurface of simultaneity with respect to which time is reflected, so the above equation and the following ones should be understood with respect to the $t=0$ hypersurface.
On the other hand, the momentum density $\widehat{T}^{0i}$ is odd under time-reversal, i.e.\ $\widehat{\Theta}\widehat{T}^{0i}\widehat{\Theta}^{-1}=-\widehat{T}^{0i}$.
Together with the evenness of $\widehat{\rho}$ and the cyclicity of the trace, this implies that $\langle \widehat{T}^{0i}\rangle=0$; conversely, $\langle \widehat{T}^{00}\rangle$ and $\langle \widehat{T}^{ij}\rangle$ do not vanish.
Now, in the above equation $\alpha^i$ and $\beta^0$ do not vanish at $t=0$, which means that the coupling between $\alpha^{\mu}$ and $\beta^{\mu}$ breaks time-reversal symmetry, therefore $F_4$ must be identically zero, and we are left with
\begin{equation}
\langle \widehat{T}^{\mu \nu}\rangle=F_1\beta^{\mu}\beta^{\nu}+F_2g^{\mu \nu}+F_3\alpha^{\mu}\alpha^{\nu}.
\end{equation}
The functions $F_i$ can be renamed using a more familiar notation in order to make their physical meaning more apparent
\begin{equation}\label{gteq45}
\langle \widehat{T}^{\mu \nu}\rangle=\rho u^{\mu}u^{\nu}-p\Delta^{\mu \nu}+kA^{\mu}A^{\nu}.
\end{equation}
Here, $\rho$ and $p$ reconstruct the ideal form, so they are the energy density and the pressure respectively, while $k$ is an anisotropic pressure term.
Interestingly enough, despite being a non-ideal term, $k$ is not, strictly speaking, dissipative, since no entropy is produced at global thermodynamic equilibrium.
Explicitly, they read
\begin{equation}
\rho=\langle \widehat{T}^{\mu \nu}\rangle u_{\mu}u_{\nu},
\end{equation}
\begin{equation}
p=\frac{1}{2}\langle \widehat{T}^{\mu \nu}\rangle \left(\frac{A_{\mu}A_{\nu}}{A^2}-\Delta_{\mu \nu}\right),
\end{equation}
\begin{equation}
k=\frac{1}{2A^2}\langle \widehat{T}^{\mu \nu}\rangle \left(3\frac{A_{\mu}A_{\nu}}{A^2}-\Delta_{\mu \nu}\right).
\end{equation}
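The three projection formulas can be checked numerically. Below is a minimal sketch (the numerical values of $\rho$, $p$, $k$ and of the frame vectors are arbitrary test inputs, not quantities from the text):

```python
# Sanity check of the projection formulas (illustrative; rho, p, k and the
# frame vectors below are arbitrary test values).  We build
# T^{mu nu} = rho u^mu u^nu - p Delta^{mu nu} + k A^mu A^nu with metric
# diag(+,-,-,-), u = (1,0,0,0) and A = (0,0,0,1.9), then verify that the
# projectors recover rho, p and k.
g = [1.0, -1.0, -1.0, -1.0]
rho, p, k = 2.3, 0.7, -0.4
u = [1.0, 0.0, 0.0, 0.0]                     # u^2 = +1
A = [0.0, 0.0, 0.0, 1.9]                     # orthogonal to u, A^2 < 0

A2 = sum(g[m] * A[m] ** 2 for m in range(4))
Delta = [[(g[m] if m == n else 0.0) - u[m] * u[n] for n in range(4)]
         for m in range(4)]
T = [[rho * u[m] * u[n] - p * Delta[m][n] + k * A[m] * A[n]
      for n in range(4)] for m in range(4)]

# lower the indices with the (diagonal) metric
u_lo = [g[m] * u[m] for m in range(4)]
A_lo = [g[m] * A[m] for m in range(4)]
D_lo = [[g[m] * g[n] * Delta[m][n] for n in range(4)] for m in range(4)]

rho_out = sum(T[m][n] * u_lo[m] * u_lo[n] for m in range(4) for n in range(4))
p_out = 0.5 * sum(T[m][n] * (A_lo[m] * A_lo[n] / A2 - D_lo[m][n])
                  for m in range(4) for n in range(4))
k_out = 0.5 / A2 * sum(T[m][n] * (3 * A_lo[m] * A_lo[n] / A2 - D_lo[m][n])
                       for m in range(4) for n in range(4))
```

Any other choice of mutually orthogonal $u^{\mu}$, $A^{\mu}$ would work equally well; only the orthonormality relations enter the check.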
Their expressions depend on the specific Quantum Field Theory underlying the hydrodynamic theory, so we now calculate them in our case of free real scalar field theory in the right Rindler wedge.
We take the energy density $\rho$ as an example, $p$ and $k$ are worked out following analogous steps.
As for the energy density, note that we can either take the thermal expectation value $\langle \widehat{T}^{\mu \nu}\rangle$ first and then project onto $u_{\mu}u_{\nu}$, or vice versa; let us first project onto $u_{\mu}u_{\nu}$ and then take the thermal expectation value $\langle \widehat{T}^{\mu \nu}u_{\mu}u_{\nu}\rangle$.
Using the canonical energy-momentum tensor \eqref{gteq19}, we see that the energy density is given by the difference of two terms
\begin{equation}\label{gteq20}
\rho=\langle (u^{\mu}\partial_{\mu}\widehat{\psi})^2\rangle-\langle \widehat{\cal L}\rangle.
\end{equation}
Let us focus on the first term.
In Rindler coordinates, the convective derivative is simply $u^{\mu}\partial_{\mu}={\rm e}^{-a\xi}\partial_{\tau}$, so with the quantum field expansion \eqref{gteq13} and the equations \eqref{gteq14a}--\eqref{gteq14c} we come to
\begin{equation}\label{gteq35}
\langle (u^{\mu}\partial_{\mu}\widehat{\psi})^2\rangle=
\int_0^{+\infty}{\rm d} \omega \int_{-\infty}^{+\infty}{\rm d}^2{\rm k_T}\,\omega^2|u_{\omega,{\bf k}_{\rm T}}|^2(2n_{\rm B}+1).
\end{equation}
The $+1$ term stemming from the commutation relations \eqref{gteq15} gives rise to divergences and needs renormalization.
More on the renormalization scheme will be said shortly; for the time being, let us renormalize with respect to the Rindler vacuum $|0_{\rm R}\rangle$, that is, subtract the $T_0=0$ contribution, which is tantamount to canceling the $+1$ term.
Then
\begin{equation}
\begin{split}
\langle (u^{\mu}\partial_{\mu}\widehat{\psi})^2\rangle&-\langle 0_{\rm R}|(u^{\mu}\partial_{\mu}\widehat{\psi})^2|0_{\rm R}\rangle=
\frac{{\rm e}^{-2a\xi}}{2\pi^4a}\int_0^{+\infty}{\rm d} \omega \,\omega^2\sinh \left(\frac{\pi \omega}{a}\right)n_{\rm B}\times \\
&\times \int_{-\infty}^{+\infty}{\rm d}^2{\rm k_T}\,{\rm K}_{i\frac{\omega}{a}}\left(\frac{m_{\rm T} {\rm e}^{a\xi}}{a}\right){\rm K}_{-i\frac{\omega}{a}}\left(\frac{m_{\rm T} {\rm e}^{a\xi}}{a}\right),
\end{split}
\end{equation}
where ${\rm K}_{i\frac{\omega}{a}}^*={\rm K}_{-i\frac{\omega}{a}}$ was used.
In the massless limit we have $m_{\rm T}=|{\bf k}_{\rm T}|$, and the integral in the transverse momentum can be carried out analytically
\begin{equation}\label{gteq21}
\begin{split}
&\int_{-\infty}^{+\infty}{\rm d}^2{\rm k_T}\,{\rm K}_{i\frac{\omega}{a}}\left(\frac{m_{\rm T} {\rm e}^{a\xi}}{a}\right){\rm K}_{-i\frac{\omega}{a}}\left(\frac{m_{\rm T} {\rm e}^{a\xi}}{a}\right)\\
&=\pi a^2{\rm e}^{-2a\xi}\Gamma \left(1+i\frac{\omega}{a}\right)\Gamma \left(1-i\frac{\omega}{a}\right)=
\pi^2 a{\rm e}^{-2a\xi}\frac{\omega}{\sinh \left(\frac{\pi \omega}{a}\right)}
\end{split}
\end{equation}
with $\Gamma$ the Gamma function, hence
\begin{equation}
\langle (u^{\mu}\partial_{\mu}\widehat{\psi})^2\rangle-\langle 0_{\rm R}|(u^{\mu}\partial_{\mu}\widehat{\psi})^2|0_{\rm R}\rangle=
\frac{{\rm e}^{-4a\xi}}{2\pi^2}\int_0^{+\infty}{\rm d} \omega \,\frac{\omega^3}{{\rm e}^{\omega/T_0}-1}.
\end{equation}
This integral is carried out by exploiting the standard trick involving the geometric series
\begin{equation}
\begin{split}
&\int_0^{+\infty}{\rm d} \omega \frac{\omega^3}{{\rm e}^{\omega/T_0}-1}=
\int_0^{+\infty}{\rm d} \omega \,\omega^3\sum_{n=1}^{+\infty}{\rm e}^{-n\frac{\omega}{T_0}}=
\sum_{n=1}^{+\infty}\int_0^{+\infty}{\rm d} \omega \,\omega^3{\rm e}^{-n\frac{\omega}{T_0}}\\
&=T_0^4\sum_{n=1}^{+\infty}\frac{1}{n^4}\int_0^{+\infty}{\rm d} x\,x^3{\rm e}^{-x}=
6T_0^4\sum_{n=1}^{+\infty}\frac{1}{n^4}=
6T_0^4\zeta(4)=
\frac{\pi^4}{15}T_0^4,
\end{split}
\end{equation}
with $\zeta$ the Riemann zeta function.
The order of integration and summation can be exchanged because all terms of the series are positive, so monotone convergence applies.
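The geometric-series result can also be cross-checked by direct numerical quadrature (an illustrative sketch; the value of $T_0$ and the integration cutoff are arbitrary choices):

```python
import math

# Numerical cross-check of the Bose integral computed above:
# int_0^inf dw w^3 / (e^{w/T0} - 1) = pi^4 T0^4 / 15.  We use the
# composite Simpson rule on [0, 50*T0]; the integrand behaves like
# T0*w^2 near w = 0 and is exponentially suppressed beyond the cutoff.
def bose_integral(T0, L_over_T=50.0, N=20000):
    b = L_over_T * T0
    h = b / N
    def f(w):
        return w ** 3 / math.expm1(w / T0) if w > 0.0 else 0.0
    s = f(0.0) + f(b)
    for i in range(1, N):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3.0

T0 = 0.8
exact = math.pi ** 4 / 15.0 * T0 ** 4
```

Agreement to many digits confirms both the $\zeta(4)$ value and the legitimacy of swapping sum and integral.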
Thus, we obtain the result found in \cite{Becattini:2017ljh}
\begin{equation}
\langle (u^{\mu}\partial_{\mu}\widehat{\psi})^2\rangle-\langle 0_{\rm R}|(u^{\mu}\partial_{\mu}\widehat{\psi})^2|0_{\rm R}\rangle=
\frac{\pi^2}{30}T_0^4{\rm e}^{-4a\xi}=
\frac{\pi^2}{30\beta^4}=
\frac{\pi^2}{30}T^4,
\end{equation}
where we used the expression of the proper temperature in Rindler coordinates
\begin{equation}\label{gteq22}
\beta^2=\frac{a^2}{T_0^2}\left({z'}^2-t^2\right)=\frac{{\rm e}^{2a\xi}}{T_0^2}.
\end{equation}
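This coordinate identity is immediate to verify numerically (a sketch under the assumption that the Rindler map is the standard one, $t = {\rm e}^{a\xi}\sinh(a\tau)/a$, $z' = {\rm e}^{a\xi}\cosh(a\tau)/a$; the parameter values are arbitrary):

```python
import math

# Check of the relation beta^2 = (a^2/T0^2)(z'^2 - t^2) = e^{2 a xi}/T0^2.
# The Rindler map used below (t = e^{a xi} sinh(a tau)/a,
# z' = e^{a xi} cosh(a tau)/a) is the standard convention we assume here.
# The result must be independent of tau, since cosh^2 - sinh^2 = 1.
def beta_squared(a, T0, tau, xi):
    t = math.exp(a * xi) * math.sinh(a * tau) / a
    zp = math.exp(a * xi) * math.cosh(a * tau) / a
    return (a * a / (T0 * T0)) * (zp * zp - t * t)
```

The $\tau$-independence reflects the stationarity of the proper temperature along each Rindler flow line.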
As for the second term in \eqref{gteq20}, it is convenient to use the Klein-Gordon equation of motion to rewrite it as
\begin{equation}
\langle \widehat{\cal L}\rangle=\frac{1}{4}\langle \Box \widehat{\psi}^2\rangle=\frac{1}{4}\Box \langle \widehat{\psi}^2\rangle.
\end{equation}
The d'Alembert operator can be taken out of the thermal expectation value because, at global thermodynamic equilibrium, the density operator is independent of the spacetime point.
Using the quantum field expansion \eqref{gteq13} and the equations \eqref{gteq14a}--\eqref{gteq14c}, we readily obtain
\begin{equation}\label{gteq36}
\langle \widehat{\psi}^2\rangle=
\int_0^{+\infty}{\rm d} \omega \int_{-\infty}^{+\infty}{\rm d}^2{\rm k_T}\,|u_{\omega,{\bf k}_{\rm T}}|^2(2n_{\rm B}+1).
\end{equation}
Once again, the $+1$ term stemming from the commutation relations \eqref{gteq15} gives rise to divergencies and must be renormalized.
In the Rindler vacuum renormalization scheme, we have
\begin{equation}
\begin{split}
\langle \widehat{\psi}^2\rangle&-\langle 0_{\rm R}|\widehat{\psi}^2|0_{\rm R}\rangle=
\frac{1}{2\pi^4a}\int_0^{+\infty}{\rm d} \omega \,\sinh \left(\frac{\pi \omega}{a}\right)n_{\rm B}\times \\
&\times \int_{-\infty}^{+\infty}{\rm d}^2{\rm k_T}\,{\rm K}_{i\frac{\omega}{a}}\left(\frac{m_{\rm T} {\rm e}^{a\xi}}{a}\right){\rm K}_{-i\frac{\omega}{a}}\left(\frac{m_{\rm T} {\rm e}^{a\xi}}{a}\right).
\end{split}
\end{equation}
The integral in the transverse momentum is performed analytically in the massless limit by using \eqref{gteq21}, hence
\begin{equation}
\langle \widehat{\psi}^2\rangle-\langle 0_{\rm R}|\widehat{\psi}^2|0_{\rm R}\rangle=
\frac{{\rm e}^{-2a\xi}}{2\pi^2}\int_0^{+\infty}{\rm d} \omega \frac{\omega}{{\rm e}^{\omega/T_0}-1}=
\frac{T_0^2}{12}{\rm e}^{-2a\xi}.
\end{equation}
In the last step, the integral was carried out by using again the geometric series trick as above, with now $\zeta(2)$ instead of $\zeta(4)$.
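The $\zeta(2)$ analogue can be cross-checked numerically in the same way as before (illustrative; the value of $T_0$ is arbitrary, and note the finite limit of the integrand at $\omega=0$):

```python
import math

# Cross-check of the zeta(2) case used here:
# int_0^inf dw w / (e^{w/T0} - 1) = pi^2 T0^2 / 6, so the prefactor
# e^{-2 a xi}/(2 pi^2) turns it into T0^2 e^{-2 a xi}/12.
# Composite Simpson rule; the integrand tends to T0 as w -> 0.
def bose_linear(T0, L_over_T=60.0, N=20000):
    b = L_over_T * T0
    h = b / N
    def f(w):
        return w / math.expm1(w / T0) if w > 0.0 else T0  # limit at w = 0
    s = f(0.0) + f(b)
    for i in range(1, N):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3.0
```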
The d'Alembert operator in Rindler coordinates reads
\begin{equation}
\Box={\rm e}^{-2a\xi}\left(\partial_{\tau}^2-\partial_{\xi}^2\right)-\left(\partial_x^2+\partial_y^2\right),
\end{equation}
hence the result found in \cite{Becattini:2019poj}
\begin{equation}
\langle \widehat{\cal L}\rangle-\langle 0_{\rm R}|\widehat{\cal L}|0_{\rm R}\rangle=
\frac{1}{4}\Box \left(\langle \widehat{\psi}^2\rangle-\langle 0_{\rm R}|\widehat{\psi}^2|0_{\rm R}\rangle \right)=
-\frac{a^2T_0^2}{12}{\rm e}^{-4a\xi}=
\frac{\alpha^2}{12\beta^4}=
\frac{\alpha^2}{12}T^4,
\end{equation}
where \eqref{gteq22} and \eqref{gteq23} were used to express the result in terms of $\alpha^2$ and $\beta^2$.
In summary, the energy density for a free real scalar field in the massless limit renormalized with respect to the Rindler vacuum is
\begin{equation}
\rho_{\rm R}\equiv
\left(\langle \widehat{T}^{\mu \nu}\rangle-\langle 0_{\rm R}|\widehat{T}^{\mu \nu}|0_{\rm R}\rangle \right)u_{\mu}u_{\nu}=
\frac{\pi^2}{30\beta^4}-\frac{\alpha^2}{12\beta^4}=
\left(\frac{\pi^2}{30}-\frac{\alpha^2}{12}\right)T^4.
\end{equation}
This result was found in \cite{Buzzegoli:2017cqy, Becattini:2015nva} with a perturbative expansion of the density operator \eqref{gteq04} in $\alpha^{\mu}$ at order $\alpha^2$, so we conclude that the perturbative series for the free real scalar field is simply a polynomial in $\alpha^{\mu}$ of order 2.
The pressure $p$ and the anisotropic pressure $k$ can be worked out following similar steps.
By defining their values in the same renormalization scheme as
\begin{equation}
p_{\rm R}\equiv \frac{1}{2}\left(\langle \widehat{T}^{\mu \nu}\rangle-\langle 0_{\rm R}|\widehat{T}^{\mu \nu}|0_{\rm R}\rangle \right)\left(\frac{A_{\mu}A_{\nu}}{A^2}-\Delta_{\mu \nu}\right)
\end{equation}
\begin{equation}
k_{\rm R}\equiv \frac{1}{2A^2}\left(\langle \widehat{T}^{\mu \nu}\rangle-\langle 0_{\rm R}|\widehat{T}^{\mu \nu}|0_{\rm R}\rangle \right)\left(3\frac{A_{\mu}A_{\nu}}{A^2}-\Delta_{\mu \nu}\right)
\end{equation}
the final expressions read
\begin{equation}\label{gteq31}
\rho_{\rm R}=
\frac{\pi^2}{30\beta^4}-\frac{\alpha^2}{12\beta^4}=
\left(\frac{\pi^2}{30}-\frac{\alpha^2}{12}\right)T^4,
\qquad (m=0)
\end{equation}
\begin{equation}\label{gteq32}
p_{\rm R}=\frac{\pi^2}{90\beta^4}+\frac{\alpha^2}{18\beta^4}=
\left(\frac{\pi^2}{90}+\frac{\alpha^2}{18}\right)T^4,
\qquad (m=0)
\end{equation}
\begin{equation}\label{gteq33}
k_{\rm R}=-\frac{\alpha^2}{12\beta^4}=-\frac{\alpha^2}{12}T^4,
\qquad (m=0).
\end{equation}
This is the thermal expectation value of the energy-momentum tensor of a free real scalar field at global thermodynamic equilibrium in the right Rindler wedge in the massless limit.
As expected, for vanishing acceleration field the anisotropic pressure term vanishes, the ideal form is recovered and the energy density and the isotropic pressure take on their familiar forms.
Let us also emphasize that, by definition of $\alpha^{\mu}$ \eqref{gteq24} and by equation \eqref{gteq25}, the $\alpha^2$ terms are quantum corrections, in the sense that they are proportional to $\hbar^2$ thus vanishing in the limit $\hbar \to 0$.
\subsection{Particles and vacuum: the Unruh effect}
\label{sec:gte_unruh_effect}
As we have had occasion to note several times already, thermal expectation values of operators quadratic in the quantum field are divergent owing to the $+1$ term in \eqref{gteq14b}, which stems from the commutation relations of the creation and annihilation operators \eqref{gteq15}.
In free field theory, they are usually renormalized by subtracting the vacuum expectation value; however,
the vacuum state is defined by the creation and annihilation operators, which in turn depend on the choice of the positive-frequency modes.
As it is known, especially in the context of Quantum Field Theory in curved spacetime, such a choice is not unique in general, therefore we face an ambiguity.
In our case, the quantum field could either be expanded in the whole Minkowski spacetime in the usual plane waves form as in \eqref{gteq27}, or in the two Rindler wedges in terms of the Bessel functions as in \eqref{gteq13} and \eqref{gteq26}.
Consequently, it is found that Rindler creation and annihilation operators are related to the Minkowski ones by a non-trivial Bogolyubov transformation, therefore the Rindler vacuum $|0_{\rm R}\rangle$ and the Minkowski vacuum $|0_{\rm M}\rangle$ are two different states, as first pointed out by Fulling \cite{Fulling:1972md}.
In particular, if we take the standpoint of the Rindler observer, the Minkowski vacuum is seen as a thermal state of particles at the temperature $T_0=\frac{a}{2\pi}$
\begin{equation}\label{gteq34}
\langle 0_{\rm M}|\widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm R}_{\omega',{\bf k}_{\rm T}'}|0_{\rm M}\rangle=
\langle 0_{\rm M}|\widehat{a}^{\rm L\dagger}_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm L}_{\omega',{\bf k}_{\rm T}'}|0_{\rm M}\rangle=
\frac{1}{{\rm e}^{\frac{2\pi}{a}\omega}-1}\delta(\omega-\omega')\,\delta^2({\bf k}_{\rm T}-{\bf k}_{\rm T}').
\end{equation}
This is the famous \textit{Unruh effect}, and $T_0=\frac{a}{2\pi}$ is called the \textit{Unruh temperature} \cite{Unruh:1976db}.
In short, the Unruh effect states that uniformly accelerated observers in Minkowski spacetime, i.e.\ linearly accelerated observers with constant proper acceleration called the Rindler observers, associate a thermal bath of Rindler particles to the no-particle state of inertial observers, i.e.\ the Minkowski vacuum.
Rindler particles are associated with positive-frequency modes as defined by Rindler observers in contrast to Minkowski particles, which are associated with positive-frequency modes as defined by inertial observers.
This is a conceptually subtle Quantum Field Theory result, which has played a crucial role in our understanding that the particle content of a field theory is, in this sense, observer-dependent.
In view of these findings, which vacuum should we choose to renormalize thermal expectation values?
The answer is that, as long as both choices give finite results, it depends on which kind of observer we want to take the standpoint of.
We have seen that taking the point of view of the Rindler observer, namely renormalizing with respect to the Rindler vacuum, means the subtraction of the $T_0=0$ contribution, which is tantamount to neglecting the divergent $+1$ term in equation \eqref{gteq14b}.
In this renormalization scheme, the thermal expectation value of the energy-momentum tensor is given in \eqref{gteq31}--\eqref{gteq33}.
On the other hand, from \eqref{gteq34} it is clear that thermal expectation values of products of Rindler creation and annihilation operators in the Minkowski vacuum take on the same form as \eqref{gteq14a}--\eqref{gteq14c} calculated at $T_0=\frac{a}{2\pi}$.
Therefore, taking the standpoint of the inertial observer, that is renormalizing with respect to the Minkowski vacuum, means the subtraction of the $T_0=\frac{a}{2\pi}$ contribution.
The former eventuality has already been studied, so let us focus here on the latter.
In the Minkowski renormalization scheme, equations \eqref{gteq14a}--\eqref{gteq14c} read
\begin{equation}
\langle \widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm R}_{\omega',{\bf k}_{\rm T}'}\rangle-\langle 0_{\rm M}|\widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm R}_{\omega',{\bf k}_{\rm T}'}|0_{\rm M}\rangle=
\left(\frac{1}{{\rm e}^{\omega/T_0}-1}-\frac{1}{{\rm e}^{\frac{2\pi}{a}\omega}-1}\right)\delta(\omega-\omega')\,\delta^2({\bf k}_{\rm T}-{\bf k}_{\rm T}')
\end{equation}
\begin{equation}
\langle \widehat{a}^{\rm R}_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm R\dagger}_{\omega',{\bf k}_{\rm T}'}\rangle-\langle 0_{\rm M}|\widehat{a}^{\rm R}_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm R\dagger}_{\omega',{\bf k}_{\rm T}'}|0_{\rm M}\rangle=
\left(\frac{1}{{\rm e}^{\omega/T_0}-1}-\frac{1}{{\rm e}^{\frac{2\pi}{a}\omega}-1}\right)\delta(\omega-\omega')\,\delta^2({\bf k}_{\rm T}-{\bf k}_{\rm T}')
\end{equation}
\begin{equation}
\langle \widehat{a}^{\rm R}_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm R}_{\omega',{\bf k}_{\rm T}'}\rangle-\langle 0_{\rm M}|\widehat{a}^{\rm R}_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm R}_{\omega',{\bf k}_{\rm T}'}|0_{\rm M}\rangle=0=\langle \widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm R\dagger}_{\omega',{\bf k}_{\rm T}'}\rangle-\langle 0_{\rm M}|\widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm R\dagger}_{\omega',{\bf k}_{\rm T}'}|0_{\rm M}\rangle.
\end{equation}
These expressions are suggestive, as they make it apparent that the thermal expectation value of any operator quadratic in the quantum field, such as the energy-momentum tensor, vanishes when the temperature is as low as the Unruh temperature.
By looking at non-renormalized equations \eqref{gteq35} and \eqref{gteq36}, it is clear that the energy density not only vanishes at $T_0=\frac{a}{2\pi}$, but becomes negative for lower temperatures.
This is actually not a prerogative of the energy density, but a general feature of operators quadratic in the quantum field.
Thus, in this sense, the Unruh temperature is an absolute lower bound for the temperature, a result found in \cite{Becattini:2017ljh}.
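The sign structure behind this bound can be illustrated numerically (parameter values are arbitrary): the Minkowski-vacuum-subtracted occupation number is positive for every $\omega$ when $T_0 > a/2\pi$, vanishes at $T_0 = a/2\pi$, and is negative for all $\omega$ below it, which is what drives the energy density negative.

```python
import math

# Sign of the Minkowski-vacuum-subtracted occupation number
# n_B(T0) - n_B(a/2pi) as a function of omega, for T0 above, at and
# below the Unruh temperature a/(2 pi).  Natural units, a = 1.
def subtracted_occupation(omega, T0, a):
    return 1.0 / math.expm1(omega / T0) - 1.0 / math.expm1(2.0 * math.pi * omega / a)

a = 1.0
T_U = a / (2.0 * math.pi)                 # Unruh temperature
omegas = [0.1 * j for j in range(1, 60)]  # sample frequencies
```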
It is important to stress that this conclusion holds locally, for a comoving observer.
From the expression of the proper temperature $T$ \eqref{gteq09} and the acceleration field $A^{\mu}$ \eqref{gteq37}, it is readily seen that in general one can write $T=T_0\sqrt{|A^2|}/a$, so, at the level of the proper temperature, the bound $T_0\ge \frac{a}{2\pi}$ implies
\begin{equation}
T\ge T_{\rm U}\equiv \frac{\sqrt{|A^2|}}{2\pi},
\end{equation}
where $T_{\rm U}$ is called the \textit{comoving} or \textit{proper Unruh temperature}.
That is, the temperature measured by a comoving thermometer cannot be lower than the magnitude of the acceleration field divided by $2\pi$.
Recall that the magnitude of the acceleration field is constant along a fixed flow line and varies from one flow line to another, therefore so does the proper Unruh temperature, thus behaving like the proper temperature itself.
The feature of the proper Unruh temperature being a lower bound for the proper temperature is apparent in the thermal expectation value of the energy-momentum tensor renormalized with respect to the Minkowski vacuum.
By defining
\begin{equation}
\rho_{\rm M}\equiv \left(\langle \widehat{T}^{\mu \nu}\rangle-\langle 0_{\rm M}|\widehat{T}^{\mu \nu}|0_{\rm M}\rangle \right)u_{\mu}u_{\nu},
\end{equation}
\begin{equation}
p_{\rm M}\equiv \frac{1}{2}\left(\langle \widehat{T}^{\mu \nu}\rangle-\langle 0_{\rm M}|\widehat{T}^{\mu \nu}|0_{\rm M}\rangle \right)\left(\frac{A_{\mu}A_{\nu}}{A^2}-\Delta_{\mu \nu}\right),
\end{equation}
\begin{equation}
k_{\rm M}\equiv \frac{1}{2A^2}\left(\langle \widehat{T}^{\mu \nu}\rangle-\langle 0_{\rm M}|\widehat{T}^{\mu \nu}|0_{\rm M}\rangle \right)\left(3\frac{A_{\mu}A_{\nu}}{A^2}-\Delta_{\mu \nu}\right),
\end{equation}
one can find the following results in the massless case
\begin{equation}
\rho_{\rm M}=
\left(\frac{\pi^2}{30}-\frac{\alpha^2}{12}\right)(T^4-T_{\rm U}^4),
\qquad (m=0)
\end{equation}
\begin{equation}
p_{\rm M}=
\left(\frac{\pi^2}{90}+\frac{\alpha^2}{18}\right)(T^4-T_{\rm U}^4),
\qquad (m=0)
\end{equation}
\begin{equation}
k_{\rm M}=
-\frac{\alpha^2}{12}(T^4-T_{\rm U}^4),
\qquad (m=0).
\end{equation}
Although the conclusion that scalar thermodynamic functions renormalized with respect to the Minkowski vacuum can be expressed in the above fashion was obtained for a free field theory, it is likely to hold for any interacting field theory as well.
In fact, the Unruh effect was derived for general interacting field theories in \cite{Bisognano:1975ih, Bisognano:1976za} within an axiomatic quantum field theory approach, by taking advantage of the KMS property of the expectation values for the density operator at hand.
For a recent discussion, see also \cite{Gransee:2015aba, Gransee:2016hep}.
\section{Entropy current in the right Rindler wedge}
\label{sec:gte_entropy_current_in_RRW}
With the thermal expectation value of the energy-momentum tensor, we are now in a position to use our method to calculate the entropy current including quantum corrections.
Its divergence will provide information on the entropy production rate, which is expected to vanish at global thermodynamic equilibrium, while its integral will give us the total entropy.
As we shall see, thanks to the factorization property of the density operator, the entropy thereby obtained turns out to be related to an entanglement entropy.
\subsection{Entropy current calculation}
As a quick recap, in order to apply the method for the calculation of the entropy current put forward in Section \ref{sec:zubarev_entropy_current_method}, we need two ingredients:
\begin{enumerate}
\item the thermal expectation value of the energy-momentum tensor $\langle \widehat{T}^{\mu \nu}\rangle$, and
\item the eigenvector $|0_{\Upsilon}\rangle$ corresponding to the lowest, non-degenerate eigenvalue of $\widehat{\Upsilon}$.
\end{enumerate}
Then, the algorithm is the following:
\begin{enumerate}
\item Take $\langle \widehat{T}^{\mu \nu}\rangle$ and subtract $\langle 0_{\Upsilon}|\widehat{T}^{\mu \nu}|0_{\Upsilon}\rangle$ by using the $\lambda$-dependent density operator $\widehat{\rho}_{\rm R}(\lambda)$ defined according to \eqref{zubeq21}.
This way, the result is $\lambda$-dependent.
\item Contract with $\beta_{\nu}$, which is $\lambda$-independent, and integrate in $\lambda$ from $\lambda=1$ to $\lambda=+\infty$ in order to obtain the thermodynamic potential current $\phi^{\mu}$ defined in \eqref{zubeq26}.
\item Plug the result into \eqref{zubeq27} and obtain the entropy current $s^{\mu}$.
\end{enumerate}
By comparing the definition \eqref{zubeq23} of the operator $\widehat{\Upsilon}$ with the expression \eqref{gteq28} of the reduced density operator at global thermodynamic equilibrium with acceleration in the right Rindler wedge, we immediately understand that
\begin{equation}\label{gteq38}
\widehat{\Upsilon}=\frac{\widehat{\Pi}_{\rm R}}{T_0}=
\frac{1}{T_0}\int_0^{+\infty}{\rm d} \omega \int_{-\infty}^{+\infty}{\rm d}^2{\rm k_T}\,\omega \widehat{a}^{\rm R\dagger}_{\omega,{\bf k}_{\rm T}}\widehat{a}^{\rm R}_{\omega,{\bf k}_{\rm T}}.
\end{equation}
Thus, the lowest eigenvector of $\widehat{\Upsilon}$ is the Rindler vacuum, $|0_{\Upsilon}\rangle=|0_{\rm R}\rangle$, which is non-degenerate, and the corresponding eigenvalue is $\Upsilon_0=0$.
Hence,
\begin{equation}
\langle \widehat{T}^{\mu \nu}\rangle-\langle 0_{\Upsilon}|\widehat{T}^{\mu \nu}|0_{\Upsilon}\rangle=
\langle \widehat{T}^{\mu \nu}\rangle-\langle 0_{\rm R}|\widehat{T}^{\mu \nu}|0_{\rm R}\rangle,
\end{equation}
which was already calculated in \eqref{gteq31}--\eqref{gteq33} for a massless field, so both ingredients are at hand.
Now, according to the method, the above quantity should be calculated with the modified density operator $\widehat{\rho}_{\rm R}(\lambda)$ in order to be $\lambda$-dependent.
Let us take a closer look at what this operation actually means at a practical level.
By comparing equation \eqref{zubeq21} with \eqref{gteq04} or with \eqref{gteq38}, we can tell that in our case the introduction of $\lambda$ is but a rescaling of $T_0$ as $T_0\mapsto T_0/\lambda$.
It can be easily checked that this transformation gives precisely the desired result.
Thus, we have
\begin{equation}
\langle \widehat{T}^{\mu \nu}\rangle(\lambda)-\langle 0_{\Upsilon}|\widehat{T}^{\mu \nu}|0_{\Upsilon}\rangle(\lambda)=
\rho_{\rm R}(\lambda)u^{\mu}u^{\nu}-p_{\rm R}(\lambda)\Delta^{\mu \nu}+k_{\rm R}(\lambda)A^{\mu}A^{\nu},
\end{equation}
where the rescaling $T_0\mapsto T_0/\lambda$ affects only $\rho_{\rm R}$, $p_{\rm R}$ and $k_{\rm R}$ because $u^{\mu}$, $\Delta^{\mu \nu}$ and $A^{\mu}$ are independent of $T_0$.
This result should now be contracted with $\beta_{\nu}$.
Care must be taken that this $\beta_{\nu}$ does not undergo the temperature rescaling: it concerns only the thermal expectation value $\langle \widehat{T}^{\mu \nu}\rangle-\langle 0_{\Upsilon}|\widehat{T}^{\mu \nu}|0_{\Upsilon}\rangle$, as can be clearly seen following the steps of the method derivation in Section \ref{sec:zubarev_entropy_current_method}.
Whence
\begin{equation}
\left(\langle \widehat{T}^{\mu \nu}\rangle(\lambda)-\langle 0_{\Upsilon}|\widehat{T}^{\mu \nu}|0_{\Upsilon}\rangle(\lambda)\right)\beta_{\nu}=
\rho_{\rm R}(\lambda)\beta^{\mu}.
\end{equation}
In other words, we do not need the thermal expectation value of the whole energy-momentum tensor: only the energy density contributes to the thermodynamic potential current and the entropy current.
The expression of $\rho_{\rm R}(\lambda)$ is obtained from \eqref{gteq31} by making explicit the $T_0$-dependence using \eqref{gteq09} and \eqref{gteq23}.
To this purpose, it is also convenient to extract the $T_0$-dependence from the four-temperature $\beta^{\mu}$ of \eqref{gteq08} by defining the Killing vector field $\gamma^{\mu}\equiv T_0\beta^{\mu}$, which is $T_0$-independent, hence
\begin{equation}
\rho_{\rm R}=
\left(\frac{\pi^2}{30}-\frac{\alpha^2}{12}\right)\frac{1}{\beta^4}=
\left(\frac{\pi^2}{30}+\frac{a^2}{12T_0^2}\right)\frac{T_0^4}{\gamma^4}=
\frac{\pi^2}{30\gamma^4}T_0^4+\frac{a^2}{12\gamma^4}T_0^2,
\end{equation}
thus
\begin{equation}
\rho_{\rm R}(\lambda)=
\frac{\pi^2}{30\gamma^4}\frac{T_0^4}{\lambda^4}+\frac{a^2}{12\gamma^4}\frac{T_0^2}{\lambda^2}.
\end{equation}
According to \eqref{zubeq26}, the thermodynamic potential current is readily obtained
\begin{equation}
\begin{split}
\phi^{\mu}=&\beta^{\mu}\int_1^{+\infty}{\rm d} \lambda \,\rho_{\rm R}(\lambda)=\left(\frac{\pi^2T_0^4}{30\gamma^4}\int_1^{+\infty}\frac{{\rm d} \lambda}{\lambda^4}+\frac{a^2T_0^2}{12\gamma^4}\int_1^{+\infty}\frac{{\rm d} \lambda}{\lambda^2}\right)\beta^{\mu}\\
=&\left(\frac{\pi^2}{90}\frac{T_0^4}{\gamma^4}+\frac{a^2}{12T_0^2}\frac{T_0^4}{\gamma^4}\right)\beta^{\mu}=
\left(\frac{\pi^2}{90\beta^4}-\frac{\alpha^2}{12\beta^4}\right)\beta^{\mu}.
\end{split}
\end{equation}
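The $\lambda$-integration above is elementary; still, a quick symbolic check with sympy (a sketch of our own, with symbol names not taken from the text) confirms the coefficients $\pi^2/90$ and $a^2/12$:

```python
import sympy as sp

lam, T0, a, gam = sp.symbols('lambda T_0 a gamma', positive=True)

# Rindler-wedge energy density with the rescaling T0 -> T0/lambda applied (massless field)
rho_lam = sp.pi**2*(T0/lam)**4/(30*gam**4) + a**2*(T0/lam)**2/(12*gam**4)

# coefficient of beta^mu in the thermodynamic potential current phi^mu
phi_coeff = sp.integrate(rho_lam, (lam, 1, sp.oo))

expected = sp.pi**2*T0**4/(90*gam**4) + a**2*T0**2/(12*gam**4)
assert sp.simplify(phi_coeff - expected) == 0
```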
Plugging this result into \eqref{zubeq27}, we finally have the entropy current
\begin{equation}
\begin{split}
s^{\mu}=&
\phi^{\mu}+\left(\langle \widehat{T}^{\mu \nu}\rangle-\langle 0_{\rm R}|\widehat{T}^{\mu \nu}|0_{\rm R}\rangle \right)\beta_{\nu}=
\phi^{\mu}+\rho_{\rm R}\beta^{\mu}\\
=&\left(\frac{\pi^2}{90\beta^4}-\frac{\alpha^2}{12\beta^4}+\frac{\pi^2}{30\beta^4}-\frac{\alpha^2}{12\beta^4}\right)\beta^{\mu}=
\left(\frac{2\pi^2}{45\beta^4}-\frac{\alpha^2}{6\beta^4}\right)\beta^{\mu}.
\end{split}
\end{equation}
In summary,
\begin{equation}
\phi^{\mu}=
\left(\frac{\pi^2}{90\beta^4}-\frac{\alpha^2}{12\beta^4}\right)\beta^{\mu}=
\left(\frac{\pi^2}{90}-\frac{\alpha^2}{12}\right)T^3u^{\mu},\qquad (m=0)
\end{equation}
\begin{equation}\label{gteq39}
s^{\mu}=
\left(\frac{2\pi^2}{45\beta^4}-\frac{\alpha^2}{6\beta^4}\right)\beta^{\mu}=
\left(\frac{2\pi^2}{45}-\frac{\alpha^2}{6}\right)T^3u^{\mu},\qquad (m=0).
\end{equation}
These are the final expressions of the thermodynamic potential current and the entropy current of a free real scalar massless field at global thermodynamic equilibrium with acceleration in the right Rindler wedge \cite{Becattini:2019poj}.
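The sum of coefficients leading to the entropy current can be verified in one line; the sympy snippet below (our own sanity check, not part of the derivation) treats $\alpha^2$ as a formal constant:

```python
import sympy as sp

alpha2 = sp.symbols('alpha2')  # stands for the constant scalar alpha^2

phi_coeff = sp.pi**2/90 - alpha2/12   # thermodynamic potential current coefficient
rho_coeff = sp.pi**2/30 - alpha2/12   # Rindler-renormalized energy density coefficient

# s^mu = phi^mu + rho_R beta^mu, so the coefficients simply add up
s_coeff = phi_coeff + rho_coeff
assert sp.simplify(s_coeff - (2*sp.pi**2/45 - alpha2/6)) == 0
```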
Now some comments are in order.
First of all, we stress that, by definition of $\alpha^{\mu}$ \eqref{gteq24} and by equation \eqref{gteq25}, the $\alpha^2$ terms are quantum corrections, in the sense that they are proportional to $\hbar^2$ thus vanishing in the limit $\hbar \to 0$.
It is also easy to check that both the entropy current and the thermodynamic potential current are divergenceless.
Indeed, both currents are given by a constant multiplying $\beta^{\mu}/\beta^4$, so
\begin{equation}
\begin{split}
\partial_{\mu}s^{\mu}\propto &\partial_{\mu}\phi^{\mu}\propto
\partial_{\mu}\frac{\beta^{\mu}}{\beta^4}=
\partial_t\frac{\beta^t}{\beta^4}+\partial_{z'}\frac{\beta^{z'}}{\beta^4}\\
=&\partial_t\left[\frac{T_0^4}{a^4}\left({z'}^2-t^2\right)^{-2}\frac{a}{T_0}z'\right]+\partial_{z'}\left[\frac{T_0^4}{a^4}\left({z'}^2-t^2\right)^{-2}\frac{a}{T_0}t\right]\\
=&\frac{T_0^3}{a^3}\left[4\left({z'}^2-t^2\right)^{-3}tz'-4\left({z'}^2-t^2\right)^{-3}z't\right]=0.
\end{split}
\end{equation}
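The vanishing of this divergence can also be checked symbolically; the sketch below, in sympy, uses the explicit Rindler form $\beta^{\mu}=(a/T_0)(z',0,0,t)$ appearing in the calculation above:

```python
import sympy as sp

t, zp, T0, a = sp.symbols('t zprime T_0 a', positive=True)

# beta^mu/beta^4 in the right Rindler wedge, with beta^mu = (a/T0)(z', 0, 0, t)
pref = (T0**4/a**4)/(zp**2 - t**2)**2

# only the t and z' components contribute to the divergence
div = sp.diff(pref*(a/T0)*zp, t) + sp.diff(pref*(a/T0)*t, zp)
assert sp.simplify(div) == 0
```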
This was expected because, as a general fact of global thermodynamic equilibrium, the entropy is constant in time, so the entropy production rate is identically null.
As we discussed earlier, the Unruh temperature is an absolute lower bound for the temperature at global thermodynamic equilibrium with acceleration, so the question arises whether the entropy current vanishes at that temperature.
It turns out that it does not, in particular we have
\begin{equation}\label{gteq40}
s^{\mu}(T_{\rm U})=\frac{32\pi^2}{45}T_{\rm U}^3u^{\mu},\qquad (m=0).
\end{equation}
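This value follows from substituting $T_0=a/2\pi$ into the coefficient of the entropy current, with $\alpha^2=-a^2/T_0^2$ as can be read off from the expression of $\rho_{\rm R}$ above; a minimal sympy check (our own sketch):

```python
import sympy as sp

a, T0 = sp.symbols('a T_0', positive=True)

alpha2 = -a**2/T0**2                # alpha^2 = -a^2/T_0^2, cf. the expression of rho_R
s_coeff = 2*sp.pi**2/45 - alpha2/6  # coefficient of T^3 u^mu in the entropy current

# Minkowski vacuum: temperature as low as the Unruh temperature, T_0 = a/(2 pi)
s_at_TU = s_coeff.subs(T0, a/(2*sp.pi))
assert sp.simplify(s_at_TU - 32*sp.pi**2/45) == 0
```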
The reason for a non-vanishing entropy current in the Minkowski vacuum might be the fact that the field degrees of freedom in the left Rindler wedge were traced out, in the sense that $s^{\mu}$ is the entropy current pertaining to the entropy in the right Rindler wedge, thus calculated with the reduced density operator $\widehat{\rho}_{\rm R}$.
This partial trace operation often leads to thermal states, hence to a non-vanishing entropy current.
Another interesting observation is the following.
From equations \eqref{zubeq26} and \eqref{zubeq27}, it is clear that the thermodynamic potential current and the entropy current depend on the specific energy-momentum tensor quantum operator we decide to choose.
Our choice so far has been the canonical tensor \eqref{gteq19}, namely the minimal coupling one, but this is not the only possibility.
For instance, we could as well consider an ``improved'' tensor which is traceless for a massless field, namely the conformal coupling one,
\begin{equation}
\widehat{T}^{\mu \nu}_{\rm imp}\equiv \widehat{T}^{\mu \nu}-\frac{1}{6}\left(\partial^{\mu}\partial^{\nu}-g^{\mu \nu}\Box \right)\widehat{\psi}^2,
\end{equation}
with $\widehat{T}^{\mu \nu}$ appearing there being the canonical tensor.
In \cite{Becattini:2011ev} it is argued that, as a general feature of global thermodynamic equilibrium with acceleration or rotation, the energy density pertaining to this improved tensor is different from the one corresponding to the canonical tensor.
Indeed, by using the previously calculated expressions, the additional term with respect to the canonical value in the Rindler renormalization scheme for a massless field is
\begin{equation}
-\frac{1}{6}\left(u^{\mu}u^{\nu}\partial_{\mu}\partial_{\nu}-\Box \right)\left(\langle \widehat{\psi}^2\rangle-\langle 0_{\rm R}|\widehat{\psi}^2|0_{\rm R}\rangle \right)=
-\frac{1}{6}\left(u^{\mu}u^{\nu}\partial_{\mu}\partial_{\nu}-\Box \right)\frac{1}{12\beta^2}=
\frac{\alpha^2}{12\beta^4},
\end{equation}
hence the ``improved'' energy density
\begin{equation}
\rho^{\rm imp}_{\rm R}=\frac{\pi^2}{30\beta^4}=\frac{\pi^2}{30}T^4,\qquad (m=0).
\end{equation}
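The additional term can be reproduced symbolically; the sketch below (sympy, with $\beta^2=a^2({z'}^2-t^2)/T_0^2$ as in the Rindler setup above) verifies that the differential operator acting on $1/12\beta^2$ indeed yields $\alpha^2/12\beta^4$:

```python
import sympy as sp

t, zp, T0, a = sp.symbols('t zprime T_0 a', positive=True)
w = zp**2 - t**2                      # z'^2 - t^2, positive in the right Rindler wedge

f = T0**2/(12*a**2*w)                 # 1/(12 beta^2), with beta^2 = a^2 (z'^2 - t^2)/T_0^2
ut, uz = zp/sp.sqrt(w), t/sp.sqrt(w)  # non-vanishing four-velocity components (t and z')

uu_dd = ut**2*sp.diff(f, t, 2) + 2*ut*uz*sp.diff(f, t, zp) + uz**2*sp.diff(f, zp, 2)
box = sp.diff(f, t, 2) - sp.diff(f, zp, 2)    # f is independent of x and y

lhs = -(uu_dd - box)/6
rhs = (-a**2/T0**2)/12 * (T0**2/(a**2*w))**2  # alpha^2/(12 beta^4)
assert sp.simplify(lhs - rhs) == 0
```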
That is, the energy density calculated with the ``improved'' energy-momentum tensor for the massless free real scalar field depends only on $\beta^2$ and not on $\alpha^2$, which is a somewhat surprising feature.
Likewise, the entropy current gets modified and one is left with only the first term of equation \eqref{gteq39}
\begin{equation}
s^{\mu}_{\rm imp}=\frac{2\pi^2}{45\beta^4}\beta^{\mu}=
\frac{2\pi^2}{45}T^3u^{\mu},\qquad (m=0).
\end{equation}
At the Unruh temperature, we get an expression that, oddly enough, differs from the one obtained from the canonical tensor \eqref{gteq40} by a factor of 16
\begin{equation}
s^{\mu}_{\rm imp}(T_{\rm U})=\frac{2\pi^2}{45}T_{\rm U}^3u^{\mu},\qquad (m=0).
\end{equation}
\subsection{Entropy in the right Rindler wedge and the entanglement entropy}
\label{subsec:gte_entropy}
Equation \eqref{gteq39} is the entropy current in the right Rindler wedge; therefore, its integral over a 3-dimensional spacelike hypersurface $\Sigma_{\rm R}$ contained in the right Rindler wedge and whose boundary is the 2-dimensional surface $(t=0,z'=0)$ is the entropy in that subspace
\begin{equation}
S_{\rm R}=\int_{\Sigma_{\rm R}}{\rm d} \Sigma_{\mu}s^{\mu}.
\end{equation}
As mentioned already, the right Rindler wedge is foliated by constant-$\tau$ hypersurfaces.
Let $\Sigma_{\rm R}(0)$ and $\Sigma_{\rm R}(\tau^*)$ be two such hypersurfaces, in particular the $\tau=0$ and $\tau=\tau^*$ ones respectively, as shown in Figure \ref{fig:rindler_entropy}.
\begin{figure}
\begin{center}
\includegraphics[width=0.75\textwidth]{figures/rindler_entropy.eps}
\caption{In this 2-dimensional $(t,z)$-section of the right Rindler wedge, $\Sigma_{\rm R}(0)$ and $\Sigma_{\rm R}(\tau^*)$ are two constant-$\tau$ hypersurfaces of the foliation, corresponding to $\tau=0$ and $\tau=\tau^*$ respectively.
$\Gamma(0,\tau^*)$ is the timelike boundary at infinity joining $\Sigma_{\rm R}(0)$ and $\Sigma_{\rm R}(\tau^*)$.
The hyperbolae are the flow lines of the four-temperature, which is tangent to $\Gamma(0,\tau^*)$.}
\label{fig:rindler_entropy}
\end{center}
\end{figure}
In order for the integral to be independent of the choice of the hypersurface of the foliation, two conditions have to be met.
The first one is that the entropy current must be divergenceless, which we already checked.
And the second one is the vanishing of the flux through the timelike boundary at infinity joining the two hypersurfaces of the foliation, indicated as $\Gamma(0,\tau^*)$ in Figure \ref{fig:rindler_entropy}.
This condition is checked just as readily, for the entropy current is parallel to the four-temperature, which is tangent to $\Gamma(0,\tau^*)$, hence ${\rm d} \Sigma_{\mu}s^{\mu}={\rm d} \Sigma \,n_{\mu}s^{\mu}=0$.
In summary, we can choose any hypersurface of the foliation, as expected at global thermodynamic equilibrium.
A straightforward calculation of the entropy on $\Sigma_{\rm R}(0)$ with the entropy current \eqref{gteq39} pertaining to the canonical energy-momentum tensor yields
\begin{equation}\label{gteq47}
S_{\rm R}=\left(\int_{-\infty}^{+\infty}{\rm d}^2{\rm x_T}\right)\left(\frac{2\pi^2}{45}-\frac{\alpha^2}{6}\right)\frac{T_0^3}{a^3}\lim_{z'\to 0}\frac{1}{2{z'}^2}.
\end{equation}
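The structure of this divergence can be made explicit by integrating the $T^3$ profile from a small cutoff away from the horizon; in the short sympy sketch below, the cutoff $\varepsilon$ is our own regulator, not a quantity from the text:

```python
import sympy as sp

zp, eps, T0, a = sp.symbols('zprime epsilon T_0 a', positive=True)

# entropy density on Sigma_R(0) is proportional to T^3, with proper temperature T = T_0/(a z')
profile = (T0/(a*zp))**3

# integrating from eps to infinity isolates the divergence at the horizon z' -> 0
S_per_area = sp.integrate(profile, (zp, eps, sp.oo))
assert sp.simplify(S_per_area - T0**3/(2*a**3*eps**2)) == 0
```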
This expression has two main properties.
The first is the proportionality to the area of the 2-dimensional boundary surface separating the left and right Rindler wedges, that is the so-called \textit{area law}, which is clearly reminiscent of the Bekenstein-Hawking formula for the black hole entropy \cite{Bekenstein:1973ur}.
The second is the divergence as $z'\to 0$, owing to the fact that the proper temperature diverges at the Killing horizon.
Our result is in full agreement with previous literature \cite{Bombelli:1986rw, Srednicki:1993im}.
This entropy we just worked out is the entropy in the right Rindler wedge, namely
the entropy calculated with the reduced density operator pertaining to the right Rindler wedge
\begin{equation}\label{gteq44}
S_{\rm R}=-\mathrm{tr}_{\rm R}(\widehat{\rho}_{\rm R}\log \widehat{\rho}_{\rm R}).
\end{equation}
Recall from \eqref{gteq41} and \eqref{gteq42} that the full density operator $\widehat{\rho}$ in Minkowski spacetime is factorized as $\widehat{\rho}=\widehat{\rho}_{\rm R}\otimes \widehat{\rho}_{\rm L}$, with $\widehat{\rho}_{\rm R}=\mathrm{tr}_{\rm L}(\widehat{\rho})$ and $\widehat{\rho}_{\rm L}=\mathrm{tr}_{\rm R}(\widehat{\rho})$ the reduced density operators of the right and left Rindler wedges respectively.
In Quantum Information Theory, the entropy obtained from the reduced density operator of a bipartite system is called the \textit{entanglement entropy}, which, in some sense, is a measure of the entanglement of the system.
Thus, the above $S_{\rm R}$ is in fact the entanglement entropy of the right Rindler wedge with the left Rindler wedge.
The area law is a property characteristic of the entanglement entropy, and the fact that $S_{\rm R}$ exhibits it too, somehow strengthens our belief in this connection.
The calculation of the entanglement entropy in Quantum Mechanics is not necessarily hard, depending on the specific system at hand; in Quantum Field Theory, however, it turns out to be a formidable task that in general cannot be accomplished.
Only a few exact analytical results are known, and most of them rely heavily on the power of conformal symmetry in 2 dimensions \cite{Calabrese:2004eu, Calabrese:2005zw, Calabrese:2009qy, Casini:2005zv, Casini:2005rm}.
The prescription for a holographic dual of the entanglement entropy in the context of the AdS/CFT correspondence has turned this complicated procedure into a conceptually far more intuitive one, which eventually amounts to finding minimal-area surfaces in some geometry, and as such it constitutes a milestone indeed \cite{Ryu:2006bv, Ryu:2006ef}.
However, it is still far from easy to obtain analytical results for non-trivial systems.
In this respect, the fact that we were able to perform an exact analytical calculation for a system which is both mathematically non-trivial and of physical concern makes our result obtained in \cite{Becattini:2019poj} even more interesting.
Note that $S_{\rm R}$ does not vanish in the Minkowski vacuum, that is when the temperature is as low as the Unruh temperature.
This should not be completely surprising as $S_{\rm R}$ is obtained from $\widehat{\rho}_{\rm R}$, which in turn is obtained from $\widehat{\rho}$ by taking the partial trace over the field degrees of freedom of the left Rindler wedge.
This operation is in general likely to lead to thermal states, as it does indeed in our case too.
Note also that, had we used the ``improved'' energy-momentum tensor instead of the canonical one, the area law and the divergence at the Killing horizon would have stayed the same, so $S_{\rm R}$ would still be infinite overall; however, the constant factor in front of it would have changed to $2\pi^2/45$.
This is somewhat unexpected for the following reason.
As we mentioned, the expression of the entropy current does depend on the specific form of the energy-momentum tensor quantum operator.
On the other hand, the generators of the Poincaré group \eqref{gteq43} should not \cite{Becattini:2011ev}.
Consequently, neither should the entropy, because $S_{\rm R}$ is obtained from $\widehat{\rho}_{\rm R}$ as in \eqref{gteq44}, and $\widehat{\rho}_{\rm R}$, referring to \eqref{gteq04}, can in turn be written in terms of the Poincaré generators as
\begin{equation}
\widehat{\rho}_{\rm R}=
\mathrm{tr}_{\rm L}\left(\frac{1}{Z}\exp \left[-\frac{\widehat{H}}{T_0}+\frac{a}{T_0}\widehat{K}_z\right]\right).
\end{equation}
Nevertheless, $\widehat{\Pi}_{\rm R}$ and $\widehat{\Pi}_{\rm L}$ might inherit a dependence on the energy-momentum tensor quantum operator because of the truncation at $z'=0$.
This issue will be the subject of further investigation.
We conclude with a final remark referring to a comment we made at the end of Subsection \ref{sec:entr_prod_rate}, which essentially states the following.
Since the entropy current is divergenceless at global thermodynamic equilibrium, if its domain is topologically contractible it can be expressed as the divergence of an antisymmetric rank-2 tensor called the \textit{potential} of $s^{\mu}$
\begin{equation}
\partial_{\mu}s^{\mu}=0\qquad \Rightarrow \qquad s^{\mu}=\partial_{\nu}\varsigma^{\mu \nu},\qquad \varsigma^{\mu \nu}=-\varsigma^{\nu \mu}.
\end{equation}
With this potential, the entropy can be cast into a 2-dimensional integral as in \eqref{zubeq31}, i.e.\
\begin{equation}\label{gteq48}
S=\int_{\Sigma}{\rm d} \Sigma_{\mu}s^{\mu}=
\int_{\Sigma}{\rm d} \Sigma_{\mu}\partial_{\nu}\varsigma^{\mu \nu}=
\frac{1}{2}\int_{\partial \Sigma}{\rm d} \tilde{S}_{\mu \nu}\varsigma^{\mu \nu}=
-\frac{1}{4}\int_{\partial \Sigma}{\rm d} S^{\rho \sigma}\sqrt{|g|}\epsilon_{\mu \nu \rho \sigma}\varsigma^{\mu \nu},
\end{equation}
where $\partial \Sigma$ is the 2-dimensional boundary surface of $\Sigma$.
The right Rindler wedge is topologically contractible, therefore such a potential for the entropy current \eqref{gteq39} ought to exist.
In order to work it out, we follow the same kind of philosophy as for the derivation of the thermal expectation value of the energy-momentum tensor \eqref{gteq45}.
To form an antisymmetric tensor, we can only use the vector fields $\beta^{\mu}$ and $\alpha^{\mu}$, hence the only possible combination is $\alpha^{\mu}\beta^{\nu}-\alpha^{\nu}\beta^{\mu}$.
According to the decomposition \eqref{gteq18}, this is just proportional to the thermal vorticity, so we have in general
\begin{equation}\label{gteq46}
\varsigma^{\mu \nu}=G\varpi^{\mu \nu}
\end{equation}
for some thermodynamic scalar function $G=G(\alpha^2,\beta^2)$.
The divergence of this expression must reproduce the entropy current
\begin{equation}
s^{\mu}=\partial_{\nu}(G\varpi^{\mu \nu})=\varpi^{\mu \nu}\partial_{\nu}G,
\end{equation}
where we took into account that the thermal vorticity is constant at global thermodynamic equilibrium.
By introducing the entropy density $s=u_{\mu}s^{\mu}$, we get
\begin{equation}
s=u_{\mu}s^{\mu}=u_{\mu}\varpi^{\mu \nu}\partial_{\nu}G=
-\alpha^{\nu}\partial_{\nu}G=
-\alpha^{\nu}\frac{\partial G}{\partial \beta^2}\partial_{\nu}\beta^2,
\end{equation}
where we used again the decomposition \eqref{gteq18} of the thermal vorticity and, in the last equality, the fact that $\alpha^2$ is constant, so its derivative is identically null.
Using the Killing equation and \eqref{gteq18} once again, we find
\begin{equation}
\partial_{\nu}\beta^2=
g^{\rho \sigma}\partial_{\nu}(\beta_{\rho}\beta_{\sigma})=
g^{\rho \sigma}(\beta_{\rho}\partial_{\nu}\beta_{\sigma}+\beta_{\sigma}\partial_{\nu}\beta_{\rho})=
\beta^{\sigma}\varpi_{\sigma \nu}+\beta^{\rho}\varpi_{\rho \nu}=
-2\sqrt{\beta^2}\alpha_{\nu},
\end{equation}
hence
\begin{equation}
s=2\alpha^2\sqrt{\beta^2}\frac{\partial G}{\partial \beta^2}\qquad \Rightarrow \qquad
G=\int {\rm d} \beta^2\frac{s}{2\alpha^2\sqrt{\beta^2}}.
\end{equation}
The entropy density obtained from the entropy current \eqref{gteq39} is
\begin{equation}
s=u_{\mu}s^{\mu}=\left(\frac{2\pi^2}{45}-\frac{\alpha^2}{6}\right)\frac{1}{(\beta^2)^{3/2}},
\end{equation}
so we can work out the expression of $G$
\begin{equation}
\begin{split}
G=&
\int {\rm d} \beta^2 \left(\frac{2\pi^2}{45}-\frac{\alpha^2}{6}\right)\frac{1}{(\beta^2)^{3/2}} \frac{1}{2\alpha^2\sqrt{\beta^2}}\\
=&\frac{1}{2\alpha^2}\left(\frac{2\pi^2}{45}-\frac{\alpha^2}{6}\right)\int {\rm d} \beta^2 \frac{1}{(\beta^2)^2}\\
=&-\frac{1}{2\alpha^2}\left(\frac{2\pi^2}{45}-\frac{\alpha^2}{6}\right)\frac{1}{\beta^2}=
-\frac{s}{2\alpha^2}\sqrt{\beta^2}.
\end{split}
\end{equation}
Plugging this into \eqref{gteq46}, we finally obtain the potential of the entropy current \cite{Becattini:2019poj}
\begin{equation}
\varsigma^{\mu \nu}=\frac{s}{2\alpha^2}(\beta^{\mu}\alpha^{\nu}-\beta^{\nu}\alpha^{\mu}).
\end{equation}
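As a consistency check, one can verify that the $G$ obtained above satisfies $s=2\alpha^2\sqrt{\beta^2}\,\partial G/\partial \beta^2$; a minimal sympy sketch (our own verification, treating $\alpha^2$ as a formal constant):

```python
import sympy as sp

b2 = sp.symbols('beta2', positive=True)  # beta^2
alpha2 = sp.symbols('alpha2')            # alpha^2, constant at global equilibrium

coeff = 2*sp.pi**2/45 - alpha2/6
s = coeff/b2**sp.Rational(3, 2)          # entropy density s = u_mu s^mu
G = -coeff/(2*alpha2*b2)                 # the potential scalar G worked out above

# check that s = 2 alpha^2 sqrt(beta^2) dG/d(beta^2)
assert sp.simplify(2*alpha2*sp.sqrt(b2)*sp.diff(G, b2) - s) == 0
```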
The boundary of the hypersurface $\Sigma_{\rm R}(0)$ is the $(x,y)$ plane $(t=0,z'=0)$ together with the plane $(t=0,z'=+\infty)$.
On the latter, the argument of the last integral in \eqref{gteq48} vanishes, since $s \propto {z'}^{-3}$ while $\alpha^2$ is constant and $\beta^0 \alpha^3 \propto z'$.
We are thus left with the $(x,y)$ plane, and taking into account that the indices $\rho$ and $\sigma$ can only take on the values $1$ and $2$, as well as the dependence of $\beta^{\mu}$ and $\alpha^{\mu}$ on $(t,z')$, we end up with the same result as \eqref{gteq47}.
\section{Summary and outlook}
In this Chapter, we considered a relativistic quantum fluid at global thermodynamic equilibrium with acceleration, a non-trivial instance of global thermodynamic equilibrium in Minkowski spacetime of both phenomenological and theoretical concern.
For this particular system, the four-temperature has a Killing horizon, making it timelike and future-oriented only in the right Rindler wedge; therefore, that is the subspace to which we had to restrict ourselves in order to do thermodynamics and hydrodynamics.
Moreover, the density operator is factorized into two reduced density operators each pertaining to a single Rindler wedge, so that thermal expectation values in each wedge can be calculated by taking the partial trace with the corresponding reduced density operator.
In order to obtain the entropy current, we calculated the thermal expectation value of the energy-momentum tensor in the right Rindler wedge.
In particular, we considered the canonical tensor for the simple instance of a free real scalar quantum field, and the calculation was performed analytically in the massless case.
The plain expectation value was expectedly divergent, therefore it needed renormalization.
We discussed both the Rindler and the Minkowski renormalization schemes, and saw how the Unruh effect arises naturally with the Unruh temperature being a lower bound for the temperature.
We then practically tested our method by working out the entropy current for this system including quantum corrections.
The corresponding entropy production rate was identically null, in agreement with the general condition of global thermodynamic equilibrium.
The entropy current did not vanish in the Minkowski vacuum, that is at a temperature as low as the Unruh temperature, which is supposedly due to having traced out the field degrees of freedom in the left Rindler wedge.
As clear from the general method, the expression of the entropy current depends on the form of the energy-momentum tensor quantum operator.
All our calculations were performed with the canonical tensor, i.e.\ the minimal coupling one, but then we compared the result with the one we would have obtained by using an ``improved'' tensor which is traceless for a massless field, i.e.\ the conformal coupling one.
Interestingly enough, the quantum correction due to the acceleration field disappears in the latter case.
Finally, we integrated the above entropy current and obtained the entropy in the right Rindler wedge.
The result has two apparent properties, both in full agreement with known literature: the one is the so-called area law, namely the proportionality to the area of the 2-dimensional surface separating the two Rindler wedges, clearly reminiscent of the Bekenstein-Hawking formula for the black hole entropy; whereas the other is the divergence due to the proper temperature being infinite at the Killing horizon.
This entropy can also be seen as the one obtained by using the von Neumann formula with the reduced density operator of the right Rindler wedge.
By virtue of the factorization of the overall density operator, this is also, by definition, the entanglement entropy of the right Rindler wedge with the left Rindler wedge.
The calculation of the entanglement entropy in Quantum Field Theory is a very hard task and a hot topic in Quantum Information Theory in general.
Having been able to perform it in an exact analytical way by using a method completely orthogonal to the standard ones in the case of a system both mathematically non-trivial and of physical concern adds all the more interest to our result.
We also worked out the potential of the entropy current and recast the entropy in the form of a 2-dimensional surface integral.
Oddly, while the area law and the divergence at the Killing horizon are maintained, the expression of the entropy changes by a constant factor if the ``improved'' energy-momentum tensor is considered instead of the canonical one.
This is a somewhat unexpected feature that deserves further study.
\section{Thermal field theory with boost invariance}
\label{sec:boost_boost_invariance}
In order to be able to make any thermal field theory, we must first determine the density operator representing the state at hand, which will then enable us to calculate thermal expectation values.
In this Section, we will present the density operator of a relativistic quantum fluid with boost invariance, a non-trivial instance of a non-equilibrium state in the future light-cone particularly interesting in the context of heavy-ion collisions.
\subsection{Boost invariance}
In Chapter \ref{chapter:gteacceleration}, we considered a relativistic quantum fluid at global thermodynamic equilibrium in the right Rindler wedge.
This time, we focus on another patch of Minkowski spacetime, that is the future light-cone defined by $t>|z|$.
Much in the same way as we introduced the Rindler coordinates to span the right Rindler wedge, we now introduce a new set of hyperbolic coordinates in order to span the future light-cone.
These are the so-called \textit{Milne coordinates} $(\tau,{\bf x}_{\rm T},\eta)$, where the transverse coordinates ${\bf x}_{\rm T}=(x,y)$ are once again the same as Minkowski's, while $(\tau,\eta)$ are related to $(t,z)$ by
\begin{equation}
\tau \equiv \sqrt{t^2-z^2},\qquad
\eta \equiv \frac{1}{2}\log \left(\frac{t+z}{t-z}\right)
\end{equation}
whose inverse read
\begin{equation}
t\equiv \tau \cosh \eta,\qquad
z\equiv \tau \sinh \eta.
\end{equation}
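The two coordinate charts are indeed inverse to each other; a minimal numerical round-trip check (sample values of our own choosing):

```python
import math

tau, eta = 2.0, 0.7            # a sample point in the future light-cone

t = tau*math.cosh(eta)
z = tau*math.sinh(eta)

tau_back = math.sqrt(t*t - z*z)
eta_back = 0.5*math.log((t + z)/(t - z))

assert abs(tau_back - tau) < 1e-12
assert abs(eta_back - eta) < 1e-12
assert t > abs(z)              # the point indeed lies in the future light-cone t > |z|
```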
The hyperbolic time $\tau$ can be referred to as the \textit{Milne time}, while the coordinate $\eta$ is called the \textit{spacetime rapidity}.
The Milne coordinates span indeed the future light-cone, as shown in Figure \ref{fig:milne_coord},
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{figures/milne_coord.eps}
\caption{2-dimensional section of the future light-cone in the $(t,z)$ plane, spanned by the Milne coordinates $(\tau,\eta)$.
The hyperbolae are $\tau={\rm const.}$ hypersurfaces, while the straight lines through the origin are $\eta={\rm const.}$ hypersurfaces.}
\label{fig:milne_coord}
\end{center}
\end{figure}
and the reason for their name is that the line element is precisely the metric of the so-called \textit{Milne universe}
\begin{equation}
{\rm d} s^2=
{\rm d} t^2-{\rm d} x^2-{\rm d} y^2-{\rm d} z^2=
{\rm d} \tau^2-{\rm d} x^2-{\rm d} y^2-\tau^2 {\rm d} \eta^2.
\end{equation}
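The Milne form of the line element follows from substituting the differentials of $(t,z)$; a short sympy check treating ${\rm d}\tau$ and ${\rm d}\eta$ as formal symbols:

```python
import sympy as sp

tau, eta = sp.symbols('tau eta', positive=True)
dtau, deta = sp.symbols('dtau deta')

t = tau*sp.cosh(eta)
z = tau*sp.sinh(eta)

# pull the Minkowski differentials back to Milne coordinates
dt = sp.diff(t, tau)*dtau + sp.diff(t, eta)*deta
dz = sp.diff(z, tau)*dtau + sp.diff(z, eta)*deta

ds2 = sp.expand(dt**2 - dz**2)
assert sp.simplify(ds2 - (dtau**2 - tau**2*deta**2)) == 0
```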
The foliation of the future light-cone with the family of 3-dimensional spacelike hypersurfaces $\{\Sigma(\tau)\}$ given by the constant-$\tau$ hyperboloids is well-defined, for the vorticity-free condition \eqref{zubeq07} is met by the timelike vector field of unit magnitude orthogonal to them,
\begin{equation}
n^{\mu}=\frac{1}{\sqrt{t^2-z^2}}(t,0,0,z)=
(\cosh \eta,0,0,\sinh \eta).
\end{equation}
This vector field can be used to choose the hydrodynamic velocity field consistently with the foliation by taking $u^{\mu}=n^{\mu}$, which is essentially the $\beta$-frame, thus
\begin{equation}\label{boosteq07}
u^{\mu}=(\cosh \eta,0,0,\sinh \eta).
\end{equation}
Then, the tetrad onto which tensors are decomposed is most naturally given by
\begin{subequations}
\begin{align}
(\partial_{\tau})^{\mu}=&(\cosh \eta,0,0,\sinh \eta)=u^{\mu},\label{boosteq06a}\\
(\partial_x)^{\mu}=&(0,1,0,0)\equiv \hat{i}^{\mu},\label{boosteq06b}\\
(\partial_y)^{\mu}=&(0,0,1,0)\equiv \hat{j}^{\mu},\label{boosteq06c}\\
(\partial_{\eta})^{\mu}=&\tau(\sinh \eta,0,0,\cosh \eta)\equiv \tau \hat{\eta}^{\mu},\label{boosteq06d}
\end{align}
\end{subequations}
where the hat $\hat{}$ stands for unit magnitude, namely $\hat{i}^2=\hat{j}^2=\hat{\eta}^2=-1$.
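The stated normalizations and the mutual orthogonality of the tetrad vectors are immediate to verify; a compact sympy sketch:

```python
import sympy as sp

eta = sp.symbols('eta', real=True)

g = sp.diag(1, -1, -1, -1)                              # Minkowski metric, signature (+,-,-,-)
u      = sp.Matrix([sp.cosh(eta), 0, 0, sp.sinh(eta)])  # (d_tau)^mu = u^mu
ihat   = sp.Matrix([0, 1, 0, 0])
jhat   = sp.Matrix([0, 0, 1, 0])
etahat = sp.Matrix([sp.sinh(eta), 0, 0, sp.cosh(eta)])  # (d_eta)^mu / tau

dot = lambda v, w: sp.simplify((v.T*g*w)[0])

assert dot(u, u) == 1
assert dot(ihat, ihat) == -1 and dot(jhat, jhat) == -1
assert dot(etahat, etahat) == -1
assert dot(u, etahat) == 0                              # orthogonality
```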
For reasons that are going to become clear shortly, let us assume that the proper temperature and the fugacity are scalar functions depending on the Milne time $\tau$ only, that is $T=T(\tau)$ and $\zeta=\zeta(\tau)$, so the four-temperature reads
\begin{equation}\label{boosteq03}
\beta^{\mu}=\frac{u^{\mu}}{T(\tau)}=\frac{1}{T(\tau)}(\cosh \eta,0,0,\sinh \eta).
\end{equation}
Suppose also that the system reaches local thermodynamic equilibrium at some $\tau_0$, so, according to the discussion in Subsection \ref{sec:LE_to_GE_and_NE}, the stationary state in the Heisenberg picture is the local thermodynamic equilibrium density operator at $\tau_0$
\begin{equation}\label{boosteq01}
\widehat{\rho}=\widehat{\rho}_{\rm LE}(\tau_0)=
\frac{1}{Z_{\rm LE}(\tau_0)}\exp \left[-\int_{\Sigma(\tau_0)}{\rm d} \Sigma_{\mu}\left(\widehat{T}^{\mu \nu}\beta_{\nu}-\zeta \widehat{j}^{\mu}\right)\right],
\end{equation}
where the argument of the integral is intended to be evaluated at $\tau_0$.
The four-temperature is timelike on $\Sigma(\tau_0)$, hence thermodynamically meaningful, but it is not a Killing vector field, as can be checked.
The Milne universe, in fact, is not a stationary spacetime, and as such it does not possess any global timelike Killing vector field.
From the cosmological point of view, it represents instead a linearly expanding universe foliated by hyperboloids with constant negative curvature.
As a consequence, global thermodynamic equilibrium states cannot exist in the Milne universe, therefore the density operator \eqref{boosteq01} represents a non-equilibrium state.
In Subsection \ref{sec:LE_to_GE_and_NE} it was pointed out that if the spacelike hypersurface $\Sigma(\tau_0)$ is invariant under any transformation $g$ of a subgroup ${\rm G}$ of the proper orthochronous Poincar\'e group ${\rm IO}(1,3)^{\uparrow}_+$ and the thermodynamic fields change according to the transformation laws \eqref{zubeq12}, which we hereby report
\begin{equation}\label{boosteq02}
{D(g^{-1})^{\nu}}_{\sigma}\beta_{\nu}(g^{-1}(y))=\beta_{\sigma}(y),\qquad
\zeta(g^{-1}(y))=\zeta(y),
\end{equation}
then ${\rm G}$ is a symmetry group for the density operator, in the sense that $\widehat{U}(g)\widehat{\rho}\widehat{U}^{-1}(g)=\widehat{\rho}$ with $\widehat{U}(g)$ a unitary representation of $g\in {\rm G}$.
The thermodynamic fields appearing in the density operator \eqref{boosteq01}, namely the four-temperature \eqref{boosteq03} at $\tau_0$ and the fugacity $\zeta(\tau_0)$, meet the conditions \eqref{boosteq02} for the following transformations:
\begin{itemize}
\item boosts with hyperbolic angle $\xi$ along the $z$ direction, indicated as ${\sf L}_3(\xi)$,
\item translations along the $x$ and $y$ directions, indicated as ${\sf T}_1(a)$ and ${\sf T}_2(a)$ respectively,
\item rotations of angle $\varphi$ in the $(x,y)$ plane, indicated as ${\sf R}_{12}(\varphi)$,
\end{itemize}
and the hyperboloid $\Sigma(\tau_0)$ is invariant under the same transformations.
This is but the Euclidean group in 2 dimensions times the Lorentz group in 1 dimension, namely ${\rm IO}(2)\otimes {\rm SO}(1,1)$.
Moreover, the density operator is also invariant under:
\begin{itemize}
\item space reflections mapping $(x,y,z)$ to $(-x,-y,-z)$.
\end{itemize}
Altogether, we refer to this symmetry of the density operator as \textit{boost invariance}.
Accordingly, the description of the system represented by $\widehat{\rho}$ must have the same form in all frames boosted in the longitudinal direction.
This is essentially what in the literature is often known as the \textit{Bjorken symmetry}, named after J.\ D.\ Bjorken, who first proposed it as a symmetry approximately realized in the central-rapidity region in heavy-ion collisions \cite{Bjorken:1982qr}.
As it should be clear by now, the symmetries of the system constrain the form of quantities at the particular thermodynamic configuration at hand.
These constraints are given by the vanishing of Lie derivatives of such quantities along the vector fields associated to the symmetry group, which in the present case can be readily found
\begin{subequations}
\begin{align}
\left(\frac{{\rm d} {\sf L}_3(\xi)x}{{\rm d} \xi}\right)^{\mu}=&(z,0,0,t)=\tau \hat{\eta}^{\mu},\label{boosteq04a}\\
\left(\frac{{\rm d} {\sf T}_1(a)x}{{\rm d} a}\right)^{\mu}=&(0,1,0,0)=\hat{i}^{\mu},\label{boosteq04b}\\
\left(\frac{{\rm d} {\sf T}_2(a)y}{{\rm d} a}\right)^{\mu}=&(0,0,1,0)=\hat{j}^{\mu},\label{boosteq04c}\\
\left(\frac{{\rm d} {\sf R}_{12}(\varphi)x}{{\rm d} \varphi}\right)^{\mu}=&(0,-y,x,0)=r \hat{\varphi}^{\mu}\label{boosteq04d},
\end{align}
\end{subequations}
where $r\equiv \sqrt{x^2+y^2}$ and $\hat{\varphi}^{\mu}\equiv \frac{1}{r}(0,-y,x,0)$.
Note that \eqref{boosteq04a}--\eqref{boosteq04c} are just \eqref{boosteq06d}, \eqref{boosteq06b} and \eqref{boosteq06c} of the above tetrad, which, by construction, have vanishing Lie derivatives along each other.
Scalars, such as the proper temperature $T$ and the fugacity $\zeta$, can only depend on $\tau$ as a consequence of ${\cal L}_XT=0$ and ${\cal L}_X\zeta=0$ with $X$ any of \eqref{boosteq04a}--\eqref{boosteq04d}
\begin{equation}\label{boosteq05}
T=T(\tau),\qquad \zeta=\zeta(\tau).
\end{equation}
Let us prove it for $T$ as an example.
With little effort, one can show that
\begin{equation}
{\cal L}_{\hat{i}}T=\hat{i}^{\mu}\partial_{\mu}T=\partial_xT=0,
\end{equation}
\begin{equation}
{\cal L}_{\hat{j}}T=\hat{j}^{\mu}\partial_{\mu}T=\partial_yT=0,
\end{equation}
\begin{equation}
{\cal L}_{\tau \hat{\eta}}T=\tau \hat{\eta}^{\mu}\partial_{\mu}T=\partial_{\eta}T=0,
\end{equation}
\begin{equation}
{\cal L}_{r\hat{\varphi}}T=r\hat{\varphi}^{\mu}\partial_{\mu}T=-y\partial_xT+x\partial_yT=0.
\end{equation}
The first two equations imply that $T$ can depend neither on $x$ nor on $y$, making the last equation trivial, while the third equation forbids the dependence on $\eta$, so the only possible dependence left is in fact on $\tau$.
The same holds for $\zeta$, hence \eqref{boosteq05}, and that is the reason why we previously assumed the temperature and the fugacity to be scalar functions of $\tau$ only.
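The fact that $\tau$ is left invariant by all four generators, which underlies the proof above, can also be checked symbolically. The following minimal sketch assumes the \texttt{sympy} library is available:

```python
# Symbolic check (assuming sympy is available) that the Milne time
# tau = sqrt(t^2 - z^2) is annihilated by all four symmetry generators,
# so invariant scalars such as T and zeta can depend on tau at most.
import sympy as sp

t, x, y, z = sp.symbols('t x y z', positive=True)
tau = sp.sqrt(t**2 - z**2)
coords = (t, x, y, z)

# Generator components in (t, x, y, z) order, cf. the Lie-derivative
# equations above.
generators = [
    (z, 0, 0, t),    # longitudinal boost, i.e. tau * eta-hat
    (0, 1, 0, 0),    # translation along x
    (0, 0, 1, 0),    # translation along y
    (0, -y, x, 0),   # rotation in the (x, y) plane, i.e. r * phi-hat
]

for X in generators:
    lie_derivative = sum(Xmu * sp.diff(tau, q) for Xmu, q in zip(X, coords))
    assert sp.simplify(lie_derivative) == 0
```

The same loop applied to $x$, $y$ or $\eta$ would fail for at least one generator, which is precisely the content of the proof above.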
This philosophy can be further applied to vectors and tensors of any rank.
A vector field $V^{\mu}$ is decomposed, in principle, as
\begin{equation}
V^{\mu}=A(\tau)u^{\mu}+B(\tau)\hat{i}^{\mu}+C(\tau)\hat{j}^{\mu}+D(\tau)\tau \hat{\eta}^{\mu},
\end{equation}
however, the constraint ${\cal L}_{r\hat{\varphi}}V=0$ implies $B=C=0$, and the invariance under space reflections demands $D=0$, as they map $\eta \mapsto -\eta$, hence
\begin{equation}
V^{\mu}=A(\tau)u^{\mu}.
\end{equation}
Concerning vector fields, a well-known feature of the Bjorken model is the existence of a unique timelike vector field of unit magnitude compatible with boost invariance, and this is given precisely by \eqref{boosteq07} \cite{Bjorken:1982qr}.
That is to say that the hydrodynamic velocity field is in fact completely fixed by the symmetry, which is a very powerful statement for the following reason.
At global thermodynamic equilibrium, the thermodynamic fields are determined by the geometric conditions \eqref{zubeq10}, namely the four-temperature must be a Killing vector field and the fugacity a constant.
Out of global thermodynamic equilibrium, these conditions no longer hold and the thermodynamic fields are in principle arbitrary, so we have no information on them whatsoever.
This is one of the reasons that make systems out of global thermodynamic equilibrium much harder to study.
However, boost invariance completely fixes the hydrodynamic velocity field and constrains the proper temperature and the fugacity to depend on the Milne time only; therefore, the four-temperature cannot but have the form \eqref{boosteq03}.
The expressions of the temperature and the fugacity as functions of $\tau$ are obtained from the constraint equations of local thermodynamic equilibrium \eqref{zubeq04}, that is
\begin{equation}
n_{\mu}\mathrm{tr}(\widehat{\rho}_{\rm LE}(\tau)\widehat{T}^{\mu \nu})_{\rm ren}=n_{\mu}T^{\mu \nu},\qquad
n_{\mu}\mathrm{tr}(\widehat{\rho}_{\rm LE}(\tau)\widehat{j}^{\mu})_{\rm ren}=n_{\mu}j^{\mu}.
\end{equation}
Thus, although out of thermodynamic equilibrium, boost invariance actually provides us with information on the thermodynamic fields, enough so to at least attempt calculations.
This constitutes a precious opportunity for us to scratch the surface of the vast and mostly unexplored topic that is non-equilibrium physics.
In particular, we will be interested in the energy-momentum tensor as usual.
Much in the same way as above, the most general form of a symmetric rank-2 tensor, such as the thermal expectation value of the energy-momentum tensor, is constrained to be
\begin{equation}\label{boosteq12}
T^{\mu \nu}=\rho(\tau)u^{\mu}u^{\nu}+p^{\rm T}(\tau)(\hat{i}^{\mu}\hat{i}^{\nu}+\hat{j}^{\mu}\hat{j}^{\nu})+p^{\rm L}(\tau)\hat{\eta}^{\mu}\hat{\eta}^{\nu},
\end{equation}
where $\rho$ is the energy density, $p^{\rm T}$ the transverse pressure and $p^{\rm L}$ the longitudinal pressure.
For the sake of clarity, the left-hand side stands explicitly for
\begin{equation}\label{boosteq08}
T^{\mu \nu}=\mathrm{tr}(\widehat{\rho}\widehat{T}^{\mu \nu})=
\mathrm{tr}(\widehat{\rho}_{\rm LE}(\tau_0)\widehat{T}^{\mu \nu}(\tau)),
\end{equation}
namely the non-equilibrium density operator is at fixed $\tau_0$, whereas the energy-momentum tensor quantum operator varies with $\tau$, hence the $\tau$-dependence of $\rho$, $p^{\rm T}$ and $p^{\rm L}$.
The pressures in the $x$ and $y$ directions are equal and given by the transverse pressure due to rotational symmetry in the $(x,y)$ plane.
However, as full rotational symmetry in the $(x,y,z)$ space is lacking, the longitudinal pressure will be in general different.
This is reminiscent of the case of global thermodynamic equilibrium with acceleration studied in Chapter \ref{chapter:gteacceleration}, where an anisotropic quantum term was present in the longitudinal direction.
Thus, the anisotropy in the current case can also be expected to be a quantum effect.
Provided an underlying Quantum Field Theory is chosen, the three scalar functions $\rho$, $p^{\rm T}$ and $p^{\rm L}$ can be determined once we are able to calculate thermal expectation values with the non-equilibrium density operator as in \eqref{boosteq08}.
This is a task one is not able to accomplish in general.
A possible approach to tackle this problem is that of ideal hydrodynamics, where local thermodynamic equilibrium is assumed.
In that case, dissipative effects are assumed to be small, so that the non-equilibrium state is in fact close to thermodynamic equilibrium, and by equation \eqref{zubeq09} the non-equilibrium density operator is approximated by the local thermodynamic equilibrium one.
However, this should not be misleading: unless one uses linear response theory to further approximate the local thermodynamic equilibrium density operator with the global thermodynamic equilibrium one \cite{Becattini:2014yxa}, the calculation of thermal expectation values at local thermodynamic equilibrium is usually far from feasible.
And even if it were, the actual non-equilibrium expressions of $\rho$, $p^{\rm T}$ and $p^{\rm L}$, that is the constitutive equations, would still be unknown.
Nevertheless, for our system at hand, boost invariance is so powerful that a non-equilibrium analysis turns out to be possible to some extent, at least for a free Quantum Field Theory.
For the sake of simplicity, in the following we will suppose that there is no charged current present, that is $\zeta \equiv 0$.
Thus, by using the same symbols as in \eqref{gteq10}, the non-equilibrium density operator is
\begin{equation}\label{boosteq16}
\widehat{\rho}=\frac{1}{Z}\exp \left[-\frac{\widehat{\Pi}(\tau_0)}{T(\tau_0)}\right],
\end{equation}
where $Z=Z_{\rm LE}(\tau_0)$, $n_{\mu}=u_{\mu}$ and the operator $\widehat{\Pi}(\tau_0)$ is given by
\begin{equation}
\widehat{\Pi}(\tau_0)=
\int_{\Sigma(\tau_0)}{\rm d} \Sigma \,\widehat{T}^{\mu \nu}(\tau_0)u_{\mu}u_{\nu}=
\tau_0\int_{-\infty}^{+\infty}{\rm d}^2{\rm x_T}\,{\rm d} \eta \,\widehat{T}^{\mu \nu}(\tau_0)u_{\mu}u_{\nu}
\end{equation}
with ${\rm d} \Sigma=\tau_0\,{\rm d}^2{\rm x_T}\,{\rm d} \eta$ the measure on $\Sigma(\tau_0)$ in Milne coordinates.
All the integration variables are intended to be integrated from $-\infty$ to $+\infty$.
The expression of the energy-momentum tensor quantum operator depends on the specific underlying Quantum Field Theory.
For a free Quantum Field Theory, this density operator represents a relativistic quantum fluid with hydrodynamic velocity field given by \eqref{boosteq07} which is initially interacting, ceases interactions at $\tau_0$ and propagates freely thereafter.
Thus, in the classical limit, the kinetic theory solutions of the free-streaming Boltzmann equation starting from the local thermodynamic
equilibrium expressions with proper temperature $T(\tau_0)$ and flow velocity $u^{\mu}$ at $\tau_0$, reported in Appendix \ref{appendix:free_sreaming}, are expected to be recovered.
We want to conclude with the following remark.
In Subsection \ref{sec:LE_to_GE_and_NE}, the question arose if a symmetry group for the stationary state $\widehat{\rho}=\widehat{\rho}_{\rm LE}(\tau_0)$ is also a symmetry group for the local thermodynamic equilibrium density operator $\widehat{\rho}_{\rm LE}(\tau)$ at any $\tau$.
Intuitively, if the hypersurface $\Sigma(\tau)$ is mapped into itself and the thermodynamic fields change according to \eqref{boosteq02} at any $\tau$, then the answer is yes.
The constant-$\tau$ hypersurfaces, the four-temperature \eqref{boosteq03} and the fugacity $\zeta(\tau)$ do indeed meet those conditions at any $\tau$, therefore boost invariance is also a symmetry of $\widehat{\rho}_{\rm LE}(\tau)$.
The expression of the local thermodynamic equilibrium density operator is formally the same as that of the non-equilibrium one at $\tau$ instead of $\tau_0$
\begin{equation}\label{boosteq15}
\widehat{\rho}_{\rm LE}(\tau)=\frac{1}{Z_{\rm LE}(\tau)}\exp \left[-\frac{\widehat{\Pi}(\tau)}{T(\tau)}\right],
\end{equation}
where $\widehat{\Pi}(\tau)$ is in general different from $\widehat{\Pi}(\tau_0)$ and given by
\begin{equation}
\widehat{\Pi}(\tau)=
\int_{\Sigma(\tau)}{\rm d} \Sigma \,\widehat{T}^{\mu \nu}(\tau)u_{\mu}u_{\nu}=
\tau \int_{-\infty}^{+\infty}{\rm d}^2{\rm x_T}\,{\rm d} \eta \,\widehat{T}^{\mu \nu}(\tau)u_{\mu}u_{\nu}.
\end{equation}
In the case of a free Quantum Field Theory, the physical system described by this density operator is a relativistic quantum fluid with hydrodynamic velocity field given by \eqref{boosteq07} which never ceases to interact and does not decouple.
Of course, the forms of scalar, vector and tensor quantities are still those discussed above, as they descend purely from symmetry considerations.
In particular, the expression of the thermal expectation value of the energy-momentum tensor is
\begin{equation}\label{boosteq13}
\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}=
\mathrm{tr}(\widehat{\rho}_{\rm LE}(\tau)\widehat{T}^{\mu \nu}(\tau))=
\rho_{\rm LE}(\tau)u^{\mu}u^{\nu}+p^{\rm T}_{\rm LE}(\tau)(\hat{i}^{\mu}\hat{i}^{\nu}+\hat{j}^{\mu}\hat{j}^{\nu})+p^{\rm L}_{\rm LE}(\tau)\hat{\eta}^{\mu}\hat{\eta}^{\nu}.
\end{equation}
Since the non-equilibrium density operator $\widehat{\rho}$ is simply the local thermodynamic equilibrium one $\widehat{\rho}_{\rm LE}(\tau)$ evaluated at $\tau_0$, the operator $\widehat{\Pi}(\tau_0)$ will just be $\widehat{\Pi}(\tau)$ evaluated at $\tau_0$.
However, comparing equations \eqref{boosteq08} and \eqref{boosteq13}, it is clear that the non-equilibrium thermal expectation value \eqref{boosteq12} is not the same as the local thermodynamic equilibrium one \eqref{boosteq13} evaluated at $\tau_0$.
Once again, the energy-momentum tensor quantum operator, and therefore the scalar functions in the above thermal expectation value, depend on the specific underlying Quantum Field Theory.
In analogy with Chapter \ref{chapter:gteacceleration}, in the following we will consider a free real scalar quantum field in the future light-cone.
\subsection{Free scalar field theory in the future light-cone}
The problem of a free real scalar quantum field in the future light-cone is a known one in the literature, especially in Quantum Field Theory in curved spacetime and cosmology in general \cite{Birrell:1982ix, Padmanabhan:1990fm, Arcuri:1994kd}, but also in the context of multiparticle production \cite{Berges:2017zws, Berges:2017hne}.
In \cite{Arcuri:1994kd} it was shown that the free real scalar field can be expanded in the future light-cone in a complete set of solutions that do not mix the positive- and negative-frequency modes of the usual plane waves expansion.
Then, S.\ Akkelin showed in \cite{Akkelin:2018hpu} that these modes can be obtained starting from the usual plane waves modes.
Here, we follow his derivation, using a slightly different notation and adding some details.
The Klein--Gordon equation for a free real scalar quantum field $\widehat{\psi}$ of mass $m$ in Milne coordinates reads
\begin{equation}
(\Box+m^2)\widehat{\psi}=
\left[\frac{1}{\tau}\partial_{\tau}(\tau \partial_{\tau})-\partial_x^2-\partial_y^2-\frac{1}{\tau^2}\partial_{\eta}^2+m^2\right]\widehat{\psi}=0.
\end{equation}
The solution is given by the following quantum field expansion
\begin{equation}\label{boosteq09}
\widehat{\psi}(\tau,{\bf x}_{\rm T},\eta)=
\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{4\pi \sqrt{2}}
\left[h({\sf p},\tau){\rm e}^{i({\bf p}_{\rm T} \cdot {\bf x}_{\rm T}+\mu \eta)}\widehat{b}_{\sf p}+h^*({\sf p},\tau){\rm e}^{-i({\bf p}_{\rm T} \cdot {\bf x}_{\rm T}+\mu \eta)}\widehat{b}_{\sf p}^{\dagger}\right].
\end{equation}
Here, we indicated the momentum in Milne coordinates as ${\sf p}\equiv({\bf p}_{\rm T},\mu)$ to distinguish it from the Cartesian vector ${\bf p}=({\bf p}_{\rm T},p_z)$, where ${\bf p}_{\rm T}=(p_x,p_y)$ is the transverse momentum and $\mu$ is the momentum component along the $\eta$-direction.
The integral is intended to run from $-\infty$ to $+\infty$ for each integration variable.
The $\tau$-dependence is stored in the complex functions $h$ and $h^*$ solutions of the equation
\begin{equation}
\left[\frac{1}{\tau}\partial_{\tau}(\tau \partial_{\tau})+m_{\rm T}^2+\frac{\mu^2}{\tau^2}\right]h({\sf p},\tau)=0,
\end{equation}
which is indeed a Bessel equation with the transverse mass given by
\begin{equation}
m_{\rm T}=\sqrt{{\bf p}_{\rm T}^2+m^2}.
\end{equation}
In fact, $h$ and $h^*$ are defined by means of the Hankel functions ${\rm H}^{(2)}$ and ${\rm H}^{(1)}$ as
\begin{equation}\label{boosteq72}
\begin{split}
h({\sf p},\tau)\equiv&
-i{\rm e}^{\frac{\pi}{2}\mu}{\rm H}^{(2)}_{i\mu}(m_{\rm T} \tau),\\
h^*({\sf p},\tau)\equiv&
i{\rm e}^{-\frac{\pi}{2}\mu}{\rm H}^{(1)}_{i\mu}(m_{\rm T} \tau),
\end{split}
\end{equation}
which can be checked to be the complex conjugate of each other and also invariant under momentum reflections ${\sf p}\mapsto -{\sf p}$ by using the integral representation \cite{Gradshteyn:1702455}
\begin{equation}\label{boosteq10}
\begin{split}
{\rm H}^{(2)}_{i\mu}(m_{\rm T} \tau)=&
-\frac{1}{i\pi}{\rm e}^{-\frac{\pi}{2}\mu}\int_{-\infty}^{+\infty}{\rm d} \theta \,{\rm e}^{-im_{\rm T} \tau \cosh \theta+i\mu \theta}\\
{\rm H}^{(1)}_{i\mu}(m_{\rm T} \tau)=&
\frac{1}{i\pi}{\rm e}^{\frac{\pi}{2}\mu}\int_{-\infty}^{+\infty}{\rm d} \theta \,{\rm e}^{im_{\rm T} \tau \cosh \theta-i\mu \theta}.
\end{split}
\end{equation}
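Both properties of $h$, as well as the mode equation itself, lend themselves to a quick numerical cross-check. The following sketch assumes the \texttt{mpmath} library is available and uses arbitrarily chosen parameter values:

```python
# Numerical cross-check (assuming mpmath is available) that
# h = -i e^{pi mu / 2} H^(2)_{i mu}(m_T tau) solves the mode equation
# [(1/tau) d/dtau (tau d/dtau) + m_T^2 + mu^2 / tau^2] h = 0,
# and that h^* as defined above is the complex conjugate of h.
import mpmath as mp

mp.mp.dps = 30                      # high working precision
mu, mT, tau = mp.mpf('0.7'), mp.mpf('1.3'), mp.mpf('2.1')   # arbitrary values

h = lambda s: -1j * mp.exp(mp.pi * mu / 2) * mp.hankel2(1j * mu, mT * s)
hstar = 1j * mp.exp(-mp.pi * mu / 2) * mp.hankel1(1j * mu, mT * tau)

# (1/tau) d/dtau (tau dh/dtau) = h'' + h'/tau
residual = (mp.diff(h, tau, 2) + mp.diff(h, tau, 1) / tau
            + (mT**2 + mu**2 / tau**2) * h(tau))
assert abs(residual) < mp.mpf('1e-10')
assert abs(mp.conj(h(tau)) - hstar) < mp.mpf('1e-15')
```

The second assertion relies on the reflection formulas of the Hankel functions, which make the exponential prefactors work out exactly as in \eqref{boosteq72}.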
The following quantity is also usually defined in Milne coordinates
\begin{equation}\label{boosteq30}
\omega^2({\sf p},\tau)\equiv m_{\rm T}^2+\frac{\mu^2}{\tau^2}={\sf p}^2+m^2,
\end{equation}
which is invariant under momentum reflections ${\sf p}\mapsto -{\sf p}$.
Finally, $\widehat{b}_{\sf p}^{\dagger}$ and $\widehat{b}_{\sf p}$ are creation and annihilation operators respectively in the future light-cone obeying the usual algebra
\begin{equation}\label{boosteq18}
\begin{split}
[\widehat{b}_{\sf p},\widehat{b}_{{\sf p}'}^{\dagger}]=&\delta^2({\bf p}_{\rm T}-{\bf p}_{\rm T}')\,\delta(\mu-\mu'),\\
[\widehat{b}_{\sf p},\widehat{b}_{{\sf p}'}]=&0=[\widehat{b}_{\sf p}^{\dagger},\widehat{b}_{{\sf p}'}^{\dagger}].
\end{split}
\end{equation}
This allows us to clarify the interpretation of $\mu$, which turns out to be the eigenvalue of the boost operator along the longitudinal direction $\widehat{K}_z$, that is
\begin{equation}
\widehat{U}({\sf L}_3(\xi))\widehat{b}_{\sf p}^{\dagger}\widehat{U}^{-1}({\sf L}_3(\xi))={\rm e}^{-i\xi \widehat{K}_z}\widehat{b}_{\sf p}^{\dagger}{\rm e}^{i\xi \widehat{K}_z}={\rm e}^{-i\xi \mu}\widehat{b}_{\sf p}^{\dagger}
\end{equation}
with $\widehat{U}$ a unitary representation.
In other words, the operator $\widehat{b}_{\sf p}^{\dagger}$ creates a state with eigenvalue $\mu$, much in the same way as for the angular momentum operator in ordinary Quantum Mechanics.
In \cite{Akkelin:2018hpu} it was shown explicitly that the quantum field expansion \eqref{boosteq09} in the future light-cone can also be worked out starting from the standard plane waves expansion in Minkowski spacetime \eqref{gteq27}.
The frequency of the plane wave \eqref{gteq49} is
\begin{equation}
\omega_{\bf p}^2=m_{\rm T}^2+p_z^2.
\end{equation}
One can then express $\omega_{\bf p}$ and $p_z$ in terms of a hyperbolic angle ${\rm y}$ called the \textit{momentum rapidity}, or simply the \textit{rapidity}, as
\begin{equation}
\omega_{\bf p}=m_{\rm T} \cosh {\rm y},\qquad
p_z=m_{\rm T} \sinh {\rm y}.
\end{equation}
Thus, by performing the following ``Fourier-like'' transformation of the plane wave creation and annihilation operators $\widehat{a}_{\bf p}^{\dagger}$ and $\widehat{a}_{\bf p}$ in the plane waves expansion
\begin{equation}\label{boosteq11}
\widehat{a}_{\bf p}^{\dagger}=
\frac{1}{\sqrt{2\pi m_{\rm T} \cosh {\rm y}}}\int_{-\infty}^{+\infty}{\rm d} \mu \,{\rm e}^{-i\mu {\rm y}}\widehat{b}_{\sf p}^{\dagger},
\end{equation}
changing integration variable from $p_z$ to ${\rm y}$ and then to $\theta={\rm y}-\eta$ and exploiting the integral representation \eqref{boosteq10}, the field expansion \eqref{boosteq09} is finally obtained \cite{Akkelin:2018hpu}.
Equation \eqref{boosteq11} shows that the creation operators in the future light-cone depend only on the creation operators in Minkowski spacetime, and same for the corresponding annihilation operators.
No mixing between creation and annihilation operators is present.
Consequently, the vacuum state of the operators $\widehat{b}_{\sf p}$ in the future light-cone coincides with that of $\widehat{a}_{\bf p}$ in Minkowski spacetime, that is the Minkowski vacuum $|0_{\rm M}\rangle$.
The ultimate reason for this is that the function $h$ can be expressed as a linear combination of plane wave modes with only positive frequencies \cite{Mukhanov:2007zz}.
This is clearly at variance with the case of global thermodynamic equilibrium with acceleration considered in Chapter \ref{chapter:gteacceleration}, where the quantum field was expanded in the Rindler wedges in terms of the Rindler creation and annihilation operators, whose relation with the plane waves creation and annihilation operators was a non-trivial Bogolyubov transformation, therefore the Rindler vacuum and the Minkowski vacuum did not coincide.
The consequences of this fact will be further explored in Subsection \ref{sec:boost_lte_renormalization}.
Now we turn to the density operator.
It is important to stress that the expression of the non-equilibrium density operator \eqref{boosteq16} is exactly the same as the local thermodynamic equilibrium one \eqref{boosteq15} evaluated at $\tau_0$.
Whether one or the other is used makes indeed a difference when it comes to calculating thermal expectation values, as exemplified by equations \eqref{boosteq08} and \eqref{boosteq13}.
However, as far as the very structure of the density operator is concerned, they really are equivalent.
Therefore, without loss of generality, we can let $\tau$ be arbitrary and focus on the local thermodynamic equilibrium density operator for the rest of this part.
Then, in Section \ref{sec:boost_lte_analysis} we will perform a local thermodynamic equilibrium analysis, whereas in Section \ref{sec:boost_ne_analysis} a non-equilibrium one, thus distinguishing explicitly the two cases.
Moreover, the ${\sf p}$-dependence will be omitted throughout for ease of notation, restoring it only when necessary.
As for the energy-momentum tensor operator, we consider again the canonical tensor
\begin{equation}\label{boosteq50}
\widehat{T}^{\mu \nu}=\frac{1}{2}\left(\partial^{\mu}\widehat{\psi}\partial^{\nu}\widehat{\psi}+\partial^{\nu}\widehat{\psi}\partial^{\mu}\widehat{\psi}\right)-g^{\mu \nu}\widehat{\cal L},\qquad
\widehat{\cal L}=\frac{1}{2}\left(g^{\mu \nu}\partial_{\mu}\widehat{\psi}\partial_{\nu}\widehat{\psi}-m^2\widehat{\psi}^2\right).
\end{equation}
In order to work $\widehat{\Pi}(\tau)$ out, the above expression must be contracted twice with the hydrodynamic velocity field, yielding
\begin{equation}\label{boosteq34}
\widehat{T}^{\mu \nu}u_{\mu}u_{\nu}=
\frac{1}{2}\left[(\partial_{\tau}\widehat{\psi})^2+(\partial_x\widehat{\psi})^2+(\partial_y\widehat{\psi})^2+\frac{1}{\tau^2}(\partial_{\eta}\widehat{\psi})^2+m^2\widehat{\psi}^2\right].
\end{equation}
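This contraction is pure algebra and can be double-checked symbolically. The sketch below assumes the \texttt{sympy} library is available and uses the Milne metric $g_{\mu\nu}={\rm diag}(1,-1,-1,-\tau^2)$ with $u^{\mu}=(1,0,0,0)$, so that $\widehat{T}^{\mu\nu}u_{\mu}u_{\nu}=\widehat{T}^{\tau\tau}$:

```python
# Symbolic check (assuming sympy is available) of the contraction
# T^{mu nu} u_mu u_nu for the canonical tensor in Milne coordinates.
# psi is treated as a commuting function, which suffices for this identity.
import sympy as sp

tau, x, y, eta, m = sp.symbols('tau x y eta m', positive=True)
psi = sp.Function('psi')(tau, x, y, eta)
coords = (tau, x, y, eta)

g = sp.diag(1, -1, -1, -tau**2)          # covariant Milne metric g_{mu nu}
ginv = g.inv()                           # contravariant metric g^{mu nu}

dpsi = [sp.diff(psi, q) for q in coords]             # partial_mu psi
dpsi_up = [ginv[i, i] * dpsi[i] for i in range(4)]   # partial^mu psi (diagonal metric)

L = sp.Rational(1, 2) * (sum(du * d for du, d in zip(dpsi_up, dpsi))
                         - m**2 * psi**2)
# With u^mu = (1,0,0,0) also u_mu = (1,0,0,0), so T^{mu nu} u_mu u_nu = T^{tau tau}
T00 = dpsi_up[0] * dpsi[0] - ginv[0, 0] * L

expected = sp.Rational(1, 2) * (dpsi[0]**2 + dpsi[1]**2 + dpsi[2]**2
                                + dpsi[3]**2 / tau**2 + m**2 * psi**2)
assert sp.simplify(T00 - expected) == 0
```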
By using the quantum field expansion \eqref{boosteq09} together with the integral representation of the Hankel functions \eqref{boosteq10} and exploiting the invariance of $h(\tau)$ and $h^*(\tau)$ under reflections ${\sf p}\mapsto -{\sf p}$, the following expression is obtained
\begin{equation}
\begin{split}
\widehat{\Pi}(\tau)=&
\tau \int {\rm d}^2{\rm p_T}\,{\rm d} \mu \frac{\pi}{8}\left[\left(|\partial_{\tau}h(\tau)|^2+\omega^2(\tau)|h(\tau)|^2\right)\left(\widehat{b}_{\sf p}\widehat{b}_{\sf p}^{\dagger}+\widehat{b}_{\sf p}^{\dagger}\widehat{b}_{\sf p}\right)\right.\\
&\left.+\left((\partial_{\tau}h(\tau))^2+\omega^2(\tau)h^2(\tau)\right)\widehat{b}_{\sf p}\widehat{b}_{-{\sf p}}+
\left((\partial_{\tau}h^*(\tau))^2+\omega^2(\tau)(h^*(\tau))^2\right)\widehat{b}_{\sf p}^{\dagger}\widehat{b}_{-{\sf p}}^{\dagger}\right].
\end{split}
\end{equation}
It is convenient to define the positive real function $K(\tau)$ and the complex function $\Lambda(\tau)$ as
\begin{equation}\label{boosteq35}
K(\tau)\equiv \frac{\pi \tau}{4\omega(\tau)}\left[|\partial_{\tau}h(\tau)|^2+\omega^2(\tau)|h(\tau)|^2\right],
\end{equation}
\begin{equation}\label{boosteq36}
\Lambda(\tau)\equiv \frac{\pi \tau}{4\omega(\tau)}\left[(\partial_{\tau}h(\tau))^2+\omega^2(\tau)h^2(\tau)\right].
\end{equation}
Let us stress that the ${\sf p}$-dependence is understood.
Since $h(\tau)$ and $\omega(\tau)$ are invariant under momentum reflections ${\sf p}\mapsto -{\sf p}$, so are $K(\tau)$ and $\Lambda(\tau)$, but more interestingly they are related by
\begin{equation}\label{boosteq14}
K^2(\tau)-|\Lambda(\tau)|^2=1.
\end{equation}
This occurs thanks to the left-hand side being proportional to the Wronskian of the Hankel functions
\begin{equation}
K^2(\tau)-|\Lambda(\tau)|^2=
-\left(\frac{\pi m_{\rm T} \tau}{4}\right)^2
\left(W[{\rm H}^{(2)}_{i\mu}(m_{\rm T} \tau),{\rm H}^{(1)}_{i\mu}(m_{\rm T} \tau)]\right)^2,
\end{equation}
which is not only known, but turns out to be a very simple function of the argument \cite{Gradshteyn:1702455}
\begin{equation}
W[{\rm H}^{(2)}_{i\nu}(x),{\rm H}^{(1)}_{i\nu}(x)]=
{\rm H}^{(2)'}_{i\nu}(x){\rm H}^{(1)}_{i\nu}(x)-{\rm H}^{(1)'}_{i\nu}(x){\rm H}^{(2)}_{i\nu}(x)=
\frac{4i}{\pi x}.
\end{equation}
This will greatly simplify the calculation of thermal expectation values at local thermodynamic equilibrium in the next Section.
The property \eqref{boosteq14} can also be rephrased by expressing $K(\tau)$ and $\Lambda(\tau)$ in terms of some hyperbolic angle $\Theta({\sf p},\tau)$ and phase $\chi({\sf p},\tau)$
\begin{equation}\label{boosteq22}
K(\tau)=\cosh(2\Theta(\tau)),\qquad
\Lambda(\tau)={\rm e}^{i\chi(\tau)}\sinh(2\Theta(\tau)),
\end{equation}
which will be useful later on.
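Relation \eqref{boosteq14} also lends itself to a direct numerical check. A minimal sketch, again assuming the \texttt{mpmath} library and with arbitrarily chosen parameter values:

```python
# Numerical check (assuming mpmath is available) of K^2 - |Lambda|^2 = 1,
# which descends from the Wronskian of the Hankel functions.
import mpmath as mp

mp.mp.dps = 30
mu, mT, tau = mp.mpf('0.4'), mp.mpf('1.1'), mp.mpf('1.7')   # arbitrary values
omega = mp.sqrt(mT**2 + mu**2 / tau**2)

h = lambda s: -1j * mp.exp(mp.pi * mu / 2) * mp.hankel2(1j * mu, mT * s)
dh = mp.diff(h, tau)                 # dh/dtau

pref = mp.pi * tau / (4 * omega)
K = pref * (abs(dh)**2 + omega**2 * abs(h(tau))**2)
Lam = pref * (dh**2 + omega**2 * h(tau)**2)

assert abs(K**2 - abs(Lam)**2 - 1) < mp.mpf('1e-12')
```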
For now, $\widehat{\Pi}(\tau)$ is recast in terms of $K(\tau)$ and $\Lambda(\tau)$ as
\begin{equation}\label{boosteq17}
\widehat{\Pi}(\tau)=\int {\rm d}^2{\rm p_T}\,{\rm d} \mu \frac{\omega(\tau)}{2}
\left[K(\tau)\left(\widehat{b}_{\sf p}\widehat{b}_{\sf p}^{\dagger}+\widehat{b}_{\sf p}^{\dagger}\widehat{b}_{\sf p}\right)+\Lambda(\tau)\widehat{b}_{\sf p}\widehat{b}_{-{\sf p}}+\Lambda^*(\tau)\widehat{b}_{\sf p}^{\dagger}\widehat{b}_{-{\sf p}}^{\dagger}\right].
\end{equation}
At variance with equations \eqref{gteq28} and \eqref{gteq29} found in Chapter \ref{chapter:gteacceleration}, this expression is not diagonal in the creation and annihilation operators due to the terms proportional to $\Lambda(\tau)$ and $\Lambda^*(\tau)$.
This is quite unfortunate, for if it were diagonal, thermal expectation values of products of creation and annihilation operators, whence of operators quadratic in the field, could be calculated by using standard methods.
In the following, we look for a suitable Bogolyubov transformation to diagonalize $\widehat{\Pi}(\tau)$.
\subsection{Diagonalization of the density operator}
Let us define a new set of creation and annihilation operators by combining $\widehat{b}_{\sf p}$ and $\widehat{b}_{\sf p}^{\dagger}$ to attempt a diagonalization of $\widehat{\Pi}(\tau)$ in \eqref{boosteq17}, whence $\widehat{\rho}_{\rm LE}(\tau)$ in \eqref{boosteq15}.
Since the terms we wish to get rid of mix $\widehat{b}_{\sf p}$ with $\widehat{b}_{-{\sf p}}$ and their Hermitian conjugates, we start with a Bogolyubov transformation of the following form
\begin{equation}\label{boosteq19}
\begin{split}
\widehat{\xi}_{\sf p}^{\dagger}(\tau)\equiv &A({\sf p},\tau)\widehat{b}_{\sf p}^{\dagger}-B({\sf p},\tau)\widehat{b}_{-{\sf p}}\\
\widehat{\xi}_{\sf p}(\tau)\equiv &A^*({\sf p},\tau)\widehat{b}_{\sf p}-B^*({\sf p},\tau)\widehat{b}_{-{\sf p}}^{\dagger}
\end{split}
\end{equation}
where $A$ and $B$ are complex functions to be determined.
In order for $\widehat{\xi}_{\sf p}^{\dagger}(\tau)$ and $\widehat{\xi}_{\sf p}(\tau)$ to be creation and annihilation operators respectively, they are demanded to obey the usual algebra
\begin{equation}\label{boosteq27}
\begin{split}
[\widehat{\xi}_{\sf p}(\tau),\widehat{\xi}_{{\sf p}'}^{\dagger}(\tau)]=&
\delta^2({\bf p}_{\rm T}-{\bf p}_{\rm T}')\,\delta(\mu-\mu')\\
[\widehat{\xi}_{\sf p}(\tau),\widehat{\xi}_{{\sf p}'}(\tau)]=&0=
[\widehat{\xi}_{\sf p}^{\dagger}(\tau),\widehat{\xi}_{{\sf p}'}^{\dagger}(\tau)].
\end{split}
\end{equation}
These requirements, combined with the commutation relations \eqref{boosteq18}, in turn imply
\begin{equation}
\begin{split}
&\left(|A({\sf p},\tau)|^2-|B({\sf p},\tau)|^2\right)\delta^2({\bf p}_{\rm T}-{\bf p}_{\rm T}')\,\delta(\mu-\mu')=\delta^2({\bf p}_{\rm T}-{\bf p}_{\rm T}')\,\delta(\mu-\mu')\\
&\left[A^*(-{\sf p},\tau)B^*({\sf p},\tau)-A^*({\sf p},\tau)B^*(-{\sf p},\tau)\right]\delta^2({\bf p}_{\rm T}+{\bf p}_{\rm T}')\,\delta(\mu+\mu')=0\\
&\left[A({\sf p},\tau)B(-{\sf p},\tau)-A(-{\sf p},\tau)B({\sf p},\tau)\right]\delta^2({\bf p}_{\rm T}+{\bf p}_{\rm T}')\,\delta(\mu+\mu')=0
\end{split}
\end{equation}
a possible solution of which is
\begin{equation}\label{boosteq20}
A({\sf p},\tau)=A(-{\sf p},\tau),\qquad
B({\sf p},\tau)=B(-{\sf p},\tau),\qquad
|A({\sf p},\tau)|^2-|B({\sf p},\tau)|^2=1
\end{equation}
hence the following hyperbolic expressions
\begin{equation}\label{boosteq21}
A({\sf p},\tau)=\cosh \theta({\sf p},\tau) \,{\rm e}^{i\chi_A({\sf p},\tau)},\qquad
B({\sf p},\tau)=\sinh \theta({\sf p},\tau) \,{\rm e}^{i\chi_B({\sf p},\tau)}
\end{equation}
for some hyperbolic angle $\theta({\sf p},\tau)$ and phases $\chi_A({\sf p},\tau)$ and $\chi_B({\sf p},\tau)$.
This is not the only possible solution, but we choose it as it makes it easy to invert the Bogolyubov transformation \eqref{boosteq19} as
\begin{equation}\label{boosteq24}
\begin{split}
\widehat{b}_{\sf p} =& A({\sf p},\tau)\widehat{\xi}_{\sf p}(\tau) + B^*({\sf p},\tau)\widehat{\xi}_{-{\sf p}}^{\dagger}(\tau)\\
\widehat{b}_{\sf p}^{\dagger} =& A^*({\sf p},\tau)\widehat{\xi}_{\sf p}^{\dagger}(\tau) + B({\sf p},\tau)\widehat{\xi}_{-{\sf p}}(\tau),
\end{split}
\end{equation}
plug it into \eqref{boosteq17} and obtain
\begin{equation}\label{boosteq26}
\begin{split}
\widehat{\Pi}(\tau)=&
\int{\rm d}^2{\rm p_T}\,{\rm d}\mu\,\frac{\omega}{2}\left\{
\left[K\left(|A|^2+|B|^2\right)+\Lambda AB^*+\Lambda^*A^*B\right]\left(\widehat{\xi}_{\sf p}\widehat{\xi}_{\sf p}^{\dagger}+\widehat{\xi}_{\sf p}^{\dagger}\widehat{\xi}_{\sf p}\right)\right.\\
&\left.+\left(2KAB+\Lambda A^2+\Lambda^*B^2\right)\widehat{\xi}_{\sf p}\widehat{\xi}_{-{\sf p}}+\left(2KA^*B^*+\Lambda^*{A^*}^2+\Lambda{B^*}^2\right)\widehat{\xi}_{\sf p}^{\dagger}\widehat{\xi}_{-{\sf p}}^{\dagger}\right\},
\end{split}
\end{equation}
where the invariance under momentum reflections ${\sf p}\mapsto -{\sf p}$ was used.
In order to make $\widehat{\Pi}(\tau)$ diagonal, the second line must vanish, so we look for $A$ and $B$ such that
\begin{subequations}
\begin{align}
2KAB+\Lambda A^2+\Lambda^*B^2=&0\label{boosteq23a}\\
2KA^*B^*+\Lambda^*{A^*}^2+\Lambda{B^*}^2=&0.
\end{align}
\end{subequations}
The two equations are the complex conjugate of each other, so they are in fact one complex equation, that is two real equations.
Apparently there are four real unknowns, namely the moduli and phases of $A$ and $B$, but the moduli are related by the last equation in \eqref{boosteq20}.
Moreover, the two equations are invariant under an overall phase shift of both $A$ and $B$, so they actually depend on the difference between the two phases instead of the phases themselves.
In summary, they are two real equations in two real unknowns, one modulus and the phase difference, so we can try to solve the system.
With the hyperbolic expressions \eqref{boosteq21} and \eqref{boosteq22}, equation \eqref{boosteq23a} can be rewritten as
\begin{equation}
\cosh(2\Theta)\sinh(2\theta)\,{\rm e}^{i(\chi_A+\chi_B)}+\sinh(2\Theta)\left(\cosh^2\theta \,{\rm e}^{i(\chi+2\chi_A)}+\sinh^2\theta \,{\rm e}^{i(2\chi_B-\chi)}\right)=0
\end{equation}
whose solution is
\begin{equation}
\chi_B({\sf p},\tau)-\chi_A({\sf p},\tau)=\chi({\sf p},\tau),\qquad
\theta({\sf p},\tau)=-\Theta({\sf p},\tau).
\end{equation}
By setting $\chi_A\equiv 0$ without loss of generality, the functions $A$ and $B$ fulfilling the Bogolyubov relations \eqref{boosteq19} are
\begin{equation}\label{boosteq25}
A({\sf p},\tau)=\cosh \Theta({\sf p},\tau),\qquad
B({\sf p},\tau)=-\sinh \Theta({\sf p},\tau) \,{\rm e}^{i\chi({\sf p},\tau)},
\end{equation}
so the inverse transformation \eqref{boosteq24} reads
\begin{equation}\label{boosteq29}
\begin{split}
\widehat{b}_{\sf p}=&\cosh \Theta({\sf p}, \tau)\widehat{\xi}_{\sf p}(\tau)-\sinh \Theta({\sf p},\tau)\,{\rm e}^{-i\chi({\sf p},\tau)}\widehat{\xi}_{-{\sf p}}^{\dagger}(\tau)\\
\widehat{b}_{\sf p}^{\dagger}=&\cosh \Theta({\sf p}, \tau) \widehat{\xi}_{\sf p}^{\dagger}(\tau)-\sinh \Theta ({\sf p},\tau)\,{\rm e}^{i\chi({\sf p},\tau)}\widehat{\xi}_{-{\sf p}}(\tau).
\end{split}
\end{equation}
Plugging \eqref{boosteq25} and \eqref{boosteq22} into \eqref{boosteq26} and using the commutation relations \eqref{boosteq27} we finally obtain the following result
\begin{equation}
\begin{split}
\widehat{\Pi}(\tau)=&
\int {\rm d}^2{\rm p_T}\,{\rm d} \mu \, \frac{\omega(\tau)}{2}\left(\widehat{\xi}_{\sf p}(\tau)\widehat{\xi}_{\sf p}^{\dagger}(\tau)+\widehat{\xi}_{\sf p}^{\dagger}(\tau)\widehat{\xi}_{\sf p}(\tau)\right)\\
=&\int {\rm d}^2{\rm p_T}\,{\rm d} \mu \, \omega(\tau) \left(\widehat{\xi}_{\sf p}^{\dagger}(\tau)\widehat{\xi}_{\sf p}(\tau)+\frac{1}{2}\right).
\end{split}
\end{equation}
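The algebra behind this diagonalization can be spot-checked numerically with standard-library Python alone: for randomly sampled $\Theta$ and $\chi$, the off-diagonal coefficient in \eqref{boosteq26} must vanish and the diagonal one must reduce to unity.

```python
# Numerical spot-check (standard library only) of the diagonalization:
# with A = cosh(Theta), B = -sinh(Theta) e^{i chi}, K = cosh(2 Theta)
# and Lambda = e^{i chi} sinh(2 Theta), the off-diagonal coefficient
# vanishes and the diagonal one reduces to unity.
import cmath
import math
import random

random.seed(1)
for _ in range(100):
    Theta = random.uniform(-2.0, 2.0)
    chi = random.uniform(0.0, 2.0 * math.pi)

    K = math.cosh(2 * Theta)
    Lam = cmath.exp(1j * chi) * math.sinh(2 * Theta)
    A = math.cosh(Theta)
    B = -math.sinh(Theta) * cmath.exp(1j * chi)

    off_diag = 2 * K * A * B + Lam * A**2 + Lam.conjugate() * B**2
    diag = (K * (abs(A)**2 + abs(B)**2)
            + Lam * A * B.conjugate() + Lam.conjugate() * A * B)
    assert abs(off_diag) < 1e-8        # second line of Pi(tau) vanishes
    assert abs(diag - 1) < 1e-8        # coefficient of (xi xi^+ + xi^+ xi) is omega/2
    assert abs(abs(A)**2 - abs(B)**2 - 1) < 1e-12   # Bogolyubov condition
```

The last assertion verifies the normalization condition $|A|^2-|B|^2=1$ of \eqref{boosteq20}.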
Since this is diagonal, so is the density operator. Note, however, that the density operator is diagonal in a set of creation and annihilation operators different from the one appearing in the quantum field expansion; this is at variance with the case of global thermodynamic equilibrium with acceleration studied in Chapter \ref{chapter:gteacceleration}, as can be checked by looking at equations \eqref{gteq13} and \eqref{gteq26} together with \eqref{gteq28} and \eqref{gteq29}.
With a diagonal density operator, we are now in a good spot to calculate thermal expectation values of operators quadratic in the quantum field, such as the energy-momentum tensor, by using standard methods.
\section{Local thermodynamic equilibrium analysis}
\label{sec:boost_lte_analysis}
So far, we have been concerned only with the structure of the density operator, therefore we could take either $\tau$ or $\tau_0$ equivalently since $\widehat{\rho}_{\rm LE}(\tau)$ and $\widehat{\rho}=\widehat{\rho}_{\rm LE}(\tau_0)$ have the same properties.
However, when it comes to thermal expectation values it does indeed make a difference whether the density operator is evaluated at $\tau$ or $\tau_0$.
In this Section, we will perform a local thermodynamic equilibrium analysis, therefore the $\tau$-dependence will be highlighted in order to dispel any potential doubt.
On the other hand, for ease of notation, the dependence on the momentum ${\sf p}$ will be omitted throughout whenever unnecessary.
As we shall see, we will be able to obtain analytical results, making this the first example of an exactly solvable system at local thermodynamic equilibrium, to the best of our knowledge.
\subsection{Thermal expectation values}
As a quick recap, the thermal expectation value of the energy-momentum tensor calculated with the local thermodynamic equilibrium density operator is
\begin{equation}\label{boosteq28}
\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}=\mathrm{tr}(\widehat{\rho}_{\rm LE}(\tau)\widehat{T}^{\mu \nu}(\tau)),
\end{equation}
and boost invariance constrains its form to be
\begin{equation}\label{boosteq32}
\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}=
\rho_{\rm LE}(\tau)u^{\mu}u^{\nu}+p^{\rm T}_{\rm LE}(\tau)(\hat{i}^{\mu}\hat{i}^{\nu}+\hat{j}^{\mu}\hat{j}^{\nu})+p^{\rm L}_{\rm LE}(\tau)\hat{\eta}^{\mu}\hat{\eta}^{\nu}.
\end{equation}
The local thermodynamic equilibrium density operator built with the canonical energy-momentum tensor operator of a free real scalar quantum field is not diagonal in the creation and annihilation operators $\widehat{b}_{\sf p}^{\dagger}$ and $\widehat{b}_{\sf p}$, but it is so in the creation and annihilation operators $\widehat{\xi}_{\sf p}^{\dagger}(\tau)$ and $\widehat{\xi}_{\sf p}(\tau)$ obtained from the former through the Bogolyubov transformation \eqref{boosteq19}
\begin{equation}
\widehat{\rho}_{\rm LE}(\tau)=\frac{1}{Z_{\rm LE}(\tau)}\exp \left[-\frac{\widehat{\Pi}(\tau)}{T(\tau)}\right],\qquad
\widehat{\Pi}(\tau)=\int {\rm d}^2{\rm p_T}\,{\rm d} \mu \,\omega(\tau) \left(\widehat{\xi}_{\sf p}^{\dagger}(\tau)\widehat{\xi}_{\sf p}(\tau)+\frac{1}{2}\right).
\end{equation}
This diagonal expression allows for thermal expectation values of products of $\widehat{\xi}_{\sf p}^{\dagger}(\tau)$ and $\widehat{\xi}_{\sf p}(\tau)$ to be obtained by using the standard method shown in \eqref{gteq50}--\eqref{gteq51} of Chapter \ref{chapter:gteacceleration}, hence the result
\begin{subequations}
\begin{align}
\langle \widehat{\xi}_{\sf p}^{\dagger}(\tau)\widehat{\xi}_{{\sf p}'}(\tau)\rangle_{\rm LE}=&n_{\rm B}(\tau)\,\delta^2({\bf p}_{\rm T}-{\bf p}_{\rm T}')\,\delta(\mu-\mu')\label{boosteq31a}\\
\langle \widehat{\xi}_{\sf p}(\tau)\widehat{\xi}_{{\sf p}'}^{\dagger}(\tau)\rangle_{\rm LE}=&\left(n_{\rm B}(\tau)+1\right)\delta^2({\bf p}_{\rm T}-{\bf p}_{\rm T}')\,\delta(\mu-\mu')\label{boosteq31b}\\
\langle \widehat{\xi}_{\sf p}(\tau)\widehat{\xi}_{{\sf p}'}(\tau)\rangle_{\rm LE}=&0=\langle \widehat{\xi}_{\sf p}^{\dagger}(\tau)\widehat{\xi}_{{\sf p}'}^{\dagger}(\tau)\rangle_{\rm LE}\label{boosteq31c},
\end{align}
\end{subequations}
where $n_{\rm B}(\tau)$ is the Bose-Einstein distribution
\begin{equation}
n_{\rm B}(\tau)=\frac{1}{{\rm e}^{\omega(\tau)/T(\tau)}-1},
\end{equation}
with $\omega(\tau)$ given by \eqref{boosteq30}.
This is indeed the average number of excitations of the operator $\widehat{\xi}_{\sf p}^{\dagger}(\tau)$, but by no means a density of particles.
Since the quantum field is expanded in terms of the creation and annihilation operators $\widehat{b}_{\sf p}^{\dagger}$ and $\widehat{b}_{\sf p}$, the particle density is given by $\langle \widehat{b}_{\sf p}^{\dagger}\widehat{b}_{{\sf p}'}\rangle_{\rm LE}$.
Another reason to be interested in thermal expectation values of products of $\widehat{b}_{\sf p}^{\dagger}$ and $\widehat{b}_{\sf p}$ is the fact that the energy-momentum tensor operator is quadratic in the quantum field, therefore it contains such terms.
The thermal expectation values of these products are most readily obtained from \eqref{boosteq31a}--\eqref{boosteq31c} by means of the inverse Bogolyubov transformation \eqref{boosteq29}
\begin{subequations}
\begin{align}
\langle \widehat{b}_{\sf p}\widehat{b}_{{\sf p}'}\rangle_{\rm LE}=&
-\frac{1}{2}\sinh \left(2\Theta(\tau)\right){\rm e}^{-i\chi(\tau)}\left(2n_{\rm B}(\tau)+1\right)\delta^2({\bf p}_{\rm T}+{\bf p}_{\rm T}')\,\delta(\mu+\mu')\label{boosteq33a}\\
\langle \widehat{b}_{\sf p}^{\dagger}\widehat{b}_{{\sf p}'}^{\dagger}\rangle_{\rm LE}=&
-\frac{1}{2}\sinh \left(2\Theta(\tau)\right){\rm e}^{i\chi(\tau)}\left(2n_{\rm B}(\tau)+1\right)\delta^2({\bf p}_{\rm T}+{\bf p}_{\rm T}')\,\delta(\mu+\mu')\label{boosteq33b}\\
\langle \widehat{b}_{\sf p}\widehat{b}_{{\sf p}'}^{\dagger}\rangle_{\rm LE}=&
\left[\cosh \left(2\Theta(\tau)\right)n_{\rm B}(\tau)+\cosh^2\Theta(\tau)\right]\delta^2({\bf p}_{\rm T}-{\bf p}_{\rm T}')\,\delta(\mu-\mu')\label{boosteq33c}\\
\langle \widehat{b}_{\sf p}^{\dagger}\widehat{b}_{{\sf p}'}\rangle_{\rm LE}=&
\left[\cosh \left(2\Theta(\tau)\right)n_{\rm B}(\tau)+\sinh^2\Theta(\tau)\right]\delta^2({\bf p}_{\rm T}-{\bf p}_{\rm T}')\,\delta(\mu-\mu').\label{boosteq33d}
\end{align}
\end{subequations}
These equations make it clear that the hyperbolic angle $\Theta(\tau)$, which can be determined from \eqref{boosteq22}, accounts for field vacuum effects.
More will be said about the vacuum shortly, in Subsection \ref{sec:boost_lte_renormalization}.
Note that the $+1$ term in \eqref{boosteq31b}, stemming from the commutation relations \eqref{boosteq27}, gives rise to divergences.
The details of renormalization will be discussed shortly in Subsection \ref{sec:boost_lte_renormalization}.
For the time being, we simply point out that renormalizing with respect to the vacuum of the operators $\widehat{\xi}_{\sf p}(\tau)$, whence of $\widehat{\Pi}(\tau)$, is but the subtraction of the $T(\tau)=0$ contribution, which ultimately amounts to canceling the $+1$ in \eqref{boosteq31b}.
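As a sanity check on \eqref{boosteq33a}--\eqref{boosteq33d}, the delta-function coefficients of $\langle \widehat{b}_{\sf p}\widehat{b}_{{\sf p}'}^{\dagger}\rangle_{\rm LE}$ and $\langle \widehat{b}_{\sf p}^{\dagger}\widehat{b}_{{\sf p}'}\rangle_{\rm LE}$ must differ by exactly one for any hyperbolic angle, so that the canonical commutation relations are preserved by the Bogolyubov transformation. A minimal symbolic sketch with sympy:

```python
import sympy as sp

Theta, n = sp.symbols('Theta n', real=True, nonnegative=True)

# Coefficients of delta^2(p_T - p_T') delta(mu - mu') in (boosteq33c)-(boosteq33d)
bb_dag = sp.cosh(2*Theta)*n + sp.cosh(Theta)**2   # <b b^dagger>
bdag_b = sp.cosh(2*Theta)*n + sp.sinh(Theta)**2   # <b^dagger b>

# The commutator [b, b^dagger] keeps unit normalization for any Theta and n
assert sp.simplify(bb_dag - bdag_b) == 1

# At Theta = 0 the transformation is trivial and the plain
# Bose-Einstein occupation is recovered
assert bdag_b.subs(Theta, 0) == n
```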
With equations \eqref{boosteq33a}--\eqref{boosteq33d}, we are in a good position to calculate the thermal expectation value of the energy-momentum tensor \eqref{boosteq32}, that is the energy density, transverse pressure and longitudinal pressure at local thermodynamic equilibrium.
First of all, these scalar functions are obtained from the following contractions
\begin{equation}\label{boosteq34}
\rho_{\rm LE}(\tau)=\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}u_{\mu}u_{\nu},
\end{equation}
\begin{equation}
p^{\rm T}_{\rm LE}(\tau)=\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}\hat{i}_{\mu}\hat{i}_{\nu}=
\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}\hat{j}_{\mu}\hat{j}_{\nu}=
\frac{1}{2}\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}\left(\hat{i}_{\mu}\hat{i}_{\nu}+\hat{j}_{\mu}\hat{j}_{\nu}\right),
\end{equation}
\begin{equation}
p^{\rm L}_{\rm LE}(\tau)=\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}\hat{\eta}_{\mu}\hat{\eta}_{\nu}.
\end{equation}
We take the energy density as an example; the pressures are worked out through the same procedure.
By plugging the quantum field expansion \eqref{boosteq09} into the contraction \eqref{boosteq34}, we have the following expression of the energy density in terms of thermal expectation values of products of creation and annihilation operators $\widehat{b}_{\sf p}^{\dagger}$ and $\widehat{b}_{\sf p}$
\begin{equation}\label{boosteq52}
\begin{split}
\rho_{\rm LE}(\tau)=&\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}u_{\mu}u_{\nu}=
\int\frac{{\rm d^2p_T}\,{\rm d}\mu\,{\rm d^2p_T'}\,{\rm d}\mu'}{4(4\pi)^2}\times \\
&\times \left\{\left[(\partial_{\tau}h({\sf p},\tau))(\partial_{\tau}h({\sf p}',\tau))-\left(p_xp_x'+p_yp_y'+\frac{1}{\tau^2}\mu\mu'-m^2\right)h({\sf p},\tau)h({\sf p}',\tau)\right]\times \right.\\
&\times {\rm e}^{i[({\bf p}_{\rm T}+{\bf p}_{\rm T}')\cdot {\bf x}_{\rm T}+(\mu+\mu')\eta]}\langle \widehat{b}_{\sf p}\widehat{b}_{{\sf p}'}\rangle_{\rm LE}+\\
&+\left[(\partial_{\tau}h({\sf p},\tau))(\partial_{\tau}h^*({\sf p}',\tau))+\left(p_xp_x'+p_yp_y'+\frac{1}{\tau^2}\mu\mu'+m^2\right)h({\sf p},\tau)h^*({\sf p}',\tau)\right]\times \\
&\times {\rm e}^{i[({\bf p}_{\rm T}-{\bf p}_{\rm T}')\cdot {\bf x}_{\rm T}+(\mu-\mu')\eta]}\langle \widehat{b}_{\sf p}\widehat{b}_{{\sf p}'}^{\dagger}\rangle_{\rm LE}+\\
&+\left[(\partial_{\tau}h^*({\sf p},\tau))(\partial_{\tau}h({\sf p}',\tau))+\left(p_xp_x'+p_yp_y'+\frac{1}{\tau^2}\mu\mu'+m^2\right)h^*({\sf p},\tau)h({\sf p}',\tau)\right]\times \\
&\times {\rm e}^{-i[({\bf p}_{\rm T}-{\bf p}_{\rm T}')\cdot {\bf x}_{\rm T}+(\mu-\mu')\eta]}\langle \widehat{b}_{\sf p}^{\dagger}\widehat{b}_{{\sf p}'}\rangle_{\rm LE}+\\
&+\left[(\partial_{\tau}h^*({\sf p},\tau))(\partial_{\tau}h^*({\sf p}',\tau))-\left(p_xp_x'+p_yp_y'+\frac{1}{\tau^2}\mu\mu'-m^2\right)h^*({\sf p},\tau)h^*({\sf p}',\tau)\right]\times \\
&\left.\times {\rm e}^{-i[({\bf p}_{\rm T}+{\bf p}_{\rm T}')\cdot {\bf x}_{\rm T}+(\mu+\mu')\eta]}\langle \widehat{b}_{\sf p}^{\dagger}\widehat{b}_{{\sf p}'}^{\dagger}\rangle_{\rm LE}\right\}.
\end{split}
\end{equation}
Then, by using equations \eqref{boosteq33a}--\eqref{boosteq33d}, we obtain
\begin{equation}
\begin{split}
\rho_{\rm LE}(\tau)=&
\int \frac{{\rm d^2p_T}\,{\rm d}\mu}{4(4\pi)^2}
\bigg[
\left[|\partial_{\tau}h(\tau)|^2+\omega^2(\tau)|h(\tau)|^2\right]\left(2n_{\rm B}(\tau)+1\right)\cosh(2\Theta(\tau))\\
&-\frac{1}{2}\left[(\partial_{\tau}h(\tau))^2+\omega^2(\tau)h^2(\tau)\right]\left(2n_{\rm B}(\tau)+1\right)\sinh(2\Theta(\tau))\,{\rm e}^{-i\chi(\tau)}\\
&-\frac{1}{2}\left[(\partial_{\tau}h^*(\tau))^2+\omega^2(\tau)(h^*(\tau))^2\right]\left(2n_{\rm B}(\tau)+1\right)\sinh(2\Theta(\tau))\,{\rm e}^{i\chi(\tau)}
\bigg].
\end{split}
\end{equation}
In square brackets we recognize the functions $K$ and $\Lambda$ defined in \eqref{boosteq35} and \eqref{boosteq36}.
These are expressed in terms of the hyperbolic angle $\Theta(\tau)$ and the phase $\chi(\tau)$ as in \eqref{boosteq22}, hence
\begin{equation}
\begin{split}
\rho_{\rm LE}(\tau)=&
\int {\rm d}^2{\rm p_T}\,{\rm d} \mu \frac{\omega(\tau)}{16\pi^3\tau}
\left[\cosh^2(2\Theta(\tau))-\sinh^2(2\Theta(\tau))\right]\left(2n_{\rm B}(\tau)+1\right)\\
=&\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau}\omega(\tau)\left(n_{\rm B}(\tau)+\frac{1}{2}\right).
\end{split}
\end{equation}
The transverse and longitudinal pressures are worked out following the same steps; written in a uniform fashion, the final expressions for the energy density and the two pressures read
\begin{equation}\label{boosteq37}
\rho_{\rm LE}(\tau)=\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau \omega(\tau)}\omega^2(\tau)\left(n_{\rm B}(\tau)+\frac{1}{2}\right),
\end{equation}
\begin{equation}\label{boosteq38}
p^{\rm T}_{\rm LE}(\tau)=\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau \omega(\tau)}\frac{|{\bf p}_{\rm T}|^2}{2}\left(n_{\rm B}(\tau)+\frac{1}{2}\right),
\end{equation}
\begin{equation}\label{boosteq39}
p^{\rm L}_{\rm LE}(\tau)=\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau \omega(\tau)}\frac{\mu^2}{\tau^2}\left(n_{\rm B}(\tau)+\frac{1}{2}\right).
\end{equation}
These are the integral expressions of the thermal expectation value of the canonical energy-momentum tensor of a free real scalar quantum field at local thermodynamic equilibrium in the future light-cone.
To the best of our knowledge, this was first obtained in \cite{Rindori:2021quq}.
From the mathematical point of view, there is a key fact that greatly eases the calculations to the point that simple analytical results such as \eqref{boosteq37}--\eqref{boosteq39} can be obtained, and that is the following.
The functions $A(\tau)$ and $B(\tau)$ of the Bogolyubov transformation, appearing above in the thermal expectation values of products of $\widehat{b}_{\sf p}$ and $\widehat{b}_{\sf p}^{\dagger}$, are expressed in terms of the functions $K(\tau)$ and $\Lambda(\tau)$ through equation \eqref{boosteq25}.
The functions $K(\tau)$ and $\Lambda(\tau)$, in turn, can be written in terms of the hyperbolic angle $\Theta(\tau)$ and the phase $\chi(\tau)$ thanks to the relation \eqref{boosteq14}, which is ultimately a consequence of the reconstruction of the Wronskian of the Hankel functions.
In summary, the ultimate reason for the above analytical results is the reconstruction of the Wronskian of the Hankel functions, occurring because the density operator and the energy-momentum tensor operator in \eqref{boosteq28} are evaluated at the same $\tau$.
In the non-equilibrium analysis to be carried out in Section \ref{sec:boost_ne_analysis}, this will no longer hold true because of the mixing of $\tau$ and $\tau_0$, as exemplified in \eqref{boosteq08}.
The difficulties we will encounter there will make us appreciate the above exact analytical results even more.
Equations \eqref{boosteq37}--\eqref{boosteq39} can be recast in a more compact form.
Let us define the following real function taking on different expressions for either the energy density, the transverse pressure or the longitudinal pressure
\begin{equation}
\gamma(\tau)\equiv \left\{
\begin{aligned}
\omega^2(\tau),\qquad \qquad &\rho_{\rm LE}(\tau)\\
-\frac{\mu^2}{\tau^2}-m^2,\qquad \qquad &p^{\rm T}_{\rm LE}(\tau)\\
-m_{\rm T}^2+\frac{\mu^2}{\tau^2},\qquad \qquad &p^{\rm L}_{\rm LE}(\tau)
\end{aligned}
\right.
\end{equation}
and a modification of the functions $K(\tau)$ and $\Lambda(\tau)$ defined in \eqref{boosteq35} and \eqref{boosteq36} respectively
\begin{equation}\label{boosteq41}
K_{\gamma}(\tau)\equiv \frac{\pi \tau}{4\omega(\tau)}\left[|\partial_{\tau}h(\tau)|^2+\gamma(\tau)|h(\tau)|^2\right],
\end{equation}
\begin{equation}\label{boosteq42}
\Lambda_{\gamma}(\tau)\equiv \frac{\pi \tau}{4\omega(\tau)}\left[(\partial_{\tau}h(\tau))^2+\gamma(\tau)h^2(\tau)\right].
\end{equation}
Once again, the ${\sf p}$-dependence is omitted in $\gamma(\tau)$, $K_{\gamma}(\tau)$ and $\Lambda_{\gamma}(\tau)$ both to ease the notation and to highlight the $\tau$-dependence, which is the conceptually important one.
Much in the same way as $K(\tau)$ and $\Lambda(\tau)$ satisfy \eqref{boosteq14} thanks to the Wronskian of the Hankel functions, $K_{\gamma}(\tau)$ and $\Lambda_{\gamma}(\tau)$ are related by
\begin{equation}\label{boosteq40}
K^2_{\gamma}(\tau)-|\Lambda_{\gamma}(\tau)|^2=
\frac{\gamma(\tau)}{\omega^2(\tau)}.
\end{equation}
In the case of the energy density, $K_{\gamma}(\tau)$ and $\Lambda_{\gamma}(\tau)$ coincide with $K(\tau)$ and $\Lambda(\tau)$ respectively, thus \eqref{boosteq40} reduces correctly to \eqref{boosteq14}.
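The Wronskian identity underlying these relations can be checked directly: for Hankel functions of order $\nu$ one has $H^{(1)}_{\nu}(z)H^{(2)\prime}_{\nu}(z)-H^{(1)\prime}_{\nu}(z)H^{(2)}_{\nu}(z)=-4i/\pi z$. A quick numerical sketch with scipy (real orders only, used here as a stand-in, since scipy does not implement Hankel functions of imaginary order):

```python
import numpy as np
from scipy.special import hankel1, hankel2, h1vp, h2vp

# Wronskian identity H1_nu(z) H2_nu'(z) - H1_nu'(z) H2_nu(z) = -4i/(pi z),
# the relation behind K^2(tau) - |Lambda(tau)|^2 = 1; orders and arguments
# below are arbitrary illustrative values
for nu in (0.0, 0.5, 2.3):
    for z in (0.1, 1.0, 10.0):
        w = hankel1(nu, z)*h2vp(nu, z) - h1vp(nu, z)*hankel2(nu, z)
        assert np.isclose(w, -4j/(np.pi*z))
```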
With these new functions, we can further define $\Gamma^{\gamma}_{\rm LE}\equiv \{\rho_{\rm LE},p^{\rm T}_{\rm LE},p^{\rm L}_{\rm LE}\}$ as the set of functions making up the thermal expectation value of the energy-momentum tensor, distinguished by the value of the function $\gamma(\tau)$; thus, \eqref{boosteq37}--\eqref{boosteq39} can be compactly rearranged as
\begin{equation}\label{boosteq45}
\Gamma^{\gamma}_{\rm LE}(\tau)=
\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau \omega(\tau)}\omega^2(\tau)
\left[K_{\gamma}(\tau)K(\tau)-{\rm Re}\left(\Lambda_{\gamma}(\tau)\Lambda^*(\tau)\right)\right]
\left(n_{\rm B}(\tau)+\frac{1}{2}\right).
\end{equation}
That is the same structure we will recover in the non-equilibrium analysis too.
However, in the present local thermodynamic equilibrium analysis we can take it a step further by recasting the expression in square brackets in the following form by using the definitions \eqref{boosteq35}, \eqref{boosteq36}, \eqref{boosteq41} and \eqref{boosteq42} and exploiting again the Wronskian of the Hankel functions
\begin{equation}
K_{\gamma}(\tau)K(\tau)-{\rm Re}\left(\Lambda_{\gamma}(\tau)\Lambda^*(\tau)\right)=
\frac{\omega^2(\tau)+\gamma(\tau)}{2\omega^2(\tau)},
\end{equation}
hence
\begin{equation}\label{boosteq46}
\Gamma^{\gamma}_{\rm LE}(\tau)=
\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau \omega(\tau)}
\frac{\omega^2(\tau)+\gamma(\tau)}{2}\left(n_{\rm B}(\tau)+\frac{1}{2}\right).
\end{equation}
This can be further recast in a more familiar form.
By changing the integration variable $\mu$ to $p_z\equiv \mu/\tau$, the measure becomes
\begin{equation}\label{boosteq43}
{\rm d}^2{\rm p_T}\frac{{\rm d} \mu}{\tau}={\rm d}^2{\rm p_T}\,{\rm d} p_z={\rm d}^3{\rm p},
\end{equation}
and $\omega(\tau)$ the on-shell energy
\begin{equation}\label{boosteq44}
\omega(\tau)=\sqrt{m_{\rm T}^2+\frac{\mu^2}{\tau^2}}=\sqrt{m_{\rm T}^2+p_z^2}=\sqrt{|{\bf p}_{\rm T}|^2+p_z^2+m^2}\equiv \varepsilon.
\end{equation}
In turn, $n_{\rm B}(\tau)$ becomes the energy-dependent Bose-Einstein distribution $n_{\rm B}(\varepsilon,T(\tau))$ in the phase space, hence the energy density \eqref{boosteq37} as well as the transverse and longitudinal pressures \eqref{boosteq38} and \eqref{boosteq39} can be written as the familiar momentum integrals of the relativistic neutral Bose gas.
Altogether, the non-renormalized thermal expectation value of the energy-momentum tensor at local equilibrium reads
\begin{equation}\label{boosteq47}
\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}=\int \frac{{\rm d}^3{\rm p}}{\varepsilon}P^{\mu}P^{\nu}\left(\frac{1}{{\rm e}^{\beta_{\lambda}P^{\lambda}}-1}+\frac{1}{2}\right),
\end{equation}
where $\beta^{\mu}$ is the four-temperature \eqref{boosteq03}.
Therefore, the thermodynamic functions $\Gamma^{\gamma}_{\rm LE}(\tau)$ are just the familiar functions of $T(\tau)$ of the usual ideal relativistic gas.
In particular, the transverse and the longitudinal pressures are in fact equal, namely
\begin{equation}
p^{\rm T}_{\rm LE}(\tau)=p^{\rm L}_{\rm LE}(\tau)\equiv p_{\rm LE}(\tau).
\end{equation}
We can convince ourselves of this equality simply by noticing that the measure \eqref{boosteq43}, the energy \eqref{boosteq44} and the Bose-Einstein distribution $n_{\rm B}(\tau)$ are all invariant under rotations of the momentum ${\bf p}$, so no distinction arises among $p_x$, $p_y$ and $p_z$.
Thus, the transverse pressure \eqref{boosteq38} and the longitudinal pressure \eqref{boosteq39} are equal.
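This isotropy can also be verified numerically. Keeping only the convergent $n_{\rm B}$ part of \eqref{boosteq38}--\eqref{boosteq39} (the divergent $1/2$ vacuum term is dealt with in the next Subsection) and changing variables to $p_z=\mu/\tau$, both pressures become rotationally invariant momentum integrals. A sketch with scipy, for illustrative values $m=0.3$, $T=1$ (any choice works):

```python
import numpy as np
from scipy.integrate import dblquad

# Illustrative values (not fixed by the text): mass m and temperature T
m, T = 0.3, 1.0
nB = lambda e: 1.0/np.expm1(e/T)          # Bose-Einstein distribution

def moment(weight):
    # integrate p^2/(2 pi)^2 * weight(p, cos(theta))/eps * nB(eps)
    # over p in (0, 50) and cos(theta) in (-1, 1); eps = sqrt(p^2 + m^2)
    f = lambda p, c: p*p/(2*np.pi)**2 * weight(p, c)/np.hypot(p, m) * nB(np.hypot(p, m))
    val, _ = dblquad(f, -1, 1, 0, 50)
    return val

pT = moment(lambda p, c: p*p*(1 - c*c)/2)  # |p_T|^2 / 2  (transverse pressure)
pL = moment(lambda p, c: p*p*c*c)          # p_z^2        (longitudinal pressure)
assert np.isclose(pT, pL, rtol=1e-5)
```

Both angular averages equal $p^2/3$, so the two integrals coincide identically.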
In other words, the thermal expectation value of the energy-momentum tensor at local thermodynamic equilibrium has the ideal form
\begin{equation}
\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}=
\rho_{\rm LE}(\tau)u^{\mu}u^{\nu}-p_{\rm LE}(\tau)\Delta^{\mu \nu}.
\end{equation}
This is a somewhat unexpected feature of local thermodynamic equilibrium, totally at variance with the case of global thermodynamic equilibrium with acceleration studied in Chapter \ref{chapter:gteacceleration}.
Of course, the plain thermal expectation values \eqref{boosteq45}, or equivalently \eqref{boosteq46} or \eqref{boosteq47}, are divergent owing to the $1/2$ terms stemming from the commutation relations \eqref{boosteq27}, therefore they must be suitably renormalized.
\subsection{Renormalization}
\label{sec:boost_lte_renormalization}
Renormalization in free Quantum Field Theory is most readily established by the subtraction of some vacuum expectation value.
In the present case of a relativistic quantum fluid with boost invariance, we face an ambiguity in the choice of the vacuum which is somehow different from the one discussed in Subsection \ref{sec:gte_unruh_effect} for a relativistic quantum fluid at global thermodynamic equilibrium with acceleration.
There, the quantum field was expanded in the Rindler wedges in terms of the Rindler creation and annihilation operators, and the density operator was diagonal with respect to them.
Their vacuum was then the Rindler vacuum.
Those operators were related to the usual plane wave ones by a non-trivial Bogolyubov transformation, therefore the Rindler vacuum did not coincide with the Minkowski vacuum.
Thus, the ambiguity back then was choosing between the two natural vacua given by the Rindler and the Minkowski vacuum.
By taking the standpoint of the Rindler observer, the Minkowski vacuum appeared not as empty of particles but populated by a thermal distribution of them at a temperature proportional to the acceleration.
Together with the vacuum choice ambiguity, the key ingredient that let the Unruh effect emerge was the fact that the four-temperature possessed a Killing horizon.
Here, the quantum field is expanded in the creation and annihilation operators $\widehat{b}_{\sf p}^{\dagger}$ and $\widehat{b}_{\sf p}$, but the density operator is diagonal in a different set of operators, the $\widehat{\xi}_{\sf p}^{\dagger}(\tau)$ and $\widehat{\xi}_{\sf p}(\tau)$, related to the previous ones by a non-trivial Bogolyubov transformation.
Thus, the vacua of the two sets of operators do not coincide.
The $\widehat{b}_{\sf p}^{\dagger}$ and $\widehat{b}_{\sf p}$ can be obtained from the usual plane wave ones $\widehat{a}_{\bf p}^{\dagger}$ and $\widehat{a}_{\bf p}$ by means of the ``Fourier-like'' transformation \eqref{boosteq11}, which does not involve any mixing of creation and annihilation operators, therefore the vacuum of $\widehat{b}_{\sf p}$ is in fact the Minkowski vacuum $|0_{\rm M}\rangle$.
The Minkowski vacuum, of course, is a static state.
On the other hand, the vacuum of $\widehat{\xi}_{\sf p}(\tau)$ is a $\tau$-dependent state, and we indicated it as $|0_{\tau}\rangle$ to emphasize this fact
\begin{equation}\label{boosteq70}
\widehat{\xi}_{\sf p}(\tau)|0_{\tau}\rangle=0.
\end{equation}
By using standard methods \cite{Mukhanov:2007zz}, from the coefficients of the Bogolyubov transformation \eqref{boosteq29}, it can be shown that $|0_{\tau}\rangle$ is related to $|0_{\rm M}\rangle$ by \cite{Rindori:2021quq}
\begin{equation}
|0_{\tau}\rangle = \prod_{{\sf p}} \frac{1}{|\cosh \Theta({\sf p},\tau)|^{1/2}}\exp \left[ -\frac{1}{2}\tanh \Theta({\sf p},\tau) {\rm e}^{-i\chi({\sf p},\tau)} \widehat{b}_{\sf p}^{\dagger}\widehat{b}_{-{\sf p}}^{\dagger}\right] |0_{\rm M}\rangle.
\end{equation}
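This two-mode squeezed structure can be made concrete by checking, in a truncated Fock space for a single pair $\{{\sf p},-{\sf p}\}$, that the state above is normalized and annihilated by $\widehat{\xi}_{\sf p}(\tau)$. A minimal numerical sketch with numpy/scipy, assuming the pairwise Bogolyubov form $\widehat{\xi}_{\sf p}=\cosh\Theta\,\widehat{b}_{\sf p}+\sinh\Theta\,{\rm e}^{-i\chi}\,\widehat{b}_{-{\sf p}}^{\dagger}$ (consistent with the expectation values \eqref{boosteq33a}--\eqref{boosteq33d}) and arbitrary illustrative values of $\Theta$ and $\chi$:

```python
import numpy as np
from scipy.linalg import expm

# Truncated two-mode Fock space for one pair {p, -p}; Theta and chi are
# arbitrary illustrative numbers, N is the Fock truncation per mode
N = 25
Theta, chi = 0.6, 0.9
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # single-mode annihilation operator
I = np.eye(N)
b_p, b_m = np.kron(a, I), np.kron(I, a)    # modes p and -p

# |0_tau> = cosh(Theta)^{-1} exp(-tanh(Theta) e^{-i chi} b_p^dag b_-p^dag)|0_M>
lam = -np.tanh(Theta)*np.exp(-1j*chi)
vac = np.zeros(N*N); vac[0] = 1.0
state = expm(lam * b_p.conj().T @ b_m.conj().T) @ vac / np.cosh(Theta)

# assumed pairwise Bogolyubov form of xi_p in terms of b_p, b_-p^dagger
xi = np.cosh(Theta)*b_p + np.sinh(Theta)*np.exp(-1j*chi)*b_m.conj().T
assert np.isclose(np.linalg.norm(state), 1.0, atol=1e-6)   # normalized
assert np.linalg.norm(xi @ state) < 1e-6                   # annihilated by xi
```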
In summary, the ambiguity here lies in the choice of either $|0_{\tau}\rangle$ or $|0_{\rm M}\rangle$, leading to two different renormalization schemes
\begin{equation}\label{boosteq48}
{\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}}_{\tau}\equiv
\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}-\langle 0_{\tau}|\widehat{T}^{\mu \nu}|0_{\tau}\rangle,
\end{equation}
\begin{equation}\label{boosteq49}
{\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}}_{\rm M}\equiv
\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}-\langle 0_{\rm M}|\widehat{T}^{\mu \nu}|0_{\rm M}\rangle.
\end{equation}
This situation is reminiscent of the Unruh effect, and indeed from the velocity field \eqref{boosteq07} a non-vanishing acceleration field can be derived; however, the relation \eqref{boosteq11} between the plane wave operators $\widehat{a}_{\bf p}^{\dagger}$ and $\widehat{a}_{\bf p}$ and the Milne ones $\widehat{b}_{\sf p}^{\dagger}$ and $\widehat{b}_{\sf p}$ is linear, making the vacua of the two sets of operators coincide.
Therefore, Milne observers count the same particles as inertial observers, given by
\begin{equation}
\langle \widehat{a}_{\bf p}^{\dagger}\widehat{a}_{{\bf p}'}\rangle_{\rm LE}=
\frac{1}{2\pi m_{\rm T} \sqrt{\cosh {\rm y}\cosh {\rm y}'}}\int_{-\infty}^{+\infty}{\rm d} \mu \,{\rm e}^{-i\mu({\rm y}-{\rm y}')}\langle \widehat{b}_{\sf p}^{\dagger}\widehat{b}_{\sf p}\rangle_{\rm LE}.
\end{equation}
In other words, no analog of the Unruh effect is present between inertial and Milne observers.
The ultimate reason for this also seems to be the fact that the future light-cone is not a stationary spacetime, and as such it does not possess any global timelike Killing vector field.
In fact, the four-temperature \eqref{boosteq03}, which is the only possible one compatible with boost invariance, is not a Killing vector field, so it has no Killing horizon whatsoever.
Without a Killing horizon, no Unruh effect seems to emerge.
The next step is the comparison between the two renormalization schemes \eqref{boosteq48} and \eqref{boosteq49}.
Let us start from the former, which is also the most straightforward one.
In the limit of vanishing proper temperature, the local thermodynamic equilibrium density operator reduces to the vacuum $|0_{\tau}\rangle$, that is
\begin{equation}\label{boosteq71}
\lim_{T(\tau)\to 0}\widehat{\rho}_{\rm LE}(\tau)=
\lim_{T(\tau)\to 0}\frac{1}{Z_{\rm LE}(\tau)}\exp \left[-\frac{\widehat{\Pi}(\tau)}{T(\tau)}\right]=
|0_{\tau}\rangle \langle 0_{\tau}|.
\end{equation}
The first thing we can deduce from this fact is that the vacuum $|0_{\tau}\rangle \langle 0_{\tau}|$ has the same symmetries as the local thermodynamic equilibrium density operator $\widehat{\rho}_{\rm LE}(\tau)$, which, however, is less symmetric than the supposedly Poincar\'e-invariant Minkowski vacuum.
This does not imply that $|0_{\tau}\rangle$ is degenerate, but that Poincar\'e transformations will give rise to non-vanishing components of excited states.
The second one is that renormalization with respect to the vacuum $|0_{\tau}\rangle$ is essentially the subtraction of the $T(\tau)=0$ contribution.
This could just as well be realized by looking at the thermal expectation values of products of $\widehat{\xi}_{\sf p}(\tau)$ and $\widehat{\xi}_{\sf p}^{\dagger}(\tau)$ \eqref{boosteq31a}--\eqref{boosteq31c}.
There, the subtraction of the $T(\tau)=0$ contribution eventually amounts to canceling the $+1$ term in \eqref{boosteq31b}, which in turn stems from the commutation relations \eqref{boosteq27}.
In this scheme, the energy density and the pressure at local thermodynamic equilibrium become
\begin{equation}\label{boosteq56}
{\rho_{\rm LE}}_{\tau}(\tau)\equiv
\rho_{\rm LE}(\tau)-\langle 0_{\tau}|\widehat{T}^{\mu \nu}|0_{\tau}\rangle u_{\mu}u_{\nu}=
\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau \omega(\tau)}\omega^2(\tau)n_{\rm B}(\tau),
\end{equation}
\begin{equation}\label{boosteq57}
{p_{\rm LE}}_{\tau}(\tau)\equiv
p_{\rm LE}(\tau)-\langle 0_{\tau}|\widehat{T}^{\mu \nu}|0_{\tau}\rangle \hat{e}_{\mu}\hat{e}_{\nu}=
\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau \omega(\tau)}p_i^2(\tau)n_{\rm B}(\tau),
\end{equation}
with $\hat{e}_{\mu}$ either $\hat{i}_{\mu}$, $\hat{j}_{\mu}$ or $\hat{\eta}_{\mu}$ and $p_i$ either $p_x$, $p_y$ or $\mu/\tau$.
Equivalently, these can be recast in the more compact forms \eqref{boosteq45} and \eqref{boosteq46} as
\begin{equation}\label{boosteq58}
\begin{split}
{\Gamma^{\gamma}_{\rm LE}}_{\tau}(\tau)=&
\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau \omega(\tau)}\omega^2(\tau)\left[K_{\gamma}(\tau)K(\tau)-{\rm Re}\left(\Lambda_{\gamma}(\tau)\Lambda^*(\tau)\right)\right]n_{\rm B}(\tau)\\
=&\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau \omega(\tau)}
\frac{\omega^2(\tau)+\gamma(\tau)}{2}n_{\rm B}(\tau).
\end{split}
\end{equation}
These integrals are convergent due to the exponential suppression for large values of the momentum ${\sf p}$ provided by the Bose-Einstein distribution.
In particular, in the massless limit they can be carried out analytically to yield the familiar expressions of the energy density and pressure of a neutral Bose gas at temperature $T(\tau)$
\begin{equation}\label{boosteq59}
{\rho_{\rm LE}}_{\tau}(\tau)=\frac{\pi^2}{30}T^4(\tau),\qquad (m=0),
\end{equation}
\begin{equation}\label{boosteq60}
{p_{\rm LE}}_{\tau}(\tau)=\frac{1}{3}{\rho_{\rm LE}}_{\tau}(\tau)=
\frac{\pi^2}{90}T^4(\tau),\qquad (m=0).
\end{equation}
Now we turn to the renormalization scheme \eqref{boosteq49}, namely we take the standpoint either of the inertial or Milne observer and we subtract the Minkowski vacuum contribution.
In order to calculate the expectation value of the energy-momentum tensor operator in the Minkowski vacuum, we exploit the fact that $\widehat{T}^{\mu \nu}$ is built with the quantum field as in \eqref{boosteq50}, and the quantum field is in turn expanded in Milne coordinates as in \eqref{boosteq09} in terms of $\widehat{b}_{\sf p}$ and $\widehat{b}_{\sf p}^{\dagger}$, whose vacuum is $|0_{\rm M}\rangle$.
Much in the same way as previously done, we take the energy density as a prototype, with the pressure to be worked out following analogous steps.
The energy density in the Minkowski vacuum is given by \eqref{boosteq52} where the thermal expectation values of products of $\widehat{b}_{\sf p}$ and $\widehat{b}_{\sf p}^{\dagger}$ are supposed to be calculated on $|0_{\rm M}\rangle$ instead of using $\widehat{\rho}_{\rm LE}(\tau)$.
Of course, the products $\widehat{b}_{\sf p}\widehat{b}_{{\sf p}'}$, $\widehat{b}_{\sf p}^{\dagger}\widehat{b}_{{\sf p}'}^{\dagger}$ and $\widehat{b}_{\sf p}^{\dagger}\widehat{b}_{{\sf p}'}$ have vanishing expectation value on the Minkowski vacuum as $\widehat{b}_{\sf p}|0_{\rm M}\rangle=0$.
On the other hand, the expectation value of $\widehat{b}_{\sf p}\widehat{b}^{\dagger}_{{\sf p}'}$ is non-vanishing and worked out with the commutation relations \eqref{boosteq18}
\begin{equation}\label{boosteq51}
\langle 0_{\rm M}|\widehat{b}_{\sf p}\widehat{b}^{\dagger}_{{\sf p}'}|0_{\rm M}\rangle=
\langle 0_{\rm M}|\widehat{b}^{\dagger}_{{\sf p}'}\widehat{b}_{\sf p}|0_{\rm M}\rangle+\langle 0_{\rm M}|[\widehat{b}_{\sf p},\widehat{b}^{\dagger}_{{\sf p}'}]|0_{\rm M}\rangle=
\delta^2({\bf p}_{\rm T}-{\bf p}_{\rm T}')\,\delta(\mu-\mu').
\end{equation}
Thus, the energy density in the Minkowski vacuum is readily obtained from \eqref{boosteq52}
\begin{equation}
\begin{split}
\langle 0_{\rm M}|\widehat{T}^{\mu \nu}|0_{\rm M}\rangle u_{\mu}u_{\nu}=&
\int \frac{{\rm d^2p_T}\,{\rm d}\mu}{4(4\pi)^2}\left(|\partial_{\tau}h(\tau)|^2+\omega^2(\tau)|h(\tau)|^2\right)\\
=&\int \frac{{\rm d^2p_T}\,{\rm d}\mu}{(2\pi)^3\tau \omega(\tau)}\omega^2(\tau) \frac{K(\tau)}{2}.
\end{split}
\end{equation}
Repeating the same steps for the pressure, the same result is found with $K(\tau)$ replaced by $K_{\gamma}(\tau)$, the two of them coinciding in the case of the energy density.
We can therefore conclude that the thermal expectation value of the energy-momentum tensor renormalized with respect to the Minkowski vacuum reads
\begin{equation}
\begin{split}
{\Gamma^{\gamma}_{\rm LE}}_{\rm M}(\tau)=&
\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau \omega(\tau)}\omega^2(\tau)\times \\
&\times \left\{
\left[K_{\gamma}(\tau)K(\tau)-{\rm Re}\left(\Lambda_{\gamma}(\tau)\Lambda^*(\tau)\right)\right]\left(n_{\rm B}(\tau)+\frac{1}{2}\right)-\frac{K_{\gamma}(\tau)}{2}
\right\}.
\end{split}
\end{equation}
This expression, however, subtly hides a problem.
Let us take once again the energy density as an example and split it into two terms as
\begin{equation}\label{boosteq53}
\begin{split}
{\rho_{\rm LE}}_{\rm M}(\tau)=&
\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau}\omega(\tau)n_{\rm B}(\tau)-
\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau}\omega(\tau)\frac{K(\tau)-1}{2}.
\end{split}
\end{equation}
The first term coincides with \eqref{boosteq56} found in the former renormalization scheme and is clearly convergent.
The second term depends on the transverse momentum ${\bf p}_{\rm T}$ only through its modulus ${\rm p_T}\equiv |{\bf p}_{\rm T}|$, therefore we can change to polar variables in the transverse momentum and factorize the angular part
\begin{equation}
\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau}\omega(\tau)\frac{K(\tau)-1}{2}=
\frac{1}{(2\pi)^2\tau}\int_{-\infty}^{+\infty}{\rm d} \mu \int_0^{+\infty}{\rm d} {\rm p_T}\,{\rm p_T}\,\omega(\tau)\frac{K(\tau)-1}{2}.
\end{equation}
Changing variable from ${\rm p_T}$ to the transverse mass $m_{\rm T}$ and then to $v\equiv m_{\rm T} \tau$, we have
\begin{equation}
\frac{1}{(2\pi)^2\tau^4}\int_{-\infty}^{+\infty}{\rm d} \mu \int_{m\tau}^{+\infty}{\rm d} v\,v \sqrt{v^2+\mu^2}\frac{K(v)-1}{2}.
\end{equation}
In \cite{Rindori:2021quq}, an asymptotic analysis was performed for $m\tau \gg 1$, or, restoring the natural constants, $m\tau c^2\gg \hbar$.
As the details of this analysis are quite involved, they will be addressed in a dedicated space in Subsection \ref{sec:boost_ne_asymptotic_analysis}.
For the time being, we simply mention that the trend of $K(\tau)-1$ in this limit is
\begin{equation}\label{boosteq54}
K(\tau)-1=\cosh \left(2\Theta(\tau)\right)-1\simeq 2\Theta^2(\tau)\simeq \frac{m_{\rm T}^4}{\tau^2\omega^6(\tau)}.
\end{equation}
In this limit we also have $v\gg 1$, and using \eqref{boosteq54} the integral in $v$ turns out to diverge
\begin{equation}
\int_{m\tau}^{+\infty}{\rm d} v\,v \sqrt{v^2+\mu^2}\frac{v^4}{\left(\sqrt{v^2+\mu^2}\right)^6}\sim
\int_{m\tau}^{+\infty}{\rm d} v\,v\frac{v^4}{v^5}\sim \infty.
\end{equation}
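The linear growth in the cutoff can also be seen numerically: the integrand tends to $1$ at large $v$, so each extension of the upper limit adds roughly one unit per unit length. A quick sketch with scipy, for illustrative values $\mu=2$ and lower limit $5$:

```python
import numpy as np
from scipy.integrate import quad

# Large-v tail of the vacuum-subtracted integrand: with
# K - 1 ~ m_T^4/(tau^2 omega^6) = v^4/(v^2 + mu^2)^3, the v-integrand
# v*sqrt(v^2+mu^2)*v^4/(v^2+mu^2)^3 -> 1 as v -> infinity.
# mu and the lower limit are illustrative choices, not fixed by the text.
mu = 2.0
f = lambda v: v*np.sqrt(v*v + mu*mu) * v**4/(v*v + mu*mu)**3

partial = [quad(f, 5.0, L)[0] for L in (1e2, 1e3, 1e4)]
# Partial integrals grow roughly linearly with the upper cutoff
assert partial[2] > 9*partial[1] > 80*partial[0]
```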
Of course this is not a formal proof, but a strong indication nonetheless that renormalizing with respect to the Minkowski vacuum provides us with thermal expectation values that are still divergent.
This fact seems to automatically single out $|0_{\tau}\rangle$ as a preferred choice; however, it must be emphasized that $|0_{\tau}\rangle$ is a $\tau$-dependent vacuum, therefore thermal expectation values renormalized accordingly might inherit some possibly undesired $\tau$-dependence.
These issues will be further discussed in Subsection \ref{sec:boost_ne_renormalization} in the context of the non-equilibrium analysis.
For the time being, we would like the takeaway to be the fact that the Minkowski vacuum renormalization scheme should be discarded as it appears to provide us with thermal expectation values that are still divergent.
Moreover, the next Subsection will make it clear why at some point we would have needed to study the effects of the $|0_{\tau}\rangle$ contributions subtraction anyway.
\subsection{Entropy current in the future light-cone}
\label{sec:boost_lte_entropy_current}
With the plain thermal expectation value of the energy-momentum tensor, we are now in a good spot to calculate the entropy current at local thermodynamic equilibrium by using the method put forward in Section \ref{sec:zubarev_entropy_current_method}.
That method is in fact well-suited for systems at local thermodynamic equilibrium, which is the underlying hypothesis of relativistic hydrodynamics, with global thermodynamic equilibrium included as a particular case.
However, no known expression for the entropy current nor for the entropy production exists for a Quantum Field Theory out of thermodynamic equilibrium, therefore in the non-equilibrium analysis to be carried out in Section \ref{sec:boost_ne_analysis} we will not be able to re-apply our method.
As a quick recap, the method demands to be provided with two ingredients:
\begin{enumerate}
\item the thermal expectation value of the energy-momentum tensor $\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}$, and
\item the eigenvector $|0_{\Upsilon}\rangle$ corresponding to the lowest, non-degenerate eigenvalue of $\widehat{\Upsilon}(\tau)$.
\end{enumerate}
Then, the algorithm is the following:
\begin{enumerate}
\item Take $\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}$ and subtract $\langle 0_{\Upsilon}|\widehat{T}^{\mu \nu}|0_{\Upsilon}\rangle$ by using the $\lambda$-dependent density operator $\widehat{\rho}_{\rm LE}(\lambda)$ defined according to \eqref{zubeq21}.
This way, the result is $\lambda$-dependent.
\item Contract with $\beta_{\nu}$, which is $\lambda$-independent, and integrate in $\lambda$ from $\lambda=1$ to $\lambda=+\infty$ in order to obtain the thermodynamic potential current $\phi^{\mu}$ defined in \eqref{zubeq26}.
\item Plug the result into \eqref{zubeq27} and obtain the entropy current $s^{\mu}$.
\end{enumerate}
By comparing the definition \eqref{zubeq23} of the operator $\widehat{\Upsilon}(\tau)$ with the expression \eqref{boosteq15} of the local thermodynamic equilibrium density operator with boost invariance in the future light-cone, we immediately understand that
\begin{equation}\label{boosteq55}
\widehat{\Upsilon}(\tau)=\frac{\widehat{\Pi}(\tau)}{T(\tau)}=
\frac{1}{T(\tau)}\int {\rm d}^2{\rm p_T}\,{\rm d} \mu \,\omega(\tau)\left(\widehat{\xi}_{\sf p}^{\dagger}(\tau)\widehat{\xi}_{\sf p}(\tau)+\frac{1}{2}\right).
\end{equation}
Thus, the lowest eigenvector of $\widehat{\Upsilon}(\tau)$ is the vacuum of $\widehat{\Pi}(\tau)$, that is $|0_{\tau}\rangle$.
As we discussed already in Subsection \ref{sec:boost_lte_renormalization}, $|0_{\tau}\rangle$ is a $\tau$-dependent and non-degenerate state which does not coincide with the Minkowski vacuum and possesses the same symmetries as the local thermodynamic equilibrium density operator.
Hence
\begin{equation}
\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}-\langle 0_{\Upsilon}|\widehat{T}^{\mu \nu}|0_{\Upsilon}\rangle=
\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}-\langle 0_{\tau}|\widehat{T}^{\mu \nu}|0_{\tau}\rangle,
\end{equation}
which was calculated in \eqref{boosteq56} and \eqref{boosteq57} or equivalently \eqref{boosteq58}, with analytical results given by \eqref{boosteq59} and \eqref{boosteq60} for a massless field.
So, both required ingredients are available.
According to the method, the above quantity should be calculated with the modified density operator $\widehat{\rho}_{\rm LE}(\tau,\lambda)$ defined in \eqref{zubeq21} in order to be $\lambda$-dependent.
Once again, by comparing \eqref{zubeq21} with \eqref{boosteq55}, we can tell that this is but a rescaling of the proper temperature as $T(\tau)\mapsto T(\tau)/\lambda$: this transformation gives precisely the sought $\lambda$-dependence.
Thus, we have
\begin{equation}
\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}(\lambda)-\langle 0_{\Upsilon}|\widehat{T}^{\mu \nu}|0_{\Upsilon}\rangle(\lambda)=
{\rho_{\rm LE}}_{\tau}(\tau,\lambda)u^{\mu}u^{\nu}-
{p_{\rm LE}}_{\tau}(\tau,\lambda)\Delta^{\mu \nu},
\end{equation}
where the rescaling $T(\tau)\mapsto T(\tau)/\lambda$ affects only the energy density and the pressure because $u^{\mu}$ and $\Delta^{\mu \nu}$ are independent of $T(\tau)$.
This result should be contracted with the four-temperature while taking care that $\beta_{\nu}$ does not undergo the temperature rescaling, for it concerns only the thermal expectation value $\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}-\langle 0_{\Upsilon}|\widehat{T}^{\mu \nu}|0_{\Upsilon}\rangle$, as can be clearly seen in the derivation in Section \ref{sec:zubarev_entropy_current_method}.
Whence
\begin{equation}
\left(\langle \widehat{T}^{\mu \nu}\rangle_{\rm LE}(\lambda)-\langle 0_{\Upsilon}|\widehat{T}^{\mu \nu}|0_{\Upsilon}\rangle(\lambda)\right)\beta_{\nu}=
{\rho_{\rm LE}}_{\tau}(\tau,\lambda)\beta^{\mu},
\end{equation}
therefore we do not need to know the thermal expectation value of the whole energy-momentum tensor, only the energy density contributes to the thermodynamic potential current and the entropy current.
By looking at equation \eqref{boosteq56}, we see that the only dependence of the energy density on the proper temperature is in the Bose-Einstein distribution, thus
\begin{equation}
{\rho_{\rm LE}}_{\tau}(\tau,\lambda)=
\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau \omega(\tau)}\omega^2(\tau)\frac{1}{{\rm e}^{\lambda \omega(\tau)/T(\tau)}-1}.
\end{equation}
For a massless field, the $\lambda$-integration can be carried out analytically, yielding
\begin{equation}
\int_1^{+\infty}{\rm d} \lambda \,{\rho_{\rm LE}}_{\tau}(\tau,\lambda)=
\int_1^{+\infty}{\rm d} \lambda \,\frac{\pi^2}{30}\frac{T^4(\tau)}{\lambda^4}=
\frac{\pi^2}{90}T^4(\tau),\qquad (m=0)
\end{equation}
and the following expression for the thermodynamic potential current is readily obtained
\begin{equation}
\phi^{\mu}=\frac{\pi^2}{90\beta^4}\beta^{\mu}=
\frac{\pi^2}{90}T^3(\tau)u^{\mu},\qquad (m=0).
\end{equation}
Plugging this result into \eqref{zubeq27}, we finally obtain the entropy current
\begin{equation}
s^{\mu}=\frac{2\pi^2}{45\beta^4}\beta^{\mu}=
\frac{2\pi^2}{45}T^3(\tau)u^{\mu},\qquad (m=0).
\end{equation}
These are the expressions of the thermodynamic potential current and the entropy current of a free real massless scalar field at local thermodynamic equilibrium with boost invariance in the future light-cone \cite{Rindori:2021quq}.
The entropy current coincides with the classical expression of the usual relativistic kinetic theory, therefore no quantum corrections are apparently present, which is somewhat unexpected.
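The $\lambda$-integration and the resulting coefficients can be cross-checked symbolically. The sketch below is purely illustrative and assumes that \eqref{zubeq27} takes the standard Zubarev form $s^{\mu}=\phi^{\mu}+T^{\mu \nu}\beta_{\nu}$, whose component along $u^{\mu}$ here reads $s=\phi+\rho/T$:

```python
import sympy as sp

T, lam = sp.symbols('T lambda', positive=True)

# ideal massless-scalar energy density (one real degree of freedom)
rho = sp.pi**2 * T**4 / 30

# steps 1-2: rescale T -> T/lambda and integrate in lambda; the contraction
# with beta_nu contributes the overall 1/T along u^mu
phi = sp.integrate(rho.subs(T, T / lam), (lam, 1, sp.oo)) / T

# step 3: component of the entropy current along u^mu, assuming the
# standard Zubarev relation s = phi + rho/T (hypothetical form of zubeq27)
s = phi + rho / T

assert sp.simplify(phi - sp.pi**2 * T**3 / 90) == 0
assert sp.simplify(s - 2 * sp.pi**2 * T**3 / 45) == 0
print(phi, s)
```

The check reproduces $\phi=\pi^2T^3/90$ and $s=2\pi^2T^3/45$, the massless coefficients quoted above.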
\section{Non-equilibrium analysis}
\label{sec:boost_ne_analysis}
So far, we have been concerned with an analysis at local thermodynamic equilibrium, which is a non-equilibrium configuration but still close to global thermodynamic equilibrium, in the sense that the volume term in \eqref{zubeq09} containing the derivatives of the thermodynamic fields is small enough to be neglected.
In this Section, we perform a full non-equilibrium analysis instead, therefore we consider the full non-equilibrium density operator $\widehat{\rho}=\widehat{\rho}_{\rm LE}(\tau_0)$.
The calculation of the thermal expectation value of the energy-momentum tensor goes on much in the same way as we saw in the local thermodynamic equilibrium analysis.
However, there are now complications due to the mixing of terms calculated at $\tau$ and others at $\tau_0$.
It is therefore mandatory to keep track of these dependences explicitly; the dependence on ${\sf p}$, on the other hand, will still be omitted whenever possible.
\subsection{Thermal expectation values}
\label{subsec:ne_tev}
As previously mentioned, the non-equilibrium density operator and the local thermodynamic equilibrium one have the same symmetries, so the former will simply be the latter calculated at $\tau_0$, namely
\begin{equation}\label{boosteq69}
\widehat{\rho}=\widehat{\rho}_{\rm LE}(\tau_0)=
\frac{1}{Z_{\rm LE}(\tau_0)}\exp \left[-\frac{\widehat{\Pi}(\tau_0)}{T(\tau_0)}\right]
\end{equation}
with $\widehat{\Pi}(\tau_0)$ given by
\begin{equation}
\widehat{\Pi}(\tau_0)=\int {\rm d}^2{\rm p_T}\,{\rm d} \mu \,\omega(\tau_0)\left(\widehat{\xi}_{\sf p}^{\dagger}(\tau_0)\widehat{\xi}_{\sf p}(\tau_0)+\frac{1}{2}\right).
\end{equation}
The operators $\widehat{\xi}_{\sf p}(\tau_0)$ and $\widehat{\xi}_{\sf p}^{\dagger}(\tau_0)$ are those that diagonalize $\widehat{\Pi}(\tau_0)$, which is but $\widehat{\Pi}(\tau)$ at $\tau_0$, so they are just $\widehat{\xi}_{\sf p}(\tau)$ and $\widehat{\xi}_{\sf p}^{\dagger}(\tau)$ respectively at $\tau_0$.
In terms of $\widehat{b}_{\sf p}$ and $\widehat{b}_{\sf p}^{\dagger}$, they are obtained from \eqref{boosteq29} at $\tau_0$, namely
\begin{equation}
\begin{split}
\widehat{b}_{\sf p}=&\cosh \Theta({\sf p}, \tau_0)\widehat{\xi}_{\sf p}(\tau_0)-\sinh \Theta({\sf p},\tau_0)\,{\rm e}^{-i\chi({\sf p},\tau_0)}\widehat{\xi}_{-{\sf p}}^{\dagger}(\tau_0),\\
\widehat{b}_{\sf p}^{\dagger}=&\cosh \Theta({\sf p}, \tau_0) \widehat{\xi}_{\sf p}^{\dagger}(\tau_0)-\sinh \Theta ({\sf p},\tau_0)\,{\rm e}^{i\chi({\sf p},\tau_0)}\widehat{\xi}_{-{\sf p}}(\tau_0).
\end{split}
\end{equation}
Thermal expectation values out of thermodynamic equilibrium are calculated with this density operator, as anticipated in \eqref{boosteq08}.
In particular, the non-equilibrium thermal expectation value of the energy-momentum tensor is
\begin{equation}
T^{\mu \nu}=\mathrm{tr}(\widehat{\rho}\widehat{T}^{\mu \nu})=\mathrm{tr}(\widehat{\rho}_{\rm LE}(\tau_0)\widehat{T}^{\mu \nu}(\tau)).
\end{equation}
This is constrained by boost invariance to be of the form \eqref{boosteq12}, that is
\begin{equation}
T^{\mu \nu}=\rho(\tau)u^{\mu}u^{\nu}+p^{\rm T}(\tau)(\hat{i}^{\mu}\hat{i}^{\nu}+\hat{j}^{\mu}\hat{j}^{\nu})+p^{\rm L}(\tau)\hat{\eta}^{\mu}\hat{\eta}^{\nu}.
\end{equation}
In order to calculate the energy density and the two pressures, we need to know the non-equilibrium thermal expectation values of products of $\widehat{b}_{\sf p}$ and $\widehat{b}_{\sf p}^{\dagger}$ with $\widehat{\rho}$, that is
\begin{subequations}
\begin{align}
\langle \widehat{b}_{\sf p}\widehat{b}_{{\sf p}'}\rangle=&
\mathrm{tr}(\widehat{\rho}\,\widehat{b}_{\sf p}\widehat{b}_{{\sf p}'})=
\mathrm{tr}(\widehat{\rho}_{\rm LE}(\tau_0)\widehat{b}_{\sf p}\widehat{b}_{{\sf p}'})\\
\langle \widehat{b}_{\sf p}^{\dagger}\widehat{b}_{{\sf p}'}^{\dagger}\rangle=&
\mathrm{tr}(\widehat{\rho}\,\widehat{b}_{\sf p}^{\dagger}\widehat{b}_{{\sf p}'}^{\dagger})=
\mathrm{tr}(\widehat{\rho}_{\rm LE}(\tau_0)\widehat{b}_{\sf p}^{\dagger}\widehat{b}_{{\sf p}'}^{\dagger})\\
\langle \widehat{b}_{\sf p}\widehat{b}_{{\sf p}'}^{\dagger}\rangle=&
\mathrm{tr}(\widehat{\rho}\,\widehat{b}_{\sf p}\widehat{b}_{{\sf p}'}^{\dagger})=
\mathrm{tr}(\widehat{\rho}_{\rm LE}(\tau_0)\widehat{b}_{\sf p}\widehat{b}_{{\sf p}'}^{\dagger})\\
\langle \widehat{b}_{\sf p}^{\dagger}\widehat{b}_{{\sf p}'}\rangle=&
\mathrm{tr}(\widehat{\rho}\,\widehat{b}_{\sf p}^{\dagger}\widehat{b}_{{\sf p}'})=
\mathrm{tr}(\widehat{\rho}_{\rm LE}(\tau_0)\widehat{b}_{\sf p}^{\dagger}\widehat{b}_{{\sf p}'}).
\end{align}
\end{subequations}
These are most readily given by evaluating \eqref{boosteq33a}--\eqref{boosteq33d} at $\tau_0$
\begin{subequations}
\begin{align}
\langle \widehat{b}_{\sf p}\widehat{b}_{{\sf p}'}\rangle=&
-\frac{1}{2}\sinh \left(2\Theta(\tau_0)\right){\rm e}^{-i\chi(\tau_0)}\left(2n_{\rm B}(\tau_0)+1\right)\delta^2({\bf p}_{\rm T}-{\bf p}_{\rm T}')\,\delta(\mu-\mu')\label{boosteq61a}\\
\langle \widehat{b}_{\sf p}^{\dagger}\widehat{b}_{{\sf p}'}^{\dagger}\rangle=&
-\frac{1}{2}\sinh \left(2\Theta(\tau_0)\right){\rm e}^{i\chi(\tau_0)}\left(2n_{\rm B}(\tau_0)+1\right)\delta^2({\bf p}_{\rm T}-{\bf p}_{\rm T}')\,\delta(\mu-\mu')\label{boosteq61b}\\
\langle \widehat{b}_{\sf p}\widehat{b}_{{\sf p}'}^{\dagger}\rangle=&
\left[\cosh \left(2\Theta(\tau_0)\right)n_{\rm B}(\tau_0)+\cosh^2\Theta(\tau_0)\right]\delta^2({\bf p}_{\rm T}-{\bf p}_{\rm T}')\,\delta(\mu-\mu')\label{boosteq61c}\\
\langle \widehat{b}_{\sf p}^{\dagger}\widehat{b}_{{\sf p}'}\rangle=&
\left[\cosh \left(2\Theta(\tau_0)\right)n_{\rm B}(\tau_0)+\sinh^2\Theta(\tau_0)\right]\delta^2({\bf p}_{\rm T}-{\bf p}_{\rm T}')\,\delta(\mu-\mu').\label{boosteq61d}
\end{align}
\end{subequations}
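The coefficients in \eqref{boosteq61a}--\eqref{boosteq61d} follow from the elementary contractions in the state diagonalized by $\widehat{\xi}_{\sf p}(\tau_0)$, namely $\langle \widehat{\xi}^{\dagger}\widehat{\xi}\rangle=n_{\rm B}$, $\langle \widehat{\xi}\widehat{\xi}^{\dagger}\rangle=n_{\rm B}+1$ and $\langle \widehat{\xi}\widehat{\xi}\rangle=\langle \widehat{\xi}^{\dagger}\widehat{\xi}^{\dagger}\rangle=0$. The resulting hyperbolic identities can be verified symbolically; a short sketch (the operator algebra itself is done by hand):

```python
import sympy as sp

Theta, chi, n = sp.symbols('Theta chi n', real=True)
c, s = sp.cosh(Theta), sp.sinh(Theta)

def is_zero(expr):
    # rewrite hyperbolics as exponentials for a guaranteed cancellation
    return sp.simplify(sp.expand(expr.rewrite(sp.exp))) == 0

# coefficients obtained by expanding the Bogolyubov relation and using
# <xi† xi> = n, <xi xi†> = n + 1, <xi xi> = <xi† xi†> = 0:
bb     = -c*s*sp.exp(-sp.I*chi)*(2*n + 1)   # from <b b>
bdag_b = c**2*n + s**2*(n + 1)              # from <b† b>
b_bdag = c**2*(n + 1) + s**2*n              # from <b b†>

# compare with the closed forms quoted in the text:
assert is_zero(bb + sp.Rational(1, 2)*sp.sinh(2*Theta)*sp.exp(-sp.I*chi)*(2*n + 1))
assert is_zero(bdag_b - sp.cosh(2*Theta)*n - sp.sinh(Theta)**2)
assert is_zero(b_bdag - sp.cosh(2*Theta)*n - sp.cosh(Theta)**2)
```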
Taking once again the energy density as a prototype, its non-equilibrium thermal expectation value is obtained from \eqref{boosteq52} by replacing \eqref{boosteq33a}--\eqref{boosteq33d} with \eqref{boosteq61a}--\eqref{boosteq61d}
\begin{equation}
\begin{split}
\rho(\tau)=&
\mathrm{tr}(\widehat{\rho}\widehat{T}^{\mu \nu})u_{\mu}u_{\nu}=
\mathrm{tr}(\widehat{\rho}_{\rm LE}(\tau_0)\widehat{T}^{\mu \nu}(\tau))u_{\mu}u_{\nu}\\
=&\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau}\omega(\tau)
\left[K(\tau)K(\tau_0)-{\rm Re}\left(\Lambda(\tau)\Lambda^*(\tau_0)\right)\right]
\left(n_{\rm B}(\tau_0)+\frac{1}{2}\right).
\end{split}
\end{equation}
The pressures are worked out following the same steps as in the local thermodynamic equilibrium analysis.
Thus, the non-equilibrium analog of equation \eqref{boosteq45} reads
\begin{equation}\label{boosteq65}
\Gamma^{\gamma}(\tau)=
\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau \omega(\tau)}\omega^2(\tau)
\left[K_{\gamma}(\tau)K(\tau_0)-{\rm Re}\left(\Lambda_{\gamma}(\tau)\Lambda^*(\tau_0)\right)\right]
\left(n_{\rm B}(\tau_0)+\frac{1}{2}\right).
\end{equation}
Note that at $\tau=\tau_0$, namely at local thermodynamic equilibrium, this equation reduces to \eqref{boosteq45}, and so the results of the local thermodynamic equilibrium analysis are recovered as expected.
However, at later times $\tau>\tau_0$, because of the mixing of some functions at $\tau$ and others at $\tau_0$, the Wronskian of the Hankel functions is, unfortunately, not reconstructed.
When written explicitly, the above equation is given by a non-trivial combination of Hankel functions of the same order but with different arguments which cannot be recast in any simple form.
We can therefore tell that for $\tau >\tau_0$ the thermal expectation value of the energy-momentum tensor differs from its local thermodynamic equilibrium form, but we are not able to perform the integration of this combination of Hankel functions in general.
Indeed, since we are dealing with a free Quantum Field Theory, one expects to find the same expression as for the free-streaming solution of the Boltzmann equation in Milne coordinates, which is reported in Appendix \ref{appendix:free_sreaming}.
However, there will also be quantum corrections due to vacuum subtraction.
In the next Subsection, we will consider a suitable approximation in which the calculation of the non-equilibrium thermal expectation value of the energy-momentum tensor becomes feasible.
\subsection{Asymptotic analysis}
\label{sec:boost_ne_asymptotic_analysis}
In \cite{Rindori:2021quq}, the behaviour of the stress-energy tensor and related quantities for late times $\tau$ was studied.
The starting point is the function $h(\tau)$ defined in \eqref{boosteq72}, that is
\begin{equation}\label{hankelint}
h(\tau)=-i{\rm e}^{\frac{\pi}{2}\mu}{\rm H}^{(2)}_{i\mu}(m_{\rm T} \tau)
\end{equation}
The asymptotic expansion of the Hankel functions for large arguments is \cite{Gradshteyn:1702455}
\begin{equation}\label{large_x}
{\rm H}^{(2)}_{\nu}(x)\sim\sqrt{\frac{2}{\pi x}}{\rm e}^{-i\left( x-\frac{\pi}{2}\nu -\frac{\pi}{4} \right)}\sum_n
\frac{1}{(2ix)^n} \frac{\Gamma(\nu+1/2 +n)}{n!\Gamma(\nu+1/2-n)}
\end{equation}
which is valid for ${\rm Re}(\nu)>-1/2$ and $|{\rm arg}(x)|<\pi$.
By making use of the property $z\Gamma(z) = \Gamma(z+1)$, substituting $x=m_{\rm T}\tau$ and $\nu=i\mu$, and plugging into \eqref{hankelint}, we get
\begin{equation}\label{h_large_arg}
h(\tau) \sim \sqrt{\frac{-2i}{\pi m_{\rm T}\tau}}{\rm e}^{-im_{\rm T}\tau}\sum_n
\frac{1}{(2im_{\rm T}\tau)^n} \frac{\left(i\mu +\frac{1}{2} -n \right)^{(2n)}}{n!},
\end{equation}
valid for large $m_{\rm T}\tau$.
Similarly, using the exact relation \cite{Gradshteyn:1702455}
\begin{equation}
z \partial_z {\rm H}^{(2)}_{\nu}(z)
=\nu {\rm H}^{(2)}_{\nu}(z) -z {\rm H}^{(2)}_{\nu+1}(z),
\end{equation}
along with the expansion \eqref{large_x}, one obtains the expansion for the derivative $\partial_\tau h$
\begin{equation}\label{h_dot_large_arg}
\begin{split}
\partial_\tau h(\tau) \sim&
-im_{\rm T}\sqrt{\frac{-2i}{\pi m_{\rm T}\tau}}{\rm e}^{-im_{\rm T}\tau}\times \\
&\times \left[1+ \sum_{n>0} \frac{1}{(2im_{\rm T}\tau)^{n}}\left(-2i\mu \frac{\left(i\mu +\frac{3}{2} -n \right)^{(2n-2)}}{(n-1)!} + \frac{\left(i\mu +\frac{3}{2} -n \right)^{(2n)}}{n!}\right) \right].
\end{split}
\end{equation}
In particular, retaining the terms up to first order (next-to-leading) in $1/(m_{\rm T}\tau)$ we get
\begin{equation}
\begin{split}
h(\tau) \simeq&
\sqrt{\frac{-2i}{\pi m_{\rm T}\tau}}{\rm e}^{-im_{\rm T}\tau}\left[ 1 -\frac{i}{2m_{\rm T}\tau} \left( i\mu -\frac{1}{2}\right)\left( i\mu +\frac{1}{2}\right)\right]\\
=&\sqrt{\frac{-2i}{\pi m_{\rm T}\tau}}{\rm e}^{-im_{\rm T}\tau}\left[1 +i\frac{1+4\mu^2}{8m_{\rm T}\tau} \right],
\end{split}
\end{equation}
\begin{equation}
\partial_{\tau} h(\tau) \simeq-im_{\rm T}\sqrt{\frac{-2i}{\pi m_{\rm T}\tau}}e^{-im_{\rm T}\tau}\left[1-i\frac{3-4\mu^2}{8m_{\rm T}\tau}\right].
\end{equation}
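These truncated expansions can be checked numerically against the exact Hankel functions. The sketch below is purely illustrative and takes $\mu=0$, where the order is real so SciPy's \texttt{hankel2} applies; $x$ stands for $m_{\rm T}\tau$:

```python
import numpy as np
from scipy.special import hankel2

# h(tau) = -i e^{pi mu / 2} H^(2)_{i mu}(m_T tau); for mu = 0 the order is real.
def h_exact(x):
    return -1j * hankel2(0, x)

def h_lo(x):
    # leading order of the large-argument expansion
    return np.sqrt(-2j / (np.pi * x)) * np.exp(-1j * x)

def h_nlo(x):
    # next-to-leading order: factor 1 + i (1 + 4 mu^2)/(8 x), here with mu = 0
    return h_lo(x) * (1 + 1j / (8 * x))

x = 50.0
err_lo = abs(h_exact(x) - h_lo(x))
err_nlo = abs(h_exact(x) - h_nlo(x))
print(err_lo, err_nlo)  # including the 1/x term improves the expansion
```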
Feeding the above expansions in the definitions \eqref{boosteq41} and \eqref{boosteq42}, we obtain
\begin{equation}\label{kappa3}
K_{\gamma}(\tau)\simeq
\frac{1}{2 m_{\rm T}\omega(\tau)} \left[m_{\rm T}^2
+\gamma(\tau)\right],
\end{equation}
\begin{equation}\label{lambda3}
\Lambda_{\gamma}(\tau)\simeq
\frac{1}{2m_{\rm T}\omega(\tau)}
{\rm e}^{-2im_{\rm T}\tau}
\left[-m_{\rm T}^2\left(1-i\frac{3-4\mu^2}{4m_{\rm T}\tau} \right) +\gamma(\tau)\left(1+i\frac{1+4\mu^2}{4m_{\rm T}\tau} \right) \right].
\end{equation}
In the case of $K_{\gamma}(\tau)$, the remainder is of the order $1/[m_{\rm T}\omega(\tau)(m_{\rm T}\tau)^2]$, while for $\Lambda_{\gamma}(\tau)$ it is of order ${\rm e}^{-2im_{\rm T}\tau}/[m_{\rm T}\omega(\tau)(m_{\rm T}\tau)^2]$.
The last formulae are indicative of the behaviour both at late times and at large values of the transverse momentum ${\rm p_T}$, and hence of the transverse mass $m_{\rm T}$, which is needed for the convergence check of the renormalized thermal expectation values.
In both cases $m_{\rm T}\tau\gg1$, so the previous asymptotic expansion is appropriate.
Concerning the large transverse momentum behaviour, equation \eqref{kappa3} tells us that $K(\tau)\to 1$, therefore $K(\tau)-1$ vanishes at first order.
However, from equation \eqref{lambda3} and the exact relation \eqref{boosteq22}, one can obtain the value of the second order expansion.
Indeed, for $\gamma(\tau)=\omega(\tau)$, in the large $m_{\rm T}$ limit
\begin{equation}
\Lambda(\tau)=
\frac{1}{2m_{\rm T} \tau}{\rm e}^{-2im_{\rm T}\tau}
+{\rm ord}\left( \frac{{\rm e}^{-2im_{\rm T}\tau}}{m_{\rm T}^2} \right),
\end{equation}
therefore
\begin{equation}
\frac{1}{2m_{\rm T}\tau}\simeq |\Lambda(\tau)|=
\sinh \Theta(\tau)\simeq \Theta(\tau),
\end{equation}
then, still in the limit of large $m_{\rm T}$,
\begin{equation}
K(\tau)-1\simeq
\frac{1}{2}\Theta^2(\tau)\simeq
\frac{1}{8m_{\rm T}^2\tau^2}.
\end{equation}
A very similar approach can be used for the late time behaviour $\tau\to\infty$.
Namely, one uses the asymptotic formulae \eqref{kappa3} and \eqref{lambda3} and expands $\omega(\tau)$ and $\gamma(\tau)$ for large $\tau$, keeping only the first orders, that is up to the order $1/\tau$ for $K_{\gamma}(\tau)$ and up to the order ${\rm e}^{-2im_{\rm T}\tau}/\tau$ for $\Lambda_{\gamma}(\tau)$.
This difference between the orders of $K_{\gamma}(\tau)$ and $\Lambda_{\gamma}(\tau)$ is due to the different gauge functions appearing in the two cases in the large-argument expansion of the Hankel functions.
Therefore, one has
\begin{equation}
K_{\gamma}(\tau)\simeq
\frac{m_{\rm T}^2 +\tilde{\gamma}}{2 m_{\rm T}^2},
\end{equation}
\begin{equation}
\Lambda_{\gamma}(\tau)\simeq
\frac{1}{2m_{\rm T}^2 }{\rm e}^{-2im_{\rm T}\tau}
\left[\tilde{\gamma} - m_{\rm T}^2 +i\frac{m_{\rm T}^2(3-4\mu^2)+\tilde{\gamma}(1+4\mu^2)}{4m_{\rm T}\tau}\right],
\end{equation}
with $\tilde{\gamma}$ being
\begin{equation}
\tilde{\gamma}\equiv
\lim_{\tau \to \infty}\gamma(\tau)=
\begin{cases}
m_{\rm T}^2, & \mbox{for }\rho(\tau)\\
-m^2, & \mbox{for } p^{\rm T}(\tau)\\
-m_{\rm T}^2, &\mbox{for } p^{\rm L}(\tau).
\end{cases}
\end{equation}
It is important to note that, except for the case related to the energy density, and there only at leading order, $\Lambda_{\gamma}(\tau)$ has a rapidly oscillating phase preventing a proper limit in the sense of functions.
However, it does converge in the sense of distributions, which is sufficient since it has to be integrated.
In fact the limits
\begin{equation}
\lim_{\tau \to \infty} \sin(2m_{\rm T}\tau), \qquad
\lim_{\tau \to \infty} \cos(2m_{\rm T}\tau)
\end{equation}
are proportional to Dirac delta functions.
Making use of the formula for the delta families
\begin{equation}
\delta(x)=\lim_{\epsilon\to 0} \frac{1}{\epsilon}f\left(\frac{x}{\epsilon}\right),
\end{equation}
for a generic function $f$ normalized to $1$, and setting $\epsilon \equiv 1/\tau$ in the first case and $\epsilon \equiv 1/\sqrt{\tau}$ in the second one, we have
\begin{align}
\lim_{\epsilon \to 0} \left(\frac{1}{\pi}\frac{\sin(x/\epsilon)}{x}\right)=\delta(x)
\qquad &\Rightarrow \qquad
\sin(2m_{\rm T}\tau)\;\stackrel{\tau\to \infty}{\longrightarrow}\;2\pi m_{\rm T}\,\delta(2m_{\rm T}),\\
\lim_{\epsilon \to 0}
\left(\frac{1}{\epsilon \sqrt{\pi}}\cos\left(\frac{x^2}{\epsilon^2}\right)\right)
=\delta(x) \qquad &\Rightarrow \qquad
\cos(2m_{\rm T}\tau)\;\stackrel{\tau \to \infty}{\longrightarrow}\;
\sqrt{\frac{\pi}{\tau}}\,\delta(\sqrt{2m_{\rm T}}).
\end{align}
In both cases, the support of the Dirac delta function lies outside the domain of integration, hence all these integrals vanish in the long proper-time limit.
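The suppression of these rapidly oscillating terms under the momentum integration can also be illustrated numerically: with a smooth, integrable weight (here ${\rm e}^{-m_{\rm T}}$ as a stand-in for the actual integrand), the integral of $\cos(2m_{\rm T}\tau)$ is damped as $\tau$ grows, in agreement with the delta-family argument. A minimal sketch:

```python
import numpy as np
from scipy.integrate import quad

# int_a^b exp(-mT) cos(2 tau mT) dmT, computed with quad's oscillatory
# 'cos' weight; exp(-mT) is a stand-in for the smooth part of the integrand.
def I_num(tau, a=1.0, b=50.0):
    val, _ = quad(lambda mT: np.exp(-mT), a, b, weight='cos', wvar=2.0 * tau)
    return val

# exact value of the same integral, for cross-checking
def I_exact(tau, a=1.0, b=50.0):
    z = 1.0 - 2j * tau
    return ((np.exp(-a * z) - np.exp(-b * z)) / z).real

for tau in (1.0, 10.0, 100.0):
    print(tau, I_num(tau))  # bounded by ~ exp(-1)/(2 tau): suppressed at large tau
```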
These expressions can now be used to calculate the asymptotic value of the non-equilibrium thermal expectation value \eqref{boosteq65} at late times.
In particular, we want to focus on the energy density, in which case $K_{\gamma}(\tau)$ and $\Lambda_{\gamma}(\tau)$ coincide with $K(\tau)$ and $\Lambda(\tau)$ respectively.
As $\tau$ grows to infinity, $K(\tau)$
tends to the constant value $1$, whereas $\Lambda(\tau)$
vanishes, thus
\begin{equation}\label{boosteq68}
\begin{split}
\rho(\tau)=&
\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau}\omega(\tau)
\left[K(\tau)K(\tau_0)-{\rm Re}\left(\Lambda(\tau)\Lambda^*(\tau_0)\right)\right]
\left(n_{\rm B}(\tau_0)+\frac{1}{2}\right)\\
\simeq&
\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau}\omega(\tau)
K(\tau_0)\left(n_{\rm B}(\tau_0)+\frac{1}{2}\right)\\
=&\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau}\omega(\tau)
\left(n_{\rm B}(\tau_0)+\frac{1}{2}\right)+
\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau}\omega(\tau)
\left(K(\tau_0)-1\right)\left(n_{\rm B}(\tau_0)+\frac{1}{2}\right)\\
=&\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau}\omega(\tau)
\left(n_{\rm B}(\tau_0)+\frac{1}{2}\right)+
\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau}\omega(\tau)
2\sinh^2 \Theta(\tau_0)\left(n_{\rm B}(\tau_0)+\frac{1}{2}\right)
\end{split}
\end{equation}
where equation \eqref{boosteq22} was used in the last step.
Renormalization will be discussed in Subsection \ref{sec:boost_ne_renormalization}.
As we shall shortly see there, once suitably renormalized, the first integral will correspond to the classical free-streaming solution in Milne coordinates, reported in Appendix \ref{appendix:free_sreaming}.
On the other hand, the second integral will be a pure quantum correction due to vacuum effects, for it will vanish only for $\Theta(\tau_0)=0$.
\subsection{Renormalization}
\label{sec:boost_ne_renormalization}
The non-equilibrium thermal expectation values \eqref{boosteq65} as well as the asymptotic energy density \eqref{boosteq68} are divergent due to the $1/2$ terms stemming from the commutation relations between $\widehat{\xi}_{\sf p}(\tau_0)$ and $\widehat{\xi}_{\sf p}^{\dagger}(\tau_0)$, and must be suitably renormalized.
What we are facing here is essentially the same ambiguity of which vacuum contribution should be subtracted as the one discussed in Subsection \ref{sec:boost_lte_renormalization} in the context of the local thermodynamic equilibrium analysis.
The choice there was between the two vacua $|0_{\tau}\rangle$ and $|0_{\rm M}\rangle$, the former being the $\tau$-dependent vacuum of the local thermodynamic equilibrium density operator whereas the latter being the static Minkowski vacuum.
In particular, it was shown that by taking the standpoint of the inertial or the Milne observer and renormalizing with respect to the Minkowski vacuum, we ended up with thermal expectation values that were still divergent.
This persuaded us that the Minkowski vacuum renormalization scheme should be discarded.
That conclusion still holds in the present non-equilibrium analysis, but now we have to take a closer look at what the subtraction of the contribution of the $\tau$-dependent vacuum $|0_{\tau}\rangle$ implies.
The renormalized thermal expectation value of the energy-momentum tensor ought to fulfill the hydrodynamic equations $\partial_{\mu}T^{\mu \nu}=0$.
However, in the $|0_{\tau}\rangle$ scheme, the renormalized tensor
\begin{equation}
T^{\mu \nu}_{\tau}\equiv
\mathrm{tr}(\widehat{\rho}\,\widehat{T}^{\mu \nu})-\langle 0_{\tau}|\widehat{T}^{\mu \nu}|0_{\tau}\rangle=
\mathrm{tr} \left[(\widehat{\rho}-|0_{\tau}\rangle \langle 0_{\tau}|)\widehat{T}^{\mu \nu}\right]
\end{equation}
has a non-vanishing divergence due to the $\tau$-dependence of $|0_{\tau}\rangle$
\begin{equation}
\begin{split}
\partial_{\mu}T^{\mu \nu}_{\tau}=&
\partial_{\mu}\mathrm{tr} \left[(\widehat{\rho}-|0_{\tau}\rangle \langle 0_{\tau}|)\widehat{T}^{\mu \nu}\right]\\
=&\mathrm{tr} \left[(\widehat{\rho}-|0_{\tau}\rangle \langle 0_{\tau}|)\partial_{\mu}\widehat{T}^{\mu \nu}\right]-
\mathrm{tr} \left[(\partial_{\mu}|0_{\tau}\rangle \langle 0_{\tau}|)\widehat{T}^{\mu \nu}\right]\\
=&-\mathrm{tr} \left[u_{\mu}\left(\partial_{\tau}|0_{\tau}\rangle \langle 0_{\tau}|\right)\widehat{T}^{\mu \nu}\right]\ne 0,
\end{split}
\end{equation}
where the conservation law $\partial_{\mu}\widehat{T}^{\mu \nu}=0$ and the stationarity of $\widehat{\rho}$ were used.
Therefore, although the energy-momentum tensor quantum operator is conserved, its renormalized thermal expectation value does not fulfill the hydrodynamic equations in this scheme.
We conclude that, in order to have a thermal expectation which is both finite and conserved, the vacuum to be subtracted must be stationary.
Since the Minkowski vacuum cannot be chosen either, our most reasonable alternative is the vacuum of the non-equilibrium density operator \eqref{boosteq69}, namely the vacuum of $\widehat{\Pi}(\tau_0)$.
That is but the state annihilated by the operators $\widehat{\xi}_{\sf p}(\tau_0)$, so, in analogy with \eqref{boosteq70}, we indicate it as $|0_{\tau_0}\rangle$
\begin{equation}
\widehat{\xi}_{\sf p}(\tau_0)|0_{\tau_0}\rangle \equiv 0.
\end{equation}
As mentioned already, the operators $\widehat{\xi}_{\sf p}(\tau_0)$ are just $\widehat{\xi}_{\sf p}(\tau)$ evaluated at a fixed $\tau_0$, hence $|0_{\tau_0}\rangle$ is $|0_{\tau}\rangle$ at $\tau_0$, and as such it is a stationary state suitable for a proper renormalization.
Much in the same way as $|0_{\tau}\rangle$ is obtained from $\widehat{\rho}_{\rm LE}(\tau)$ in the $T(\tau)\to 0$ limit as in \eqref{boosteq71}, $|0_{\tau_0}\rangle$ is obtained from $\widehat{\rho}$ in the $T(\tau_0)\to 0$ limit
\begin{equation}
\lim_{T(\tau_0)\to 0}\widehat{\rho}=
\lim_{T(\tau_0)\to 0}\widehat{\rho}_{\rm LE}(\tau_0)=
\lim_{T(\tau_0)\to 0}\frac{1}{Z_{\rm LE}(\tau_0)}\exp \left[-\frac{\widehat{\Pi}(\tau_0)}{T(\tau_0)}\right]=
|0_{\tau_0}\rangle \langle 0_{\tau_0}|.
\end{equation}
Thus, renormalization with respect to $|0_{\tau_0}\rangle$ is but the subtraction of the $T(\tau_0)=0$ contribution.
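This projection property in the $T(\tau_0)\to 0$ limit can be made concrete with a single-mode toy example: the Gibbs weights $p_n \propto {\rm e}^{-\omega n/T}$ (the common zero-point factor cancels in the normalization) concentrate entirely on the ground state as $T\to 0$. A minimal numerical illustration:

```python
import numpy as np

# Single-mode toy model with Pi = omega (n + 1/2): the Gibbs weights of
# exp(-Pi/T)/Z concentrate on n = 0 as T -> 0, i.e. rho -> |0><0|.
omega, nmax = 1.0, 40
n = np.arange(nmax)

def ground_state_weight(T):
    w = np.exp(-omega * n / T)   # zero-point energy cancels in the ratio
    return w[0] / w.sum()

for T in (1.0, 0.1, 0.01):
    print(T, ground_state_weight(T))  # tends to 1 as T -> 0
```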
The non-equilibrium thermal expectation value of the energy-momentum tensor thereby renormalized is
\begin{equation}
T^{\mu \nu}_{\tau_0}\equiv
\mathrm{tr}(\widehat{\rho}\widehat{T}^{\mu \nu})-\langle 0_{\tau_0}|\widehat{T}^{\mu \nu}|0_{\tau_0}\rangle=
\mathrm{tr}(\widehat{\rho}_{\rm LE}(\tau_0)\widehat{T}^{\mu \nu})-\langle 0_{\tau_0}|\widehat{T}^{\mu \nu}|0_{\tau_0}\rangle,
\end{equation}
in particular, equation \eqref{boosteq65} becomes
\begin{equation}
\Gamma^{\gamma}_{\tau_0}(\tau)=
\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau \omega(\tau)}\omega^2(\tau)
\left[K_{\gamma}(\tau)K(\tau_0)-{\rm Re}\left(\Lambda_{\gamma}(\tau)\Lambda^*(\tau_0)\right)\right]
n_{\rm B}(\tau_0),
\end{equation}
whereas the asymptotic expression of the energy-density at late times \eqref{boosteq68} reads
\begin{equation}
\rho_{\tau_0}(\tau)=
\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau}\omega(\tau)
n_{\rm B}(\tau_0)+
\int \frac{{\rm d}^2{\rm p_T}\,{\rm d} \mu}{(2\pi)^3\tau}\omega(\tau)
2\sinh^2 \Theta(\tau_0)n_{\rm B}(\tau_0).
\end{equation}
The first term corresponds to the classical free-streaming solution in Milne coordinates, as shown in Appendix \ref{appendix:free_sreaming}.
On the other hand, the second one is a pure quantum correction due to vacuum effects, an interpretation supported by the fact that it vanishes only if $\Theta(\tau_0)=0$.
Interestingly enough, the latter does not vanish at later times; in fact, it can even be comparable with the former if the argument of $\Theta(\tau_0)$, namely $m_{\rm T} \tau_0$, is of order ${\cal O}(1)$, that is, for an early decoupling of the system.
\section{Summary and outlook}
In this Chapter, we considered a relativistic quantum fluid with boost invariance.
This is of particular concern for the physics of the quark-gluon plasma as boost invariance is approximately realized in the central-rapidity region in heavy-ion collisions, although modern hydrodynamic calculations can go beyond this model by including transverse expansion and, possibly, by breaking boost invariance itself.
A boost-invariant fluid is inherently out of thermodynamic equilibrium, so the thermodynamic fields are in general unknown.
However, geometrical constraints descending from this particular symmetry provided us with enough information on the thermodynamic fields and other quantities to at least attempt actual calculations.
After thoroughly presenting boost invariance and its properties, we considered a free real scalar field in the future light-cone as the Quantum Field Theory underlying the hydrodynamic theory.
This differed in several respects from the case considered in Chapter \ref{chapter:gteacceleration} in the right Rindler wedge.
In the Rindler wedges, the quantum field was expanded in terms of Rindler creation and annihilation operators, which were different from the usual operators appearing in the plane waves expansion, and the density operator was diagonal with respect to them.
In particular, these two sets of operators were related by a non-trivial Bogolyubov transformation, which, together with the four-temperature having a Killing horizon, ultimately gave rise to the Unruh effect.
On the other hand, in the future light-cone the quantum field was expanded in terms of creation and annihilation operators linearly related to the plane waves ones, but the density operator was diagonalized by a new set of operators related to the former by a non-trivial Bogolyubov transformation.
Moreover, the four-temperature had no Killing horizon for it was not a Killing vector field.
In fact, the future light-cone is not a stationary spacetime, therefore it does not possess any global timelike Killing vector field.
In view of these facts, there appeared to be no analog of the Unruh effect in the future light-cone.
With a diagonal density operator, we were in a good position to calculate thermal expectation values.
First, we performed a local thermodynamic equilibrium analysis.
At local thermodynamic equilibrium, in fact, due to the reconstruction of the Wronskian of the Hankel functions, thermal expectation values of operators of physical interest, such as the energy-momentum tensor, could be calculated in an exact analytical way.
Somewhat unexpectedly, the ideal form of the energy-momentum tensor was obtained.
Renormalization was also discussed, pointing out peculiar vacuum effects: the subtraction of the Minkowski vacuum contribution ended up in thermal expectation values which were still infinite, at variance, for instance, with the case studied in Chapter \ref{chapter:gteacceleration}.
The entropy current thereby obtained seemed not to contain any apparent quantum correction, which was equally unexpected.
Finally, we performed a full non-equilibrium analysis.
This time, however, because of the mixing of terms calculated at different times, thermal expectation values became non-trivial combinations of Hankel functions evaluated at different arguments.
The Wronskian could not be reconstructed, nor were we able to carry out the exact integration of such combinations.
In order to make progress, we considered an asymptotic expansion which simplified the expressions and eventually allowed us to obtain results.
Renormalization was also discussed in this limit, highlighting again the interesting vacuum effects.
The subtraction of the Minkowski vacuum contribution still ended up in divergent thermal expectation values, therefore we had to subtract the $|0_{\tau_0}\rangle$ contribution, with $|0_{\tau_0}\rangle$ being the stationary vacuum of $\widehat{\xi}_{\sf p}(\tau_0)$ at fixed $\tau_0$.
Interestingly, in this renormalization scheme the energy density turned out to be the sum of two terms: the first being the classical free-streaming solution in Milne coordinates, the second a pure quantum correction due to a vacuum term.
No entropy current or entropy production rate could be calculated since no formula is currently known for such quantities in the case of a Quantum Field Theory fully out of thermodynamic equilibrium.
\section{Fluids}
\label{sec:relhydro_fluids}
When we stumble upon the word ``hydro'', our thoughts instinctively go to water and liquids in general.
This is certainly not wrong, as hydrodynamics has indeed to do with the motion of fluids, but in modern Physics the concept of fluid is far more general than that.
In order to get some insight into this point, let us consider a system of $N$ particles interacting through some coupling and look at the ratio ${\cal R}\equiv \lambda_{\rm DB}/l$, where $\lambda_{\rm DB}$ is the de Broglie wavelength associated with each particle and $l$ is the characteristic distance between particles.
If ${\cal R}\gtrsim 1$ there is a considerable overlap of the wavefunctions of the particles, thus the quantum mechanical nature of the system is important and the evolution will be governed by the Schr\"odinger equation for the $N$-particle wavefunction.
On the other hand, if ${\cal R}\ll 1$ the particles are so far away from each other that quantum interference can be neglected, so there will be $N$ single-particle wavefunctions each evolving according to the Schr\"odinger equation, moving like classical particles.
This is Ehrenfest's theorem: the evolution of the expectation value of each observable coincides with the evolution described by classical mechanics.
Of course, if $N$ is very large, solving $N$ Schr\"odinger equations is not practically feasible; a statistical approach would then be necessary.
However, there is also a regime where another kind of description is possible.
Suppose that the number of particles $N$ is very large, and that the system extends over a length scale $L$ so much larger than $l$ (and of course $\lambda_{\rm DB}$) that the single-particle dynamics cannot be followed, that is $\lambda_{\rm DB}/L\lll 1$.
This separation between the microscopic and the macroscopic scales is quantified by the Knudsen number $K_{\rm N}\equiv l/L$, which should be $K_{\rm N}\ll 1$ in the above configuration.
In this case, the system can be conveniently approximated as a continuum whose dynamics can be studied collectively.
This continuum is called a \textit{fluid}, a system whose large-scale properties can be described effectively without having to worry about the features that the constituent elements have at much smaller length-scales.
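As a quantitative illustration of the scales involved, the following numerical sketch evaluates ${\cal R}$ and $K_{\rm N}$ for assumed sample values (a nitrogen-like gas at room temperature in a metre-sized box; all numbers are illustrative, not taken from the text):

```python
import math

# Back-of-the-envelope sketch with assumed sample values: a nitrogen-like
# gas at room temperature in a 1 m box.  R = lambda_DB/l signals the
# classical regime if << 1; K_N = l/L signals the fluid regime if << 1.
k_B = 1.380649e-23    # Boltzmann constant [J/K]
h_P = 6.62607015e-34  # Planck constant [J s]
m   = 4.65e-26        # molecular mass [kg]
T   = 300.0           # temperature [K]
n   = 2.5e25          # number density [1/m^3]
L   = 1.0             # macroscopic length scale [m]

lam_DB = h_P / math.sqrt(3.0 * m * k_B * T)  # thermal de Broglie wavelength
l = n ** (-1.0 / 3.0)                        # mean interparticle distance

R = lam_DB / l
K_N = l / L
print(f"lambda_DB = {lam_DB:.2e} m, l = {l:.2e} m")
print(f"R = {R:.1e}, K_N = {K_N:.1e}")
```

Both ratios come out much smaller than one, placing such a gas deep in the classical fluid regime.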
A fluid is usually thought of as divided into so-called \textit{fluid elements}, namely ``cells'' large enough to contain a great number of particles, but still small compared to the macroscopic scale $L$ so as to guarantee homogeneity within them.
As a consequence, in each fluid element the particles have the same average velocity and are at thermodynamic equilibrium.
The properties of neighbouring fluid elements, which can be different and even discontinuous, represent then the global properties of the fluid.
The collective dynamics of the fluid is called \textit{fluid dynamics} or \textit{hydrodynamics}: its aim is the study of the evolution of the fluid by solving its equations of motion, the \textit{hydrodynamic equations}.
Thus, hydrodynamics is an effective classical theory describing the evolution of a system in terms of few quantities, although the underlying microscopic theory might be quantum mechanical and possibly complicated.
Moreover, this definition of a fluid appears to be quite general: it certainly includes the liquids and gases we intuitively expect, but actually many other different kinds of systems as well.
These are some of the reasons why hydrodynamics finds applications in a vast number of phenomena, ranging from meteorology to heavy-ion collisions, relativistic astrophysics and cosmology.
Hydrodynamics thus establishes a connection between the microscopic and the macroscopic properties of a system, but when does it become \textit{relativistic}?
This can occur essentially in three different contexts, which are not mutually exclusive in general.
The first one is when the velocities of the constituent particles within a fluid element are close to the speed of light, or, equivalently, when their Lorentz factor is significantly larger than one.
The second context applies instead when the Lorentz factor of the macroscopic motion of the fluid is significantly larger than one, quite independently of the microscopic properties of the fluid.
The third and final one emerges whenever the macroscopic gravitational field is strong enough to require a description in terms of General Relativity.
In this latter case, no assumption is made about the velocity of the fluid, which can even be at rest as, for instance, in a stationary relativistic star.
As we mentioned, a relativistic fluid is thought of as divided into many fluid elements.
What we can calculate in those elements are averages of an underlying particle substrate.
The distribution function to calculate those averages usually comes from a relativistic kinetic theory, a model describing the non-equilibrium dynamics of a fluid whose microscopic degrees of freedom can be regarded as a dilute system of weakly interacting particles.
The averages thereby obtained enter the hydrodynamic equations, so in this sense relativistic hydrodynamics can be thought as a coarse graining of an underlying relativistic kinetic theory.
This approach works well when the mean free path of the particles is much larger than their thermal wavelength; however, it breaks down for strongly interacting systems, which in general require a full quantum field theoretical description.
This does not mean that a hydrodynamic theory cannot be built for such systems, but only that it should be done without necessarily relying on a relativistic kinetic theory.
The problem of calculating the averages entering the hydrodynamic equations in the case of an underlying Quantum Field Theory will be the subject of the next Chapter.
In the present one, we assume they have been somehow obtained and we focus on their hydrodynamic evolution.
The first problem we come across when we forget about a particle substrate is the definition of a hydrodynamic velocity field $u^{\mu}$, which, in a kinetic theory, would intuitively be the average velocity of the particles.
Thus, the idea is to consider as the fundamental quantity the energy-momentum tensor $T^{\mu \nu}$, which always exists, and define a velocity field from it alone.
It is known that if $T^{\mu \nu}u_{\mu}u_{\nu}\ge 0$ for any timelike vector field $u^{\mu}$, called the \textit{weak energy condition}, then there exists a unique timelike vector field ${u_{\rm L}}^{\mu}$ with magnitude ${u_{\rm L}}^{\mu}{u_{\rm L}}_{\mu}=1$ such that $T^{\mu \nu}{u_{\rm L}}_{\mu}=\lambda {u_{\rm L}}^{\nu}$ for some $\lambda \ge 0$.
In other words, if the energy density measured by any observer is non-negative, the energy-momentum tensor has a unique timelike eigenvector.
The reference frame comoving with ${u_{\rm L}}^{\mu}$ is called the \textit{local rest frame}, and it is such that $T^{00}=\lambda$ and $T^{0i}=0$ for some $T^{ij}$.
So, one way to proceed is to take ${u_{\rm L}}^{\mu}$ as the hydrodynamic velocity field of the fluid.
However, this frame choice, called the \textit{Landau frame} (hence the subscript ${\rm L}$), is not the only possible one.
In fact, the system could also possess a charged current $j^{\mu}$, for instance an electromagnetic or baryonic current, as it often happens in astrophysics, so we might as well define the hydrodynamic velocity field as the direction of the charged current ${u_{\rm E}}^{\mu}\equiv j^{\mu}/\sqrt{j^2}$, where by $j^2$ we mean $j^2\equiv j^{\mu}j_{\mu}$.
This frame choice, called the \textit{Eckart frame} (hence the subscript ${\rm E}$), is in some sense the most straightforward generalization of the velocity field of non-relativistic hydrodynamics.
In fact, if $j^{\mu}$ is taken as the baryonic current, ${u_{\rm E}}^{\mu}$ coincides with the average particles' velocity of non-relativistic hydrodynamics.
Anyway, whatever the frame we choose, the philosophy is that the fundamental quantities are the energy-momentum tensor and the charged currents, which always exist, and the velocity field is built from them.
This is, in a sense, the reverse of what is usually done in non-relativistic hydrodynamics, where all the quantities are built starting from the velocity field.
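The eigenvector characterization of the Landau velocity can be checked numerically. The following sketch, in $1+1$ dimensions with assumed sample values of $\rho$, $p$ and the fluid rapidity, builds the ideal energy-momentum tensor of a boosted fluid and recovers ${u_{\rm L}}^{\mu}$ as the timelike eigenvector of ${T^{\mu}}_{\nu}$:

```python
import math

# Sketch (assumed sample numbers) of the eigenvector characterization of
# the Landau velocity, in 1+1 dimensions with metric g = diag(1,-1), c = 1.
rho, p, y = 3.0, 1.0, 0.5          # energy density, pressure, fluid rapidity
u = (math.cosh(y), math.sinh(y))   # true velocity field, u.u = +1
g = ((1.0, 0.0), (0.0, -1.0))

# ideal T^{mu nu} = (rho + p) u^mu u^nu - p g^{mu nu}
T = [[(rho + p) * u[a] * u[b] - p * g[a][b] for b in range(2)]
     for a in range(2)]

# mixed tensor M^mu_nu = T^{mu alpha} g_{alpha nu}; its eigenvalues are
# rho (timelike eigenvector) and -p (spacelike eigenvector)
M = [[sum(T[a][c] * g[c][b] for c in range(2)) for b in range(2)]
     for a in range(2)]
tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
lam = 0.5 * (tr + math.sqrt(tr * tr - 4.0 * det))  # larger root = rho

# eigenvector for lam, normalized to v.v = +1 and made future-pointing
v = (M[0][1], lam - M[0][0])
norm = math.sqrt(v[0] ** 2 - v[1] ** 2)
v = (v[0] / norm, v[1] / norm)
if v[0] < 0:
    v = (-v[0], -v[1])

print(f"eigenvalue = {lam:.6f}, rho = {rho}")
print(f"recovered u = ({v[0]:.6f}, {v[1]:.6f}), "
      f"true u = ({u[0]:.6f}, {u[1]:.6f})")
```

The recovered eigenvalue equals the energy density and the normalized eigenvector reproduces the boosted velocity field.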
Once the hydrodynamic velocity field is given, the other kinematic quantities can be worked out.
The acceleration vector field, for instance, is defined as the derivative of the velocity field along the flow
\begin{equation}\label{eqrelhydro23}
A^{\mu}\equiv \frac{{\rm d} u^{\mu}}{{\rm d} \tau}=u^{\nu}\nabla_{\nu}u^{\mu},
\end{equation}
where $\tau$ is the fluid proper time, namely the proper time of an observer comoving with the fluid, and $\nabla_{\mu}$ the covariant derivative.
Of course, if the spacetime is flat, the covariant derivative is simply replaced by the standard derivative $\partial_{\mu}$.
The hydrodynamic velocity is timelike and has unit magnitude by definition
\begin{equation}\label{eqrelhydro01}
u^2\equiv u^{\mu}u_{\mu}=1
\end{equation}
and it is orthogonal to the acceleration field, which is spacelike
\begin{equation}
u^{\mu}A_{\mu}=0.
\end{equation}
For future purposes, it is also convenient to define the \textit{projection tensor} $\Delta_{\mu \nu}$, which projects tensors on the hypersurface orthogonal to the velocity field, as
\begin{equation}
\Delta_{\mu \nu}\equiv g_{\mu \nu}-u_{\mu}u_{\nu},
\end{equation}
where $g_{\mu \nu}$ is the metric tensor.
Sometimes it is also useful to decompose the covariant derivative along the velocity field and orthogonally to it, namely
\begin{equation}
\nabla_{\mu}\equiv u_{\mu}D+D_{\mu},
\end{equation}
where
\begin{equation}
D\equiv u^{\mu}\nabla_{\mu}=\frac{{\rm d}}{{\rm d} \tau},\qquad
D_{\mu}\equiv {\Delta^{\nu}}_{\mu}\nabla_{\nu}
\end{equation}
with $D$ often called the \textit{convective derivative}.
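The relations above can be verified numerically on a sample flow. The sketch below, in $1+1$ dimensions with metric ${\rm diag}(1,-1)$, uses the uniformly accelerating flow $u^{\mu}=(x,t)/\sqrt{x^2-t^2}$ (chosen purely for illustration) and checks the normalization $u\cdot u=1$, the orthogonality $u\cdot A=0$ and the projector properties:

```python
import math

# Numerical sketch (assumed sample flow) of the kinematic relations in
# 1+1 dimensions with metric g = diag(1, -1) and c = 1.
g = (1.0, -1.0)

def u(t, x):
    s = math.sqrt(x * x - t * t)
    return (x / s, t / s)   # Rindler-like accelerating flow, illustrative

t0, x0, h = 0.3, 2.0, 1e-6
u0 = u(t0, x0)

# A^mu = u^nu d_nu u^mu via central finite differences
du_dt = [(a - b) / (2 * h) for a, b in zip(u(t0 + h, x0), u(t0 - h, x0))]
du_dx = [(a - b) / (2 * h) for a, b in zip(u(t0, x0 + h), u(t0, x0 - h))]
A = [u0[0] * du_dt[m] + u0[1] * du_dx[m] for m in range(2)]

dot = lambda a, b: sum(g[m] * a[m] * b[m] for m in range(2))
print("u.u =", dot(u0, u0))  # normalization: +1
print("u.A =", dot(u0, A))   # orthogonality: 0

# Projector Delta^mu_nu = delta^mu_nu - u^mu u_nu: annihilates u, idempotent
Delta = [[(m == n) - u0[m] * g[n] * u0[n] for n in range(2)]
         for m in range(2)]
Du = [sum(Delta[m][n] * u0[n] for n in range(2)) for m in range(2)]
Delta2 = [[sum(Delta[m][k] * Delta[k][n] for k in range(2))
           for n in range(2)] for m in range(2)]
print("Delta u =", Du)       # should vanish
```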
\section{Ideal hydrodynamics}
\label{sec:relhydro_ideal_hydro}
In non-relativistic hydrodynamics, a \textit{perfect} or \textit{ideal fluid} is a fluid such that, at thermodynamic equilibrium, the internal forces across any given section are orthogonal to that section.
This could be rephrased by saying that the stress tensor $T^{ij}$ has vanishing shear stress, which in turn means that $T^{ij}$ is diagonal with all equal elements on the diagonal, i.e.\ $T^{ij}=-p\delta^{ij}$ for some eigenvalue $p$ with $\delta^{ij}$ the Kronecker delta.
Of course, in a relativistic theory we should consider the full energy-momentum tensor instead of the stress tensor alone.
Thus, in order to generalize this definition to the relativistic case, we could exploit the aforementioned known result stating that, if the energy-momentum tensor fulfills the weak energy condition, in its local rest frame its components are $T^{00}\ge 0$, $T^{0i}=0$ and some $T^{ij}$.
Therefore, we might think of defining an ideal relativistic fluid as a system whose energy-momentum tensor at thermodynamic equilibrium is diagonal and isotropic in its local rest frame, namely
\begin{equation}
T^{\mu \nu}=(\rho+p)u^{\mu}u^{\nu}-pg^{\mu \nu}=
\rho u^{\mu}u^{\nu}-p\Delta^{\mu \nu}.
\end{equation}
Here, $\rho$ and $p$ have the meaning of equilibrium energy density and equilibrium pressure respectively, while $g^{\mu \nu}$, being in the local rest frame, is simply the Minkowskian metric tensor.
Likewise, the charged current $j^{\mu}$ reads
\begin{equation}
j^{\mu}=nu^{\mu},
\end{equation}
where $n$ is the equilibrium charge density.
\textit{Ideal hydrodynamics} corresponds to adopting the thermodynamic equilibrium forms of the energy-momentum tensor and charged current also out of equilibrium, promoting the energy density, pressure, charge density and velocity field to slowly-varying functions of the spacetime point, that is
\begin{equation}
T^{\mu \nu}(x)=\left[\rho(x)+p(x)\right]u^{\mu}(x)u^{\nu}(x)-p(x)g^{\mu \nu}(x),
\end{equation}
\begin{equation}
j^{\mu}(x)=n(x)u^{\mu}(x).
\end{equation}
The above expressions were obtained in the local rest frame, but being fully covariant they generalize to any frame, so $g^{\mu \nu}$ is in fact a generic metric tensor.
In the following, the dependence on the spacetime point will be omitted for ease of notation, unless otherwise specified.
Note that, under this hypothesis, the Landau frame coincides with the Eckart frame, namely ${u_{\rm L}}^{\mu}={u_{\rm E}}^{\mu}\equiv u^{\mu}$, so in this sense there is no ambiguity in the frame choice in ideal hydrodynamics.
This feature will not be preserved in non-ideal hydrodynamics.
The hydrodynamic equations are simply the conservation laws of the energy-momentum tensor and the charged current
\begin{equation}\label{eqrelhydro02}
\nabla_{\mu}T^{\mu \nu}=0,
\end{equation}
\begin{equation}\label{eqrelhydro03}
\nabla_{\mu}j^{\mu}=0.
\end{equation}
They amount to a set of 5 equations in 6 unknowns: the energy density $\rho$, the pressure $p$, the charge density $n$ and the three independent components of the velocity field $u^{\mu}$ (recall that $u^{\mu}$ satisfies the normalization condition \eqref{eqrelhydro01}).
Thus, in order to solve the system, we need an extra relation.
Technically, any relation between two unknowns is fine, for instance between the energy density and the pressure.
However, we can as well assume that, although globally out of equilibrium in general, for sufficiently small fluid elements the system is at thermodynamic equilibrium locally in space and time.
In this hypothesis, we can define a temperature and a chemical potential for each fluid element, so the thermodynamic quantities $\rho$, $p$ and $n$ can be related to them by an \textit{equation of state} reducing the number of unknowns to 5.
The achievement of this configuration, called \textit{local thermodynamic equilibrium}, is therefore equivalent to assuming the existence of the temperature and chemical potential for each fluid element, $T=T(x)$ and $\mu=\mu(x)$, which, from a relativistic standpoint, are defined as those measured in the local rest frame of the fluid element.
Once $T$, $\mu$ and $u^{\mu}$ are given on some three-dimensional spacelike hypersurface, one can try to solve the ideal hydrodynamic equations.
The four energy-momentum conservation equations are usually decomposed into one energy conservation equation and three momentum conservation equations, obtained by projecting \eqref{eqrelhydro02} along the velocity field and orthogonally to it respectively.
Together with the charged current conservation, they read
\begin{subequations}
\begin{align}
u_{\nu}\nabla_{\mu}T^{\mu \nu}=&\nabla_{\mu}(\rho u^{\mu})+p\nabla_{\mu}u^{\mu}=0,\label{eqrelhydro04}\\
{\Delta^{\lambda}}_{\nu}\nabla_{\mu}T^{\mu \nu}=&(\rho+p)A^{\lambda}-{\Delta^{\lambda}}_{\nu}\nabla^{\nu}p=0,\label{eqrelhydro05}
\end{align}
\end{subequations}
\begin{equation}\label{eqrelhydro07}
\nabla_{\mu}j^{\mu}=u^{\mu}\nabla_{\mu}n+n\nabla_{\mu}u^{\mu}=0.
\end{equation}
In the non-relativistic limit, equations \eqref{eqrelhydro04} and \eqref{eqrelhydro05} reproduce the well-known continuity and Euler equations respectively
\begin{subequations}
\begin{align}
\nabla_{\mu}(\rho u^{\mu})+p\nabla_{\mu}u^{\mu}=0&\xrightarrow[\nabla_{\mu}\to \partial_{\mu}]{p\ll \rho c^2}\partial_t \rho+\nabla \cdot (\rho {\bf v})=0\\
(\rho+p)A^{\lambda}-{\Delta^{\lambda}}_{\nu}\nabla^{\nu}p=0&\xrightarrow[\nabla_{\mu}\to \partial_{\mu}]{p\ll \rho c^2}\rho {\bf a}+\nabla p=0
\end{align}
\end{subequations}
where $\rho$ is now the mass density, ${\bf a}$ is the spatial component of the acceleration field $A^{\mu}$ and $\nabla$ is the gradient with respect to ${\bf x}$.
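As a concrete application, for the boost-invariant flow considered at the beginning of this Chapter one has $\nabla_{\mu}u^{\mu}=1/\tau$, so the energy equation \eqref{eqrelhydro04} reduces to ${\rm d} \rho/{\rm d} \tau=-(\rho+p)/\tau$; assuming, purely for illustration, the conformal equation of state $p=\rho/3$, the analytic solution is $\rho \propto \tau^{-4/3}$. A minimal numerical sketch:

```python
# Sketch: for a boost-invariant flow nabla_mu u^mu = 1/tau, and the energy
# conservation equation reduces to d(rho)/d(tau) = -(rho + p)/tau.
# With the conformal equation of state p = rho/3 (assumed for illustration)
# the analytic solution is rho ~ tau^(-4/3).

def drho(tau, rho):
    p = rho / 3.0                 # conformal equation of state (assumed)
    return -(rho + p) / tau       # energy equation for boost-invariant flow

tau, rho, h, N = 1.0, 1.0, 1e-3, 1000   # integrate from tau = 1 to tau = 2

# classical fourth-order Runge-Kutta integration
for _ in range(N):
    k1 = drho(tau, rho)
    k2 = drho(tau + h / 2, rho + h * k1 / 2)
    k3 = drho(tau + h / 2, rho + h * k2 / 2)
    k4 = drho(tau + h, rho + h * k3)
    rho += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    tau += h

print(f"rho(tau=2) = {rho:.8f}")
print(f"analytic 2^(-4/3) = {2.0 ** (-4.0 / 3.0):.8f}")
```

The numerical solution reproduces the analytic power law to high accuracy.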
The next step is to consider the thermodynamics of ideal relativistic fluids.
The entropy $S$ of a system is constrained to never decrease in time by the second law of thermodynamics,
\begin{equation}\label{eqrelhydro18}
\frac{{\rm d} S}{{\rm d} \tau}\ge 0.
\end{equation}
In particular, irreversible processes are responsible for entropy production, while reversible ones do not produce any entropy.
It is also known that entropy is an extensive quantity, meaning that it scales with the volume of the system.
Now, as expressed in \eqref{eqrelhydro18}, the second law of thermodynamics is a global statement, meaning that it concerns the system in its entirety.
This is fine in a non-relativistic theory, but from a relativistic standpoint the meaningful statements ought to be local.
In order to embed the extensivity and the second law of thermodynamics in a covariant language, it is postulated that there exists a vector field $s^{\mu}$, called the \textit{entropy current}, such that its integral on some 3-dimensional spacelike hypersurface $\Sigma$ gives the entropy
\begin{equation}\label{eqrelhydro19}
S\equiv \int_{\Sigma}{\rm d} \Sigma_{\mu}\,s^{\mu}.
\end{equation}
Here, ${\rm d} \Sigma_{\mu}\equiv {\rm d} \Sigma \,n_{\mu}$ with ${\rm d} \Sigma$ and $n^{\mu}$ the measure on $\Sigma$ and the timelike vector field of magnitude $n^{\mu}n_{\mu}=1$ orthogonal to $\Sigma$ respectively.
Equation \eqref{eqrelhydro19} is in fact the extensivity property, as it expresses the entropy as a volume integral.
With this definition of the entropy current, the local version of the second law of thermodynamics for any spacelike hypersurface $\Sigma$ is
\begin{equation}
\nabla_{\mu}s^{\mu}\ge 0.
\end{equation}
Now the question is what the expression of the entropy current looks like and what constraints stem from the second law of thermodynamics.
As we mentioned, in ideal hydrodynamics we assume that the energy density, the pressure and the charge density have, point by point, the same type of dependence on the temperature and chemical potential, often referred to as the \textit{thermodynamic fields}, as they do at thermodynamic equilibrium.
Thus, the equations of state read
\begin{subequations}
\begin{align}
\rho=&\rho(x)=\rho_{\rm eq}(T(x),\mu(x)),\\
p=&p(x)=p_{\rm eq}(T(x),\mu(x)),\\
n=&n(x)=n_{\rm eq}(T(x),\mu(x)).
\end{align}
\end{subequations}
This means that we can use our knowledge of equilibrium thermodynamics to study non-equilibrium thermodynamics.
For instance, at thermodynamic equilibrium we know that
\begin{equation}\label{eqrelhydro06}
Ts=\rho+p-\mu n,
\end{equation}
where $s$ is the entropy density, namely the entropy per unit volume.
In particular, given the above equations of state, equation \eqref{eqrelhydro06} becomes in fact a definition of the entropy density out of equilibrium.
Moreover, the first law of thermodynamics must hold in local form
\begin{equation}\label{eqrelhydro09}
T\,{\rm d} s={\rm d} \rho-\mu \,{\rm d} n,
\end{equation}
which, combined with the differential of \eqref{eqrelhydro06}, gives the Gibbs-Duhem equation
\begin{equation}
{\rm d} p=s\,{\rm d} T+n\,{\rm d} \mu,
\end{equation}
hence
\begin{equation}
s=\left(\frac{\partial p}{\partial T}\right)_{\mu},\qquad
n=\left(\frac{\partial p}{\partial \mu}\right)_T,
\end{equation}
the symbols meaning that, when differentiating with respect to one thermodynamic field, the other one is kept fixed.
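These relations are easy to check numerically. The sketch below uses an illustrative conformal equation of state $p=aT^4+bT^2\mu^2+c\mu^4$ (a free-gas-like form, assumed here only as an example), computes $s$ and $n$ by finite differences and verifies that the Euler relation \eqref{eqrelhydro06} yields $\rho=3p$:

```python
# Numerical check (sketch) of s = (dp/dT)_mu, n = (dp/dmu)_T and of the
# Euler relation T s = rho + p - mu n, for an illustrative conformal
# equation of state p = a T^4 + b T^2 mu^2 + c mu^4 (assumed, free-gas-like).
a, b, c = 1.0, 0.5, 0.1

def p(T, mu):
    return a * T ** 4 + b * T ** 2 * mu ** 2 + c * mu ** 4

T, mu, h = 1.3, 0.4, 1e-5

# central finite differences for the partial derivatives of the pressure
s = (p(T + h, mu) - p(T - h, mu)) / (2 * h)   # entropy density
n = (p(T, mu + h) - p(T, mu - h)) / (2 * h)   # charge density

# the Euler relation then defines the energy density
rho = T * s + mu * n - p(T, mu)

print(f"s = {s:.6f}  (analytic {4*a*T**3 + 2*b*T*mu**2:.6f})")
print(f"n = {n:.6f}  (analytic {2*b*T**2*mu + 4*c*mu**3:.6f})")
print(f"rho = {rho:.6f}, 3p = {3*p(T, mu):.6f}")
```

For this quartic pressure the energy density comes out exactly conformal, $\rho=3p$, as expected.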
Thus
\begin{equation}
\nabla_{\mu}p=\left(\frac{\partial p}{\partial T}\right)_{\mu}\nabla_{\mu}T+\left(\frac{\partial p}{\partial \mu}\right)_T\nabla_{\mu}\mu=s\nabla_{\mu}T+n\nabla_{\mu}\mu,
\end{equation}
and projecting along the velocity field
\begin{equation}
u^{\mu}\nabla_{\mu}p=su^{\mu}\nabla_{\mu}T+nu^{\mu}\nabla_{\mu}\mu.
\end{equation}
Together with the derivative of \eqref{eqrelhydro06} along the flow, this implies
\begin{equation}\label{eqrelhydro10}
Tu^{\mu}\nabla_{\mu}s=u^{\mu}\nabla_{\mu}\rho-\mu u^{\mu}\nabla_{\mu}n.
\end{equation}
The first term on the right-hand side can be worked out by using the equation of motion \eqref{eqrelhydro04}, while for the second one we use \eqref{eqrelhydro07}
\begin{equation}
u^{\mu}\nabla_{\mu}\rho=-(\rho+p)\nabla_{\mu}u^{\mu},\qquad
u^{\mu}\nabla_{\mu}n=-n\nabla_{\mu}u^{\mu},
\end{equation}
hence
\begin{equation}
Tu^{\mu}\nabla_{\mu}s=-(\rho+p-\mu n)\nabla_{\mu}u^{\mu}=-Ts\nabla_{\mu}u^{\mu},
\end{equation}
where in the last equality we used \eqref{eqrelhydro06}.
This is in fact a conservation equation
\begin{equation}\label{eqrelhydro08}
\nabla_{\mu}s^{\mu}=0,
\end{equation}
where we defined the entropy current of an ideal relativistic fluid as
\begin{equation}
s^{\mu}\equiv su^{\mu}.
\end{equation}
Equation \eqref{eqrelhydro08} tells us an important fact: an ideal fluid does not produce entropy, its entropy is constant in time.
As we will see, it will not be the same in general for non-ideal fluids.
In fact, the entropy current of a generic fluid will have more components besides the ideal one $su^{\mu}$; those causing the entropy production rate $\nabla_{\mu}s^{\mu}$ to be greater than zero are called \textit{dissipations}.
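The conservation of the ideal entropy current can be made concrete for the boost-invariant flow of this Chapter, where $\nabla_{\mu}s^{\mu}={\rm d} s/{\rm d} \tau+s/\tau$, so that $s\tau$ is constant along the flow. A sketch, assuming for illustration a conformal fluid at zero chemical potential:

```python
# Sketch: for a boost-invariant flow, the divergence of the ideal entropy
# current reduces to nabla_mu s^mu = ds/dtau + s/tau, so conservation
# implies s(tau)*tau = const.  Assumptions for illustration only: conformal
# fluid at zero chemical potential, rho ~ tau^(-4/3), s ~ rho^(3/4) ~ 1/tau.

def s(tau):
    rho = tau ** (-4.0 / 3.0)   # Bjorken-like energy density (tau0 = rho0 = 1)
    return rho ** 0.75          # entropy density, up to a constant factor

h = 1e-6
for tau in (1.0, 2.0, 5.0):
    ds_dtau = (s(tau + h) - s(tau - h)) / (2 * h)
    div_s = ds_dtau + s(tau) / tau        # entropy production rate
    print(f"tau = {tau}: nabla.s = {div_s:+.2e}, s*tau = {s(tau) * tau:.6f}")
```

The production rate vanishes to numerical precision and $s\tau$ stays constant, as the ideal-fluid result requires.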
\section{Dissipative hydrodynamics}
\label{sec:relhydro_dissipative_hydro}
Ideal hydrodynamics is obtained under the simplest hypothesis that, out of equilibrium, the energy-momentum tensor and the charged current have the same tensor structure as they do at thermodynamic equilibrium.
Indeed this is a strong assumption, which clearly becomes a poor approximation when the microscopic time scales are comparable to the macroscopic ones, namely when the condition of local thermodynamic equilibrium breaks down.
When this occurs, the ideal fluid description has to be extended by including dissipative terms,
that is by considering \textit{dissipative} or \textit{non-ideal fluids}.
The first problem we encounter when dealing with non-ideal fluids is the definition of the velocity field.
As we mentioned, for an ideal fluid the Landau and the Eckart frames coincide, making the choice of the velocity field unambiguous.
For a non-ideal fluid, this is known to no longer hold true, so one has to make a frame choice.
Both the frames have their own advantages.
For instance, the Eckart frame is more intuitive, being a quite straightforward generalization of the non-relativistic case, with the continuity equation taking on a simple form; however, it is not well-defined for systems with no net charge.
On the other hand, the Landau frame can be convenient as it simplifies the expression of the energy-momentum tensor; the price to pay is that the definition of the velocity field is implicit.
For more details on the frame choice in dissipative hydrodynamics, see \cite{Tsumura:2007ji}.
Whatever the frame we choose, in order to capture the effects of dissipations we have to go beyond the assumption of ideal hydrodynamics and allow the energy-momentum tensor and the charged current to contain other terms in addition to the ideal ones.
With this in mind, we can write in general
\begin{equation}
T^{\mu \nu}=\rho u^{\mu}u^{\nu}-p\Delta^{\mu \nu}+\delta T^{\mu \nu}
\end{equation}
\begin{equation}
j^{\mu}=nu^{\mu}+\delta j^{\mu}
\end{equation}
where $\delta T^{\mu \nu}$ and $\delta j^{\mu}$ are the dissipative terms.
As such, they must vanish at thermodynamic equilibrium and depend on the derivatives of the thermodynamic fields.
We demand the energy and charge densities to be left unchanged by the dissipative terms, so we enforce the so-called \textit{Landau matching conditions}
\begin{equation}
\delta T^{\mu \nu}u_{\mu}u_{\nu}=0,\qquad
\delta j^{\mu}u_{\mu}=0.
\end{equation}
These allow us to write down the following irreducible tensor decompositions
\begin{equation}\label{eqrelhydro11}
T^{\mu \nu}=\rho u^{\mu}u^{\nu}-(p+\Pi)\Delta^{\mu \nu}+q^{\mu}u^{\nu}+q^{\nu}u^{\mu}+\Pi^{\mu \nu}
\end{equation}
\begin{equation}\label{eqrelhydro12}
j^{\mu}=nu^{\mu}+v^{\mu}.
\end{equation}
Here, $q^{\mu}$, $v^{\mu}$ and $\Pi^{\mu \nu}$ are tensor fields orthogonal to $u^{\mu}$, $\rho$ and $n$ are still the energy and charge densities, while $p$ is the equilibrium component of the pressure, $p=p_{\rm eq}$; since $p+\Pi$ is the total pressure, $\Pi$ represents the \textit{non-equilibrium pressure}.
In formulae
\begin{subequations}
\begin{align}
\rho=&T^{\mu \nu}u_{\mu}u_{\nu}\\
p+\Pi=&-\frac{1}{3}T^{\mu \nu}\Delta_{\mu \nu}\\
q^{\mu}=&T^{\alpha \beta}{\Delta^{\mu}}_{\alpha}u_{\beta}\\
\Pi^{\mu \nu}=&\left[\frac{1}{2}\left({\Delta^{\mu}}_{\alpha}{\Delta^{\nu}}_{\beta}+{\Delta^{\nu}}_{\alpha}{\Delta^{\mu}}_{\beta}\right)-\frac{1}{3}\Delta^{\mu \nu}\Delta_{\alpha \beta}\right]T^{\alpha \beta}\\
n=&j^{\mu}u_{\mu}\\
v^{\mu}=&j^{\nu}{\Delta^{\mu}}_{\nu}.
\end{align}
\end{subequations}
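The projection formulas above can be tested with a numerical round trip: build $T^{\mu \nu}$ and $j^{\mu}$ from assumed sample values of the decomposition in the local rest frame, then recover those values with the projections. A sketch (note that, with signature ${\rm diag}(1,-1,-1,-1)$, $\Delta^{\mu \nu}\Delta_{\mu \nu}=3$, so the total pressure is obtained with an overall minus sign):

```python
g = (1.0, -1.0, -1.0, -1.0)   # metric signature diag(1,-1,-1,-1)
u = (1.0, 0.0, 0.0, 0.0)      # local rest frame velocity
idx = range(4)

# assumed sample values of the decomposition (purely illustrative)
rho, ptot, n_chg = 5.0, 2.0, 0.7
q  = (0.0, 0.3, -0.1, 0.2)     # heat flow, orthogonal to u
v  = (0.0, 0.05, 0.0, -0.02)   # diffusion flow, orthogonal to u
Pi = [[0, 0, 0, 0], [0, 0.2, 0.1, 0],      # traceless, symmetric,
      [0, 0.1, -0.3, 0], [0, 0, 0, 0.1]]   # orthogonal to u

u_lo = [g[m] * u[m] for m in idx]                       # u_mu
Delta = [[g[m] * (m == n) - u[m] * u[n] for n in idx]   # Delta^{mu nu}
         for m in idx]

# build T^{mu nu} and j^mu from the decomposition
T = [[rho * u[m] * u[n] - ptot * Delta[m][n]
      + q[m] * u[n] + q[n] * u[m] + Pi[m][n] for n in idx] for m in idx]
j = [n_chg * u[m] + v[m] for m in idx]

# recover the pieces with the projection formulas
rho_rec = sum(T[m][n] * u_lo[m] * u_lo[n] for m in idx for n in idx)
ptot_rec = -sum(T[m][n] * g[m] * g[n] * Delta[m][n]
                for m in idx for n in idx) / 3.0
n_rec = sum(j[m] * u_lo[m] for m in idx)
q_rec = [sum(((m == a) - u[m] * u_lo[a]) * T[a][b] * u_lo[b]
             for a in idx for b in idx) for m in idx]

print(rho_rec, ptot_rec, n_rec, q_rec)  # should reproduce rho, ptot, n_chg, q
```

The recovery of $v^{\mu}$ and $\Pi^{\mu \nu}$ works in the same way with the remaining projections.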
Since dissipations are irreversible processes, a non-ideal fluid will produce entropy; in the limit of vanishing dissipations, however, the ideal-hydrodynamics result of vanishing entropy production rate ought to be recovered.
As a consequence, we expect the entropy current $s^{\mu}$ to have two components: one is the ideal term $su^{\mu}$ along the velocity field, while the other is orthogonal to that and depends on the dissipations.
Thermodynamic equilibrium is characterized by the absence of transport phenomena, namely by having $\Pi$, $q^{\mu}$ and $\Pi^{\mu \nu}$ all equal to zero, therefore we might as well say that, not surprisingly, the thermodynamic equilibrium state corresponds to a vanishing entropy production rate.
In the following, we will see how two approaches to dissipative hydrodynamics differ in prescribing the dissipative component of the entropy current while maintaining the same properties for the thermodynamic equilibrium state.
\subsection{Navier-Stokes theory}
This formulation of dissipative hydrodynamics was first proposed by Eckart \cite{Eckart:1940} and then slightly modified by Landau and Lifshitz \cite{landau2013fluid}.
It represents the simplest relativistic generalization of Navier-Stokes and Fourier equations of non-relativistic dissipative hydrodynamics, and it is usually referred to as \textit{classical irreversible thermodynamics}.
The fundamental feature of this theory is that the dissipative terms of the entropy current depend linearly on the dissipations $\Pi$, $q^{\mu}$ and $\Pi^{\mu \nu}$, therefore the entropy current contains up to the first derivatives of the thermodynamic parameters.
For this reason, classical irreversible thermodynamics is called a \textit{first-order theory}.
If we limit ourselves to first derivatives, meaning that deviations from thermodynamic equilibrium are small, and we still identify $s=s^{\mu}u_{\mu}$ with the entropy density, the thermodynamic relations \eqref{eqrelhydro06} and \eqref{eqrelhydro09} should keep holding.
Note that the total pressure is now $p+\Pi$, while in \eqref{eqrelhydro06} the pressure $p$ keeps its equilibrium value; the non-equilibrium pressure $\Pi$ will then appear explicitly in the entropy production rate.
We have shown already that with \eqref{eqrelhydro06} and \eqref{eqrelhydro09} satisfied at each point, \eqref{eqrelhydro10} is obtained.
The first term on the right-hand side is worked out from the projection of the equations of motion $\nabla_{\mu}T^{\mu \nu}=0$ along the flow; in particular, we note that
\begin{equation}
u_{\nu}\nabla_{\mu}T^{\mu \nu}=0
\qquad \Rightarrow \qquad
\nabla_{\mu}(T^{\mu \nu}u_{\nu})-T^{\mu \nu}\nabla_{\mu}u_{\nu}=0.
\end{equation}
By using the expression \eqref{eqrelhydro11} of the energy-momentum tensor, we have
\begin{equation}
u^{\mu}\nabla_{\mu}\rho=-\rho \nabla_{\mu}u^{\mu}-\nabla_{\mu}q^{\mu}-(p+\Pi)\nabla_{\mu}u^{\mu}+q^{\mu}A_{\mu}+\Pi^{\mu \nu}\nabla_{\mu}u_{\nu}.
\end{equation}
For the second term, we consider the equation of motion $\nabla_{\mu}j^{\mu}=0$.
Using \eqref{eqrelhydro12} we find
\begin{equation}
u^{\mu}\nabla_{\mu}n=-n\nabla_{\mu}u^{\mu}-\nabla_{\mu}v^{\mu}.
\end{equation}
Plugging these two equations into \eqref{eqrelhydro10}, we get
\begin{equation}
\nabla_{\mu}(su^{\mu})=-\frac{1}{T}\nabla_{\mu}q^{\mu}-\frac{\Pi}{T}\nabla_{\mu}u^{\mu}+\frac{q^{\mu}}{T}A_{\mu}+\frac{\Pi^{\mu \nu}}{T}\nabla_{\mu}u_{\nu}+\frac{\mu}{T}\nabla_{\mu}v^{\mu}.
\end{equation}
Finally, by defining $\zeta \equiv \mu/T$, we have
\begin{equation}\label{eqrelhydro13}
\nabla_{\mu}\left(su^{\mu}+\frac{q^{\mu}}{T}-\zeta v^{\mu}\right)=-\frac{q^{\mu}}{T^2}\nabla_{\mu}T+\frac{q^{\mu}}{T}A_{\mu}-\frac{\Pi}{T}\nabla_{\mu}u^{\mu}+\frac{\Pi^{\mu \nu}}{T}\nabla_{\mu}u_{\nu}-v^{\mu}\nabla_{\mu}\zeta.
\end{equation}
The non-relativistic limit of this equation is the known equation for the entropy production rate of non-relativistic dissipative hydrodynamics, provided that $q^{\mu}$ and $v^{\mu}$ are interpreted as the \textit{heat flow} and the \textit{current flow} respectively.
Moreover, $\Pi^{\mu \nu}$ is called the \textit{viscous stress tensor}.
Thus, we are persuaded to identify the entropy current of dissipative hydrodynamics with
\begin{equation}\label{eqrelhydro16}
s^{\mu}=su^{\mu}+\frac{q^{\mu}}{T}-\zeta v^{\mu}.
\end{equation}
So far, we do not have any information on the expression of the dissipations.
This is obtained by enforcing the second law of thermodynamics, namely by demanding \eqref{eqrelhydro13} to be non-negative.
Since the dissipative terms are independent of each other, we will have a constraint for each of them.
The following expressions of the dissipations are such that the second law is fulfilled for any value of the temperature and velocity field
\begin{subequations}
\begin{align}
q^{\mu}\equiv&\kappa \Delta^{\mu \nu}(\nabla_{\nu}T-TA_{\nu})\label{eqrelhydro15a}\\
\Pi^{\mu \nu}\equiv&2\eta \sigma^{\mu \nu}\label{eqrelhydro15b}\\
\Pi \equiv&-\xi \nabla_{\mu}u^{\mu}\label{eqrelhydro15c}\\
v^{\mu}\equiv&{\cal D}\Delta^{\mu \nu}\nabla_{\nu}\zeta \label{eqrelhydro15d}
\end{align}
\end{subequations}
where $\sigma^{\mu \nu}$ is the \textit{shear tensor}
\begin{equation}\label{eqrelhydro21}
\begin{split}
\sigma_{\mu \nu}\equiv &
\frac{1}{2}\left({\Delta^{\alpha}}_{\mu}{\Delta^{\beta}} _{\nu}+{\Delta^{\alpha}}_{\nu}{\Delta^{\beta}}_{\mu}-\frac{2}{3}\Delta_{\mu \nu}\Delta^{\alpha \beta}\right)\nabla_{\alpha}u_{\beta}=\\
=&\nabla_{(\mu}u_{\nu)}-A_{(\mu}u_{\nu)}-\frac{1}{3}\Delta_{\mu \nu}\nabla_{\lambda}u^{\lambda},
\end{split}
\end{equation}
in fact, the entropy production rate
\begin{equation}
\nabla_{\mu}s^{\mu}=-\frac{q^{\mu}q_{\mu}}{\kappa T^2}+\frac{\Pi^2}{\xi T}+\frac{\Pi^{\mu \nu}\Pi_{\mu \nu}}{2\eta T}-\frac{v^{\mu}v_{\mu}}{\cal D}
\end{equation}
is non-negative provided that
\begin{equation}
\kappa \ge 0,\qquad \xi \ge 0,\qquad \eta \ge 0,\qquad {\cal D}\ge 0.
\end{equation}
The parameters $\kappa$, $\xi$, $\eta$ and ${\cal D}$ are called the \textit{transport coefficients}; in particular, $\kappa$ is the \textit{thermal conductivity}, $\xi$ the \textit{bulk viscosity}\footnote{The bulk viscosity is usually indicated as $\zeta$, but throughout this work and the papers relevant for it, $\zeta$ always stands for the ratio between chemical potential and temperature $\zeta=\mu/T$. We choose to ``sacrifice'' the traditional nomenclature because we will often deal with $\mu/T$, but never again with the bulk viscosity.}, $\eta$ the \textit{shear viscosity} and ${\cal D}$ the \textit{diffusion coefficient}.
Equations \eqref{eqrelhydro15a}--\eqref{eqrelhydro15d}, which express the dissipative fluxes in terms of the derivatives of the thermodynamic parameters as a consequence of the second law of thermodynamics, are called the \textit{constitutive equations}.
Together with the equations of motion \eqref{eqrelhydro02} and \eqref{eqrelhydro03}, they close the set of dissipative hydrodynamics equations.
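The non-negativity of each quadratic term in the entropy production rate can be checked explicitly. With the signature implied by $u_{\mu}u^{\mu}=1$, the heat flux $q^{\mu}$ is spacelike, so $q^{\mu}q_{\mu}<0$ and the corresponding term enters with a minus sign, while the shear contribution $\Pi^{\mu \nu}\Pi_{\mu \nu}$ is positive. A minimal numerical sketch (illustrative random values, not tied to any solution of the equations):

```python
# Quick numerical check with the signature (+,-,-,-): for a spacelike vector q
# (orthogonal to u) the invariant q^mu q_mu is negative, so the heat term
# -q^mu q_mu / (kappa T^2) is non-negative; the symmetric traceless spatial
# tensor Pi has a positive square.  Field values are random and illustrative.
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])       # Minkowski metric
rng = np.random.default_rng(0)

q = np.array([0.0, *rng.normal(size=3)])   # spacelike in the rest frame of u
qq = q @ g @ q
assert qq < 0                              # q^mu q_mu < 0 for spacelike q
assert -qq > 0                             # the heat term enters with a minus sign

Pi = rng.normal(size=(3, 3))
Pi = (Pi + Pi.T) / 2
Pi -= np.eye(3) * np.trace(Pi) / 3         # symmetric traceless spatial tensor
PiPi = np.einsum('ij,ij->', Pi, Pi)        # = Pi^{mu nu} Pi_{mu nu} (two sign flips)
assert PiPi > 0
```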
We will not go into the calculations, as they are quite long, but it can be shown that this set of equations reduces to the well-known Navier-Stokes ones in the non-relativistic limit.
The constitutive equations can be used to further characterize the properties of the thermodynamic equilibrium state.
At thermodynamic equilibrium the dissipations must vanish for any value of the transport coefficients, so equations \eqref{eqrelhydro15a}--\eqref{eqrelhydro15d} imply
\begin{subequations}
\begin{align}
\nabla_{\mu}T-TA_{\mu}=&0\label{eqrelhydro20a}\\
\sigma^{\mu \nu}=&0\label{eqrelhydro20b}\\
\nabla_{\mu}u^{\mu}=&0\label{eqrelhydro20c}\\
\nabla_{\mu}\zeta=&0\label{eqrelhydro20d}.
\end{align}
\end{subequations}
By plugging the definition \eqref{eqrelhydro21} of the shear tensor into the condition \eqref{eqrelhydro20b}, combining it with \eqref{eqrelhydro20c} and dividing everything by the temperature, we obtain
\begin{equation}
\begin{split}
0=&
\frac{\nabla_{\mu}u_{\nu}}{T}+\frac{\nabla_{\nu}u_{\mu}}{T}-\frac{A_{\mu}u_{\nu}}{T}-\frac{A_{\nu}u_{\mu}}{T}=\\
=&\nabla_{\mu}\frac{u_{\nu}}{T}+\frac{u_{\nu}}{T^2}\nabla_{\mu}T+\nabla_{\nu}\frac{u_{\mu}}{T}+\frac{u_{\mu}}{T^2}\nabla_{\nu}T-\frac{A_{\mu}u_{\nu}}{T}-\frac{A_{\nu}u_{\mu}}{T}.
\end{split}
\end{equation}
Finally, by using \eqref{eqrelhydro20a} we are simply left with
\begin{equation}\label{eqrelhydro22}
\nabla_{\mu}\frac{u_{\nu}}{T}+\nabla_{\nu}\frac{u_{\mu}}{T}=0.
\end{equation}
Equations \eqref{eqrelhydro22} and \eqref{eqrelhydro20d} represent the conditions that the thermodynamic fields must fulfill in order to have thermodynamic equilibrium, namely $u^{\mu}/T$ must be a Killing vector field and $\zeta=\mu/T$ must be constant.
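In Minkowski spacetime the Killing fields are exactly the linear fields $\beta_{\mu}=b_{\mu}+\varpi_{\mu \nu}x^{\nu}$ with constant $b_{\mu}$ and constant antisymmetric $\varpi_{\mu \nu}$ (the thermal vorticity). The following symbolic sketch (a check in Cartesian coordinates, where covariant derivatives reduce to partial ones) verifies that any such field satisfies \eqref{eqrelhydro22}:

```python
# Symbolic check (sympy): a field beta_mu = b_mu + varpi_{mu nu} x^nu with
# constant b_mu and antisymmetric varpi solves the Killing equation
# d_mu beta_nu + d_nu beta_mu = 0 in flat spacetime.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = [t, x, y, z]

b = sp.symbols('b0:4')                                   # constant translation part
w = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'w{i}{j}'))
w = (w - w.T) / 2                                        # antisymmetrize: thermal vorticity

beta = [b[mu] + sum(w[mu, nu] * X[nu] for nu in range(4)) for mu in range(4)]

# Killing equation, component by component
killing = [[sp.simplify(sp.diff(beta[m], X[n]) + sp.diff(beta[n], X[m]))
            for n in range(4)] for m in range(4)]
assert all(k == 0 for row in killing for k in row)
```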
This statement is of fundamental importance, and will be discussed more thoroughly in Subsection \ref{sec:LE_to_GE_and_NE} in the next Chapter.
As it pertains to the thermodynamic equilibrium state, it will hold true even in the framework of extended irreversible thermodynamics that we are soon going to introduce.
\newline
Although classical irreversible thermodynamics includes the dissipations in such a way as to ensure the second law of thermodynamics and reproduces the known equations in the non-relativistic limit, it also has some undesirable features.
In particular, it is unstable under perturbations and it allows for the propagation of information at infinite speed, thus breaking causality \cite{Hiscock:1983zz, Hiscock:1985zz, Hiscock:1987zz}.
Non-causality, in particular, is due to the algebraic nature of the constitutive equations \eqref{eqrelhydro15a}--\eqref{eqrelhydro15d}, which makes \textit{parabolic} equations emerge and, as a consequence, the thermodynamic fluxes react instantaneously to the gradients of the thermodynamic fields.
Actually, this feature is present in the non-relativistic Navier-Stokes theory as well, where the Fourier law for the heat flux leads to a parabolic diffusion equation for the temperature.
This is not a conceptual problem \textit{per se} in a non-relativistic theory, though a big one in a relativistic context.
A recent discussion on this subject can be found in \cite{Hoult:2020eho}, where an interpretation and a solution different from the traditional one is presented.
In order to convince ourselves of the causality breaking, we can study the propagation of small perturbations.
For the sake of simplicity, let us consider a fluid initially at thermodynamic equilibrium and at rest in a flat spacetime, and suppose that shear viscosity is the only dissipation present \cite{Romatschke:2009im}.
To make this even simpler, we take a perturbation of the velocity field along the $x$ direction, independent of $y$ and $z$
\begin{equation}
u^{\mu}={u_0}^{\mu}+\delta u^{\mu}(t,x),
\end{equation}
where the initial velocity field is ${u_0}^{\mu}=(1,{\bf 0})$, since the system is initially at rest.
The density and pressure will change in response to this perturbation
\begin{equation}
\rho=\rho_0+\delta \rho(t,x),\qquad
p=p_0+\delta p(t,x),
\end{equation}
where $\rho_0$ and $p_0$ are their initial values.
In this setup we have the hydrodynamic equation
\begin{equation}
{\Delta^{\alpha}}_{\nu}\nabla_{\mu}T^{\mu \nu}=0
\qquad \longrightarrow \qquad
(\rho+p)A^{\alpha}-{\Delta^{\alpha}}_{\nu}\partial^{\nu}p+{\Delta^{\alpha}}_{\nu}\partial_{\mu}\Pi^{\mu \nu}=0
\end{equation}
and after some algebra we obtain the following diffusive equation at first order in the perturbations for the $y$ component $\delta u^y$
\begin{equation}
(\rho_0+p_0)\frac{\partial \delta u^y(t,x)}{\partial t}-\eta \frac{\partial^2 \delta u^y(t,x)}{\partial x^2}=0.
\end{equation}
This is in fact a parabolic equation.
As is customary in perturbative analysis, we assume that the perturbation has a simple harmonic behaviour of the type
\begin{equation}
\delta u^y(t,x)\propto {\rm e}^{-\omega t+ikx},
\end{equation}
which leads to the quadratic dispersion relation
\begin{equation}
\omega=\frac{\eta}{\rho_0+p_0}k^2.
\end{equation}
We can now estimate the velocity of propagation of the mode with wavenumber $k$
\begin{equation}
\frac{{\rm d} \omega}{{\rm d} k}=\frac{2\eta}{\rho_0+p_0}k.
\end{equation}
Clearly, there exists a critical value $k_{\rm crit}$ for the wavenumber such that this velocity equals the speed of light and exceeds it for greater values, thus breaking causality.
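A minimal numerical sketch of this statement (arbitrary illustrative values for $\eta$, $\rho_0$ and $p_0$, in units with $c=1$):

```python
# For the diffusive dispersion relation omega = eta k^2 / (rho0 + p0), the
# group velocity d(omega)/dk grows linearly with k and exceeds the speed of
# light beyond k_crit = (rho0 + p0) / (2 eta).  Numbers are illustrative only.
eta = 0.5            # shear viscosity (arbitrary units)
rho0, p0 = 3.0, 1.0  # background energy density and pressure

def group_velocity(k):
    return 2.0 * eta * k / (rho0 + p0)

k_crit = (rho0 + p0) / (2.0 * eta)
assert abs(group_velocity(k_crit) - 1.0) < 1e-12  # reaches c exactly at k_crit
assert group_velocity(2.0 * k_crit) > 1.0         # superluminal beyond it
```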
While this can be acceptable within a non-relativistic description, it is clearly unsatisfactory within a relativistic theory in which all signals must be contained in the light cone.
In summary: the ultimate reason for the non-causality of the Navier-Stokes theory is the parabolicity of the equations, which, in fact, turns out to be a general feature of first-order theories.
In order to avoid it and restore causality, we need \textit{hyperbolic} instead of parabolic equations.
The traditional solution to this problem is the replacement of a first-order theory with a second-order one.
\subsection{Israel-Stewart theory}
Several approaches have been developed to overcome the causality breaking and instability of classical irreversible thermodynamics.
The basic idea is to extend the space of variables by considering the dissipations as conserved variables of an ideal fluid, which makes it possible to restore causality under some conditions.
The resulting conservation equations describe the evolution of these new ``extended dissipations'', and turn out to be hyperbolic.
The price to pay is that the theory is now mathematically more complicated, as it involves a greater number of variables and parameters, and the physical meaning is not always crystal clear.
This class of theories generally goes under the name of \textit{extended irreversible thermodynamics} because of this extension of the space of variables.
A non-relativistic extended theory was first proposed by M\"uller \cite{Muller:1967zza} and later generalised to the relativistic framework by Israel \cite{Israel:1976tn} and Stewart \cite{Israel:1979wp}.
Here we just touch upon the main qualitative features without going into technical calculations.
Supported by evidence from relativistic kinetic theory, Israel and Stewart argued that the entropy current in equation \eqref{eqrelhydro16} is not the most general one, but is in fact the first-order truncation of a more elaborate expression that also contains terms quadratic in the dissipation, which is therefore of second-order in the derivatives of the thermodynamic fields.
In the Eckart frame, for simplicity, it reads
\begin{equation}
s^{\mu}=su^{\mu}+\frac{q^{\mu}}{T}-\left(\beta_0\Pi^2-\beta_1q^{\nu}q_{\nu}+\beta_2\Pi^{\alpha \beta}\Pi_{\alpha \beta}\right)\frac{u^{\mu}}{2T}-\alpha_0\frac{\Pi q^{\mu}}{T}+\alpha_1\frac{\Pi^{\mu \nu}q_{\nu}}{T}.
\end{equation}
Here, the thermodynamic coefficients $\alpha_0$ and $\alpha_1$ control the couplings between viscosity and heat fluxes, while $\beta_0$, $\beta_1$ and $\beta_2$ govern the scalar, vector and tensor contributions to the entropy density respectively.
As we can see, both the components of the entropy current along the velocity field and orthogonal to it are modified with respect to their counterparts in the Navier-Stokes theory.
As in the Navier-Stokes theory, the simplest way to enforce the second law of thermodynamics is to impose linear relations between the dissipative fluxes and the corresponding ``extended thermodynamic forces''.
In this case, however, since the dissipations are regarded as dynamical variables, the extended forces contain derivatives of their respective fluxes along the velocity field.
The resulting constitutive equations replacing \eqref{eqrelhydro15a}--\eqref{eqrelhydro15d}, which are in fact the equations of motion of the dissipations, read \cite{Maartens:1996vi}
\begin{subequations}
\begin{align}
\tau_{\Pi}D\Pi+\Pi=&\xi \nabla_{\mu}u^{\mu}-\frac{1}{2}T\xi \nabla_{\mu}\left(\frac{\tau_{\Pi}}{\xi T}u^{\mu}\right)\Pi+\tau_0{\Delta^{\nu}}_{\mu}\nabla_{\nu}q^{\mu}\label{eqrelhydro17a}\\
\tau_qDq_{\mu}+q_{\mu}=&\kappa \left({\Delta^{\nu}}_{\mu}\nabla_{\nu}T-TA_{\mu}\right)+\frac{1}{2}T^2\kappa \nabla_{\nu}\left(\frac{\tau_q}{\kappa T^2}u^{\nu}\right)q_{\mu}\nonumber \\
&-\tau_0{\Delta^{\nu}}_{\mu}\nabla_{\nu}\Pi-\tau_2{\Delta^{\lambda}}_{\mu}\nabla_{\nu}{\Pi^{\nu}}_{\lambda} \label{eqrelhydro17b} \\
\tau_{\pi}D\Pi_{\mu \nu}+\Pi_{\mu \nu}=&2\eta \sigma_{\mu \nu}-\frac{1}{2}T\eta \nabla_{\lambda}\left(\frac{\tau_{\pi}}{\eta T}u^{\lambda}\right)\Pi_{\mu \nu}\nonumber \\
&+\frac{1}{2}\tau_1\left(\Delta_{\mu \alpha}\Delta_{\nu \beta}+\Delta_{\nu \alpha}\Delta_{\mu \beta}-\frac{2}{3}\Delta_{\mu \nu}\Delta_{\alpha \beta}\right)\nabla^{\alpha}q^{\beta}.\label{eqrelhydro17c}
\end{align}
\end{subequations}
Here, $\tau_{\Pi}$, $\tau_q$, $\tau_{\pi}$, $\tau_0$, $\tau_1$ and $\tau_2$ are new transport coefficients related to $\xi$, $q^{\mu}$ and $\Pi^{\mu \nu}$ by
\begin{equation}
\begin{split}
\tau_{\Pi}\equiv &\xi \beta_0,\qquad
\tau_q\equiv \kappa T^2\beta_1,\qquad
\tau_{\pi}\equiv \eta \beta_2,\\
\tau_0\equiv &\xi \alpha_0,\qquad
\tau_1\equiv \kappa T^2\alpha_1,\qquad
\tau_2\equiv 2\eta \alpha_1.
\end{split}
\end{equation}
It can be shown that they measure the time scales over which the system evolves to a new equilibrium, so they can be regarded as relaxation times.
They are not known \textit{a priori}, and must be worked out from the underlying microscopic theory.
Finally, note that the Navier-Stokes theory is simply recovered in the limit where the new terms are negligible.
The constitutive equations \eqref{eqrelhydro17a}--\eqref{eqrelhydro17c} are sometimes referred to as the \textit{Israel-Stewart equations} of extended irreversible thermodynamics.
They are 9 first-order-in-time partial differential equations which, together with the 5 energy-momentum and charge conservation equations \eqref{eqrelhydro02} and \eqref{eqrelhydro03}, form a set of 14 equations in the 14 unknowns represented by the independent components of the variables $\{ \rho,n,u^{\mu},\Pi,q^{\mu},\Pi^{\mu \nu}\}$.
Clearly, this theory is mathematically more involved than ideal hydrodynamics or the Navier-Stokes theory: more equations in more variables come into play, more input from the underlying microscopic theory is needed and sometimes the physical meaning can be clouded by technicalities.
However, the feature that the dissipations relax with specific time scales instead of instantaneously reach their asymptotic values is the key to obtain a hyperbolic set of equations, thus a causal and stable theory of relativistic hydrodynamics \cite{Israel:1979wp}.
In fact, by repeating a perturbative analysis similar to the one previously discussed, one can indeed show that the resulting equations are hyperbolic in nature.
It is not straightforward to see whether hyperbolicity holds under all conditions, but we will not go into that here.
Over the years, a great variety of predictions of the Israel-Stewart theory have successfully matched with experiments, both in the relativistic and non-relativistic regimes.
Most of the recent interest came in fact from heavy-ion collisions, where the need of a relativistic theory of non-ideal fluids has become crucial to explain the spectra of particles produced in ultrarelativistic nucleus-nucleus collisions.
\section{Summary and outlook}
Before moving to the next Chapter, we briefly wrap up the content of the present one.
Hydrodynamics is an effective classical theory describing the evolution of fluids, systems whose microscopic and macroscopic characteristics are significantly separated.
Although their fundamental nature can be quantum mechanical and possibly complicated, whenever that scale separation occurs their evolution can be effectively described in terms of a few classical degrees of freedom.
The simplest instance is that of a perfect or ideal fluid, namely ideal hydrodynamics.
This relies on the assumption of local thermodynamic equilibrium, a configuration wherein the energy-momentum tensor and the possible charged currents, which are the fundamental ingredients upon which hydrodynamics is built, preserve the same structure they have at thermodynamic equilibrium even though the system is out of it, with the energy density, pressure and velocity field promoted to slowly varying functions of space and time.
Thus, it is possible to define the thermodynamic fields for each fluid element and do thermodynamics.
The key quantity allowing the enforcement of the second law of thermodynamics is the entropy current, a vector field that integrated on some three-dimensional spacelike hypersurface gives the entropy of the system.
It was shown that ideal fluids do not produce any entropy.
On the contrary, entropy is produced by non-perfect or non-ideal fluids, systems where dissipations are present and give rise to irreversible transport phenomena; the entropy current keeps track of these phenomena, which in turn make the entropy increase.
The most straightforward theory of dissipative hydrodynamics is the Navier-Stokes one: although reproducing the correct non-relativistic limit and fulfilling the laws of thermodynamics, it turns out to be unstable and to break causality.
The ultimate cause for that is the parabolic nature of the equations.
The solution to this problem proposed by Israel and Stewart is the promotion of the dissipations from algebraically fixed quantities to dynamical variables.
The resulting entropy current captures more terms, which are of second order in the derivatives of the thermodynamic fields and which in turn give rise to hyperbolic equations that restore stability and causality.
The price to pay is that the theory is mathematically more involved and the physical interpretation can be clouded by technicalities, but the predictions of this theory have matched many experimental results so far.
As mentioned, the fundamental ingredients of relativistic hydrodynamics are the conserved currents, namely the energy-momentum tensor and some possible charged currents.
In order to enter the hydrodynamic equations, they must be worked out as averages in the underlying microscopic theory.
This could be, for instance, a relativistic kinetic theory, that is, a model describing the non-equilibrium dynamics of a fluid whose microscopic degrees of freedom can be regarded as a dilute system of weakly interacting particles.
The relativistic kinetic approach works well when the mean free path of the particles is much larger than their thermal wavelength, but it breaks down for strongly interacting fluids requiring a full quantum field theoretical description in general.
Thus, when the underlying microscopic theory is a Quantum Field Theory, those averages are in fact thermal expectation values calculated either with a generating functional or a density operator depending on the state of the system.
Throughout the rest of this work, we will use a method to calculate thermal expectation values based on a density operator put forward by Zubarev in the late 70's, which will be the subject of the next Chapter.
\chapter*{Abstract}
\afterpage{\blankpage}
\vspace*{5cm}
\begin{Large}
\begin{center}
\textit{To my parents}
\end{center}
\end{Large}
\vfill
\thispagestyle{empty}
\chapter*{Acknowledgements}
\input{aknowledgements}
\tableofcontents
\newpage
\chapter*{Notation}
\input{chapters/notation}
\chapter{Introduction}
\input{chapters/introduction}
\chapter{Relativistic Hydrodynamics}
\label{chapter:relhydro}
\input{chapters/relhydro}
\chapter{Relativistic Quantum Statistical Mechanics}
\label{chapter:zubarev}
\input{chapters/zubarev}
\chapter{Relativistic Quantum Fluid at Equilibrium with Acceleration}
\label{chapter:gteacceleration}
\input{chapters/gteacceleration}
\chapter{Relativistic Quantum Fluid out of Equilibrium with Boost Invariance}
\label{chapter:boost}
\input{chapters/nonequilibrium}
\chapter{Conclusions}
\label{chapter:conclusions}
\input{chapters/conclusions}
\section{Introduction}
One of the recent exciting discoveries in gamma-ray bursts (GRBs)
is the break detected in the afterglow light curve of some bursts
which could be interpreted as a consequence of the beamed emission
(e.g., Ref. \refcite{Rho97},\refcite{sar99}). If the afterglow emission
is beamed, how is the prompt emission? Whether the latter emission
is isotropic or strongly beamed in our direction has been an open
question for some years. As mentioned in Ref. \refcite{sar99}, this
question has implications on almost every aspect of the
phenomenon, from the energetics of the events to the engineering
of the inner engine and the statistics and the luminosity function
of the sources. Frail et al. (2001) studied\cite{fra01} a sample of GRBs with
good afterglow follow-up and known redshifts. They interpreted the
breaks in the scenario of the beamed model and found that most
bursts with large values of the isotropic-equivalent gamma-ray
energy, $E_{iso}$, possess the smallest beaming fraction,
$f_b=1-\cos \theta _{jet}$. The collimation-corrected energy,
$E_\gamma =f_bE_{iso}$, of this sample is strongly clustered. This
was independently confirmed by Panaitescu \& Kumar (2001)\cite{pan01a}. Bloom
et al.(2003) collected a larger sample of GRBs and found that the distribution of $%
E_\gamma $ clusters around $1.3\times 10^{51}$ ergs\cite{blo03}. All these
were regarded as evidence supporting the beamed emission scenario.
The isotropic gamma-ray energy $E_{iso}$ was found to be
correlated with the cosmological co-moving frame peak energy
$E_{p}$ by different authors (see, e.g., Refs.~\refcite{llo00}--\refcite{yon04}). More recently, Ghirlanda et al. (2004) found\cite{ghi04} a tight
correlation between $E_{\gamma }$ and $E_{p}$, which sheds light
on the still uncertain radiation processes for the prompt GRB
emission. In computing $E_{\gamma }$, it is essential that the
beaming fraction is available. According to Refs.~\refcite{sar99},
the opening angle of the jet can be calculated with
\begin{equation}
\theta _{jet}=B(\frac{t_{jet}}{1+z})^{3/8}(\frac{\xi
n}{E_{iso}})^{1/8}
\end{equation}%
in the case of a homogeneous circumburst medium, where $B$ is a
constant which can be found in Ref.~\refcite{fri05}, $t_{jet}$
is the afterglow jet break time, $z$ is the redshift, $\xi $ is
the efficiency for converting the explosion energy to radiation,
$n$ is the density of the ambient medium, and $E_{iso}$ is the
energy in $\gamma $-rays calculated assuming that the emission is
isotropic. This enables us to estimate the opening angle of jets
and with it to peep into other parameters associated with the
mechanism of radiation.
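As an illustration, equation (1) can be evaluated with the numerical normalization commonly quoted from Ref.~\refcite{fra01}; the prefactor of 0.057 rad and the fiducial values $\xi =0.2$ and $n=0.1\,{\rm cm}^{-3}$ below are assumptions of this sketch rather than results of the present paper:

```python
def theta_jet(t_jet_days, z, E_iso_erg, xi=0.2, n=0.1):
    """Jet half-opening angle (radians) for a homogeneous circumburst medium,
    with the normalization of Frail et al. (2001); xi and n are fiducial."""
    return (0.057
            * (t_jet_days / ((1.0 + z) / 2.0))**(3.0 / 8.0)
            * (E_iso_erg / 1e53)**(-1.0 / 8.0)
            * (xi / 0.2)**(1.0 / 8.0)
            * (n / 0.1)**(1.0 / 8.0))

# fiducial case: t_jet = 1 day, z = 1, E_iso = 1e53 erg -> 0.057 rad
theta = theta_jet(1.0, 1.0, 1e53)
```

Note the weak $1/8$-power dependence on $E_{iso}$, $\xi $ and $n$, which makes the estimate robust against uncertainties in those quantities.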
Indeed, under the scenario of jets, parameters such as the total
energy in the relativistic ejecta, the jet opening angle, the
density and profile of the medium in the immediate vicinity of the
burst, and those associated with the microphysical shocks could be
estimated by modeling the broadband emission of GRB afterglows for
various bursts (see Refs.~\refcite{pan01a},\refcite{pan01}--\refcite{pan02}). Assuming that the observed GRB
durations are a good measure of the ejecta deceleration timescale,
the jet Lorentz factors at the deceleration radius were found to
be between 70 and 300 for 10 GRBs\cite{pan02}.
In addition to the break time observed in the afterglow, Sari \&
Piran (1999) suggested\cite{sar99a} that the reverse shock of a burst could
provide a crude measurement of the initial Lorentz factor. In the
case of GRB 990123, they showed that the initial Lorentz factor
$\Gamma \sim 200$ could be obtained from the prompt optical flash
observed in the burst.
Early afterglow which could overlap the main burst was predicted
previously (see, e.g., Ref.~\refcite{sar97}). According to the analysis of
Ref. ~\refcite{sar99b} based on the internal-external shocks model,
for short bursts the peak of the afterglow will be delayed,
typically, by few dozens of seconds after the burst while for long
ones the early afterglow emission will overlap the GRB signal, and
a delayed emission, with the characteristics of the early
afterglow, can be used to measure the initial Lorentz factor of
the relativistic flow (see, also Refs.~\refcite{sar99c}--\refcite{nak05}).
Motivated by the previous works, we wonder whether the initial Lorentz
factor could be estimated with an independent approach in case the
delayed emission is not available. Our investigation of this issue
is organized as follows. In section 2, we present appropriate
formulas and show how to estimate the concerned parameters with
them. A new GRB sample for which the redshift and the break time
in the afterglow are known is studied in section 3. Conclusions
are summarized in the last section, where a brief discussion is
presented.
\section{Formulas employed}
As the initial explosion of a GRB does not belong to the afterglow
stage, we cannot estimate the parameters of the former from
measurements of the latter simply by applying the laws of the afterglow.
To connect the two phases, we need to find a particular moment
which satisfies the following requirements: a) at that moment, the
afterglow has already begun so that the law of the afterglow is
applicable; b) parameters associated with that moment are well
related to those of the initial explosion.
As predicted above, for long bursts the early afterglow emission
will overlap the GRB signal. Indeed, it was reported recently that
a bright optical emission from GRB 990123 was detected while the
burst was still in progress\cite{ake99}. Revealed in Ref.~\refcite{fox03}
is the discovery of the optical counterpart of GRB
021004 only 193 seconds after the event. The time (measured from
the trigger) is slightly longer than the duration of the event. Li
et al. (2003) showed\cite{li03} that the faintness of GRB 021211, coupled
with the fast decline typical of optical afterglows, suggests that
some of the dark bursts were not detected because optical
observations commenced too late after the GRB.
Accordingly, we assume that, by the time the prompt emission of a
GRB becomes undetectable, which is generally represented by the
duration $t_{dur}$ (in BATSE, it is associated
with $T_{90}$), the afterglow of the source has already emerged.
Under this assumption, $t_{dur}$ is a moment that satisfies
the first requirement. As shown in the following,
$t_{dur}$ can also satisfy the second requirement. In addition,
$t_{dur}$ is fortunately always available.
The observed complex structure of GRB\ light curves suggests that,
during the prompt emission of a burst, several ejecta with
different masses and different Lorentz factors might be involved.
We assume that, after the process of the prompt emission, all
ejecta are merged as a single one which has mass $m$ and bears a
Lorentz factor $\Gamma _{dur}$ (which is measured at the end of
the duration of the burst). Let's define an average initial
Lorentz factor of the early phase ejecta as%
\begin{equation}
\Gamma \equiv \frac{1}{m}\sum_{i}\Gamma _{i}m_{i},
\end{equation}%
with%
\begin{equation}
m=\sum_{i}m_{i},
\end{equation}%
where $\Gamma _{i}$ and $m_{i}$ are the initial Lorentz factor and
the mass of the $i$th ejecta, respectively. One can check that,
for a burst associated with a shock produced by the collision of
two shells with roughly equal masses, $\Gamma \simeq (\Gamma
_{in}+\Gamma _{out})/2$, where $\Gamma _{in}$ and $\Gamma _{out}$
are the Lorentz factors of the inner and outer shells,
respectively, while for a burst containing several shells with
roughly equal masses, $\Gamma \simeq \sum_{i}\Gamma _{i}/N$, where
$N$ is the number of shells. Note that the initial kinetic energy
of all these early phase ejecta is the product of the explosion of
the burst (where a number of sub-explosions might be involved). It
is well known that it is the loss of the kinetic energy of the
GRB ejecta that gives rise to the energy of the radiation observed
during the prompt emission as well as to the increase of the
thermal energy during this period, regardless of what the radiation
mechanism is. According to the conservation of energy, one finds
\begin{equation}
\Gamma mc^{2}-\Gamma _{dur}mc^{2} =\xi (\Gamma -1)mc^{2}+\Delta
E_{th},
\end{equation}%
where $\Delta E_{th}$ is the increase of the thermal energy of
the system. Assuming that the radiation is associated with the
synchrotron mechanism, as is generally believed, then part of the
increased thermal energy at any moment must be converted to
radiation due to the increasing velocity of individual electrons.
Based on this argument, we believe that, summed over all
moments, $\Delta E_{th}$ would be significantly reduced compared
with that obtained in a situation where the synchrotron
mechanism is not at work. We accordingly assume that, during the
shock, the increase of thermal energy is much smaller than the
radiation energy. That is, we assume $\xi (\Gamma
-1)mc^{2}\gg\Delta E_{th}$. Omitting the increase of the thermal
energy, we get from (4) that
\begin{equation}
(1-\xi )\Gamma \simeq\Gamma _{dur}-\xi .
\end{equation}%
When all the initial kinetic energy is converted to photons, we
have $\xi =1$ and then $\Gamma _{dur}=1$, and when none of the
initial kinetic energy is changed to radiation, we get $\xi =0$
and then $\Gamma _{dur}=\Gamma $. Thus, one always finds $1\leq
\Gamma _{dur}\leq \Gamma $.
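The algebra leading to equation (5) and the two limits just quoted can be verified symbolically (a sketch with sympy; the factor $mc^{2}$ cancels out):

```python
# Symbolic check (sympy) of eq. (5): dropping the thermal term in the energy
# balance Gamma - Gamma_dur = xi (Gamma - 1) gives (1 - xi) Gamma = Gamma_dur - xi.
import sympy as sp

Gamma, Gamma_dur, xi = sp.symbols('Gamma Gamma_dur xi')

# energy balance (4) with Delta E_th omitted, divided through by m c^2
balance = sp.Eq(Gamma - Gamma_dur, xi * (Gamma - 1))

Gamma_dur_sol = sp.solve(balance, Gamma_dur)[0]
# equation (5) rearranged: Gamma_dur = (1 - xi) Gamma + xi
assert sp.simplify(Gamma_dur_sol - ((1 - xi) * Gamma + xi)) == 0
assert Gamma_dur_sol.subs(xi, 1) == 1       # all kinetic energy radiated
assert Gamma_dur_sol.subs(xi, 0) == Gamma   # no energy radiated
```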
According to (5), the initial Lorentz factor could be determined as long as $%
\Gamma _{dur}$ and $\xi $ (when $\xi \neq 1$) are known.
During the period of the afterglow, when the external matter is
homogenously distributed, the Lorentz factor would decline
following the law of $\Gamma (t)\propto t^{-p}$, where $p=3/7$ in
a radiative phase and $p=3/8$ in an adiabatic phase (see, e.g.,
Ref.~\refcite{pir05}). Thus, we get $\Gamma
_{dur}=(t_{jet}/t_{dur})^{p}\Gamma _{jet}$, where $\Gamma _{jet}$
is the Lorentz factor of the ejecta measured at $t_{jet}$.
According to the beamed model, a break in the afterglow light
curve of the burst would appear when
its bulk Lorentz factor becomes of the order of $1/\theta _{jet}$, i.e., $%
\Gamma _{jet}\simeq 1/\theta _{jet}$. We then come to
\begin{equation}
\Gamma _{dur}\simeq (\frac{t_{jet}}{t_{dur}})^{p}\frac{1}{\theta
_{jet}}.
\end{equation}%
For $\xi <1$, we get from equations (5) and (6) that
\begin{equation}
\Gamma \simeq \frac{(t_{jet}/t_{dur})^{p}/\theta _{jet}-1}{1-\xi
}+1.
\end{equation}
As is generally assumed, the jet of bursts is strongly beamed in
our direction so that the emission is detectable due to the great
Doppler boosting (see, e.g., Ref.~\refcite{sar99}). According to the
Doppler effect, a photon of $E_{0}$ emitted from the area of
$\theta =0$ within the spherical
surface of a uniform jet which moves outwards with a bulk Lorentz factor $%
\Gamma $ would be blue-shifted to $E=2\Gamma E_{0}$. In the case
of photons being emitted from a certain area with a rest frame
Band function spectrum\cite{ban93} which peaks at
$E_{0,p}$, the spectrum would be blue-shifted and would peak at
$E_{p}$ which is proportional to $E_{0,p}$ (see Table 4 in Ref.~\refcite{qin02}
where $E_{p}=1.67\Gamma E_{0,p}$ can be concluded). Neglecting the
small difference, we take in the following that $E_{p}\simeq
2\Gamma E_{0,p}$. Following Ref.~\refcite{sar99}, we consider
throughout this paper only an adiabatic phase and then take
$p=3/8$. Thus, from equation (7) we get
\begin{equation}
\Gamma \simeq \frac{(t_{jet}/t_{dur})^{3/8}/\theta _{jet}-1}{1-\xi
}+1
\end{equation}%
and
\begin{equation}
E_{0,p}\simeq \frac{(1-\xi
)E_{p}/2}{(t_{jet}/t_{dur})^{3/8}/\theta _{jet}-\xi }.
\end{equation}
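A sketch implementing equations (8) and (9) is given below; the burst parameters used are hypothetical and serve only to illustrate the procedure:

```python
P = 3.0 / 8.0  # adiabatic phase

def gamma_dur(t_jet, t_dur, theta_jet):
    """Lorentz factor at the end of the burst, eq. (6)."""
    return (t_jet / t_dur)**P / theta_jet

def initial_lorentz_factor(t_jet, t_dur, theta_jet, xi):
    """Average initial Lorentz factor, eq. (8), valid for xi < 1."""
    return (gamma_dur(t_jet, t_dur, theta_jet) - 1.0) / (1.0 - xi) + 1.0

def rest_frame_peak_energy(E_p, t_jet, t_dur, theta_jet, xi):
    """Rest-frame peak energy, eq. (9); same units as E_p."""
    return (1.0 - xi) * E_p / 2.0 / (gamma_dur(t_jet, t_dur, theta_jet) - xi)

# hypothetical burst: t_jet = 2 days, t_dur = 60 s, theta_jet = 0.07 rad,
# xi = 0.2, observed E_p = 500 keV
t_jet, t_dur, theta, xi, E_p = 2 * 86400.0, 60.0, 0.07, 0.2, 500.0
Gamma = initial_lorentz_factor(t_jet, t_dur, theta, xi)
E_0p = rest_frame_peak_energy(E_p, t_jet, t_dur, theta, xi)
```

Note that $E_{0,p}=E_{p}/(2\Gamma )$ holds by construction, which provides a simple internal consistency check of the two routines.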
\section{Application}
Presented in Ref.~\refcite{fri05} are 52 GRB or XRF sources
(called the FB sample) where their redshifts as well as the
gamma-ray fluences are available. The isotropic energies $E_{iso}$
were calculated assuming a standard cosmology of $(\Omega
_{M},\Omega _{\Lambda },h)=(0.3,0.7,0.7)$. For some of these
sources, break times $t_{jet}$ are available, and then with
equation (1) the opening angles $\theta _{jet}$ of the sources
could be well determined, where $\xi =0.2$ is assumed (see also
Ref.~\refcite{fra01}). According to equations (8) and (9), to
calculate $\Gamma $ and $E_{0,p}$ for these sources we need to
know $t_{dur}$ as well. Listed in Table 1 are the values of
$t_{dur}$, which is measured in various bands, for the sources of
the FB sample with $t_{jet}$, $\theta _{jet}$, and $E_{p}$
available. To
meet the requirement that the afterglow has already begun (i.e., $%
t_{aft}\leq t_{dur}$, where $t_{aft}$ is the start time of the
afterglow) and the prompt emission has just ended so that the
common value of the efficiency $\xi $ can be adopted and the mass
of the piled up ambient medium is relatively small, we adopt the
largest value of $t_{dur}$ to calculate $\Gamma $ and $E_{0,p}$.
The results are presented in Table 2.
\newpage
\begin{table}[ph]
\tbl{Data of $t_{dur}$}
{ \begin{tabular*}{8cm}{@{}ccccl@{}} \toprule
GRB & trig. NO. & Duration Band & $t_{dur}$ &Ref. \\
(/XRF) & & (keV) & (s) & \\
\hline
970508 & 6225 & $20 \sim 1000$ & 35 & \refcite{kou97}\\
& &$50\sim300$ & 35 & \refcite{kou97} \\
& & -- & 15 & \refcite{cos97} \\
& &$25\sim(>320)$&23.104(3.789) & BATSE \\
970828 & 6350 & $2\sim12$ & 160 & \refcite{rem98} \\
980519 & 6764 & $40\sim700$ & 30 & \refcite{mul98}\\
& & $50\sim300$ & 60 & \refcite{con98}\\
& & $2\sim28$ & 190 & \refcite{mul98} \\
& &$25\sim(>320)$&23.808(1.032) &BATSE \\
980703 & 6891 & $40\sim700$ & 90 & \refcite{ama98} \\
& & $50\sim300$ & 400 & \refcite{kip98}\\
& & $2\sim12$ & 40 & \refcite{smi98} \\
& & $2\sim20$ & 400 & \refcite{kip98} \\
& &$25\sim(>320)$&411.648(9.273) &BATSE \\
990123 & 7343 & $40\sim700$ & 100 & \refcite{fer99} \\
& & $50\sim300$ & 63.3 & \refcite{kip99a} \\
& & $20\sim1000$ & 63.3 & \refcite{kip99a} \\
& &$25\sim(>320)$&63.36(0.264 ) & BATSE\\
990510 & 7560 & $40\sim700$ & 80 & \refcite{dad99} \\
& & $50\sim300$ & 100 & \refcite{kip99b}\\
& & $20\sim100$ & 100 & \refcite{kip99b} \\
& &$25\sim(>320)$&68.032(0.202) & BATSE\\
990705 & 7633 & $40\sim700$ & 45 & \refcite{cel99} \\
& & $2\sim26$ & 45 & \refcite{cel99} \\
990712 & 7648 & $40\sim700$ & 30 & \refcite{hei99} \\
& &$25\sim(>320)$&31.616(3.137) & BATSE\\
991216 & 7906 & $50\sim300$ & 50 & \refcite{kip99} \\
& & $20\sim100$ & 50 & \refcite{kip99} \\
& &$25\sim(>320)$&15.168(0.091) & BATSE\\
011211 & & $40\sim700$ & 270 & \refcite{fro02}\\
020124 & & $8\sim85$ & 70 & \refcite{ric02}\\
020405 & & $25\sim100$ & 40 & \refcite{hur02a} \\
020813 & & $2\sim25$ & 125 & \refcite{vil02} \\
& & $25\sim100$ & 125 & \refcite{hur02b} \\
& & $8\sim40$ & 125 & \refcite{hur02b} \\
021004 & & $8\sim40$ & 100 & \refcite{shi02} \\
021211 & & $8\sim40$ & 5.7 & \refcite{cre02} \\
030226 & & $30\sim400$ & 100 & \refcite{suz03} \\
030328 & & $30\sim400$ & 100 & \refcite{vil03} \\
030329 & & $30\sim400$ & 50 & \refcite{ric03}\\
& & $15\sim5000$ & 35 & \refcite{gol03}\\
030429 & & $30\sim400$ & 14 & \refcite{dot03} \\
& & $25\sim100$ & 5 & \refcite{hur03} \\
040511 & & $30\sim400$ & 38 & \refcite{dul04} \\
041006 & & $25\sim100$ & 24.6 & \refcite{van04} \\
\hline
\end{tabular*} \label{ta1}}
\end{table}
\newpage
\begin{table}[ph]
\tbl{Estimated values of the initial Lorentz factor and the corresponding rest
frame peak energy}
{ \begin{tabular}{@{}ccc@{}}
\hline
GRB/XRF & $E_{0,p}$ (keV) & $\Gamma$\\
\hline
970508& 0.354(0.111) & 206(26) \\
970828& 2.081(0.477) & 141(16) \\
980519& 2.311(0.594) & 157(26) \\
980703& 3.397(0.734) & 75( 7) \\
990123& 4.027(0.630) & 253(34) \\
990510& 0.752(0.086) & 285(17) \\
990705& 0.794(0.116) & 220(27) \\
990712& 0.330(0.065) & 142(14) \\
991216& 1.193(0.335) & 270(54) \\
011211& 0.753(0.101) & 124( 8) \\
020124& 0.723(0.183) & 254(48) \\
020405& 1.530(0.405) & 202(36) \\
020813& 0.857(0.114) & 188(18) \\
021004& 0.930(0.521) & 144(51) \\
021211& 0.133(0.033) & 343(68) \\
030226& 0.859(0.203) & 170(15) \\
030328& 0.837(0.117) & 191(18) \\
030329& 0.293(0.024) & 136(10) \\
030429& 0.203(0.083) & 317(97) \\
040511& 0.697(0.189) & 236(44) \\
041006& 0.289(0.106) & 190(59) \\
\hline
\end{tabular}}
\end{table}
Shown in Fig. 1 is the relation between $E_{0,p}$ and $\Gamma$. It
shows clearly that $\log E_{0,p}$ and $\log \Gamma $ are not
correlated at all. The lack of correlation between the two quantities
suggests that the rest frame peak energy is strongly
associated with the emission mechanism rather than with the expansion speed
(one will also find in the following that the distribution of $\Gamma $
is much more clustered than that of $E_{0,p}$).
\begin{figure}
\centering
\includegraphics[width=3.0in]{fig1.ps}
\caption{Relationship between $E_{0,p}$ and $\Gamma$. The
correlation coefficient between $\log E_{0,p}$ and $\log \Gamma$ is
$r=0.078$ ($N=21$) and the probability of rejecting the null hypothesis is $P=0.734$.}
\end{figure}
Displayed in Fig. 2 are the distributions of $E_{0,p}$ and
$\Gamma$. The rest frame peak energy peaks at $E_{0,p}=0.8$\,keV and is mainly distributed
within $(0.3, 3)$\,keV. The Lorentz factor peaks at $200$ and
is found mainly within $(100,400)$, which is a rather narrow range.
\begin{figure}
\centering
\subfigure[]{
\label{fig:subfig:a}
\includegraphics[width=3.0in]{fig2a.ps}}
\subfigure[]{
\label{fig:subfig:b}
\includegraphics[width=3.0in]{fig2b.ps}}
\caption{ Distributions (solid lines) of $\log
E_{0,p}$ (a) and $\log \Gamma $ (b).}
\end{figure}
As Fig. 3 shows, there is a very tight correlation between $E_{0,p}$ and
$E_p$: $\log E_p=(0.85\pm 0.25)\log E_{0,p}+(2.56\pm 0.10)$. Note
that the relation $E_p\simeq 2\Gamma E_{0,p}$ by itself cannot
guarantee this correlation; if it did, it would also lead to a correlation between
$E_{0,p}$ and $\Gamma $, which is not observed (see Fig. 1). The
correlation between $E_{0,p}$ and $E_p$ must therefore arise from
mechanisms other than the Doppler effect.
\begin{figure}
\centering
\includegraphics[width=3.0in]{fig3.ps}
\caption{Relationship between $E_p$ and $E_{0,p}$. The correlation
coefficient between $\log E_p$ and $\log E_{0,p}$ is $r=0.960$
($N=21$) and the probability of rejecting the null hypothesis is
$P<0.0001$.}
\end{figure}
\section{Discussion and conclusions}
In this paper we propose a method to estimate the initial
Lorentz factor of GRBs that does not rely on the delayed
emission of the early afterglow, which is useful when the detection
of the early afterglow is missed. Motivated by the fact that the
afterglows of some bursts were observed soon after the detection of
the main emission, we assume that the afterglows of the bursts
concerned begin well before the prompt emission dies away (i.e.,
we assume $t_{aft}\leq t_{dur}$). Under this assumption, the bulk
Lorentz factor of a burst measured at the break time, $t_{jet}$,
and that measured at the time marking the end of duration,
$t_{dur}$, are related by the law $\Gamma (t)\propto
t^{-3/8}$ according to the beaming scenario. Employing the
efficiency for converting the explosion energy to
radiation, $\xi $, we can relate the initial Lorentz factor of a
burst to that measured at $t_{dur}$. Combining the two relations,
one can therefore estimate the initial Lorentz factor of a burst
from that measured at $t_{jet}$. The corresponding rest frame peak
energy can then be estimated from this initial Lorentz factor and
the observed peak energy according to the Doppler effect.
Applying this method, we estimate the initial Lorentz factors of the bulk
motion, as well as the corresponding rest frame spectral peak
energies, for a new sample of GRBs for which the redshift and the
break time of the afterglow are available. The
sample employed is that recently presented in Ref.~\refcite{fri05}.
Our analysis shows that the initial Lorentz factor $\Gamma$
peaks at $200$ and is distributed mainly within $(100,400)$, and
the distribution of the corresponding rest frame peak energy peaks at
$E_{0,p}=0.8$\,keV, with a main region of $(0.3, 3)$\,keV.
It is known that a large value of the Lorentz factor, $\Gamma
>100$, is essential to overcome the compactness problem (see,
e.g., Ref.~\refcite{pir05}). As an individual case, the optical flash
accompanying GRB 990123 provides direct evidence for a large
Lorentz factor\cite{sar99a} of $\Gamma \sim 200$.
Statistically, Mallozzi et al. (1995) found\cite{mal95} that the average value
of $E_p$ for 82 bright bursts is $\sim 340$\,keV. Taking
$E_{0,p}=0.8$\,keV and adopting $E_p\simeq 2\Gamma E_{0,p}$, we find
that the average Lorentz factor of these bursts would be $\sim
213$, which is consistent with what we obtained above. Preece et
al. (2000) revealed\cite{pre00}, through the analysis of high time resolution
spectroscopy of 156 bright bursts, that the main range of $E_p$ for
these sources lies within $\sim [100,800]$\,keV.
This leads to a range of $\Gamma \sim [62,500]$ when adopting
$E_{0,p}=0.8$\,keV and $E_p\simeq 2\Gamma E_{0,p}$, which is also in
agreement with what we find in this paper.
As shown in Table 2, the estimated initial Lorentz factor for GRB 990123 is
$\Gamma \sim 253\pm34$, which is slightly larger than that obtained with
the method relying on the delayed emission of the early
afterglow (see Ref.~\refcite{sar99a}, where $\Gamma \sim 200$ was
presented). Applying (6), we get $\Gamma _{dur}=202\pm22$ for this
source. We argue that the initial Lorentz factor estimated with
our method is that associated with the initial explosion of a
burst. It is natural that this value is larger than others
measured at later times, which might be the cause of the
detected difference. Ignoring this slight difference, our method
is consistent with that relying on the delayed emission of the
early afterglow.
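The arithmetic of this comparison can be sketched in a few lines (a sketch, not the authors' code; we use the relation $(1-\xi)\Gamma = \Gamma_{dur}-\xi$, i.e. equation (10) with $\eta\to 0$, together with the Doppler relation $E_p\simeq 2\Gamma E_{0,p}$):

```python
def initial_lorentz_factor(gamma_dur, xi=0.2):
    """Initial Lorentz factor from the one measured at t_dur,
    using (1 - xi) * Gamma = Gamma_dur - xi."""
    return (gamma_dur - xi) / (1.0 - xi)

def rest_frame_peak_energy(e_p, gamma):
    """Rest frame peak energy from the observed one, E_p ~= 2 Gamma E_0p."""
    return e_p / (2.0 * gamma)

# GRB 990123: Gamma_dur = 202 gives Gamma = 252.25, consistent with the
# tabulated 253 +/- 34 within rounding of the input values.
gamma0 = initial_lorentz_factor(202.0)
print(round(gamma0, 2))  # 252.25
```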
We suspect that a very strong shock produces higher energy
photons, characterized by a large value of $E_{0,p}$, which in turn
leads to a large value of $E_p$ (note that, as shown
above, the Lorentz factor does
not change much from source to source). We performed a statistical analysis of
$E_\gamma $ and $E_{0,p}$ and found that they are indeed clearly
correlated (the figure is omitted). We can then understand why
$E_\gamma $ is correlated
with $E_p$: strong shocks produce large values of both
$E_\gamma $ and $E_p$, whereas weak shocks lead to smaller values.
We assume throughout this paper that the afterglow is dominated
by an adiabatic process. However, there is an alternative, a
radiative process, for which $p=3/7$ should be adopted. We
repeated our analysis replacing $p=3/8$ with $p=3/7$ and found that
the value of $\Gamma $ is then mainly distributed within $(106,584)$,
which is slightly larger than expected. Thus, we tend to
believe that, during the epoch of the afterglow, the dominant
process is adiabatic rather than radiative (this is not conclusive,
since the resulting Lorentz factors are still within the acceptable
range).
In the above analysis, we assumed that $\xi (\Gamma
-1)mc^{2}\gg\Delta E_{th}$. Does our analysis strongly depend on
this assumption? To answer this, let us set $\Delta
E_{th} \equiv \eta \xi (\Gamma -1)mc^{2}$, where $\eta$ is a
constant. In this way, equation (4) becomes
\begin{equation}
\Gamma mc^{2}-\Gamma _{dur}mc^{2} =(1+ \eta) \xi (\Gamma
-1)mc^{2}.
\end{equation}
Taking $\xi'\equiv(1+ \eta) \xi$, we get
\begin{equation}
(1-\xi' )\Gamma =\Gamma _{dur}-\xi'.
\end{equation}
According to (10), $\xi'=1$ would suggest that all the explosion energy
converts to radiation, which cannot be true. Thus, we have
$0<\xi'<1$. In this case, one finds that formulas (5), (7), (8)
and (9) remain valid when $\xi$ is replaced with $\xi'$. This
indicates that the above analysis does not depend on the
mentioned assumption. When the two amounts of energy are
comparable, i.e., $ \eta\simeq 1$, one has $\xi'\simeq 2 \xi$.
Adopting $\xi=0.2$ leads to $\xi'=0.4$, which does not
significantly change the results obtained above.
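The equivalence of equations (9) and (10) is a one-line rearrangement; a quick numerical check (with arbitrary sample values) confirms that a $\Gamma$ computed from (10) indeed satisfies (9):

```python
def gamma_from_eq10(gamma_dur, xi, eta):
    """Solve (1 - xi') * Gamma = Gamma_dur - xi' with xi' = (1 + eta) * xi."""
    xi_p = (1.0 + eta) * xi
    return (gamma_dur - xi_p) / (1.0 - xi_p)

gamma_dur, xi, eta = 202.0, 0.2, 1.0   # eta ~ 1: comparable energies, xi' = 0.4
gamma = gamma_from_eq10(gamma_dur, xi, eta)
# Equation (9) with the common factor m c^2 cancelled:
residual = (gamma - gamma_dur) - (1.0 + eta) * xi * (gamma - 1.0)
print(abs(residual) < 1e-9)  # True
```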
\section*{Acknowledgments}
This work was supported by the Special Funds for Major State Basic
Research Projects (``973'') and National Natural Science
Foundation of China (No. 10273019).
\section{Introduction}
The study of orthogonal polynomials with respect to the generalized weight function $|x|^\rho $ $\exp(-|x|^m)$, $\rho >-1, $ $m>0$, began with G\'{e}za Freud, see for example \cite{Fr76}. We refer to \cite{CJ18} for an interesting historical summary of the studies of generalized Freud polynomials.
A symmetric Freud weight function in one variable is usually given by
\begin{equation*}
w_{t}(x)=e^{-x^4+tx^2},
\end{equation*}
for $x \in \mathbb{R}$, where $t\in \mathbb{R}$ is considered as a time parameter. The corresponding moments exist and depend on $t$ as
\begin{equation*}
\mu_k(t) = \int_{-\infty}^{+\infty} x^k e^{-x^4 + tx^2} dx, \quad k=0,1,\ldots .
\end{equation*}
Therefore, the sequence of orthonormal polynomials with respect to $w_{t}(x)$ is a sequence of polynomials in the variable $x$ whose coefficients depend on $t$, denoted $\{p_n(x,t)\}_{n\geqslant 0}$, and it satisfies the three term recurrence relation in the form
$$
x\,p_n(x,t) = a_{n}(t)p_{n+1}(x,t) + a_{n-1}(t) p_{n-1}(x,t), \quad n \geqslant 0,
$$
with $p_{-1}(x,t)=0$ and $p_{0}(x,t)=\mu_{0}(t)^{-1/2}$.
It is well known that the coefficients $ a_{n}(t)$ satisfy the difference equation
\begin{equation} \label{dif_eq_1_var}
4\,a_{n}^2(t) [a_{n+1}^2(t) + a_{n}^2(t) + a_{n-1}^2(t)] - 2\, t \,a_{n}^2(t) = n+1 , \quad n \geqslant 0,
\end{equation}
where $a_0^{2}(t) =\mu_2(t)/\mu_0(t)$ and $a_{-1}(t) =0$ (see, for instance, \cite{BR94, Ma86, Mag99, Va08}).
Also well known is the fact that the difference equation \eqref{dif_eq_1_var} coincides with the discrete Painlev\'{e} equation dPI
\begin{equation*}
x_n (x_{n+1} + x_n + x_{n-1}) - \delta\,x_n = \alpha\, n +\beta +(-1)^n \,\gamma,
\end{equation*}
with $x_n = a^2_{n}(t), \alpha = \beta = 1/4, \gamma =0, \delta = t/2.$ See more about relations between orthogonal polynomials and Painlev\'{e} equations in \cite{Va08} and the references therein.
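As a brief numerical illustration (ours, not part of the cited references), the recurrence \eqref{dif_eq_1_var} can be iterated forward from $a_0^2(t)=\mu_2(t)/\mu_0(t)$; at $t=0$ the moments are explicit, $\mu_{2k}(0)=\tfrac{1}{2}\,\Gamma\!\left(\tfrac{2k+1}{4}\right)$, so $a_0^2(0)=\Gamma(3/4)/\Gamma(1/4)$:

```python
import math

def freud_a2_sequence(t, n_max, a0_sq):
    """Iterate 4 x_n (x_{n+1} + x_n + x_{n-1}) - 2 t x_n = n + 1
    for x_n = a_n^2(t). The forward iteration is numerically unstable
    for large n, so this is only meant as an illustration for small n."""
    x = [a0_sq]
    x_prev = 0.0                      # a_{-1}^2 = 0
    for n in range(n_max):
        x_next = (n + 1 + 2.0 * t * x[n]) / (4.0 * x[n]) - x[n] - x_prev
        x_prev = x[n]
        x.append(x_next)
    return x

a0_sq = math.gamma(0.75) / math.gamma(0.25)   # a_0^2(0) ~ 0.338
x = freud_a2_sequence(0.0, 6, a0_sq)
print([round(v, 3) for v in x[:3]])  # [0.338, 0.402, 0.505]
```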
If we consider the sequence of monic orthogonal polynomials associated with $w_{t}(x)$, $\{q_n(x,t)\}_{n\geqslant 0}$, again a sequence of polynomials in the variable $x$ whose coefficients depend on $t$, it satisfies
$$
x \,q_n(x,t) = q_{n+1}(x,t) + \beta_{n}(t) q_{n-1}(x,t), \quad n \geqslant 0,
$$
with $q_{-1}(x,t)=0$, $q_{0}(x,t)=1$ and $\beta_{n}(t) = a_{n-1}^{2}(t)$. The coefficients $\beta_{n}(t)$ satisfy the Langmuir lattice (or Volterra lattice)
\begin{equation} \label{Lang_one_v}
\dot{\beta}_{n}(t) = \beta_{n}(t) [\beta_{n+1}(t)-\beta_{n-1}(t)], \quad
n \geqslant 0,
\end{equation}
where, as usual, $ \dot{\beta}_{n}(t) = \dfrac{d}{dt} \beta_{n}(t)$, see \cite{Pe01}.
Consequently, the Langmuir lattice in terms of $a_{n}(t)$ is
$$
\dot{a}_{n}(t) = \frac{a_{n}(t)}{2} [a_{n+1}^2(t)-a_{n-1}^2(t)], \quad n \geqslant 1.
$$
\medskip
The connection between the coefficients of the three term recurrence relation for orthogonal polynomials in one variable and Painlev\'{e} equations (\cite{Va08}), or Langmuir and Toda lattices (\cite{Pe01}), is very well known.
A fundamental paper regarding discrete Painlev\'e I and Laguerre-Freud equations is \cite{Mag99}.
The motivation of this manuscript is to analyse extensions of the equation dPI, showing that
the matrix coefficients of the three term relations of two variable Freud orthogonal polynomials satisfy certain matrix difference equations, which we call matrix Painlev\'e-type difference equations, and also to present a two dimensional version of the Langmuir lattices. There are previous papers dealing with the extension of such systems to the matrix realm. In \cite{Cas12,Gru11} the matrix extension of dPI was first derived using the Riemann-Hilbert problem for the theory of matrix orthogonal polynomials. This has been extended further to alt-dPI, dPII and dPIV, see \cite{Bra21,Bra21_2,Bra22,Cas19}. Matrix Painlev\'e systems have also been studied in \cite{Caf14,Caf18}. Confinement of singularities is a very interesting property of non-linear discrete systems derived within orthogonal polynomial
theory (\cite{Mas19,Ram91}); for its application to matrix dPI see \cite{Cas14}.
As is well known, the study of bivariate orthogonal polynomials is not as deeply developed as the univariate case. The first difficulty lies in the fact that there is no unique orthogonal system, since several orderings of the bivariate monomials are possible. Therefore, it is necessary to fix an order on the monomials, to choose a representation for the polynomials and to develop the theory. In this paper, we use the vector representation for polynomials in two variables introduced in \cite{Ko1}, \cite{Ko2}, and developed in \cite{Xu93}. There, the graded lexicographical order is used, and the polynomials are represented as vectors whose entries are independent polynomials of the same total degree.
However, the size of these vectors and of the corresponding coefficient matrices increases with the degree, in contrast to the univariate case, where the size is constant.
In \cite{Sue99}, the vector representation for general families of bivariate orthogonal polynomials is not used, but main properties such as the three term relations for the orthogonal polynomials appear in a non-matrix formulation. Despite the fact that the vector-matrix representation apparently adds more complexity to the problem, the vector representation of the families of orthogonal polynomials and the vector-matrix formulation of the three term relations, which first appeared in \cite{Ko2}, has proven to be a very powerful tool when formulating results in the bivariate setting, simplifying the notation. Now, the involved coefficients are, in general, rectangular matrices of increasing size. Nevertheless, the vector-matrix notation must be interpreted as a compact form to express properties that could be written in another form, as, for instance, in \cite{Sue99}.
The aim of this paper is to investigate the symmetric bivariate Freud weight function given by
$$
W(x,y) =e^{-q(x,y)},\qquad (x,y) \in \mathbb{R}^2,
$$
where
$$
q(x,y) = a_{4,0} \, x^4 + a_{2,2} \, x^2 \, y^2 + a_{0,4}\, y^4 + a_{2,0} \, x^2 + a_{0,2} \, y^2
$$
and $a_{i,j}$ are real parameters. We analyse the bivariate orthonormal polynomials with respect to $W(x,y)$ by using, as the main tool, the vector representation for the families of orthogonal polynomials. In this environment, we can formulate the main properties in a vector-matrix form, deducing and writing the properties in a friendly form that extends the results in one variable to the bivariate case.
We extend the difference equation \eqref{dif_eq_1_var} for the matrix coefficients of the three term relations for these polynomials when $a_{2,0} = a_{0,2} = - t$, getting matrix Painlev\'e-type difference equations for the respective coefficients. We also present 2D Langmuir lattices for the matrix coefficients of the three term relations satisfied by the orthogonal polynomial systems associated with $W(x,y)$, where $a_{2,0} = a_{0,2} = - t$, $t \in \mathbb{R}$. Furthermore, matrix differential-difference equations are provided for the orthogonal polynomial systems.
This paper is structured as follows. In Section \ref{sec_basic_tools} we recall the basic results about bivariate polynomials in vector-matrix representation that we need along the paper.
In Section \ref{sec_bi_Fr_we_fun} we present the Freud inner product associated with the bivariate Freud weight function that is considered in this work.
The three term relations satisfied by the bivariate orthonormal polynomials and the involved matrix coefficients are given. The structure relations as well as a differential-difference equation satisfied by the orthonormal polynomials system are also presented. These structure relations are related to the matrix Pearson-type equation satisfied by the bivariate Freud weight function.
In Section \ref{sec_some_results}, relations for the coefficients of the three term relations and for the coefficients of the structure relations for orthonormal polynomials are presented. We also give non-linear four term relations for the coefficients of the three term relations for orthonormal polynomials.
In Section \ref{sec_Freud_type_} we present the main results, namely the matrix Painlev\'e-type difference equations for the coefficients of the three term relations of the orthonormal polynomial system. They are
two-variable extensions of the difference equation \eqref{dif_eq_1_var}, see Theorem \ref{Theo_TTR_A}.
Furthermore, considering the Freud weight function
$W(x,y) = e^{-q(x,y)}$, with
$q(x,y)={a_{4,0}x^4 + a_{2,2}x^2y^2 + a_{0,4}y^4 - t (x^2 + y^2)},$
depending on the real parameter $t$, 2D Langmuir lattices for the coefficients of the three term relations are given in Section \ref{sec_Langmuir_lattice}.
\medskip
\section{Basic tools}
\label{sec_basic_tools}
We start by introducing the basic definitions and main tools that we will need along the paper. We refer mainly to \cite{DX14}.
Let us consider the linear space of real polynomials in two variables $x$ and $y$
$$
\Pi = \mathrm{span} \langle x^h\,y^k: h, k \geqslant 0\rangle,
$$
and we define the linear space
$$
\Pi_n = \mathrm{span} \langle x^h\,y^k: h+ k \leqslant n\rangle,
$$
of finite dimension $(n+1)(n+2)/2$. Observe that $\cup_{n\geqslant 0} \Pi_n = \Pi.$
As usual, a two variable polynomial of (total) degree $n$, i.e., $p(x,y)\in \Pi_n$, is given by
$$
p(x,y) = \sum_{h+k\leqslant n} c_{h,k}\, x^h \, y^k, \quad c_{h,k}\in \mathbb{R}.
$$
Now we define the vector representation for bivariate polynomials introduced in \cite{Ko1}, \cite{Ko2}, and developed in \cite{Xu93}, by using the graded lexicographical order. Notice that the size of the vectors is increasing with the degree.
\begin{definition}
A \emph{polynomial system (PS)}
is a sequence of vectors of polynomials $\{\mathbb{P}_n\}_{n\geqslant0}$ of increasing size $(n+1)$
$$
\mathbb{P}_n = (P_{n,0}(x,y), P_{n,1}(x,y), \ldots, P_{n,n}(x,y))^T,
$$
such that every bivariate polynomial $P_{n,i}(x,y)$ has exactly degree $n$ and the set $\{P_{n,0}(x,y)$, $P_{n,1}(x,y)$, $\ldots$, $P_{n,n}(x,y)\}$ is linearly independent.
\end{definition}
Observe that $\{\mathbb{P}_m\}_{m=0}^n$ contains a basis of $\Pi_n$, and, by extension, we will say that $\{\mathbb{P}_m\}_{m=0}^n$ is a basis of $\Pi_n$.
The simplest PS is the so-called {\it canonical basis} $\{\mathbb{X}_n\}_{n\geqslant0}$, defined as
$$
\mathbb{X}_n = (x^n, x^{n-1}\,y, x^{n-2}\,y^2, \ldots, x\,y^{n-1}, y^{n})^T.
$$
Following \cite{DX14}, observe that
\begin{equation} \label{xXLX}
x\,\mathbb{X}_n = x\,\begin{pmatrix}
x^n\\
x^{n-1}\, y\\
x^{n-2}\, y^2\\
\vdots\\
x\, y^{n-1}\\
y^n
\end{pmatrix} = \begin{pmatrix}
x^{n+1}\\
x^{n}\, y\\
x^{n-1}\, y^2\\
\vdots\\
x^2\, y^{n-1}\\
x\,y^n
\end{pmatrix} = L_{n,1}\,\mathbb{X}_{n+1},
\end{equation}
for $n\geqslant 0$, analogously, $y\,\mathbb{X}_n = L_{n,2}\,\mathbb{X}_{n+1}$, where $L_{n,1}$ and $L_{n,2}$ are $(n+1)\times (n+2)$ matrices given by
\begin{equation} \label{L1L2}
L_{n,1} = \left(\begin{array}{ccc|c}
1 & & \bigcirc & 0 \\
& \ddots & & \vdots \\
\bigcirc & & 1 & 0
\end{array}\right)
\quad \mbox{and} \quad
L_{n,2} = \left(\begin{array}{c|ccc}
0 & 1 & & \bigcirc \\
\vdots & & \ddots & \\
0 & \bigcirc & & 1
\end{array}\right),
\end{equation}
where the symbol $\bigcirc$ represents a triangle of zero elements of adequate size. This notation will be used along this work.
Observe that $L_{n,i}$ are full rank matrices,
such that $L_{n,i}\,L_{n,i}^T = I_{n+1}$.
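These matrices and the identities above are straightforward to check numerically (a sketch; the helper names are ours):

```python
import numpy as np

def X(n, x, y):
    """Canonical basis vector X_n = (x^n, x^{n-1} y, ..., y^n)^T."""
    return np.array([x ** (n - k) * y ** k for k in range(n + 1)])

def L1(n):
    """(n+1) x (n+2) matrix of (L1L2): x * X_n = L1(n) @ X_{n+1}."""
    return np.hstack([np.eye(n + 1), np.zeros((n + 1, 1))])

def L2(n):
    """(n+1) x (n+2) matrix of (L1L2): y * X_n = L2(n) @ X_{n+1}."""
    return np.hstack([np.zeros((n + 1, 1)), np.eye(n + 1)])

n, x, y = 3, 1.7, -0.4
assert np.allclose(x * X(n, x, y), L1(n) @ X(n + 1, x, y))
assert np.allclose(y * X(n, x, y), L2(n) @ X(n + 1, x, y))
assert np.allclose(L1(n) @ L1(n).T, np.eye(n + 1))   # L_{n,i} L_{n,i}^T = I
assert np.allclose(L2(n) @ L2(n).T, np.eye(n + 1))
print("ok")
```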
We can write
\begin{equation}\label{partial_x}
\partial_x\,\mathbb{X}_n = \partial_x\,\begin{pmatrix}
x^n\\
x^{n-1}\, y\\
x^{n-2}\, y^2\\
\vdots\\
x\, y^{n-1}\\
y^n
\end{pmatrix} = \begin{pmatrix}
n\,x^{n-1}\\
(n-1)\,x^{n-2}\, y\\
(n-2)\,x^{n-3}\, y^2\\
\vdots\\
y^{n-1}\\
0
\end{pmatrix} = L_{n-1,1}^T\,N_{n,1}\,\mathbb{X}_{n-1},
\end{equation}
moreover, $\partial_y\,\mathbb{X}_n = L_{n-1,2}^T\,N_{n,2}\,\mathbb{X}_{n-1}$, where
\begin{equation}\label{N_n}
N_{n,1} = \begin{pmatrix}
n & & & \bigcirc \\
& n-1 & & \\
& & \ddots & \\
\bigcirc & & & 1
\end{pmatrix}
\quad \mbox{and} \quad
N_{n,2} = \begin{pmatrix}
1 & & & \bigcirc \\
& 2 & & \\
& & \ddots & \\
\bigcirc & & & n
\end{pmatrix}.
\end{equation}
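The derivative identities can be verified in the same spirit (again a sketch with our own helper names):

```python
import numpy as np

def X(n, x, y):
    return np.array([x ** (n - k) * y ** k for k in range(n + 1)])

def L1(n):
    return np.hstack([np.eye(n + 1), np.zeros((n + 1, 1))])

def L2(n):
    return np.hstack([np.zeros((n + 1, 1)), np.eye(n + 1)])

def N1(n):
    return np.diag(np.arange(n, 0, -1))   # diag(n, n-1, ..., 1)

def N2(n):
    return np.diag(np.arange(1, n + 1))   # diag(1, 2, ..., n)

def dX(n, x, y, var):
    """Entrywise partial derivative of X_n with respect to x or y."""
    if var == "x":
        return np.array([(n - k) * x ** max(n - k - 1, 0) * y ** k
                         for k in range(n + 1)])
    return np.array([k * x ** (n - k) * y ** max(k - 1, 0)
                     for k in range(n + 1)])

n, x, y = 4, 0.9, 1.3
assert np.allclose(dX(n, x, y, "x"), L1(n - 1).T @ N1(n) @ X(n - 1, x, y))
assert np.allclose(dX(n, x, y, "y"), L2(n - 1).T @ N2(n) @ X(n - 1, x, y))
print("ok")
```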
Let $\{\mathbb{P}_n\}_{n\geqslant0}$ be a PS. There exist constant matrices $G^n_k$ of respective sizes $(n+1)\times (k+1)$ such that every vector polynomial $\mathbb{P}_n$ can be expressed in terms of the canonical basis
\begin{equation*}
\mathbb{P}_n = G_n \,\mathbb{X}_n + G_{n-1}^{n} \,\mathbb{X}_{n-1} +
G_{n-2}^{n}\, \mathbb{X}_{n-2} + \cdots + G_{1}^{n}\, \mathbb{X}_{1} + G_{0}^{n} \, \mathbb{X}_{0},
\end{equation*}
where $G_n$ is an $(n+1)\times(n+1)$ non-singular matrix, because of the linear independence of the entries of $\mathbb{P}_n$ and $\mathbb{X}_n$. We use the convention $G_{m}^n = \mathtt{0}$, for $m>n$ and $m<0$.
\medskip
\section{Bivariate Freud weight functions}
\label{sec_bi_Fr_we_fun}
We work with a bivariate Freud weight function in the form
\begin{equation} \label{wf}
W(x,y) =e^{-q(x,y)},\quad (x,y) \in \mathbb{R}^2,
\end{equation}
where
\begin{equation}\label{q(x,y)}
q(x,y) = a_{4,0} \, x^4 + a_{2,2} \, x^2 \, y^2 + a_{0,4}\, y^4 + a_{2,0} \, x^2 + a_{0,2} \, y^2,
\end{equation}
is a bivariate polynomial of degree 4, whose coefficients satisfy $a_{4,0}, a_{2,2}, a_{0,4} \geqslant 0$ and $a_{2,0}, a_{0,2} \in \mathbb{R}$, with
$a_{4,0} + a_{2,2} > 0$ and $a_{2,2} + a_{0,4} > 0$.
Observe that $q(- x,- y) = q(x,y)$, for $(x,y) \in \mathbb{R}^2$, that is, $q(x,y)$ is an even function, and, as a consequence,
$W(-x,-y) = W(x,y)$. Following \cite[p.~76]{DX14}, the bivariate Freud weight function $W(x,y)$ is centrally symmetric.
We define the bivariate Freud moment functional as
\begin{equation*}
\langle \mathbf{u}, f \rangle = \iint_{\mathbb{R}^2} f(x,y)\, W(x,y)\, dx\,dy,
\end{equation*}
and its associated moments as
$$
\mu_{n,m} =\langle \mathbf{u}, x^n\,y^m \rangle = \iint_{\mathbb{R}^2} x^n\,y^m\, W(x,y)\, dx\,dy < +\infty,
$$
for $n, m = 0, 1, 2, \ldots$. Since $\mathbf{u}$ is centrally symmetric, for $n+m$ odd we get
$$
\mu_{n,m} =\langle \mathbf{u}, x^n\,y^m \rangle= 0.
$$
Furthermore, due to the special shape of the weight function, the moments with $n$ or $m$ odd are zero, that is,
$$
\mu_{n,m} =0, \quad \mathrm{for} \ n \ \mathrm{or} \ m \ \mathrm{odd}.
$$
Thus, we will consider the inner product
\begin{equation}\label{ip}
( f, g) := \langle \mathbf{u}, f\,g\rangle = \iint_{\mathbb{R}^2} f(x,y)\,g(x,y)\,W(x,y)\, dx\,dy.
\end{equation}
\subsection{Orthonormal Polynomial Systems}
Let $\{\mathbb{P}_n\}_{n\geqslant0}$ be a polynomial system satisfying
\begin{align*}
(\mathbb{P}_n, \mathbb{P}_n^T) = \langle \mathbf{u},\mathbb{P}_n\,\mathbb{P}_n^T \rangle &= I_{n+1},\\
(\mathbb{P}_n, \mathbb{P}_m^T) = \langle \mathbf{u},\mathbb{P}_n\,\mathbb{P}_m^T \rangle &= \mathtt{0}, \quad n \neq m,
\end{align*}
where $\mathtt{0}$ is the zero matrix of adequate size. We say that $\{\mathbb{P}_n\}_{n\geqslant0}$ is an {\it orthonormal polynomial system} associated with the Freud inner product \eqref{ip}.
Since the inner product \eqref{ip} is centrally symmetric, every vector of polynomials $\mathbb{P}_n$ reduces to
\begin{equation}\label{expl_expr}
\mathbb{P}_n = G_n \,\mathbb{X}_n + G_{n-2}^{n}\, \mathbb{X}_{n-2} + G_{n-4}^{n}\, \mathbb{X}_{n-4} + \cdots,
\end{equation}
that is, $\mathbb{P}_n$ contains only even powers if $n$ is even, or odd powers in the case of $n$ odd. The
matrices $G^n_k$ are of order $(n+1)\times (k+1)$ and $G_n$ is a matrix of order $(n+1)\times (n+1)$.
\subsection{Three term relations}
Since $W(x,y)$ is an even function, the three term relations for the orthonormal polynomial system $\{\mathbb{P}_n\}_{n\geqslant 0}$ take the form (\cite[p. 77]{DX14}),
\begin{equation}\label{TTR-O}
\begin{aligned}
x \,\mathbb{P}_n &= A_{n,1}\,\mathbb{P}_{n+1} + A_{n-1,1}^{T}\,\mathbb{P}_{n-1}, \\
y \,\mathbb{P}_n &= A_{n,2}\,\mathbb{P}_{n+1} + A_{n-1,2}^{T}\,\mathbb{P}_{n-1},
\end{aligned}
\end{equation}
for $n \geqslant 0$, where $\mathbb{P}_{-1} =0$, $\mathbb{P}_0 = \mu_{0,0}^{-1/2}$, and $A_{n,i}$, for $i=1,2$, are full rank $(n+1)\times(n+2)$ matrices.
Observe that the $2(n+1)\times (n+2)$ joint matrix
\begin{align} \label{joint_An}
A_n = \left(\begin{array}{c}
A_{n,1}\\
A_{n,2}
\end{array}\right)
\end{align}
is also a full rank matrix.
Computing directly, we get the initial terms
$$
A_{0,1} = \left(\sqrt{\frac{\mu_{2,0}}{\mu_{0,0}}}, 0\right), \quad
A_{0,2} = \left(0,\sqrt{\frac{\mu_{0,2}}{\mu_{0,0}}}\right),
$$
since $\mathbb{P}_1 = (\mu_{2,0}^{-1/2} x, \mu_{0,2}^{-1/2} y)^T$. In this way, the leading coefficient matrices of $\mathbb{P}_0$ and $\mathbb{P}_1$ are respectively given by
$$
G_0 = \mu_{0,0}^{-1/2}, \qquad G_1 = \begin{pmatrix}
\mu_{2,0}^{-1/2} & 0 \\
0 & \mu_{0,2}^{-1/2}
\end{pmatrix}.
$$
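As a concrete illustration (ours; the coefficient values and the quadrature grid below are arbitrary admissible choices), the moments $\mu_{0,0}$, $\mu_{2,0}$, $\mu_{0,2}$, and hence $A_{0,1}$ and $A_{0,2}$, can be approximated on a tensor grid:

```python
import numpy as np

a40, a22, a04, a20, a02 = 1.0, 0.5, 1.0, -0.3, 0.2   # sample admissible values

xs = np.linspace(-4.0, 4.0, 801)
h = xs[1] - xs[0]
Xg, Yg = np.meshgrid(xs, xs, indexing="ij")
W = np.exp(-(a40 * Xg ** 4 + a22 * Xg ** 2 * Yg ** 2 + a04 * Yg ** 4
             + a20 * Xg ** 2 + a02 * Yg ** 2))

moment = lambda n, m: np.sum(Xg ** n * Yg ** m * W) * h * h

mu00, mu20, mu02 = moment(0, 0), moment(2, 0), moment(0, 2)
A01 = np.array([[np.sqrt(mu20 / mu00), 0.0]])
A02 = np.array([[0.0, np.sqrt(mu02 / mu00)]])

# P_1 = (x / sqrt(mu20), y / sqrt(mu02))^T: the entries are orthogonal
# to each other since the mixed moment mu_{1,1} vanishes by symmetry.
assert abs(moment(1, 1)) / np.sqrt(mu20 * mu02) < 1e-10
print(A01.shape, A02.shape)  # (1, 2) (1, 2)
```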
\subsection{Pearson matrix equation for the Freud weight function}
A direct computation on $W(x,y)$, given by \eqref{wf} and \eqref{q(x,y)}, shows that
\begin{equation}\label{Pearson}
\begin{aligned}
\partial_x W(x,y) &= -(4 \, a_{4,0} \, x^3 + 2 \, a_{2,2}\, x \, y^2 + 2 \, a_{2,0}\, x) \, W(x,y),\\[1ex]
\partial_y W(x,y) &= -(2 \, a_{2,2} \, x^2 \, y + 4 \,a_{0,4} \, y^3 + 2 \, a_{0,2}\, y) \, W(x,y).
\end{aligned}
\end{equation}
Given $M_1, M_2$, matrices of polynomials of the same order,
the divergence operator for the joint matrix is defined by
\begin{equation*}
\mathrm{div}\left(\begin{array}{c}
M_1\\
M_2
\end{array}\right) = \partial_x M_1 + \partial_y M_2,
\end{equation*}
hence, we can state that
the weight function \eqref{wf} satisfies the bivariate Pearson equation
$$
\mathrm{div}(\Phi \, W(x,y) ) = \Psi^{T} \, W(x,y),
$$
where
\[
\Phi = \begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix}, \quad \quad
\Psi = \begin{pmatrix} \psi_1\\ \psi_2 \end{pmatrix},
\]
\begin{align*}
\psi_1 = \psi_1(x,y) &= -(4 \, a_{4,0} \, x^3 + 2 \, a_{2,2}\, x \, y^2 + 2 \, a_{2,0}\, x), \\
\psi_2 = \psi_2(x,y) &= -(2 \, a_{2,2} \, x^2 \, y + 4 \,a_{0,4} \, y^3 + 2 \, a_{0,2}\, y).
\end{align*}
Observe that $\deg \psi_1 = \deg\psi_2 = 3$.
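The Pearson equations \eqref{Pearson} are easy to verify at a sample point with central finite differences (a sketch; the coefficient values and the evaluation point are arbitrary):

```python
import math

a40, a22, a04, a20, a02 = 1.0, 0.5, 1.0, -0.3, 0.2   # sample values

def W(x, y):
    return math.exp(-(a40*x**4 + a22*x**2*y**2 + a04*y**4 + a20*x**2 + a02*y**2))

def psi1(x, y):
    return -(4*a40*x**3 + 2*a22*x*y**2 + 2*a20*x)

def psi2(x, y):
    return -(2*a22*x**2*y + 4*a04*y**3 + 2*a02*y)

x, y, eps = 0.7, -1.1, 1e-6
dWdx = (W(x + eps, y) - W(x - eps, y)) / (2*eps)   # numerical partial_x W
dWdy = (W(x, y + eps) - W(x, y - eps)) / (2*eps)   # numerical partial_y W
assert abs(dWdx - psi1(x, y) * W(x, y)) < 1e-6
assert abs(dWdy - psi2(x, y) * W(x, y)) < 1e-6
print("Pearson equations verified at a sample point")
```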
\subsection{Structure relation and difference-differential equation}
Now using the fact that the weight function \eqref{wf} is centrally symmetric, and the Pearson equations \eqref{Pearson} for the weight function, we know that the orthonormal polynomial system, $\{\mathbb{P}_n\}_{n\geqslant0}$, (see \cite{AFPP07}), satisfies the following structure relations
\begin{equation}\label{rel_Est}
\begin{aligned}
\partial_x \, \mathbb{P}_n &= B_{n,1} \,\mathbb{P}_{n-1} + C_{n,1} \,\mathbb{P}_{n-3}, \\[1ex]
\partial_y \, \mathbb{P}_n &= B_{n,2} \,\mathbb{P}_{n-1} + C_{n,2} \,\mathbb{P}_{n-3},
\end{aligned}
\end{equation}
for $n\geqslant1$, where $\mathbb{P}_{-2} = \mathbb{P}_{-1} =0$, $B_{n,i}$, $C_{n,i}$ are matrices of respective sizes $(n+1) \times n$ and $(n+1) \times (n-2)$, and $C_{1,i} = C_{2,i}=0$, for $i=1,2$.
\medskip
Following \cite{AFPP08}, since the Freud weight function \eqref{wf} is semiclassical, there exists a second order partial differential operator
$$
\mathcal{F} = \partial_{xx} + \partial_{yy} + \psi_1\, \partial_x + \psi_2\,\partial_y
$$
such that
\begin{equation}\label{dde}
\mathcal{F}\,\mathbb{P}_n
= \Lambda^n_{n+2}\,\mathbb{P}_{n+2} + \Lambda^n_{n}\,\mathbb{P}_{n} + \Lambda^n_{n-2}\,\mathbb{P}_{n-2},
\end{equation}
for $n\geqslant1$, where
\begin{align*}
\Lambda^n_{n+2} & = - [B_{n,1} \, C_{n+2,1}^T + B_{n,2} \, C_{n+2,2}^T ], \\
\Lambda^n_{n} & = - [ B_{n,1}B_{n,1}^T + C_{n,1}C_{n,1}^T + B_{n,2}B_{n,2}^T + C_{n,2}C_{n,2}^T ], \\
\Lambda^n_{n-2} & =- [ C_{n,1}B_{n-2,1}^T + C_{n,2} B_{n-2,2}^T ] = (\Lambda^{n-2}_{n})^T,
\end{align*}
that is, the orthonormal polynomial system $\{\mathbb{P}_n\}_{n\geqslant0}$ satisfies the matrix partial-differential-difference equation \eqref{dde}.
\medskip
\section{Results involving the matrix coefficients}
\label{sec_some_results}
In this section we show several relations between the matrix coefficients of the three term relations for orthonormal polynomials \eqref{TTR-O}, the matrix coefficients of the structure relations \eqref{rel_Est} and the matrices involved in the explicit expressions of the vector polynomials \eqref{expl_expr}.
We start by defining two useful matrices and establishing their relations with the Pearson-type equation for the weight function \eqref{Pearson}.
Let us define the $(n+1) \times (n+1)$ upper and lower triangular matrices that involve the
coefficients of the weight function \eqref{wf},
\begin{equation} \label{K1}
\begin{aligned}
K_{n,1} = &
\begin{pmatrix}
4 a_{4,0} & 0 & 2 a_{2,2} & & \bigcirc \\
& 4 a_{4,0} & 0 & \ddots & \\
& & \ddots & \ddots & 2 a_{2,2} \\
& & & \ddots & 0 \\
\bigcirc & & & & 4 a_{4,0}
\end{pmatrix}
\end{aligned}
\end{equation}
and
\begin{equation} \label{K2}
\begin{aligned}
K_{n,2} =
\begin{pmatrix}
4 a_{0,4} & & & & \bigcirc \\
0 & 4 a_{0,4} & & & \\
2 a_{2,2} & 0 & \ddots & & \\
& \ddots & \ddots & \ddots & \\
\bigcirc & & 2 a_{2,2} & 0 & 4 a_{0,4}
\end{pmatrix}.
\end{aligned}
\end{equation}
Then, one can easily see that the matrices $K_{n,i}$, defined in \eqref{K1}--\eqref{K2}, and $L_{n,i}$, defined in \eqref{L1L2}, for $i=1,2$, are related as
\begin{equation}\label{LLKKLL}
\begin{aligned}
L_{n,1}\,L_{n+1,1} \, K_{n+2,1} = & \, 4 \, a_{4,0}\,L_{n,1}\,L_{n+1,1} + 2 \, a_{2,2}\,L_{n,2}\,L_{n+1,2}, \\[1ex]
L_{n,2}\,L_{n+1,2} \, K_{n+2,2} = & \, 4 \, a_{0,4}\,L_{n,2}\,L_{n+1,2} + 2 \, a_{2,2}\,L_{n,1}\,L_{n+1,1}.
\end{aligned}
\end{equation}
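The relations \eqref{LLKKLL} are easy to verify numerically (a sketch with our own helper names; the coefficient values are arbitrary):

```python
import numpy as np

a40, a22, a04 = 1.0, 0.5, 2.0   # arbitrary sample coefficients

def L1(n):
    return np.hstack([np.eye(n + 1), np.zeros((n + 1, 1))])

def L2(n):
    return np.hstack([np.zeros((n + 1, 1)), np.eye(n + 1)])

def K1(n):
    """(n+1) x (n+1): 4 a40 on the diagonal, 2 a22 two places above it."""
    return 4 * a40 * np.eye(n + 1) + 2 * a22 * np.eye(n + 1, k=2)

def K2(n):
    """(n+1) x (n+1): 4 a04 on the diagonal, 2 a22 two places below it."""
    return 4 * a04 * np.eye(n + 1) + 2 * a22 * np.eye(n + 1, k=-2)

n = 3
assert np.allclose(L1(n) @ L1(n + 1) @ K1(n + 2),
                   4 * a40 * L1(n) @ L1(n + 1) + 2 * a22 * L2(n) @ L2(n + 1))
assert np.allclose(L2(n) @ L2(n + 1) @ K2(n + 2),
                   4 * a04 * L2(n) @ L2(n + 1) + 2 * a22 * L1(n) @ L1(n + 1))
print("ok")
```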
Using the relations \eqref{LLKKLL} and the Pearson matrix equation \eqref{Pearson}, we can prove the following result.
\begin{proposition} \label{propo_psiXLKX}
The following hold
\begin{equation} \label{psiXLKX}
\begin{aligned}
\psi_1(x,y)\,\mathbb{X}_{n-1} &= - L_{n-1,1} \,L_{n,1} \,L_{n+1,1}\, K_{n+2,1}\, \mathbb{X}_{n+2} -2a_{2,0}L_{n-1,1}\mathbb{X}_{n},
\\
\psi_2(x,y)\,\mathbb{X}_{n-1} &= - L_{n-1,2} \,L_{n,2} \,L_{n+1,2}\, K_{n+2,2}\, \mathbb{X}_{n+2} -2a_{0,2}L_{n-1,2}\mathbb{X}_{n}.
\end{aligned}
\end{equation}
\end{proposition}
\medskip
\subsection{Explicit expressions}
The next result gives explicit expressions for the matrix coefficients $A_{n,i}$, $i=1,2$, of the three term relations \eqref{TTR-O}, and for the matrix coefficients $B_{n,i}$ and $C_{n,i}$, $i=1,2$, defined in the structure relations \eqref{rel_Est}, in terms of the matrices $G_n$, $L_{n,i}$, $N_{n,i}$, and $K_{n,i}$, for $i=1,2$, defined by \eqref{expl_expr}, \eqref{L1L2}, \eqref{N_n}, and \eqref{K1}--\eqref{K2}, respectively.
\begin{proposition}
For the matrix coefficients $A_{n,i}$, $B_{n,i}$ and $C_{n,i}$, $i=1,2$, of the three term relations \eqref{TTR-O} and the structure relations \eqref{rel_Est}, respectively, the following properties hold
\begin{itemize}
\item[i)]
\begin{equation}\label{A_ni}
A_{n,i} = G_n \, L_{n,i} \, G_{n+1}^{-1}, \quad n \geqslant 0.
\end{equation}
\item[ii)]
\begin{equation}\label{B_ni}
B_{n,i} = G_n\,L_{n-1,i}^T\, N_{n,i} \,G_{n-1}^{-1}, \quad n \geqslant 1.
\end{equation}
\item[iii)]
\begin{equation}\label{C_ni}
C_{n,i}^T = G_{n-3}\,L_{n-3,i}\,L_{n-2,i}\,L_{n-1,i}\,K_{n,i}\,G_n^{-1}, \quad n \geqslant 3,
\end{equation}
\end{itemize}
where the matrices $G_n$, $L_{n,i}$, $N_{n,i}$, and $K_{n,i}$, for $i=1,2$, are defined by \eqref{expl_expr}, \eqref{L1L2}, \eqref{N_n}, and \eqref{K1}-\eqref{K2}, respectively.
Moreover, the right pseudo inverse matrix of $A_{n,i}$ is
\begin{equation}\label{A_inv}
A_{n,i}^{-1} = G_{n+1} \, L_{n,i}^T \, G_{n}^{-1}, \quad i=1,2.
\end{equation}
\end{proposition}
\begin{proof}
\noindent
i) \ Substituting the explicit expression \eqref{expl_expr} of $\mathbb{P}_n$ in the three term relation \eqref{TTR-O}
we have
\begin{align} \label{eq_expl}
x \Big[ G_n \, \mathbb{X}_n + G_{n-2}^{n} \mathbb{X}_{n-2} + \cdots \Big] = & \, A_{n,1} \Big[ G_{n+1} \mathbb{X}_{n+1} + G_{n-1}^{n+1} \mathbb{X}_{n-1} + \cdots \Big] \\
&+ A_{n-1,1}^{T} \Big[ G_{n-1} \, \mathbb{X}_{n-1} +
G_{n-3}^{n-1} \, \mathbb{X}_{n-3} + \cdots \Big] \nonumber
\end{align}
and analogously for the three term relation in the second variable. Using \eqref{xXLX}, and adjusting leading coefficients, we have
$$
G_n \, L_{n,i} = A_{n,i}\, G_{n+1}, \quad i=1,2,
$$
and \eqref{A_ni} holds.
The right pseudo inverse \eqref{A_inv} of $A_{n,i}$ follows immediately.
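Indeed, assuming (as in the usual monomial ordering) that the matrices in \eqref{L1L2} take the form $L_{n,1} = (\, I_{n+1} \mid \mathtt{0} \,)$ and $L_{n,2} = (\, \mathtt{0} \mid I_{n+1} \,)$, so that $L_{n,i}\,L_{n,i}^{T} = I_{n+1}$, one checks directly that

```latex
A_{n,i}\,A_{n,i}^{-1}
= G_n\,L_{n,i}\,G_{n+1}^{-1}\,G_{n+1}\,L_{n,i}^{T}\,G_n^{-1}
= G_n\,L_{n,i}\,L_{n,i}^{T}\,G_n^{-1}
= I_{n+1}.
```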
\medskip
\noindent
ii) \ In the same way, substituting \eqref{expl_expr} in \eqref{rel_Est} for $i=1$, we get
\begin{align}
\partial_x \Big[ G_n \, \mathbb{X}_n + G_{n-2}^{n} \mathbb{X}_{n-2} + \cdots \Big] \label{expl_par}
=& \, B_{n,1} \Big[ G_{n-1} \mathbb{X}_{n-1} + G_{n-3}^{n-1} \mathbb{X}_{n-3} + \cdots \Big] \\
& + C_{n,1} \Big[ G_{n-3} \, \mathbb{X}_{n-3} + G_{n-5}^{n-3} \mathbb{X}_{n-5} + \cdots \Big]. \nonumber
\end{align}
Next, applying \eqref{partial_x}, and adjusting leading coefficients we obtain
\begin{equation*}
G_n\,L_{n-1,1}^T\, N_{n,1} = B_{n,1}\,G_{n-1}.
\end{equation*}
Proceeding analogously for $i=2$, we get \eqref{B_ni}.
\medskip
\noindent
iii) \ Multiplying the structure relation \eqref{rel_Est} for $i=1$ by $\mathbb{P}_{n-3}^T$, and applying the inner product \eqref{ip}, we get
\begin{align*}
\langle \mathbf{u}, \partial_x [\mathbb{P}_{n}] \mathbb{P}_{n-3}^T \rangle = & \,
B_{n,1} \langle \mathbf{u}, \mathbb{P}_{n-1}\, \mathbb{P}_{n-3}^T \rangle
+ C_{n,1} \langle \mathbf{u}, \mathbb{P}_{n-3}\, \mathbb{P}_{n-3}^T \rangle,
\end{align*}
that is,
$
C_{n,1} = \langle \mathbf{u}, \partial_x [\mathbb{P}_{n}] \mathbb{P}_{n-3}^T \rangle.
$
Then,
$$
C_{n,1} = \langle \mathbf{u}, \partial_x [\mathbb{P}_{n} \mathbb{P}_{n-3}^T ] \rangle -
\langle \mathbf{u}, \mathbb{P}_{n} \partial_x[ \mathbb{P}_{n-3}^T] \rangle
= \langle \mathbf{u}, \partial_x [\mathbb{P}_{n} \mathbb{P}_{n-3}^T ] \rangle,
$$
by the orthogonality. Integrating $C_{n,1}$ by parts in the variable $x$, taking into account the behaviour of the weight function on $\mathbb{R}^2$, i.e., for $F(x,y) \in \Pi$, the value of $F(x,y)W(x,y)$ tends to zero as $x$ or $y$ tends to $\pm\infty$,
and using the first Pearson equation for the weight function \eqref{Pearson}, we deduce
\begin{align*}
C_{n,1} = & \, \iint_{-\infty}^{+\infty} \partial_x [ \mathbb{P}_n \, \mathbb{P}_{n-3}^T] \, W(x,y)\, dx\, dy
\, = \, - \iint_{-\infty}^{+\infty} \mathbb{P}_n \,\mathbb{P}_{n-3}^T \, \partial_x \, W(x,y)\, dx\, dy \\
= & \, - \iint_{-\infty}^{+\infty} \mathbb{P}_n \,\mathbb{P}_{n-3}^T \,\psi_1(x,y)\, W(x,y)\, dx\, dy.
\end{align*}
Using the explicit expression \eqref{expl_expr} of $\mathbb{P}_{n-3}$ and the relations \eqref{psiXLKX} of Proposition \ref{propo_psiXLKX}, we deduce that $\psi_1(x,y)\,\mathbb{P}_{n-3}$ is an $(n-2)\times 1$ vector polynomial of degree $n$. Hence,
$$
C_{n,1} = \iint_{-\infty}^{+\infty} \mathbb{P}_n \mathbb{P}_n^T \, W(x,y)\, dx\, dy \ (G_n^{-1})^{T}\, K_{n,1}^{T}\,L_{n-1,1}^T\,L_{n-2,1}^T\,L_{n-3,1}^{T} \,G_{n-3}^T,
$$
and \eqref{C_ni} holds for $i=1$. A similar calculation can be done for $i=2$.
\end{proof}
\medskip
The next result gives relations involving the matrix coefficients $A_{n,i}$, $B_{n,i}$, and $C_{n,i}$, for $i=1,2$, among themselves.
\begin{proposition}
The matrix coefficients $A_{n,i}$, $B_{n,i}$ and $C_{n,i}$, $i=1,2$, of the three term relations \eqref{TTR-O} and of the structure relations \eqref{rel_Est}, respectively, are related as follows
\begin{itemize}
\item[i)]
\begin{equation} \label{B_ni_A}
B_{n,i} = A_{n-1,i}^{-1}\, G_{n-1}\,N_{n,i} \,G_{n-1}^{-1}, \quad n\geqslant1.
\end{equation}
\item[ii)]
\begin{equation} \label{CnAn}
C_{n,i}^T = A_{n-3,i}\,A_{n-2,i}\,A_{n-1,i}\,G_n\,K_{n,i}\,G_n^{-1}, \quad n \geqslant 3.
\end{equation}
\item[iii)]
\begin{equation} \label{PropCn}
C_{n,i} = G_{n-2}^{n} G_{n-2}^{-1} B_{n-2,i} - B_{n,i} G_{n-3}^{n-1}G_{n-3}^{-1}, \quad n \geqslant 3,
\end{equation}
\end{itemize}
where the matrices $G^n_{n-k}$, $L_{n,i}$, $N_{n,i}$, and $K_{n,i}$, for $i=1,2$, are defined by \eqref{expl_expr}, \eqref{L1L2}, \eqref{N_n}, and \eqref{K1}-\eqref{K2}, respectively.
Moreover, the following relations hold
\begin{equation}
A_{n,i} G_{n-2k+1}^{n+1} + A_{n-1,i}^{T} G_{n-2k+1}^{n-2} = G_{n-2k}^{n} L_{n-2k,i} \label{GLA_i}
\end{equation}
and
\begin{equation}
B_{n,i} G_{n-2k-1}^{n-1} + C_{n,i} G_{n-2k-1}^{n-3} = G_{n-2k}^{n} L_{n-2k-1,i}^T N_{n-2k,i}, \label{GLBC_i}
\end{equation}
for $\ k=0,1,\ldots, \lfloor n/2 \rfloor$. \\
\end{proposition}
\begin{proof}
\eqref{B_ni_A} is deduced using the explicit expression of $A_{n-1,i}^{-1}$ in \eqref{B_ni}, and \eqref{CnAn} using the relation \eqref{A_ni} in \eqref{C_ni}.
The expression \eqref{GLA_i} is deduced by adjusting the coefficients of $\mathbb{X}_{n-1}, \mathbb{X}_{n-3}, \ldots$ in \eqref{eq_expl}, and \eqref{GLBC_i} is obtained in the same way from \eqref{expl_par}.
Finally, using $k=1$ in \eqref{GLBC_i}, we get
$$
C_{n,i} G_{n-3} = G_{n-2}^{n} L_{n-3,i}^T N_{n-2,i} - B_{n,i} G_{n-3}^{n-1}.
$$
Since $B_{n-2,i} G_{n-3} = G_{n-2} L_{n-3,i}^T N_{n-2,i} $, we can write
\begin{align*}
C_{n,i} G_{n-3} &= G_{n-2}^{n} G_{n-2}^{-1} G_{n-2} L_{n-3,i}^T N_{n-2,i} - B_{n,i} G_{n-3}^{n-1}\\
& = G_{n-2}^{n} G_{n-2}^{-1} B_{n-2,i} G_{n-3} - B_{n,i} G_{n-3}^{n-1}
\end{align*}
hence, we get \eqref{PropCn}.
\end{proof}
We remark that, for the general case, equations \eqref{GLA_i} and \eqref{GLBC_i} can be found in \cite{MMPP18}.
\medskip
\subsection{Non-linear four term relations for the coefficients of the three term relations.}
We now show non-linear four term relations for the matrix coefficients $A_{n,i}$, $i=1,2$, of the three term relations. We remark that the results given in this subsection hold for every centrally symmetric weight function, since the structure relations are not used.
\begin{proposition}
The matrix coefficients of the three term relations for orthonormal polynomials, $A_{n,i}$,
$n\geqslant 0$ and $i,j=1,2$, satisfy
\begin{equation} \label{FTR_An_1}
A_{n,i} A_{n,j}^T + A_{n-1,i}^T {A}_{n-1,j} = G_{n-2}^{n} G_{n-2}^{-1} A_{n-2,i} {A}_{n-1,j} - A_{n,i} {A}_{n+1,j} G_{n}^{n+2} G_{n}^{-1},
\end{equation}
where the matrices $G^n_{n-k}$ are defined in \eqref{expl_expr}.
\end{proposition}
\begin{proof}
First we compute $\langle \mathbf{u}, x^2 \mathbb{P}_n \mathbb{P}_n^T \rangle$ using the three term relation \eqref{TTR-O} and the orthogonality. Hence,
\begin{align*}
\langle \mathbf{u}, x^2 \mathbb{P}_n \mathbb{P}_n^T \rangle &=
\langle \mathbf{u}, [A_{n,1}\,\mathbb{P}_{n+1} + A^T_{n-1,1}\,\mathbb{P}_{n-1}] [\mathbb{P}_{n+1}^T\,A_{n,1}^T + \mathbb{P}_{n-1}^T\,A_{n-1,1}] \rangle \\
&= A_{n,1}\, A_{n,1}^T + A^T_{n-1,1}\,A_{n-1,1}.
\end{align*}
Since the entries of the sequence of vectors $\{\mathbb{P}_{n}\}_{n\geqslant 0}$ form a basis for the space $\Pi$, $x^2 \mathbb{P}_n $ can be written as
\begin{align} \label{xxPF}
x^2 \mathbb{P}_n &= F_{n+2,1}^{n} \mathbb{P}_{n+2} + F_{n,1}^{n} \mathbb{P}_{n} + F_{n-2,1}^{n} \mathbb{P}_{n-2} + \cdots ,
\end{align}
where $F_{j,1}^{n}$ are real matrices of order $(n+1) \times (j+1)$.
On the one hand, using \eqref{expl_expr}, we get
\begin{align}
x^2 \mathbb{P}_n = & \
F_{n+2,1}^{n} [G_{n+2} \,\mathbb{X}_{n+2} + G_{n}^{n+2}\, \mathbb{X}_{n} + G_{n-2}^{n+2}\, \mathbb{X}_{n-2} + \cdots] \nonumber \\
&+ F_{n,1}^{n} [G_n \,\mathbb{X}_n + G_{n-2}^{n}\, \mathbb{X}_{n-2} + G_{n-4}^{n}\, \mathbb{X}_{n-4} + \cdots] \label{eq_equal_1}\\
&+ F_{n-2,1}^{n} [G_{n-2} \,\mathbb{X}_{n-2} + G_{n-4}^{n-2}\, \mathbb{X}_{n-4} + G_{n-6}^{n-2}\, \mathbb{X}_{n-6} + \cdots] + \cdots.
\nonumber
\end{align}
On the other hand, we can write $x^2 \mathbb{P}_n $ as
\begin{align}
x^2 \mathbb{P}_n & = \,
x^2 [G_n \,\mathbb{X}_n +G_{n-2}^{n}\, \mathbb{X}_{n-2} + G_{n-4}^{n}\, \mathbb{X}_{n-4} + \cdots] \nonumber \\
& = \, G_n L_{n,1} L_{n+1,1} \mathbb{X}_{n+2} + G_{n-2}^{n} L_{n-2,1} L_{n-1,1} \mathbb{X}_n + \cdots.\label{eq_equal_2}
\end{align}
Adjusting the coefficients of the terms of $\mathbb{X}_{n+2}$ and $\mathbb{X}_{n}$ on \eqref{eq_equal_1} and \eqref{eq_equal_2}, we get
\begin{align*}
F_{n+2,1}^{n}G_{n+2} & = G_n L_{n,1} L_{n+1,1}, \\
F_{n+2,1}^{n}G_{n}^{n+2} + F_{n,1}^{n} G_n & = G_{n-2}^{n} L_{n-2,1} L_{n-1,1}.
\end{align*}
Therefore
$
F_{n+2,1}^{n} = G_n L_{n,1} L_{n+1,1} G_{n+2}^{-1} \, ,
$
and
\begin{align*}
F_{n,1}^{n} &= G_{n-2}^{n} L_{n-2,1} L_{n-1,1} G_n^{-1} - G_n L_{n,1} L_{n+1,1} G_{n+2}^{-1}G_{n}^{n+2} G_n^{-1} \\
&= G_{n-2}^{n}G_{n-2}^{-1} G_{n-2} L_{n-2,1} L_{n-1,1} G_n^{-1} - G_n L_{n,1} L_{n+1,1} G_{n+2}^{-1}G_{n}^{n+2} G_n^{-1}.
\end{align*}
From \eqref{A_ni}, $G_{n-2} L_{n-2,1} L_{n-1,1} G_n^{-1} = A_{n-2,1}A_{n-1,1}$, we obtain
\begin{align}\label{relF_n1A_n}
F_{n,1}^{n} &= G_{n-2}^{n} G_{n-2}^{-1} A_{n-2,1} {A}_{n-1,1} - A_{n,1} {A}_{n+1,1} G_n^{n+2} G_n^{-1}.
\end{align}
Finally, since
$
\langle \mathbf{u}, x^2 \mathbb{P}_n \mathbb{P}_n^T \rangle =
F_{n,1}^{n} \langle \mathbf{u}, \mathbb{P}_n \mathbb{P}_n^T \rangle =
F_{n,1}^{n},
$
then, for $ n \geqslant 0$,
$$
A_{n,1} A_{n,1}^T + A_{n-1,1}^T {A}_{n-1,1} = G_{n-2}^{n} G_{n-2}^{-1} A_{n-2,1} {A}_{n-1,1} - A_{n,1} {A}_{n+1,1} G_n^{n+2} G_n^{-1}.
$$
Similar reasoning using $y^2 \mathbb{P}_n $, $x\,y \mathbb{P}_n $ and $y\,x \mathbb{P}_n $ gives the results.
\end{proof}
Let us consider the joint matrix $A_n$, given by \eqref{joint_An}, of order $2(n+1) \times (n+2)$, and the joint matrix of order $(n+1) \times 2(n+2)$, denoted by $\bar{A}_n$, and defined by
\begin{equation*}
\bar{A}_n = \left(\begin{array}{cc}
A_{n,1}, &
A_{n,2}
\end{array} \right).
\end{equation*}
Recall that the Kronecker product of $A=[a_{ij}]$, a matrix of order $m \times n$, and $B=[b_{ij}]$, a matrix of order $p \times q$, denoted by $A \otimes B$, is defined as the following block matrix
$$ A \otimes B = \left(\begin{matrix}
a_{11} B & \dots & a_{1n} B \\
\vdots & \ddots & \vdots \\
a_{m1} B & \dots & a_{mn} B
\end{matrix}\right),$$
of order $mp \times nq$, see also \cite[p. 243]{HJ91}.
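In particular, for $A = I_2$ the Kronecker product is just a block-diagonal matrix, which is the only case used in the next corollary:

```latex
I_2 \otimes M =
\begin{pmatrix}
M & \mathtt{0} \\
\mathtt{0} & M
\end{pmatrix}.
```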
A direct use of the definition of the Kronecker product and equation \eqref{FTR_An_1} yields the following result.
\begin{corollary}
The sequences of the joint matrices $A_n$ and $\bar{A}_n$ satisfy
$$
A_n A_n^T + \bar{A}_{n-1}^T \bar{A}_{n-1} = (I_{2} \otimes G_{n-2}^{n} G_{n-2}^{-1}) A_{n-2} \bar{A}_{n-1} - A_n \bar{A}_{n+1}
(I_{2} \otimes G_{n}^{n+2} G_{n}^{-1}).
$$
\end{corollary}
We observe that the matrices $F_{n,i}^{n+2}$, $F_{n,i}^{n}$ and $F_{n,i}^{n-2}$, for $i=1,2$, given in \eqref{xxPF}, satisfy another interesting relation.
\begin{corollary}
Let $F_{m,i}^{n}$ be the matrix coefficients defined in \eqref{xxPF}, for $n\geqslant2$, $i=1,2$ and $0\leqslant m\leqslant n+2$. Then
\begin{align*}
F_{n,i}^{n} = G_{n-2}^{n} G_{n-2}^{-1} (F_{n-2,i}^{n})^T - F_{n+2,i}^n G_n^{n+2} G_n^{-1}.
\end{align*}
\end{corollary}
\begin{proof}
For simplicity here we denote the variable $x$ by $x_1$ and the variable $y$ by $x_2$, then using the three term relations \eqref{TTR-O}, for $i=1,2$,
$$
x_i^2\mathbb{P}_n = A_{n,i}A_{n+1,i}\mathbb{P}_{n+2} + (A_{n,i}A_{n,i}^T + A_{n-1,i}^TA_{n-1,i})\mathbb{P}_n + A_{n-1,i}^TA_{n-2,i}^T \mathbb{P}_{n-2}.
$$
Comparing this expression with \eqref{xxPF}, we obtain
\begin{equation}\label{comparF_n}
\begin{aligned}
F_{n+2,i}^{n} & = A_{n,i}A_{n+1,i}, \\
F_{n,i}^{n} & = A_{n,i}A_{n,i}^T + A_{n-1,i}^TA_{n-1,i}, \\
F_{n-2,i}^{n} & = A_{n-1,i}^TA_{n-2,i}^T.
\end{aligned}
\end{equation}
Hence, from \eqref{relF_n1A_n}, and \eqref{comparF_n}, we have
\begin{align*}
F_{n,i}^{n} & = G_{n-2}^{n} G_{n-2}^{-1} F_{n,i}^{n-2} - F_{n+2,i}^n G_n^{n+2} G_n^{-1}, \quad n\geqslant 2.
\end{align*}
Observing that $F_{n,i}^{n-2} = (F_{n-2,i}^{n})^T$, $n \geqslant 2$, we finally get the result.
\end{proof}
\medskip
\section{Matrix Painlev\'{e}-type difference equations}
\label{sec_Freud_type_}
In this section we obtain non-linear three term relations for the matrix coefficients, $A_{n,i}$, $i=1,2$, of the three term relations for orthonormal polynomials \eqref{TTR-O}, extending to the bivariate case the well known relation \eqref{dif_eq_1_var}, namely
$$
4 \,a_n^2\,(a_{n+1}^2 + a_n^2 + a_{n-1}^2) - 2 \,t\,a_n^2 = n+1,
$$
which has been extensively studied (\cite{BR94}, \cite{Ma86}, \cite{Mag99}, \cite{Va08}, among others). We have to take into account the non-commutativity of the product of matrices.
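For reference, in the standard univariate setting (which we assume is the one behind \eqref{dif_eq_1_var}, matching the specialization $a_{4,0}=1$, $a_{2,0}=-t$ of \eqref{wf}--\eqref{q(x,y)} used later), the coefficients $a_n$ come from the quartic Freud weight

```latex
w(x;t) = e^{-x^4 + t\,x^2}, \qquad x \in \mathbb{R},
```

with $x\,p_n(x) = a_{n+1}\,p_{n+1}(x) + a_n\,p_{n-1}(x)$ the three term recurrence for the corresponding orthonormal polynomials.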
\medskip
In the bivariate case, the matrix coefficients $A_{n,i}$, for $i=1,2$, of the three term relations \eqref{TTR-O}, of order $(n+1)\times(n+2)$, take the place of the coefficients $a_n$ of the univariate case. We can now prove the following result.
\begin{theorem}[Matrix Painlev\'{e}-type difference equations] \label{Theo_TTR_A}
For $n\geqslant 0$, the following relations, for the matrix coefficients $A_{n,i}$, $i=1,2$,
of the three term relations \eqref{TTR-O},
hold
\begin{align*}
4\,a_{4,0}\,A_{n,1}&\left[(A_{n+1,1} A_{n+1,1}^T) A^T_{n,1} + A_{n,1}^T(A_{n,1}A_{n,1}^T + A_{n-1,1}^TA_{n-1,1})\right]\\
& + 2\,a_{2,2}\, A_{n,1} \,\left[(A_{n+1,2} A_{n+1,1}^T) A^T_{n,2}
+ A_{n,2}^T (A_{n,1}A_{n,2}^T + A_{n-1,1}^TA_{n-1,2})\right]\\
& + 2\,a_{2,0}\, A_{n,1}\,A^T_{n,1} = G_nN_{n+1,1}G_{n}^{-1}
\end{align*}
and
\begin{align*}
4\,a_{0,4}\,A_{n,2}&\left[(A_{n+1,2} A_{n+1,2}^T) A^T_{n,2} + A_{n,2}^T(A_{n,2}A_{n,2}^T + A_{n-1,2}^TA_{n-1,2})\right]\\
& + 2\,a_{2,2}\, A_{n,2} \,\left[(A_{n+1,1} A_{n+1,2}^T) A^T_{n,1}
+ A_{n,1}^T (A_{n,2}A_{n,1}^T + A_{n-1,2}^TA_{n-1,1})\right]\\
& + 2\,a_{0,2}\, A_{n,2}\,A^T_{n,2} = G_nN_{n+1,2}G_{n}^{-1},
\end{align*}
where $a_{4,0},a_{2,2},a_{0,4},a_{2,0},a_{0,2}$ are the coefficients of the bivariate Freud weight function \eqref{wf}-\eqref{q(x,y)}.
\end{theorem}
\begin{proof}
By using \eqref{Pearson}, we know that
$$
\langle \partial_x\mathbf{u}, \mathbb{P}_{n+1}\mathbb{P}_n^T\rangle = \langle \psi_1\mathbf{u}, \mathbb{P}_{n+1}\mathbb{P}_n^T\rangle.
$$
The left-hand term is given by
\begin{align*}
\langle \partial_x\mathbf{u}, \mathbb{P}_{n+1}\mathbb{P}_n^T\rangle
=& - \langle \mathbf{u}, \partial_x[\mathbb{P}_{n+1}\mathbb{P}_n^T]\rangle
= - \langle \mathbf{u}, \partial_x[\mathbb{P}_{n+1}]\mathbb{P}_n^T\rangle - \langle \mathbf{u}, \mathbb{P}_{n+1}\partial_x[\mathbb{P}_n^T]
\rangle \\
=& - \langle \mathbf{u}, \partial_x[\mathbb{P}_{n+1}]\mathbb{P}_n^T\rangle = - B_{n+1,1},
\end{align*}
by using the structure relation \eqref{rel_Est}.
To compute the right-hand term, we apply successively the three term relations. Observe that
\begin{align*}
x^2\mathbb{P}_{n+1} =& A_{n+1,1}A_{n+2,1}\mathbb{P}_{n+3} + [A_{n+1,1} A^T_{n+1,1} + A^T_{n,1}A_{n,1}]\mathbb{P}_{n+1}
+ A^T_{n,1}A^T_{n-1,1}\mathbb{P}_{n-1},\\
x^3\mathbb{P}_{n+1} =& A_{n+1,1}A_{n+2,1}A_{n+3,1}\mathbb{P}_{n+4} \\
& + [A_{n+1,1} A_{n+2,1} A^T_{n+2,1} + A_{n+1,1}A^T_{n+1,1}A_{n+1,1} + A_{n,1}^T A_{n,1} A_{n+1,1}]\mathbb{P}_{n+2} \\
& + [A_{n+1,1} A_{n+1,1}^T A^T_{n,1} + A_{n,1}^TA_{n,1}A_{n,1}^T + A_{n,1}^TA_{n-1,1}^TA_{n-1,1}] \mathbb{P}_n\\
& + A^T_{n,1}A^T_{n-1,1}A_{n-2,1}^T\mathbb{P}_{n-2}.
\end{align*}
\begin{enumerate}[(i)]
\item Using $x\mathbb{P}_{n+1} = A_{n+1,1}\mathbb{P}_{n+2} + A^T_{n,1}\mathbb{P}_{n}$, we have
$$
\langle \mathbf{u}, x\mathbb{P}_{n+1}\mathbb{P}_n^T\rangle
= \langle \mathbf{u}, [A_{n+1,1}\mathbb{P}_{n+2} + A^T_{n,1}\mathbb{P}_{n}] \mathbb{P}_n^T\rangle = A^T_{n,1}.
$$
\item Moreover,
$$
\langle \mathbf{u}, x^3\mathbb{P}_{n+1}\mathbb{P}_n^T\rangle = A_{n+1,1} A_{n+1,1}^T A^T_{n,1} + A_{n,1}^TA_{n,1}A_{n,1}^T + A_{n,1}^TA_{n-1,1}^TA_{n-1,1}.
$$
\item Analogously, using $xy^2 = yxy$,
$$
\langle \mathbf{u}, xy^2\mathbb{P}_{n+1}\mathbb{P}_n^T\rangle = A_{n+1,2} A_{n+1,1}^T A^T_{n,2}
+ A_{n,2}^TA_{n,1}A_{n,2}^T + A_{n,2}^TA_{n-1,1}^TA_{n-1,2}.
$$
\end{enumerate}
Observe that
\begin{align*}
\langle \psi_1\mathbf{u}, \mathbb{P}_{n+1}&\mathbb{P}_n^T\rangle = \langle \mathbf{u}, \psi_1\mathbb{P}_{n+1}\mathbb{P}_n^T\rangle \\
=& -4a_{4,0}\langle \mathbf{u}, x^3\mathbb{P}_{n+1}\mathbb{P}_n^T\rangle - 2a_{2,2} \langle \mathbf{u}, xy^2\mathbb{P}_{n+1}\mathbb{P}_n^T\rangle - 2a_{2,0} \langle \mathbf{u}, x\mathbb{P}_{n+1}\mathbb{P}_n^T\rangle\\
=& -4a_{4,0}[A_{n+1,1} A_{n+1,1}^T A^T_{n,1} + A_{n,1}^TA_{n,1}A_{n,1}^T + A_{n,1}^TA_{n-1,1}^TA_{n-1,1}]\\
& - 2a_{2,2} [A_{n+1,2} A_{n+1,1}^T A^T_{n,2}
+ A_{n,2}^TA_{n,1}A_{n,2}^T + A_{n,2}^TA_{n-1,1}^TA_{n-1,2}]\\
& - 2a_{2,0}A^T_{n,1}.
\end{align*}
Therefore,
\begin{align*}
4 a_{4,0}&\left[(A_{n+1,1} A_{n+1,1}^T) A^T_{n,1} + A_{n,1}^T(A_{n,1}A_{n,1}^T + A_{n-1,1}^TA_{n-1,1})\right]\\
& + 2 a_{2,2} \left[(A_{n+1,2} A_{n+1,1}^T) A^T_{n,2}
+ A_{n,2}^T (A_{n,1}A_{n,2}^T + A_{n-1,1}^TA_{n-1,2})\right]\\
& + 2 a_{2,0}A^T_{n,1} = B_{n+1,1}.
\end{align*}
Since $B_{n+1,1} = A^{-1}_{n,1}G_nN_{n+1,1}G_{n}^{-1}$, multiplying the whole equation by $A_{n,1}$ on the left-hand side, the result follows for $i=1$. An analogous calculation can be done for $i=2$.
\end{proof}
For $a_{4,0}=a_{0,4}=1$, $a_{2,2}=0$, and $a_{2,0}=a_{0,2}= -t$, expressions in Theorem \ref{Theo_TTR_A} read as
\begin{align*}
4\,A_{n,i}\left[(A_{n+1,i} A_{n+1,i}^T) A^T_{n,i} +\right. & \left.A_{n,i}^T(A_{n,i}A_{n,i}^T + A_{n-1,i}^TA_{n-1,i})\right]
- 2\,t\,A_{n,i}\,A^T_{n,i} \\
&= G_nN_{n+1,i}G_{n}^{-1},
\end{align*}
for $i=1,2$. The above expressions extend the well known Freud equation \eqref{dif_eq_1_var} of the univariate case, since here the matrix coefficients $A_{n,i}$, $i=1,2$, take the same roles as the coefficients $a_n$, obey the same product and difference relations, and the matrices $G_nN_{n+1,i}G_{n}^{-1}$ extend the independent term $n+1$.
In the univariate case, equation \eqref{dif_eq_1_var} is a non-linear recurrence that could determine, if no zeros occur, the
consecutive recursion coefficients. However, in the bivariate case, the matrix Painlev\'{e}-type difference equations are not recurrence relations for the matrix coefficients $A_{n,i}$. The matrices $A_{n,i}$ are full rank matrices, invertible only from the right-hand side, and this fact prevents using the relation as a recurrence to compute $A_{n+1,i}$. The same happens with the three term relations \eqref{TTR-O}: they are not recurrence relations (\cite[p.~73]{DX14}).
Even though the dimension of the matrix coefficients $A_{n,i}$ grows linearly with the index $n$, the matrix representation of the orthogonal polynomials yields interesting matrix difference equations of the same formal type as the discrete Painlev\'{e} equation dPI. The use of the vector-matrix representation has allowed us to construct an extension of equation \eqref{dif_eq_1_var} that reads in a similar way. Theorem \ref{Theo_TTR_A} could be proved without the matrix formulation, as in \cite{Sue99}, but the expressions would read in a very cumbersome way.
\medskip
\section{2D Langmuir lattices}
\label{sec_Langmuir_lattice}
The aim of this section is to deduce formal 2D Langmuir lattices associated with a Freud weight function in two variables. As in the previous sections, our results involve matrices of increasing size and can be read as extensions of the univariate Langmuir lattices.
We assume that the coefficients of the polynomial $q(x,y)$ in \eqref{q(x,y)} satisfy $a_{2,0} = a_{0,2}=-t$, with $t \in \mathbb{R}$; then the weight function is given by
$$
W_t(x,y) = e^{-(a_{4,0}x^4 + a_{2,2}x^2y^2 + a_{0,4}y^4) + t (x^2 + y^2)}, \quad (x,y) \in \mathbb{R}^2.
$$
We consider the inner product
\begin{equation}\label{ip_t}
( f, g)_t := \langle \mathbf{u}_t, f\,g\rangle = \iint_{-\infty}^{+\infty} f(x,y)\,g(x,y)\,W_t(x,y)\, dx\,dy
\end{equation}
that depends on a time parameter $t$. As usual, we denote the derivative of $f(t)$ with respect to $t$ by $\dot{f} =\dfrac{d}{dt}f(t)$.
As in the univariate case, to deduce Langmuir lattices we will need a bivariate monic polynomial system
$\{\mathbb{Q}_n(x,y,t)\}_{n \geqslant 0}\equiv \{\mathbb{Q}_n(t)\}_{n \geqslant 0}$, orthogonal with respect to the inner product \eqref{ip_t} and depending on $t$. Here, $\mathbb{Q}_n(t)$ is a vector of monic polynomials in the variables $(x,y)$ whose coefficients depend on the parameter $t$. For $n\geqslant0$, we say that $\mathbb{Q}_n(t)$ is monic if the matrix $G_n(t)$ in its explicit expression \eqref{expl_expr} is the identity matrix $I_{n+1}$. In this case,
\begin{align*}
& (\mathbb{Q}_n(t), \mathbb{Q}_n(t)^T) = \langle \mathbf{u},\mathbb{Q}_n(t)\,\mathbb{Q}_n(t)^T \rangle = H_{n}(t), \\
& (\mathbb{Q}_n(t), \mathbb{Q}_m(t)^T) = \langle \mathbf{u},\mathbb{Q}_n(t)\,\mathbb{Q}_m(t)^T \rangle = \mathtt{0},
\end{align*}
where $H_n = H_{n}(t)$ is an $(n+1) \times (n+1)$ symmetric and positive definite matrix depending on $t$, and again $\mathtt{0}$ is the zero matrix of adequate size.
The coefficients of the three term relations for $\{\mathbb{Q}_n(t) \}_{n \geqslant 0}$ also depend on $t$. Since the inner product \eqref{ip_t} is centrally symmetric, the three term relations take the form
\begin{equation} \label{TTRmonict}
\begin{aligned}
x \, \mathbb{Q}_n(t) = L_{n,1} \mathbb{Q}_{n+1}(t) + E_{n,1}(t) \mathbb{Q}_{n-1}(t), \\
y \, \mathbb{Q}_n(t) = L_{n,2} \mathbb{Q}_{n+1}(t) + E_{n,2}(t) \mathbb{Q}_{n-1}(t),
\end{aligned}
\end{equation}
for $ n\geqslant 0$, where $\mathbb{Q}_{-1}(t)=0$, $\mathbb{Q}_{0}(t)=1$, and, for $i=1,2$, the matrices $L_{n,i}$ were defined in \eqref{L1L2} and $E_{n,i}(t)$ are matrices of order $(n+1) \times n$ (see \cite[p. 70]{DX14}).
The matrices $E_{n,i}(t)$ also satisfy
\begin{equation} \label{EHHE}
E_{n,i}(t) H_{n-1}(t) = H_n(t) L_{n-1,i}^{T}, \quad i=1,2.
\end{equation}
\medskip
Next, we find the following relation between $\dot{H}_n(t)$ and $H_n(t)$.
\begin{lemma} \label{dHVH}
For $n \geqslant 0$,
$$
\dot{H}_n(t) = V_{n+1}(t) H_{n}(t),
$$
where
\begin{equation}\label{V_n}
V_{n+1}(t) = L_{n,1} E_{n+1,1}(t) + L_{n,2} E_{n+1,2}(t) + E_{n,1}(t) L_{n-1,1} + E_{n,2}(t) L_{n-1,2}.
\end{equation}
\end{lemma}
\begin{proof} Since
$
\dot{W_t}(x,y) = (x^2 + y^2)W_t(x,y),
$
we can write
\begin{align*}
\dot{H}_n(t) = &
\iint_{-\infty}^{+\infty} \dot{\mathbb{Q}}_n(t) \,\mathbb{Q}_{n}^T(t)\, W_t(x,y)\, dx\, dy +
\iint_{-\infty}^{+\infty} \mathbb{Q}_n(t) \,\dot{\mathbb{Q}}_{n}^T(t)\, W_t(x,y)\, dx\, dy \\
&
+ \iint_{-\infty}^{+\infty} \mathbb{Q}_n(t) \,\mathbb{Q}_{n}^T(t)\, (x^2 +y^2) W_t(x,y)\, dx\, dy.
\end{align*}
Notice that $\deg \dot{\mathbb{Q}}_n(t) < n$, hence, using the orthogonality, and the three term relations \eqref{TTRmonict}, we get the result.
\end{proof}
Now, we define the matrices
\begin{equation} \label{E=E+E}
\textbf{E}_n(t) =
E_{n,1}(t) + E_{n,2}(t), \quad n \geqslant 1.
\end{equation}
We can prove that the matrices $\textbf{E}_{n}(t)$ satisfy a two-dimensional version of the Langmuir lattice.
\begin{theorem}
The matrices $\textbf{E}_{n}(t)$ satisfy the 2D Langmuir lattice
\begin{equation}\label{E_dot}
\dot{\textbf{E}}_{n}(t) = V_{n+1}(t) \textbf{E}_{n}(t) - \textbf{E}_{n}(t) V_{n}(t), \quad n \geqslant 1,
\end{equation}
where $V_{n}(t)$ is given in \eqref{V_n}.
\end{theorem}
\begin{proof}
From \eqref{EHHE} we can write
$ \dot{H}_n(t) L_{n-1,i}^{T} = \dot{E}_{n,i}(t) H_{n-1}(t) + E_{n,i}(t) \dot{H}_{n-1}(t), $ for $i=1,2$,
hence
$$
\dot{H}_n(t) [ L_{n-1,1}^{T} + L_{n-1,2}^{T} ]= [\dot{E}_{n,1}(t) +\dot{E}_{n,2}(t)] H_{n-1}(t) + [E_{n,1}(t)+E_{n,2}(t)] \dot{H}_{n-1}(t).
$$
Using Lemma \ref{dHVH} and definition \eqref{E=E+E}, we get
\begin{align*}
V_{n+1}(t) H_{n}(t) [ L_{n-1,1}^{T} + L_{n-1,2}^{T} ] = & \ \dot{\textbf{E}}_{n}(t) H_{n-1}(t) + \textbf{E}_{n}(t) V_{n}(t) H_{n-1}(t),
\end{align*}
hence, using \eqref{EHHE},
\begin{align*}
\dot{\textbf{E}}_{n}(t) H_{n-1}(t) = & \ V_{n+1}(t) [ E_{n,1}(t) + E_{n,2}(t)] H_{n-1}(t) - \textbf{E}_{n}(t) V_{n}(t) H_{n-1}(t) .
\end{align*}
Since $H_{n-1}(t)$ is a non-singular matrix, we obtain the result.
\end{proof}
Relation \eqref{E_dot} can be seen as a formal type of 2D Langmuir lattice for the matrix coefficients of the three term relation for the monic orthogonal polynomials. The coefficient matrices $\textbf{E}_{n}(t)$ play the same role as the coefficients $\beta_{n}(t)$ of the univariate case \eqref{Lang_one_v}.
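As a consistency check (assuming \eqref{Lang_one_v} has the standard form $\dot{\beta}_n = \beta_n(\beta_{n+1}-\beta_{n-1})$): in one variable the matrices $L_{n,i}$ reduce to $1$, $\textbf{E}_{n}(t)$ reduces to $\beta_n(t)$, and the terms in the second variable disappear from \eqref{V_n}, so $V_{n+1} = \beta_{n+1} + \beta_n$ and \eqref{E_dot} becomes

```latex
\dot{\beta}_n
= (\beta_{n+1} + \beta_n)\,\beta_n - \beta_n\,(\beta_n + \beta_{n-1})
= \beta_n\,(\beta_{n+1} - \beta_{n-1}).
```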
\medskip
Now, we return to orthonormal polynomial systems. Since $H_n(t)$ is symmetric and positive definite, there exists another symmetric and positive definite matrix $H_{n}^{1/2}(t)$, the
so-called \emph{square root} of the matrix $H_{n}(t)$ \cite[p. 440]{HJ85} such that
$
H_{n}(t) = H_{n}^{1/2}(t)\,H_{n}^{1/2}(t).
$
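Concretely, if $H_{n}(t) = U \Lambda U^{T}$ is a spectral decomposition, with $U$ orthogonal and $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_{n+1})$, $\lambda_j > 0$, then the square root can be computed as

```latex
H_{n}^{1/2}(t) = U\,\Lambda^{1/2}\,U^{T}, \qquad
\Lambda^{1/2} = \mathrm{diag}\big(\sqrt{\lambda_1}, \ldots, \sqrt{\lambda_{n+1}}\big).
```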
Let us define the polynomial system $\{\mathbb{P}_n(t)\}_{n\geqslant0}$ by means of
$$
\mathbb{P}_n(t) = H_{n}^{-1/2}(t)\, \mathbb{Q}_n(t), \quad n\geqslant 0.
$$
Since
\begin{align*}
& (\mathbb{P}_n(t), \mathbb{P}_n(t)^T) = (H_{n}^{-1/2}(t) \mathbb{Q}_n(t), \mathbb{Q}_n(t)^T H_{n}^{-1/2}(t)) = I_{n+1}, \\
& (\mathbb{P}_n(t), \mathbb{P}_m(t)^T) = (H_{n}^{-1/2}(t) \mathbb{Q}_n(t), \mathbb{Q}_m(t)^T H_{m}^{-1/2}(t))=\mathtt{0},
\end{align*}
then $\{\mathbb{P}_n(t) \}_{n \geqslant 0}$ is an orthonormal polynomial system with respect to \eqref{ip_t}, and it satisfies the three term relations \eqref{TTR-O}, where the matrices $A_{n,i} = A_{n,i}(t)$ also depend on $t$, for $n\geqslant0$.
The matrices involved in the respective three term relations \eqref{TTR-O} and \eqref{TTRmonict} are related by
\begin{equation*}
A_{n,i}(t) = H_{n}^{1/2}(t) E_{n+1,i}^{T}(t) H_{n+1}^{-1/2}(t).
\end{equation*}
Then,
\begin{equation} \label{At=HEH}
\textbf{A}_n^{T}(t) = H_{n+1}^{-1/2}(t) \textbf{E}_{n+1}(t) H_{n}^{1/2}(t), \quad n \geqslant 0,
\end{equation}
where $\textbf{A}_n(t) = A_{n,1}(t) + A_{n,2}(t)$. Differentiating \eqref{At=HEH} with respect to $t$, and omitting the parameter $t$ for simplicity, we get
$$
\dot{\textbf{A}}_n^{T} = \dot{H}_{n+1}^{-1/2} \textbf{E}_{n+1} H_{n}^{1/2} +
H_{n+1}^{-1/2} \dot{\textbf{E}}_{n+1} H_{n}^{1/2} +
H_{n+1}^{-1/2} \textbf{E}_{n+1} \dot{H}_{n}^{1/2}.
$$
Let us analyse term by term. From \eqref{E_dot} and \eqref{At=HEH}, we obtain
\begin{align}
H_{n+1}^{-1/2} \dot{\textbf{E}}_{n+1} H_{n}^{1/2}
& = H_{n+1}^{-1/2} [V_{n+2}\textbf{E}_{n+1} - \textbf{E}_{n+1}V_{n+1}] H_{n}^{1/2} \nonumber \\
& = H_{n+1}^{-1/2} V_{n+2} H_{n+1}^{1/2} \textbf{A}_n^{T}
- \textbf{A}_n^{T} H_{n}^{-1/2} V_{n+1} H_{n}^{1/2}.\label{HEdotH}
\end{align}
Using the definition of $V_{n+1}$ and \eqref{At=HEH}, we have
$$
H_{n}^{-1/2} V_{n+1} H_{n}^{1/2}
= A_{n,1} A_{n,1}^{T} + A_{n,2} A_{n,2}^{T} + A_{n-1,1}^{T} A_{n-1,1}
+ A_{n-1,2}^{T} A_{n-1,2}.
$$
Substituting this relation in \eqref{HEdotH} we get
\begin{align*}
H_{n+1}^{-1/2} \dot{\textbf{E}}_{n+1} H_{n}^{1/2}
= & [A_{n+1,1} A_{n+1,1}^{T} + A_{n+1,2} A_{n+1,2}^{T} + A_{n,1}^{T} A_{n,1} + A_{n,2}^{T} A_{n,2}] \textbf{A}_n^{T} \\
& - \textbf{A}_n^{T} [A_{n,1} A_{n,1}^{T} + A_{n,2} A_{n,2}^{T} + A_{n-1,1}^{T} A_{n-1,1}
+ A_{n-1,2}^{T} A_{n-1,2}].
\end{align*}
Therefore,
\begin{align*}
\dot{\textbf{A}}_n^{T}
= & [A_{n+1,1} A_{n+1,1}^{T} + A_{n+1,2} A_{n+1,2}^{T} + A_{n,1}^{T} A_{n,1} + A_{n,2}^{T} A_{n,2} ] \textbf{A}_n^{T} \\
& - \textbf{A}_n^{T} [A_{n,1} A_{n,1}^{T} + A_{n,2} A_{n,2}^{T} + A_{n-1,1}^{T} A_{n-1,1} + A_{n-1,2}^{T} A_{n-1,2}] \\
& + \dot{H}_{n+1}^{-1/2} \textbf{E}_{n+1} H_{n}^{1/2} +
H_{n+1}^{-1/2} \textbf{E}_{n+1} \dot{H}_{n}^{1/2}.
\end{align*}
From \eqref{At=HEH}, we get $\textbf{E}_{n+1} H_{n}^{1/2} = H_{n+1}^{1/2} \textbf{A}_n^T $ and $H_{n}^{-1/2} \textbf{E}_{n+1} = \textbf{A}_n^T H_{n}^{-1/2} $. Moreover, $H_{n}^{-1/2} \dot{H}_{n}^{1/2} = - \dot{H}_{n}^{-1/2} H_{n}^{1/2}$, and then
\begin{align}
\dot{\textbf{A}}_n^{T}
= & [A_{n+1,1} A_{n+1,1}^{T} + A_{n+1,2} A_{n+1,2}^{T}] \textbf{A}_n^{T} - \textbf{A}_n^{T} [A_{n-1,1}^{T} A_{n-1,1} + A_{n-1,2}^{T} A_{n-1,2}] \nonumber \\
& + [ A_{n,1}^{T} A_{n,1} + A_{n,2}^{T} A_{n,2} + \dot{H}_{n+1}^{-1/2} H_{n+1}^{1/2} ] \textbf{A}^T_{n} \label{LL_A} \\
& - \textbf{A}_n^T [A_{n,1} A_{n,1}^{T} + A_{n,2} A_{n,2}^{T} - \dot{H}_{n}^{-1/2} H_{n}^{1/2}]. \nonumber
\end{align}
Relation \eqref{LL_A} can be seen as a formal type of 2D Langmuir lattice for the matrix coefficients of the three term relation of the orthonormal centrally symmetric polynomials.
\medskip
\noindent {\bf Acknowledgements.} The authors are grateful to the anonymous referee for the constructive comments and suggestions, and for pointing out several references, which helped to improve the manuscript.
Microchip-based ion traps are being investigated in several laboratories
worldwide for purposes ranging from mass spectrometry
\cite{blain2004:ijm,shortt2005:jms} to quantum information
\cite{seidelin2006:prl,pearson2006:pra,stick2006:nat}. Such traps can be
precisely manufactured using micro-electromechanical systems (MEMS) technology
offering highly integrated setups. Radiofrequency paul traps are being
developed with ions trapped above the surface of a single chip
\cite{pearson2006:pra,seidelin2006:prl} or between electrodes placed on
different chips \cite{schulz2006:fp,brownnutt06}. For loading these traps,
photoionisation techniques using laser-ablated gas \cite{hendricks2007:apb} or
laser-cooled neutral atoms \cite{cetina2007:arxiv} are utilized.
Here we present a planar microchip-based ion trap with a multipole arrangement
of radiofrequency electrodes. Built from classically machined components, such
multipole ion traps, in particular the 22-pole trap \cite{gerlich1995:ps}, are
successfully used for the study of low-temperature ion-molecule reactions of
astrophysical interest \cite{paul95,gerlich2006:ps} and to investigate
laser-induced reaction processes \cite{schlemmer2002:jcp,asvany2005:sci,
mikosch2004:jcp,trippel2006:prl,dzhonson07}. The multipole structure leads to
an effective potential with a finite depth and a large field-free central
region \cite{gerlich1995:ps,trippel2006:prl,mikosch2007:prl} that allows for
buffer gas thermalization of the translational and rovibrational degrees of
freedom of trapped molecular ions \cite{glosik06,mercier06,mikosch2004:jcp}.
We have transformed the cylindrical design of a conventional 22-pole trap into
a planar electrode structure, which allows for MEMS fabrication. The open
geometry of this planar configuration, and the application of transparent
indium tin oxide (ITO) electrodes, will allow us to overlap an optically
trapped cloud of ultracold atoms with ions confined in the microchip-based
trap. This will open up opportunities for sympathetic cooling of ions with
ultracold atoms and for experimental investigations of ultracold ion-atom
interactions.
In this work, the operation of the planar trap and its characteristics are
described. Numerical simulations of the trapping field and details of the MEMS
process will be described elsewhere \cite{kroener:s_a}. The paper is organized
as follows: an analytical model of the effective potential of the chip-based
multipole trap is presented in the next section, followed by a description of
the trap setup in section \ref{setup:sect}. Experimental results on ion
trapping and on the achieved trap lifetimes are discussed in section
\ref{results:sect}. The analysis of surface charging effects are presented in
section \ref{charging:sect}.
\section{\label{design:sect} Properties of the chip-based multipole ion trap}
The basic components of the planar chip-based multipole ion trap are two sets
of equally spaced and equally broad conducting stripes deposited on two
insulating glass substrates that face each other. Fig.\ \ref{falleschema}
shows a schematic view of the trap; every second stripe is connected to an
rf-potential $U_0 \sin(\omega t)$ and the other stripes are connected to the
opposing rf-potential $-U_0 \sin(\omega t)$. As shown below, this leads to a
repulsive effective potential in front of each of the two electrode planes,
thus yielding confinement of ions between the two planes. The distance from
the center of one stripe to the center of the next one is given by $\pi x_0$
and the distance between the two substrate surfaces is denoted $z_0$. The
width of the stripes is assumed to be $\pi x_0 / 2$. In our realisation $\pi
x_0 = 1$\,mm and $z_0 = 5$\,mm are employed.
\begin{figure}
\includegraphics[width=\columnwidth]{fig1.eps}
\caption{\label{falleschema} (Color online) Schematic view of the planar
multipole ion trap with equidistant electrodes in two nearby planes. The
electrodes are alternatingly connected to two opposing radiofrequency
potentials to provide confinement between the planes.}
\end{figure}
For an analytical description of the potential generated by the two planes of
radiofrequency electrodes we assume each plane to carry an infinite number of
stripes and the stripes to extend infinitely in the plane. We further assume
quasistationary conditions, a good approximation for trap frequencies in the
MHz regime, and obtain the potential $\Phi(\vec r) \sin(\omega t)$ by solving
the Laplace equation
\begin{equation}
\Delta \Phi(\vec r)=0.
\end{equation}
Fig.\ \ref{falleschema} shows the employed coordinate system. The boundary
conditions of the periodic arrangement of stripes are given by a periodic
trapezoidal function: the potential is constant along the electrode surfaces
and linear between the electrodes. This potential is approximated by the first
order term of its Fourier series which reads $U(x, z=\pm z_0/2, t) = 1.15\,U_0
\cos(x/x_0) \sin(\omega t)$. This approximate boundary condition satisfies the
requirement of opposite voltages on neighbouring electrodes. For distances
$\Delta z > x_0$ from the trap electrodes it is a good approximation, as shown
below. For these boundary conditions an analytical solution for the electric
potential inside the trap is given by
\begin{equation}
\Phi(\vec r)=\Phi_0 \sinh(\hat{z}) \cos(\hat{x}),
\end{equation}
where $\hat{z}=z/x_0$ and $\hat{x}=x/x_0$ are reduced variables. The value of
$\Phi_0$ is linked to the potential $U_0$ applied to the electrodes by $\Phi_0
= 1.15\,U_0 / \sinh[z_0/(2 x_0)]$.
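The prefactor 1.15 is the first Fourier cosine coefficient of the trapezoidal boundary potential (plateaus of width $\pi x_0/2$ at $\pm U_0$ joined by linear ramps of the same width). It can be verified with a short numerical sketch; the discretization parameters below are illustrative:

```python
import math

def trapezoid_profile(u):
    """Boundary potential in units of U0; u = x/x0, period 2*pi.

    Electrodes of width pi*x0/2 sit at +U0 and -U0, joined by linear
    ramps of the same width, as described in the text."""
    u = math.fmod(u + math.pi, 2 * math.pi) - math.pi  # wrap u into [-pi, pi)
    au = abs(u)
    if au <= math.pi / 4:        # on the +U0 stripe
        return 1.0
    if au >= 3 * math.pi / 4:    # on the -U0 stripe
        return -1.0
    return 1.0 - (4.0 / math.pi) * (au - math.pi / 4)  # linear ramp

def fourier_a1(n=200000):
    """First Fourier cosine coefficient a1 = (1/pi) * int f(u) cos(u) du."""
    h = 2 * math.pi / n
    s = sum(trapezoid_profile(-math.pi + (i + 0.5) * h)
            * math.cos(-math.pi + (i + 0.5) * h) for i in range(n))
    return s * h / math.pi

print(fourier_a1())  # ~1.146, quoted as 1.15 in the text
```

The exact value is about 1.146, which is rounded to 1.15 in the boundary condition above.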
The effective potential that an adiabatically trapped ion experiences in a
rapidly oscillating rf field is given by \cite{gerlich1992:adv}
\begin{equation}
V^*(\vec r)=\frac{q^2}{4 m \omega^2} [\vec \nabla \Phi(\vec r)]^2,
\label{effective_potential}
\end{equation}
where the charge and mass of the ion are denoted as $q$ and $m$. For the given
solution for the chip-based ion trap this yields
\begin{equation}
V^*(\vec r)=
\frac{(1.15)^2 q^2 U_0^2}{4 m \omega^2 x_0^2}
\frac{\cosh(2 \hat{z}) + \cos(2 \hat{x})}
{\cosh(z_0/x_0)-1}.
\end{equation}
For $z \gg x_0$ this solution is approximately proportional to $\exp(2
\hat{z})$. This is in contrast to cylindrical multipole ion traps of order
$n$, such as the 22-pole trap ($n=11$) \cite{gerlich1995:ps}, which feature
effective potentials proportional to $r^{(2n-2)}$.
The necessary condition of adiabatic motion for a trapped ion in a
time-varying field is characterized by the adiabaticity parameter
\cite{gerlich1992:adv}
\begin{equation}
\eta(\vec r)=\frac{2q
\left |
\vec \nabla |\vec \nabla \Phi(\vec r) |
\right |
} {m \omega^2}.
\end{equation}
Ref.\ \cite{gerlich1992:adv} postulates that $\eta$ has to be less than 0.3 to
guarantee ``safe operating conditions''. We have thoroughly investigated trap
loss out of multipole traps \cite{mikosch2007:prl} and found trapping to occur
up to a value of 0.38 for $\eta$. The surface on which $\eta$ reaches this
maximum value bounds the trapping volume, and the effective potential on this
surface sets the maximum potential depth for trapped ions
\cite{mikosch2007:prl}. The right panel of Fig.\ \ref{pottopf} shows the
effective trapping potential of the chip-based multipole ion trap in the
region of space where adiabatic trapping is possible, i.\ e.\ where the
adiabaticity criterion of $\eta < 0.38$ is fulfilled. The potential is
calculated for Ar$^+$ ions in a trap of amplitude $U_0 = 125$\,V and frequency
$\omega = 2 \pi \times 5.75$\,MHz. It can be seen that the effective potential
forms a deep well with an almost flat, field-free bottom, with exponentially
rising potential walls, and with a depth of about 0.5\,eV.
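As a consistency check, the $\approx 0.5$\,eV depth can be reproduced from the formulas above by locating the surface where $\eta = 0.38$ on the trap axis and evaluating $V^*$ there. The following Python sketch uses the stated parameters (Ar$^+$, $U_0 = 125$\,V, $\omega = 2\pi\times 5.75$\,MHz); the simple step search is illustrative:

```python
import math

q = 1.602176634e-19            # elementary charge, C
m = 39.948 * 1.66053907e-27    # Ar+ mass, kg
U0 = 125.0                     # rf amplitude, V
omega = 2 * math.pi * 5.75e6   # rf angular frequency, rad/s
x0 = 1e-3 / math.pi            # electrode pitch is pi*x0 = 1 mm
z0 = 5e-3                      # separation of the two planes, m

Phi0 = 1.15 * U0 / math.sinh(z0 / (2 * x0))

def eff_potential_eV(zhat, xhat=0.0):
    """Effective potential V*(r) in eV (formula from the text)."""
    pref = (1.15 * q * U0) ** 2 / (4 * m * omega ** 2 * x0 ** 2)
    vstar = pref * (math.cosh(2 * zhat) + math.cos(2 * xhat)) \
            / (math.cosh(z0 / x0) - 1)
    return vstar / q

def eta(zhat):
    """Adiabaticity parameter on the trap axis (x = 0)."""
    return 2 * q * Phi0 * math.sinh(zhat) / (m * omega ** 2 * x0 ** 2)

zhat = 0.0                     # walk outward until eta reaches 0.38
while eta(zhat) < 0.38:
    zhat += 1e-4

depth = eff_potential_eV(zhat)
print(depth)  # ~0.5 eV, in line with the depth quoted above
```

The $\eta = 0.38$ surface lies well inside the electrode gap, roughly 0.8\,mm in front of the electrode plane with these numbers.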
\begin{figure}
\includegraphics[width=\columnwidth]{fig2.eps}
\caption{\label{pottopf} (Color online) The right panel shows the analytically
calculated effective trapping potential along the z-direction (black
line). The one-dimensional cut obtained from a two-dimensional numerical
calculation of the effective potential (red line) cannot be distinguished
from the analytical model. The result of the two-dimensional calculation is
shown as a contour plot in the left panel for 16 stripes on each plane. One
can clearly see the flat bottom and the steep walls of the effective
potential. The nonadiabatic regions, where no stable trapping is possible, are
colored in white.}
\end{figure}
The electric field configuration for stable ion trapping has also been
investigated in numerical simulations and the resulting effective trapping
potentials and $\eta$-parameters are evaluated \cite{kroener:s_a}. From a
two-dimensional simulation of the effective trapping potential using SIMION
\cite{simion7}, a one-dimensional cut along the $z$-direction in the center of
the trap (for $x=y=0$) is derived. It cannot be distinguished from the
analytical model in the right panel of Fig.\ \ref{pottopf}. Both results are
found to agree within one percent, which proves the applicability of the
analytical model in the region of the trap where adiabatic motion
prevails. The full two-dimensional calculation in the $xz$-plane is shown in
the left panel of Fig.\ \ref{pottopf}. We find that the confinement in the
$z$-direction is independent of the $x$-position for almost the entire
trap. One can also see that the confinement for small and large $x$-values is
not provided by the rf fields. The same holds for small and large
$y$-values. Confinement in the $xy$-plane is therefore achieved by
superimposing additional electrostatic potentials.
\section{\label{setup:sect} Realisation of the trap and loading scheme}
Two planes of gold electrodes on top of two glass substrates that face each
other form the ion trap. Design and fabrication of the chip-based ion trap
using MEMS technology will be described in a separate publication
\cite{kroener:s_a}. Fig.\ \ref{fallefoto} shows a picture of one of the two
glass substrates with the rf electrodes, spaced at $\pi x_0 = 1$\,mm, and
several static electrodes surrounding the comb structure. Besides providing
three-dimensional trapping these static electrodes are also used for the
controlled extraction of trapped ions. The second glass chip is mounted facing
the first one at a distance of $z_0 = 5$\,mm.
\begin{figure}
\includegraphics[width=\columnwidth]{fig3.eps}
\caption{\label{fallefoto} Photograph of one of the two ion trap chips mounted
into its holder. The second chip (not shown) is mounted 5\,mm above, facing
the first chip. The metal bars surrounding the chip serve to shield the chip
from electrostatic charging.}
\end{figure}
The trap is kept in a vacuum chamber at a residual gas pressure of about
$10^{-8}\,\mathrm{mbar}$ generated by a $500\,\mathrm{l/s}$ turbo molecular
pump. It is mounted in a holder fixed at one flange which also supports the
electrical connections for the trap. The radiofrequency amplitude of the trap
is generated by amplifying the signal of an rf oscillator (Hameg HM8032) in a
high frequency power amplifier (RFPA RF001100-8). To reach sufficiently high
amplitudes the output is transformed by a coil on a high frequency ferrite
core located close to the trap outside the chamber. In this way peak
amplitudes of $U_0=0...250$\,V and frequencies in the range of $\omega/(2\pi)
= 3 ... 6.5$\,MHz are applied.
Ions are created by electron impact on neutral atoms inside the trap. This is
achieved by crossing a pulsed gas beam from a piezoelectric valve
\cite{gerlich_PV} with a pulsed 1\,kV electron beam in the center of the
trap. Creating the ions inside the trap is favored over ion transport and
capturing techniques due to its simplicity, but it causes charging of
non-conducting parts (see section \ref{charging:sect}) as well as a higher
background pressure for the first tens of ms after the pulse. When ions are
created the electron beam is adjusted by optimizing the ion signal on a
channeltron detector, which is mounted opposite of the pulsed valve and is set
up to detect and amplify individual ion pulses. The number of ions hitting
the detector is measured using a single-channel discriminator and a
counter. Large numbers of trapped ions are measured by digitizing the current
signal of the channeltron with an oscilloscope. The data acquisition timing is
controlled with an AVR Atmel microprocessor (AT90S8515).
\section{\label{results:sect} Characterization of the trap}
Operation of the planar ion trap with Ar$^+$ ions has been achieved with the
design parameters for the rf and dc potentials obtained from the numerical
simulations, i.\ e.\ $\omega=2\pi\times 5.75$\,MHz and $U_0=125$\,V. The best
operating conditions are found by optimizing the electrostatic electrodes
surrounding the trap. These optimal settings result in static voltages of up
to a few volts. The setup is found to be stable against slight variations of
single static potentials: varying the static potentials by less than 1\,V
from their optimum values decreases the lifetime due to a lower potential
depth, but trapping is still possible.
For extraction the potential of the surrounding border electrode in the
direction of the detector is lowered to -15\,V. More negative extraction
potentials lead to a decrease in ion signal as the ions are hitting the
electrode. More positive extraction potentials lead to a smearing of the ion
signal in the time domain as the ions close to the border are accelerated by
the extraction potential but the ions further away are much less influenced.
In experiments with few trapped ions up to 200 individual ions are counted,
limited by the overlapping ion signals in the counter. We use these data to
calibrate the analog current signal of the channeltron detector to the ion
number. In this way the largest observed analog signals of trapped ions are
found to contain about 3000 ions.
\begin{figure}
\includegraphics[width=\columnwidth]{fig4.eps}
\caption{\label{speicherkurve} Number of ions extracted from the trap after
different storage times. The solid line shows an exponential fit with a
lifetime of 16\,s.}
\end{figure}
For the ion trap we determine a storage time of 16\,s, which corresponds to a
loss rate of 0.06\,s$^{-1}$. This lifetime can be compared to the
evaporation-limited lifetime over the rim of the trapping potential
\cite{mikosch2007:prl}: The evaporation rate is given by
\begin{equation}
k(T) = A e^{-E_a / k_B T},
\end{equation}
where the trap depth $E_a \approx 0.5$\,eV is taken from the effective
potential calculation of section \ref{design:sect}. The temperature of the
trapped ions is estimated to be roughly room temperature, controlled by
collisions of the trapped ions with the gas injected into the trap chamber
after ion formation. The pre-factor $A$ is assumed to be similar to the value
obtained in the 22-pole ion trap, $A=10^7$\,s$^{-1}$
\cite{mikosch2007:prl}. This yields a value of about 0.02\,s$^{-1}$, which is
only a factor of three away from the measured loss rate. This is considered
a fair agreement when keeping in mind the exponential dependence of the
evaporation rate on the trap depth $E_a$.
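Plugging in numbers shows the order-of-magnitude agreement directly; the sketch below assumes $T = 300$\,K for room temperature:

```python
import math

A = 1e7                       # attempt frequency from the 22-pole trap, 1/s
E_a = 0.5                     # trap depth in eV (effective-potential model)
kB_T = 8.617333e-5 * 300.0    # Boltzmann constant in eV/K times ~300 K

k_evap = A * math.exp(-E_a / kB_T)
print(k_evap)  # a few 1e-2 1/s: same order as the measured 0.06 1/s
```

With these inputs the rate comes out near $4\times 10^{-2}$\,s$^{-1}$; small shifts in $E_a$ or $T$ move it by factors of a few, which is why only order-of-magnitude agreement with the measured loss rate can be expected.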
\section{\label{charging:sect} Electrostatic charging}
Avoiding stray charges and investigating their effects where they cannot be
completely eliminated is a central issue in the design of micro trap
structures where conducting and nonconducting areas are lying close to each
other and to the center of the trap \cite{seidelin2006:prl}. In our current
trap design these charging effects are non-negligible and affect both trapping
efficiency and storage time.
\begin{figure}
\includegraphics[width=\columnwidth]{fig5.eps}
\caption{\label{auflad} Cut through the chip: a highly resistive region
(substrate) is enclosed by two conducting parts (electrodes).}
\end{figure}
The steady-state potential of the glass surface induced by charging can be
calculated assuming a constant current density $j_{\rm ch}$ that is flowing
onto the surface and a resistivity-limited discharging current $I_{\rm dis}$
within the glass (see Fig.\ \ref{auflad}). The current inside the surface
flows from the middle of the high-resistive region (denoted as $x=0$) to the
two neighbouring electrode stripes. A surface area $x\,\Delta y$ (with $0 < x
< \pi x_0 / 4$ perpendicular to the stripes and $\Delta y$ parallel to the
stripes) leads to a discharging current at the position $x$ inside the glass
of
\begin{equation}
I_{\rm dis}(x)=j_{\rm ch} x\,\Delta y.
\end{equation}
Under steady state conditions only the resistivity of the glass substrate and
not the parallel capacitance determines the potential (see equivalent circuit
diagram in Fig.\ \ref{auflad}). This leads to a potential gradient at a
position $x$ between the stripes of
\begin{equation}
\frac{dU}{dx}=\frac{\rho}{h\,\Delta y} I_{\rm dis}(x)
\end{equation}
where the discharging current $I_{\rm dis}$ flows through the area $h\,\Delta
y$ in the glass chip and $\rho$ denotes the specific resistance of the glass.
Integration from $x=0$ to $x=\pi x_0 / 4$ yields the electric potential at the
center of the high-resistive region ($x=0$) of
\begin{equation}
U_{\rm ch}=\frac{1}{2} \frac{\rho}{h} j_{\rm ch} \frac{(\pi x_0)^2}{16},
\end{equation}
with respect to the electrodes. For an estimation of the amount of charge
needed to significantly influence storage of ions we assume that a potential
of 500\,mV between two rf electrodes, a value similar to the depth of the
effective potential, will preclude trapping of ions. The resistivity $\rho$ of
the glass substrate (thickness $h=0.05$\,cm) is extrapolated from the material
data sheet \cite{borofloat} to $\approx 10^{15}\mathrm{\Omega cm}$. Thus, a
potential of 500\,mV is obtained for a charging current density of about
$5\times 10^5$ electrons per cm$^2$ and second. For the two entire chips with
their total glass surface of $2 \times 4.5$\,cm$^2$, this means that a charge
flux of about $5\times 10^6$ elementary charges per second will have a
significant influence on trapping and storage. At the typical repetition rate
of the experiment of 10 cycles per second, this yields a maximum allowable
current of $5\times 10^5$ charges per trap loading.
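The numbers in this estimate can be retraced with a few lines of Python (all values as stated above):

```python
e = 1.602e-19       # elementary charge, C
rho = 1e15          # glass resistivity, Ohm*cm (data-sheet extrapolation)
h = 0.05            # substrate thickness, cm
pi_x0 = 0.1         # electrode pitch pi*x0, cm
U_ch = 0.5          # charging potential comparable to the trap depth, V

# invert U_ch = (1/2) (rho/h) j_ch (pi x0)^2 / 16 for the current density
j_ch = U_ch * 2 * h / (rho * pi_x0 ** 2 / 16)  # A/cm^2
j_e = j_ch / e                                 # electrons per cm^2 and s
print(j_e)                   # ~5e5 electrons/(cm^2 s)

area = 2 * 4.5               # total glass surface of both chips, cm^2
flux = j_e * area            # ~5e6 elementary charges per second
per_loading = flux / 10.0    # at 10 loading cycles per second
print(flux, per_loading)
```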
To investigate charging effects of the planar ion trap experimentally, the
trapping efficiency is measured for different average currents of the electron
beam used for ionization. We define the trapping efficiency as the number of
ions trapped after 10\,ms of storage time. This time is much shorter than the
lifetime of trapped ions but is also long enough to allow for complete
randomization of ion trajectories. In the experiment, charging of the chips'
surface stems from the electron beam, which is pulsed on only during loading
of the trap. The average charging current can therefore be varied by changing
the repetition rate of trap loading from 0.1\,Hz to 20\,Hz. The trapping
efficiency is measured for many trapping cycles over a time span of several
hours. In Fig.\ \ref{auflad2} the change in the trapping efficiency is shown
when the repetition rate is changed from 2\,Hz to 5\,Hz and back to
2\,Hz. With the higher repetition rate the charging increases and consequently
the trapping efficiency decreases until the repetition rate is set back to
2\,Hz and the charging is reduced again. The time constants for reaching
steady-state trapping efficiencies upon increased and decreased surface
charging are obtained by fitting a decay curve $A\exp(-t/\tau)+B$ and a growth
curve $A(1-\exp(-t/\tau'))+B$ to the data (solid line in Fig.\ \ref{auflad2}).
The obtained values for increased and decreased charging amount to $\tau
\approx 600$\,s and $\tau' \approx 100$\,s, respectively. The observation of
two different values may indicate that the increased charging is limited by
the current $j_{\rm ch}$ whereas the decreased charging is only limited by the
intrinsic capacitance and resistivity of the substrate.
\begin{figure}
\includegraphics[width=\columnwidth]{fig6.eps}
\caption{\label{auflad2} Decrease in trapping efficiency after changing from
2 to 5 loading cycles per second. The increase on the right side of the graph
results from switching back to 2 loading cycles per second.}
\end{figure}
To estimate the expected time constant for discharging the surface we use the
equivalent circuit of the chip surface shown in Fig.\ \ref{auflad}. The time
constant $\tau = dR\,dC$ that determines changes of the steady-state potential
depends on the resistance $dR=\frac{\rho}{h\,\Delta y}\,dx$ and the capacitance
$dC=\epsilon_0 \epsilon_r\frac{h\,\Delta y}{dx}$. This yields the time constant
\begin{equation}
\tau=\epsilon_0 \epsilon_r \rho.
\end{equation}
With $\epsilon_r=4.6$ for the glass substrate \cite{borofloat} one obtains a
typical time constant of about 400\,s for changes of the charging potential of
the glass substrate. Under the assumption that small changes of the trapping
efficiency are to a first approximation proportional to small changes in the
charging potential one can compare this calculated time constant to the values
obtained from the measured trapping efficiency. The order of magnitude
agreement that one finds provides evidence that charging of the glass surface
is in fact the major cause for the observed changes in the trapping
efficiency. By decreasing the resistivity of the glass substrate by an order
of magnitude, one can proportionally reduce the charging potentials of the
substrate to an insignificant amount, while still maintaining small resistive
losses for the driving rf amplitude.
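The RC estimate itself is a one-liner; with the values quoted above:

```python
eps0 = 8.854e-14    # vacuum permittivity, F/cm
eps_r = 4.6         # relative permittivity of the glass substrate
rho = 1e15          # resistivity, Ohm*cm

tau = eps0 * eps_r * rho
print(tau)          # ~400 s, between the measured 100 s and 600 s
```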
\section{Conclusions and outlook}
We have presented a chip-based multipole ion trap based on a planar design,
which features a large field-free trapping volume between two glass substrates
carrying stripes of radiofrequency electrodes. An analytical model has been
presented that describes the effective trapping potential in good agreement
with numerical calculations. Trapping of ions has been demonstrated and the
measured decay rate of trapped Ar$^+$ ions follows the expectations from
evaporative losses over the rim of the confining potential. The effect of
surface charging, due to the highly resistive glass substrates, on the ion
trapping efficiency has been experimentally studied. The charging potential
and the observed time constant for reaching steady-state conditions have been
successfully modeled using an appropriate equivalent circuit, which is based
on the resistivity and capacity of the glass substrate.
As a next step we will add a drift tube for the extracted ions to implement a
Wiley--McLaren \cite{WileyMcLaren} type time-of-flight mass spectrometer. To
characterize the density distribution of the trapped ions, photodetachment
tomography experiments \cite{trippel2006:prl} will be carried out. Further
improvements of the design and the fabrication techniques of the trap are
under development, including electrode materials with high optical
transmission \cite{kroener:s_a}. This will allow the combination of the
chip-based multipole ion trap with a magneto-optical trap for ultracold
neutral atoms for experiments on interactions of trapped ions and clusters
with ultracold atoms.
\begin{acknowledgments}
This project is supported in part by a grant from the Ministry of Science,
Research and Arts of Baden-W{\"u}rttemberg. The chips were fabricated in the
Clean Room Service Center of the Department of Microsystems Engineering
(IMTEK), Freiburg. N. M. acknowledges support from the RISE program of the
German Academic Exchange Service (DAAD).
\end{acknowledgments}
\section{Introduction}
Inverse problems occur widely in mathematical and engineering fields. The related mathematical theories and algorithms have been developed by many authors \cite{Tarantola}.
Recently, Bayesian inference has established a convenient framework for analyzing the uncertainty of the unknowns \cite{Stuart}. It adopts a probabilistic viewpoint to represent, propagate and update epistemic parameter uncertainties.
In computation, a key challenge lies in the transition from the prior distribution to the posterior, which attracts a large number of researchers.
Several existing algorithms, e.g., the Metropolis-Hastings (MH) sampling algorithm, have been proposed to explore the posterior distribution. Based on the MH sampling technique, a large number of samples is collected through accept-reject proposals and used to characterize the posterior distribution. A well-known technical difficulty of MCMC-based methods is that the samples are typically autocorrelated (or anticorrelated) within a chain, which increases the uncertainty in the estimation of posterior quantities of interest, such as means and variances.
In this paper, we seek an optimal approximation of the posterior from some common family of distribution.
We assume that the prior belongs to this distribution class. Therefore, the optimal approximation and the prior are of the same type and share the common parameterization density.
The transition from the prior to the posterior involves the change of the distribution parameters. Homotopy methods are a promising approach to characterize solution spaces by smoothly tracking solutions from one formulation (typically an ``easy'' problem) to another (typically a ``hard'' problem).
To deal with filtering problems, Hanebeck et al. \cite{Hanebeck} introduced a general framework for performing the measurement update using a homotopy.
This method is discussed further in \cite{Hagmar}. To the authors' knowledge, this method has not been applied to solve inverse problems. We discuss the application of this method to Bayesian inverse problems.
Different from \cite{Hagmar, Hanebeck}, the approximate posterior density family is taken to be the exponential family and the mixed exponential family, and the usual moment parameters are replaced by a simplification of the natural parameters. Within the exponential family and its mixed version, the corresponding derivatives can be computed in a relatively easy way, and the number of parameters decreases dramatically. With this homotopy, the prior parameters are propagated to the posterior parameters by a so-called homotopy differential equation (HDE). In this HDE, we confront high-dimensional numerical integration. Conventional means, e.g., sparse grids \cite{Gerstner} and Monte Carlo methods \cite{Dick}, can be used to deal with these integration terms.
The remainder of this paper is organized as follows: In Section 2, we give the basic framework of Bayesian inversion using homotopy. In Section 3, we introduce the exponential family and mixed exponential family. In Section 4, the approximated version of the homotopy differential equation is derived. In Section 5, some numerical examples are given to verify the effectiveness of the proposed algorithm.
\section{Bayesian inversion using homotopy}
Inverse problems concern converting observational data into information about systems which are not observed directly. In mathematics, an inverse problem takes the abstract form
\begin{align}\label{s1.1}
\boldsymbol{\tilde{y}}=\mathcal{G}(\boldsymbol{\kappa})+\boldsymbol{\Xi}
\end{align}
in which $\boldsymbol{\Xi}$ is an additive noise, the unknown $\boldsymbol{\kappa}\in U$ is to be determined, given the data $\boldsymbol{\tilde{y}}\in Y$, where $U$ and $Y$ are Banach spaces. We apply a probabilistic viewpoint, Bayesian approach, to give the solution information of \eqref{s1.1}, in which all
quantities including the unknown $\boldsymbol{\kappa}$, the noise $\boldsymbol{\Xi}$ and the observations $\boldsymbol{\tilde{y}}$ are regarded as random variables.
In the Bayesian framework, the information about the unknown is updated by blending prior beliefs with observed data.
Typically, the prior and posterior are coded in the corresponding probability measures, which are linked by the Bayes' formula
\begin{align}
\frac{d\mu^{\boldsymbol{\tilde{y}}}}
{d\mu_0}(\boldsymbol{\kappa})
\propto L(\boldsymbol{\kappa}; \boldsymbol{\tilde{y}}):=\exp\left(-\Phi(\boldsymbol{\kappa}; \boldsymbol{\tilde{y}})\right),
\end{align}
where $L(\boldsymbol{\kappa}; \boldsymbol{\tilde{y}})$ is the likelihood function and $\Phi(\boldsymbol{\kappa}; \boldsymbol{\tilde{y}})$ is the negative log likelihood.
For the sake of simplicity, we consider the spaces $U$ and $Y$ are finite dimensional. The posterior density corresponding to $\mu^{\tilde{\boldsymbol{y}}}(\boldsymbol{\kappa})$ is given as
\begin{align}\label{pos1}
\mathfrak{p}^{\boldsymbol{\tilde{y}}}(\boldsymbol{\kappa})
=\frac{\exp\left(-\Phi(\boldsymbol{\kappa}; \boldsymbol{\tilde{y}})\right)\mathfrak{q}(\boldsymbol{\kappa})}{Z}.
\end{align}
The denominator $Z$ is the normalization constant, which is usually neglected in sampling algorithms. The main target to be explored in the posterior density is the numerator
\begin{align}
\mathfrak{p}(\boldsymbol{\kappa})=\exp\left(-\Phi(\boldsymbol{\kappa}; \boldsymbol{\tilde{y}})\right)\mathfrak{q}(\boldsymbol{\kappa}),
\end{align}
where the normalization constant $Z$ in \eqref{pos1} is dropped. We try to find an optimal approximation of $\mathfrak{p}(\boldsymbol{\kappa})$ from some commonly used distribution family. This is implemented in the framework of the homotopy Bayesian approach.
The key idea of the homotopy Bayesian approach is to perform progressive processing.
In this method, instead of directly approximating the true density $\mathfrak{p}(\boldsymbol{\kappa})$, it starts with a tractable density and continuously approaches the true density via intermediate densities.
Choose the homotopy as follows
\begin{align}
\mathfrak{p}(\boldsymbol{\kappa}, t)=L^t(\boldsymbol{\kappa}; \boldsymbol{\tilde{y}})\mathfrak{q}(\boldsymbol{\kappa})=\exp\left(-t\Phi(\boldsymbol{\kappa}; \boldsymbol{\tilde{y}})\right)\mathfrak{q}(\boldsymbol{\kappa}), \,\, t\in [0, 1].
\end{align}
For each $t\in [0, 1]$, an optimal approximation of $\mathfrak{p}(\boldsymbol{\kappa}, t)$ is to be sought from some distribution family. Denote the probability density of the distribution family by a parameterized density family $\mathfrak{g}(\boldsymbol{\kappa}; \eta)$.
By minimizing a deviation function
$G(\eta, t)$ between $\mathfrak{p}(\boldsymbol{\kappa}, t)$ and $\mathfrak{g}(\boldsymbol{\kappa}; \eta)$, we obtain the optimal approximation of $\mathfrak{p}(\boldsymbol{\kappa}, t)$.
To measure the difference between two probability distributions over the same variable $\boldsymbol{\kappa}$, a measure called the Kullback-Leibler divergence, or simply the KL divergence, has been popularly used in the statistical learning and data mining literature. The concept originated in probability theory and information theory. The KL divergence, which is closely related to relative entropy, information divergence, and information for discrimination, is a non-symmetric measure of the difference between two probability distributions $\mathfrak{X}$ and $\mathfrak{Y}$. If $\mathfrak{X}$ and $\mathfrak{Y}$ are discrete probability distributions, i.e., $\mathfrak{X}=(\mathfrak{X}(\boldsymbol{\kappa}_1), \mathfrak{X}(\boldsymbol{\kappa}_2), \cdots, \mathfrak{X}(\boldsymbol{\kappa}_m))$ and $\mathfrak{Y}=(\mathfrak{Y}(\boldsymbol{\kappa}_1), \mathfrak{Y}(\boldsymbol{\kappa}_2), \cdots, \mathfrak{Y}(\boldsymbol{\kappa}_m))$, the KL divergence is defined as
\begin{align}
D_{\rm KL}(\mathfrak{X}||\mathfrak{Y})=\sum_{i=1}^m\mathfrak{X}(\boldsymbol{\kappa}_i)\log\frac{\mathfrak{X}(\boldsymbol{\kappa}_i)}{\mathfrak{Y}(\boldsymbol{\kappa}_i)}.
\end{align}
For continuous random variables $\mathfrak{X}$ and $\mathfrak{Y}$, we assume the corresponding probability densities are $\mathfrak{x}(\boldsymbol{\kappa})$ and $\mathfrak{y}(\boldsymbol{\kappa})$, respectively.
In this case, the Kullback-Leibler divergence between $\mathfrak{X}$ and $\mathfrak{Y}$ is given
\begin{align}
D_{\rm KL}(\mathfrak{X}||\mathfrak{Y})=\int \mathfrak{x}(\boldsymbol{\kappa})\log\frac{\mathfrak{x}(\boldsymbol{\kappa})}{\mathfrak{y}(\boldsymbol{\kappa})}d\boldsymbol{\kappa}.
\end{align}
In probability and statistics, the Hellinger distance is also usually used to quantify the similarity between two probability distributions. It is a type of f-divergence.
The squared Hellinger distances for discrete and continuous random variables are defined by
\begin{align}
\begin{aligned}
&H^2(\mathfrak{X}, \mathfrak{Y})=\frac{1}{2}\sum_{i=1}^m \left(\sqrt{\mathfrak{X}(\boldsymbol{\kappa}_i)}-\sqrt{\mathfrak{Y}(\boldsymbol{\kappa}_i)}\right)^2,\\
&H^{2}(\mathfrak{X}, \mathfrak{Y})=\frac{1}{2} \int\left(\sqrt{\mathfrak{x}(\boldsymbol{\kappa})}-\sqrt{\mathfrak{y}(\boldsymbol{\kappa})}\right)^{2} d \boldsymbol{\kappa},
\end{aligned}
\end{align}
respectively.
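For the discrete case, both deviation measures take only a few lines of code; the sketch below simply illustrates the definitions and their basic properties (non-negativity and asymmetry of the KL divergence, symmetry of the Hellinger distance) on arbitrary example distributions:

```python
import math

def kl_divergence(X, Y):
    """Discrete KL divergence D_KL(X || Y); X, Y are probability vectors."""
    return sum(x * math.log(x / y) for x, y in zip(X, Y) if x > 0)

def hellinger(X, Y):
    """Discrete Hellinger distance H(X, Y)."""
    return math.sqrt(0.5 * sum((math.sqrt(x) - math.sqrt(y)) ** 2
                               for x, y in zip(X, Y)))

X = [0.5, 0.3, 0.2]
Y = [0.4, 0.4, 0.2]
print(kl_divergence(X, X))               # 0.0 for identical distributions
print(kl_divergence(X, Y))               # positive; != D_KL(Y || X) in general
print(hellinger(X, Y), hellinger(Y, X))  # symmetric
```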
With either of the preceding deviation measures acting as $G$, we seek
\begin{align}
\eta(t)=\argmin_{\eta} G(\eta, t).
\end{align}
The minimization necessary condition yields
\begin{align}
G_\eta(\eta(t), t)=0 \,\, \text{for} \,\, t\in [0, 1].
\end{align}
By taking the total derivative w.r.t. $t$ of the preceding equation, we obtain
\begin{align}
G_{\eta\eta}(\eta(t), t)\eta'(t)+G_{\eta t}(\eta(t), t)=0.
\end{align}
For $t=0$, the approximated density $\mathfrak{g}(\boldsymbol{\kappa}; \eta(0))$ is an optimal approximation of $\mathfrak{q}(\boldsymbol{\kappa})$.
If the approximated distributions $\mathfrak{g}(\boldsymbol{\kappa}; \eta)$ are chosen from the same family as the prior distribution $\mathfrak{q}(\boldsymbol{\kappa})$, the initial condition is set according to $\mathfrak{g}(\boldsymbol{\kappa}; \eta(0))=\mathfrak{q}(\boldsymbol{\kappa})$. Let $\mathfrak{q}(\boldsymbol{\kappa})$ be identified as $\mathfrak{g}(\boldsymbol{\kappa}; \eta_0)$. By this, the initial condition is set to $\eta(0)=\eta_0$.
Solving for $\eta'(t)$, we arrive at the initial value problem of the homotopy differential equation (HDE)
\begin{align}\label{hde1}
\begin{aligned}
&\eta'(t)=-G_{\eta\eta}^{-1}(\eta(t), t)G_{\eta t}(\eta(t), t),\\
&\eta(0)=\eta_0.
\end{aligned}
\end{align}
\section{Exponential family and mixed exponential family}
As the most widely used distribution families, the exponential family (EF) and the mixed exponential family (MEF) serve as the approximations of the posterior distribution.
The EF and MEF are practically convenient and widely used families of distributions parametrized by a finite-dimensional parameter vector.
Their special significance stems from the fact that a number of important and useful statistical calculations
can be carried out at one stroke within the framework of the EF and MEF. This generality
contributes both to convenience and to a broader understanding. Moreover, these families have recently gained additional importance through their use in, and appeal to, the machine learning community.
For a numeric random variable $\boldsymbol{\mathfrak{K}}$, the parametric EF probability density can be written as
\begin{align}
\mathfrak{q}(\boldsymbol{\kappa}; \theta)=h(\boldsymbol{\kappa}) \exp \left\{\langle T(\boldsymbol{\kappa}), \theta\rangle-A(\theta)\right\},
\end{align}
where $\theta$ is called the natural (canonical) parameter, $T(\boldsymbol{\kappa})$ is the sufficient statistic and $A(\theta)$ is the $\log$ normalizer
given by
\begin{align*}
A(\theta)=\log\int h(\boldsymbol{\kappa})\exp(\langle T(\boldsymbol{\kappa}), \theta\rangle)d\boldsymbol{\kappa}=\log Q(\theta).
\end{align*}
It is straightforward to verify that
\begin{align}
\frac{d\log \mathfrak{q}(\boldsymbol{\kappa}; \theta)}{d\theta}=T(\boldsymbol{\kappa})-A'(\theta).
\end{align}
We can compute $A'(\theta)$ by
\begin{align*}
\begin{aligned}
A'(\theta)&=\frac{1}{Q(\theta)}\frac{dQ(\theta)}{d\theta}=\frac{\int h(\boldsymbol{\kappa})\exp(\langle T(\boldsymbol{\kappa}), \theta\rangle)T(\boldsymbol{\kappa})d\boldsymbol{\kappa}}{\int h(\boldsymbol{\kappa})\exp(\langle T(\boldsymbol{\kappa}), \theta\rangle)d\boldsymbol{\kappa}}\\
&=\frac{\int h(\boldsymbol{\kappa})\exp(\langle T(\boldsymbol{\kappa}), \theta\rangle-A(\theta))T(\boldsymbol{\kappa})d\boldsymbol{\kappa}}{\int h(\boldsymbol{\kappa})\exp(\langle T(\boldsymbol{\kappa}), \theta\rangle-A(\theta))d\boldsymbol{\kappa}}\\
&=\mathbb{E}[T(\boldsymbol{\kappa})].
\end{aligned}
\end{align*}
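The identity $A'(\theta)=\mathbb{E}[T(\boldsymbol{\kappa})]$ can be verified numerically for a concrete one-dimensional member of the EF. As an illustration (our own choice, not used elsewhere in the paper), take the exponential distribution with $h\equiv 1$, $T(\kappa)=\kappa$ and $\theta<0$, for which $A(\theta)=-\log(-\theta)$ and $\mathbb{E}[\kappa]=-1/\theta$:

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoid rule (avoids version-specific numpy helpers)."""
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

theta = -2.0
kappa = np.linspace(0.0, 20.0, 200001)  # exp(theta*kappa) is negligible beyond 20

def log_normalizer(th):
    # A(theta) = log of the integral of h(kappa) exp(theta*kappa), with h = 1
    return np.log(trapezoid(np.exp(th * kappa), kappa))

h = 1e-4  # finite-difference step for A'(theta)
A_prime_fd = (log_normalizer(theta + h) - log_normalizer(theta - h)) / (2 * h)

w = np.exp(theta * kappa)
mean_T = trapezoid(kappa * w, kappa) / trapezoid(w, kappa)

print(A_prime_fd, mean_T)  # both approximately 0.5 = -1/theta
```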
We list only the case of the multivariate Gaussian distribution, which is the one used in this paper; for this case, we further reduce the parameter dimension.
For a Gaussian random variable $\boldsymbol{\mathfrak{K}}\in \mathbb{R}^{d}$, if $\boldsymbol{\mathfrak{K}}\sim N(\varkappa, \varSigma)$, then $\mathbb{E}[\boldsymbol{\mathfrak{K}}]=\varkappa$
and $\text{cov}[\boldsymbol{\mathfrak{K}}]=\varSigma$. $\varkappa$ and $\varSigma$ are called the moment parameters of the distribution.
The probability density is given by
\begin{align*}
&\mathfrak{q}(\boldsymbol{\kappa} \mid \varkappa, \varSigma) =\frac{1}{(2 \pi)^{d / 2} \mid \varSigma\mid^{1 / 2}} \exp \left\{-\frac{1}{2}(\boldsymbol{\kappa}-\varkappa)^{\top} \varSigma^{-1}(\boldsymbol{\kappa}-\varkappa)\right\}
\\&=\frac{1}{(2 \pi)^{d/ 2}} \exp \left\{-\frac{1}{2} \operatorname{tr}\left(\varSigma^{-1} \boldsymbol{\kappa} \boldsymbol{\kappa}^{\top}\right)+\varkappa^{\top} \varSigma^{-1} \boldsymbol{\kappa}-\frac{1}{2} \varkappa^{\top} \varSigma^{-1} \varkappa-\frac{1}{2}\log |\varSigma|\right\}.
\end{align*}
The corresponding function $T(\boldsymbol{\kappa})=\left[\begin{array}{c}\boldsymbol{\kappa} \\ \operatorname{vec}(\boldsymbol{\kappa}\boldsymbol{\kappa}^\top)\end{array}\right]$ and natural parameter is
\begin{align*}
\theta=\left[\begin{array}{c}\varSigma^{-1}\varkappa \\ -\frac{1}{2}\operatorname{vec}(\varSigma^{-1})\end{array}\right]
\end{align*}
Here $A(\theta)=\frac{1}{2} \varkappa^{\top} \varSigma^{-1} \varkappa+\frac{1}{2}\log |\varSigma|$
and $h(\boldsymbol{\kappa})=(2\pi)^{-\frac{d}{2}}$. Define the precision matrix by $\mathcal{P}=\varSigma^{-1}$. By the symmetric positive definiteness of $\mathcal{P}$, we have the Cholesky factorization $\mathcal{P}=\mathcal{R}^\top \mathcal{R}$ with $\mathcal{R}$ being a lower triangular matrix. Introducing the new parameter
\begin{align}\label{par1}
\theta=\left[\begin{array}{c}\varkappa \\ \operatorname{vech}(\mathcal{R})\end{array}\right]=\left[\begin{array}{c}\theta_1 \\ \theta_2\end{array}\right],
\end{align}
we have
\begin{align}
&\frac{\partial \log\mathfrak{q}(\boldsymbol{\kappa} \mid \varkappa, \varSigma) }{\partial \theta_1}=\mathcal{R}^\top\mathcal{R}(\boldsymbol{\kappa}-\varkappa),\\
&\frac{\partial \log\mathfrak{q}(\boldsymbol{\kappa} \mid \varkappa, \varSigma) }{\partial \theta_2}=\operatorname{vech}\left([\operatorname{diag}(\mathcal{R})]^{-1}-\mathcal{R}(\boldsymbol{\kappa}-\varkappa)(\boldsymbol{\kappa}-\varkappa)^\top\right).
\end{align}
\begin{remark}
It is obvious that the dimension of the new parameter system decreases dramatically compared with the moment or natural parameter systems.
\end{remark}
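The gradient of $\log\mathfrak{q}$ with respect to the mean parameter $\theta_1=\varkappa$, which comes out as $\mathcal{R}^\top\mathcal{R}(\boldsymbol{\kappa}-\varkappa)$, can be checked (including its sign) against a finite-difference quotient. The following sketch is our own verification for $d=2$ with arbitrarily chosen values:

```python
import numpy as np

def log_q(x, m, R):
    """log N(x | m, (R^T R)^{-1}) with R triangular, precision P = R^T R."""
    d = len(m)
    u = R @ (x - m)
    return (-0.5 * d * np.log(2 * np.pi)
            + np.log(np.abs(np.prod(np.diag(R))))  # 0.5*log|P| = log|det R|
            - 0.5 * float(u @ u))

m = np.array([0.3, -1.2])
R = np.array([[1.5, 0.0],
              [0.4, 2.0]])        # lower triangular factor of the precision
x = np.array([1.0, 0.5])

analytic = R.T @ R @ (x - m)      # gradient of log q w.r.t. the mean

eps = 1e-6
numeric = np.zeros(2)
for i in range(2):
    dm = np.zeros(2); dm[i] = eps
    numeric[i] = (log_q(x, m + dm, R) - log_q(x, m - dm, R)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))  # tiny (log q is quadratic in m)
```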
The MEF is denoted by
\begin{align}
\mathfrak{g}(\boldsymbol{\kappa}; \eta)=\sum_{i=1}^{M} w_i \mathfrak{q}(\boldsymbol{\kappa}; \theta^i)=\sum_{i=1}^M w_i h(\boldsymbol{\kappa})\exp\left(\langle \Phi(\boldsymbol{\kappa}), \theta^i \rangle-A_i(\theta^i)\right),
\end{align}
where $\sum_{i=1}^M w_i=1$ and $w_i\geq 0$ for $i=1, 2, \cdots, M$. To remove the weight constraint, we let $w_i=\frac{\pi/2+\arctan\lambda_i}{\sum_{j=1}^M (\pi/2+\arctan\lambda_j)}$ and fix $\lambda_M=0$.
Thus, the MEF has the following form
\begin{align*}
\mathfrak{g}(\boldsymbol{\kappa}; \eta)=\sum_{i=1}^M \frac{\pi/2+\arctan\lambda_i}{\sum_{j=1}^M (\pi/2+\arctan\lambda_j)}h(\boldsymbol{\kappa})\exp\left(\langle \Phi(\boldsymbol{\kappa}), \theta^i \rangle-A_i(\theta^i)\right).
\end{align*}
Then it follows by simple calculations that
\begin{align}
&\frac{\partial \log\mathfrak{g}(\boldsymbol{\kappa}; \eta) }{\partial \theta^i}=\frac{w_i \mathfrak{q}(\boldsymbol{\kappa}; \theta^i)}{\mathfrak{g}(\boldsymbol{\kappa}; \eta)}\frac{\partial \log\mathfrak{q}(\boldsymbol{\kappa}; \theta^i) }{\partial \theta^i},\\
&\frac{\partial \log\mathfrak{g}(\boldsymbol{\kappa}; \eta) }{\partial \lambda_i}=\frac{\frac{ \mathfrak{q}(\boldsymbol{\kappa}; \theta^i)}{\mathfrak{g}(\boldsymbol{\kappa}; \eta)}-1}{(1+\lambda_i^2)\sum_{j=1}^M \left( \pi/2+\arctan\lambda_j\right)}, \,\, i=1, 2, \cdots, M-1.
\end{align}
\begin{remark}
The parameter dimension of the mixed Gaussian exponential family is $M-1+M(d+d(d+1)/2)$.
\end{remark}
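The arctan reparametrization and the $\lambda$-gradient formula above are easy to validate numerically: the induced weights are positive and sum to one for any $\lambda\in\mathbb{R}^M$ with $\lambda_M=0$, and the analytic $\partial\log\mathfrak{g}/\partial\lambda_1$ matches a finite-difference quotient. The two-component 1D Gaussian mixture below is our own illustration with arbitrary values:

```python
import numpy as np

def weights(lam):
    """Unconstrained lambda -> simplex weights via the arctan map."""
    a = np.pi / 2 + np.arctan(np.asarray(lam, float))
    return a / np.sum(a)

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def log_mix(x, lam, means, sds):
    return np.log(np.sum(weights(lam) * gauss(x, means, sds)))

lam = np.array([0.7, 0.0])            # lambda_M = 0 removes the redundancy
means, sds = np.array([-1.0, 2.0]), np.array([1.0, 0.5])
x = 0.3

w = weights(lam)
g = np.sum(w * gauss(x, means, sds))
S = np.sum(np.pi / 2 + np.arctan(lam))
analytic = (gauss(x, means[0], sds[0]) / g - 1.0) / ((1.0 + lam[0] ** 2) * S)

eps = 1e-6
lam_p = lam.copy(); lam_p[0] += eps
lam_m = lam.copy(); lam_m[0] -= eps
numeric = (log_mix(x, lam_p, means, sds) - log_mix(x, lam_m, means, sds)) / (2 * eps)

print(w.sum(), abs(analytic - numeric))  # 1.0 and a tiny discrepancy
```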
\section{Approximation of HDE}
In the HDE \eqref{hde1}, the posterior density appears inside the integrals, so the corresponding numerical evaluations become complicated.
Therefore, it is necessary to derive an approximate version of \eqref{hde1} for use in practical numerical simulations.
For a partition $0=t_0<t_1<\cdots<t_n=1$, we have from the Taylor expansion of $\eta(t)$
\begin{align}
\eta(t_{i+1})=\eta(t_i)+\eta'(t_i)\Delta t_i+o(\Delta t_i), \,\, i=0, \cdots, n-1,
\end{align}
where $\Delta t_i=t_{i+1}-t_i$. Truncating the Taylor expansion after the linear term, we obtain
\begin{align}\label{tay1}
\eta_{i+1}=\eta_{i}+\eta'_i \Delta t_i,
\end{align}
where $\eta_i$ and $\eta'_i$ are the approximations of $\eta(t_i)$ and $\eta'(t_i)$ respectively. As an approximation of $\eta'(t_i)$, we can take
\begin{align}\label{hdel1}
\eta'_i=-\tilde{G}_{\eta\eta}^{-1}(\eta_i, t_i)\tilde{G}_{\eta t}(\eta_i, t_i),
\end{align}
where $\tilde{G}_{\eta\eta}(\eta_i, t_i)$ and $\tilde{G}_{\eta t}(\eta_i, t_i)$ are some approximations of $G_{\eta\eta}(\eta(t_i), t_i)$ and $G_{\eta t}(\eta(t_i), t_i)$ respectively.
Let us examine the two deviation measures, i.e., the Kullback-Leibler divergence and the squared Hellinger metric.
For the Kullback-Leibler divergence, we have
\begin{align}
G(\eta, t)=\int \mathfrak{p}(\boldsymbol{\kappa}, t)\log\frac{\mathfrak{p}(\boldsymbol{\kappa}, t)}{\mathfrak{g}(\boldsymbol{\kappa}; \eta)}d\boldsymbol{\kappa}.
\end{align}
It follows that
\begin{align}
&G_\eta=-\int \mathfrak{p}(\boldsymbol{\kappa}, t)\frac{\partial \log \mathfrak{g}(\boldsymbol{\kappa}; \eta)}{\partial \eta}d\boldsymbol{\kappa},\\
&
\begin{aligned}
G_{\eta\eta}&=-\int \mathfrak{p}(\boldsymbol{\kappa}, t)\frac{\partial^2 \log \mathfrak{g}(\boldsymbol{\kappa}; \eta)}{\partial \eta^2}d\boldsymbol{\kappa}\\
&=-\int \mathfrak{p}(\boldsymbol{\kappa}, t) \frac{\mathfrak{g}(\boldsymbol{\kappa}; \eta)\frac{\partial^2 \mathfrak{g}(\boldsymbol{\kappa}; \eta)}{\partial\eta^2}-\frac{\partial \mathfrak{g}}{\partial\eta}\frac{\partial \mathfrak{g}}{\partial\eta}^\top}{\mathfrak{g}(\boldsymbol{\kappa}; \eta)^2}d\boldsymbol{\kappa},
\end{aligned}
\\
&
\begin{aligned}
G_{\eta t}&=\int \Phi(\boldsymbol{\kappa}; \boldsymbol{\tilde{y}}) \mathfrak{p}(\boldsymbol{\kappa}, t)\frac{\partial \log \mathfrak{g}(\boldsymbol{\kappa}; \eta)}{\partial \eta}d\boldsymbol{\kappa}\\
&=\int \Phi(\boldsymbol{\kappa}; \boldsymbol{\tilde{y}}) \mathfrak{p}(\boldsymbol{\kappa}, t)\frac{\frac{\partial \mathfrak{g}(\boldsymbol{\kappa}; \eta)}{\partial \eta}}{\mathfrak{g}(\boldsymbol{\kappa}; \eta)}d\boldsymbol{\kappa}.
\end{aligned}
\end{align}
When we implement the iteration \eqref{tay1} with \eqref{hdel1}, the density $\mathfrak{p}(\boldsymbol{\kappa}, t)$ at $t_i$ has been approximated by $\mathfrak{g}(\boldsymbol{\kappa}; \eta_i)$.
Therefore, we can take the approximations of $G_{\eta\eta}(\eta(t_i), t_i)$ and $G_{\eta t}(\eta(t_i), t_i)$ as
\begin{align}\label{g11}
\begin{aligned}
\tilde{G}_{\eta\eta}(\eta_i, t_i)&=-\int \frac{\mathfrak{g}(\boldsymbol{\kappa}; \eta)\frac{\partial^2 \mathfrak{g}(\boldsymbol{\kappa}; \eta)}{\partial\eta^2}-\frac{\partial \mathfrak{g}}{\partial\eta}\frac{\partial \mathfrak{g}}{\partial\eta}^\top}{\mathfrak{g}(\boldsymbol{\kappa}; \eta)}\bigg |_{\eta_i}d\boldsymbol{\kappa}\\
&=\int \frac{\partial \log\mathfrak{g}}{\partial\eta} \frac{\partial \log\mathfrak{g}}{\partial\eta} ^\top \mathfrak{g} \bigg |_{\eta_i}d\boldsymbol{\kappa}-\int \frac{\partial^2\mathfrak{g}}{\partial\eta^2} \bigg |_{\eta_i}d\boldsymbol{\kappa}\\
&=\int \frac{\partial \log\mathfrak{g}}{\partial\eta} \frac{\partial \log\mathfrak{g}}{\partial\eta} ^\top \mathfrak{g} \bigg |_{\eta_i}d\boldsymbol{\kappa}:=I(\eta_i)
\end{aligned}
\end{align}
and
\begin{align}
\begin{aligned}
&\tilde{G}_{\eta t}(\eta_i, t_i)=\int \Phi(\boldsymbol{\kappa}; \boldsymbol{\tilde{y}}) \frac{\partial \mathfrak{g}(\boldsymbol{\kappa}; \eta)}{\partial \eta}\bigg |_{\eta_i}d\boldsymbol{\kappa}.
\end{aligned}
\end{align}
In \eqref{g11}, the term $I(\eta)$ is the Fisher information matrix when $\mathfrak{g}(\boldsymbol{\kappa}; \eta)$ is a probability density.
For the squared Hellinger metric, we obtain the corresponding expressions
\begin{align}
&\tilde{G}_{\eta\eta}(\eta_i, t_i)=\frac{1}{4}I(\eta_i),\\
&\tilde{G}_{\eta t}(\eta_i, t_i)=\frac{1}{4}\int \Phi(\boldsymbol{\kappa}; \boldsymbol{\tilde{y}}) \frac{\partial \mathfrak{g}(\boldsymbol{\kappa}; \eta)}{\partial \eta}\bigg |_{\eta_i}d\boldsymbol{\kappa}.
\end{align}
Therefore, for both cases, the approximated homotopy difference equation is
\begin{align}\label{hde4.1}
\eta_{i+1}=\eta_i-\eta'_i\Delta t_i,
\end{align}
where $\eta'_i=I(\eta_i)^{-1}\int \Phi(\boldsymbol{\kappa}; \boldsymbol{\tilde{y}}) \frac{\partial \mathfrak{g}(\boldsymbol{\kappa}; \eta)}{\partial \eta}\mid_{\eta_i}d\boldsymbol{\kappa}$. The corresponding approximated homotopy differential equation can be written as
\begin{align}
\frac{d\tilde{\eta}}{dt}=-I^{-1}(\tilde{\eta})\int \Phi(\boldsymbol{\kappa}; \boldsymbol{\tilde{y}})\frac{\partial\mathfrak{g}}{\partial\tilde{\eta}}d\boldsymbol{\kappa}.
\end{align}
\section{Numerical examples}
The homotopy Bayes algorithm is easy to implement; we summarize it in Algorithm \ref{alg1}.
\begin{algorithm}
\caption{Homotopy Bayes algorithm}
\begin{algorithmic}[1]
\State Initial condition: take the initial parameter $\eta_0$ from the prior distribution.
\State For each $t_i$, compute the integrals in \eqref{hde4.1} by the Monte Carlo algorithm, then set $\eta_{i+1}=\eta_i-\eta'_i\Delta t_i$ and $t_{i+1}=t_i+\Delta t_i$.
\State Stop when $t=1$.
\end{algorithmic}
\label{alg1}
\end{algorithm}
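As a minimal end-to-end illustration of Algorithm \ref{alg1} (our own toy example, not one of the test problems below), consider a scalar Gaussian prior $N(0, 1)$, a single observation $\tilde{y}=\kappa+\Xi$ with $\Xi\sim N(0, \delta^2)$, and a Gaussian approximating family parametrized by $\eta=(m, r)$ with precision $r^2$. In this conjugate setting the two integrals in \eqref{hde4.1} reduce to closed-form expressions (derived by hand from $\Phi(\kappa; \tilde{y})=(\tilde{y}-\kappa)^2/(2\delta^2)$), so the Monte Carlo step can be bypassed and the Euler iteration compared directly with the exact posterior:

```python
# toy conjugate model: prior N(0, 1), one observation y = kappa + noise
m, r = 0.0, 1.0        # eta = (mean m, precision factor r), precision = r^2
delta2 = 0.25          # noise variance delta^2
y = 1.0                # observed value

n = 1000
dt = 1.0 / n           # uniform partition of the homotopy interval [0, 1]
for _ in range(n):
    # closed-form increments of the natural-gradient homotopy flow
    dm = (y - m) / (delta2 * r ** 2)
    dr = 1.0 / (2.0 * delta2 * r)
    m += dt * dm
    r += dt * dr

# exact conjugate posterior at t = 1 for comparison
post_prec = 1.0 + 1.0 / delta2            # prior precision + 1/delta^2 = 5
post_mean = (y / delta2) / post_prec      # = 0.8
print(m, r ** 2, post_mean, post_prec)
```

In non-conjugate problems the same iteration applies, with the expectations estimated by Monte Carlo samples drawn from $\mathfrak{g}(\cdot; \eta_i)$, exactly as in Algorithm \ref{alg1}.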
\subsection{Inverse heat conduction problem}
We consider the reconstruction of $\boldsymbol{\kappa}$ from measurements of the solution $u$ governed by the following system
\begin{align}\label{eqn1}
\begin{aligned}
&\nabla\cdot(\boldsymbol{\kappa}\nabla u)=0 \,\,\text{in}\,\, [0, 1]\times[0, 0.6],\\
&u(0, x_2)=u(1, x_2)=0, \,\, u(x_1, 0.6)=T=200, \,\, -\kappa_0\frac{\partial u}{\partial x_2}=2000.
\end{aligned}
\end{align}
This example was originally presented by Nagel and Sudret \cite{Nagel} (see also \cite{Wagner}). We solve the same problem with the homotopy Bayesian approach and investigate the performance of the algorithm.
As displayed in Fig. \ref{fig1}, the background thermal conductivity is denoted by $\kappa_0$, which is known, while the conductivities of the material inclusions are denoted by $\kappa_1$ and $\kappa_2$, respectively.
We consider the inverse heat conduction problem (IHCP) that is posed when the thermal conductivities ${\boldsymbol{\kappa}}=(\kappa_1, \kappa_2)^\top$ are unknown and their inference is intended.
With this in mind, $N$ measurements ${\boldsymbol{\tilde{y}}}=(u(\boldsymbol{x}_1), \cdots, u(\boldsymbol{x}_N))^\top$ of the temperature field at the measurement locations $(\boldsymbol{x}_1, \boldsymbol{x}_2, \cdots, \boldsymbol{x}_N)^\top$ are available. The forward model $\mathcal{G}: \boldsymbol{\kappa}\rightarrow \tilde{\boldsymbol{y}}$ is established by a finite element discretization of \eqref{eqn1}. With the discretized model, the measured temperatures $\boldsymbol{\tilde{y}}$ are generated by adding synthetic noise to the numerical solutions as follows
\begin{align}
\boldsymbol{\tilde{y}}=\boldsymbol{y}+\Xi=\mathcal{G}(\boldsymbol{\kappa})+\Xi,
\end{align}
where $\Xi\sim N(0, \delta^2 I)$. The prior is set to a multivariate lognormal distribution $\mathfrak{q}(\boldsymbol{\kappa})=\prod_{i=1}^2 \mathfrak{q}(\kappa_i)$ with independent marginals $\mathfrak{q}(\kappa_i)=LN(\kappa_i|\varkappa_0, \sigma_0)$, where $\varkappa_0=30$ and $\sigma_0=6$.
Parameters $\varkappa_0$ and $\sigma_0$ describe the mean and standard deviation of the lognormal prior. They are related to the parameters of the associated normal distribution $N(\log(\kappa_i)|\lambda_0, \zeta_0^2)$
via $\varkappa_0=\exp(\lambda_0+\zeta_0^2/2)$ and $\sigma_0^2=(\exp(\zeta_0^2)-1)\exp(2\lambda_0+\zeta_0^2)$.
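These moment relations can be inverted in closed form, $\zeta_0^2=\log(1+\sigma_0^2/\varkappa_0^2)$ and $\lambda_0=\log\varkappa_0-\zeta_0^2/2$; a quick round-trip check for the values $\varkappa_0=30$ and $\sigma_0=6$ used here:

```python
import math

kappa0, sigma0 = 30.0, 6.0

# invert the lognormal moment relations
zeta2 = math.log(1.0 + sigma0 ** 2 / kappa0 ** 2)
lam0 = math.log(kappa0) - zeta2 / 2.0

# map back to the lognormal mean and standard deviation
mean_back = math.exp(lam0 + zeta2 / 2.0)
sd_back = math.sqrt((math.exp(zeta2) - 1.0) * math.exp(2.0 * lam0 + zeta2))

print(lam0, zeta2)         # ~3.3816 and ~0.0392
print(mean_back, sd_back)  # recovers 30.0 and 6.0
```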
The unknown parameters are represented as $\kappa_i=\exp(\lambda_0+\zeta_0\xi_i)$ in terms of the standardized variables $\xi_i\in\mathbb{R}$ with Gaussian weight functions $N(\xi_i|0, 1)$. In the computation process, we take a uniform partition of the homotopy parameter interval
$[0, 1]$. In this 2D IHCP, we take $\Delta t_i=0.01$. The integrals in \eqref{hde4.1} are computed by Monte Carlo numerical integration. For the noise level $\delta=0.25$, we display the numerical results in Fig. \ref{fig3}. The left panel of Fig. \ref{fig3} shows the function $\kappa=\exp(\lambda_0+\zeta_0\xi)$, where $\xi$ is estimated by the proposed algorithm; the right panel shows the true posterior density.
We also consider the IHCP with six unknown conductivities; for the corresponding setup see Fig. \ref{fig2}. The unknown conductivities $\boldsymbol{\kappa}=(\kappa_1, \cdots, \kappa_6)^\top$ are inferred from $N=20$ noisy measurements
$\boldsymbol{\tilde{y}}=(u(\boldsymbol{x}_1), \cdots, u(\boldsymbol{x}_{20}))^\top$ at the measurement locations $(\boldsymbol{x}_1, \boldsymbol{x}_2, \cdots, \boldsymbol{x}_{20})^\top$. The noise level $\delta$ is taken as $0.05$. We display the reconstructed probability density in Fig. \ref{fig4}.
\subsection{Inverse acoustic obstacle scattering}
We apply the proposed algorithm to an inverse acoustic scattering problem with a sound-soft obstacle. We consider the scattering by long cylindrical obstacles with cross sections $\Omega\subset\mathbb{R}^2$
that are starlike with respect to the origin.
Mathematically, we assume that $\Omega\subset\mathbb{R}^2$ is a bounded,
simply connected domain with $C^2$ boundary $\partial\Omega$. Then $\partial\Omega$ can be uniquely represented by a periodic function $r: [0, 2\pi)\rightarrow\mathbb{R}^+$:
\begin{align}\label{parq1}
\partial\Omega:=\left\{r(s)(\cos s, \sin s)=\exp(q(s))(\cos s, \sin s): \,\, s\in[0, 2\pi)\right\},
\end{align}
where $q(s)=\log r(s)$, $0<r(s)<r_{\max}$.
For a given incident plane wave
\begin{align}
u^{\rm i}(\boldsymbol{x}):=\exp(\mathfrak{i}k\boldsymbol{x}\cdot \boldsymbol{d}), \,\, \boldsymbol{x}\in\mathbb{R}^2,\,\, \boldsymbol{d}\in\mathbb{S}:=\{\hat{\boldsymbol{x}}\in\mathbb{R}^2\big| |\hat{\boldsymbol{x}}|=1\},
\end{align}
where $k>0$ is the wavenumber, $\mathfrak{i}=\sqrt{-1}$ and $\boldsymbol{d}:=(\cos\varsigma, \sin\varsigma)$ is the direction, the scattering problem is to find the scattered field $u^{\rm s}$, or the total field $u=u^{\rm i}+u^{\rm s}$, such that
\begin{align}\label{sca1}
\begin{aligned}
&\Delta u+k^2u=0, &\text{in}\,\, \mathbb{R}^2\backslash\bar{\Omega},\\
&u=0, &\text{on}\,\, \partial\Omega,\\
&\lim_{r\rightarrow\infty}\sqrt{r}\left(\frac{\partial u^{\rm s}}{\partial r}-\mathfrak{i}ku^{\rm s}\right)=0, & \text{Sommerfeld radiation condition}.
\end{aligned}
\end{align}
It is well-known from the Sommerfeld radiation condition that the scattered field admits the following asymptotic expansion
\begin{align}
u^{\rm s}(\boldsymbol{x}, \boldsymbol{d})=\frac{\exp(\mathfrak{i}\frac{\pi}{4})}{\sqrt{8k\pi}}\frac{\exp(\mathfrak{i}kr)}{\sqrt{r}}\left\{u^\infty(\boldsymbol{\hat{x}}, \boldsymbol{d})+O\left(\frac{1}{r}\right)\right\}\,\, \text{as}\,\, r:=|\boldsymbol{x}|\rightarrow\infty
\end{align}
uniformly for all directions $\boldsymbol{\hat{x}}=\boldsymbol{x}/|\boldsymbol{x}|$. The function $u^\infty(\boldsymbol{\hat{x}}, \boldsymbol{d})$ defined on the unit circle $\mathbb{S}\subset\mathbb{R}^2$ is called the far field pattern of $u^{\rm s}$.
The inverse scattering problem considered in this paper is to determine $\partial\Omega$ from the observation of the far field pattern $u^\infty(\boldsymbol{\hat{x}}, \boldsymbol{d})$.
We first recall the basic facts needed to establish the forward map $\mathcal{G}$. Recall that the fundamental solution $\varPhi(\boldsymbol{x}, \boldsymbol{\tilde{x}})$ of the Helmholtz equation is given by
\begin{align*}
\varPhi(\boldsymbol{x}, \boldsymbol{\tilde{x}})=\frac{\mathfrak{i}}{4}H_0^{(1)}(k|\boldsymbol{x}-\boldsymbol{\tilde{x}}|),
\end{align*}
where $H_0^{(1)}$ is the Hankel function of the first kind of order zero. The single-layer potential operator $\mathcal{S}$ and the double-layer potential operator $\mathcal{K}$ are defined by
\begin{align*}
\begin{aligned}
(\mathcal{S}\varphi)(\boldsymbol{x})=2\int_{\partial\Omega}\varPhi(\boldsymbol{x}, \boldsymbol{\tilde{x}}) \varphi(\boldsymbol{\tilde{x}})ds(\boldsymbol{\tilde{x}}),\,\, \boldsymbol{x}\in\partial\Omega,\\
(\mathcal{K}\varphi)(\boldsymbol{x})=2\int_{\partial\Omega} \frac{\partial\varPhi(\boldsymbol{x}, \boldsymbol{\tilde{x}})}{\partial \nu(\boldsymbol{\tilde{x}})}\varphi(\boldsymbol{\tilde{x}})ds(\boldsymbol{\tilde{x}}),\,\, \boldsymbol{x}\in\partial\Omega,
\end{aligned}
\end{align*}
respectively. From \cite{Colton}, it is known that $\mathcal{S}$ and $\mathcal{K}$ are bounded from $C^{0, \alpha}(\partial\Omega)$ into $C^{1, \alpha}(\partial\Omega)$, $\alpha\in (0, 1)$. In terms of the single- and double-layer potentials, the scattered
field can be written as
\begin{align}
u^{\rm s}(\boldsymbol{x}; \Omega)=\int_{\partial\Omega} \left\{\frac{\partial\varPhi(\boldsymbol{x}, \boldsymbol{\tilde{x}})}{\partial\nu(\boldsymbol{\tilde{x}})}-\mathfrak{i}\tau\varPhi(\boldsymbol{x}, \boldsymbol{\tilde{x}})\right\}\varphi(\boldsymbol{\tilde{x}})ds(\boldsymbol{\tilde{x}}), \,\, \boldsymbol{x}\in\mathbb{R}^2 \backslash \bar{\Omega},
\end{align}
where $\tau$ is a real coupling parameter and $\varphi(\boldsymbol{\tilde{x}})$ is the unknown density function. Then the direct scattering problem is to find the density $\varphi$ such that
\begin{align}\label{sd1}
(\mathcal{I}+\mathcal{K}-\mathfrak{i}\tau\mathcal{S})\varphi=-2u^{\rm i}\,\, \text{on}\,\, \partial\Omega.
\end{align}
There exists a unique solution $\varphi$ satisfying \eqref{sd1} and depending continuously on $u^{\rm i}$ \cite{Colton}. Furthermore, the far field pattern has the following form
\begin{align}\label{sd2}
u^\infty(\boldsymbol{\hat{x}}, \boldsymbol{d})=\frac{\exp(-\mathfrak{i}\frac{\pi}{4})}{\sqrt{8\pi k}}\int_{\partial\Omega} \left(k\nu(\boldsymbol{\tilde{x}})\cdot \boldsymbol{\hat{x}}+\tau\right)\exp(-\mathfrak{i}k\boldsymbol{\hat{x}}\cdot \boldsymbol{\tilde{x}})\varphi(\boldsymbol{\tilde{x}})ds(\boldsymbol{\tilde{x}}).
\end{align}
By combining \eqref{sd1} with \eqref{sd2}, the direct scattering problem can be written as
\begin{align}
u^\infty(\boldsymbol{\hat{x}}, \boldsymbol{d})=\mathcal{G}(\Omega),
\end{align}
where $\mathcal{G}$ is the shape-to-measurement operator. Using the parameterization \eqref{parq1} and taking the noise in measurements into account, the inverse model is given by
\begin{align}
\boldsymbol{\tilde{y}}=\mathcal{G}(q)(\boldsymbol{\hat{x}}, \boldsymbol{d})+\Xi,\,\, (\boldsymbol{\hat{x}}, \boldsymbol{d})\in \Gamma^{\rm o}\times\Gamma^{\rm i}\subset \mathbb{S}\times\mathbb{S},
\end{align}
where $\Gamma^{\rm o}$ is the aperture of the observation and $\Gamma^{\rm i}$ is the aperture of the incident wave.
The prior $q$ is taken as the truncated Fourier series \cite{Li}
\begin{align}\label{tr1}
q_N(s)=\frac{\kappa_0}{\sqrt{2\pi}}+\sum_{n=1}^N \left(\frac{\kappa_n}{n^{\upsilon}}\frac{\cos ns}{\sqrt{\pi}}+\frac{\tilde{\kappa}_n}{n^{\upsilon}}\frac{\sin ns}{\sqrt{\pi}}\right),
\end{align}
where $\kappa_n$ and $\tilde{\kappa}_n$ are i.i.d. (independent and identically distributed) with $\kappa_n, \tilde{\kappa}_n\sim N(0, 1)$ and $\upsilon$ is a positive constant.
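A draw from this prior is straightforward to generate. The sketch below (our own illustration) samples the Fourier coefficients, evaluates $q_N$ on a grid of angles, and confirms that the radial function $r(s)=\exp(q_N(s))$ is positive, so every sampled boundary is a valid starlike curve:

```python
import numpy as np

rng = np.random.default_rng(0)
N, upsilon = 5, 2.2

# i.i.d. standard normal Fourier coefficients
k0 = rng.standard_normal()
kc = rng.standard_normal(N)   # cosine coefficients kappa_n
ks = rng.standard_normal(N)   # sine coefficients   tilde-kappa_n

s = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
q = np.full_like(s, k0 / np.sqrt(2.0 * np.pi))
for n in range(1, N + 1):
    q += (kc[n - 1] * np.cos(n * s) + ks[n - 1] * np.sin(n * s)) \
         / (n ** upsilon * np.sqrt(np.pi))

r = np.exp(q)                  # radial function of the sampled boundary
x1, x2 = r * np.cos(s), r * np.sin(s)
print(r.min(), r.max())        # r(s) > 0 always, since r = exp(q)
```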
Synthetic data are used for the numerical reconstruction:
synthetic Gaussian noise is added to the true forward model, i.e.,
\begin{align}
\boldsymbol{\tilde{y}}=\mathcal{G}(q)(\boldsymbol{\hat{x}}, \boldsymbol{d})+\delta\|\mathcal{G}(q)(\boldsymbol{\hat{x}}, \boldsymbol{d})\|(\xi_1 +\xi_2 \mathfrak{i}),
\end{align}
where $\xi_1, \xi_2$ are i.i.d. standard Gaussian random variables.
In the numerical implementation, we fix the wavenumber $k=1$ and $\delta=0.01$, and take $\upsilon=2.2$, $N=5$ in \eqref{tr1}. Some frequently used test shapes in obstacle scattering are chosen as test examples.
For two incident waves, we collect the full aperture data. The reconstruction results are displayed in Fig. \ref{fig5}.
\begin{itemize}[labelwidth={1em},font=\bfseries,align=left]
\item[(a)] threelobes: $r(s)=0.5+0.25\exp(-\sin 3 s)-0.1\sin s$;
\item[(b)] pear: $r(s)=\frac{5+\sin 3s}{6}$;
\item[(c)] bean: $r(s)=\frac{1+0.9\cos s+0.1\sin 2s}{1+0.75\cos s}$;
\item[(d)] peanut: $r(s)=0.4\sqrt{4\cos^2 s+\sin^2 s}$;
\item[(e)] acorn: $r(s)=\frac{3}{5}\sqrt{\frac{17}{4}+2\cos 3s}$;
\item[(f)] roundedtriangle: $r(s)=2+0.5\cos 3s$;
\item[(g)] roundrect: $r(s)=( \cos^4 s+(2/3\sin s)^4 )^{-1/4}$;
\item[(h)]
kite: $x_1=\cos s+0.65\cos 2s-0.65$, $ x_2= 1.5\sin s$.
\end{itemize}
%
\begin{figure}
\centering
\begin{tikzpicture}
\draw[->, thick] (-3,-2)--(-1,-2) node[right]{$x_1$};
\draw[->, thick] (-2.5,-2.2)--(-2.5,-1) node[above]{$x_2$};
\draw[thick] (-1.5,-1.5) rectangle (4,2);
\draw [thick] (-1.5,-1.5) -- (-1.75,-1.75);
\draw [thick] (-1.5,-1.15) -- (-1.75,-1.4);
\draw [thick] (-1.5,-0.8) -- (-1.75,-1.05);
\draw [thick] (-1.5,-0.45) -- (-1.75,-0.7);
\draw [thick] (-1.5,-0.1) -- (-1.75,-0.35);
\draw [thick] (-1.5,0.25) -- (-1.75,0);
\draw [thick] (-1.5,0.6) -- (-1.75,0.35);
\draw [thick] (-1.5,0.95) -- (-1.75,0.7);
\draw [thick] (-1.5,1.3) -- (-1.75,1.05);
\draw [thick] (-1.5,1.65) -- (-1.75,1.4);
\draw [thick] (-1.5,2) -- (-1.75,1.75);
\draw [thick] (4,2) -- (4.25,1.75);
\draw [thick] (4,1.65) -- (4.25,1.4);
\draw [thick] (4,1.3) -- (4.25,1.05);
\draw [thick] (4,0.95) -- (4.25,0.7);
\draw [thick] (4,0.6) -- (4.25,0.35);
\draw [thick] (4,0.25) -- (4.25,0);
\draw [thick] (4,-0.1) -- (4.25,-0.35);
\draw [thick] (4,-0.45) -- (4.25,-0.7);
\draw [thick] (4,-0.8) -- (4.25,-1.05);
\draw [thick] (4,-1.15) -- (4.25,-1.4);
\draw [thick] (4,-1.5) -- (4.25,-1.75);
\draw [black, fill=gray, opacity=0.5](0.05,0.35) circle (0.5);
\draw [black, fill=gray, opacity=0.8](2.5,0.35) circle (0.5);
\node at (0.05,0.35) {$\kappa_1$};
\node at (2.5,0.35) {$\kappa_2$};
\node at (1.3,0.3) {$\kappa_0$};
\node at (1.3,2.3) {$T$};
\draw[->, thick] (1,-2.2)--(1,-1.6);
\node at (1.3,-1.9) {$q$};
\foreach \i in {-1.05, 0.05, 1.21, 2.5, 3.55} {\draw (\i, 1.3) [fill=black] circle (0.05);}
\foreach \i in {-1.05, 0.05, 1.21, 2.5, 3.55} {\draw (\i, -0.6) [fill=black] circle (0.05);}
\foreach \i in {-1.05, 3.55} {\draw (\i, 0.35) [fill=black] circle (0.05);}
\end{tikzpicture}
\caption{ 2D IHCP: heat conduction setup.}
\label{fig1}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\draw[thick] (0,0) rectangle (8,4);
\foreach \i in {1.8, 4, 6.191} {
\foreach \j in {1.2, 3} {
\fill[black, fill=gray, opacity=(\i+\j)/10] (\i, \j) circle (0.5cm);
}
}
\node at (1.8, 1.2) {$\kappa_{4}$};
\node at (1.8, 3) {$\kappa_{1}$};
\node at (4, 1.2) {$\kappa_{5}$};
\node at (4, 3) {$\kappa_{2}$};
\node at (6.191, 1.2) {$\kappa_{6}$};
\node at (6.191, 3) {$\kappa_{3}$};
\node at (4, 2) {$\kappa_0$};
\foreach \x in {0.7, 1.8, 2.9, 4, 5.1, 6.2, 7.4}
\fill[color=black] (\x, .3) circle (.06cm);
\foreach \x in {0.7, 1.8, 2.9, 4, 5.1, 6.2, 7.4}
\fill[color=black] (\x, 3.7) circle (.06cm);
\foreach \y in {1.1, 2, 2.9}
\fill[color=black] (0.7,\y) circle (.06cm);
\foreach \y in {1.1, 2, 2.9}
\fill[color=black] (7.4,\y) circle (.06cm);
\foreach \y in {0, 0.2, 0.4,0.6,0.8,1,1.2,1.4,1.6,1.8,2,2.2,2.4,2.6,2.8,3,3.2,3.4,3.6,3.8,4}
\draw [thick] (0,\y) -- (-0.2,\y-0.1);
\foreach \y in {0, 0.2, 0.4,0.6,0.8,1,1.2,1.4,1.6,1.8,2,2.2,2.4,2.6,2.8,3,3.2,3.4,3.6,3.8,4}
\draw [thick] (8,\y) -- (8.2,\y-0.1);
\node at (4,4.4) {$T$};
\draw[->, thick] (4,-1)--(4,-0.2);
\node at (4.3,-0.6) {$q$};
\end{tikzpicture}
\caption{6D IHCP: heat conduction setup.}
\label{fig2}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.25]{rec_25_kappa.jpg}
\includegraphics[scale=0.25]{pde_25_kappa.jpg}\\
\includegraphics[scale=0.25]{rec_25_xi.jpg}
\includegraphics[scale=0.25]{pde_25_xi.jpg}
\caption{The numerical results for the 2D IHCP: The reconstruction (left) for $\boldsymbol{\kappa}=\exp(\lambda_0+\zeta_0\boldsymbol{\xi})$ and the true posterior density (right).}
\label{fig3}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.25]{6d_probability.jpg}
\caption{The reconstructed posterior probability density for 6D IHCP.}
\label{fig4}
\end{figure}
\begin{figure}
\centering
\subfigure[Threelobes with incident angles $\varsigma=\frac{\pi}{2}$ and $\varsigma=\frac{3\pi}{2}$.]{\includegraphics[width=4.8cm]{threelobes_2incident_full.eps}}
\subfigure[Pear with incident angles $\varsigma=0$ and $\varsigma=\frac{\pi}{2}$.]{\includegraphics[width=4.8cm]{pear_2incident_full.eps}}\\
\subfigure[Bean with incident angles $\varsigma=0$ and $\varsigma=\frac{\pi}{2}$.]{\includegraphics[width=4.8cm]{bean_2incident_full.eps}}
\subfigure[Peanut with incident angles $\varsigma=0$ and $\varsigma=\frac{\pi}{2}$.]{\includegraphics[width=4.8cm]{peanut_2incident_full.eps}}\\
\subfigure[Acorn with incident angles $\varsigma=0$ and $\varsigma=\frac{\pi}{2}$.]{\includegraphics[width=4.8cm]{acorn_2incident_full.eps}}
\subfigure[Roundedtriangle with incident angles $\varsigma=0$ and $\varsigma=\pi$.]{\includegraphics[width=4.8cm]{roundedtriangle_2incident_full.eps}}\\
\subfigure[Roundrect with incident angles $\varsigma=0$ and $\varsigma=\pi$.]{\includegraphics[width=4.8cm]{roundrect_2incident_full.eps}}
\subfigure[Kite with incident angles $\varsigma=0$ and $\varsigma=\pi$.]{\includegraphics[width=4.8cm]{kite_2incident_full.eps}}
\caption{The numerical reconstructions for inverse scattering problem using full aperture data with two incident waves. The blue curves are the posterior samples drawn from the approximated posterior distribution.}
\label{fig5}
\end{figure}
\bibliographystyle{plain}
\section{Introduction}
Extensive experimental efforts are under way at Brookhaven National
Laboratory and CERN to produce and investigate the new deconfined,
chirally symmetric high-temperature phase of QCD, usually called the
quark-gluon-plasma (QGP). While the very high energy densities generated
in high-energy nuclear collisions virtually guarantee that some new state
of matter is reached, there are still important unresolved theoretical
problems relating to the description of this state. One missing, critical
ingredient is a non-perturbative approach to dynamical QCD processes far
from thermodynamical equilibrium.
The study of non-equilibrium dynamics of relativistic quantum fields is
currently an active area of theoretical research \cite{NEQF}.
Approaches that go beyond perturbation theory include descriptions in
terms of probabilistic transport equations \cite{VBE}, and
deterministic or stochastic classical equations for the infrared
degrees of freedom of the quantum fields \cite{GM97,ASY99}.
In the special, but important
case of non-Abelian gauge theories, the extreme infrared limit has
long been known to correspond to a dynamical system exhibiting classical
as well as quantum chaos \cite{Mat,Sav}. Several years ago, this result
was extended to spatially varying, lattice regulated Yang-Mills fields
by numerical calculation of the maximal Lyapunov exponents and the
complete ergodic Lyapunov spectrum of classical SU(2) gauge theory
\cite{ba,gong1,gong2}. The most intriguing results with implications for
relativistic heavy ion physics are:
\begin{enumerate}
\item The ergodic Lyapunov spectrum looks exactly as expected for
a globally hyperbolic system.
\item The largest Lyapunov exponent appears to be related to the plasmon
damping rate as predicted by high temperature perturbation theory
\cite{Biro}.
\item The magnitude of the maximal Lyapunov exponent for SU(3) indicates
a rapid thermalization of gluons in heavy-ion collisions.
\end{enumerate}
These results suggest the extension of this approach to a systematic
semi-classical description of the dynamics of Yang-Mills field theories.
The success of such an approach will ultimately depend on one's ability
to find practical methods for the application of periodic orbit theory
to systems with many degrees of freedom. We discuss a very first
step in this direction.
We present results of an investigation of the relation between the
Lyapunov exponents of periodic and ergodic orbits. Periodic orbit theory,
in the framework of the thermodynamic formalism, makes detailed predictions
for the statistical properties of Lyapunov exponents of generic orbits
in Anosov systems, but few studies of these relations appear to have
been made for specific chaotic, non-linear dynamical systems. This
motivated our numerical study of the relation between the Lyapunov
exponents of periodic orbits and generic trajectories in a system for
which the Lyapunov exponents of periodic orbits (henceforth simply
called ``periodic Lyapunov exponents'') are known for all orbits below
a certain period: the two-dimensional hyperbola billiard \cite{jb}.
For this system, therefore, powerful mean value theorems can be invoked
to predict analytical relations which can be checked numerically.
Below, we present our numerical results confirming the general connection
between the Lyapunov exponents for ergodic and periodic orbits as well as
for their fluctuations.
We then discuss the corresponding properties of the Lyapunov exponents of
ergodic trajectories for classical SU(2) Yang-Mills theory on a
three-dimensional lattice. The observed similarities suggest that this
system is also globally hyperbolic and could, in principle, be treated
within the framework of periodic orbit theory. Our conjecture yields
a prediction for the fluctuation properties of the ergodic Lyapunov
exponents which is verified numerically. On the basis of the relation
between these fluctuations and the fluctuations of the entropy growth rate
we obtain a prediction of the magnitude of entropy fluctuations as a
function of space-time volume. We find that for the conditions occurring
in high energy nuclear collisions these fluctuations are expected to be
very small, in agreement with observations.
We emphasize that it is presently impossible to predict how
far this approach will carry toward a description of the dynamics of
non-equilibrium processes in QCD. Classical Yang-Mills equations can only
be used to estimate a very limited number of dynamical parameters of the
QGP, namely those which have a well-defined classical limit, such as the
logarithmic entropy growth rate, $d(\ln S)/ dt$, but not quantities such
as the energy or entropy density. Our analysis is only relevant to the
fluctuation properties of such, essentially ``classical'' quantities.
However, we hasten to point out that, independently of the specific
application considered here, an improved understanding of the connection
between quantum field theory and periodic orbit theory is of fundamental
theoretical relevance for non-linear dynamics in general. To our knowledge,
we propose for the first time a general relationship between the mean
periodic Lyapunov exponents of a dynamical system, its mean ergodic Lyapunov
exponents, and the ergodic autocorrelation time. This general relationship
makes it possible to extract important new information for any
higher-dimensional system for which the explicit construction of the
periodic orbits is practically not feasible.
The basic assumption underlying periodic orbit theory is that the periodic
orbits sample the phase space of a non-linear dynamical system in such a
manner that its averaged properties can be systematically reconstructed
from the properties of the periodic orbits. For each such orbit there
is a spectrum of characteristic Lyapunov exponents that describe how fast
the separation between neighboring orbits increases with time. While periodic
orbit theory is an extremely powerful tool, its range
of applicability is strongly limited by the difficulties encountered
in determining the complete set of periodic orbits.
For any field theory with its potentially infinitely many degrees of freedom,
the task of numerically constructing the periodic orbits looks hopeless.
It is, however, relatively easy to obtain ergodic Lyapunov exponents by
numerical integration of the equations of motion \cite{ba}. Since it seems
plausible that every ergodic trajectory eventually comes close to any periodic
orbit, any infinite ergodic orbit should sample all periodic orbits. Thus it
appears as a natural conjecture that the average properties of ergodic
Lyapunov exponents and the average properties of periodic Lyapunov exponents
should be related. It is this relationship that we want to discuss in the
following.
\section{General Relations}
Before we investigate and confirm the relationship between ergodic and
periodic orbits for a simple but non-trivial system for which
all periodic Lyapunov exponents (up to a certain period) are known,
namely, the two-dimensional hyperbola billiard \cite{jb}, we review
some general relations between Lyapunov exponents of periodic and
generic trajectories. In the next section, we will compare these
analytic predictions for the properties of the ergodic Lyapunov exponents
$\lambda_{\rm r}$ with those obtained by numerical integration of a randomly
chosen ergodic trajectory $\vec x(t) = \vec x_0(t)+\delta\vec x(t)$:
\begin{equation}
\lambda_{\rm r} =
\lim_{\delta \vec x(0)\rightarrow 0}\lim_{t\rightarrow \infty} {1\over t}
\ln {\vert\delta \vec x(t)\vert\over \vert\delta \vec x(0)\vert}\ ,
\label{eq1}
\end{equation}
where the index $r$ indicates the random starting point.
(We remind the reader that for a fully ergodic system this yields the
maximal ergodic Lyapunov exponent, which for $d=2$ degrees of freedom is
the unique positive exponent.)
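In practice, the double limit in (\ref{eq1}) is realized by the standard renormalization procedure: evolve a fiducial and a nearby shadow trajectory, and rescale their separation back to $\delta\vec x(0)$ after every step. The following minimal sketch applies this to an unrelated toy system (the Chirikov standard map; the map, kicking strength, and all parameters are illustrative choices, not the billiard studied below):

```python
import math
import random

def max_lyapunov(K, n_steps=50000, d0=1e-9, seed=1):
    """Estimate the maximal Lyapunov exponent of the Chirikov standard map
    by the two-trajectory (Benettin) method: the discrete-time analogue of
    Eq. (1), with the separation renormalized to d0 after every step."""
    rng = random.Random(seed)
    x, p = rng.random(), rng.random()
    xs, ps = (x + d0) % 1.0, p
    log_sum = 0.0
    for _ in range(n_steps):
        # one iteration of the map for the fiducial and shadow trajectories
        p = (p + K / (2 * math.pi) * math.sin(2 * math.pi * x)) % 1.0
        x = (x + p) % 1.0
        ps = (ps + K / (2 * math.pi) * math.sin(2 * math.pi * xs)) % 1.0
        xs = (xs + ps) % 1.0
        # shortest separation on the torus
        dx = (xs - x + 0.5) % 1.0 - 0.5
        dp = (ps - p + 0.5) % 1.0 - 0.5
        d = math.hypot(dx, dp)
        log_sum += math.log(d / d0)
        # renormalize the shadow trajectory back to distance d0
        xs = (x + dx * d0 / d) % 1.0
        ps = (p + dp * d0 / d) % 1.0
    return log_sum / n_steps
```

For strong kicking the estimate approaches the known value $\ln(K/2)$; the exponent here is measured per iteration of the map rather than per unit time.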
In a Hamiltonian hyperbolic dynamical system with $d$ degrees of freedom,
ergodicity implies that the sum of its $d-1$ positive ergodic Lyapunov
exponents can also be obtained as the ergodic mean of the local expansion rate,
\begin{equation}
\lim_{t\rightarrow \infty} h_{\rm r}(t) \equiv
\lim_{t\rightarrow \infty} {1\over t} \int_0^t \chi(\vec x(t'))\ dt'
=\sum_{j=1}^{d-1}\lambda_{r,j} = h_{\rm KS}\ .
\label{eq2}
\end{equation}
Here $h_{\rm KS}$ denotes the Kolmogorov-Sinai entropy and
\begin{equation}
\chi(\vec x(t)) = {d\over dt}\ln\det\left(
{\partial\vec x(t) \over \partial\vec x(0)} \right)_{\rm expanding}
\label{eq3}
\end{equation}
is the local rate of expansion along the trajectory $\vec x(t)$.
Due to the equidistribution of periodic orbits in phase space it is
possible to evaluate the ergodic mean in (\ref{eq2}) by weighted sums
over periodic orbits. In fact, for hyperbolic systems the thermodynamic
formalism allows one to express certain invariant measures on phase space
in terms of averages over periodic orbits, see, e.g., \cite{ParPol,ga}.
One is in particular able to obtain a relation that establishes a direct
connection between the positive ergodic Lyapunov exponents $\lambda_{r,j}$
and those of periodic orbits. Labelling periodic orbits by $\nu$, and
denoting their periods and positive Lyapunov exponents by $T_{\nu}$ and
$\lambda_{\nu,j}$, respectively, this relation reads
\begin{equation}
\sum_{j=1}^{d-1}\lambda_{r,j} = \lim_{t\rightarrow\infty}
\frac{\sum_{t\leq T_\nu\leq t+\varepsilon} \left( \sum_{j=1}^{d-1}
\lambda_{\nu,j}\right) \exp\left( -\sum_{j=1}^{d-1}\lambda_{\nu,j}
T_\nu\right)} {\sum_{t\leq T_\nu\leq t+\varepsilon}
\exp\left( -\sum_{j=1}^{d-1}\lambda_{\nu,j}T_\nu\right)} \ ,
\label{eq4}
\end{equation}
where $\varepsilon>0$ is arbitrary. Within the thermodynamic formalism the
topological pressure $P(\beta)$ was introduced as a useful tool to analyze
invariant measures on phase space in terms of periodic orbits as, e.g., in
(\ref{eq4}). This function can be expressed as
\begin{equation}
P(\beta) = \lim_{t\rightarrow\infty} {1\over t} \ln
\sum_{t\leq T_\nu\leq t+\varepsilon} \exp\left( -\beta \sum_{j=1}^{d-1}
\lambda_{\nu,j}T_\nu\right)\ ,
\label{eq5}
\end{equation}
and it is not difficult to derive from (\ref{eq5}) that
$P(\beta)$ is monotonically decreasing and convex. The exponential
proliferation of the number of periodic orbits immediately implies that
$P(0)=h_{top}$ (topological entropy). Moreover, the arithmetic average of
the sum of the positive periodic Lyapunov exponents is given by
$\bar\lambda=-P'(0)$. The relation (\ref{eq4}) then follows from (\ref{eq2})
and from the non-trivial identity $-P'(1)=h_{\rm KS}$. One also concludes that
the three quantities measuring a mean separation of neighboring trajectories
are ordered in the following way: $\bar\lambda \geq h_{top} \geq h_{\rm KS}$.
For further information see, e.g., \cite{ga}.
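Once a list of orbits in a period window is available, the weighted average on the right-hand side of (\ref{eq4}) is straightforward to evaluate. A schematic sketch with synthetic orbit data (the periods and exponents below are invented placeholders, not the hyperbola-billiard orbits):

```python
import math

def weighted_orbit_average(orbits):
    """Right-hand side of Eq. (4): each orbit nu contributes its summed
    positive Lyapunov exponent sum_j lambda_{nu,j}, weighted by
    exp(-sum_j lambda_{nu,j} * T_nu).  `orbits` is a list of
    (T_nu, [lambda_{nu,1}, ...]) pairs with periods in one window."""
    num = den = 0.0
    for T, lams in orbits:
        s = sum(lams)
        w = math.exp(-s * T)
        num += s * w
        den += w
    return num / den

# synthetic (hypothetical) orbit data in a window of periods near T ~ 20
orbits = [(20.0 + 0.01 * i, [0.55 + 0.01 * ((i * 7) % 5)])
          for i in range(50)]
h_ks_estimate = weighted_orbit_average(orbits)
```

Because orbits with larger exponent sums are exponentially suppressed, the weighted average lies below the arithmetic average of the exponents, in line with the ordering $\bar\lambda \geq h_{\rm KS}$ stated above.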
Our next goal is to investigate the fluctuations of the local rate of
expansion (\ref{eq3}), when integrated up to a sampling time $t_{\rm s}$,
about its ergodic mean (\ref{eq2}). We recall that this quantity was
denoted as $h_{\rm r}(t_{\rm s})$ in (\ref{eq2}).
For (uniformly) hyperbolic dynamical systems one expects that observables
sampled along ergodic trajectories up to time $t_{\rm s}$ show Gaussian
fluctuations about their ergodic mean. Indeed, in many cases a central limit
theorem holds true that also predicts the widths of these Gaussians to scale as
$t_{\rm s}^{-1/2}$ for large sampling times $t_{\rm s}$. More precisely,
Waddington \cite{wa} has shown that for Anosov systems (i.e., fully
hyperbolic systems on compact phase spaces) the difference
\begin{equation}
\sqrt{t_{\rm s}}\left[ h_{\rm r}(t_{\rm s}) - h_{\rm KS} \right]
\label{eq7}
\end{equation}
shows Gaussian fluctuations with variance $P''(1)$ in the limit
$t_{\rm s}\rightarrow\infty$. This means that
\begin{equation}
\Delta h_{\rm r}(t_{\rm s})\sim \sqrt{P''(1)/t_{\rm s}}\ ,\quad
t_{\rm s}\rightarrow\infty\ .
\label{eq8}
\end{equation}
According to (\ref{eq5}) the quantity $P''(1)$ can be
expressed in terms of periodic orbit sums as
\begin{equation}
P''(1) = \lim_{t\to\infty} t
\left[ \frac{\sum_{\nu}\left( \sum_{j}\lambda_{\nu,j}\right)^2
\exp\left( -\sum_{j}\lambda_{\nu,j}T_\nu\right)}
{\sum_{\nu}\exp\left( -\sum_{j}\lambda_{\nu,j}T_\nu\right)} -
\left( \frac{\sum_{\nu}\left(\sum_{j}\lambda_{\nu,j}\right)
\exp\left( -\sum_{j}\lambda_{\nu,j}T_\nu\right)}
{\sum_{\nu}\exp\left( -\sum_{j}\lambda_{\nu,j}T_\nu\right)}
\right)^2 \right] \ .
\label{eq9}
\end{equation}
On the other hand, the variance of the distribution of the periodic Lyapunov
exponents is related to $P''(0)$, since
\begin{equation}
P''(0) = \lim_{t\to\infty} t
\left[ \frac{\sum_{t\leq T_\nu\leq t +\varepsilon}
\left( \sum_{j=1}^{d-1}\lambda_{\nu,j}\right)^2}
{\sum_{t\leq T_\nu\leq t +\varepsilon}1} -
\left( \frac{\sum_{t\leq T_\nu\leq t +\varepsilon}\left(
\sum_{j=1}^{d-1}\lambda_{\nu,j}\right)}
{\sum_{t \leq T_\nu\leq t +\varepsilon}1}\right)^2 \right] \ .
\label{eq10}
\end{equation}
For the hyperbola billiard this variance was calculated numerically by Sieber
\cite{jb}, who found Gaussian distributions of the positive Lyapunov
exponents of periodic orbits with $N$ bounces off the boundary. For large $N$
the widths of these Gaussians scale like
\begin{equation}
\tilde\sigma_N\sim {0.199 \over \sqrt{N}}\ .
\label{eq11}
\end{equation}
Taking into account that the mean length of periodic orbits with $N$ bounces
scales as $\bar t_N\sim 2.027 N$ \cite{jb}, this yields a prediction for the
width of the distribution of periodic Lyapunov exponents expressed as a
function of $t$ that scales as
\begin{equation}
\Delta\lambda_{\nu}(t)\sim {0.283 \over \sqrt{t}}
\label{eq12}
\end{equation}
in the limit of long periodic orbits. One hence concludes that
$P''(0)=0.08$.
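The conversion from the per-bounce width (\ref{eq11}) to the per-time width (\ref{eq12}) and then to $P''(0)$ is a two-line computation:

```python
import math

sigma_N = 0.199       # width per 1/sqrt(N bounces), Eq. (11)
t_per_bounce = 2.027  # mean orbit period per bounce, from Sieber's data

# rewrite 0.199/sqrt(N) with N = t/2.027 as c/sqrt(t), Eq. (12):
c = sigma_N * math.sqrt(t_per_bounce)   # -> 0.283
p2_0 = c ** 2                           # P''(0) = lim t * (Delta lambda)^2
```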
The variance of the fluctuations (\ref{eq7}) can also be related to the
autocorrelation function
\begin{equation}
a (\tau) = \langle \chi(\vec x(\tau))\,\chi(\vec x(0)) \rangle - (h_{\rm KS})^2
\label{eq13}
\end{equation}
of the local ergodic Lyapunov exponents, where $\langle\dots\rangle$ denotes
a phase space average. In order to derive this connection one averages the
square of (\ref{eq7}) over phase space, which then leads to
\begin{equation}
t\,(\Delta h_{\rm r}(t))^2 = {1 \over t}\int_{-t}^{+t}(t-|\tau|)\,a (\tau)
\ d\tau\ .
\label{eq14}
\end{equation}
A connection with the topological pressure can be established because
(\ref{eq7}) and (\ref{eq8}) imply that the autocorrelation function
(\ref{eq13}) vanishes faster than $1/\tau$ as $\tau\to\infty$. One can
therefore perform the limit $t\to\infty$ on both sides of (\ref{eq14}),
yielding
\begin{equation}
P''(1) = \lim_{t\rightarrow\infty} t\,(\Delta h_{\rm r}(t))^2 =
\int_{-\infty}^{+\infty}a (\tau)\ d\tau\ .
\label{eq15}
\end{equation}
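Relation (\ref{eq14}) and its $t\to\infty$ limit (\ref{eq15}) can be checked numerically for any model autocorrelation function. The sketch below uses a Gaussian decay with $t_c=6$, the functional form fitted in Fig.~\ref{fig3}; the amplitude $a_0$ is a purely hypothetical placeholder:

```python
import math

def finite_time_variance(a, t, n=4000):
    """Trapezoidal evaluation of
    t*(Delta h_r(t))^2 = (1/t) * int_{-t}^{t} (t - |tau|) a(tau) dtau  (Eq. 14)."""
    h = 2 * t / n
    total = 0.0
    for i in range(n + 1):
        tau = -t + i * h
        w = 1.0 if 0 < i < n else 0.5   # trapezoid endpoint weights
        total += w * (t - abs(tau)) * a(tau)
    return total * h / t

# model autocorrelation with Gaussian decay; a0 is an assumed amplitude
a0, t_c = 0.1, 6.0
a = lambda tau: a0 * math.exp(-(tau / t_c) ** 2)

var_finite = finite_time_variance(a, t=28.3)
var_infinite = a0 * t_c * math.sqrt(math.pi)  # Eq. (15): full integral of a(tau)
```

Since $a(\tau)\geq 0$ here, the finite-$t$ value lies slightly below the asymptotic integral, approaching it once $t \gg t_c$.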
Finally we want to discuss the probability for deviations of the sum of the
positive ergodic Lyapunov exponents, sampled over time $t$, from its ergodic
mean $h_{\rm KS}$. To this end let $p_t (h)$ denote the probability density
for $h_{\rm r}(t)$ to have a value $h$. Waddington has shown \cite{wa}
that for Anosov systems which are such that $P''(\beta)\neq 0$ for all
$\beta$, this probability density has the form
\begin{equation}
p_t(h) = f(h)\,\sqrt{t}\,\exp(-g(h)t)\ ,
\label{eq17}
\end{equation}
where $f(h)$ is a complicated, though uniquely fixed function. Moreover,
\begin{equation}
g(h)= \sup_{\beta} \{ -h\beta - P(\beta+1) \}
\label{eq18}
\end{equation}
is a strictly convex, non-negative function with a unique minimum at the
ergodic mean $h_{min}=h_{\rm KS}$, where $g(h_{\rm KS})=0$. This means
that for large $t$ the probability of large deviations of $h_{\rm r}(t)$
from the ergodic mean is exponentially small.
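A quadratic model for the pressure makes these properties of $g(h)$ explicit. The sketch below writes the rate function in the Legendre form $g(h)=\sup_\beta\{-h\beta-P(\beta+1)\}$, in which convexity and non-negativity are manifest, and tunes a hypothetical quadratic $P(\beta)$ to the hyperbola-billiard numbers quoted in the text ($h_{\rm KS}=0.575$, $P''=0.08$) together with the normalization $P(1)=0$:

```python
# hypothetical quadratic model of the topological pressure:
#   P''(beta) = p2 (constant),  P'(1) = -h_KS = -0.575,  P(1) = 0
p2 = 0.08
p1 = -0.575 - p2           # from P'(1) = p1 + p2
p0 = -(p1 + 0.5 * p2)      # from P(1) = p0 + p1 + p2/2 = 0

def P(b):
    return p0 + p1 * b + 0.5 * p2 * b * b

def g(h):
    """Rate function g(h) = sup_beta { -h*beta - P(beta+1) }  (Eq. 18).
    For quadratic P the supremum is attained where P'(beta+1) = -h."""
    b = (-h - p1) / p2 - 1.0
    return -h * b - P(b + 1.0)

# g is non-negative with a unique zero at the ergodic mean h_KS:
hs = [0.40 + 0.001 * i for i in range(351)]
h_min = min(hs, key=g)
```

In this model $g(h)=(h-h_{\rm KS})^2/2P''$, so the probability of a deviation $\delta h$ sampled over time $t$ is suppressed as $\exp[-\delta h^2 t/2P'']$, reproducing the Gaussian fluctuations (\ref{eq8}).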
\section{The Two-Dimensional Hyperbola Billiard}
We test the above statements in the two-dimensional hyperbola billiard,
for which all periodic orbits and their Lyapunov exponents are known
up to a certain orbit period \cite{jb}. In order to be able to compare
our numerical results with the analytical predictions, which are based
on periodic orbits in a restricted length range, we have confined the
motion to the arms of the hyperbola billiard, imposing the cut-off
$\vert x\vert, \vert y\vert \leq x_{\rm lim} = 10/\sqrt{2}$ and
reflecting the motion horizontally or vertically at this boundary.
We have not studied the dependence of our results on $x_{\rm lim}$ in
any systematic fashion, but a cursory exploration did not reveal a
significant dependence.
Our numerical result for the KS-entropy was obtained as
$h_{\rm KS} \equiv \lambda_{\rm r} = 0.575$ by exploiting the relation
(\ref{eq1}) for the positive ergodic Lyapunov exponent. In \cite{jb} the
arithmetic average of the periodic Lyapunov exponents and the topological
entropy have been determined numerically as $\bar\lambda = 0.703$ and
$h_{\rm top} = 0.5925$, respectively, so that the ordering
$\bar\lambda \geq h_{\rm top} \geq h_{\rm KS}$ is respected. This provides
a non-trivial test since the general theoretical statement has only been
proven for uniformly hyperbolic systems (Anosov systems) on compact phase
spaces. The hyperbola billiard is only non-uniformly hyperbolic and,
moreover, without the imposed cut-off its phase space fails to be compact.
For the (cut-off) hyperbola billiard we found that the distributions of
the ergodic Lyapunov exponents (\ref{eq1}) that we determined numerically
up to sampling times $t_{\rm s}$ are very well described by Gaussians,
see Fig.~\ref{fig1}, if the sampling time is not too small
($t_{\rm s} \gg 1$). For small sampling times, most of the phase space
divergence occurs during intervals $t_{\rm s}$ when the trajectory
reflects off the hyperbolic boundary, making the distribution of
$h_{\rm r}(t_{\rm s}) \equiv \lambda_{\rm r}(t_{\rm s})$ strongly
non-Gaussian in the limit $t_{\rm s} \to 0$. We made power-law fits
of the form $a t_{\rm s}^{-b}$ to the dependence of the widths of
these Gaussians on $t_{\rm s}$. This gave the result (see Fig.~\ref{fig2}):
\begin{equation}
\Delta h_{\rm r}(t_{\rm s})\approx 0.86\, t_{\rm s}^{-1/2}\ .
\label{eq6}
\end{equation}
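The fit underlying (\ref{eq6}) reduces to linear regression in log-log variables. A sketch on synthetic width data generated from the fitted law itself (the $t_{\rm s}$ grid is arbitrary; in the paper the data points come from Gaussian fits to histograms like those in Fig.~\ref{fig1}):

```python
import math

# synthetic width data following Delta h = 0.86 * t^(-1/2)
data = [(t, 0.86 * t ** -0.5) for t in (4.0, 8.0, 16.0, 32.0, 64.0)]

# least-squares fit of log(w) = log(a) - b*log(t)  <=>  w = a * t^(-b)
xs = [math.log(t) for t, _ in data]
ys = [math.log(w) for _, w in data]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
b_fit = -slope
a_fit = math.exp(ybar + b_fit * xbar)
```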
We also determined the correlation function $a(\tau)$ for the hyperbola
billiard by sampling $h_{\rm r}(t_{\rm s})$ in small intervals
$t_{\rm s} = 1/(2\sqrt{2})$. The result is shown in Fig.~\ref{fig3}.
Clearly, $a(\tau)$ falls off rapidly with a time constant of about
$t_c = 6$. Therefore, we can test the relation (\ref{eq14}) by
integrating the right-hand side numerically. For $t = 28.3$,
corresponding to the lower plot in Fig.~\ref{fig1}, we obtain in this
way the prediction $\Delta h_{\rm r} = 0.197$ with an estimated numerical
uncertainty of about 25\%. The value obtained from the Gaussian fit to
the histogram in Fig.~\ref{fig1} is $\Delta h_{\rm r} = 0.159$.
The quality of this agreement must be judged bearing in mind that the
correlation function $a(\tau)$, as well as the distribution of
$\chi(\vec x(t))$, are highly singular for the hyperbola billiard
in the limit $\tau\to 0$.
\section{The SU(2) gauge theory on a lattice}
Let us now turn to a comparison with results obtained for ergodic
orbits in the classical SU(2) Yang-Mills theory regularized on a
lattice. In \cite{gong2} the complete (positive) Lyapunov spectra were
obtained for lattice volumes $L^3$ with $L=1,2,3$. We have extended
these calculations to lattices of size $L=4,6$. All our calculations
were performed for an average energy per plaquette $E_{\rm p}\approx 1.8$.
For sufficiently long trajectories and fixed energy per lattice site the
Lyapunov spectrum has a unique shape, independent of the lattice size,
as shown in Fig.~\ref{fig4}. Indeed, for a completely hyperbolic system,
physical intuition requires that the Kolmogorov-Sinai entropy $-P'(1)$
is an extensive quantity. For this to be true, the sum over all positive
Lyapunov exponents must scale like the lattice volume $L^3$ and the shape of
the distribution of Lyapunov exponents must be independent of $L$.
Figure \ref{fig4} confirms this expectation.
In Fig.~\ref{fig5} we show distributions of the
sum over positive Lyapunov exponents as a function of the
length of the sampled ergodic trajectories (obtained as a function of
the sampling time $t_{\rm s}$ on a single, very long trajectory). Obviously,
the distributions are nicely fitted by Gaussians whose widths decrease
like $1/\sqrt{t_{\rm s}}$ (see Fig.~\ref{fig6}). This behavior is identical
to that of the two-dimensional hyperbolic system studied before
(cf.~Fig.~\ref{fig1}). We also determined again the autocorrelation
function $a(\tau)$ defined in (\ref{eq13}) by sampling the distribution
$p_t(h)$ with small time steps (see top part of Fig.~\ref{fig7}).
For the $L=4$ lattice the result is shown in the lower part of
Fig.~\ref{fig7}. This allows us to test the relation (\ref{eq14})
connecting $a(\tau)$ with the variance of the ergodic Lyapunov
exponents. Using (\ref{eq14}) we obtain the value $\Delta h_{\rm r}=0.88$
for $t_{\rm s}=6$, whereas the Gaussian fit to the sampled distribution
shown in the top part of Fig.~\ref{fig5} is $\Delta h_{\rm r}=0.83$.
One can also read off from the distributions shown in Fig.~\ref{fig5} how
the widths of the Gaussians scale with the lattice size $L$. To a very good
approximation we find that they are proportional to $\sqrt{L^3}$. If one
includes the sampling time dependence, the width of the distribution of
$h_{\rm r}$ scales like $\sqrt{L^3/t_{\rm s}}$. As the mean value $h_{\rm KS}$ of the
distribution $p_t(h)$ scales like $L^3$, this result confirms the
Gaussian nature of the fluctuations. Our result also has
important consequences for heavy-ion collisions. If fluctuations
are Gaussian with a dimensional scale given by the mean maximal ergodic
Lyapunov exponent, which is found numerically to be of order
(0.5 fm)$^{-1}$ \cite{ba}, then for typical volumes and reaction times
encountered in nuclear reactions the relative fluctuations must be very
small, of order $\sqrt{(0.5\,{\rm fm})^4/(5\,{\rm fm})^4}=0.01$.
This result is in agreement with a recent measurement of the fluctuations
in relativistic heavy-ion collisions, which show that the primary
event-by-event fluctuations in the mean value of the transverse momentum
do not exceed 1 percent \cite{fluc1}.
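The scalings invoked above can be checked with a deliberately crude toy model (invented numbers, not the lattice data): treat $h_{\rm r}$ as a sum of $L^3$ independent site contributions, each time-averaged over $t_{\rm s}$, so that the mean grows like $L^3$ while the width grows like $\sqrt{L^3/t_{\rm s}}$; the quoted $1\%$ estimate is the same arithmetic:

```python
import math
import random

rng = random.Random(0)

def sample_h(L, t_s, n_samples=2000):
    """Toy model: h is a sum of n = L^3 independent site contributions,
    each a time average over t_s of unit-variance local rates, so
    mean(h) ~ L^3 and std(h) ~ sqrt(L^3 / t_s)."""
    n = L ** 3
    vals = [sum(rng.gauss(1.0, 1.0 / math.sqrt(t_s)) for _ in range(n))
            for _ in range(n_samples)]
    mean = sum(vals) / n_samples
    var = sum((v - mean) ** 2 for v in vals) / (n_samples - 1)
    return mean, math.sqrt(var)

m2, s2 = sample_h(2, 4.0)   # mean ~ 8,  width ~ sqrt(8/4)  = 1.41
m4, s4 = sample_h(4, 4.0)   # mean ~ 64, width ~ sqrt(64/4) = 4.0

# the relative-fluctuation estimate quoted in the text:
rel = math.sqrt(0.5 ** 4 / 5.0 ** 4)   # = (0.5/5)^2 = 0.01
```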
Let us stress that while it is consistent to assume that the
SU(2) gauge theory treated as a classical field theory on the lattice
is a hyperbolic system, our positive evidence is limited. It should
be clear that it is impossible to exclude, by numerical calculations
for a limited number of trajectories, that there are regions in the
high-dimensional phase space of our lattice field theory which are
not hyperbolic. (Then the SU(2) field on the lattice would not be an
Anosov system.) Also it is unproven, though highly probable, that
the addition of quarks will not change the picture.
\section{Conclusions}
We have shown by numerical simulations that for a two-dimensional
billiard the mean values for the ergodic and periodic Lyapunov
exponents and their fluctuations as a function of trajectory length
(i.e. time) are closely related. We have derived a general relation
between their mean values and checked it numerically.
This demonstrates that we understand the relationship between
ergodic and periodic Lyapunov exponents for the hyperbola billiard.
We have then analyzed in a similar way classical SU(2) gauge theory
on a lattice. For all investigated properties we found good agreement
with the expectations for a globally hyperbolic (Anosov) system.
We conclude that for all quantities of interest which have a well-defined
classical limit (like the growth rate of entropy after the initial
energy deposition by hard interactions) the probability for large
fluctuations should be exponentially small. For typical high-energy
heavy-ion collisions (Pb+Pb) such fluctuations are estimated to be
at most of the order of a few percent.
\section*{Acknowledgments}
We thank T. Guhr and M. Brack for very helpful discussions.
B.M. acknowledges support by the Alexander von Humboldt-Stiftung
(U.S. Senior Scientist Award) and by a grant from the U.S. Department
of Energy (DE-FG02-96ER40495). A.S. acknowledges support by GSI and DFG.
We also acknowledge computational support by the NC Supercomputing
Center and the Intel Corporation.
\begin{thebibliography}{99}
\bi{NEQF} D. Boyanovsky, H.J. de Vega, R. Holman, S. Prem Kumar,
R.D. Pisarski, and J. Salgado, preprint hep-ph/9810209;
C. Wetterich, {\sl Phys. Rev. E {\bf 56}}, 2687 (1997);
F. Cooper, S. Habib, Y. Kluger, E. Mottola, J.P. Paz, and P.R. Anderson,
{\sl Phys. Rev. D {\bf 50}}, 2848 (1994);
E. Calzetta and B.L. Hu, {\sl Phys. Rev. D {\bf 37}}, 2878 (1988).
\bi{VBE} H.T. Elze and U. Heinz, {\sl Phys. Rept. {\bf 183}}, 81 (1989);
P.F. Kelly, Q. Liu, C. Lucchesi, and C. Manuel, {\sl Phys. Rev.
D {\bf 50}}, 4209 (1994).
\bi{GM97} M. Gleiser and R.O. Ramos, {\sl Phys. Rev. D {\bf 50}}, 2441 (1994);
C. Greiner and B. M\"uller, {\sl Phys. Rev. D {\bf 55}}, 1026 (1997).
\bi{ASY99} D. B\"odeker, {\sl Phys. Lett. B {\bf 426}}, 351 (1999) and
preprint hep-ph/9905239;
P. Arnold, D.T. Son, and L.G. Yaffe, {\sl Phys. Rev. D {\bf 59}},
105020 (1999).
\bi{Mat} S.G. Matinyan, G.K. Savvidy, and N.G. Ter-Arutunian Savvidy,
{\sl JETP Lett. {\bf 53}}, 421 (1981) [{\sl Pis'ma Zh. Eksp. Teor. Fiz.
{\bf 34}}, 613 (1981)].
\bi{Sav} G.K. Savvidy, {\sl Nucl. Phys. B {\bf 246}}, 302 (1984).
\bi{ba} B. M\"uller and A. Trayanov, {\sl Phys. Rev. Lett. {\bf 68}}, 3387
(1992); T.S. Bir\'o, C. Gong, B. M\"uller, and A. Trayanov, {\sl Int. J.
Mod. Phys. C {\bf 5}}, 113 (1994).
\bi{gong1} C. Gong, {\sl Phys. Lett. B {\bf 298}}, 257 (1993).
\bi{gong2} C. Gong, {\sl Phys. Rev. D {\bf 49}}, 2642 (1994).
\bi{Biro} T.S. Bir\'o, C. Gong, and B. M\"uller, {\sl Phys. Rev. D {\bf 52}},
1260 (1995).
\bi{jb} M. Sieber, {\sl The Hyperbola Billiard: A Model for the Semiclassical
Quantization of Chaotic Systems}, DESY preprint 91-030; M. Sieber and
F. Steiner, {\sl Physica D {\bf 44}}, 248 (1990).
\bi{ParPol} W. Parry and M. Pollicott, {\sl Zeta Functions and the Periodic
Orbit Structure of Hyperbolic Dynamics, Ast\'erisque} {\bf 187-188} (1990).
\bi{ga} P. Gaspard, {\sl Chaos, Scattering and Statistical Mechanics},
Cambridge University Press (1998).
\bi{wa} S. Waddington, {\sl Ann. Inst. Henri Poincar\'e C {\bf 13}}, 445 (1996).
\bi{fluc1} H. Appelsh\"auser et al. (NA49 collaboration), preprint
hep-ex/9904014.
\end{thebibliography}
\begin{figure}[htb]
\centerline{\mbox{\epsfig{file=fig1.eps,width=0.7\linewidth}}}
\bigskip
\caption{Distribution of the calculated local Lyapunov exponents for ergodic
trajectories of two different lengths $t_{\rm s}$ in the hyperbola billiard.}
\label{fig1}
\end{figure}
\newpage
\begin{figure}[tbh]
\centerline{\mbox{\epsfig{file=fig2.eps,width=0.7\linewidth}}}
\bigskip
\caption{The widths of the Gaussians illustrated in Fig.~\protect\ref{fig1}
as a function of $t_{\rm s}$ together with fits of the form
$at_{\rm s}^{-1/2}$ (solid line) and $at_{\rm s}^{-b}$ with $b=0.45$
(dashed line).}
\label{fig2}
\end{figure}
\newpage
\begin{figure}[htb]
\centerline{\mbox{\epsfig{file=fig3.eps,width=0.7\linewidth}}}
\bigskip
\caption{Temporal autocorrelation for the local Lyapunov exponents
determined along an ergodic trajectory in the two-dimensional hyperbola
billiard. The dashed line is a fit of the form $a\exp(-\tau^2/t_c^2)$
yielding $t_c\approx 6$.}
\label{fig3}
\end{figure}
\newpage
\begin{figure}[htb]
\centerline{\mbox{\epsfig{file=fig4.eps,width=0.7\linewidth}}}
\bigskip
\caption{Distribution of numerically obtained ergodic Lyapunov
exponents for a classical SU(2) gauge theory on lattices of size
$L=2,4,6$. The index $i$ numbers the Lyapunov exponents and the
abscissa is scaled with $L^3$.}
\label{fig4}
\end{figure}
\begin{figure}[htb]
\centerline{\mbox{\epsfig{file=fig5.eps,width=0.7\linewidth}}}
\bigskip
\caption{Distributions of the sum of the positive ergodic Lyapunov
exponents for different values of the trajectory length $t_{\rm s}$ for classical
lattice SU(2) gauge theory on lattices of size $L=4,6$.}
\label{fig5}
\end{figure}
\newpage
\begin{figure}[htb]
\centerline{\mbox{\epsfig{file=fig6.eps,width=0.7\linewidth}}}
\bigskip
\caption{The widths of the Gaussians illustrated in Fig.~\protect\ref{fig5},
scaled with $(t_{\rm s}/L^3)^{1/2}$, as a function of $t_{\rm s}$.}
\label{fig6}
\end{figure}
\newpage
\begin{figure}[htb]
\centerline{\mbox{\epsfig{file=fig7.eps,width=0.7\linewidth}}}
\bigskip
\caption{Top: The distribution of the sum of local expansion rates $h_{\rm r}$
for $L=4$ and a short sampling time $t_{\rm s}=0.1$, together with a
Gaussian fit. Because the globally expanding phase space volume may
be locally contracting, the distribution has a small tail extending
to negative values of $h_{\rm r}$. This tail disappears (becomes
exponentially small) for large values of $t_{\rm s}$.
Bottom: The autocorrelation function $a(\tau)$ for this distribution.}
\label{fig7}
\end{figure}
\newpage
\end{document}
\section{Introduction}
The Relativistic Heavy
Ion Collider (RHIC) at Brookhaven and the Large Hadron Collider (LHC)
at CERN will provide an unprecedented range of energies and luminosities
that will hopefully probe the Quark-Gluon Plasma and Chiral
Phase transitions. The basic picture of the ion-ion collisions in the
energy ranges probed by these accelerators as seen in the center-of-mass frame
(c.m.), is
that of two highly Lorentz-contracted `pancakes' colliding and leaving
a `hot' region at mid-rapidity with a high multiplicity of
secondaries\cite{bjorken}.
At RHIC for $Au+Au$ central collisions with typical luminosity of
$10^{26}\,\text{cm}^{-2}\,\text{s}^{-1}$ and c.m. energy $\approx 200\ \text{GeV}$ per nucleon--nucleon pair, a multiplicity
of 500-1500 particles per unit rapidity in the central rapidity region
is expected\cite{book1,book2,muller}. At LHC for head on $Pb+Pb$ collisions
with
luminosity $10^{27}\,\text{cm}^{-2}\,\text{s}^{-1}$ at c.m. energy $\approx 5\ \text{TeV}$ per nucleon--nucleon pair, the
multiplicity of charged secondaries will be in the range $2000-8000$
per unit rapidity in the central region\cite{muller}. At RHIC and LHC
typical estimates\cite{bjorken,book1,book2,muller,alam,meyer} of energy
densities
and temperatures near the central rapidity region are $\varepsilon \approx 1-10
\, \text{GeV}/\text{fm}^3, T_0 \approx 300-900 \text{\,MeV}$.
Since the lattice estimates\cite{muller,meyer} of the transition temperatures
in QCD, both for the QGP and Chiral phase transitions are $T_c \approx
160-200 \text{\,MeV}$, after the collision the central region will be
at a temperature $T>T_c$. In the usual dynamical scenario that one
\cite{bjorken}
envisages, the initial state cools off via hydrodynamic expansion through the
phase transition down to a freeze-out temperature, estimated to be $T_F \approx
100 \text{\,MeV}$\cite{satz}, at which the mean free-path of the hadrons is
comparable to the size of the expanding system.
The initial state after the collision is strongly out of equilibrium and there
are very few quantitative models to study its subsequent evolution.
There are perturbative and non-perturbative phenomena that contribute
to the processes of thermalization and hadronization. The perturbative
(hard and semihard) aspects are studied via
parton cascade models which assume that at large energies the nuclei can
be resolved into their partonic constituents and the dynamical evolution
can therefore be tracked by following the parton distribution functions
through the perturbative parton-parton
interactions \cite{wang,geigmuller,geiger,eskola,eskola2}. Parton cascade
models
(including screening corrections to the QCD parton-parton cross sections)
predict that thermalization occurs on time scales $\approx 0.5\,
\text{fm}/c$\cite{shuryak}. After thermalization, and provided that the
mean-free
path is much shorter than the typical interparticle separation, further
evolution of the plasma can be described with boost-invariant relativistic
hydrodynamics \cite{bjorken,cooperfry}. The details of the
dynamical evolution {\em from} the parton cascade through hadronization to the
eventual description via hydrodynamics are far from clear and will
require a non-perturbative treatment.
The non-perturbative aspects of particle production and hadronization
typically
envisage a flux-tube of strong color-electric fields, in which the field
energy leads to production of $\bar{q}q$ pairs\cite{biro,tube}. Recently the
phenomenon of pair production from strong electric fields in boost-invariant
coordinates was
studied via non-perturbative methods that address the non-equilibrium
aspects and allow a comparison with hydrodynamics\cite{cooper2}.
The dynamics {\em near} the phase transition is even less understood
and involves physics beyond the realm of perturbative methods. For
instance, considerable
interest
has been sparked recently by the possibility that disoriented chiral
condensates (DCC's) could form during the evolution of the QCD plasma through
the chiral phase transition \cite{anselm1}-\cite{revs}. Rajagopal and
Wilczek\cite{wilraj} have argued that if the chiral phase transition
occurs strongly out of equilibrium, spinodal instabilities\cite{boysinglee}
could lead to the formation and relaxation of large pion domains. This
phenomenon could provide a striking experimental signature of the chiral phase
transition and could provide
an explanation for the Centauro and anti-Centauro (JACEE) cosmic ray
events\cite{lattes}. An experimental program is underway at Fermilab to search
for candidate events\cite{dccexp1,dccexp2}. Most of the
theoretical studies of the dynamics of the chiral phase transition and
the possibility of formation of DCC's have focused on initial
states that are in local thermodynamic equilibrium
(LTE)\cite{gavin}-\cite{boydcc}.
We propose to study the {\em non-equilibrium} aspects
of the dynamical evolution of highly excited initial states by relaxing the
assumption of initial LTE (as would be appropriate for the initial conditions
in a heavy-ion collision). Consider, for example,
a situation where the relevant quantum field theory is prepared in an initial
state with a particle
distribution
sharply peaked in momentum space around $\vec k_0$ and $-\vec k_0$ where
$\vec k_0$ is a particular momentum. This configuration would be envisaged to
describe two `pancakes' or `walls' of quanta moving in opposite
directions with momentum $|\vec k_0|$. In the target frame this
field configuration would be seen as a `wall' of quanta moving towards
the target and hence the name `tsunami'\cite{rob}. Such an initial
state is out
of equilibrium and under time evolution with the proper interacting
Hamiltonian,
non-linear effects should result in a redistribution of
particles, as well as particle production and relaxation. The evolution
of this strongly out of equilibrium initial state would be relevant
for understanding phenomena such as formation and relaxation of
chiral condensates. Starting from such a state and following the complete
evolution of the system thereon is clearly a formidable problem even within
the framework of an effective field theory such as the linear $\sigma$-model.
In this article we consider an even more simplistic initial condition,
where the occupation number of particles
is sharply localized in a thin spherical {\em shell} in momentum space around a
momentum $ |\vec k_0| $,
i.e. a {\em spherically symmetric} version of the `tsunami'
configuration.
The reason for the simplification is purely technical since spherical symmetry
can be used to reduce the number of equations. Although this is a
simplification of the idealized problem, it will be seen below that the
features of the dynamics contain the essential ingredients to help us
gain some understanding of more realistic situations.
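For concreteness, such a shell occupation can be parametrized as a narrow Gaussian in $|\vec k|$. The sketch below constructs one and evaluates the associated particle density $(2\pi^2)^{-1}\int dk\,k^2 n(k)$ on a radial grid; all numbers ($k_0$, width, normalization, units) are hypothetical placeholders, not the parameters used in our analysis:

```python
import math

# hypothetical "tsunami" shell: occupation numbers concentrated
# in a thin spherical shell |k| ~ k0 of width sigma and height N0
k0, sigma, N0 = 2.0, 0.1, 100.0

def n_k(k):
    """Occupation number of the shell configuration (Gaussian profile)."""
    return N0 * math.exp(-((k - k0) / sigma) ** 2)

# particle density: (1/2 pi^2) * int dk k^2 n(k), on a radial grid
dk = 0.001
ks = [i * dk for i in range(1, 5001)]
density = sum(k * k * n_k(k) for k in ks) * dk / (2 * math.pi ** 2)
```

For a thin shell ($\sigma\ll k_0$) the density is approximately $N_0 k_0^2\sigma\sqrt{\pi}/2\pi^2$, so the "high density" regime is reached by making $N_0$ large at fixed shell geometry.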
We consider a weakly coupled $\lambda\Phi^4$ theory ($\lambda\sim 10^{-2}$)
with the fields in the vector representation of the $O(N)$ group.
Anticipating non-perturbative physics, we study the dynamics consistently
in the leading order in the $1/N$ expansion which will allow an analytic
treatment as well as a numerical analysis of the dynamics.
The pion wall
scenario described above is realized by considering an initial state described
by a Gaussian wave functional with a large number of particles at
$|\vec k_0|$; a high density is achieved by taking the number of
particles per correlation volume to be very large. As in finite
temperature field theory, a resummation along the lines of the Braaten
and Pisarski \cite{htl} program must be implemented to take into
account the non-perturbative aspects of the physics in the dense
medium. As will be explicitly shown below, the large $N$ limit in the
case under consideration provides a resummation scheme akin to the
hard thermal loop program \cite{htl}.
The dynamical evolution of this spherically symmetric ``tsunami''
configuration described above reveals many remarkable features: i) In a theory
where the symmetry is spontaneously broken in the absence of a medium, when the
initial state is the $O(N)$-symmetric, high density, ``tsunami''
configuration, we find that there exists a critical
density
of particles depending on the effective (HTL-resummed)
coupling beyond which spinodal instabilities are induced leading to a
{\em dynamical} symmetry breaking. ii) When these instabilities occur, there is
profuse production of low-momentum pions (Goldstone bosons) accompanied by a
dramatic
re-arrangement of the
particle distribution towards low momenta. This distribution is non-thermal
and its asymptotic behavior signals the onset of Bose condensation of pions.
iii) The final equation of state of
the ``pion gas'' asymptotically at long times is ultrarelativistic despite the
non-equilibrium distributions.
The paper is organized as follows: In Section II we introduce the model under
consideration and describe the non-perturbative framework,
namely the large $N$ approximation. Section III is devoted to the
construction
of the wave functional and a detailed description of the initial conditions
for the problem. The dynamical aspects are covered in Section IV. We first
outline some issues dealing with renormalization and then provide a qualitative
understanding of the time evolution using wave functional arguments. We argue
that the system could undergo dynamical symmetry breakdown and provide
analytic estimates for the onset of instabilities. We present the results of
our numerical calculations in Section V which confirm the robust features of
the analytic estimates for a range of parameters. In
Section VI we analyze the details of symmetry breaking and argue that the long
time dynamics can be interpreted as the onset of formation of a Bose
condensate even when the order parameter vanishes.
Finally in Section VII we present our conclusions and future avenues of
study.
\section{The model}
As mentioned in the introduction we consider a
$\lambda\Phi^4$ theory with $O(N)$ symmetry in the large-$N$ limit with the
Lagrangian,
\begin{equation}
{\cal{L}}=\frac{1}{2}(\partial_\mu\vec{\Phi}).(\partial^\mu\vec{\Phi})
-\frac{m_B^2}{2}(\vec{\Phi}\cdot\vec{\Phi})
-\frac{\lambda}{8N}(\vec{\Phi}\cdot\vec{\Phi})^2
\end{equation}
where $\vec{\Phi}$ is an $O(N)$ vector, $\vec{\Phi}=(\sigma,\vec{\pi})$ and
$\vec{\pi}$ represents the $N-1$ pions,
$\vec{\pi}=(\pi^1,\pi^2,...,\pi^{N-1})$. We then shift $\sigma$ by its
expectation value in the non-equilibrium state
\begin{equation}
\sigma(\vec{x},t)=\sqrt{N}\phi(t)+\chi(\vec{x},t)\;\;\;;
\langle \sigma(\vec{x},t)\rangle = \sqrt{N}\phi(t)\;.
\end{equation}
We refer the
interested reader to
several articles which discuss the implementation of the large $N$ limit
(see, e.g., \cite{largen1,largen2,largen3,frw1,noneq,boydiss,erice97}).
The $1/N$ series may be generated by introducing an auxiliary field
$\alpha(x)$ which is an {\em algebraic} function of $\vec{\Phi}^2(x)$, and then
performing the functional integral over $\alpha(x)$ using the saddle point
approximation in the large $N$ limit\cite{largen1,largen2,largen3}. It can be
shown that the leading order terms in the expansion can be
easily obtained by the following Hartree factorisation of the quantum
fields\cite{frw1,noneq,boydiss,erice97},
\begin{eqnarray}
&&\chi^4\rightarrow6\langle\chi^2\rangle\chi^2+\text{constant}
\;\;;\;\chi^3\rightarrow3\langle\chi^2\rangle\chi\label{hart1}\nonumber
\\
&&(\vec{\pi}\cdot\vec{\pi})^2\rightarrow 2\langle\vec{\pi}^2
\rangle\vec{\pi}^2-\langle\vec{\pi}^2\rangle^2 +{\cal O}(1/N)
\label{hart3}\nonumber \\
&&\vec{\pi}^2\chi^2\rightarrow\vec{\pi}^2\langle\chi^2\rangle
+\langle\vec{\pi}^2\rangle\chi^2\;\;;\;\vec{\pi}^2\chi\rightarrow
\langle\vec{\pi}^2\rangle\chi\;.\label{hart2}
\end{eqnarray}
All expectation values are to be computed in the non-equilibrium state.
In the leading order large $N$ limit we then obtain,
\begin{eqnarray}
{\cal L} & = &-\frac{1}{2}\vec{\pi}\cdot(\partial^2+{\cal
M}^2_{\pi}(t))\vec{\pi}
-\frac{1}{2}\chi(\partial^2+{\cal M}^2_{\chi}(t))\chi -\chi V'(\phi(t),t)+
\frac{N \lambda}{8}\langle \pi^2 \rangle^2\;,
\label{lagra} \\
{\cal M}^2_{\pi}(t) & = & m^2_B+ \frac{\lambda}{2} \left[\phi^2(t)+\langle
{\pi}^2 \rangle\right]\;,\label{massoft}\\
{\cal M}^2_{\chi}(t) & = & m^2_B+ \frac{\lambda}{2} \left[3\phi^2(t)+\langle
{\pi}^2 \rangle\right]\;,\label{masschioft}\\
V'(\phi(t),t)& = &\sqrt{N}\left(\ddot{\phi}+
\frac{\lambda}{2}[\phi^2+\langle\pi^2\rangle]\phi+m_B^2\phi\right)\;,
\label{vprime}\\
\langle\pi^2\rangle & = & \langle\vec{\pi}^2\rangle/N\;. \label{expi2}
\end{eqnarray}
This approximation allows us to expand about field configurations that are far
from the perturbative vacuum. In particular it is an excellent tool for
studying the behavior of matter in extreme conditions such as high
temperature or high
density\cite{cooper2,largen1,largen2,largen3,frw1,noneq,boydiss,erice97}.
One way to obtain the non-equilibrium equations of motion is through the
Schwinger-Keldysh Closed Time Path formalism. This is the usual Feynman path
integral defined on a complex time contour which allows the computation of
in-in expectation values as opposed to in-out S-matrix elements. For
details see the references\cite{ctp}.
The Lagrangian density in this formalism is given by
\begin{equation}
{\cal L}_{neq}={\cal L}[\vec{\Phi}^+]-{\cal L}[\vec{\Phi}^-]\;,
\end{equation}
with the fields $\Phi^\pm$ defined along the forward $(+)$ and backward $(-)$
time branches.
The non-equilibrium equations of motion are then obtained by requiring the
expectation
value of the quantum fluctuations in the non-equilibrium state to vanish
i.e. from the tadpole equations\cite{noneq},
\begin{equation}
\langle\chi^\pm\rangle=\langle\vec{\pi}^{\pm}\rangle=0\;.
\end{equation}
In the leading order approximation of the large $N$ limit, the Lagrangian for
the $\chi$ field is
quadratic plus linear and the tadpole equation for the $\chi$ leads to the
equation
of motion for the order parameter or the zero mode
\begin{equation}
\ddot{\phi}+\frac{\lambda}{2}[\phi^2(t)+\langle\pi^2\rangle(t)]\phi(t)+
m_B^2\phi(t)=0\;.
\label{eqnmotion}
\end{equation}
In this leading approximation the non-equilibrium action for the
$N-1$ pions is,
\begin{equation}
\int d^4x[{\cal{L}}_{\pi^+}-{\cal{L}}_{\pi^-}]=\int d^3xdt\left\{(
-\frac{1}{2}{\vec{\pi}^+}
\cdot\partial^2{\vec{\pi}^+}-{\cal{M}}_\pi^{+2}(t)\vec{\pi}^+\cdot\vec{\pi}^+)
-(+\longrightarrow-)\right\}\;.
\label{pionlag}
\end{equation}
We have not written the action for the $\chi$ field fluctuations
because they decouple from the dynamics of the pions in the leading order in
the large
$N$ limit\cite{frw1,noneq}.
Having introduced the model and
the non-perturbative approximation scheme the next step is to
construct an appropriate non-equilibrium initial state or density matrix.
Although one could continue the analysis of the dynamics using the
Schwinger-Keldysh method, we will study the
dynamics in the Schr\"odinger representation in terms of wave-functionals
because this will display the nature of
the quantum states more clearly. We find it convenient
to work with the Fourier-transformed fields defined as,
\begin{equation}
\vec{\pi}(\vec{x},t)=\frac{1}{\sqrt{V}}\sum_{\vec{k}}
e^{-i\vec{k}\cdot\vec{x}}\
\vec{\eta}_{\vec{k}}(t)\;,
\end{equation}
where we have chosen to quantize in a box of finite volume $V$ that will be
taken
to infinity at the end of our calculations. The Hamiltonian
for the pions is given by
\begin{equation}
H_{\pi}=\sum_{\vec{k}}\left(\frac{1}{2}\vec{\Pi}_{\vec{k}}\cdot
\vec{\Pi}_{-\vec{k}}+\frac{1}{2}\omega_k^2(t)\vec{\eta}_{\vec{k}}
\cdot\vec{\eta}_{-\vec{k}}\right)-\frac{N\lambda}{8} \left(\sum_{\vec k}
\langle\vec{\eta}_{\vec{k}} \cdot\vec{\eta}_{-\vec{k}} \rangle\right)^2\;,
\label{hamilt}
\end{equation}
where
\begin{equation}
\omega_k^2(t)=\vec{k}^2+{\cal{M}}^2_{\pi}(t)
\label{omegadef}
\end{equation}
is the effective time dependent frequency and ${\cal{M}}^2_{\pi}(t)$ is given
by Eq.(\ref{massoft}). To leading order in the large
$N$ limit the theory becomes Gaussian and the non-linearities are encoded
in a self-consistency condition, since the frequency (\ref{omegadef})
depends on $\langle \vec{\pi}^2 \rangle$ and this expectation value
is in the time dependent state, as displayed by the set of equations
(\ref{massoft}-\ref{expi2}).
\section{The initial state}
As stated in the introduction, our ultimate goal is to model and study
the non-equilibrium aspects of the evolution of an initial, highly
excited state that relaxes following high energy,
heavy-ion collisions. An idealized description of the associated physics would
be to consider two wave packets made up of very high
energy components representing the heavy ions and moving with a highly
relativistic momentum toward each other. The goal would be to follow
the dynamical evolution of the wavefunctionals corresponding to this
situation, thus clearly elucidating the non-equilibrium features involved
in the phase transition processes following the interactions of the wave
packets. This initial state could be described by a
distribution of particles, sharply peaked around some special values
$\vec{k}_0$ and $-\vec{k}_0$ in momentum space. The evolution of this state
then
follows from the functional Schr\"odinger equation.
Even with the simplification
of a scalar field theory such a program is very ambitious and beyond
the present numerical capabilities. One of the major difficulties is that
selecting one particular momentum breaks rotational invariance and the
evolution equations depend on the direction of wave vectors even in
the Gaussian approximation. (This statement will become clear below).
In this article however, we choose to study a much simpler description of the
initial state which is characterized by a high
density particle distribution in a thin spherical `shell' in momentum space.
We propose an initial particle distribution that has
support concentrated at $|\vec{k}_0|$.
This particular state does not provide the necessary geometry for a
heavy ion collision, however it does describe a situation in which
initially there is a large multiplicity of particles in a small momentum
`shell', there is no special beam-axis and the pions
are distributed equally in all directions {\em with a sharp spatial
momentum}. This is a rotation invariant state that describes a highly
out of equilibrium situation and that will relax during its time evolution (a
spherical ``tsunami'').
\subsection{The Wave Functional:}
Since in the leading order approximation in the large $N$ expansion the theory
has become Gaussian (at the expense of a self-consistency condition),
we choose a Gaussian ansatz for the wave-functional at $t=0$.
The reason for this choice is that upon time evolution this wave functional
will remain Gaussian and will be identified with a
squeezed state functional of pions.
\begin{equation}
\Psi(t=0)=\Pi_{\vec{k}}{\cal{N}}_{k}(0)\exp
\left[-\frac{A_{k}(0)}{2}\;
\vec{\eta}_{\vec{k}}\cdot\vec{\eta}_{-\vec{k}}\right]\;.\label{wavefunc0}
\end{equation}
This state will then evolve according to the
Hamiltonian given in Eq. (\ref{hamilt}) which is essentially a harmonic
oscillator Hamiltonian with self-consistent, time-dependent frequencies. The
functional Schr\"odinger equation is given by
\begin{equation}
i\frac{\partial\Psi}{\partial t}=H\Psi. \label{timedep}
\end{equation}
The last term in the Hamiltonian (\ref{hamilt}), which is independent of the
fields (a time dependent `vacuum energy term'), can be absorbed in
an overall time dependent phase of the wave functional. Removing this
term by a phase redefinition, the functional Schr\"odinger equation
becomes
\begin{equation}
i\dot{\Psi}[\eta]=\sum_{\vec{k}}\left[
-\frac{\hbar^2}{2}\frac{\delta^2}
{\delta\vec{\eta}_{\vec{k}}\delta\vec{\eta}_{-\vec{k}}}
+\omega_{k}^2(t)\; \vec{\eta}_{\vec{k}}\cdot\vec{\eta}_{-\vec{k}}\right]
\Psi[\eta]\label{schrod}
\end{equation}
which then leads to a set of differential equations for the covariance $A_{k}$.
The time dependence of the normalization factors
${\cal N}_k$ is completely determined by that of the $A_{k}$ as
a consequence of unitary time evolution. The state for arbitrary time
$ t $ then takes the form:
\begin{equation}
\Psi(t)=\Pi_{\vec{k}}{\cal{N}}_{k}(t)\exp
\left[-\frac{A_{k}(t)}{2}\;
\vec{\eta}_{\vec{k}}\cdot\vec{\eta}_{-\vec{k}}\right]\; .
\label{wavefunct}
\end{equation}
The evolution equations for
the covariance are obtained by taking the functional derivatives and
comparing powers of $\eta_{\vec k}$ on both sides. We obtain the following
evolution equations\cite{boydcc,frw1}
\begin{eqnarray}
i\dot{A}_{k}(t) & = & A_{k}^2(t)-\omega_{k}^2(t),
\label{riccati}\\
{\cal N}_k(t) & = & {\cal N}_k(0) \exp\left[\int^t_0 A_{Ik}(t')dt'\right]\;,
\label{norma}
\end{eqnarray}
with $A_{k}= A_{Rk}(t)+iA_{Ik}(t)$. The equal time two-point correlation
function in the time evolved non-equilibrium state is given by
\begin{eqnarray}
\langle\vec{\eta}_{\vec{k}}\cdot\vec{\eta}_{-\vec{k}}\rangle
&&=\frac{<\Psi|\; \vec{\eta}_{\vec{k}}\cdot\vec{\eta}_{-\vec{k}}\; |\Psi>}
{<\Psi|\Psi>}\nonumber\\
&&=\frac{\int[{\cal D}\vec{\eta}_{\vec{q}}]\; (\vec{\eta}_{\vec{k}}
\cdot\vec{\eta}_{-\vec{k}})\; \Pi{\cal N}_{\vec{q}}\;
\exp\left[-\frac{A_q(t)}{2}
\; \vec{\eta}_q\cdot\vec{\eta}_{-q}\right]}{\int[{\cal D}
\vec{\eta}_{\vec{q}}]\; \Pi{\cal N}_{\vec q}\; \exp\left[-\frac{A_q(t)}{2}
\vec{\eta}_q\cdot\vec{\eta}_{-q}\right]}\nonumber \\
&&=\frac{N}{2A_{Rk}(t)}\;,
\end{eqnarray}
leading to the self-consistency condition
\begin{equation}
\langle \pi^2 \rangle(t) = \sum_k \frac{1}{2 A_{Rk}(t)}\;. \label{selfish}
\end{equation}
Formally, one can also represent these two-point equal time correlators in
terms of functional integrals over the closed time path contour where the
initial state is chosen to be the Gaussian functional described above. However
the explicit
and rather simple ansatz for the wave functional enables one to obtain the
two-point functions directly in a rather straightforward manner. Moreover, the
wave functional approach will permit a much clearer understanding of the
physics of the problem. The Riccati equation (\ref{riccati}) can be cast in a
simpler form by writing $A_k$ in terms of new variables $\phi^*_k$ as
\begin{equation}
A_k(t)=-i\frac{\dot{\phi}^*_k(t)}{\phi^*_k(t)}\;,\label{phidef}
\end{equation}
leading to the simple equation for the new variables
\begin{equation}
\ddot{\phi}^*_k+\omega_k^2(t)\;\phi^*_k=0\;.\label{phidiff}
\end{equation}
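The equivalence of the nonlinear Riccati equation (\ref{riccati}) and the linear mode equation (\ref{phidiff}) under the substitution (\ref{phidef}) is easy to verify numerically. The following sketch is an illustration only: the frequency $\omega_k^2(t)=1+0.5\sin t$ and the value $\Omega=1.3$ are arbitrary choices, not the self-consistent dynamics of the text.

```python
import numpy as np

def omega2(t):
    # illustrative time-dependent frequency squared (NOT the self-consistent one)
    return 1.0 + 0.5 * np.sin(t)

def rk4_step(f, t, y, dt):
    # one fourth-order Runge-Kutta step
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# linear mode equation (phidiff): y = (phi*, dphi*/dt)
def mode_rhs(t, y):
    return np.array([y[1], -omega2(t) * y[0]])

# Riccati equation (riccati): dA/dt = -i (A^2 - omega^2)
def riccati_rhs(t, A):
    return -1j * (A ** 2 - omega2(t))

Omega = 1.3                           # arbitrary initial covariance / Wronskian
y = np.array([1.0 + 0j, 1j * Omega])  # phi*(0) = 1, dphi*(0) = i Omega
A = Omega + 0j                        # A(0) = -i dphi*(0)/phi*(0) = Omega
dt, T = 1e-3, 5.0
for n in range(int(T / dt)):
    t = n * dt
    y = rk4_step(mode_rhs, t, y, dt)
    A = rk4_step(riccati_rhs, t, A, dt)

A_from_modes = -1j * y[1] / y[0]
mismatch = abs(A_from_modes - A)      # agreement to integrator accuracy
```

Both routes agree to integrator accuracy, and the real part of the covariance stays positive throughout, as required for a normalizable Gaussian wave functional.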
In terms of these mode functions we find that the real and imaginary parts of
the covariance $A_k$ are
given by
\begin{eqnarray}
A_{Rk}(t)& = & \frac{i}{2}
\frac{\dot{\phi}_k\phi^*_k-
\dot{\phi}^*_k\phi_k}{|\phi_k|^2},\label{areal}
\\
A_{Ik}(t) & = & - \frac{d}{d t}\ln|\phi_k(t)|^2 \; . \label{aima}
\end{eqnarray}
From the differential equation for the $\phi_k(t)$ given by
Eq. (\ref{phidiff}) it is clear that the combination that appears in
the numerator of Eq. (\ref{areal}) is
the Wronskian $\Omega_k$ of the differential equations and will consequently be
determined
from the initial conditions alone. The expression for the quantum fluctuations
$\langle\pi^2\rangle=\langle\vec{\pi}^2\rangle/N$ is given by,
\begin{equation}
\langle\pi^2\rangle(t)=\int \frac{d^3k}{(2\pi)^3}\;
\langle\eta_{\vec{k}}(t)\eta_{-\vec{k}}(t)\rangle
=\int \frac{d^3k}{(2\pi)^3} \frac{|\phi_k(t)|^2}{2\Omega_k}\; .
\label{fluct}
\end{equation}
The mode functions $\phi_k$ have a very simple interpretation: they
obey the Heisenberg equations of motion for the
pion fields obtained from the Hamiltonian (\ref{hamilt}). Therefore
we can write the Heisenberg field operators as
\begin{equation}
\vec{\pi}(\vec x,t) = \frac{1}{\sqrt{V}}\sum_k \frac{1}{\sqrt{2\Omega_k}}\left[
\vec{a}_k\; \phi_k(t) \;
e^{i\vec k \cdot \vec x}+\vec{a}^{\dagger}_k \;\phi^*_k(t)\;
e^{-i\vec k \cdot \vec x} \right]
\end{equation}
where $\vec{a}_k \; , \; \vec{a}^{\dagger}_k$ are the time independent
annihilation and creation operators with the usual Bose commutation relations.
\subsection{Initial Conditions:}
Within this Gaussian ansatz for the wave functional, the initial
conditions are completely determined by the initial conditions on the
mode functions $ \phi_k(t) $. In order to physically motivate the initial
conditions we now establish the relation between the particle number
distribution and these mode functions.
Since in a time dependent situation there is an ambiguity in the definition
of the particle number, we {\em define} the particle number with respect to
the eigenstates of the instantaneous Hamiltonian (\ref{hamilt}) at
the {\em initial time}, i.e.
\begin{eqnarray}
\hat{n}_k=&&\frac{1}{ \omega_k(0)}\left(-\frac{1}{2}
\frac{\delta^2}{\delta\vec{\eta}_{\vec{k}}\delta\vec{\eta}_{-\vec{k}}}+
\frac{\omega_k^2(0)}{2}\vec{\eta}_{\vec{k}}\cdot\vec{\eta}_{-\vec{k}}\right)-
\frac{1}{2}\nonumber \\
=&&\frac{1}{ \omega_k(0)}\left[\frac{1}{2}\vec{\Pi}_{\vec{k}}\cdot
\vec{\Pi}_{-\vec{k}}+\frac{\omega_k^2(0)}{2}
\vec{\eta}_{\vec{k}}\;\cdot\vec{\eta}_{-\vec{k}}\right]-\frac{1}{2}\;.
\label{numberofparts}
\end{eqnarray}
Here, $\omega_k(0)$ is the frequency (\ref{omegadef}) evaluated at $t=0$,
i.e. the curvature of the potential term in the functional
Schr\"odinger equation (\ref{schrod}) at $t=0$ and provides a
definition of the
particle number (assuming that $\omega_k^2(0)>0$). The expectation value of the
number operator in the time evolved state is then
\begin{eqnarray}
n_k(t)=&&<\Psi|\hat{n}_k|\Psi>
=\frac{[A_{Rk}(t)-\omega_k(0)]^2+A_{Ik}^2(t)}{4\,\omega_k(0)\,A_{Rk}(t)}
\label{expvaln}\\
=&&\frac{\Delta_k(t)^2+\delta_k(t)^2}{4[1+\Delta_k(t)]}\; ,\label{numbexp1}
\end{eqnarray}
where $\Delta_k$ and $\delta_k$ are defined through the relations,
\begin{equation}
A_{Rk}(t)=\omega_k(0)\;[1+\Delta_k(t)]\;\;\;;A_{Ik}(t)=\omega_k(0)
\;\delta_k(t)\;. \label{deltadef}
\end{equation}
In terms of the mode functions $\phi_k$ and $\dot{\phi}_k$ the expectation
value of the number operator is given by
\begin{equation}
n_k(t)=\frac{1}{4\;\Omega_k\;{\omega_k(0)}}
\left[|\dot{\phi}_k(t)|^2
+\omega^2_k(0)|\phi_k(t)|^2\right]-\frac{1}{2}\;.\label{numbexp2}
\end{equation}
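Eq. (\ref{numbexp2}) makes it easy to exhibit particle production from a time dependent frequency. In the toy sketch below (purely illustrative: the sudden quench of the mass term is an assumption of ours, not the self-consistent evolution of the text) a mode prepared in the vacuum of the initial-time Hamiltonian stays empty while the frequency is static, but acquires a nonzero occupation after a sudden change of ${\cal M}^2_{\pi}$.

```python
import numpy as np

def particle_number_history(M2_of_t, k=0.0, dt=1e-3, T=10.0):
    """Evolve phi_k'' + (k^2 + M2(t)) phi_k = 0 from vacuum initial data
    and record n_k(t), counted with respect to the t = 0 frequency."""
    w0 = np.sqrt(k ** 2 + M2_of_t(0.0))
    Omega = w0                      # vacuum: Wronskian = omega_k(0), n_k(0) = 0
    phi, dphi = 1.0 + 0j, 1j * Omega
    history = []
    for n in range(int(T / dt)):
        t = n * dt
        # velocity-Verlet step for the (possibly quenched) linear oscillator
        w2 = k ** 2 + M2_of_t(t)
        phi_new = phi + dt * dphi - 0.5 * dt * dt * w2 * phi
        w2_new = k ** 2 + M2_of_t(t + dt)
        dphi = dphi - 0.5 * dt * (w2 * phi + w2_new * phi_new)
        phi = phi_new
        # Eq. (numbexp2): n_k(t) from the mode functions
        nk = (abs(dphi) ** 2 + w0 ** 2 * abs(phi) ** 2) / (4 * Omega * w0) - 0.5
        history.append(nk)
    return np.array(history)

n_static = particle_number_history(lambda t: 1.0)                       # frequency never changes
n_quench = particle_number_history(lambda t: 1.0 if t < 2.0 else 0.25)  # sudden mass drop
```

With a static frequency the occupation stays zero, while after the quench it oscillates between zero and a positive value, consistent with $n_k(t)\geq 0$.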
The quantity $\delta_k(t)$ appears as the phase of the wave function and will
be chosen to be zero at $t=0$ for simplicity,
\begin{equation}
A_{Ik}(0)=0\;\;\;;\quad\delta_k(0)=0.
\end{equation}
Assuming $\delta_k(0)=0$, the initial conditions on the mode functions
$\phi_k$ can
be obtained at once from Eq. (\ref{phidef}) and are found to be,
\begin{equation}
\dot{\phi}^*_k(0)=i\Omega_k=i\omega_k(0)\;[1+\Delta_k(0)]\;\;
\;;\phi^*_k(0)=1\;,
\end{equation}
where $\Omega_k$ is the Wronskian $ W[\phi_k(t)^*,\phi_k(t)] $.
Hence the wave functional of the system at $t=0$ can be specified completely
(up to a phase) by the single function $\Delta_k(0)$.
Using Eq. (\ref{numbexp1}) one can easily solve for $\Delta_k(0)$ in terms of
the initial particle spectrum
\begin{equation}
\Delta_k\equiv\Delta_k(0)=2[n_k(0)\pm\sqrt{n_k(0)^2+n_k(0)}]\;.\label{Deltak}
\end{equation}
Which of the two solutions will give us interesting physics is a more subtle
question that we shall address in the next section when we discuss the dynamics
of the problem.
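Both branches of Eq. (\ref{Deltak}) can be checked against Eq. (\ref{numbexp1}) with $\delta_k(0)=0$; the short numerical sketch below does so (the occupation values are illustrative choices of ours).

```python
import numpy as np

def delta_roots(n0):
    """The two solutions of Eq. (Deltak) for a prescribed occupation n_k(0)."""
    s = np.sqrt(n0 ** 2 + n0)
    return 2.0 * (n0 + s), 2.0 * (n0 - s)

def occupation(Delta, delta=0.0):
    """n_k from Eq. (numbexp1), with A_Rk = omega_k(0)(1 + Delta_k)
    and A_Ik = omega_k(0) delta_k."""
    return (Delta ** 2 + delta ** 2) / (4.0 * (1.0 + Delta))

# high-density behaviour of the two branches (n_k(0) = 2000 is illustrative)
d_plus, d_minus = delta_roots(2000.0)
```

For large occupation the two branches behave as $\Delta_k\approx 4n_k(0)$ and $\Delta_k\approx -1+1/(4n_k(0))$ respectively, which is precisely the distinction exploited in the two cases analyzed below.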
Before moving on to the description of the dynamics let us briefly
summarize what we have done. We proposed a rather simple description of
a large multiplicity, high energy particle collision process by preparing an
initial state with an extremely high number density of particles concentrated
at momenta given by $|\vec{k}|=k_0$. Consistent with the
leading order in a large $N$ approximation, we chose a Gaussian ansatz for
our wave functional, parametrized by the variables $\phi_k(t)$,
$\dot{\phi}_k(t)$ (or alternatively $\Delta_k(t)$ and $\delta_k(t)$)
and the initial conditions on
these variables are determined completely by the choice of the particle
distribution function $n_k(0)$ at $t=0$ (Eq. (\ref{Deltak})). The next step
will be to obtain the {\em renormalized} equations of motion and then to study
the dynamics analytically as well as numerically.
\section{The Dynamics}
From the discussion in the previous sections we see that the following set of
equations for the order parameter $\phi(t)$ and the mode functions $\phi_k(t)$
must be solved self-consistently in order to study the dynamics:
\begin{eqnarray}
&&\frac{d^2\phi(t)}{dt^2}+\left(m_B^2+\frac{\lambda}{2}[\phi^2(t)+
\langle\pi^2\rangle_B]\right)
\phi(t)=0\; ,\label{zeromode}\\
&&\frac{d^2\phi^*_k(t)}{dt^2}+\left(k^2+m_B^2+\frac{\lambda}{2}[\phi^2(t)+
\langle\pi^2\rangle_B]\right)\phi^*_k(t)=0\;,\label{kmodes}\\
&&\phi^*_k(0)=1\;\;\;;\dot{\phi}^*_k(0)=i\Omega_k\;, \label{inicon}
\end{eqnarray}
with the self-consistent condition
\begin{equation}
\langle\pi^2\rangle_B(t)=\int \frac{d^3k}{(2\pi)^3}
\frac{|\phi_k(t)|^2}{2\Omega_k}\; .\label{selfcons}
\end{equation}
The quantities in the above equations must be renormalized. This is achieved by
first demanding that all the equations of motion be finite and then absorbing
the divergent pieces into a redefinition of the mass and coupling constant
respectively,
\begin{equation}
m_B^2+\frac{\lambda}{2}[<\pi^2>_B(t)+\phi^2(t)]
=m_R^2+\frac{\lambda_R}{2}[<\pi^2>_R(t)+\phi^2(t)] = {\cal M}^2_{R\;\pi}(t)\;.
\label{renormass}
\end{equation}
A detailed derivation of the renormalization prescriptions requires a WKB
analysis of the mode functions $\phi_k(t)$ that reveals their ultraviolet
properties. Such an analysis has been performed elsewhere \cite{frw1,noneq}. In
summary
the mass term will absorb quadratic and logarithmic divergences while the
coupling constant will acquire a logarithmically divergent
renormalization\cite{frw1,largen1}. In particular
\begin{equation}
<\pi^2>_R(t) = \int \frac{d^3k}{(2\pi)^3}\left\{
\frac{|\phi_k(t)|^2}{2\Omega_k}-\frac{1}{2k}+\frac{\theta(k-\kappa)}{4k^3}
{\cal M}^2_{R\;\pi}(t) \right\}
\label{renofluc}
\end{equation}
with $\kappa$ an arbitrary renormalization scale.
Introducing the effective mass of the particles at
the initial time as
\begin{equation}
M^2_R = {\cal M}^2_{R\; \pi}(t=0)\; , \label{inimass}
\end{equation}
we recognize that this effective mass has contributions from the
non-equilibrium particle distribution and is the analog of the hard-thermal
loop (HTL) resummed effective mass in a scalar field theory. Recall, however,
that the initial distribution is {\em not} thermal.
In a scalar theory, the HTL effective mass is obtained by summing the daisy and
superdaisy diagrams \cite{parwani} which
is precisely the resummation implied in the leading order in the large $N$
approximation. To
see this more clearly consider the case in which the order parameter
vanishes, i.e. $\phi(t) \equiv 0$; then the effective mass at the
initial time can be written as a gap equation
\begin{eqnarray}
M^2_R=&&m_B^2+\frac{\lambda}{4\pi^2}
\int \frac{k^2dk}{2\Omega_k(0)}\label{dressedmass}\\\nonumber\\\nonumber
=&&m_B^2+\frac{\lambda}{4\pi^2}
\int \frac{k^2dk}{2\omega_k(0)}+
\frac{\lambda}{4\pi^2}\int
\frac{k^2dk}{2\omega_k(0)}\left[-\frac{\Delta_k(0)}{1+\Delta_k(0)}\right]
\end{eqnarray}
where we have used the relation $ \Omega_k(0)=\omega_k(0)(1+\Delta_k) $.
The second term in the above expression is the usual contribution obtained
at zero temperature (and zero density) for the (self-consistent) renormalized
mass parameter $M_R^2$, i.e. the
one loop tadpole (with $\omega_k(0)=
\sqrt{k^2+M^2_R}$). The third term contains the non-equilibrium effects
associated
with the particle distributions and vanishes when $n_k \rightarrow 0$. This
term is finite (since $n_k(0)$ is assumed to be localized within a small range
of momenta). For a given distribution $n_k(0)$, the solution to the
self-consistent gap
equation (\ref{dressedmass}) gives the effective mass, dressed by the
medium effects. This is indeed very similar to the finite temperature case in
which the tadpole term provides a contribution $\propto \lambda T^2$ in the
high temperature limit. We will see later that a term very similar to this can
be extracted in the limit in which the distribution $n_k(0)$ is very large.
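The gap equation (\ref{dressedmass}) is straightforward to iterate numerically once the divergent vacuum tadpole is absorbed into a renormalized mass parameter. The sketch below keeps only the finite, distribution-dependent term; the choice $m_R^2=1$, the Gaussian shell profile and all parameter values are illustrative assumptions of ours, not the paper's runs.

```python
import numpy as np

def gap_mass_squared(m2_R=1.0, lam=0.8, N0=10.0, q0=2.0, xi=0.1,
                     case=2, qmax=10.0, npts=4000, iters=500):
    """Iterate the medium part of the gap equation to a fixed point.
    The divergent vacuum tadpole is assumed absorbed into m2_R, so only
    the finite, distribution-dependent term is kept; momenta are in units
    of the renormalized mass scale."""
    q = np.linspace(qmax / npts, qmax, npts)
    dq = q[1] - q[0]
    n = N0 * np.exp(-((q - q0) / xi) ** 2)       # shell occupation n_k(0)
    s = np.sqrt(n ** 2 + n)
    Delta = 2.0 * (n + s) if case == 1 else 2.0 * (n - s)
    medium = -Delta / (1.0 + Delta)              # finite term of the gap equation
    M2 = m2_R
    for _ in range(iters):
        w = np.sqrt(q ** 2 + M2)
        M2_new = m2_R + lam / (4 * np.pi ** 2) * np.sum(q ** 2 / (2 * w) * medium) * dq
        if abs(M2_new - M2) < 1e-12:
            break
        M2 = M2_new
    return M2
```

In this sketch the Case II branch gives a positive medium term that grows with the occupation (the analog of the $\lambda T^2$ correction mentioned above), the Case I branch lowers the effective mass, and an empty shell returns $M_R^2=m_R^2$ identically.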
Since the relevant scale is the quasiparticle mass $M_R$,
we will choose to take $M^2_R > 0$ to describe an initial situation in
which the $O(N)$ symmetry is unbroken.
It is convenient
for numerical purposes to introduce the following dimensionless quantities:
\begin{eqnarray}
&&q=\frac{k}{M_R} ;\;\;\;\;{\tau}=M_R\;t ;\;\;\;\; \varphi^2(\tau)=
\frac{\lambda_R\; \phi^2(t)}{2M^2_R}\; ; \; g=\frac{\lambda_R}{8\pi^2}\; ; \;
W_q= \frac{\Omega_k}{M_R}
\nonumber\\
&& g\Sigma(\tau) = \frac{\lambda}{2M^2_R}\left[\langle \pi^2 \rangle_R(t)
-\langle\pi^2\rangle_R(0)\right]\;. \label{dimless}
\end{eqnarray}
In terms of these dimensionless quantities the equations of motion
(\ref{zeromode}, \ref{kmodes}) with
the initial conditions (\ref{inicon}) become
\begin{eqnarray}
&&\left[\frac{d^2}{d\tau^2}+1+\varphi^2(\tau)-\varphi^2(0)+
g\Sigma(\tau)\right]
\varphi(\tau)=0\;,\label{zeromodeeq}
\\
&&\left[\frac{d^2}{d\tau^2}+q^2+1+\varphi^2(\tau)-\varphi^2(0)+
g\Sigma(\tau)\right]\phi_q(\tau)=0 \; \; ; \; \; \phi_q(0)=1 \;; \;
\dot{\phi}_q(0)= -iW_q\;,
\label{modeeq} \\
&&g\Sigma(\tau) = g \int q^2dq \left\{
\frac{|\phi_q(\tau)|^2-1}{ W_q}+\frac{\theta(q-1)}{2q^3}
\left({{{\cal M}^2_{R\;\pi}(\tau)}\over {M^2_R}}-1\right) \right\}\;,
\label{gsigma}
\end{eqnarray}
where we have chosen the renormalization scale $\kappa = M_R$ for simplicity.
In order to make our statements precise in the analysis of the
spherically symmetric ``tsunami'', we will assume the initial particle
distribution to be Gaussian and peaked at some value $q_0$ and width
$\xi$ so that
\begin{equation}
n_q(0)=\frac{N_0}{I}\exp\left[-\left[\frac{q-q_0}{\xi}\right]^2\right],
\label{inidistbn}
\end{equation}
where $N_0$ is the total number
of particles in a correlation volume $M^{-3}_R$ and $I$ is a normalization
factor. The case $N_0>>1$ corresponds to the high
density regime with many particles in the effective
correlation volume.
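The dimensionless system (\ref{zeromodeeq}-\ref{gsigma}) with the distribution (\ref{inidistbn}) can be integrated directly. The sketch below is a simplified illustration rather than the computation reported later in the text: the order parameter is set to zero, the Case II branch is chosen for $\Delta_q$, the theta-function counterterm is dropped in favor of a finite momentum cutoff, and all parameter values are arbitrary choices of ours.

```python
import numpy as np

# Illustrative integration with vanishing order parameter, Case II branch,
# and the counterterm replaced by a finite cutoff qmax (our simplifications).
g, N0, q0, xi = 1e-2, 200.0, 2.0, 0.2
qmax, nmodes = 8.0, 400
q = np.linspace(qmax / nmodes, qmax, nmodes)
dq = q[1] - q[0]

n_init = N0 * np.exp(-((q - q0) / xi) ** 2)             # shell distribution
Delta = 2.0 * (n_init - np.sqrt(n_init ** 2 + n_init))  # Case II branch
w0 = np.sqrt(q ** 2 + 1.0)                              # omega_q(0) in units of M_R
W = w0 * (1.0 + Delta)                                  # dimensionless Wronskian W_q

phi = np.ones(nmodes, dtype=complex)                    # phi_q(0) = 1
dphi = -1j * W                                          # dphi_q(0) = -i W_q

def g_sigma(phi):
    # g Sigma(tau) without the counterterm; vanishes at tau = 0 by construction
    return g * np.sum(q ** 2 * (np.abs(phi) ** 2 - 1.0) / W) * dq

def wronskian(phi, dphi):
    # (i/2)(dphi phi* - c.c.) = -Im(dphi phi*); conserved, equal to W_q
    return -np.imag(dphi * np.conj(phi))

dt, T = 2e-3, 20.0
gs_min = 0.0
for step in range(int(T / dt)):
    M2 = 1.0 + g_sigma(phi)          # effective mass squared; can turn negative
    gs_min = min(gs_min, M2 - 1.0)
    acc = -(q ** 2 + M2) * phi       # velocity-Verlet update of all modes
    phi_new = phi + dt * dphi + 0.5 * dt * dt * acc
    acc_new = -(q ** 2 + 1.0 + g_sigma(phi_new)) * phi_new
    dphi = dphi + 0.5 * dt * (acc + acc_new)
    phi = phi_new

# final particle distribution from the mode functions (dimensionless form)
n_final = (np.abs(dphi) ** 2 + w0 ** 2 * np.abs(phi) ** 2) / (4 * W * w0) - 0.5
```

For these (arbitrary) parameters the fluctuation term dips below $g\Sigma=-1$, past the threshold at which the effective mass squared turns negative, and the low-$q$ occupations grow far beyond their initially vanishing values; the exactly conserved Wronskians provide a stringent check on the integrator.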
\subsection{Preliminary considerations of the dynamics:}
Before engaging in a full numerical solution of the evolution equations, we
can obtain a clear, qualitative understanding of the main features of the
evolution by looking at the quantum mechanics of the wave functional.
The dynamics is different for the different solutions
for $ \Delta_q $ in Eq. (\ref{Deltak}) and can be understood with simple
quantum mechanical arguments:
\subsubsection{Case I}
$\Delta_q=2[n_{q}(0)+\sqrt{n_{q}^2(0)+n_{q}(0)}]$:
In this case
\begin{eqnarray}
\Delta_q(0) & \approx & 4 \; n_q(0) >>1 \; \; \text{for} \; \;
\frac{|q-q_0|}{\xi}
\approx 1 \nonumber \\
& \approx & 0 \; \; \text{for}\; \; \frac{|q-q_0|}{\xi}
>> 1, \label{range}
\end{eqnarray}
and the covariance of the wave functional (\ref{wavefunc0}) is given by
\begin{eqnarray}
A_q(0) = \Omega_q(0) = \omega_q(1+\Delta_q)& >> & \omega_q \; \;
\text{for}\; \; \frac{|q-q_0|}{\xi} \approx 1 \nonumber \\
& \approx & \omega_q \; \;
\text{for}\; \; \frac{|q-q_0|}{\xi} >> 1.
\label{inicova}
\end{eqnarray}
Now, for each wave vector $ \vec q $ we have a Gaussian wave-function
which is the ground state of a harmonic oscillator with frequency
$\Omega_q(0)$, but
whose evolution is determined by a Hamiltonian for a harmonic oscillator of
frequency $\omega_q(0)$ at very early times. For the modes $ q $ such that
$\Omega_q >>
\omega_q(0)$, the wave function is very narrow compared to the width set by
the curvature of the potential, and there is a very small probability of
sampling large amplitude field configurations (i.e. large $ \eta_{\vec
q} $). It is a property of these squeezed states that whereas the
wave-functional is narrowly localized in field space, it is a wide
distribution in the canonical momentum (conjugate to
the field) basis.
This wave function will spread out under time
evolution to obtain a width compatible with
the frequency $\omega_q$, i.e. the covariance $A_q(\tau)$ will {\em diminish in
time} and the fluctuation
\begin{equation}
\langle |\vec{\eta}_q|^2 \rangle(\tau) \propto \frac{1}{A_{Rq}(\tau)}
\label{flucu}
\end{equation}
will {\em increase} in time. This will in turn cause the time dependent
frequency in the Hamiltonian, $\omega_q(\tau)$ to {\em increase} with
time since the frequency and the fluctuations are directly related via
the self-consistency condition. The
resulting dynamics is then expected to approach an oscillatory regime in which
the width of the wavefunctional and the frequency of the harmonic oscillator
are of the same order. Under these circumstances, there is
the possibility that for a particular range of parameters (coupling,
central momentum of the distribution and particle
density) parametric amplification can occur \cite{noneq,boydiss} that could
result in particle production and redistribution of particles as will
be discussed below within an early time analysis. We will see that this case
corresponds to a ``tsunami'' configuration in a theory which is {\em symmetric}
even in the absence of the medium.
\subsubsection{Case II}
$ \Delta_q=2[n_{q}(0)-\sqrt{n_{q}^2(0)+n_{q}(0)}]$:
In this case
\begin{eqnarray}
\Delta_{q}(0)& \approx & -1+\frac{1}{4n_{q_0}} \;\; \; \text{for}\; \;
\frac{|q-q_0|}{\xi} \approx 1
\nonumber \\
& \approx & 0 \; \; \text{for}\; \; \frac{|q-q_0|}{\xi} >> 1, \label{delgreat}
\end{eqnarray}
and the covariance is
\begin{eqnarray}
A_q(0) \approx \frac{\omega_q}{4n_q(0)} & << & \omega_q \; \;
\text{for}\; \; \frac{|q-q_0|}{\xi} \approx 1 \nonumber \\
& \approx & \omega_q\; \; \text{for}\; \; \frac{|q-q_0|}{\xi} >> 1.
\end{eqnarray}
Therefore, large amplitude field configurations with momenta in the narrow
``tsunami'' shell
now have a high probability of being realized. As before, the wave function
for each $\vec k$ mode corresponds
to the ground state of a harmonic oscillator of frequency
$$
\Omega_q(0)={{\omega_q}\over {4n_{q_0}(0)}}
$$
which evolves with a Hamiltonian for
a harmonic oscillator with frequency $ \omega_q(0) $. In this case the wave
function for $ q \approx q_0 $ is spread out over field amplitudes much larger
than
$$
1/\sqrt{\omega_{q_0}}
$$
and it is localized in the canonical momentum basis.
Under time evolution the wave function will tend to be squeezed, i.e. it
will be forced to
diminish its width and to become localized inside the potential well. This
implies that the covariance $A_q(\tau)$ will {\em increase} under time
evolution, while the
fluctuation (\ref{flucu}) and the time dependent frequency $\omega_q(\tau)$ will
{\em decrease}, i.e. the potential `flattens out'.
In this case the quantity $g\Sigma(\tau)$ (the renormalized quantum
fluctuations) in the evolution equations (\ref{modeeq}) decreases and
as will be seen below, under certain conditions, can become {\em negative}.
There is thus a possibility of inducing spinodal
instabilities in the quantum fluctuations. To see this, consider the case in
which $\varphi \equiv 0$ and the
effective mass squared ${\cal M}^2_{R\pi}(\tau)=1+g\Sigma(\tau)$ in the equation
(\ref{modeeq}) becomes negative, i.e. when $g\Sigma(\tau) < -1$. The modes for
which $q^2 < |{\cal M}^2_R(\tau)|$ will see an inverted harmonic oscillator
and they will begin to grow almost exponentially, resulting
in copious particle production for these modes as can be seen from the
expression for the particle number as a function of time (\ref{numbexp2}).
This situation, in which the potential turns into a maximum at the origin
{\em dynamically}, corresponds to symmetry breaking, since the minimum will
be away from the origin.
In this case the dynamics will result in a re-arrangement of the particle
distribution: spinodal instabilities will arise, long-wavelength
modes will begin to get populated at the expense of the initial non-equilibrium
distribution. The spinodal instabilities will in turn result in an {\em
increase}
in the fluctuations that will tend to cancel the negative contribution to
$g\Sigma$ from the initial non-equilibrium distribution. Eventually a
stationary regime should ensue in which the instabilities are turned off and
the distribution
of particles will be peaked at low momenta.
At this point we want to emphasize that the possibility for the
onset of spinodal
instabilities is purely {\em dynamical}.
In contrast to previous studies of dynamics in spinodally unstable
situations \cite{wilraj,gavin,boysinglee,boydcc} in which an initially
symmetric state is evolved with a {\em broken symmetry} Hamiltonian, in
the present case the initial state {\em and the effective Hamiltonian}
are symmetric and the instability is a consequence of the non-equilibrium
dynamics.
The above analysis of the dynamics, based on the quantum mechanical analogy
will be shown to be accurate in the next section where we present the details
of the numerical evolution.
\subsection{Early Time Analysis:}
A more quantitative understanding of these cases
can be achieved by studying the early time
behaviour of the solutions and setting $\varphi \equiv 0$.
\subsubsection{Case I}
In this case, with $ \Delta_q $ given by (\ref{range}) and focusing on
the very early times during which backreaction effects can be ignored,
the solution to the mode equations (Eq. (\ref{modeeq})) with the
initial conditions given by Eq. (\ref{inicon}) is simply a superposition of plane waves with frequency $\omega_q(0)$:
\begin{equation}
\phi_q(\tau)\approx \cos(\omega_q(0) \tau)-
i( 1 + \Delta_q) \sin(\omega_q(0) \tau) \label{formu1}.
\end{equation}
The renormalized quantum fluctuations which are dominated by the modes within
the highly populated momentum shell are given by,
\begin{equation}
g\Sigma(\tau)
\approx-{g}\int_0^\Lambda q^2 dq\frac{\sin^2(\omega_q(0)\tau)
[1-(1+\Delta_q)^2]}{\omega_q(1+\Delta_q)}\label{shellcontri}.
\end{equation}
If the initial distribution of particles is
sufficiently sharp, a qualitative understanding of the early time
dynamics can be obtained by a saddle point analysis of the
contribution from the region of large occupation number.
In this limit $g\Sigma(\tau)$ is approximately given by,
\begin{equation}\label{gSig1}
g\Sigma(\tau) \approx +
\frac{4gN_0}{\omega_{q_0}}\sin^2\left(\omega_{q_0}\tau\right).\label{early1}
\end{equation}
The mode equations now become
\begin{equation}
\left[\frac{d^2}{d\tau^2}+q^2+1+\frac{2gN_0}{\omega_{q_0}}-
\frac{2gN_0}{\omega_{q_0}}\cos(2\omega_{q_0}\tau)\right]\phi_q(\tau)=0.
\label{mathieu}
\end{equation}
This is a Mathieu equation whose solutions are of the Floquet form\cite{abramowitz}. The first and broadest instability band is
centered at the value of $q$ given by
\begin{equation}
q^2= q^2_0-\frac{2gN_0}{\omega_{q_0}}.\label{reso}
\end{equation}
The width of the unstable band depends on the parameter
\begin{equation}
Q=\frac{gN_0}{\omega^3_{q_0}} \label{unspara}
\end{equation}
and can be read off from reference \cite{abramowitz}. There is a rather small
window of relevant parameters that could allow appreciable parametric amplification.
Whether the backreaction effects allow the unstable band to remain
under time evolution resulting in large particle production and
redistribution of particles is a detailed dynamical question that will be studied numerically below.
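As a rough cross-check of this early-time analysis (and only of the early-time
analysis, since all backreaction is ignored), the Mathieu-type mode equation
(\ref{mathieu}) can be integrated directly. The sketch below is plain Python with
an explicit RK4 step; the values of $gN_0$ and $q_0$ are illustrative choices
(made so that the resonant wave-vector of Eq. (\ref{reso}) is real), not the
parameters used in the simulations discussed later.

```python
import math

def evolve_mode(q, g_N0, q0, tau_max=40.0, dt=1e-3):
    """RK4 integration of the Mathieu-type mode equation
    phi'' + [q^2 + 1 + (2 g N0/w0)(1 - cos(2 w0 tau))] phi = 0,
    i.e. Eq. (mathieu) with w0 = omega_{q_0}(0), no backreaction."""
    w0 = math.sqrt(q0**2 + 1.0)
    def acc(tau, phi):
        W = q**2 + 1.0 + (2.0*g_N0/w0)*(1.0 - math.cos(2.0*w0*tau))
        return -W*phi
    phi, dphi, tau = 1.0, 0.0, 0.0      # real initial data suffice here
    amp = abs(phi)
    for _ in range(int(tau_max/dt)):
        # classical RK4 for the equivalent first-order system (phi, phi')
        k1p, k1v = dphi, acc(tau, phi)
        k2p, k2v = dphi + 0.5*dt*k1v, acc(tau + 0.5*dt, phi + 0.5*dt*k1p)
        k3p, k3v = dphi + 0.5*dt*k2v, acc(tau + 0.5*dt, phi + 0.5*dt*k2p)
        k4p, k4v = dphi + dt*k3v, acc(tau + dt, phi + dt*k3p)
        phi  += dt*(k1p + 2.0*k2p + 2.0*k3p + k4p)/6.0
        dphi += dt*(k1v + 2.0*k2v + 2.0*k3v + k4v)/6.0
        tau  += dt
        amp = max(amp, abs(phi))
    return amp

g_N0, q0 = 10.0, 5.0                     # illustrative values only
w0 = math.sqrt(q0**2 + 1.0)
q_res = math.sqrt(q0**2 - 2.0*g_N0/w0)   # center of the band, Eq. (reso)
amp_res = evolve_mode(q_res, g_N0, q0)   # inside the first instability band
amp_off = evolve_mode(8.0, g_N0, q0)     # far outside the band: bounded
```

For these values the resonant mode grows by orders of magnitude over a few tens
of oscillation periods while the off-band mode stays of order one, mimicking the
Floquet instability; in the full problem the backreaction quickly detunes the
band, as found numerically below.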
\subsubsection{Case II}
The dynamics in this case can be understood by the heuristic
arguments presented below. We work with the solution
$\Delta_q=2[n_q(0)-\sqrt{n_q^2(0)+n_q(0)}]$, which in the limit
$N_0 \gg 1$ yields (\ref{delgreat})
\begin{eqnarray}
\Delta_q&&\approx -1+\frac{1}{4n_{q_0}}\;\;\;\; \text{for}\;\;\;q \approx
q_0
\label{peak}\\
&&\approx 0 \;\;\;\; \text{otherwise}, \label{away}
\end{eqnarray}
leading now to the following approximate form for the fluctuation at early
times:
\begin{equation}
g\Sigma(\tau) \approx
-\frac{4gN_0}{\omega_{q_0}}{\sin^2(\omega_{q_0}\tau)}\label{early}
\end{equation}
when the backreaction effects can be ignored.
The first feature to note is that $g$ and $N_0$ appear together in such a way
that the effective coupling is now $gN_0$ and hence the physics is
intrinsically non-perturbative when $N_0 \approx 1/g$. This situation
is very similar to that in high temperature field theory wherein the
relevant dimensionless quantity is $T/m(T)$ ($m(T)$ is the temperature
corrected effective mass) and the effective coupling constant for
long-wavelength physics is $\lambda T/m(T)$.
In this situation the non-perturbative hard-thermal-loop resummation is
required.
Secondly, the expression for $ g\Sigma(\tau) $ is
always less than or equal to zero. Notice that unlike {\it Case I}, $
g\Sigma(\tau) $ is negative [see Eq.(\ref{gSig1})]. In particular $
g\Sigma(0) = 0 $ and then $ g\Sigma(\tau) $ becomes negative, i.e.\ it
begins to {\em decrease}. The fact that the
fluctuations decrease was exactly what we had expected from the wave functional
analysis presented in the previous section.
Furthermore, we see that when $ 4gN_0/\omega_{q_0}>1 $ at very early times there
will be an unstable band of wave-vectors. An estimate of the width of the band
can be provided by averaging the time dependence of $g\Sigma$ over one period
of oscillation. This estimate yields the band of wave-vectors
\begin{equation}
0 < q < \sqrt{\frac{2gN_0}{\omega_{q_0}}-1}= q_m \label{spinoband}
\end{equation}
which will become spinodally unstable.
The mode functions for these wavevectors will grow exponentially at
early times and their contribution to the fluctuation $g\Sigma(\tau)$
(\ref{dimless}) will grow -- this is a back-reaction mechanism that will tend
to shut off the instabilities.
This means that if we begin with a
completely $O(N)$ symmetric state i.e. $\varphi(0)=\dot{\varphi}(0)=0$
and if we choose $N_0$ large
enough such that $1+g\Sigma \approx
1-4 g N_0\sin^2(\omega_{q_0}\tau)/\omega_{q_0} < 0$, spinodal
instabilities will
be triggered and the symmetry will be spontaneously broken.
The condition for spinodal instabilities to appear is given by
\begin{equation}
\frac{4g N_0}{\omega_{q_0}} > 1 \label{spinocond}
\end{equation}
which determines the critical value of the particle number in a correlation
volume in terms of the coupling and the peak momentum of the distribution.
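The onset of the spinodal band can be illustrated by inserting the early-time
estimate (\ref{early}) into the mode equation and integrating one mode inside
and one mode outside the band of Eq. (\ref{spinoband}). The sketch below uses
$gN_0=20$ and $q_0=5$ (the values of the numerical study presented later) and
is valid only before backreaction shuts the instability off.

```python
import math

G_N0, Q0 = 20.0, 5.0                     # g*N0 and peak momentum q_0
W0 = math.sqrt(Q0**2 + 1.0)              # omega_{q_0}(0) in units of M_R
Q_M = math.sqrt(2.0*G_N0/W0 - 1.0)       # edge of spinodal band, Eq. (spinoband)

def early_amp(q, tau_max=4.0, dt=1e-4):
    """Integrate phi'' + [q^2 + 1 + g Sigma(tau)] phi = 0 with the
    early-time estimate g Sigma(tau) = -(4 g N0/w0) sin^2(w0 tau),
    Eq. (early); backreaction is ignored by construction."""
    def W(t):
        return q*q + 1.0 - (4.0*G_N0/W0)*math.sin(W0*t)**2
    phi, dphi, tau = 1.0, 0.0, 0.0
    a = -W(tau)*phi
    amp = abs(phi)
    for _ in range(int(tau_max/dt)):
        phi += dt*dphi + 0.5*dt*dt*a     # velocity-Verlet position update
        a_new = -W(tau + dt)*phi
        dphi += 0.5*dt*(a + a_new)       # velocity update
        a, tau = a_new, tau + dt
        amp = max(amp, abs(phi))
    return amp

amp_in  = early_amp(1.0)                 # q < q_m: spinodally unstable
amp_out = early_amp(4.0)                 # q > q_m: bounded oscillation
```

The mode with $q<q_m$ grows quasi-exponentially while the mode with $q>q_m$
merely oscillates, in line with the estimate above.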
In the preceding sections we provided an intuitive understanding of the
underlying mechanism
of symmetry breaking in terms of a quantum mechanical analogy.
We now provide an alternative argument to clarify
the physical mechanism for the dynamical symmetry breaking. The argument
begins with the expression for the `dressed' mass in Eq. (\ref{dressedmass})
which we write in terms of dimensionless quantities as
\begin{eqnarray}
M^2_R & = & m^2_R+
g M^2_R \int
\frac{q^2dq}{\omega_q(0)}\left[-\frac{\Delta_q}{1+\Delta_q}\right],
\label{dressedmass2} \\
m^2_R & = & m^2_B + g \int \frac{q^2\; dq}{\omega_q(0)} \; . \label{renoma}
\end{eqnarray}
The second term is dominated by the peak in the initial particle distribution.
Using eqns. (\ref{peak},\ref{away}) and a saddle point approximation
assuming a sharp distribution, we obtain the relationship
\begin{equation}
M^2_R\left[1-\frac{4g N_0}{\omega_{q0}}\right] = m^2_R. \label{barebroken}
\end{equation}
Then choosing the effective mass $M^2_R >0$ as we have done
throughout, we see that when the
condition for spinodal instabilities in Eq. (\ref{spinocond}) is fulfilled,
it must be that $m^2_R <0$. Therefore the renormalized
mass squared
{\em in the absence of} the medium is negative and the medium effects,
i.e. the
non-equilibrium distribution of particles dresses this mass making the
effective, medium `dressed' mass squared positive. Thus in the absence
of medium the potential was a (spontaneous) symmetry breaking potential. The
initial distribution restores the symmetry at $ \tau=0 $ much in the same
way as in finite temperature field theory at temperatures larger than
the critical temperature. However the initial
state is strongly out of equilibrium and its time evolution re-distributes the
particles towards low momentum and the spinodal instabilities result from
the squeezing of the quantum state as explained above.
This situation must be contrasted to that in {\it Case I} above. The same
argument, now applied to {\it Case I} leads to the result
\begin{equation}
M^2_R\left[1+\frac{4g N_0}{\omega_{q0}}\right] = m^2_R. \label{bareunbroken}
\end{equation}
We clearly see that with a positive effective mass, {\it Case I}
corresponds to the
situation in which the theory was {\em symmetric} even without the medium
effects (i.e. $m^2_R >0$).
Thus we obtain a physical picture of the different cases: in {\it Case I}
the symmetry was unbroken {\em without} a medium and remains unbroken
when the large density of particles is added. By contrast in {\it Case II}, the
symmetry is spontaneously {\em broken} in
the absence of a medium, and the high density initial state restores the symmetry
in a state out of equilibrium. Under time evolution
the dynamics then redistributes the particles producing
spinodal instabilities and breaking the symmetry.
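The sign argument of Eqs. (\ref{barebroken},\ref{bareunbroken}) amounts to one
line of arithmetic. The numbers below use the parameters of the numerical study
presented later ($g=10^{-2}$, $N_0=2000$, $q_0=5$) with $M_R$ set to one; they
are meant only as an arithmetic check of the two cases.

```python
import math

g, N0, q0 = 1e-2, 2000.0, 5.0
w0 = math.sqrt(q0**2 + 1.0)           # omega_{q_0}(0) in units of M_R
M2_R = 1.0                            # effective ('dressed') mass squared

# Case II, Eq. (barebroken): m_R^2 < 0 -> symmetry broken without the medium
m2_case2 = M2_R*(1.0 - 4.0*g*N0/w0)
# Case I, Eq. (bareunbroken): m_R^2 > 0 -> symmetric without the medium
m2_case1 = M2_R*(1.0 + 4.0*g*N0/w0)

spinodal = 4.0*g*N0/w0 > 1.0          # spinodal condition, Eq. (spinocond)
```

With these values $4gN_0/\omega_{q_0}\approx 15.7$, so the spinodal condition is
comfortably satisfied and $m^2_R$ indeed comes out negative in {\it Case II}.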
We reiterate that the second case represents a novel situation which is in a
sense, contrary to
what happens at high temperature where thermal fluctuations suppress the
possibility of long-wavelength instabilities.
The issue of symmetry breaking is a subtle one here. If
we begin with symmetric initial conditions,
$\varphi(0)=\dot{\varphi}(0)=0$, the wavefunctional will always be
symmetric since the evolution will maintain this symmetry. In order to
test whether the symmetry is spontaneously broken or not, one must
provide an initial state that is slightly asymmetric, with a very small initial
expectation value $\varphi(0)\neq 0$, and follow the subsequent
time evolution. If the expectation value oscillates around zero,
then the symmetry is {\em not} spontaneously broken since the minimum
of the `dynamical effective potential' is at the origin in field space.
If the expectation value begins rolling away from zero and reaches a
stationary value away from zero then one can
assert that there is a dynamical minimum away from the origin and the
symmetry is spontaneously broken. Thus the test of symmetry breaking
requires an initial condition with a small value of the order parameter.
\subsection{The Late Time Regime}
The asymptotic value of the order parameter
can be obtained by analyzing the full dynamics of the
theory and depends on the initial conditions. This reflects the fact that there
is no static effective potential description of the physics. However,
some information about the asymptotic state (when
$\dot{\varphi}(\infty)=\ddot{\varphi}(\infty)=0$) can be obtained from the
equation of
motion (\ref{zeromodeeq}) by setting $\ddot{\varphi}(\infty)=0$ which yields
the sum rule \cite{boydiss,erice97}
\begin{equation}
1+\varphi^2(\infty)+g\Sigma(\infty)=0 \label{sumrule}
\end{equation}
provided $\varphi(\infty)\neq 0$.
This sum rule guarantees that
the pions are the asymptotic massless Goldstone bosons since
$$
{\cal M}_\pi^2(\infty)=
m_R^2+\frac{\lambda_R}{2}\left[ \phi^2(\infty)+
\langle\pi^2\rangle_R(\infty)\right]
$$
(Eq. (\ref{massoft})) and the sum rule
is a consequence of the Ward identities associated with the global $O(N)$
symmetry.
The non-linear evolution of the mode functions results in a
redistribution of particles within the spinodally unstable band. The
distribution becomes
more peaked at low momentum and the effective potential flattens
resulting in a non-perturbatively large distribution of Goldstone bosons
at low momentum.
\section{Numerical Analysis}
\subsubsection{Case I} We have investigated the possibility of parametric
amplification in this case in a wide region of parameters but always in the
dense regime $N_0 \gg 1$ and varying the center of the distribution. We find
numerically that the backreaction effects shut off the parametric instabilities
rather soon allowing only small particle production and redistribution
of particles. Typically the distribution develops peaks but remains
qualitatively unchanged and the dynamics is purely oscillatory.
\subsubsection{Case II}
The numerical analysis of the problem involves the solution of the coupled
set of equations (\ref{zeromodeeq}), (\ref{modeeq}) and (\ref{gsigma}) appended
with initial conditions. We choose the zero mode initial conditions to be
$\varphi(0)=10^{-3}, \dot{\varphi}(0)=0$ while the mode functions satisfy
$\phi_q(0)=1;\;\dot{\phi}_q(0)=-i\omega_q(0)(1+\Delta_q)$. Here
$\omega_{q}(0)=\sqrt{q^2+1}$ and
\begin{equation}
\Delta_q=2[n_q(0)-\sqrt{n_q^2(0)+n_q(0)}]\; .
\end{equation}
We have tested the numerics with a momentum cutoff $\Lambda=25$ in units of the
renormalized mass $M_R$ and found that after renormalization
the numerical results are insensitive to the value of the cutoff
provided it is chosen to be much larger than the largest wave-vector which
becomes spinodally unstable. The initial particle distribution is chosen to be
\begin{equation}
n_{q}(0)=\frac{N_0}{I}e^{-(q-5)^2}, \label{testdist}
\end{equation}
where $I$ is a normalization constant and the total initial number of particles is taken to be $N_0=2000$,
the coupling is fixed at $g=10^{-2}$ and the initial value of the
order parameter is taken to be $\varphi(0)=10^{-3}$.
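Setting up these initial data takes only a few lines. The sketch below (plain
Python with a composite Simpson rule) assumes, for illustration, that the
normalization $I$ is fixed by $\int_0^\Lambda q^2\, n_q(0)\, dq = N_0$ -- the
precise convention is not spelled out here -- and checks that $\Delta_q$ at the
peak is close to the limiting value $-1+1/(4n_{q_0})$ of Eq. (\ref{peak}), that
it vanishes far from the shell as in Eq. (\ref{away}), and that the spinodal
condition (\ref{spinocond}) holds.

```python
import math

N0, g, q0, qmax, npts = 2000.0, 1e-2, 5.0, 25.0, 2001   # cutoff Lambda = 25

def simpson(f, a, b, n):
    """Composite Simpson rule with n (odd) equally spaced nodes."""
    h = (b - a)/(n - 1)
    s = f(a) + f(b)
    for i in range(1, n - 1):
        s += (4 if i % 2 else 2)*f(a + i*h)
    return s*h/3.0

# normalization I, assuming N0 = int q^2 n_q(0) dq (illustrative convention)
I = simpson(lambda q: q*q*math.exp(-(q - q0)**2), 0.0, qmax, npts)

def n_init(q):                        # initial distribution, Eq. (testdist)
    return (N0/I)*math.exp(-(q - q0)**2)

def Delta(q):                         # exact initial-data solution for Delta_q
    n = n_init(q)
    return 2.0*(n - math.sqrt(n*n + n))

n_peak = n_init(q0)
approx_peak = -1.0 + 1.0/(4.0*n_peak) # limiting form, Eq. (peak)
w0 = math.sqrt(q0**2 + 1.0)
```

Under this convention $n_{q_0}\approx 44$, so $\Delta_{q_0}\approx -0.994$ and
the exact and limiting forms agree to a few parts in $10^5$.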
{\bf Results:}
Fig.(\ref{gsigmass}) shows $g\Sigma(\tau)$ vs. $\tau$ and the effective
`pion' mass squared ${\cal M}^2_{R\pi}(\tau)$. We see clearly that
spinodal instabilities are produced and the quantitative features of
the dynamics are in agreement with the estimates established for the
early time dynamics given by Eq.(\ref{early}). We see
that the pions become massless, asymptotically. The distribution function
$n_q(\tau)$
multiplied by the coupling $g$
is shown in Fig. (\ref{nkt}) at different times, clearly demonstrating
how the distribution changes in time. As a consequence of the
spinodal instabilities the long-wavelength modes grow exponentially and
the ensuing particle production for these modes populates the band of unstable
modes.
In particular the amplitudes of the long wavelength modes that become
spinodally unstable grow to be {\em non-perturbatively large} of order
$ 1/g $ and dominate the dynamics completely.
At earlier times the initial peak in the distribution at $q \approx 5$
can still be seen, but at later times it is overwhelmed by the distribution at
long wavelengths. Fig. (\ref{zoomed}) shows a zoom-in
of the distribution functions ($gn_q$) vs. $q$ at $\tau=30,80$ near $q=0$ and
also
near the peak of the initial distribution, around
$q_0 \approx 5$. We see a remnant of the original peak, slightly
shifted to the right but much broader than the initial distribution and
with about half the original amplitude. After $\tau \approx 10$ the
distributions do not vary much in this region of momenta, but they
do vary dramatically at low momenta. Fig. (\ref{noft}) shows the
total number of particles as a function of $\tau$. We see clearly that
initially the total number of particles diminishes because
the fluctuations decrease at early times. The long-wavelength modes
begin to grow because of spinodal instabilities but their contributions are
suppressed
by phase space. Only when their amplitudes become non-perturbatively large,
is the particle production at long-wavelengths an effective contribution
to the total particle number. When this happens, there is an explosive burst of
particle production following which the total number of particles
remains fairly constant throughout the evolution. After the spinodal
instabilities are shut off, which for the values chosen for the numerical
evolution corresponds to $\tau \approx 2$, the dynamics becomes non-linear.
Whereas during the initial stages the dynamics is in the linear regime,
after backreaction effects have shut-off the spinodal instabilities the
further evolution of the distribution functions is a consequence of the
non-linearities.
Fig.(\ref{ordpara}) exhibits one of the clear signals of symmetry breaking. The
order parameter begins very near the
origin, but once the spinodal instabilities kick in, the origin becomes a
maximum and the order parameter begins to roll away from it. Notice that
the order parameter reaches a very large value, which is the dynamical
turning point of the trajectory, before settling towards a non-zero value.
We find that the value of the turning point and the final value of
the order parameter depend on the initial conditions. To illustrate
the non-perturbative growth of modes clearly, we have plotted the quantity
$g|\phi_q(\tau=5)|^2-g|\phi_q(\tau=0)|^2$ in Fig. (\ref{modulo})
which shows how the amplitude of the long wavelength
modes becomes non-perturbatively large and of order $ 1/g $.
We have also carried out the numerical evolution with $g=10^{-2}, N_0=4000$ and
$g=10^{-3}, N_0=40000$ with the same value of $q_0$ and found the same
quantitative
behavior, proving that the relevant combination is $gN_0$ as revealed
by the analytic estimates above. We have also confirmed that for $gN_0 \ll 1$ there
are no spinodal instabilities and the dynamics is purely oscillatory without
a redistribution of the particles and with no appreciable particle production.
When the peak of the initial distribution function is {\em beyond} the spinodally
unstable band $q>q_m$ (see Eq.(\ref{spinoband})),
the original distribution is depleted and broadened somewhat with
irregularities and wiggles but remains
qualitatively unchanged (see fig. \ref{zoomed}). However, when the
peak of the initial distribution is {\em within} the spinodally unstable band
there is a complete re-distribution of particles towards low momentum.
The original distribution disappears under time evolution and after the
spinodal time only the low momentum modes are populated.
\section{Symmetry Breaking, Energy, Pressure and Equation of State:}
\subsection{ Onset of Bose Condensation:}
We have seen both from the numerical evolution and from the argument based
on the sum rule (\ref{sumrule}) which is a result of the Ward identities and
Goldstone's theorem, that the effective mass term
vanishes asymptotically. Therefore the asymptotic equation of motion for the
mode
functions is that of a massless free field. In particular the asymptotic
solution for the $q=0$ mode
is given by
\begin{equation}
\phi_0(\tau\rightarrow \infty) = A+B\tau
\end{equation}
where $A$ and $B$ are complex coefficients that can only be obtained from the
full
time evolution. However because the Wronskian
\begin{equation}
\phi_0(\tau)\dot{\phi}^*_0(\tau)-
\phi^*_0(\tau)\dot{\phi}_0(\tau)= 2i{W_0} \label{wronski}
\end{equation}
is constant in time, neither $A$ nor $B$ can vanish \cite{thanks}.
This situation must be contrasted with that for the $q\neq 0$ modes whose
asymptotic behavior is of the form
\begin{equation}
\phi_q(\tau \rightarrow \infty) = \alpha_q \; e^{iq\tau}+ \beta_q \;
e^{-iq\tau} \; \;.
\label{asymqnotzero}
\end{equation}
This causes
the number of particles at {\em zero momentum} to grow
asymptotically as $\tau^2$ whereas the number saturates for the $q\neq 0$
modes.
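This contrast can be made explicit with the asymptotic forms themselves. In the
sketch below $A$, $B$, $\alpha_q$, $\beta_q$ are arbitrary illustrative complex
coefficients (not taken from the simulation); the only requirement, from the
constancy of the Wronskian (\ref{wronski}), is ${\rm Im}(A\bar{B})\neq 0$.

```python
import cmath

# illustrative coefficients with Im(A*conj(B)) != 0
A, B = 0.7 + 0.2j, 0.3 - 0.5j
alpha, beta, q = 0.9 + 0.1j, 0.4 + 0.3j, 2.0

def phi0(tau):
    """Asymptotic zero mode: phi_0 -> A + B*tau."""
    return A + B*tau

def phiq(tau):
    """Asymptotic q != 0 mode, Eq. (asymqnotzero)."""
    return alpha*cmath.exp(1j*q*tau) + beta*cmath.exp(-1j*q*tau)

# Wronskian of the zero mode: phi0*conj(phi0') - conj(phi0)*phi0' = 2i Im(A conj(B))
wronskian = phi0(0.0)*B.conjugate() - phi0(0.0).conjugate()*B
growth = abs(phi0(1000.0))**2/abs(phi0(10.0))**2      # grows like tau^2
bound_q = max(abs(phiq(0.01*k))**2 for k in range(100000))
```

The zero-mode modulus grows quadratically in $\tau$ while the $q\neq 0$ modulus
is bounded by $(|\alpha_q|+|\beta_q|)^2$ for all times.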
The three dimensional phase space conspires to
cancel the contribution from the $q=0$ mode to the {\em total}
number of particles, energy and pressure, which,
from the numerical evolution (see Fig.(4)) are seen to remain constant at
long times. This situation is very similar to that in
Bose-Einstein condensation where the excess number of particles at a
fixed temperature goes into the condensate, while the total number of
particles
outside the condensate is fixed by the temperature and the chemical potential.
The $q=0$ mode will become macroscopically occupied when $\tau
\sim\sqrt{V}$
where $V$ is the volume of the system (i.e. the number of particles in
the zero momentum mode becomes of the order of the spatial volume). When this
happens this mode must be isolated and studied separately from the $q \neq 0$
modes because its contribution to the momentum integral will be cancelled by
the small phase space at small momentum. Again the situation is very similar
to the case of the usual Bose-Einstein condensation. Notice that this argument
is independent of a non-vanishing order parameter $\varphi$ and leads to the
identification of the zero momentum mode as a Bose condensate
that signals spontaneous symmetry breaking even
when the order parameter remains zero. Since the effective mass is
zero we identify the condensing quanta as pions and therefore this mechanism is
a novel form of pion condensation in the absence of direct scattering.
When scattering is included, beyond the leading order in the large $N$
approximation,
the formation of the Bose condensate will require a detailed understanding
of the different time scales. The time scale for the collisionless process
described above must be compared to the time scale for collisional processes
that would tend to deplete the condensate. If spinodal
instabilities causing non-perturbative
particle production at low momentum occur on much shorter time scales than
collisional redistribution then we would expect that there will be a
non-perturbatively large population at low momenta that could be interpreted as
a coherent condensate.
The spinodal instabilities seen in this article are similar to those which
lead to the formation of Disoriented Chiral
Condensates\cite{wilraj,gavin,boysinglee,boydcc}. However we emphasize that
unlike most of the previously studied scenarios for DCC formation
in which a `quench' into the spinodal region was introduced
{\em ad-hoc}, in the
present situation the spinodal instabilities are of {\em dynamical
origin}. We have studied a situation where the vacuum theory has symmetry
breaking minima
(with $m_R^2<0$ in Eq. (\ref{barebroken})) but the initial state is highly
excited with the particle density larger
than a critical value leading to a symmetry restored theory in the
medium. However this initial
state is strongly out of equilibrium and its dynamical evolution automatically
induces
spinodal instabilities.
\subsection{Energy, Pressure and Equation of State:}
As mentioned in the introduction the goal of our study is to understand
the dynamical evolution of strongly out of equilibrium
states. In the usual investigations of the dynamics of the
quark gluon plasma one uses a hydrodynamic description in which the
energy density, pressure and all the thermodynamic variables depend
only on proper time\cite{bjorken,cooperfry}.
The hydrodynamic equations are then a consequence
of the conservation laws which are appended with an equation of state to
determine the evolution completely. The hydrodynamic regime corresponds
to the case when the collisional mean free path is shorter than the
wavelength of the hydrodynamic collective modes, and therefore the concept of local thermodynamic
equilibrium is warranted.
A valid question in the situation that we have
studied in this article
is whether and when an equation of state is a meaningful
concept. In the leading order in the large $N$ expansion there are no
collisional processes (these arise at ${\cal O}(1/N)$) and therefore
the concept of a hydrodynamic regime is not
applicable in principle. Furthermore, since
the state considered is spatially homogeneous the pressure will depend
on time rather than on proper time. Since the energy is conserved
and the pressure evolves with time, an equation of state will
have a meaning only when the evolution has reached the asymptotic regime.
The energy density is given by
\begin{equation}
\frac{E}{NV}=\frac{1}{2}\dot{\phi}^2+
\frac{1}{2}m^2_B\phi^2 + \frac{\lambda_B}{8} \phi^4 + \frac{1}{4\pi^2}\int
\frac{q^2 dq}{2\Omega_q}
\left[|\dot{\phi}_q(\tau)|^2 + \omega^2_q(\tau)
|\phi_q(\tau)|^2\right]-\frac{\lambda_B}{8} \langle \pi^2 \rangle^2.
\label{enerdens}
\end{equation}
The last term, which arises in a consistent large $N$ expansion, is extremely
important in
that after renormalization
it provides a {\em negative} contribution which can be interpreted as
part of the effective potential\cite{boydiss,noneq}.
Using the equation of motion for the order parameter and the mode functions it
is straightforward to show that the energy is conserved and
the last term is necessary to ensure energy conservation.
Since the energy is conserved it can be renormalized by a subtraction at
$\tau=0$, and therefore is finite in terms of the renormalized quantities.
The pressure is given by the following expression,
\begin{equation}
\frac{p+E}{NV} = \dot{\phi}^2
+ \frac{1}{2\pi^2}\int \frac{q^2 dq}{2\Omega_q}
\left[|\dot{\phi}_q(\tau)|^2 + \frac{q^2}{3} |\phi_q(\tau)|^2\right].\label{ppluse}
\end{equation}
Unlike the energy density, the pressure is not a constant of the motion
and needs proper subtractions to render it finite. The detailed expressions for
both the renormalized energy and pressure can be found
in references \cite{noneq,erice97,baacke}.
However, rather than computing the total energy
density and pressure, we will study the contributions from the modes
that are highly populated and whose amplitudes become non-perturbatively
large ($\approx 1/g$). Asymptotically, when the effective mass vanishes
and the low momentum modes become highly populated with amplitudes
of $O(1/g)$ the renormalized energy density is given by
(see ref.\cite{noneq,erice97} for the explicit expression of the renormalized
energy density)
\begin{equation}
\frac{E}{NV}= \frac{1}{4\pi^2}\int^{q_m}_0 \frac{q^2 dq}{2\Omega_q}
\left[|\dot{\phi}_q(\tau)|^2 + \omega^2_q(\tau) |\phi_q(\tau)|^2\right]
+ {\cal O}(g) \label{enerdensasym}
\end{equation}
where $q_m$ is the largest spinodally unstable wave vector at early times and
${\cal O}(g)$
represents terms that are perturbatively small.
Using the asymptotic solutions for the mode functions given by
Eq. (\ref{asymqnotzero}) and neglecting the strongly oscillatory phases
that average out at long times we obtain
\begin{equation}
\frac{E}{NV}= \frac{M^4_R}{2\pi^2}\int^{q_m}_0 \frac{q^4 dq}{2W_q}
\left[|\alpha_q|^2+|\beta_q|^2\right]+{\cal O}(g).
\label{enerdensasym2}
\end{equation}
Similarly, neglecting the contribution of
modes with small amplitudes, we find that the renormalized pressure plus
energy density is given by,
\begin{equation}
\frac{p+E}{NV} = \frac{4}{3}
\frac{M^4_R}{2\pi^2}\int^{q_m}_0 \frac{q^4 dq}{2W_q}
\left[|\alpha_q|^2 + |\beta_q|^2\right], \label{ppluseasym}
\end{equation}
so that in the asymptotic regime
\begin{equation}
p=\frac{E}{3},
\end{equation}
independent of the particle distribution which is non-thermal. This
is one of the important results of this work. In Fig. (\ref{trace}) we show
the trace of the energy momentum tensor $E-3p$ as a function of time,
for the same values of the parameters as in Figs. (1-5). Clearly,
the trace vanishes asymptotically.
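The step from Eq. (\ref{ppluse}) to the radiation-like result relies on replacing
$|\dot{\phi}_q|^2$ and $|\phi_q|^2$ by their averages over the rapid oscillations.
This can be checked directly for a single massless mode of the asymptotic form
(\ref{asymqnotzero}); in the sketch below $\alpha_q$, $\beta_q$ are arbitrary
illustrative coefficients, and the relative factor $1/2$ between the two measures
follows the prefactors of Eqs. (\ref{enerdens}) and (\ref{ppluse}).

```python
import cmath

def time_avg(f, T=150.0, n=60000):
    """Crude flat time average of f over [0, T]."""
    dt = T/n
    return sum(f(k*dt) for k in range(n))*dt/T

q = 3.0
alpha, beta = 0.8 + 0.3j, 0.2 - 0.6j    # illustrative asymptotic coefficients

def phi(t):                             # massless mode, Eq. (asymqnotzero)
    return alpha*cmath.exp(1j*q*t) + beta*cmath.exp(-1j*q*t)

def dphi(t):
    return 1j*q*(alpha*cmath.exp(1j*q*t) - beta*cmath.exp(-1j*q*t))

# per-mode energy-like and (p+E)-like quantities, time-averaged
E_mode  = 0.5*time_avg(lambda t: abs(dphi(t))**2 + q*q*abs(phi(t))**2)
pE_mode = time_avg(lambda t: abs(dphi(t))**2 + (q*q/3.0)*abs(phi(t))**2)
p_mode  = pE_mode - E_mode              # comes out as E_mode/3
```

The averages reduce to $\langle|\dot{\phi}_q|^2\rangle=q^2(|\alpha_q|^2+|\beta_q|^2)$
and $\langle|\phi_q|^2\rangle=|\alpha_q|^2+|\beta_q|^2$, giving $p=E/3$ mode by
mode, independently of the (non-thermal) spectrum.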
During the early stages of the
dynamics when spinodal instabilities arise and develop with
profuse particle production, an equation of state cannot be defined.
The dynamics cannot be described in terms of hydrodynamic
evolution. Since the processes under consideration are collisionless, there
is no local thermodynamic equilibrium and an equation of state is ill-defined.
\section{Conclusions}
We have studied the evolution of an $O(N)$-symmetric quantum field theory,
prepared in a strongly
out-of-equilibrium initial state. The initial state was characterized by a
particle distribution localized in a thin spherical
shell peaked about a non-zero momentum, a spherical ``tsunami''.
The formulation of this scenario resulted from a simplification of the
idealized `colliding-pancake' description of a heavy-ion
collision.
For a large density of particles in the initial state, the ensuing dynamics is
non-perturbative and
consequently we studied the $O(N)$ theory in the leading order in the large $N$
limit, which is a systematic non-perturbative approximation scheme. When the
tree-level theory has vacua that spontaneously break the
symmetry and the
number of particles within a correlation volume at $t=0$ is so high that
the symmetry is restored initially, spinodal instabilities are then
induced {\em dynamically}
resulting in profuse particle production for low momenta.
This situation is to be contrasted with
the usual studies of DCC's where the initial state is assumed to satisfy LTE
(local thermodynamic equilibrium) and the spinodal instabilities are introduced
either via an {\em ad-hoc} quench or via cooling due to hydrodynamic expansion
which is also introduced phenomenologically.
Backreaction of the long wavelength fluctuations eventually shuts off these
instabilities and the nonlinearities redistribute particles towards
low momenta. We thus find asymptotically in time a novel form of
pion condensation at low momentum, out of thermal equilibrium.
Furthermore, a macroscopic condensate of the Bose-Einstein type will form
at much longer times ($ mt \sim \sqrt{V} $).
When the spinodal instabilities
shut off we find that the asymptotic `quasiparticles' are massless
pions with a non-thermal, non-perturbative distribution function peaked at low
momentum but with an ultrarelativistic equation of state.
We believe that these phenomena point to very novel
and non-perturbative mechanisms for particle production and relaxation
that are collisionless, strongly out of local thermodynamic equilibrium and
cannot be described in the early stages via
a coarse-grained hydrodynamic evolution. These are the result of strongly out
of equilibrium initial states of high density that could potentially
be of importance in the dynamics of heavy ion collisions at high luminosity
accelerators.
A more realistic treatment, modelling a collision will require an initial
state which breaks the rotational invariance and selects out a beam-axis along
which the colliding pions move in opposite directions. However, the analysis of
such initial conditions is beyond the present numerical capabilities and will
be deferred to a future work.
An upshot of this study of high density, non-equilibrium particle distributions
is the following tantalizing theoretical question: can one extract a
resummation scheme, or an effective theory akin to the Hard Thermal Loop
effective expansion for arbitrary {\em non-equilibrium, non-thermal}
distributions such as the ``tsunami'' configuration for {\em gauge theories}?
Possible answers and consequences of such initial states for gauge theories will be
discussed in a forthcoming article \cite{gaugetsunami}.
\section{Acknowledgements:}
D. B. thanks the N.S.F. for partial support through the grant
award PHY-9605186, and LPTHE for warm hospitality. R. H.
and S. P. K. were supported by DOE grant DE-FG02-91-ER40682. S. P. K. would
like to thank BNL for hospitality during the progress of this work. H. J. de V.
thanks BNL and U. of Pittsburgh for warm hospitality.
The work of R.D.P. is supported by a DOE grant at
Brookhaven National Laboratory, DE-AC02-76CH00016.
The authors acknowledge partial support by NATO.
\newpage
\section{Introduction}
The crust of a neutron star (NS) is a rigid elastic shell around a
kilometre thick, which connects the supranuclear-density fluid core
with the star's magnetosphere and, in turn, with any observable
phenomena. As for any elastic medium, however, there is a maximum
strain it can sustain -- beyond which the crust will yield locally,
causing seismic activity or `crustquakes'.
A crustquake scenario related to changes in rotational strain was
suggested shortly after the discovery of radio pulsars, as a way to
explain observations that the otherwise stable spindown of a pulsar
can be interrupted by abrupt increases -- `glitches' -- in spin frequency and spin-down
rate \citep{baym_ppr}. The idea is that the rotational oblateness at
the star's birth is frozen into the crust; as the star spins down it
wants to become more spherical, and the overly-oblate crust develops strains
which eventually break it; the sudden reduction in the crust's moment of
inertia then spins the star up. This mechanism alone, however, cannot
explain the observed timing behaviour; instead, in the currently
standard glitch scenario the spin-up is attributed to a sudden
transfer of angular momentum from a more rapidly rotating superfluid
component to the rest of the star \citep{and_itoh}. Nonetheless, crustquakes are often invoked in
glitch models, either as a trigger for these sudden angular-momentum
transfer events \citep{le96,Eich10} or to explain the persistent
changes in spin-down rate seen after some glitches \citep{accp94}.
\begin{figure*}
\begin{center}
\begin{minipage}[l]{0.8\linewidth}
\psfrag{decay}{{\bf field decay/rearrangement}}
\includegraphics[width=\linewidth]{figs/crust-crack-new.eps}
\end{minipage}
\caption{\label{crust-crack}
Cartoon of crust-breaking scenario. For clarity we have shown the motion of an equatorial region of magnetic
flux to represent field rearrangement, but the argument is applicable to any local changes in the field
anywhere in the crust. We assume the young NS has reached a
hydromagnetic equilibrium by the time the crust freezes, so
that the crust does not initially need to support any
stresses (left-hand plot). At some later point in the star's evolution the magnetic field has lost
energy and would need to adjust to remain in a global
fluid equilibrium, but whilst this adjustment may take
place in the fluid core, it is resisted by shear
stresses in the crust (middle plot). The magnetically-induced crustal stresses build, and
eventually some region of the crust will exceed its
yield strain and break (right-hand plot). The local magnetic field will
be able to return to a fluid equilibrium again.}
\end{center}
\end{figure*}
In addition to these rotational effects, magnetic stresses will also develop in the crust throughout a NS's
lifetime, as a result of internal magnetic field evolution.
For typical radio pulsars such stresses might be negligible, since the
crust's elastic energy exceeds the magnetic energy, and the elastic
force dominates the Lorentz force. For highly magnetised NSs
however, like magnetars -- objects with inferred dipole magnetic
fields at least as high as $\sim\!10^{15}$ G -- the two energies are
comparable, and it is quite feasible that magnetic stresses could be
strong enough to induce crust-yielding: crustquakes or plastic
flow. Such magnetically-driven seismic activity forms the core of the widely-accepted model for magnetar
activity, first put forward by \citet{thom_dunc95}
to explain bursts in anomalous X-ray pulsars (AXPs) and the bursts and
gamma-ray giant flares in soft-gamma ray repeaters (SGRs). The
recurrent bursts in magnetars have characteristic durations in the range
$\sim\!0.01-1\ \rm{s}$ and peak luminosities up to $10^{41}\ \rm{erg\ s}^{-1}$,
and are in many cases associated with glitches or other timing
anomalies \citep{wt06,dib_kaspi14}.
The potential connection with crustquakes is consistent with the
observation that the burst-energy distribution in magnetars follows a
power law \citep{cheng96,gogus00}, similar to that of earthquakes.
Recent observations are indicative of a continuum of activity in radio
pulsars and magnetars \citep{kaspi}: SGRs have been discovered with weak inferred dipole
fields (see, e.g. \citet{rea10}), and magnetar-like activity has been
seen from some (otherwise rotationally-powered) radio pulsars, such as
the burst and coincident glitch in J1846-0258 \citep{ggg+08,kh09}.
This has led to considerable efforts to explain the different
phenomenologies of NSs in a unified scheme by studying the
thermal and magnetic-field evolution in their crusts \citep{perna_pons,PonsRea12}.
These first results suggest that seismic activity induced by
magnetic field evolution is of relevance not only for magnetars, but
also rotationally-powered pulsars.
Motivated by the many possible observational manifestations of
crustal stresses in a NS, we study a mechanism in
which a rearranging global magnetic field provides the source of these
stresses, eventually causing the crust to yield. We
derive a condition for magnetically-induced crustal failure based on
the von Mises criterion for the yielding of elastic media. Using a
variety of different magnetic-field models, including NSs with normal and
superconducting cores and with a force-free magnetosphere, we study
the crustal strain patterns that these field configurations would
produce and the point at which regions of the crust will yield. We
find a relationship between the depth of a crustal fracture, the
breaking strain and the corresponding energy release, and deduce a
characteristic field strength associated with crustquakes. We argue
that magnetically-induced crustquakes could power even the most luminous magnetar
phenomena, contrary to previous suggestions, as well as operate in NSs with
less exceptional inferred dipole magnetic field strengths.
\section{Magnetic-field equilibrium sequences}
\label{equilibria}
Around a day into its life a neutron star begins to form a crust,
crystallising gradually from the inside out over the course of the
following century\footnote{\red{See, e.g., \citet{ruderman_1968} for an
early discussion of this; \citet{gnedin} and references therein for
the theory of crustal thermal relaxation; and \citet{krueger} for a
figure of how different regions freeze into a crust over time.}}. Before the crust has even
begun to form, however, it is reasonable to expect the magnetic field to have
reached an equilibrium with the fluid star, since the timescale of
this process will be the same order of magnitude as an Alfv\'en-wave
crossing time (around a second for typical NS parameters and a
$10^{14}$ G field; shorter for stronger fields). The crust will thus
freeze in a relaxed state threaded by its early-stage magnetic field;
in the absence of shear stresses the equilibrium description of this
phase will just be that of a magnetised fluid body (left panel of
figure \ref{crust-crack}). Over time the star will gradually lose magnetic
energy through secular decay processes; see section \ref{decay}. The magnetic field will want to
adjust to a new fluid equilibrium, but its rearrangement will be
inhibited by the crust's rigidity (middle plot of figure
\ref{crust-crack}). The magnetically-induced stresses in the crust
will thus grow over time, and eventually exceed the elastic yield
value; when this happens the crust will break in the region where its
breaking strain has been exceeded, and the field will be able to
return (locally) to its fluid equilibrium configuration, depicted in
the right-hand panel of figure \ref{crust-crack}. The stress that
builds up in a NS's crust will thus be
sourced by the difference between the field configuration present when
the crust froze, and the magnetic field's desired present equilibrium,
which it is prevented from reaching by shear stresses. Both the `before'
and the (desired) `after' magnetic-field configurations are therefore fluid
equilibria, so by comparing two such equilibria with different values of
magnetic energy we can determine the expected stresses built up in an
elastic crust. A quantitative description of the above scenario
is given in section \ref{strain-deriv}, and its potential shortcomings
are discussed in the following subsection, \ref{caveats}.
To explore the possible range of these `before' and `after' magnetic-field
configurations during the evolution of a highly-magnetised neutron
star, we consider three classes of neutron-star model:
accounting for the possibilities that the core protons are
superconducting or not, and considering a scenario where the star has
a magnetar-like magnetosphere in equilibrium with its
interior field (and matches smoothly to it at the stellar
surface). From these various plausible models of a NS's field
configuration we hope to look for universal features and also
possible differences in how the crust breaks which could be used to
distinguish between them.
Our model NS is composed of protons, neutrons and electrons, but
the electrons have negligible inertia and their chemical potential can
simply be added as an extra contribution to that of the protons. We
are then left with a two-fluid core of protons and superfluid neutrons, matched at $0.9$ times the stellar
radius $R_*$ to a non-superconducting and unstrained crust. We capture
these features by confining the neutron fluid to the region between the
centre and $0.9R_*$, while having the proton fluid extend from the centre
out to the stellar surface, so that the shell from $0.9R_*$ to $R_*$
is a single-fluid region. Since a relaxed elastic medium obeys the
same equilibrium equations as a fluid, we can thus regard the proton
fluid in this outer single-fluid region as a `crust'
\citep{prix_novcom}. The equation of state we choose is effectively a double
polytrope \citep{LAG}, \red{setting the proton and neutron polytropic
indices $N_\textrm{p},N_\textrm{n}$ to values of $1.5$ and $1.0$
respectively, to mimic a `realistic' core proton-fraction profile in the
core (e.g. that of \citet{douchin})}. Since the polytropic indices of the two
fluids are different, the stellar models have composition-gradient
stratification. The neutron-density profile has, however,
negligible impact on these configurations. \red{Finally, although it would naturally be
more desirable to work directly with a tabulated equation of state,
instead of our double-polytrope approximation to one, we do not
believe that doing so would have any serious impact on our
results; see the discussion in section \ref{crust_props}.}
The code we use to calculate equilibria works in dimensionless units, and physical values
given here come from redimensionalising code results to one
particular model star of $1.4$ solar masses and with fixed neutron and proton
polytropic constants of
$k_\textrm{n}=5.65\times 10^4\ \textrm{g}^{-1}\textrm{cm}^{5}\textrm{s}^{-2}$
and
$k_\textrm{p}=2.74\times 10^{10}\ \textrm{g}^{-2/3}\textrm{cm}^{4}\textrm{s}^{-2}$
respectively. For our chosen neutron and proton density profiles these values produce a star with
radius of 12 km (varying very slightly with field
strength). Note that since all our models have the same mass and the same
equation of state (i.e. fixed polytropic indices and constants), they
correspond to the same physical star --
allowing for a direct comparison between different models.
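As a deliberately crude cross-check on these numbers (and emphatically not a substitute for the two-fluid calculation), note that for a single polytrope $P=k\rho^2$ (index $n=1$, like our neutron fluid) the Lane-Emden solution gives a radius $R=\sqrt{\pi k/2G}$, independent of mass. Evaluating this with the neutron polytropic constant above lands close to the quoted 12 km:

```python
import math

G = 6.674e-8     # gravitational constant, cm^3 g^-1 s^-2
k_n = 5.65e4     # neutron polytropic constant above, g^-1 cm^5 s^-2

# For P = k rho^2 (polytropic index n = 1), hydrostatic equilibrium gives
# rho(r) = rho_c sin(a r)/(a r) with a = sqrt(2 pi G / k); the first zero
# of sin fixes the stellar radius: R = pi/a = sqrt(pi k / (2 G)).
R = math.sqrt(math.pi * k_n / (2.0 * G))
print(f"single-polytrope radius: {R / 1e5:.1f} km")
```

The single-fluid estimate comes out around 11.5 km; the full two-fluid models, with the $N_\textrm{p}=1.5$ proton fluid extending to the surface, shift this slightly to the quoted 12 km.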
In all cases we are interested in mixed poloidal-toroidal
magnetic-field configurations, since these are the most generic
models and also the most likely to be stable
\citep{tayler_mix}. Although the toroidal-field component can be
locally strong in our
models, its contribution to the total magnetic energy is always small compared
with the poloidal one. The equilibrium models we consider here are all
chosen to have the strongest possible toroidal component; as we will
see later in section 4, this allows us to put an upper limit on how
readily the crust will yield.
The key differences between our three classes of model come from the
form of the magnetic force $\boldsymbol{\mathfrak{F}}_{\textrm{mag}}$, and the electric current distribution; we
discuss each case next and show example field configurations, all with
a polar-cap field strength $B_p=6.0\times 10^{14}$ G for direct comparison.
\subsection{Normal core protons, vacuum exterior}
\begin{figure}
\begin{center}
\begin{minipage}[c]{0.8\linewidth}
\includegraphics[width=\linewidth]{figs/norm_6e14.eps}
\end{minipage}
\caption{\label{norm_6e14}
Magnetic-field configuration for a model magnetar with
a polar-cap field strength $B_p=6.0\times 10^{14}$
G and a total magnetic energy of $6.8\times 10^{47}$ erg. The core is a two-fluid system of superfluid
neutrons and normal protons, matched to a normal crust
at a dimensionless radius $r/R_*=0.9$. The thick black
arc at $r/R_*=1$ represents the stellar surface. We plot
the poloidal-field lines, denoting the direction of this field component, whilst
the colour scale shows the magnitude of the toroidal
component, whose direction is azimuthal -- into/out of
the page.}
\end{center}
\end{figure}
This is the simplest case, where both the core protons and the crust are subject to the familiar
Lorentz force for normal (non-superconducting) matter:
\begin{equation} \label{lorentz}
\boldsymbol{\mathfrak{F}}_{\textrm{mag}} = \frac{1}{4\pi}(\nabla\times{\bf{B}})\times{\bf{B}},
\end{equation}
where ${\bf{B}}$ is the magnetic field.
Note that the neutron fluid does not feel any magnetic force. We
assume that the exterior of the star is a vacuum, with no
charged particles able to carry an electric current, so that
Amp\`ere's law simply imposes a restriction on the form of the
external magnetic field ${\bf{B}}_\textrm{ext}$:
\begin{equation}
\nabla\times{\bf{B}}_{\textrm{ext}} = 0.
\end{equation}
An alternative way to look at this condition is that there
\emph{could} be magnetospheric currents, but that they do not
communicate with the interior and therefore do not affect its
equilibrium\footnote{The converse assumption -- that the interior
field does not influence the exterior -- is standard in pulsar
magnetosphere modelling; see the discussion in \citet{GLA}.}.
One could justify this rather simplistic model by suggesting that the magnetic field in
a magnetar's core is strong enough to break proton superconductivity
\citep{baym_pp,sin_sed}, so that the normal-matter
equations would apply. One key motivation for us, however, is that
it allows us to produce configurations with stronger toroidal
components than in our other cases;
see figure \ref{norm_6e14}. We believe the reason for this
to be numerical rather than physical -- our code's iterative scheme converges to
strong-toroidal-field solutions more readily in this case than for the
other two models considered in this paper. This class of model is constructed using the
techniques described in \citet{LAG}, although the resultant field
configurations are similar to those of single-fluid models.
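The vacuum condition $\nabla\times{\bf{B}}_\textrm{ext}=0$ can be verified numerically for the simplest exterior field, a point dipole. This is a minimal sketch: the dipole moment and evaluation point below are arbitrary illustrative values, not taken from our computed models:

```python
import numpy as np

m = np.array([0.0, 0.0, 1.0e30])   # dipole moment, G cm^3 (illustrative)

def B_dipole(r):
    """Vacuum dipole field B = (3 (m.rhat) rhat - m) / |r|^3."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return (3.0 * np.dot(m, rhat) * rhat - m) / rn**3

def curl(F, r, h=1.0):
    """Central-difference curl of a vector field F at the point r."""
    J = np.zeros((3, 3))               # J[i, j] = dF_i / dx_j
    for j in range(3):
        dr = np.zeros(3); dr[j] = h
        J[:, j] = (F(r + dr) - F(r - dr)) / (2.0 * h)
    return np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])

r = np.array([1.0e6, 2.0e6, 3.0e6])   # a point outside the star, cm
c = curl(B_dipole, r)
scale = np.linalg.norm(B_dipole(r)) / np.linalg.norm(r)   # typical gradient size
print(np.linalg.norm(c) / scale)      # ~ 0: the exterior dipole is curl-free
```

The computed curl vanishes to finite-difference accuracy, as it must for any current-free exterior.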
\subsection{Superconducting core protons, vacuum exterior}
Our next class of equilibrium models are constructed as described in
\citet{sc_eqm_paper}. These consist of a core of superfluid neutrons
and type-II superconducting protons, matched to a normal crust.
In the crust, the magnetic field is smoothly distributed (on a
microscopic scale) and the
magnetic force is just the Lorentz force \eqref{lorentz}, acting on the entire crust. In the core, by contrast, the
effect of proton superconductivity is to quantise the field into
an array of thin fluxtubes; on the macroscopic level, this produces a
different magnetic force. Unlike the Lorentz force, which depends only
on the macroscopic field ${\bf{B}}$, the magnetic force for a type-II
superconductor also involves the \emph{lower critical field} ${\bf
H}_{c1}$, related to the magnetic field along fluxtubes \citep{easson_peth,akgun_wass,GAS}. This latter field is
parallel to ${\bf{B}}$ and proportional to the local proton density: in the
centre, where the proton density is highest, it reaches $10^{16}$ G,
but is on average around $10^{15}$ G within the core, irrespective of the value of
${\bf{B}}$. The most important feature governing these equilibria is the difference in
the form of the magnetic force for the core and crust:
\begin{equation} \label{mag_forces}
\boldsymbol{\mathfrak{F}}_{\textrm{mag}} = \left\{
\begin{split}
& \frac{1}{4\pi}(\nabla\times{\bf H}_{c1})\times{\bf{B}}-\frac{\rho_\rmp}{4\pi}\nabla\brac{B\pd{H_{c1}}{\rho_\rmp}}
& \ \ \textrm{(core)} \\
& \frac{1}{4\pi}(\nabla\times{\bf{B}})\times{\bf{B}}
& \ \ \textrm{(crust)} \\
\end{split}
\right.
\end{equation}
Our models assume the core and crustal fields match without any
current sheet in this region and, as for the models described in the
previous subsection, do not have any exterior current. An example of a
model with core proton superconductivity is shown in figure \ref{sc_6e14}.
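Since ${\bf H}_{c1}$ is proportional to the local proton density, its core average is fixed by the ratio of mean to central density. The following sketch assumes, purely for illustration, an $n=1$ polytropic profile $\rho_\textrm{p}\propto\sin x/x$ (our actual models use $N_\textrm{p}=1.5$), and recovers the quoted order of magnitude:

```python
import numpy as np

# n = 1 polytrope density profile: rho(x) ~ sin(x)/x on x in (0, pi]
x = np.linspace(1e-6, np.pi, 100001)
rho = np.sin(x) / x

def trapezoid(y, x):
    """Simple trapezoidal integration (avoids NumPy-version-dependent names)."""
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

# Volume average of a quantity proportional to density:
# <rho>/rho_c = Int rho x^2 dx / Int x^2 dx  (analytically 3/pi^2 ~ 0.30)
ratio = trapezoid(rho * x**2, x) / trapezoid(x**2, x)

Hc1_centre = 1.0e16                  # G, the central value quoted in the text
print(f"<H_c1> ~ {ratio * Hc1_centre:.1e} G")
```

This gives a core average of a few $\times 10^{15}$ G for a $10^{16}$ G central value, consistent with the $\sim\!10^{15}$ G figure above to within the accuracy of such an estimate.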
\begin{figure}
\begin{center}
\begin{minipage}[c]{0.8\linewidth}
\includegraphics[width=\linewidth]{figs/sc_6e14.eps}
\end{minipage}
\caption{\label{sc_6e14}
Magnetic-field configuration for a model NS with
$B_p=6.0\times 10^{14}$ G, with a
superfluid-superconducting core matched to a normal
crust. The total magnetic energy for this model is
$3.1\times 10^{48}$ erg. The crust-core boundary and surface are at
dimensionless radii of $0.9$ and $1.0$ as before; and
again, we plot poloidal field lines in black and
toroidal-field magnitude with the colour scale. Note
the weakness of the toroidal component compared with
the normal-matter model in figure \ref{norm_6e14}.}
\end{center}
\end{figure}
\subsection{Normal core protons, magnetosphere}
These equilibria are constructed in the same way as the
models with a normal core, except that we now allow for a
toroidal-field component that extends outside the star. This is
sourced by a poloidal electric current in a magnetosphere of charged
particles, located in a lobe around the equator as argued for by
\citet{belo_thomp}. Outside the lobe region there is vacuum,
where the field obeys $\nabla\times{\bf{B}}_{\textrm{ext}}=0$, but within it there
is a force-free region with
\begin{equation}
\nabla\times{\bf{B}}_\textrm{ext}=\alpha{\bf{B}}_\textrm{ext},
\end{equation}
where $\alpha$ is a function constant along magnetic-field lines,
governing the distribution of magnetospheric current; see
\citet{GLA} for details on the method of solution for these
configurations. In figure \ref{magneto_6e14} we plot two such models
of NSs in dynamical equilibrium with their magnetosphere, both
with $B_p=6.0\times 10^{14}$ G, but with $2.5\times 10^{46}$ erg of magnetic energy
removed from the toroidal component in the second plot, illustrating
how the magnetosphere rearranges in this case.
\begin{figure}
\begin{center}
\begin{minipage}[c]{0.8\linewidth}
\psfrag{before}{initial state}
\psfrag{after}{after magnetospheric decay}
\includegraphics[width=\linewidth]{figs/magneto_6e14.eps}
\end{minipage}
\caption{\label{magneto_6e14}
Two magnetic-field configurations for a normal-matter
NS with a current-carrying corona and
$B_p=6.0\times 10^{14}$ G, but with the lower plot having
$2.5\times 10^{46}$ erg less magnetic energy. All of
this energy has been taken out of the toroidal
component, visibly altering the magnetosphere. The
toroidal component attains a maximum value greater than
that of the model in figure \ref{sc_6e14}, but still less
than that in figure \ref{norm_6e14}. The lower model
has $5.4\times 10^{47}$ erg of magnetic energy.}
\end{center}
\end{figure}
At this point it is worth speculating about scenarios for the
formation of an equatorial corona of current-carrying plasma, although
this is not the focus of our work. The standard
argument for the formation of such a corona \citep{belo_thomp} assumes
the interior field evolves in such a way that it wishes to `eject
magnetic helicity' -- equivalently, to induce an electric current in
the environs of the star. For a mature NS this process cannot
happen immediately, but initially results in crustal stresses building
-- when these are released the initially poloidal field is twisted in an azimuthal direction,
thus generating a toroidal component.
As shown in \citet{GLA}, given a
sufficiently dense corona of charged particles, the star can form a
magnetosphere which is in dynamical equilibrium with the internal
field and hence supported by a relaxed crust, as opposed to one which
has to shear to generate the field. If we assume the toroidal
component is always confined to the same flux surface (poloidal field
line), then the decay of this field would cause the magnetosphere to change
shape, moving in towards the crust. As before, the magnetic flux's
inward motion would initially be inhibited by shear forces, but at a
later stage the induced stresses could grow large enough to break the crust. Figure
\ref{magneto_6e14} assumes a scenario like this, where the
configuration of the upper panel decays into that of the lower panel.
One could also view the panels in reverse, however, where an internal toroidal
field wishes to rise out of the star for whatever reason, but is again
inhibited by the crust. Clearly one cannot view the specific
configurations of figure \ref{magneto_6e14} as representing this scenario,
since that would require an increase in magnetic energy, but
qualitatively similar solutions with decreasing magnetic energy could
be constructed. In terms of a
changing global equilibrium this way round seems less likely, but
it does broadly represent the corona-formation mechanism discussed in
\citet{belo_thomp} and \citet{belo09}. Note that the strain patterns that would
build by running the scenario in this order would be the same as in
the reverse order, however, since the strain/yield criterion of
section \ref{strain-deriv} remains the same if the `before' and
`after' configurations are swapped around.
\section{Field decay}
\label{decay}
The fact that NS magnetic fields \emph{do} decay is well-established, from both
theoretical and observational study; our knowledge of the relative importance of
different decay mechanisms, and their corresponding timescales, is
nevertheless surprisingly incomplete. If the activity of young NSs like magnetars is
powered by field decay, however, there must be at least one
rather rapidly-acting decay mechanism. We therefore consider it
reasonable to \emph{assume} magnetically-induced stresses will
build in a NS crust on a timescale short enough to be astrophysically
relevant, even if current theoretical uncertainties
prevent us from pinpointing the mechanism(s) which will most readily
build these stresses. Here we briefly
review the literature on magnetic-field evolution to highlight the most promising
mechanisms for relatively rapid changes in a NS's field.
The most familiar source of magnetic field dissipation is Ohmic decay, which in
terrestrial materials and the neutron-star crust is the macroscopic
result of electrons scattering off a solid material's ion lattice, thus
heating it and reducing the electric current. Ohmic decay operates
more rapidly on small-scale fields than large-scale ones. In the crust
of a NS the separate process of Hall
drift acts to redistribute the magnetic flux into structures of
progressively shorter lengthscales \citep{gold_reis}; although this
process is not itself dissipative, it feeds magnetic energy to the small
scales on which Ohmic decay acts most rapidly.
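Order-of-magnitude timescales for these two crustal processes can be sketched as follows; the conductivity, electron density and lengthscale are assumed fiducial deep-crust values, not quantities from our models:

```python
import math

c_light = 3.0e10    # speed of light, cm/s
e = 4.803e-10       # elementary charge, esu
L = 1.0e5           # field lengthscale ~ crust thickness, cm (assumed)
sigma = 1.0e24      # electrical conductivity, s^-1 (fiducial deep-crust value)
n_e = 2.0e35        # electron number density, cm^-3 (fiducial deep-crust value)
yr = 3.156e7        # seconds per year

# Ohmic: tau ~ 4 pi sigma L^2 / c^2 -- independent of field strength
tau_ohm = 4.0 * math.pi * sigma * L**2 / c_light**2

# Hall: tau ~ 4 pi n_e e L^2 / (c B) -- faster for stronger fields
def tau_hall(B):
    return 4.0 * math.pi * n_e * e * L**2 / (c_light * B)

print(f"Ohmic: {tau_ohm / yr:.1e} yr")
print(f"Hall : {tau_hall(1.0e14) / yr:.1e} yr at 10^14 G")
```

With these numbers Ohmic decay alone takes millions of years, while Hall drift at magnetar field strengths operates on a $\sim\!10^5$ yr timescale, illustrating why the Hall cascade can substantially accelerate crustal field decay.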
For the core, many studies have argued that
the evolution is likely to be very slow. Ohmic decay itself must be
restricted to the thin cores of normal protons at the centre of
fluxtubes -- these cores comprise a minute volume of the NS core,
which is otherwise in a superconducting state, and so this decay
mechanism is expected to be extremely slow \citep{baym_pp2}. Ambipolar diffusion --
a drift of the charged particles, and hence the magnetic field, with
respect to the neutrons -- is both dissipative and acts to move the
core magnetic field outwards into the crust \citep{gold_reis}. However,
accounting for the superfluid state of the neutrons drastically
increases its timescale \citep{GJS}.
For magnetic
fields $B<H_{c1}\approx 10^{15}$ G, the Meissner effect is expected to
expel core magnetic flux to the crust-core boundary, a region of
(probably) higher electrical resistivity and hence faster Ohmic decay. More precisely, the Meissner effect
dictates that the eventual equilibrium state of the field will be one where it
is exponentially screened from the core over some short lengthscale --
it does not specify the dynamical mechanism which might achieve
this, nor the timescale. Different mechanisms have been invoked for
the transport of magnetic flux out of the core. The fluxtubes may move
out of the core through mutual self-repulsion \citep{kocharovsky},
driven by a buoyancy force \citep{mus_tsyg,wendell,harrison91,jones91}, or dragged by the
outwardly-moving neutron vortices as the star's rotation rate
decreases \citep{ding_cc}. The result of these various studies is an assortment of prospective timescales
for field decay which range over at least eight (!) orders of magnitude
($10^4-10^{12}$ yr). Nonetheless the consensus, inasmuch as there is
one, points to a rather slow core evolution and suggests that observed
field decay is crustal in origin.
Slow core-field evolution may, however, be contradicted by the
observation that young NSs like magnetars are able to build and
release huge stresses: the most energetic giant flare ($\sim\!
10^{46}\ \textrm{erg}$) came from a magnetar believed to be under a
thousand years old \citep{palmer,tendulkar}. Alternatively, instead of
being the result of secular stress build-up, magnetar giant flares may be
the manifestation of a rapidly-acting hydromagnetic instability \citep{thom_dunc96,ioka}
-- although that in itself requires the instability to be somehow
suppressed until some critical point, and therefore one might again have
to invoke the build-up of crustal stresses. There is clearly more
work to be done in attempting to achieve some kind of consensus on the
role of magnetic field decay in NS phenomena -- but if crustquakes induce
magnetar activity, as discussed in this paper, we may in fact be able to use
\emph{observations} to determine a core field decay timescale and hence
reduce the discordance of the theoretical models.
\section{Magnetically-induced crustquakes}
\subsection{Crustal properties}
\label{crust_props}
\red{To obtain quantitative results about how a magnetic field can act
to strain and eventually break a neutron-star crust, we need a
realistic model of this region -- in particular, for the crustal shear
modulus and breaking strain. In our equilibrium models, described in
section \ref{equilibria}, we used a
double-polytrope equation of state designed to mimic a `realistic'
core proton fraction, but unfortunately this results in an unrealistically low-density crust.} The crustal
density distribution does not have a strong impact on the magnetic
field configuration, but is important for calculating a reasonable
shear-modulus profile. Accordingly, we choose to take quantities from
a tabulated, `realistic', equation of state \citep{douchin}, and by doing so we
can take advantage of a recent shear-modulus fitting formula based on
the results of molecular-dynamics simulations \citep{horo_hugh}.
Using a polytropic crust model to calculate magnetic-field equilibria, but then adopting
parameters from a tabulated equation of state to calculate the shear
modulus, is clearly not consistent. Nonetheless, we
argue next that the level of inconsistency is justifiable to our order
of working. We use our equilibrium calculations solely to get models
of the magnetic field and not, for example, pressure or density
profiles. Had we employed the Douchin-Haensel equation of state
consistently throughout this work -- i.e. for our equilibrium
calculations too -- we would have obtained somewhat different
magnetic-field distributions. The degree of inconsistency in our
approach in this paper, therefore, depends on the difference between
equilibria calculated using our polytropic models and those calculated
with the equation of state of \citet{douchin} -- and this difference
should be small, since the dependence of magnetic-field distributions
on the stellar equation of state is in fact quite weak \citep{yosh_yosh_eri}.
\begin{figure}
\begin{center}
\begin{minipage}[c]{\linewidth}
\psfrag{log_mu}{$\displaystyle\log\brac{\frac{\mu}{\textrm{dyn\ cm}^{-2}}}$}
\psfrag{r_dimless}{$r/R_*$}
\includegraphics[width=\linewidth]{figs/shear_mod.eps}
\end{minipage}
\caption{\label{shear_mod}
The profile of the shear modulus $\mu$ throughout
the crust, calculated using equation
\eqref{HH_shear-mod} -- a fitting formula based on the
results of molecular-dynamics simulations \citep{horo_hugh}. The required crustal input
quantities (for example, the variation of atomic number within
the crust) come from polynomial fits to the
tabulated equation of state of \citet{douchin}, and
our magnetar temperature profile is taken from
\citet{kaminker_2009}.}
\end{center}
\end{figure}
\subsubsection{\red{Shear modulus}}
From the Douchin-Haensel tabulated equation of state we make simple
polynomial fits to the radial dependence of the baryon
number $n_b$, atomic weight $A$, atomic number $Z$ and free neutron
fraction $x_\textrm{n}^\textrm{free}$ in the crust. We fit our temperature profile to
results for a 1000-year-old magnetar from \citet{kaminker_2009} (see
their figure 6; we use their profile for the lower of the two heat
intensities, with a heat source at the top of the inner crust). The
maximum temperature slightly exceeds $10^9$ K. Our
fitting formulae approximate crustal parameters over the density range
$0.05\rho_\textrm{cc}<\rho<\rho_\textrm{cc}$ (where $\rho_\textrm{cc}$ is the
density at the crust-core boundary) and may deviate from the correct
behaviour below this density. On our numerical grid the crust is
covered by $24$ radial points, meaning that our fitting formulae are
designed to approximate all but the outermost four points -- precise
enough for our purposes.
To calculate crustal properties, we first note that the ion number
density in the crust $n_i=n_b(1-x_\textrm{n}^\textrm{free})/A$, from which we define
the ion sphere radius $a_i=(4\pi n_i/3)^{-1/3}$. The Coulomb coupling
parameter $\Gamma$ is then given by
\begin{equation}
\Gamma = \frac{(Ze)^2}{a_i k_\textrm{B} T},
\end{equation}
where $e$ is the elementary charge (i.e. of a proton) and $k_\textrm{B}$ is Boltzmann's constant.
From the various crustal properties discussed above, we are now in a position to determine
the shear modulus $\mu$ of our model NS crust
using the formula from \citet{horo_hugh}:
\begin{equation} \label{HH_shear-mod}
\mu = \brac{0.1106-\frac{28.7}{\Gamma^{1.3}}} \frac{n_i}{a_i}(Ze)^2.
\end{equation}
The resulting shear modulus profile we use is shown in figure
\ref{shear_mod}. At the innermost crustal gridpoint $\mu=2.4\times
10^{30}$ dyn cm${}^{-2}$ -- a little higher than the crust-core value
of $\mu=1.8\times 10^{30}$ dyn cm${}^{-2}$ from \citet{hoffman_heyl} and the classic
estimate of $\mu\approx 10^{30}$ dyn cm${}^{-2}$ \citep{ruderman_1969}.
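For concreteness, the chain from crustal composition to shear modulus can be evaluated at a single illustrative deep-crust point. The composition and temperature below are round fiducial numbers, not our actual polynomial fits to \citet{douchin}:

```python
import math

e = 4.803e-10      # elementary charge, esu
k_B = 1.381e-16    # Boltzmann constant, erg/K

# Illustrative deep-crust values (assumed, not our fitted profiles)
n_i = 1.0e34       # ion number density, cm^-3
Z = 38             # atomic number
T = 1.0e9          # temperature, K

a_i = (4.0 * math.pi * n_i / 3.0) ** (-1.0 / 3.0)   # ion-sphere radius, cm
Gamma = (Z * e) ** 2 / (a_i * k_B * T)              # Coulomb coupling parameter

# Horowitz & Hughto fitting formula for the effective shear modulus
mu = (0.1106 - 28.7 / Gamma**1.3) * (n_i / a_i) * (Z * e) ** 2

print(f"Gamma = {Gamma:.0f} (solid for Gamma above ~175)")
print(f"mu    = {mu:.2e} dyn cm^-2")
```

These fiducial inputs give $\Gamma$ of several hundred (well into the solid regime) and $\mu\sim 10^{29}$ dyn cm${}^{-2}$, sitting sensibly between the low-density outer crust and the crust-core values quoted above.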
\subsubsection{\red{Breaking strain}}
\label{breaking_strain}
\red{Recent molecular-dynamics simulations indicate that the neutron-star
crust is considerably stronger than previously thought
\citep{horo_kad,hoffman_heyl}, with a
breaking strain $\sigma_\textrm{max}$ around $0.1$ (dimensionless,
since a strain is a fractional deformation; a ratio of two
lengths). $\sigma_\textrm{max}$ is essentially temperature-independent as long
as one is well above the melting temperature (whose value corresponds to
$\Gamma\approx 175$); it is also independent of density, except perhaps
in a narrow region of `nuclear pasta' at the crust-core boundary
\citep{ravenhall}; and neither impurities nor strain rate have a
significant impact on it. Accordingly, taking the
breaking strain as constant is a good first approximation
\citep{horo_private}. Note that the breaking \emph{stress}, by
contrast, is a dimensional quantity (with units of pressure) and has significant
variation within the crust \citep{chug_horo}. In this paper we adopt two
canonical values for the breaking strain: $\sigma_\textrm{max}=0.1$ to
reflect recent simulations, and $\sigma_\textrm{max}=0.001$ to compare
with earlier work.}
\subsection{A criterion for magnetically-induced crustquakes}
\label{strain-deriv}
We are interested in how magnetic field decay/rearrangement causes
strain to build in a neutron star's crust, and where and when this
strain might finally cause the crust to break. Since there is no
reason to expect the magnetic field to be uniform -- or to
decay/rearrange uniformly -- the built-up strains will vary
greatly within the crust. Previous crust-breaking criteria based on global
estimates \citep{thom_dunc95,hoffman_heyl} are therefore not only
somewhat crude, but also give no idea of \emph{where} the crust will
fail. We aim to improve on these with a criterion, derived next, that
accounts for the \emph{local} changes in magnitude and direction of the
field.
To simplify the algebra in the derivation which follows, we use
standard tensor index notation, denoting tensor indices with $i$ and
$j$. We start with the general stress tensor for the crust in our
model:
\begin{equation}
\tau_{ij} = -pg_{ij} + \mu\sigma_{ij} + \mathcal{M}_{ij},
\end{equation}
where $p$ is fluid pressure, $g_{ij}$ the flat-space 3-metric,
$\sigma_{ij}$ the elastic strain tensor and $\mathcal{M}_{ij}$ the Maxwell
(magnetic) stress tensor. In this problem we are only considering equilibrium
configurations --- either strained or unstrained --- so the sum of the
stresses should balance: $\tau_{ij}=0_{ij}$, where $0_{ij}=0\ \forall\{i,j\}$.
We assume the NS's crust freezes in a relaxed state, with
a certain magnetic energy and polar-cap field strength; quantities
pertaining to this state will be denoted with a subscript or
superscript zero in the following derivation. With no shear
forces present, the equilibrium at this stage is
that of a fluid body:
\begin{equation}
0_{ij} = -p^0 g_{ij} + \mathcal{M}^0_{ij}.
\end{equation}
Over the star's lifetime, different secular processes (see previous
section) act to reduce the magnetic energy, so that the star's evolution can
be described by a sequence of quasi-static equilibria, with
incrementally smaller values of magnetic energy. These are no longer fluid equilibria,
however, as the crust resists any adjustment of the magnetic field by
balancing the Lorentz forces by its elastic shear force:
\begin{equation}
0_{ij} = -pg_{ij} + \mu\sigma_{ij} + \mathcal{M}_{ij}.
\end{equation}
The magnetically-induced change to the fluid pressure $p$ will be
tiny, and so the difference between its initial value and that at a
later time may safely be neglected, i.e. $p^0-p\approx 0$. The strain
in the crust is thus entirely sourced by the difference in the Maxwell
stress tensor between initial and later field configurations:
\begin{equation} \label{maxwell-diff}
\mu\sigma_{ij} = \mathcal{M}^0_{ij} - \mathcal{M}_{ij}.
\end{equation}
The magnetically-induced stresses in the crust gradually grow, and are
largest where the field wishes to adjust the most. For sufficiently strong
magnetic fields and sufficient readjustment, the crust will yield in
some region, allowing the magnetic field in
the affected region to return to a fluid equilibrium; recall the
cartoon in figure \ref{crust-crack}.
To proceed we need the explicit form of $\mathcal{M}_{ij}$. Since the crust
is not superconducting this is the familiar Maxwell stress tensor:
\begin{equation} \label{maxwell-tensor}
\mathcal{M}_{ij} = \frac{1}{4\pi}\brac{B_i B_j - \frac{1}{2}B^2\delta_{ij} }.
\end{equation}
Note that taking the divergence of this tensor gives:
\begin{equation}
\div\mathcal{M} = \frac{({\bf{B}}\cdot\nabla){\bf{B}}}{4\pi} - \frac{\nabla B^2}{8\pi},
\end{equation}
the Lorentz force, as expected.
The von Mises criterion predicts that an isotropic elastic medium will
yield when
\begin{equation} \label{von_mises_orig}
\sqrt{ \textstyle{\frac{1}{2}}\sigma_{ij}\sigma^{ij} } \geq \sigma_\textrm{max}.
\end{equation}
This is not, strictly
speaking, a criterion for breaking; `yield' means only that the crust
ceases to respond elastically to additional strains, but may enter a regime of plastic
flow before actually breaking. We ignore the distinction between these
two responses for now, and use the terms `yield' and `break'
interchangeably. For the purposes of our work the distinction is not so
important, as we anticipate both breaking and plastic flow to release
the same total amount of pent-up magnetic energy, but perhaps in
characteristically different ways and over different timescales; see
section \ref{discussion}.
Now, from equations \eqref{maxwell-diff} and \eqref{maxwell-tensor}:
\begin{align}
\sigma_{ij}\sigma^{ij}
&= \frac{1}{\mu^2}
\brac{ \mathcal{M}^0_{ij}\mathcal{M}^{ij}_0 + \mathcal{M}_{ij}\mathcal{M}^{ij}
- \mathcal{M}^0_{ij}\mathcal{M}^{ij} - \mathcal{M}_{ij}\mathcal{M}^{ij}_0 } \nonumber \\
&= \frac{1}{64\pi^2\mu^2}
\brac{ 2B^2 B_0^2 + 3B^4 + 3B_0^4 - 8({\bf{B}}\cdot{\bf{B}}_0)^2 }.
\end{align}
The von Mises criterion \eqref{von_mises_orig} applied to the case of
crust-yielding sourced by a changing magnetic-field equilibrium is therefore:
\begin{equation}
\frac{1}{8\pi\mu}\sqrt{ B^2 B_0^2 + \textstyle{\frac{3}{2}}B^4 + \textstyle{\frac{3}{2}}B_0^4
- 4({\bf{B}}\cdot{\bf{B}}_0)^2 } \geq \sigma_\textrm{max}.
\end{equation}
Since we will explore varying the breaking strain, we will use the
following quantity in strain plots:
\begin{equation} \label{von_mises_mag}
\frac{\sqrt{ B^2 B_0^2 + \textstyle{\frac{3}{2}}B^4
+ \textstyle{\frac{3}{2}}B_0^4 - 4({\bf{B}}\cdot{\bf{B}}_0)^2 }}{8\pi\mu\sigma_\textrm{max}}.
\end{equation}
Accordingly, we expect any regions in the crust where this quantity
exceeds unity to break. We consider the validity of our crustquake model
and alternatives to it in the next subsection, and then present our results.
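The quantity \eqref{von_mises_mag} is straightforward to evaluate pointwise. The short sketch below is an illustration of ours (not part of the original analysis), with field and shear-modulus values chosen purely for demonstration.

```python
import math

def magnetic_strain(B, B0, mu):
    """Effective strain sourced by the field changing from B0 to B:
    sqrt(B^2 B0^2 + 3/2 B^4 + 3/2 B0^4 - 4 (B.B0)^2) / (8 pi mu)."""
    b2 = sum(c * c for c in B)
    b02 = sum(c * c for c in B0)
    dot = sum(a * b for a, b in zip(B, B0))
    arg = b2 * b02 + 1.5 * b2**2 + 1.5 * b02**2 - 4.0 * dot**2
    return math.sqrt(max(arg, 0.0)) / (8.0 * math.pi * mu)

mu = 1e30  # illustrative shear modulus [dyn cm^-2]
print(magnetic_strain([1e15, 0, 0], [1e15, 0, 0], mu))   # 0.0: unchanged field, no strain
print(magnetic_strain([1e15, 0, 0], [-1e15, 0, 0], mu))  # 0.0: Maxwell stress is even in B
print(magnetic_strain([1e15, 0, 0], [0, 1e15, 0], mu))   # b^2/(4 pi mu) for a 90-degree rotation
```

The last case shows that rotating a $10^{15}$ G field through $90\deg$ at $\mu=10^{30}$ dyn cm${}^{-2}$ yields an effective strain $\approx 0.08$, i.e. comparable to the high breaking strain $\sigma_\textrm{max}=0.1$ discussed above.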
\subsection{Validity of our crustquake criterion}
\label{caveats}
In this paper we aim to take a commonly-invoked idea of magnetic field decay
driving crustquakes and put it on a firm quantitative footing. Our
approach, in summary, is to study how a changing magnetic-field
equilibrium strains a NS's crust. We do not perform time-dependent
simulations of this process, so we cannot actually simulate a fracture
event -- instead, we use the von Mises yield criterion to check which
regions of the crust have exceeded the breaking strain, and infer that
those regions will yield. We have in
mind a scenario where a substantial region of the crust fails
collectively in a fracture -- which appears to contradict
a recent suggestion that crack propagation, and hence mechanical
failure, is inhibited in magnetised NS crusts by the Lorentz force
\citep{lev_lyu}. We are not considering mechanical failures with
arbitrary geometry, however, but ones which are \emph{induced} by the
Lorentz force and therefore are dictated by the magnetic-field
geometry rather than impeded by it. Nonetheless, even if the crust
fails gradually in small regions and/or enters a regime of plastic flow \citep{jones03,belo_lev}, the results we
present should still represent the total energy output over the
yield process.
We assume shear stresses are sourced solely by the crust resisting the
rearrangement of the star's hydromagnetic equilibrium. This is in the
same spirit as \citet{braith_spruit}, although their approach was to isolate one
piece of the Lorentz force to diagnose the build-up of stress, whereas
we have derived a tensor-based yield criterion which follows
rigorously from elasticity theory. By comparing hydromagnetic equilibria, we
are neglecting the separate secular evolution of the star's field, and
in particular that of the crust \citep{pons_mu}; the only
role of any dissipative effect in our models is to induce
the field to rearrange into a new equilibrium. Our study is therefore
complementary to the crustquake modelling of \citet{perna_pons}, who
\emph{did} look at the build-up of crustal stresses due to magneto-thermal evolution in
the crust, but neglected any effects related to changes in
the star's global equilibrium.
\begin{figure*}
\begin{center}
\begin{minipage}[c]{\linewidth}
\psfrag{supercon}{core superconductivity}
\psfrag{normal}{normal core}
\psfrag{vacuum}{vacuum exterior}
\psfrag{magnetosphere}{magnetosphere}
\psfrag{high-sigma}{high-$\sigma_{\textrm{max}}$}
\psfrag{low-sigma}{low-$\sigma_{\textrm{max}}$}
\includegraphics[width=\linewidth]{figs/strains.eps}
\end{minipage}
\caption{\label{strains}
Logarithmic plots of the ratio of magnetic strain to
breaking strain within a neutron-star crust;
when the ratio exceeds unity (i.e. zero for this
logarithmic plot) the crust should break. The colour
scale shows regions where the ratio is $0.5$ or
greater, corresponding to $-0.3$ or greater on the
logarithmic scale, reflecting the fact that a real
NS crust's crystalline lattice may contain flaws and
impurities which cause it to break before reaching the
limit for a pure crust. We plot the crust at twice its
actual thickness to show strain patterns more clearly. All
plots show strain built up in NSs with a
present-day field strength $B_p=6.0\times 10^{14}$ G,
after the loss of $2.5\times 10^{46}$ erg of magnetic
energy. This loss represents $0.80\%$ of the
present-day total magnetic energy for the left-hand
plots (superconducting core protons and a vacuum
exterior), $4.7\%$ for the middle plots (normal core
protons, non-vacuum exterior) and $3.6\%$ for the right-hand
plots (normal core protons, vacuum exterior). The top row shows results for a very strong crust, with
a breaking strain $\sigma_\textrm{max}=0.1$; the bottom row is the same
set of configurations but assuming a more
`traditional' value of $\sigma_\textrm{max}=0.001$. Our
models show that a NS crust yields most easily if the
star has a locally strong toroidal-field component,
with the failure occurring in the outer equatorial
region first.}
\end{center}
\end{figure*}
In addition to the potential role played by the secular
field-rearrangement processes in the crust, one other potential
concern is the inherent degeneracy in picking sequences of equilibria to
represent snapshots of the rearrangement of a NS's decaying magnetic
field. Since this process is dissipative, there is no obvious quantity
to hold constant -- in contrast with, for example, the case of
accretion-driven burial of a NS's magnetic field \citep{payne_mel}.
Our picture of a sequence of equilibria as snapshots of a secular
evolution is therefore not self-consistent, even though we rescale our
numerical results to one specific physical NS ($1.4$ solar masses and
a radius of $12$ km). Somewhat arbitrarily, we assume that the ratio of poloidal
to toroidal components remains constant for our models with only
interior currents, whilst assuming that in our `magnetosphere' models
the exterior current decays most quickly, thus predominantly reducing
the toroidal component (which is partially sourced by these exterior
currents). Ideally one would verify these assumptions with a full
magneto-thermal evolution of the coupled core-crust-magnetosphere
system, but the technology to perform such simulations is not yet
available. For now, we believe the work presented in this paper to
be as complete as is currently possible.
\subsection{Strain patterns in a neutron-star crust}
In figure \ref{strains} we plot the strain patterns that would develop in a
NS crust after a period in which $2.5\times 10^{46}$ erg of
magnetic energy has decayed, assuming the crust's initial state was
relaxed. In all cases the final, `present-day' polar cap field strength is taken to
be $B_p=6.0\times 10^{14}$ G. For clarity the $0.1R_*$-thick crust
($1.2$ km for our models) has
been stretched linearly in the plot to appear at twice its actual thickness. We consider the three classes of model
described in section \ref{equilibria}: superconducting core protons
and a vacuum exterior; normal core protons and magnetospheric
currents; normal core protons and a vacuum exterior. We plot the
quantity from equation \eqref{von_mises_mag}, which is greater than
unity for regions of the crust which are expected to yield; since our
colourscale is logarithmic, zero represents the minimum value at which
the crust is expected to yield (given the caveats discussed in the
previous subsection). To include regions on the verge of breaking, we
also show parts of the crust where the quantity \eqref{von_mises_mag}
exceeds $0.5$. These parts may actually fail, rather than just being
on the verge of it, if the crustal lattice contains flaws/impurities
or if a large region fails collectively, for example. The colour scale shows how much
strain builds up in each region. The top row of plots assumes a very
strong crust, with $\sigma_\textrm{max}=0.1$, whilst the bottom row
uses $\sigma_\textrm{max}=0.001$ for comparison; see the discussion
in section \ref{breaking_strain}.
Superficially, figure \ref{strains} seems to suggest that
normal-matter models with a vacuum exterior are the most prone to
fracture, given a fixed loss of total magnetic energy. If we return to
the equilibrium models used to generate these plots (figures
\ref{norm_6e14}, \ref{sc_6e14} and \ref{magneto_6e14}), however, we see
that the comparison is not quite fair: the three classes of equilibria
have strikingly different toroidal-field strengths, with that of the
superconducting model being an order of magnitude weaker than the
other two. A more reliable conclusion to draw from our results is that a strong
toroidal-field component allows for the greatest build-up of stress
in a NS crust, in agreement with previous studies
\citep{thom_dunc95,pons_perna}. Our results are also distinct from this
earlier work, however, in that we anticipate the greatest stress
build-up -- and eventually a crustquake -- to occur in
a belt around the equator. By contrast, a poloidal-dominated field builds up
stresses more gradually, and in a region around the pole.
\subsection{Energy release and characteristic field strength for crustquakes}
One key question for any model of crustquakes is the amount of
energy that could be released in such an event. Here we compare
sequences of models to determine the relationship between the various
quantities in the problem: the energy release in a quake, the depth of
the `fracture' (i.e. the region which fails), the breaking strain and the polar-cap field
strength. For later comparison, we first quote the result of a
back-of-the-envelope calculation \citep{thom_dunc95} for crustquake
energy release:
\begin{equation} \label{TD95_estimate}
\frac{E_\textrm{out}}{10^{40}\textrm{ erg}}
\sim 4\brac{\frac{l}{1\textrm{ km}}}^2
\brac{\frac{B_\textrm{c}}{10^{15}\textrm{ G}}}^{-2}
\brac{\frac{\sigma_\textrm{max}}{0.001}}^2,
\end{equation}
where $B_\textrm{c}$ is the crustal magnetic field.
Note that this estimate gives the energy released $E_\textrm{out}$ from the failure of
an \emph{area} of size $l^2$; we find the notion of energy release
from a volume more natural, since magnetic energy is a volume integral
over $B^2$.
For our results, we produce sequences of field configurations by fixing one equilibrium model, the
`after' (present-day) model with crustal strains sourced by the magnetic
field, and varying the other, `before' (original) configuration -- i.e. the
initial star with its relaxed crust. We assume the `before' field
has decayed into the `after' field -- so that the greater the
difference in magnetic energy between these models, the larger the
region of the crust that should be strained to the point of yielding. We also explore the effect of
varying the breaking strain and the `after' field strength. We
then compare the depth of the fracture in each case with
the magnetic-energy change in the region which fails, which we regard as the
energy released over the crustquake and denote $E_\textrm{quake}$.
As discussed in the previous subsection, our models with
normal core protons and vacuum exterior have the highest ratio of
toroidal-component maximum to polar-cap field strength. We believe
that equilibrium solutions with similarly high ratios do \emph{exist}
in other cases, in particular the case with core superconductivity,
but that our numerical scheme is simply less successful at converging
to them. In this section we will only consider the class of models
with a normal core and vacuum exterior, and will use the strongest
toroidal components we can, as before, since this seems to be
associated with the greatest build-up of strain. Given that we believe
similarly strong toroidal fields should exist in other cases, however,
the results presented here are intended to be representative of a favourable
crust-breaking scenario for \emph{any} model.
\begin{figure}
\begin{center}
\begin{minipage}[c]{\linewidth}
\psfrag{Emag}{$\displaystyle\frac{E_\textrm{quake}}{10^{45}\textrm{ erg}}$}
\psfrag{frac_depth}{$\displaystyle\frac{d}{R_\textrm{c}}$}
\includegraphics[width=\linewidth]{figs/released_Emag.eps}
\end{minipage}
\caption{\label{log-lin}
The amount of energy released in a crustal
fracture, as a function of fracture depth. Fixing
the present-day polar-cap field strength as
$B_p=3.0\times 10^{14}$ G we consider three different
breaking strains, as labelled on the figure:
$\sigma_\textrm{max}=0.001,0.005,0.01$. Top: for
sufficiently deep fractures the relationship between
depth and energy loss is approximately
exponential (shown by the lines). Bottom: a zoomed-in
version of the above shows that for
shallower fractures the relationship deviates
from the exponential one and is better approximated by
a cubic function.}
\end{center}
\end{figure}
We begin by fixing the present-day polar-cap field strength as
$3.0\times 10^{14}$ G and varying the initial field strength. We then
calculate the ratio of magnetically-induced strain $\sigma$ to
breaking strain $\sigma_\textrm{max}$ throughout the crust,
using equation \eqref{von_mises_mag}, to determine
what depth of region will fail according to the von Mises yield
criterion. The difference in magnetic energy between the
`before' and `after' equilibrium configurations, within the volume of
the crust which breaks, gives us the energy $E_\textrm{quake}$ released in such a
crustquake:
\begin{equation} \label{E_quake}
E_\textrm{quake}
= \int\limits_{\sigma\geq\sigma_\textrm{max}}
\frac{(B_0^2-B^2)}{8\pi}\ \textrm{d}V.
\end{equation}
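As a sanity check of \eqref{E_quake}, the volume integral can be discretised on an $(r,\theta)$ grid. The sketch below (an illustration of ours, not the code behind our results) uses crude rectangle-rule quadrature and uniform toy fields, chosen so that the answer is known in closed form.

```python
import numpy as np

def quake_energy(r, theta, B0_mag, B_mag, yields):
    """Integrate (B0^2 - B^2)/(8 pi) over the yielding region, assuming
    axisymmetry: dV = 2 pi r^2 sin(theta) dr dtheta.
    Rectangle-rule quadrature -- crude, but adequate for a sketch."""
    integrand = np.where(yields, (B0_mag**2 - B_mag**2) / (8.0 * np.pi), 0.0)
    R, TH = np.meshgrid(r, theta, indexing="ij")
    weight = 2.0 * np.pi * R**2 * np.sin(TH)
    return float((integrand * weight).sum() * (r[1] - r[0]) * (theta[1] - theta[0]))

# Toy check: uniform fields, with the whole crustal shell yielding
r1, r2 = 1.08e6, 1.2e6                      # 0.9 R* to R* for R* = 12 km [cm]
r = np.linspace(r1, r2, 400)
theta = np.linspace(0.0, np.pi, 400)
B0 = np.full((r.size, theta.size), 2e14)    # 'before' field magnitude [G]
B = np.full_like(B0, 1e14)                  # 'after' field magnitude [G]
E = quake_energy(r, theta, B0, B, np.ones_like(B0, dtype=bool))
E_exact = (2e14**2 - 1e14**2) / (8 * np.pi) * (4 * np.pi / 3) * (r2**3 - r1**3)
print(f"{E:.3g} vs exact {E_exact:.3g} erg")
```

Reassuringly, a full-shell yield with these toy field strengths releases $\sim\! 2\times 10^{45}$ erg, the same scale as the crustquake energies found below.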
Our results for the variation of energy release with fracture depth
are plotted in figure \ref{log-lin} for three different breaking
strains, \red{to allow us to check the dependence on this quantity too}. For fracture
depths exceeding around half the crustal thickness, we find that the data are fitted satisfactorily by
an exponential relation between energy release and depth; see top
panel. For shallower fractures, however, a cubic fit is better
(bottom panel). Note that the exponential relation could not in any
case be applicable at shallow depths, since it does not give the
correct limiting behaviour that if there is no fracture there can be
no energy release (i.e. the energy-versus-depth fit line must pass through the origin).
\begin{figure}
\begin{center}
\begin{minipage}[c]{\linewidth}
\psfrag{Emag}{$\displaystyle\frac{E_\textrm{quake}}{10^{45}\textrm{ erg}}$}
\psfrag{frac_depth}{$\displaystyle\frac{d}{R_\textrm{c}}$}
\includegraphics[width=\linewidth]{figs/varyB_Equake_inset.eps}
\end{minipage}
\caption{\label{varyB}
The relationship between fracture depth and energy
release for four different present-day polar-cap field
strengths, for a breaking strain of $0.005$. We see
that the results appear to be completely independent of
the field strength. The exponential fit is seen to
approximate the behaviour for deep fractures and large
energy release, whilst the cubic fit (inset) is more
accurate for shallow fractures and smaller release of energy.}
\end{center}
\end{figure}
\red{Since equation \eqref{TD95_estimate} suggests our results may be
dependent on the NS's field strength, we investigate this next}. In
figure \ref{varyB} we fix the breaking strain at $0.005$ and show
the variation of crustquake energy release with fracture depth for four different present-day polar-cap field strengths,
varying over an order of magnitude. The data points for the four
different field strengths all appear to lie along the same line, with
no evident variation with field strength. This is not actually so
surprising -- \red{whilst the stresses are \emph{induced} by the magnetic
field, they are stored as elastic energy, so that crustquake energy
release depends only on crustal properties: the volume of the crust which yields and
the strain at which this occurs. The magnetic-field strength is
likely to be important in affecting the rate of crustquake events, but such
time-dependent behaviour is beyond the scope of this paper}.
As for figure
\ref{log-lin}, we see in figure \ref{varyB} that
the quake energy-depth relation appears to be exponential for deeper
fractures, and cubic for shallower fractures (see inset). Combining
our results from these last two figures and denoting the crustal
thickness by $R_\textrm{c}$, we find that the relation
between quake depth $d$ and energy release is \emph{independent} of the field
strength -- in contrast with earlier estimates -- and given by
\begin{equation} \label{E-d_deep}
\frac{E_\textrm{quake}}{10^{45}\textrm{ erg}}
= 0.31\brac{\frac{\sigma_\textrm{max}}{0.001}}
\exp\left[6(\textstyle\frac{d}{R_{\textrm{c}}}-1)\right]
\end{equation}
for deep fractures ($d\gtrsim 0.5R_\textrm{c}$), and
\begin{equation} \label{E-d_shallow}
\frac{E_{\textrm{quake}}}{10^{45}\textrm{ erg}}
= 0.25\brac{\frac{\sigma_{\textrm{max}}}{0.001}}
\brac{\frac{d}{R_\textrm{c}}}^3
\end{equation}
for shallower ones.
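The two fits can be written out explicitly; this short sketch (our illustration, not the paper's analysis code) simply evaluates equations \eqref{E-d_deep} and \eqref{E-d_shallow}.

```python
import math

def E_deep(d_over_Rc, sigma_max):
    """Deep-fracture fit (d >~ 0.5 R_c), in units of 10^45 erg."""
    return 0.31 * (sigma_max / 0.001) * math.exp(6.0 * (d_over_Rc - 1.0))

def E_shallow(d_over_Rc, sigma_max):
    """Shallow-fracture fit, in units of 10^45 erg."""
    return 0.25 * (sigma_max / 0.001) * d_over_Rc**3

for d in (0.2, 0.5, 1.0):
    print(f"d/Rc = {d}: shallow {E_shallow(d, 0.005):.3g}, deep {E_deep(d, 0.005):.3g}")
```

Note that at a base-of-crust fracture ($d=R_\textrm{c}$) the two fits agree to within about $20\%$, so the crossover between the regimes is smooth.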
Since we work in axisymmetry, the above results apply to the case of a
whole equatorial belt of crust fracturing at once; the width and
length of the fracture\footnote{In Cartesian coordinates, the strain
plots of figure \ref{strains} are in the $x-z$ plane. Since the
fractures we consider are centred around the equator, we use the
term `depth' to refer to the size of the fracture in the
$x$-direction, `length' to refer to the size across the surface of
the star, i.e. in the $y$-direction, and `width' to refer to the
fracture's size in the $z$-direction.} are therefore not
independent of the depth, and so we obtain relations in terms of
this one fracture dimension, instead of all three. Whilst the width
and depth of the fracture are both related to the crustal thickness,
the length $l$ is related to the larger scale of the
circumference of the star $2\pi R_*=20\pi R_\textrm{c}$. To reflect
this, we can modify equation \eqref{E-d_shallow} by replacing one factor
of $d/R_\textrm{c}$ with $l/2\pi R_*$, giving the expected relationship if the length of the
fracture does not extend right across the star:
\begin{equation} \label{E-dl}
\frac{E_{\textrm{quake}}}{10^{45}\textrm{ erg}}
\approx 0.25\brac{\frac{\sigma_{\textrm{max}}}{0.001}}
\brac{\frac{d}{R_\textrm{c}}}^2
\brac{\frac{l}{2\pi R_*}}.
\end{equation}
Note that our results are only quantitatively correct for our
particular (axisymmetric) crust-yielding scenario, so the above
relation is an approximate one. Now, from the definition of magnetic
energy as a volume integral of $B^2$,
we see that its dimensions are $[E]=[B]^2 L^3$; we can
therefore use our quake energy-depth relations to find a
characteristic field strength associated with the crust yielding. In
particular, if we take the shallow-fracture formula
\eqref{E-dl} and multiply through by $10^{45}$ erg and
$R_\textrm{c}^3=(1.2\times 10^5\textrm{ cm})^3$ we get
\begin{equation} \label{E-dl_dim}
E_\textrm{quake}\approx 2.3\times 10^{27}
\brac{\frac{\sigma_\textrm{max}}{0.001}}
d^2 l\textrm{ erg},
\end{equation}
where we have also used the fact that the ratio of fracture depth to
length $d/l\approx R_\textrm{c}/2\pi R_* = 1/20\pi$.
Equation \eqref{E-dl_dim} gives us a relation in physical units
between quake energy, depth and length, with a constant of
proportionality $2.3\times 10^{27}(\sigma_\textrm{max}/0.001)$, which
must therefore have dimensions of $[B]^2$. Given the expression for magnetic energy release
\eqref{E_quake}, we choose to define a characteristic field strength
$B_\textrm{break}$ for breaking a cubic region of crust by equating
the constant of proportionality from \eqref{E-dl_dim} with
$8\pi B_\textrm{break}^2$. From equation \eqref{E-dl_dim} this then
gives\footnote{Our argument uses an impure form of dimensional
analysis, as we have included the factor of $8\pi$ from the magnetic
energy expression and the $1/20\pi$ factor from the fracture
depth-to-length ratio, since both factors are greater than an order
of magnitude in themselves. Readers uncomfortable with the inclusion
of these extra factors can remove them from the final result for
$B_\textrm{break}$ by multiplying by $\sqrt{8\pi/20\pi}$,
resulting in a prefactor of $3.8\times 10^{14}$ instead of the
value of $2.4\times 10^{14}$ in equation \eqref{B_char}.}
\begin{equation} \label{B_char}
B_\textrm{break}=2.4\times 10^{14}\brac{\frac{\sigma_\textrm{max}}{0.001}}^{1/2}\textrm{ G}.
\end{equation}
We interpret this result to mean that, although the quake energy-depth
relation does not involve the field strength itself, there is
nonetheless a characteristic (\emph{local}) strength of field related to
crust-breaking.
\section{Discussion}
\label{discussion}
Neutron stars display a variety of abrupt energetic phenomena -- most
spectacularly the giant flares of magnetars, but also smaller
bursts, and glitches in their rotation rate. These phenomena all point
to some sudden release of stress that has built up gradually -- and
the star's elastic crust is a natural candidate for a region that can
become gradually stressed then fail suddenly. It is, therefore, worth
concluding with a discussion of the possible role of
magnetically-induced crustquakes in flares, bursts and glitches.
We turn first to a class of phenomena for which crustquakes have traditionally
\emph{not} been invoked: the giant flares of magnetars. The three
events observed to date have all involved energy outputs in excess of
$10^{44}$ erg \citep{fenimore,feroci,palmer}, an amount thought to be too great to have come from
crustal energy release alone \citep{thom_dunc95}; this worry, in part, has motivated a number of studies
exploring the alternative possibility that spontaneous reconnection in the magnetosphere is
responsible for magnetar flares, in analogy with dynamics in the solar corona (see, e.g.,
\citet{lyutikov03}). One key result of our paper is that a crust stressed by magnetic-field
rearrangement \emph{can}, in fact, comfortably store the required amount of energy to power
a giant flare.
The most energetic observed giant flare to date was the 2004
event of SGR 1806-20; its estimated energy output over the flare was
an enormous $2\times 10^{46}$ erg \citep{palmer}. This value is not
very precise -- in particular, the probable anisotropic nature of the
flare would make it an overestimate -- but let us nonetheless assume
that this amount of energy was released from a crustquake. From
equation \eqref{E-d_deep}, we then have an estimate
that the minimum breaking strain of the crust must be around $0.065$
(corresponding to the case of a fracture extending to the base of the
crust). This value is comfortably below the
recent result, obtained from molecular dynamics simulations, that a NS
crust has a breaking strain of $0.12$
\citep{horo_kad}. These simulations also show that
the crust fails in a large-scale collective fashion -- this could
conceivably fit the observed behaviour of giant flares, whose
luminosity peaks rapidly then decays exponentially \citep{palmer}.
We can also use
our results to put an upper limit on the expected maximum size of a
giant flare powered by crustal energy release alone. Taking a breaking
strain of $0.12$ and assuming a fracture extending to the base of the
crust, equation \eqref{E-d_deep} gives a maximum total energy release of
$4\times 10^{46}$ erg. If any future giant flare appears to be more
energetic than this (using the isotropic-emission assumption),
then either the energy release is not crustal in origin, or it is
highly anisotropic -- leaving current estimates for flare energies
seriously in error.
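The numbers in this paragraph follow directly from inverting equation \eqref{E-d_deep} at $d=R_\textrm{c}$; the following sketch (our illustration) reproduces them.

```python
import math

def sigma_min(E_erg, d_over_Rc=1.0):
    """Breaking strain needed for a quake of energy E_erg, inverting
    the deep-fracture fit E/1e45 = 0.31 (sigma/0.001) exp(6(d/Rc - 1))."""
    return 0.001 * (E_erg / 1e45) / (0.31 * math.exp(6.0 * (d_over_Rc - 1.0)))

def E_max(sigma_max=0.12):
    """Maximum crustal release: fracture to the base of the crust (d = R_c)."""
    return 0.31e45 * (sigma_max / 0.001)

print(f"{sigma_min(2e46):.3f}")  # 0.065: minimum strain for the 2004 SGR 1806-20 flare
print(f"{E_max():.2g} erg")      # ~3.7e46 erg, quoted above as ~4e46 erg
```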
In addition to the rare giant flares, magnetars also suffer far more
common short-duration bursts with energies up to $\sim\! 10^{41}$
erg and intermediate events with energies around $10^{43}$ erg. If these bursts are also a manifestation of crustquakes, they
must involve the yielding of much shallower regions. Unlike the
highly rigid inner regions of the crust, the outermost part of the
crust can only support small stresses, and could feasibly fail at
lower strains through some gradual process (like plastic flow or a
succession of small fractures) instead of one large collective failure; this would account for the groups of small bursts seen
from some sources \citep{maz99,mer08}. Assuming short bursts are indeed powered
by the release of crustal energy, equation \eqref{E-d_shallow}
suggests that a $10^{41}$-erg event would be associated with the magnetar's
crust yielding to a depth of roughly $90$ m (for a breaking strain of $0.001$), or
to a depth of $20$ m (if the breaking strain is $0.1$). Interestingly,
the burst afterglow of Swift J$1822.3-1606$ has been shown to be well
modelled by a $3\times 10^{42}$-erg shallow-depth heat deposition
\citep{scholz} -- which could have resulted from a magnetically-induced
crustquake; \red{see also \citet{rea2013} for similar outburst
modelling for SGR $0418+5729$ and \citet{camero} for SGR $0501+4516$.} A period of
burst activity might indicate the gradual failure of a somewhat deeper
region; our energy-depth formulae should still be valid for this case,
but with the crustquake energy release being the \emph{total} energy
output over the period of bursting.
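The quoted depths can be recovered by inverting equation \eqref{E-d_shallow}; the sketch below (our illustration, taking $R_\textrm{c}=1.2$ km as in our models) reproduces them.

```python
def burst_depth_m(E_erg, sigma_max, R_c_cm=1.2e5):
    """Yield depth implied by a burst of energy E_erg, inverting the
    shallow-fracture fit E/1e45 = 0.25 (sigma/0.001) (d/Rc)^3."""
    d_over_Rc = ((E_erg / 1e45) / (0.25 * (sigma_max / 0.001))) ** (1.0 / 3.0)
    return d_over_Rc * R_c_cm / 100.0  # cm -> m

print(f"{burst_depth_m(1e41, 0.001):.0f} m")  # ~88 m (quoted as roughly 90 m)
print(f"{burst_depth_m(1e41, 0.1):.0f} m")    # ~19 m (quoted as 20 m)
```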
The final class of abrupt phenomena we wish to mention are
glitches. Unlike flares and bursts, these spin-up events cannot be due
to magnetically-induced crustquakes, since the resulting change in the
stellar moment of inertia could only ever be
minute: it scales with the ratio of magnetic to fluid
pressure. Instead, we expect the usual glitch scenario to apply even for
highly-magnetised NSs: the star's superfluid component cannot
spin down regularly with the crust and so develops a difference in
angular velocity; beyond some critical value, however, the superfluid is
forced to re-equilibrate with the crust by transferring angular
momentum, which is then seen as a spin-up of the crust
\citep{and_itoh}. Nonetheless, it may not be safe to assume that the
magnetic field can be neglected in the treatment of glitch modelling.
As discussed in the introduction,
radiative changes associated with glitches have been observed in AXPs,
and moreover in at least three typically rotationally-powered pulsars
with high magnetic fields \citep{ant_1119}. These observations may point to magnetically-induced
crustquake activity occurring simultaneously -- either
as a trigger or a result of the glitch.
Finally, we have identified a characteristic field strength \eqref{B_char} associated with
crust-breaking, corresponding to the constant of proportionality in
the quake energy-depth relation. It suggests that for a crustquake to
occur, the field strength must reach approximately $10^{14}-10^{15}$ G
locally (depending on the crustal breaking
strain); this \red{is in agreement with the findings of
\citet{pons_perna}, who considered a different scenario for
the build-up of magnetically-induced stresses.}
Superficially, it appears as if this characteristic field strength
might only be attained in magnetars -- but in fact, the observed field
strengths of NSs are just inferences about the dipolar field component at the
polar cap. It is quite likely that NSs with inferred dipole fields of
the order of $10^{13}$ G, or perhaps lower still, will harbour some
region in their crust where the local field exceeds $10^{14}$
G. Within our crustquake model, therefore, it would be quite
natural to find crossover sources displaying both `radio-pulsar' and
`magnetar' characteristics -- and we anticipate the distinctions
between supposedly different classes of NS to become further eroded over time.
\section*{Acknowledgements}
SKL, DA and ALW acknowledge support from NWO Vidi and Aspasia grants (PI Watts);
NA acknowledges support from STFC in the UK. We thank
Christian Kr\"uger for discussions on modelling the
neutron-star crust, Chuck Horowitz for helpful correspondence about
the breaking strain, and Jose Pons and the anonymous referee for their
constructive suggestions.
\section{Introduction}
The Toda systems \cite{LezSav92,RazSav97,MikOlsPer81} associated
with loop groups possess features attractive from both mathematical
and physical perspectives. The fact that they have the so-called
soliton solutions is certainly among such interesting properties.
Here, by an $N$-soliton solution we simply mean a solution depending
on $N$ linear combinations of independent variables. In particular,
the investigation of soliton solutions implies developing methods for
solving nonlinear partial differential equations and, besides, for
modeling various nonlinear phenomena in particle physics and field
theory; see, for example, the paper \cite{BueFerRaz02} and references
therein.
There are various approaches to constructing soliton solutions for
loop Toda systems. The best known and elaborated among them are,
probably, the rational dressing formalism \cite{ZakSha79,Mik81},
that is a version of the inverse scattering method, and the
Hirota's approach \cite{Hir04,Hol92,ConFerGomZim93,MacMcG92,
AraConFerGomZim93,ZhuCal93} based on an appropriate change of
the field variables. Also certain combinations of these two
methods prove to be quite efficient for finding
soliton solutions of Toda equations \cite{BueFerRaz02,AssFer07}.
Besides, it is worth mentioning generalizations of the
Leznov--Saveliev \cite{OliSavUnd93,OliTurUnd92,OliTurUnd93} and
the B\"{a}cklund--Darboux
\cite{ForGib80, MatSal91, LiaOliTur93, RogSch02, Zhou06}
methods that have been applied to Toda systems.
In a recent paper \cite{NirRaz08a}, we have carried out a comparative
analysis of the Hirota and rational dressing methods as applied
to abelian Toda systems associated with the untwisted loop groups of
complex general linear groups and, in particular, explicitly
reproduced the corresponding multi-soliton solutions. Further, in
the subsequent paper \cite{NirRaz08b}, we have constructed soliton
solutions for abelian twisted loop Toda systems. Now we are
going to investigate the non-abelian loop Toda equations, which are a
direct generalization of the systems considered in \cite{NirRaz08a}.
Here we work within the rational dressing formalism based upon an
appropriate block-matrix representation. The latter is naturally
suggested by the $\mathbb Z$-gradation under consideration and turns
out to be most suitable for the non-abelian Toda systems.
Since the abelian soliton solutions admit a physical interpretation
as interacting extended particle-like objects, their non-abelian
generalizations should be of particular interest as such objects
endowed with additional internal structure. And since
this physical interpretation promises a good basis for a consistent
modeling of various nonlinear phenomena, the mathematical part,
consisting in developing the corresponding integration methods
and constructing explicit soliton solutions, becomes crucial.
Note finally that since the pioneering paper \cite{Mik81}, where the
simplest non-abelian loop Toda equations were presented, certain
efforts have been made to solve them by means of various methods.
Thus, in \cite{EtiGelRet97a} the notion of quasi-determinants
was exploited for the purpose, see also \cite{XLiNim07}; in
\cite{ParShi95} the simplest matrix generalization of the sine-Gordon
equation was treated by the rational dressing method.\footnote{However,
it is not quite clear how the soliton solutions of \cite{ParShi95}
were obtained without averaging over the action of the corresponding
automorphism group, which is one of the principal ingredients of
the rational dressing procedure.} An approach based on the dressing
(gauge) transformation method was developed in a series of papers
\cite{GomGueSotZim01, CabGomGueSotZim01, GomGueSotZim00, GomZimSot99}
for the simplest case of non-abelian affine Toda systems where a
specific gradation leads to a minimal extension of the abelian
counterpart, and then the vertex operator method was also used
there in order to construct some soliton solutions.
\section{Formulation of loop Toda equations}
The formulation of Toda systems, in a way most appropriate to
our purposes, is based on their simple differential-geometry
and group-algebraic background, and here we generally follow
the monographs \cite{LezSav92, RazSav97} and the papers
\cite{NirRaz06, NirRaz07a, NirRaz07b}.
Let the trivial fiber bundle $\mathbb R^2 \times \mathcal G \rightarrow \mathbb R^2$,
with the structure Lie group $\mathcal G$ and its Lie algebra $\mathfrak G$, be
given. We identify a connection in this fiber bundle with a
$\mathfrak G$-valued $1$-form $\mathcal O$ on $\mathbb R^2$ and decompose it
over basis $1$-forms,
\[
\mathcal O = \mathcal O_- {\mathrm{d}} z^- + \mathcal O_+ {\mathrm{d}} z^+,
\]
where $z^-$, $z^+$ are the standard coordinates on the base manifold
$\mathbb R^2$, and the components $\mathcal O_-$, $\mathcal O_+$ are $\mathfrak G$-valued
functions on it. We assume that the connection $\mathcal O$ is flat,
which means that its curvature is zero. Then, in terms of the components,
we have
\begin{equation}
\partial_- \mathcal O_+ - \partial_+ \mathcal O_- + [\mathcal O_-, \mathcal O_+] = 0,
\label{e:2.1}
\end{equation}
where we use the notation $\partial_- = \partial/\partial z^-$
and $\partial_+ = \partial/\partial z^+$. One can consider this
relation as a system of partial differential equations. The
general solution of this system is well known,
\[
\mathcal O_- = \Phi^{-1} \partial_- \Phi, \qquad
\mathcal O_+ = \Phi^{-1} \partial_+ \Phi,
\]
where $\Phi$ is an arbitrary mapping of $\mathbb R^2$ to $\mathcal G$.
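To see this explicitly, note that, by a standard computation using
$\partial_\pm \Phi^{-1} = - \Phi^{-1} (\partial_\pm \Phi) \Phi^{-1}$,
we have
\[
\partial_- \mathcal O_+ = - \mathcal O_- \mathcal O_+
+ \Phi^{-1} \partial_- \partial_+ \Phi, \qquad
\partial_+ \mathcal O_- = - \mathcal O_+ \mathcal O_-
+ \Phi^{-1} \partial_+ \partial_- \Phi,
\]
so that $\partial_- \mathcal O_+ - \partial_+ \mathcal O_-
= - [\mathcal O_-, \mathcal O_+]$, and the zero-curvature condition
(\ref{e:2.1}) is indeed satisfied for any such pure-gauge connection.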
Actually, the zero-curvature condition as a system of
partial differential equations is trivial due to the
gauge invariance. Indeed, if a connection $\mathcal O$ satisfies
(\ref{e:2.1}), then for an arbitrary mapping $\Psi$ of
$\mathbb R^2$ to $\mathcal G$ the gauge-transformed connection
\begin{equation}
\mathcal O^\Psi = \Psi^{-1} \mathcal O \Psi + \Psi^{-1} \mathrm{d} \Psi,
\label{e:2.2}
\end{equation}
satisfies (\ref{e:2.1}) as well.
To obtain nontrivial integrable systems out of the zero-curvature
condition we impose on the connection $\mathcal O$ some restrictions
which destroy the gauge invariance. To come specifically to
Toda systems, we should use certain grading and gauge-fixing
conditions.
Suppose that $\mathfrak G$ is endowed with a $\mathbb Z$-gradation,
\[
\mathfrak G = \bigoplus_{k \in \mathbb Z} \mathfrak G_k,
\qquad
[\mathfrak G_k , \mathfrak G_l] \subset \mathfrak G_{k+l},
\]
and let $L$ be a positive integer such that the grading subspaces
$\mathfrak G_{k}$ and $\mathfrak G_{-k}$ are trivial for $0 < k < L$. The
grading condition states that the components of $\mathcal O$ have the form
\begin{equation}
\mathcal O_- = \mathcal O_{-0} + \mathcal O_{-L}, \qquad \mathcal O_+ = \mathcal O_{+0} +
\mathcal O_{+L}, \label{e:2.3}
\end{equation}
where $\mathcal O_{-0}$ and $\mathcal O_{+0}$ take values in $\mathfrak G_0$, while
$\mathcal O_{-L}$ and $\mathcal O_{+L}$ take values in $\mathfrak G_{-L}$ and
$\mathfrak G_{+L}$ respectively. There is a residual gauge invariance.
Indeed, the gauge transformation (\ref{e:2.2}) with $\Psi$ taking
values in the connected Lie subgroup $\mathcal G_0$ corresponding to the
subalgebra $\mathfrak G_0$ does not violate the grading condition
(\ref{e:2.3}). Therefore, we additionally impose a
gauge-fixing condition of the form
\[
\mathcal O_{+0} = 0.
\]
Now the components of the connection $\mathcal O$ can be represented as
\begin{equation}
\mathcal O_- = \Xi^{-1} \partial_- \Xi + \mathcal F_-, \qquad \mathcal O_+ = \Xi^{-1}
\mathcal F_+ \Xi,
\label{e:2.4}
\end{equation}
where $\Xi$ is a mapping of $\mathbb R^2$ to $\mathcal G_0$, while $\mathcal F_-$ and
$\mathcal F_+$ are some mappings of $\mathbb R^2$ to $\mathfrak G_{-L}$ and
$\mathfrak G_{+L}$ respectively. One can easily see that the zero-curvature
condition is equivalent to the equality\footnote{We assume for
simplicity that $\mathcal G$ is a subgroup of the group formed by
invertible elements of some unital associative algebra $\mathcal A$.
In this case $\mathfrak G$ can be considered as a subalgebra of the
Lie algebra associated with $\mathcal A$. Our consideration can be
generalized to the case of an arbitrary Lie group~$\mathcal G$.}
\begin{equation}
\partial_+ (\Xi^{-1} \partial_- \Xi) = [\mathcal F_-, \Xi^{-1} \mathcal F_+ \Xi]
\label{e:2.5}
\end{equation}
and the relations
\begin{equation}
\partial_+ \mathcal F_- = 0, \qquad \partial_- \mathcal F_+ = 0.
\label{e:2.6}
\end{equation}
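Let us sketch this equivalence. Substituting (\ref{e:2.4}) into
(\ref{e:2.1}), the terms containing $[\Xi^{-1} \partial_- \Xi,
\Xi^{-1} \mathcal F_+ \Xi]$ cancel, and one is left with
\[
\Xi^{-1} (\partial_- \mathcal F_+) \Xi
- \partial_+ (\Xi^{-1} \partial_- \Xi) - \partial_+ \mathcal F_-
+ [\mathcal F_-, \Xi^{-1} \mathcal F_+ \Xi] = 0.
\]
The first term here belongs to $\mathfrak G_{+L}$, the third one to
$\mathfrak G_{-L}$, and the remaining two to $\mathfrak G_0$; hence, each
graded component must vanish separately, which gives (\ref{e:2.5})
and (\ref{e:2.6}).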
We suppose that the mappings $\mathcal F_-$ and $\mathcal F_+$ are fixed and
consider (\ref{e:2.5}) as an equation for $\Xi$ called the Toda
equation. When the group $\mathcal G_0$ is abelian the corresponding Toda
equations are called abelian. In other cases we have non-abelian
Toda systems.
Thus, a Toda equation associated with a Lie group $\mathcal G$ is specified
by a choice of a $\mathbb Z$-gradation of the Lie algebra $\mathfrak G$ of
$\mathcal G$ and mappings $\mathcal F_-$, $\mathcal F_+$ satisfying the conditions
(\ref{e:2.6}). To classify the Toda equations associated with a
Lie group $\mathcal G$ one should classify the $\mathbb Z$-gradations of
the Lie algebra $\mathfrak G$ of $\mathcal G$.
We consider the case where $\mathcal G$ is a loop group of a
finite-dimensional Lie group, $\mathcal L_{a,M}(G)$, where $a$ is an
automorphism of $G$ of order $M$. The corresponding Lie algebra
$\mathfrak G$ is thus the loop Lie algebra $\mathcal L_{A,M}(\mathfrak g)$, where
$\mathfrak g$ is the Lie algebra of the Lie group $G$, with $A$ being the
respective automorphism of $\mathfrak g$ of order $M$. This is a subalgebra
of the loop Lie algebra $\mathcal L(\mathfrak g)$ formed by elements $\xi$
satisfying the equality
\[
\xi(\epsilon_M s) = A(\xi(s)),
\]
where $\epsilon_M = \mathrm e^{2 \pi \mathrm i/M}$ is the principal $M$th root
of unity and $s \in S^1$. Similarly, the loop group $\mathcal L_{a,M}(G)$
is defined as the subgroup of the loop group $\mathcal L(G)$ formed by
the elements $\chi$ satisfying the equality
\[
\chi(\epsilon_M s) = a(\chi(s)).
\]
For a consistent description of these objects, given in a way
most suitable for the treatment of loop Toda equations, we refer
to \cite{NirRaz06,NirRaz07a,NirRaz07b}. To have a wider list
of publications on such systems see also
\cite{FerMirGui95,FerMirGui97,FerGalHolMir97,Mir99}
and references therein.
To specify the Toda equations associated with the loop group
$\mathcal L_{a,M}(G)$ we first note that the group $\mathcal L_{a,M}(G)$
and its Lie algebra $\mathcal L_{A,M}(\mathfrak g)$ are infinite-dimensional
manifolds. However, using the so-called exponential law
\cite{KriMic91,KriMic97}, that can generally
be expressed by the canonical identification
\[
C^\infty(\mathcal M,C^\infty(\mathcal N,\mathcal P)) =
C^\infty(\mathcal M \times \mathcal N,\mathcal P),
\]
where $\mathcal M$, $\mathcal N$, $\mathcal P$ are finite-dimensional manifolds
and $\mathcal N$ is, in addition, compact, we reformulate the zero-curvature
representation of the Toda equations associated with $\mathcal L_{a,M}(G)$
in terms of finite-dimensional manifolds.
In the case under consideration the connection components $\mathcal O_-$
and $\mathcal O_+$ entering the equality (\ref{e:2.1}) are mappings of
$\mathbb R^2$ to the loop Lie algebra $\mathcal L_{A,M}(\mathfrak g)$. We denote
the corresponding mappings of $\mathbb R^2 \times S^1$ to $\mathfrak g$ by
$\omega_-$ and $\omega_+$, and call them also the connection
components. The mapping $\Phi$ generating the connection is
a mapping of $\mathbb R^2$ to $\mathcal L_{a,M}(G)$. Denoting the
corresponding mapping of $\mathbb R^2 \times S^1$ by $\varphi$
we write
\begin{equation}
\varphi^{-1} \partial_- \varphi = \omega_-, \qquad \varphi^{-1}
\partial_+ \varphi = \omega_+. \label{e:2.7}
\end{equation}
Since the mapping $\varphi$ uniquely determines the mapping
$\Phi$, we say that the mapping $\varphi$ also generates the
connection under consideration.
We follow the classification of loop Toda systems performed in
\cite{NirRaz06,NirRaz07a,NirRaz07b}. Important for our purposes
here is that the initial Toda equation associated with
$\mathcal L_{a,M}(G)$ is equivalent to a Toda equation associated
with $\mathcal L_{a',M'}(G)$ arising when $\mathcal L_{A',M'}(\mathfrak g)$
is supplied with the standard $\mathbb Z$-gradation.
The grading subspaces for the standard $\mathbb Z$-gradation
of a loop Lie algebra $\mathcal L_{A,M}(\mathfrak g)$ are
\[
\mathcal L_{A, M}(\mathfrak g)_{k} = \{ \xi \in \mathcal L_{A,M}(\mathfrak g)
\mid \xi = \lambda^k x, \ A(x) = \epsilon_M^{k} x \},
\]
where by $\lambda$ we denote the restriction of the standard
coordinate on $\mathbb C$ to $S^1$. It is clear that every automorphism
$A$ of the Lie algebra $\mathfrak g$ satisfying the relation
$A^M = \mathrm{id}_\mathfrak g$ induces a $\mathbb Z_M$-gradation
of $\mathfrak g$ with the grading subspaces
\[
\mathfrak g_{[k]_M} = \{x \in \mathfrak g \mid A(x) = \epsilon_M^{k} x\},
\qquad k = 0, \ldots, M-1,
\]
where by $[k]_M$ we denote the element of the ring
$\mathbb Z_M$ corresponding to the integer $k$. Vice versa,
any $\mathbb Z_M$-gradation of $\mathfrak g$ obviously defines
an automorphism $A$ of $\mathfrak g$ satisfying the relation
$A^M = \mathrm{id}_\mathfrak g$. In terms of the corresponding
$\mathbb Z_M$-gradation the grading subspaces for the standard
$\mathbb Z$-gradation of a loop Lie algebra $\mathcal L_{A,M}(\mathfrak g)$
are
\[
\mathcal L_{A, M}(\mathfrak g)_{k} = \{ \xi \in \mathcal L_{A,M}(\mathfrak g)
\mid \xi = \lambda^k x, \ x \in \mathfrak g_{[k]_M} \}.
\]
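As a simple numerical illustration, not part of the original construction,
one can check this correspondence for $\mathfrak{gl}_2(\mathbb C)$ with
$M = 2$: the projections
$x_{[k]} = M^{-1} \sum_{j=0}^{M-1} \epsilon_M^{-kj} A^j(x)$
recover the graded parts of any matrix, and the grading property holds
for commutators. The matrix $h$ and the random test matrices below are
arbitrary test data.

```python
import numpy as np

M = 2
eps = np.exp(2j * np.pi / M)
h = np.diag([1.0 + 0j, eps])                 # inner automorphism A(x) = h x h^{-1}
hinv = np.linalg.inv(h)

def A(x):
    return h @ x @ hinv

def graded_part(x, k):
    """Projection onto g_{[k]_M}: (1/M) * sum_j eps^{-k j} A^j(x)."""
    out, y = np.zeros_like(x), x
    for j in range(M):
        out = out + eps ** (-k * j) * y
        y = A(y)
    return out / M

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
y = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

parts = [graded_part(x, k) for k in range(M)]
assert np.allclose(sum(parts), x)                         # x decomposes into graded parts
for k in range(M):
    assert np.allclose(A(parts[k]), eps ** k * parts[k])  # parts are A-eigenvectors

# grading property: [g_{[k]}, g_{[l]}] lies in g_{[k+l]_M}
for k in range(M):
    for l in range(M):
        xk, yl = graded_part(x, k), graded_part(y, l)
        comm = xk @ yl - yl @ xk
        assert np.allclose(graded_part(comm, (k + l) % M), comm)
print("Z_M-gradation checks passed")
```

For this choice of $h$, the subspace $\mathfrak g_{[0]_2}$ consists of the
diagonal matrices and $\mathfrak g_{[1]_2}$ of the off-diagonal ones.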
It is evident that for the standard $\mathbb Z$-gradation the subalgebra
$\mathcal L_{A,M}(\mathfrak g)_0$ is isomorphic to the subalgebra
$\mathfrak g_{[0]_M}$ of $\mathfrak g$, and the Lie group $\mathcal L_{a,M}(G)_0$
is isomorphic to the connected Lie subgroup $G_0$ of $G$ corresponding
to the Lie algebra $\mathfrak g_{[0]_M}$. Hence, the relations (\ref{e:2.4})
are equivalent to the relations
\begin{equation}
\omega_- = \gamma^{-1} \partial_- \gamma + \lambda^{-L} c_-,
\qquad
\omega_+ = \lambda^L \gamma^{-1} c_+ \gamma, \label{e:2.8}
\end{equation}
where $\gamma$, taken as a smooth mapping of $\mathbb R^2 \times S^1$
to $G$ corresponding to the mapping $\Xi$ in accordance with the
exponential law, is actually a mapping of $\mathbb R^2$ to $G_0$, while
the mappings $c_-$ and $c_+$, corresponding to $\mathcal F_-$ and
$\mathcal F_+$, are mappings of $\mathbb R^2$ to $\mathfrak g_{-[L]_M}$
and $\mathfrak g_{+[L]_M}$ respectively. The Toda equation can
subsequently be written as
\begin{equation}
\partial_+(\gamma^{-1} \partial_- \gamma) = [c_-, \gamma^{-1} c_+
\gamma]. \label{e:2.9}
\end{equation}
The conditions (\ref{e:2.6}) imply that
\begin{equation}
\partial_+ c_- = 0, \qquad \partial_- c_+ = 0. \label{e:2.10}
\end{equation}
We call an equation of the form (\ref{e:2.9}) also a Toda equation.
Let us consider the transformations
\begin{gather}
\gamma' = \eta_+^{-1} \: \gamma \: \eta_-, \label{e:2.11} \\
c'_- = \eta_-^{-1} c_- \eta_-, \qquad c'_+
= \eta_+^{-1} c_+ \eta^{}_+, \label{e:2.12}
\end{gather}
where $\eta_-$ and $\eta_+$ are some mappings of $\mathbb R^2 \times S^1$
to $G_0$ that satisfy the conditions
\[
\partial_+ \eta_- = 0, \qquad \partial_- \eta_+ = 0.
\]
If a mapping $\gamma$ satisfies the Toda equation (\ref{e:2.9}), then
the mapping $\gamma'$ satisfies the Toda equation (\ref{e:2.9}) where
the mappings $c_-$, $c_+$ are replaced by the mappings $c'_-$ and
$c'_+$. If the mappings $\eta_-$ and $\eta_+$ are such that
\[
\eta_-^{-1} c_- \eta_- = c_-, \qquad \eta_+^{-1} c_+ \eta^{}_+ = c_+
\]
then the transformation (\ref{e:2.11}) is a symmetry transformation
for the Toda equation under consideration.
\section{Untwisted loop Toda equations} \label{s:3}
The complete classification of Toda equations associated with
twisted loop groups of complex classical Lie groups, where
the corresponding twisted loop Lie algebras are endowed with
integrable $\mathbb Z$-gradations with finite-dimensional grading
subspaces, is given in the series of papers \cite{NirRaz06,
NirRaz07a, NirRaz07b}. We will use these results related to
the particular case of untwisted loop groups of the complex
general linear groups. The $\mathbb Z$-gradations of the corresponding
loop Lie algebras are thus generated by an inner automorphism
of the initial finite-dimensional complex Lie algebra
$\mathfrak{gl}_n(\mathbb C)$,
\[
A(x) = h \: x \: h^{-1},
\]
where $x$ is an arbitrary element of $\mathfrak{gl}_n(\mathbb C)$, and $h$ is
a diagonal matrix of the form
\[
h = \left( \begin{array}{cccc}
\epsilon_M^{m_1} I_{n_1} & & & \\
& \epsilon_M^{m_2} I_{n_2} & & \\
& & \ddots & \\
& & & \epsilon_M^{m_p} I_{n_p}
\end{array} \right),
\]
where $I_{n_\alpha}$ denotes the $n_\alpha \times n_\alpha$ unit matrix
and $M \ge m_1 > m_2 > \ldots > m_p > 0$. Here $n_\alpha$,
$\alpha = 1,\ldots,p$, are positive integers, such that
$\sum_{\alpha=1}^p n_\alpha = n$. According to the block-matrix structure
of $h$, it is convenient to represent the element $x$ as a $p \times p$
block matrix $(x_{\alpha\beta})$, where $x_{\alpha \beta}$ is an
$n_\alpha \times n_\beta$ matrix,
\begin{equation}
x = \left( \begin{array}{cccc}
x_{11} & x_{12} &\ldots& x_{1p} \\
x_{21} & x_{22} &\ldots& x_{2p} \\
\vdots & \vdots & \ddots & \vdots \\
x_{p1} & x_{p2} &\ldots& x_{pp}
\end{array} \right).
\label{e:3.1}
\end{equation}
Here the inner automorphism $a$ acts on an arbitrary element $g$ of
$\mathrm{GL}_n(\mathbb C)$ as $a(g) = h g h^{-1}$, with the same diagonal matrix
$h$ given above.
The mapping $\gamma$ has the block-diagonal form
\[
\gamma = \left( \begin{array}{cccc}
\Gamma_1 & & & \\
& \Gamma_2 & & \\
& & \ddots & \\
& & & \Gamma_p
\end{array} \right).
\]
For each $\alpha = 1, \ldots, p$ the mapping $\Gamma_\alpha$ is a
mapping of $\mathbb R^2$ to the Lie group $\mathrm{GL}_{n_\alpha}(\mathbb C)$.
The mapping $c_+$ has the following block-matrix structure:
\begin{equation}
c_+ = \left( \begin{array}{ccccc}
0 & C_{+1} & & & \\
& 0 & \ddots & & \\
& & \ddots & \ddots & \\
& & & 0 & C_{+(p-1)} \\
C_{+0} & & & & 0
\end{array} \right),
\label{e:3.2}
\end{equation}
where for each $\alpha = 1, \ldots, p-1$ the mapping $C_{+\alpha}$ is
a mapping of $\mathbb R^2$ to the space of $n_\alpha \times n_{\alpha+1}$
complex matrices, and $C_{+0}$ is a mapping of $\mathbb R^2$ to the space
of $n_p \times n_1$ complex matrices. The mapping $c_-$ has a similar
block-matrix structure:
\begin{equation}
c_- = \left( \begin{array}{ccccc}
0 & & & & C_{-0} \\
C_{-1} & 0 & & & \\
& \ddots & \ddots & & \\
& & \ddots & 0 & \\
& & & C_{-(p-1)} & 0
\end{array} \right),
\label{e:3.3}
\end{equation}
where for each $\alpha = 1, \ldots, p-1$ the mapping $C_{-\alpha}$ is
a mapping of $\mathbb R^2$ to the space of $n_{\alpha+1} \times n_\alpha$
complex matrices, and $C_{-0}$ is a mapping of $\mathbb R^2$ to the space
of $n_1 \times n_p$ complex matrices. The conditions (\ref{e:2.10})
imply
\[
\partial_+ C_{-\alpha} = 0, \qquad \partial_- C_{+\alpha} = 0, \qquad
\alpha = 0, 1, \ldots, p-1.
\]
It is not difficult to show that the Toda equation (\ref{e:2.9}) is
equivalent to the following system of equations for the mappings
$\Gamma_\alpha$:
\begin{align}
\partial_+ \left( \Gamma_1^{-1} \: \partial_- \Gamma_1^{} \right)
&= - \Gamma_1^{-1} C_{+1}^{} \: \Gamma_2^{} \: C_{-1}^{}
+ C_{-0}^{} \Gamma_p^{-1} C_{+0}^{} \Gamma_1^{},
\notag \\*
\partial_+ \left( \Gamma_2^{-1} \: \partial_- \Gamma_2^{} \right)
&= - \Gamma_2^{-1} C_{+2}^{} \: \Gamma_3^{} \: C_{-2}^{}
+ C_{-1}^{} \Gamma_1^{-1} C_{+1}^{} \Gamma_2^{},
\notag \\*
& \quad \vdots
\label{e:3.4} \\*
\partial_+ \left(\Gamma_{p-1}^{-1} \: \partial_-
\Gamma_{p-1}^{}\right)
&= - \Gamma_{p-1}^{-1} C_{+(p-1)}^{} \: \Gamma_p^{} \: C_{-(p-1)}^{}
+ C_{-(p-2)}^{} \Gamma_{p-2}^{-1} C_{+(p-2)}^{} \Gamma_{p-1}^{},
\notag \\*
\partial_+ \left( \Gamma_p^{-1} \: \partial_- \Gamma_p^{} \right)
&= - \Gamma_p^{-1} C_{+0}^{} \: \Gamma_1^{} \: C_{-0}^{}
+ C_{-(p-1)}^{} \Gamma_{p-1}^{-1} C_{+(p-1)}^{} \Gamma_p^{}.
\notag
\end{align}
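The blockwise reduction can be verified numerically as a sanity check;
the choice $p = 3$ with all $n_\alpha = 2$ and the random blocks below
are arbitrary test data, with the block layout following (\ref{e:3.2}),
(\ref{e:3.3}). The diagonal blocks of the right-hand side
$[c_-, \gamma^{-1} c_+ \gamma]$ of (\ref{e:2.9}) then reproduce the
right-hand sides of (\ref{e:3.4}).

```python
import numpy as np

rng = np.random.default_rng(1)
p, nb = 3, 2                          # p diagonal blocks, each nb x nb
n = p * nb

Gam = [rng.standard_normal((nb, nb)) + 3 * np.eye(nb) for _ in range(p)]  # invertible
Cp = [rng.standard_normal((nb, nb)) for _ in range(p)]   # Cp[0] plays the role of C_{+0}
Cm = [rng.standard_normal((nb, nb)) for _ in range(p)]   # Cm[0] plays the role of C_{-0}

def blk(mat, i, j):
    return mat[i * nb:(i + 1) * nb, j * nb:(j + 1) * nb]

gamma = np.zeros((n, n))
c_plus = np.zeros((n, n))
c_minus = np.zeros((n, n))
for a in range(p):                    # 0-based: a = alpha - 1
    gamma[a * nb:(a + 1) * nb, a * nb:(a + 1) * nb] = Gam[a]
for a in range(1, p):
    c_plus[(a - 1) * nb:a * nb, a * nb:(a + 1) * nb] = Cp[a]   # block (alpha, alpha+1)
    c_minus[a * nb:(a + 1) * nb, (a - 1) * nb:a * nb] = Cm[a]  # block (alpha+1, alpha)
c_plus[(p - 1) * nb:, :nb] = Cp[0]    # C_{+0} in block (p, 1)
c_minus[:nb, (p - 1) * nb:] = Cm[0]   # C_{-0} in block (1, p)

conj = np.linalg.inv(gamma) @ c_plus @ gamma
rhs = c_minus @ conj - conj @ c_minus          # [c_-, gamma^{-1} c_+ gamma]

Gaminv = [np.linalg.inv(G) for G in Gam]
for a in range(p):                    # compare with the right-hand sides of (3.4)
    expected = (Cm[a] @ Gaminv[(a - 1) % p] @ Cp[a] @ Gam[a]
                - Gaminv[a] @ Cp[(a + 1) % p] @ Gam[(a + 1) % p] @ Cm[(a + 1) % p])
    assert np.allclose(blk(rhs, a, a), expected)
print("diagonal blocks of [c_-, gamma^{-1} c_+ gamma] match (3.4)")
```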
As is shown in \cite{NirRaz07a, NirRaz07b}, if for some $\alpha$
we have $C_{-\alpha} = 0$ or $C_{+\alpha} = 0$, then the system of
equations (\ref{e:3.4}) is equivalent to a system of equations
associated with a respective finite-dimensional Lie group, or
to a set of two such systems. Hence, to deal actually with
Toda equations associated with a loop group, we assume that
all mappings $C_{-\alpha}$ and $C_{+\alpha}$ are nontrivial.
This is possible only if $m_\alpha = (p - \alpha + 1)L$
and $M = p L$. Moreover, it appears that in the case
under consideration we can assume, without any loss
of generality, that the positive integer $L$ is equal
to $1$.
The Toda equations (\ref{e:3.4}) can also be written as
\begin{equation}
\partial_+ (\Gamma^{-1}_\alpha \partial_- \Gamma^{}_\alpha)
+ \Gamma^{-1}_\alpha C_{+\alpha} \Gamma^{}_{\alpha+1} C_{-\alpha}
- C_{-(\alpha-1)} \Gamma^{-1}_{\alpha-1} C_{+(\alpha-1)}
\Gamma_{\alpha} = 0,
\label{e:3.5}
\end{equation}
with $\Gamma_\alpha$ subject to the periodicity condition
$\Gamma_{\alpha+p} = \Gamma_\alpha$. Under the transformations
(\ref{e:2.11}), (\ref{e:2.12}), the blocks entering
the Toda equations transform as follows:
\begin{equation}
\Gamma'_\alpha = \eta^{-1}_{+\alpha} \Gamma_\alpha \eta_{-\alpha},
\qquad
C'_{-\alpha} = \eta^{-1}_{-(\alpha+1)} C_{-\alpha} \eta_{-\alpha},
\qquad
C'_{+\alpha} = \eta^{-1}_{+\alpha} C_{+\alpha} \eta_{+(\alpha+1)},
\label{e:3.6}
\end{equation}
with the block-diagonal matrices $\eta_\pm$ defined by
$(\eta_\pm)_{\alpha\beta} = \eta_{\pm \alpha} \delta_{\alpha\beta}$.
Similarly to the abelian case \cite{NirRaz08a}, it can be shown
that the determinant of the mapping $\gamma$ can be represented
in a factorized form as
\[
\det \gamma = \prod_{\alpha = 1}^p \det \Gamma_\alpha
= \Gamma_+^{} \Gamma_-^{-1},
\]
where
\[
\partial_+ \Gamma_- = 0, \qquad \partial_- \Gamma_+ = 0.
\]
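Let us sketch how this factorization arises. Taking the matrix trace
of the Toda equation (\ref{e:2.9}), using the vanishing of the trace
of a commutator and the identity
\[
\mathop{\mathrm{tr}} (\gamma^{-1} \partial_- \gamma)
= \partial_- \ln \det \gamma,
\]
we obtain
\[
\partial_+ \partial_- \ln \det \gamma = 0,
\]
so that $\ln \det \gamma$ is the sum of a function of $z^-$ alone and
a function of $z^+$ alone, which is just the factorized form given
above.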
Then, setting
\[
\eta_{-\alpha} = \Gamma_-^{1/n} I_{n_\alpha}, \qquad
\eta_{+\alpha} = \Gamma_+^{1/n} I_{n_\alpha}
\]
in (\ref{e:3.6}), we can see that it is possible to make
the determinant of the transformed mapping $\gamma'$ be equal
to $1$,
\[
\det \gamma' = \prod_{\alpha = 1}^p \det \Gamma'_\alpha
= 1.
\]
Therefore, the reduction to the non-abelian Toda systems associated
with the loop groups of the special linear groups is possible, just
as well as it was in the abelian case \cite{NirRaz08a}.
\section{Rational dressing} \label{s:4}
We require that for any $m \in \mathbb R^2$ the matrices $c_-(m)$
and $c_+(m)$ commute, which is equivalent to the relations
\begin{equation}
C_{-(\alpha-1)} C_{+(\alpha-1)} - C_{+\alpha} C_{-\alpha} = 0.
\label{e:4.1}
\end{equation}
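This equivalence admits a quick numerical check; here $p = 2$,
$n_1 = n_2 = 2$ and the random blocks are arbitrary test data. For
matrices $c_\pm$ assembled as in (\ref{e:3.2}), (\ref{e:3.3}), the
commutator $[c_-, c_+]$ is block-diagonal with diagonal blocks
$C_{-(\alpha-1)} C_{+(\alpha-1)} - C_{+\alpha} C_{-\alpha}$ (indices
taken cyclically), so it vanishes exactly when (\ref{e:4.1}) holds.

```python
import numpy as np

rng = np.random.default_rng(2)
p, nb = 2, 2
n = p * nb

def assemble(Cp, Cm):
    """Build c_+ and c_- with the cyclic block structure (3.2), (3.3)."""
    c_plus, c_minus = np.zeros((n, n)), np.zeros((n, n))
    for a in range(1, p):
        c_plus[(a - 1) * nb:a * nb, a * nb:(a + 1) * nb] = Cp[a]
        c_minus[a * nb:(a + 1) * nb, (a - 1) * nb:a * nb] = Cm[a]
    c_plus[(p - 1) * nb:, :nb] = Cp[0]
    c_minus[:nb, (p - 1) * nb:] = Cm[0]
    return c_plus, c_minus

# generic blocks: the commutator has the diagonal blocks from (4.1)
Cp = [rng.standard_normal((nb, nb)) for _ in range(p)]
Cm = [rng.standard_normal((nb, nb)) for _ in range(p)]
c_plus, c_minus = assemble(Cp, Cm)
comm = c_minus @ c_plus - c_plus @ c_minus
for a in range(p):
    blk = comm[a * nb:(a + 1) * nb, a * nb:(a + 1) * nb]
    assert np.allclose(blk, Cm[a] @ Cp[a] - Cp[(a + 1) % p] @ Cm[(a + 1) % p])

# blocks satisfying (4.1), e.g. all C's equal to the unit matrix: c_+- commute
Cp1 = [np.eye(nb) for _ in range(p)]
Cm1 = [np.eye(nb) for _ in range(p)]
c_plus1, c_minus1 = assemble(Cp1, Cm1)
assert np.allclose(c_minus1 @ c_plus1, c_plus1 @ c_minus1)
print("commutation of c_- and c_+ is equivalent to (4.1)")
```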
Then it is obvious that
\begin{equation}
\gamma = I_n,
\label{e:4.2}
\end{equation}
where $I_n$ is the $n \times n$ unit matrix, is a solution to the
Toda equation (\ref{e:2.9}). Denote a mapping of $\mathbb R^2 \times S^1$
to $\mathrm{GL}_n(\mathbb C)$ which generates the corresponding connection by
$\varphi$. Using the equalities (\ref{e:2.7}) and (\ref{e:2.8})
and remembering that in our case $L = 1$, we write
\[
\varphi^{-1} \partial_- \varphi = \lambda^{-1} c_-, \qquad
\varphi^{-1} \partial_+ \varphi = \lambda \: c_+,
\]
where the matrices $c_+$ and $c_-$ are defined by the relations
(\ref{e:3.2}), (\ref{e:3.3}).
To construct more interesting solutions to the Toda equations we will
look for a mapping $\psi$, such that the mapping
\begin{equation}
\varphi' = \varphi \: \psi
\label{e:4.3}
\end{equation}
would generate a connection satisfying the grading condition and the
gauge-fixing constraint $\omega_{+0} = 0$.
For any $m \in \mathbb R^2$ the mapping $\tilde \psi_m$ defined by the
equality $\tilde \psi_m(s) = \psi(m, s)$, $s \in S^1$, is a smooth
mapping of $S^1$ to $\mathrm{GL}_n(\mathbb C)$. We treat the unit circle $S^1$ as
a subset of the complex plane which, in turn, is a subset of the
Riemann sphere. Assume that it is possible to extend
analytically each mapping $\tilde \psi_m$ to all of the Riemann
sphere. As a result we get a mapping of the direct product of
$\mathbb R^2$ and the Riemann sphere to $\mathrm{GL}_n(\mathbb C)$, which we also
denote by $\psi$. Suppose that for any $m \in \mathbb R^2$ the analytic
extension of $\tilde \psi_m$ results in a rational mapping regular at
the points $0$ and $\infty$, hence the name rational dressing. Below,
for each point $s$ of the Riemann sphere we denote by $\psi_s$ the
mapping of $\mathbb R^2$ to $\mathrm{GL}_n(\mathbb C)$ defined by the equality
$\psi_s(m) = \psi(m, s)$.
We work with the Toda equations described in section \ref{s:3}.
It means that the mapping $\psi$ is generated by a mapping of
the Euclidean plane to the loop group $\mathcal L_{a,p}(\mathrm{GL}_n(\mathbb C))$
with the corresponding inner automorphism of order $p$. Hence,
for any $m \in \mathbb R^2$ and $s \in S^1$ we should have
\begin{equation}
\psi(m, \epsilon_p s) = h \: \psi(m, s) \: h^{-1},
\label{e:4.4}
\end{equation}
where $h$ is a block-diagonal matrix explicitly given by the expression
\begin{equation}
h_{\alpha,\beta} = \epsilon_p^{p-\alpha+1} I_{n_\alpha} \delta_{\alpha\beta},
\qquad \alpha, \beta = 1,2,\ldots,p.
\label{*}
\end{equation}
The equality (\ref{e:4.4}) means that two rational mappings
coincide on $S^1$, therefore, they must coincide on the entire
Riemann sphere.
A mapping satisfying the equality (\ref{e:4.4}) can be constructed
by the following procedure. Let $\chi$ be an arbitrary mapping of the
direct product of $\mathbb R^2$ and the Riemann sphere to the algebra
$\mathrm{Mat}_n(\mathbb C)$ of $n \times n$ complex matrices. Let $\hat a$ be
a linear operator acting on $\chi$ as
\[
\hat a \: \chi (m, s) = h \: \chi(m, \epsilon_p^{-1} s) \: h^{-1}.
\]
It is easy to verify that the mapping
\[
\psi = \sum_{k=1}^p \hat a^k \chi
\]
satisfies the relation $\hat a \: \psi = \psi$ which is equivalent
to the equality (\ref{e:4.4}); here we have used that
$\hat a^p \chi = \chi$, since $h^p = I_n$.
Note that $\chi$ is in fact a mapping to the Lie group
$\mathrm{GL}_n(\mathbb C)$, but, to justify the above averaging
relation, we should consider $\mathrm{GL}_n(\mathbb C)$ as a subset
of $\mathrm{Mat}_n(\mathbb C)$.
To construct a rational mapping satisfying (\ref{e:4.4}) we start
with a rational mapping regular at the points $0$ and $\infty$ and
having poles at $r$ different nonzero points $\mu_i$, $i = 1, \ldots,
r$. Concretely speaking, we consider a mapping $\chi$ of the form
\[
\chi = \left( I_n + p \sum_{i=1}^r
\frac{\lambda}{\lambda - \mu_i} P_i \right) \chi_0,
\]
where $P_i$ are some smooth mappings of $\mathbb R^2$ to the algebra
$\mathrm{Mat}_n(\mathbb C)$ and $\chi_0$ is a mapping of $\mathbb R^2$ to the Lie
subgroup of $\mathrm{GL}_n(\mathbb C)$ formed by the elements
$g \in \mathrm{GL}_n(\mathbb C)$ satisfying the equality
\begin{equation}
h g h^{-1} = g, \label{e:4.5}
\end{equation}
where $h$ is given by the expression (\ref{*}). Actually this
subgroup coincides with the subgroup $G_0$. The averaging procedure
leads to the mapping
\begin{equation}
\psi = \left(I_n + \sum_{i=1}^r \sum_{k=1}^p \frac{\lambda}
{\lambda - \epsilon_p^k \mu_i} h^k P_i \: h^{-k} \right) \psi_0,
\label{e:4.6}
\end{equation}
where $\psi_0 = p \chi_0$. It is convenient to assume that
$\mu_i^p \ne \mu_j^p$ for all $i \ne j$.
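The averaging construction can be illustrated numerically; the values
$p = 3$, the pole $\mu$, the matrix $P$ and the element $\chi_0$ below
are arbitrary test data, with all $n_\alpha = 1$. An averaged rational
mapping of the form (\ref{e:4.6}) then satisfies the equivariance
property (\ref{e:4.4}).

```python
import numpy as np

p = 3
eps = np.exp(2j * np.pi / p)
h = np.diag([eps ** (p - a) for a in range(p)])   # h from (*) with all n_alpha = 1
hinv = np.linalg.inv(h)

rng = np.random.default_rng(3)
P = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
chi0 = np.diag(rng.standard_normal(p) + 2.0)      # an element of G_0 (diagonal)
mu = 0.7 + 0.4j                                   # pole position of chi

def chi(lam):
    # rational mapping regular at 0 and infinity with a single pole at mu
    return (np.eye(p) + lam / (lam - mu) * P) @ chi0

def psi(lam):
    # averaging over the automorphism group: sum_{k=1}^p h^k chi(eps^{-k} lam) h^{-k}
    total = np.zeros((p, p), dtype=complex)
    for k in range(1, p + 1):
        hk = np.linalg.matrix_power(h, k)
        total = total + hk @ chi(eps ** (-k) * lam) @ np.linalg.inv(hk)
    return total

lam = 1.3 - 0.2j                                  # any point away from the poles
assert np.allclose(psi(eps * lam), h @ psi(lam) @ hinv)
print("averaged mapping satisfies (4.4)")
```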
Denote by $\psi^{-1}$ the mapping of $\mathbb R^2 \times S^1$ to
$\mathrm{GL}_n(\mathbb C)$ defined by the relation
\[
\psi^{-1}(m, s) = (\psi(m, s))^{-1}.
\]
Suppose that for any fixed $m \in \mathbb R^2$ the mapping
$\tilde\psi_m^{-1}$ of $S^1$ to $\mathrm{GL}_n(\mathbb C)$ can be extended
analytically to a mapping of the Riemann sphere to $\mathrm{GL}_n(\mathbb C)$ and
as a result we obtain a rational mapping of the same structure as
the mapping $\psi$,
\begin{equation}
\psi^{-1} = \psi_0^{-1} \left( I_n
+ \sum_{i = 1}^r \sum_{k=1}^p \frac{\lambda}
{\lambda - \epsilon_p^k \nu_i} \: h^k \: Q_i \: h^{-k} \right),
\label{e:4.7}
\end{equation}
with the pole positions satisfying the conditions $\nu_i \ne 0$,
$\nu_i^p \ne \nu_j^p$ for all $i \ne j$, and additionally $\nu_i^p \ne
\mu_j^p$ for any $i$ and $j$. We will denote the mapping of the direct
product of $\mathbb R^2$ and the Riemann sphere to $\mathrm{GL}_n(\mathbb C)$ again by
$\psi^{-1}$.
By definition, the equality
\[
\psi^{-1} \psi = I_n
\]
is valid at all points of the direct product of $\mathbb R^2$ and $S^1$.
Since $\psi^{-1} \psi$ is a rational mapping, the above equality is
valid at all points of the direct product of $\mathbb R^2$ and the Riemann
sphere. Hence, the residues of $\psi^{-1} \psi$ at the points $\nu_i$
and $\mu_i$ should be equal to zero. Explicitly we have
\begin{gather}
Q_i \left( I_n + \sum_{j = 1}^r \sum_{k=1}^p \frac{\nu_i}{\nu_i -
\epsilon_p^k \: \mu_j} \: h^k \: P_j \: h^{-k} \right) = 0,
\label{e:4.8} \\
\left( I_n + \sum_{j = 1}^r \sum_{k=1}^p \frac{\mu_i}{\mu_i -
\epsilon_p^k \: \nu_j} \: h^k \: Q_j \: h^{-k} \right) P_i = 0.
\label{e:4.9}
\end{gather}
We will discuss later how to satisfy these relations, and now let us
consider what connection is generated by the mapping $\varphi'$
defined by (\ref{e:4.3}) with the mapping $\psi$ possessing the
prescribed properties.
Using the representation (\ref{e:4.3}), we obtain for the components
of the connection generated by $\varphi'$ the expressions
\begin{gather}
\omega_- = \psi^{-1} \partial_- \psi + \lambda^{-1} \psi^{-1} c_-
\psi, \label{e:4.10} \\
\omega_+ = \psi^{-1} \partial_+ \psi + \lambda \psi^{-1} c_+ \psi.
\label{e:4.11}
\end{gather}
We see that the component $\omega_-$ is a rational mapping which has
simple poles at the points $\mu_i$, $\nu_i$ and zero.\footnote{Here
and below, when discussing the holomorphic properties of mappings and
functions, we assume that the point of the space $\mathbb R^2$ is arbitrary
but fixed.} Similarly, the component $\omega_+$ is a rational mapping
which has simple poles at the points $\mu_i$, $\nu_i$ and infinity.
We are looking for a connection which satisfies the grading and
gauge-fixing conditions. The grading condition in our case is the
requirement that for each point of $\mathbb R^2$ the component $\omega_-$
is rational and has the only simple pole at zero, while the component
$\omega_+$ is rational and has the only simple pole at infinity.
Hence, we demand that the residues of $\omega_-$ and $\omega_+$ at the
points $\mu_i$ and $\nu_i$ vanish.
The residues of $\omega_-$ and $\omega_+$ at the points $\nu_i$ are
equal to zero if and only if
\begin{gather}
(\partial_- Q_i - \nu_i^{-1} Q_i \: c_-) \left( I_n + \sum_{j = 1}^r
\sum_{k=1}^p \frac{\nu_i}{\nu_i - \epsilon_p^k \: \mu_j} \: h^k \: P_j
\: h^{-k}
\right) = 0,
\label{e:4.12} \\
(\partial_+ Q_i - \nu_i \: Q_i \: c_+) \left( I_n + \sum_{j = 1}^r
\sum_{k=1}^p \frac{\nu_i}{\nu_i - \epsilon_p^k \: \mu_j} \: h^k \: P_j
\: h^{-k}
\right) = 0,
\label{e:4.13}
\end{gather}
respectively. Similarly, the requirement of vanishing of the residues
at the points $\mu_i$ gives the relations
\begin{gather}
\left( I_n + \sum_{j = 1}^r \sum_{k=1}^p \frac{\mu_i}{\mu_i -
\epsilon_p^k \: \nu_j} \: h^k \: Q_j \: h^{-k} \right) (\partial_- P_i +
\mu_i^{-1} c_- \: P_i) = 0,
\label{e:4.14} \\
\left( I_n + \sum_{j = 1}^r \sum_{k=1}^p \frac{\mu_i}{\mu_i -
\epsilon_p^k \: \nu_j} \: h^k \: Q_j \: h^{-k} \right)
(\partial_+ P_i + \mu_i \: c_+ \: P_i) = 0.
\label{e:4.15}
\end{gather}
To obtain the relations (\ref{e:4.12})--(\ref{e:4.15}) we made
use of the equalities (\ref{e:4.8}), (\ref{e:4.9}).
Suppose that we have succeeded in satisfying the relations
(\ref{e:4.8}), (\ref{e:4.9}) and (\ref{e:4.12})--(\ref{e:4.15}).
In such a case from the equalities (\ref{e:4.10}) and (\ref{e:4.11})
it follows that the connection under consideration satisfies the
grading condition.
It is easy to see from (\ref{e:4.11}) that
\[
\omega_+(m, 0) = \psi_0^{-1}(m) \partial_+ \psi_0(m).
\]
Taking into account that $\omega_{+0}(m) = \omega_+(m, 0)$, we
conclude that the gauge-fixing constraint $\omega_{+0} = 0$ is
equivalent to the relation
\begin{equation}
\partial_+ \psi_0 = 0. \label{e:4.16}
\end{equation}
Assuming that this relation is satisfied, we come to a connection
satisfying both the grading condition and the gauge-fixing condition.
Recall that if a flat connection $\omega$ satisfies the grading and
gauge-fixing conditions, then there exists a mapping $\gamma$ from
$\mathbb R^2$ to $G$ and mappings $c_-$ and $c_+$ of $\mathbb R^2$ to
$\mathfrak g_{-1}$ and $\mathfrak g_{+1}$, respectively, such that the
representation (\ref{e:2.8}) for the components $\omega_-$
and $\omega_+$ is valid. In general, the mappings $c_-$ and $c_+$
parameterizing the connection components may be different from the
mappings $c_-$ and $c_+$ which determine the mapping $\varphi$. Let us
denote the mappings corresponding to the connection under
consideration by $\gamma'$, $c_-'$ and $c_+'$. Thus, we have
\begin{align}
\psi^{-1} \partial_- \psi + \lambda^{-1} \psi^{-1} c_- \psi &=
\gamma^{\prime -1} \partial_- \gamma' + \lambda^{-1} c_-',
\label{e:4.17} \\
\psi^{-1} \partial_+ \psi + \lambda \psi^{-1} c_+ \psi &= \lambda
\gamma^{\prime -1} c_+' \gamma'. \label{e:4.18}
\end{align}
Note that $\psi_\infty$ is a mapping of $\mathbb R^2$ to the Lie subgroup
of $\mathrm{GL}_n(\mathbb C)$ defined by the relation (\ref{e:4.5}). Recall that
this subgroup coincides with $G_0$, and denote $\psi_\infty$ by
$\gamma$. {}From the relation (\ref{e:4.17}) we obtain the equality
\[
\gamma^{\prime -1} \partial_- \gamma' = \gamma^{-1} \partial_- \gamma.
\]
The same relation (\ref{e:4.17}) gives
\[
\psi^{-1}_0 c_- \psi_0 = c_-'.
\]
Impose the condition $\psi_0 = I_n$, which is consistent with
(\ref{e:4.16}). Then we have
\[
c_-' = c_-.
\]
Finally, from (\ref{e:4.18}) we obtain
\[
\gamma^{\prime -1} c_+' \gamma' = \gamma^{-1} c_+ \gamma.
\]
We see that if we impose the condition $\psi_0 = I_n$, then the
components of the connection under consideration have the form
given by (\ref{e:2.8}) where $\gamma = \psi_\infty$.
Thus, to find solutions to Toda equations under consideration, we can
use the following procedure. Fix $2r$ complex numbers $\mu_i$ and
$\nu_i$. Find matrix-valued functions $P_i$ and $Q_i$ satisfying
the relations (\ref{e:4.8}), (\ref{e:4.9}) and
(\ref{e:4.12})--(\ref{e:4.15}). With the help
of (\ref{e:4.6}), (\ref{e:4.7}), assuming that
\[
\psi_0 = I_n,
\]
construct the mappings $\psi$ and $\psi^{-1}$. Then, the mapping
\begin{equation}
\gamma = \psi_\infty
\label{e:4.19}
\end{equation}
satisfies the Toda equation (\ref{e:2.9}).
Let us return to the relations (\ref{e:4.8}), (\ref{e:4.9}). It can
be shown that, if we suppose that the matrices $P_i$ and $Q_i$ are of
maximum rank, then we get the trivial solution of the Toda equation
given by (\ref{e:4.2}). Hence, we will assume that $P_i$ and $Q_i$
are not of maximum rank. The simplest case here is given by matrices
of rank one which can be represented as
\begin{equation}
P_i = u^{}_i {}^{t\!} w^{}_i, \qquad Q_i = x^{}_i {}^{t\!} y^{}_i,
\label{**}
\end{equation}
where $u_i$, $w_i$, $x_i$ and $y_i$ are $n$-dimensional column vectors.
The $\mathbb Z$-gradation suggests that it is convenient to consider
the $n \times n$ matrix-valued functions $P_i$ and $Q_i$ in the
corresponding block-matrix form. According to the representation
(\ref{e:3.1}), we can write
\[
P_i = \left( \begin{array}{cccc}
(P_i)_{11} & (P_i)_{12} &\ldots& (P_i)_{1p} \\
(P_i)_{21} & (P_i)_{22} &\ldots& (P_i)_{2p} \\
\vdots & \vdots & \ddots & \vdots \\
(P_i)_{p1} & (P_i)_{p2} &\ldots& (P_i)_{pp}
\end{array} \right),
\]
and make similar block-matrix partition for $Q_i$, where the
submatrices $(P_i)_{\alpha\beta}$ and $(Q_i)_{\alpha\beta}$
are complex $n_\alpha \times n_\beta$ matrices. Then, in terms
of such block submatrices, the relations (\ref{**}) take the
forms
\[
(P_i)_{\alpha\beta} = u_{i,\alpha} \: {}^{t\!}w_{i,\beta},
\qquad
(Q_i)_{\alpha\beta} = x_{i,\alpha} \: {}^{t\!}y_{i,\beta},
\]
where the standard matrix multiplication of the $n_\alpha \times 1$
submatrices $u_{i,\alpha}$, $x_{i,\alpha}$ by the respective
$1 \times n_\beta$ submatrices ${}^{t\!}w_{i,\beta}$,
${}^{t\!}y_{i,\beta}$ is implied. We see that, from the
point of view of the $\mathbb Z$-gradation, the $n \times 1$ matrices
$u_i$, $w_i$, $x_i$ and $y_i$ also receive a natural representation
in a block-matrix form,
\[
{}^{t\!}u_i = \left(
{}^{t\!}u_{i,1} \:\:
{}^{t\!}u_{i,2} \:\:
\ldots \:\:
{}^{t\!}u_{i,\alpha} \:\:
\ldots \:\:
{}^{t\!}u_{i,p}
\right),
\qquad
{}^{t\!}y_i = \left(
{}^{t\!}y_{i,1} \:\:
{}^{t\!}y_{i,2} \:\:
\ldots \:\:
{}^{t\!}y_{i,\alpha} \:\:
\ldots \:\:
{}^{t\!}y_{i,p}
\right),
\]
where $u_{i,\alpha}$ and $y_{i,\alpha}$, $\alpha=1,\ldots,p$,
are complex $n_\alpha \times 1$ matrices. We have similar
expressions also for $w_i$ and $x_i$. This representation,
together with the block-matrix form (\ref{*}) of $h$,
allows us to write the relations (\ref{e:4.8}) and
(\ref{e:4.9}) as follows:
\begin{gather}
{}^{t\!}y_{i,\alpha} + \sum_{j=1}^{r} \sum_{\delta,\beta=1}^p
\frac{\displaystyle
\nu_i \: \epsilon_p^{-\beta(\delta-\alpha)}}
{\displaystyle \nu_i - \epsilon_p^\beta \: \mu^{}_j}
\left( {}^{t\!}y_{i,\delta} \: u_{j,\delta} \right)
{}^{t\!}w_{j,\alpha} = 0,
\label{e:4.20} \\
u_{i,\alpha} + \sum_{j=1}^{r} \sum_{\delta,\beta=1}^p
\frac{\displaystyle
\mu_i \: \epsilon_p^{-\beta(\alpha-\delta)}}
{\displaystyle \mu_i - \epsilon_p^\beta \: \nu_j}
x_{j,\alpha} \left( {}^{t\!}y_{j,\delta} \: u_{i,\delta} \right) = 0.
\label{e:4.21}
\end{gather}
Using the identity
\begin{equation}
\sum_{\alpha=0}^{p-1} \frac{z \: \epsilon_p^{-\beta\alpha}}
{z - \epsilon_p^\alpha}
= p \frac{z^{p-|\beta|_p}}{z^p - 1},
\label{e:4.22}
\end{equation}
where $|\beta|_p$ denotes the remainder of the division of $\beta$ by $p$,
we can rewrite (\ref{e:4.20}) in terms of the block submatrices,
\begin{equation}
{}^{t\!}y_{i,\alpha} + p \sum_{j=1}^{r}
(R_\alpha)_{i j} \: {}^{t\!}w_{j,\alpha} = 0.
\label{e:4.23}
\end{equation}
Here the ${r} \times {r}$ matrices $R_\alpha$ are defined as
\[
(R_\alpha)_{i j} = \frac{1}{\nu_i^p - \mu_j^p}
\sum_{\beta=1}^p \nu_i^{p - |\beta-\alpha|_p} \mu_j^{|\beta-\alpha|_p}
\: {}^{t\!}y_{i,\beta} \: u_{j,\beta}.
\]
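As a sanity check, the identity (\ref{e:4.22}) is easy to verify numerically; below is a minimal sketch in Python (the value of $p$ and the test point $z$ are arbitrary choices):

```python
import numpy as np

p = 5
eps = np.exp(2j * np.pi / p)   # primitive p-th root of unity
z = 0.7 + 1.3j                 # arbitrary point with z**p != 1

for beta in range(2 * p):      # covers beta >= p, probing |beta|_p
    lhs = sum(z * eps**(-beta * a) / (z - eps**a) for a in range(p))
    rhs = p * z**(p - beta % p) / (z**p - 1)
    assert abs(lhs - rhs) < 1e-9
```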
The identity (\ref{e:4.22}) allows us to write also the submatrix form
of (\ref{e:4.21}) as
\begin{equation}
u_{i,\alpha} + p \sum_{j=1}^{r} x_{j,\alpha} (S_\alpha)_{j i}
= 0, \label{e:4.24}
\end{equation}
where
\[
(S_\alpha)_{j i} = - \frac{1}{\nu^p_j - \mu^p_i}
\sum_{\beta=1}^p \nu_j^{|\alpha-\beta|_p}
\mu_i^{p - |\alpha-\beta|_p}
\: {}^{t\!}y_{j,\beta} \: u_{i,\beta}.
\]
With the help of the equality
\[
p - 1 - |\alpha-1|_p = |-\alpha|_p
\]
it is straightforward to demonstrate that
\[
(S_\alpha)_{j i} = - \frac{\mu_i}{\nu_j}
(R_{\alpha+1})_{j i},
\]
and so, (\ref{e:4.24}) can be written as
\begin{equation}
u_{i,\alpha} - p \mu_i \sum_{j=1}^{r} x_{j,\alpha}
\frac{1}{\nu_j} (R_{\alpha+1})_{j i} = 0.
\label{e:4.25}
\end{equation}
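The relation between $S_\alpha$ and $R_{\alpha+1}$ can be confirmed numerically from the two definitions; a sketch with random data, where for simplicity the products ${}^{t\!}y_{i,\beta}\, u_{j,\beta}$ are modelled by random complex numbers (the case $n_* = 1$):

```python
import numpy as np

rng = np.random.default_rng(0)
p, r = 3, 2
mu = rng.standard_normal(r) + 1j * rng.standard_normal(r)
nu = rng.standard_normal(r) + 1j * rng.standard_normal(r)
# g[i, j, b-1] models the scalar product  {}^t y_{i,b} u_{j,b}
g = rng.standard_normal((r, r, p)) + 1j * rng.standard_normal((r, r, p))

def R(alpha):
    out = np.empty((r, r), dtype=complex)
    for i in range(r):
        for j in range(r):
            out[i, j] = sum(
                nu[i]**(p - (b - alpha) % p) * mu[j]**((b - alpha) % p)
                * g[i, j, b - 1] for b in range(1, p + 1)
            ) / (nu[i]**p - mu[j]**p)
    return out

def S(alpha):
    out = np.empty((r, r), dtype=complex)
    for j in range(r):
        for i in range(r):
            out[j, i] = -sum(
                nu[j]**((alpha - b) % p) * mu[i]**(p - (alpha - b) % p)
                * g[j, i, b - 1] for b in range(1, p + 1)
            ) / (nu[j]**p - mu[i]**p)
    return out

for alpha in range(1, p + 1):
    # (S_alpha)_{ji} = -(mu_i / nu_j) (R_{alpha+1})_{ji}
    assert np.allclose(S(alpha), -(mu[None, :] / nu[:, None]) * R(alpha + 1))
```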
We use the equations (\ref{e:4.23}) and (\ref{e:4.25}) to express
the vectors $w_i$ and $x_i$ via the vectors $u_i$ and $y_i$,
\[
{}^{t\!}w_{i,\alpha} = - \frac{1}{p} \sum_{j=1}^{r}
(R^{-1}_\alpha)_{i j} \: {}^{t\!}y_{j,\alpha}, \qquad
x_{i,\alpha} = \frac{1}{p} \sum_{j=1}^{r}
u_{j,\alpha} \: \frac{1}{\mu_j} \:
(R^{-1}_{\alpha+1})_{j i} \: \nu_i.
\]
Besides the summation over the pole index $j$, the last two relations
involve the corresponding matrix multiplications of the entering
submatrices. As a result, we come to the following solution
of the relations (\ref{e:4.8}) and (\ref{e:4.9}):
\[
(P_i)_{\alpha\beta} = - \frac{1}{p} u_{i,\alpha} \sum_{j=1}^{r}
(R^{-1}_\beta)_{i j} \: {}^{t\!}y_{j,\beta}, \qquad
(Q_i)_{\alpha\beta} = \frac{1}{p} \sum_{j=1}^{r} u_{j,\alpha} \:
\frac{1}{\mu_j} \: (R^{-1}_{\alpha+1})_{j i} \: \nu_i \:
{}^{t\!}y_{i,\beta}.
\]
Using (\ref{e:4.6}) and (\ref{e:4.19}), we get
\[
\gamma = \psi^{}_\infty
= I_n + \sum_{i = 1}^{r} \sum_{\alpha=1}^p h^{\alpha} \: P_i \:
h^{-\alpha}.
\]
For the submatrices of $\gamma$ this gives the expression
\[
\gamma_{\alpha\beta}
= \delta_{\alpha\beta} \left( I_{n_\alpha} + p \sum_{i=1}^{r}
(P_i)_{\alpha \alpha} \right)
= \delta_{\alpha\beta} \left( I_{n_\alpha} - \sum_{i, j = 1}^{r}
u_{i,\alpha} \: (R^{-1}_\alpha)_{i j} \: {}^{t\!}y_{j,\alpha} \right).
\]
Hence, in view of the block-diagonal structure of $\gamma$, we have
\[
\Gamma_\alpha = I_{n_\alpha} - \sum_{i, j = 1}^{r}
u_{i,\alpha} \: (R^{-1}_\alpha)_{i j} \: {}^{t\!}y_{j,\alpha}.
\]
According to our general convention, we assume that the
$n_\alpha \times 1$ matrix-valued functions $u_{i,\alpha}$ and
$y_{i,\alpha}$ are defined for arbitrary integer values of $\alpha$
and
\[
u_{i,\alpha+p} = u_{i,\alpha}, \qquad
y_{i,\alpha+p} = y_{i,\alpha}.
\]
The periodicity of $(R_\alpha)_{i j}$ actually follows from its
definition,
\[
(R_{\alpha+p})_{i j} = (R_{\alpha})_{i j}.
\]
It turns out to be more convenient to use the quasi-periodic quantities
$\widetilde u_{i,\alpha}$, $\widetilde y_{i,\alpha}$ and $(\widetilde R_\alpha)_{i j}$
defined by
\begin{gather*}
\widetilde u_{i,\alpha} = u_{i,\alpha} \mu^\alpha_i, \qquad
\widetilde y_{i,\alpha} = y_{i,\alpha} \nu^{-\alpha}_i, \\
(\widetilde R_\alpha)_{i j} = \nu^{-\alpha}_i (R_\alpha)_{i j} \: \mu^\alpha_j.
\end{gather*}
For these quantities we have
\begin{gather*}
\widetilde u_{i,\alpha+p} = \widetilde u_{i,\alpha} \: \mu^p_i, \qquad
\widetilde y_{i,\alpha+p} = \widetilde y_{i,\alpha} \: \nu^{-p}_i, \\
(\widetilde R_{\alpha+p})_{i j} = \nu^{-p}_i
(\widetilde R_\alpha)_{i j} \: \mu^p_j.
\end{gather*}
In terms of the functions $\widetilde y_{i,\alpha}$ and
$\widetilde u_{i,\alpha}$, the matrix elements of the matrices
$\widetilde R_\alpha$ take the simpler form
\begin{equation}
(\widetilde R_\alpha)_{i j} = \frac{1}{\nu_i^p - \mu_j^p}
\left( \mu_j^p \sum_{\beta=1}^{\alpha-1} {}^{t\!} \widetilde y_{i,\beta}
\: \widetilde u_{j,\beta}
+ \nu_i^p \sum_{\beta=\alpha}^p {}^{t\!} \widetilde y_{i,\beta}
\: \widetilde u_{j,\beta} \right).
\label{e:4.26}
\end{equation}
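The equivalence of (\ref{e:4.26}) with the rescaled original definition of $(R_\alpha)_{i j}$ can be checked numerically in the same spirit; a sketch with random scalar data for the products ${}^{t\!}\widetilde y_{i,\beta}\,\widetilde u_{j,\beta}$:

```python
import numpy as np

rng = np.random.default_rng(2)
p, r = 4, 3
mu = rng.standard_normal(r) + 1j * rng.standard_normal(r)
nu = rng.standard_normal(r) + 1j * rng.standard_normal(r)
# gt[i, j, b] models the scalar product  {}^t ytilde_{i,b} utilde_{j,b}
gt = rng.standard_normal((r, r, p + 1)) + 1j * rng.standard_normal((r, r, p + 1))

def R_tilde_scaled(a):
    # nu_i^{-a} (R_a)_{ij} mu_j^{a}, using  {}^t y_b u_b = nu^b mu^{-b} {}^t yt ut
    out = np.empty((r, r), dtype=complex)
    for i in range(r):
        for j in range(r):
            s = sum(nu[i]**(p - (b - a) % p) * mu[j]**((b - a) % p)
                    * nu[i]**b * mu[j]**(-b) * gt[i, j, b]
                    for b in range(1, p + 1))
            out[i, j] = nu[i]**(-a) * mu[j]**a * s / (nu[i]**p - mu[j]**p)
    return out

def R_tilde_426(a):
    # the form (4.26)
    out = np.empty((r, r), dtype=complex)
    for i in range(r):
        for j in range(r):
            s1 = sum(gt[i, j, b] for b in range(1, a))
            s2 = sum(gt[i, j, b] for b in range(a, p + 1))
            out[i, j] = (mu[j]**p * s1 + nu[i]**p * s2) / (nu[i]**p - mu[j]**p)
    return out

for a in range(1, p + 1):
    assert np.allclose(R_tilde_scaled(a), R_tilde_426(a))
```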
In terms of the quasi-periodic quantities, for the matrix-valued
functions $\Gamma_\alpha$ we have
\begin{equation}
\Gamma_\alpha = I_{n_\alpha} - \sum_{i,j=1}^{r}
\widetilde u_{i,\alpha} \: (\widetilde R^{-1}_\alpha)_{i j}
\: {}^{t\!} \widetilde y_{j,\alpha}.
\label{e:4.27}
\end{equation}
It is useful to have also the explicit expression of the inverse
mapping $\gamma^{-1}$. Using the relation
\[
\gamma^{-1} = \psi^{-1}_\infty
= I_n + \sum^{r}_{i=1} \sum^p_{\alpha=1}
h^{\alpha} \: Q_i \: h^{-\alpha}
\]
we derive
\begin{equation}
\Gamma^{-1}_\alpha = I_{n_\alpha} + \sum^{r}_{i,j=1}
\widetilde u^{}_{i,\alpha}
(\widetilde R^{-1}_{\alpha+1})_{i j} \: {}^{t\!} \widetilde y_{j,\alpha}.
\label{e:4.28}
\end{equation}
Using the definition of $\widetilde R_\alpha$, we come to the equality
\[
(\widetilde R^{}_{\alpha+1})_{i j} = (\widetilde R^{}_\alpha)_{i j}
- {}^{t\!} \widetilde y_{i,\alpha} \: \widetilde u_{j,\alpha}.
\]
It is clear that in the case under consideration we do not have
any determinant representation specific to the abelian case
\cite{Mik81, NirRaz08a}, and the last two relations are merely
helpful for verifying that the obtained solutions satisfy the
equations of motion.
Further, it follows from (\ref{e:4.20}) and (\ref{e:4.21}) that,
to fulfill also (\ref{e:4.12})--(\ref{e:4.15}), it is sufficient
to satisfy the equations
\begin{gather}
\partial_- y_i = \nu_i^{-1} \: {}^{t\!}c_- \: y_i,
\qquad
\partial_+ y_i = \nu_i^{} \: {}^{t\!}c_+ \: y_i,
\label{e:4.29} \\
\partial_- u_i = - \mu_i^{-1} \: c_- \: u_i,
\qquad
\partial_+ u_i = - \mu_i^{} \: c_+ \: u_i.
\label{e:4.30}
\end{gather}
The general solution to these equations in the case when $c_-$ and
$c_+$ are constant is formally
\begin{gather*}
y_i(z^-,z^+)
= \exp(\nu_i^{-1} \: {}^{t\!}c_- \: z^-
+ \nu_i^{} \: {}^{t\!}c_+ \: z^+) \: y^0_i,
\\
u_i(z^-,z^+)
= \exp(- \mu_i^{-1} \: c_- \: z^-
- \mu_i^{} \: c_+ \: z^+) \: u^0_i,
\end{gather*}
where $y^0_i = y_i(0,0)$ and $u^0_i = u_i(0,0)$.
Thus we have shown that it is possible to satisfy (\ref{e:4.8}),
(\ref{e:4.9}) and (\ref{e:4.12})--(\ref{e:4.15}) and construct in
this way a wide class of solutions to the non-abelian loop Toda
equations (\ref{e:3.4}). In what follows we will suppose that
$c_-$ and $c_+$ represent constant mappings and shall make
the above formal solution to the equations (\ref{e:4.29}),
(\ref{e:4.30}) explicit.
\section{Deriving soliton solutions} \label{s:5}
\subsection{The eigenvalue problems} \label{s:5.1}
The formal expressions for $u_i$ and $y_i$ show that we have to
handle the exponentials of the matrices $c_-$ and $c_+$. To this
end, we treat them as matrices of linear operators. Assume that
the submatrices entering
the mappings $c_-$ and $c_+$ are of maximum ranks, that is
\[
{\mathrm {rank}} \; C_{-\alpha} = {\mathrm{min}} \;
(n_{\alpha + 1},n_{\alpha}),
\qquad
{\mathrm {rank}} \; C_{+\alpha} = {\mathrm{min}} \;
(n_{\alpha},n_{\alpha + 1}),
\]
and they respect the commutativity of $c_-$ and $c_+$ according
to (\ref{e:4.1}). Here we consider the case where these matrices
are such that the corresponding $n_{\alpha+1} \times n_\alpha$ and
$n_{\alpha} \times n_{\alpha+1}$ submatrices $C_{-\alpha}$ and
$C_{+\alpha}$ can be brought to the forms
\[
\left( \begin{array}{c}
I_{n_\alpha} \\ 0
\end{array}
\right), \qquad
\left( I_{n_\alpha} \:\:\: 0 \right)
\]
if $n_\alpha \le n_{\alpha + 1}$, and
\[
\left( I_{n_{\alpha+1}} \:\:\: 0 \right), \qquad
\left( \begin{array}{c}
I_{n_{\alpha+1}} \\ 0
\end{array}
\right)
\]
if $n_{\alpha + 1} \le n_\alpha$, respectively, by implementing
the transformations (\ref{e:2.12}), or the same in the submatrix
form (\ref{e:3.6}), accompanied by an appropriate change of
independent variables.
Denote by $n_*$ the minimum value of the positive integers
$\{n_\alpha\}$. Consider the eigenvalue problems for the linear
operators $c_-$ and $c_+$. The corresponding characteristic
polynomial is $(-1)^n t^{n - p n_*} (t^p - 1)^{n_*}$ giving
rise to the characteristic equation
\[
t^{n-pn_*} \prod^{p}_{\alpha=1}
\left( t - \epsilon_p^\alpha \right)^{n_*} = 0.
\]
Therefore, the spectrum consists of the zero eigenvalue
having the algebraic multiplicity $n-pn_*$ and nonzero
eigenvalues being powers of the $p$th root of unity
having the algebraic multiplicity $n_*$ each. We also
take into account that the spectra of similar matrices
coincide.
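As a small illustration, take $p = 3$ and $(n_1, n_2, n_3) = (2, 1, 1)$, so that $n = 4$ and $n_* = 1$. The numerical sketch below assumes that $c_-$ carries the blocks $C_{-\alpha}$ at the cyclic block positions $(\alpha + 1, \alpha)$, a placement consistent with the eigenvalue relations considered below:

```python
import numpy as np

# p = 3, (n_1, n_2, n_3) = (2, 1, 1): n = 4 and n_* = min(n_alpha) = 1.
# Assumed placement of blocks: C_{-alpha} at block position
# (alpha + 1, alpha), taken cyclically, with
#   C_{-1} = (1 0),  C_{-2} = (1),  C_{-3} = {}^t(1 0).
c_minus = np.zeros((4, 4))
c_minus[2, :2] = [1, 0]
c_minus[3, 2] = 1
c_minus[:2, 3] = [1, 0]

# expected spectrum: zero with algebraic multiplicity n - p n_* = 1,
# and each cube root of unity with multiplicity n_* = 1
eig = np.sort_complex(np.linalg.eigvals(c_minus))
expected = np.sort_complex(
    np.array([0] + [np.exp(2j * np.pi * k / 3) for k in range(3)]))
assert np.allclose(eig, expected, atol=1e-8)
```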
The eigenvalue problem relations
\[
c_- \Psi_{\beta} = \epsilon_p^{-\beta} \Psi_{\beta}, \qquad
c_+ \Psi_{\beta} = \epsilon_p^\beta \Psi_{\beta}
\]
are satisfied by the eigenvectors\footnote{Here $\Psi_\beta$ is in
fact an $n \times n_*$ matrix, thus a collection of $n_*$ different
eigenvectors of $c_-$ corresponding to one and the same
eigenvalue $\epsilon^{-\beta}_p$.}
\[
{}^{t\!}\Psi_{\beta} = \left(
{}^{t\!}\Psi_{\beta,1},\ldots,
{}^{t\!}\Psi_{\beta,\alpha},\ldots,
{}^{t\!}\Psi_{\beta,p}
\right)
\]
with
\[
\Psi_{\beta,\alpha} = \epsilon_p^{\alpha \beta} \theta_\alpha,
\]
where constant $n_\alpha \times n_*$ submatrices $\theta_\alpha$
are subject to the conditions
\begin{equation}
C_{-\alpha} \theta_{\alpha} = \theta_{\alpha+1}, \qquad
C_{+\alpha} \theta_{\alpha+1} = \theta_\alpha.
\label{e:5.1}
\end{equation}
Denote by $k$ the rank of the matrix $c_-$.
Then we have $n-k={\mathrm{dim}\;{\mathrm{ker}\;c_-}}$.
It is clear that for the case under consideration
$k = \mathrm{rank}\;c_-
= \sum^p_{\alpha=1}\mathrm{min}\;(n_\alpha,n_{\alpha+1}) \ge pn_*$.
In general, the algebraic multiplicity of an eigenvalue does not
coincide with its geometric multiplicity; the former is only
no less than the latter. To be precise, here we have that the
algebraic and geometric multiplicities of the nonzero eigenvalues
$t = \epsilon_p^{\pm \alpha}$ are one and the same and equal
to $n_*$ for all $\alpha=1,2,\ldots,p$. Indeed, it can be shown
that there are no generalized eigenvectors of $c_-$ corresponding
to its nonzero eigenvalues, that is, no nontrivial eigenvectors of
the form $\Psi'_\beta = (c_- - \epsilon^{-\beta}_p I_n) \Psi_\beta$
for any $\beta = 1,2,\ldots,p$. In contrast, there are nontrivial
generalized eigenvectors of $c_-$ corresponding to the zero eigenvalue.
As a consequence, the algebraic multiplicity of the zero eigenvalue,
equal to $n - pn_*$ as is seen from the characteristic equation,
does not coincide with its geometric multiplicity equal to $n - k$.
Hence, treating $c_-$ as a matrix of a linear operator acting on an
$n$-dimensional vector space $V$, we see that the latter can be
decomposed into a direct sum as
\[
V = V_0 \oplus V_1,
\]
where $V_1$ is a $pn_*$-dimensional subspace spanned by
the $\Psi$-eigenvectors of $c_-$ with nonzero eigenvalues,
actually, $V_1 = {\mathrm{im}\;c_-}$, and $V_0$ is simply
defined to be its orthogonal complement spanned by the null
vectors and generalized null vectors of $c_-$. Here the
subspace spanned by the generalized null vectors has
dimension $k - p n_*$.
A similar consideration applies to the matrix $c_+$ as well.
Note also that the above decomposition induces the corresponding
dual decomposition.
Now, seeing the structure of the general solution for $y_i$ and
$u_i$, we can proceed as follows. Expand the initial values
$y^0_i$ and $u^0_i$ over the basis vectors of $V$, taking
into account its decomposition:
\[
y^0_i = \Psi_{0} \: d_{i,0}
+ \sum^p_{\alpha=1} \Psi_{\alpha} \: d_{i,\alpha},
\qquad
u^0_i = \Psi_{0} \: c_{i,0}
+ \sum^p_{\alpha=1} \Psi_{\alpha} \: c_{i,\alpha},
\]
where $\Psi_{0}$ is an $n \times (n-pn_*)$ matrix whose columns are
an appropriate collection of basis vectors of $V_0$, the $n \times n_*$
matrices $\Psi_{\alpha}$ introduced earlier provide basis vectors of
$V_1$, $c_{i,0}$ and $d_{i,0}$ are $(n-pn_*) \times 1$
matrices, while $c_{i,\alpha}$ and $d_{i,\alpha}$ are $n_* \times 1$
matrices, altogether encoding the initial value data for $u_i$ and
$y_i$. Then, in view of the above consideration of the properties of
the matrices $c_\pm$, the general solutions to (\ref{e:4.29}),
(\ref{e:4.30}) take the forms
\begin{gather}
y_i(z^-,z^+) = \Psi_{0} \: q_{i-}(z^-) q_{i+}(z^+) \: d_{i,0}
+ \sum^p_{\alpha=1} \Psi_{\alpha}
\exp \left( \nu^{-1}_i \: \epsilon_p^{\alpha} \: z^-
+ \nu_i \: \epsilon_p^{-\alpha} \: z^+ \right) d_{i,\alpha},
\label{e:5.2} \\
u_i(z^-,z^+) = \Psi_{0} \: p_{i-}(z^-) p_{i+}(z^+) \: c_{i,0}
+ \sum^p_{\alpha=1} \Psi_{\alpha}
\exp \left( -\mu^{-1}_i \: \epsilon_p^{-\alpha} \: z^-
- \mu_i \: \epsilon_p^\alpha \: z^+ \right)
c_{i,\alpha},
\label{e:5.3}
\end{gather}
where $p_{i\pm}(z^\pm)$ and $q_{i\pm}(z^\pm)$ are
$(n-pn_*) \times (n-pn_*)$ matrices with matrix elements
$\left(p_{i\pm}(z^\pm)\right)_{ab}$ and
$\left(q_{i\pm}(z^\pm)\right)_{ab}$,
$a,b=1,2,\ldots,n-pn_*$, being polynomials in $z^\pm$ of
degrees not greater than $k - p n_*$. More specifically, these
polynomials are fixed by the requirement that the equations
(\ref{e:4.29}) and (\ref{e:4.30}) be satisfied.
Further, for the submatrices introduced earlier according
to the $\mathbb Z$-grading structure we obtain
\begin{multline*}
{}^{t\!}y_{i,\delta} \: u_{j,\delta}
= {}^{t\!}d_{i,0} \: {}^{t\!}q_{i+} {}^{t\!}q_{i-}
({}^{t\!}\Psi_{0,\delta} \Psi_{0,\delta}) p_{j-} p_{j+}
\: c_{j,0} \\[.5em]
+ \sum^p_{\alpha,\beta=1}
\exp{\left( Z_{-\alpha}(\nu_i) - Z_{\beta}(\mu_j) \right)}
{}^{t\!}d_{i,\alpha} \: ({}^{t\!}\Psi_{\alpha,\delta}
\: \Psi_{\beta,\delta}) \: c_{j,\beta},
\end{multline*}
where for convenience we have introduced the notation
\[
Z_\alpha(\mu_i) = \mu^{-1}_i \: \epsilon_p^{-\alpha} \: z^-
+ \mu^{}_i \: \epsilon_p^\alpha \: z^+.
\]
Recalling that
$\Psi_{\beta,\delta} = \epsilon_p^{\delta\beta} \theta_\delta$
and passing to quasi-periodic quantities, we find
\begin{multline*}
{}^{t\!} \widetilde y_{i,\delta} \: \widetilde u_{j,\delta}
= {}^{t\!}d_{i,0} \: {}^{t\!}q_{i+} {}^{t\!}q_{i-}
\nu^{-\delta}_i \: ({}^{t\!}\Psi_{0,\delta} \Psi_{0,\delta})
\: \mu^\delta_j \: p_{j-} p_{j+} \: c_{j,0} \\[.5em]
+ \sum^p_{\alpha,\beta=1} \nu^{-\delta}_i \: \mu^\delta_j
\: \epsilon_p^{\delta(\alpha + \beta)}
\exp{\left( Z_{-\alpha}(\nu_i) - Z_{\beta}(\mu_j) \right)}
{}^{t\!}d_{i,\alpha} \: ({}^{t\!}\theta_\delta \theta_\delta) \:
c_{j,\beta}.
\end{multline*}
Suppose that the $n_\delta \times n_*$ submatrices $\theta_\delta$ are
such that
\[
{}^{t\!}\theta_\delta \theta_\delta = \Theta,
\]
where $\Theta$ is one and the same non-degenerate $n_* \times n_*$ matrix
for all $\delta=1,2,\ldots,p$. Consequently, we have from (\ref{e:4.26})
\begin{eqnarray}
(\widetilde R_\alpha)_{i j} &=&
{}^{t\!}d_{i,0} \: {}^{t\!}q_{i+} {}^{t\!}q_{i-}
(\Psi^0_\alpha)^{}_{i j} \: p_{j-} p_{j+} \: c_{j,0}
\nonumber \\[.5em]
&& + \sum^p_{\beta,\delta=1} \mathrm e^{Z_{-\beta}(\nu_i)
- Z_\delta(\mu_j)}
\frac{\epsilon_p^{\alpha(\beta+\delta)}}{1-\mu_j\nu^{-1}_i
\epsilon_p^{\beta+\delta}} \:
{}^{t\!}d_{i,\beta} \: \Theta \: c_{j,\delta} \:
(\mu^\alpha_j \: \nu^{-\alpha}_i),
\label{e:5.4}
\end{eqnarray}
where, for the sake of brevity, we have used the notation
\[
(\Psi^0_\alpha)^{}_{i j} = \frac{1}{\nu^p_i - \mu^p_j}
\left( \mu^p_j \sum^{\alpha-1}_{\varepsilon=1}
\nu^{-\varepsilon}_i \:
{}^{t\!}\Psi_{0,\varepsilon} \: \Psi_{0,\varepsilon}
\: \mu^\varepsilon_j + \nu^p_i
\sum^p_{\varepsilon=\alpha} \nu^{-\varepsilon}_i \:
{}^{t\!}\Psi_{0,\varepsilon} \: \Psi_{0,\varepsilon}
\: \mu^\varepsilon_j \right)
\]
for this constant $(n-pn_*) \times (n-pn_*)$ matrix.
The relations (\ref{e:5.2}), (\ref{e:5.3}) and (\ref{e:5.4})
allow us to construct general solutions $\Gamma_\alpha$ by
(\ref{e:4.27}) (it should be instructive to compare our
general solution with the corresponding construction of
\cite{EtiGelRet97a} where the notion of quasi-determinants
was exploited for the purpose).
\subsection{One-soliton solution} \label{s:5.2}
To construct the simplest one-soliton solutions ($r = 1$) to the Toda
equations, we assume that the initial data of the system are such
that the coefficients $c_{\alpha}$ are nonzero only for one value of
the index $\alpha$, which we denote by $I$, and the coefficients
$d_{\alpha}$ are nonzero only for two values of the index $\alpha$,
which we denote by $J$ and $K$. Besides, let $c_{0}$ and $d_{0}$
be zero.\footnote{Note that, essentially unlike the twisted abelian
case \cite{NirRaz08b}, keeping these null-eigenspace coefficients
nonzero here we do not obtain any soliton-like solutions.} Thus,
for $n_\alpha \times 1$ submatrices of the matrix-valued functions
$\widetilde u$ and $\widetilde y$ we have
\begin{eqnarray*}
&& \widetilde u^{}_{\alpha} = \mu^{\alpha} \: \epsilon_p^{\alpha I}
\: \mathrm e^{-Z_I(\mu)} \: \theta_\alpha \: c_{I},
\\[.5em]
&& {}^{t\!}\widetilde y_{\alpha} = \nu^{-\alpha} \: \epsilon_p^{\alpha J}
\: \mathrm e^{Z_{-J}(\nu)} \: {}^{t\!}d_{J} \: {}^{t\!}\theta_\alpha
+ \nu^{-\alpha} \: \epsilon_p^{\alpha K} \: \mathrm e^{Z_{-K}(\nu)}
\: {}^{t\!}d_{K} \: {}^{t\!}\theta_\alpha.
\end{eqnarray*}
The matrix $\widetilde R_\alpha$ reduces to a scalar function in this
case and is given by the expression
\begin{multline*}
\widetilde R_\alpha = \mu^\alpha \: \nu^{-\alpha} \: \epsilon_p^{I\alpha}
\: \mathrm e^{-Z_I(\mu)} \left(
\mathrm e^{Z_{-J}(\nu)} \frac{\epsilon_p^{J\alpha}}
{1 - \mu \nu^{-1} \epsilon_p^{I+J}}
\: ({}^{t\!}d_{J} \: \Theta \: c_{I}) \right. \\
\left. + \mathrm e^{Z_{-K}(\nu)} \frac{\epsilon_p^{K\alpha}}
{1 - \mu \nu^{-1} \epsilon_p^{I+K}}
\: ({}^{t\!}d_{K} \: \Theta \: c_{I})
\right).
\end{multline*}
And for the $n_\alpha \times n_\alpha$ matrix-valued functions
$\Gamma_\alpha$ this gives
\[
\Gamma_\alpha = \frac{I_{n_\alpha}
- (1 - \mu \nu^{-1} \epsilon_p^{I+J}) Y^{(J)}_\alpha
+ \widetilde d \epsilon_p^{\alpha(K-J)}
\mathrm e^{Z_{-K}(\nu) - Z_{-J}(\nu)}(I_{n_\alpha}
- (1 - \mu \nu^{-1} \epsilon_p^{I+K}) Y^{(K)}_\alpha)}
{1 + \widetilde d \epsilon_p^{\alpha(K-J)} \mathrm e^{Z_{-K}(\nu) - Z_{-J}(\nu)}},
\]
where we have introduced constant idempotent
$n_\alpha \times n_\alpha$ matrices
\[
Y^{(A)}_\alpha = \frac{(\theta_\alpha \: c_{I} \: {}^{t\!}d_{A}
\: {}^{t\!}\theta_\alpha)}{{}^{t\!}d_{A} \: \Theta \: c_{I}},
\qquad
(Y^{(A)}_\alpha)^2 = Y^{(A)}_\alpha, \qquad A = J, K,
\]
satisfying the relations
\[
Y^{(J)}_\alpha Y^{(K)}_\alpha = Y^{(K)}_\alpha, \qquad
Y^{(K)}_\alpha Y^{(J)}_\alpha = Y^{(J)}_\alpha,
\]
and also the notation
\[
\widetilde d = \frac{{}^{t\!}d_{K} \: \Theta \: c_{I}}
{{}^{t}d_{J} \: \Theta \: c_{I}}
\frac{1 - \mu \nu^{-1} \epsilon_p^{I+J}}
{1 - \mu \nu^{-1} \epsilon_p^{I+K}}.
\]
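The stated properties of the matrices $Y^{(A)}_\alpha$ follow by direct computation from the assumption ${}^{t\!}\theta_\alpha \theta_\alpha = \Theta$ and are easy to confirm numerically; a sketch with random data and arbitrarily chosen dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_a, n_s = 4, 2                      # stand-ins for n_alpha and n_*
theta = rng.standard_normal((n_a, n_s))
Theta = theta.T @ theta              # {}^t theta_alpha theta_alpha
c_I = rng.standard_normal((n_s, 1))
d = {A: rng.standard_normal((n_s, 1)) for A in "JK"}

def Y(A):
    # Y^{(A)}_alpha = theta c_I {}^t d_A {}^t theta / ({}^t d_A Theta c_I)
    return (theta @ c_I @ d[A].T @ theta.T) / (d[A].T @ Theta @ c_I)

for A in "JK":
    assert np.allclose(Y(A) @ Y(A), Y(A))        # idempotency
assert np.allclose(Y("J") @ Y("K"), Y("K"))
assert np.allclose(Y("K") @ Y("J"), Y("J"))
```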
The expression for $\Gamma_\alpha$ can be rewritten as
\[
\Gamma_\alpha = \left[ I_{n_\alpha} -
(1 - \mu \nu^{-1} \epsilon_p^{I+J})Y^{(J)}_\alpha \right]
\frac{I_{n_\alpha} + \epsilon_p^{\alpha\rho}
\mathrm e^{Z(\zeta) + \widetilde\delta} \widetilde X_\alpha}
{1 + \epsilon_p^{\alpha\rho} \mathrm e^{Z(\zeta) + \widetilde\delta}},
\]
where
\[
\widetilde X_\alpha
= \left( I_{n_\alpha} - (1 - \mu \nu^{-1} \epsilon_p^{I+J})
Y^{(J)}_\alpha \right)^{-1}
\left( I_{n_\alpha} - (1 - \mu \nu^{-1} \epsilon_p^{I+K})
Y^{(K)}_\alpha \right)
\]
and we use the notations
\[
\rho = K-J, \qquad \kappa_\rho = 2\sin\frac{\pi\rho}{p}, \qquad
\zeta = -\mathrm i \nu \epsilon_p^{-(K+J)/2}, \qquad \widetilde d = \exp\widetilde\delta,
\]
and the function $Z$ in the exponent takes the familiar form
\[
Z(\zeta) = \kappa_{\rho} ( \zeta^{-1} \: z^- + \zeta \: z^+ ).
\]
With the help of the above properties of the matrices
$Y^{(J)}_\alpha$, it is not difficult to make sure that
\[
h^{}_{\alpha,J} C_{+\alpha} h^{-1}_{\alpha+1,J} = C_{+\alpha}
\]
for
\[
h_{\alpha,J} = I_{n_\alpha} - (1 - \mu \nu^{-1} \epsilon_p^{I+J})
Y^{(J)}_\alpha, \qquad
h^{-1}_{\alpha,J} = I_{n_\alpha}
- (1 - \mu^{-1} \nu \epsilon_p^{-(I+J)}) Y^{(J)}_\alpha.
\]
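That the displayed expression indeed inverts $h_{\alpha,J}$ is a direct consequence of the idempotency of $Y^{(J)}_\alpha$; a numerical sketch, with an arbitrary complex value standing in for the combination $\mu \nu^{-1} \epsilon_p^{I+J}$:

```python
import numpy as np

rng = np.random.default_rng(3)
n_a = 4
# a random idempotent of the rank-one form  theta c {}^t d {}^t theta / (...)
theta = rng.standard_normal((n_a, 2))
c = rng.standard_normal((2, 1))
d = rng.standard_normal((2, 1))
Y = (theta @ c @ d.T @ theta.T) / (d.T @ theta.T @ theta @ c)

q = 0.4 - 0.9j        # arbitrary stand-in for  mu nu^{-1} eps_p^{I+J}
h = np.eye(n_a) - (1 - q) * Y
h_inv = np.eye(n_a) - (1 - 1 / q) * Y
assert np.allclose(h @ h_inv, np.eye(n_a))
assert np.allclose(h_inv @ h, np.eye(n_a))
```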
Therefore the transformation
\begin{equation}
h_{\alpha,J} \Gamma_\alpha \to \Gamma_\alpha
\label{***}
\end{equation}
is a symmetry transformation of the Toda equations (\ref{e:3.5})
realizing a particular case of the simplest WZNW-type symmetry
transformation described by the relations (\ref{e:3.6}) with
$\eta_{+ \alpha} = h_{\alpha,J}$, $\eta_{- \alpha} = I_{n_\alpha}$.
Similarly, one can use the relations
\[
h^{-1}_{\alpha+1,J} C_{-\alpha} h_{\alpha,J} = C_{-\alpha}
\]
to show that also the transformation
\[
\Gamma_\alpha h_{\alpha,J} \to \Gamma_\alpha
\]
is a symmetry transformation of the Toda equations (\ref{e:3.5})
corresponding to the transformations (\ref{e:3.6}) with
$\eta_{- \alpha} = h^{-1}_{\alpha,J}$, $\eta_{+ \alpha} = I_{n_\alpha}$.
Now, performing the symmetry transformation (\ref{***}), we write
the one-soliton solution to the equations (\ref{e:3.5}) as follows:
\[
\Gamma_\alpha = \frac{I_{n_\alpha} + \epsilon_p^{\alpha\rho}
\mathrm e^{Z(\zeta) + \widetilde\delta} \widetilde X_\alpha}
{1 + \epsilon_p^{\alpha\rho} \mathrm e^{Z(\zeta) + \widetilde\delta}},
\]
with all entries defined above. Using the properties of the
idempotent matrices $Y^{(A)}_\alpha$, we can also rewrite the
expressions for the matrices $\widetilde X_\alpha$ as
\[
\widetilde X_\alpha = h^{-1}_{\alpha,J} h_{\alpha,K}
= I_{n_\alpha} + \mu^{-1} \nu \epsilon_p^{-(I+J)}
\left( (1-\mu \nu^{-1} \epsilon_p^{I+J} ) Y^{(J)}_\alpha
- (1-\mu \nu^{-1} \epsilon_p^{I+K}) Y^{(K)}_\alpha \right).
\]
Using (\ref{e:4.28}) we can also show that
\[
\Gamma_\alpha^{-1} = \frac{I_{n_\alpha} + \epsilon_p^{(\alpha+1)\rho}
\mathrm e^{Z(\zeta) + \widetilde\delta} \widetilde X^{\prime}_\alpha}
{1 + \epsilon_p^{(\alpha+1)\rho} \mathrm e^{Z(\zeta) + \widetilde\delta}},
\]
where
\[
\widetilde X^{\prime}_{\alpha} = h^{-1}_{\alpha,K} \: h^{}_{\alpha,J}.
\]
Here it is obvious that
$\widetilde X^{\prime}_{\alpha} = \widetilde X^{-1}_{\alpha}$,
and it is not difficult to show that
$\Gamma^{-1}_\alpha \Gamma^{}_\alpha = I_{n_\alpha}$.
It is worth noting that when $n_\alpha = 1$, we get
$Y^{(J)}_\alpha = Y^{(K)}_\alpha = 1$, so that
$\widetilde X_\alpha = \epsilon_p^\rho$, while
$\widetilde X'_\alpha = \epsilon_p^{-\rho}$, and also $\widetilde d \to d$,
and so we recover precisely the abelian case \cite{NirRaz08a}.
\subsection{Multi-soliton solutions} \label{s:5.3}
Now, to obtain solutions depending on $r$ linear combinations of
independent variables we assume that for each value of the index
$i = 1,\ldots,r$ the matrix-valued coefficients $c_{i,\alpha}$ are
different from zero for only one value of $\alpha$, which we denote
by $I_i$, and that the matrix-valued coefficients $d_{i,\alpha}$ are
different from zero for only two values of $\alpha$, which we denote
by $J_i$ and $K_i$. We also use the following simplified
notation for these nonvanishing initial-data $n_* \times 1$
matrix-valued coefficients:
\[
d_{J_i} = d_{i,J_i}, \quad d_{K_i} = d_{i,K_i}, \qquad
c_{I_i} = c_{i,I_i}.
\]
Then we have from the equality (\ref{e:5.4}) that
\begin{multline*}
(\widetilde R_\alpha)_{i j} =
\nu^{-\alpha}_i \: \epsilon_p^{\alpha J_i} \: \mathrm e^{Z_{-J_i}(\nu_i)}
\left( \frac{{}^{t\!}d_{J_i} \: \Theta \: c^{}_{I_j}}
{1 - \mu_j \nu^{-1}_i \epsilon_p^{I_j + J_i}}
\right. \\[.5em]
\left. + \epsilon_p^{(K_i - J_i)\alpha} \:
\mathrm e^{Z_{-K_i}(\nu_i) - Z_{-J_i}(\nu_i)}
\frac{{}^{t\!}d_{K_i} \: \Theta \: c^{}_{I_j}}
{1-\mu_j \nu^{-1}_i \epsilon_p^{I_j + K_i}} \right)
\mu^\alpha_j \: \epsilon_p^{\alpha I_j} \: \mathrm e^{-Z_{I_j}(\mu_j)}.
\end{multline*}
Taking into account the explicit forms of $u^{}_{i,\alpha}$ and
${}^{t\!}y_{j,\alpha}$, the expression for $\Gamma_\alpha$
can be written in the form
\[
\Gamma_\alpha = I_{n_\alpha} - \sum^{r}_{i,j=1}
(\widetilde R^{\prime -1}_\alpha)^{}_{i j}
\left( \widetilde Y^{(J)}_{\alpha,j i}
+ \epsilon_p^{\alpha\rho_j} \: \mathrm e^{Z(\zeta_j)}
\: \widetilde Y^{(K)}_{\alpha,j i} \right),
\]
where
\[
(\widetilde R'_\alpha)^{}_{i j}
= \widetilde D_{i j}(\nu \epsilon_p^{-J},\mu \epsilon_p^I)
+ \epsilon_p^{\alpha\rho_i} \: \mathrm e^{Z(\zeta_i)}
\: \widetilde D_{i j}(\nu \epsilon_p^{-K},\mu \epsilon_p^I)
\]
and (for $A = J, K$)
\[
\widetilde D_{i j}(\nu \epsilon_p^{-A},\mu \epsilon_p^I)
= ({}^{t\!}d_{A_i} \: \Theta \: c^{}_{I_j}) \:
D_{i j}(\nu \epsilon_p^{-A},\mu \epsilon_p^I)
= ({}^{t\!}d_{A_i} \: \Theta \: c^{}_{I_j})
\frac{\nu_i \: \epsilon_p^{-A_i}}
{\nu_i \: \epsilon_p^{-A_i} - \mu_j \: \epsilon_p^{I_j}},
\]
(cf. the notation used for the abelian case \cite{NirRaz08a})
and besides,
\[
\widetilde Y^{(A)}_{\alpha,i j} = ({}^{t\!}d_{A_i} \: \Theta \: c^{}_{I_j})
Y^{(A)}_{\alpha,i j} = \theta_\alpha \: c^{}_{I_j}
\: {}^{t\!}d_{A_i} \: {}^{t\!}\theta_\alpha.
\]
The idempotent $n_\alpha \times n_\alpha$ matrices
$Y^{(A)}_{\alpha,i j}$ satisfy the following remarkable
properties:
\begin{equation}
Y^{(A)}_{\alpha,i j} Y^{(B)}_{\alpha,k\ell} =
\frac{{}^{t\!}d_{A_i} \Theta c_{I_\ell}}
{{}^{t\!}d_{A_i} \Theta c_{I_j}} \cdot
\frac{{}^{t\!}d_{B_k} \Theta c_{I_j}}
{{}^{t\!}d_{B_k} \Theta c_{I_\ell}}
Y^{(B)}_{\alpha,k j}, \qquad A,B = J,K,
\label{e:5.5}
\end{equation}
while their tilded counterparts are subject to the relations of
simpler forms,
\begin{equation}
\widetilde Y^{(A)}_{\alpha,i j} \: \widetilde Y^{(B)}_{\alpha,k\ell} =
({}^{t\!}d_{A_i} \: \Theta \: c_{I_\ell}) \widetilde Y^{(B)}_{\alpha,k j}.
\label{e:5.6}
\end{equation}
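The relation (\ref{e:5.6}) can also be checked directly: writing
$\widetilde Y^{(A)}_{\alpha,i j} = \theta_\alpha c_{I_j} {}^{t\!}d_{A_i} {}^{t\!}\theta_\alpha$
and assuming, as the structure of (\ref{e:5.6}) itself suggests, that
$\Theta = {}^{t\!}\theta_\alpha \theta_\alpha$, the product of two such matrices
collapses to a scalar multiple of a third one. A minimal numerical sketch
(random data; the sizes $n_\alpha$, $n_*$ and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_alpha, n_star = 4, 3                           # illustrative sizes n_alpha, n_*

theta = rng.standard_normal((n_alpha, n_star))   # theta_alpha
Theta = theta.T @ theta                          # assumed: Theta = {}^t theta_alpha theta_alpha
cj = rng.standard_normal((n_star, 1))            # c_{I_j}
cl = rng.standard_normal((n_star, 1))            # c_{I_l}
dA = rng.standard_normal((n_star, 1))            # d_{A_i}
dB = rng.standard_normal((n_star, 1))            # d_{B_k}

def Y(d, c):
    """tilde Y = theta_alpha c {}^t d {}^t theta_alpha."""
    return theta @ c @ d.T @ theta.T

lhs = Y(dA, cj) @ Y(dB, cl)                      # tilde Y^(A)_{ij} tilde Y^(B)_{kl}
rhs = (dA.T @ Theta @ cl).item() * Y(dB, cj)     # ({}^t d_{A_i} Theta c_{I_l}) tilde Y^(B)_{kj}
assert np.allclose(lhs, rhs)
```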
Further, we can write
\[
\Gamma_\alpha = \frac{\displaystyle
I_{n_\alpha} \det \widetilde R'_\alpha - \sum^{r}_{i,j=1}
(\widetilde{\mathcal R}'_\alpha)^{}_{i j}
\left( \widetilde Y^{(J)}_{\alpha,i j} + \epsilon_p^{\alpha\rho_i}
\: \mathrm e^{Z(\zeta_i)} \: \widetilde Y^{(K)}_{\alpha,i j}\right)}
{\displaystyle \det \widetilde R'_\alpha},
\]
where, by the Leibniz formula,
\[
\det \widetilde R'_\alpha = \sum_{\sigma \in S_{r}} {\mathrm{sgn}}(\sigma)
\prod^{r}_{\ell=1} \left(
\widetilde D^{}_{\ell,\sigma(\ell)}(\nu \epsilon_p^{-J},\mu \epsilon_p^I)
+ \epsilon_p^{\alpha\rho_\ell} \: \mathrm e^{Z(\zeta_\ell)} \:
\widetilde D^{}_{\ell,\sigma(\ell)}(\nu \epsilon_p^{-K},\mu \epsilon_p^I)
\right)
\]
and
\[
(\widetilde{\mathcal R}'_\alpha)^{}_{i j}
= \sum_{\scriptstyle \sigma \in S_{r}}
{\mathrm{sgn}}(\sigma)
\prod^{r}_{\scriptstyle \ell=1 \atop
\scriptstyle \ell \ne i, \sigma(\ell) \ne j}
\left( \widetilde D^{}_{\ell,\sigma(\ell)}
(\nu \epsilon_p^{-J},\mu \epsilon_p^I)
+ \epsilon_p^{\alpha\rho_\ell} \: \mathrm e^{Z(\zeta_\ell)} \:
\widetilde D^{}_{\ell,\sigma(\ell)}(\nu \epsilon_p^{-K},\mu \epsilon_p^I)
\right).
\]
Here $S_{r}$ is the symmetric group on the set of integers
$\{1,2,\ldots,{r}\}$, and $\mathrm{sgn}(\sigma)$ denotes
the signature of the permutation $\sigma$.
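In terms of code, the Leibniz expansion used in these determinant formulas
amounts to a sum over all permutations; a small self-contained sketch
(pure Python, with a tiny matrix for illustration):

```python
from itertools import permutations

def sgn(sigma):
    """Signature of a permutation, given as a tuple of 0-based images."""
    inversions = sum(1 for a in range(len(sigma))
                       for b in range(a + 1, len(sigma))
                       if sigma[a] > sigma[b])
    return -1 if inversions % 2 else 1

def prod_entries(M, sigma):
    """prod over l of M[l][sigma(l)]."""
    p = 1.0
    for l, s in enumerate(sigma):
        p *= M[l][s]
    return p

def det_leibniz(M):
    """det M = sum over sigma in S_r of sgn(sigma) prod_l M[l][sigma(l)]."""
    r = len(M)
    return sum(sgn(sigma) * prod_entries(M, sigma)
               for sigma in permutations(range(r)))

M = [[2.0, 1.0, 0.0],
     [0.5, 3.0, 1.0],
     [1.0, 0.0, 4.0]]
# cofactor expansion along the first row: 2*(12 - 0) - 1*(2 - 1) + 0 = 23
assert abs(det_leibniz(M) - 23.0) < 1e-12
```

Of course, the $r!$-term sum is only practical for small $r$; it serves here
just to make the combinatorics explicit.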
For the sake of brevity, it is also convenient to denote
$\widetilde D_{i j}(A) = \widetilde D_{i j}(\nu \epsilon_p^{-A},\mu \epsilon_p^{I})$.
Observing that
\[
\det \widetilde R'_\alpha =
\det \widetilde D(\nu \epsilon_p^{-J},\mu \epsilon_p^{I}) \cdot
\det \widetilde R''_\alpha,
\]
where
\[
(\widetilde R''_\alpha)^{}_{i j} = \delta_{i j}
+ \epsilon_p^{\alpha\rho_i} \: \mathrm e^{Z(\zeta_i)} \sum^{r}_{k=1}
\widetilde D^{}_{i k}(\nu \epsilon_p^{-K},\mu \epsilon_p^{I})
\widetilde D^{-1}_{k j}(\nu \epsilon_p^{-J},\mu \epsilon_p^{I}),
\]
we can also write
\[
\Gamma_\alpha = \frac{\displaystyle
I_{n_\alpha} \det \widetilde R''_\alpha - \sum^{r}_{i,j,k=1}
\widetilde D^{-1}_{i k}(J) (\widetilde{\mathcal R}''_\alpha)^{}_{k j}
\left( \widetilde Y^{(J)}_{\alpha,j i} + \epsilon_p^{\alpha\rho_j}
\: \mathrm e^{Z(\zeta_j)} \: \widetilde Y^{(K)}_{\alpha,j i}\right)}
{\displaystyle \det \widetilde R''_\alpha},
\]
where
\[
(\widetilde{\mathcal R}''_\alpha)^{}_{k j}
= \sum_{\scriptstyle \sigma \in S_{r}}
{\mathrm{sgn}}(\sigma)
\prod^{r}_{\scriptstyle \ell = 1 \atop
\scriptstyle \ell \ne j, \sigma(\ell) \ne k}
\left( \delta_{\ell,\sigma(\ell)}
+ \epsilon_p^{\alpha\rho_\ell} \: \mathrm e^{Z(\zeta_\ell)}
\sum^{r}_{m=1} \widetilde D^{}_{\ell m}(K)
\widetilde D^{-1}_{m,\sigma(\ell)}(J)
\right).
\]
For brevity, let us introduce the notation
\[
\widetilde H^{}_{i j}(K,J) = \sum^{r}_{k=1}
\widetilde D^{}_{i k}(K) \: \widetilde D^{-1}_{k j}(J).
\]
Then we find that
\[
\Gamma_\alpha = h_{\alpha,J} \: {\widetilde T^{-1}_\alpha} \: {\widetilde T^X_\alpha},
\]
where
\[
h_{\alpha,J} = I_{n_\alpha} - \sum^{r}_{i,j=1}
\widetilde D^{-1}_{i j}(J) \: \widetilde Y^{(J)}_{\alpha,j i},
\]
and the quantities $\widetilde T_\alpha$ and $n_\alpha \times n_\alpha$
matrices $\widetilde T^X_\alpha$ together represent a non-abelian analogue
of the Hirota's $\tau$-functions,
\begin{gather}
\widetilde T_\alpha = 1 + \sum^{r}_{i=1} E_{\alpha,i}
+ \sum^{r}_{\ell=2}
\sum^{r}_{\scriptstyle i_1,i_2,\ldots,i_\ell=1
\atop \scriptstyle i_1 < i_2 < \ldots < i_\ell}
\widetilde \eta_{i_1 i_2\ldots i_\ell} \:
E_{\alpha,i_1} \: E_{\alpha,i_2} \ldots E_{\alpha,i_\ell},
\nonumber \\
\label{e:5.7} \\
\widetilde T^X_\alpha = I_{n_\alpha} + \sum^{r}_{i=1} E_{\alpha,i} \:
\widetilde X_{\alpha,i}
+ \sum^{r}_{\ell=2}
\sum^{r}_{\scriptstyle i_1,i_2,\ldots,i_\ell=1
\atop \scriptstyle i_1 < i_2 < \ldots < i_\ell}
\widetilde \eta_{i_1 i_2 \ldots i_\ell} \:
E_{\alpha,i_1} \: E_{\alpha,i_2} \ldots E_{\alpha,i_\ell}
\widetilde X_{\alpha, i_1 i_2 \ldots i_\ell}.
\nonumber
\end{gather}
Here we also use our standard notation, coming yet from the abelian
case \cite{NirRaz08a},
\[
E_{\alpha,i} = \epsilon_p^{\alpha\rho_i} \mathrm e^{Z(\zeta_i)
+ \widetilde \delta_i},
\]
with the quantities $\widetilde \delta_i$ defined by
\[
\mathrm e^{\widetilde \delta_i} \equiv \widetilde H^{}_{i i}
= \sum^{r}_{k=1}
\widetilde D^{}_{i k}(K) \: \widetilde D^{-1}_{k i}(J),
\]
and the `soliton interaction coefficients' given by
\[
\widetilde \eta_{i_1 i_2 \ldots i_\ell} =
\frac{\displaystyle \sum_{\sigma \in S_\ell} \mathrm{sgn}(\sigma)
\prod^\ell_{m=1} \widetilde H_{i_m i_{\sigma(m)}}}
{\displaystyle \prod^\ell_{k=1} \widetilde H_{i_k i_k}}.
\]
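Since the numerator of $\widetilde \eta_{i_1 i_2 \ldots i_\ell}$ is itself a
Leibniz sum, it equals the principal minor of $\widetilde H$ built from the
rows and columns $i_1, \ldots, i_\ell$; the interaction coefficients are thus
ratios of principal minors to products of diagonal entries. A numerical
sketch (a random stand-in for $\widetilde H$; all data illustrative):

```python
import numpy as np
from itertools import permutations

def sgn(sigma):
    """Signature of a permutation, given as a tuple of 0-based images."""
    inv = sum(1 for a in range(len(sigma)) for b in range(a + 1, len(sigma))
              if sigma[a] > sigma[b])
    return -1 if inv % 2 else 1

rng = np.random.default_rng(1)
H = rng.standard_normal((5, 5)) + np.eye(5)      # random stand-in for tilde H

def eta(idx):
    """Interaction coefficient: Leibniz sum over S_l divided by the diagonal product."""
    l = len(idx)
    num = sum(sgn(s) * np.prod([H[idx[m], idx[s[m]]] for m in range(l)])
              for s in permutations(range(l)))
    return num / np.prod([H[i, i] for i in idx])

idx = (0, 2, 3)
# the numerator is the principal minor of H restricted to rows/columns idx:
assert np.isclose(eta(idx), np.linalg.det(H[np.ix_(idx, idx)])
                            / np.prod([H[i, i] for i in idx]))
assert np.isclose(eta((1,)), 1.0)                # for a single index the formula gives 1
```

For $\ell = r$ this ratio of a full determinant to the diagonal product
reproduces the expression for $\widetilde \eta_{12\ldots r}$ given below.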
Note that in the abelian case \cite{NirRaz08a} these quantities
factorize into pairwise interaction coefficients. Note also the
relationship with our former notation from
\cite{NirRaz08a}: $\widetilde d_i \equiv \widetilde H_{i i}$. Finally,
the matrices $\widetilde X_{\alpha, \cdots }$ are defined as
follows:
\[
\widetilde X_{\alpha,i_1 i_2 \ldots i_\ell}
= h^{-1}_{\alpha,J} \: X_{\alpha,i_1 i_2 \ldots i_\ell},
\]
where for $\ell = 1$
\[
X_{\alpha,i} = I_{n_\alpha} - \frac{1}{\widetilde H^{}_{i i}}
\sum^{r}_{j,k=1} \left(
\widetilde D^{-1}_{k j}(J) \: \widetilde H^{}_{i i}
- \widetilde D^{-1}_{k i}(J) \: \widetilde H^{}_{i j}
\right) \widetilde Y^{(J)}_{\alpha,j k}
- \frac{1}{\widetilde H^{}_{i i}}
\sum^{r}_{k=1} \widetilde D^{-1}_{k i}(J)
\: \widetilde Y^{(K)}_{\alpha,i k}
\]
and the higher order quantities are
\begin{eqnarray*}
X_{\alpha,i_1 i_2 \ldots i_\ell} &=& I_{n_\alpha}
- \frac{1}{\displaystyle \sum_{\sigma \in S_\ell} \mathrm{sgn}(\sigma)
\displaystyle \prod^\ell_{m=1} \widetilde H^{}_{i_m i_{\sigma(m)}}}
\times \\
&& \times \left( \sum^{r}_{j,k=1}
\sum_{\sigma \in S'_{\ell+1}}
\mathrm{sgn}(\sigma) \:
\widetilde D^{-1}_{k \sigma(j)}(J) \:
\widetilde H^{}_{i_1 i_{\sigma(1)}}
\widetilde H^{}_{i_2 i_{\sigma(2)}} \ldots
\widetilde H^{}_{i_\ell i_{\sigma(\ell)}} \:
\widetilde Y^{(J)}_{\alpha,j k} \right. \\
&& \hskip5mm
+ \left. \sum^{r}_{k=1} \sum_{\sigma \in S_\ell}
\mathrm{sgn}(\sigma) \:
\widetilde D^{-1}_{k i_{\sigma(1)}}(J) \:
\widetilde H^{}_{i_2 i_{\sigma(2)}}
\widetilde H^{}_{i_3 i_{\sigma(3)}} \ldots
\widetilde H^{}_{i_\ell i_{\sigma(\ell)}} \:
\widetilde Y^{(K)}_{\alpha, i_1 k} \right), \quad \ell \ge 2,
\end{eqnarray*}
with $S_\ell$ being the symmetric group on the set
$\{1,2,\ldots,\ell\}$, $S'_{\ell + 1}$ the symmetric
group on the set
$\{j, i_1, i_2, \ldots, i_\ell\}$,
with the convention that
$\sigma(i_m) = i_{\sigma(m)}$ throughout.
In particular, for $\ell=r$, we obtain a remarkably
simple equality
\[
X_{\alpha,1 2 \ldots {r}} = h_{\alpha,K}
= I_{n_\alpha} - \sum^{r}_{j,k=1}
\widetilde D^{-1}_{k j}(K) \:
\widetilde Y^{(K)}_{\alpha,j k}
\]
and
\[
\widetilde \eta^{}_{12\ldots{r}}
= \frac{\displaystyle \det \widetilde H}{\displaystyle \prod^{r}_{k=1}
\widetilde H_{k k}}
= \exp{\biggl( -\sum^{r}_{k=1} \widetilde \delta_k \biggr)}
\: \det \widetilde H.
\]
We notice that $h^{-1}_{\alpha,J}$ is a linear combination of
$\widetilde Y^{(J)}_{\alpha,i j}$,
\[
h^{-1}_{\alpha,J} = I_{n_\alpha} + \sum^{r}_{i,j=1}
\widetilde B^{-1}_{j i}(J) \: \widetilde Y^{(J)}_{\alpha,i j},
\qquad
\widetilde B_{i j}(J) = ({}^{t\!}d_{J_i} \: \Theta \: c^{}_{I_j})
\frac{\mu_j \: \epsilon_p^{I_j}}
{\nu_i \: \epsilon_p^{-J_i} - \mu_j \: \epsilon_p^{I_j}},
\]
and so the matrices $\widetilde X_{\alpha, \cdots}$, like the matrices
$X_{\alpha, \cdots}$, can easily be written with the help of
(\ref{e:5.6}) in the form of $I_{n_\alpha}$ minus certain linear
combinations of $\widetilde Y^{(J)}_{\alpha,i j}$ and
$\widetilde Y^{(K)}_{\alpha,i j}$.
With the help of the relations (\ref{e:5.1}) and (\ref{e:5.5}),
(\ref{e:5.6}) it can be seen that
\[
C'_{-\alpha} = h^{-1}_{\alpha+1,J} C_{-\alpha} h^{}_{\alpha,J}
= C_{-\alpha},
\qquad
C'_{+\alpha} = h^{-1}_{\alpha,J} C_{+\alpha} h^{}_{\alpha+1,J}
= C_{+\alpha}.
\]
Hence, as in the case of the one-soliton solutions,
the transformation
\[
h_{\alpha,J} \Gamma_\alpha \to \Gamma_\alpha
\]
is a symmetry transformation of the Toda equations (\ref{e:3.5}).
Consequently, we can write the multi-soliton solution to the
nonlinear matrix differential Toda equations (\ref{e:3.5})
as the `ratio'
\[
\Gamma_\alpha = {\widetilde T^{-1}_\alpha}{\widetilde T^{X}_\alpha},
\]
where the `numerator' and `denominator' are given explicitly
by (\ref{e:5.7}). Observe also that for $p=n$ one has $n_\alpha=1$,
and then $\widetilde X_{\alpha,i_1 i_2 \ldots i_\ell}$ turns into
$\epsilon_p^{(\rho_{i_1} + \rho_{i_2} + \ldots + \rho_{i_\ell})}$,
so that $\widetilde T^X_\alpha \to \widetilde T_{\alpha+1}$, with
$\widetilde T_\alpha$ reproducing Hirota's $\tau$-functions for abelian
Toda systems \cite{NirRaz08a}.
It is also useful to have an explicit expression for the inverse
mapping, that is
\[
\Gamma^{-1}_{\alpha} = \widetilde T^{X^{-1}}_{\alpha+1} \:
\widetilde T^{-1}_{\alpha+1}.
\]
Here, the entries of this expression are defined according
to the relations (\ref{e:5.7}), with $\widetilde X_{\alpha,\ldots}$
replaced by
\[
\widetilde X^{-1}_{\alpha,i_1 i_2 \ldots i_\ell}
= X^{-1}_{\alpha,i_1 i_2 \ldots i_\ell} h^{}_{\alpha,J},
\]
where for $\ell = 1$ we have
\[
X^{-1}_{\alpha,i} = I_{n_\alpha} + \frac{1}{\widetilde F^{}_{i i}}
\sum^{r}_{j,k=1} \left(
\widetilde B^{-1}_{k j}(J) \: \widetilde F^{}_{i i}
- \widetilde B^{-1}_{k i}(J) \: \widetilde F^{}_{i j}
\right) \widetilde Y^{(J)}_{\alpha,j k}
+ \frac{1}{\widetilde F^{}_{i i}}
\sum^{r}_{k=1} \widetilde B^{-1}_{k i}(J)
\: \widetilde Y^{(K)}_{\alpha,i k},
\]
with the quantities $\widetilde F$ defined similarly to $\widetilde H$, but
by means of $\widetilde B$,
\[
\widetilde F_{i j}(K,J)
= \sum_{k=1}^{r} \widetilde B^{}_{i k}(K) \: \widetilde B^{-1}_{k j}(J),
\]
and for $\ell \ge 2$ the other $n_\alpha \times n_\alpha$
inverse matrices $X^{-1}_{\alpha,\ldots}$ are
\begin{eqnarray*}
X^{-1}_{\alpha,i_1 i_2 \ldots i_\ell} &=& I_{n_\alpha}
+ \frac{1}{\displaystyle \sum_{\sigma \in S_\ell} \mathrm{sgn}(\sigma)
\displaystyle \prod^\ell_{m=1} \widetilde F^{}_{i_m i_{\sigma(m)}}}
\times \\
&& \times \left( \sum^{r}_{j,k=1}
\sum_{\sigma \in S'_{\ell+1}}
\mathrm{sgn}(\sigma) \:
\widetilde B^{-1}_{k \sigma(j)}(J) \:
\widetilde F^{}_{i_1 i_{\sigma(1)}}
\widetilde F^{}_{i_2 i_{\sigma(2)}} \ldots
\widetilde F^{}_{i_\ell i_{\sigma(\ell)}} \:
\widetilde Y^{(J)}_{\alpha,j k} \right. \\
&& \hskip5mm
+ \left. \sum^{r}_{k=1} \sum_{\sigma \in S_\ell}
\mathrm{sgn}(\sigma) \:
\widetilde B^{-1}_{k i_{\sigma(1)}}(J) \:
\widetilde F^{}_{i_2 i_{\sigma(2)}}
\widetilde F^{}_{i_3 i_{\sigma(3)}} \ldots
\widetilde F^{}_{i_\ell i_{\sigma(\ell)}} \:
\widetilde Y^{(K)}_{\alpha, i_1 k} \right).
\end{eqnarray*}
Now, to make our construction a little more transparent, we add
the following observation. Consider the $r \times r$
matrix $\widetilde \Delta_{i}$ of the explicit form
\[
\widetilde \Delta^{}_{i} =
\left(
\begin{array}{cccc}
\widetilde D^{}_{11}(J) & \widetilde D^{}_{12}(J) & \ldots & \widetilde D^{}_{1r}(J) \\
\vdots & \vdots & \ddots & \vdots \\
\widetilde D^{}_{i,1}(K) & \widetilde D^{}_{i,2}(K) & \ldots & \widetilde D^{}_{i,r}(K) \\
\vdots & \vdots & \ddots & \vdots \\
\widetilde D^{}_{r1}(J) & \widetilde D^{}_{r2}(J) & \ldots &
\widetilde D^{}_{rr}(J)
\end{array}
\right).
\]
That is, one takes the matrix $\widetilde D(J)$ and replaces its $i$th
row with the corresponding row of the matrix $\widetilde D(K)$. Then we can write
\[
X_{\alpha,i} = I_{n_\alpha} - \sum^{r}_{\scriptstyle j,k=1
\atop \scriptstyle j \ne i}
(\widetilde \Delta^{-1}_{i})^{}_{k j}
\: \widetilde Y^{(J)}_{\alpha,j k}
- \sum^{r}_{k=1}
(\widetilde \Delta^{-1}_{i})^{}_{k i}
\: \widetilde Y^{(K)}_{\alpha,i k}.
\]
In addition, one also has
\[
\mathrm e^{\widetilde \delta_i} = \widetilde H^{}_{i i}
= \frac{\det \widetilde \Delta_{i}}{\det \widetilde D(J)}.
\]
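The last equality is essentially the cofactor (Cramer) identity: expanding
$\det \widetilde \Delta_i$ along its replaced $i$th row gives
$\sum_k \widetilde D_{ik}(K)$ times the $(i,k)$ cofactor of $\widetilde D(J)$,
and dividing by $\det \widetilde D(J)$ turns the cofactor into
$\widetilde D^{-1}_{ki}(J)$. A quick numerical sketch (random stand-ins for
$\widetilde D(J)$ and $\widetilde D(K)$):

```python
import numpy as np

rng = np.random.default_rng(2)
r = 5
DJ = rng.standard_normal((r, r)) + 2 * np.eye(r)   # stand-in for tilde D(J), well conditioned
DK = rng.standard_normal((r, r))                   # stand-in for tilde D(K)
H = DK @ np.linalg.inv(DJ)            # tilde H_{ij} = sum_k D(K)_{ik} D(J)^{-1}_{kj}

for i in range(r):
    Delta = DJ.copy()
    Delta[i, :] = DK[i, :]            # replace the i-th row of D(J) by that of D(K)
    assert np.isclose(H[i, i], np.linalg.det(Delta) / np.linalg.det(DJ))
```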
In general, for $\ell \ge 2$, introduce the following
$r \times r$ matrix:
\[
\widetilde \Delta^{}_{i_1 i_2 \ldots i_\ell} =
\left(
\begin{array}{cccc}
\widetilde D^{}_{11}(J) & \widetilde D^{}_{12}(J) & \ldots & \widetilde D^{}_{1r}(J) \\
\vdots & \vdots & \ddots & \vdots \\
\widetilde D^{}_{i_1 1}(K) & \widetilde D^{}_{i_1 2}(K) & \ldots &
\widetilde D^{}_{i_1 r}(K) \\
\vdots & \vdots & \ddots & \vdots \\
\widetilde D^{}_{i_2 1}(K) & \widetilde D^{}_{i_2 2}(K) & \ldots &
\widetilde D^{}_{i_2 r}(K) \\
\vdots & \vdots & \ddots & \vdots \\
\widetilde D^{}_{i_\ell 1}(K) & \widetilde D^{}_{i_\ell 2}(K) & \ldots &
\widetilde D^{}_{i_\ell r}(K) \\
\vdots & \vdots & \ddots & \vdots \\
\widetilde D^{}_{r1}(J) & \widetilde D^{}_{r2}(J) & \ldots &
\widetilde D^{}_{rr}(J)
\end{array}
\right).
\]
Here, similarly to the preceding, we have taken the matrix
$\widetilde D(J)$ and replaced its rows $\widetilde D_{i_1 j}$,
$\widetilde D_{i_2 j}$, $\ldots$, $\widetilde D_{i_\ell j}$,
with $j$ running from $1$ to $r$, by the corresponding
rows of $\widetilde D(K)$. With the help of these
matrices we immediately find that
\[
X_{\alpha,i_1 i_2 \ldots i_\ell} = I_{n_\alpha}
- \sum^{r}_{\scriptstyle j,k=1 \atop \scriptstyle
j \ne i_1,i_2,\ldots,i_\ell}
(\widetilde \Delta^{-1}_{i_1 i_2 \ldots i_\ell})^{}_{k j}
\: \widetilde Y^{(J)}_{\alpha,j k}
- \sum^{r}_{\scriptstyle j,k=1 \atop \scriptstyle
j = i_1, i_2, \ldots, i_\ell}
(\widetilde \Delta^{-1}_{i_1 i_2 \ldots i_\ell})^{}_{k j}
\: \widetilde Y^{(K)}_{\alpha,j k}.
\]
The corresponding inverse matrices $X^{-1}_{\alpha,\ldots}$
can easily be found by replacing $\widetilde D$ with $\widetilde B$ in the above
construction and changing the sign $-$ to $+$.
Note finally that for $n_* = 1$ some relations simplify, and we
recover certain expressions specific to the abelian
case \cite{NirRaz08a}.
\section{Conclusion}
In this paper we have considered the non-abelian Toda systems
associated with the untwisted loop groups of the complex general
linear groups. Developing the rational dressing method, we have
constructed multi-soliton solutions of these equations. Here,
the block-matrix representation of the group and algebra elements,
suggested by the $\mathbb Z$-gradation, turned out to be the most
appropriate for the problem under consideration. The solutions
are presented in the form of a direct matrix generalization of
the expressions obtained earlier for the abelian case
\cite{NirRaz08a}. In particular, the non-abelian nature of the
generalization shows up explicitly through the special
matrices $\widetilde X_{\alpha,\ldots}$ and the non-factorability of the
`soliton interaction coefficients' $\widetilde \eta_{i_1i_2\ldots}$.
We have also observed that the reduction to the non-abelian loop
Toda systems associated with the complex special linear groups can
be performed.
It would be interesting to generalize our consideration to
other non-abelian loop Toda systems described in the classification
of \cite{NirRaz07a, NirRaz07b}, that is, to Toda systems associated
with twisted loop groups of the general linear groups, and with
twisted and untwisted loop groups of the complex orthogonal and
symplectic Lie groups.
\vskip2mm
{\bf Acknowledgments}
We are very grateful to the members of the Theoretical Physics Department
of the University of Wuppertal, especially to Profs.\ Hermann Boos, Frank
G\"ohmann and Andreas Kl\"umper, for their hospitality, interest in our
work and stimulating discussions. This work was supported in part by the
Russian Foundation for Basic Research under grant \#07--01--00234 and by
the joint DFG--RFBR grant \#08--01--91953.
\begin{document}
\vglue 2cm
\begin{center}
\title{New solutions of the star-triangle relation with discrete and continuous spin variables}
\author{Andrew P.~Kels}
\address{Institut f\"{u}r Mathematik, MA 8-4, Technische Universit\"{a}t Berlin,\\
Str. des 17. Juni 136, 10623 Berlin, Germany.}
\end{center}
\begin{abstract}
A new solution to the star-triangle relation is given, for an Ising-type model of interacting spins containing integer- and real-valued components. Boltzmann weights of the model are given in terms of the lens elliptic gamma function, and are based on Yamazaki's recently obtained solution of the star-star relation. The star-triangle relation given here implies Seiberg duality for the $4\!-\!d$ $\mathcal{N}=1$ $S_1\times S_3/\mathbb{Z}_r$ index of the $SU(2)$ quiver gauge theory, as well as the corresponding two-component spin case of the star-star relation of Yamazaki. A proof of the star-triangle relation is given, resulting in a new elliptic hypergeometric summation/integration identity. The star-triangle relation in this paper contains the master solution of Bazhanov and Sergeev as a special case. Two other limiting cases are considered, one of which gives a new star-triangle relation in terms of ratios of infinite $q$-products, while the other gives a new way of deriving a star-triangle relation that was previously obtained by the author.
\end{abstract}
\newpage
\section{Introduction}
The star-triangle relation is a distinguished form of the Yang-Baxter equation for Ising-type models on two-dimensional lattices. In these
models the fluctuating variables, or ``spins'', are assigned to lattice
sites, and two spins interact only if they are connected by an edge of
the lattice. Remarkably, many physically interesting models in this class can be solved exactly, for instance, the $2\!-\!d$ Ising \cite{Bax82}, and chiral Potts \cite{AuY87,Baxter:1987eq} models, and some others \cite{Zam-fish,FZ82,Kashiwara:1986,FV95,BMS07a,BMS07b,Bazhanov:2010kz} (see also \cite{BKS2,Bax02rip} for a review of other known cases). The star-triangle relation plays the role of the integrability condition for these models.
Recently Bazhanov and Sergeev (BS) obtained an important ``master'' solution \cite{Bazhanov:2010kz} of the star-triangle relation, which contained all previously known solutions of this relation as particular cases, and provides interesting new examples. The above master solution is expressed in terms of the elliptic gamma function, which contains two arbitrary free parameters ${\mathsf p}$ and ${\mathsf q}$, that play the role of elliptic nomes. The spin variables for the corresponding statistical mechanical model take continuous real values on the circle.
Considered as a mathematical identity, the BS master solution is identical to the elliptic beta integral of Spiridonov \cite{Spiridonov-beta}. The latter discovery was central to the modern development of the theory of elliptic hypergeometric functions \cite{Spiridonov-essays}, and recent works further highlight that some of these identities are connected to the integrability of lattice models of statistical mechanics. Examples include an extension of the BS master solution to the case of multi-component spins \cite{BS11,BKS}, and remarkable correspondences to Seiberg duality in supersymmetric gauge theories \cite{DolanOsborn,Spiridonov-statmech,Yamazaki2012, Yamazaki2013}. Recently Yamazaki introduced a new integrable model \cite{Yamazaki2013}, with Boltzmann weights satisfying the star-star relation, by using the property that the latter relation is equivalent to a particular Seiberg duality for the $4\!-\!d$ $\mathcal{N}=1$ lens index for a class of $SU(N)$ quiver gauge theories. This star-star relation is rather general and contains its variant for the master solution \cite{Bazhanov:2010kz} and its multi-spin generalisation \cite{BS11,BKS} as particular cases.
In Section \ref{sec:str} it is shown that the Boltzmann weights for the model with two-component spins introduced by Yamazaki also satisfy a star-triangle relation. A proof of the star-triangle relation is given in Appendix \ref{app:proof}, which also verifies the corresponding two-component spin star-star relation, since the former relation implies the latter (but the reverse is not true). The actual proof given in Appendix \ref{app:proof} is for an identity more general than the star-triangle relation, resulting in a new elliptic hypergeometric summation/integration identity for the lens elliptic gamma function, which contains six complex and six integer variables. This identity contains Spiridonov's celebrated elliptic beta integral as a particular case. Two limiting cases of the star-triangle relation are considered, resulting in one new solution of the star-triangle relation with Boltzmann weights given in terms of infinite $q$-products, and another new solution with Boltzmann weights given in terms of the Euler gamma function, which has recently been obtained by the author \cite{K14}. Possible relations to existing integrals in the literature are discussed.
\subsection{Solvable square lattice model}
All models here may be considered on the square lattice made up of $N$ sites. Spin variables
\begin{equation}
\sigma_j=(x_j,m_j)\,,\quad x_j\in\mathbb{R}\,,\;m_j\in\mathbb{Z}\,,\quad j=1,2,\ldots,N,
\end{equation}
are assigned to each site of the lattice, where $x_j$ takes real values, and $m_j$ takes integer values. Two spins interact only if they are connected by an edge of the lattice. The interactions are represented by the Boltzmann weights ${\mathcal W}_\alpha(\sigma_i,\sigma_j)$, and $\overline{\mathcal W}_\alpha(\sigma_i,\sigma_j)$, associated to horizontal and vertical edges respectively, where $\sigma_i$ and $\sigma_j$ are the spins located at the end of the edge, as shown in Figure~\ref{2boltzmannweights}. Here two Boltzmann weights are distinguished by crossing of dashed rapidity lines, a property which allows one to also consider the model on more general ``Z-invariant'' lattices \cite{Bax1}.
\begin{figure}[hbt]
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(15,5)
\put(3.8,1.5)
{\begin{picture}(6,2)
\setlength{\unitlength}{1.2cm}
\path(0,1)(2,1)
\put(-0.1,1){\circle{0.16}}
\put(2.1,1){\circle{0.16}}
\multiput(0,0)(0.11,0.11){18}{\line(1,1){0.05}} \path(2,1.9)(2,2)(1.9,2)
\multiput(2,0)(-0.11,0.11){18}{\line(1,-1){0.05}} \path(0,1.9)(0,2)(0.1,2)
\put(-0.6,0.9){$\sigma_i$}
\put(2.3,0.9){$\sigma_j$}
\put(0.35,-1.0){$\mathcal{W}_{\alpha}(\sigma_i,\sigma_j)$}
\multiput(4,0)(0.11,0.11){18}{\line(1,1){0.05}} \path(6,1.9)(6,2)(5.9,2)
\multiput(6,0)(-0.11,0.11){18}{\line(1,-1){0.05}} \path(4,1.9)(4,2)(4.1,2)
\path(5,0)(5,2)
\put(5,-0.1){\circle{0.16}}
\put(5,2.1){\circle{0.16}}
\put(4.9,-0.5){$\sigma_i$}
\put(4.9,2.4){$\sigma_j$}
\put(4.5,-1.0){$\overline{\mathcal{W}}_{\alpha}(\sigma_i,\sigma_j)$}
\end{picture}}
\end{picture}
\caption{Horizontal (left) and vertical edges (right), and their Boltzmann weights. Here ``rapidity lines'' (dashed arrows) are shown to distinguish the two types of weights, $\mathcal{W}$ and $\overline{\mathcal{W}}$.}
\label{2boltzmannweights}
\end{center}
\end{figure}
The two edge Boltzmann weights depend on the additive spectral variable $\alpha$, and are related to each other by the crossing symmetry property $\overline{\mathcal W}_\alpha(\sigma_i,\sigma_j)={\mathcal W}_{\eta-\alpha}(\sigma_i,\sigma_j)$. The ``crossing parameter'' $\eta$ is model dependent, and the regime $0<\alpha<\eta$ is a physical regime of the model, where the Boltzmann weights are positive and real valued. For all models considered in this paper, the Boltzmann weights are also spin reflection symmetric, such that ${\mathcal W}_\alpha(\sigma_i,\sigma_j)={\mathcal W}_\alpha(\sigma_j,\sigma_i)$.
To each spin $\sigma_j$ in the lattice one also associates the single-spin weight ${\mathcal S}(\sigma_j)$, which is independent of the spectral variable $\alpha$. The partition function of the model is then defined as a product of all Boltzmann weights, with an integral (sum) over all internal continuous (discrete) spins, while the boundary spins are kept fixed,
\begin{align}
\label{z-main}
{\cal Z}=\sum\int
\prod_{(ij)}{\mathcal W}_{\alpha}(\sigma_i,\sigma_j)\
\prod_{(kl)}{\mathcal W}_{{\eta}-\alpha}(\sigma_k,\sigma_l)\ \prod_{n}
{\mathcal S}(\sigma_n)\,d x_n\,.
\end{align}
The first product is taken over all horizontal edges $(ij)$, the second over all vertical edges $(kl)$ and the third product over all internal sites of the lattice. The goal of statistical mechanics is to evaluate \eqref{z-main} in the thermodynamic limit, when the number of sites in the lattice goes to infinity. This evaluation is possible if the Boltzmann weights satisfy the star-triangle equation. For the models given here this relation reads
\begin{align}
\label{msstr}
\begin{array}{l}
\displaystyle\sum_{m_0}\int
d x_0\,\mathcal{S}(\sigma_0)\mathcal{W}_{\eta-\alpha_i}(\sigma_i,\sigma_0)\mathcal{W}_{\eta-\alpha_j}(\sigma_j,\sigma_0)\mathcal{W}_{\eta-\alpha_k}(\sigma_0,\sigma_k)\\[.3cm]
\phantom{MMMMMMMMM}\displaystyle=
{\cal
R}(\alpha_i,\alpha_j,\alpha_k)\,\mathcal{W}_{\alpha_i}(\sigma_j,\sigma_k)\mathcal{W}_{\alpha_j}(\sigma_i,\sigma_k)\mathcal{W}_{\alpha_k}(\sigma_j,\sigma_i)\,,
\end{array}
\end{align}
where the three spectral parameters $\alpha_i$, $\alpha_j$, $\alpha_k$ satisfy the constraint $\alpha_i+\alpha_j+\alpha_k=\eta$ and the factor ${\cal R}(\alpha_i,\alpha_j,\alpha_k)$ is independent of the spins $\sigma_i,\sigma_j,\sigma_k$. The integral and sum are evaluated over a given set of continuous and discrete values respectively. There exists also a second star-triangle relation obtained by exchanging the order of spins appearing in the Boltzmann weights in \eqref{msstr}, however for models with symmetric Boltzmann weights, ${\mathcal W}_\alpha(\sigma_i,\sigma_j)={\mathcal W}_\alpha(\sigma_j,\sigma_i)$, the two expressions are equivalent. The star-triangle relation \eqref{msstr} implies that the row-to-row transfer matrices of the lattice model commute \cite{Bax72}.
For all models considered here, the normalisation of the Boltzmann weights is chosen such that the spin independent factor $\mathcal{R}(\alpha_i,\alpha_j,\alpha_k)$ in \eqref{msstr} is equal to one. This result is based on a factorisation for $\mathcal{R}(\alpha_i,\alpha_j,\alpha_k)$ \cite{Bax02rip}, which holds for all of the above mentioned solutions of the star-triangle relation, particularly for the new solutions appearing in the following section.
For this special normalisation, the Boltzmann weights of the model also satisfy the following boundary conditions
\begin{align}
\label{msbc}
\left.{\mathcal W}_\alpha(\sigma_i,\sigma_j)\right|_{\alpha=0}=1,\quad\left.{\mathcal W}_{\eta-\alpha}(\sigma_i,\sigma_j)\right|_{\alpha\rightarrow0}=\frac{1}{2{\mathcal S}(\sigma_i)}(\delta(x_i\!+\!x_j)\,\delta_{m_i,-m_j}+\delta(x_i\!-\!x_j)\,\delta_{m_i,m_j})\,,
\end{align}
where $\delta(x)$, and $\delta_{m,n}$, are respectively Dirac and Kronecker delta functions. The exact form of the second boundary condition differs slightly depending on the symmetries of the Boltzmann weights, and explicit expressions for the boundary conditions in each case will be given for the three different models obtained in the next section.
From the boundary conditions \eqref{msbc}, and star-triangle relation \eqref{msstr}, one obtains the following inversion relations
\begin{align}
\label{msinv}
\begin{array}{rcl}
\displaystyle{\mathcal W}_\alpha(\sigma_i,\sigma_j){\mathcal W}_{-\alpha}(\sigma_i,\sigma_j)&\!\!\!\!=\!\!\!\!&1\,, \\[0.2cm]
\displaystyle\sum_{m_0}\!\int\!\! dx_0\,{\mathcal S}(\sigma_0){\mathcal W}_{\eta-\alpha}(\sigma_i,\sigma_0){\mathcal W}_{\eta+\alpha}(\sigma_0,\sigma_j)&\!\!\!\!=\!\!\!\!&\displaystyle\frac{1}{2{\mathcal S}(\sigma_i)}(\delta(x_i\!+\!x_j)\,\delta_{m_i,-m_j}+\delta(x_i\!-\!x_j)\,\delta_{m_i,m_j})\,.
\end{array}
\end{align}
The above relations, \eqref{msstr} and \eqref{msinv}, allow one to show that in the thermodynamic limit as the number of lattice sites goes to infinity $N\to\infty$, the bulk free energy of the model vanishes
\begin{align}
\lim_{N\to \infty} N^{-1} \log{\cal Z} = 0\,.\label{fzero}
\end{align}
A derivation of this result requires some extensions \cite{BKS2} of the standard inversion relation method \cite{Str79,Zam79,Bax82inv}. Here the boundary spins are assumed to be kept finite in the limit $N\to\infty$, and there is an analyticity assumption for the free energy of the model in the physical regime. The result \eqref{fzero} is purely a consequence of the special choice of normalisation for the Boltzmann weights \cite{BMS07a,BMS07b,Bazhanov:2010kz}.
\section{New discrete and continuous spin solutions to the star-triangle relation}
\label{sec:str}
In this section the Boltzmann weights are defined that give a new solution of the star-triangle relation \eqref{msstr}.\footnote{These Boltzmann weights correspond to the 2-component spin version of Yamazaki's star-star relation, which was obtained by identifying the correspondence of this relation with Seiberg duality of the lens index for a $\mathcal{N}=1$ supersymmetric quiver gauge theory \cite{Yamazaki2013}.} This star-triangle relation and the corresponding proof given in Appendix \ref{app:proof} are the main result of the paper.
Recall the definition of the spin
\begin{equation}
\sigma_j=(x_j,m_j),\quad x_j\in\mathbb{R}\,,\; m_j\in\mathbb{Z}\,.
\end{equation}
Restrict the continuous real valued component $x_j$, and the discrete integer valued component $m_j$, to take values
\begin{equation}
\label{rdef}
0\leq x_j<\pi,\quad m_j=0,1,\ldots,\floor{r/2}\,,
\end{equation}
for some positive integer parameter $r=1,2,\ldots,$ where $\floor{~}$ is the floor function. Define also the elliptic nomes ${\mathsf p},{\mathsf q}$, and crossing parameter $\eta$ as
\begin{equation}
\label{nomes}
{\mathsf p}=\textrm{{\large e}}^{\pi\mathsf{i}\sigma},\;{\mathsf q}=\textrm{{\large e}}^{\pi\mathsf{i}\tau},\;\eta=-\pi\mathsf{i}(\sigma+\tau)/2\,,\quad\mathop{\hbox{\rm Im}}\nolimits\sigma,\;\mathop{\hbox{\rm Im}}\nolimits\tau >0\,.
\end{equation}
Note that a physical regime where Boltzmann weights are real and positive valued can be found for ${\mathsf p}={\mathsf q}^*$. Define the elliptic gamma function as \cite{Rui-EGF,BS11}
\begin{equation}
\label{egf}
\Phi(z;{\mathsf p},{\mathsf q})=\prod_{j,k=0}^\infty\frac{1-\textrm{{\large e}}^{2\mathsf{i} z}\,{\mathsf p}^{2j+1}\,{\mathsf q}^{2k+1}}{1-\textrm{{\large e}}^{-2\mathsf{i} z}\,{\mathsf p}^{2j+1}\,{\mathsf q}^{2k+1}}\,.
\end{equation}
In terms of the elliptic gamma function, the so-called lens elliptic gamma function is defined as \cite{Yamazaki2013}
\begin{equation}
\label{legf}
\begin{array}{rcl}
\displaystyle\Phi_{r,m}(z)\!\!\!\!&=&\!\!\!\!\displaystyle\Phi(z+(r/2-\llbracket m\rrbracket _r)\,\pi\sigma;{\mathsf p}\,{\mathsf q},\,{\mathsf p}^r)\,\Phi(z-(r/2-\llbracket m\rrbracket _r)\,\pi\tau;{\mathsf p}\,{\mathsf q},\,{\mathsf q}^r) \\[0.3cm]
\!\!\!\!&=&\!\!\!\!\displaystyle\prod_{j,k=0}^\infty\!\frac{1-\textrm{{\large e}}^{2\mathsf{i} z}\,{\mathsf p}^{-2\llbracket m\rrbracket _r}\,({\mathsf p}{\mathsf q})^{2j+1}\,({\mathsf p}^r)^{2k+2}}{1-\textrm{{\large e}}^{-2\mathsf{i} z}\,{\mathsf p}^{2\llbracket m\rrbracket _r}\,({\mathsf p}{\mathsf q})^{2j+1}\,({\mathsf p}^r)^{2k}}\frac{1-\textrm{{\large e}}^{2\mathsf{i} z}\,{\mathsf q}^{2\llbracket m\rrbracket _r}\,({\mathsf p}{\mathsf q})^{2j+1}\,({\mathsf q}^r)^{2k}}{1-\textrm{{\large e}}^{-2\mathsf{i} z}\,{\mathsf q}^{-2\llbracket m\rrbracket _r}\,({\mathsf p}{\mathsf q})^{2j+1}\,({\mathsf q}^r)^{2k+2}}\,,
\end{array}
\end{equation}
where $\llbracket m\rrbracket _r\in\{0,1,\ldots ,r-1\}$ denotes $m$ modulo $r$. From \eqref{rdef}, note that when $r=1$, then $m_j=0$, and the lens elliptic gamma function reduces to the usual elliptic gamma function \eqref{egf}
\begin{equation}
\Phi_{1,0}(z)=\Phi(z;{\mathsf p},{\mathsf q})\,.
\end{equation}
The lens elliptic gamma function \eqref{legf}, satisfies the following periodicity and inversion relations
\begin{equation}
\Phi_{r,m}(z)=\Phi_{r,m}(z+\pi),\quad\frac{1}{\Phi_{r,m}(z)}=\Phi_{r,-m}(-z)\,.
\end{equation}
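Both the reduction $\Phi_{1,0}(z)=\Phi(z;{\mathsf p},{\mathsf q})$ and the relations above can be checked numerically from truncated products (a sketch; helper names and sample values are illustrative). For the inversion check a spin with $\llbracket m\rrbracket_r\neq0$ is used, for which the truncated products are reciprocal term by term:

```python
import cmath

I = 1j

def ellip_gamma(z, p, q, N=40):
    # truncated double product for Phi(z; p, q), eq. (egf)
    val = 1 + 0 * I
    for j in range(N):
        for k in range(N):
            w = p ** (2 * j + 1) * q ** (2 * k + 1)
            val *= (1 - cmath.exp(2 * I * z) * w) / (1 - cmath.exp(-2 * I * z) * w)
    return val

def lens_gamma(z, m, r, p, q, N=40):
    # truncated double product for the lens elliptic gamma function Phi_{r,m}(z)
    mr = m % r  # [[m]]_r
    e, ec = cmath.exp(2 * I * z), cmath.exp(-2 * I * z)
    val = 1 + 0 * I
    for j in range(N):
        pq = (p * q) ** (2 * j + 1)
        for k in range(N):
            val *= (1 - e * p ** (-2 * mr) * pq * p ** (r * (2 * k + 2))) \
                 / (1 - ec * p ** (2 * mr) * pq * p ** (r * 2 * k))
            val *= (1 - e * q ** (2 * mr) * pq * q ** (r * 2 * k)) \
                 / (1 - ec * q ** (-2 * mr) * pq * q ** (r * (2 * k + 2)))
    return val

p, q, z = 0.30, 0.40, 0.6 + 0.1 * I   # arbitrary sample values
print(abs(lens_gamma(z, 0, 1, p, q) - ellip_gamma(z, p, q)))             # r = 1 reduction
print(abs(lens_gamma(z, 1, 3, p, q) * lens_gamma(-z, -1, 3, p, q) - 1))  # inversion
print(abs(lens_gamma(z + cmath.pi, 1, 3, p, q) - lens_gamma(z, 1, 3, p, q)))  # pi-periodicity
```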
Now define the edge Boltzmann weight as
\begin{equation}
\label{ebw}
{\mathcal W}_\alpha(\sigma_i,\sigma_j)=\displaystyle\frac{\textrm{{\large e}}^{-2\alpha\,(\,\llbracket m_i-m_j\rrbracket _\pm+\llbracket m_i+m_j\rrbracket _\pm\,)/r}}{\kappa(\alpha)}\frac{\Phi_{r,m_i-m_j}(x_i-x_j+\mathsf{i}\alpha)\,\Phi_{r,m_i+m_j}(x_i+x_j+\mathsf{i}\alpha)}{\Phi_{r,m_i-m_j}(x_i-x_j-\mathsf{i}\alpha)\,\Phi_{r,m_i+m_j}(x_i+x_j-\mathsf{i}\alpha)}\,,
\end{equation}
where $\llbracket m\rrbracket_\pm:=\llbracket m\rrbracket_r\llbracket -m\rrbracket_r$. The spectral parameter $\alpha$ is taken to lie in the domain $0<\alpha<\eta$, where $\eta$ is real (as is the case for ${\mathsf p}={\mathsf q}^*$). For ${\mathsf p}={\mathsf q}^*$, this is a physical regime of the model, where the Boltzmann weights \eqref{ebw} take real, positive values.
The normalisation factor $\kappa(\alpha)$ is given by
\begin{equation}
\label{ebwnorm}
\kappa(\alpha)=\exp\left\{\sum_{n\neq0}\frac{\textrm{{\large e}}^{4\alpha n}(({\mathsf p}{\mathsf q})^{rn}-({\mathsf p}{\mathsf q})^{-rn})}{n(({\mathsf p}{\mathsf q})^{2n}-({\mathsf p}{\mathsf q})^{-2n})({\mathsf p}^{rn}-{\mathsf p}^{-rn})({\mathsf q}^{rn}-{\mathsf q}^{-rn})}\right\}\,,
\end{equation}
and satisfies the pair of functional equations
\begin{equation}
\label{functrels}
\frac{\kappa(\eta-\alpha)}{\kappa(\alpha)}=\Phi_{r,0}(\mathsf{i}(\eta-2\alpha)),\quad\kappa(\alpha)\kappa(-\alpha)=1\,.
\end{equation}
For $r=1$ this reduces to the normalisation of the Boltzmann weights for the BS master solution \cite{Bazhanov:2010kz}.
Note that the functional equations \eqref{functrels} arise when solving for the free energy of the model, $\lim_{N\rightarrow\infty}N^{-1}\log\mathcal{Z}$, using the inversion relation method \cite{Str79,Zam79,Bax82inv}. The solution of these functional equations with appropriate analyticity properties, \eqref{ebwnorm}, is included in the normalisation of the Boltzmann weights \eqref{ebw}, and the result \eqref{fzero} follows \cite{BMS07a,BMS07b,Bazhanov:2010kz,BKS2}.
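The functional equations can also be tested numerically (a sketch; the helper names, truncation depths, and sample values of ${\mathsf p},{\mathsf q},r,\alpha$ are all illustrative). In the sum \eqref{ebwnorm} the $n$ and $-n$ terms are folded together, since the summand is odd under $n\rightarrow-n$, and the growing powers such as $({\mathsf p}{\mathsf q})^{-rn}$ are cancelled analytically against the denominator; this is an algebraic rewriting of the same sum, done purely for numerical stability:

```python
import cmath
import math

I = 1j

def kappa(alpha, r, p, q, N=60):
    # Exponent of eq. (ebwnorm): the summand is odd under n -> -n, so the n and
    # -n terms are folded together; common growing powers have been cancelled
    # so that only decaying terms appear (same sum, rewritten for stability).
    s = 0.0
    for n in range(1, N):
        a = (p * q) ** (2 * n) * ((p * q) ** (2 * r * n) - 1) / (
            n * ((p * q) ** (4 * n) - 1)
              * (p ** (2 * r * n) - 1) * (q ** (2 * r * n) - 1))
        s += (math.exp(4 * alpha * n) - math.exp(-4 * alpha * n)) * a
    return math.exp(s)

def lens_gamma(z, m, r, p, q, N=40):
    # truncated double product for Phi_{r,m}(z), eq. (legf)
    mr = m % r
    e, ec = cmath.exp(2 * I * z), cmath.exp(-2 * I * z)
    val = 1 + 0 * I
    for j in range(N):
        pq = (p * q) ** (2 * j + 1)
        for k in range(N):
            val *= (1 - e * p ** (-2 * mr) * pq * p ** (r * (2 * k + 2))) \
                 / (1 - ec * p ** (2 * mr) * pq * p ** (r * 2 * k))
            val *= (1 - e * q ** (2 * mr) * pq * q ** (r * 2 * k)) \
                 / (1 - ec * q ** (-2 * mr) * pq * q ** (r * (2 * k + 2)))
    return val

p, q, r = 0.30, 0.40, 2          # arbitrary sample values
eta = -0.5 * math.log(p * q)     # e^{-2 eta} = pq
alpha = 0.3                      # 0 < alpha < eta
lhs = kappa(eta - alpha, r, p, q) / kappa(alpha, r, p, q)
rhs = lens_gamma(I * (eta - 2 * alpha), 0, r, p, q)
print(abs(lhs - rhs))                                        # first relation of (functrels)
print(abs(kappa(alpha, r, p, q) * kappa(-alpha, r, p, q) - 1))  # second relation
```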
Next define the single-spin Boltzmann weight as\footnote{This differs from Yamazaki's single-spin weight ($\mathbb{S}^v(s)$ in his notation \cite{Yamazaki2013}) by dropping a constant singular factor that appears to be incorrect, at least from the point of view of the statistical mechanical model (in his notation this factor is $(\Gamma_{r,0}(1;p,q))^{-(N-1)}$).}
\begin{equation}
\label{ebws}
\begin{array}{rcl}
\displaystyle{\mathcal S}(\sigma_i)\!\!\!&=&\!\!\!\displaystyle\frac{\varepsilon_i}{\pi}\,({\mathsf p}^{2r};{\mathsf p}^{2r})_\infty({\mathsf q}^{2r};{\mathsf q}^{2r})_\infty\,\textrm{{\large e}}^{2\eta\llbracket 2m_i\rrbracket _\pm/r}\,\Phi_{r,-2m_i}(-2x_i-\mathsf{i}\eta)\,\Phi_{r,2m_i}(2x_i-\mathsf{i}\eta)\,, \\[0.5cm]
\!\!\!&=&\!\!\!\displaystyle\frac{\varepsilon_i}{\pi}\,\textrm{{\large e}}^{2\eta\llbracket 2m_i\rrbracket _\pm/r}\,{\vartheta}_4(2x_i+(r/2-\llbracket 2m_i\rrbracket _r)\pi\sigma\,|\,{\mathsf p}^r)\,{\vartheta}_4(2x_i-(r/2-\llbracket 2m_i\rrbracket _r)\pi\tau\,|\,{\mathsf q}^r)\,,
\end{array}
\end{equation}
where
\begin{equation}
\label{epsdef}
\varepsilon_i=\left\{\begin{array}{ll}\frac{1}{2}&\displaystyle\quad m_i=0 \mbox{ or } m_i=\llbracket r-m_i\rrbracket _r\,, \\[0.3cm] 1&\quad\mbox{otherwise}\,,\end{array}\right.
\end{equation}
$\vartheta_4$ is a Jacobi theta function
\begin{equation}
\vartheta_4(z\,|\,{\mathsf p})=({\mathsf p}^2;{\mathsf p}^2)_\infty\prod_{n=1}^\infty\left(1-\textrm{{\large e}}^{2\mathsf{i} z}{\mathsf p}^{2n-1}\right)\left(1-\textrm{{\large e}}^{-2\mathsf{i} z}{\mathsf p}^{2n-1}\right)\,,
\end{equation}
and $(x;{\mathsf q})_\infty=\prod_{j=0}^\infty\,(1-x\,{\mathsf q}^j)$ is the ${\mathsf q}$-Pochhammer symbol.
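The product form of $\vartheta_4$ above is the Jacobi triple product, and may be compared directly against the theta series $\vartheta_4(z\,|\,{\mathsf p})=\sum_{n\in\mathbb{Z}}(-1)^n\,{\mathsf p}^{n^2}\textrm{{\large e}}^{2\mathsf{i} nz}$ (a numerical sketch; helper names and sample values are illustrative):

```python
import cmath

I = 1j

def theta4_prod(z, p, N=60):
    # product form: (p^2; p^2)_inf * prod_n (1 - e^{2iz} p^{2n-1})(1 - e^{-2iz} p^{2n-1})
    val = 1 + 0 * I
    for n in range(1, N):
        val *= (1 - p ** (2 * n))
        val *= (1 - cmath.exp(2 * I * z) * p ** (2 * n - 1))
        val *= (1 - cmath.exp(-2 * I * z) * p ** (2 * n - 1))
    return val

def theta4_series(z, p, N=60):
    # Jacobi series: sum over n in Z of (-1)^n p^{n^2} e^{2 i n z}
    return sum((-1) ** n * p ** (n * n) * cmath.exp(2 * I * n * z)
               for n in range(-N, N + 1))

z, p = 0.4 + 0.1 * I, 0.35   # arbitrary sample values
print(abs(theta4_prod(z, p) - theta4_series(z, p)))
```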
The Boltzmann weights \eqref{ebw} are reflection symmetric
\begin{equation}
\label{spinrefl}
{\mathcal W}_\alpha(\sigma_i,\sigma_j)={\mathcal W}_\alpha(\sigma_j,\sigma_i)\,.
\end{equation}
The Boltzmann weights have an obvious $\pi$-periodic symmetry in the continuous spin variable, and they are also invariant under the spin transformation $x_i\rightarrow-x_i$, $m_i\rightarrow r-m_i$.
Accordingly, the discrete spins are restricted to the values $0,1,\ldots,\floor{r/2}$, and the factor $\varepsilon_i$ was introduced in \eqref{epsdef} to account for this.
The Boltzmann weights \eqref{ebw} satisfy the following boundary conditions analogous to \eqref{msbc}
\begin{align}
\begin{array}{rcl}
\displaystyle\left.{\mathcal W}_\alpha(\sigma_i,\sigma_j)\right|_{\alpha=0}&\!\!\!\!=\!\!\!\!&\displaystyle 1,\\[0.3cm]
\displaystyle\left.{\mathcal W}_{\eta-\alpha}(\sigma_i,\sigma_j)\right|_{\alpha\rightarrow0}&\!\!\!\!=\!\!\!\!&\displaystyle\frac{\varepsilon_{i}}{{\mathcal S}(\sigma_i)}(\delta(\sin(x_i\!+\!x_j))\,\delta_{\llbracket m_i+m_j\rrbracket_r,0}+\delta(\sin(x_i\!-\!x_j))\,\delta_{\llbracket m_i-m_j\rrbracket_r,0})\,,
\end{array}
\end{align}
where ${\mathcal S}(\sigma_i)\neq0$. The $r=1$ case of these relations was previously obtained for the BS master solution \cite{Bazhanov:2010kz}, and in connection with biorthogonality of elliptic hypergeometric functions \cite{Sp2008}.
The Boltzmann weights \eqref{ebw} and \eqref{ebws} satisfy the star-triangle relation\footnote{It follows from \eqref{functrels} that a factor $\mathcal{R}(\alpha_i,\alpha_j,\alpha_k)=\Phi_{r,0}(\mathsf{i}(\eta-2\alpha_i))\Phi_{r,0}(\mathsf{i}(\eta-2\alpha_j))\Phi_{r,0}(\mathsf{i}(\eta-2\alpha_k))$ would appear on the right hand side of the star-triangle relation \eqref{str}, if the Boltzmann weights \eqref{ebw} were not normalised by $\kappa(\alpha)$.}
\begin{equation}
\label{str}
\begin{array}{l}
\displaystyle\sum_{m_0=0}^{\floor{r/2}}\,\int^\pi_0\! dx_0\,\mathcal{S}(\sigma_0)\mathcal{W}_{\eta-\alpha_i}(\sigma_i,\sigma_0)\mathcal{W}_{\eta-\alpha_j}(\sigma_j,\sigma_0)\mathcal{W}_{\eta-\alpha_k}(\sigma_k,\sigma_0)=\mathcal{W}_{\alpha_i}(\sigma_j,\sigma_k)\mathcal{W}_{\alpha_j}(\sigma_i,\sigma_k)\mathcal{W}_{\alpha_k}(\sigma_j,\sigma_i)\,,
\end{array}
\end{equation}
with the spectral parameters satisfying $\eta=\alpha_i+\alpha_j+\alpha_k$. For $r=1$ this reduces to the master solution of the star-triangle relation \cite{Bazhanov:2010kz}.
The star-triangle relation \eqref{str} is a particular case of a new elliptic hypergeometric summation/integration identity given in Appendix \ref{app:proof}.
\subsection{Limit: $r\rightarrow\infty$}
\label{sec:rinf}
The $r\rightarrow\infty$ limit of \eqref{str} is formally fairly straightforward, due to the simple asymptotics of the lens elliptic gamma function. Consider the same elliptic nomes ${\mathsf p},{\mathsf q}$ of \eqref{nomes} from the previous section. Define the function $Q$ as the $r\rightarrow\infty$ limit of the lens elliptic gamma function \eqref{legf}
\begin{equation}
\label{qdef}
Q(z,n)=\lim_{r\rightarrow\infty}\Phi_{r,n}(z)=\left\{
\begin{array}{lr}
\displaystyle\prod_{j=0}^\infty\,\frac{1-\textrm{{\large e}}^{2\mathsf{i} z}\,{\mathsf p}^{-2n}\,({\mathsf p}{\mathsf q})^{2j+1}}{1-\textrm{{\large e}}^{-2\mathsf{i} z}\,{\mathsf q}^{-2n}\,({\mathsf p}{\mathsf q})^{2j+1}}& n<0\,, \\
\displaystyle\prod_{j=0}^\infty\,\frac{1-\textrm{{\large e}}^{2\mathsf{i} z}\,{\mathsf q}^{2n}\,({\mathsf p}{\mathsf q})^{2j+1}}{1-\textrm{{\large e}}^{-2\mathsf{i} z}\,{\mathsf p}^{2n}\,({\mathsf p}{\mathsf q})^{2j+1}}& n\geq0\,.
\end{array}\right.
\end{equation}
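The limit can be checked numerically by comparing the truncated product for $\Phi_{r,n}(z)$ at moderately large $r$ against $Q(z,n)$, the dropped factors being of order ${\mathsf p}^{2r},{\mathsf q}^{2r}$ (a sketch; helper names and sample values are illustrative):

```python
import cmath

I = 1j

def Q_func(z, n, p, q, N=60):
    # truncated product for Q(z, n), eq. (qdef)
    e, ec = cmath.exp(2 * I * z), cmath.exp(-2 * I * z)
    val = 1 + 0 * I
    for j in range(N):
        pq = (p * q) ** (2 * j + 1)
        if n >= 0:
            val *= (1 - e * q ** (2 * n) * pq) / (1 - ec * p ** (2 * n) * pq)
        else:
            val *= (1 - e * p ** (-2 * n) * pq) / (1 - ec * q ** (-2 * n) * pq)
    return val

def lens_gamma(z, m, r, p, q, N=60):
    # truncated double product for Phi_{r,m}(z), eq. (legf)
    mr = m % r
    e, ec = cmath.exp(2 * I * z), cmath.exp(-2 * I * z)
    val = 1 + 0 * I
    for j in range(N):
        pq = (p * q) ** (2 * j + 1)
        for k in range(N):
            val *= (1 - e * p ** (-2 * mr) * pq * p ** (r * (2 * k + 2))) \
                 / (1 - ec * p ** (2 * mr) * pq * p ** (r * 2 * k))
            val *= (1 - e * q ** (2 * mr) * pq * q ** (r * 2 * k)) \
                 / (1 - ec * q ** (-2 * mr) * pq * q ** (r * (2 * k + 2)))
    return val

p, q, z = 0.50, 0.60, 0.8 + 0.1 * I   # arbitrary sample values
r = 30                                 # "large" r as a proxy for the limit
for n in (2, -3):
    print(n, abs(lens_gamma(z, n, r, p, q) - Q_func(z, n, p, q)))
```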
This function satisfies the following inversion relation
\begin{equation}
Q(z,n)=\frac{1}{Q(-z,-n)}\,.
\end{equation}
From this function one defines the edge Boltzmann weights
\begin{equation}
\label{rinfwts}
{\mathcal W}_\alpha(\sigma_i,\sigma_j)=\frac{\textrm{{\large e}}^{-2\alpha |m_i-m_j|-2\alpha|m_i+m_j|}}{\kappa(\alpha)}\,\frac{Q(x_i-x_j+\mathsf{i}\alpha,m_i-m_j)\,Q(x_i+x_j+\mathsf{i}\alpha,m_i+m_j)}{Q(x_i-x_j-\mathsf{i}\alpha,m_i-m_j)\,Q(x_i+x_j-\mathsf{i}\alpha,m_i+m_j)}\,,
\end{equation}
with the normalisation
\begin{equation}
\label{rinfnorm}
\kappa(\alpha)=\exp\left\{-\sum_{n\neq0}\frac{\textrm{{\large e}}^{4\alpha n}}{|n|\,(({\mathsf p}{\mathsf q})^{2n}-({\mathsf p}{\mathsf q})^{-2n})}\right\}\,,
\end{equation}
The spectral variable is restricted to the region $0<\alpha<\eta$, with $\eta$ defined in \eqref{nomes}. The normalisation factor $\kappa$ satisfies the following functional equations
\begin{equation}
\frac{\kappa(\eta-\alpha)}{\kappa(\alpha)}=Q(\mathsf{i}(\eta-2\alpha),0),\quad\kappa(\alpha)\kappa(-\alpha)=1\,,
\end{equation}
which are required for \eqref{fzero} to hold.
Define also the single-spin Boltzmann weight as
\begin{equation}
\label{rinfswt}
{\mathcal S}(\sigma_j)=\frac{1}{2\pi}\,\textrm{{\large e}}^{4\eta |m_j|}\,Q(2x_j-\mathsf{i}\eta,2m_j)\,Q(-2x_j-\mathsf{i}\eta,-2m_j)\,.
\end{equation}
The continuous spins $x_j$ and discrete spins $m_j$ now take values
\begin{equation}
0\leq x_j<\pi,\quad m_j\in\mathbb{Z}\,.
\end{equation}
The Boltzmann weights \eqref{rinfwts} satisfy spin reflection symmetry
\begin{equation}
{\mathcal W}_\alpha(\sigma_i,\sigma_j)={\mathcal W}_\alpha(\sigma_j,\sigma_i)\,,
\end{equation}
and are $\pi$-periodic in the continuous spin $x_j$. These Boltzmann weights are real and positive for ${\mathsf p}={\mathsf q}^*$, and $0<\alpha<\eta$.
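The inversion relation, the reflection symmetry, and the claimed positivity for ${\mathsf p}={\mathsf q}^*$ can all be checked numerically from truncated products (a sketch; helper names and sample values are illustrative). The normalisation $\kappa(\alpha)$ cancels from the reflection-symmetry and positivity checks, so the unnormalised weights are compared:

```python
import cmath

I = 1j

def Q_func(z, n, p, q, N=80):
    # truncated product for Q(z, n), eq. (qdef)
    e, ec = cmath.exp(2 * I * z), cmath.exp(-2 * I * z)
    val = 1 + 0 * I
    for j in range(N):
        pq = (p * q) ** (2 * j + 1)
        if n >= 0:
            val *= (1 - e * q ** (2 * n) * pq) / (1 - ec * p ** (2 * n) * pq)
        else:
            val *= (1 - e * p ** (-2 * n) * pq) / (1 - ec * q ** (-2 * n) * pq)
    return val

def w_un(alpha, xi, mi, xj, mj, p, q):
    # unnormalised edge weight of eq. (rinfwts); kappa(alpha) cancels in the
    # reflection-symmetry check below
    pref = cmath.exp(-2 * alpha * abs(mi - mj) - 2 * alpha * abs(mi + mj))
    return pref * Q_func(xi - xj + I * alpha, mi - mj, p, q) \
                * Q_func(xi + xj + I * alpha, mi + mj, p, q) \
                / (Q_func(xi - xj - I * alpha, mi - mj, p, q)
                   * Q_func(xi + xj - I * alpha, mi + mj, p, q))

p = 0.40 + 0.15 * I            # arbitrary sample value; q = p* is the physical regime
q = p.conjugate()
alpha = 0.3                    # 0 < alpha < eta, eta = -log(|p|^2)/2 here
si, sj = (0.7, 2), (1.9, -1)   # sample spins (x, m)
w12 = w_un(alpha, si[0], si[1], sj[0], sj[1], p, q)
w21 = w_un(alpha, sj[0], sj[1], si[0], si[1], p, q)
print(abs(Q_func(0.5, 2, p, q) * Q_func(-0.5, -2, p, q) - 1))  # inversion
print(abs(w12 - w21))                                          # reflection symmetry
print(w12.real > 0, abs(w12.imag))                             # real and positive
```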
The Boltzmann weights \eqref{rinfwts} satisfy the following boundary conditions
\begin{align}
\left.{\mathcal W}_\alpha(\sigma_i,\sigma_j)\right|_{\alpha=0}=1,\quad\left.{\mathcal W}_{\eta-\alpha}(\sigma_i,\sigma_j)\right|_{\alpha\rightarrow0}=\frac{1}{2{\mathcal S}(\sigma_i)}(\delta(\sin(x_i\!+\!x_j))\,\delta_{m_i,-m_j}+\delta(\sin(x_i\!-\!x_j))\,\delta_{m_i,m_j})\,,
\end{align}
where ${\mathcal S}(\sigma_i)\neq0$.
The Boltzmann weights \eqref{rinfwts}, and \eqref{rinfswt}, satisfy the star-triangle relation
\begin{equation}
\label{rinfstr}
\begin{array}{r}
\displaystyle\sum_{m_0\in\mathbb{Z}}\,\int^\pi_0\! dx_0\,\mathcal{S}(\sigma_0)\mathcal{W}_{\eta-\alpha_i}(\sigma_i,\sigma_0)\mathcal{W}_{\eta-\alpha_j}(\sigma_j,\sigma_0)\mathcal{W}_{\eta-\alpha_k}(\sigma_k,\sigma_0)=\mathcal{W}_{\alpha_i}(\sigma_j,\sigma_k)\mathcal{W}_{\alpha_j}(\sigma_i,\sigma_k)\mathcal{W}_{\alpha_k}(\sigma_j,\sigma_i)\,,
\end{array}
\end{equation}
with $\eta=\alpha_i+\alpha_j+\alpha_k$.
Note that a similar but different identity, involving an integral and a sum over continuous and discrete variables respectively, was recently obtained by Gahramanov and Rosengren in the form of a pentagon identity from $3\!-\!d$ $\mathcal{N}=2$ supersymmetric gauge theories \cite{GR13}. From this point of view, it would also be interesting to find an interpretation, if any, of equation \eqref{rinfstr} in terms of a new duality for the $r\rightarrow\infty$ reduction of the $S_1\times S_3/\mathbb{Z}_r$ superconformal indices \cite{BNY13,Y14}.
\subsection{Gamma function limit}
From the star-triangle relation \eqref{rinfstr}, one can take a further limit to obtain another new solution of the star-triangle relation, with Boltzmann weights given in terms of the Euler gamma function. The latter star-triangle relation was recently obtained by the author \cite{K14}.
Consider the following limit of the elliptic nomes
\begin{equation}
{\mathsf p}=\textrm{{\large e}}^{-\hbar},\;\;{\mathsf q}=\textrm{{\large e}}^{-\hbar},\;\;\eta=\hbar,\quad\hbar\rightarrow0 \,,
\end{equation}
and the following scaling limit of the continuous spins $x_j$ and spectral parameters $\alpha$ from Section \ref{sec:rinf}
\begin{equation}
x_j\rightarrow\hbar x_j,\;\;\alpha\rightarrow\hbar\alpha\,,\quad\hbar\rightarrow0\,.
\end{equation}
Under the rescaling of the spins $\sigma_j=(x_j\hbar,m_j)$, the asymptotics of \eqref{qdef} from the previous section, as $\hbar\rightarrow0$, are given in terms of the Euler gamma function $\Gamma(z)$ by
\begin{equation}
Q(\sigma_j)\simeq(4\hbar)^{\mathsf{i} x_j}\,\frac{\Gamma(\frac{1+|m_j|+\mathsf{i} x_j}{2})}{\Gamma(\frac{1+|m_j|-\mathsf{i} x_j}{2})}\,.
\end{equation}
Then as $\hbar\rightarrow0$, the asymptotics of the Boltzmann weights \eqref{rinfwts}, \eqref{rinfswt}, and normalisation \eqref{rinfnorm}, are given by
\begin{equation}
{\mathcal S}(\sigma_j)\simeq\frac{1}{2\pi}(4\hbar)^{2}(x_j^2+m_j^2),\quad\kappa(\alpha\hbar)\simeq(8\hbar)^{-\alpha}\,\frac{\Gamma(\frac{1-\alpha}{2})}{\Gamma(\frac{1+\alpha}{2})}\,,
\end{equation}
and\footnote{The following compact notation for products of the gamma function, $\Gamma(x\pm y)=\Gamma(x+y)\Gamma(x-y)$, is now used for convenience.}
\begin{equation}
\kappa(\alpha\hbar)\,{\mathcal W}_{\alpha\hbar}(\sigma_i,\sigma_j)\simeq(4\hbar)^{-4\alpha}\,\frac{\Gamma(\frac{1-\alpha+(m_i-m_j)\pm\mathsf{i}(x_i-x_j)}{2})\,\Gamma(\frac{1-\alpha+(m_i+m_j)\pm\mathsf{i}(x_i+x_j)}{2})}{\Gamma(\frac{1+\alpha+(m_i-m_j)\pm\mathsf{i}(x_i-x_j)}{2})\,\Gamma(\frac{1+\alpha+(m_i+m_j)\pm\mathsf{i}(x_i+x_j)}{2})}\,.
\end{equation}
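The $\kappa$ asymptotics can be checked numerically to leading order (a rough sketch; the helper name, truncation, and sample values are illustrative, and agreement is only up to $O(\hbar)$ corrections). With ${\mathsf p}={\mathsf q}=\textrm{{\large e}}^{-\hbar}$, the normalisation sum of Section \ref{sec:rinf} is evaluated with the $n$ and $-n$ terms folded together (using the $n\rightarrow-n$ antisymmetry inherited from \eqref{ebwnorm}) and compared against the Euler gamma expression:

```python
import math

def log_kappa(alpha_h, hbar, N=20000):
    # Normalisation sum at p = q = e^{-hbar} (so pq = e^{-2*hbar}), with the n
    # and -n terms folded together:
    #   sum_{n>0} sinh(4*alpha_h*n) / (n * sinh(4*hbar*n))
    return sum(math.sinh(4 * alpha_h * n) / (n * math.sinh(4 * hbar * n))
               for n in range(1, N))

alpha, hbar = 0.4, 0.004   # sample values: 0 < alpha < 1, small hbar
kappa_num = math.exp(log_kappa(alpha * hbar, hbar))
kappa_asym = (8 * hbar) ** (-alpha) \
    * math.gamma((1 - alpha) / 2) / math.gamma((1 + alpha) / 2)
print(abs(kappa_num / kappa_asym - 1))   # small, of order hbar
```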
In the above limit, the star-triangle relation \eqref{rinfstr} formally reduces to the following star-triangle relation
\begin{equation}
\label{strmsg}
\begin{array}{r}
\displaystyle\sum_{m_0\in\mathbb{Z}}\,\int^{\infty}_{-\infty}\! dx_0\,
\,\mathcal{S}(\sigma_0)\mathcal{W}_{\eta-\alpha_i}(\sigma_i,\sigma_0)\mathcal{W}_{\eta-\alpha_j}(\sigma_j,\sigma_0)\mathcal{W}_{\eta-\alpha_k}(\sigma_k,\sigma_0)=\mathcal{W}_{\alpha_i}(\sigma_j,\sigma_k)\mathcal{W}_{\alpha_j}(\sigma_i,\sigma_k)\mathcal{W}_{\alpha_k}(\sigma_j,\sigma_i)\,,
\end{array}
\end{equation}
where $\eta=1=\alpha_i+\alpha_j+\alpha_k$, and\footnote{A typo appeared in the Boltzmann weight $\mathcal{S}(\sigma_j)$ in a previous paper \cite{K14}; it has been corrected here by the addition of the factor of $\frac{1}{2}$.}
\begin{equation}
\label{2spinwts}
\mathcal{S}(\sigma_j)=\displaystyle\frac{1}{4\pi}(x_j^2+m_j^2)\;,\quad\mathcal{W}_{\alpha}(\sigma_i,\sigma_j)
=\displaystyle\frac{\Gamma(\frac{1+\alpha}{2})}{\Gamma(\frac{1-\alpha}{2})}\,
\frac{\Gamma(\frac{1-\alpha-(m_i+m_j)\pm\mathsf{i}(x_i+x_j)}{2})\,\Gamma(\frac{1-\alpha-(m_i-m_j)\pm\mathsf{i}(x_i-x_j)}{2})}
{\Gamma(\frac{1+\alpha-(m_i+m_j)\pm\mathsf{i}(x_i+x_j)}{2})\,\Gamma(\frac{1+\alpha-(m_i-m_j)\pm\mathsf{i}(x_i-x_j)}{2})}\;.
\end{equation}
The spins of the model now take the values $x_i\in\mathbb{R}$, $m_i\in\mathbb{Z}\,,$ and the spectral parameter is restricted to $0<\alpha<\eta$, which is a physical regime of the model. These Boltzmann weights also obey the spin reflection identity \eqref{spinrefl}; however, they are no longer $\pi$-periodic in the continuous spin.
The Boltzmann weights \eqref{2spinwts} also satisfy the following boundary conditions
\begin{align}
\left.{\mathcal W}_\alpha(\sigma_i,\sigma_j)\right|_{\alpha=0}=1,\quad\left.{\mathcal W}_{\eta-\alpha}(\sigma_i,\sigma_j)\right|_{\alpha\rightarrow0}=\frac{1}{2{\mathcal S}(\sigma_i)}(\delta(x_i\!+\!x_j)\,\delta_{m_i,-m_j}+\delta(x_i\!-\!x_j)\,\delta_{m_i,m_j})\,,
\end{align}
where ${\mathcal S}(\sigma_i)\neq0$. For additional details on this star-triangle relation, the reader is referred to the previous publication \cite{K14}.
Note that different solutions to the star-triangle relation, with Boltzmann weights given in terms of the Euler gamma function, were previously found in relation to the chiral Potts model \cite{AuYangPerk:ninfcp}. The Boltzmann weights of those star-triangle relations describe models that contain either only continuous- or only discrete-valued spins. It would be interesting to determine whether there exists some relation between those solutions and the star-triangle relation given in \eqref{strmsg}.
As was previously remarked \cite{K14}, the appearance of discrete and continuous valued spins here resembles the elliptic model obtained by Yamazaki from quiver gauge theory \cite{Yamazaki2013}. It has now been shown here how these models are connected through the star-triangle relation \eqref{str}, the latter relation implying the star-star relation for the two-component spin case.
The alternative method \cite{K14} used to obtain \eqref{strmsg} was a scaling limit of the hyperbolic beta integral solution of the star-triangle relation \cite{Spiridonov-statmech}.\footnote{Bazhanov, Mangazeev, and Sergeev originally used this limiting procedure to obtain a related solution of the star-triangle relation, with Boltzmann weights depending only on the differences of spins \cite{BMS07b}.} The asymptotics of the hyperbolic beta integral in the strong coupling regime are such that sharp delta-function-shaped peaks appear when the real valued spins take integer values. These asymptotics are manifest in the strong coupling limit as additional discrete integer spin variables, as appearing in \eqref{strmsg}. One might then ask whether the elliptic variant \eqref{str} of this star-triangle relation arises in the strong coupling limit of some as yet unknown star-triangle relation. Such a relation should then also have implications for supersymmetric gauge theories, as well as providing an interesting new summation/integration identity, perhaps in terms of more general special functions.
\section{Conclusion}
A new solution to the star-triangle relation was given in \eqref{str}, for an Ising type model whose spins contain integer and real valued components. The Boltzmann weights of this model are obtained from Yamazaki's Gauge/YBE correspondence and the related solution to the star-star relation \cite{Yamazaki2013}. The star-triangle relation \eqref{str} implies the two component spin case of the star-star relation given by Yamazaki. In Appendix \ref{app:proof} a proof is presented of a new elliptic hypergeometric summation/integration identity which contains the star-triangle relation \eqref{str} as a particular case. The new identity contains six integer variables in addition to six complex variables, and includes Spiridonov's elliptic beta integral \cite{Spiridonov-beta} as a particular case.
Two further solutions of the star-triangle relation, \eqref{rinfstr} and \eqref{strmsg}, were given that arise as limiting cases of \eqref{str}. The Boltzmann weights for these star-triangle relations similarly describe Ising type models with integer and real valued spin components. The star-triangle relation \eqref{rinfstr} appears to be new, while the star-triangle relation \eqref{strmsg} was previously obtained by the author using a different limiting procedure. In each case the Boltzmann weights are normalised such that \eqref{fzero} holds for the corresponding lattice model.
It would be interesting to determine the exact role of the three star-triangle relations \eqref{str}, \eqref{rinfstr}, and \eqref{strmsg} in the gauge theory setting. The Gauge/YBE duality described by Yamazaki implies that the star-triangle relation \eqref{str} should correspond to Seiberg duality of the $4\!-\!d$ $\mathcal{N}=1$ $S_1\times S_3/\mathbb{Z}_r$ index for $SU(2)$ quiver gauge theory. The star-triangle relation \eqref{rinfstr}, meanwhile, appears to be related to the pentagon identity recently obtained by Gahramanov and Rosengren \cite{GR13}, and is thus expected to correspond to a duality of indices in $3\!-\!d$ $\mathcal{N}=2$ supersymmetric gauge theory \cite{BNY13,Y14}.
It would also be of interest to mathematically prove the star-star relations for multi-component spins given by Yamazaki, which correspond to multivariate generalisations of elliptic hypergeometric integral identities of the type in \eqref{str3}. It may be possible to do this by adapting the proofs given by Rains for the continuous variable cases \cite{Rains-transformations}; doing so would likely result in new elliptic hypergeometric identities involving an integral and a sum over continuous and discrete variables respectively.
\section*{Acknowledgments}
I thank Vladimir Bazhanov for suggesting the problem, useful advice, and reading the manuscript. I thank Alexander Bobenko and Yuri Suris for fruitful discussions and their hospitality during my stay at the Technische Universit\"{a}t Berlin. I also thank Hjalmar Rosengren for his interest in this work and interesting correspondence. Note that a star-triangle relation closely related to \eqref{rinfstr} was independently found \cite{GahSpi}, which became known to the author after these results were completed. The author is supported by the DFG Collaborative Research Center TRR 109, ``Discretization in Geometry and Dynamics''.
\app{Proof of \eqref{str}}
\label{app:proof}
In this section a proof of \eqref{str} is given. It is based on, and closely follows, Spiridonov's proofs of the elliptic beta integrals \cite{Spiridonov-proofs,Spiridonov-essays}.\footnote{See also Wilf and Zeilberger's work \cite{WilfZeil}.} One major difference is that rather than considering the integral over a closed contour encircling the origin, the integral is considered over the interval $[0,2\pi]$ (the difference between the contours is a simple change of variables). This is done for convenience, primarily to avoid calculations involving roots of complex numbers.
Recall the definition of the elliptic nomes
\begin{equation}
{\mathsf p}=\textrm{{\large e}}^{\mathsf{i}\pi\sigma}\,,\quad{\mathsf q}=\textrm{{\large e}}^{\mathsf{i}\pi\tau}\,,\quad\textrm{{\large e}}^{-2\eta}={\mathsf p}{\mathsf q}\,,\quad\mathop{\hbox{\rm Im}}\nolimits(\sigma),\;\mathop{\hbox{\rm Im}}\nolimits(\tau) >0\,,
\end{equation}
and now define
\begin{equation}
\zeta=\mathsf{i}\pi(1+\tau/2-\sigma/2)\,,
\end{equation}
and the following function
\begin{equation}
\varphi(z,m)=\left(-2\eta-2\mathsf{i} z+2\zeta(\llbracket m\rrbracket -\llbracket-m\rrbracket)/3\right)\llbracket m\rrbracket _\pm/(4r)\,,
\end{equation}
with $z\in\mathbb{C}$, $m\in\mathbb{Z}$, $r$ defined in \eqref{rdef}, and where $\llbracket m\rrbracket\in\{0,1,\ldots,r-1\}$ denotes $m$ modulo $r$.\footnote{This is equivalent to the definition of $\llbracket m\rrbracket_r$ from Section \ref{sec:str}, with the $r$ subscript now dropped.} Define $\Gamma$ to be the lens elliptic gamma function \cite{Yamazaki2013} in the following form\footnote{This is related to $\Phi_{r,m}(z)$ of \eqref{legf} by a change of variables $\textrm{{\large e}}^{-\varphi(z,m)}\Gamma(-2z+\mathsf{i}\eta,m)$, and squaring the elliptic nomes ${\mathsf p}\rightarrow{\mathsf p}^2$, ${\mathsf q}\rightarrow{\mathsf q}^2$.}
\begin{equation}
\label{legf2}
\Gamma(z,m)=\textrm{{\large e}}^{\varphi(z,m)}\prod_{j,k=0}^\infty\frac{1-\textrm{{\large e}}^{-\mathsf{i} z}{\mathsf p}^{-\llbracket m\rrbracket }({\mathsf p}{\mathsf q})^{j+1}{\mathsf p}^{r(k+1)}}{1-\textrm{{\large e}}^{\mathsf{i} z}{\mathsf p}^{\llbracket m\rrbracket }({\mathsf p}{\mathsf q})^j{\mathsf p}^{rk}}\frac{1-\textrm{{\large e}}^{-\mathsf{i} z}{\mathsf q}^{-r+\llbracket m\rrbracket }({\mathsf p}{\mathsf q})^{j+1}{\mathsf q}^{r(k+1)}}{1-\textrm{{\large e}}^{\mathsf{i} z}{\mathsf q}^{r-\llbracket m\rrbracket }({\mathsf p}{\mathsf q})^j{\mathsf q}^{rk}}\,.
\end{equation}
It is useful to introduce the following compact notation for products of this function
\begin{equation}
\Gamma(x \pm z,m\pm n):=\Gamma(x+z,m+n)\,\Gamma(x-z,m-n),\quad \Gamma(x\pm 2z,m\pm 2n):=\Gamma(x+2z,m+2n)\,\Gamma(x-2z,m-2n)\,.
\end{equation}
The poles of the lens elliptic gamma function \eqref{legf2} are located at the points
\begin{equation}
z=-\pi\sigma\left(rj+\llbracket m\rrbracket\right)-2\mathsf{i}\eta k\,,\,-\pi\tau\left(r(j+1)-\llbracket m\rrbracket\right)-2\mathsf{i}\eta k\,,
\end{equation}
and its zeros are located at the points
\begin{equation}
z=\pi\sigma\left(r(j+1)-\llbracket m\rrbracket\right)+2\mathsf{i}\eta(k+1)\,,\,\pi\tau\left(rj+\llbracket m\rrbracket\right)+2\mathsf{i}\eta(k+1)\,,
\end{equation}
for $j,k=0,1,\ldots$.
The lens elliptic gamma function \eqref{legf2} obeys the following useful identities
\begin{equation}
\Gamma(z,m)=\frac{1}{\Gamma(2\mathsf{i}\eta-z,-m)}\,,
\end{equation}
and
\begin{equation}
\Gamma(z+n\pi\sigma,m-n)=\left(\prod_{j=0}^{n-1}\theta(z+j\pi\sigma,m-j\,|\,\tau)\right)\,\Gamma(z,m)\,,\quad n=0,1,\ldots\,,
\end{equation}
where the theta function is defined as
\begin{equation}
\label{thtdef}
\theta(z,m\,|\,\tau)=\textrm{{\large e}}^{\phi(z,m)}(\textrm{{\large e}}^{\mathsf{i} z}{\mathsf q}^{\llbracket-m\rrbracket};{\mathsf q}^r)_\infty(\textrm{{\large e}}^{-\mathsf{i} z}{\mathsf q}^{r-\llbracket-m\rrbracket};{\mathsf q}^r)_\infty\,,
\end{equation}
and
\begin{equation}
\phi(z,m)=\varphi(z+\pi\sigma,m-1)-\varphi(z,m)=\left(\zeta(r-1)(r+1)/3-\pi\mathsf{i}(\tau+2)\llbracket m\rrbracket_\pm-\mathsf{i}(z+\pi)(r-1-2\llbracket-m\rrbracket)\right)/(2r)\,.
\end{equation}
The theta function \eqref{thtdef} obeys the following useful identities
\begin{equation}
\theta(-z,-m\,|\,\tau)=-\textrm{{\large e}}^{\mathsf{i}(2\pi\llbracket m\rrbracket-z)/r}\theta(z,m\,|\,\tau)\,,\quad\theta(z+nr\pi\tau,m\,|\,\tau)=\textrm{{\large e}}^{\mathsf{i} n(\pi-z-\pi\tau(nr-1)/2)}\theta(z,m\,|\,\tau)\,,
\end{equation}
where $n\in\mathbb{Z}$.
A more general identity than the star-triangle relation \eqref{str}, is the following summation/integration identity\footnote{Spiridonov's elliptic beta integral \cite{Spiridonov-beta} corresponds to the $r=1$ case of this integral.}
\begin{equation}
\label{str3}
\displaystyle({\mathsf q}^r;{\mathsf q}^r)_\infty({\mathsf p}^r;{\mathsf p}^r)_\infty\,\sum_{y=0}^{r-1}\int_0^{2\pi}\,\frac{dz}{4\pi}\,\frac{\prod_{i=1}^6\Gamma(t_i\pm z,u_i\pm y)}{\Gamma(\pm 2z,\pm 2y)}\displaystyle=\!\!\!\!\prod_{1\leq i<j\leq6}\Gamma(t_i+t_j,u_i+u_j)\,,
\end{equation}
where
\begin{equation}
{\mathsf p},{\mathsf q},t_i\in\mathbb{C},\quad u_i\in\mathbb{Z},\quad |{\mathsf p}|,|{\mathsf q}|<1,\quad\mathop{\hbox{\rm Im}}\nolimits(t_i)>0\,,\quad i=1,\ldots,6\,,
\end{equation}
and the variables are restricted to satisfy
\begin{equation}
\sum_{i=1}^6t_i= 2\mathsf{i}\eta,\quad\sum_{i=1}^6u_i=0\,.
\end{equation}
The star-triangle relation \eqref{str} is related to the identity \eqref{str3} by the change of variables
\begin{equation}
\label{cov}
\begin{array}{rclrclrcl}
\displaystyle t_1&\!\!=\!\!&\displaystyle{ x_1+\mathsf{i} \alpha_1}, & t_3&\!\!=\!\!&\displaystyle{ x_3+\mathsf{i}\alpha_3}, & t_5&\!\!=\!\!&\displaystyle{x_2-\mathsf{i}(\alpha_1 +\alpha_3 -\eta)}\,, \\
\displaystyle t_2&\!\!=\!\!&\displaystyle{-x_1 +\mathsf{i}\alpha_1}, & t_4&\!\!=\!\!&\displaystyle{-x_3+\mathsf{i}\alpha_3}, & t_6&\!\!=\!\!&\displaystyle{-x_2-\mathsf{i}(\alpha_1+\alpha_3-\eta)}\,,
\end{array}
\end{equation}
and
\begin{equation}
\begin{array}{rclrclrcl}
\displaystyle u_1&\!\!=\!\!&\displaystyle m_1,& u_3&\!\!=\!\!&\displaystyle m_3, & u_5&\!\!=\!\!& \displaystyle m_2\,, \\
\displaystyle u_2&\!\!=\!\!&\displaystyle -m_1, & u_4&\!\!=\!\!& \displaystyle -m_3, & u_6&\!\!=\!\!& \displaystyle -m_2\,.
\end{array}
\end{equation}
It is the identity \eqref{str3} that is to be proven. This identity can be rewritten in the equivalent form
\begin{equation}
\label{str2}
I(t_1,\ldots,t_5,u_1,\ldots,u_5)=\sum_{y=0}^{r-1}\,\int_0^{2\pi}\,\rho(z,y,t_1,\ldots,t_5,u_1,\ldots,u_5)\,dz=\frac{4\pi}{({\mathsf q}^r;{\mathsf q}^r)_\infty({\mathsf p}^r;{\mathsf p}^r)_\infty}\,,
\end{equation}
where
\begin{equation}
\rho(z,y,t_1,\ldots,t_5,u_1,\ldots,u_5)=\frac{\prod_{i=1}^5\Gamma(t_i\pm z,u_i\pm y)\,\Gamma(A-t_i,U-u_i)}{\Gamma(\pm 2z,\pm 2y)\,\Gamma(A\pm z,U\pm y)\,\prod_{1\leq i<j\leq5}\,\Gamma(t_i+t_j,u_i+u_j)}\,,
\end{equation}
and
\begin{equation}
A=\sum_{i=1}^5t_i,\quad U=\sum_{i=1}^5u_i\,.
\end{equation}
The integral \eqref{str3} is then recovered by setting $t_6=2\mathsf{i}\eta-A$, and $u_6=-U$.
The integrand $\rho$ is $2\pi$-periodic in $z$
\begin{equation}
\rho(z+2\pi k,y,t_1,\ldots,t_5,u_1,\ldots,u_5)=\rho(z,y,t_1,\ldots,t_5,u_1,\ldots,u_5)\,,
\end{equation}
for $k\in\mathbb{Z}$.
For $|\mathop{\hbox{\rm Im}}\nolimits(A)|<|\mathop{\hbox{\rm Im}}\nolimits(2\mathsf{i}\eta)|$, the integrand $\rho$ has the following poles lying in the upper half plane
\begin{equation}
\label{rhopoles1}
\begin{array}{c}
\displaystyle\left\{t_i+\pi\sigma\left(rj+\llbracket u_i-y\rrbracket\right)+2\mathsf{i}\eta k+2\pi n,t_i+\pi\tau\left(r(1+j)-\llbracket u_i-y\rrbracket\right)+2\mathsf{i}\eta k+2\pi n\right\}\,,\\[0.3cm]
\displaystyle\left\{-A+\pi\sigma\left(r(j+1)-\llbracket U+y\rrbracket\right)+2\mathsf{i}\eta(k+1)+2\pi n,-A+\pi\tau\left(rj+\llbracket U+y\rrbracket\right)+2\mathsf{i}\eta(k+1)+2\pi n\right\}\,,
\end{array}
\end{equation}
and the following poles lying in the lower half plane
\begin{equation}
\label{rhopoles2}
\begin{array}{c}
\displaystyle\left\{-t_i-\pi\sigma\left(rj+\llbracket u_i+y\rrbracket\right)-2\mathsf{i}\eta k+2\pi n,-t_i-\pi\tau\left(r(1+j)-\llbracket u_i+y\rrbracket\right)-2\mathsf{i}\eta k+2\pi n\right\}\,,\\[0.3cm]
\displaystyle\left\{A-\pi\sigma\left(r(j+1)-\llbracket U-y\rrbracket\right)-2\mathsf{i}\eta(k+1)+2\pi n,A-\pi\tau\left(rj+\llbracket U-y\rrbracket\right)-2\mathsf{i}\eta(k+1)+2\pi n\right\}\,,
\end{array}
\end{equation}
for $i=1,2,\ldots,5$, $n\in\mathbb{Z}$, and $j,k=0,1,\ldots$. By analyticity and periodicity one may also consider the integral \eqref{str2} (or \eqref{str3}) over any contour between the endpoints $z=0,2\pi$ in the strip $0<\mathop{\hbox{\rm Re}}\nolimits(z)<2\pi$, such that the contour separates the points of the two sets of poles \eqref{rhopoles1} and \eqref{rhopoles2} that lie in this strip; the $t_i$ may then be chosen to be any complex numbers for which such a contour exists.
The idea of the proof is to use a difference equation for $\rho$ to show that $I(t_1,\ldots,t_5,u_1,\ldots,u_5)$ is independent of $t_i$ and $u_i$, $i=1,\ldots,5$, and thus only depends on ${\mathsf p}$ and ${\mathsf q}$. Then one can evaluate $I(t_1,\ldots,t_5,u_1,\ldots,u_5)$ using residues at a special value of $t_i$ and $u_i$, to give \eqref{str2}.
The first step is to establish the following relation
\begin{equation}
\label{Ifunct}
I(t_1+\pi\sigma r,t_2,\ldots,t_5,u_1,\ldots,u_5)=I(t_1,\ldots,t_5,u_1,\ldots,u_5)\,.
\end{equation}
To establish this, observe that $\rho$ satisfies the following difference equation
\begin{equation}
\label{funct}
\begin{array}{l}
\displaystyle\rho(z,y,t_1+\pi\sigma,t_2,\ldots,t_5,u_1-1,u_2,\ldots,u_5)-\rho(z,y,t_1,\ldots,t_5,u_1,\ldots,u_5) \\[0.3cm]
\displaystyle\quad=G(z-\pi\sigma,y+1,t_1,\ldots,t_5,u_1,\ldots,u_5)-G(z,y,t_1,\ldots,t_5,u_1,\ldots,u_5)\,,
\end{array}
\end{equation}
where $G$ is defined as
\begin{equation}
\label{gdef}
\begin{array}{l}
\displaystyle G(z,y,t_1,\ldots,t_5,u_1,\ldots,u_5)=\displaystyle\rho(z,y,t_1,\ldots,t_5,u_1,\ldots,u_5)\,\times\\[0.3cm]
\displaystyle \qquad\qquad\qquad\qquad \textrm{{\large e}}^{2\mathsf{i}\pi\llbracket y-u_1\rrbracket/r}\,\frac{\textrm{{\large e}}^{\mathsf{i} t_1/r}}{\textrm{{\large e}}^{\mathsf{i} z/r}} \,\frac{\prod_{i=1}^5\theta(t_i+z,u_i+y\,|\,\tau)}{\prod_{i=2}^5\theta(t_1+t_i,u_1+u_i\,|\,\tau)}\frac{\theta(t_1+A,u_1+U\,|\,\tau)}{\theta(2z,2y\,|\,\tau)\,\theta(A+z,U+y\,|\,\tau)}\,.
\end{array}
\end{equation}
To see that \eqref{funct} holds, divide both sides of \eqref{funct} by $\rho(z,y,t_1,\ldots,t_5,u_1,\ldots,u_5)$ to obtain
\begin{equation}
\label{thtfunct}
\begin{array}{l}
\displaystyle\frac{\theta(t_1+z,u_1+y\,|\,\tau)\,\theta(t_1-z,u_1-y\,|\,\tau)}{\theta(A+z,U+y\,|\,\tau)\,\theta(A-z,U-y\,|\,\tau)}\prod_{i=2}^5\frac{\theta(A-t_i,U-u_i\,|\,\tau)}{\theta(t_1+t_i,u_1+u_i\,|\,\tau)}-1 \\[0.6cm]
\displaystyle\qquad\qquad=(-)\frac{\textrm{{\large e}}^{\mathsf{i} t_1/r}\,\theta(t_1+A,u_1+U\,|\,\tau)}{\prod_{i=2}^5\theta(t_1+t_i,u_1+u_i\,|\,\tau)}\left(\frac{\textrm{{\large e}}^{-\mathsf{i} z/r+2\mathsf{i}\pi\llbracket y-u_1\rrbracket/r}\prod_{i=1}^5\theta(t_i+z,u_i+y\,|\,\tau)}{\theta(2z,2y\,|\,\tau)\,\theta(A+z,U+y\,|\,\tau)}+\right. \\[0.6cm]
\displaystyle\left.\qquad\qquad\qquad\qquad\qquad\qquad\textrm{{\large e}}^{\mathsf{i} z/r}\textrm{{\large e}}^{2\mathsf{i}\pi\left(\llbracket y-u_1+1\rrbracket+\llbracket-2y-1\rrbracket\right)/r}\frac{\prod_{i=1}^5\theta(t_i-z,u_i-y\,|\,\tau)}{\theta(-2z,-2y\,|\,\tau)\,\theta(A-z,U-y\,|\,\tau)}\right)\,.
\end{array}
\end{equation}
Both sides of this relation are elliptic functions of $z$, sharing the same poles and corresponding residues; it then follows from Liouville's theorem that the difference of the two sides is a constant. The constant may be shown to be zero. Checking that \eqref{thtfunct} holds is straightforward, and involves simplifying many expressions in terms of $\llbracket m\rrbracket$. More details about the identity \eqref{thtfunct} are given in Appendix \ref{subsec:todo}.
Now integrate both sides of \eqref{funct} over $0\leq z\leq2\pi$ to obtain
\begin{equation}
\label{functinteg}
\begin{array}{l}
\displaystyle I(t_1+\pi\sigma,t_2,\ldots,t_5,u_1-1,u_2,\ldots,u_5)-I(t_1,\ldots,t_5,u_1,\ldots,u_5) \\[0.3cm]
\displaystyle\qquad\qquad\qquad=\sum_{y=0}^{r-1}\left(\int_{-\mathsf{i}\pi\mathop{\hbox{\rm Im}}\nolimits(\sigma)}^{2\pi-\mathsf{i}\pi\mathop{\hbox{\rm Im}}\nolimits(\sigma)}-\int_0^{2\pi}\right)dz\,G(z,y,t_1,\ldots,t_5,u_1,\ldots,u_5)\,,
\end{array}
\end{equation}
where the $2\pi$-periodicity of $G$ has been used for the first integral on the right hand side. This first integral runs along the straight line connecting the two points $z=-\mathsf{i}\pi\mathop{\hbox{\rm Im}}\nolimits(\sigma)$ and $z=2\pi-\mathsf{i}\pi\mathop{\hbox{\rm Im}}\nolimits(\sigma)$. For $\mathop{\hbox{\rm Im}}\nolimits(t_i)>0$ and $\mathop{\hbox{\rm Im}}\nolimits(A)<\mathop{\hbox{\rm Im}}\nolimits(\pi\tau)$, the function $G$ in \eqref{gdef} has poles in the upper half plane at the points
\begin{equation}
\label{Gpoles1}
\begin{array}{c}
\displaystyle\left\{t_i+\pi\sigma\left(rj+\llbracket u_i-y\rrbracket\right)+2\mathsf{i}\eta k+2\pi n,t_i+\pi\tau\left(r(1+j)-\llbracket u_i-y\rrbracket\right)+2\mathsf{i}\eta k+2\pi n\right\}\,,\\[0.3cm]
\displaystyle\left\{-A+\pi\sigma\left(r(j+1)+\llbracket U+y\rrbracket\right)+2\mathsf{i}\eta(k+1)+2\pi n,-A+\pi\tau\left(r(j+1)+\llbracket U+y\rrbracket\right)+2\mathsf{i}\eta(k+1)+2\pi n\right\}\,, \\[0.3cm]
\displaystyle\left\{-A+\pi\tau\left(r(j+1)+\llbracket U+y\rrbracket\right)+2\pi n,-A+\pi\tau\left(\llbracket U+y\rrbracket\right)+2\mathsf{i}\eta (k+1)+2\pi n\right\}\,,
\end{array}
\end{equation}
and for $\llbracket U+y\rrbracket\neq0$,
\begin{equation}
\label{Gpoles11}
\left\{-A+\pi\tau\llbracket U+y\rrbracket+2\pi n\right\}\,,
\end{equation}
and the following poles in the lower half plane
\begin{equation}
\label{Gpoles2}
\begin{array}{c}
\displaystyle\left\{-t_i-\pi\sigma\left(r(j+1)+\llbracket u_i+y\rrbracket\right)-2\mathsf{i}\eta(k+1)+2\pi n,-t_i-\pi\tau\left(r(j+1)-\llbracket u_i+y\rrbracket\right)-2\mathsf{i}\eta(k+1)+2\pi n\right\}\,,\\[0.3cm]
\displaystyle\left\{-t_i-\pi\sigma\left(r(j+1)+\llbracket u_i+y\rrbracket\right)-2\mathsf{i}\eta k+2\pi n,-t_i-\pi\sigma\left(rj+\llbracket u_i+y\rrbracket\right)-2\mathsf{i}\eta (k+1)+2\pi n\right\}\,,\\[0.3cm]
\displaystyle\left\{A-\pi\sigma\left(r(j+1)-\llbracket U-y\rrbracket\right)-2\mathsf{i}\eta(k+1)+2\pi n,A-\pi\tau\left(rj+\llbracket U-y\rrbracket\right)-2\mathsf{i}\eta(k+1)+2\pi n \right\}\,,
\end{array}
\end{equation}
and for $\llbracket u_i+y\rrbracket\neq0$,
\begin{equation}
\label{Gpoles22}
\left\{-t_i-\pi\sigma\llbracket u_i+y\rrbracket+2\pi n\right\}\,,
\end{equation}
for $i=1,\ldots,5$, $n\in\mathbb{Z}$, and $j,k=0,1,\ldots$. Since for $\mathop{\hbox{\rm Im}}\nolimits(t_i)>0$ and $\mathop{\hbox{\rm Im}}\nolimits(A)<\mathop{\hbox{\rm Im}}\nolimits(\pi\tau)$ there are no poles in the strip $0>\mathop{\hbox{\rm Im}}\nolimits(z)>-\mathop{\hbox{\rm Im}}\nolimits(\pi\sigma)$, one can shift the contour of integration in \eqref{functinteg} from the line $\mathop{\hbox{\rm Im}}\nolimits(z)=-\mathop{\hbox{\rm Im}}\nolimits(\pi\sigma)$ to the real axis on $[0,2\pi]$, obtaining zero on the right hand side, thus
\begin{equation}
\label{Ifunct2}
I(t_1+\pi\sigma,t_2,\ldots,t_5,u_1-1,u_2,\ldots,u_5)=I(t_1,\ldots,t_5,u_1,\ldots,u_5)\,.
\end{equation}
Applying this identity $r$ times (as long as parameters remain in our allowed region), one has now obtained Equation \eqref{Ifunct}:
\begin{equation}
I(t_1+\pi\sigma r,t_2,\ldots,t_5,u_1,\ldots,u_5)=I(t_1,\ldots,t_5,u_1,\ldots,u_5)\,.
\end{equation}
In a similar fashion, by requiring that $\mathop{\hbox{\rm Im}}\nolimits(A)<\mathop{\hbox{\rm Im}}\nolimits(\pi\sigma)$, one can establish the following difference relation
\begin{equation}
I(t_1+\pi\tau r,t_2,\ldots,t_5,u_1,\ldots,u_5)=I(t_1,\ldots,t_5,u_1,\ldots,u_5)\,.
\end{equation}
For now set $\mathop{\hbox{\rm Re}}\nolimits(\sigma),\mathop{\hbox{\rm Re}}\nolimits(\tau)=0$, $\mathop{\hbox{\rm Im}}\nolimits(\sigma)>\mathop{\hbox{\rm Im}}\nolimits(\tau)$, and $r\sigma n\neq r\tau k$ for any $n,k=0,1,\ldots$. Also let the real parts of $A$, and $t_i$, $i=1,\ldots,5$, be non-zero and differ from each other. For the integral $I(t_1,\ldots,t_5,u_1,\ldots,u_5)$ in \eqref{str2}, deform the contour of integration from $z\in[0,2\pi]$, such that the poles in \eqref{rhopoles1}, and the following points lie above the contour
\begin{equation}
\begin{array}{rcl}
\displaystyle\left\{t_1+\mathsf{i}\pi x\right.&\!\!\!\!|\!\!\!\!&\displaystyle x\geq\min\left(\mathop{\hbox{\rm Im}}\nolimits(\sigma)\llbracket u_1-y\rrbracket,\mathop{\hbox{\rm Im}}\nolimits(\tau)(r-\llbracket u_1-y\rrbracket)\right)\,\wedge \\[0.3cm]
&& \left.\displaystyle x\leq\max\left(\mathop{\hbox{\rm Im}}\nolimits(\sigma)\llbracket u_1-y\rrbracket,\mathop{\hbox{\rm Im}}\nolimits(\tau)(r-\llbracket u_1-y\rrbracket)\right)+2\mathop{\hbox{\rm Im}}\nolimits(\sigma)r\right\}\,,\\[0.3cm]
\displaystyle\left\{-A+2\mathsf{i}\eta+\mathsf{i}\pi x\right.&\!\!\!\!|\!\!\!\!&\displaystyle x\geq\min\left(\mathop{\hbox{\rm Im}}\nolimits(\sigma)(r-\llbracket U+y\rrbracket),\mathop{\hbox{\rm Im}}\nolimits(\tau)\llbracket U+y\rrbracket\right)-2\mathop{\hbox{\rm Im}}\nolimits(\sigma)r\;\wedge \\[0.3cm]
&& \left.\displaystyle x\leq\max\left(\mathop{\hbox{\rm Im}}\nolimits(\sigma)(r-\llbracket U+y\rrbracket),\mathop{\hbox{\rm Im}}\nolimits(\tau)\llbracket U+y\rrbracket\right)\right\}\,,
\end{array}
\end{equation}
and the poles in \eqref{rhopoles2}, and the following points lie below the contour
\begin{equation}
\begin{array}{rcl}
\displaystyle\left\{-t_1-\mathsf{i}\pi x\right.&\!\!\!\!|\!\!\!\!&\displaystyle x\geq\min\left(\mathop{\hbox{\rm Im}}\nolimits(\sigma)\llbracket u_1+y\rrbracket,\,\mathop{\hbox{\rm Im}}\nolimits(\tau)(r-\llbracket u_1+y\rrbracket)\right)\,\wedge \\[0.3cm]
&& \left.\displaystyle x\leq\max\left(\mathop{\hbox{\rm Im}}\nolimits(\sigma)\llbracket u_1+y\rrbracket,\,\mathop{\hbox{\rm Im}}\nolimits(\tau)(r-\llbracket u_1+y\rrbracket)\right)+2\mathop{\hbox{\rm Im}}\nolimits(\sigma)r\right\}\,,\\[0.3cm]
\displaystyle\left\{A-2\mathsf{i}\eta-\mathsf{i}\pi x\right.&\!\!\!\!|\!\!\!\!&\displaystyle x\geq\min\left(\mathop{\hbox{\rm Im}}\nolimits(\sigma)(r-\llbracket U-y\rrbracket),\,\mathop{\hbox{\rm Im}}\nolimits(\tau)(\llbracket U-y\rrbracket)\right)-2\mathop{\hbox{\rm Im}}\nolimits(\sigma)r\;\wedge \\[0.3cm]
&& \left.\displaystyle x\leq\max\left(\mathop{\hbox{\rm Im}}\nolimits(\sigma)(r-\llbracket U-y\rrbracket),\,\mathop{\hbox{\rm Im}}\nolimits(\tau)(\llbracket U-y\rrbracket)\right)\right\}\,.
\end{array}
\end{equation}
These sets of points correspond to lines in the complex plane with constant real part. Depending on the values of $\mathop{\hbox{\rm Re}}\nolimits(t_1)$ and $\mathop{\hbox{\rm Re}}\nolimits(A)$, one should translate a set of points by a multiple of $2\pi$ if needed, so that the points always lie in the strip $0\leq\mathop{\hbox{\rm Re}}\nolimits(z)\leq2\pi$, and the contour of integration remains in this strip. Using \eqref{Ifunct} one then performs $n$ shifts on the variable $t_1$, of the form $t_1\rightarrow t_1+\pi\tau r$, until $t_1+\pi\tau rn$ enters the set of points $\left\{t_1+\pi\sigma r x\,|\,1\leq x \leq 2\right\}$. Then one transforms $t_1\rightarrow t_1-\pi\sigma r$. Under these transformations the poles of the integrand $\rho$ never cross the contour of integration. Thus one obtains
\begin{equation}
I(t_1+\pi\tau rj-\pi\sigma rk,t_2,\ldots,t_5,u_1,\ldots,u_5)=I(t_1,\ldots,t_5,u_1,\ldots,u_5)\,,
\end{equation}
for all $j,k=0,1,\ldots$ such that $\pi\mathop{\hbox{\rm Im}}\nolimits(\tau)rj-\pi\mathop{\hbox{\rm Im}}\nolimits(\sigma)rk\in [0,\pi\mathop{\hbox{\rm Im}}\nolimits(\sigma)r]$.
The set of such points is dense, thus $I$ does not depend on $t_1$, and by symmetry not on any $t_i$. Then from \eqref{Ifunct2} one has
\begin{equation}
I(t_1+\pi\sigma,t_2,\ldots,t_5,u_1-1,u_2,\ldots,u_5)=I(t_1,\ldots,t_5,u_1-1,u_2,\ldots,u_5)=I(t_1,\ldots,t_5,u_1,\ldots,u_5)\,.
\end{equation}
It follows that $I$ also does not depend on $u_1$ and by symmetry on any $u_i$. Thus $I$ can only depend on ${\mathsf p}$ and ${\mathsf q}$.
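The density argument used above can be illustrated numerically: modulo $\pi\mathop{\hbox{\rm Im}}\nolimits(\sigma)r$, the points $\pi\mathop{\hbox{\rm Im}}\nolimits(\tau)rj-\pi\mathop{\hbox{\rm Im}}\nolimits(\sigma)rk$ form the orbit of an irrational rotation, whose gaps shrink as more points are taken. In the sketch below the ratio $\sqrt{2}$ is an illustrative stand-in for $\mathop{\hbox{\rm Im}}\nolimits(\tau)/\mathop{\hbox{\rm Im}}\nolimits(\sigma)$, assumed only for demonstration.

```python
import math

# For irrational alpha = Im(tau)/Im(sigma), the shifts j*Im(tau) - k*Im(sigma)
# reduce, modulo Im(sigma), to the orbit {j*alpha mod 1} of an irrational
# rotation, which fills the unit interval densely.
alpha = math.sqrt(2)      # illustrative irrational ratio

def max_gap(n):
    """Largest gap between consecutive points of {j*alpha mod 1 : j < n} on the circle."""
    pts = sorted((j * alpha) % 1.0 for j in range(n))
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    gaps.append(1.0 - pts[-1] + pts[0])    # wrap-around gap
    return max(gaps)

g_coarse, g_fine = max_gap(10), max_gap(1000)
```

The maximal gap shrinks steadily with the number of points, which is the quantitative content of the density statement.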
Set each $u_i=0$. In the limit $t_1+t_2\rightarrow0$, $\rho$ vanishes and the only contribution to the integral $I$ is from two finite residues coming from poles which cross the contour of integration for $y=0$. Evaluating the integral $I$ in this limit from its residues then gives the right hand side of \eqref{str2}.
By analytic continuation one may then extend the domain of parameter values to that allowed by the contour between the endpoints $z=0,2\pi$.
\app{The identity \eqref{thtfunct}}
\label{subsec:todo}
In order to analyse the identity \eqref{thtfunct} at its poles, it is convenient to write the residues of both sides of \eqref{thtfunct} in terms of the following theta function\footnote{This resembles more closely the standard theta function appearing in the literature, {\it e.g.}\ \cite{Spiridonov-proofs,Rains-transformations}, up to a change of variables. }
\begin{equation}
\label{oldtheta}
\theta(z\,|\,\tau)=(\textrm{{\large e}}^{\mathsf{i} z};{\mathsf q}^r)_\infty(\textrm{{\large e}}^{-\mathsf{i} z}{\mathsf q}^r;{\mathsf q}^r)_\infty\,.
\end{equation}
Then one finds that the arguments of the theta functions appearing on both sides of \eqref{thtfunct} differ by a simple shift $\pi\tau rk$, $k\in\mathbb{Z}$. The theta function \eqref{oldtheta} obeys the following useful identity
\begin{equation}
\label{thtident}
\theta(z+\pi\tau rk\,|\,\tau)=\frac{\theta(z\,|\,\tau)}{(-\textrm{{\large e}}^{\mathsf{i} z})^k\,\textrm{{\large e}}^{\mathsf{i}\pi\tau rk(k-1)/2}},\quad k\in\mathbb{Z}\,.
\end{equation}
To show that \eqref{thtfunct} holds requires repeated use of this identity.
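As a numerical sanity check of \eqref{thtident}, the sketch below implements \eqref{oldtheta} as a truncated product. It assumes the identification $q=\textrm{{\large e}}^{\mathsf{i}\pi\tau}$, so that $q^r=\textrm{{\large e}}^{\mathsf{i}\pi\tau r}$; this is the convention for which the shift by $\pi\tau r$ in \eqref{thtident} holds.

```python
import cmath

def qpoch(a, Q, terms=200):
    """Truncated q-Pochhammer symbol (a; Q)_infinity = prod_{n>=0} (1 - a*Q^n)."""
    prod = 1.0 + 0j
    for n in range(terms):
        prod *= 1 - a * Q**n
    return prod

def theta(z, tau, r):
    """theta(z|tau) = (e^{iz}; q^r)_inf (e^{-iz} q^r; q^r)_inf with q = e^{i*pi*tau}."""
    Q = cmath.exp(1j * cmath.pi * tau * r)   # q^r, |Q| < 1 for Im(tau) > 0
    x = cmath.exp(1j * z)
    return qpoch(x, Q) * qpoch(Q / x, Q)

# quasi-periodicity: theta(z + pi*tau*r*k | tau)
#                  = theta(z|tau) / ((-e^{iz})^k * e^{i*pi*tau*r*k*(k-1)/2})
tau, r, z = 0.3 + 1.1j, 3, 0.7 + 0.2j
errs = []
for k in range(1, 4):
    lhs = theta(z + cmath.pi * tau * r * k, tau, r)
    rhs = theta(z, tau, r) / ((-cmath.exp(1j * z))**k
                              * cmath.exp(1j * cmath.pi * tau * r * k * (k - 1) / 2))
    errs.append(abs(lhs - rhs) / abs(rhs))
```

With $\mathop{\hbox{\rm Im}}\nolimits(\tau)>0$ the products converge rapidly, and the relative error stays at machine precision for the tested shifts.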
The right hand side of the identity \eqref{thtfunct} is chosen to have the same set of poles and residues as the left hand side. One wants to show that both sides of \eqref{thtfunct} define elliptic functions, sharing the same sets of poles and residues. Then by Liouville's theorem the difference of the two sides is a constant, which may be found to be zero. To do this, it should first be shown that no additional poles appear on the right hand side of \eqref{thtfunct}. It should also be shown that both sides are invariant under the shift $z\rightarrow z+\pi\tau r$, and thus define elliptic functions of $z$.
With the use of \eqref{thtident}, and identities in Appendix C, the calculations involved are straightforward and are summarised below.
\\\\
{\it No additional poles appear on the right hand side of \eqref{thtfunct}:} On the right hand side of \eqref{thtfunct}, there should be no poles appearing at the points $2z=\pi\tau\left(jr-\llbracket-2y\rrbracket\right)$, or equivalently at $2z=\pi\tau\left(jr+\llbracket2y\rrbracket\right)$, $j\in\mathbb{Z}$. By using the identity \eqref{thtident}, at the poles $2z=\pi\tau\left(jr-\llbracket-2y\rrbracket\right)$ the residues of the two terms on the right hand side of \eqref{thtfunct} differ by a factor
\begin{equation}
\exp\left(\mathsf{i}\pi k_{m}+\mathsf{i}\pi\sigma k_p+\mathsf{i}\pi\tau k_q+Ak_A+\sum_{i=1}^5{t_i}k_{t_i}\right)\,,
\end{equation}
for some $k_{m},k_p,k_q,k_A,k_{t_i}\in\mathbb{R}$. It can be shown that this factor is independent of the integer $j$, and is in fact unity, {\it i.e.}\ $k_p,k_q,k_A,k_{t_i}=0$ and $k_{m}=0\mbox{ mod }2$. Thus no additional poles appear on the right hand side of \eqref{thtfunct}.
\\\\
{\it Invariance under $z\rightarrow z+\pi\tau r$: } One may use the relation \eqref{thtident} to show that both sides are invariant under $z\rightarrow z+\pi\tau r$. This is the most straightforward property to check.
\\\\
{\it Difference of both sides of \eqref{thtfunct} is zero:} Evaluating both sides of \eqref{thtfunct} at the point $z=-t_1-\pi\tau\llbracket -u_1-y\rrbracket$, the left hand side gives $-1$, and the right hand side gives
\begin{equation}
\displaystyle\left.-\frac{\textrm{{\large e}}^{\mathsf{i} t_1/r}\textrm{{\large e}}^{\mathsf{i} z/r}\textrm{{\large e}}^{2\mathsf{i}\pi\left(\llbracket y-u_1+1\rrbracket+\llbracket-2y-1\rrbracket\right)/r}\,\theta(t_1+A,u_1+U\,|\,\tau)\,\prod_{i=1}^5\theta(t_i-z,u_i-y\,|\,\tau)}{\theta(-2z,-2y\,|\,\tau)\,\theta(A-z,U-y\,|\,\tau)\,\prod_{i=2}^5\theta(t_1+t_i,u_1+u_i\,|\,\tau)}\right|_{z=-t_1-\pi\tau\llbracket -u_1-y\rrbracket}\,.
\end{equation}
After inspection, all theta functions in the above expression cancel up to an overall factor that makes the whole expression equal to $-1$. Thus the difference of both sides of \eqref{thtfunct} is zero at the point $z=-t_1-\pi\tau\llbracket -u_1-y\rrbracket$. Since it has previously been shown that both sides are elliptic functions of $z$ sharing the same set of poles and corresponding residues, by Liouville's theorem the difference of both sides is a constant, which must therefore be zero. Thus the identity \eqref{thtfunct} holds.
\app{Useful identities}
Here $m\in\mathbb{Z}$, $\llbracket m\rrbracket$ denotes $m\mbox{ mod }r$, and $\llbracket m\rrbracket_\pm$ denotes $\llbracket m\rrbracket\llbracket-m\rrbracket$.
\begin{equation}
\llbracket m \rrbracket_\pm=\llbracket -m\rrbracket_\pm
\end{equation}
\begin{equation}
\llbracket-m\rrbracket+\llbracket m-1\rrbracket=r-1
\end{equation}
\begin{equation}
\llbracket m-1\rrbracket_\pm-\llbracket m\rrbracket_\pm+2\llbracket-m\rrbracket=r-1
\end{equation}
\begin{equation}
\llbracket m+1\rrbracket_\pm-\llbracket m\rrbracket_\pm+2\llbracket m\rrbracket=r-1
\end{equation}
\begin{equation}
(2\llbracket m\rrbracket -r)\llbracket m\rrbracket_\pm-(2\llbracket m-1\rrbracket-r)\llbracket m-1\rrbracket_\pm=-(r-1)(r-2)-6\llbracket-m\rrbracket+6\llbracket m\rrbracket_\pm
\end{equation}
\begin{equation}
(2\llbracket m\rrbracket -r)\llbracket m\rrbracket_\pm-(2\llbracket m+1\rrbracket-r)\llbracket m+1\rrbracket_\pm=(r-1)(r-2)+6\llbracket m\rrbracket-6\llbracket -m\rrbracket_\pm
\end{equation}
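All of the above identities can be verified by machine; the following sketch checks them for a range of $r$ and all residues $m$, with \texttt{br(m)} standing for $\llbracket m\rrbracket$ and \texttt{pm(m)} for $\llbracket m\rrbracket_\pm$.

```python
def check_identities(r):
    """Verify the [[m]] identities of this appendix for all residues mod r."""
    br = lambda m: m % r                 # [[m]] := m mod r, in {0, ..., r-1}
    pm = lambda m: br(m) * br(-m)        # [[m]]_pm := [[m]] [[-m]]
    for m in range(-3 * r, 3 * r + 1):
        assert pm(m) == pm(-m)
        assert br(-m) + br(m - 1) == r - 1
        assert pm(m - 1) - pm(m) + 2 * br(-m) == r - 1
        assert pm(m + 1) - pm(m) + 2 * br(m) == r - 1
        assert ((2 * br(m) - r) * pm(m) - (2 * br(m - 1) - r) * pm(m - 1)
                == -(r - 1) * (r - 2) - 6 * br(-m) + 6 * pm(m))
        assert ((2 * br(m) - r) * pm(m) - (2 * br(m + 1) - r) * pm(m + 1)
                == (r - 1) * (r - 2) + 6 * br(m) - 6 * pm(-m))
    return True
```

Python's \texttt{\%} operator returns a representative in $\{0,\ldots,r-1\}$ also for negative $m$, matching the definition of $\llbracket m\rrbracket$ used here.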
\section{Introduction \label{sec:outline}}
The Standard Model (SM) of particle physics is very successful and it has withstood all precision tests over almost 40 years. All SM particles have been discovered\footnote{In addition neutrinos were found to be massive, which requires in its simplest form only the addition of three right handed fermionic singlets.}, except for the ingredient connected to electroweak symmetry breaking (EWSB), namely the Higgs boson. The Large Hadron Collider (LHC) was built to test EWSB and to find or rule out the Higgs boson. In addition, the LHC aims at detecting new physics, which is suggested to exist in the TeV range in order to solve the so-called hierarchy problem. However, the ATLAS and CMS detectors at the LHC have so far not found any sign of new physics, and the remaining Higgs mass range has shrunk considerably \cite{ATLAS-CONF-2011-157}. This suggests considering scenarios where nothing but the SM is seen.
The essence of the hierarchy problem \cite{Weinberg:1975gm,*Weinberg:1979bn,*Gildener:1976ai,*Susskind:1978ms,*tHooft:1980xb} is the fact that quantum corrections generically destroy the separation of two scales of scalar Quantum Field Theories (QFT). It is thus not possible to understand how the electroweak scale could be many orders of magnitude smaller than the scale of an embedding QFT. Conventional solutions of the hierarchy problem stay within QFT. One solution is to postulate a new symmetry (Supersymmetry) which cancels the problematic quadratic divergences. Alternatively the scalar sector may be considered effective (composite) such that form factors remove large quadratic divergences. Another idea is that the Higgs particle could be a pseudo-Goldstone-boson such that its mass is naturally somewhat lower than a scale where richer new physics exists. However, none of these ideas has so far shown up in experiments.
This prompts us to consider new ideas. An important observation is the fact that the SM cannot anyway be valid up to an arbitrarily high energy scale due to triviality \cite{Lindner:1985uk}, and since gravity must affect elementary particle physics at the latest at the Planck scale, \(M_{pl}=\unit[1.2\times10^{19}]{GeV}\). Note that this introduces a conceptual asymmetry, as gravity is known to be non-renormalizable, i.e. it {\em cannot} be a QFT in the usual sense and requires fundamentally new ingredients. This looks bad from the perspective of renormalizable gauge theories, not least since we can so far only guess which concepts might be at work. However, this may also be good in two ways: First, renormalizable QFTs do not allow one to calculate absolute masses and absolute couplings. Any embedding of the SM into some other renormalizable QFT (like GUTs) would therefore only shift the problem to a new theory which is also unable to determine the absolute values of these quantities. In other words: The problems of gravity may be a sign of physics based on new concepts which may ultimately allow one to determine absolute masses, mixings and couplings. However, there is no need for the SM to be directly embedded into gravity, and various layers of conventional gauge theories (LR, PS, GUT, ...) could be in-between. The second reason why an embedding involving new concepts beyond renormalizable QFTs might be good is that this asymmetry might offer new solutions to the hierarchy problem. In other words: The conventional approach of interpreting quadratic divergences in the Higgs mass correction by simply substituting \(\Lambda^2\rightarrow M_{pl}^2\) might not be correct if Planck scale physics is based on new physical concepts different from conventional QFT. This view is also shared by Meissner and Nicolai \cite{Meissner:2007xv}. The point is that the unknown new concepts may allow for mechanisms which stabilize a low-lying effective QFT from the perspective of the Planck scale.
From the perspective of the low-lying effective QFT this may then appear to be a hierarchy problem if one tries to embed it into a renormalizable QFT instead of the theory which is based on the new, extended concepts.
\setlength{\unitlength}{5cm}
\begin{figure}
\begin{center}
\begin{picture}(1,1)
\put(0,0.1){\vector(0,1){0.8}}
\put(0,0.1){\vector(1,0){1}}
\put(1.01,0){$\mu$}
\put(-0.15,0.93){$m_H^2(\mu)$}
\put(0,0.2){\line(4,1){0.65}}
\put(-0.23,0.2){$m_H^2(v)$}
\put(0.6,0){$M_{pl}$}
\multiput(0.65,0.1)(0,0.05){17}
{\line(0,1){0.02}}
\put(0.67,0.5){\tiny Non-field theoretic}
\put(0.67,0.45){\tiny quantum gravity}
\put(0.67,0.4){\tiny region}
\put(0.45,0.55){\vector(1,-1){0.15}}
\put(0.65,0.36){\circle*{0.05}}
\put(0.1,0.65){\tiny Imprint of Higgs mass}
\put(0.1,0.6){\tiny left by quantum gravity}
\put(0.2,0.85){\textbf{QFT}}
\put(0.17,0.78){\textbf{regime}}
\put(0.7,0.85){\textbf{Beyond}}
\put(0.73,0.78){\textbf{QFT}}
\end{picture}
\end{center}
\caption{The SM Higgs mass could be determined and fixed by unknown physics connected to quantum gravity, which should be based on new concepts other than conventional QFT. The running of the Higgs mass from the Planck scale down to the electroweak scale is fully dictated by the SM as a QFT.}
\label{fig:remnant}
\end{figure}
\setlength{\unitlength}{1pt}
In condensed matter physics, for instance, the free energy of a superconductor in Ginzburg-Landau theory is described by:
\begin{align}
E&\approx \alpha |\phi|^2 +\beta |\phi|^4+\ldots
\end{align}
where \(\alpha\) and \(\beta\) are phenomenological parameters. Within the Ginzburg-Landau framework these parameters have to be determined by experiment, but they can be calculated from the microscopic theory of superconductivity, namely the BCS theory. In this sense the microscopic theory fixes the parameters, or boundary conditions, of the low energy effective theory. The parameters of the low energy Ginzburg-Landau theory ``know'' the boundary conditions set by the underlying BCS theory, but many dynamical details of the BCS theory are lost in the Ginzburg-Landau effective theory, even though BCS theory does not provide a mechanism to explain hierarchies.
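As a small worked example of this logic, minimising the Ginzburg-Landau energy for \(\alpha<0<\beta\) gives \(|\phi_{\min}|^2=-\alpha/(2\beta)\) and \(E_{\min}=-\alpha^2/(4\beta)\); the numerical values below are illustrative stand-ins for what a microscopic computation would supply.

```python
# E(phi) = alpha*|phi|^2 + beta*|phi|^4 with alpha < 0 < beta has its minimum
# at |phi|^2 = -alpha/(2*beta), with E_min = -alpha^2/(4*beta).
alpha, beta = -1.0, 0.5    # illustrative values, as if supplied by a microscopic theory

phi2_min = -alpha / (2.0 * beta)
E_min = alpha * phi2_min + beta * phi2_min**2

# brute-force check that no sampled value of |phi|^2 does better
E = lambda p2: alpha * p2 + beta * p2**2
sampled_min = min(E(0.001 * i) for i in range(4001))
```

The low energy theory only sees \(\phi_{\min}\) and \(E_{\min}\); where the numbers \(\alpha\) and \(\beta\) come from is invisible at this level.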
The above considerations prompt us to speculate that the SM might be valid up to the Planck scale, where it is embedded directly into Planck scale physics without any intermediate energy scale. The new concepts behind the Planck scale physics might then offer a solution to the hierarchy problem which is no longer visible when one looks at the SM only. In analogy to the superconductivity example the only way how the SM would ``know'' about such an embedding could be special boundary conditions similar to compositeness conditions or auxiliary field conditions in theories where redundant degrees of freedom are eliminated in embeddings. This scenario is depicted in Fig.~(\ref{fig:remnant}).
Following the logic outlined above it would be essential to have only the weak and the Planck scale and nothing in between, since otherwise one would have to solve the hierarchy problem within QFT. We therefore forbid any kind of new intermediate energy scale between the weak and the Planck scale in order to avoid a large hierarchy between the Higgs mass and the mass of a heavy intermediate particle. This view is also in the spirit of the \(\nu\mathrm{MSM}\) proposed by Shaposhnikov \cite{Shaposhnikov:2007nj}. Even without any intermediate scale one might still ask why there should not be any radiative corrections of Planck scale size to the Higgs mass parameter. We do not have an argument here, but we point the interested reader to the work of Bardeen \cite{Bardeen:1995kv}, which argues that the quadratic correction is spurious.
Following this logic, in this letter we simply assume that the SM is valid up to the Planck scale, and that quantum gravity leaves certain boundary conditions for the Higgs quartic coupling \(\lambda\).
\section{Boundary conditions on \texorpdfstring{$\lambda$}{lambda} at the Planck scale\label{sec:boundary}}
It is well known that the SM cannot be extrapolated to arbitrarily high energies. A first constraint comes from the fact that at very high energies far above the Planck scale the \(U(1)_Y\) gauge coupling diverges. A more important piece of theoretical input comes from the SM Higgs sector: for small Higgs masses of approximately less than \(\unit[127]{GeV}\), the contributions from top loops drive the Higgs self-coupling towards negative values before the Planck scale and thus make the Higgs potential unstable \cite{Lindner:1988ww,Sher:1988mj}. For Higgs masses larger than approximately \(\unit[170]{GeV}\), the Higgs self-coupling drives itself towards a Landau pole before the Planck scale; this is the so-called \emph{triviality bound} \cite{Lindner:1985uk}.
Suppose that the LHC only detects the SM Higgs boson with a mass in the range between \(\unit[127-150]{GeV}\) and nothing else. The triviality and vacuum stability bounds then imply that the SM can be effectively valid up to the Planck scale. The SM parameters could then be directly determined by Planck scale physics, and one might ask whether there is a way to gain information on this type of physics from measurements performed at the LHC. Motivated by an asymptotic safety scenario of gravity, Shaposhnikov and Wetterich \cite{Shaposhnikov:2009pv} have, for example, proposed that both the Higgs self-interaction $\lambda$ and its beta function $\beta_{\lambda}$ should simultaneously vanish at the Planck scale, from which they derive the prediction $m_H=126$ GeV. At first sight it seems remarkable that both conditions can be fulfilled at the same time, and this prompted us to look at such types of boundary conditions in more detail.
We discuss therefore the following boundary conditions which we imagine to be imposed on the SM in the spirit of this paper by some version of quantum gravity\footnote{Boundary conditions of this type have also been discussed in the context of anthropic considerations in the multiverse \cite{Hall:2009nd}.}:
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item Vacuum stability\\ \(\lambda(M_{pl})=0\) \cite{Froggatt:1995rt,Shaposhnikov:2009pv,Casas:1994qy,Casas:1996aq,Sher:1988mj,Lindner:1988ww,Isidori:2007vm}.
\item vanishing of the beta function of \(\lambda\)\\ \(\beta_{\lambda}(M_{pl})=0\) \cite{Froggatt:1995rt,Shaposhnikov:2009pv}.
\item the Veltman condition \\ $\mathrm{Str}\mathcal{M}^2=0$ \cite{Chaichian:1995ef,Veltman:1980mj,Decker:1979nk}, \\ which states that the quadratic divergent part of the one-loop radiative correction to the Higgs bare mass parameter $m^2$ should vanish\footnote{Note that the notation of \(\mathrm{Str}\mathcal{M}^2\) for the Veltman condition includes only the correction to \(m\) by the running coupling which is not the direct matching of respective pole masses. Note also, that we have only included the top quark Yukawa coupling \(\lambda_t\) and omitted the other Yukawa couplings, as they do not contribute significantly to the Higgs mass running compared to the contributions from \(\lambda\), \(\lambda_t\), and \(SU(2)_L\times U(1)_Y\) gauge couplings \(g_2\) and \(g_1\). It is known that the Veltman condition is scheme dependent as the quadratic divergence from different particles is not necessarily the same. However if we assume a common cut-off for all the particle contributions, which may appear appropriate for our scenario, we will obtain a range of Higgs masses which is still not excluded by the experiments.}:
\begin{align}
\delta m^2 &=\frac{\Lambda^2}{32\pi^2 v^2} \mathrm{Str}\mathcal{M}^2
\label{eq:strm} \displaybreak[0] \\
&=\frac{1}{32\pi^2}\left(\frac{9}{4}g_2^2+\frac{3}{4}g_1^2+6\lambda-6\lambda_t^2 \right)\Lambda^2.
\label{eq:veltman}
\end{align}
\item vanishing anomalous dimension of the Higgs mass parameter\\ \(\gamma_m(M_{pl})=0,\, m(M_{pl})\neq0\).
\end{itemize}
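For orientation, the tree-level condition \eqref{eq:veltman} can be solved for \(\lambda\) in closed form, \(\lambda=\lambda_t^2-\tfrac{3}{8}g_2^2-\tfrac{1}{8}g_1^2\). The sketch below does this for illustrative coupling values of roughly Planck-scale size; they are assumptions for demonstration, not the output of our RGEs.

```python
def veltman_lambda(g1, g2, lam_t):
    """Solve (9/4)*g2^2 + (3/4)*g1^2 + 6*lam - 6*lam_t^2 = 0 for lam."""
    return lam_t**2 - (3.0 / 8.0) * g2**2 - (1.0 / 8.0) * g1**2

# illustrative (assumed) values for the running couplings near the Planck scale
g1, g2, lam_t = 0.47, 0.51, 0.38
lam = veltman_lambda(g1, g2, lam_t)

# the quadratically divergent one-loop combination then cancels by construction
residual = (9.0 / 4.0) * g2**2 + (3.0 / 4.0) * g1**2 + 6.0 * lam - 6.0 * lam_t**2
```

The small positive \(\lambda(M_{pl})\) obtained this way still has to be run down to \(\mu=M_t\) before it translates into a Higgs mass.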
As we aim to determine the Higgs mass resulting from different boundary conditions for \(\lambda\) imposed at the Planck scale, we use renormalization group equations (RGEs) to evolve the couplings. The relevant one- and two-loop beta functions required for solving the RGEs are listed in App.~(\ref{sec:a1}). As the uncertainty in the top mass is the dominant source of uncertainty for the resulting Higgs mass prediction, we treat the top mass as a free parameter (within a certain range) and show the dependence on the top mass explicitly. For each top mass value we determine the corresponding Higgs mass due to a given boundary condition on \(\lambda(M_{pl})\). For that we need to convert the top pole mass to its corresponding \(\overline{\mathrm{MS}}\) Yukawa coupling:
\begin{equation}
\lambda_t(M_t)=\frac{\sqrt{2}M_t}{v}\left(1+\delta_t(M_t) \right),
\label{eq:toppole}
\end{equation}
where \(\delta_t(M_t)\) is the matching correction for the top mass. The list of matching conditions used for \(\delta_t\) is given in App.~(\ref{sec:a2}). The matching scale is chosen to be \(\mu=M_t\), which is a suitable choice for a low Higgs mass range \cite{Hambye:1996wb}. We also consider threshold effects in the beta functions: the known gauge couplings \(g_i(M_Z)\) are run to the scale \(\mu=M_t\) without including the top loop contribution, and the values \(g_i(\mu=M_t)\) are then used in the subsequent RGEs, where we solve the coupled differential equations for \(g_i\), \(\lambda\) and \(\lambda_t\) with the boundary condition imposed. With a suitable boundary condition imposed for \(\lambda\) at the Planck scale, we can extract the Higgs mass at \(\mu=M_t\) after solving the RGEs. The \(\overline{\mathrm{MS}}\) Higgs quartic coupling is then matched to the pole mass with:
\begin{align}
\lambda(M_t)=\frac{M_H^2}{2v^2}(1+\delta_H(M_t)),
\label{eq:higgspole}
\end{align}
where the Higgs matching correction \(\delta_H\) is given in App.~(\ref{sec:a2}). Repeating the procedure for different values of the top pole mass, we obtain the Higgs mass determinations plotted in Fig.~(\ref{fig:pd1}).
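A minimal sketch of the matching step in Eqs.~\eqref{eq:toppole} and \eqref{eq:higgspole}, keeping only the tree-level relations plus the leading \(O(\alpha_s)\) QCD piece of \(\delta_t\) (the full corrections of App.~(\ref{sec:a2}) contain further QCD and electroweak terms), could look as follows; the value of \(\lambda(M_t)\) is illustrative.

```python
import math

v, M_t, alpha_s = 246.22, 173.2, 0.1184   # vev and top pole mass in GeV, alpha_s(M_Z)

# Leading O(alpha_s) QCD piece of the matching correction delta_t; the full
# delta_t used in the text contains further QCD and electroweak contributions.
delta_t = -4.0 * alpha_s / (3.0 * math.pi)
lam_t = math.sqrt(2.0) * M_t / v * (1.0 + delta_t)

# Inverting the Higgs matching at tree level (delta_H -> 0) for an
# illustrative value of the quartic coupling at mu = M_t:
lam = 0.13
M_H = math.sqrt(2.0 * lam) * v
```

Already the leading QCD term shifts the Yukawa coupling by about five percent, which is why the matching precision matters for the mass determinations below.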
Next we discuss some details regarding the boundary condition imposed for \(\lambda\) at the Planck scale, starting with the vacuum stability bound. To obtain the vacuum stability bound we need to consider two cases in solving the coupled differential equations \cite{Casas:1994qy,Casas:1996aq}:
\begin{enumerate}
\item We first impose the boundary conditions at tree-level, i.e. \(\lambda(M_{pl})=0\) and apply the one-loop beta functions and anomalous dimension equations in solving the RGEs numerically.
\item Two-loop beta functions and the anomalous dimension for \(m\) are used in our RGEs, and the effective potential is treated in the one-loop approximation. The condition that we then need to impose is:
\begin{align}
\lambda(M_{pl})&=\frac{1}{32\pi^2}\left(\frac{3}{8}(g_1^2(M_{pl})+g_2^2(M_{pl}))^2\left[\frac{1}{3}\right.\right. \nonumber \\
&\quad\left.-\log\frac{g_1^2(M_{pl})+g_2^2(M_{pl})}{4}\right]+6\lambda_t^4(M_{pl}) \nonumber \\ &\quad\left[\log\frac{\lambda_t^2(M_{pl})}{2}-1\right]+\frac{3}{4}g_2^4(M_{pl})\left[\frac{1}{3}\right. \nonumber \\
&\quad\left.\left.-\log\frac{g_2^2(M_{pl})}{4}\right]\right)
\end{align}
\end{enumerate}
Effectively we want to be consistent with the vacuum stability bound obtained from the effective potential. With this approach we can estimate the uncertainty from the difference of the Higgs masses obtained via the two cases above, which is effectively due to the omission of higher-order contributions to the beta functions and corrections to the effective potential.
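The first case can be sketched numerically as follows, using the standard one-loop SM beta functions (our analysis uses the two-loop expressions of App.~(\ref{sec:a1})) and tree-level matching. Since the gauge and Yukawa couplings do not depend on \(\lambda\) at one loop, one can first run them up from illustrative \(\overline{\mathrm{MS}}\) values at \(\mu=M_t\), impose \(\lambda(M_{pl})=0\), and run the full system back down; all numerical inputs are assumptions for demonstration.

```python
import math

def betas(c):
    """One-loop SM beta functions d(c)/d ln(mu); c = [g1, g2, g3, yt, lam],
    with g1 the hypercharge coupling in SM (non-GUT) normalization."""
    g1, g2, g3, yt, lam = c
    k = 1.0 / (16.0 * math.pi**2)
    return [k * (41.0 / 6.0) * g1**3,
            k * (-19.0 / 6.0) * g2**3,
            k * (-7.0) * g3**3,
            k * yt * (4.5 * yt**2 - 8.0 * g3**2 - 2.25 * g2**2 - (17.0 / 12.0) * g1**2),
            k * (24.0 * lam**2 - 6.0 * yt**4
                 + 0.375 * (2.0 * g2**4 + (g1**2 + g2**2)**2)
                 + (12.0 * yt**2 - 9.0 * g2**2 - 3.0 * g1**2) * lam)]

def run(c, t0, t1, steps=2000):
    """Fourth-order Runge-Kutta evolution in t = ln(mu/M_t) from t0 to t1."""
    h = (t1 - t0) / steps
    for _ in range(steps):
        k1 = betas(c)
        k2 = betas([y + 0.5 * h * d for y, d in zip(c, k1)])
        k3 = betas([y + 0.5 * h * d for y, d in zip(c, k2)])
        k4 = betas([y + h * d for y, d in zip(c, k3)])
        c = [y + h / 6.0 * (a + 2 * b + 2 * d + e)
             for y, a, b, d, e in zip(c, k1, k2, k3, k4)]
    return c

v, M_t, M_pl = 246.22, 173.2, 1.2e19
t_pl = math.log(M_pl / M_t)

# illustrative MSbar couplings at mu = M_t; the lam entry is a dummy on the way up
up = run([0.36, 0.65, 1.16, 0.94, 0.0], 0.0, t_pl)
up[4] = 0.0                     # impose the tree-level condition lam(M_pl) = 0
down = run(up, t_pl, 0.0)       # run back down to mu = M_t
lam_mt = down[4]
M_H = math.sqrt(2.0 * lam_mt) * v    # tree-level matching for the pole mass
```

The one-loop, tree-matched number obtained this way should only be read as an order-of-magnitude reproduction of the vacuum stability band in Fig.~(\ref{fig:pd1}); the two-loop beta functions and full matching corrections can shift it by several GeV.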
In the case of the Veltman condition, we only impose the tree-level relation given in Eq.~\eqref{eq:strm}, as higher order loops always come with factors of \(\log(M_{pl}/\mu)\) \cite{Einhorn:1992um}, which are cancelled if the running couplings are evaluated at the Planck scale \cite{Kolda:2000wi}. This is true, however, only if the complete beta functions for all the couplings are used to resum the logarithms to all orders.
\begin{figure*}
\centering
\includegraphics[width=.8\textwidth]{mtmh_new2.pdf}
\caption{Higgs and top (pole) mass determinations for different boundary conditions at the Planck scale. The coloured bands correspond to the conditions discussed in the text, which are also labelled in the inset. The middle of each band is the best value, while the width of the band is a ``RGE error band'' inferred from assuming that all omitted higher orders in the beta functions beyond two loops are limited by the difference between the one- and two-loop results. Note that the Veltman condition is truncated at the point where its Higgs mass prediction violates the vacuum stability bound (both at two loops). The gray-hatched line at the bottom is the lower direct Higgs mass bound from LEP. Similarly the purple (brown) lines indicate the LHC Higgs searches at 95\% (90\%) CL from the 2010 data. The black dashed lines show the electroweak precision fit from GFitter \cite{Baak:2011ze,Bardin:1999yd,*Arbuzov:2005ma} for 68\%, 95\% and 99\% confidence intervals (which include limits from radiative corrections and also the direct searches). }
\label{fig:pd1}
\end{figure*}
The gray hatched line in Fig.~(\ref{fig:pd1}) depicts the lower direct Higgs mass search bound from LEP \cite{Barate:2003sz}. The coloured hatched lines give the combined exclusion limits from ATLAS and CMS of \(\unit[141-476]{GeV}\) at 95\% confidence level (CL) and \(\unit[132-476]{GeV}\) at 90\% CL \cite{ATLAS-CONF-2011-157}. The experimentally favoured parameter range in the $m_t$--$m_h$ plane, taking into account direct Higgs searches and electroweak precision measurements, is indicated by the GFitter region \cite{Baak:2011ze,Bardin:1999yd,*Arbuzov:2005ma} in the plot. We show the dependence on the top mass outside of the range \(\unit[172.3-174.1]{GeV}\), corresponding to the best world average value of the top quark mass, \(\unit[173.2\pm0.9]{GeV}\) \cite{Lancaster:2011wr}, as methods that determine the top mass directly from the $t\overline{t}$ cross-section favour a smaller value of \(\unit[168.9^{+3.5}_{-3.4}]{GeV}\) \cite{Langenfeld:2009wd}.
We observe that most of the Higgs masses given by different conditions tend to overlap in the vicinity of the best determined value of the top mass. The triviality bound, represented by the approximate condition \(\lambda(M_{pl})=\pi\), yields a range of Higgs masses which is already excluded at 95\% CL by the Tevatron and LHC. The Higgs masses generated by the other conditions however are still allowed and not excluded yet by the experiments. The Veltman condition is truncated at the point where its Higgs mass calculated with two-loop beta functions starts to cross the vacuum stability bound obtained by two-loop RGEs. This is done in order to show the exact crossing point of these two conditions from two-loop RGEs.
Throughout this work, we define a measure of the uncertainties involved in the calculation as the difference between using one- and two-loop beta functions for all the relevant SM couplings in the determination of the Higgs pole mass. To be more precise, we define a ``RGE error band'' as the difference in determining the Higgs mass for a boundary condition of \(\lambda(M_{pl})\) with one- and two-loop beta functions, while the matching conditions remain the same in both cases. Possible errors due to the matching conditions will be discussed below. We caution the reader that this procedure probably overestimates the error stemming from neglecting higher order contributions to the beta functions. While there is no universally accepted way of estimating theoretical uncertainties\footnote{See \cite{Cacciari:2011zr} for a recent attempt at rigorously characterizing a perturbative theoretical uncertainty.}, there are other approaches to defining the theoretical error in the literature. For example, in Ref.~\cite{EliasMiro:2011aa} the authors define the theoretical error by the scale dependence of the matching conditions $\lambda(M_t)$ and $\lambda_t(M_t)$, while neglecting the effect of higher order RGEs, and arrive at a \(\unit[3]{GeV}\) uncertainty. In Ref.~\cite{Bezrukov:2009fr}, the authors estimate the theoretical uncertainty by comparing the situation where the matching is performed at $\mu=M_t$ to the case $\mu=M_Z$ and obtain an uncertainty of \(\unit[2.2]{GeV}\). We comment on possibilities to reduce the theoretical uncertainty at the end of the paper. If one rather believes in these methods of error estimation, the error bands shrink accordingly. In our plot, the aforementioned ``RGE error band'' is represented by the bandwidth of each curve, with its center representing the Higgs mass obtained from two-loop RGE running. The upper edges of the bands consist of the Higgs masses obtained from one-loop RGEs.
We also consider the uncertainty on the curves due to the error of the strong coupling constant \(\alpha_s=0.1184(7)\) \cite{Nakamura:2010zzi} and obtain a \(\pm \unit[1]{GeV}\) uncertainty on the Higgs mass, which is negligible when added in quadrature to the bandwidth in Fig.(\ref{fig:pd1}). Due to the relatively large ``RGE error band'', the error propagation from the strong coupling constant can be safely ignored. The theoretical error on the Higgs mass due to the matching uncertainty \cite{Hambye:1996wb,Hempfling:1994ar} between the top Yukawa \(\overline{\mathrm{MS}}\) coupling and the top pole mass is also considered. Comparing our vacuum stability band with the one obtained by Casas et al.\ \cite{Casas:1994qy,Casas:1996aq}, a discrepancy of around \(\pm\unit[7]{GeV}\) in the Higgs mass value obtained via two-loop RGEs is observed. This mismatch can be explained by the omission of the two-loop QCD matching condition by the authors of Refs.~\cite{Casas:1994qy,Casas:1996aq}, as they only considered the one-loop QCD, electroweak and QED contributions in the top mass matching condition. Since we would like to consider only the uncertainties due to the number of loops of the beta functions used, and not the errors caused by the omission of better matching precision, we include the QCD matching between the top Yukawa \(\overline{\mathrm{MS}}\) coupling and the top pole mass up to three loops. The resulting Higgs mass determined by vacuum stability with two-loop RGEs agrees with Ellis et al.\ \cite{Ellis:2009tp}. The \(\alpha\alpha_s\) correction \cite{Jegerlehner:2003sp} is neglected in our analysis as it only gives a small contribution. The Higgs pole mass is matched with \(\lambda\) at the top pole mass scale, i.e.\ the renormalization scale of \(\lambda\) is set to \(\mu=M_t\). Since the higher order matching conditions for \(\lambda\) have not been calculated in the literature, only the expression for \(\delta_H\) given in App.~(\ref{sec:a2}) will be used as our matching. 
If the matching of \(\lambda\) to the Higgs pole mass is performed only at tree level, the resulting discrepancy of the Higgs pole mass with respect to the one-loop matching result is found to be less than \(\unit[1]{GeV}\). Therefore, we can safely assume that higher order matching conditions for \(\lambda\) will not yield a larger uncertainty.
The error estimation for the boundary condition \(\beta_{\lambda}(M_{pl})=0\) is not the same as for the other conditions. A careful treatment of the Higgs mass extraction has to be implemented in this case. The one-loop beta function of \(\lambda\) is quadratic in \(\lambda\), so for a given top mass we obtain two positive solutions of the boundary condition \(\beta_{\lambda}(M_{pl})=0\) at the Planck scale, and thus two equally valid low-energy Higgs mass determinations. In Fig.(\ref{fig:pd1}) these two branches of solutions generate a hook-shaped trajectory in the \(M_H-M_t\) plane. The hook ends where \(\lambda\) starts to take negative values. Due to the mismatch of the end of the trajectory when either the one-loop or the two-loop beta function is applied, a larger ``RGE error band'' has to be taken into account: we generate error bars that cover the distance of the mismatch and plot a band that covers all the error bars. Besides the mismatch mentioned above, there exists another source of error, namely the number of loops of the beta function implemented in the condition \(\beta_{\lambda}(M_{pl})=0\). In principle one should apply the full beta function as the boundary condition, but in practice this is impossible, and therefore we have to check the possible uncertainty which arises due to the number of loops in \(\beta_{\lambda}\) used in the boundary condition. These errors lie, however, within the band. A similar uncertainty due to the number of loops used in the boundary condition also appears for the \(\gamma_m(M_{pl})=0\) condition, where it is larger in comparison to the \(\beta_{\lambda}(M_{pl})=0\) condition.
Another possible source of uncertainty in the determination of the Higgs mass from Planck scale boundary conditions is the precise value of the Planck scale. The difference between the Higgs masses obtained from a given boundary condition imposed at different values of the Planck scale, e.g.\ \(\mu=M_{pl}/\sqrt{8\pi}\), is negligible for most of the boundary conditions. The discrepancies are larger, however, for the lower branch of the \(\beta_{\lambda}=0\) condition. We will not discuss these uncertainties further.
Throughout this work we assume that neutrinos do not play a significant role in our Higgs mass prediction. We would like to caution the reader that neutrinos are indeed massive and could couple with an \(\mathcal{O}(1)\) Yukawa coupling to the Higgs, e.g.\ in the canonical type I see-saw mechanism. For a neutrino mass of \(m_{\nu}\approx \mathcal{O}(\unit[1]{eV})\) and a see-saw scale of approximately \(\unit[10^{15}]{GeV}\), the Yukawa coupling of the neutrino would be around \(\mathcal{O}(1)\). Such a large coupling between the Higgs boson and the neutrino could alter the Higgs mass prediction by modifying the beta function of the Higgs quartic coupling, as pointed out in Ref.~\cite{Casas:1999cd}. However, the interesting case, i.e.\ the SM with a neutrino sector extension with \(\mathcal{O}(1)\) Yukawa coupling that remains valid up to the Planck scale, has been excluded by recent Tevatron and LHC searches \cite{ATLAS-CONF-2011-157}. We therefore implicitly assume a scenario where the neutrino Yukawa couplings are not significantly larger than, for example, the bottom Yukawa coupling.
\section{Is a low Higgs mass favoured? \label{sec:probability}}
All the generic boundary conditions which we discussed prefer a low Higgs mass \(\unit[127]{GeV} \lesssim M_H\lesssim \unit[145]{GeV}\), which fits amazingly well with the existing experimental direct lower and upper bounds from LEP, Tevatron and LHC. Furthermore, these values fit very well with those preferred by electroweak precision measurements. One may therefore ask whether this is by chance or whether it has a specific reason. Let us first look at random values of \(\lambda(M_{pl})\), pretending in this way that every value could be realized by the new physics at the Planck scale. We assume for a moment that all values of \(\lambda(M_{pl})\) in the range \(\lambda\in[0,\pi]\) have equal probability\footnote{In principle one could consider all values up to infinity, since there seems to be no a priori reason to limit \(\lambda(M_{pl})\). Extending the upper end, however, only strengthens the argument, since this would favour a higher Higgs mass even more. Note that this is connected to the triviality bound and the corresponding focussing of RGE trajectories towards low scales. See Fig.(\ref{fig:run}) for this effect.}. We first show in Fig.(\ref{fig:run}) the running of \(\lambda\) from the Planck scale to the weak scale for some values in the interval \(\lambda\in[0,\pi]\). Note that most values of \(\lambda(M_{pl})\) lead to \(\lambda\) at the Fermi scale greater than 0.2, which corresponds to \(M_H>\unit[150]{GeV}\).
\begin{figure}
\includegraphics[scale=0.9]{inverse.pdf}
\caption{Running of \(\lambda\) from the Planck scale to the Fermi scale. Shown are trajectories for different values of \(\lambda(M_{pl})\); it can be seen that a large part of the parameter space of \(\lambda(M_{pl})\) tends to induce \(\lambda(M_t=\unit[173]{GeV})\gtrsim\, 0.186\), which is equivalent to a Higgs mass greater than \(\unit[150]{GeV}\).}
\label{fig:run}
\end{figure}
To analyse the effect further we randomly generate 600 values for \(\lambda(M_{pl})\) and for the top pole mass in the range \(\unit[170-175]{GeV}\) and put the resulting Higgs masses into the scatter plot of Fig.(\ref{fig:histo}). Note that most values of \(\lambda\) at the Planck scale lead to a Higgs mass between \(\unit[160]{GeV}\) and \(\unit[175]{GeV}\). For the chosen interval \(\lambda(M_{pl})\in[0,\pi]\) we find \(\approx90\%\) in this range, which has been excluded by the Tevatron and by the CMS and ATLAS experiments. Less than 5\% of the generated Higgs masses are still allowed by experiment.
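This experiment can be sketched numerically. The following fragment is an illustrative toy only: it uses the standard one-loop SM RGEs with plain Euler stepping, rough weak-scale input values, and tree-level matching \(M_H=v\sqrt{2\lambda(M_t)}\), whereas the figures in the text rely on two-loop running and proper matching conditions. It scans boundary values \(\lambda(M_{pl})\in[0,\pi]\) deterministically instead of sampling them; all numerical inputs are assumptions made for the sketch, not the values used in our analysis.

```python
import numpy as np

# Rough illustrative inputs at mu = Mt; tree-level matching only.
MT, MPL, V = 173.0, 1.2e19, 246.0

def betas(gp, g, g3, yt, lam):
    """Standard one-loop SM beta functions, d/d ln(mu)."""
    k = 1.0 / (16 * np.pi**2)
    return np.array([
        k * (41.0 / 6.0) * gp**3,                          # hypercharge g'
        k * (-19.0 / 6.0) * g**3,                          # SU(2) coupling g
        k * (-7.0) * g3**3,                                # SU(3) coupling g3
        k * yt * (4.5 * yt**2 - 8 * g3**2
                  - 2.25 * g**2 - (17.0 / 12.0) * gp**2),  # top Yukawa
        k * (24 * lam**2 - 6 * yt**4
             + 0.375 * (2 * g**4 + (g**2 + gp**2)**2)
             + lam * (12 * yt**2 - 9 * g**2 - 3 * gp**2)), # quartic coupling
    ])

def run(y, t0, t1, n=4000):
    """Plain Euler integration in t = ln(mu); t1 < t0 runs downwards."""
    y, dt = np.array(y, float), (t1 - t0) / n
    for _ in range(n):
        y = y + betas(*y) * dt
    return y

t_mt, t_pl = np.log(MT), np.log(MPL)
# At one loop the gauge/Yukawa running is independent of lambda,
# so a single upward run fixes their Planck scale values.
up = run([0.36, 0.65, 1.17, 0.94, 0.13], t_mt, t_pl)

# Scan lambda(M_pl) in [0, pi], run each value down to Mt,
# and convert to a tree-level Higgs mass M_H = v * sqrt(2 lambda).
masses = np.array([
    V * np.sqrt(2 * max(run([up[0], up[1], up[2], up[3], lam_pl],
                            t_pl, t_mt)[-1], 0.0))
    for lam_pl in np.linspace(0.0, np.pi, 25)
])
```

Even this crude sketch reproduces the qualitative focussing: the bulk of the scanned boundary values lands above \(\unit[150]{GeV}\), while \(\lambda(M_{pl})=0\) yields the lowest mass of the scan.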
\begin{figure}
\includegraphics[scale=0.6]{lambda.pdf}
\caption{Scatter plot of Higgs mass at the Fermi scale determined by random \(\lambda\) at the Planck scale with random top mass constrained to the interval \(\left[170,175\right]\) GeV.}
\label{fig:histo}
\end{figure}
Looking at Fig.(\ref{fig:histo}) one immediately notices that \(\lambda(M_{pl}) = 0\), which corresponds to \(M_H = \unit[127]{GeV}\), is a special condition which fits very well with the experimental findings. This is also the vacuum stability bound, and it therefore corresponds to the lightest Higgs mass in the SM when it is valid up to the Planck scale. A lower Higgs mass would require some extra new physics at lower scales, which would invalidate the logic of our paper. Including errors, this leads to a lower bound \(M_H \gtrsim \unit[122]{GeV}\) for our scenario.
Fig.(\ref{fig:histo}) also shows that it is non-trivial that all the boundary conditions which we discuss in this paper lead to viable Higgs masses. However, there is a systematic understanding of why all our conditions work. The point is that all our conditions, such as \(\beta_{\lambda}(M_{pl})=0\) or \(\mathrm{Str}\mathcal{M}^2(M_{pl})=0\), are conditions connected to quantum loops for the Higgs mass parameter \(m\) and the quartic coupling \(\lambda\), and we can therefore always write \(\lambda\) as a function of the gauge couplings and the top Yukawa coupling, i.e.\ \(\lambda=f(g_i,\lambda_t)\). This generically leads to smaller values than a random choice, since loop factors, i.e.\ factors of \(1/16\pi^2\), are present and since top mass loops (fermion loops) carry a minus sign, which leads to cancellations, pushing the value of \(\lambda\) even smaller. One can therefore understand why the cases which we discussed all systematically lead to viable Higgs mass predictions. In that sense none of them is special, and there may exist more interesting boundary conditions which also lead to viable Higgs masses. This also implies that a Higgs mass in the currently most favoured range does not clearly select any model or scenario which leads to a boundary condition with such loop and top mass suppression factors. Vice versa, however, one might argue that current data point to boundary conditions which must involve such suppression factors, which is an interesting observation. It is also intriguing to ask in this context why the top mass is much heavier than that of the other quarks, such that it compensates other loop contributions which would drive the Higgs boson heavier. This appears to be an interesting ``conspiracy'' in favour of the logic of this paper, which works if the boundary condition is imposed at a high scale like the Planck scale. 
All these arguments break down, of course, if the LHC or any future experiment detects any sign of new physics that couples directly to the Higgs boson. Even an indirect coupling, i.e.\ a radiative correction to \(\lambda\) at loop level, is severe enough to alter the running of \(\lambda\) drastically. So far there is no compelling evidence for additional physics beyond the SM, whereas the SM Higgs search seems to indicate some excess of Higgs-like events in the range of \(\unit[130]{GeV}\) to \(\unit[140]{GeV}\), albeit at only about \(2\sigma\) \cite{ATLAS-CONF-2011-157}.
\section{Discussion and conclusions \label{sec:merge}}
The LHC has an excellent chance to find the SM Higgs boson, and we emphasize in this paper that the left-over values lie in a range which is well motivated by various Planck scale boundary conditions. We argued that this Higgs mass range is special and that it might be related to embeddings of the SM as a QFT into some form of quantum gravity which is based on concepts {\em beyond} QFT. The SM would still be an effective field theory which is valid up to the Planck scale, but the asymmetry in the concepts might make it possible to understand the famous hierarchy problem from the perspective of the new concepts at the Planck scale, while it would only appear unnatural from the low energy point of view. In other words: the large hierarchy between the Planck and electroweak scales might only be a problem as long as we look at it from the SM perspective. An important point is that such a scenario only makes sense if there are no intermediate scales, since these would require a QFT solution of the hierarchy between the electroweak and intermediate scales.
The proposed scenario requires that the Higgs coupling can be evolved to the Planck scale, which implies strict lower and upper bounds on the coupling. The upper bound is the so-called triviality bound of approximately \(\unit[170]{GeV}\), and it is interesting to note that the Higgs mass must lie below this value, as otherwise the coupling could not be evolved up to the Planck scale. The lower bound is the so-called vacuum stability bound, i.e.\ the condition \(\lambda(M_{pl})=0\). We carefully evaluated its value with the latest data at two loops and provided an error estimate. For \(M_t=\unit[173.2]{GeV}\) we find \(m_H = \unit[127\pm 5]{GeV}\), which implies that the Higgs mass must be heavier than \(\unit[122]{GeV}\).
It is important to note that the vacuum stability bound, or equivalently the condition \(\lambda(M_{pl})=0\), is very special. It implies that the Higgs self-interaction at the electroweak scale is entirely generated by radiative corrections of the RGE evolution from a vanishing coupling at the Planck scale. The Higgs mass would therefore be connected to the gauge and Yukawa couplings which enter into the RGEs.
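The mechanism behind this statement is visible already in the standard one-loop expression for the beta function of the quartic coupling (the analysis in this paper uses the full two-loop beta functions; we quote the one-loop formula here only for orientation, in the normalization where \(M_H^2=2\lambda v^2\)):
\[
16\pi^2\,\beta^{(1)}_{\lambda} \;=\; 24\lambda^2 \;-\; 6\lambda_t^4 \;+\; \frac38\left(2g^4+\bigl(g^2+g'^2\bigr)^2\right) \;+\; \lambda\left(12\lambda_t^2-9g^2-3g'^2\right).
\]
For \(\lambda(M_{pl})=0\) the running near the Planck scale is driven entirely by the \(-6\lambda_t^4\) term and the gauge contributions, so the weak-scale value of \(\lambda\), and with it \(m_H\), is fixed by the gauge and top Yukawa couplings alone.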
\begin{figure}
\includegraphics[scale=0.7]{vacuumerror.pdf}
\caption{A blow-up of the vacuum stability bound in the interesting Higgs and top mass region. The blue line in the center represents the vacuum stability bound obtained via two-loop beta functions, which has been thoroughly discussed in the main text. The yellow band represents the uncertainty of the Higgs mass obtained via two-loop RGEs due to the \(\alpha_s\) uncertainty. The outer blue band is identical to the blue band in Fig.~(\ref{fig:pd1}) and represents the full ``RGE error band'' estimated from the difference between one- and two-loop RGEs. With the best world-average top pole mass \(\unit[173.2]{GeV}\), the inferred Higgs mass from the vacuum stability condition \(\lambda(M_{pl})=0\) is \(\unit[127 \pm 5]{GeV}\).}
\label{fig:vacuumerror}
\end{figure}
Several comments should be carefully considered in this context:
\begin{enumerate}
\item The Higgs central value \(m_H = \unit[127]{GeV}\) is obtained via two-loop beta function running from the vacuum stability condition at the Planck scale down to the weak scale. Fig.~(\ref{fig:vacuumerror}) shows the uncertainty due to the omission of higher orders to be \(\pm\unit[5]{GeV}\). This appears to be a reasonable way to arrive at a conservative error estimate for the Higgs mass due to the lack of higher order RGEs, but it should not be over-interpreted. The precision of the lower bound for the Higgs mass is thus limited by this conservative estimate.
\item A precision top mass analysis is required to determine the exact value of the Higgs mass predicted via vacuum stability. The reason why we stress this point is that, to date, there is no general consensus on what type of top mass is actually measured via kinematic reconstruction \cite{hoang:lec}. At the Tevatron, the main method used for the top mass extraction actually ``measures'' the Pythia mass, which is a Monte-Carlo simulated template mass. Strictly speaking, the top pole mass is not a well defined quantity, as the top quark does not exist as a free parton. The top mass that the Tevatron has measured is based on the final state of the decay products. On the other hand, the running \(\overline{\mathrm{MS}}\) top mass can be extracted directly from the total cross section of top pair production. In this sense, one can obtain complementary information about the top mass from the production phase. By converting the \(\overline{\mathrm{MS}}\) mass to the pole mass via matching conditions, the top pole mass value \(\unit[168.9^{+3.5}_{-3.4}]{GeV}\) extracted with this method by Langenfeld et al.\ \cite{Langenfeld:2009wd} is found to be lower than the best world-average value.
However, this way of extracting the top mass suffers from larger numerical uncertainties. As can be seen from Figs.~(\ref{fig:pd1}) and (\ref{fig:vacuumerror}), a change of the top mass by \(\unit[2]{GeV}\) changes the Higgs mass prediction by \(\unit[6]{GeV}\).
\item The electroweak vacuum might in principle be metastable. However, most of the Higgs mass region for metastability has already been ruled out by LEP, although not entirely. The finite temperature metastability region, however, with the local SM vacuum assumed to be stable against thermal fluctuations up to Planck scale temperatures, allows the entire region from the LEP bound up to the vacuum stability bound.
Hence, should the LHC discover the SM Higgs boson with a mass lower than the one predicted by the two-loop RGE vacuum stability bound, there is a possibility that the SM electroweak vacuum is not the stable one (see Refs.~\cite{Ellis:2009tp,Espinosa:2007qp,Isidori:2001bm,Isidori:2007vm,Froggatt:2001pa,EliasMiro:2011aa} for more details). However, we would also like to remind the reader that metastability bounds depend on the fastest process conceived for the transition to the true vacuum. Any faster process occurring once anywhere in the Universe would reduce or eliminate the metastability region.
\end{enumerate}
We discussed in Fig.~(\ref{fig:histo}) that only a small percentage of randomly generated boundary conditions for \(\lambda(M_{pl})\) lead to Higgs masses which are still allowed by experiments. On the other hand we presented in Fig.~(\ref{fig:pd1}) results for a set of boundary conditions which all lead to Higgs masses in the allowed region and we explained how this can be systematically understood. The point is that the chosen boundary conditions emerge from conditions which have loop suppression factors, making them rather small compared to a random choice. Fig.~(\ref{fig:pd1}) shows that there exist different working conditions and others might be found which work as well. This implies that once loop suppression factors have been included into the boundary condition it is not easily possible to distinguish between various models or scenarios, but growing precision will nevertheless reduce the number of options somewhat.
The conclusion from these observations is, in the scenario of this paper, that quantum gravity should not generate a random value of \(\lambda\) at the Planck scale, but should somehow select conditions that lead to a small Higgs self-coupling $\lambda$ at the Planck scale. These conditions could possibly be some remnant of a symmetry in the full quantum theory of gravity. The Yukawa and \(\lambda\) couplings could, or maybe even should, have a common origin in Planck scale physics, as the top quark contribution miraculously cancels the contribution of the Higgs boson such that the SM can be extrapolated to the Planck scale.
We have shown in this paper that the chosen boundary conditions for \(\lambda\) yield a Higgs mass which is in the allowed range. The fact that different boundary conditions have overlapping regions could imply that more than one of them is simultaneously realized at the Planck scale, which is an intriguing possibility. For instance if we demand that \(\lambda(M_{pl})=0\) and \(\mathrm{Str}\mathcal{M}^2(M_{pl})=0\) are both satisfied then it is possible that the Higgs quartic coupling is only generated radiatively and the quadratic divergence vanishes.
This appears puzzling from a low energy perspective; from the Planck scale physics perspective, however, this could be natural, as these two conditions could have a common origin in some unknown connection between the gauge, Yukawa and Higgs quartic couplings. Other combinations of the listed boundary conditions can also be considered, as for instance those proposed in Ref.~\cite{Shaposhnikov:2009pv}, where the authors obtained from the asymptotic safety of gravity \(\lambda\approx0\) and \(\beta_{\lambda}\approx0\) near the Planck scale. Similar conditions also appeared in Ref.~\cite{Froggatt:1995rt}, where the authors demanded the principle of multiple point criticality.
One might ask how much the situation can be improved with better measurements of the Higgs and top masses. ATLAS and CMS \cite{Hohlfeld:684112,Ball:2007zza,DeRoeck:2007zza} should be capable of determining the Higgs mass with a precision of \(0.1\%\) to \(1\%\) with an integrated luminosity of \(\unit[30]{fb}^{-1}\).
This precision is encouraging, but unfortunately the determination of the high energy boundary conditions is plagued by the relatively large uncertainties due to the lack of higher order RGEs. Three-loop beta functions and other improvements would be required in order to reduce the errors of the bands in Fig.~(\ref{fig:pd1}). Without these theoretical improvements it will be hard to differentiate the boundary conditions. However, such improvements would be important if the SM is indeed a theory which is valid up to the Planck scale. Precise determinations of the Higgs and top masses could then be used to identify the correct boundary conditions for \(\lambda\) within the remaining uncertainties. A complete calculation of the three-loop beta functions of the SM would be very important in this case. This would, however, also require the determination of the matching conditions to at least two-loop order. While the two-loop QCD matching condition has been included for \(\lambda_t(m_t)\) in this work, the corresponding contribution to \(\lambda(m_t)\) is not known and would be necessary to reduce the matching uncertainty, alongside the other two-loop contributions to \(\lambda(m_t)\) and \(\lambda_t(m_t)\).
With around \(\unit[5]{fb^{-1}}\) of integrated luminosity collected by ATLAS and CMS, it should be possible to find or exclude the SM Higgs boson in the mass region of \(\unit[114-600]{GeV}\) by the end of next year. A discovery of the SM Higgs boson would in any case be a significant advancement for particle physics. There exists an excellent chance to find all sorts of TeV-scale new physics, but if the LHC finds nothing but a SM Higgs boson, then this would be very much in favour of the spirit of this paper. A key question would then be which new concepts could be involved in quantum gravity such that the correct SM boundary conditions arise.
\vspace*{2mm}
\noindent {\bf Note added:} After this work has been completed, ATLAS has reported the latest Higgs mass exclusion regions at 95\% CL \cite{ATLAS-CONF-2011-163} ranging from \(\unit[112.7-115.5]{GeV}\), \(\unit[131-237]{GeV}\) and \(\unit[251-453]{GeV}\) while CMS has excluded the Higgs masses in the range \(\unit[127-600]{GeV}\) \cite{CMS-PAS-HIG-11-032}.
\vspace*{2mm}
\noindent {\bf Acknowledgements:} We would like to thank E. Gross for interesting discussions on the exact value of the vacuum stability bound. M.H.~acknowledges support by the International Max Planck Research School for Precision Tests of Fundamental Symmetries.
\section{Introduction and notations}
First we recall some definitions and fix notations. We shall be mainly
interested in two kinds of SLE: chordal SLE in the upper half-plane
$\H$, from a real point to $\infty$; and radial SLE in the unit disk
$\mathbb{U}$, from a boundary point to $0$. Corresponding SLEs in other
(simply connected) domains are obtained by conformal equivalence. For
general background on SLE, see \cite{RS01,W1,Law}. Also, we will use
freely results on the restriction property and the ``loop soup''
(see \cite{LSW3,LW,W3}).
Consider the family of ODEs, indexed by $z$ in $\H$:
$$\partial_tg_t(z)=\frac 2{g_t(z)-W_t}$$
with initial conditions $g_0(z)=z$, where $W_t$ is some real-valued
(continuous) function. These chordal Loewner equations are defined up
to explosion time $\tau_z$ (maybe infinite). Define:
$$K_t=\overline{\{z\in\H:\tau_z<t\}}.$$
Then $(K_t)_{t\geq 0}$ is an increasing family of compact subsets of
$\overline\H$; moreover, $g_t$ is the unique conformal equivalence
$\H\setminus K_t\rightarrow \H$ such that (hydrodynamic normalization
at $\infty$):
$$g_t(z)=z+o(1).$$
For any compact subset $K$ of $\overline{\H}$ such that $\H\setminus
K$ is simply connected, we denote by $\phi_K$ the unique conformal
equivalence $\H\setminus K\rightarrow \H$ with hydrodynamic normalization
at $\infty$; so that $g_t=\phi_{K_t}$.
The coefficient of $1/z$ in the Laurent expansion of $g_t$ at $\infty$
is by definition the half-plane capacity of $K_t$ at infinity; this
capacity equals $(2t)$.
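As a concrete illustration of these definitions, the Loewner equation can be integrated numerically. For the constant driving function \(W\equiv 0\) the solution is known in closed form, \(g_t(z)=\sqrt{z^2+4t}\) (the hull \(K_t\) is then the vertical slit \([0,2i\sqrt t\,]\), with half-plane capacity \(2t\)), which makes it a convenient test case. A minimal sketch with plain Euler stepping, illustrative only:

```python
import numpy as np

def chordal_loewner(z0, t_final, w=lambda t: 0.0, n=200000):
    """Euler integration of dg/dt = 2/(g - W_t) with g(0) = z0."""
    dt = t_final / n
    g, t = complex(z0), 0.0
    for _ in range(n):
        g += 2.0 / (g - w(t)) * dt
        t += dt
    return g

# Compare with the exact solution for W = 0: g_t(z) = sqrt(z^2 + 4t),
# taking the branch that stays in the closed upper half-plane.
t = 0.5
g1 = chordal_loewner(1 + 1j, t)          # generic point of H
g2 = chordal_loewner(2j, t)              # point above the growing slit
exact1 = np.sqrt((1 + 1j) ** 2 + 4 * t)  # principal branch suffices here
exact2 = 1j * np.sqrt(4.0 - 4 * t)       # continuous branch for z = 2i
```

Replacing the constant driving function by a sampled path \(W_t=x+\sqrt\kappa B_t\) turns the same routine into a crude discretization of chordal \(\SLE_\kappa\); more accurate schemes exist, but this suffices to check the normalization conventions.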
If $W_t=x+\sqrt\kappa B_t$ where $(B_t)$ is a standard Brownian
motion, then the Loewner chain $(K_t)$ (or the family $(g_t)$) defines
the chordal Schramm-Loewner Evolution with parameter $\kappa$ in
$(\H,x,\infty)$. The chain $K_t$ is generated by the trace $\gamma$, a
continuous process taking values in $\overline\H$, in the following
sense: $\H\setminus K_t$ is the unbounded connected component of
$\H\setminus\gamma_{[0,t]}$.
The trace is a continuous non self-traversing curve. It is a.s. simple
if $\kappa\leq 4$ and a.s. space-filling if $\kappa\geq 8$.
In the radial case, Loewner's equations are indexed by $z\in\mathbb{U}$,
$$\partial_tg_t(z)=-g_t(z)\frac {g_t(z)+\xi_t}{g_t(z)-\xi_t}$$
and $g_0(z)=z$, $\xi$ takes values in the unit circle. The hull $K_t$
is defined as above, and $g_t$ is the unique conformal equivalence
$\mathbb{U}\setminus K_t\rightarrow \mathbb{U}$ with $g_t(0)=0$,
$g'_t(0)>0$. Moreover, $g'_t(0)=e^{t}$. If
$\xi_t=\xi_0\exp(i\sqrt\kappa B_t)$, where $B$ is a standard Brownian
motion, one gets radial $\SLE_\kappa$ from $\xi_0$ to $0$ in $\mathbb{U}$.
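A quick numerical check of the capacity parametrization in the radial setting: near \(z=0\) the radial equation above linearizes to \(\partial_t g_t=g_t\), so for \(z\) close to \(0\) the modulus of \(g_t(z)\) grows like \(e^{t}|z|\), reflecting the exponential behaviour of \(g'_t(0)\). A sketch with constant driving \(\xi\equiv 1\) and plain Euler stepping, illustrative only:

```python
import numpy as np

def radial_loewner(z0, t_final, xi=1.0 + 0j, n=100000):
    """Euler integration of dg/dt = -g (g + xi)/(g - xi) with g(0) = z0."""
    dt = t_final / n
    g = complex(z0)
    for _ in range(n):
        g += -g * (g + xi) / (g - xi) * dt
    return g

# For |z| -> 0 the equation linearizes to dg/dt = g, so the ratio
# |g_t(z)| / |z| should approach e^t for small |z| and moderate t.
t, z0 = 0.1, 1e-3 + 0j
ratio = abs(radial_loewner(z0, t)) / abs(z0)
```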
Note that chordal SLE depends only on two boundary points, and radial
SLE depends on one boundary and one bulk point. In several natural
instances, one needs to track additional points on the boundary. This
has prompted the introduction of $\SLE(\kappa,\rho)$ processes in
\cite{LSW3}, generalized in \cite{D4}. The driving Brownian motion is
replaced by a semimartingale which has local Girsanov density w.r.t.
the original Brownian motion.
In the chordal case, let $\underline\rho$ be a multi-index, i.e. :
$$\underline\rho\in \bigcup_{i \ge 0} \mathbb{R}^i$$
Let $k$ be the length of $\underline\rho$; if $k=0$, one simply
defines $\SLE(\kappa,\varnothing)$ as a standard $\SLE_\kappa$. If
$k>0$, assume the existence of processes $(W_t)_{t\geq 0}$ and
$(Z^{(i)}_t)_{t\geq 0}$, $i\in\{1\dots k\}$ satisfying the SDEs:
\begin{equation}\label{E1}
\left\{\begin{array}{l}dW_t=\sqrt\kappa dB_t+\sum_{i=1}^k\frac{\rho_i}{W_t-Z^{(i)}_t}dt\\
dZ^{(i)}_t=\frac 2{Z^{(i)}_t-W_t}dt\end{array}\right.
\end{equation}
and such that the processes $(W_t-Z^{(i)}_t)$ do not change sign.
Then we define the chordal $\SLE_\kappa(\underline\rho)$ process starting from
$(w,z_1,\dots z_k)$ as a chordal Schramm-Loewner evolution the driving process
of which has the same law as $(W_t)$ as defined above, with
$W_0=w,Z^{(i)}_0=z_i$.
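The system (\ref{E1}) is straightforward to discretize. The following Euler--Maruyama sketch (illustrative parameter values, a single force point \(k=1\)) also includes the deterministic limit where the Brownian increments are set to zero, in which the gap solves \(d(Z^{(1)}_t-W_t)=\frac{2+\rho_1}{Z^{(1)}_t-W_t}\,dt\) and hence equals \(\sqrt{(z_1-w)^2+2(2+\rho_1)t}\):

```python
import numpy as np

def sle_kappa_rho(kappa, rho, w0, z0, t_final, n=200000, rng=None):
    """Euler--Maruyama discretization of the driving SDEs (one force point).
    rng=None sets the Brownian increments to zero (pure drift flow)."""
    dt = t_final / n
    w, z = float(w0), float(z0)
    dB = np.zeros(n) if rng is None else rng.normal(0.0, np.sqrt(dt), n)
    for i in range(n):
        w += np.sqrt(kappa) * dB[i] + rho / (w - z) * dt
        z += 2.0 / (z - w) * dt
    return w, z

# Drift-only check: gap(t) = sqrt(gap(0)^2 + 2(2+rho) t).
w, z = sle_kappa_rho(0.0, 2.0, w0=0.0, z0=1.0, t_final=1.0)
gap_det = z - w   # should be close to sqrt(1 + 8) = 3

# A noisy sample path over a short horizon (fixed seed).
w2, z2 = sle_kappa_rho(2.0, 2.0, w0=0.0, z0=1.0, t_final=0.05,
                       rng=np.random.default_rng(0))
```

For \(\rho\geq\kappa/2-2\) the force point is not swallowed, so on short horizons the simulated gap \(Z^{(1)}_t-W_t\) should stay positive; near a collision the drift becomes stiff and a plain Euler step is unreliable.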
In the radial case, assume the existence of processes $(\xi_t)_{t\geq 0}$ and
$(\chi^{(i)}_t)_{t\geq 0}$, $i\in\{1\dots k\}$ satisfying the SDEs:
\begin{equation}\label{E2}
\left\{\begin{array}{l}d\xi_t=(i\xi_t\sqrt\kappa dB_t-\frac\kappa
2\xi_tdt)+\sum_{i=1}^k\frac{\rho_i}2\left(-\xi_t\frac{\xi_t+\chi^{(i)}_t}{\xi_t-\chi^{(i)}_t}\right)dt\\
d\chi^{(i)}_t=-\chi^{(i)}_t\frac{\chi^{(i)}_t+\xi_t}{\chi^{(i)}_t-\xi_t}dt\end{array}\right.
\end{equation}
The processes $\xi$, $\chi^{(i)}$ may bounce on each other but not cross.
This defines radial $\SLE_\kappa(\underline\rho)$ in the unit
disk. Note the factor $1/2$ before the $\rho_i$ parameters in the SDE:
this is to ensure coherence with the chordal case.
\section{Examples of commutation}
We begin by discussing how properties of SLE (e.g. reversibility
and duality) yield natural examples of commutation relations.
{\bf Reversibility:} consider a chordal SLE in $(\H,0,\infty)$,
$\gamma$ its trace. For simplicity, assume that $\kappa\leq 4$, so
that the trace is a.s. simple. Define
$\hat\gamma_t=\gamma_{1/t}$. Then $\hat\gamma$ is a simple curve from
$\infty$ to $0$ in $\H$ (for transience of SLE, see
\cite{RS01}). After a time change $s=s(t)$, $\hat\gamma$ is such that
$-1/\hat\gamma_{[0,s]}$ has capacity $2s$. Then, according to
reversibility, $\hat\gamma$ a $\SLE_\kappa$ in $\H$ from $\infty$ to
$0$. (Note that for all $0<\kappa\leq 4$ and involution of $\H$ of
type $z\mapsto -\lambda/z$, this defines a somewhat intricate
measure-preserving involution of the Wiener space).
Admitting reversibility, one can define a chordal SLE growing ``from
both ends'' in the following fashion: let $B$ be a standard Brownian
motion, with filtration ${\mathcal F}$, $\gamma$ the trace of the
associated $\SLE_\kappa$, and $\hat\gamma$ as above. Then consider
$(K_{t,s}=\gamma_{[0,t]}\cup\hat\gamma^{-1}_{[0,s]})_{s,t\geq 0}$ and
the filtration $({\mathcal G}_{t,s})$ it generates. For any $(t_0,s_0)$,
$(K_{t_0+t,s_0})_t$ is a (time-changed) chordal SLE in $\H\setminus
K_{t_0,s_0}$, from $\gamma_{t_0}$ to $\hat\gamma_{s_0}$; conversely
$(K_{t_0,s_0+s})_s$ is a (time-changed) chordal SLE in $\H\setminus
K_{t_0,s_0}$, from $\hat\gamma_{s_0}$ to $\gamma_{t_0}$.
Together with conformal equivalence, this gives the following Markov property: if $f_{t,s}$ is a conformal
equivalence $(\H\setminus K_{t,s},\gamma_t,\hat\gamma_ s)\rightarrow
(\H,0,\infty)$ with some normalization (e.g. $f_{t,s}(1)=1$), then
$f_{t_0,s_0}(K_{t_0+t,s_0+s})$ is up to a time-change
$\mathbb{R}_+^2\rightarrow \mathbb{R}_+^2$ a copy of $(K_{t,s})$ independent from
${\mathcal G}_{t_0,s_0}$.
{\bf Duality:} Duality relates the boundary of non-simple SLE
($\kappa>4$) with corresponding simple SLEs ($\kappa'=16/\kappa$). Let
us try to formulate a precise conjecture in a ``dual'' fashion. We
elaborate on restriction formulae identities discussed in \cite{D4}.
Let $\kappa>4$, $\kappa'=16/\kappa$. Consider the configuration $(\H,x,y,z,\infty)$, where $x<y<z$. Define a Loewner chain from $x$ to $\infty$ as follows:
the chain $(K_t)_{t\leq\tau_z}$ is an
$\SLE_\kappa(\kappa/2-4,-\kappa/2)$ in $\H$, started from $(x,y,z)$,
aiming at $\infty$, and stopped at time $\tau_z$ when the trace hits
$z$ (which it does with probability 1). Then $(K_{t+\tau_z})_{t\geq
0}$ is a $\SLE_\kappa(\kappa-4)$ in $\H\setminus K_{\tau_z}$, started
from $(z,z^+)$ and aiming at $\infty$.
The right-boundary of $K_\infty=\bigcup_{t\geq 0}K_t$ is a simple
curve from $z$ to $\infty$ in $\H$; denote by $(\delta_u)$ the
corresponding Loewner trace (i.e. $\delta_{[0,\infty)}$ is the
right-boundary of $K_\infty$ and $\delta_{[0,u]}$ has half-plane
capacity $2u$).
Now consider a configuration $(\H,x',y',z',\infty)$, where $x'<y'<z'$,
and let $\gamma'$ be the trace of the chordal
$\SLE_{\kappa'}(-\kappa'/2,\kappa'-2)$ in $\H$ started from
$(z',x',y')$ and aiming at $\infty$.
Then we can formulate:
\begin{Conj} The following statements hold:\\
(i) The law of $\delta$ is that of $\gamma'$, where $x=x'$, $y=y'$, $z=z'$.\\
(ii) The law of $(\phi_{\delta_{[0,u]}}(K_t))_{t\geq 0}$ conditionally
on $\delta_{[0,u]}$ is (up to a time-change) that of a copy of $(K_t)$
started from $(x',y',z')=\phi_{\delta_{[0,u]}}(x,y,\delta_u)$.
\end{Conj}
This conjecture can be interpreted in terms of multiple SLEs: one can
grow simultaneously the chain $(K_t)$ and its (final)
right-boundary. One also gets a Markov property similar to the one
discussed for reversibility.
{\bf Locality:} The scaling limit of the exploration process for
critical site percolation on the triangular lattice is $\SLE_6$
(see \cite{Sm1,CamNew}). For some boundary conditions, one can define several
exploration paths. Consider for instance the following situation:
$(D,x_1,\dots, x_{2n})$ is a simply connected domain with $(2n)$ marked
boundary points in cyclical order. The segments
$(x_1,x_2),\dots,(x_{2n-1},x_{2n})$
(resp. $(x_2,x_3),\dots,(x_{2n},x_{1})$) are set to blue
(resp. yellow). Then one can start an exploration process at each of
the boundary points $x_i$; these are well-defined up to some
disconnection event.
One can also consider some conditional versions: for instance,
critical percolation in $(\H,0,1)$, where the half-lines $(\infty,0)$ and
$(1,\infty)$ are blue and $(0,1)$ is yellow, conditionally on the
existence of a yellow path from $(0,1)$ to infinity (this is a
singular conditioning, related to the one-arm half-plane
exponent). Now the exploration processes started from $0$ and $1$
resp. can be defined for all time. The two traces intersect at pivotal
points for the conditioning event.
One may also consider the following
situation: a conformal rectangle, with sides alternately blue and
yellow. One can then start four exploration processes (one at each
vertex), and condition on a Cardy crossing event (e.g. the two blue
sides are connected by a blue path). Note that in this
situation, the Girsanov drift terms are not rational functions.
{\bf Restriction:} The restriction property of $\SLE_{8/3}$ can be
used to get commutation relations. For instance, consider a simply
connected domain with four marked points on the boundary, say
$(\H,a,b,c,d)$. One can define two independent $\SLE_{8/3}$'s, from
$a$ to $b$ and $c$ to $d$ resp., and condition them on not
intersecting. Then, from the restriction property, this system of two
SLEs has a natural Markov property, and also a restriction property.
More precisely, let $\gamma$ and $\gamma'$ be the traces of these
SLEs, $(g_t)$ the family of conformal equivalences of the first one
(for some time parameterization). Then (${\mathcal L}$ denotes
probability distributions)
\begin{align*}
{\mathcal L}(g_t(\gamma_{(t,\infty)}),g_t(\gamma')|\gamma\cap\gamma'=\varnothing)
&={\mathcal
L}(g_t(\gamma_{(t,\infty)}),g_t(\gamma')|\gamma_{(0,t)}\cap\gamma'=\varnothing,g_t(\gamma_{(t,\infty)})\cap
g_t(\gamma')=\varnothing
)\\
&={\mathcal
L}(\tilde\gamma,\tilde\gamma'|\tilde\gamma\cap
\tilde\gamma'=\varnothing)
\end{align*}
where $\tilde\gamma$ and $\tilde\gamma'$ are independent $\SLE_{8/3}$'s
going from $g_t(\gamma_t)$ to $g_t(b)$ and from $g_t(c)$ to $g_t(d)$
resp. (using the Markov property for $\gamma$ and the restriction
property for $\gamma'$). For the restriction property, note that, for
any hull $A$ disjoint from $\{a,b,c,d\}$:
\begin{align*}
\P((\gamma\cup\gamma')\cap
A=\varnothing|\gamma\cap\gamma'=\varnothing)&=\frac{\P(\gamma\cap\gamma'=\gamma\cap
A=\gamma'\cap A=\varnothing)}{\P(\gamma\cap\gamma'=\varnothing)}\\
&=\frac{\P(\gamma\cap\gamma'=\varnothing|\gamma\cap
A=\gamma'\cap A=\varnothing)\P(\gamma\cap
A=\gamma'\cap A=\varnothing)}{\P(\gamma\cap\gamma'=\varnothing)}\\
&=\frac{\P(\phi_A(\gamma)\cap\phi_A(\gamma')=\varnothing)\P(\gamma\cap
A=\gamma'\cap A=\varnothing)}{\P(\gamma\cap\gamma'=\varnothing)}\\
&=\P(\gamma\cap
A=\varnothing)\P(\gamma'\cap A=\varnothing)\frac{\psi(\phi_A(a,b,c,d))}{\psi(a,b,c,d)}
\end{align*}
where $\psi(a,b,c,d)$ is the probability that the two independent
$\SLE_{8/3}$'s do not intersect.
If $\kappa\in (0,8/3)$, one can consider two independent
$\SLE_\kappa$, a corresponding independent loop soup, and condition on the event:
no loop intersects the two SLEs. A standard computation shows that the
probability $\psi_\kappa$ of this event is given by:
$$\psi_\kappa(\infty,0,x,1)=\frac{\Gamma(4/\kappa)\Gamma(12/\kappa-1)}{\Gamma(8/\kappa)\Gamma(8/\kappa-1)}x^{2/\kappa}\hphantom{F}_2F_1\left(\frac 4\kappa,1-\frac
4\kappa;\frac 8\kappa;x\right)$$
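As a sanity check on this formula: for $\kappa=2$ the hypergeometric series terminates and $\psi_2(\infty,0,x,1)=2x-x^2$, while Gauss's summation theorem gives the normalization $\psi_\kappa(\infty,0,1,1)=1$. A numerical sketch (the function name `psi` is ours):

```python
# Numerical check of psi_kappa(infty, 0, x, 1); for kappa = 2 the 2F1
# terminates (b = -1) and psi_2(x) = 2x - x^2, and the Gamma prefactor
# normalizes psi_kappa(1) = 1 by Gauss's theorem (valid for kappa < 8).
from math import gamma
from scipy.special import hyp2f1

def psi(kappa, x):
    a, b, c = 4 / kappa, 1 - 4 / kappa, 8 / kappa
    pref = gamma(4 / kappa) * gamma(12 / kappa - 1) \
        / (gamma(8 / kappa) * gamma(8 / kappa - 1))
    return pref * x ** (2 / kappa) * hyp2f1(a, b, c, x)

for x in (0.2, 0.5, 0.9):
    assert abs(psi(2, x) - (2 * x - x * x)) < 1e-9
assert abs(psi(2, 1.0) - 1.0) < 1e-8
assert abs(psi(8 / 3, 1.0) - 1.0) < 1e-8
```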
In a domain with $(2n)$ marked points on the boundary in cyclical
order, say $(\H,x_1,\dots x_{2n})$, for a given pairing of $\{x_1,\dots
x_{2n}\}$, define $n$ independent $\SLE_\kappa$, with endpoints
determined by the pairing. Consider $(n-1)$ auxiliary independent loop
soups $L_1,\dots, L_{n-1}$, with intensity $\lambda_\kappa$.
One can consider the event: for $k=1,\dots,n-1$, no loop in $L_k$
intersects more than $k$ SLEs.
This has positive probability iff the pairing is a non-crossing
one. As is well known, there are $C_n$ such pairings, where $C_n$
is the $n$-th Catalan number:
$$C_n=\frac{{{2n}\choose{n}}}{n+1}$$
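This count is easy to confirm by brute force for small $n$: enumerate all perfect matchings of $2n$ points on a line and keep the non-crossing ones. An illustrative Python sketch (the helper names are ours):

```python
# Brute-force check that non-crossing pairings of 2n boundary points
# are counted by the Catalan numbers C_n = binom(2n, n)/(n + 1).
from itertools import combinations
from math import comb

def matchings(pts):
    # enumerate all perfect matchings of a sorted list of points
    if not pts:
        yield []
        return
    a, rest = pts[0], pts[1:]
    for i, b in enumerate(rest):
        for m in matchings(rest[:i] + rest[i + 1:]):
            yield [(a, b)] + m

def noncrossing(m):
    # chords (a,b), (c,d) cross iff their endpoints interlace
    for (a, b), (c, d) in combinations(m, 2):
        if a < c < b < d or c < a < d < b:
            return False
    return True

for n in range(1, 6):
    count = sum(noncrossing(m) for m in matchings(list(range(2 * n))))
    assert count == comb(2 * n, n) // (n + 1)  # C_n = 1, 2, 5, 14, 42
```

The interlacing test is the same whether the $2n$ points are taken on a line or in cyclic order on the boundary.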
Then one can condition on this event to get $n$ non-intersecting
$\SLE_\kappa$'s, which have an appropriate Markov property and
restriction property. This situation is discussed in detail in
\cite{D7}; when $\kappa=2$, this is directly connected to
Fomin's formulae \cite{Fomin,KozL}.
{\bf Wilson's algorithm:} In the case $\kappa=8$, $\kappa'=2$, the
Uniform Spanning Tree and the Loop-Erased Random Walk converge to
$\SLE_8$ and $\SLE_2$ resp. (see \cite{LSW2}). As pointed out in
\cite{LSW2}, duality follows from these convergence results and Wilson's
algorithm, which gives an exact relation between the UST and the LERW at the
discrete level (\cite{Wi}).
Let us formulate a precise duality identity in this
situation. Consider $(K_t)$ a chordal $\SLE_8$ in $(\H,0,\infty)$. Let
$G$ be the (random) leftmost point visited by this $\SLE$ before
$\tau_1$. Then a standard SLE computation (see e.g. \cite{W1}) yields:
$$\P(G\leq -g)=\frac 1\pi\int_0^{g}\frac{dt}{(1+t)\sqrt t}=\frac
2\pi\arctan(\sqrt g)$$
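The substitution $t=u^2$ turns the integrand into $2/(1+u^2)$, whence the arctangent; a quick numerical confirmation (scipy is our choice of tool here):

```python
# Check (1/pi) * int_0^g dt/((1+t)*sqrt(t)) == (2/pi) * arctan(sqrt(g)).
# The substitution t = u^2 reduces the integrand to 2/(1+u^2).
from math import atan, pi, sqrt
from scipy.integrate import quad

for g in (0.1, 1.0, 4.0, 25.0):
    # quad handles the integrable t^(-1/2) singularity at 0 adaptively
    val, _ = quad(lambda t: 1 / ((1 + t) * sqrt(t)), 0, g)
    assert abs(val / pi - (2 / pi) * atan(sqrt(g))) < 1e-7
```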
This distribution is the exit distribution of a random walk with
normal reflection on $\mathbb{R}^+$, absorbed on $\mathbb{R}^-$, and started from 1 (as is
readily seen by mapping $\H$ to a quadrant by $z\mapsto\sqrt z$ and a
reflection argument).
At the discrete level, we are considering a UST wired on $\mathbb{R}^-$ and
free on $\mathbb{R}^+$. The branch connecting $1$ to $\mathbb{R}^-$ is a LERW started
from $1$ and reflected on $\mathbb{R}^+$. By a slight modification of the
arguments of \cite{LSW2} (considering the Poisson kernel for this
reflected random walk gives ``harmonic martingales'' for the
time-reverted LERW), one gets that, conditionally on $G$, the boundary
of $K_{\tau_1}$, a random simple curve connecting $G$ and $1$ which is
the scaling limit of this LERW, is a chordal $\SLE_2(-1,-1)$ in $(\H,G,1)$,
started from $(G,0,\infty)$.
Wilson's algorithm gives more information. The boundary $\partial
K_{\tau_1}$ divides $\H$ in two simply connected domains $\H_0$ and
$\H_\infty$, with $0$ and $\infty$ in their respective boundary. Then,
conditionally on $\partial
K_{\tau_1}$, the original $\SLE_8$ is the concatenation of a chordal $\SLE_8$
in $(\H_0,0,1)$ and a chordal $\SLE_8$
in $(\H_\infty,1,\infty)$.
For small times, the law of the original $\SLE_8$ conditionally on
$G=g$ (in the regular conditional probability sense) is easy to work
out. Consider the following martingale (with usual notations):
$$\arctan\sqrt\frac{W_t-g_t(x)}{g_t(1)-W_t}$$
Differentiating w.r.t $x$, one gets a local martingale:
$$\frac{g'_t(x)}{g_t(1)-g_t(x)}\sqrt\frac{W_t-g_t(x)}{g_t(1)-W_t}$$
Using this as a Girsanov density, one finds that the conditional
$\SLE_8$ is a chordal $\SLE_8(-4,4)$ in $(\H,0,\infty)$ started from $(0,G,1)$.
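The local-martingale property underlying this computation can be checked symbolically. Writing $(w,a,b)$ for $(W_t,g_t(x),g_t(1))$, the chordal generator is $(\kappa/2)\partial_{ww}+\frac 2{a-w}\partial_a+\frac 2{b-w}\partial_b$, and it annihilates $\arctan\sqrt{(w-a)/(b-w)}$ exactly for $\kappa=8$. A sympy sketch (variable names ours):

```python
# Symbolic check: F = arctan(sqrt((w-a)/(b-w))) is annihilated by the
# chordal SLE_kappa generator (kappa/2) d_ww + 2/(a-w) d_a + 2/(b-w) d_b
# precisely when kappa = 8 (here a = g_t(x) < w < b = g_t(1)).
import sympy as sp

w, a, b, kappa = sp.symbols('w a b kappa')
F = sp.atan(sp.sqrt((w - a) / (b - w)))
LF = (kappa / 2) * sp.diff(F, w, 2) \
     + 2 / (a - w) * sp.diff(F, a) + 2 / (b - w) * sp.diff(F, b)
assert sp.simplify(LF.subs(kappa, 8)) == 0   # local martingale for kappa = 8
assert sp.simplify(LF.subs(kappa, 6)) != 0   # fails for other values
```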
For symmetry, and from reversibility, the chordal $\SLE_8$
in $(\H_\infty,1,\infty)$ is a time-reversed chordal $\SLE_8$
in $(\H_\infty,\infty,1)$.
Consider now these different processes as chordal SLEs in $\H$ aiming
at 1 (and not $\infty$). Then we have \\
an $\SLE_2(-1,-1)$ started from $(G,0,\infty)$\\
an $\SLE_8(-4,2)$ started from $(0,G,\infty)$\\
an $\SLE_8(-4,2)$ started from $(\infty,G,0)$\\
We shall see later that this fits into the infinitesimal relations for $\SLE_\kappa(\underline\rho)$.
One can extend the situation as follows: in the discrete setting,
consider $n$ points on $\mathbb{R}^+$ and $n$ points on $\mathbb{R}^-$:
$$x_n<\cdots <x_1<0<y_1<\cdots<y_n.$$
Consider a UST with the same boundary conditions as before, and the
smallest subtree containing $y_1,\dots,y_n$ and $\mathbb{R}^-$. Condition on
the event that this subtree has no triple point in the bulk. Then it
consists of the union of $n$ disjoint paths in the bulk and $\mathbb{R}^-$. Now
condition on the endpoints of these branches being $x_n,\dots,x_1$,
and take this to the scaling limit. Using Wilson's algorithm and
Fomin's formulae (\cite{Fomin}), everything can be made explicit, and
this defines $n$ ``non-intersecting'' $\SLE_2$'s in the upper half-plane
(with $(2n+2)$ marked points on the boundary).
\section{Commutation of infinitesimal generators}
\subsection{The commutation framework}
We have seen natural examples where two SLEs could be grown in a
common domain in a consistent fashion. In this section, we discuss
necessary infinitesimal conditions. We shall define a ``global''
commutation condition, of geometric nature, and express its consequence
in terms of infinitesimal generators, which is of algebraic nature.
Let us consider the following chordal situation: the domain is $\H$, SLEs aim at
$\infty$, and $(x,y,z_1,\dots,z_n)$ are $(n+2)$ (distinct) points on
the real line; the point at infinity is also a marked point. We want to grow two infinitesimal hulls (with capacity of
order $\varepsilon$) at $x$ and $y$ respectively.
We can either grow a hull $K_\varepsilon$ at $x$, and then another one at $y$
in the perturbed domain $\H\setminus K_\varepsilon$, or proceed in the other
order. The coherence condition is that these two procedures yield the
same result.
Let us make things more rigorous. Consider a Loewner
chain $(K_{s,t})_{(s,t)\in{\mathcal T}}$ with a double time index, so that
$K_{s,t}\subset K_{s',t'}$ if $s'\geq s$, $t'\geq t$ and $K_{s,t}\neq
K_{s',t'}$ if $(s',t')\neq (s,t)$. We only consider chains up to time
reparameterization $\mathbb{R}_+^2\rightarrow\mathbb{R}_+^2$. We also assume that
$K_{s,t}=K_{s,0}\cup K_{0,t}$.
The time
set ${\mathcal T}$ may be random, but includes a.s. a neighbourhood of
$(0,0)$ in $\mathbb{R}_+^2$. Also, if $s\leq s'$, $t\leq t'$, $(s',t')\in{\mathcal
T}$, then $(s,t)\in{\mathcal T}$. Define
$g_{s,t}$ the conformal equivalence $\H\setminus K_{s,t}\rightarrow
\H$ with hydrodynamic normalization at infinity
($g_{s,t}=\phi_{K_{s,t}}$ with the earlier notation), and the continuous
traces $\gamma$, $\tilde\gamma$, such that:
$$\gamma_{s,t}=\lim_{\varepsilon\searrow 0}\overline{ g_{s,t}(K_{s+\varepsilon,t}\setminus
K_{s,t})}, \tilde\gamma_{s,t}=\lim_{\varepsilon\searrow 0} \overline{g_{s,t}(K_{s,t+\varepsilon}\setminus
K_{s,t})}$$
where $\gamma_{0,t}=x$ for
all $(0,t)\in{\mathcal T}$, and similarly $\tilde\gamma_{s,0}=y$ for
all $(s,0)\in{\mathcal T}$.
Furthermore, assume that
the following conditions are satisfied:
\begin{Def}
Let $(K_{s,t})_{(s,t)\in{\mathcal T}}$ be a random Loewner chain with
double time indexing; the associated conformal equivalences are
$g_{s,t}=\phi_{K_{s,t}}$. We say that $(K_{s,t})$ is an
$\SLE(\kappa,b,\tilde\kappa,\tilde b)$ if:
\begin{enumerate}
\item The time set ${\mathcal T}$ is a.s. open, connected, and a
neighbourhood of ${(0,0)}$ in $\mathbb{R}_+^2$. The ranges of the traces $\gamma_{\mathcal T}$, $\tilde\gamma_{\mathcal
T}$ are disjoint and $z_1,\dots, z_n\notin K_{s,t}$ for $(s,t)\in{\mathcal T}$.
\item Let $\sigma$ (resp. $\tau$) be a stopping time in the filtration
generated by $(K_{s,0})_{(s,0)\in{\mathcal T}}$
(resp. $(K_{0,t})_{(0,t)\in{\mathcal T}}$). Let also ${\mathcal T'}=\{(s,t):
(s+\sigma,t+\tau)\in {\mathcal T}\}$
and $(K'_{s,t})_{(s,t)\in{\mathcal
T'}}=\left(\overline{g_{\sigma,\tau}(K_{s+\sigma,t+\tau}\setminus
K_{\sigma,\tau})}\right)$. Then $(K'_{s,0})_{(s,0)\in{\mathcal T'}}$ is distributed
as a stopped $\SLE_\kappa(b)$, i.e. an $\SLE$ driven by:
$$dX_s=\sqrt{\kappa} dB_s+ b(X_s, g_s(y),\dots, g_s(z_i),\dots)ds$$
Likewise $(K'_{0,t})_{(0,t)\in{\mathcal T'}}$ is distributed
as a stopped $\SLE_{\tilde\kappa}(\tilde b)$, i.e. an $\SLE$ driven by:
$$dY_t=\sqrt{\tilde\kappa} d\tilde B_t+ \tilde b(\tilde g_t(
x),Y_t,\dots, \tilde g_t(z_i),\dots)dt$$
\end{enumerate}
\end{Def}
Here $B$, $\tilde B$ are standard Brownian motions,
$(g_s)$, $(\tilde g_t)$ are the associated conformal equivalences, $b$, $\tilde
b$ are some
smooth, translation invariant, and homogeneous of degree $(-1)$
functions. If
$A^x$, $A^y$ are two increasing functions of hulls growing at $x$ and $y$
resp. (e.g. the half-plane capacity), we shall be particularly
interested in stopping times of type $\sigma=\inf(s:A^x(K_{s,0})\geq
a^x)$, $\tau=\inf(t:A^y(K_{0,t})\geq
a^y)$.
Note that $(X_s, g_s(y),\dots, g_s(z_i),\dots)$ is a Markov process. Let $P$ be its
semigroup and ${\mathcal L}$ its infinitesimal generator.
Similarly, $(\tilde g_t(x),Y_t,\dots)$ is a Markov process
with semigroup $Q$ and infinitesimal generator ${\mathcal M}$. We are
interested in what conditions on the functions $b$ and $\tilde b$ are
implied by these assumptions (the existence of an $\SLE(\kappa,b,\tilde
\kappa,\tilde b)$).
So let $F$ be a test function $\mathbb{R}^{n+2}\rightarrow \mathbb{R}$, and $c>0$ be
some constant (ratio of speeds). We apply the previous assumptions
with $A^x=A^y={\rm cap}$ (the half-plane capacity), $a^x=2\varepsilon$,
$a^y=2c\varepsilon$. We are interested in the hull $K_{\sigma,\tau}$. Two ways
of getting from $K_{0,0}$ to $K_{\sigma,\tau}$ are (symbolically):
$$
\xymatrix{K_{0,0}\ar[r]\ar[d]& K_{\sigma,0}\ar[d]\\
K_{0,\tau}\ar[r]&K_{\sigma,\tau}
}$$
and our assumptions give a description of these transitions.
So consider the following procedure:
\begin{itemize}
\item run the first SLE (i.e. $\SLE_\kappa(b)$), started from $(x,y,\dots, z_i,\dots)$ until it
reaches capacity $2\varepsilon$.
\item then run independently the second SLE
(i.e. $\SLE_{\tilde\kappa}(\tilde b)$) in $g_\varepsilon^{-1}(\H)$
until it reaches capacity $2c\varepsilon$; this capacity is measured {\em in the
original half-plane}. Let $\tilde g_{\tilde\varepsilon}$ be the corresponding
conformal equivalence.
\item one gets two hulls resp. at $x$ and $y$ with capacity $2\varepsilon$
and $2c\varepsilon$; let $\phi=\tilde g_{\tilde \varepsilon}\circ g_\varepsilon$ be the
normalized map removing these two hulls.
\item expand $\mathbb{E}(F(\tilde g_{\tilde\varepsilon}(X_\varepsilon),\tilde Y_{\tilde \varepsilon}))$ up to order two in $\varepsilon$.
\end{itemize}
This describes (in distribution) how to get from $K_{0,0}$ to
$K_{\sigma,0}$, and then from $K_{\sigma,0}$ to $K_{\sigma,\tau}$.
Differentiating the Loewner equation, one gets $\partial_t
g'_t(w)=-2g'_t(w)/(g_t(w)-W_t)^2$. Hence
$g'_\varepsilon(y)=1-2\varepsilon/(y-x)^2+o(\varepsilon)$. From the scaling property of
half-plane capacity, we get:
$$\tilde\varepsilon=c\varepsilon\left(1-\frac {4\varepsilon}{(y-x)^2}\right)+o(\varepsilon^2)$$
i.e. $\tilde\varepsilon$ is deterministic up to order two in $\varepsilon$.
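Both expansions can be checked on the explicit slit map $g_\varepsilon(z)=\sqrt{z^2+4\varepsilon}$ (the hull of capacity $2\varepsilon$ grown at $x=0$), recalling that the half-plane capacity of a small hull at $y$ rescales by $g_\varepsilon'(y)^2$. A sympy sketch:

```python
# For the slit map g_eps(z) = sqrt(z^2 + 4*eps) (capacity 2*eps at 0),
# check g'_eps(y) = 1 - 2*eps/y^2 + O(eps^2) and the induced capacity
# rescaling c*eps*g'_eps(y)^2 = c*eps*(1 - 4*eps/y^2) + O(eps^3).
import sympy as sp

z, y, eps, c = sp.symbols('z y eps c', positive=True)
g = sp.sqrt(z**2 + 4 * eps)          # explicit slit map
gp = sp.diff(g, z).subs(z, y)        # g'_eps(y)
lin = sp.series(gp, eps, 0, 2).removeO()
assert sp.simplify(lin - (1 - 2 * eps / y**2)) == 0
tilde_eps = sp.series(c * eps * gp**2, eps, 0, 3).removeO()
assert sp.simplify(tilde_eps - (c * eps - 4 * c * eps**2 / y**2)) == 0
```

Here $x=0$ by translation invariance, so $(y-x)^2=y^2$ in the formulas above.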
Denote by ${\mathcal L}$ and ${\mathcal M}$ the infinitesimal generators of the two SLEs:
\begin{align*}
{\mathcal L}&=\frac\kappa 2\partial_{xx}+b(x,y,\dots)\partial_x+\frac 2{y-x}\partial_y+\sum_{i=1}^n\frac 2{z_i-x}\partial_i\\
{\mathcal M}&=\frac{\tilde\kappa}2\partial_{yy}+\tilde b(x,y,\dots)\partial_y+\frac 2{x-y}\partial_x+\sum_{i=1}^n\frac 2{z_i-y}\partial_i
\end{align*}
where $\partial_i=\partial_{z_i}$. Let $w=(x,y,\dots, z_i,\dots)$,
$w'=(X_\varepsilon,g_\varepsilon(y),\dots g_\varepsilon(z_i),\dots )$, $w''=(\tilde
g_{\tilde\varepsilon}(X_\varepsilon),\tilde Y_{\tilde\varepsilon},\dots \tilde g_{\tilde\varepsilon}\circ
g_\varepsilon(z_i),\dots)$. Now:
\begin{align*}
\mathbb{E}(F(w'')|w)&=\mathbb{E}(\mathbb{E}(F(w'')|w')|w)=P_\varepsilon\mathbb{E}(Q_{\tilde\varepsilon}F|w')(w)\\
&=P_\varepsilon\mathbb{E}\left((1+\tilde\varepsilon {\mathcal M}+\frac{{\tilde\varepsilon}^2}2 {\mathcal M}^2)F(w')\right)(w)
=P_\varepsilon Q_{c\varepsilon(1-4\varepsilon/(y-x)^2)}F(w)+o(\varepsilon^2)\\
&=\left(1+\varepsilon{\mathcal L}+\frac{\varepsilon^2}2{\mathcal
L}^2\right)\left(1+c\varepsilon\left(1-\frac{4\varepsilon}{(y-x)^2}\right){\mathcal
M}+\frac{c^2\varepsilon^2}2{\mathcal M}^2\right)F(w)+o(\varepsilon^2)\\
&=\left(1+\varepsilon({\mathcal L}+c{\mathcal M})+\varepsilon^2\left(\frac 12{\mathcal L}^2+\frac
{c^2}2{\mathcal M}^2+ c{\mathcal L}{\mathcal M}-\frac {4c}{(y-x)^2}{\mathcal
M}\right)\right)F(w)+o(\varepsilon^2)
\end{align*}
If we first grow a hull at $y$, then at $x$, one gets instead:
$$\left(1+\varepsilon({\mathcal L}+c{\mathcal M})+\varepsilon^2\left(\frac 12{\mathcal L}^2+\frac
{c^2}2{\mathcal M}^2+ c{\mathcal M}{\mathcal L}-\frac {4c}{(y-x)^2}{\mathcal
L}\right)\right)F(w)+o(\varepsilon^2)$$
Hence the commutation condition reads:
\begin{equation*}
\left[{\mathcal L},{\mathcal M}\right]=\frac{4}{(y-x)^2}\left({\mathcal M}-{\mathcal L}\right).
\end{equation*}
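This operator identity can be verified symbolically in concrete cases. The sketch below (sympy, our choice of tool) takes $n=0$, $\tilde\kappa=\kappa$, and the drifts $b=2/(x-y)$, $\tilde b=2/(y-x)$, one of the rational solutions derived below:

```python
# Symbolic check of [L, M] = 4/(y-x)^2 * (M - L) for n = 0 with
# b = 2/(x-y), btilde = 2/(y-x), kappatilde = kappa.
import sympy as sp

x, y, kappa = sp.symbols('x y kappa')
F = sp.Function('F')(x, y)

def L(G):
    # generator of the SLE growing at x (drift b = 2/(x-y))
    return kappa / 2 * sp.diff(G, x, 2) + 2 / (x - y) * sp.diff(G, x) \
        + 2 / (y - x) * sp.diff(G, y)

def M(G):
    # generator of the SLE growing at y (drift btilde = 2/(y-x))
    return kappa / 2 * sp.diff(G, y, 2) + 2 / (y - x) * sp.diff(G, y) \
        + 2 / (x - y) * sp.diff(G, x)

lhs = L(M(F)) - M(L(F))                 # [L, M] applied to a test function
rhs = 4 / (y - x)**2 * (M(F) - L(F))
assert sp.simplify(lhs - rhs) == 0
```

For this particular pair of drifts the identity holds for all $\kappa$, as the scalar conditions below confirm.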
After simplifications, one gets:
\begin{align*}
\left[{\mathcal L},{\mathcal M}\right]+\frac{4}{(y-x)^2}\left({\mathcal L}-{\mathcal M}\right)=&
(\kappa\partial_x\tilde b-\tilde\kappa\partial_y b)\partial_{xy}\\
&+\left[\frac{2\partial_xb}{y-x}+\sum_i\frac{2\partial_ib}{y-z_i}-\tilde
b\partial_yb+\frac{2b}{(y-x)^2}+\frac{2\kappa-12}{(x-y)^3}-\frac{\tilde\kappa}2\partial_{yy}b\right]\partial_x\\
&-\left[\frac{2\partial_y\tilde b}{x-y}+\sum_i\frac{2\partial_i\tilde
b}{x-z_i}-b\partial_x\tilde b+\frac{2\tilde
b}{(x-y)^2}+\frac{2\tilde\kappa-12}{(y-x)^3}-\frac{\kappa}2\partial_{xx}\tilde
b\right]\partial_y
\end{align*}
So the commutation condition reduces to three differential
conditions involving $b$ and $\tilde b$; note the non-linear terms
$\tilde b\partial_yb $ and $b\partial_x\tilde b$.
\subsection{Rational solutions}
{\bf Case $n=0$:}
If $n=0$, then $b(x,y)=\rho/(x-y)$ and $\tilde
b(x,y)=\tilde\rho/(y-x)$. Then the commutation condition reduces to:
$$\left\{\begin{array}{l}
\kappa\tilde\rho=\tilde\kappa\rho\\
(\tilde\kappa-4)\rho-\rho\tilde\rho+12-2\kappa=0\\
(\kappa-4)\tilde\rho-\rho\tilde\rho+12-2\tilde\kappa=0
\end{array}\right.$$
We are only interested in the case $\kappa,\tilde\kappa>0$. Then:
$$\left\{\begin{array}{l}
\tilde\rho=\tilde\kappa\rho/\kappa\\
(\tilde\kappa-4)\rho-\rho^2\tilde\kappa/\kappa+12-2\kappa=0\\
(\kappa-4)\rho\tilde\kappa/\kappa-\rho^2\tilde\kappa/\kappa+12-2\tilde\kappa=0
\end{array}\right.$$
The last two are polynomials in $\rho$ that have a common root if and only if their resultant vanishes. This resultant (a polynomial in the coefficients) equals:
$$\frac{12\tilde\kappa(\tilde\kappa-\kappa)^2(\kappa\tilde\kappa-16)}{\kappa^3}$$
So either $\tilde\kappa=\kappa$, and then
$\rho=\tilde\rho\in\{2,\kappa-6\}$, or $\tilde\kappa=16/\kappa$, and
then $\rho=-\kappa/2$, $\tilde\rho=-\tilde\kappa/2$.
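This elimination is easy to reproduce with a computer algebra system; the following sympy sketch checks the resultant formula (after clearing denominators) and the three solution families:

```python
# Solve the n = 0 commutation system and check the resultant formula.
import sympy as sp

k, kt, r, rt = sp.symbols('kappa kappatilde rho rhotilde')
eqs = [k * rt - kt * r,
       (kt - 4) * r - r * rt + 12 - 2 * k,
       (k - 4) * rt - r * rt + 12 - 2 * kt]

# eliminate rhotilde = kappatilde*rho/kappa and clear denominators
p1 = sp.expand(k * eqs[1].subs(rt, kt * r / k))
p2 = sp.expand(k * eqs[2].subs(rt, kt * r / k))
res = sp.resultant(p1, p2, r)
# res equals kappa^4 times 12*kt*(kt-k)^2*(k*kt-16)/k^3
assert sp.simplify(res - 12 * k * kt * (kt - k)**2 * (k * kt - 16)) == 0

# the three solution families annihilate the system
for sol in ({kt: k, r: 2, rt: 2},
            {kt: k, r: k - 6, rt: k - 6},
            {kt: 16 / k, r: -k / 2, rt: -8 / k}):
    assert all(sp.simplify(e.subs(sol)) == 0 for e in eqs)
```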
Let us comment briefly on these solutions. The condition
$\kappa\tilde\kappa=16$ obviously points at duality. The case
$\tilde\kappa=\kappa$, $\tilde\rho=\rho=\kappa-6$ corresponds in fact
to reversibility. Indeed, one has:
\begin{Lem}
An $\SLE_\kappa(\rho_1,\dots,\rho_n,\rho)$ in $\H$ started from
$(x,z_1,\dots,z_n,y)$ and aiming at $\infty$ is identical in law to a
(time-changed)
$\SLE_\kappa(\rho_1,\dots,\rho_n,\kappa-6-\rho-\sum_i\rho_i)$ in $\H$
started from $(x,z_1,\dots,z_n,\infty)$ and aiming at $y$, up to
disconnection of $y$.
\end{Lem}
\begin{proof}
Let $(g_t)$ be the family of conformal equivalences for the first SLE,
$(K_t)$ the corresponding hulls, $W$ its driving process,
$Y^{(i)}_t=g_t(z_i)$, $Y_t=g_t(y)$. Consider the homographies:
$$\varphi_t(u)=\frac{g'_t(y)}{Y_t-u}-\frac{g''_t(y)}{2g'_t(y)}$$
and $\tilde g_t=\phi_{\varphi_0(K_t)}$. Then $\tilde
g_t\circ\varphi_0=\varphi_t\circ g_t$, and $(\tilde g_t)$ defines a
time-changed Loewner chain. Let $\tilde W_t=\varphi_t(W_t)$, $\tilde
Y^{(i)}_t=\varphi_t(Y^{(i)}_t)$, $V_t=\varphi_t(\infty)$. Then,
from It\^o's formula:
\begin{align*}
d\tilde
W_t&=d\left(\frac{g'_t(y)}{Y_t-W_t}-\frac{g''_t(y)}{2g'_t(y)}\right)=\frac{g'_t(y)}{(Y_t-W_t)}\left(-\frac
{2dt}{(Y_t-W_t)^2}-\frac{d(Y_t-W_t)}{Y_t-W_t}+\frac{\kappa
dt}{(Y_t-W_t)^2}-\frac{2dt}{(Y_t-W_t)^2}\right)\\
&=\frac{g'_t(y)}{(Y_t-W_t)}\left(\frac{dW_t}{Y_t-W_t}+\frac{(\kappa-6)dt}{(Y_t-W_t)^2}\right)\\
&=\frac{g'_t(y)\sqrt\kappa}{(Y_t-W_t)^2}dB_t+\frac{g'_t(y)^2}{(Y_t-W_t)^4}\left(\sum_i\rho_i\frac{Y_t-W_t}{W_t-Y^{(i)}_t}\cdot\frac
1{\tilde W_t-V_t}+\frac{\kappa-6-\rho}{\tilde W_t-V_t}\right)dt\\
\end{align*}
Note that the cross-ratio is conformally invariant, so
$$\frac{Y_t-W_t}{Y^{(i)}_t-W_t}=[\infty,Y^{(i)}_t,Y_t,W_t]=[V_t,\tilde Y^{(i)}_t,\infty,\tilde W_t]=\frac{\tilde Y^{(i)}_t-V_t}{\tilde Y^{(i)}_t-\tilde W_t}$$
After a time change $ds=g'_t(y)^2dt/(Y_t-W_t)^4$, one gets:
\begin{align*}
d\tilde W_s&=\sqrt\kappa d\tilde B_s+\sum_i\rho_i\frac{\tilde
Y^{(i)}_s-V_s}{(\tilde W_s-\tilde Y^{(i)}_s)(\tilde
W_s-V_s)}ds+\frac{\kappa-6-\rho}{\tilde W_s-V_s}ds\\
&=\sqrt\kappa d\tilde B_s+\sum_i\frac{\rho_i}{\tilde W_s-\tilde
Y^{(i)}_s}ds+\frac{\kappa-6-\rho-\sum_i\rho_i}{\tilde W_s-V_s}ds
\end{align*}
which defines a $\SLE_\kappa(\rho_1,\dots,\rho_n,\kappa-6-\rho-\sum_i\rho_i)$.
\end{proof}
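The conformal invariance of the cross-ratio used in the proof reduces to an algebraic identity for the homography $\varphi(u)=A/(Y-u)+B$ (so that $\varphi(\infty)=B$), which can be checked mechanically:

```python
# Check the cross-ratio identity used in the proof: for the homography
# phi(u) = A/(Y - u) + B (so phi(infinity) = B),
# (Y - W)/(Yi - W) == (phi(Yi) - phi(oo)) / (phi(Yi) - phi(W)).
import sympy as sp

A, B, Y, Yi, W = sp.symbols('A B Y Yi W')
phi = lambda u: A / (Y - u) + B
lhs = (Y - W) / (Yi - W)
rhs = (phi(Yi) - B) / (phi(Yi) - phi(W))
assert sp.simplify(lhs - rhs) == 0
```

Here $A=g'_t(y)$ and $B=-g''_t(y)/(2g'_t(y))$ in the notation of the proof; the identity holds for any $A\neq 0$, $B$.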
In particular, an $\SLE_\kappa(\kappa-6)$ started from $(x,y)$ and
stopped at $\tau_y$ is simply an $\SLE_\kappa$ from $x$ to $y$. For
$\kappa=6$ and $n=0$, this is locality.
{\bf Parametric case:}
Assume the following forms for the drift terms $b$, $\tilde b$:
\begin{align*}
b(x,y,z_i)&=\frac\rho{x-y}+\sum_i\frac{\rho_i}{x-z_i}\\
\tilde b(x,y,z_i)&=\frac{\tilde\rho}{y-x}+\sum_i\frac{\tilde\rho_i}{y-z_i}
\end{align*}
Then the commutation conditions are:
$$\left\{\begin{array}{l}
\kappa\tilde\rho=\tilde\kappa\rho\\
(\tilde\kappa-4)\rho-\rho\tilde\rho+12-2\kappa=0\\
(\kappa-4)\tilde\rho-\rho\tilde\rho+12-2\tilde\kappa=0\\
\rho\tilde\rho_i=2\rho_i\\
\tilde\rho\rho_i=2\tilde\rho_i
\end{array}\right.$$
As we have seen, the first three conditions imply that
$\kappa=\tilde\kappa$, $\rho=\tilde\rho\in\{2,\kappa-6\}$, or
$\kappa\tilde\kappa=16$, $\rho=-\kappa/2$,
$\tilde\rho=-\tilde\kappa/2$. Now, if the $\rho_i$, $\tilde\rho_i$ are
not all zero, $\rho\tilde\rho=4$, which happens if
$\rho=\tilde\rho=2$, or in the case $\kappa\tilde\kappa=16$. To sum up,
the solutions are:\\
(i) $\kappa=\tilde\kappa$, $\rho=\tilde\rho=\kappa-6$,
$\rho_i=\tilde\rho_i=0$ (1 free parameter)\\
(ii) $\kappa=\tilde\kappa$, $\rho=\tilde\rho=2$, $\rho_i=\tilde\rho_i$
($(1+n)$ free parameters)\\
(iii) $\kappa\tilde\kappa=16$, $\rho=-\kappa/2$,
$\tilde\rho=-\tilde\kappa/2$,
$\tilde\rho_i=-(\tilde\kappa/4)\rho_i=-(4/\kappa)\rho_i$ ($(1+n)$ free
parameters)\\
These examples are ``rational'' (i.e. the drift terms are rational
functions). Yet, important examples (deduced from locality and
restriction) are transcendental. In the next section, we recast these
commutation conditions as integrability conditions, satisfied by all
these examples.
\section{Integrability conditions}
\subsection{Integrability for SLE commutation relations}
In the previous section, we derived the following commutation
conditions:
$$\left\{\begin{array}{l}
\kappa\partial_x\tilde b-\tilde\kappa\partial_y b=0\\
\displaystyle\frac{2\partial_xb}{y-x}+\sum_i\frac{2\partial_ib}{y-z_i}-\tilde
b\partial_yb+\frac{2b}{(y-x)^2}+\frac{2\kappa-12}{(x-y)^3}-\frac{\tilde\kappa}2\partial_{yy}b=0\\
\displaystyle\frac{2\partial_y\tilde b}{x-y}+\sum_i\frac{2\partial_i\tilde
b}{x-z_i}-b\partial_x\tilde b+\frac{2\tilde
b}{(x-y)^2}+\frac{2\tilde\kappa-12}{(y-x)^3}-\frac{\kappa}2\partial_{xx}\tilde
b=0
\end{array}\right.$$
Now, from the first equation, one can write:
$$b=\kappa\frac{\partial_x\psi}{\psi},\qquad \tilde b=\tilde\kappa\frac{\partial_y\psi}{\psi}$$
for some non-vanishing function $\psi$ (at least locally).
It turns
out that the second condition now reads:
$$-\kappa\partial_x\left(\frac{\frac{\tilde\kappa}
2\partial_{yy}\psi+\sum_i\frac{2\partial_i\psi}{z_i-y}+\frac{2\partial_x\psi}{x-y}+(1-\frac
6\kappa)\frac\psi{(x-y)^2}}{\psi}\right)=0.$$
Symmetrically, the last equation is:
$$-\tilde\kappa\partial_y\left(\frac{\frac{\kappa}
2\partial_{xx}\psi+\sum_i\frac{2\partial_i\psi}{z_i-x}+\frac{2\partial_y\psi}{y-x}+(1-\frac
6{\tilde\kappa})\frac\psi{(y-x)^2}}{\psi}\right)=0.$$
This means that a non-vanishing solution of
$$\left\{\begin{array}{l}
\displaystyle\frac{\kappa}
2\partial_{xx}\psi+\sum_i\frac{2\partial_i\psi}{z_i-x}+\frac{2\partial_y\psi}{y-x}+\left(\left(1-\frac
6{\tilde\kappa}\right)\frac 1{(y-x)^2}+h_1(x,z)\right)\psi=0\\
\displaystyle\frac{\tilde\kappa}2\partial_{yy}\psi+\sum_i\frac{2\partial_i\psi}{z_i-y}+\frac{2\partial_x\psi}{x-y}+\left(\left(1-\frac
6\kappa\right)\frac 1{(x-y)^2}+h_2(y,z)\right)\psi=0
\end{array}\right.$$
yields drift terms $b,\tilde b$ that satisfy the commutation
condition. These differential operators are the infinitesimal
generators of the SLEs, with an additional order-zero (multiplication)
term.
The problem is now to find functions $h_1$, $h_2$ such that
the above system has solutions (integrability conditions).
Note that
we have not yet considered the conditions that $b,\tilde b$ be translation
invariant and homogeneous of degree $(-1)$.
These imply that $\psi$ can be chosen to be translation invariant and
homogeneous of some fixed degree.
So assume that we are given $h_1,h_2$, and a
non-vanishing (translation-invariant, homogeneous) solution $\psi$ of this system. Let:
$${\mathcal M}_1=\frac{\kappa}
2\partial_{xx}+\sum_i\frac{2\partial_i}{z_i-x}+\frac{2\partial_y}{y-x}+\left(1-\frac
6{\tilde\kappa}\right)\frac 1{(y-x)^2},\qquad{\mathcal M}_2=\frac{\tilde\kappa}2\partial_{yy}+\sum_i\frac{2\partial_i}{z_i-y}+\frac{2\partial_x}{x-y}+\left(1-\frac
6\kappa\right)\frac 1{(x-y)^2}$$
Then $\psi$ is annihilated by all operators in the left ideal generated by
$({\mathcal M}_1+h_1)$,$({\mathcal M}_2+h_2)$, including in particular:
\begin{align*}
{\mathcal M}&=[{\mathcal M}_1+h_1,{\mathcal M}_2+h_2]+\frac 4{(x-y)^2}\left(({\mathcal
M}_1+h_1)-({\mathcal M}_2+h_2)\right)\\
&=[{\mathcal M}_1,{\mathcal M}_2]+\frac{4}{(x-y)^2}({\mathcal M}_1-{\mathcal M}_2)+([{\mathcal M}_1,h_2]-[{\mathcal M}_2,h_1])+\frac
{4(h_1-h_2)}{(x-y)^2}\\
&=\frac{3(\kappa\tilde\kappa-16)(\kappa-\tilde\kappa)}{\kappa\tilde\kappa(x-y)^4}+\left(\left(\frac{2\partial_y}{y-x}+\sum_i\frac{2\partial_i}{z_i-x}\right)h_2-\left(\frac{2\partial_x}{x-y}+\sum_i\frac{2\partial_i}{z_i-y}\right)h_1\right)+\frac
{4(h_1-h_2)}{(x-y)^2}
\end{align*}
This is an operator of order 0, so it must vanish
identically. Considering the pole at $x=y$, this implies in particular
$\tilde\kappa\in\{\kappa,16/\kappa\}$, since the fourth-order pole
must vanish. Then the second-order pole must also vanish, so
$h_1(x,z)=h(x,z)$, $h_2(y,z)=h(y,z)$ for some $h$.
So this condition boils down to a functional equation on $h$.
For illustration, consider the following variation on an earlier
example:
a chordal $\SLE_{8/3}$ from $x$ to $y$ is conditioned not to intersect an
independent restriction measure from $z$ to $\infty$ with index
$\nu$. Let $\varphi(x,y,z)$ be the probability of
non-intersection. Then $\varphi$ is annihilated by the operator:
$$\frac\kappa 2\partial_{xx}+\frac{\kappa-6}{x-y}\partial_x+\frac
2{y-x}\partial_y+\frac 2{z-x}\partial_z-\frac{2\nu}{(x-z)^2}$$
where $\kappa=8/3$. Obviously $\varphi$ can be expressed in terms of a
hypergeometric function. If $\psi=(y-x)^{-2\alpha}\varphi$,
$\alpha=\alpha_\kappa=5/8$, then $\psi$ is annihilated by the conjugate operators:
$$\frac\kappa 2\partial_{xx}+\frac
2{y-x}\partial_y+\frac
2{z-x}\partial_z-\frac{2\alpha}{(x-y)^2}-\frac{2\nu}{(x-z)^2},
\frac\kappa 2\partial_{yy}+\frac
2{x-y}\partial_x+\frac 2{z-y}\partial_z-\frac{2\alpha}{(y-x)^2}-\frac{2\nu}{(y-z)^2}
$$
where we also use reversibility for $\SLE_{8/3}$. It is easy to check
that, for $\tilde\kappa\in\{\kappa,16/\kappa\}$, $h(x,z)=-2\nu/(x-z)^2$
is a solution of the integrability condition
above. More generally, if $n$ points $z_1,\dots,z_n$ are marked on the
real line, a (particular) solution of the integrability condition is given by
$\tilde\kappa\in\{\kappa,16/\kappa\}$,
$$h(x,z)=-2\sum_i\frac{\mu_i}{(z_i-x)^2}-2\sum_{i<j}\nu_{ij}\left(\frac{1}{z_i-x}-\frac{1}{z_j-x}\right)^2=\sum_i\frac{\mu'_i}{(z_i-x)^2}+\sum_{i<j}\frac{\nu'_{ij}}{(z_i-x)(z_j-x)}$$
where $\mu_i,\nu_{ij}$ are real parameters. When
$\kappa=\tilde\kappa=8/3$, $\mu_i,\nu_{ij}\geq 0$, and $x<y<z_1<\cdots<
z_n$, it is easy to think of a probabilistic situation corresponding
to this. Consider a chordal $\SLE_{8/3}$ from $x$ to $y$, and
condition it not to intersect independent one-sided restriction
samples $z_i\leftrightarrow\infty$ (with index $\mu_i$) and
$z_i\leftrightarrow z_j$ (with index $\nu_{ij}$). Then reversibility
for the conditional $\SLE$ corresponds to a partition function
$\psi$ solving PDEs where $h$ is as above.
Let us get back to the functional equation for $h$:
\begin{equation}\label{inth}
\left(\left(\frac{\partial_y}{y-x}+\sum_i\frac{\partial_i}{z_i-x}\right)h(y,z)-\left(\frac{\partial_x}{x-y}+\sum_i\frac{\partial_i}{z_i-y}\right)h(x,z)\right)+2\frac
{h(x,z)-h(y,z)}{(x-y)^2}=0.
\end{equation}
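As a consistency check, the particular solution $h(x,z)=-2\nu/(x-z)^2$ exhibited above satisfies (\ref{inth}); with a single marked point $z$ this is a short sympy verification:

```python
# Check that h(x, z) = -2*nu/(x - z)^2 solves the functional equation
# (d_y/(y-x) + d_z/(z-x)) h(y,z) - (d_x/(x-y) + d_z/(z-y)) h(x,z)
#     + 2*(h(x,z) - h(y,z))/(x-y)^2 = 0   (one marked point z).
import sympy as sp

x, y, z, nu = sp.symbols('x y z nu')
h = lambda u: -2 * nu / (u - z)**2
hx, hy = h(x), h(y)
expr = (sp.diff(hy, y) / (y - x) + sp.diff(hy, z) / (z - x)
        - sp.diff(hx, x) / (x - y) - sp.diff(hx, z) / (z - y)
        + 2 * (hx - hy) / (x - y)**2)
assert sp.simplify(expr) == 0
```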
We want to prove that the only solutions to this functional equation
(translation invariant and homogeneous of degree $(-2)$) are the
rational functions given above when there are at most $3$ marked $z$
points (including infinity).
By expanding in $\varepsilon$ where $y=x+\varepsilon$, one sees that $h$ must be
annihilated by the family of operators:
$$\ell_{0,n}=\frac n{(n+1)(n+2)}\partial_x^{n+2}+\sum_i\left(\frac{\partial_i\partial_x^n}{z_i-x}-\frac{n!\partial_i}{(z_i-x)^{n+1}}\right)$$
for $n\geq 0$. Also, $h$ must be translation invariant and homogeneous
of degree $-2$. So for $n=2$ marked points, denoted $y,z$, one can write $h(x,y,z)=\tilde
h((y-x)/(z-x))/(z-x)^2$ for instance, and $\tilde h$ satisfies a
third-order ODE (since $\ell_{0,1}h=0$); but we have already exhibited $3$ linearly independent
solutions in this case, so the classification is complete for $n=1,2$.
In particular, in the case $n=2$, this is closely related to the
discussion in Section 8.5 of \cite{LSW3}.
When $n\geq 3$, the configuration $(\H,z_1,\dots,z_n,\infty)$
corresponds to an $(n-2)$-dimensional moduli space. We already know $n(n+1)/2$
(linearly independent) rational solutions. We proceed to show that
arbitrary (smooth) functions on this ``residual moduli space'' lead to
solutions of the functional equation.
Define $\ell_x$, $\ell_y$ to be the differential operators:
$$\ell_x=\frac 2{y-x}\partial_y+\sum_i\frac 2{z_i-x}\partial_i,\qquad
\ell_y=\frac 2{x-y}\partial_x+\sum_i\frac 2{z_i-y}\partial_i$$
representing the Loewner flow with singularities at $x,y$
respectively, restricted to marked points. Now the operators
$$\tilde\ell_x=\frac 6{y-x}\partial_x+\ell_x,{\rm\ \
\ }\tilde\ell_y=\frac 6{x-y}\partial_y+\ell_y$$
are the generators of $\SLE_0(-6)$, i.e. the hyperbolic geodesic from
$x$ to $y$. In this case, we have seen that the commutation relation:
$$[\tilde\ell_x,\tilde\ell_y]=\frac 4{(x-y)^2}(\tilde\ell_y-\tilde\ell_x)$$
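For instance, in the absence of marked points $z_i$, writing $u=y-x$,
so that $\tilde\ell_x=\frac 6u\partial_x+\frac 2u\partial_y$ and
$\tilde\ell_y=-\frac 2u\partial_x-\frac 6u\partial_y$, a direct
computation gives
$$[\tilde\ell_x,\tilde\ell_y]=-\frac{32}{u^3}\left(\partial_x+\partial_y\right)
=\frac 4{u^2}\left(-\frac 8u\right)\left(\partial_x+\partial_y\right)
=\frac 4{(y-x)^2}(\tilde\ell_y-\tilde\ell_x);$$
the terms involving the marked points can be checked in the same way.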
is satisfied. Hence if $f$ is a positive, translation invariant, homogeneous of
degree 0 function of $z_1,\dots,z_n$ (i.e. a function on the
``residual moduli space''), then the weight:
$$h(x,z)=\tilde \ell_x\log f(z)=\ell_x\log f(z)$$
satisfies the functional equation (\ref{inth}) and is translation
invariant and homogeneous of degree $(-2)$.
Prompted by these solutions, we want to get a global interpretation of
the differential condition (\ref{inth}). Consider two simple paths
$\gamma$, $\tilde\gamma$ in $\H$ started from $x,y$ respectively,
non-intersecting and driven by smooth functions, for simplicity. The union
of the two paths can be parametrized by Loewner's equation in two
extremal fashions: exploring $\gamma$ first and then $\tilde\gamma$,
or exploring $\tilde\gamma$ first and then $\gamma$. Define:
$$c(\gamma,z)=\int_0^\tau h(x_t,z_1(t),\dots,z_n(t))dt$$
where $\gamma$ has half-plane capacity $2\tau$, and $z_1(.),\dots,z_n(.)$ follow the
Loewner flow driven by $(x_t)$. If $\gamma$, $\tilde\gamma$ have half-plane capacity
$\varepsilon,c\varepsilon$ respectively, then:
$$c(\gamma,z)+c(\phi_{\gamma}(\tilde\gamma),\phi_\gamma(z))=
c(\tilde\gamma,z)+c(\phi_{\tilde\gamma}(\gamma),\phi_{\tilde\gamma}(z))+O(\varepsilon^2).$$
This is exactly the content of (\ref{inth}), the corrections being
as in the commutation relations. So it is easy to see (and
straightforward to write, using comparisons of Loewner chains, etc.)
that for macroscopic paths $\gamma$, $\tilde\gamma$, we have:
$$c(\gamma,z)+c(\phi_{\gamma}(\tilde\gamma),\phi_\gamma(z))=
c(\tilde\gamma,z)+c(\phi_{\tilde\gamma}(\gamma),\phi_{\tilde\gamma}(z)).$$
Then we can define $c(\gamma\cup\tilde\gamma,z)$ to be this
quantity. Similarly, one can grow a first half of $\gamma$, then a
half of $\tilde\gamma$, then grow the end of $\gamma$,\dots, and get the same quantity. If $A$ is any smooth hull (not intersecting the $z_i$'s),
one can define $c(A,z)=c(\partial A,z)$, where $\partial A$ is a
smooth curve. Notice that this does not depend on the orientation of
$\partial A$. Indeed, let $x,y$ be the endpoints of $\partial A$,
$\gamma$, $\tilde\gamma$ as above, such that $\gamma$ and
$\tilde\gamma$ are at distance at most $\varepsilon$ of $\partial A$ in the
Hausdorff metric. Then $\phi_\gamma(\tilde\gamma)$ and
$\phi_{\tilde\gamma}(\gamma)$ have small half-plane capacity, and one
can apply the previous result. If the boundary $\partial A$ is
described as the union of two arcs starting at $x,y$ respectively, one
gets the same result.
Also, if $A,B$ are two smooth hulls
contained in a compact hull not intersecting the $z_i$'s, then, by
similar arguments, we have
the Lipschitz condition:
$$|c(A,z)-c(B,z)|\leq k({\rm cap}(A\cup B)-{\rm cap}(A\cap B)).$$
Hence we can extend the definition of $c$ by approximation by smooth hulls.
So if $h$ is a function solving (\ref{inth}), then we can define a
function $C=\exp c$ on hulls and residual configurations such that:
\begin{enumerate}
\item For all hulls $A,B$,
$C(A.B,z)=C(B,\phi_A(z))C(A,z)$.
\item If $A$ is a hull of half-plane capacity $\varepsilon$ located at $x$, then
$C(A,z)=1+2\varepsilon h(x,z)+o(\varepsilon)$.
\end{enumerate}
Here $A.B$ designates the concatenation of the two hulls $A,B$:
$\phi_{A.B}=\phi_B\circ\phi_A$.
Conversely, if a nice function $C$ satisfying (i) is given, we can
recover a function $h$ satisfying (\ref{inth}) (its derivative in the
direction $\ell_x$). So far, we have considered the following $C$'s:
$$C(A,z_1,\dots,z_{n+1})=\prod_{i<j}\left(\phi'_A(z_i)\phi'_A(z_j)\left(\frac{z_j-z_i}{\phi_A(z_j)-\phi_A(z_i)}\right)^2\right)^{\nu_{ij}}\frac
{f(\phi_A(z_1),\dots,\phi_A(z_{n+1}))}{f(z_1,\dots,z_{n+1})}$$
where the marked point at infinity is now $z_{n+1}$. All the factors
here are conformally invariant.
Let ${\mathcal H}$ designate a semigroup of hulls (as in
\cite{LSW3}). The residual configuration is
$(\H,z_1,\dots,z_{n+1}=\infty)$ (it no longer depends on the marked
points $x,y$).
More precisely, an element of ${\mathcal H}$ is a compact
subset $A$ of $\overline\H$ such that $\H\setminus A$ is simply
connected and $\partial A\cap\mathbb{R}\subset \overline{A\cap\H}$; the
semigroup is concatenation: $\phi_{A.B}=\phi_B\circ\phi_A$.
Let ${\mathcal F}$ be a vector space of
functions of the residual configuration $(z_1,\dots,z_{n+1})$ (say
smooth functions). Then ${\mathcal H}$ acts on ${\mathcal F}$ by
$A.f=f\circ\phi_A$ (note that this is not everywhere defined). If $C$
is as above, $C$ can be seen as a map ${\mathcal H}\rightarrow{\mathcal F}$ by
$C(A)(z)=C(A,z)$. Then the condition (i) reads:
$$C(A.B)=C(A)(A.C(B))$$
which is saying that $C$ is a 1-cocycle for (multiplicative) group
cohomology for the ${\mathcal H}$-module ${\mathcal F}$. This is formal since
${\mathcal H}$ is only a semigroup and the operation is not everywhere
defined. Then the question is to determine the first cohomology group
$H^1({\mathcal H},{\mathcal F})$, restricted to cocycles with M\"obius covariance:
$$C(\phi(A),\phi(z))=C(A,z)$$
where $\phi$ is a homography. Let $c_{ij}$ be the cocycle:
$$c_{ij}(A)(z)=\phi'_A(z_i)\phi'_A(z_j)\left(\frac{z_j-z_i}{\phi_A(z_j)-\phi_A(z_i)}\right)^2.$$
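As a check that each $c_{ij}$ indeed satisfies the multiplicative
condition (i), one can apply the chain rule to
$\phi_{A.B}=\phi_B\circ\phi_A$:
$$\phi'_{A.B}(z_i)=\phi'_B(\phi_A(z_i))\,\phi'_A(z_i),\qquad
\frac{z_j-z_i}{\phi_{A.B}(z_j)-\phi_{A.B}(z_i)}=\frac{z_j-z_i}{\phi_A(z_j)-\phi_A(z_i)}\cdot\frac{\phi_A(z_j)-\phi_A(z_i)}{\phi_B(\phi_A(z_j))-\phi_B(\phi_A(z_i))},$$
so that $c_{ij}(A.B)(z)=c_{ij}(A)(z)\,c_{ij}(B)(\phi_A(z))$.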
We have a natural map:
\begin{align*}
\mathbb{R}^{n(n+1)/2}&\longrightarrow H^1({\mathcal H},{\mathcal F})\\
(\nu_{ij})_{i<j}&\longmapsto \prod_{i<j}(c_{ij})^{\nu_{ij}}
\end{align*}
The content of Section 8.5 in \cite{LSW3} is that this is an
isomorphism when $n\leq 2$. At this point, it is not quite clear
whether this is onto in general. It is, however, not one-to-one as soon as $n\geq
3$, since e.g. $c_{12}c_{34}/c_{13}c_{24}$ is a coboundary. It is easy
to see that its image has dimension $n+1$ and is generated for
instance by $(c_{12},c_{13},\dots,c_{1,n+1},c_{23})$.
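To see that $c_{12}c_{34}/(c_{13}c_{24})$ is a coboundary, note that
all derivative factors $\phi'_A(z_i)$ cancel in this combination, and
what remains is of coboundary form:
$$\frac{c_{12}c_{34}}{c_{13}c_{24}}(A)(z)=\frac{f(\phi_A(z))}{f(z)},{\rm\ \ where\ \ }
f(z)=\left(\frac{(z_3-z_1)(z_4-z_2)}{(z_2-z_1)(z_4-z_3)}\right)^2,$$
i.e.\ the coboundary of a squared cross-ratio.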
So let us extend the discussion in \cite{LSW3} when $n\geq
3$. This involves several complications. The idea is to restrict to a
subgroup of ${\mathcal H}$ (or rather a subalgebra of its tangent algebra)
fixing a configuration $(z_1,\dots,z_{n+1})$. A cocycle restricts to a
character of this subgroup, that must vanish on commutators. This
gives differential conditions for tangent cocycles, that can then be integrated.
We begin by fixing three marked points at $0,1,\infty$ (which we
can do by a M\"obius transformation). The other marked points are $z_1,\dots,z_{n-2}$.
Consider the real Lie algebra generated by the Loewner fields:
$$A(x)\partial_w=\frac{w(w-1)}{w-x}\partial_w$$
that fix $0,1,\infty$.
Consider the subalgebra fixing the marked points
$z_1,\dots,z_{n-2}$. This includes the fields:
$$A(x_1,\dots,x_{n-1})=\det(A(x_i)(z_j))_{1\leq i,j\leq n-1}=\prod_{i=1}^{n-1}z_i(z_i-1)\frac{V(x_1,\dots,x_{n-1})V(z_1,\dots,z_{n-1})}{\prod_{i,j}(z_i-x_j)}$$
with the convention $w=z_{n-1}$, where $V$ is the Vandermonde
polynomial $V(x_1,\dots,x_{n-1})=\prod_{i<j}(x_j-x_i)$ (Cauchy determinant).
By taking limits of such vector fields (as $x_1,\dots,x_{n-1}\rightarrow
x$, normalizing by $V(x_1,\dots,x_{n-1})$), one gets
$\tilde A(x)=(n-2)!P(w)/(w-x)^{n-1}$, where
$P(w)=w(w-1)(w-z_1)\cdots(w-z_{n-2})$ is a polynomial of
degree $n$.
The (tangent) cocycle $dC$ restricted to this subalgebra is a morphism
to the (trivial) Lie algebra $\mathbb{R}$, so it vanishes on commutators.
By considering the limit of $[\tilde A(x),\tilde A(y)]$ as $y\searrow
x$, we see that $\hat A(x)=(2n-1)!P(w)^2/(w-x)^{2n}$ is annihilated by
$dC$. It is easy to see that:
$$\hat A(x)=\left(\partial_x^{2n-1} P(x)\partial_x^{2-n}\right)\tilde A(x).$$
Note that $(\partial_x^{2n-1} P(x)\partial_x^{2-n})$ is a
well-defined linear differential operator of degree $n+1$, with
polynomial coefficients. It follows that:
$$dC(\tilde A(x))=a+\frac{a_0}{x^{n-1}}+\frac{a_1}{(x-1)^{n-1}}+\sum_{j=1}^{n-2}\frac{b_j}{(x-z_j)^{n-1}}$$
where the coefficients depend on the marked points
$z_1,\dots,z_{n-2}$. We want to integrate this to get information on
$dC(A(x_1,\dots))$.
Consider the atomic measure (in the variable $x$):
$$m=\sum_i(-1)^{n-1-i}V(x_1,\dots,\hat {x_i},\dots)\delta_{x_i}.$$
One can think of $m$ as the Vandermonde determinant $V(x_1,\dots,x_{n-1})$
where the last row $(x_1^{n-2},\dots,x_{n-1}^{n-2})$ is replaced
by $(\delta_{x_1},\dots,\delta_{x_{n-1}})$.
Then we have the decomposition:
$$\frac{V(x_1,\dots,x_{n-1})}{\prod_i (w-x_i)}=\left\langle m,\frac
1{w-x}\right\rangle=(-1)^n\left\langle m^{(2-n)},\frac
{(n-2)!}{(w-x)^{n-1}}\right\rangle$$
where $m^{(2-n)}$ can be chosen with a continuous, compactly supported
density (since $\langle m,1\rangle=\cdots=\langle m,x^{n-3}\rangle=0$,
which follows from the Vandermonde form of $m$). It follows that:
$$A(x_1,\dots,x_{n-1})=(-1)^n\prod_{i=1}^{n-2}z_i(z_i-1)\frac{V(z_1,\dots,z_{n-2})}{\prod_{i<n-1,j\leq
n-1}(z_i-x_j)}\left\langle m^{(2-n)},\tilde A(x)\right\rangle$$
and consequently:
$$dC(A(x_1,\dots,x_{n-1}))=\prod_{i=1}^{n-2}z_i(z_i-1)\frac{V(z_1,\dots,z_{n-2})}{\prod_{i<n-1,j\leq
n-1}(z_i-x_j)}\left\langle m,ax^{n-2}+\frac{a_0}{x}+\frac{a_1}{x-1}+\sum_{j=1}^{n-2}\frac{b_j}{x-z_j}\right\rangle$$
for some coefficients $a,a_0,b_j$ depending on
$z_1,\dots,z_{n-2}$. From the Vandermonde form of $m$ and simple
manipulations, we get:
$$dC(A(x_1,\dots,x_{n-1}))=\prod_{i=1}^{n-2}z_i(z_i-1)\frac{V(z_1,\dots,z_{n-2})V(x_1,\dots,x_{n-1})}{\prod_{i<n-1,j\leq
n-1}(z_i-x_j)}\left(a+\frac{a_0}{\prod_jx_j}+\frac{a_1}{\prod_j(x_j-1)}+\sum_{k=1}^{n-2}\frac{b_k}{\prod_j(x_j-z_k)}\right).$$
Using the Cauchy determinant formula (and its derivative w.r.t.\ $z_k$), it is then
easy to see that:
$$\displaystyle{
\left|\begin{array}{ccc}
\frac 1{x_1-z_1}&\dots&\frac 1{x_{n-1}-z_1}\\
\vdots&&\vdots\\
\frac 1{x_1-z_{n-2}}&\dots&\frac 1{x_{n-1}-z_{n-2}}\\
R(x_1)(z)&\dots&R(x_{n-1})(z)
\end{array}\right|=0}$$
where:
$$R(x)=dC(A(x))-\left(a+\frac{a_0}{x}+\frac{a_1}{x-1}+\sum_{j=1}^{n-2}\frac{b_j}{(x-z_j)^2}\right)$$
which depends implicitly on the $z$ variables (and we have replaced
$a_0$ with $a_0/\prod_i z_i$, $a_1$ with $a_1/\prod_i (z_i-1)$; since
we use $0,1$ for normalization, this induces some asymmetry).
For the part corresponding to the coefficient $a$, one can
consider the limit $w\rightarrow\infty$ of $A(x_1,\dots,x_{n-1})$.
If $Q(x)=(x-z_1)\dots (x-z_{n-2})$, after row operations, we get:
$$
\left|\begin{array}{ccc}
1&\dots&1\\
\vdots&&\vdots\\
x_1^{n-3}&\dots& x_{n-1}^{n-3}\\
Q(x_1)R(x_1)(z)&\dots&Q(x_{n-1})R(x_{n-1})(z)
\end{array}\right|=0$$
If we apply $\partial_{x_1}^{n-2}$ to this
expression (using multilinearity in the first column), we get
$\partial_{x_1}^{n-2}\left(Q(x_1)R(x_1)(z)\right)=0$. Hence:
$$dC(A(x))=a+\frac{a_0}{x}+\frac{a_1}{x-1}+\sum_{j=1}^{n-2}\frac{c_j}{x-z_j}+\sum_{j=1}^{n-2}\frac{b_j}{(x-z_j)^2}$$
where coefficients may depend on $z$.
In terms of $h$, this means that:
$$h(x,z)=\sum_i\frac{\mu_i(z)}{(x-z_i)^2}+\sum_i\frac{\rho_i(z)}{x-z_i}$$
where $\mu_i,\rho_i$ are translation invariant and homogeneous of
degree $0,-1$ respectively (for normalization reasons, we have to
divide $dC(A(x))$ by $x(x-1)$).
So we can substitute this expression in the commutation equation (\ref{inth}), and get
after some simplifications:
$$\sum_{i,j}\left(\frac 1{(y-z_j)(x-z_i)^2}-\frac 1{(x-z_j)(y-z_i)^2}\right)\partial_j\mu_i+\left(\frac 1{(y-z_j)(x-z_i)}-\frac 1{(x-z_j)(y-z_i)}\right)\partial_j\rho_i=0$$
as a rational function in $x,y$. This implies that
$\partial_j\mu_i=0$ for all $i,j$ (considering first the coefficient
of $(x-z_i)^{-2}$, and then letting $y$ vary). Similarly, considering
the coefficient of $(x-z_i)^{-1}$, we get:
$$\sum_j\frac{\partial_j\rho_i-\partial_i\rho_j}{y-z_j}=0$$
and letting $y$ vary, we get the cross-derivative condition
$\partial_i\rho_j=\partial_j\rho_i$.
Hence we can find constant coefficients $\mu_i$ and a function $f(z)$ such that:
$$h(x,z)=\sum_i\frac{\mu_i}{(x-z_i)^2}+\ell_x\log f(z).$$
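The existence of $f$ is a Poincar\'e lemma argument: since
$\partial_i\rho_j=\partial_j\rho_i$, the $1$-form $\sum_i\rho_i\,dz_i$
is closed, hence exact on the (simply connected) chamber of
configurations, say $\rho_i=\partial_ig$; setting $f=e^{-g/2}$, one
gets
$$\ell_x\log f=\sum_i\frac{2\,\partial_i\log f}{z_i-x}=\sum_i\frac{\rho_i}{x-z_i},$$
while the constancy of the $\mu_i$'s was obtained above.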
Note that $f(z)=(z_j-z_i)^{\nu_{ij}}$ produces the term
$-2\nu_{ij}/((x-z_i)(x-z_j))$. So we can rewrite $h$ as:
$$h(x,z)=\sum_{i,j}\nu_{ij}\left(\frac 1{x-z_i}-\frac 1{x-z_j}\right)^2+\ell_x\log f(z)$$
in a non-unique fashion (e.g.\ take $f$ to be a power of a cross-ratio; here $z_{n+1}=\infty$). Since:
$$\partial_j\sum_i\rho_i=(\sum_i\partial_i)\rho_j=0,{\rm\ \ }
\partial_j\sum_iz_i\rho_i=(\sum_iz_i\partial_i\rho_j)+\rho_j=0$$
$f$ is translation invariant and homogeneous of some fixed
degree. This degree can be set to 0 by adjusting the $\nu_{ij}$'s.
One can freely set all the $\nu_{ij}$'s to zero except
$(\nu_{12},\dots, \nu_{1,{n+1}},\nu_{23})$ e.g., which gives $(n+1)$
numerical invariants.
Let us sum up the previous discussion.
\begin{Lem}
If the system
$$\left\{\begin{array}{l}
\displaystyle\frac{\kappa}
2\partial_{xx}\psi+\sum_i\frac{2\partial_i\psi}{z_i-x}+\frac{2\partial_y\psi}{y-x}+\left(\left(1-\frac
6{\tilde\kappa}\right)\frac 1{(y-x)^2}+h_1(x,z)\right)\psi=0\\
\displaystyle\frac{\tilde\kappa}2\partial_{yy}\psi+\sum_i\frac{2\partial_i\psi}{z_i-y}+\frac{2\partial_x\psi}{x-y}+\left(\left(1-\frac
6\kappa\right)\frac 1{(x-y)^2}+h_2(y,z)\right)\psi=0
\end{array}\right.$$
admits a non-vanishing solution $\psi$ (smooth, homogeneous and translation
invariant), then:
\begin{enumerate}
\item Either $\tilde\kappa=\kappa$ or $\tilde\kappa=16/\kappa$.
\item The functions $h_1$, $h_2$ can be written as $h_1(x,z)=h(x,z)$,
$h_2(y,z)=h(y,z)$, where:
$$h(x,z)=\sum_{1\leq i<j\leq n+1}2\nu_{ij}\left(\frac 1{x-z_i}-\frac 1{x-z_j}\right)^2+\ell_x\log f(z)$$
where $\nu_{ij}$ are constant parameters and $f(z)$ is a conformally
invariant function of the marked points $(z_1,\dots,z_{n+1})$.
\end{enumerate}
Moreover the weight vector $(\nu_{ij})_{i<j}$ is well-defined modulo
the relations $\nu_{ij}+\nu_{kl}=\nu_{il}+\nu_{jk}$.
\end{Lem}
Conversely, it is natural to ask whether these conditions are
sufficient for the existence of a non-vanishing solution $\psi$.
In the situation where $h$ is the rational function:
$$h(x,z)=\sum_i\frac{\mu_i}{(x-z_i)^2}+\sum_{i<j}\frac{\nu_{ij}}{(x-z_i)(x-z_j)}$$
then we can find an elementary solution
of the form:
$$\psi(x,y,z)=\left(\prod_i(z_i-x)^{a_i}(z_i-y)^{\tilde a_i}\right)(y-x)^b\prod_{i<j}(z_j-z_i)^{b_{ij}}$$
If $\tilde\kappa=\kappa$, we can pick $b=2/\kappa$, $\tilde a_i=a_i$,
$b_{ij}=\nu_{ij}/2$ and $a_i$ a solution of
$(\kappa/2)a_i(a_i-1)+2a_i+\mu_i=0$ for all $i$. If
$\tilde\kappa=16/\kappa$, set $b=-1/2$, $\tilde a_i=-a_i\kappa/4$, $b_{ij}=\nu_{ij}/2$ and $a_i$ a solution of
$(\kappa/2)a_i(a_i-1)+2a_i+\mu_i=0$ for all $i$.
Finding $\psi$ of a prescribed homogeneity degree is much more
difficult.
For a general $h$, we shall later discuss martingale interpretations
of solutions.
If one considers $n$ SLEs, $n\geq 2$, one gets a system of $n$ linear
PDEs (with coefficients $h_1,\dots,h_n$ to be specified). We give an
example of this situation in the next section.
\subsection{Restriction, locality and cocycles}
In the previous subsection, we made some progress on the question of classifying
restriction measures, which we now make explicit (in the chordal setting).
A configuration is a simply connected domain $D$ with $n$ marked
points on the boundary: $(D,z_1,\dots,z_n)$ (say $D$ has Jordan
boundary). The following definition is a natural extension of \cite{LSW3}. We call a restriction measure a
collection $\mu$ of measures parametrized by the configuration
$(D,z_1,\dots,z_n)$ such that:
\begin{enumerate}
\item The measure $\mu_{(D,z_1,\dots)}$ is supported on simply
connected compact
subsets of the compactification of $D$ that intersect the boundary of
$D$ exactly at $z_1,\dots,z_n$. Also, $\partial K\cap\partial D\subset \overline{K\cap D}$.
\item (Conformal invariance) If $\phi:(D,z_1,\dots)\rightarrow(D',z_1',\dots)$ is an
equivalence of configurations, $\phi_*\mu_{(D,\dots)}=\mu_{(D',\dots)}$.
\item (Restriction property) If $(D,z_1,\dots)$ and $(D',z_1,\dots)$ are configurations such
that $D'\subset D$ and $\partial D'\cap\partial D$ contains
neighbourhoods of $z_1,\dots,z_n$, then:
$\mu_D(.|.\subset D')=\mu_{D'}$.
\end{enumerate}
Given such a restriction measure (strictly speaking, collection of
measures, or measure-valued function on the moduli space), we can
define a cocycle: $C_\mu(A,z)=\mu_{(\H,z)}(\{K: K\cap A=\varnothing\})$.
Indeed, the tower property of conditional expectations:
$$\mu_{(\H,z)}(.|\subset \H\setminus A.B)=
\mu_{(\H,z)}(.|\subset \H\setminus A.B|\subset \H\setminus A)
=\mu_{(\H\setminus A,z)}(.|\subset \H\setminus A.B)
=\mu_{(\H,A.z)}(.|\subset \H\setminus B)$$
translates into the cocycle condition: $C_\mu(A.B,z)=C_\mu(A,z)C_\mu(B,A.z)$.
Besides, $\mu$ is entirely determined by $C_\mu$, since the events
$\{K\cap A=\varnothing\}$ constitute a $\pi$-system generating the full
$\sigma$-algebra for random compact sets $K$ satisfying the
topological condition (i).
We have proved that under a regularity assumption $C_\mu$ can be
expressed as:
$$C_\mu(A,z)=\prod_{i<j}\left(\phi'_A(z_i)\phi'_A(z_j)\left(\frac{z_j-z_i}{\phi_A(z_j)-\phi_A(z_i)}\right)^2\right)^{\nu_{ij}}\frac
{f(\phi_A(z_1),\dots,\phi_A(z_n))}{f(z_1,\dots,z_n)}.$$
The question is what cocycles can be realized through a restriction
measure. A result of \cite{LSW3} is that if $n=2$, then the cocycle
$\left(\phi'_A(z_1)\phi'_A(z_2)\left(\frac{z_2-z_1}{\phi_A(z_2)-\phi_A(z_1)}\right)^2\right)^{\nu}$
corresponds to a restriction measure iff $\nu\geq 5/8$. Also, there is
an operation $\bullet$ on restriction measures (filling of the union of
independent samples) such that $C_{\mu_1\bullet\mu_2}=C_{\mu_1}C_{\mu_2}$.
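The multiplicativity of $C$ under $\bullet$ comes from independence:
for a hull $A$ avoiding the marked points, the filling of $K_1\cup
K_2$ avoids $A$ exactly when both independent samples do, so that
$$C_{\mu_1\bullet\mu_2}(A,z)=\P(K_1\cap A=\varnothing,\,K_2\cap A=\varnothing)=C_{\mu_1}(A,z)\,C_{\mu_2}(A,z).$$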
Let us now discuss locality; the close relation between locality and
restriction is stressed in \cite{LSW3}.
Consider a configuration $(D,a,b,c)$ with three marked points on the
boundary, in cyclic order.
Using chordal $\SLE_6$, one can define a distribution $\mu$ relative
to such configurations and satisfying:
\begin{enumerate}
\item $\mu_{(D,a,b,c)}$ is supported on simply connected compact subsets $K$ of the
compactification of $D$ whose intersection with $\partial D$ consists
of a point $X\in (bc)$ and an arc contained in $(cb)$ and containing
$a$. Also, $\partial K\cap\partial D\subset \overline{K\cap D}$.
\item (Conformal invariance) If $\phi:(D,a,b,c)\rightarrow
(D',a',b',c')$ is an equivalence of configurations, then
$\phi_*\mu_{(D,a,b,c)}=\mu_{(D',a',b',c')}$.
\item (Locality) Let $\mu_{(D,a,b,c,x)}$ be the disintegrated measure
$\mu_{(D,a,b,c)}(.|X\in dx)$.
If $(D,a,b,c)$ and $(D',a,b',c')$ are configurations such
that $D'\subset D$, $b'\in (ab)$, $c'\in(ca)$, $\partial
D'\cap\partial D$ contains the arcs $(ab')$, $(c'a)$ and
a neighbourhood of $x$, then:
$$\mu_{(D,a,b,c,x)}(.|K\subset D',\partial K\subset (ab')\cup\{x\}\cup
(c'a))=\mu_{(D',a,b',c',x)}.$$
\end{enumerate}
Let us associate a cocycle to this collection of measures. We phrased
the locality property as a restriction property for disintegrated
measures, so we can define as above:
$$C(A,z)=\mu_{(\H,a,b,c,x)}(\{K:K\subset \H\setminus A\})$$
where $z=(a,b,c,x)$. With these conditions, the measures are entirely
determined by the distribution of $X$. More precisely, if this
distribution is $f(x)dx$ for the configuration $(D,a,b,c)$ and $\phi$
is the equivalence $(D,a,b,c)\rightarrow (D',a,b',c')$, then
$C_\mu(A,z)=\phi'(x) f\circ\phi(x)/f(x)$, or, without assumptions on the
normalization of the equivalence $\phi$:
$$C_\mu(\phi,z)=\frac{c_{ax}c_{bc}^{1/2}}{c_{ab}^{1/2}c_{ac}^{1/2}}(\phi,z)\frac {f\circ\phi(z)}{f(z)}$$
and again one can recover $\mu$ from $C_\mu$. One can think of several
extensions to configurations with more points. Consider the following
example, coming from percolation. For critical percolation in a
rectangle $(ABCD)$, consider the highest (resp. lowest) open path from
$(AB)$ to $(CD)$ with endpoints $A',B',C',D'$ respectively. This gives
a measure (with mass $<1$) on hulls delimited by the two
paths. Disintegrated measures (w.r.t. $A',\dots,D'$) have the
restriction property. The measure on hulls is determined by the joint
distribution $(A',B',C',D')$, that can be obtained by taking partial
derivatives of Cardy's formula (see \cite{D5} for related questions).
\section{A particular case}
In this section, we discuss the important situation where all marked
points on the boundary are growth points for commuting SLEs. This
situation is studied in greater detail in \cite{D7}.
As described earlier, consider the half-plane $\H$ with
$(2n)$ marked points on the boundary, $(x_1,\dots, x_{2n})$, in
cyclical order. Consider $n$ independent
$\SLE_{8/3}$ from $x_{2i-1}$ to $x_{2i}$, $i=1,\dots,n$. Define
$\psi(x_1,\dots,x_{2n})$ to be the probability of no pairwise
intersection. This function is
invariant under the full M\"obius group ($\infty$ is only used for
normalization). Now, if $\gamma_i$ is the trace of the $i$-th SLE, define
\begin{align*}
\psi(x_1,\dots,x_{2n})
&=\P(\gamma_i\cap\gamma_j=\varnothing, 1\leq i< j\leq n)\\
&=\P(\gamma_1(0,t)\cap(\cup_{j>1}\gamma_j)=\varnothing,\gamma_1(t,\infty)\cap(\cup_{j>1}\gamma_j)=\varnothing,
\gamma_i\cap\gamma_j=\varnothing, 2\leq i< j\leq n)\\
&=\prod_{j>1}\P(\gamma_j\cap\gamma_1(0,t)=\varnothing)\P(\tilde\gamma_i\cap\tilde\gamma_j=\varnothing, 1\leq i< j\leq n)
\end{align*}
where $\tilde\gamma_i$ are independent $\SLE_{8/3}$ in the domain
$(\H,g_t(\gamma_1(t)),g_t(x_2),\dots,g_t(x_{2n}))$ and $(g_t)$ are the
conformal equivalences associated with the first SLE. This relies on
the Markov property for $\gamma_1$, the restriction property for each
$\gamma_j$, $j>1$, and induction on $n$. As a consequence, the following process:
$$\psi(W_t,g_t(x_2),\dots,g_t(x_{2n}))\prod_{j>1}\left(g'_t(x_{2j-1})g'_t(x_{2j})\left(\frac{x_{2j}-x_{2j-1}}{g_t(x_{2j})-g_t(x_{2j-1})}\right)^2\right)^{\alpha_\kappa}$$
is a martingale, where $\alpha_\kappa=(6-\kappa)/(2\kappa)$, $\kappa=8/3$. Now, one
can do this starting at each point $x_i$ (since $\SLE_{8/3}$ is
reversible). This implies that $\psi$ is annihilated by the operators
($k=1,\dots, 2n$):
$$\frac\kappa 2\partial_{kk}+\sum_{l\neq k}\frac{2\partial_{l}}{x_l-x_k}+(\kappa-6)\frac{\partial_{k}}{x_{k}-x_{\iota(k)}}+\frac{\kappa-6}\kappa\sum_{\{j,\iota(j)\}\neq
\{k,\iota(k)\}}\left(\frac1{x_{j}-x_{k}}-\frac1{x_{\iota(j)}-x_{k}}\right)^2
$$
where $\iota$ defines the chosen pairing $\iota(2i-1)=2i$, $\iota(2i)=2i-1$.
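Note that for $\kappa=8/3$,
$$\alpha_{8/3}=\frac{6-\kappa}{2\kappa}\bigg|_{\kappa=8/3}=\frac{10/3}{16/3}=\frac 58,$$
the one-sided restriction exponent $5/8$ appearing in the discussion
of restriction measures above.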
This is not very symmetrical. It is easy to see that the function
$$\psi(\dots x_i\dots)\prod_j(x_{2j}-x_{2j-1})^{1-6/\kappa}$$
is annihilated by the operators:
$$\left\{\begin{array}{l}\displaystyle\frac\kappa 2\partial_{kk}+\sum_{l\neq
k}\frac{2\partial_{l}}{x_l-x_k}+\frac{\kappa-6}\kappa\sum_{l\neq
k}\frac 1{(x_l-x_k)^2}, \hspace{1cm} k=1,\dots, 2n\\
\sum_k\partial_k\\
\sum_k x_k\partial_k-n(1-6/\kappa)\\
\sum_k x_k^2\partial_k-(1-6/\kappa)(x_1+\cdots+ x_{2n})
\end{array}\right.$$
the last three corresponding to the invariance of $\psi$ under the M\"obius group.
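Indeed, write $F(x_1,\dots,x_{2n})=\psi\prod_j(x_{2j}-x_{2j-1})^{1-6/\kappa}$.
Invariance of $\psi$ under translations and scalings gives
$\sum_k\partial_kF=0$ and the Euler relation
$\sum_kx_k\partial_kF=n(1-6/\kappa)F$; for
$\phi_\varepsilon(x)=x/(1-\varepsilon x)$, the expansion
$$\phi_\varepsilon(x_{2j})-\phi_\varepsilon(x_{2j-1})=(x_{2j}-x_{2j-1})\left(1+\varepsilon(x_{2j-1}+x_{2j})\right)+O(\varepsilon^2)$$
yields $\sum_kx_k^2\partial_kF=(1-6/\kappa)(x_1+\cdots+x_{2n})F$ at
first order in $\varepsilon$.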
In fact, as is discussed in \cite{D7}, one can make sense of this
system for any $\kappa\in (0,8/3)$ using appropriate loop-soups.
Let us make a few remarks on this system. First, each choice of a
non-crossing pairing of the $(2n)$ boundary points yields a solution;
there are $C_n$ such pairings. If $\kappa=6$, this is the system
satisfied by crossing probabilities for critical percolation in a
$(2n)$-gon with alternating boundary conditions. The number of these
crossing probabilities is the number of non-crossing partitions of the
set of blue edges, which is known to be $C_n$. In the case $n=2$, it
is trivial to solve this system, which reduces to a hypergeometric
equation (and $C_2=2$). If $n=3$, one can write this system in a
Pfaffian form, proving that its rank is indeed $C_3=5$. In the case
$\kappa=6$, $n=3$, and configurations with 3-fold symmetry, one can
express solutions in terms of ${}_3F_2$. Finally, one
can take the limit $\kappa\rightarrow\infty$ of the system; in this
case, solutions are polynomials, and it is easy to see that the rank
of the system is $C_n$ for all $n$. Euler integrals for solutions of
this system are discussed in \cite{D7}.
\section{Local commutation}
In this section we see how to go from infinitesimal commutation
relations to commutation (in law) of SLE hulls. Recall from Section 3
the definition of an $\SLE(\kappa,b,\tilde\kappa,\tilde b)$. We have seen in the
previous sections that the existence of such an SLE implies conditions
on $(\kappa,b,\tilde\kappa,\tilde b)$ (in particular either
$\tilde\kappa=\kappa$ or $\tilde\kappa=16/\kappa$). Conversely, assume
that the data $(\kappa,b,\tilde\kappa,\tilde b)$ satisfies the
appropriate conditions. We will see that this implies the existence of
an $\SLE(\kappa,b,\tilde\kappa,\tilde b)$. Note that this is not
saying anything about the long-time behaviour of such an SLE. The
questions involving collisions of commuting SLEs are delicate
and cannot be handled by these methods.
\begin{Prop}
Consider the upper half-plane $\H$ with $n+2$ distinct marked real
points $x,y,z_1,\dots,z_n$. Assume that $\kappa,\tilde\kappa$ are
positive numbers and $b,\tilde b$ are smooth functions, translation
invariant and homogeneous of degree $(-1)$, such that the following
relation holds:
\begin{equation*}
\left[{\mathcal L},{\mathcal M}\right]=\frac{4}{(y-x)^2}\left({\mathcal M}-{\mathcal L}\right)
\end{equation*}
where ${\mathcal L}$ (resp. ${\mathcal M}$) is the infinitesimal generator of
$\SLE_\kappa(b)$ (resp. $\SLE_{\tilde\kappa}(\tilde b)$) growing at
$x$ (resp. $y$). Then there exists an
$\SLE(\kappa,b,\tilde\kappa,\tilde b)$.
\end{Prop}
We will use the following lemma.
Let $\phi_0$ be some conformal
equivalence $\H\setminus K\rightarrow\H$, with hydrodynamic
normalization at infinity. Let us call $\phi_0$-capacity the increasing function
on hulls $A\subset\H\setminus K$: $A\mapsto{\rm cap}(\phi_0(A))={\rm cap}(A\cup K)-{\rm cap}(K)$.
\begin{Lem}
With the hypotheses of the Proposition,
let $D_1$, $D_2$ be disjoint compact neighbourhoods of $x$, $y$ resp.,
with Jordan boundary, not containing any other marked point; $\eta_1$,
$\eta_2$ are two positive numbers.
Then the two following procedures define the same
probability law on pairs of chains in $\H$:
\begin{enumerate}
\item Grow an $\SLE_\kappa(b)$ at $x$ until it exits $D_1$
or its $\phi_0$-capacity exceeds $\eta_1$.
Then grow an independent
$\SLE_{\tilde\kappa}(\tilde b)$ at $y$ in the remaining domain,
until it exits $D_2$ or its $\phi_0$-capacity exceeds $\eta_2$.
\item Grow an $\SLE_{\tilde\kappa}(\tilde b)$ at $y$ until it exits
$D_2$
or its $\phi_0$-capacity exceeds $\eta_2$.
Then grow an independent
$\SLE_\kappa(b)$ at $x$ in the remaining domain,
until it exits $D_1$
or its $\phi_0$-capacity exceeds $\eta_1$.
\end{enumerate}
\end{Lem}
\begin{proof}
Informally, the argument is the following: divide the two SLEs into $n$
segments; one has to prove that one can either grow the $n$ segments
of the first SLE, then the $n$ segments of the second SLE, or the
other way round and get the same law. The permutation of two segments
(of the two SLEs) induces an error term of $O(n^{-3})$, from the
infinitesimal commutation relations. One needs $n^2$ such
permutations; letting $n$ go to infinity, one gets the result. The
uniformity in the error terms is provided by the restriction to paths
in the disjoint compact neighbourhoods $D_1$, $D_2$.
For simplicity, we will consider only the case where $\phi_0=\Id$ (and
the $\phi_0$-capacity is the ordinary half-plane capacity). For the general case,
one has to replace fixed times by corresponding stopping times; the
proof goes otherwise unchanged.
For positive times $S$, $T$, let $\mathbb{E}_1$ designate the expectation for
pairs of random curves obtained by growing first the $\SLE$ in $D_1$
up to time $S$ (half-plane capacity $2S$), and then the $\SLE$ in $D_2$
up to time $T$. The symbol $\mathbb{E}_2$ refers to expectation for the
reversed construction. The driving process for each of these Loewner
chains (seen in the original half-plane) is denoted by $X$, $Y$. Let
$\tau_1$ be the time at which the first SLE exits $D_1$, and $\tau_2$
the corresponding time for the second SLE. We will prove that
$$\mathbb{E}_1\left(.{\bf 1}_{\tau_1>S,\tau_2>T}\right)=\mathbb{E}_2\left(.{\bf 1}_{\tau_1>S,\tau_2>T}
\right)$$
as measures on $C_0([0,S],\mathbb{R})\times C_0([0,T],\mathbb{R})$. To recover the
statement of the lemma, one then considers the measures:
$$\mathbb{E}_i\left(.({\bf 1}_{\tau_1>S,\tau_2>T}-{\bf 1}_{\tau_1>S+ds,\tau_2>T})\right)$$
for $i=1,2$. So we can work with fixed times $S$ and $T$. Note that
$2\tau_i$ is bounded by the half-plane capacity of $D_i$, $i=1,2$.
Let $S_0=0<S_1<\cdots< S_m=S$ and $T_0=0<T_1<\cdots< T_m=T$ be
fixed sequences of times ($m\geq 1$). Also, let $(\tilde\varphi_i)_{0\leq
i\leq m}$, $(\tilde\psi_i)_{0\leq i\leq m}$ be test functions (i.e. in
$C^\infty_c(\mathbb{R})$). By a monotone class argument, we need only see that:
$$\mathbb{E}_1\left(\prod_k\tilde\varphi_k(X_{S_k})\tilde\psi_k(Y_{T_k}){\bf 1}_{\tau_1>S,\tau_2>T}\right)=
\mathbb{E}_2\left(\prod_k\tilde\varphi_k(X_{S_k})\tilde\psi_k(Y_{T_k}){\bf 1}_{\tau_1>S,\tau_2>T}\right)$$
For $n\geq 1$, consider increasing
sequences $(s_i)_{0\leq i\leq mn}$, $(t_i)_{0\leq i\leq mn}$, where
$s_{nj}=S_j$, $t_{nj}=T_j$, and the increments $(s_{i+1}-s_i)$,
$(t_{i+1}-t_i)$ go uniformly to 0. Define $\varphi_{ni}=\tilde\varphi_i$,
$\psi_{ni}=\tilde\psi_i$, and $\varphi_i=\psi_i=1$ if $n$ does not
divide $i$.
Note that the commutation relation holds for functions of the positions
of all marked points in the Loewner flow. For convenience, we will
approximate the event $\{\tau_1>S,\tau_2>T\}$ by a function of an
extended flow. More precisely, let $\delta>0$ be a (small) positive
number and $N$ a (large) integer. Mark $N$ points $z_1,\dots z_N$
on the Jordan boundaries of $D_1$ and $D_2$ (one can also mark their
conjugates $\overline z_1,\dots,\overline z_N$, extending the flow by
Schwarz reflection). For instance, one can
choose them so that the Hausdorff distance between $\partial
D_1\cup\partial D_2$ and $\{z_1,\dots z_N\}$ is minimal ($N$ being
fixed).
Let ${\mathcal K}=\{(K,u)\}$ where $K$ is a compact hull included in $D_1$
and $u$ is a point in $\partial D_1\cap K$. Then ${\mathcal K}$ is a
compact set (using the Hausdorff metrics on compact subsets of
$D_1$), so one can choose $\delta>0$ so that:
$$\delta>\max_{(K,u)\in{\mathcal K}}\left(\min_i |\phi_K(z_i)-\phi_K(u)|\right)$$
and the corresponding inequality holds for hulls in $D_2$. Also, it is
easy to see that one can choose $\delta$ so that it goes to zero as $N$
goes to infinity, by a compactness argument.
Let $f_\delta$ be a smooth function of the
variables $(x,y,z_1,\dots z_N)$, taking values in $[0,1]$, such that it vanishes if $|x-z_i|<\delta$
or $|y-z_i|<\delta$ for some $i\in\{1,\dots,N\}$ and equals 1 if
$|x-z_i|>2\delta$, $|y-z_i|>2\delta$ for all $i$. One can assume that one of
the $z_i$'s is real and between $x$ and $y$, and similarly, the
other marked points (that influence the drift) are separated from $x$,
$y$ by one of the ``spectator'' $z_i$'s. The choice of
$\delta$ ensures that $f_\delta$ vanishes as soon as an SLE crosses $\partial
D_i$, $i=1,2$.
Let $W$ denote the full configuration (images of growth points and marked
points in the Loewner flow). Then we just have to prove that:
$$\lim_{n\rightarrow\infty}(\mathbb{E}_1-\mathbb{E}_2)\left(\prod_k\varphi_k(X_{s_k})f_\delta(W_{(s_k,0)})\psi_k(Y_{t_k})f_\delta(W_{(0,t_k)})\right)=0$$
and then let $N\nearrow\infty$, $\delta\searrow 0$ to get the result for
stopped SLEs. So in what follows we may replace
$\varphi_k(X_{s_k})f_\delta(W_{(s_k,0)})$ with
$\varphi_k(W_{(s_k,0)})$ (a function of the configuration), and
similarly $\psi_k(Y_{t_k})f_\delta(W_{(0,t_k)})$ with
$\psi_k(W_{(0,t_k)})$.
Consider also two random curves $\gamma$, $\hat\gamma$ started from
$x$ (resp. $y$) in $\H$, parameterized by half-plane capacity.
Let $\phi_{s,t}=\phi_{\gamma_{[0,s]}\cup\hat\gamma_{[0,t]}}$, and $W_{s,t}$
is the configuration $\phi_{s,t}(\gamma_s,\hat\gamma_t,z_1,\dots
z_N,\dots)$. We will also abbreviate $x_k=W_{(s_k,0)}$, $y_k=W_{(0,t_k)}$.
Consider two permutations $\sigma$ and $\sigma'$ of $\{s_1,\dots
s_{mn},t_1,\dots t_{mn}\}$, increasing for the partial order generated
by $s_k< s_{k+1}$, $t_k<t_{k+1}$, and such that $\sigma$ and
$\sigma'$ differ by a transposition of two consecutive elements. For
instance $\sigma=(s_1,\dots s_{mn},t_1,\dots t_{mn})$ and
$\sigma'=(s_1,\dots s_{mn-1},t_1,s_{mn},t_2,\dots t_{mn})$.
Suppose that $\gamma,\hat\gamma$ are obtained from the permutation $\sigma$ in the following fashion:
if $\sigma=(\sigma_1, s_{i+1}, \sigma_2)$, $t_j$ is the maximal $t_.$
element in $\sigma_1$, and
$\phi=\phi_{s_i,t_j}$, then
$\phi(\gamma_{[s_i,s_{i+1}]})$ is an $\SLE_\kappa(b)$ started from
$W_{s_i,t_j}$ and independent of
$\phi$ conditionally on its starting state
(stopped so that $\phi_{s_i,0}(\gamma_{[s_i,s_{i+1}]})$ has capacity $2(s_{i+1}-s_i)$). Likewise, if $\sigma=(\sigma_1, t_{j+1}, \sigma_2)$, $s_i$
is the maximal $s_.$ element in $\sigma_1$, and
$\phi=\phi_{s_i,t_j}$, then
$\phi(\hat\gamma_{[t_j,t_{j+1}]})$ is an $\SLE_{\tilde\kappa}(\tilde b)$ started from
$W_{s_i,t_j}$ and independent of
$\phi$ conditionally on its starting state. The symbol $\mathbb{E}$ is
expectation for this construction (relative to $\sigma$), and $\mathbb{E}'$ is
the corresponding expectation obtained from $\sigma'$.
Let $\sigma=(\sigma_1,s_i,t_j,\sigma_2)$ and
$\sigma'=(\sigma_1,t_j,s_i,\sigma_2)$. Then:
\begin{align*}
\mathbb{E}'\left(\prod_k\varphi_k(x_k)\psi_k(y_k)\right)&=\mathbb{E}'\left(\left(\prod_{k<i,l<j}\varphi_k(x_k)\psi_l(y_l)\right)\varphi_i(x_i)\psi_j(y_j)\left(\prod_{k>i,l>j}\varphi_k(x_k)\psi_l(y_l)\right)\right)
\end{align*}
(Here the $f_\delta$ are implicitly included in the $\varphi_k$, $\psi_k$).
The expectation of the last part of the product conditionally on
$\gamma_{[0,s_i]},\hat\gamma_{[0,t_j]}$ is a function of
$W_{s_i,t_j}$. Denote
by $F(u,v)$ this function, which is the same under $\mathbb{E}$ and $\mathbb{E}'$; by
induction and standard regularity results (the drift terms stay
bounded as long as the functional does not vanish), it is
easily seen that $F$ is a smooth function;
the existence of regular conditional probability is clear for
the same reasons.
Now, consider:
$$\mathbb{E}\left(\left.
\varphi_i(x_i)\psi_j(y_j)F(\phi_{s_i,t_j}(\gamma_{s_i},\hat\gamma_{t_j}))\right|W_{s_{i-1},t_{j-1}}\right)-\mathbb{E}'\left(\left.
\varphi_i(x_i)\psi_j(y_j)F(\phi_{s_i,t_j}(\gamma'_{s_i},\hat\gamma'_{t_j}))\right|W_{s_{i-1},t_{j-1}}\right)$$
Assume that $i,j$ are not multiples of $n$ (and the traces are away from
the boundaries of $D_1$, $D_2$ at times $s_i,t_j$). Then $\varphi_i=\psi_j=1$, and this difference is
$O(n^{-3})$, from the infinitesimal commutation relation.
If $i$ or $j$ is a multiple of $n$, note that, if $x'=\phi_{s_i,t_{j-1}}(\gamma_{s_i})$, $y'=\phi_{s_{i-1},t_{j}}(\hat\gamma_{t_j})$,
$x''=\phi_{s_i,t_{j}}(\gamma_{s_i})$,
$y''=\phi_{s_{i},t_{j}}(\hat\gamma_{t_j})$, then:
$$x'=x''-\frac {2(t_j-t_{j-1})}{x''-y''}+O(n^{-2})$$
using the backward Loewner flow; one gets a similar expression for
$y'$, and these hold under $\mathbb{E}$ and $\mathbb{E}'$. So in this case the
difference is $O(n^{-2})$.
To get from $\sigma=(s_1,\dots,s_{mn},t_1,\dots,t_{mn})$ to
$\sigma'=(t_1,\dots,t_{mn},s_1,\dots,s_{mn})$, one needs $(mn)^2$
transpositions ($(mn)$ transpositions to bring $t_1$ in first
position, then $(mn)$ transpositions to bring $t_2$ in second
position, ...). For such a transposition $(s_i,t_j)$, $i$ or $j$ is a
multiple of $n$ in $m^2(2n-1)$ cases. This transposition is valid as
long as the paths stay in $D_1,D_2$ (more precisely, as long as the
$f_\delta$ terms are 1). Conversely, if a path is close to the
boundary of $D_1,D_2$, the functional is zero with probability close
to one. Hence:
$$(\mathbb{E}_1-\mathbb{E}_2)\left(\prod_k\varphi_k(X_{s_k})\psi_k(Y_{t_k}){\bf 1}_{E(n,N,\delta)}\right)=m^2(n-1)^2O(n^{-3})+m^2(2n-1)O(n^{-2})=O(C(N,\delta)/n)$$
where $X_s=\phi_{s,0}(\gamma_s)$, $Y_t=\phi_{0,t}(\hat\gamma_t)$, and
$E(n,N,\delta)$ is the event that none of the $f_\delta$'s vanishes at
a sampled time. The error term is uniform in $n$ but depends on
$N,\delta$.
As $n$ goes to infinity
($N$, $\delta$ being fixed), the
probability that the first SLE crosses $\partial D_1$ without $f_\delta$
vanishing at one of the sampled times $s_i$ goes to zero (since in
this case $f_\delta$ vanishes on an open set of times). So we can
assume that the SLEs stay in $D_1$, $D_2$, hence we have uniformity in
the $O(n^{-3})$ estimate of the commutation condition. The last case
to study is when the trace gets close to the boundary,
say $|x_{i-1}-z_j|<2\delta$ for some $i,j$, without actually crossing
it. The probability of this event goes to zero as $N$ goes to infinity
and $\delta$ goes to zero.
So the above estimate is valid up to an event of negligible
probability, viz. either an SLE crosses $\partial D_1$ or $\partial D_2$
without the functional vanishing or one of the $f_\delta$ is less than
one and yet the functional does not vanish.
Taking the limit as $n$ goes to infinity and
$N\nearrow\infty$, $\delta\searrow 0$ (so that
$C(N,\delta)/n\rightarrow 0$), one gets the stated
identity, that is:
$$\mathbb{E}_1\left(.{\bf 1}_{\tau_1>S,\tau_2>T}\right)=\mathbb{E}_2\left(.{\bf 1}_{\tau_1>S,\tau_2>T}
\right).$$
This concludes the proof of the lemma.
\end{proof}
\begin{proof}[Proof of the Proposition.]
Let $D_1$, $D_2$ be as in the lemma. We grow an $\SLE_\kappa(b)$ in $D_1$ until it
reaches $\partial D_1$, and then in the remaining domain an
$\SLE_{\tilde\kappa}(\tilde b)$ in $D_2$ until it
reaches $\partial D_2$. This defines a Loewner chain
$(K_{s,t})_{(s,t)\in{\mathcal T}}$. We will prove that this chain is an
$\SLE(\kappa,b,\tilde\kappa,\tilde b)$, i.e. it has the appropriate
Markov property.
Let $0=S_0<S_1<\cdots<S_k=\infty$ and $0=T_0<T_1<\cdots<T_k=\infty$ be sequences
of fixed times. Let $\sigma$ be a permutation of the symbols
$(S_1,\dots, S_k, T_1,\dots, T_k)$, which is
increasing for the partial order generated by $S_i<S_{i+1}$,
$T_i<T_{i+1}$. Let $(K^\sigma_{s,t})$ be the (random) Loewner chain obtained by
growing SLEs alternatively in $D_1$ and in $D_2$ according to
$\sigma$, stopping the SLEs when they reach $\partial D_1,\partial
$D_2$. For instance, if $\sigma=(S_1,T_1,T_2,S_2,\dots)$, one grows the first
$\SLE$ to half-plane capacity $2S_1$ (stopping it if it reaches $\partial
D_1$), then the second $\SLE$ to half-plane capacity $2T_1$, measured
in the original half-plane (stopping it if it reaches $\partial
D_2$), then again the second $\SLE$ to half-plane capacity $2T_2$
(stopping it \dots), then the first $\SLE$ to capacity $2S_2$, \dots.
To alleviate notations, we will use the convention that for a Loewner
chain $(\tilde K_{s,t})$, $\tilde K_{s,t}=\tilde K_{s\wedge S,t\wedge T}$, where $S$ is the
time at which $(\tilde K_{s,0})_s$ exits $D_1$ (resp. $T$ is the
time at which $(\tilde K_{0,t})_t$ exits $D_2$).
If $\sigma$ and $\sigma'$ differ by a single transposition,
i.e. $\sigma=(\sigma_1,S_{k_1},T_{k_2},\sigma_2)$,
$\sigma'=(\sigma_1,T_{k_2},S_{k_1},\sigma_2)$, then we can apply the
lemma with $\phi_0=\phi_{K^{\sigma_1}_{S_{k_1},T_{k_2}}}$, $D'_1=\phi_0(D_1)
$, $D'_2=\phi_0(D_2)$. This proves that we can couple
$(K^\sigma_{s,t})$ and $(K^{\sigma'}_{s,t})$. By induction, we can
couple (simultaneously) the chains $(K^\sigma_{s,t})$ for all
admissible permutations $\sigma$. By construction, for
$\sigma=(S_1,\dots S_k, T_1,\dots, T_k)$, $K^\sigma$ is distributed as $K$.
Now, for any $k_1,k_2\in\{0,\dots k\}$, one can consider the
permutation:
$$\sigma=(S_0,\dots,S_{k_1},T_0,\dots,T_{k_2},S_{k_1+1},\dots,S_k,T_{k_2+1},\dots
T_k).$$
The previous coupling proves the Markov property for the fixed
time $(S_{k_1},T_{k_2})$ (i.e.\ the chain
$$(g_{S_{k_1},T_{k_2}}(K_{S_{k_1}+s,T_{k_2}}\setminus K_{S_{k_1},T_{k_2}}))$$
is a stopped $\SLE_\kappa(b)$, and the same thing holds for the other $\SLE$).
This still holds for stopping times supported on
$\{(S_{k_1},T_{k_2}),k_1,k_2=0,\dots k\}$. Since the subdivisions
$S_0<\cdots< S_k$ and $T_0<\cdots< T_k$ were arbitrary, this also holds
for stopping times with finite support, and by a limiting argument for
all stopping times (as for the classical Markov property).
\end{proof}
\section{Classification of commuting SLEs}
We can now conclude the general study of pairs of commuting chordal SLEs in a
simply connected domain. In the upper half-plane $\H$, with $(n+2)$
marked points $(x,y,z_1,\dots,z_n)$ on the real line (and one marked
point $z_{n+1}$ at infinity), consider two
parameters $\kappa,\tilde\kappa$, and two smooth functions $b,\tilde b$ of the
configuration $(x,y,z_1,\dots,z_n)$,
translation invariant and homogeneous of degree $-1$. Let ${\mathcal L}$ be
the infinitesimal generator of the $\SLE_\kappa(b)$ growing at $x$
(driven by $(X_s)$, $(g_s)$ are the corresponding conformal
equivalences), and
${\mathcal M}$
the infinitesimal generator of the $\SLE_{\tilde\kappa}(\tilde b)$
growing at $y$ (driven by $(Y_t)$, $(\tilde g_t)$ are the corresponding conformal
equivalences). By a cocycle $C(\phi,z)$ we mean a function of the form:
$$C(\phi,z)=\prod_{1\leq i<j\leq n+1}\left(\phi'(z_i)\phi'(z_j)\left(\frac{z_j-z_i}{\phi(z_j)-\phi(z_i)}\right)^2\right)^{\nu_{ij}}\frac{f(\phi(z_1),\dots,\phi(z_n))}{f(z_1,\dots,z_n)}$$
where $f$ is a non-vanishing function (translation invariant and
homogeneous of degree 0). So $f$ is a conformally invariant function
of the marked points $(z_1,\dots,z_n,z_{n+1}=\infty)$ and can be seen
as a function on the residual (i.e. not involving the positions of
$x,y$) moduli space.
If $n\geq 3$, this decomposition is not
unique, as discussed in Section 4.
\begin{Thm}
The following assertions are equivalent.
\begin{enumerate}
\item There exists an $\SLE(\kappa,b,\tilde\kappa,\tilde b)$.
\item The infinitesimal generators satisfy the relation:
\begin{equation*}
\left[{\mathcal L},{\mathcal M}\right]=\frac{4}{(y-x)^2}\left({\mathcal M}-{\mathcal L}\right)
\end{equation*}
\item $\tilde\kappa=\kappa$ or $\tilde\kappa=16/\kappa$,
$b=\kappa\partial_x\psi/\psi$, $\tilde
b=\tilde\kappa\partial_y\psi/\psi$, where $\psi$ is
a non-vanishing solution of the system:
$$\left\{\begin{array}{l}
\displaystyle\frac{\kappa}
2\partial_{xx}\psi+\sum_i\frac{2\partial_i\psi}{z_i-x}+\frac{2\partial_y\psi}{y-x}+\left(\left(1-\frac
6{\tilde\kappa}\right)\frac 1{(y-x)^2}+h(x,z)\right)\psi=0\\
\displaystyle\frac{\tilde\kappa}2\partial_{yy}\psi+\sum_i\frac{2\partial_i\psi}{z_i-y}+\frac{2\partial_x\psi}{x-y}+\left(\left(1-\frac
6\kappa\right)\frac 1{(x-y)^2}+h(y,z)\right)\psi=0
\end{array}\right.$$
where
$$h(x,z)=\sum_i\frac{\mu_i}{(x-z_i)^2}+\sum_{i<j}\frac{\nu_{ij}}{(x-z_i)(x-z_j)}+\ell_x\log(f(z)),$$
the $\mu_i$, $\nu_{ij}$ are constant parameters, and $f$ is a function
on the residual moduli space.
\item $\tilde\kappa=\kappa$ or $\tilde\kappa=16/\kappa$,
and there is
a non-vanishing function $\psi$ and a cocycle $C$ such that if:
\begin{align*}
Z_s=&\psi(X_s,g_s(y),\dots,g_s(z_i),\dots)g'_s(y)^{\alpha_{\tilde\kappa}}C(g_s,z)\\
\tilde Z_t=&\psi(\tilde g_t(x),Y_t,\dots,\tilde g_t(z_i),\dots)\tilde
g'_t(x)^{\alpha_\kappa}
C(\tilde g_t,z)
\end{align*}
then $(Z_s)$ is a local martingale for chordal
$\SLE_\kappa(x\rightarrow\infty)$, $(\tilde Z_t)$ is a local martingale for chordal
$\SLE_{\tilde\kappa}(y\rightarrow\infty)$, and $\SLE_\kappa(b)$ is the
Girsanov transform of chordal $\SLE_\kappa$ by $(Z_s)$,
$\SLE_{\tilde\kappa}(\tilde b)$ is the
Girsanov transform of chordal $\SLE_{\tilde\kappa}$ by $(\tilde Z_t)$.
\end{enumerate}
\end{Thm}
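In the simplest instance of condition (iii) ($n=0$, $\tilde\kappa=\kappa$, $h\equiv 0$), translation invariance and homogeneity force $\psi(x,y)=(y-x)^\alpha$, and substituting this into the first equation of the system yields a quadratic in $\alpha$ whose roots $2/\kappa$ and $(\kappa-6)/\kappa$ give the drifts $b=\rho/(x-y)$ with $\rho=2$ or $\rho=\kappa-6$, the familiar commuting pairs. A sympy sketch of this substitution (writing $t=y-x$, so that $\partial_x=-d/dt$ and $\partial_y=d/dt$; illustrative only):

```python
import sympy as sp

t, kappa, alpha = sp.symbols('t kappa alpha', positive=True)  # t = y - x
psi = t ** alpha

# First equation of the system with n = 0, kappa~ = kappa, h = 0:
# (kappa/2) psi'' + 2 psi'/t + (1 - 6/kappa) psi/t^2 = 0.
lhs = (sp.Rational(1, 2) * kappa * sp.diff(psi, t, 2)
       + 2 * sp.diff(psi, t) / t
       + (1 - 6 / kappa) * psi / t ** 2)

# Dividing by t**(alpha - 2) leaves the quadratic
# (kappa/2) alpha (alpha - 1) + 2 alpha + 1 - 6/kappa in alpha.
quad = sp.simplify(sp.expand(lhs / t ** (alpha - 2)))
```

Both roots can be checked by direct substitution into `quad`.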
Note that there is no loss of generality in considering two (rather
than $m\geq 2$) commuting SLEs. Indeed, the only conditions that arise are
the pairwise conditions. It is also easy to see that the proofs for
local commutation can be adapted to $m$ $\SLE$s (though the notation
becomes quite heavy). Let us make explicit, say, condition (iii) in this
situation. On the real line, $(m+n)$ points $(y_1,\dots, y_m,z_1,\dots,
z_n)$ are marked, and we want to grow $m$ SLEs ($\SLE_{\kappa_i}(b_i)$,
$i=1,\dots, m$) at $y_1,\dots,y_m$. Then
$\{\kappa_1,\dots,\kappa_m\}\subset\{\kappa_1,16/\kappa_1\}$,
$b_i=\kappa_i\partial_{y_i}\psi/\psi$, where $\psi$ is annihilated by
the operators:
$$\frac{\kappa_i}
2\partial_{y_iy_i}+\sum_{j\neq i}\frac{2\partial_{y_j}}{y_j-y_i}+
\sum_{j}\frac{2\partial_{z_j}}{z_j-y_i}
-2\left(\sum_{j\neq i}\frac{\alpha_{\kappa_j}}{(y_j-y_i)^2}+\sum_j\frac{\mu'_j}{(z_j-y_i)^2}+\sum_{j<j'}\frac{\nu'_{jj'}}{(z_j-y_i)(z_{j'}-y_i)}+\ell_{y_i}\log(f(z))
\right)$$
for some parameters $\mu'_j$, $\nu'_{jj'}$ and some function $f$ on
the residual moduli space.
\section{Restriction formulae for non-intersecting SLEs}
In this section we specialize to a simple parametric case, where $n$
SLEs started from distinct points on the real line are aiming at
infinity; there are only $n$ marked points on the real line (and one
at infinity). Each of the $n$ SLEs is an
$\SLE_\kappa(\underline\rho)$, where $\underline\rho=(2,\dots 2)$.
In this situation, we can not only define locally an $n$-parameter
Loewner chain, but also define it globally if $\kappa\leq 4$. Indeed,
the only thing preventing a global definition is the possibility
of collisions of marked points. But such collisions a.s. don't happen
for these $\SLE_\kappa(2,\dots,2)$, so we can actually define a chain
with full time set $\mathbb{R}_+^n$.
If the $n$ starting points collapse to zero, we get $n$
``non-intersecting'' SLEs starting at $0$ and ending at
$\infty$. Restriction formulae are derived for these Loewner chains
(indexed by $\mathbb{R}_+^n$). This gives a simple realization of the
exponents $h_{1;n+1}(\kappa)$ (see also \cite{W2}).
The radial case ($n$ ``non-intersecting'' SLEs started from the
boundary and aiming at a single bulk point) is also studied, and
restriction formulae then give the exponent $2h_{0;n/2}(\kappa)$.
\subsection{The chordal case}
Let $y_1<\cdots<y_n$ be $n$ real points. Consider the infinitesimal generators:
$${\mathcal L}_i=\frac\kappa 2\partial_{ii}+\left(\sum_{j\neq i}\frac
{2}{y_i-y_j}\right)\partial_i+\sum_{j\neq i}\frac
2{y_j-y_i}\partial_j$$
Then, from the previous computations (parametric case), we see that the following
commutation relations are satisfied:
$$\left[{\mathcal L}_i,{\mathcal L}_j\right]=\frac{4}{(y_j-y_i)^2}\left({\mathcal
L}_j-{\mathcal L}_i\right).$$
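For $n=2$ this commutation relation can be verified symbolically. The following sympy sketch (illustrative only, not part of the argument) applies both compositions to an unspecified smooth function and checks that the difference matches the right-hand side:

```python
import sympy as sp

y1, y2, kappa = sp.symbols('y1 y2 kappa')
f = sp.Function('f')(y1, y2)

def L(i, expr):
    """Generator L_i of SLE_kappa(2) for n = 2 marked points y1, y2."""
    ys = [y1, y2]
    yi, yj = ys[i], ys[1 - i]
    return (sp.Rational(1, 2) * kappa * sp.diff(expr, yi, 2)
            + 2 / (yi - yj) * sp.diff(expr, yi)
            + 2 / (yj - yi) * sp.diff(expr, yj))

commutator = L(0, L(1, f)) - L(1, L(0, f))
rhs = 4 / (y2 - y1) ** 2 * (L(1, f) - L(0, f))
residual = sp.simplify(sp.expand(commutator - rhs))
```

Since the identity is an operator identity, it can also be tested on any concrete smooth function.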
As mentioned earlier, this ensures (if $\kappa\leq 4$) the
existence of a process $(K_{s_1,\dots,s_n})$ such that:\\
$(\phi_{K_{s_1,\dots,s_n}}(K_{s_1,\dots s_i+s, \dots s_n}))_{s\geq 0}$
is an $\SLE_\kappa(2,\dots 2)$ started from
$(\phi(\gamma^{i}_{s_i}))=(\phi(\gamma^{1}_{s_1}),\dots,\phi(\gamma^{n}_{s_n}))$,
independent from $(K_{t_1,\dots t_n})_{t_j\leq s_j}$, where
$\phi=\phi_{K_{s_1,\dots,s_n}}$, $K_{0,\dots, s_i,\dots
0}=\gamma^i_{[0,s_i]}$. Assume that this process is started from
$(y_1,\dots,y_n)$, i.e. $\gamma^i_0=y_i$, and define
$K_\infty=\bigcup_i\gamma^i_{[0,\infty]}\subset\overline\H$.
Consider now a hull $A\subset\overline\H$, that does not intersect
$\{y_1,\dots,y_n\}$.
Let $\lambda_\kappa=(6-\kappa)(8-3\kappa)/2\kappa$. If $\kappa\leq
8/3$, and $L$ is an independent random loop soup with intensity
$\lambda_\kappa$ in $\H$, define
$K_\infty^L$ to be (the filling of) the union of $K_\infty$ and the
loops in $L$ that intersect it.
Then, if $\kappa\leq 8/3$, the following restriction formula holds:
\begin{Lem} The probability that $K_\infty^L$ does not intersect
$A$ is given by:
$$\P(K_\infty^L \cap
A=\varnothing)=\prod_i\phi_A'(y_i)^{(6-\kappa)/2\kappa}\prod_{i<j}\left(\frac{\phi_A(y_j)-\phi_A(y_i)}{y_j-y_i}\right)^{2/\kappa}$$
\end{Lem}
\begin{proof}
Define ${\underline s}=(s_1,\dots s_n)$, $h_{\underline
s}=\phi_{\phi_{K_{\underline s}}(A)}$, and $Y^{(i)}_{\underline
s}=\phi_{K_{\underline s}}(\gamma^i_{s_i})$. Then, from the definition of
$K_{\underline s}$ and Lemma 4 in \cite{D4}, one sees that:
$$s_k\longmapsto \prod_i h'_{\underline s}(Y^{(i)}_{\underline
s})^{(6-\kappa)/2\kappa}\prod_{i<j}\left(\frac{h_{\underline s}(Y^{(j)}_{\underline s})-
h_{\underline
s}(Y^{(i)}_{\underline s})}{Y^{(j)}_{\underline s}-Y^{(i)}_{\underline s}}\right)^{2/\kappa}\exp\left(\lambda_\kappa\int_{0}^{s_k}\frac{Sh_{s_1,\dots
\sigma_k, \dots s_n}(Y^{(k)}_{s_1,\dots \sigma_k, \dots
s_n})}6d\sigma_k\right)$$
is a bounded martingale for all $k$, $s_1,\dots, s_n$. From the
properties of the Brownian loop soup, it appears that the following
semimartingale (proportional to the first one) is also a bounded
martingale:
$$s_k\longmapsto \prod_i h'_{\underline s}(Y^{(i)}_{\underline
s})^{(6-\kappa)/2\kappa}\prod_{i<j}\left(\frac{h_{\underline s}(Y^{(j)}_{\underline s})-
h_{\underline s}(Y^{(i)}_{\underline s})}{Y^{(j)}_{\underline s}-Y^{(i)}_{\underline s}}\right)^{2/\kappa}\P(K_{\underline
s}^L\cap A=\varnothing)$$
Hence, for all ${\underline s}=(s_1,\dots s_n)$, one gets (using $n$
different martingales):
$$\prod_i\phi_A'(y_i)^{(6-\kappa)/2\kappa}\prod_{i<j}\left(\frac{\phi_A(y_j)-\phi_A(y_i)}{y_j-y_i}\right)^{2/\kappa}=\mathbb{E}\left(\prod_i
h'_{\underline s}(Y^{(i)}_{\underline
s})^{(6-\kappa)/2\kappa}\prod_{i<j}\left(\frac{h_{\underline s}(Y^{(j)}_{\underline s})-
h_{\underline s}(Y^{(i)}_{\underline s})}{Y^{(j)}_{\underline s}-Y^{(i)}_{\underline s}}\right)^{2/\kappa}\P(K_{\underline
s}^L\cap A=\varnothing)\right)$$
Now, as $\inf\underline s$ goes to infinity, the quantity inside the
expectation on the right-hand side converges to ${\bf 1}_{K_\infty^L \cap A=\varnothing}$,
which concludes the proof.
\end{proof}
Define the conformal weight $h_{p;q}=h_{p;q}(\kappa)$ by:
$$h_{p;q}=\frac{(p\kappa-4q)^2-(\kappa-4)^2}{16\kappa}$$
Then, if $y_1,\dots y_n$ collapse to zero, the above formula reduces to:
$$\P(K_\infty^L \cap A=\varnothing)=\phi_A'(0)^{h_{1;n+1}}$$
The role of the conformal weights $h_{1;n+1}$ in the context of restriction
measures and $\SLE_\kappa(\rho)$ is discussed in \cite{W2}.
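As a consistency check: when the $y_i$ collapse, $\phi_A(y_j)-\phi_A(y_i)\approx\phi_A'(0)(y_j-y_i)$, so the total exponent of $\phi_A'(0)$ in the Lemma is $n(6-\kappa)/2\kappa+\binom n2\cdot 2/\kappa$, which must equal $h_{1;n+1}$; also $(6-\kappa)/2\kappa=h_{1;2}$. Both identities can be verified in exact arithmetic (illustrative Python sketch):

```python
from fractions import Fraction

def h(p, q, kappa):
    """Conformal weight h_{p;q}(kappa) = ((p k - 4 q)^2 - (k - 4)^2)/(16 k)."""
    return Fraction((p * kappa - 4 * q) ** 2 - (kappa - 4) ** 2, 16 * kappa)

def collapsed_exponent(n, kappa):
    """Total phi_A'(0)-exponent from the Lemma when y_1, ..., y_n collapse:
    n boundary factors (6 - kappa)/(2 kappa) and C(n, 2) cross factors 2/kappa."""
    return n * Fraction(6 - kappa, 2 * kappa) + (n * (n - 1) // 2) * Fraction(2, kappa)
```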
\begin{Cor}[Restriction property]
Let $\kappa\leq 8/3$, $L$ a loop soup with intensity $\lambda_\kappa$.
Conditionally on $\{K_\infty^L\cap A=\varnothing\}$
and up to a time change, $((\phi_A(K_{\underline s}))_{\underline
s},\phi_A(\{\delta\in L:\delta\cap A=\varnothing\}))$
is distributed as $((K_{\underline s})_{\underline s},L)$, where
$(K_{\underline s})$ is started from $\phi_A(y_1),\dots
\phi_A(y_n)$. In particular, the collection of measures on hulls
$K_\infty^L$ indexed by the starting configuration
$(\H,z_1,\dots,z_n,z_{n+1}=\infty)$ has the restriction property.
\end{Cor}
\begin{proof}
From the previous lemma, the result is a straightforward application
of the Girsanov theorem and the restriction property of the loop soup.
The assertion on the restriction property can be derived directly by
applying the previous formula to concatenation of hulls $A.B$.
\end{proof}
\subsection{The radial case}
Recall the definition of radial $\SLE_\kappa(\underline\rho)$:
assume the existence of processes $(\xi_t)_{t\geq 0}$ and
$(\chi^{(i)}_t)_{t\geq 0}$, $i\in\{1\dots k\}$, satisfying the SDEs:
\begin{equation*}
\left\{\begin{array}{l}d\xi_t=(i\xi_t\sqrt\kappa dB_t-\frac\kappa
2\xi_tdt)+\sum_{i=1}^k\frac{\rho_i}2\left(-\xi_t\frac{\xi_t+\chi^{(i)}_t}{\xi_t-\chi^{(i)}_t}\right)dt\\
d\chi^{(i)}_t=-\chi^{(i)}_t\frac{\chi^{(i)}_t+\xi_t}{\chi^{(i)}_t-\xi_t}dt\end{array}\right.
\end{equation*}
Then the ODEs $dg_t(z)=-g_t(z)(g_t(z)+\xi_t)/(g_t(z)-\xi_t)dt$ define
radial $\SLE_\kappa(\underline\rho)$ in the unit disk $\mathbb{U}$.
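A basic consequence of this normalization is that $g_t'(0)=e^t$: linearizing the ODE at the fixed point $0$ gives $\partial_tg_t'(0)=g_t'(0)$. A small Euler integration of the radial Loewner ODE with a constant driving point (purely illustrative, with arbitrary sample parameters) recovers this:

```python
import math

def radial_flow(z0, xi=1.0, t=0.5, steps=20000):
    """Euler scheme for dg/dt = -g (g + xi)/(g - xi), constant driving xi."""
    g, dt = complex(z0), t / steps
    for _ in range(steps):
        g += -g * (g + xi) / (g - xi) * dt
    return g

# For z0 near 0, the ratio g_t(z0)/z0 approximates g_t'(0) = e^t.
ratio = radial_flow(-1e-3) / (-1e-3)
```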
First, we briefly discuss commutation conditions in the radial
case. Suppose that $\chi_1,\dots,\chi_n$ are $n$ points on
the unit circle. One considers two SLEs growing at $\chi_1$ and
$\chi_n$ resp., assuming that the drift terms are functions of the
$\chi_i$. We think of functions annihilated by infinitesimal
generators as expected values of some events; it is quite natural to
express these real-valued functions in angular coordinates:
$\chi_j=\exp(i\theta_j)$. Reasoning as in the chordal case,
if ${\mathcal L}$ and ${\mathcal M}$ are the infinitesimal generators of the two
SLEs, the commutation condition reads:
$$[{\mathcal L},{\mathcal M}]=\frac 1{\sin((\theta_n-\theta_1)/2)^2}({\mathcal L}-{\mathcal M}).$$
The generator for a radial $\SLE_\kappa(\rho_1,\dots,\rho_{n-1})$
started from $(\chi_1,\dots,\chi_n)$ is:
$${\mathcal L}=\frac\kappa 2\partial_{11}+\sum_{i>1}\cot\left(\frac{\theta_i-\theta_1}2\right)\left(\partial_i-\frac{\rho_i}2\partial_1\right)$$
By analogy with the chordal case, one can find solutions for this
commutation relation:
\begin{enumerate}
\item
Two $\SLE_\kappa(\rho)$ started from $(\chi_1,\chi_2)$ and
$(\chi_2,\chi_1)$ resp., $\rho\in\{2,\kappa-6\}$.
\item $n$ $\SLE_\kappa(2,\dots,2)$ started from
$(\chi_i,\chi_1,\dots,\widehat{\chi_i},\dots\chi_n)$.
\end{enumerate}
Let us comment briefly on the case (i). For $\rho=\kappa-6$, this is
only chordal reversibility in a radial normalization (as may be seen
by slightly modifying the argument for chordal-radial equivalence when
$\kappa=6$). In the case $\rho=2$, this gives a model of ``pinned
chordal SLE'', i.e.\ chordal SLE ``conditionally'' on the trace visiting
a given bulk point, for $\kappa<8$. More precisely, start from a
chordal SLE in radial normalization (hence, up to a time change,
radial $\SLE_\kappa(\kappa-6)$). Then the first moment estimate in
\cite{B1} relies on the computation of the leading eigenfunction of the
associated infinitesimal generator. This yields a local martingale:
$$s\mapsto\sin\left(\frac{\theta^{(1)}_s-\theta^{(2)}_s}2\right)^{8/\kappa-1}e^{(1-\kappa/8)s}$$
corresponding to the probability that the SLE trace gets infinitely
close to the bulk point 0. Using this as a Girsanov density, one gets
a radial $\SLE_\kappa(2)$. Note that for $\kappa=8$, this density is
1, and $\kappa-6=2$.\\
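To see why this is a local martingale, pass to the angle difference $\theta_s=\theta^{(1)}_s-\theta^{(2)}_s$: the SDEs above give $d\theta_s=\sqrt\kappa\,dB_s+\frac{\kappa-4}2\cot(\theta_s/2)\,ds$ for radial $\SLE_\kappa(\kappa-6)$, and the claim reduces to the eigenfunction identity ${\mathcal L}\varphi=-(1-\kappa/8)\varphi$ for $\varphi(\theta)=\sin(\theta/2)^{8/\kappa-1}$. A sympy sketch checking this identity (illustrative only):

```python
import sympy as sp

theta, kappa = sp.symbols('theta kappa', positive=True)
phi = sp.sin(theta / 2) ** (8 / kappa - 1)

# Generator of the angle difference for radial SLE_kappa(kappa - 6):
# L = (kappa/2) d^2/dtheta^2 + ((kappa - 4)/2) cot(theta/2) d/dtheta.
Lphi = (sp.Rational(1, 2) * kappa * sp.diff(phi, theta, 2)
        + sp.Rational(1, 2) * (kappa - 4) * sp.cot(theta / 2) * sp.diff(phi, theta))

# The eigenfunction identity: L phi + (1 - kappa/8) phi should vanish.
residual = sp.simplify(Lphi / phi + (1 - kappa / 8))
```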
There are other examples with two boundary points. Consider a chordal
$\SLE_\kappa$ from $\chi_1$ to $\chi_2$ and condition it to leave $0$
on its left (resp. right); this can be made explicit (see
\cite{S1}). Once again, the drift terms are (generically in $\kappa$) transcendental.
In the case (ii), just as in the chordal case (at least if $\kappa\leq
4$), based on the infinitesimal commutation relations, one can define
an $n$-braid radial SLE. The question of such a definition, from a CFT
point of view, appears in \cite{Ca3,Ca3c}.
Note also that summing the $n$ generators here gives the generator of
Dyson's Brownian motion.
As above, we study the case (ii) from the restriction point of view.
First, we have to derive restriction formulae for radial $\SLE_\kappa(\underline\rho)$.
Let $A$ be a hull of $\overline \mathbb{U}$ (i.e. $A$ is a compact subset of $\overline \mathbb{U}$
with $A=\overline{(A\cap\mathbb{U})}$, $\mathbb{U}\setminus A$ is conformally equivalent
to $\mathbb{U}$, and $0\notin A$). For any such hull, denote by $\phi_A$ the
unique conformal equivalence $\mathbb{U}\setminus A\rightarrow \mathbb{U}$ such that
$\phi_A(0)=0$ and $\phi_A'(0)>0$. Suppose that $\xi_0,\chi^{(i)}_0$
are not in $A$. Then $h_t=\phi_{g_t(A)}$ is defined at least for small
times. Recall that
$\lambda_\kappa=(8-3\kappa)(6-\kappa)/2\kappa$. Then the following
result (analogous to Lemma 4 in \cite{D4} and generalizing a result
stated in \cite{LSW3}) holds:
\begin{Lem}
In this situation, define $\tilde\xi_t=h_t(\xi_t)$, $\tilde\chi^{(i)}_t=h_t(\chi^{(i)}_t)$:
\begin{align*}
M^\varnothing_t&=\left(\frac{h'_t(\xi_t)\xi_t}{\tilde\xi_t}\right)^{\frac{(6-\kappa)}{2\kappa}} h'_t(0)^{\frac{(6-\kappa)(\kappa-2)}{8\kappa}}\exp\left(-\lambda_\kappa\int_0^t\frac{\xi_s^2Sh_s(\xi_s)}6ds\right)\\
M^{(i)}_t&=\left(\frac{h'_t(\chi^{(i)}_t)\chi^{(i)}_t}{\tilde\chi^{(i)}_t}\right)^{\frac{\rho_i(\rho_i+4-\kappa)}{4\kappa}}\left(\frac{(\tilde\chi^{(i)}_t-\tilde\xi_t)^2}{\tilde\chi^{(i)}_t\tilde\xi_t}\frac{\chi^{(i)}_t\xi_t}{(\chi^{(i)}_t-\xi
_t)^2} \right)^{\frac{\rho_i}{2\kappa}}h'_t(0)^{\frac{\rho_i(\rho_i+4)}{8\kappa}}\\
M^{(i,j)}_t&=\left(\frac{(\tilde\chi^{(i)}_t-\tilde\chi^{(j)}_t)^2}{\tilde\chi^{(i)}_t\tilde\chi^{(j)}_t}\frac{\chi^{(i)}_t\chi^{(j)}_t}{(\chi^{(i)}_t-\chi^{(j)}_t)^2}
\right)^{\frac{\rho_i\rho_j}{4\kappa}}h'_t(0)^{\frac{\rho_i\rho_j}{4\kappa}}\\
\end{align*}
Note that all fractions are real numbers. Then
$(M^\varnothing\prod_iM^{(i)}\prod_{i<j}M^{(i,j)})$ is defined up to
some random positive time (possibly infinite), and is a local martingale.
\end{Lem}
\begin{proof}
This is a rather straightforward transposition to the radial case of results
and methods in \cite{LSW3}, which we discuss for the sake of
completeness. We use freely a complex-variable version of It\^o's formula.
Let $\tilde g_t=\phi_{\phi_A(K_t)}$, with the usual notations, so that
$h_t\circ g_t=\tilde g_t\circ \phi_A$. Then $(\tilde g_t)$ is a
time-changed radial Loewner chain:
$$\partial_t\tilde g_t=-\tilde g_t\frac{\tilde g_t+\tilde\xi_t}{\tilde
g_t-\tilde\xi_t}da_t$$
and $da_t=(h'_t(\xi_t)\xi_t/\tilde\xi_t)^2$. This is the key
``commutative diagram'' argument of \cite{LSW3}. Then, standard
differential calculus yields:
\begin{align*}
\partial_t h_t&=-h_t\frac{h_t+\tilde \xi_t}{h_t-\tilde
\xi_t}\partial_ta_t+h'_t.z\frac{z+\xi_t}{z-\xi_t}\\
[\partial_t
h_t](\xi_t)&=3\xi_th'_t(\xi_t)+3\xi_t^2h''_t(\xi_t)-3\frac{(\xi_th'_t(\xi_t))^2}{\tilde\xi_t}\\
\partial_th'_t&=-h'_t\frac{h_t+\tilde \xi_t}{h_t-\tilde
\xi_t}\partial_ta_t+h_th'_t\frac{2\tilde\xi_t}{(h_t-\tilde\xi_t)^2}\partial_ta_t+h_t''.z\frac{z+\xi_t}{z-\xi_t}+h'_t.\left(\frac{z+\xi_t}{z-\xi_t}-\frac{2z\xi_t}{(z-\xi_t)^2}\right)\\
[\partial_th'_t](\xi_t)&=\left(\frac 43\xi_t^2h_t'''-\frac{(\xi_th''_t)^2}{2h'_t}+3\xi_th''_t-\frac{\xi_t^2(h'_t)^3}{\tilde\xi_t^2}+h'_t\right)(\xi_t)\\
\partial_th'_t(0)&=h'_t(0)(\partial_ta_t-1)
\end{align*}
Relax the assumptions for the moment, and assume that $\xi$ satisfies
the SDE:
$$d\xi_t=i\xi_t\sqrt\kappa dB_t-\frac\kappa2\xi_tdt+\xi_tb_tdt$$
where $B$ is a standard Brownian motion and $b_t$ is a (progressive) drift coefficient.
Define:
\begin{align*}
\sigma_t&=\frac{\xi_th''_t(\xi_t)}{h'_t(\xi_t)}+1-\frac{\xi_th'_t(\xi_t)}{\tilde\xi_t}\\
\sigma^{(i)}_t&=\frac{\chi^{(i)}_t+\xi_t}{\chi^{(i)}_t-\xi_t}-\frac{\tilde\chi^{(i)}_t+\tilde\xi_t}{\tilde\chi^{(i)}_t-\tilde\xi_t}\frac{\xi_th'_t(\xi_t)}{\tilde\xi_t}
\end{align*}
Then:
\begin{align*}
\frac
{dM^\varnothing_t}{M^\varnothing_t}&=\frac{6-\kappa}{2\kappa}\sigma_t\left(i\sqrt\kappa
dB_t+b_tdt\right)\\
\left(\frac{\rho_i}{2\kappa}\right)^{-1}\frac{dM^{(i)}_t}{M^{(i)}_t}&=
\sigma^{(i)}_t\left(i\sqrt\kappa
dB_t+b_tdt+\frac{\rho_i}{2}\frac{\xi_t+\chi^{(i)}_t}{\xi_t-\chi^{(i)}_t}dt\right)-\frac{(6-\kappa)}{2}\sigma_t\frac{\tilde\chi^{(i)}_t+\tilde\xi_t}{\tilde\chi^{(i)}_t-\tilde\xi_t}\frac{\xi_th'_t(\xi_t)}{\tilde\xi_t}dt\\
\left(\frac{\rho_i\rho_j}{4\kappa}\right)^{-1}\frac{\partial_tM^{(i,j)}_t}{M^{(i,j)}_t}&=
\sigma^{(i)}_t\sigma^{(j)}_t+\frac{\xi_t+\chi^{(i)}_t}{\xi_t-\chi^{(i)}_t}\sigma^{(j)}_t+\frac{\xi_t+\chi^{(j)}_t}{\xi_t-\chi^{(j)}_t}\sigma^{(i)}_t
\end{align*}
Given that
$b_t=-\sum_i\frac{\rho_i}2\frac{\xi_t+\chi^{(i)}_t}{\xi_t-\chi^{(i)}_t}$,
if $N$ denotes $(M^\varnothing\prod_iM^{(i)}\prod_{i<j}M^{(i,j)})$, one gets:
\begin{align*}
\frac{dN_t}{N_t}&=\frac{dM^\varnothing_t}{M^\varnothing_t}+\sum_i\left(\frac{dM^{(i)}_t}{M^{(i)}_t}+\frac{d\langle
M^\varnothing_t,M^{(i)}_t\rangle}{M^\varnothing_tM^{(i)}_t}\right)
+\sum_{i<j}\left(\frac{dM^{(i,j)}_t}{M^{(i,j)}_t}+\frac{d\langle
M^{(i)}_t,M^{(j)}_t\rangle}{M^{(i)}_tM^{(j)}_t}\right)\\
&=\sigma'_tdB_t+\frac{6-\kappa}{2\kappa}b_t\sigma_t+\sum_i\left(\frac{6-\kappa}{2\kappa}\sigma_t\frac{\rho_i}2\frac{\xi_t+\chi^{(i)}_t}{\xi_t-\chi^{(i)}_t}+\frac{\rho_i}{2\kappa}\sigma^{(i)}_t\left(b_t+\frac{\rho_i}{2}\frac{\xi_t+\chi^{(i)}_t}{\xi_t-\chi^{(i)}_t}\right)\right)dt\\
&\hphantom{=}+\sum_{i<j}\frac{\rho_i\rho_j}{4\kappa}\left(\frac{\xi_t+\chi^{(i)}_t}{\xi_t-\chi^{(i)}_t}\sigma^{(j)}_t+\frac{\xi_t+\chi^{(j)}_t}{\xi_t-\chi^{(j)}_t}\sigma^{(i)}_t\right)dt\\
&=\sigma'_tdB_t
\end{align*}
\end{proof}
As in the chordal case, the symmetry of these formulae when
$\underline\rho=(2,\dots,2)$ enables one to derive restriction formulae
for $n$-braid SLEs.
\begin{Lem}
Let $(K.)$ be a radial $n$-braid SLE in $\mathbb{U}$ started from distinct points
$\chi_1,\dots,\chi_n$, and $A$ be a hull not intersecting these
points. If $\kappa\leq 8/3$, and $L$ is an independent loop soup with
intensity $\lambda_\kappa$, then:
$$\P(K_\infty^L \cap
A=\varnothing)=|\phi_A'(0)|^{2h_{0;n/2}}\prod_i|\phi_A'(\chi_i)|^{h_{1;2}}\prod_{i<j}\left|\frac{\phi_A(\chi_j)-\phi_A(\chi_i)}{\chi_j-\chi_i}\right|^{2/\kappa}$$
\end{Lem}
If the starting points $\chi_1,\dots,\chi_n$ collapse to $\chi$, the above formula reduces to:
$$|\phi_A'(\chi)|^{h_{1;n+1}}|\phi_A'(0)|^{2h_{0;n/2}}$$
\begin{Cor}[Restriction property]
Let $\chi_1,\dots,\chi_n$ and $K_\infty^L$ be as above. This defines a family of
probability measures $\mu_{\kappa,\underline\chi}$ on ``sea star''
hulls $K'=K_\infty^L$.
This family has the
restriction property:\\
for every hull $A$, $(\phi_A)_*\mu_{\kappa,\underline\chi}(.|K'\cap A=\varnothing)=\mu_{\kappa,(\phi_A)_*(\underline\chi)}$.
\end{Cor}
\begin{proof}
As in the chordal case.
\end{proof}
\section{Multiply connected domains}
One can think of SLE as a diffusion in a configuration space. The
diffusion coefficients are constrained by the conformal invariance
requirement. If the associated moduli space is a point, then the
coefficients are constant parameters; this situation corresponds to
chordal and radial SLE, and $\SLE(\kappa,\rho)$ (and also ``annulus
SLE''). If the moduli space
is larger, then SLE is essentially specified by the data of diffusion
coefficients as functions on the moduli space; so we are no longer in
the parametric situation. We only discuss the ``constant $\kappa$''
case, for physical and technical reasons. Also, we will be mainly
interested in expressing necessary conditions for reversibility in
multiply connected domains, so we will not carry the discussion in the
same degree of generality as in the simply connected case.
In the case of a simply connected domain with $2n$ points marked on
the boundary, $\SLE$ is specified by $\kappa$ and a function of $(2n-3)$
independent cross-ratios of the $2n$ boundary points. If we add the
requirement that the $\SLE$ commutes with $(2n-1)$ $\SLE$s started at the
other points, then we have to choose a ``partition function'' $\psi$
as discussed earlier. This function belongs to the finite-dimensional solution
space of a holonomic system derived from the commutation conditions,
and the situation is parametric again ($1+C(2n,n)$ parameters).
Similarly, in the case of multiply connected domains, we want to
restrict the diffusion coefficients to the ``physically relevant''
ones.
We consider in particular the case of chordal SLE (going from $x$ to
$y$, $x$ and $y$ on the same boundary component) in a multiply
connected domain.
There are at least two ways to describe SLE in a multiply connected
domain. The first one, that follows closely the simply connected case,
consists in choosing a parametric family of standard domains (a section of the
moduli space), and writing explicit diffusion equations for the
parameters; this is the approach of \cite{D3,BauFr}. Another route,
following Makarov and Zhan (see \cite{DZT}), consists in using a local chart at
the growth point and a ``conformally invariant SDE'', so that the path
distribution does not depend on the choice of local chart. In the first case,
the diffusion coefficient is a function on the moduli space; in the
other case, SLE is specified by a ``partition function'', which is a
conformally covariant function on the configuration space; taking its
log derivative (w.r.t. the growth point), one gets a function on the
moduli space. We will use this second framework, that better suits our
purposes.
So let ${\mathfrak C}$ be the configuration space of $(g+1)$-connected
plane domains with $(m+2)$ points marked on the boundary and $n$ points
marked in the bulk. Two of the marked points, $x$ and $y$, are on the same
component of the boundary. Denote by ${\mathfrak M}$ the associated moduli
space.
First we briefly summarize the local chart approach. Any configuration
is equivalent to a configuration of type $(\H\setminus K,x,\dots)$ where $x$ is
real and $K$ is a compact subset of $\H$ (with $g$ connected components). By conformal invariance, we need
only define SLE for these configurations, and we need to do it
coherently (independently of choices). Let $h$ be a conformal
equivalence between $c=(\H\setminus K,x,y=\infty,\dots)$ and
$h_*c=(\H\setminus K',x',y'=\infty,\dots)$. SLE in
$c$ is defined by the chordal Loewner equations and an SDE:
$$\partial_t g_t=\frac 2{g_t-W_t},\qquad dW_t=\sqrt\kappa\, dB_t+b(c_t)\,dt$$
From \cite{LSW3}, we can write the SDE for the driving process of
the image of the SLE by $h_0=h$:
$$dh_t(W_t)=h'_t(W_t)dW_t+\left(\frac\kappa 2-3\right)h''_t(W_t)dt$$
where $h_t=\tilde g_t\circ h\circ g_t^{-1}$, and $\tilde g_t$ also
solves the chordal Loewner equations (though with a time
change). After a time change, and at time 0, one sees that the
condition necessary for invariance of the SDE is the following
covariance condition:
$$b(h_*c)=\frac{b(c)}{h'(x)}+\left(\frac\kappa 2-3\right)\frac{h''}{h'^2}(x).$$
Here $h$ is normalized by $h(\infty)=\infty$, $h'(\infty)=1$
(hydrodynamic normalization at infinity).
Let $\psi$ be a positive function on the configuration space. We say
that $\psi$ is $\alpha$-covariant if:
\begin{enumerate}
\item (M\"obius invariance) For any $c=(D\setminus K,\dots)$, $c'=(D'\setminus K',\dots)$
where $D,D'$ are simply connected, $h:c\rightarrow c'=h_*c$ is an
equivalence of configurations, and $h$ extends to a conformal
equivalence $D\rightarrow D'$, one has:
$$\psi(h_*c)=\psi(c)$$
\item (covariance) For any $c=(D\setminus K,x,y,\dots)$, $c'=(D\setminus K',x,y,\dots)$
where $D$ is simply connected, $x,y\in\partial D$, $\partial D$ is
smooth at $x,y$, and $h:c\rightarrow c'=h_*c$ is an
equivalence of configurations, one has:
$$\psi(h_*c)=(h'(x)h'(y))^{-\alpha}\psi(c)$$
\end{enumerate}
Note that the function
$\psi$ is completely determined by these conditions and its restriction
to a section of the moduli space. Let us give three (important) examples
of such covariant functions, say for annuli with two marked points on one
component of the boundary: $c=(D,x,y)$.
\begin{enumerate}
\item
In $c=(D,x,y)$, assume that the arc $(xy)$
is blue and $(yx)$ is yellow. Let $\psi(c)$ be the probability that
$(xy)$ and $(yx)$ are connected to the other boundary component by a
blue (resp. yellow) cluster in the scaling limit of critical
percolation. Alternatively, $\psi(c)$ is the corresponding $\SLE_6$
probability (see \cite{D3}). Then $\psi$ is $0$-covariant. (Also,
$\psi=1$ is $0$-covariant; this is a version of locality for $\SLE_6$).
\item
In his thesis, Beffara uses the results of \cite{LSW3} and an
inclusion-exclusion argument to prove the following: let $\psi(c)$ be
the probability that chordal $\SLE_{8/3}$ from $x$ to $y$ in the
(filled) domain avoids the hole (resp. leaves the hole on its left,
resp. leaves the hole on its right). Then $\psi$ is $5/8$-covariant.
\item
If $\kappa=2$, $\alpha_\kappa=1$, and $\SLE_2$ is the scaling
limit of Loop-Erased Random Walks (\cite{LSW2,DZ}); these walks are
closely related to some discrete harmonic quantities. Define:
$$\psi(c)=\frac{\partial^2}{\partial n_x\partial n_y} G_D(x,y)$$
the normal derivative at $x$ and $y$ of the Green kernel (which is
symmetric in the two variables). Then the invariance property of the
Green kernel implies that $\psi$ is $1$-covariant. In general domains,
this is the (chordal version of) Harmonic Random Loewner Chain (HRLC)
as defined by
Zhan in \cite{DZT}. Similar harmonic constructions exist for
$\kappa=8$, $\alpha_\kappa=-1/8$, using normal reflection on some
boundary components.
\end{enumerate}
If $c=(\H\setminus K,x,y=\infty,\dots)$, $h_*c=(\H\setminus
K',x',y'=\infty,\dots)$ are equivalent configurations, $h'(\infty)=1$,
and we define $b=\kappa\partial_x\psi/\psi$, we get:
$$b(h_*c)=\frac\kappa{h'(x)}\frac{\partial_x(h'(x)^{-\alpha}\psi(c))}{h'(x)^{-\alpha}\psi(c)}=\frac{b(c)}{h'(x)}-\kappa\alpha\frac{h''}{h'^2}(x)$$
where $c$ is implicitly a function of $x$ (everything else being fixed).
So if $\alpha=\alpha_\kappa=h_{1;2}(\kappa)=(6-\kappa)/2\kappa$, one
can define an SLE starting from the $\alpha$-covariant partition function
$\psi$, at least up to some positive stopping time. We denote this
by $\SLE_\kappa(\psi)$; we will not consider here the (difficult)
questions of long-time behaviour.
\subsection{Commutation conditions}
Now assume we are given two $\alpha_\kappa$-covariant functions
$\psi_1$, $\psi_2$ on the configuration space, and use
them to define SLEs starting at $x$ and $y$ respectively.
In a configuration $c=(\H\setminus K,x,y,\dots)$, one grows $\SLE$s at
$x$ and $y$ up to capacity $\varepsilon$, $\varepsilon'$ (seen from infinity in
$\H$; this choice is also arbitrary) and considers the effect on functions of
$x$ and $y$ (after erasing hulls using chordal SLE). As earlier, this
leads to the commutation conditions on $\psi_1$,
$\psi_2$. To make the argument neater, we compute on a section of the
moduli space, as in \cite{D3,BauFr}. As before, there is a marked
point on the boundary used for normalization.
In fact, in this chordal setup,
it is more convenient not to quotient by the automorphisms that fix this point;
the whole construction will then commute with scalings and translations.
For definiteness, consider the following type of configurations: the
upper half-plane $\H$ minus horizontal slits with appropriate marked
points (including $\infty$). The only equivalences between such
configurations are given by scaling and translation. Let ${\mathfrak D}$ be
this family.
Consider an element $D_0=(H_0,x,y,\dots)\in {\mathfrak D}$, that is $H_0$ is
the half-plane minus some horizontal slits. We grow an
$\SLE_\kappa(\psi_1)$ at $x$ up to half-plane capacity $\varepsilon$, then an
$\SLE_{\kappa}(\psi_2)$ at $y$ (in the remaining domain) stopped when it
reaches capacity $\varepsilon'$ (in the original domain). We then reverse the
order of the procedure and look for necessary conditions for these two procedures
to yield the same distribution (of configuration). In particular, we
compare the effect on moduli, up to second order in $\varepsilon$.
So consider a Loewner chain $(K_t)$ growing at $x$. Let $g_t$ be a conformal
equivalence $\H\setminus K_t\rightarrow \H$, and $f_t$ a conformal
equivalence $D_0\setminus K_t\rightarrow D_t$ for some $D_t\in{\mathfrak
D}$. Everything is uniquely defined if we impose hydrodynamic
normalization at infinity for $g_t,f_t$. Then let $h_t=f_t\circ g_t^{-1}$. By
construction, $g_t$ solves the Loewner equations:
$$\partial_t g_t=\frac 2{g_t-X_t}, dX_t=\sqrt\kappa
dB_t+\kappa\frac{\partial_x\psi_1(H_t)}{\psi_1(H_t)}dt$$
where $\psi_1$ is evaluated at the configuration $H_t=(g_t(H_0\setminus
K_t),X_t,\dots)$. Now consider $(\partial_t f_t)\circ f_t^{-1}$. This is a meromorphic
function on $D_t$, taking real values on $\mathbb{R}$, vanishing at $\infty$,
regular except at $x$ where it has a simple pole, and with constant
imaginary part on the slits. It is easy to see that for each $D\in
{\mathfrak D}$, there is a unique function $V_D$ (Schwarz kernel) satisfying these conditions
and with residue $2$ at $x$. Also define $A_D,B_D$ by:
$$V_D(w)=\frac 2{w-x}+A_D+B_D(w-x)+O((w-x)^2)$$
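As a degenerate sanity check (ours, not part of the argument): when there are no slits, the standard domain is $\H$ itself, and the kernel is explicit.

```latex
% With no slits, the unique function that is meromorphic on \H, real on
% \mathbb{R}, vanishing at infinity, with a simple pole of residue 2 at x, is
$$V_\H(w)=\frac 2{w-x},\qquad A_\H=B_\H=0,$$
```

and the subsequent formulas reduce to the simply connected computations of \cite{LSW3}.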
Then:
\begin{align*}
\partial_t f_t&=h'_t(X_t)^2 V_{D_t}\circ f_t\\
\partial_t h_t(w)&=\partial_t (f_t\circ g_t^{-1})(w)=h'_t(X_t)^2 V_{D_t}\circ h_t(w)-\frac{2h'_t(w)}{w-X_t}\\
\partial_t h'_t(w)&=h'_t(X_t)^2.(h'_t.V'_{D_t}\circ h_t)(w)-\frac{2h''_t(w)}{w-X_t}+\frac{2h'_t(w)}{(w-X_t)^2}.
\end{align*}
Taking limits at $w=X_t$, one gets:
\begin{align*}
(\partial_t h_t)(X_t)&=h'_t(X_t)^2A_{D_t}-3h''_t(X_t)\\
(\partial_t h'_t)(X_t)&=h'_t(X_t)^3B_{D_t}+\left(\frac {h''^2_t}{2h'_t}-\frac{4h'''_t}3\right)(X_t)
\end{align*}
which gives in particular
\begin{align*}
dh_t(X_t)&=h'_t(X_t)dX_t+\left(\frac\kappa
2-3\right)h''_t(X_t)dt+h'_t(X_t)^2A_{D_t}dt\\
dh'_t(X_t)&=h''_t(X_t)dX_t+\left(\frac
{h''_t(X_t)^2}{2h'_t(X_t)}+\left(\frac\kappa 2-\frac 43\right)h'''_t(X_t)\right)dt+h'_t(X_t)^3B_{D_t}dt
\end{align*}
as in \cite{LSW3} (and \cite{D3} for the case of annuli, using the
explicit Villat kernel). Also:
$$\partial_t f'_t(y)=h'_t(X_t)^2. (f'_t. V'_{D_t}\circ f_t)(y).$$
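The step from the time derivatives to the It\^o differentials above is It\^o's formula with $d\langle X\rangle_t=\kappa\,dt$; for the first one, for instance:

```latex
$$dh_t(X_t)=(\partial_t h_t)(X_t)\,dt+h'_t(X_t)\,dX_t+\frac\kappa 2\,h''_t(X_t)\,dt,$$
% substituting (\partial_t h_t)(X_t)=h'_t(X_t)^2A_{D_t}-3h''_t(X_t) collects the
% drift into (\kappa/2-3)h''_t(X_t)\,dt+h'_t(X_t)^2A_{D_t}\,dt, as displayed.
```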
Now ${\mathfrak D}$ can be parametrized by a list of real and complex numbers
$(x,y,z_1,\dots,z_m)$: the marked points on $\mathbb{R}$, the other marked
points (including the endpoints of the horizontal slits) and their
conjugates. Then the infinitesimal generator at $D$ is:
$${\mathcal L}_1=\frac\kappa 2\partial_{xx}
+\left(A_D+\kappa\frac{\partial_x\psi_1}{\psi_1}\right)\partial_x
+V_D(y)\partial_y+\sum_i V_D(z_i)\partial_{z_i}$$
with $V_D=V_{(x,y,z_1,\dots)}$, and $\psi_1$ evaluated at $D$; the
restriction
of $\psi_1$ to ${\mathfrak D}$ can also be seen as a function of $(x,y,z_1,\dots)$.
The generator ${\mathcal L}_2$ associated with the $\SLE_{\kappa}(\psi_2)$
growing at $y$ is derived in a similar fashion, and we get the
commutation condition:
$$[{\mathcal L}_1,{\mathcal L}_2]=2V'_y(x){\mathcal L_1}-2V'_x(y){\mathcal L_2}$$
where $V_x(z)=2/(z-x)+\cdots$ and $V_y(z)=2/(z-y)+\cdots$ are the
Schwarz kernels with poles at $x$ and $y$ respectively (and depend
implicitly on the other moduli). We will use the notation:
$$\ell_x=V_x(y)\partial_y+\sum_i V_x(z_i)\partial_{z_i},\qquad
\ell_y=V_y(x)\partial_x+\sum_i V_y(z_i)\partial_{z_i}$$
for the part that does not involve $\kappa$, $\psi_1$, $\psi_2$.
Expanding the commutation condition, we get the equations:
\begin{equation}\label{cmu1}
\partial_{xy}\log(\psi_1/\psi_2)=0
\end{equation}
and:
\begin{equation}\label{cmu2}
\begin{split}
-{\mathcal L_2}\left(A_x+\kappa\frac{\partial_x\psi_1}{\psi_1}\right)+{\mathcal
L}_1(V_y(x))+2V'_x(y)V_y(x)-2V'_y(x)\left(A_x+\kappa\frac{\partial_x\psi_1}{\psi_1}\right)&=0\\
-{\mathcal L_1}\left(A_y+\kappa\frac{\partial_y\psi_2}{\psi_2}\right)+{\mathcal
L}_2(V_x(y))+2V'_y(x)V_x(y)-2V'_x(y)\left(A_y+\kappa\frac{\partial_y\psi_2}{\psi_2}\right)&=0
\end{split}\end{equation}
All other
conditions are identities not involving $\psi_1$, $\psi_2$:
\begin{equation}\label{cmu3}
(\ell_x+2V'_x(y))V_y(z)=(\ell_y+2V'_y(x))V_x(z)
\end{equation}
For this, note that $V_y(z)$ does not depend on $x$ or the other marked
points, but does depend on the endpoints of the horizontal slits. Considering the
difference between left-hand side and right-hand side as a function of $z$, in particular its
expansion at $x$, $y$, one sees that it extends to a
bounded holomorphic function on the Schottky double of $D$ and
vanishes at infinity, hence is identically 0. The probabilistic
interpretation of the Schwarz kernel $V$ given in \cite{Law2} can also
be used to prove this identity, along the lines of Lemma 14 (ii) below.
Consider the conditions (\ref{cmu1}),(\ref{cmu2}).
From (\ref{cmu1}), one can write $\psi_1=\psi_2=\psi$, after multiplying
$\psi_1$ by a function that does not depend on $x$ (so that the
definition of $\SLE_\kappa(\psi_1)$ is not affected) and $\psi_2$ by a
function that does not depend on $y$. One can write:
\begin{align*}
{\mathcal
L}_1\left(\frac{\partial_y\psi}{\psi}\right)-\partial_y\left(\frac{{\mathcal
L}_1\psi}{\psi}\right)
&=\frac{[{\mathcal L}_1,\partial_y]\psi}{\psi}+\kappa\frac{\partial_y\psi(\partial_x\psi)^2}{\psi^3}-\kappa\frac{\partial_x\psi\partial_{xy}\psi}{\psi^2}\\
&=-V'_x(y)\partial_y\log(\psi)-\kappa\partial_{xy}\log(\psi)\partial_x\log(\psi)+\kappa\partial_y\log(\psi)(\partial_x\log(\psi))^2-\kappa\frac{\partial_x\psi\partial_{xy}\psi}{\psi^2}\\
&=-V'_x(y)\partial_y\log(\psi)-\kappa\partial_y\left(\partial_x\log\psi\right)^2
\end{align*}
Let $\check{\mathcal L}_1$ be the differential operator obtained by setting
$\psi_1=1$ in ${\mathcal L}_1$:
$$\check{\mathcal L}_1=\frac\kappa 2\partial_{xx}
+A_x\partial_x
+V_D(y)\partial_y+\sum_i V_D(z_i)\partial_{z_i}=\frac\kappa 2\partial_{xx}
+A_x\partial_x
+\ell_x$$
and $\check{\mathcal L}_2$ is defined in the same fashion. Then (\ref{cmu2})
can be written as:
$$\kappa\partial_y\left(\frac{\check{\mathcal L}_1\psi}\psi\right)=\check{\mathcal L}_2(V_x(y))-\check{\mathcal L}_1(A_y)+2V'_y(x)V_x(y)-2A_yV'_x(y)$$
and similarly for the other condition. Consider now (\ref{cmu3}),
applied to a marked point $z=y+\varepsilon$:
\begin{eqnarray*}
0&=&V_x(y)\partial_yV_y(z)+V_x(z)\partial_zV_y(z)+\sum_iV_x(z_i)\partial_{z_i}V_y(z)-V_y(x)\partial_xV_x(z)-V_y(z)\partial_zV_x(z)-\sum_iV_y(z_i)\partial_{z_i}V_x(z)\\
&&+2V'_x(y)V_y(z)-2V'_y(x)V_x(z)\\
&=&V_x(y)\left(\frac
2{\varepsilon^2}+\partial_yA_y-B_y+(\partial_yB_y-2C_y)\varepsilon\right)+\left(V_x(y)+\varepsilon
V'_x(y)+\frac{\varepsilon^2}2V''_x(y)+\frac{\varepsilon^3}6V'''_x(y)\right)\left(-\frac
2{\varepsilon^2}+B_y+2C_y\varepsilon\right)\\
&&+\sum_iV_x(z_i)\partial_{z_i}(A_y+\varepsilon B_y)
-V_y(x)\partial_xV_x(y)-V_y(x)\partial_xV'_x(y)\varepsilon-\left(\frac
2\varepsilon+A_y+B_y\varepsilon\right)\left(V'_x(y)+\varepsilon V''_x(y)+\frac{\varepsilon^2}2
V'''_x(y)\right)\\
&&-\sum_iV_y(z_i)\partial_{z_i}(V_x(y)+\varepsilon V'_x(y))
+2V'_x(y)\left(\frac
2\varepsilon+A_y+B_y\varepsilon\right)-2V'_y(x)V_x(y)-2V'_y(x)V'_x(y)\varepsilon+O(\varepsilon^2)
\end{eqnarray*}
where $C_y$ is such that
$V_y(z)=2/\varepsilon+A_y+B_y\varepsilon+C_y\varepsilon^2+O(\varepsilon^3)$.
Considering the coefficients of $\varepsilon^0$, $\varepsilon^1$, it follows that:
\begin{equation}\label{cmu4}
(\ell_x+2V'_x(y))(A_y)-(\check{\mathcal
L}_2+2V'_y(x))(V_x(y))+\left(\frac{\kappa} 2-3\right)V_x''(y)
=0
\end{equation}
and
\begin{equation}\label{cmu5}
(\ell_x+2V'_x(y))(B_y)-(\check{\mathcal
L}_2+2V'_y(x))(V'_x(y))+\left(\frac{\kappa} 2-\frac 43\right)V_x'''(y)
=0
\end{equation}
Of course one can exchange the roles of $x$ and $y$ in these
identities. So (\ref{cmu2}) can be written as:
$$\kappa\partial_y\left(\frac{\check{\mathcal L}_1\psi}\psi\right)=
\left(\frac{\kappa} 2-3\right)V_x''(y)$$
or (with the symmetric condition):
$$\partial_y\left(\frac{\left(\check{\mathcal L}_1+\alpha_{\kappa} V'_x(y)\right)\psi}\psi\right)=\partial_x\left(\frac{\left(\check{\mathcal L}_2+\alpha_\kappa V'_y(x)\right)\psi}\psi\right)=0.$$
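For the record, the equivalence between the last two displays is a direct check, using $\kappa\alpha_\kappa=(6-\kappa)/2$ and the fact that $V_x$ does not depend on $y$:

```latex
$$\partial_y\left(\frac{\left(\check{\mathcal L}_1+\alpha_\kappa V'_x(y)\right)\psi}\psi\right)
=\partial_y\left(\frac{\check{\mathcal L}_1\psi}\psi\right)+\alpha_\kappa V''_x(y)
=\frac 1\kappa\left(\frac\kappa 2-3\right)V''_x(y)+\alpha_\kappa V''_x(y)=0,$$
```

since $(\kappa/2-3)/\kappa=-(6-\kappa)/2\kappa=-\alpha_\kappa$.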
\subsection{Restriction martingales}
Restriction-like martingales in multiply-connected domains involving
harmonic invariants are studied in
\cite{Law2,DZT}, in particular when $\kappa=2$. When $\kappa\neq 2$,
these ``harmonic'' martingales are distinct from those we are
discussing in these sections, though methods are fairly similar.
Let $D$ be a subdomain of $\H$ (as we may freely assume if we want
to define M\"obius invariant distributions); the boundary of $D$
contains open real segments around $x,y\in\mathbb{R}$.
Define $\Gamma(D,x)=\mu^{bub}_x(\{\delta\not\subset D\})$, where
$\mu^{bub}_x$ is the bubble measure rooted at $x$ (see \cite{LW}). Then:
$$\Gamma(D,x)=\partial_{n_y}\left(M_\H(y,x)- M_D(y,x)\right)_{|y=x}$$
where $M_D$ denotes the minimal function with singularity at $x$,
i.e. the positive harmonic function in $D$ that extends continuously to 0 on
the boundary except at $x$, with normalization
$M_D(x+i\varepsilon,x)=\varepsilon^{-1}(1+O(\varepsilon))$. This function can be obtained by
taking the normal derivative at $x$ of the Green's function $G_D(y,x)$
(with adequate normalization).
Let $\phi$ be a
conformal equivalence $D\rightarrow D'=\phi(D)$.
From the conformal
invariance property of the Green's function, it is easy to derive the
covariance property of the minimal function:
$$M_D(y,x)=\phi'(x)M_{\phi(D)}(\phi(y),\phi(x))$$
and then the Schwarzian-like covariance property for $\Gamma$:
$$\Gamma(D,x)=\phi'(x)^2\Gamma(\phi(D),\phi(x))-\frac{S\phi(x)}6.$$
Note that in the case where $D$ is simply connected,
$\Gamma(D,x)=-S\phi_D(x)/6$, and this is the usual covariance property
of Schwarzian derivatives:
$$S(\phi_D)=S(\phi_{D'}\circ\phi)=(\phi')^2(S\phi_{D'})\circ\phi+S\phi$$
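As a consistency check (ours): the covariance rule for $\Gamma$ is compatible with composition precisely because of this cocycle identity. For $\phi=\phi_2\circ\phi_1$:

```latex
\begin{align*}
\Gamma(D,x)&=\phi_1'(x)^2\,\Gamma(\phi_1(D),\phi_1(x))-\frac{S\phi_1(x)}6\\
&=\phi'(x)^2\,\Gamma(\phi(D),\phi(x))-\frac{\phi_1'(x)^2\,(S\phi_2)(\phi_1(x))+S\phi_1(x)}6
=\phi'(x)^2\,\Gamma(\phi(D),\phi(x))-\frac{S\phi(x)}6.
\end{align*}
```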
We are interested in the following situation: let $\gamma$ be a
chordal $\SLE_\kappa$ in $\H$, say $\kappa\leq 8/3$, and $L$ is an
independent loop soup with intensity $\lambda_\kappa$. Condition on
the event that $\gamma^L=\gamma\cup\{\delta\in
L:\delta\cap\gamma\neq\varnothing\}$ stays in $D_0$. It is not clear
at this point that the probability of this event is
$\alpha_\kappa$-covariant, hence that the resulting distribution is conformally
invariant.
First consider a chordal $\SLE_\kappa$ from $x$ to $y$ in $\H\supset
D_0$ (a chordal $\SLE$ unaware of the presence of holes), and an
$\alpha_\kappa$-covariant function $\varphi$ on the moduli space. By M\"obius
invariance, one can send $y$ to infinity (and use hydrodynamic
normalization); so $(X_t/\sqrt\kappa)$ is now a standard Brownian
motion. Here $g_t:D_0\setminus K_t\rightarrow H_t$ is a conformal
equivalence that extends through the holes, $h_t:H_t\rightarrow
D_t$ is a conformal equivalence and $D_t\in{\mathfrak D}$ is a standard domain.
Let $M_t=\varphi(D_0\setminus K_t,\dots)$. From the covariance assumption,
$$M_t=h'_t(X_t)^\alpha\varphi(h_t(X_t),\dots).$$
Set $\alpha=\alpha_\kappa=(6-\kappa)/2\kappa$. Then:
$$\frac{dh'_t(X_t)^\alpha}{h'_t(X_t)^\alpha}=\alpha\frac{h''_t(X_t)}{h'_t(X_t)}dX_t-\frac{\lambda_\kappa}6
Sh_t(X_t)dt+\alpha h'_t(X_t)^2B_{D_t}dt$$
where $\lambda_\kappa=(8-3\kappa)(6-\kappa)/2\kappa$ and $Sh_t$ is the
Schwarzian derivative. Besides:
\begin{align*}
d\varphi(h_t(X_t),\dots,f_t(z_i),\dots)=&h'_t(X_t)\partial_x\varphi dX_t+\left[\frac\kappa 2 h'_t(X_t)^2\partial_{x,x}+
\left(\frac\kappa
2-3\right)h''_t(X_t)\partial_x\right.\\
&\left.+h'_t(X_t)^2A_{D_t}\partial_x+h'_t(X_t)^2\sum_iV_{D_t}(z_i)\partial_{z_i}\right]\varphi(h_t(X_t),\dots,f_t(z_i),\dots)dt
\end{align*}
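The identification of the drift in the formula for $dh'_t(X_t)^\alpha/h'_t(X_t)^\alpha$ above boils down, after It\^o's formula, to two elementary identities (with $\alpha=\alpha_\kappa=(6-\kappa)/2\kappa$):

```latex
% coefficient of h'''_t/h'_t in the drift:
$$\alpha_\kappa\left(\frac\kappa 2-\frac 43\right)=\frac{(6-\kappa)(3\kappa-8)}{12\kappa}=-\frac{\lambda_\kappa}6,$$
% coefficient of (h''_t/h'_t)^2 in the drift:
$$\frac{\alpha_\kappa}2\bigl(1+\kappa(\alpha_\kappa-1)\bigr)=\frac{(6-\kappa)(8-3\kappa)}{8\kappa}=\frac{\lambda_\kappa}4,$$
```

matching $-\frac{\lambda_\kappa}6 Sh=-\frac{\lambda_\kappa}6\frac{h'''}{h'}+\frac{\lambda_\kappa}4\left(\frac{h''}{h'}\right)^2$.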
Let:
$$Z_t=M_t\exp\left(-\lambda_\kappa\int_0^t\Gamma(H_s,X_s)ds\right)=M_t\exp\left(-\lambda_\kappa\int_0^t\left(h'_s(X_s)^2\Gamma(D_s,h_s(X_s))-\frac
{Sh_s(X_s)}6\right)ds\right)$$
Then $Z_t$ is a local martingale iff $\varphi$
(restricted to ${\mathfrak D}$, hence considered as a function of the
parameters $x,\dots,z_i,\dots$) is annihilated by the differential operator:
$${\mathcal
M}=\frac\kappa 2\partial_{xx}+A_D\partial_x+\ell_x+\alpha
B_D-\lambda_\kappa\Gamma_x$$
where $\Gamma_x=\Gamma(D,x)$ in the standard domain $D$.
If $y$ is finite (and another marked point is used for normalization),
one can compute along the same lines. So if $C_t$ is the covariance
factor:
$$C_t=\left(h'_t(X_t)h'_t(Y_t)\left(\frac{Y_t-X_t}{h_t(Y_t)-h_t(X_t)}\right)^2\right)^\alpha$$
then
\begin{eqnarray*}
\frac {dC_t}{C_t}&=&\left(\alpha\frac{h''_t(X_t)}{h'_t(X_t)}dX_t-\frac{\lambda_\kappa}6
Sh_t(X_t)dt+\alpha h'_t(X_t)^2B_{D_t}dt\right)\\
&&+\alpha\left(h'_t(X_t)^2V'_{D_t}(h_t(Y_t))+\frac{2}{(Y_t-X_t)^2}\right)dt
+2\alpha\left(\frac{dX_t}{X_t-Y_t}+(5-\kappa)\frac{dt}{(Y_t-X_t)^2}\right)\\
&&-\frac{2\alpha}{h_t(X_t)-h_t(Y_t)}\left(h'_t(X_t)dX_t+\left(\frac\kappa
2-3\right)h''_t(X_t)dt+h'_t(X_t)^2(A_{D_t}-V_{D_t}(h_t(Y_t))-\frac{3}{h_t(X_t)-h_t(Y_t)})dt\right)\\
&&+2\kappa\alpha^2\left(\frac{h''_t(X_t)}{h'_t(X_t)(X_t-Y_t)}-\frac{h''_t(X_t)}{h_t(X_t)-h_t(Y_t)}-2\frac{h'_t(X_t)}{(X_t-Y_t)(h_t(X_t)-h_t(Y_t))}\right)dt\\
&=&\alpha\left(\frac{h''_t(X_t)}{h'_t(X_t)}+\frac{2}{X_t-Y_t}-\frac{2h'_t(X_t)}{h_t(X_t)-h_t(Y_t)}\right)\sqrt\kappa
dB_t-\frac{\lambda_\kappa}6
Sh_t(X_t)dt\\
&&+\alpha h'_t(X_t)^2\left(B_{D_t}+V'_{D_t}(h_t(Y_t))-\frac{2}{h_t(X_t)-h_t(Y_t)}(A_{D_t}-V_{D_t}(h_t(Y_t))-\frac{3}{h_t(X_t)-h_t(Y_t)})\right)dt
\end{eqnarray*}
So the corresponding $Z_t$ is a local martingale iff $\varphi$ is annihilated by the operator:
$${\mathcal
M}=\frac\kappa 2\partial_{xx}+(A_D+\frac{\kappa-6}{x-y})\partial_x+\ell_x+\alpha
\left(B_D+V'_D(y)+\frac{2(V_D(y)-A_D)}{x-y}+\frac{6}{(x-y)^2}\right)-\lambda_\kappa\Gamma_x$$
Observe that:
\begin{align*}
(x-y)^{-2\alpha}{\mathcal M}(x-y)^{2\alpha}&=\frac\kappa 2\partial_{xx}+A_D\partial_x+\ell_x+\alpha
\left(B_D+V'_D(y)\right)-\lambda_\kappa\Gamma_x\\
&=\check{\mathcal
L}_1+\alpha (B_D+V'_D(y))-\lambda_\kappa\Gamma_x
\end{align*}
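In this conjugation the singular terms cancel thanks to $2\kappa\alpha_\kappa=6-\kappa$: the $1/(x-y)$ drift produced by conjugating $\frac\kappa 2\partial_{xx}$ cancels the $(\kappa-6)/(x-y)$ term, while the coefficient of $1/(x-y)^2$ collects to

```latex
$$\kappa\alpha(2\alpha-1)+2\alpha(\kappa-6)+6\alpha
=\alpha\bigl(2\kappa\alpha+\kappa-6\bigr)
=\alpha\bigl((6-\kappa)+\kappa-6\bigr)=0.$$
```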
The conjugation corresponds to a change of reference measure ($\SLE$
aiming at $\infty$ rather than $\SLE$ aiming at $y$).
We sum up the discussion of this section. As before, ${\mathfrak C}$ is a
configuration space, $x$ and $y$ are marked points on a boundary
component of a configuration; ${\mathfrak M}$ is the associated moduli
space, and ${\mathfrak D}$ is a class of standard domains, with associated
Schwarz kernels $V$. We write $V_x$, $V_y$ to indicate the pole of
the kernel.
\begin{Prop}
(i) Let $\psi_1$, $\psi_2$ be $\alpha_\kappa$-covariant functions;
consider an $\SLE_\kappa(\psi_1)$ growing at $x$ and an $\SLE_\kappa(\psi_2)$
growing at $y$. These two SLEs satisfy the commutation condition iff there
is an $\alpha_\kappa$-covariant function $\psi$ such that
$\SLE_\kappa(\psi_i)$, $i=1,2$, is distributed as an $\SLE_\kappa(\psi)$ growing at
$x$ (resp. $y$) and $\psi$ satisfies the conditions:
$$\partial_y\left(\frac{\left(\check{\mathcal L}_1+\alpha V'_x(y)\right)\psi}\psi\right)=\partial_x\left(\frac{\left(\check{\mathcal L}_2+\alpha V'_y(x)\right)\psi}\psi\right)=0$$
(ii) Let $\varphi$ be an $\alpha_\kappa$-covariant function. Let $c\in
{\mathfrak C}$ be a configuration. One can assume that $c=D_0$ is a
standard domain. Consider a chordal $\SLE_\kappa$ from $x$
to $y$ in $c$ with the holes filled in (a chordal $\SLE$ unaware of the holes); let $(c_t)_t$ be the corresponding
family of configurations. Let $h_t$ be the map from $c_t$ to a
standard domain $D_t\in {\mathfrak D}$, with hydrodynamic normalization. Let:
$$Z_t=\varphi(c_t)\exp\left(-\lambda_\kappa\int_0^t
\Gamma(c_s,\gamma_s)ds\right)$$
Then $Z$ is a local martingale iff $\varphi$ restricted to ${\mathfrak D}$
is annihilated by the operator:
$${\mathcal M}=
(x-y)^{2\alpha}\left(\check {\mathcal L_1}+\alpha (B_x+V'_x(y))-\lambda_\kappa\Gamma_x\right)(x-y)^{-2\alpha}$$
(iii) In the situation of $(ii)$, consider the Girsanov transform of
chordal $\SLE$ by $Z$. Then the resulting process is an
$\SLE_\kappa(\psi)$, where $\psi$ is an $\alpha_\kappa$-covariant function
whose restriction to ${\mathfrak D}$ is annihilated by the operator:
$$\check{\mathcal L}_1+\alpha_\kappa (B_x+V'_x(y))-\lambda_\kappa\Gamma_x$$
In particular, $\SLE_\kappa(\psi)$ started at $x$ and
$\SLE_\kappa(\psi)$ started at $y$ satisfy the commutation
conditions.
\end{Prop}
\subsection{The case $\kappa=0$}
In this section, we address the following (non trivial) question: how
to define a (deterministic) reversible $\SLE_0$ in a multiply connected domain?
This requires implicitly the domain Markov property and conformal
invariance. While there are many ways to construct conformally
invariant chords (e.g. as level lines or flow lines of harmonic
invariants), it is not so obvious to satisfy all conditions
simultaneously. We propose here a construction based on the
restriction ideas (and do not claim to prove anything rigorous about
it).
Consider a nice subdomain $H_0$ of $\H$ (i.e. $\overline{\H\setminus
H_0}$ is compact and has finitely many components), $x_0\in\mathbb{R}$,
$(x_0-\varepsilon,x_0+\varepsilon)\subset \partial H_0$ for some $\varepsilon>0$; we attempt to
describe an $\SLE_0(x_0\rightarrow\infty)$ in $H_0$. The idea is to take
$\gamma$ a chordal $\SLE_\kappa(x_0\rightarrow\infty)$ in $\H$ conditionally on
$\gamma^L$ staying in $H_0$, where $L$ is an independent loop soup
with intensity $\lambda_\kappa$, and take the limit as $\kappa\searrow
0$. On the one hand, the unconditional chordal $\SLE_\kappa$ converges to a hyperbolic
geodesic in $\H$, which is not conformally invariant; on the other
hand the intensity of the loop soup diverges. We shall define a total
cost summing the large deviation rate for Brownian motion and the loop
soup term, which will give a conformally covariant functional.
To check conformal invariance, we will use comparisons of Loewner
chains in conformally equivalent domains as in \cite{LSW3} (see also
the previous subsections).
Let us fix some notation. Let $\phi_0: H_0\rightarrow\tilde H_0$ be a
conformal equivalence, where $\tilde H_0$ satisfies the same properties
as $H_0$. Consider the Loewner flow:
$$\partial_t g_t=\frac 2{g_t-X_t}$$
that maps $H_0$ to $H_t$ (and extends to $\H$). This Loewner chain is
mapped by $\phi_0$ to a Loewner chain corresponding to the flow:
$$\partial_t \tilde g_t=\frac {2\phi_t'(X_t)^2}{\tilde g_t-\tilde X_t}$$
where $\phi_t=\tilde g_t\circ\phi_0\circ g_t^{-1}$ is a conformal
equivalence $H_t\rightarrow \tilde H_t$ (and $\tilde X_t=\phi_t(X_t)$, etc.).
Consider now the functional on driving processes:
$$I(X)=I(X,H_0)=\frac 12\int_0^\infty(\partial_t X_t)^2dt+24\int_0^\infty
\Gamma(H_t,X_t)dt.$$
Informally, as $\kappa\searrow 0$, a driving process $X$ has weight
$\propto\exp(-I(X)/\kappa)$, the first term being the large deviations
rate for Brownian motion, the second term coming from the loop soup
conditioning ($\lambda_\kappa\sim 24/\kappa$). We want to prove that
$I(X)=I(\tilde X)+c$, where $\tilde X$ has time parameter:
$$s(t)=\int_0^t\phi_t'(X_u)^2du.$$
Taking into account the time change, we get:
$$I(\tilde X)=I(\tilde X,\tilde H_0)=\frac 12\int_0^\infty\left(\frac{\partial_t\tilde
X_t}{\phi'_t(X_t)^2}\right)^2\phi'_t(X_t)^2dt+24\int_0^\infty
\phi'_t(X_t)^2\Gamma(\tilde H_t,\tilde X_t)dt.$$
From the covariance of $\Gamma$, we have:
$$\Gamma(H_t,X_t)=\phi'_t(X_t)^2\Gamma(\tilde H_t,\tilde X_t)-\frac {(S\phi_t)(X_t)}6.$$
Also, comparisons of Loewner chains yield:
\begin{align*}
\partial_t\tilde X_t&=\phi'_t(X_t)\partial_tX_t-3\phi_t''(X_t)\\
\partial_t(\phi_t'(X_t))&=\phi''_t(X_t)\partial_tX_t+\left(\frac{\phi_t''^2}{2\phi_t'}-\frac
43\phi'''_t\right)(X_t)
\end{align*}
It follows that:
\begin{align*}
I(\tilde X)-I(X)&=\frac
12\int_0^\infty\left(-6\frac{\phi_t''}{\phi_t'}(X_t)\partial_tX_t+9\left(\frac{\phi_t''}{\phi_t'}\right)^2(X_t)\right)dt+4\int_0^\infty
S\phi_t(X_t)dt\\
&=\frac
12\int_0^\infty\left(-6\frac{\phi_t''}{\phi_t'}(X_t)\partial_tX_t-3\left(\frac{\phi_t''}{\phi_t'}\right)^2(X_t)+8\frac{\phi_t'''}{\phi_t'}(X_t)\right)dt=-3\int_0^\infty\frac{\partial_t(\phi_t'(X_t))}{\phi_t'(X_t)}dt\\
&=3\log(\phi_0'(x_0))
\end{align*}
since $\phi_t'(X_t)\rightarrow 1$ ($\phi_t$ has hydrodynamic
normalization at $\infty$). Thus we get the covariance relation:
$$I(\phi_*X,\phi_*H)=I(X,H)+3\log(\phi'(x_0)).$$
So provided that $I$ attains a minimum, the minimizing path is
conformally invariant and Markov. Also, one can minimize over a subset
of paths satisfying particular global topological conditions
(e.g. leaving a hole on the left).
Assuming that chordal $\SLE_\kappa$
in $\H$ is reversible, this is also reversible (the conditioning event
does not depend on the orientation of the paths). For a
configuration $(H,x,\infty)$, denote:
$$\psi(H)=\sup_X\exp\left(-I(X,H)\right)$$
which is $3$-covariant. With the previous notation, one can write
the coherence condition:
$$\psi(H)=\sup_{X_{[0,t]}}\left(\psi(H_t)\exp\left(-\frac 12\int_0^t(\partial_s X_s)^2ds-24\int_0^t
\Gamma(H_s,X_s)ds\right)\right)$$
By conformal invariance, one can assume that $H$ is a standard
configuration. Let $h_t:H_t\rightarrow D_t$ be the normalized
conformal equivalence to a standard domain. Then:
$$\psi(D_t)=\psi(H_t)(h'_t(X_t))^{-3}$$
and $\partial_t(h_t(X_t))_{|t=0}=A_x$, $\partial_t(h'_t(X_t))_{|t=0}=B_x$.
Let $\overline X$ denote a maximizing driving function.
Expanding the
previous condition for small $t$, one gets:
$$(\ell_x+A_x\partial_x+(\partial_t\overline X_t)\partial_x+3B_x)\psi-\left(\frac
12(\partial_t \overline X_t)^2+24\Gamma_x\right)\psi=0$$
at time $t=0$ (where $\psi$ is seen as a function on standard configurations). Besides, the left-hand side is optimized over
$\partial_t\overline X_t$. This implies that:
$$\partial_t \overline X_t=\frac{\partial_x\psi}{\psi}{\rm\ \ and\ \ }
\frac{(\partial_x\psi)^2}{2\psi}+(\ell_x+A_x\partial_x+3B_x-24\Gamma_x)\psi=
0.
$$
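The optimization over $\partial_t\overline X_t$ is a pointwise quadratic maximization: setting $v=\partial_t\overline X_t$ and $u=\partial_x\psi/\psi$,

```latex
$$\sup_v\left(uv-\frac{v^2}2\right)=\frac{u^2}2,\qquad\text{attained at }v=u=\frac{\partial_x\psi}\psi,$$
```

which, substituted back, gives the term $(\partial_x\psi)^2/2\psi$ in the stationary equation.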
One can also get this equation by considering:
$$\left(\frac\kappa 2\partial_{xx}+A_x\partial_x+\ell_x+\alpha_\kappa B_x-\lambda_\kappa\Gamma_x\right)\psi_\kappa=0$$
and then rewrite the equation for $(\psi_\kappa)^{\kappa}$ and take
the limiting equation as $\kappa\searrow 0$. Now if the target point
is not $\infty$ but a finite point $y$, one can proceed similarly. The
condition obtained on $\psi$ for growth at $x$, $y$ gives the
commutation condition.
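To sketch this limiting procedure: with $\psi_\kappa=\psi^{1/\kappa}$, divide the equation by $\psi_\kappa$ and multiply by $\kappa$; then

```latex
$$\frac{\kappa^2}2\,\frac{\partial_{xx}\psi_\kappa}{\psi_\kappa}
=\frac\kappa 2\,\frac{\partial_{xx}\psi}\psi+\frac{1-\kappa}2\left(\frac{\partial_x\psi}\psi\right)^2
\longrightarrow\frac 12\left(\frac{\partial_x\psi}\psi\right)^2,
\qquad \kappa\alpha_\kappa\to 3,\qquad \kappa\lambda_\kappa\to 24,$$
```

so that as $\kappa\searrow 0$ one recovers $\frac{(\partial_x\psi)^2}{2\psi}+(\ell_x+A_x\partial_x+3B_x-24\Gamma_x)\psi=0$, as above.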
\subsection{Towards a classification}
In simply connected domains, we obtained a complete classification of
commuting SLEs. In the multiply connected case, it appears to be
much more technical, so we shall only outline some elements.
First, if we don't assume {\em a priori} that the two SLEs have the
same $\kappa$, then it is not hard to check that the commutation
condition for an $\SLE_\kappa(\psi)$ growing at $x$ and an
$\SLE_{\tilde\kappa}(\psi)$ growing at $y$ reads:
$$\partial_y\left(\frac{\left(\check{\mathcal L}_1+\alpha_{\tilde\kappa} V'_x(y)\right)\psi}\psi\right)=\partial_x\left(\frac{\left(\check{\mathcal L}_2+\alpha_\kappa V'_y(x)\right)\psi}\psi\right)=0$$
where now $\check{\mathcal L}_2=\frac{\tilde\kappa}2\partial_{yy}+\cdots$. Also,
if $\tilde\kappa\neq\kappa$, the covariance condition for $\psi$ is
modified as follows:
\begin{enumerate}
\item (M\"obius invariance) For any $c=(D\setminus K,\dots)$, $c'=(D'\setminus K',\dots)$
where $D,D'$ are simply connected, $h:c\rightarrow c'=h_*c$ is an
equivalence of configurations, and $h$ extends to a conformal
equivalence $D\rightarrow D'$, one has:
$$\psi(h_*c)=\psi(c)$$
\item (covariance) For any $c=(D\setminus K,x,x',y,y',\dots)$, $c'=(D\setminus K',\dots)$
where $D$ is simply connected, $x,x',y,y'\in\partial D$, $\partial D$ is
smooth at $x,x',y,y'$, and $h:c\rightarrow c'=h_*c$ is an
equivalence of configurations, one has:
$$\psi(h_*c)=\left(h'(x)h'(x')\left(\frac{x-x'}{h(x)-h(x')}\right)^2\right)^{-\alpha_\kappa}\left(h'(y)h'(y')\left(\frac{y-y'}{h(y)-h(y')}\right)^2\right)^{-\alpha_{\tilde\kappa}}\psi(c)$$
\end{enumerate}
where $x',y'$ are new target points for the SLEs.
We now revert to the case $\kappa=\tilde\kappa$ (and the two SLEs are
``aiming at each other'') as discussed earlier and further study the
commutation conditions. Consider the operators:
$${\mathcal M}_1=\check {\mathcal L}_1+\alpha V'_x(y)+h_1,\qquad{\mathcal M}_2=\check {\mathcal L}_2+\alpha V'_y(x)+h_2$$
where $\partial_y h_1=\partial_x h_2=0$. To define two commuting SLEs,
we have to find functions $h_1,h_2,\psi$ such that ${\mathcal M}_1\psi={\mathcal
M}_2\psi=0$.
Say $\kappa=8/3$, and consider a chordal $\SLE_{8/3}$ in a simply
connected domain conditioned to avoid some holes. Then the conditional
$\SLE$ can be represented as an $\SLE_{8/3}(\psi)$, and $\psi$
(restricted to a section of the moduli space)
is annihilated by a differential operator, coming from the restriction
property. Since $\SLE_{8/3}$ is reversible, so is the
conditional version; this gives a commutation condition in a
multiply connected domain. For $\SLE_2$, we know that the restriction construction is the scaling
limit of a reversible discrete model, hence the restriction weight
$h_1(x)=\alpha_2 B_x-\lambda_2 \Gamma_x$ should be
a solution of this functional equation.
We now give a direct derivation of this fact.
\begin{Lem}
\begin{enumerate}
\item If there exists a non-vanishing function $\psi$ such that ${\mathcal M}_1\psi={\mathcal
M}_2\psi=0$, then $h_1,h_2$ satisfy:
$$(\check{\mathcal L}_1+2V'_x(y))(\alpha V'_y(x)+h_2)=(\check{\mathcal
L}_2+2V'_y(x))(\alpha V'_x(y)+h_1)$$
\item The following identity holds:
$$(\ell_x+2V'_x(y))\Gamma_y-\frac{V'''_x(y)}6
=(\ell_y+2V'_y(x))\Gamma_x-\frac{V'''_y(x)}6.
$$
\item The condition (i) is satisfied if $h_1(x,\dots)=h(x,\dots)$,
$h_2(y,\dots)=h(y,\dots)$, and
$$
h(x,\dots)=\alpha_\kappa B_x-\lambda_\kappa\Gamma_x+\ell_x f$$
where $f$ is a function on standard configurations that does not depend on $x,y$.
\end{enumerate}
\end{Lem}
\begin{proof}
(i)
If $\psi$ is such that ${\mathcal M}_1\psi={\mathcal
M}_2\psi=0$, then also ${\mathcal M}_0\psi=0$ where:
\begin{eqnarray*}
{\mathcal M}_0&=&[{\mathcal M}_1,{\mathcal M}_2]-2V'_y(x){\mathcal M_1}+2V'_x(y){\mathcal M_2}\\
&=&\left(\check{\mathcal L}_1 V_y(x)-\check{\mathcal
L}_2A_x+2V'_x(y)V_y(x)-2A_xV'_y(x)+\kappa\alpha V_y''(x)\right)\partial_x\\
&&-\left(\check{\mathcal L}_2 V_x(y)-\check{\mathcal
L}_1A_y+2V'_y(x)V_x(y)-2A_yV'_x(y)+\kappa\alpha
V_x''(y)\right)\partial_y\\
&&+\sum\left(\check{\mathcal L}_1 V_y(z_i)-\check{\mathcal L}_2 V_x(z_i)+2V'_x(y)V_y(z_i)-2V'_y(x)V_x(z_i)\right)\partial_{z_i}\\
&&+\left((\check{\mathcal L}_1+2V'_x(y))(\alpha V'_y(x)+h_2)-(\check{\mathcal
L}_2+2V'_y(x))(\alpha V'_x(y)+h_1)\right)
\end{eqnarray*}
The first-order terms vanish by (\ref{cmu3}) and (\ref{cmu4}), so
${\mathcal M}_0$ reduces to multiplication by the zeroth-order term;
since $\psi$ is non-vanishing, this term must be zero, which is the
claimed identity.

(ii)
We want to interpret the left-hand side and the right-hand side of the
equation in terms of Brownian measures. Let us start with the
left-hand side. In the standard domain $D$, we grow a vertical slit
$\gamma$ at $x$. For small $t$, $H_t=D\setminus \gamma_{[0,t]}$ is
conformally equivalent to a standard domain $D_t$; the conformal
equivalence $f_t$ can be expanded in $t$ as:
$$f_t(z)= z+tV_x(z)+o(t)$$
where $V_x$ is the Schwarz kernel with pole at $x$ in the domain $D$.
Hence $f'_t(y)^2=1+2tV_x'(y)+o(t)$ and $Sf_t(y)=tV'''_x(y)+o(t)$.
Let us consider
$\Gamma(H_t,y)$. From the covariance property of $\Gamma$, we get:
\begin{align*}
\Gamma(H_t,y)-\Gamma(H_0,y)&=f'_t(y)^2\Gamma(D_t,f_t(y))-\frac {Sf_t(y)}6-\Gamma(H_0,y)\\
&=t\left((V_x(y)\partial_y+\sum_i V_x(z_i)\partial_{z_i}+2V'_x(y))\Gamma_y-\frac{V'''_x(y)}6\right)+o(t)
\end{align*}
Now $\Gamma(H_t,y)-\Gamma(H_0,y)$ is the measure for Brownian bubbles
rooted at $y$ of bubbles that intersect $\gamma_{[0,t]}$ as well as
one of the holes. Consider $\tilde\gamma$ a vertical slit growing at
$y$, and the loop soup measure:
\begin{align*}
\mu^{loop}_\H\left(\{\delta:\delta\cap\gamma\neq\varnothing,\delta\cap\tilde\gamma\neq\varnothing,\delta\subsetneq
H_0\}\right)&=\mu^{bub}_x\left(\{\delta:\delta\cap\tilde\gamma\neq\varnothing,\delta\subsetneq
H_0\}\right)(t+o(t))\\
&=\mu^{bub}_y\left(\{\delta:\delta\cap\gamma\neq\varnothing,\delta\subsetneq
H_0\}\right)(t+o(t))
\end{align*}
as follows from Proposition 11 in \cite{LW}. Here we are considering
loops that intersect the two small slits $\gamma,\tilde\gamma$. We can
root such loops either near $x$ or $y$, which gives us the right-hand
side and the left-hand side of the claimed identity.

(iii)
As we did earlier, we can expand condition (i) at $y=x$, writing
$h_1=\alpha_\kappa B_x-\lambda_\kappa \Gamma_x+h(x,\dots)$, $h_2=\alpha_\kappa
B_y-\lambda_\kappa \Gamma_y +h(y,\dots)$ for some unknown function $h$. The
functional equation for $h$ is the linear equation:
$$(\ell_x+2V'_x(y))h(y)=
(\ell_y+2V'_y(x))h(x)$$
(from (\ref{cmu5}) and (ii)). Note that this equation does not involve
$\kappa$. Also, using the identity (\ref{cmu3}), one can write:
$$(\ell_x+2V'_x(y))\ell_y+(\dots)\partial_x=
(\ell_y+2V'_y(x))\ell_x+(\dots)\partial_y.$$
So if $f$ does not depend on $x,y$, $h(x)=\ell_x f$ gives a solution
of the functional equation.
\end{proof}
Let us consider again the
equation ``without second member'':
$$(\ell_x+2V'_x(y))h(y)=(\ell_y+2V'_y(x))h(x)$$
appearing in (iii).
From (\ref{cmu3}),
we see that if $z$ is a marked point, then $h(x)=V_x(z)$ is a solution
for any marked point $z$. Also, differentiating the identity (\ref{cmu3}) w.r.t. $z$,
two terms $V'_x(z)V'_y(z)$ cancel out, and we get:
$$(\ell_x+2V'_x(y))V'_y(z)=(\ell_y+2V'_y(x))V'_x(z)$$
So $h(x)=V'_x(z)$ is a solution. Similarly,
$h(x)=(V_x(z_2)-V_x(z_1))/(z_2-z_1)$ is also a solution. This
corresponds to rational solutions in the simply connected case. It is
not so clear what would be $f$ in these cases.
As in Section 4, we can interpret a solution $h$ as the tangent map of
a cocycle. More precisely, if $h$ is as above, then one can define a
function $C$ on hulls and residual configurations (configurations
determined by the domain and the marked points $z_1,\dots$) such that:
\begin{enumerate}
\item For all hulls $A,B$,
$C(A.B,z)=C(B,\phi_A(z))C(A,z)$.
\item If $A$ is a hull of half-plane capacity $\varepsilon$ located at $x$, then
$C(A,z)=1+2\varepsilon h(x,z)+o(\varepsilon)$.
\end{enumerate}
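In infinitesimal form, these two conditions identify $h$ as the
logarithmic derivative of the cocycle (a routine restatement of (1)
and (2), recorded here for orientation): if $(A_t)_{t\ge 0}$ is a
chain of hulls grown at $x$, with $A_t$ of half-plane capacity $t$,
then (2) gives
$$\frac{d}{dt}\Big|_{t=0}\log C(A_t,z)=2h(x,z),$$
while (1) shows that $\log C$ accumulates additively along the chain,
the marked points being transported by the successive conformal maps.
This is the sense in which $h$ is the tangent map of the cocycle $C$.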
Some of the arguments we used in the simply connected case can be
replicated here; though a general classification of such cocycles
appears to be more difficult.
\vspace{1cm}
\noindent {\bf Acknowledgments.} I wish to thank Greg Lawler and
Wendelin Werner for stimulating and fruitful conversations.
\bibliographystyle{abbrv}
\section*{Introduction}
In a recent paper~\cite{Aragona2020}, the authors observed a rather
surprising coincidence between the sequence of integers
\begin{equation*}
1, 2, 4, 7, 11, 16, 23, 32, 43, 57\ldots
\end{equation*}
representing the partial sums of the famous sequence $\{b_j\}$ of the
number of partitions of the integer $j$ into at least two distinct
parts, already studied by Euler~\cite{euler1748introductio}, and a sequence
of group-theoretical
invariants. Our sequence arises in connection with a problem in algebraic
cryptography, namely the study of the conjugacy classes of
affine elementary abelian regular subgroups of the symmetric group on
$2^n$ letters~\cite{cds06,calderini2017translation,Aragona2019}. This
is relevant in the cryptanalysis of block ciphers,
since it may trigger a variation of the well-known \emph{differential
attack}~\cite{bih91}: a statistical attack which makes it possible to recover
information on the secret unknown key by detecting a bias in the
distribution of the \emph{differences} on a given set of ciphertexts
when the corresponding plaintext difference is known. In particular,
if $\mathbb F_2^n$ serves as the message space of a block cipher (see
e.g.~\cite{aes}) which has been proven secure with respect to
differential cryptanalysis~\cite{nyberg1995provable} and if $T$
represents the translation group on $\mathbb F_2^n$, any conjugate of
$T$ can be potentially used to define new alternative operations on $\mathbb F_2^n$ for a
successful differential attack~\cite{Civino2019}.
In~\cite{Aragona2020}, on the basis of the aforementioned interest,
the authors studied a chain of normalizers, which begins with the
normalizer $N_n^0$ of $T$ in a suitable Sylow $2$-subgroup $\Sigma_n$
of $\Sym(2^n)$ and whose $i$-th term $N_n^i$ is defined as the
normalizer in $\Sigma_n$ of the previous one. After providing some
experimental as well as theoretical evidence, the authors
conjectured~\cite[Conjecture~1]{Aragona2020} the number
$\log_{2}\Size{N^{i}_n : N^{i-1}_n}$ to be independent of $n$ for
$1\le i\le n-2$, and to be equal to the $(i+2)$-th term of the
sequence of the partial sums of the sequence $\{b_j\}$\footnote{\ The
sequence $b_j+1$ appears in several other areas of mathematics,
from number theory to commutative algebra~\cite{Enkosky2014}. In
particular, it was already known to Euler that $b_j+1$ corresponds
to the number of partitions of $j$ into odd parts (see~\cite[Chapter
16]{euler1748introductio} and \cite[\S 3]{Andrews2007}). Several
proofs of this {Euler's partition theorem} have been offered ever
since~\cite{Syl1882, Andrews1994, Kim1999}, and several important
refinements have been
obtained~\cite{Syl1882,Fine1988,Bes1994,Bous97,Straub2016}.}
previously mentioned~\cite[\url{https://oeis.org/A317910}]{OEIS}.
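For the reader who wishes to check the coincidence numerically, the
following short Python sketch (ours, not part of the original
computations; the function name \texttt{b} mirrors the notation $b_j$)
enumerates the partitions of $j$ into at least two distinct parts by
brute force and accumulates the partial sums, recovering the ten terms
quoted above.

```python
from itertools import combinations

def b(j):
    """Number of partitions of j into at least two distinct parts."""
    return sum(1
               for r in range(2, j + 1)
               for parts in combinations(range(1, j), r)
               if sum(parts) == j)

# Partial sums of b_3, b_4, ..., b_12 give the sequence quoted above.
sums, total = [], 0
for j in range(3, 13):
    total += b(j)
    sums.append(total)
print(sums)  # [1, 2, 4, 7, 11, 16, 23, 32, 43, 57]
```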
\newline \newline In this paper we completely settle this conjecture.
The first attempts to solve this problem were based on theoretical
techniques whose computational complexity quickly became
unmanageable. For this reason, we develop here a novel
framework to approach the problem from a different point of view. In
this new approach, indeed, we take into account both the imprimitivity
and the nilpotence of the Sylow $2$-subgroup $\Sigma_n$ to represent
its elements in terms of a special family of left-normed commutators,
that we call \emph{rigid commutators}, in a fixed set of
generators. Any such commutator $[X]$ can be identified with a subset
$X$ of $\{1,\dots,n\}$. The subgroups of $\Sigma_n$ that can be
generated by rigid commutators are called here
\emph{saturated subgroups}. A careful inspection led us to prove that
the normalizers $N^i_n$ are saturated subgroups. In particular, a set
of generators of $N^i_n$ can be obtained from a set of generators of
$N^{i-1}_n$ by adding the rigid commutators of the form $[X]$ for all
$X$ such that the elements of the complementary set of $X$ in
$\{1,\dots,k\}$, where $k=\max X \le n$, yield a partition of
$i+2-n+k$ into at least two distinct parts. This is the key to prove
the conjecture.
\newline
\newline
The advantage of adopting rigid
commutators is twofold. In the first place, they prove to be handy in
calculations with the use of the \emph{rigid commutator machinery}, a
dedicated set of rules which we develop in this paper. Secondly, rigid
commutators can be seen as factors in a \emph{unique factorization
formula} for the elements of any given saturated subgroup. This
representation is crucial in showing that the normalizers $N^i_n$ are
saturated. By means of this result and of the machinery, we derive an
algorithm which efficiently computes the normalizer chain.
\newline
\newline
The paper is organized as follows: in Section~\ref{sec:prel}
some basic facts on the Sylow $2$-subgroup $\Sigma_n$ of $\Sym(2^n)$
are recalled. Section~\ref{sec:commutators} is totally devoted to the
introduction and the study of rigid commutators and to the
construction of the rigid commutator machinery. In
Section~\ref{sec:main} the rigid commutator machinery is used to prove
the conjecture on the normalizer chain previously
mentioned~\cite[Conjecture~1]{Aragona2020}.
In Section~\ref{sec:normalizer_saturated} it is shown that each term of
the normalizer chain is a saturated group and an efficient procedure
to determine the rigid generators of the normalizers is derived. An
explicit construction of the normalizer chain in a specific case is
provided in Section~\ref{sec:computation}, and some open problems arising
from computational evidence are
discussed. Finally, some hints for future investigations
are presented in Section~\ref{sec:concl}.
\section{The Sylow $2$-subgroup of $\Sym(2^n)$}\label{sec:prel}
Let $n$ be a non-negative integer. We start recalling some well-known
facts about the Sylow $2$-subgroup
$\Sigma_n$ of the symmetric group on $2^n$ letters.
\newline
\newline
\noindent Let us consider the set
\begin{equation*}
\mathcal{T}_n=\bigl\{w_1\dots w_{n} \mid w_i \in
\{0,1\} \bigr\}
\end{equation*}
of binary words of length $n$, where $\mathcal{T}_0$ contains only
the empty word. The infinite rooted binary tree $\mathcal{T}$ is defined
as the graph whose vertices are $\bigcup_{j\ge 0} \mathcal{T}_j$ and
where two vertices, say $w_1\dots w_{n}$ and $v_1\dots v_{m}$, are
connected by an edge if $|m-n|=1$ and $w_i=v_i$ for
$1 \leq i \leq \min(m,n)$. The empty word is the root of the tree
and it is connected with both the two words of length $1$.
\noindent We can define a sequence $\Set{s_i}_{i \geq 1}$ of
automorphisms of this tree. Each $s_i$ necessarily fixes the root,
which is the only vertex of degree $2$. The automorphism $s_1$ changes
the value $w_1$ of the first letter of every non-empty word into
$\bar{w}_1\deq (w_1+1) \bmod 2$ and leaves the other letters
unchanged. If $i\ge 2$, we define
\begin{equation}\label{eq:generators}
(w_1\dots w_{n})s_i\deq
\begin{cases}
\text{empty word} & \text{if $n=0$} \\
w_1\dots \bar{w_i}\dots w_{n} & \text{if $n\ge i$ and $w_1=\dots=w_{i-1}=0$}\\
w_1\dots w_{n} & \text{otherwise.}
\end{cases}
\end{equation}
In general, $s_i$ leaves a word unchanged unless the word has length
at least $i$ and the letters preceding the $i$-th one are all zero, in
which case the $i$-th letter is increased by $1$ modulo $2$. If
$i \le n$ and the word $w_1\dots w_n\in \mathcal{T}_n$ is identified
with the integer
$1+\sum_{i=1}^{n}2^{n-i} w_{i}\in \Set{1,\dots, 2^n}$, then $s_i$ acts
on $\mathcal{T}_n$ as the permutation whose cyclic decomposition
is
\begin{equation*}
\prod_{j=1}^{2^{n-i}}(j,j+2^{n-i})
\end{equation*}
which has order $2$. In particular, the group $\Span{s_1,\dots,s_n}$
acts faithfully on the set $\mathcal{T}_n$, whose cardinality is
$2^n$, as a Sylow $2$-subgroup $\Sigma_n$ of the symmetric group
$\Sym(2^n)$ (see also Fig.~\ref{fig:tree}).
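The definition in Eq.~\eqref{eq:generators} and the cyclic
decomposition above can be checked mechanically. The sketch below
(illustrative; the names are ours) realizes each $s_i$ as a
permutation of $\{1,\dots,2^n\}$ for $n=3$, using the stated
identification of a word with an integer, and verifies that it
coincides with the product of transpositions
$\prod_{j=1}^{2^{n-i}}(j,\,j+2^{n-i})$.

```python
N = 3                                   # work in Sigma_3, acting on {1,...,8}
SIZE = 2 ** N

def s(i):
    """Generator s_i as a tuple p with p[x-1] = image of the point x."""
    img = []
    for x in range(1, SIZE + 1):
        w = x - 1                       # bits of w are the word w_1 ... w_N
        if w >> (N - i + 1) == 0:       # w_1 = ... = w_{i-1} = 0
            img.append((w ^ (1 << (N - i))) + 1)   # flip the i-th letter
        else:
            img.append(x)               # word left unchanged
    return tuple(img)

# s_i equals the product of the transpositions (j, j + 2^{N-i})
for i in range(1, N + 1):
    step = 2 ** (N - i)
    expected = list(range(1, SIZE + 1))
    for j in range(1, step + 1):
        expected[j - 1], expected[j + step - 1] = j + step, j
    assert s(i) == tuple(expected)
```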
\begin{figure}
\begin{equation*}
\xymatrix{
\ar@{-}[d]_{w_1} & && & &\bullet \ar@{-}[dll]|0 \ar@{-}[drr]|1 && && \\
\ar@{.}[dd] & && \bullet \ar@{.}[ddl] \ar@{.}[ddr]\ar@{<->}[rrrr] |{s_1} & && & \bullet \ar@{.}[ddl] \ar@{.}[ddr] & &\\
&&&&&&&&& \\
\ar@{-}[d]_{w_{n}} & & \bullet \ar@{-}[dl]|0 \ar@{-}[dr]|1 &&&&&& \bullet \ar@{-}[dl]|0 \ar@{-}[dr]|1 &\\
& \bullet \ar@{<->}[rr]|{s_n} & & \bullet &&\dots && \bullet & & \bullet \\
& 1\ar@{<-}[u] & & 2\ar@{<-}[u] &&\dots && (2^{n}-1)\ar@{<-}[u] & & 2^n \ar@{<-}[u]
}
\end{equation*}
\caption{\footnotesize The action of $\Sigma_n$ on the subtree
$\bigcup_{i=0}^n \mathcal{T}_i$.}
\label{fig:tree}
\end{figure}
It is also well known that
\begin{equation*}
\Sigma_{n} = \Span{s_n} \wr \Sigma_{n-1} = \Span{s_n} \wr \dots \wr \Span{s_1} \cong \wr_{i=1}^n C_2
\end{equation*}
is the iterated wreath product of $n$ copies of the cyclic group
$C_2$ of order $2$.\newline
\newline
The \emph{support} of a permutation is the set of the
letters which are moved by the permutation. We say that two
permutations $\sigma$ and $\tau$ are \emph{disjoint} if they have
disjoint supports; two
disjoint permutations always commute.
The \emph{closure}
\begin{equation*}
S_i\deq\Span{s_i}^{\Span{s_1,\dots,s_i}}
\end{equation*}
is generated by disjoint conjugates of $s_i$, hence $S_i$ is an
elementary abelian $2$-group which is normalized by $S_j$ if $j\le i$.
Moreover,
$\Sigma_n=S_1 \ltimes \dots \ltimes S_n\cong \Sigma_{n-1} \ltimes
S_n$.
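As a sanity check on the decompositions above, one can generate
$\Sigma_n$ on a computer and compare its order with
$|\wr_{i=1}^n C_2| = 2^{2^n-1}$. The following self-contained Python
sketch (ours; $n=3$) computes the closure of $\{s_1,s_2,s_3\}$ under
multiplication by breadth-first search.

```python
N = 3
SIZE = 2 ** N

def s(i):
    """Generator s_i as a tuple p with p[x-1] = image of x."""
    return tuple(((x - 1) ^ (1 << (N - i))) + 1
                 if (x - 1) >> (N - i + 1) == 0 else x
                 for x in range(1, SIZE + 1))

def mul(p, q):
    """Right action: apply p first, then q."""
    return tuple(q[p[x] - 1] for x in range(SIZE))

identity = tuple(range(1, SIZE + 1))
gens = [s(i) for i in range(1, N + 1)]

# breadth-first closure of the generating set under multiplication
group, frontier = {identity}, [identity]
while frontier:
    nxt = []
    for g in frontier:
        for h in gens:
            gh = mul(g, h)
            if gh not in group:
                group.add(gh)
                nxt.append(gh)
    frontier = nxt

assert len(group) == 2 ** (2 ** N - 1)   # |Sigma_3| = 2^7 = 128
```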
\section{Rigid commutators} \label{sec:commutators} The
\emph{commutator} of two elements $h$ and $k$ in a group $G$ is
defined as $[h,k]\deq h^{-1}k^{-1}hk=h^{-1}h^k$.
The \emph{left-normed commutator} of the $m$ elements
$g_1,\dots,g_m\in G$ is the usual commutator if $m=2$ and is
recursively defined by
\begin{equation*}
[g_1,\dots,g_{m-1},g_m]\deq \bigl[[g_1,\dots,g_{m-1}],g_m\bigr]
\end{equation*} if
$m\ge 3$. It is well known that the commutator subgroup $G'$ of a
finitely generated nilpotent group $G$ can be generated by left-normed
commutators involving only generators of
$G$~\cite[III.1.11]{Huppert1967}.
From now on, we will focus on left-normed commutators in
$s_1, \ldots, s_n$. For the sake of simplicity, we
write $[i_1,\dots,i_k]$ to denote the left-normed commutator
$[s_{i_1},\dots, s_{i_k}]$, when $k\ge 2$, and we also write $[i]$ to
denote the element $s_i$.
\begin{definition}\label{def:rigid_commutators}
A left-normed commutator $[i_1,\dots,i_k]$ is called \emph{rigid,
based at $i_1$ and hanging from $i_k$}, if $i_1>i_2>\dots >i_k$.
Given a subset $X=\Set{i_1,\dots, i_k} \subseteq \Set{1,\dots,n}$
such that $i_1>i_2>\dots > i_k$, the \emph{rigid commutator indexed
by $X$}, denoted by $[X]$, is the left-normed commutator
$[i_1,\dots,i_k]$. We set $[X]\deq 1$ when $X=\emptyset$. The set
of all the rigid commutators of $\Sigma_n$ is denoted by
$\mathcal{R}$ and
we let $\mathcal{R}^*\deq \mathcal{R}\setminus\Set{[\emptyset]}$.
\end{definition}
At the end of this section we prove that every permutation in the
Sylow $2$-subgroup $\Sigma_n$ can be expressed, in a unique way, as a
product of the objects previously defined. To this purpose, we
develop below a set of rules to perform computations with (rigid)
commutators.
\subsection{Rigid commutator machinery}\label{subsec:rcm}
Let $1 \leq i_1, i_2, \ldots, i_k \leq n$ be integers and let us
consider the commutator $[i_1,\dots ,i_k]$. The following facts are
easily checked.
\begin{fact}\label{fact1}
Denoting by $i = \max\Set{i_1,\dots ,i_k}$, the commutator
$[i_1,\dots ,i_k]$ is a product of conjugates of $s_{i}$ by way of
elements in $\Span{s_{i_1},\dots ,s_{i_k}}$ and thus it belongs to
$S_i$. Any two such conjugates commute, since they belong to the
same $S_i$.
\end{fact}
\begin{fact}\label{fact2}
As a direct consequence of Fact~\ref{fact1}, if
$\max\Set{i_1,\dots ,i_k}= \max\Set{j_1,\dots ,j_l}$ then
$[i_1,\dots ,i_k]$ and $[j_1,\dots,j_l]$ commute.
\end{fact}
\noindent Note that if $g\in S_i$ and $h\in S_j$, then $[g,h]\in S_k$,
where $k=\max\Set{i,j}$, so $[g,h]^2=1$ since $S_k$ is elementary
abelian. It follows that $[g,h,h]=[g,h]^2[g,h,h]=[g,h^2]=[g,1]=1$.
As a consequence we have:
\begin{fact}\label{lem:expunge_new}
If $k \geq 2$ and $i_j = i_{j+1}$ for some $1 \leq j \leq k-1$, then
$[i_1,\dots, i_k]=1$.
\end{fact}
The following result is crucial since it allows us to rewrite every
commutator as a rigid commutator.
\begin{lemma}\label{lem:compound_commutators}
Let $k\ge 2$ and $l\ge 1$ be integers. If
\begin{equation*}
c\deq[[i_1,\dots,i_k],[j_1,\dots,j_l]]
\end{equation*}
is the commutator of the two rigid commutators $[i_1,\dots,i_k]$ and
$[j_1,\dots,j_l]$, then
\begin{enumerate}
\item \label{item:symmetry}the order of $c$ divides $2$, so
$ c=[[j_1,\dots,j_l],[i_1,\dots,i_k]]$;
\item \label{item:same_start} if $i_1=j_1$, then $c=1$;
\item \label{item:expunged}if $l\ge 2$ and $i_k >j_l $, then $j_l$
can be dropped, i.e.
\begin{equation*}
c= [[i_1,\dots,i_k],[j_1,\dots,j_{l-1}]];
\end{equation*}
\item \label{item:disjoint_comm} if $i_1>j_1$,
$\Set{i_1,\dots,i_k} \cap \Set{j_1,\dots,j_l}=\emptyset$, and
$s\deq\max\Set{h \mid i_h> j_1}$, then
\begin{equation*}
c=[i_1,\dots, i_s, j_1];
\end{equation*}
\item \label{item:sameend} if $l\ge 2$ and $i_k=j_l$, then
\begin{equation*}
c= [[i_1,\dots,i_{k-1}],[j_1,\dots,j_{l-1}], j_l ];
\end{equation*}
\item \label{item:general_commutator} if $i_s >j_1 \ge i_{s+1}$,
then
\begin{equation*}
c=[i_1,\dots, i_s, j_1, h_1,\dots,
h_t],
\end{equation*}
where $h_1 > \dots > h_t$ and $\Set{h_1, \dots, h_t}\deq \Set{i_1,\dots,i_k} \cap
\Set{j_1,\dots,j_l}$. Moreover, $c=1$ if
$j_1\in {\Set{i_1, \dots, i_k}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let us prove each claim separately.
\begin{enumerate}
\item The claim $c^2 = 1$ depends on the fact that $c\in S_i$, where
the index $i$ is defined as
$i\deq\max\Set{i_1,\dots,i_k,j_1,\dots,j_l}$.
\item If $i_1=j_1$, then both of $[i_1,\dots,i_k]$ and
$[j_1,\dots,j_l]$ belong to $S_{i_1}$ which is abelian, thus the
claim follows.
\item Assume that $l\geq 2$ and $j_l < i_k$. In this case
\begin{align*}
c&=[i_1,\dots,i_k][i_1,\dots,i_k]^{[j_1,\dots,j_{l-1}]s_{j_l}[j_1,\dots,j_{l-1}]s_{j_l}} \\
&= [i_1,\dots,i_k]\bigl([i_1,\dots,i_k]^{[j_1,\dots,j_{l-1}]}\bigr)^{s_{j_l}[j_1,\dots,j_{l-1}]s_{j_l}}.
\end{align*}
The permutations $s_{j_l}[j_1,\dots,j_{l-1}]s_{j_l}$ and
$[i_1,\dots,i_k]^{[j_1,\dots,j_{l-1}]}$ are disjoint: the first
one has support contained in
$\Set{2^{n-j_l}+1,\dots , 2^{n-j_l+1}}$ and the support of the
second one is contained in
\begin{equation*}
\Set{1,\dots, 2^{n-\min(i_k,j_{l-1})+1}}
\subseteq \Set{1,\dots,
2^{n-j_l}}.
\end{equation*}
Hence
\begin{equation*}
c=[i_1,\dots,i_k][i_1,\dots,i_k]^{[j_1,\dots,j_{l-1}]}=[[i_1,\dots,i_k],[j_1,\dots,j_{l-1}]],
\end{equation*}
which proves the claim.
\item The claim follows by repeated applications of items
\eqref{item:expunged} and \eqref{item:symmetry}.
\item
For every $x,y\in G \deq\Span{s_{n},\dots, s_{j_{l}+1}}$ the permutations $x$ and $y^{s_{j_l}}$ are disjoint and so they commute. In particular, if $x^2=1$, then $[x,s_{j_l}]^2= (xx^{s_{j_l}})^2= x^2(x^2)^{s_{j_l}}=1$. If $a,b\in G $ are such that $a^2=b^2=1$, then
\begin{align*}
[[a,b],s_{j_l}] &= [abab, s_{j_l}]\\
&= ababa^{s_{j_l}} b^{s_{j_l}}a^{s_{j_l}}b^{s_{j_l}} = aa^{s_{j_l}}bb^{s_{j_l}}aa^{s_{j_l}} b b^{s_{j_l}} \\
& = [a,s_{j_l}] [b,s_{j_l}] [a,s_{j_l}] [b,s_{j_l}] \\
&= [a,s_{j_l}]^{-1} [b,s_{j_l}]^{-1} [a,s_{j_l}] [b,s_{j_l}] =[[a,s_{j_l}],[b,s_{j_l}]].
\end{align*}
For $a\deq [i_1,\dots,i_{k-1}]$ and $b\deq [j_1,\dots,j_{l-1}]$, we have
\begin{equation*}
[[i_1,\dots,i_{k-1},j_l],[j_1,\dots,j_{l-1}, j_l ]]= [[i_1,\dots,i_{k-1}],[j_1,\dots,j_{l-1}], j_l ],
\end{equation*}
as required.
\item An iterated use of items \eqref{item:symmetry},
\eqref{item:expunged} and \eqref{item:sameend} yields
\begin{equation*}
c=[[i_1,\dots, i_s], [j_1, \dots, j_v],
h_1,\dots, h_t]
\end{equation*} if $j_1 > i_{s+1} \ge h_1$, where
the intersection
$\Set{i_1,\dots, i_s}\cap \Set{j_1, \dots, j_v}=\emptyset$ is
trivial, while, if $j_1=h_1=i_{s+1}$, then
$c=[[[i_1,\dots, i_s,h_1], h_1],\dots, h_t]$. By
Fact~\ref{lem:expunge_new}, the commutator
$[[i_1,\dots, i_s,h_1], h_1]$ is trivial, and so $c=1$. We may
then assume that $j_1 > i_{s+1} \ge h_1$. By
\eqref{item:disjoint_comm}, we obtain the equality
$[[i_1,\dots, i_s], [j_1, \dots, j_v]]=[i_1,\dots, i_s, j_1]$,
therefore \begin{equation*}
c=[i_1,\dots, i_s, j_1, h_1,\dots,
h_t]
\end{equation*}
as claimed. \qedhere
\end{enumerate}
\end{proof}
A repeated application of Lemma~\ref{lem:compound_commutators} shows
that every left-normed commutator $[i_1,\dots,i_k]$ can be written as
a commutator $[j_1,\dots,j_l]$, where
$\Set{j_1,\dots,j_l} \subseteq\Set{i_1,\dots,i_k}$ and
$j_h \ge j_{h+1}$ for all $1\leq h \leq l-1$. If $j_h=j_{h+1}$ for
some $h$, then Fact~\ref{lem:expunge_new} shows that
$[j_1,\dots,j_h,j_{h+1}]=1$, which in turn implies $[j_1,\dots,j_l]=1$.
This fact is summarized in the following result.
\begin{proposition}\label{prop:no_repetitions}
Any left-normed commutator $[i_1,\dots,i_k]$ can be written as a
rigid commutator $[j_1,\dots,j_l]$, for a suitable subset
$\Set{j_1,\dots,j_l} \subseteq\Set{i_1,\dots,i_k}$.
\end{proposition}
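As a concrete instance of Proposition~\ref{prop:no_repetitions} (our
own check; the helper names are ours), in $\Sigma_3$ the left-normed
commutator $[1,3,2]=[s_1,s_3,s_2]$ reduces, via items
\eqref{item:symmetry} and \eqref{item:disjoint_comm} of
Lemma~\ref{lem:compound_commutators}, to the rigid commutator $[3,2]$;
the sketch below confirms this on the underlying permutations.

```python
N = 3
SIZE = 2 ** N

def s(i):
    """Generator s_i as a tuple p with p[x-1] = image of x."""
    return tuple(((x - 1) ^ (1 << (N - i))) + 1
                 if (x - 1) >> (N - i + 1) == 0 else x
                 for x in range(1, SIZE + 1))

def mul(p, q):                           # right action: p first, then q
    return tuple(q[p[x] - 1] for x in range(SIZE))

def inv(p):
    r = [0] * SIZE
    for x in range(SIZE):
        r[p[x] - 1] = x + 1
    return tuple(r)

def comm(p, q):                          # [p, q] = p^{-1} q^{-1} p q
    return mul(mul(inv(p), inv(q)), mul(p, q))

def rigid(indices):
    """Rigid commutator [i_1,...,i_k] with i_1 > ... > i_k."""
    c = s(indices[0])
    for i in indices[1:]:
        c = comm(c, s(i))
    return c

# [s_1, s_3, s_2] is not rigid, but equals the rigid commutator [3,2]
c = comm(comm(s(1), s(3)), s(2))
assert c == rigid([3, 2])
```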
It is worth noticing here that rigid commutators are the images of
P.~Hall's basic commutators~\cite{Hall1934} under the presentation
of the group $\Sigma_n$ as a factor of the $n$-generated free group,
once the order of the generators is reversed.
\subsection{Saturated subgroups}
In this section we give a representation of the elements of $\Sigma_n$
in terms of rigid commutators.
\begin{lemma}\label{lem:basis}
The set of all the rigid commutators $[X]\in \mathcal{R}$, where $X$
varies among the subsets of $\Set{1,\dots,n}$ such that $\max(X)=i$,
is a basis for $S_i$.
\end{lemma}
\begin{proof}
Let $1 \leq i \leq n$. To prove the claim, we look at
$S_i$ as a $2^{i-1}$-dimensional vector space over $\F_2$.
Proceeding by backward induction on $j$, for $i \geq j \geq 1$, we
show that the set of all the rigid
commutators based at $i$ and hanging from $h$, for some {$h\ge j$},
is linearly independent. When $j=i$ there is nothing to prove.
Assume
\begin{equation}\label{eq:claim}
\prod_{i>i_1>\dots > i_t\ge j}[i,i_1,\dots, i_t]^{e_{i,i_1,\dots, i_t}} = 1,
\end{equation}
where the exponents are in $\F_2$. We aim at proving that all the
exponents are $0$. From Eq.~\eqref{eq:claim} we have
\begin{equation*}
\prod_{i>i_1>\dots > i_t> j}[i,i_1,\dots, i_t]^{e_{i,i_1,\dots, i_t}}\prod_{i>i_1>\dots > i_{t-1} > i_t= j}[i,i_1,\dots, i_{t-1}, j]^{e_{i,i_1,\dots,i_{t-1}, j}} = 1,
\end{equation*}
and so
\begin{multline}\label{eq:basis}
\prod_{i>i_1>\dots > i_t> j}[i,i_1,\dots, i_t]^{e_{i,i_1,\dots, i_t}} = \\
\left [\left(\prod_{i>i_1>\dots > i_{t-1} > i_t= j}[i,i_1,\dots,
i_{t-1}]^{e_{i,i_1,\dots,i_{t-1}, j}}\right) ,j\right ] .
\end{multline}
Note that if the permutation on the right-hand side
of~Eq.~\eqref{eq:basis} is non-trivial, then it moves some $x$ with
$x>2^{n-j}$, which is fixed by the one on the left-hand side. Hence
the permutations on both sides are trivial. By
induction, the exponents in the left-hand side of
Eq.~\eqref{eq:basis} are all $0$. Now, the commutator map
\begin{equation*}
[\,\cdot ,s_j]\colon \Span{s_{j+1},\dots,s_n} \to
\Span{s_{j},\dots,s_n}
\end{equation*}
is injective, hence the equality
\begin{equation*}
\left [\left(\prod_{i>i_1>\dots > i_{t-1} > i_t= j}[i,i_1,\dots,
i_{t-1}]^{e_{i,i_1,\dots,i_{t-1}, j}}\right) ,j\right ] =1
\end{equation*}
implies
\begin{equation*}
\prod_{i>i_1>\dots > i_{t-1} > i_t= j}[i,i_1,\dots,
i_{t-1}]^{e_{i,i_1,\dots,i_{t-1}, j}}=1.
\end{equation*}
Again, by the inductive hypothesis, we find
$e_{i,i_1,\dots,i_{t-1}, j}=0$ for every choice of
$ i_1 >\dots >i_{t-1}$. As the number of rigid commutators based at
$i$ equals the dimension of $S_i$, the proof is complete.
\end{proof}
We can now state our first main result as a straightforward
consequence of Lemma~\ref{lem:basis}. Let us call a \emph{proper
order} $\prec$ on $\mathcal{R}^*$ any total order refining the
partial order defined by $[i_1,\dots,i_t] \prec [j_1,\dots,j_l]$ if
$i_1 < j_1$. Here we denote by $\mathcal{P}_{n}$ the power set of
$\Set{1,\dots,n}$.
\begin{theorem}\label{thm:unique_representation}
  Given a proper order $\prec$ on $\mathcal R^*$, every element
$g\in \Sigma_n$ can be uniquely represented in the form
\begin{equation*}
g
= \prod_{Y\in \mathcal{P}_{n}\setminus \Set{\emptyset}}[Y]^{e_{g}(Y)},
\end{equation*}
where the factors are ordered with respect to $\prec$ and
$e_{g} \colon \mathcal{P}_{n}\setminus\Set{\emptyset} \to \Set{0,1}$
is a function depending on $g$.
\end{theorem}
\begin{proof}
Since $\Sigma_n = S_1 \ltimes \dots \ltimes S_n$, the claim is a
straightforward consequence of Lemma~\ref{lem:basis}.
\end{proof}
Some of the following corollaries are straightforward and their proofs
will be omitted.
\begin{corollary}\label{cor:size_subgroup}
If $G$ is a subgroup of $\Sigma_n$ containing $k$ distinct rigid
commutators, then $\Size{G}\ge 2^k$.
\end{corollary}
We now need a new concept which plays a key role in the remainder of
this work.
\begin{definition}
A subset $\mathcal{G}$ of $\mathcal{R}$ is called \emph{saturated}
if $\mathcal{G}\cup \Set{[\emptyset]}$ is closed under taking
commutators and the subgroup $G\deq\Span{\mathcal{G}}\le \Sigma_n$
is called a \emph{saturated subgroup}.
\end{definition}
\begin{remark}\label{rem:sturated_generation}
A subgroup $G\le \Sigma_n$ is saturated if and only if it can be
generated by some subset $\mathcal{X}$ of $\mathcal{R}$: indeed $G$
is also generated by the smallest saturated subset of
  $G\cap \mathcal{R}$ containing $\mathcal{X}$.
\end{remark}
\begin{corollary}\label{cor:saturated_subgroups}
Let $G\le \Sigma_n$ be a saturated subgroup generated by a saturated
set $\mathcal{G}\subseteq \mathcal{R}^*$ and let $\prec$ be any given
proper order on $\mathcal{G}$. Every element $g\in G$ has a unique
representation
\begin{equation*}
g=\prod_{c\in \mathcal{G}}c^{e_c(g)},
\end{equation*}
where the commutators in the product are ordered with respect to
$\prec$ and $e_c(g)\in \Set{0,1}$. In particular
$\Size{G}=2^{\Size{\mathcal{G}}}$.
\end{corollary}
\begin{corollary}\label{cor:homogeneus_components}
Let $G\le \Sigma_n$ be a saturated subgroup generated by a saturated
  set $\mathcal{G}\subseteq \mathcal{R}^*$ and let $\prec$ be any given
proper order on $\mathcal{G}$. If the product $c_1\cdots c_k\in G$,
where $c_i\in \mathcal{R}^*$, and
$c_1\precneqq c_{2} \precneqq \dots \precneqq c_k$, then
$c_i\in \mathcal{G}$ for all $1 \leq i \leq k$.
\end{corollary}
\begin{proof}
Note that since every rigid commutator belongs to some $S_i$, the
group $G$ has the semidirect product decomposition
$G=(G\cap S_1)\ltimes \dots \ltimes (G\cap S_n)$. In particular
every element of $G$ can be written as an ordered product of
elements of $\mathcal{G}$. Write $c_1\cdots c_k=g_1\cdots g_t$
where $g_i\in \mathcal{G}$ and $g_1\precneqq \dots \precneqq g_t$.
By Theorem~\ref{thm:unique_representation}, we have $k=t$ and
$c_i=g_i\in \mathcal{G}$.
\end{proof}
The next statement follows immediately from
Corollary~\ref{cor:homogeneus_components}.
\begin{corollary}\label{cor:homogeneus_components2}
Let $G\le \Sigma_n$ be a saturated subgroup. If $g=g_1\cdots g_n$,
where $g_i\in S_i$ for $1\leq i \leq n$, then $g\in G$ if and only
if $g_i\in G\cap S_i$ for $1 \leq i \leq n$. Moreover if
$g=h_1\cdots h_n$, where $h_i\in S_i$ for $1 \leq i \leq n$, then
$h_i=g_i$ for $1 \leq i \leq n$.
\end{corollary}
\section{Elementary abelian regular $2$-groups and their chain of
normalizers}\label{sec:main}
A vector space $T$ of dimension $n$ over $\F_2$ acts regularly on
itself as a group of translations. By way of this action, $T$ can be
seen as a regular elementary abelian subgroup of $\Sym(2^n)$, and any
other regular elementary abelian subgroup of $\Sym(2^n)$ is conjugate
to $T$ in $\Sym(2^n)$~\cite{Dixon1971}. The normalizer of $T$ in
$\Sym(2^n)$ is the affine group $\AGL(T)$, where $T$ embeds as the
normal subgroup of translations. For this reason, we refer to any of
the conjugates of $T$ as a \emph{translation subgroup} of $\Sym(2^n)$.
Every chief series $\mathfrak{F}=\Set{T_i}_{i=0}^n$ of $T$, where
$1=T_0 < T_1 < \dots < T_n=T$, is normalized by exactly one Sylow
$2$-subgroup $U_{\mathfrak{F}}$ of $\AGL(T)$. In \cite[Theorem p.
226]{Leinen1988} it is proved that every chief series $\mathfrak{F}$
of $T$ corresponds to a Sylow $2$-subgroup $\Sigma_{\mathfrak{F}}$ of
$\Sym(2^n)$ containing $T$ and having a chief series that intersects
$T$ in $\mathfrak{F}$. The correspondence
$\mathfrak{F}\mapsto \Sigma_\mathfrak{F}$ is a bijection between the
sets of the chief series of $T$ and the set of Sylow $2$-subgroups of
$\Sym(2^n)$ containing $T$. In \cite{Aragona2020} it is also pointed
out that
$U_{\mathfrak{F}}=N_{\Sigma_{\mathfrak{F}}}(T)=\Sigma_{\mathfrak{F}}\cap
\AGL(T)$. From now on the chief series $\mathfrak{F}$ will be fixed,
and so, without ambiguity, we will write $\Sigma_n$ and
$U_n$ to denote respectively
$\Sigma_{\mathfrak{F}}$ and $U_{\mathfrak{F}}$. In \cite{Aragona2019}
it is proved that $U_n$ contains, as normal subgroups, exactly two
conjugates of $T$, namely $T$ and $T_{U_n}=T^{g}$, {for some
$g \in \Sym(2^n)$}. It is also shown that the normalizer
$N_n^1=N_{\Sym(2^n)}(U_n)$ interchanges by conjugation these two
subgroups and that $N_n^1$ contains $U_n$ as a subgroup of index~$2$.
In particular, $N_n^1\le \Sigma_n$. In the following section we will
extend these results on $T, U_n, N_n^1$ to the entire chain of
normalizers, which is defined below.
\subsection{The normalizer chain}
The \emph{normalizer chain starting at $T$} is defined as
\begin{equation}\label{eq:normalizer_chain}
N_n^i \deq \begin{cases}
U_n = N_{\Sigma_n}(T)& \text{if $i=0$},\\
N_{\Sigma_n}(N^{i-1}_{n}) & \text{if $i\ge 1$.}
\end{cases}
\end{equation}
In \cite{Aragona2020} the authors proved that
$N_{\Sigma_n}(N_{n}^i)=N_{\Sym(2^n)}(N_{n}^i)$, for all $i\ge 0$,
computed the normalizer chain for $n \le 11$ by way of the computer
algebra package \textsf{GAP} \cite{GAP4}, and
conjectured that the index $\Size{N^{i+1}_n : N_n^{i}}$ does not
depend on $n$ for $n\ge i+3$~\cite[Conjecture 1]{Aragona2020}. In this
section we prove this conjecture by induction, using the rigid
commutator machinery developed in Section~\ref{subsec:rcm}. We start
by defining
\begin{equation*}
T\deq\Span{t_1,\dots, t_n}
\end{equation*}
where
\begin{equation*}
t_i\deq[s_i,s_{i-1},\dots,s_1]=[i,i-1,\dots,1] \in \mathcal{R}^*.
\end{equation*}
\begin{lemma}\label{lem:translation_subgroup}
$T$ is an elementary abelian regular subgroup of $\Sigma_n$. In
particular, $T$ is a translation subgroup of $\Sym(2^n)$.
\end{lemma}
\begin{proof}
$T$ is a subgroup of $\Sigma_n$ as it is generated by elements
belonging to $\Sigma_n$. By item \ref{item:general_commutator} of
Lemma~\ref{lem:compound_commutators} it follows that $[t_i,t_j]=1$,
so that $T$ is abelian. Note that $t_i^2=1$ as $t_i\in S_i$, and so
$T$ is elementary abelian of order at most $2^n$. Let us now prove
that $T$ is transitive. Let $1 \le x \le 2^n$ be an integer
represented as $x= 1+\sum_{i=1}^{n}2^{n-i} w_{i}$ in binary form and
let $t=\prod_{i=1}^nt_i^{w_i}$. A direct check shows that $t$ moves
$1$ to $x$. Since $T$ has an orbit with $2^n$ elements and it has
order at most $2^n$, it follows that $\Size{T}=2^n$ and that every
point stabilizer is trivial, therefore $T$ is a regular permutation
group on $\Set{1,\dots, 2^n}$.
\end{proof}
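The proof of Lemma~\ref{lem:translation_subgroup} can be replayed
numerically. The sketch below (ours; $n=3$, helper names ours) builds
$t_1,\dots,t_n$ as rigid commutators of permutations and checks that
they are pairwise commuting involutions and that the $2^n$ products
$t_1^{w_1}\cdots t_n^{w_n}$ move the point $1$ to the $2^n$ distinct
points, i.e.\ that $T$ is regular.

```python
from functools import reduce
from itertools import product

N = 3
SIZE = 2 ** N

def s(i):
    """Generator s_i as a tuple p with p[x-1] = image of x."""
    return tuple(((x - 1) ^ (1 << (N - i))) + 1
                 if (x - 1) >> (N - i + 1) == 0 else x
                 for x in range(1, SIZE + 1))

def mul(p, q):                           # right action: p first, then q
    return tuple(q[p[x] - 1] for x in range(SIZE))

def inv(p):
    r = [0] * SIZE
    for x in range(SIZE):
        r[p[x] - 1] = x + 1
    return tuple(r)

def comm(p, q):                          # [p, q] = p^{-1} q^{-1} p q
    return mul(mul(inv(p), inv(q)), mul(p, q))

def rigid(indices):
    """Rigid commutator [i_1,...,i_k] with i_1 > ... > i_k."""
    c = s(indices[0])
    for i in indices[1:]:
        c = comm(c, s(i))
    return c

identity = tuple(range(1, SIZE + 1))
t = [rigid(list(range(i, 0, -1))) for i in range(1, N + 1)]  # t_1,...,t_N

assert all(mul(x, x) == identity for x in t)              # involutions
assert all(comm(a, b) == identity for a in t for b in t)  # pairwise commute

# every point is the image of 1 under exactly one product of the t_i
images = sorted(reduce(mul, (t[i] for i in range(N) if e[i]), identity)[0]
                for e in product((0, 1), repeat=N))
assert images == list(range(1, SIZE + 1))
```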
Let us now determine the permutations in $\Sigma_n$ normalizing $T$.
For $1 \leq j<i\leq n$ let us define
$X_{ij}\deq \Set{1,\dots,i}\setminus \Set{j}$ and
\begin{equation*}
u_{ij}\deq [X_{ij}] = [i,\dots,j+1,j-1,\dots,1] \in \mathcal R^*.
\end{equation*}
From now on we will set
\begin{equation*}
\mathcal{U}_n \deq \Set{t_1,\dots,t_n, u_{ij} \mid 1 \leq j < i \leq
n} \subseteq \mathcal{R}^*.
\end{equation*}
\begin{proposition}\label{prop:U_as_normalizer}
The group $\Span{\mathcal{U}_n}$ is the normalizer of $T$ in
$\Sigma_n$, i.e.
\begin{equation*}
U_n = \Span{T,u_{ij} \mid 1 \le j < i \le n}.
\end{equation*}
\end{proposition}
\begin{proof}
Let us set $U \deq \Span{T,u_{ij} \mid 1 \le j < i \le n}$ and
let us prove that $U = U_n = N_{\Sigma_n}(T)$. By
Lemma~\ref{lem:compound_commutators} we have
\begin{equation*}
[t_h,u_{ij}]=
\begin{cases}
1 & \text{if $h\ne j$} \\
t_i & \text{if $h=j$}.
\end{cases}
\end{equation*}
This shows that $U\le N_{\Sigma_n}(T) = U_n$ and that $\mathcal U_n$
is a saturated set. Therefore, from
Corollary~\ref{cor:saturated_subgroups},
$\Size{U}= 2^{\Size{\mathcal U_n}} = 2^{\frac{n(n+1)}{2}} =
\Size{U_n}$, which proves the claim.
\end{proof}
We now aim to prove our second main result, which provides the
generators of the normalizer $N_n^i$ in terms of rigid
commutators. The result is proved by induction on~$i\ge 1$.
\subsection*{Induction basis}
Let us denote by $\eta_n$ the rigid commutator based at $n$ and
hanging from $3$ such that no intermediate integer is missing, i.e.
\begin{equation}\label{eq:n11}
\eta_n\deq [n,n-1,\dots,3].
\end{equation}
We now prove that we can generate $N_n^1$ by appending $\eta_n$ to the
list $\mathcal{U}_n$ of the rigid commutators generating $U_n$.
\begin{proposition}\label{prop:n1}
If $n\ge 3$, then the group $\Span{\mathcal{U}_n,\eta_n}$ is the
normalizer $N_n^1$ of $U_n$ in $\Sigma_n$, i.e.
\begin{equation*}
N_n^1 = \Span{T,u_{ij},\eta_n \mid 1 \le j < i \le n}.
\end{equation*}
Moreover, $\Size{N_n^1:U_n}=2$.
\end{proposition}
\begin{proof}
By Lemma~\ref{lem:compound_commutators},
\begin{equation*}
[t_i,\eta_n] =
\begin{cases}
u_{n,2} & \text{if $i=1$} \\
u_{n,1} & \text{if $i=2$}\\
1 & \text{otherwise}
\end{cases}
\text{\quad and \quad } [u_{ij},\eta_n] =
\begin{cases}
u_{n,1}& \text{if $i=2$ and $j=1$}\\
1 & \text{otherwise}
\end{cases}.
\end{equation*}
Thus the rigid commutator $\eta_n$ belongs to $N_{\Sigma_n}(U_n)$,
hence $\Span{U_n,\eta_n} \le N_{\Sigma_n}(U_n)$. Moreover
$U_n \cap S_n=\Span{t_n,u_{n,1},\dots,u_{n,n-1}}$ and so $\eta_n$,
which is based at $n$, is such that $\eta_n \notin U_n$. The claim
now follows from $\Size{N_{\Sigma_n}(U_n):U_n}=2$
\cite[Theorem~7]{Aragona2019}.
\end{proof}
\subsection*{Inductive step}
Let {$1\leq b \leq n$} and let
$I$ be a (possibly empty) subset of $\Set{1,2,\dots,b-1}$. We define
the \emph{rigid commutator based at $b$ and punctured at $I$} as
\begin{equation}
\puncture{b}{I} \deq [ \Set{1,\dots,b} \setminus
I] \in \mathcal{R}^*
\end{equation}
and, if $I = \Set{i_1,i_2,\dots,i_k}$ we also denote $\puncture{b}{I}$
by $\puncture{b}{i_1,i_2,\dots,i_k}$.\\
For example, the permutation $\eta_n$ defined in Eq.~\eqref{eq:n11} is
equal to $\puncture{n}{2,1}$. \medskip
\noindent We also define
\begin{equation}
\label{eq:C_lk}
\mathcal{W}_{ij}\deq \Set{\puncture{i}{I} \in \mathcal{R}^* \,\,\Big | \,\, I \subseteq
\Set{1,2,\dots,i-1}, \Size{I} \ge 2, \vphantom{\sum}\smash{\sum_{x \in I} x}=j \;
}
\end{equation}
for each $1 \leq i \leq n$ and each integer $j$, and
\begin{equation}\label{def:Ni}
\mathcal{N}_n^{i}\deq
\begin{cases}
\mathcal{U}_n & \text{if $i=0$} \\
\mathcal{N}_n^{i-1} \dot\cup \left( \dot\bigcup_{j=1}^{i}
\mathcal{W}_{n+j-i,\,j+2} \right) & \text{for $i>0$.}
\end{cases}
\end{equation}
Note that, if $j \leq i-2$, then $\Size{ \mathcal{W}_{i,j}}=b_j$,
i.e.\ the number of partitions of $j$ into at least two distinct
parts. Our next goal is to prove that $N_n^i=\Span{\mathcal{N}_n^i}$
for each $0\le i \le n-2$, where $N_n^i$ is defined as in
Eq.~\eqref{eq:normalizer_chain}. Propositions~\ref{prop:U_as_normalizer}
and \ref{prop:n1} show that this is actually the case when
$i\in \{0,1\}$. \newline
\newline
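The count of generators added at each step of Eq.~\eqref{def:Ni} can be checked by brute force. In the sketch below (an illustration only; punctured commutators are modelled as base/puncture-set pairs) the number of commutators in $\mathcal{N}_n^i\setminus\mathcal{N}_n^{i-1}$ equals $b_3+\dots+b_{i+2}$, a partial sum of the partition counts $b_j$ mentioned above:

```python
from itertools import combinations

def W(m, s):
    """The set W_{m,s}: pairs (m, I) with I a subset of
    {1, ..., m-1}, |I| >= 2 and sum(I) = s."""
    return {(m, frozenset(I))
            for k in range(2, m)
            for I in combinations(range(1, m), k) if sum(I) == s}

def new_generators(n, i):
    """Commutators added to N_n^i at the i-th step."""
    gens = set()
    for j in range(1, i + 1):
        gens |= W(n + j - i, j + 2)
    return gens

def b(j):
    """Partitions of j into at least two distinct parts."""
    return len(W(j + 1, j))  # every part of such a partition is < j

n = 8
for i in range(1, n - 1):  # i = 1, ..., n - 2
    assert len(new_generators(n, i)) == sum(b(k) for k in range(3, i + 3))
```

The sets $\mathcal{W}_{n+j-i,\,j+2}$ have pairwise distinct bases, so the union in Eq.~\eqref{def:Ni} is indeed disjoint, as the count confirms.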
In order to prove the general result, we need the following
reformulation of item~\ref{item:general_commutator} of
Lemma~\ref{lem:compound_commutators} to compute commutators of rigid
commutators written in punctured form.
\begin{proposition}\label{prop:punctured}
Let $1 \leq a,b \leq n$ and let $I$ and $J$ be subsets of
$\Set{1,2,\dots,a-1}$ and $\Set{1,2,\dots,b-1}$, respectively. Then
\begin{equation*}
\bigl[\,
\puncture{a}{I} ,
\puncture{b}{J}
\, \bigr] =
\begin{cases}
\puncture{\max (a,b)}{\; (I \cup J)\setminus\Set{\min(a,b)}} &
\text{if $\min (a,b)\in I \cup J$}
\\
1 & \text{otherwise}.
\end{cases}
\end{equation*}
\end{proposition}
\begin{proof}
Let $c\deq \bigl[\, \puncture{a}{I} , \puncture{b}{J} \, \bigr]$. If
$a=b$, then $c=1$. Without loss of generality, we can assume that
$a> b$. By Lemma~\ref{lem:compound_commutators}, if $b \notin I$,
then $c=1$. If $b\in I$, the claim follows from item
\ref{item:general_commutator} of
Lemma~\ref{lem:compound_commutators}.
\end{proof}
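For experimenting with the machinery, the rule of Proposition~\ref{prop:punctured} is straightforward to model in code. In the sketch below (an illustration only), a punctured commutator $\puncture{a}{I}$ is a pair consisting of its base and the frozen set of its punctures, and \texttt{None} plays the role of the trivial commutator $[\emptyset]$:

```python
def comm(c, d):
    """[a^I, b^J]: based at max(a, b) and punctured at
    (I | J) - {min(a, b)}, provided min(a, b) lies in I | J;
    trivial (None) otherwise."""
    (a, I), (b, J) = c, d
    if a == b:          # both factors lie in the abelian group S_a
        return None
    U = I | J
    if min(a, b) not in U:
        return None
    return (max(a, b), frozenset(U - {min(a, b)}))

t = lambda i: (i, frozenset())        # t_i carries no punctures
u = lambda i, j: (i, frozenset({j}))  # u_{ij} is punctured at j

# [t_h, u_{ij}] = t_i if h = j and trivial otherwise, as displayed
# in the proof of the proposition describing U_n
assert comm(t(3), u(6, 3)) == t(6)
assert comm(t(2), u(6, 3)) is None
# [t_1, eta_6] = u_{6,2}, where eta_6 is based at 6, punctured at {1, 2}
assert comm(t(1), (6, frozenset({1, 2}))) == u(6, 2)
```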
In the following facts, we summarize some properties that will be useful in
the proof of the conjecture.
\begin{fact}\label{lem:N_i-1}
A commutator $\puncture{a}{J}$ such that $1 \leq a \leq n$ and
$J \subseteq \Set{1,2,\dots,a-1}$ belongs to $\mathcal{N}_n^{i}$ if
and only if one of the following conditions is satisfied:
\begin{enumerate}
\item \label{item:b} $J=\emptyset$, and so $\puncture{a}{J}=t_a$;
\item \label{item:c} $\Size{J}=1$, and so $\puncture{a}{J}=u_{aj}$
where $J=\Set{j}$;
\item \label{item:a} $\Size{J}\geq 2$, and
$\sum_{j\in J} j \le i+2-(n-a)$.
\end{enumerate}
\end{fact}
\begin{fact}\label{rem:reduction_for_N}
Note that for $2\le i\le n-2$ the set
$\mathcal{N}_n^i\cap (S_1\ltimes \dots \ltimes S_{n-1})$ is equal to
$\mathcal{N}_{n-1}^{i-1}$. Indeed, at the $i$-th iteration, the
newly generated elements of $\mathcal{N}_n^i$, which are those in
$\mathcal{N}_n^i\setminus \mathcal{N}_n^{i-1}$, are constructed by
\emph{lifting} the elements of
$\mathcal{N}_n^{i-1}\setminus \mathcal{N}_n^{i-2}$, i.e.\ by
replacing a rigid commutator based at $j$ with the rigid commutator
obtained by removing its left-most element, for $j \leq n$, and by
adding some new rigid commutators based at $n$, in accordance with
Eq.~\eqref{def:Ni}. Proceeding in this way it is easy to check that,
disregarding all the commutators based at $n$ in
$\mathcal{N}_n^{i}$, the lifted elements are exactly the elements of
$\mathcal{N}_{n-1}^{i-1}$. The reader is referred to
Section~\ref{sec:computation} for explicit examples.
\end{fact}
\begin{fact}\label{rem:reduction_for_N_v2}
In the proof of Proposition~\ref{prop:n1} we showed that
\( [\,\mathcal{N}_n^{1} , \mathcal{N}_n^{0} ] \subseteq
\mathcal{N}_n^{0} \cup \Set{[\emptyset]} \). Assuming by induction
on $2\le i\le n-2$ that
$[\,\mathcal{N}_{n-1}^{i-1},\mathcal{N}_{n-1}^{i-2}] \subseteq
\mathcal{N}_{n-1}^{i-2} {\cup \Set{[\emptyset]}}$ and using
Fact~\ref{rem:reduction_for_N}, we can conclude that
\begin{multline*}
[\,\mathcal{N}_n^{i} \cap (S_1\ltimes \dots \ltimes S_{n-1}),
\mathcal{N}_n^{i-1} \cap (S_1\ltimes \dots \ltimes S_{n-1})] =
\\
[\,\mathcal{N}_{n-1}^{i-1}, \mathcal{N}_{n-1}^{i-2}] \subseteq
\mathcal{N}_{n-1}^{i-2} {\cup \Set{1}} = \mathcal{N}_n^{i-1} \cap
(S_1\ltimes \dots \ltimes S_{n-1}) \cup \Set{[\emptyset]}.
\end{multline*}
Similarly,
\begin{equation*}
[\,\mathcal{N}_n^{i} \cap (S_1\ltimes \dots \ltimes S_{n-1}),
\mathcal{N}_n^{i} \cap (S_1\ltimes \dots \ltimes S_{n-1})]
\subseteq \mathcal{N}_n^{i} \cap (S_1\ltimes \dots \ltimes
S_{n-1}) \cup \Set{[\emptyset]}.
\end{equation*}
\end{fact}
From the previous fact we have that, in order to prove by induction on
$i$ that
$[\,\mathcal{N}_n^{i}, \mathcal{N}_n^{i-1}]\subseteq
\mathcal{N}_n^{i-1} \cup \Set{[\emptyset]}$ and that
$[\,\mathcal{N}_n^{i}, \mathcal{N}_n^{i}]\subseteq \mathcal{N}_n^{i}
\cup \Set{[\emptyset]}$, it suffices to show that
$[\,\mathcal{W}_{n,\,i+2}\, ,\, \mathcal{N}_n^{i-1}] \subseteq
\mathcal{N}_n^{i-1}\cup \Set{[\emptyset]}$ and that
$[\,\mathcal{W}_{n,\,i+2}\, ,\, \mathcal{N}_n^{i}] \subseteq
\mathcal{N}_n^{i}\cup \Set{[\emptyset]}$. This is accomplished in the
following result.
\begin{lemma}\label{lem:rigid_commutators_normalize}
If $i\le n-2$, then
$[\mathcal{N}_n^i,\mathcal{N}_n^{i-1}]\subseteq \mathcal{N}_n^{i-1}
\cup \Set{[\emptyset]}$ and
$[\mathcal{N}_n^i,\mathcal{N}_n^i]\subseteq \mathcal{N}_n^i \cup
\Set{[\emptyset]}$.
\end{lemma}
\begin{proof}
If
$\puncture{n}{I}\in \mathcal{W}_{n,\,i+2}\subseteq
\mathcal{N}_n^i\setminus \mathcal{N}_n^{i-1}$ and
$\puncture{a}{J} \in \mathcal{N}_n^{i-1}$ then, by
Proposition~\ref{prop:punctured},
\begin{equation*} c \deq\bigl[\, \puncture{n}{I} , \puncture{a}{J}
\, \bigr] =
\begin{cases}
\puncture{n}{\; (I \cup J)\setminus\Set{a}} & \text{if $a\in I$}\\
1 & \text{otherwise.}
\end{cases}
\end{equation*}
We may assume $a\in I$. From Fact~\ref{lem:N_i-1}, if
$\puncture{a}{J}$ is as in case \eqref{item:a}, we have
\begin{eqnarray*}
\sum_{x\in (I \cup J)\setminus\Set{a}} x &\le& \sum_{x\in J}x +\sum_{x\in I} x - a \\
&\le& i+1-(n-a) + i+2-(n-n) -a \\
&=& i+2 -(n-i-1) \\
&\le& i+2 -1=i+1,
\end{eqnarray*}
and so $c \in \mathcal{N}_n^{i-1}$. If $\puncture{a}{J}$ is as in
case \eqref{item:b}, i.e.\ $\puncture{a}{J}=t_a$, then we have
\begin{equation*}
\sum_{x\in (I \cup J)\setminus\Set{a}} x= \sum_{x\in I} x - a
=i+2-a \le i+1
\end{equation*}
and so, also in this case, $c\in \mathcal{N}_n^{i-1}$. Finally, if
$\puncture{a}{J}$ is as in case \eqref{item:c}, i.e.\
$\puncture{a}{J}=u_{a,j}$, we have
\begin{equation*}
\sum_{x\in (I \cup J)\setminus\Set{a}} x \le \sum_{x\in I} x - a
+j = i+2-(a-j) \le i+1
\end{equation*}
and again $c\in \mathcal{N}_n^{i-1}$. Similar computations prove that, if $\puncture{a}{J} \in \mathcal{N}_n^{i}$, then also
$c \in \mathcal{N}_n^{i}$.
\end{proof}
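The two inclusions just proved can also be verified exhaustively for small parameters. In the sketch below (an illustration only: commutators are modelled as base/puncture-set pairs, membership in $\mathcal{N}_n^i$ is tested with the criterion of Fact~\ref{lem:N_i-1}, and the values $n=7$, $i=4$ are an arbitrary sample with $2\le i\le n-2$):

```python
from itertools import chain, combinations

def subsets(s):
    return chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))

def comm(c, d):
    """Commutator of punctured commutators; None is the trivial one."""
    (a, I), (b, J) = c, d
    if a == b:
        return None
    U = I | J
    if min(a, b) not in U:
        return None
    return (max(a, b), U - {min(a, b)})

def N_set(n, i):
    """Generators of N_n^i: a^J with |J| <= 1, or |J| >= 2
    and sum(J) <= i + 2 - (n - a)."""
    return {(a, frozenset(J))
            for a in range(1, n + 1)
            for J in subsets(range(1, a))
            if len(J) <= 1 or sum(J) <= i + 2 - (n - a)}

n, i = 7, 4  # sample values with 2 <= i <= n - 2
Ni, Nprev = N_set(n, i), N_set(n, i - 1)

ok = lambda c, d, S: comm(c, d) is None or comm(c, d) in S
assert all(ok(c, d, Nprev) for c in Ni for d in Nprev)  # [N^i, N^{i-1}]
assert all(ok(c, d, Ni) for c in Ni for d in Ni)        # [N^i, N^i]
```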
The following result is now straightforward.
\begin{proposition}\label{prop:size_of_Ni}
The set $\mathcal{N}_n^i$ is a saturated set of rigid commutators
and
$\Span{\mathcal{N}_n^{i}} \le
N_{\Sigma_n}\bigl(\Span{\mathcal{N}_n^{i-1}}\bigr)$. Moreover,
$\Size{\Span{\mathcal{N}_n^{i}}}=2^{\Size{\mathcal{N}_n^i}}$.
\end{proposition}
\begin{proof}
The claim follows from Lemma~\ref{lem:rigid_commutators_normalize},
Fact~\ref{rem:reduction_for_N_v2} and
Corollary~\ref{cor:saturated_subgroups}.
\end{proof}
We conclude this section with our main result showing that the $i$-th
term of the normalizer chain is actually generated by the set
$\mathcal{N}_n^i$ of rigid commutators defined in Eq.~\eqref{def:Ni}.
We prove, indeed, that the inclusion
$\Span{\mathcal{N}^i_{n}} \le
N_{\Sigma_n}\bigl(\Span{\mathcal{N}_n^{i-1}}\bigr)$ shown in the
previous proposition is actually an equality.
\begin{theorem}\label{thm:normalizers_indices}
For $i \leq n-2$, the group $\Span{\mathcal{N}_n^i}$ is the $i$-th
term $N_n^{i}$ of the normalizer chain.
\end{theorem}
\begin{proof}
The cases $i=0$ and $i=1$ have been addressed respectively in
Propositions~\ref{prop:U_as_normalizer} and \ref{prop:n1}. We
assume by induction on $i\ge 2$ that
$N_m^{j}=\Span{\mathcal{N}_m^j}$ for all $m\le n$ whenever
$j<i \le m-2$. In particular
\begin{equation*}N^j_m\cap
\Sigma_{m-1}=N^{j-1}_{m-1}=\Span{\smash{\mathcal{N}_{m-1}^{j-1}}}.
\end{equation*}
Notice that
\begin{equation*}
\begin{split}
\Span{\mathcal{N}_n^i}\cap \Sigma_{n-1} &=\Span{\mathcal{N}_{n-1}^{i-1}} \\
&= N_{n-1}^{i-1}= N_{\Sigma_{n-1}}(N_{n-1}^{i-2}) \\
&=N_{\Sigma_{n-1}}(N_{n}^{i-1}\cap \Sigma_{n-1})\\
&=N_{\Sigma_{n-1}}(N_{n}^{i-1}\cap \Sigma_{n-1}) \cap
N_{\Sigma_{n-1}}(N_{n}^{i-1}\cap S_n)=
N_{\Sigma_{n-1}}(N_{n}^{i-1}),
\end{split}
\end{equation*}
where the first equality in the last line holds since the following
inclusions
\begin{equation*}
[\mathcal{N}_{n-1}^{i-1}, \mathcal{N}_{n}^{i-1}]
\subseteq [\mathcal{N}_{n}^{i}, \mathcal{N}_{n}^{i-1}] \subseteq
\mathcal{N}_{n}^{i-1}\cup\Set{[\emptyset]}
\end{equation*}
imply that
$N_{\Sigma_{n-1}}(N_{n}^{i-1}\cap \Sigma_{n-1}) \subseteq
N_{\Sigma_{n-1}}(N_{n}^{i-1}\cap S_n)$. As $S_n$ is abelian, we
have
\begin{equation}\label{eq:NNSn}
\begin{split}
N^i_n&=N_{\Sigma_n}(N_{n}^{i-1}) = N_{\Sigma_{n-1}\ltimes
S_n}(N_{n}^{i-1}) \\&
=N_{\Sigma_{n-1}}(N_{n}^{i-1})N_{S_n}(N_{n}^{i-1}) =
\Span{\mathcal{N}_{n-1}^{i-1}} N_{S_n}(N_{n}^{i-1}).
\end{split}
\end{equation}
We are then left to determine
$N_{S_n}(N_{n}^{i-1})= \Set{x\in S_n \mid
[x,\mathcal{N}_{n-1}^{i-1}]\subseteq N_{n}^{i-1}\cap S_n}$. Let
us point out that, by Eq.~\eqref{eq:C_lk} and Eq.~\eqref{def:Ni},
the groups
\begin{equation*}
A\deq \Span{\mathcal{N}^{i}_n \cap S_n}\text{ and } B\deq
\Span{\vphantom{\bigcup}\smash{\bigcup_{j\ge i+1}}\mathcal{W}_{n,\,j+2}}
\end{equation*}
have trivial intersection and that $S_n=A\times B$. By
Lemma~\ref{lem:rigid_commutators_normalize} we have that $A$
normalizes $N_{n}^{i-1}$ and is therefore a subgroup of
$N_{n}^{i}\cap S_n$, so that $N_{S_n}(N_{n}^{i-1})=A\times H$ where
\begin{equation*}
H\deq \Big\{x\in \Span{\vphantom{\bigcup}\smash{\bigcup_{j\ge i+1}} \mathcal{W}_{n,\,j+2}}
\,\,\Big| \,\, [x,\mathcal{N}_{n-1}^{i-1}]\subseteq
N_{n}^{i-1}\cap S_n\Big\}.\vphantom{\bigcup_{j\ge i+1}}
\end{equation*}
We denote a generic element of $H$ by
\begin{equation*}
x\deq \prod_{I\in \mathcal{I}}\puncture{n}{I}^{e_I},
\end{equation*}
where the product is taken over the set $\mathcal{I}$ of all the
subsets $I\subseteq \Set{1,\dots,n-1}$ such that
$\sum_{y\in I}y \ge i+3$. For $1 \leq l \leq n$ let
$\mathcal{I}_l=\Set{I\in \mathcal{I} \mid \min(I)=l}$. Let
$u= u_{l,\, l-1}$ if $l> 1$, or $u=t_1=[1]$ if $l=1$. Since
$x\in H$, we have that
\begin{equation*}
[x, u] = \begin{cases} \prod_{I \ni l}\puncture{n}{(I\cup
\Set{l-1})\setminus \Set{l}}^{e_I}& \text{if $l>1$}\\
\prod_{I \ni 1}\puncture{n}{I\setminus \Set{1}}^{e_I}& \ \text{if $l=1$}\\
\end{cases}
\end{equation*}
belongs to $ \mathcal{N}_{n}^{i-1},$ and in particular $e_I\ne 0$
implies
\begin{equation*} \sum_{y\in (I\cup \Set{l-1})\setminus \Set{l}}
y\le i+1.
\end{equation*}
If $I\in \mathcal{I}_l$ then
\begin{equation*}\sum_{y\in (I\cup \Set{l-1})\setminus \Set{l}} y=
\sum_{y\in I}
y-l+(l-1)=\sum_{y\in I} y-1 \ge i+2,\end{equation*} so that $e_I=0$ for
$I\in \mathcal{I}_l$. As
$\mathcal{I}=\bigcup_{l=1}^n\mathcal{I}_l$, we have $H=\Set{1}$.
This finally shows that $N_{S_n}(N_{n}^{i-1})=A=N_{n}^{i}\cap S_n =
\Span{\mathcal{N}_n^i\cap S_n}$
and, by Eq.~\eqref{eq:NNSn},
that
\begin{equation*}
\begin{split}
N_n^i=N_{\Sigma_n}(N_{n}^{i-1})&= \Span{\mathcal{N}_{n-1}^{i-1}}
N_{S_n}(N_{n}^{i-1}) = \Span{\mathcal{N}_{n-1}^{i-1} \cup
(\mathcal{N}_n^i\cap S_n)} \\
&= \Span{(\mathcal{N}_{n}^{i} \cap \Sigma_{n-1})\cup
(\mathcal{N}_n^i\cap S_n)} = \Span{\mathcal{N}_n^i},
\end{split}
\end{equation*}
as claimed.
\end{proof}
\subsection{Partitions into at least two distinct parts} This work was
motivated by the computational evidence that the number
$c_{i}\deq \log_2\Size{N^{i-2}_n : N^{i-3}_n}$ does not depend on $n$,
if $3\le i \le n$ \cite{Aragona2020}. The first terms of the sequence
$\Set{c_i}$ coincide with those of the sequence $\{a_i\}$ defined in
\cite[\url{https://oeis.org/A317910}]{OEIS}, where $a_i$ is the $i$-th
partial sum of the sequence $\{b_i\}$, where $b_i$ is the number of
partitions of $i$ into at least two distinct parts. Some values of the
aforementioned sequences are displayed in Table~\ref{tab:two}.
\begin{table}[hbt]
{\renewcommand{\arraystretch}{1.3}
\begin{tabular}{c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c}
$i$ & $0$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & 9& 10 &11&12&13&14\\
\hline\hline
${b_i}$ & 0& 0& 0& 1& 1& 2& 3& 4& 5& 7& 9& 11& 14& 17& 21\\
\hline
$a_i$ &0&0& 0& 1& 2& 4& 7& 11& 16& 23& 32& 43& 57& 74&95
\end{tabular}
\bigskip }
\caption{First values of the sequences $a_i$ and $b_i$}
\label{tab:two}
\end{table}
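Both sequences in Table~\ref{tab:two} are easy to regenerate, using the fact that $b_j$ is one less than the number of partitions of $j$ into distinct parts (the one-part partition $\Set{j}$ being excluded). A minimal sketch:

```python
def distinct_partition_counts(n_max):
    """q[j] = number of partitions of j into distinct parts,
    via the standard 0/1-knapsack recurrence."""
    q = [1] + [0] * n_max
    for part in range(1, n_max + 1):
        # descending j, so each part is used at most once
        for j in range(n_max, part - 1, -1):
            q[j] += q[j - part]
    return q

N = 14
q = distinct_partition_counts(N)
b = [0] + [q[j] - 1 for j in range(1, N + 1)]  # drop the partition {j}
a = [sum(b[:j + 1]) for j in range(N + 1)]     # partial sums

assert b == [0, 0, 0, 1, 1, 2, 3, 4, 5, 7, 9, 11, 14, 17, 21]
assert a == [0, 0, 0, 1, 2, 4, 7, 11, 16, 23, 32, 43, 57, 74, 95]
```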
\newline We have developed the rigid commutator machinery as a
theoretical tool of investigation. In hindsight it is not surprising
that the equality $b_i=\Size{\mathcal{W}_{n,i}}$, where
$\mathcal{W}_{n,i}$ is defined by Eq.~\eqref{eq:C_lk}, is the link
with the sequences above. This combinatorial identity, Eq.~\eqref{eq:generators},
Proposition~\ref{prop:size_of_Ni} and
Theorem~\ref{thm:normalizers_indices} give at last a positive answer to
Conjecture~1 in \cite{Aragona2020}.
\begin{corollary}
For $1\le i\le n-2$, the number $\log_{2}\Size{N^{i}_n : N^{i-1}_n}$
is independent of $n$. It equals the $(i+2)$-th term of the
sequence $\Set{a_j}$ of the partial sums of the sequence $\Set{b_j}$
counting the number of partitions of $j$ into at least two distinct
parts.
\end{corollary}
\section{Normalizers of saturated
subgroups}\label{sec:normalizer_saturated}
In this section we will prove that the normalizer
$N\deq N_{\Sigma_n}(G)$ of a saturated subgroup $G$ of $\Sigma_n$ is
also saturated, provided that $T\le N$, and thus we can use our rigid
commutator machinery in the computation of $N$. In particular, for
$i\le n-2$, the machinery could be used as an alternative tool to
derive the theoretical description of $N_n^i$ as in
Theorem~\ref{thm:normalizers_indices}. Even if we do not have such a
description when $i>n-2$, the machinery can in any case be used to
compute the complete normalizer chain efficiently via
$\mathsf{GAP}$.\newline
\newline
We denote below by $N_i$ the intersection $N\cap S_i$.
\begin{proposition}\label{prop:normalizer_homogeneous}
If $G$ is a saturated subgroup of $\Sigma_n$, and
$N=N_{\Sigma_n}(G)$ is its normalizer in $\Sigma_n$, then
\begin{equation*}
N= N_1 \ltimes \cdots \ltimes N_n= \ltimes_{i=1}^n
N_{S_i}(G).
\end{equation*}
In particular if $x\in N$ and $x=x_1\cdots x_n$, with $x_i\in S_i$
for all $1 \leq i \leq n$, then $x_i\in N_i$ for all~$1 \leq i \leq n$.
\end{proposition}
\begin{proof}
Let $x\in N$ and write $x=x_{i_1}\dots x_{i_k}$ where
$ 1 \le i_1 < \dots < i_k \le i_{k+1}\deq n$ and
$x_ {i_j}\in S_{i_j}$, for $1 \leq j \leq k$. In order to prove our
claim we first show that $[x_{i_1},c]\in G$ for every non-trivial
rigid commutator $c$ of $G$. Since $G$ is generated by its own
non-trivial rigid commutators, it will follow that $x_{i_1}\in N$. As a
consequence, also $x_{i_2}\dots x_{i_k} \in N$. Thus, we may argue
by induction on $k$ to obtain that $x_{i_j}\in N$ for all
$1\leq j \leq k$.
Let $i$ be such that $c\in G\cap S_i$. Suppose first that
$i < i_1$. If $[c,x_{i_1}]= 1$, then $[c,x_{i_1}]\in G$. If
$[c,x_{i_1}]\ne 1$, then $[c,x]=[c,x_{i_1}]h\in G$, where
$[c,x_{i_1}]\in S_{i_1}$ and $h\in \prod_{t>i_1}S_t$. By
Corollary~\ref{cor:homogeneus_components2} we obtain that
$[c,x_{i_1}]\in G\cap S_{i_1}\le G$. If $i=i_1$, then trivially
$[c,x_{i_1}]=1\in G$. The last possibility is
$i_1 <\dots < i_m < i\le i_{m+1}$ for some $m\le k$. Suppose that
$[x_{i_1},c]\ne 1$. In this case
\begin{align*}
G\ni [x,c] &= [x_{i_1}\dots x_{i_k},c] = [x_{i_1},c]^{x_{i_2}\dots x_{i_k}} \cdot [x_{i_2}\dots x_{i_k},c] \\
&= \bigl([x_ {i_1},c]^{x_ {i_2}\dots x_ {i_m}}\bigr)^{x_ {i_{m+1}}\dots x_ {i_k}} \cdot [x_ {i_2}\dots x_ {i_k},c]\\
&=[x_ {i_1},c]^{x_ {i_2}\cdots x_ {i_m}} \cdot [[x_ {i_1},c]^{x_ {i_2}\cdots x_ {i_m}},x_ {i_{m+1}}\cdots x_ {i_k}] \cdot [x_ {i_2}\dots x_ {i_k},c].
\end{align*}
Let us consider the commutators
\begin{align*}
[x_ {i_1},c]^{x_ {i_2}\cdots x_ {i_m}}&=[x_ {i_1},c][[x_ {i_1},c], x_ {i_2}\cdots x_ {i_m}]=d_1\cdots d_t,\\
[[x_ {i_1},c]^{x_ {i_2}\cdots x_ {i_m}},x_ {i_{m+1}}\cdots x_ {i_k}]\ &= m_1\cdots m_r,\\
[x_ {i_2} \cdots x_ {i_m}\cdot x_ {i_{m+1}} \cdots x_ {i_k},c]&=f_1\cdots f_s \cdot l_1\cdots l_u,
\end{align*}
written as ordered product of distinct rigid commutators
\begin{equation*}
d_1,\dots, d_t, f_1,\dots, f_s \in
G\cap S_{i},
\end{equation*}
and
\begin{equation*}
m_1,\ldots, m_r, l_1,\dots, l_u \in G \cap
(S_{i+1}\ltimes \cdots \ltimes S_{n}).
\end{equation*}
Notice that
$\Set{d_1,\dots, d_t} \cap \Set{ f_1,\dots, f_s}=\emptyset$ since
the commutators $d_i$ are of the form $[X]$ for some set $X$ with
$i_1\in X$, whereas the commutators $f_j$ are of the form $[Y]$ for
some set $Y$ with $i_1\notin Y$. This yields
$[x_ {i_1},c]^{x_ {i_2}\cdots x_ {i_m}} \in G\cap S_i$ and so
$[x_ {i_1},c] \in G\cap S_i \le G$.
\end{proof}
\begin{lemma}\label{lem:normalizer_rigidc_components}
Suppose that $G$ is a saturated subgroup of $\Sigma_n$ normalized by
$T$. If $x_1,\dots, x_k\in S_j$ are distinct rigid commutators such
that $x=x_1\cdots x_k\in N$, then $x_i\in N$ for all
$1\leq i \leq k$.
\end{lemma}
\begin{proof}
Let $c_1,\ldots, c_h\in \mathcal R^*$ be such that
$G=\Span{c_1,\ldots, c_h}$ and let us write every $c_s$ and $x_t$ in
punctured form: $c_s=\puncture{m_s}{C_s}$ and
$x_t=\puncture{j}{X_t}$.
Suppose first that $m_s <j$, so that
\begin{equation}\label{eq:expansion}
[c_s,x]=\prod_{t=1}^k d_{s,t} \in G\cap S_j,
\end{equation}
where
$d_{s,t}\deq[c_s,x_t] = \puncture{j}{C_s\cup (X_t\setminus
\Set{m_s})} $. Notice that if the commutator $d_{s,t}$ appears
only once in the product, then, by
Corollary~\ref{cor:homogeneus_components}, $d_{s,t}\in G$. If
$C_s\cap X_t=\emptyset$ for all $1\leq t \leq k$, then all the
non-trivial $d_{s,t}$ appearing in the product are distinct and
hence they appear only once in the product, so that $d_{s,t}\in G$
for all $1 \leq t \leq k$. If $C_s\cap X_t\ne \emptyset$, then the
commutator $d_{s,t}$ may appear more than once in the product
displayed in Eq.~\eqref{eq:expansion}. Let $l\in C_s\cap X_t$ and
consider the commutator
$c_{s,l}=[c_s, t_l] = \puncture{m_s}{C_s\setminus\Set{l}} \in G$ as
$t_l=[l,\dots,1]\in T\le N_{\Sigma_n}(G)$. Notice that
\begin{equation*}[ c_{s,l},x_t] = \puncture{j}{(C_s\setminus
\Set{l})\cup (X_t\setminus \Set{m_s})}= \puncture{j}{C_s\cup
(X_t\setminus \Set{m_s})} =[c_{s},x_t]=d_{s,t}.
\end{equation*}
Let $C=C_s\setminus\Set{l}$. We have determined a new rigid
commutator $c=c_{s,l}=\puncture{m_s}{C}\in G$ such that
$\Size{C\cap X_t} < \Size{C_s\cap X_t}$, that
$\Size{C} < \Size{C_s}$ and that $ d_{s,t}=[c,x_t]$ appears in the
expansion of $[c,x]$. Using the same strategy, after a finite number
of steps, we obtain $c=\puncture{m_s}{C}\in G$ such that
$C\cap X_t=\emptyset$. If $ d_{s,t}=[c,x_t]=[c,x_{t_1}]=d_{s,t_1}$,
for some $t_1\ne t$, then $C\cap X_{t_1}\ne \emptyset$, since
otherwise $X_t=X_{t_1}$ and consequently $x_t=x_{t_1}$ with
$t\ne t_1$, contrary to the hypotheses. Thus we may proceed in the
same way with $d_{s,t_1}$. Since at each step the cardinality of $C$
is strictly decreasing, after a finite number
of steps we find a $c\in G$ and $x_{t_r}$ such that
$d_{s,t}=d_{s,t_1}=\dots = d_{s,t_r}$ appears only once in $[c,x]$
giving $d_{s,t}\in G$. This finally shows that $d_{s,t}\in G$
for all $1\leq t \leq k$.\newline
\newline
If $j=m_s$ then $x_i$ and $c_s$ commute for all $i$ and there is nothing to prove.\newline
\newline
We are left with the case when $m_s>j$. As above, we have
\begin{equation*}
[c_s,x]=\prod_{t=1}^k d_{s,t} \in G\cap S_j,
\end{equation*}
where
$d_{s,t}\deq[c_s,x_t] = \puncture{m_s}{(C_s \setminus \Set{j}) \cup
X_t} $. Reasoning as we did for $m_s<j$, we
obtain that $d_{s,t}\in G$ for all $1\leq t \leq k$.
In all the cases we have proved that $x_i\in N$ for all
$1\leq i \leq k$, which is our claim.
\end{proof}
As an easy consequence of
Proposition~\ref{prop:normalizer_homogeneous} and
Lemma~\ref{lem:normalizer_rigidc_components} we find the following
result.
\begin{theorem}\label{thm:normalizer_saturated}
The normalizer $N$ in $\Sigma_n$ of a saturated subgroup of
$\Sigma_n$ is also saturated, provided that $N$ contains $T$.
\end{theorem}
\begin{remark}\label{rem:normal_izer_closure}
Let $\mathcal{A}$ and $\mathcal{B}$ be two subsets of $\mathcal{R}$
such that
$\Set{t_1,\dots, t_n} \subseteq \mathcal{A} \subseteq \mathcal{B}$,
let $A=\Span{\mathcal{A}}$ and $B=\Span{\mathcal{B}}$ be the
corresponding saturated subgroups. It is easy to recognize that
$N_B(A)=\Span{b\in \mathcal{B} \mid [b,\mathcal{A}]\subseteq
\mathcal{A}\cup \Set{[\emptyset]}}$. Similarly, the normal
closure $A^B$ of $A$ in $B$ is the subgroup generated by the
intersection of all the subsets $\mathcal{C}$ of $\mathcal{R}$ such
that $\mathcal{A} \subseteq \mathcal{C} \subseteq\mathcal{B}$ and
$[\mathcal{C} ,\mathcal{B} ] \subseteq \mathcal{C} \cup
\Set{[\emptyset]}$. In particular, both the normalizer $N_B(A)$ and
the normal closure $A^B$ are saturated.
\end{remark}
\begin{remark}
The condition that $T$ is contained in the normalizer
$N=N_{\Sigma_n}(G)$ of a saturated subgroup $G$ cannot be removed
from the hypotheses of Theorem~\ref{thm:normalizer_saturated}. Indeed, if
$G=\Span{[n,\ldots,3]}$, then the product $[2]\cdot [2,1]$ is
contained in the centralizer of $G$, and hence also in the
normalizer $N$ of $G$, but neither of the two rigid commutators $[2]$
and $[2,1] \in T$ normalizes $G$. In particular, by
Corollary~\ref{cor:homogeneus_components2}, the subgroup $N$ cannot
be saturated.
\end{remark}
\begin{remark}\label{rmk_algo}
Another proof of Theorem~\ref{thm:normalizers_indices} can be
obtained by Theorem~\ref{thm:normalizer_saturated}. Indeed, it is not
difficult, but rather tedious, to check that
\begin{equation*}\mathcal{N}_{n}^{i}=\Set{c\in \mathcal{R}^* \mid
[c,\mathcal{N}_{n}^{i-1}]\subseteq \mathcal{N}_{n}^{i-1} \cup
\Set{[\emptyset]}}\end{equation*} for $0\le i \le n-2$.
The result then
follows by Proposition~\ref{prop:size_of_Ni}.
\end{remark}
From Theorems~\ref{thm:normalizers_indices}
and~\ref{thm:normalizer_saturated} and from Remark~\ref{rmk_algo} we
derive a straightforward corollary resulting in an algorithm whose
$\mathsf{GAP}$ implementation is publicly available at
\href{https://github.com/ngunivaq/normalizer-chain}{\textsf{GitHub}}\footnote{\
See \url{https://github.com/ngunivaq/normalizer-chain}}. This script
allows a significant speed-up in the computation of the normalizer $N$
of a saturated subgroup provided that $N$ contains $T$. We could
easily apply this script to compute our normalizer chain up to the
dimension $n=22$. For example, whereas the standard libraries required
one month on a cluster to compute the terms of the normalizer chain in
$\Sym(2^{10})$, our implementation of the rigid commutator machinery
gives the result in a few minutes, even on a standalone PC. With a
similar approach, we can also use rigid commutators to compute the
normal closure of a saturated subgroup. Some explicit calculations
are shown below in Section~\ref{sec:computation}. Let
$\mathcal{M}_n^i$ be the set of all the rigid commutators belonging to
$N_n^i$. From Theorem~\ref{thm:normalizer_saturated}, the subgroups
$N_n^i$ are saturated, hence $N_n^i = \Span{\mathcal{M}_n^i}$ for all
$i \geq 1$.
\begin{corollary}\label{algo}
The set $\mathcal{M}_n^i$ is the largest subset of $\mathcal{R}$
that normalizes $\mathcal{M}_n^{i-1}$, i.e.\
\begin{equation*}\mathcal{M}_n^i=
\Set{c\in \mathcal{R} \mid [c,\mathcal{M}_n^{i-1}] \subseteq
\mathcal{M}_n^{i-1}}.\end{equation*} Moreover,
$\mathcal{N}_n^i =\mathcal{M}_n^i \setminus \Set{[\emptyset]}$ for
$1\le i\le n-2$.
\end{corollary}
The construction of the terms of the normalizer chain is then reduced
to the determination of the sets $\mathcal{M}_n^i$, a task which turns
out to be way faster than computing the terms of the normalizer chains
as subgroups of $\Sigma_n$ via the \texttt{Normalizer} command
provided by \textsf{GAP}.
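Corollary~\ref{algo} translates directly into a fixed-point iteration. The following sketch is only an illustration (not the \textsf{GAP} script mentioned above): rigid commutators are modelled as base/puncture-set pairs and commutators are computed with the rule of Proposition~\ref{prop:punctured}. Since the terms of the chain are saturated, each has order $2^{\Size{\mathcal{M}_n^i}}$ when $\mathcal{M}_n^i$ is counted without $[\emptyset]$, so for $n=6$ the iteration reproduces the column $\log_2\Size{N_6^i}$ of Table~\ref{tab:n6}:

```python
from itertools import chain, combinations

n = 6

def subsets(s):
    return chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))

# all non-trivial rigid commutators, as (base a, punctures I)
R = [(a, frozenset(I)) for a in range(1, n + 1)
     for I in subsets(range(1, a))]

def comm(c, d):
    """Commutator rule for punctured commutators; None is trivial."""
    (a, I), (b, J) = c, d
    if a == b:
        return None
    U = I | J
    if min(a, b) not in U:
        return None
    return (max(a, b), U - {min(a, b)})

M = {c for c in R if len(c[1]) <= 1}  # M^0: the 21 generators of U_6
sizes = [len(M)]                      # len(M^i) = log2 |N_6^i|
while len(M) < len(R):
    bigger = {c for c in R
              if all(comm(c, d) is None or comm(c, d) in M for d in M)}
    if bigger == M:   # safety guard; the chain should grow strictly
        break
    M = bigger
    sizes.append(len(M))

# i = 0, ..., 4 match the relative indices 1, 2, 4, 7 given by the
# theorem, and the chain eventually exhausts all 2^6 - 1 = 63 commutators
assert sizes[:5] == [21, 22, 24, 28, 35] and sizes[-1] == 63
```

The intermediate values of \texttt{sizes} can be compared entry by entry with the second-to-last column of Table~\ref{tab:n6}.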
\section{A computational supplement}\label{sec:computation}
In this section we show an explicit construction of the first four
groups in the normalizer chain when $n=6$, i.e.\ in $\Sym(64)$. Let us
start with determining the generators of $T$ in terms of rigid
commutators:
\begin{eqnarray*}
t_1 =& [1] &= (1, 33)(2, 34)(3, 35) \ldots (30, 62)(31, 63)(32, 64),\\
t_2 =& [2,1] &= (1, 17)(2, 18)(3, 19) \ldots (46, 62)(47, 63)(48, 64),\\
t_3 = &[3,2,1] &= (1, 9)(2, 10)(3, 11) \ldots (54, 62)(55, 63)(56, 64), \\
t_4 = &[4,3,2,1] &=(1, 5)(2, 6)(3, 7) \ldots (58, 62)(59, 63)(60, 64), \\
t_5 = &[5,4,3,2,1] &= (1, 3)(2, 4)(5, 7) \ldots (58, 60)(61, 63)(62, 64),\\
t_6 =& [6,5,4,3,2,1] &= (1, 2)(3, 4)(5, 6) \ldots (59, 60)(61, 62)(63, 64).
\end{eqnarray*}
We have that $T = \Span{t_1, t_2, \ldots, t_6}$ and, from
Proposition~\ref{prop:U_as_normalizer}, its normalizer in $\Sigma_n$
is
$N_6^0 = U_6=\Span{\mathcal U_6} = \Span{T,u_{ij} \mid 1 \le j < i \le
6}$. Thus the generators of $N^0_6$, besides those of $T$, are
\begin{eqnarray*}
\puncture{6}{5}, \puncture{6}{4}, \puncture{6}{3}, \puncture{6}{2}, \puncture{6}{1}, \\
\puncture{5}{4}, \puncture{5}{3}, \puncture{5}{2}, \puncture{5}{1}, \\
\puncture{4}{3}, \puncture{4}{2}, \puncture{4}{1}, \\
\puncture{3}{2}, \puncture{3}{1}, \\
\puncture{2}{1},
\end{eqnarray*}
consequently $\Size{N_6^0} = 2^{21}$. Now, in accordance with
Eq.~\eqref{def:Ni} and Theorem~\ref{thm:normalizers_indices}, the
normalizer $N_6^1$ is generated by the rigid commutators previously
listed and by $\eta_6$, the only element of $\mathcal{W}_{6,3}$ (see
Eq.~\eqref{eq:C_lk}). The commutator $\eta_6$ is the punctured rigid
commutator based at $6$ and missing the integers $1$ and $2$, i.e.
\begin{equation}\label{eq:N1}
\eta_6=[6,5,4,3] = \puncture{6}{2,1},
\end{equation}
where $1$ and $2$ indeed represent the sole partition of $3$ into at
least two distinct parts. From this,
$\log_2\Size{N^{1}_6 : N^{0}_6 }= 1 = a_3$. Again from
Eq.~\eqref{def:Ni} and Theorem~\ref{thm:normalizers_indices}, the
normalizer $N_6^2$ is generated, along with the elements already
mentioned, by the rigid commutators in $\mathcal{W}_{5,3}$ and
$\mathcal{W}_{6,4}$, i.e.\
\begin{eqnarray}
\label{def_N2_1} [5,4,3] &=& \puncture{5}{2,1},\\[1pt]
\label{def_N2_2} [6,5,4,2]&=&\puncture{6}{3,1}.
\end{eqnarray}
The commutator of Eq.~\eqref{def_N2_1}, which belongs to
$\mathcal{W}_{5,3}$, is the punctured rigid commutator based at $5$
and missing the integers $1$ and $2$. The commutator of
Eq.~\eqref{def_N2_2} instead, which belongs to $\mathcal{W}_{6,4}$, is
based at $6$ and misses the integers $1$ and $3$, composing the sole
partition of $4$ into at least two distinct parts. Notice that
$[5,4,3] = [\cancel{6},5,4,3]$. Indeed, as discussed in
Fact~\ref{rem:reduction_for_N}, the commutator of
Eq.~\eqref{def_N2_1} is obtained by lifting the one of
Eq.~\eqref{eq:N1}, i.e.\ by removing $6$, the left-most element of
$\eta_6$. We have $\log_2\Size{N^{2}_6 : N^{1}_6 }= 2 = a_4$.
Similarly, $N_6^3$ is generated by adding the new rigid commutators
\begin{eqnarray}
\label{def_N3_1} [\cancel{5},4,3] = & [4,3] &= \puncture{4}{2,1},\\[1pt]
\label{def_N3_2} [\cancel{6},5,4,2] = & [5,4,2] &=\puncture{5}{3,1},\\[1pt]
&\label{def_N3_3} [6,5,3,2] &= \puncture{6}{4,1},\\[1pt]
&\label{def_N3_4} [6,5,4,1]&=\puncture{6}{3,2},
\end{eqnarray}
where the commutators of Eq.~\eqref{def_N3_1} and Eq.~\eqref{def_N3_2}
are respectively obtained by lifting those of Eq.~\eqref{def_N2_1} and
Eq.~\eqref{def_N2_2}, and the commutators of Eq.~\eqref{def_N3_3} and
Eq.~\eqref{def_N3_4} belong to $\mathcal{W}_{6,5}$, respectively
corresponding to the partitions $4+1$ and $3+2$ of $5$. At this
stage, we have that $\log_2\Size{N^{3}_6 : N^{2}_6 }= 4 = a_5$.
Ultimately, the commutators
\begin{eqnarray*}
[\cancel{4},3] = & [3] &= \puncture{3}{2,1},\\[1pt]
[\cancel{5},4,2] = & [4,2] &=\puncture{4}{3,1},\\[1pt]
[\cancel{6},5,3,2] = & [5,3,2] &= \puncture{5}{4,1},\\[1pt]
[\cancel{6},5,4,1] = & [5,4,1]&=\puncture{5}{3,2},\\[1pt]
&[6,4,3,2]&=\puncture{6}{5,1},\\[1pt]
&[6,5,3,1]&=\puncture{6}{4,2},\\[1pt]
&[6,5,4]&=\puncture{6}{3,2,1}
\end{eqnarray*}
complete the set of rigid generators of $N_6^4$, and
$\log_2\Size{N^{4}_6 : N^{3}_6 }= 7 = a_6$.\newline
\newline
Using Corollary~\ref{algo}, we can find a saturated set of rigid
generators for all the elements of the chain. Notice that for
$i > 5$, the sequence $\log_2\Size{N^{i}_6 : N^{i-1}_6 }$ does not fit
the pattern of the sequence $\{a_j\}$. Although we do not have a
general formula to calculate the values of the relative indices
between two consecutive terms in the normalizer chain, they can be
explicitly determined by the algorithm in
\href{https://github.com/ngunivaq/normalizer-chain}{\textsf{GitHub}}.
Computational results are summarized in Table~\ref{tab:n6}, where we
list all the relative indices of the normalizer chain. In the second
column, the logarithms of the sizes of the intersections of each term
with each of the subgroups $S_6, \ldots, S_1$ are displayed.
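Two consistency checks on Table~\ref{tab:n6} are immediate: the relative indices are the first differences of the column $\log_2\Size{N_6^i}$, and in every row of the table the listed intersection dimensions sum to $\log_2\Size{N_6^i}$. A short illustrative script, using the table data verbatim (first rows shown for the second check), verifies both:

```python
# First rows of Table (n = 6): (i, dims of N_6^i with S_j for j = 6,...,1, log2|N_6^i|)
rows = [
    (0, [6, 5, 4, 3, 2, 1], 21), (1, [7, 5, 4, 3, 2, 1], 22),
    (2, [8, 6, 4, 3, 2, 1], 24), (3, [10, 7, 5, 3, 2, 1], 28),
    (4, [13, 9, 6, 4, 2, 1], 35), (5, [14, 10, 6, 4, 2, 1], 37),
]
# every row's intersection dimensions add up to log2 of the group order
assert all(sum(dims) == size for _, dims, size in rows)

# the last column of the table is the first difference of log2|N_6^i|
sizes = [21, 22, 24, 28, 35, 37, 41, 45, 46, 47, 49, 51,
         53, 55, 56, 57, 58, 59, 60, 61, 62, 63]
rel = [b - a for a, b in zip(sizes, sizes[1:])]
print(rel)  # [1, 2, 4, 7, 2, 4, 4, 1, 1, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1]
```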
\begin{table}[hptb]
\begin{tabular}{c||c|c|c}
$i$ & $
\begin{matrix}
\dim(N_6^i\cap S_j) \\
j=6,5,4,3,2,1
\end{matrix}
$ & $\log_2\Size{N_6^i}$ & $\log_2\Size{N_6^i : N_6^{i-1}}$ \\
\hline\hline $0$ & 6, 5, 4, 3, 2, 1 & $21 $ & $15$ \\
\hline $1$ & 7, 5, 4, 3, 2, 1 & $22 $ & $1$ \\
\hline $2$ & 8, 6, 4, 3, 2, 1 & $24 $ & $2$ \\
\hline $3$ & 10, 7, 5, 3, 2, 1 & $28 $ & $4$ \\
\hline $4$ & 13, 9, 6, 4, 2, 1 & $35 $ & $7$ \\
\hline $5$ & 14, 10, 6, 4, 2, 1 & $37 $ & $2$ \\
\hline $6$ & 16, 11, 7, 4, 2, 1 & $41 $ & $4$ \\
\hline $7$ & 18, 12, 8, 4, 2, 1 & $45 $ & $4$ \\
\hline $8$ & 19, 12, 8, 4, 2, 1 & $46 $ & $1$ \\
\hline $9$ & 20, 12, 8, 4, 2, 1 & $47 $ & $1$ \\
\hline $10$ & 21, 13, 8, 4, 2, 1 & $49 $ & $2$ \\
\hline $11$ & 22, 14, 8, 4, 2, 1 & $51 $ & $2$ \\
\hline $12$ & 23, 15, 8, 4, 2, 1 & $53 $ & $2$ \\
\hline $13$ & 24, 16, 8, 4, 2, 1 & $55 $ & $2$ \\
\hline $14$ & 25, 16, 8, 4, 2, 1 & $56 $ & $1$ \\
\hline $15$ & 26, 16, 8, 4, 2, 1 & $57 $ & $1$ \\
\hline $16$ & 27, 16, 8, 4, 2, 1 & $58 $ & $1$ \\
\hline $17$ & 28, 16, 8, 4, 2, 1 & $59 $ & $1$ \\
\hline $18$ & 29, 16, 8, 4, 2, 1 & $60 $ & $1$ \\
\hline $19$ & 30, 16, 8, 4, 2, 1 & $61 $ & $1$ \\
\hline $20$ & 31, 16, 8, 4, 2, 1 & $62 $ & $1$ \\
\hline $21$ & 32, 16, 8, 4, 2, 1 & $63 $ & $1$ \\
\end{tabular}
\bigskip
\caption{The normalizer chain for $n=6$}
\label{tab:n6}
\end{table}
\section{Problems for future research}\label{sec:concl}
We conclude this work by highlighting some further properties and structures of the set
$\mathcal R$ of rigid commutators and providing some hints for future
research.
\subsection*{Algebras of rigid commutators}
The operation of commutation in $\mathcal{R}$ is commutative and
$[\emptyset]$ represents the \emph{zero} element. Moreover, for every
$x,y \in \mathcal{R}$ the following identity is satisfied:
\begin{equation*}
[[x,x,y],x] = [[x,x],[y,x]] \qquad \text{(Jordan identity)}.
\end{equation*}
Let $\F$ be any field of characteristic $2$ and let $\mathfrak{r}$ be
the vector space over $\F$ having the set $\mathcal{R}^*$ of the
non-trivial rigid commutators as a basis. The space $\mathfrak{r}$ is
endowed with a natural structure of an algebra. The product
$x\star y$ of two rigid commutators $x,y \in \mathcal R$ is defined as
\begin{equation*}
x\star y \deq
\begin{cases}
[x,y] & \text{if $[x,y]\ne [\emptyset]$}\\
0 &\text{otherwise.}
\end{cases}
\end{equation*}
This operation is then extended to the whole $\mathfrak{r}$ by
bilinearity and turns $\mathfrak{r}$ into a \emph{Jordan algebra}: the
product is commutative, $x\star x=0$ for all $x \in \mathfrak{r}$, and
hence the Jordan identity above holds trivially.
Moreover, if $\mathcal{H}$ is a saturated subset of $\mathcal{R}^*$,
then, on the one hand the group $H=\Span{\mathcal{H}}$ is a saturated
subgroup of $\Sigma_n$ and, on the other hand, the $\F$-linear span
$\mathfrak{h}$ of $\mathcal{H}$ is a subalgebra of $\mathfrak{r}$. The
property
$[\mathcal{R},\mathcal{H}] \subseteq \mathcal{H}\cup
\Set{[\emptyset]}$ is a necessary and sufficient condition for $H$ to
be a normal subgroup of $\Sigma_n$ and for $\mathfrak{h}$ to be an
ideal of $\mathfrak{r}$. We point out that the fact that
$\mathcal{R}$ is closed under commutation is crucial to check the
previous statement. If $c$ is the nilpotency class of $\Sigma_n$,
then the product of $c+1$ elements of $\mathfrak{r}$ is always zero,
so that $\mathfrak{r}$ is nilpotent. The study of the properties and
the representations of this algebra seems to be a problem of
independent interest, in connection with the study of the saturated
subgroups of $\Sigma_n$.
\subsection*{Again on the normalizer chain}
We have obtained from Theorem~\ref{thm:normalizers_indices} an
explicit description of the non-trivial rigid generators of the $i$-th
term of the normalizer chain when $1 \leq i \leq n-2$, i.e.\ the set
$ \mathcal{N}_n^i$. We have seen that $\mathcal{N}_n^i$ has a nice
description by way of Eq.~\eqref{eq:C_lk} and Eq.~\eqref{def:Ni},
i.e.\ it is generated by some rigid commutators either belonging to
$\mathcal{U}_n$ or having a punctured form corresponding to suitable
partitions into at least two distinct parts.
Although we can efficiently compute all the normalizers in the chain,
as described in the lines following Corollary~\ref{algo}, it is an
interesting problem to find a similar combinatorial formula for the
generating set $\mathcal{M}_n^i$ of $N^i_n$ when $i>n-2$. Moreover,
as already mentioned in Section~\ref{sec:computation}, the values of
the sequence $\log_2\lvert N_n^i : N_n^{i-1}\rvert$ do not seem to
belong to any special known pattern when $i > n-2$. Table~\ref{tab:4}
contains the values of $\log_2\lvert N_n^i : N_n^{i-1}\rvert$ for
$1\le i\le 14$ and $3 \leq n \leq 15$. It is an open problem to
determine the general behavior of the sequence.
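For $i \leq n-2$, consistently with the description of $\mathcal{N}_n^i$ via partitions into at least two distinct parts recalled above, the highlighted diagonal $1, 2, 4, 7, 11, \ldots$ of Table~\ref{tab:4} can be regenerated as partial sums of the number of such partitions. The following script is an illustrative check (the indexing of $a_j$ is inferred from the $n=6$ example of the previous section):

```python
def distinct_partitions(m, max_part=None):
    """Yield partitions of m into strictly decreasing (hence distinct) parts."""
    if max_part is None:
        max_part = m
    if m == 0:
        yield ()
        return
    for k in range(min(m, max_part), 0, -1):
        for rest in distinct_partitions(m - k, k - 1):
            yield (k,) + rest

def a_sequence(jmax):
    """a_j = total number of partitions of m <= j into at least two distinct parts."""
    seq, total = [], 0
    for m in range(1, jmax + 1):
        total += sum(1 for p in distinct_partitions(m) if len(p) >= 2)
        seq.append(total)
    return seq

print(a_sequence(15))
# [0, 0, 1, 2, 4, 7, 11, 16, 23, 32, 43, 57, 74, 95, 121]
```

The values $a_3,\ldots,a_{15}$ reproduce the highlighted entries of Table~\ref{tab:4}.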
\begin{table}[phbt]
\begin{tabular}{c||c|c|c|c|c|c|c|c|c|c|c|c|c|c}
$n$ & \multicolumn{14}{c}{ $\vphantom{\Big|} \log_2\lvert N_n^i : N_n^{i-1}\rvert$ for $1 \leq i \leq 14$} \\ \hline \hline
3 & \hl{1} & 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0 \\ \hline
4 & \hl{1} & \hl{2}& 1& 1& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0 \\ \hline
5 & \hl{1}& \hl{2}& \hl{4}& 1& 2& 2& 1& 1& 1& 1& 0& 0& 0& 0 \\ \hline
6 & \hl{1}& \hl{2}& \hl{4}& \hl{7}& 2& 4& 4& 1& 1& 2& 2& 2& 2& 1 \\ \hline
7 & \hl{1}& \hl{2}& \hl{4}& \hl{7}& \hl{11}& 4& 7& 3& 4& 2& 2& 4& 4& 4 \\ \hline
8 & \hl{1}& \hl{2}& \hl{4}& \hl{7}& \hl{11}& \hl{16}& 7& 5& 6& 2& 6& 6& 3& 3 \\ \hline
9 & \hl{1}& \hl{2}& \hl{4}& \hl{7}& \hl{11}& \hl{16}& \hl{23}& 4& 9& 4& 11& 4& 12& 9 \\ \hline
10 & \hl{1}& \hl{2}& \hl{4}& \hl{7}& \hl{11}& \hl{16}& \hl{23}& \hl{32}& 4& 14& 5& 20& 7& 19 \\ \hline
11 & \hl{1}& \hl{2}& \hl{4}& \hl{7}& \hl{11}& \hl{16}& \hl{23}& \hl{32}& \hl{43}& 5& 22& 7& 32& 4 \\ \hline
12 & \hl{1}& \hl{2}& \hl{4}& \hl{7}& \hl{11}& \hl{16}& \hl{23}& \hl{32}& \hl{43}& \hl{57}& 7& 32& 12& 43 \\ \hline
13 & \hl{1}& \hl{2}& \hl{4}& \hl{7}& \hl{11}& \hl{16}& \hl{23}& \hl{32}& \hl{43}& \hl{57}& \hl{74}& 12& 42& 18 \\ \hline
14 & \hl{1}& \hl{2}& \hl{4}& \hl{7}& \hl{11}& \hl{16}& \hl{23}& \hl{32}& \hl{43}& \hl{57}& \hl{74}&\hl{95}& 8& 24 \\ \hline
15 & \hl{1}& \hl{2}& \hl{4}& \hl{7}& \hl{11}& \hl{16}& \hl{23}& \hl{32}& \hl{43}& \hl{57}& \hl{74}& \hl{95}& \hl{121}& 8 \\
\end{tabular}
\bigskip
\caption{\footnotesize Values of
$\log_2\lvert N_n^i : N_n^{i-1}\rvert$ for small $i$ and $n$. For
$i\le n-2$ these numbers do not depend on $n$ and in the table are
represented by highlighted digits.}
\label{tab:4}
\end{table}
\subsection*{An \emph{odd} generalization}
It appears natural to ask whether a similar rigid commutator machinery
can be developed in a Sylow $p$-subgroup of the symmetric group
$\Sym(p^n)$ when $p$ is an odd prime. This appears to be an entirely
different problem in terms of techniques and results: for example, a
rigid commutator could contain repetitions. Although such a machinery
might have a weaker cryptographic application, it may turn out to be
interesting from a computational point of view.
\section*{Acknowledgment}
We thank the staff of the \emph{Department of Information
Engineering, Computer Science and Mathematics} at the University of
L'Aquila for helping us in managing the HPC cluster CALIBAN, which we
extensively used to run our simulations
(\url{caliband.disim.univaq.it}).
We are also grateful to the
\emph{Istituto Nazionale d'Alta Matematica - F.\ Severi} for
regularly hosting our research seminar \emph{Gruppi al Centro} in
which this paper was conceived.
\bibliographystyle{amsalpha}
\section{Introduction}
The storage of light in atomic gases has its roots in photon echoes, where it
was recognized that an intense $\pi$ pulse could reverse the free induction
decay produced by an initial pulse with arbitrary area. This inversion leads
to the emission of an echo of the original pulse
\cite{hahn1950spin,kurnit1964observation,mossberg1982time}. The problem with
the original work was that it was conceived for a two-level system for which
the transverse relaxation time is quite short, thus significantly limiting
storage time. In addition, the shape of the echo is not the same as that of
the stored pulse.
All these issues have been addressed throughout the years and many techniques
have been proposed, most of them based on the $\Lambda$ configuration (see
Fig.~\ref{fig:lambda}). In this case, one can make the coherence between the
two ground states long lived, thus increasing the storage time. Most of the
proposals are based on the phenomenon of electromagnetically induced
transparency (EIT)
\cite{harris1997electromagnetically,*boller1991observation}, where the signal
(or probe) pulse is slowed down by turning off the intense control field until
it is stored in the ground states and subsequently restored by turning the
control back on
\cite{fleischhauer2000dark,matsko2001nonadiabatic,dey2003storage}.
In order to avoid any significant loss due to spontaneous decay from the
higher level, schemes based on stimulated Raman transitions have been
proposed. Due to the high detuning, the higher atomic level can be
adiabatically eliminated, which leads to simpler equations
\cite{nunn2007mapping,*reim2011single}.
Extensions to the photon echo technique have also been worked out, where the
signal pulse is mapped into a spin wave of the ground states by means of an
intense $\pi$-control pulse and then retrieved by another counter-propagating
strong-control pulse \cite{moiseev2001complete}.
\begin{figure}
\includegraphics[scale=1]{fig1}
\caption{\label{fig:lambda} (Color online) Three-level
atom in $\Lambda$-configuration
interacting with two fields in two-photon resonance via the common detuning
$\Delta$, with spontaneous emission $\Gamma$ from the excited state.}
\end{figure}
All these techniques address the same problem: How can we store a given signal
field into an atomic medium to then retrieve it? We call this an optical
memory even if the retrieved pulse is not the same as the original one: it
might have a different shape, time duration, or intensity. Of course, this
might not matter if we are only looking at whether a pulse is there or not.
Nonetheless, the ideal scenario is the one in which the retrieved pulse is
identical to the one stored (i.e.,~has the same shape and parameters as the
original signal pulse). In any case, when one talks about optimizing these
types of memories, a figure of merit must be defined. In the series of papers
\cite{gorshkov2007universal,*gorshkov2007photon,*gorshkov2007photon2,
*gorshkov2008photon,gorshkov2007photon3}, Gorshkov \emph{et al.} addressed
this issue by defining the efficiency as the ratio of the retrieved pulse
intensity to that of the original signal pulse. With this figure of merit,
the optimal control field to store and retrieve a given signal field is
found by maximizing the efficiency. This work, as well as that by Nunn
\emph{et al.} \cite{nunn2007mapping}, shows that, in general, the optimal
control field has a temporal shape which differs from the usual on/off
variation of the cw control used in EIT-based schemes.
Most works on pulse storage deal with a quantum (low-intensity) signal pulse,
which allows a treatment up to first order in the signal field. There is also
a recurring assumption of adiabaticity for the control pulse (this point has
been addressed in
\cite{matsko2001nonadiabatic,shakhmuratov2007instantaneous}), and its spatial
evolution is usually left out. These approximations are understandable, as
the full set is composed of eight nonlinear partial differential equations
which resist analytic solutions except in special cases
\cite{grobe1994formation,park1998matched,clader2007two,groves2013jaynes,
gutierrez2015multi}.
Therefore numerical calculations are a recurring tool to study the full
evolution
\cite{matsko2001nonadiabatic,dey2003storage,gutierrez2015manipulation}.
In the present work, we move away from these assumptions and consider the full
nonlinear equations for intense, short pulses (non-quantum, non-adiabatic
fields). This is the regime of self-induced transparency (SIT), where, in a
two-level system, a resonant, intense $2\pi$ pulse propagates without
absorption while preserving its shape \cite{mccall1967self,*mccall1969self}.
An SIT pulse can be understood as a pulse that is constantly encoding and
retrieving itself, resulting in a reduced group velocity.
The storage and retrieval of intense broadband pulses in a $\Lambda$ system
has been shown to work for cold atoms in
\cite{groves2013jaynes,gutierrez2015manipulation}. This was further extended
to add more control to the information stored and the storage of multiple
pulses \cite{gutierrez2015multi}. Additionally, within the same framework, the
possibility for two-channel memory and vector-pulse storage in a tripodal
system has been discussed in \cite{gutierrez2016vector}. In these works, the
storage of the intense signal pulse follows a scheme similar to the one
proposed in \cite{grigoryan2009short} (a weak control pulse and an intense
signal pulse), although no retrieval procedure is discussed there.
In the present work we treat the storage and retrieval of intense short pulses
in the presence of Doppler broadening and detuning (this has been treated for
the case of low-intensity light
\cite{gorshkov2007photon3,sangouard2007analysis}). For this, we solve the
evolution equations by means of the inverse scattering transform
\cite{gardner1967method,*ablowitz1973nonlinear,*lamb1980elements,
chakravarty2014inverse} for which we present an operational summary based on
the work presented by Chakravarty \emph{et al.} \cite{chakravarty2014inverse}
for the particular case of a $\Lambda$ system. The first- and second-order
solutions studied show the storage and retrieval of a signal pulse while
pointing to the key parameters of this scheme, such as the pulse areas and
durations. We then perform a numerical exploration including the effect of
spontaneous emission and show that the scheme presented remains valid.
\section{Physical Model}
As was mentioned, we are looking at the storage of pulses in the
characteristic regime of SIT: short (of the order of $1\,\mathrm{ns}$) and
intense (areas around $2\pi$) pulses. Therefore, we can omit (for the time
being) the effects
of spontaneous decay. We assume the atomic gas to be sufficiently dilute so as
to safely neglect any loss of coherence due to collisions. Additionally, we
consider that the fields are close to resonance thus validating the use of the
rotating-wave approximation. Writing the fields in carrier-envelope form,
\begin{align}
\boldsymbol{E}(x,t)=&\boldsymbol{\mathcal E}_{s}(x,t)e^{-i(k_{s}x
-\omega_{s}t)}\nonumber\\
&+\boldsymbol{\mathcal E}_{c}(x,t)e^{-i(k_{c}x-\omega_{c}t)}+c.c,
\label{carrier}
\end{align}
we have that the Hamiltonian for the $\Lambda$-configuration (see Fig.~
\ref{fig:lambda}) takes the form
\begin{equation}
\boldsymbol H=-\frac{\hbar}{2} \left(
\begin{array}{ccc}
0&0&\Omega_{s}\\
0&0&\Omega_{c}\\
\Omega_{s}^*&\Omega_{c}^*&-2\Delta
\end{array}
\right).
\label{hrwa}
\end{equation}
Here, we defined the Rabi frequencies for the signal and control fields, $
\Omega_{s,c}(x,t)=2\boldsymbol{d}_{s,c}\cdot \boldsymbol{\mathcal E}_{s,c}(x, t )/\hbar$
($d_{s,c}$ denotes the component of the dipole moment for each transition),
and the detuning $\Delta=\omega_{31}-\omega_{s}=\omega_{32}-\omega_{c}$ for
the two-photon resonance case.
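Note that, for any field values, this Hamiltonian possesses the zero-eigenvalue dark state $\ket{D}\propto \Omega_c^*\ket 1-\Omega_s^*\ket 2$, a superposition of the two ground states decoupled from the excited level; this state underlies the EIT-based storage schemes mentioned in the introduction. A minimal numerical check, with arbitrary illustrative values:

```python
import numpy as np

hbar = 1.0
Om_s, Om_c, Delta = 0.6*np.exp(0.4j), 1.1*np.exp(-0.9j), 0.8  # illustrative values

# RWA Hamiltonian of Eq. (hrwa)
H = -hbar/2 * np.array([[0, 0, Om_s],
                        [0, 0, Om_c],
                        [np.conj(Om_s), np.conj(Om_c), -2*Delta]])

assert np.allclose(H, H.conj().T)          # Hermitian, as it must be

# dark state: superposition of the two ground states, decoupled from |3>
D = np.array([np.conj(Om_c), -np.conj(Om_s), 0.0])
D /= np.linalg.norm(D)
print(np.max(np.abs(H @ D)))               # ~0, i.e. H|D> = 0
```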
We are interested in studying the joint evolution of the atomic system and the
two fields. Hence, the equations that determine the interaction are the von
Neumann equation for the atomic medium,
\begin{equation}
i\hbar \pd{\boldsymbol \rho}{t}=[\boldsymbol H,\boldsymbol \rho],
\label{neumann}
\end{equation}
and the wave equation in the slowly varying envelope approximation for the
fields,
\begin{subequations}
\label{meqs}
\begin{align}
\left(\pd{ }{x}+\frac{1}{c}\pd{}{t}\right)\Omega_{s}&=-i\mu_{s} \langle
\rho_{13}\rangle,\\
\left(\pd{ }{x}+\frac{1}{c}\pd{}{t}\right)\Omega_{c}&=-i\mu_{c} \langle
\rho_{23} \rangle.
\end{align}
\end{subequations}
Here we defined the atom-field coupling parameters $\mu_{s,c}=N\omega_{s,c}|
d_{s,c}|^2/\hbar \epsilon_0c$ and the brackets denote the Doppler averaging,
\begin{align}
\langle \rho \rangle = \int \rho(\Delta) F(\Delta) d\Delta,
\end{align}
over the frequency distribution
\begin{align}
F(\Delta)= \frac{T_2^\star}{\sqrt{2 \pi}}e^{-(\Delta-\bar\Delta)^2T_2^{\star
2}/2}
\end{align}
where $T_2^\star$ is the Doppler lifetime and $\bar\Delta$ is the mean one-
photon detuning.
This set of partial differential equations is referred to as the coupled
Maxwell-Bloch (CMB) equations.
By further assuming equal atom-field coupling parameters,
$\mu_{s}=\mu_{c}\equiv\mu$, the CMB equations become integrable and can be
written in the following form:
\begin{subequations}
\label{eq:mb}
\begin{equation}
\pd{\boldsymbol \rho}{T}=[\boldsymbol \Omega -i\frac{\Delta}{2} \boldsymbol J,\boldsymbol \rho]
\end{equation}
and
\begin{equation}
\pd{\boldsymbol \Omega}{Z}=\frac{\mu}{4}[\langle\boldsymbol \rho \rangle,\boldsymbol J].
\end{equation}
\end{subequations}
We introduced the traveling-wave coordinates $T=t-x/c$ ($c$ being the speed of
light in vacuum) and $Z=x$, and defined the matrices
\begin{align}
\boldsymbol J=\left(
\begin{array}{ccc}
-1&0&0\\
0&-1&0\\
0&0&1
\end{array}
\right), \quad
\boldsymbol \Omega=\frac{i}{2} \left(
\begin{array}{ccc}
0&0&\Omega_{s}\\
0&0&\Omega_{c}\\
\Omega_{s}^*&\Omega_{c}^*&0
\end{array}
\right).
\end{align}
The integrability of the system allows the use of standard methods to find
analytic solutions. Some of these methods are inverse scattering
\cite{gardner1967method,*ablowitz1973nonlinear,*lamb1980elements,
chakravarty2014inverse}, the
B\"acklund transformation
\cite{lamb1971analytical,*miura1976backlund,*park1998field} and
the Darboux transformation \cite{gu2006darboux,*cieslinski2009algebraic}.
\section{Inverse Scattering}
The method of inverse scattering allows the incorporation of Doppler
broadening in a natural way for multi-pulse solutions, so we will use it to
compute analytical solutions to the CMB equations instead of the B\"acklund or
Darboux transformations used in previous works
\cite{park1998matched,clader2007two,clader2008two,groves2013jaynes,
gutierrez2015multi}. In what follows, we give a pragmatic presentation of the
method without going into a careful derivation of the results. We refer the
interested reader to the work by Chakravarty \emph{et al.~}
\cite{chakravarty2014inverse} for a complete presentation of the method for
the case of a $\Lambda$ system.
Since the CMB equations [Eqs.~\eqref{eq:mb}] are integrable, they can be
expressed as the integrability condition of the linear system
\begin{subequations}
\begin{align}
\pd{\boldsymbol\varphi}{T}&=\left(-\frac{i\lambda}{2} \boldsymbol J+\boldsymbol \Omega\right)\boldsymbol
\varphi \label{eq:scat1} \\
\pd{\boldsymbol\varphi}{Z}&=\frac{i\mu}{2}\left\langle\frac{\boldsymbol \rho}{\lambda-\Delta}
\right\rangle \boldsymbol\varphi \label{eq:scat2}.
\end{align}
\end{subequations}
That is, we recover Eqs.~\eqref{eq:mb} by collecting terms with the same
$\lambda$-dependence from the compatibility condition
$\partial_Z\partial_T\boldsymbol\varphi= \partial_T\partial_Z\boldsymbol\varphi$. The
parameter $\lambda$ is referred to as the spectral parameter.
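The compatibility condition can be checked directly: for a single velocity class, inserting the right-hand sides of Eqs.~\eqref{eq:mb} into $\partial_Z \boldsymbol U - \partial_T \boldsymbol V + [\boldsymbol U,\boldsymbol V] = 0$, with $\boldsymbol U = -\tfrac{i\lambda}{2}\boldsymbol J + \boldsymbol\Omega$ and $\boldsymbol V = \tfrac{i\mu}{2}\boldsymbol\rho/(\lambda-\Delta)$, makes the residual vanish identically in $\lambda$. A numerical sketch with random matrices (all parameter values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, Delta, lam = 1.3, 0.4, 0.7 + 0.2j       # illustrative parameters

J = np.diag([-1.0, -1.0, 1.0]).astype(complex)

# field matrix with the Lambda structure
Om_s = rng.normal() + 1j*rng.normal()
Om_c = rng.normal() + 1j*rng.normal()
Omega = 0.5j*np.array([[0, 0, Om_s],
                       [0, 0, Om_c],
                       [np.conj(Om_s), np.conj(Om_c), 0]])

# random density matrix (Hermitian, positive, unit trace)
A = rng.normal(size=(3, 3)) + 1j*rng.normal(size=(3, 3))
rho = A @ A.conj().T
rho /= np.trace(rho)

comm = lambda X, Y: X @ Y - Y @ X

# right-hand sides of the CMB equations (single Delta, so <rho> = rho)
drho_dT   = comm(Omega - 0.5j*Delta*J, rho)
dOmega_dZ = (mu/4)*comm(rho, J)

# zero-curvature residual dU/dZ - dV/dT + [U, V]
U = -0.5j*lam*J + Omega
V = 0.5j*mu*rho/(lam - Delta)
residual = dOmega_dZ - 0.5j*mu*drho_dT/(lam - Delta) + comm(U, V)
print(np.max(np.abs(residual)))             # ~1e-16
```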
We assume the boundary conditions $\boldsymbol \Omega \rightarrow 0$ as $T \rightarrow
\pm \infty$ for all $Z$, meaning the fields are composed of well-defined
pulses, and fix the value of the density matrix at $T \rightarrow - \infty$
which we denote by $\boldsymbol \rho^{(0)}$.
This, in turn, will define the final state of the atomic system $\boldsymbol \rho^{f}=
\boldsymbol \rho(T \rightarrow \infty)$.
By integrating Eq.~\eqref{eq:scat1} we can define two solutions
\begin{subequations}
\label{eq:phisi}
\begin{align}
\boldsymbol\Phi(T,\lambda)&=e^{-i\frac{\lambda}{2}\boldsymbol J T}+\int_{-\infty}^T
e^{-i\frac{\lambda}{2}\boldsymbol J(T-T')}\,\boldsymbol \Omega(T')\,\boldsymbol\Phi(T',\lambda)\,dT', \\
\boldsymbol\Psi(T,\lambda)&=e^{-i\frac{\lambda}{2}\boldsymbol J T}-\int_{T}^{\infty}
e^{-i\frac{\lambda}{2}\boldsymbol J(T-T')}\,\boldsymbol \Omega(T')\,\boldsymbol\Psi(T',\lambda)\,dT',
\end{align}
\end{subequations}
with the corresponding boundary conditions
\begin{align}
\boldsymbol\Phi(T,\lambda)&\rightarrow e^{-i\frac{\lambda}{2}\boldsymbol J T} \quad
\text{as} \quad T \rightarrow -\infty \\
\boldsymbol\Psi(T,\lambda)&\rightarrow e^{-i\frac{\lambda}{2}\boldsymbol J T} \quad
\text{as} \quad T \rightarrow +\infty.
\end{align}
As these two sets of solutions determine complete sets of eigenfunctions, we
can relate the two via a matrix $\boldsymbol S(\lambda)$ known as the
scattering data,
\begin{align}
\boldsymbol\Psi=\boldsymbol\Phi \boldsymbol S(\lambda)\quad \text{where} \quad
\boldsymbol S(\lambda)=\left(
\begin{array}{cc}
\bar{\boldsymbol a} &\boldsymbol b\\
\bar{\boldsymbol b}^\dagger & a
\end{array}\right)
\end{align}
with $\bar{\boldsymbol a}$ a $2\times 2$ matrix and $\boldsymbol b$ and $
\bar{\boldsymbol b}$ two-dimensional column vectors. It can be shown that $a$
is analytic in the lower-half $\lambda$-plane, and we will also assume that it
has a finite number of simple zeros, $\lambda_1, ..., \lambda_n$, in the
region $\text{Im}(\lambda)<0$. The reflection coefficient is given by
\begin{align}
\boldsymbol r (\lambda)=\boldsymbol b(\lambda)/a(\lambda)
\end{align}
which is well-defined on the real $\lambda$-axis.
From Eqs.~\eqref{eq:phisi}, it is clear that the first two columns of $
\boldsymbol \Phi$, which we denote by $\boldsymbol \phi$, and the last column
of $\boldsymbol \Psi$, denoted by $\boldsymbol \psi$, are analytic in the
lower-half $\lambda$-plane. Furthermore, they satisfy the following relation:
\begin{align}
\text{det}(\boldsymbol \phi,\boldsymbol \psi) =a(\lambda)e^{i\frac{\lambda}{2}
T}.
\end{align}
Therefore, when evaluated at one of the simple zeros of $a$ they become
linearly dependent. Thus, we can write
\begin{align}
\boldsymbol \psi(\lambda_j)= \boldsymbol \phi (\lambda_j) \boldsymbol
\eta^{(j)}.
\end{align}
From this equality, we can define the norming constants, $\boldsymbol
\beta^{(j)}$, which are written in terms of the residues of the quantity $
\boldsymbol \psi /a$,
\begin{align}
\text{Res}\left\{\frac{\boldsymbol \psi}{a},\lambda_j\right\}= \boldsymbol
\phi(\lambda_j) \boldsymbol \beta^{(j)}, \quad \text{where} \quad \boldsymbol
\beta^{(j)}=\frac{\boldsymbol \eta^{(j)}}{a'(\lambda_j)}.
\end{align}
In order to solve the CMB, we need to determine the evolution in $Z$ of the
scattering data (the components of the scattering matrix). This gives a number
of coupled, nonlinear differential equations which, in general, do not have an
analytical solution. Fortunately, the inverse scattering procedure can be
carried out by simply determining the evolution of the norming constants, $
\boldsymbol\beta^{(j)}$, and the reflection coefficient, $\boldsymbol r$,
along with the location of the zeros of $a$, namely, $\lambda_1, ...,
\lambda_n$. In spite of this, the general case remains an open problem. Some
progress in the case of a small $\boldsymbol r$ has been presented in
\cite{chakravarty2014inverse}.
In this manuscript, we restrict ourselves to the reflection-less case,
$\boldsymbol r=0$. This implies that no radiation background is superimposed on the
solitonic interaction and that the initial coherences with the excited state
must be zero. Therefore, the initial density matrix is block diagonal and
commutes with $\boldsymbol J$. Thus we have
\begin{align}
\label{eq:rho0}
\boldsymbol{\rho_0}(Z,\Delta)=
\left(\begin{array}{cc}
\boldsymbol{\rho_g}^{(0)}(Z,\Delta) & 0\\
0&\rho_{33}^{(0)}(Z,\Delta)
\end{array}
\right)
\end{align}
where $\boldsymbol{\rho_g}$ denotes the $2\times2$ density matrix of the
ground state.
From Eq.~\eqref{eq:scat2}, the evolution of the norming constants can be
derived and is given by
\begin{align}
\label{eq:beta}
\partial_Z \boldsymbol \beta^{(j)}=\frac{i\mu}{2} \left\langle
\frac{\boldsymbol\rho_g^{(0)}-\rho_{33}^{(0)}\boldsymbol I_2}{\lambda_j-
\Delta}\right\rangle \boldsymbol \beta^{(j)}.
\end{align}
Solving this equation and using the zeros of $a$ we can compute the fields by
the following formula
\begin{align}
\label{eq:omegas}
\left(
\begin{array}{c}
\Omega_s\\
\Omega_c
\end{array}
\right)=-4\sum^n_{j,l=1} (\boldsymbol{K}^{-1})_{jl} \alpha^*_l \boldsymbol \beta^{(j)} e^{i
\lambda_jT}
\end{align}
where
\begin{align}
\label{eq:K}
K_{ij}=2\frac{\alpha_i^* \alpha_j+ \boldsymbol \beta^{(j)T} \boldsymbol \beta^{(i)*}
e^{i(\lambda_j-\lambda_i^*)T}}{\lambda_i-\lambda_j^*}
\end{align}
and
\begin{align}
\label{eq:alpha}
\alpha_i=\frac{\prod_{j=1}^n(\lambda_j^*-\lambda_i)}{2\prod_{j\neq i}
(\lambda_j-\lambda_i)}.
\end{align}
Likewise, we can deduce an expression for the density matrix
\begin{align}
\boldsymbol \rho = \boldsymbol \Phi \boldsymbol \rho_0 \boldsymbol \Phi^{-1}.
\end{align}
From this general expression, we can deduce a simpler formula for the density
matrix for large $T$, i.e.,~after the pulses have passed, which is given by
\begin{align}
\label{eq:rhof}
\boldsymbol \rho^{f}(Z,\Delta)=\left(
\begin{array}{cc}
\left(\bar{ \boldsymbol a}^\dagger \boldsymbol{\rho_g}^{(0)} \bar{ \boldsymbol a}\right)(\lambda=
\Delta) & 0\\
0& \rho_{33}^{(0)}
\end{array}
\right)
\end{align}
with the matrix $\bar{\boldsymbol a}$ given by
\begin{align}
\label{eq:ascat}
\bar{\boldsymbol a}(\lambda)=\prod_{j=1}^n \boldsymbol l_j(\lambda),\quad \boldsymbol l_j(\lambda)=\boldsymbol
I_2+\frac{\lambda_j-\lambda_j^*}{\lambda-\lambda_j}\frac{\boldsymbol v^{(j)} \boldsymbol
v^{(j)\dagger}}{\|\boldsymbol v^{(j)}\|^2}
\end{align}
where the $\boldsymbol v^{(j)}$ vectors are defined recursively as
\begin{align}
\label{eq:vi}
\boldsymbol v^{(1)}&=\boldsymbol \beta^{(1)}, \quad
\boldsymbol v^{(i)}= \left( \prod_{j=1}^{i-1} \boldsymbol l_j(\lambda_i) \right)^{-1}\boldsymbol
\beta^{(i)}, \quad i=2,\ldots,n.
\end{align}
In summary, to obtain a solution of the CMB equations (in the reflection-less
case), first we need to give an initial density matrix of the form given in
Eq.~\eqref{eq:rho0} and the spectral parameters $\lambda_j$ (simple zeros of
$a$, the number of spectral parameters determines the order of the solution).
Then, we compute the evolution of the norming constants by solving Eq.~
\eqref{eq:beta}. Finally, we use Eqs.~(\ref{eq:omegas}-\ref{eq:alpha}) to
obtain the evolution of the fields and Eqs.~(\ref{eq:rhof}-\ref{eq:vi}) for
the final state of the atomic medium.
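As a sanity check of this recipe, one can evaluate Eqs.~(\ref{eq:omegas}-\ref{eq:alpha}) for $n=1$ with constant norming constants (i.e., at $Z=0$) and compare with the closed one-soliton form derived in the next section (the algebraic rearrangement of that form used below is ours). A sketch with illustrative values:

```python
import numpy as np

tau1, xi1 = 1.0, 0.4
lam = [xi1 - 1j/tau1]                        # single zero of a: n = 1
c1, c2 = 1.5, 0.3
betas = np.array([[c1, c2]], dtype=complex)  # norming constants at Z = 0
n = len(lam)

def alpha(i):
    num = np.prod([np.conj(lam[j]) - lam[i] for j in range(n)])
    den = 2*np.prod([lam[j] - lam[i] for j in range(n) if j != i])
    return num/den

def omegas(t):
    """Eq. (omegas): returns the two-component vector (Omega_s, Omega_c)."""
    K = np.array([[2*(np.conj(alpha(i))*alpha(j)
                      + betas[j] @ np.conj(betas[i])
                        * np.exp(1j*(lam[j] - np.conj(lam[i]))*t))
                   / (lam[i] - np.conj(lam[j]))
                   for j in range(n)] for i in range(n)])
    Kinv = np.linalg.inv(K)
    return -4*sum(Kinv[j, l]*np.conj(alpha(l))*betas[j]*np.exp(1j*lam[j]*t)
                  for j in range(n) for l in range(n))

# closed one-soliton form (rearrangement of Eq. (pulses1) at Z = 0)
r2 = abs(c1)**2 + abs(c2)**2
err = max(abs(omegas(t)[0]
              - 4*c1*np.exp(1j*xi1*t)*np.exp(t/tau1)
                / (1 + tau1**2*r2*np.exp(2*t/tau1)))
          for t in np.linspace(-10, 10, 11))
print(err)   # ~1e-16
```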
\section{Analytical solution}
\subsection{Pulse storage}
Now that we have all the pieces to compute the solutions, we consider the
particularly simple case of all the atoms initially prepared in the ground
state $\ket 1$, which is the most common situation encountered when dealing
with EIT and pulse storage in a $\Lambda$ configuration. Thus, we have
\begin{align}
\boldsymbol{\rho_g}^{(0)}=\left( \begin{array}{cc}
1&0\\
0&0
\end{array} \right), \quad \rho_{33}^{(0)}=0.
\end{align}
We will start by considering the one-soliton ($n=1$) solution and write
$\lambda_1=\xi_1-i/\tau_1$ with $\xi_1,\tau_1\in\mathbb{R}$ and $\tau_1>0$. First, we
solve Eq.~\eqref{eq:beta} noting that the initial density matrix is
independent of detuning and the matrix $\boldsymbol\rho_g^{(0)}-\rho_{33}
^{(0)}\boldsymbol I_2$ is diagonal (the two equations for each component of $
\boldsymbol \beta$ are decoupled). Thus, it is easy to see that the solution can be
written as
\begin{equation}
\label{eq:beta1}
\boldsymbol \beta^{(1)T}=\left(
c_1e^{-(\kappa_1+i\delta_1)Z},
c_2\right)
\end{equation}
($\boldsymbol \beta^{(1)T}$ denotes the transpose of $\boldsymbol \beta^{(1)}$) where we
defined
\begin{subequations}
\label{eq:kappadel}
\begin{align}
\kappa_1=&\frac{\mu}{2 \tau_1}\int \frac{F(\Delta)d\Delta}{(\Delta-\xi_1)^2+
(1/\tau_1)^2},
\end{align}
and
\begin{align}
\delta_1=&\frac{\mu}{2}\int \frac{(\Delta-\xi_1)F(\Delta)d\Delta}{(\Delta-
\xi_1)^2+(1/\tau_1)^2},
\end{align}
\end{subequations}
with $\kappa_1$ being the absorption depth as defined for a weak-field
excitation.
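For a narrow (nearly monochromatic) distribution, these integrals reduce to the Lorentzian values $\kappa_1 \approx (\mu/2\tau_1)/[(\bar\Delta-\xi_1)^2+\tau_1^{-2}]$ and $\delta_1 \approx (\mu/2)(\bar\Delta-\xi_1)/[(\bar\Delta-\xi_1)^2+\tau_1^{-2}]$. A quick quadrature with illustrative parameter values confirms this limit:

```python
import numpy as np

mu, tau1, xi1 = 1.0, 1.0, 0.0     # illustrative parameters
T2star, Dbar = 200.0, 0.2         # narrow Doppler profile, mean detuning

Delta = np.linspace(Dbar - 10/T2star, Dbar + 10/T2star, 20001)
F = (T2star/np.sqrt(2*np.pi))*np.exp(-0.5*(Delta - Dbar)**2*T2star**2)

def trapz(y, x):
    # simple trapezoidal rule (avoids NumPy version differences)
    return float(np.sum((y[1:] + y[:-1])*(x[1:] - x[:-1]))/2)

L = 1.0/((Delta - xi1)**2 + (1/tau1)**2)
kappa1 = (mu/(2*tau1))*trapz(F*L, Delta)
delta1 = (mu/2)*trapz((Delta - xi1)*F*L, Delta)

print(kappa1, delta1)   # approx 0.4808 and 0.0962, the Lorentzian values
```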
Now, we use Eq.~\eqref{eq:omegas} to compute the initial pulses. Since $n=1$
we have that $\boldsymbol K $ is just a scalar and $\alpha=(\lambda_1^*-\lambda_1)/
2=i/\tau_1$. Thus we obtain,
\begin{subequations}
\label{eq:pulses1}
\begin{align}
\Omega_s=&\frac{4c_1}{\tau_1}e^{i(\xi_1 T-\delta_1 Z)} \left[ 2|c_1|\cosh(T/
\tau_1-\kappa_1 Z +\sigma_1)\right.\nonumber\\
&\left.+|c_2| \exp(T/\tau_1+\kappa_1 Z +\sigma_2)\right]^{-1} \\
\Omega_c=&\frac{4c_2}{\tau_1}e^{i\xi_1 T} \left[ 2|c_2|\cosh(T/\tau_1+
\sigma_2)\right.\nonumber\\
&\left.+ |c_1|\exp(T/\tau_1-2\kappa_1 Z +\sigma_1)\right]^{-1}
\end{align}
\end{subequations}
where we defined
\begin{align}
\sigma_i=\ln (|c_i|\tau_1), \quad i=1,2.
\end{align}
Therefore we see that $\delta_1$ appears as an extra contribution to the
refractive index for the signal field.
From these expressions we can calculate the pulse area for each pulse, which
is defined as
\begin{equation}
\theta(Z)=\int_{-\infty}^{\infty}|\Omega(Z,T)|dT,
\end{equation}
and find that the two-pulse area \citep{clader2007two} is given by
\begin{equation}
\Theta(Z)=\sqrt{\theta_s(Z)^2+\theta_c(Z)^2}=2\pi.
\end{equation}
This was to be expected as these are essentially the same solutions presented
in \cite{clader2007two}.
At the boundary, $Z=0$, if $|c_1|\gg |c_2|$, these are well
approximated by
\begin{subequations}
\label{eq:pulstor}
\begin{align}
\Omega_s=&\frac{\theta_s(Z=0)}{\pi\tau_1}e^{i\xi_1 T} \text{sech}(T/\tau_1+
\sigma_1),\\
\Omega_c=&\frac{\theta_c(Z=0)}{\pi\tau_1}e^{i\xi_1 T} \text{sech}(T/\tau_1+
\sigma_1).
\end{align}
\end{subequations}
This solution describes the propagation of two matched pulses with
complementing areas such that the two-pulse area is always equal to $2\pi$. As
the signal pulse propagates at a reduced group velocity, it excites some of
the population to the excited state where it can be stolen by the control
pulse. This leads to an amplification of the control pulse while the signal
pulse is absorbed. After the interaction between the two fields, the control
pulse becomes a full $2\pi$ pulse propagating at the speed of light (as it is
decoupled from the medium), while the signal field is fully absorbed. This is
depicted in Fig.~\ref{fig:pulses} for $-20<T/\tau_1<0$.
\begin{figure}
\centering
\includegraphics[width=1.\linewidth]{pulses}
\caption{\label{fig:pulses} (Color online) Pulse propagation as dictated by
the second-order solution when $|c^{(1)}|\gg |c^{(2)}|$ and $\sigma_1\gg\zeta
$. The plot on the left (right) shows the evolution of the signal (control)
pulse. For $T/\tau_1<0$ the signal pulse is encoded onto the ground-state
coherence, and for $T/\tau_1>0$ it is retrieved and then re-encoded at a
displaced location. }
\end{figure}
Let us now take a look at the density matrix after the soliton interaction has
taken place. From Eq.~\eqref{eq:rhof} we have that
\begin{align}
\boldsymbol{\rho_g}^{f}(Z,\Delta)=\bar{ \boldsymbol a}^\dagger \boldsymbol{\rho_g}^{(0)} \bar{ \boldsymbol a}=
\left( \begin{array}{cc}
|\bar a_{11}|^2 & \bar a_{11}^*\bar a_{12}\\
\bar a_{11}\bar a_{12}^*&|\bar a_{12}|^2
\end{array}\right).
\end{align}
and using Eqs.~\eqref{eq:ascat} and \eqref{eq:vi} we obtain
\begin{subequations}
\begin{align}
\bar a_{11}(\lambda)&=1-\frac{i/\tau_1}{\lambda-\xi_1+i/\tau_1}(1-
\tanh(\kappa_1 Z -\sigma_{12})), \\
\bar a_{12}(\lambda)&=-\frac{i/\tau_1}{\lambda-\xi_1+i/\tau_1}C_{12}e^{-i
\delta_1 Z}\text{sech}(\kappa_1 Z -\sigma_{12}),
\end{align}
\end{subequations}
where we defined
\begin{align}
\sigma_{12}=\ln |c_1/c_2|, \quad \text{and} \quad C_{12}=\frac{c_1c^{*}_2}{|
c_1c_2|}.
\end{align}
Finally, we can write
\begin{subequations}
\label{eq:rhos}
\begin{align}
\rho_{11}=&1+\frac{1}{\tau_1^2(\Delta-\xi_1)^2+1}\left[\tanh^2(\kappa_1 Z -
\sigma_{12})-1\right] ,\\
\rho_{22}=& \frac{1}{\tau_1^2(\Delta-\xi_1)^2+1}\text{sech}^2(\kappa_1 Z -
\sigma_{12}),\\
\rho_{12}=&-\frac{C_{12}e^{-i\delta_1 Z}}{\tau_1^2(\Delta-\xi_1)^2+1}\left[i
\tau_1(\Delta-\xi_1)+\tanh(\kappa_1 Z -\sigma_{12})\right] \nonumber\\
&\times \text{sech}(\kappa_1 Z -\sigma_{12}).
\end{align}
\end{subequations}
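Note that these expressions describe a pure state: $\rho_{11}+\rho_{22}=1$ and $|\rho_{12}|^2=\rho_{11}\rho_{22}$ hold identically. The following short numerical check (our sketch, not part of the original analysis; parameter values are arbitrary and $C_{12}$ is set to 1) confirms this:

```python
import cmath
import math

def rho_elements(kz, tau1=1.0, delta=0.3, xi1=0.0, delta1=0.2, sigma12=0.5):
    """Ground-state density matrix of Eqs. (rhos); C12 is set to 1 and the
    argument kz stands for kappa_1 * Z (the phase factor does not affect the
    moduli checked below)."""
    D = tau1 ** 2 * (delta - xi1) ** 2 + 1.0
    th = math.tanh(kz - sigma12)
    sech = 1.0 / math.cosh(kz - sigma12)
    rho11 = 1.0 + (th ** 2 - 1.0) / D
    rho22 = sech ** 2 / D
    rho12 = -cmath.exp(-1j * delta1 * kz) / D * (1j * tau1 * (delta - xi1) + th) * sech
    return rho11, rho22, rho12

# The imprint is a pure state: unit trace and |rho12|^2 = rho11 * rho22.
for kz in (0.0, 0.5, 2.0):
    r11, r22, r12 = rho_elements(kz)
    assert abs(r11 + r22 - 1.0) < 1e-12
    assert abs(abs(r12) ** 2 - r11 * r22) < 1e-12
```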
From these equations it is clear that while the switching between the signal
and control pulses is taking place, the signal pulse is being transferred
into a spin wave (imprint) located at $\kappa_1 Z=\sigma_{12}$. This location,
denoted $x_1$, can be written in terms of the pulse area to give the simple
formula
\begin{equation}
\label{eq:x1}
\kappa_1 x_1=\ln\left(\frac{\theta_s(Z=0)}{\theta_c(Z=0)}\right).
\end{equation}
Hence, the ratio of the pulse areas plays a key role during the storage of the
signal pulse as it determines the location of the imprint. Figures
\ref{fig:den}(a) and (b) show the shape of the imprint for two different
detuning values.
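As a numerical aside (our sketch; the function name and parameter values are our own), the imprint location of Eq.~\eqref{eq:x1} can be evaluated directly:

```python
import math

def imprint_location(theta_s, theta_c, kappa1=1.0):
    """Storage location from Eq. (x1): kappa_1 * x_1 = ln(theta_s / theta_c)."""
    return math.log(theta_s / theta_c) / kappa1

# A weaker control pulse pushes the imprint deeper into the medium, which is
# what allows a reduced storage length compared to EIT-based schemes.
x_weak = imprint_location(theta_s=2 * math.pi, theta_c=0.05 * math.pi)
x_strong = imprint_location(theta_s=2 * math.pi, theta_c=0.5 * math.pi)
assert x_weak > x_strong
```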
The storage procedure is in line with the proposal made in
\cite{grigoryan2009short}, where an intense pulse is mapped into a spin wave
by means of a low-intensity control pulse in order to reduce the storage
length as compared to the usual EIT techniques. Yet, no mention of a retrieval
procedure was made, and so we address this point in the next section.
\begin{figure}
\includegraphics[width=1.\linewidth]{den}
\caption{\label{fig:den} (Color online) Imprint of the signal pulse in the
ground-state density matrix as encoded initially [(a) and (b)] and after the
displacement [(c) and (d)]. $\tau_1\Delta=0$ for (a) and (c) and
$\tau_1\Delta=0.75$ for (b) and (d).}
\end{figure}
\subsection{Pulse retrieval and spin wave manipulation}
Having analyzed in detail how an intense pulse can be mapped into a spin wave,
we need to address the matter of retrieving the information stored. Looking at
previous work on the subject, we see that in order to recover the signal pulse
we need to apply a control field. Going back to the first-order solution we
notice that if we set the integration constant $c_1$ in \eqref{eq:beta1} equal
to zero, the norming constant is given by
\begin{align}
\label{eq:beta2}
\boldsymbol \beta^{(2)T}=(0,d).
\end{align}
This leads to a $2\pi$-control field propagating at the speed of light in
vacuum
\begin{align}
\Omega_c=&\frac{2}{\tau_2}\frac{d}{|d|}e^{i\xi_2 T} \text{sech}(T/\tau_2+\zeta),
\end{align}
where the spectral parameter was taken to be $\lambda_2=\xi_2-i/\tau_2$ and
we defined
\begin{equation}
\zeta=\ln (|d|\tau_2).
\end{equation}
To study how this control pulse interacts with the spin wave encoded in the
medium, we compute the second-order ($n=2$) solution obtained from the norming
constants as defined in Eqs.~\eqref{eq:beta1} and \eqref{eq:beta2}, and the
two spectral parameters $\lambda_1$ and $\lambda_2$. Using Eqs.~
\eqref{eq:omegas} through \eqref{eq:alpha} we obtain an exact solution which
is too cumbersome to be of any use. Hence, we look at the limiting case of $|
c^{(1)}|\gg |c^{(2)}|$ and $\sigma_1\gg\zeta$ for which the pulses at the
boundary $Z=0$ can be well approximated by
\begin{subequations}
\label{eq:pulses2}
\begin{align}
\Omega_s=&\frac{2}{\tau_1}\frac{c_1}{\sqrt{|c_1|^2+|c_2|^2}}e^{i\xi_1 T}
\text{sech}(T/\tau_1+\sigma_1)\nonumber\\
&+\frac{2}{\tau_2}\frac{c_2}{\sqrt{|c_1|^2+|c_2|^2}}e^{i\xi_2 T} \text{sech}(T/
\tau_2+\zeta-\chi),\\
\Omega_c=&\frac{2}{\tau_1}\frac{c_2}{\sqrt{|c_1|^2+|c_2|^2}}e^{i\xi_1 T}
\text{sech}(T/\tau_1+\sigma_1)\nonumber\\
&+\frac{2}{\tau_2}\frac{c_1}{\sqrt{|c_1|^2+|c_2|^2}}e^{i\xi_2 T} \text{sech}(T/
\tau_2+\zeta-\chi),
\end{align}
\end{subequations}
where we defined the phase-lag parameter
\begin{equation}
\label{eq:xi}
\chi=\frac{1}{2}\ln\left( \frac{(\tau_1+\tau_2)^2+(\xi_1-
\xi_2)^2\tau_1^2\tau_2^2}{(\tau_1-\tau_2)^2+(\xi_1-\xi_2)^2\tau_1^2\tau_2^2}
\right).
\end{equation}
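The phase-lag parameter is easily evaluated numerically; the sketch below (our addition, with hypothetical parameter values) illustrates its dependence on the pulse durations and spectral parameters:

```python
import math

def phase_lag(tau1, tau2, xi1=0.0, xi2=0.0):
    """Phase-lag parameter chi of Eq. (xi) for the two-soliton solution."""
    d2 = (xi1 - xi2) ** 2 * tau1 ** 2 * tau2 ** 2
    return 0.5 * math.log(((tau1 + tau2) ** 2 + d2) / ((tau1 - tau2) ** 2 + d2))

# chi grows as the durations approach each other (for equal spectral
# parameters it diverges at tau2 = tau1), while a difference in the real
# parts xi reduces it.
assert phase_lag(1.0, 0.9) > phase_lag(1.0, 0.5)
assert phase_lag(1.0, 0.9, xi2=1.0) < phase_lag(1.0, 0.9)
```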
The solution \eqref{eq:pulses2} describes a well-defined sequence of pulses. First, we have
the matched pulses of duration $\tau_1$ that encode the strong signal pulse
into the medium. These have the same form as the pulses in Eq.~
\eqref{eq:pulstor} and thus lead to the storage of the signal pulse at a
location determined by Eq.~\eqref{eq:x1}. Then we have another set of matched
pulses but of duration $\tau_2$ and inverted areas, that is, we have a strong
control pulse and a weak signal pulse having the same areas as the initial
storing set.
This solution is shown in Fig.~\ref{fig:pulses}, where we can clearly see that
initially we have the same type of solution described in the previous section
followed by the interaction of the second set of matched pulses with the spin
wave left by the first. As the pulses approach the location of the imprint,
the signal pulse is amplified at the expense of the control pulse up to the
point where it becomes a full $2\pi$ pulse (here we can say that the signal
pulse is retrieved). After propagating a given distance, we have another
reversal of intensities: the signal pulse is absorbed leading to the
amplification of the control pulse which, after becoming a full $2\pi$ pulse,
becomes decoupled from the medium.
There are two main points to mention as far as the usual concept of pulse
storage and retrieval is concerned. First, the retrieved pulse does not have
the same
properties as the original signal pulse if the spectral parameters differ. The
retrieval would, of course, be achieved by cutting off the medium at the
location where the signal pulse has maximum intensity. Second, the retrieval
control pulse is not just a control field but is accompanied by the same
amount of signal field as there was of the control field for the storage. This
contrasts with the usual distinction of one frequency controlling the other.
We will study how this compares with just using a $2\pi$ control field for the
retrieval in Sec.~\ref{sec:num}.
Now let us take a look at the density matrix after the interaction of the
second matched pulse set with the spin wave left behind by the first. For
this, we again use Eqs.~\eqref{eq:ascat} and \eqref{eq:vi} to obtain the
following expressions for the scattering coefficients:
\begin{subequations}
\begin{align}
\bar a_{11}(\lambda)&=1-\frac{i/\tau_1}{\lambda-\xi_1+i/\tau_1}(1-
\tanh(\kappa_1 Z -\sigma_{12}-\chi)), \\
\bar a_{12}(\lambda)&=-\frac{i/\tau_1}{\lambda-\xi_1+i/\tau_1}C_{12}e^{-i
\delta_1 Z}e^{i\phi}\text{sech}(\kappa_1 Z -\sigma_{12}-\chi),
\end{align}
\end{subequations}
where the extra phase $\phi$ depends on the two eigenvalues $\lambda_1$ and $
\lambda_2$. When these are purely imaginary, $\phi=\pi$. As a result, the
density matrix takes the form
\begin{subequations}
\begin{align}
\rho_{11}=&1+\frac{1}{\tau_1^2(\Delta-\xi_1)^2+1}\left[\tanh^2(\kappa_1 Z -
\sigma_{12}-\chi)-1\right] ,\\
\rho_{22}=& \frac{1}{\tau_1^2(\Delta-\xi_1)^2+1}\text{sech}^2(\kappa_1 Z -\sigma_{12}-
\chi),\\
\rho_{12}=&\frac{-C_{12}e^{-i\delta_1 Z}e^{i\phi}}{\tau_1^2(\Delta-\xi_1)^2+1}
\left[i\tau_1(\Delta-\xi_1)\right.\left.+\tanh(\kappa_1 Z -\sigma_{12}-\chi)
\right]\nonumber\\
& \times \text{sech}(\kappa_1 Z -\sigma_{12}-\chi).
\end{subequations}
These have the same form as those given in Eqs.~\eqref{eq:rhos}, and so the
result of the interaction with the second set is clear: the spin wave has been
displaced by an amount determined by the phase-lag parameter $\chi$.
Additionally, the coherence suffered a $\phi$ phase shift. Therefore, if the
medium is not cut off before the retrieved pulse is absorbed, the imprint is
displaced further into the medium with new location
\begin{equation}
\label{eq:x2}
\kappa_1 x_2=\sigma_{12}+\chi.
\end{equation}
\subsection{Effects of Doppler broadening and detuning}
Let us take a moment to study the effect of inhomogeneous broadening and
detuning in the solutions just described. Tracing back our steps, we readily
notice that the only place where a Doppler average is taken is in Eqs.~
\eqref{eq:kappadel}. From the expressions for the pulses and the spin-wave it
is clear that the absorption coefficient sets the spatial dimension, that is,
the smaller $\kappa_1$ is, the longer the atomic sample will need to be in
order to fit the pulses and the spin-wave. On the other hand, the only effect
of the parameter $\delta_1$ is to produce a position-dependent phase to the
signal pulse which is in turn transferred to the spin-wave.
So far we have kept the spectral parameters complex, but taking a look at the
expression for the pulses [Eqs.~\eqref{eq:pulses1} and \eqref{eq:pulses2}] we
notice that the imaginary part can be identified with the inverse pulse
duration while the real part appears as a self-detuning term. This term also appears in the
expression for $\kappa_1$ and $\delta_1$ by shifting the detuning everywhere
except in the Doppler distribution. Furthermore, looking back at the
definition of the Rabi frequencies and the slow-varying envelopes for the
fields, we see that this self-detuning should be included as a real detuning.
Thus we set it to zero and rewrite the expressions for $\kappa_1$ and $
\delta_1$ in terms of the absorption coefficient in the absence of Doppler
broadening, $\kappa_0=\mu \tau_1/2$,
\begin{subequations}
\label{eq:norm}
\begin{align}
\kappa_1=&\kappa_0\int \frac{\bar F(\nu)d\nu}{\nu^2+1}, \\
\delta_1=&\kappa_0\int \frac{\nu \bar F(\nu)d\nu}{\nu^2+1},
\end{align}
\end{subequations}
where
\begin{align}
\bar F(\nu)= \frac{T_2^\star}{\tau\sqrt{2 \pi}}e^{-(\nu-\tau \bar
\Delta)^2T_2^{\star 2}/2 \tau^2}.
\end{align}
We plot these as a function of the Doppler distribution width and mean (see
Fig.~\ref{fig:dopp}). We readily notice that the maximum value for the
absorption coefficient is attained in the absence of Doppler broadening and
zero detuning. When the detuning is non-zero, the maximum for $\kappa$ shifts
to a wider Doppler distribution. As the absorption coefficient sets the
spatial length, this can be used to reduce the size of the atomic sample if
detuning is unavoidable. As for the extra contribution to the index of
refraction, some detuning is required for it to be nonzero, owing to the
antisymmetry of $\delta_1$ with respect to $\bar \Delta$.
Moreover, both parameters tend to zero in the limits of large width and
detuning, so in general it is preferable to keep them as low as possible to
reduce the size of the atomic sample.
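The integrals in Eqs.~\eqref{eq:norm} are readily evaluated by quadrature. The sketch below is our own (function and parameter names such as `t_ratio` and `tbar_delta` are assumptions, standing for $\tau/T_2^\star$ and $\tau\bar\Delta$); it reproduces the qualitative features just described:

```python
import math

def doppler_coeffs(t_ratio, tbar_delta, kappa0=1.0, n=4001, span=40.0):
    """Trapezoid-rule evaluation of kappa_1 and delta_1 in Eqs. (norm).
    t_ratio = tau / T2* and tbar_delta = tau * mean detuning (assumed names)."""
    if t_ratio == 0.0:  # no broadening: F reduces to a delta at nu = tbar_delta
        D = tbar_delta ** 2 + 1.0
        return kappa0 / D, kappa0 * tbar_delta / D
    inv = 1.0 / t_ratio  # T2*/tau: inverse width of the Gaussian in nu
    k = dl = 0.0
    for i in range(n):
        nu = -span + 2.0 * span * i / (n - 1)
        F = inv / math.sqrt(2.0 * math.pi) * math.exp(-0.5 * ((nu - tbar_delta) * inv) ** 2)
        w = 0.5 if i in (0, n - 1) else 1.0
        k += w * F / (nu ** 2 + 1.0)
        dl += w * nu * F / (nu ** 2 + 1.0)
    h = 2.0 * span / (n - 1)
    return kappa0 * k * h, kappa0 * dl * h

# On resonance the absorption coefficient is maximal without broadening, and
# delta_1 vanishes by the antisymmetry of its integrand.
k0, _ = doppler_coeffs(0.0, 0.0)
k1, d1 = doppler_coeffs(1.0, 0.0)
assert k1 < k0 and abs(d1) < 1e-10
```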
\begin{figure}
\includegraphics[width=1.\linewidth]{doppT2}
\caption{\label{fig:dopp} (Color online) Absorption coefficient $\kappa$ and
extra contribution to the index of refraction $\delta$ as a function of the
Doppler width and mean detuning. (a) shows $\kappa$ and (b) $\delta$ as a
function of $\tau/T_2^*$ for $\tau \bar \Delta=0,-0.6,1.2,-2,5$. (c) shows $
\kappa$ and (d) $\delta$ as a function of $\tau \bar \Delta$ for $\tau/
T_2^*=0,1,2,4,16$. }
\end{figure}
The absorption coefficient also determines the group velocity of the signal
pulse, which is given by
\begin{equation}
\frac{v_g}{c}=\frac{1}{1+\kappa_1 c \tau}.
\end{equation}
Hence, as $\kappa$ becomes smaller the signal pulse travels faster through the
medium and so needs to interact for a larger distance with the medium to be
able to produce the same effect as a slower pulse. This is in clear accordance
with the previous discussion about how $\kappa$ sets the spatial dimension.
The change in spatial dimension is also present in the expressions for the
location of the initial imprint and the displacement which can be written as
\begin{subequations}
\begin{align}
\kappa_0 x_1=\frac{\kappa_0}{\kappa_1}\ln\left(\frac{\theta_s(Z=0)}
{\theta_c(Z=0)}\right), \\
\kappa_0 \Delta x=\kappa_0 (x_2- x_1)=\frac{\kappa_0}{\kappa_1}\ln\left(
\frac{\tau_1+\tau_2}{\tau_1-\tau_2} \right).
\end{align}
\end{subequations}
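The scalings just derived can be summarized in a short numerical sketch (our addition; variable names such as `c_tau` and `kappa_ratio` are assumptions standing for $c\tau$ and $\kappa_0/\kappa_1$):

```python
import math

def group_velocity_ratio(kappa1, c_tau):
    """v_g / c for the signal pulse; c_tau stands for c * tau (assumed name)."""
    return 1.0 / (1.0 + kappa1 * c_tau)

def displacement(tau1, tau2, kappa_ratio=1.0):
    """kappa_0 * (x_2 - x_1) for tau2 < tau1 and purely imaginary spectral
    parameters; kappa_ratio = kappa_0 / kappa_1 rescales the length."""
    return kappa_ratio * math.log((tau1 + tau2) / (tau1 - tau2))

# A smaller kappa_1 (e.g. from strong Doppler broadening) both speeds up the
# pulse and stretches the displacement measured in units of kappa_0^{-1}.
assert group_velocity_ratio(0.5, 10.0) > group_velocity_ratio(1.0, 10.0)
assert displacement(1.0, 0.9, kappa_ratio=2.0) == 2.0 * displacement(1.0, 0.9)
```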
\section{Non-ideal pulse storage and retrieval}
\label{sec:num}
After discussing the analytic solution in detail we need to compare it to more
realistic conditions. In order to accomplish this we resort to numerical
calculations. Note that the initial assumption of zero decay was justified
since the lifetime of the higher level is around two orders of magnitude
longer than the duration of the pulses for the regime in which we are working.
As an example, for Rb we have $\tau \Gamma\approx 0.01$ for pulse duration $
\tau \approx 0.26 \text{ns}$ \citep{steck2001rubidium}. We will nonetheless
include the decay, as well as the finite extent of the pulses, as these may
have a noticeable effect. We will assume that the excited state decays at the
rate $\Gamma$, split equally between the two ground states. Additionally, we
set the Rabi frequencies equal to zero when $|\tau_1\Omega|<10^{-5}$.
\begin{figure}
\includegraphics[width=1.\linewidth]{x1}
\caption{\label{fig:x1} (Color online) Location of the stored spin wave as a
function of the control pulse area for (a) $\tau \bar \Delta=0$ and $\tau/
T_2^*= 0,0.5,1,2$ and (b) $\tau \bar \Delta=1.3$ and $\tau/T_2^*=0, 0.5,1,2$.}
\end{figure}
First, we consider the storage of the signal pulse and thus the location of
the stored spin-wave. The results for different values of the parameters are
shown in Fig.~\ref{fig:x1}. The different curves are difficult to
distinguish, precisely because of the close agreement between the
analytical solution and the numerical results. In the absence of detuning
[Fig.~\ref{fig:x1}(a)] we note that the effect of spontaneous emission is to
lower the curve, that is, the signal pulse is stored before the predicted
location. This is the same effect that was reported in
\cite{gutierrez2015manipulation} in the absence of Doppler broadening.
If we now take a closer look at Fig.~\ref{fig:x1}(b) we can spot some
differences. The most noticeable effect is that the curves have been reversed.
In this case ($\tau \bar \Delta=1.3$) the spin wave for a Doppler width of $
\tau/T_2^*= 0.5$ is located after the one made for $\tau/T_2^*= 1$. This is
just a consequence of the displaced maximum for $\kappa_1$ as a function of
the width [see Fig.~\ref{fig:dopp}(a)]. In addition there is a somewhat hidden
feature. For a Doppler width of $\tau/T_2^*= 0.5$ the effect of spontaneous
emission is suppressed: the four curves overlap almost perfectly, while for $
\tau/T_2^*= 0$ the effect is reversed. As for the other two width values, the
curves with spontaneous emission are lowered, but for $\tau/T_2^*=1$ the
effect is less noticeable.
\begin{figure}
\includegraphics[width=1.\linewidth]{x1t2}
\caption{\label{fig:x1t2} (Color online) Location of the stored spin wave as a
function of the Doppler width for a control pulse area of $\theta_c=0.05\pi$.
In (a) we fix the decay rate and consider different values for the detuning, $
\tau \bar \Delta=0.4,0.9,1.3$ and in (b) we fix the detuning at $\tau \bar
\Delta=1.3$ and vary the decay rate. }
\end{figure}
To investigate this unexpected behavior we plot the location of the imprint as
a function of $T_2^*$ for a higher value of spontaneous emission $\tau
\Gamma=0.1$ [see Fig.~\ref{fig:x1t2}(a)]. The effect is clear: for
sufficiently detuned fields the effect of spontaneous emission is reversed for
small values of the Doppler width. Now, with increasing rate of spontaneous
emission, the reversal is enhanced until it reaches a maximum [see Fig.~
\ref{fig:x1t2}(b)]. After this, the effect is diminished. Comparing the
crossing of the numerical results with the curve predicted by the theory we
notice that this reversal is limited to the region where $\partial \kappa/
\partial (\tau/T^*_2) >0$. This effect is independent of the sign of the
detuning.
\begin{figure}[t]
\includegraphics[width=1.\linewidth]{dis}
\caption{\label{fig:dis} (Color online) Displacement of the stored spin wave
as a function of the second control pulse duration for $\tau \bar \Delta=0$
and (a) $\tau/T_2^*= 0$, (b) $\tau/T_2^*= 0.5$, (c) $\tau/T_2^*= 1$ and (d) $
\tau/T_2^*= 2$. Then we fix $\tau \bar \Delta=1.3$ and (e) $\tau/T_2^*= 0$,
(f) $\tau/T_2^*= 0.5$, (g) $\tau/T_2^*= 1$ and (h) $\tau/T_2^*= 2$. The
initial location of the imprint was chosen to be at $\kappa_0 x_1=5$ where the
storage was made by a $2\pi$-signal pulse and the required control pulse.}
\end{figure}
Now, let us take a look at the displacement of the spin-wave. We show the
results for different values of Doppler width, detuning and spontaneous decay
rate along with the theoretical result in Fig.~\ref{fig:dis}. We notice that
in the absence of spontaneous emission the numerical results are very similar
to those predicted by the analytical solution. The only difference is that the
maximum displacement is attained for values slightly bigger than $\tau_2=
\tau_1$. The biggest deviation is when $\tau \bar \Delta=1.3$ and $\tau/
T_2^*=2$ [see Fig.~\ref{fig:dis}(h)], which is also the case where the
largest control-pulse area is required for the storage of the $2\pi$-signal
pulse and thus is the one where the two-pulse area deviates the most from the
one predicted for the storage stage.
The effect of spontaneous emission is evident: as the decay rate increases,
the maximum displacement is shifted to larger values of $\tau_2$. This is
coupled with a lowering in the maximum displacement with increasing rate of
spontaneous emission. For the cases with no detuning and spontaneous decay $
\tau \Gamma \leq 0.01$, no maximum was attained within double precision, but
the pulse dynamics shown in Fig.~\ref{fig:retdyn}, discussed in the next
paragraph, make it fairly certain that a maximum is reached at least for the
cases with non-zero decay. Here, there is a
clear difference between the cases with spontaneous emission and the one
without, unlike in the storage stage. Nonetheless, a considerable displacement
of the spin-wave can be achieved in each case as long as the right duration
for the control pulse is chosen (this feature was not reported in
\citep{gutierrez2015manipulation} as the duration of the control pulses was
always lower than that of the original signal pulse).
\begin{figure}
\includegraphics[width=1.\linewidth]{retnodet}
\caption{\label{fig:retdyn} (Color online) Pulse dynamics around the
displacement peaks for $\tau \bar \Delta=0$ and (a) $\tau \Gamma= 0$, (b) $
\tau \Gamma= 0.01$, and (c) $\tau \Gamma= 0.05$.}
\end{figure}
\begin{figure}
\includegraphics[width=1.\linewidth]{retdet}
\caption{\label{fig:retdyndet} (Color online) Pulse dynamics around the
displacement peaks for $\tau \bar \Delta=1.3$ and (a) $\tau \Gamma= 0$, (b) $
\tau \Gamma= 0.01$, and (c) $\tau \Gamma= 0.05$.}
\end{figure}
\begin{figure}
\includegraphics[width=1.\linewidth]{retden}
\caption{\label{fig:retden} (Color online) Density matrix elements for the
ground states after the maximum displacement for $\tau \Gamma= 0.05$ and (a)
$\tau \bar \Delta=0$ and (b) $\tau \bar \Delta=1.3$.}
\end{figure}
Figure \ref{fig:retdyn} shows the pulse dynamics in the retrieval stage at the
displacement peaks for the cases in resonance with $\tau/T_2^*= 1$. We only
show one value for the Doppler broadening width as the other cases behave in
the same way. Let us first take a look at the case in resonance. The first
thing we notice is that for the case with no decay, the retrieval behaves
pretty much as described by the analytical solution. Now, if we add some
decay channel we notice that the signal pulse starts slowing down at the same
time as it decays. For the case with highest spontaneous emission the signal
pulse almost comes to a full stop, as indicated by the nearly 90-degree
turn of its trajectory. As for the case with $\tau \Gamma= 0.01$, the slowing down and
decay are evidence of the eventual maximum displacement which was not achieved
due to limitations in the precision. If we now go off resonance (see Fig.~
\ref{fig:retdyndet}) we notice that the slowing down disappears. This can be a
result of the reduced population induced in the excited state. Another
consequence of this is the preservation of the coherence between the ground
state levels which would allow another retrieval. Figure \ref{fig:retden}
shows the density matrix elements for the cases with $\tau \Gamma= 0.05$. We
readily notice that the coherence is completely lost when in resonance whilst
it is mostly preserved in the detuned scenario. Therefore, the detuning
prevents the loss of coherence due to spontaneous emission. It is also worth
noting that a maximum displacement is always attained even for the case with
no decay channel.
\begin{figure}
\includegraphics[width=1.\linewidth]{ret}
\caption{\label{fig:retr} (Color online) Input $2\pi$-signal pulse ($\kappa_0
Z=0$) and output pulses ($\kappa_0 Z=10$) for two values of the duration of
the control pulse $\tau_2/\tau_1=1,1.15$. The atomic medium is ten $
\kappa_0^{-1} $ long and we chose $\tau \bar \Delta=0$, $\tau/T_2^*=2$ and $
\tau_1\Gamma=0.05$. }
\end{figure}
Moving on to the retrieval of the signal pulse, we find that there is a
caveat. As we already mentioned in the analytical solution, the retrieved
signal pulse inherits the duration of the control pulse used to retrieve it.
This remains true in the non-idealized scenario as can be seen in Fig.~
\ref{fig:retr}. Therefore there is a compromise: if we want the highest output
intensity then we have to allow the duration of the retrieved pulse to be
different or, conversely, if we want to keep the same parameters for the
signal pulse we will have to settle for a lower output intensity. In this
regard, the presence of Doppler broadening and detuning helps by bringing the
maximum displacement closer to that predicted by the theory.
As a last note, this scheme also helps reduce the storage length compared to
the usual EIT scheme. This was also noted in \cite{grigoryan2009short}. For
the parameters considered in this work the EIT-type interaction has a minimal
effect in both the control and signal fields for the equivalent distances
considered here. These need to be substantially increased in order to see the
formation of the adiabaton pair \citep{grobe1994formation}.
\section{Conclusions}
In conclusion, we have used the technique of inverse scattering to obtain
analytical solutions that describe the storage and retrieval of intense pulses
in the presence of Doppler broadening and detuning. These results are in
accordance with previous treatments
\citep{groves2013jaynes,gutierrez2015manipulation}. Only minor corrections are
needed and the main effect (according to analytical solutions) is the change
in length scale which is determined by $\kappa_1$, thus showing that the
preliminary conjectures made in \citep{gutierrez2015multi} were somewhat
pessimistic.
Additionally, we have tested the result given by the analytical solution via
numerical solutions through which we have found a notable agreement for the
storage step. The addition of homogeneous broadening causes a minimal
deviation from the theoretical predictions and an unexpected reversal of its
effect in the region where $\partial \kappa/\partial (\tau/T^*_2) >0$. As for
the displacement (or retrieval, if the displacement is greater than the medium
length), the differences with the theory are appreciable but nonetheless
follow a similar trend. Most noticeable is the shift of the maximum for the
displacement of the spin-wave (an effect previously overlooked) which would
allow the experimental realization of this procedure even for higher values of
decay rate. It is also worth highlighting the slowing down of the retrieved
pulse when in resonance (due to the spontaneous decay) and its disappearance,
along with the protection of coherence as we go off-resonance.
The results presented here and in \citep{gutierrez2015manipulation} show that
this storage scheme can be implemented experimentally in a variety of
conditions. In addition, the backward-transfer solution as well as the multi-
pulse storage presented in \citep{gutierrez2015multi} should remain valid with
the appropriate change of the spatial dimension.
\begin{acknowledgments}
The author gratefully acknowledges J.~H.~Eberly and M.~A.~Alonso for fruitful
discussions and a careful review of this manuscript.
This work was supported by NSF grants (PHY-1203931, PHY-1505189) and a
CONACYT fellowship
awarded to R. Guti\'{e}rrez-Cuevas.
\end{acknowledgments}
\section{Introduction}
Superconformal field theories (SCFT) have long been of considerable interest
in many different contexts. In addition to the key role played by SCFT in $D=2$ in superstring theory, the advent of the remarkable AdS/CFT duality has brought to the forefront the SCFT's in $D=3,4,6$. This paper is devoted to a study of superconformal sigma models in $3D$, in which a renewed interest has emerged over the last few years following the realization that the coincident $M2$-brane theory may be describable by a SCFT in $3D$ involving scalars in a suitable representation of certain Yang-Mills groups, and Chern-Simons interactions \cite{Bagger:2006sk,Bagger:2007jr,Bagger:2007vi,Schwarz:2004yj,Gaiotto:2007qi,Gaiotto:2008sd}. In most studies on this subject so far, the scalar fields are taken to parametrize a Euclidean vector space, and there are only few results in which the scalars are described by a sigma model with non-flat target manifold. Furthermore, typically one works in Minkowskian spacetime. Our aim is to fill this gap by constructing explicitly all possible gauged sigma models in $3D$ with conformal supersymmetry and non-flat target manifolds. We also work in general $3D$ spacetimes which admit conformal Killing spinors, as required by conformal supersymmetry.
It is known that conformal symmetry in a sigma model in $D$ dimensions requires that the $D$-dimensional spacetime admits conformal Killing vectors and that the target space admits a homothetic conformal Killing vector. The latter requirement amounts to the statement that the target space is a cone. Superconformal extensions of the model, however, depend on the dimension of spacetime and on the amount of supersymmetry. In \cite{Sezgin:1994th}, (super-)conformal sigma models and the gauging of target space isometries were studied in arbitrary dimensions. Various geometrical aspects will carry over to $3D$ but there are significant differences. For example, the Chern-Simons terms needed for gauging of the target space isometries is special to $3D$.
We will discuss two ways of constructing rigid superconformal field theories in $3D$. One approach is to perform a direct construction either in component formalism or, if available, in superspace. An alternative approach is to start from gauged Poincar\'e supergravities in $3D$ which have been already constructed \cite{Nicolai:2000sc,Nicolai:2001sv,Nicolai:2001ac,deWit:2003ja}, and to take a rigid limit such that exactly the desired fields and (global) superconformal symmetries survive.
The latter approach is basically the inverse of the conformal tensor calculus
\cite{Ferrara:1977ij,Kaku:1978nz,Kaku:1978ea} (for a review, see \cite{lectures}), combined with taking the limit of rigid (superconformal) supersymmetry. In the usual conformal program one starts from matter coupled conformal supergravity and, after gauge fixing, ends up with matter coupled Poincar\'e supergravity. Here, instead, we start from the already constructed 3D matter coupled Poincar\'e supergravity and, by making field redefinitions, go back to the conformal basis, while at the same time we take the limit of rigid superconformal supersymmetry. We will show in detail how this works. It turns out that for ${\cal N} \ge 2$, some of the auxiliary fields of Poincar\'e supergravity cannot be ignored in the superconformal sigma model since a subset of these auxiliary fields ends up being part of the target space geometry in the rigid superconformal limit.
We shall also discuss the gaugings of the isometry groups of the sigma model, for ${\cal N}\le 4$, making use of 3D Chern-Simons terms, and requiring superconformal symmetry. In particular, we will derive the restrictions on these gaugings for the different numbers of conformal supersymmetry. As a byproduct of the constructions presented in this paper we shall present a non-renormalization theorem for the K\"ahler potential for ${\cal N} \ge 3$.
The paper is organized as follows. In section~2, we review the structure of bosonic conformal sigma-models in arbitrary dimensions. In section~3, we specify to three dimensions and extend the construction to ${\cal N}=1$ and ${\cal N}=2$ superconformal sigma models. In sections 4 and 5 we discuss the possible superconformal gaugings in three dimensions.
We determine the conditions on the gauge group and we give explicit actions which involve the Chern-Simons interactions of the gauge fields. Finally, in section~6 we present an alternative approach to the construction of these theories as particular rigid limits of (gauged) Poincar\'e supergravities. Appendices A and B collect notation, and appendix~C contains a detailed discussion of conformal Killing spinors in three dimensions.
\section{Conformal sigma models}
In this section we briefly review the structure of conformal sigma models
in arbitrary space-time dimensions higher than two.
We assume in $p+1$ space-time dimensions a background metric~$h_{\mu\nu}$
($\mu=0, 1, \dots, p$) with signature $(-++\,\cdots\,+)$
which admits a conformal Killing vector $\xi^\mu$, i.e.
\begin{eqnarray}
\nabla^{(h)}_\mu \xi_\nu +\nabla^{(h)}_\nu \xi_\mu
&=&
4\,\Omega\,h_{\mu\nu}
\;,
\label{conformalKilling}
\end{eqnarray}
where $\xi_\mu\equiv h_{\mu\nu}\,\xi^\nu$ and the conformal factor can be expressed as
$\Omega= \ft1{2(p+1)}\,h^{\mu\nu}\nabla^{(h)}_\mu \xi_\nu$\,.
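As a quick illustration (our addition), take flat space $h_{\mu\nu}=\eta_{\mu\nu}$ and the dilatation vector $\xi^\mu=x^\mu$; then \eqref{conformalKilling} is satisfied with

```latex
\partial_\mu \xi_\nu + \partial_\nu \xi_\mu = 2\,\eta_{\mu\nu}
= 4\,\Omega\,\eta_{\mu\nu}
\qquad\Longrightarrow\qquad
\Omega = \ft12 = \ft1{2(p+1)}\,\partial_\mu x^\mu\,,
```

consistent with the trace formula since $\partial_\mu x^\mu = p+1$.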
It has been shown in~\cite{Sezgin:1994th} that in this case the scalar target space ${\cal M}$ (with metric $G_{\alpha\beta}$) admits a homothetic conformal Killing vector $V^\alpha$ which is the gradient of a homogeneous function $V$, i.e.
\begin{eqnarray}
\nabla_\alpha V_\beta + \nabla_\beta V_\alpha &=& 2G_{\alpha\beta}
\;,
\qquad
V_\alpha ~=~ \partial_\alpha V
\;,
\qquad
V^\alpha \partial_\alpha V ~=~ 2 V
\;.
\label{confcon}
\end{eqnarray}
With a particular choice of target space coordinates $\Phi^\alpha=(\phi^0, \phi^i)$, the general solution of these equations is given by~\cite{Sezgin:1994th}
\begin{eqnarray}
V & =& e^{2(p-1)\phi^0} F(\phi^i)\;,
\qquad
G_{00} ~ =~ 2(p-1)^2 e^{2(p-1)\phi^0} F(\phi^i)\;,\nonumber\\
G_{0i} & =& (p-1) e^{2(p-1)\phi^0} \partial_i F(\phi^i)\;,
\qquad
G_{ij} ~=~ e^{2(p-1)\phi^0} \tilde{g}_{ij}(\phi^k)
\;,
\label{gensol}
\end{eqnarray}
where $F(\phi^k)$ and $\tilde{g}_{ij}(\phi^k)$ are an arbitrary function and
an arbitrary metric, respectively, depending on the $({\rm dim}\,{\cal M}-1)$ target space coordinates $\phi^i$. Notice that the function~$F$ must be positive in order for the metric to be positive definite. The existence of the homothetic conformal Killing vector allows us to define a new coordinate $r=\sqrt{2V}$,
in terms of which the target space metric takes the form of a cone \cite{Gibbons:1998xa}
\begin{eqnarray}
G_{\alpha\beta}(\Phi^\gamma) \, d\Phi^\alpha d\Phi^\beta &=&
dr^2+ r^2 g_{ij}(\phi^k)\, d\phi^i d\phi^j
\;,
\label{cone}
\end{eqnarray}
where $g_{ij}$ is related to (\ref{gensol}) by
\begin{eqnarray}
g_{ij} &=&
\frac{\tilde{g}_{ij}}{2F}+\frac{\partial_i F\partial_j F}{4 F^2}
\;.
\end{eqnarray}
Conversely, any cone metric (\ref{cone}) provides a solution to the
constraints (\ref{confcon}) defining a homothetic conformal Killing vector upon choosing the function $V=\frac12 r^2$.
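This can be checked directly in cone coordinates (a short computation we add for completeness): with $V=\ft12 r^2$ one has $V_\alpha=(r,0)$, and the only Christoffel symbol needed is $\Gamma^r_{ij}=-r\,g_{ij}$, so

```latex
\nabla_r V_r = 1 = G_{rr}\,,\qquad
\nabla_i V_j = -\Gamma^r_{ij}\,V_r = r^2 g_{ij} = G_{ij}\,,\qquad
\nabla_i V_r = \nabla_r V_i = 0\,,
```

while $V^\alpha\partial_\alpha V = r\,\partial_r V = 2V$, so all conditions in \eqref{confcon} hold.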
\smallskip
The general action of a conformal sigma model is then given by
\begin{eqnarray}
{\cal L}_0 &=& -\ft12\sqrt{-h}\,
\Big(
\ft{p-1}{2p}V(\Phi^\alpha) R^{(h)}
+ h^{\mu\nu} \, \partial_\mu \Phi^\alpha\,
\partial_\nu \Phi^\beta \,G_{\alpha\beta}(\Phi^\gamma) + {\cal U}(\Phi^\alpha)
\Big)
\;,
\label{Lconformal}
\end{eqnarray}
where the function $V(\Phi^\alpha)$ from (\ref{confcon}) shows up as a compensating dilaton factor
multiplying the Ricci scalar $R^{(h)}$ of the background metric $h_{\mu\nu}$,
and ${\cal U}(\Phi^\alpha)$ is an arbitrary scalar potential, subject to the homogeneity condition
\begin{eqnarray}
V^\alpha \partial_\alpha {\cal U} &=& \frac{2(p+1)}{p-1}\,{\cal U}
\;.
\end{eqnarray}
The action (\ref{Lconformal}) is invariant under the conformal transformations
\begin{equation}
\delta_{\rm c} \Phi^\alpha = \xi^\mu \partial_\mu \Phi^\alpha + (p-1)\Omega\, V^\alpha\ ,
\end{equation}
with the conformal Killing vector $\xi^\mu$ defined in (\ref{conformalKilling}).
\smallskip
In cone coordinates $\Phi^\alpha=\{r, \phi^i\}$, the target space metric
takes the form (\ref{cone}) and the functions $V$ and ${\cal U}$ are given by
\begin{eqnarray}
V(\Phi^\alpha)=\ft12 r^2\;,\qquad
{\cal U}(\Phi^\alpha)=r^{2(p+1)/(p-1)}\, U(\phi^i)
\;,
\label{VU}
\end{eqnarray}
respectively, where $U(\phi^i)$ is an arbitrary function of the coordinates $\phi^i$.
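In cone coordinates, these homogeneity properties become transparent: the homothetic vector field reduces to the Euler vector,
\begin{eqnarray}
V^\alpha\,\partial_\alpha &=& G^{\alpha\beta}\,\partial_\beta V\,\partial_\alpha ~=~ r\,\partial_r
\;,
\end{eqnarray}
so the homogeneity condition on the potential reads $r\,\partial_r\,{\cal U}=\frac{2(p+1)}{p-1}\,{\cal U}$, whose general solution is precisely the second equation of (\ref{VU}).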
\section{Superconformal sigma models in $D=3$}
In this section, we extend the bosonic results of the previous section
to the ${\cal N}$--extended supersymmetric case.
As the nature of supersymmetry depends on the dimension of spacetime,
we now focus on $D=3$. For an early discussion of the superconformal approach in $D=3$ dimensions, see \cite{Rosseel:2004fa}.
\subsection{${\cal N}$--K\"ahler cones}\label{sec:Nkahlercones}
We have seen in the preceding section that a sigma--model in arbitrary dimensions (with a suitable potential) is conformally invariant if and only if its target manifold $\mathcal{M}$ is isometric to a \textit{cone}. We stress that $\mathcal{M}$ should be a cone \textit{globally} and not just locally. This stems from the fact that the scalars of any conformal model couple to the world--volume scalar curvature through an interaction of the form $V R^{(h)}$, see \eq{Lconformal}; consistency of this coupling requires the function $ V\,\left(\equiv \frac12 r^2\right)$ to be globally defined on $\mathcal{M}$.
${\cal N}$--extended rigid superconformal symmetry simply adds the requirement that the cone $\mathcal{M}$ has the well--known holonomy appropriate for the target space of an ${\cal N}$--supersymmetric theory~\cite{AlvarezGaume:1981hm,deWit:1992up}.
In $D=3$ dimensions, $\mathcal{M}$ is an arbitrary Riemannian cone for ${\cal N}=1$, a \textit{K\"ahlerian cone} for ${\cal N}=2$, a \textit{hyperk\"ahler cone} for ${\cal N}=3$ \cite{deWit:2001bk,deWit:2001dj}, and so on. This is equivalent to requiring the base of the cone, $B$, to be an `${\cal N}$--Sasakian' manifold\footnote{The reader interested in the beautiful geometry of these manifolds is referred to the wonderful book \cite{sasakian}. }.
For our present purposes, it is convenient to rewrite the holonomy conditions in a uniform way for all ${\cal N}$'s. The holonomy algebra $\mathfrak{hol}(\mathcal{M})$ should be
\begin{equation}\label{holonomy}
\mathfrak{hol}(\mathcal{M})\subseteq \mathfrak{h}\ ,\qquad\text{where}\qquad \mathfrak{spin}({\cal N})\oplus \mathfrak{h}\subset \mathfrak{so}(\dim\mathcal{M}),
\end{equation}
where $\mathfrak{h}$ is the commutant of $\mathfrak{spin}({\cal N})$ in $\mathfrak{so}(\dim\mathcal{M})$.
Explicitly, equation~\eqref{holonomy} means that, for ${\cal N}\neq 4$, we may introduce frames $V^{aA}_{\alpha}$ on the target space $\mathcal{M}$. Here $\alpha=1, \dots, \dim \mathcal{M}$ is a `curved' tangent space index on $\mathcal{M}$, $A$ is an index of an irreducible spinorial representation of $\mathrm{Spin}({\cal N})$, and $a$ is an index of the commutant subgroup $H$ (in the representation induced by the vector representation of $\mathfrak{so}(\dim \mathcal{M})$ under the decomposition \eqref{holonomy}).
The case ${\cal N}=4$ is special \cite{deWit:1992up}: we have (in general) a pair of such frames, $V^{aA}_{\alpha}$ and $\tilde V^{a\dot A}_{\alpha}$, where $A$, $\dot A$ are indices of the two irreducible spinorial representations of $\mathfrak{spin}(4)\simeq
\mathfrak{spin}(3)\oplus\mathfrak{spin}(3)$.
Correspondingly, the target manifold factorizes into a hyperk\"ahler cone parametrized by hypermultiplets and one parametrized by \textit{twisted} hypermultiplets. We shall refer to a manifold $\mathcal{M}$ with the holonomy in eqn.~\eqref{holonomy} as an ${\cal N}$\textit{--K\"ahler manifold.} It is an easy consequence of Berger's theorem \cite{besse} that for ${\cal N}\geq 5$ all ${\cal N}$--K\"ahler manifolds are locally flat.\smallskip
Let $(\Sigma^{MN})_{AB}$ be the matrices representing the generators of $\mathfrak{spin}({\cal N})$ in the basic spinorial representation (see appendix~B for definitions and conventions), and let $\eta_{ab}$ be the $H$--invariant pairing\footnote{$\eta_{ab}$ is antisymmetric if the irreducible spinorial representation of $\mathrm{Spin}({\cal N})$ is symplectic, namely for ${\cal N}=3,4,5$, symmetric if the representation is
orthogonal, ${\cal N}=7,8$, and a Hermitian form for ${\cal N}=2,6$.}. Consider the two--forms
\begin{equation}\label{kahlerforms}
\omega^{MN}=-\omega^{NM}\equiv (\Sigma^{MN})_{AB}\, \eta_{ab}\, V^{aA}_{\alpha}\, V^{bB}_{\beta}\, d\Phi^\alpha\wedge d\Phi^\beta\ .
\end{equation}
These ${\cal N}({\cal N}-1)/2$ $2$--forms should be regarded as generalized K\"ahler forms: indeed, for ${\cal N}=2$, there is just one form, $\omega^{12}$, which is the K\"ahler form, while for ${\cal N}=3$ we get the three linearly independent K\"ahler forms of the hyperk\"ahler cone. The statement that the cone $\mathcal{M}$ has the right holonomy, eqn.~\eqref{holonomy}, is reflected in the condition that the $\omega^{MN}$ are closed (in fact covariantly constant) forms. Moreover, $(\omega^{MN})^{\dim\mathcal{M}/2}\not=0$ since the corresponding matrices $(\Sigma^{MN})_{AB}$ are non--degenerate. Hence the forms $\omega^{MN}$ are \textit{symplectic structures}.
In flat space-time, the supersymmetry transformations are given in terms of these complex structures as
\begin{eqnarray}
\delta \Phi^\alpha &=& \overline{\epsilon}^0 \psi^\alpha + \overline{\epsilon}^I (\omega^{0I})^\alpha{}_\beta \,\psi^\beta
\ ,\nonumber\\
\delta \psi^\alpha &=& \gamma^\mu\partial_\mu\Phi^\beta\,
\left(\delta_\beta^\alpha \epsilon^0+\epsilon^I (\omega^{0I}){}^\alpha{}_\beta \right)
\ ,
\label{susygeneral}
\end{eqnarray}
where we have split $\epsilon^M\rightarrow (\epsilon^0, \epsilon^I)$,
and the index $I=1, \dots, {\cal N}-1$ labels the extended supersymmetries.
\subsection{Conformal Killing Spinors}
The formulation requires the existence of a conformal Killing spinor $\epsilon$ in $D=3$ defined by the equation
\begin{equation}
\nabla_\mu \epsilon = \frac12 \gamma_\mu \eta\ ,
\label{conformalspinor}
\end{equation}
for some spinor $\eta$, which is determined by contracting this equation with $\gamma^\mu$, giving $\eta=\frac23\gamma^\mu\nabla_\mu\epsilon$. Both $\epsilon$ and $\eta$
are two-component Majorana spinors in the three-dimensional spacetime with metric
$h_{\mu\nu}=e_\mu{}^r e_\nu{}^s \eta_{rs}$, see appendix~A for our spinor conventions.
In particular, equation (\ref{conformalspinor}) implies the existence of a conformal Killing vector,
$\xi^\mu=\overline{\epsilon}\gamma^\mu \epsilon$, i.e.\footnote{
As usual, in the expression for the Killing vector (and only there),
we assume {\em commuting} spinor components of $\epsilon$.}
\begin{eqnarray}
{\cal L}_\xi e_\mu{}^r &=& 2\Omega\,e_\mu{}^r-\Lambda^r{}_s\,e_\mu{}^s
\;,
\label{conf_vielbein}
\end{eqnarray}
with a compensating Lorentz transformation
$\Lambda_{rs}\equiv e_{[r}{}^\mu e_{s]}{}^\nu\,\nabla_{\mu} \xi_\nu+\xi^\mu \omega_{\mu\,rs}$\,.\footnote{
For Lorentz transformations, we use the conventions
$
\delta E_\mu{}^r = -\Lambda^{rs} E_{\mu s}
\,,\;\;
\delta \omega_\mu{}^{rs}= \partial_\mu \Lambda^{rs}+\dots
\,,\;\;
\delta \psi = -\ft14 \Lambda^{rs} \gamma_{rs}\psi
\,.
$}
The geometry of the three dimensional manifolds admitting non--zero conformal Killing spinors is discussed in some detail in appendix \ref{app:killingspinors}. The local integrability condition for the maximal number of solutions of eqn.~\eqref{conformalspinor} is the vanishing of the Cotton tensor $C_{\nu\rho\mu}$, i.e.\ the spacetime metric $h_{\mu\nu}$ should be locally conformally flat. The number of linearly independent conformal Killing spinors is $N(g)_\mathrm{local} \le 4$. As discussed in appendix C, it is known that \cite{baumleitner}:
\begin{itemize}
\item $N(g)_\mathrm{local} = 4$ if and only if $M$ is conformally flat;
\item $N(g)_\mathrm{local} = 1$ if and only if $M$ is locally conformally equivalent to a $pp$--wave metric
\begin{equation}
ds^2= dx^+\, dx^- +f(x^+, y) (dx^+)^2+ dy^2;
\end{equation}
\item $N(g)_\mathrm{local}=0$ in all other cases.
\end{itemize}
Note that a two-component Majorana spinor counts as two linearly independent spinors. Thus, $N(g)_\mathrm{local}=4$ means that there are two Majorana spinors, with two
linearly independent components each, and $N(g)_\mathrm{local}=1$ refers to one Majorana spinor with a single nonvanishing component.
Not all of the conformal Killing spinors listed above have global extensions. As discussed in detail in appendix C, while it is difficult to analyse the global existence conditions for $N(g)_\mathrm{global}<4$, it can be shown that $N(g)_\mathrm{global}=4$ for:
\begin{itemize}
\item $M$ is conformally equivalent to one of the (infinitely many) covers of the $AdS_3$ space;
\item $M$ is conformal to an open domain in one of the above.
\end{itemize}
The latter case includes, in particular, the $3D$ Minkowski space.
\subsection{The ${\cal N}=1$ superconformal sigma model}
The ${\cal N}=1$ superconformal sigma model is the minimal supersymmetric extension of the bosonic model described by the Lagrangian \eq{Lconformal} such that the supermultiplet of fields consists of $(\Phi^\alpha, \psi^\alpha)$ and the target space metric is a Riemannian cone $\mathcal{M}$. The Lagrangian, up to quartic fermion terms, is given by \cite{Sezgin:1994th}
\begin{eqnarray}
e^{-1}\,{\cal L}_0
&=& -\ft18 V R^{(h)} -\ft12 h^{\mu\nu}\, \partial_\mu \Phi^\alpha\, \partial_\nu \Phi^\beta \,G_{\alpha\beta}(\Phi)
-\ft12 \, \overline{\psi}{}^{\alpha} \gamma^\mu D_\mu\psi^{\beta} \, G_{\alpha\beta}(\Phi)
\nonumber\\
&&{}
- \ft12 D_\alpha\partial_\beta {\cal F}(\Phi) \, \overline{\psi}^\alpha \psi^\beta
- \ft12 G^{\alpha\beta}(\Phi)\,\partial_\alpha {\cal F}(\Phi) \partial_\beta {\cal F}(\Phi)\ ,
\label{N1glob}
\end{eqnarray}
where $e=|{\rm det}\, e_\mu{}^r|$, the covariant derivative $D_\mu\psi^{\alpha}= \nabla_\mu\psi^{\alpha} + \Gamma^\alpha_{\beta\gamma}(\Phi) \partial_\mu\Phi^\beta\psi^\gamma$ involves the Christoffel symbols of the scalar target space, and
as in the bosonic model $V$ is a function on the cone $\mathcal{M}$ which satisfies the relations \eq{confcon}:
\begin{equation}
G_{\alpha\beta} = D_\alpha \partial_\beta V\ ,\quad G^{\alpha\beta} \partial_\alpha V\partial_\beta V = 2V\ .
\end{equation}
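As a quick sanity check, these relations can be verified symbolically on the simplest cone, flat $\mathbb{R}^2$ in polar coordinates; the sympy sketch below is our own illustration and not part of the original argument:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
X = [r, th]
G = sp.diag(1, r**2)   # simplest cone metric: flat R^2 in polar coordinates
Ginv = G.inv()

# Christoffel symbols Gamma^a_{bc} of the metric G
Gamma = [[[sum(Ginv[a, d]*(sp.diff(G[d, b], X[c]) + sp.diff(G[d, c], X[b])
               - sp.diff(G[b, c], X[d]))/2 for d in range(2))
           for c in range(2)] for b in range(2)] for a in range(2)]

V = r**2/2
dV = [sp.diff(V, x) for x in X]

# D_a d_b V and the two defining relations of the homothetic potential
DdV = sp.Matrix(2, 2, lambda a, b: sp.diff(dV[b], X[a])
                - sum(Gamma[c][a][b]*dV[c] for c in range(2)))
assert sp.simplify(DdV - G) == sp.zeros(2, 2)
assert sp.simplify(sum(Ginv[a, b]*dV[a]*dV[b]
                       for a in range(2) for b in range(2)) - 2*V) == 0
```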
The {\em real superpotential} ${\cal F}$ is another function on $\mathcal{M}$, which
encodes the scalar potential and must satisfy the homogeneity condition
\begin{equation}
G^{\alpha\beta} \partial_\alpha V \partial_\beta {\cal F}= 4{\cal F}\ .
\label{homF}
\end{equation}
The action (\ref{N1glob}) is invariant under the following conformal transformations
\begin{eqnarray}
\delta_{\rm c} \Phi^\alpha &=&
\xi^\mu \partial_\mu \Phi^\alpha + 2\Omega V^\alpha\ ,
\nonumber\\
\delta_{\rm c} \psi^\alpha &=&
\xi^\mu\,\nabla_\mu
\psi^\alpha
+ \ft14
\nabla_{\mu} \xi_\nu \,\gamma^{\mu\nu}\,\psi^\alpha
+ 2\Omega \psi^\alpha
- \Omega V^\beta \Gamma^\alpha_{\beta\gamma}\,\psi^\gamma
\ ,
\end{eqnarray}
and the conformal supersymmetry transformation, up to cubic fermions, by
\begin{eqnarray}
\delta_{\rm sc}\Phi^\alpha &=& \overline{\epsilon}\,\psi^\alpha\ ,
\nonumber\\
\delta_{\rm sc} \psi^\alpha &=& \partial_\mu\Phi^\alpha\,\gamma^\mu\epsilon
- G^{\alpha\beta}\,\partial_{\beta} {\cal F}\,\epsilon + \ft12 V^\alpha \eta\ ,
\label{susyN1}
\end{eqnarray}
where $\xi^\mu$ is the conformal Killing vector defined by \eq{conformalKilling} and $\epsilon$ is a conformal Killing spinor defined by \eq{conformalspinor}. In view of the latter equation, the spinor $\eta$ is not an independent spinor, but given by $\eta=\frac23\gamma^\mu D_\mu \epsilon$. The number of solutions to the conformal Killing spinor equation is not to be confused with the number of supersymmetries ${\cal N}$. For each supersymmetry, there is a Killing spinor equation which may admit one or more solutions. The number of such solutions, the form they take, and the $3d$ metric for which they exist are additional data in the definition of the model.
For the later discussion of how to obtain the model discussed above from a suitable supergravity theory, it is convenient to work with the cone coordinates $\Phi^\alpha=\{r, \phi^i\}$, in which case the superpotential ${\cal F}$ takes the form
\begin{eqnarray}
{\cal F}(\Phi^\alpha) = r^4 F(\phi^i)\ ,
\label{supoN1}
\end{eqnarray}
with an arbitrary real function $F(\phi^i)$.
For later use, we note that in these coordinates, with (\ref{supoN1}),
and with the split of fermions according to $\psi^\alpha\rightarrow
(r\lambda, r\chi^i)$, the Lagrangian (\ref{N1glob}) takes the explicit form
\begin{eqnarray}
e^{-1}\,{\cal L}_0
&=& -\ft1{16} r^2 R^{(h)} -\ft12 \partial^\mu r \partial_\mu r
-\ft12 r^2\,\partial^\mu \phi^i \partial_\mu {\phi}^{j}\,g_{ij}
-\ft12 r^2 \overline{\lambda}{} \gamma^\mu \nabla_\mu\lambda
-\ft12 r^4\,\overline{\chi}{}_{i} \gamma^\mu D_\mu\chi^{i}
\nonumber\\
&&{}
- r^3\,\overline{\chi}{}_{i} \gamma^\mu \lambda
\, \partial_\mu\phi^i
-6 r^4 F\, \overline{\lambda} \lambda
-3 r^5 \partial_i F\, \overline{\chi}^i \lambda
-\ft12r^6 \left(D_i\partial_j F
+4 F g_{ij} \right)\, \overline{\chi}^i \chi^j
\nonumber\\
&&{}
- \ft12 r^6 \left(16 F^2 +g^{ij}\,\partial_i F \partial_j F \right)
\ .
\label{N1glob_cone}
\end{eqnarray}
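As a simple cross--check of the last line, the scalar potential term follows directly from (\ref{supoN1}) and the cone metric (\ref{cone}):
\begin{eqnarray}
G^{\alpha\beta}\,\partial_\alpha {\cal F}\,\partial_\beta {\cal F} &=&
(\partial_r {\cal F})^2 + r^{-2}\, g^{ij}\,\partial_i {\cal F}\,\partial_j {\cal F}
~=~ 16\, r^6 F^2 + r^6\, g^{ij}\,\partial_i F\,\partial_j F
\;,
\end{eqnarray}
in agreement with the last term of (\ref{N1glob_cone}).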
We will come back to this result in section 6.2, where we discuss its relation to ${\cal N}=1$ supergravity.
\subsection{The ${\cal N}=2$ superconformal sigma model}
\label{sec:n2}
For ${\cal N}=2$, the sigma model target space ${\cal M}$ is a K\"ahler cone.
It is then convenient to switch to notation in complex coordinates $(\Phi^\alpha, \Phi^{\!*}{}^{\,\bar \alpha})$,
$\alpha=1, \dots, n$.
Our spinor conventions for ${\cal N}=2$ Dirac spinors are collected in appendix A.
The Lagrangian for the ${\cal N}=2$ superconformal sigma model, up to quartic fermion terms, is given by
\begin{equation}
\begin{split}
e^{-1}\,{\cal L}_0
=&
-\ft18 V R^{(h)}
-
h^{\mu\nu}
\,
\partial_\mu \Phi^\alpha\,
\partial_\nu \Phi^{\!*}{}^{\,\bar \alpha} \,G_{\alpha\bar \alpha}(\Phi,\Phi^{\!*})
-
\overline{\psi}{}^{\bar \alpha} \gamma^\mu
D_\mu\psi^{\alpha} G_{\alpha\bar \alpha}(\Phi,\Phi^{\!*})
\\
&{}
-\ft12 \Big(
\tilde\psi^\alpha \psi^\beta \, D_\alpha \partial_\beta {\cal W} +
\tilde\psi^{\bar \alpha} \psi^{\bar \beta} \, D_{\bar \alpha} \partial_{\bar \beta} {\cal W}^{\!*}
\Big)
- \partial_\alpha {\cal W} \partial_{\bar \alpha} {\cal W}^*\, G^{\alpha\bar \alpha}(\Phi,\Phi^{\!*}) \ ,
\label{N2glob}
\end{split}
\end{equation}
where ${\cal W}(\Phi)$ is the holomorphic superpotential,
$D_\alpha \partial_\beta {\cal W} \equiv\partial_\alpha\partial_\beta{\cal W}-\Gamma^\gamma_{\alpha\beta}\partial_\gamma {\cal W}$,
and $G_{\alpha\bar \alpha}(\Phi,\Phi^{\!*})$ is now a K\"ahler metric:
\begin{equation}
G_{\alpha\bar \alpha}(\Phi,\Phi^{\!*}) =
\partial_\alpha \partial_{\bar \alpha}{\cal K}(\Phi,\Phi^{\!*})\ .
\end{equation}
Comparing this to (\ref{confcon}) shows that the function $V$ can be identified with
the K\"ahler potential $V={\cal K}$\,.
Moreover, the K\"ahler cone structure implies that
\begin{equation}
D_\alpha \partial_\beta\, {\cal K}=0=D_{\bar\alpha} \partial_{\bar\beta} \,{\cal K}
\;.
\end{equation}
In particular, the Lagrangian (\ref{N2glob}) has ${\cal N}=1$ supersymmetry. Indeed, one
verifies that it is of the form (\ref{N1glob}) with the real superpotential
\begin{eqnarray}
{\cal F}={\cal W}+{\cal W}^*
\;,
\end{eqnarray}
which must satisfy the homogeneity condition (\ref{homF}).
The full ${\cal N}=2$ superconformal symmetry, up to cubic fermion terms, is given by
\begin{eqnarray}
\delta_{\rm sc}\Phi^\alpha &=& \bar{\epsilon}\,\psi^\alpha\ ,
\nonumber\\[.5ex]
\delta_{\rm sc} \psi^\alpha &=& \partial_\mu\Phi^\alpha\,\gamma^\mu\epsilon
-G^{\alpha\bar \alpha}\,\partial_{\bar \alpha} {\cal W}^*\,B^*\epsilon^{*}
+\ft12G^{\alpha\bar \alpha}\,\partial_{\bar \alpha} {\cal K}^*\,\eta\ ,
\label{N2_globalsusy}
\end{eqnarray}
where $\epsilon$ is a conformal Killing spinor satisfying \eq{conformalspinor}
in which both $\epsilon$ and $\eta$ are Dirac spinors,
and $B$ is a constant matrix defined in appendix A.
The K\"ahler cone structure is exhibited by splitting the complex coordinates on
$\mathcal{M}$ as $\Phi^\alpha = \{z, \phi^i\}\equiv\{r e^{i\tau} e^{-K(\phi,\phi^{\!*})/2}, \phi^i\}$
such that the K\"ahler potential takes the form
\begin{eqnarray}
{\cal K}(\Phi, \Phi^{\!*}) &=&
r^2 ~=~ |z|^2\,e^{K(\phi,\phi^{\!*})} \;.
\label{Kpotential}
\end{eqnarray}
The target space metric then takes the form
\begin{equation}
\begin{split}
G_{\alpha\bar \alpha} \,d\Phi^\alpha d\Phi^{\bar \alpha} =&
(\partial_\alpha \partial_{\bar \alpha} {\cal K})\,d\Phi^\alpha d\Phi^{\!*\,\bar \alpha}
~=~
dr^2+r^2 \left[ (d\tau-\ft12{\cal Q})^2
+ g_{i\bar\jmath}\,d\phi^i d{\phi}^{*\,\bar\jmath}\right]\ ,
\label{Kcone}
\end{split}
\end{equation}
with the connection $ {\cal Q} =i (\partial_i K d \phi^i-\partial_{\bar\imath}K\,d {\phi}^{*\,\bar\imath})$,
and where $g_{i\bar\jmath}=\partial_i \partial_{\bar\jmath}K$\, is a K\"ahler metric on the
$(n-1)$ dimensional complex manifold parametrized by the $\phi^i$.
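The coordinate change leading to (\ref{Kcone}) can also be checked symbolically. In the following minimal sympy sketch (our own illustration) we take a one-dimensional base, treat $\phi$ and $\bar\phi$ as formally independent symbols, and leave the base K\"ahler potential $K$ arbitrary:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
tau = sp.symbols('tau', real=True)
p, pb = sp.symbols('p pb')                        # phi and its formal conjugate
dr, dtau, dp, dpb = sp.symbols('dr dtau dp dpb')  # coordinate differentials

Kb = sp.Function('K')(p, pb)                      # arbitrary base Kahler potential

# z = r e^{i tau} e^{-K/2}, so that calK = |z|^2 e^K = r^2
z  = r*sp.exp( sp.I*tau - Kb/2)
zb = r*sp.exp(-sp.I*tau - Kb/2)

# Hessian of calK = Z Zb e^{K(P,Pb)} in the holomorphic coordinates (Z, P)
Z, Zb, P, Pb = sp.symbols('Z Zb P Pb')
calK = Z*Zb*sp.exp(sp.Function('K')(P, Pb))
sub = {Z: z, Zb: zb, P: p, Pb: pb}
G = {(a, b): sp.diff(calK, a, b).subs(sub) for a in (Z, P) for b in (Zb, Pb)}

# differentials of z, zb induced by the change of coordinates
dz  = sum(sp.diff(z,  x)*dx for x, dx in [(r, dr), (tau, dtau), (p, dp), (pb, dpb)])
dzb = sum(sp.diff(zb, x)*dx for x, dx in [(r, dr), (tau, dtau), (p, dp), (pb, dpb)])

ds2 = (G[(Z, Zb)]*dz*dzb + G[(Z, Pb)]*dz*dpb
       + G[(P, Zb)]*dp*dzb + G[(P, Pb)]*dp*dpb)

# expected cone form: dr^2 + r^2 [ (dtau - Q/2)^2 + g dphi dphibar ]
Q = sp.I*(sp.diff(Kb, p)*dp - sp.diff(Kb, pb)*dpb)
g = sp.diff(Kb, p, pb)
target = dr**2 + r**2*((dtau - Q/2)**2 + g*dp*dpb)

res = sp.expand(sp.powsimp(sp.expand(ds2 - target)))
assert sp.simplify(res) == 0
```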
In these coordinates, the holomorphic superpotential takes the form
\begin{eqnarray}
{\cal W}(\Phi) &=& z^4\, W(\phi)\ ,
\label{superpotential}
\end{eqnarray}
where $W(\phi)$ is an arbitrary holomorphic function of the coordinates $\phi^i$.
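This form follows directly from the homogeneity condition: in the adapted coordinates the Euler vector reads $r\,\partial_r = z\,\partial_z+ \bar z\,\partial_{\bar z}$ (at fixed $\phi^i$), so that (\ref{homF}) applied to ${\cal F}={\cal W}+{\cal W}^{*}$ reduces to
\begin{eqnarray}
z\,\partial_z\, {\cal W} &=& 4\, {\cal W}\;,
\end{eqnarray}
whose general solution is (\ref{superpotential}).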
\section{Building blocks for superconformal gaugings}
In this and the next section we discuss all possible gaugings of an ${\cal N}$--supersymmetric sigma--model which are compatible with the extended superconformal symmetry. In our set--up the gauge vectors $A_\mu^m$ enter the Lagrangian only through the covariant derivatives and the Chern--Simons terms. This is no loss of generality: any Lagrangian (at most quadratic in the vectors' field strengths) can be put in this canonical form by a generalized duality transformation \cite{Nicolai:2003bp,deWit:2003ja}, at the price of allowing, possibly, for non--reductive gauge groups\footnote{However, in the \textit{superconformal} ${\cal N}\geq 3$ case the gauge group $G$ should be compact, hence reductive. See section \ref{solution}. }.
Again, we have two possible strategies at our disposal: either we perform a direct construction, or we start from supergravity and take a rigid limit. Both approaches lead to the same conclusions, and there is a beautiful interplay between the geometric structures of the gauged superconformal and supergravity theories. We start with the direct approach and then relate our findings to supergravity in section~\ref{sec:sugra}.
\subsection{The isometry group}\label{secisometry}
The isometry group, $\mathrm{Iso}(\mathcal{M})$, of a manifold $\mathcal{M}$ which is both a \textit{cone} and ${\cal N}$\textit{--K\"ahler} enjoys special properties.
Indeed, for ${\cal N}\geq 3$,
\begin{equation}\label{isotheorem}
\begin{matrix}\text{the }{\cal N}\text{--K\"ahler manifold}\\
\mathcal{M}\ \text{is a cone}\end{matrix}\ \ \Longleftrightarrow\ \ \ \begin{matrix}\mathfrak{spin}({\cal N})\subset \mathfrak{iso}(\mathcal{M})\\
\text{with the } \omega^{MN}\ \text{transforming}\\ \text{in the adjoint representation}
\end{matrix}
\end{equation}
while for ${\cal N}=2$ only the arrow $\Rightarrow$ makes sense since the statement in the \textsc{rhs} about the $\omega^{MN}$'s is empty in this case.
\smallskip
The meaning of this geometric theorem is quite transparent: if $\mathcal{M}$ is a cone, the corresponding $\sigma$--model is conformal, while, if $\mathcal{M}$ is ${\cal N}$--K\"ahler, the $\sigma$--model has ${\cal N}$--extended \textsc{susy}; hence, if $\mathcal{M}$ enjoys both properties, the $\sigma$--model should have a full ${\cal N}$--extended superconformal invariance. In particular, it must have a $\mathrm{Spin}({\cal N})$ $R$--symmetry, which should act on the scalars $\Phi^\alpha$ through isometries of $\mathcal{M}$. It is quite remarkable that for ${\cal N}\geq 3$ the converse is also true: ${\cal N}$--\textsc{susy} and a global $\mathrm{Spin}({\cal N})$ $R$--symmetry together imply conformal invariance!
The proof of \eqref{isotheorem} is straightforward. For one direction ($\Rightarrow$), consider the vectors
\begin{equation}
K^{MN}_\alpha={(\omega^{MN})_\alpha}^\beta\, \mathcal{K}_\beta.
\end{equation}
They are obviously Killing vectors, since $D_\alpha K_\beta^{MN}=\omega^{MN}_{\beta\gamma}\,D_\alpha\mathcal{K}^\gamma
=\omega^{MN}_{\beta\alpha}=-\omega^{MN}_{\alpha\beta}$, and belong manifestly to the adjoint of $\mathfrak{spin}({\cal N})$. For the other direction, recall that a Riemannian manifold $\mathcal{M}$ is a cone iff there is a global function ${\cal K}$ such that
\begin{equation}
\,G_{\alpha\beta}=\,D_\alpha\partial_\beta {\cal K},\qquad \text{and}\qquad\partial^\alpha {\cal K}\,\partial_\alpha {\cal K}=2\, {\cal K},\label{conicalcondition}
\end{equation}
generalizing the K\"ahler potential of sect.\,\ref{sec:n2}.
Assume the ${\cal N}$--K\"ahler manifold $\mathcal{M}$ has a $\mathrm{Spin}({\cal N})$ isometry under which the closed $2$--forms $\omega^{MN}$ transform according to the adjoint representation, and let $K_\alpha^{MN}$ be the corresponding Killing vectors. We claim that the function\footnote{Here and in the following, the $SO({\cal N})\simeq \mathrm{Spin}({\cal N})$ indices $M,N,P,\dots$ are raised and lowered with the Kronecker metric $\delta^{MN}$ and $\delta_{MN}$; hence, as a rule, we shall not distinguish between upper and lower $SO({\cal N})$ indices. }
\begin{equation}\label{CKfunction}
{\cal K}= \frac{1}{2{\cal N}({\cal N}-1)}K^{MN\,\alpha}\,K^{MN}_\alpha
\end{equation}
satisfies eqns.\eqref{conicalcondition} and hence $\mathcal{M}$ is a (global) cone.
Indeed, consider the antisymmetric tensor $D_\alpha K_\beta^{MN}$. It is a $2$--form which transforms according to the adjoint of $\mathrm{Spin}({\cal N})$; moreover, by a theorem of Kostant \cite{kostant}, the forms $D_\alpha K_\beta^{MN}$ are covariantly constant\footnote{Consider the `flat' index object $V^{\alpha}_{aA}V^{\beta}_{aB}\,D_\alpha K_\beta^{MN}$. It should be an invariant $\mathrm{Spin}({\cal N})$ tensor, and hence of the form $f\, \eta_{ab}(\Sigma^{MN})_{AB}$ for some function $f$. In form notation, this is $dK^{MN}=f\,\omega^{MN}$. Taking the derivative of both sides one gets $df\wedge \omega^{MN}=0$, which implies that $f$ is a constant since $\omega^{MN}$ is non--degenerate.}. Then $D_\alpha K_\beta^{MN}$ should coincide with the symplectic form
$\omega^{MN}_{\alpha\beta}$ up to an overall normalization. We fix the relative normalization to $1$:
\begin{equation}
D_\alpha K_\beta^{MN}=\omega^{MN}_{\alpha\beta}\ .
\label{dak}
\end{equation}
Consider the vector
\begin{equation}
\begin{split}
{\cal K}_\alpha &=\frac{1}{{\cal N}({\cal N}-1)}{(\omega^{MN})_\alpha}^\beta\, K^{MN}_\beta=\frac{1}{{\cal N}({\cal N}-1)} (D_\alpha K^{MN\, \beta})\, K^{MN}_\beta=\\
&=\partial_\alpha\left(\frac{1}{2{\cal N}({\cal N}-1)} K^{MN\, \beta}\, K^{MN}_\beta\right)\equiv \partial_\alpha {\cal K}.
\end{split}
\end{equation}
One then has the identity
\begin{equation}\begin{split}
D_\alpha\partial_\beta {\cal K}&= D_\alpha {\cal K}_\beta = \frac{1}{{\cal N}({\cal N}-1)}{(\omega^{MN})_\beta}^\gamma\,D_\alpha K^{MN}_{\gamma}=\\
&= \frac{1}{{\cal N}({\cal N}-1)}{(\omega^{MN})_\beta}^\gamma\, \omega^{MN}_{\alpha\gamma}=G_{\alpha\beta}
\end{split}\end{equation}
where in the last equality we used the definition of $\omega^{MN}$, eqn.~\eqref{kahlerforms}, and the $\mathrm{Spin}({\cal N})$ identity
\begin{equation}
{(\Sigma^{MN})^A}_C\,{(\Sigma^{MN})^C}_B=-{\cal N}({\cal N}-1)\, {\delta^A}_B.
\end{equation}
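The normalization of this identity can be checked numerically for ${\cal N}=3$, where the basic spinorial representation is two--dimensional. The sketch below is our own illustration and assumes the normalization $\Sigma^{MN}=-i\,\epsilon^{MNP}\sigma_P$ (which is precisely the one fixed by the identity); the sum runs over all ordered pairs $(M,N)$:

```python
import numpy as np

# Pauli matrices
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

# totally antisymmetric epsilon^{MNP}
eps = np.zeros((3, 3, 3))
for M, N, P in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[M, N, P], eps[N, M, P] = 1.0, -1.0

# assumed normalization of the spin(3) generators in the 2-dim spinor rep
Sigma = -1j*np.einsum('mnp,pab->mnab', eps, sigma)

calN = 3
quad = np.einsum('mnab,mnbc->ac', Sigma, Sigma)  # sum over ordered pairs (M,N)
assert np.allclose(quad, -calN*(calN - 1)*np.eye(2))
```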
In particular, ${\cal K}_\alpha$ is a \textit{conformal} Killing vector,
\begin{equation}
\hbox{\Large\textit{\pounds}}_{\cal K} \,G_{\alpha\beta}=2\, G_{\alpha\beta}\qquad\Longrightarrow\qquad\hbox{\Large\textit{\pounds}}_{\cal K}\, {\cal K}=2\, {\cal K},
\end{equation}
\textit{i.e.}\! $G^{\alpha\beta}\partial_\alpha {\cal K}\partial_\beta {\cal K}= 2\,{\cal K}$, and $\mathcal{M}$ is indeed a cone\footnote{The special case ${\cal N}=3$ is a central result in the theory of $3$--Sasakian manifolds:
\emph{$\mathcal{M}$ hyperk\"ahler with a $Spin(3)$ isometry rotating the $3$ complex structures $\Leftrightarrow$ $\mathcal{M}$ is a metric cone over a $3$--Sasakian manifold.} See \cite{swann} and \cite{boyer} (especially
\textbf{proposition 1.6} and \textbf{theorem A}).}.
\smallskip
For our purposes, the above geometric theorem has two useful applications: first of all, it gives us an explicit formula for the coupling function ${\cal K}$,
eqn.~\eqref{CKfunction}. Secondly, it leads to a supersymmetric \textit{non--renormalization} theorem, see \S.\,\ref{sec:nonrem}.
\smallskip
\subsection{Gaugeable isometries}
Our goal is to gauge a subgroup $G$ of the isometry group $\mathrm{Iso}(\mathcal{M})$ of the target space $\mathcal{M}$ in a ${\cal N}$--superconformal way.
In general, not all isometries of the target manifold can be gauged in a supersymmetric way, but only those belonging to the subgroup
$\mathrm{Iso}(\mathcal{M})_0$ of the \textit{multi--symplectic} isometries, namely those generated by Killing vectors $K_\alpha^m$ leaving invariant the ${\cal N}({\cal N}-1)/2$
symplectic structures $\omega^{MN}$,
\begin{equation}\label{isozeroK}
K^m \in \mathfrak{iso}(\mathcal{M})_0\ \ \Longleftrightarrow \ \ \hbox{\Large\textit{\pounds}}_{K^{m}}\,\omega^{MN}=0\ .
\end{equation}
In the ${\cal N}=2$ case, the subgroup $\mathrm{Iso}(\mathcal{M})_0$ corresponds precisely to the group of \textit{holomorphic} isometries.
\smallskip
Conformal invariance, even in the purely bosonic context, puts other conditions on the allowed gaugings. Indeed, the conformal Lagrangian contains the coupling $-\mathcal{K}\, R^{(h)}$; we can gauge (in a conformal way) only the global symmetries which leave invariant this term, that is isometries contained in the subgroup of $\mathrm{Iso}(\mathcal{M})$ generated by Killing vectors $K^m_\alpha$ such that \cite{Sezgin:1994th}
\begin{equation}\label{seztaniicond}
\hbox{\Large\textit{\pounds}}_{K^m} \mathcal{K}= i_{\mathcal{E}}K^m=0\quad \Leftrightarrow\quad \hbox{\Large\textit{\pounds}}_{\mathcal{E}}K^m=0,
\end{equation}
where $\mathcal{E}\equiv \partial^\alpha\mathcal{K}\,\partial_\alpha$
is the \textit{concurrent}\footnote{A vector field $V_\alpha$ is called \textit{concurrent} if $G_{\alpha\beta}=D_\alpha V_\beta$. Then $V_\alpha$ is automatically both a gradient and a homothetic (conformal) Killing vector.} vector field, corresponding to the Euler vector $r\,\partial_r$, whose existence characterizes the conical metrics.
The two conditions in eqn.~\eqref{seztaniicond} are equivalent for $\mathcal{E}$ the concurrent vector of a conic geometry. Indeed,
\begin{equation}
\hbox{\Large\textit{\pounds}}_{\mathcal{E}}K^m=-\hbox{\Large\textit{\pounds}}_{K^m}{\mathcal{E}}= -\,\mathrm{grad}\, \hbox{\Large\textit{\pounds}}_{K^m}\mathcal{K}=0.
\end{equation}
The geometric meaning of the conditions \eqref{seztaniicond} is evident: we can gauge in a conformal way only the isometries $\mathrm{Iso}(B)$ of the base $B$ of the cone $\mathcal{M}$.
In conclusion, the maximal subgroup that can be gauged in an ${\cal N}$--superconformal way is contained in $\mathrm{Iso}(B)_0$, namely the isometries of the base $B$ which act multi--symplectically on the total cone $\mathcal{M}$. The Kostant theorem \cite{kostant} gives
\begin{equation}\label{eq:splitsiso}
\mathfrak{iso}(B)=\mathfrak{spin}({\cal N})\oplus \mathfrak{iso}(B)_0.
\end{equation}
We stress that, in particular, this means that the $R$--symmetry cannot be gauged in a superconformal way. Besides the general constraint $G\subset \mathrm{Iso}(B)_0$, ${\cal N}$--extended superconformal invariance requires the gauge group to satisfy some specific conditions to be discussed in detail in sect.\,\ref{supercongaugings} below.
For ${\cal N}\geq 3$, the gauge group $G\subset \mathrm{Iso}(B)_0$ is necessarily compact. The same is true for ${\cal N}=2$ if we assume $\mathcal{M}$ to be Ricci--flat (namely a Calabi--Yau cone, whose base is a Sasaki--Einstein manifold \cite{sasakian}) as it should be for a conical $\sigma$--model in order to be conformal at the quantum level\footnote{This statement holds for $\mathcal{M}$ \textit{a cone.} The full quantum ${\cal N}=2$ theory may have other non--trivial RG fixed points not based on conical manifolds. This cannot happen for ${\cal N}\geq 3$, where the classical geometric condition for superconformal invariance, \textit{i.e.}\! $\mathcal{M}$ a cone, is expected to hold at the full quantum level. See section~\ref{sec:nonrem}.}.\smallskip
There is a simple geometric reason for this. If $\mathcal{M}$ is any metric cone, $ds^2=dr^2+r^2\,g_{ij}(y)\,dy^{i}\,dy^{j}$, its Riemann tensor has the form
\begin{gather}\label{eq:riemanntens}
R_{ijk r}=R_{i r j r}=0\\
{R_{ijk}}^\ell ={R_{ijk}}^\ell\Big|_{B}-g_{ik}\,\delta^{\ell}_{j}+
g_{jk}\,\delta^{\ell}_{i}\ ,
\end{gather}
where $i,j,k,\ell$ label the directions tangent to the base $B$ (\textit{i.e.}\! orthogonal to the Euler vector $\mathcal{E}\equiv r\partial_r$) and
$g_{ij}$ and ${R_{ijk}}^\ell\Big|_{B}$ are, respectively, the metric and the Riemann tensor of the base $B$. Hence
\begin{equation}\label{relationricci}
R_{ij}\Big|_\mathcal{M}=R_{ij}\Big|_B-(\dim B-1)\, g_{ij}\ .
\end{equation}
Now, for ${\cal N}\geq 3$, any ${\cal N}$--K\"ahler manifold is, in particular, Ricci--flat. Then eqn.~\eqref{relationricci} implies that the base $B$ is an Einstein space, $R_{ij}=\Lambda\, g_{ij}$, with positive `cosmological constant' $\Lambda\equiv (\dim B-1)$. Then Myers' theorem (ref.\,\cite{besse} \textbf{theorem 6.51}, or ref.\,\cite{Boyer:1998sf}) implies that the base $B$, if complete, should be \textit{compact} with a diameter
\begin{equation}
\mathrm{diam}(B)\leq \pi\ .
\end{equation}
This means that the group of isometries which is relevant for the superconformal gaugings, $\mathrm{Iso}(B)_0$, is also compact (ref.\,\cite{besse} \textbf{corollary 1.78}). The gauge group, $G$, being a closed subgroup of $\mathrm{Iso}(B)_0$, should also be compact, and hence reductive. Thus, in the superconformal case ${\cal N}\geq 3$ (and also ${\cal N}=2$ if we require Ricci--flatness), no fancy non--reductive/non--compact gauging is allowed.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}\hline
$\mathcal{N}$\ & $B$\ & $R$--symmetry & generic $\mathrm{Iso}(B)$ \\\hline
1 & \emph{any} Riemannian manifold & trivial & $\mathrm{Iso}(B)$ \\\hline
2 & Sasakian manifold & $Spin(2)$ & $Spin(2)\times \mathrm{Iso}(B)_0$ \\\hline
3 & $3$--Sasakian manifold & $Spin(3)$ & $Spin(3)\times \mathrm{Iso}(B)_0$ \\\hline
4 & `$4$--Sasakian' manifold\ \ ($\ast$) & $Spin(4)$ &$Spin(2)\times Spin(2)\times \mathrm{Iso}(B)_0$\\\hline
$\geq 5$ & $S^{n-1}/\Gamma\qquad\ (\ast\ast)\ \ $ & $Spin(\mathcal{N})$ & commutant of $\Gamma$ in $SO(n)$ \\\hline
\end{tabular}
\end{center}
\caption{\footnotesize{Bases $B$ of the ${\cal N}$--K\"ahlerian cones. The $4$--Sasakian manifolds are $3$--Sasakian manifolds with, possibly, a special action of the $Spin(4)$ isometry group. In the last row, $n=8 m$ with $m$ the number of `supermultiplets' and $\Gamma$ is a discrete subgroup of
$SO(n)$ commuting with the action of $Spin({\cal N})$.}}
\label{table}
\vglue 12pt
\end{table}
\subsection{Momentum maps}
In the ${\cal N}=2$ case ($\mathcal{M}$ K\"ahler), it is well known that the full \textsc{susy} completion of the gauge interactions (including Yukawa and potential terms) for a general $\sigma$--model may be conveniently encoded in the momentum map of the isometry subgroup $G$ to be gauged.
Here we generalize the momentum map method to any ${\cal N}$--K\"ahler manifold.
In rigid supersymmetry not all symmetries can be gauged, but only those commuting with the supercharges. As we already mentioned, geometrically this means that the gauge group should be a subgroup of $\mathrm{Iso}(\mathcal{M})_0$, the group of isometries commuting with the $R$--symmetry $\mathrm{Spin}({\cal N})$. This subgroup is the exponential of the algebra $\mathfrak{iso}(\mathcal{M})_0$ generated by the Killing vectors $K_\alpha^m$ satisfying eqn.~\eqref{isozeroK}. Identifying our symplectic structures $\omega^{MN}$ with the Lie algebra $\mathfrak{spin}({\cal N})$, the momentum map can be seen as a map $\mu\colon \mathfrak{iso}(\mathcal{M})_0\rightarrow \mathfrak{spin}({\cal N})$. Concretely, one defines functions $\mu^{MN\,m}$ on $\mathcal{M}$ from the condition\footnote{In general, $\mu^{MN\, m}$ is defined only locally and it is defined up to the addition of a constant. As we shall see momentarily, in the superconformal case, there is a unique global (preferred) definition of $\mu^{MN\,m}$.}
\begin{equation}\label{defmommap}
0=\hbox{\Large\textit{\pounds}}_{K^m}\omega^{MN}= d\big(i_{K^m}\omega^{MN}\big)\ \Longrightarrow \ \omega^{MN}_{\alpha\beta}\,K^{m\,\beta}=-\partial_\alpha\mu^{MN\, m}.
\end{equation}
In general, the ${\cal N}$--supersymmetric gauge coupling of a $\sigma$--model can be described completely in terms of the functions $\mu^{MN\,m}$.
The situation in the superconformal case is simpler, since we have an explicit \textit{global} expression for $\mu^{MN\, m}$. Indeed, recall from \S.\,\ref{secisometry} that, if $\mathcal{M}$ is a cone, we have Killing vectors $K^{MN}_{\alpha}$ generating the $R$--isometry group $\mathrm{Spin}({\cal N})$. We claim that
\begin{equation}\label{mucone}
\mu^{MN\, m}=-\frac{1}{2} K^{MN\,\alpha}\, K^m_\alpha\ .
\end{equation}
Indeed, it is straightforward to show that $\mu^{MN\,m}$ constructed in this way satisfies \eq{defmommap}, recalling $D_\alpha K_\beta^{MN}=\omega^{MN}_{\alpha\beta}$ and using \eq{dak} and $\hbox{\Large\textit{\pounds}}_{K^{AB}}K^m=0$ for $K^m\in\mathfrak{iso}(B)_0$, cfr.\! \eqref{eq:splitsiso}.
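In more detail, the check goes as follows (using only the relations just quoted):
\begin{equation*}
\partial_\alpha\mu^{MN\,m}=-\frac{1}{2}\big(D_\alpha K^{MN}_\beta\big)K^{m\,\beta}-\frac{1}{2}\,K^{MN\,\beta}D_\alpha K^m_\beta=
-\frac{1}{2}\,\omega^{MN}_{\alpha\beta}\,K^{m\,\beta}+\frac{1}{2}\,K^{MN\,\beta}D_\beta K^m_\alpha\ ,
\end{equation*}
where in the second term we used the Killing property $D_\alpha K^m_\beta=-D_\beta K^m_\alpha$. By $\hbox{\Large\textit{\pounds}}_{K^{MN}}K^m=0$ one has $K^{MN\,\beta}D_\beta K^m_\alpha=K^{m\,\beta}D_\beta K^{MN}_\alpha=K^{m\,\beta}\,\omega^{MN}_{\beta\alpha}$, so that altogether $\partial_\alpha\mu^{MN\,m}=-\omega^{MN}_{\alpha\beta}\,K^{m\,\beta}$, which is precisely the defining condition \eqref{defmommap}.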
Another important property of the momentum map is that the \textit{complex function}
\begin{equation}
\mu^{NP\, m}+i\,\mu^{MP\, m}\qquad \text{$M$, $N$, $P$ all distinct!}
\end{equation}
is holomorphic with respect to the complex structure ${(\omega^{MN})_\alpha}^\beta$, in the sense that it is annihilated by the Dolbeault--type derivative $(\partial_\alpha-i{(\omega^{MN})_\alpha}^\beta\partial_\beta)$ associated with this complex structure:
\begin{equation} \big(\partial_\alpha-i{(\omega^{MN})_\alpha}^\beta\partial_\beta\big)\big(\mu^{NP\,m}+i\mu^{MP\, m}\big)=0\ ,
\label{hol}
\end{equation}
with NO sum over repeated capital indices. This follows from the computation of $(\omega^{MN})_\alpha{}^\beta\,\partial_\beta\mu^{PQ\, m}$ for $Q=M$, with $M,N,P$ taken to be \textit{all distinct}, and using the fact that $(\omega^{MN})_{\alpha\beta}$ are the generators of $\mathfrak{spin}({\cal N})$ satisfying \eq{eq:cliffordsismasignma}. The result \eq{hol} is needed in establishing key properties of the \textit{$T$--tensor}, which turns out to encode the gauge--induced couplings, as will be discussed in the next section.
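For the reader's convenience, we sketch the computation; we use the product rule $\omega^{MN}\omega^{NP}=\omega^{MP}$, valid for $M,N,P$ all distinct, which we take to follow from \eq{eq:cliffordsismasignma}. From \eqref{defmommap}, $\partial_\beta\mu^{PQ\,m}=-\omega^{PQ}_{\beta\gamma}K^{m\,\gamma}$, so
\begin{equation*}
{(\omega^{MN})_\alpha}^\beta\,\partial_\beta\mu^{NP\,m}=-\big(\omega^{MN}\omega^{NP}\big)_{\alpha\gamma}K^{m\,\gamma}=-\,\omega^{MP}_{\alpha\gamma}K^{m\,\gamma}=\partial_\alpha\mu^{MP\,m}\ ,
\end{equation*}
and, in the same way, ${(\omega^{MN})_\alpha}^\beta\,\partial_\beta\mu^{MP\,m}=-\partial_\alpha\mu^{NP\,m}$ (since $\omega^{MN}\omega^{MP}=-\omega^{NP}$). Substituting these two relations into the left--hand side of \eq{hol}, the four terms cancel pairwise.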
\subsection{The embedding tensor $\Theta_{mn}$ and the $T$--tensor}\label{embeddingsect}
We wish to gauge a subgroup $G\subset \mathrm{Iso}(B)_0$ in a superconformal manner. This is conveniently done by introducing an `embedding tensor'
$\Theta_{mn}\colon \mathfrak{iso}(B)_0^\vee\rightarrow \mathfrak{iso}(B)_0$, as is customary in gauged supergravity~\cite{Nicolai:2000sc}. The gauged Lie subalgebra $\mathfrak{g}$ corresponds to the image of $\Theta_{mn}$ in $\mathfrak{iso}(B)_0$. In terms of $\Theta_{mn}$, the infinitesimal gauge transformation on a generic field $\boldsymbol{\Phi}$ reads
\begin{equation}\label{gaugetransfromaz}
\delta_{\Lambda}\boldsymbol{\Phi} = \Lambda^{m}(x)\, \Theta_{mn}\,\hbox{\Large\textit{\pounds}}_{K^n}\boldsymbol{\Phi},
\end{equation}
where $\Lambda^m(x)$ are $x$--dependent parameters. Correspondingly, the gauge--covariant derivative takes the form
\begin{equation}
{\cal D}_\mu\boldsymbol{\Phi} = D_\mu\boldsymbol{\Phi}-A^m_\mu\, \Theta_{mn}\, \hbox{\Large\textit{\pounds}}_{K^n}\boldsymbol{\Phi},
\end{equation}
where $D_\mu$ is the covariant derivative appropriate for the field $\boldsymbol{\Phi}$ in the corresponding ungauged model. The closure of the transformations \eqref{gaugetransfromaz} requires that the range $\mathfrak{g}$ of $\Theta_{mn}$ is a Lie \emph{sub}algebra of $\mathfrak{iso}(B)_0$. Then $\Theta_{mn}$ is automatically a $\mathfrak{g}$--invariant tensor.\smallskip
The possible gaugings of a supersymmetric model are in one--to--one correspondence with the allowed embedding tensors $\Theta_{mn}$. Then the question of which gaugings are compatible with ${\cal N}$--superconformal invariance boils down to the classification of the corresponding embedding tensors
$\Theta_{mn}$.
\smallskip
There are various techniques for finding the conditions that $\Theta_{mn}$ should satisfy. The crucial observation, however, is that geometrically we have just one canonical $\mathfrak{g}$--invariant function on $\mathcal{M}$ which may encode the gauge--induced physical couplings, namely
\begin{equation}
T^{MN, PQ}=\mu^{MN\,m}\,\Theta_{mn}\,\mu^{PQ\,n}.
\end{equation}
Borrowing from the supergravity language \cite{Nicolai:2000sc,deWit:2003ja}, we shall call $T^{MN, PQ}$ the \textit{$T$--tensor.} All terms in the ${\cal N}$--supersymmetric completion of the gauge couplings should be encoded in the $T$--tensor in a universal (\textit{i.e.}\! model--independent) way.
\smallskip
The $T$--tensor satisfies very interesting differential identities. Assuming that $\Theta_{mn}$ is symmetric, $\Theta_{mn}=\Theta_{nm}$ (as we shall see momentarily), one has in particular:
\begin{equation}
\partial_\alpha\big(T^{MP,MP}-T^{NP,NP}\big)=2\, {(\omega^{MN})_\alpha}^\beta\,\partial_\beta T^{MP,NP}\ ,
\label{eq:firstderTidentiti}
\end{equation}
where $M$, $N$, $P$ are \textit{all distinct} and there is NO sum over repeated $\mathrm{Spin}({\cal N})$ indices. This identity readily follows from the observation that fixing $M$ and $N$ (distinct), and letting $P,Q$ be two indices which are not equal to $M$ nor $N$ (we do not exclude the case $P=Q$), the expression
\begin{equation}
\begin{split}
\big(\mu^{NP\, m}+i\, \mu^{MP\, m}\big)&\,\Theta_{mn}\,\big(\mu^{NQ\, n}+i\, \mu^{MQ\, n}\big)=\\
&=\big(T^{NP,NQ}-T^{MP,MQ}\big)+i\big(T^{MP,NQ}+T^{NP,MQ}\big)
\end{split}
\end{equation}
is holomorphic with respect to the complex structure ${(\omega^{MN})_\alpha}^\beta$, in view of \eq{hol}.
Another useful identity of the same kind is
\begin{equation}
{(\omega^{MN})_\alpha}^\gamma\left\{K^m_\gamma\,\Theta_{mn}\, K^n_\beta+D_\beta\partial_\gamma\left(\frac{1}{2}\, T^{MN,MN}\right)\right\}+
(\alpha\leftrightarrow \beta)=0\ ,
\label{Tidentityfirst}
\end{equation}
where, again, NO sum over $M, N$ is implied. Contracting the identity \eq{Tidentityfirst} with $(\omega^{MN})_\delta^{\ \beta}$ we get
\begin{multline}
\qquad\quad(\omega^{MN})_\alpha^{\ \gamma}\,(\omega^{MN})_\beta^{\ \delta}\,\left[K_\gamma^{m}\,\Theta_{mn}\, K^{n}_\delta+D_\gamma\partial_\delta\!\left( \frac{1}{2}T^{MN, MN}\right)
\right]=\\
= K^m_\alpha\,\Theta_{mn}\, K^n_\beta + D_\alpha\partial_\beta\!\left( \frac{1}{2}T^{MN,MN}\right)\ ,\qquad \text{NO sum over } M, N\ .
\label{eq:firstTident}
\end{multline}
This identity, and \eq{eq:firstderTidentiti}, will be needed in establishing
the criteria for enhanced ${\cal N}$--superconformal symmetry in \S.\,\ref{supercongaugings}.
\subsection{A quantum non--renormalization theorem}\label{sec:nonrem}
We stress that the superconformal invariance persists at the full quantum level for ${\cal N}\geq 3$ (modulo the problem with the singularity at the tip of the cone). This is known to be the case for flat target space \cite{Gaiotto:2007qi,Avdeev:1991za,Avdeev:1992jt}. Here, we can generalize this result to the case of non-flat targets by noting that the geometric theorem proven in sect.\,\ref{secisometry} (eqn.~\eqref{isotheorem}) can be interpreted as a supersymmetric \textit{non--renormalization} theorem: quantum corrections are not expected, in $D=3$, to spoil either supersymmetry or the global $\mathrm{SO}({\cal N})$ symmetry; but the two together imply that the target metric is conic, so the conical nature of $G_{\alpha\beta}$ is preserved by the quantum corrections. But conicity is, geometrically, the landmark of conformal invariance in $D=3$. Note that here we get a non--renormalization theorem for the K\"ahler potential, rather than for the superpotential as usual.
\section{Superconformal gaugings and Chern--Simons \hfill\break interactions} \label{sec:CSI}
Using the results of the previous section, we are now ready to find all possible superconformal gaugings. We shall start with the ${\cal N}=1$ case.
We shall then study the criteria for the enhancement of the ${\cal N}=1$ conformal supersymmetry to any ${\cal N}$. As the target manifolds are necessarily flat for ${\cal N}>4$ and the models for those cases are already known explicitly, we shall highlight the models with non-flat target manifolds for ${\cal N} \le 4$, and
present in detail those with ${\cal N}=1,2$ conformal supersymmetry.
\subsection{${\cal N}=1$ theories}
In accordance with the formalism presented in the previous section, the
${\cal N}=1$ gauged superconformal model has the Lagrangian
\begin{eqnarray}
e^{-1}\,{\cal L}_0
&=& -\ft18 V R^{(h)} -\ft12 h^{\mu\nu}\, {\cal D}_\mu \Phi^\alpha\, {\cal D}_\nu \Phi^\beta \,G_{\alpha\beta}
+\ft12 \, \overline{\psi}{}^{\alpha} \gamma^\mu {\cal D}_\mu\psi^{\beta} \, G_{\alpha\beta}
\nonumber\\[.5ex]
&&{}
- \ft12 \left( \Theta_{mn}\, K^m_\alpha\, K^n_\beta\, +D_\alpha\partial_\beta {\cal F} \right) \, \overline{\psi}^\alpha \psi^\beta
- G^{\alpha\beta}\,\partial_\alpha {\cal F}\partial_\beta {\cal F}
\nonumber\\[.5ex]
&&{}
+ \varepsilon^{\mu\nu\rho}\,\Theta_{mn}\, A^m_\mu
\left( \partial_\nu A^n_\rho+\ft{1}{3} \Theta_{kp} f^{np}{}_l \,A^k_\nu A^l_\rho \right)\ ,
\label{N1glob_gauged}
\end{eqnarray}
with covariant derivatives
\begin{gather}\label{eq:covariant derivtvies}
{\cal D}_\mu\Phi^\alpha=\partial_\mu\Phi^\alpha-A_{\mu\,m}\,K^{\alpha\,m}\\
{\cal D}_\mu\psi^\alpha= D_\mu\psi^\alpha-A_{\mu\,m}\,D_\beta K^{\alpha\,m}\, \psi^\beta
\end{gather}
where we used the convention $A_{\mu\, m}\equiv \Theta_{nm}\, A^n_\mu$.
The Killing vector fields obey the algebra
\begin{equation}
K^{\alpha m}\,\partial_\alpha K^{\beta n} - K^{\alpha n}\,\partial_\alpha K^{\beta m} ~=~
f^{mn}{}_{k}\,K^{\beta k} \ .
\label{fmnk}
\end{equation}
The embedding tensor $\Theta_{mn}$ encodes all the gauging data: the subgroup $G\subset \mathrm{Iso}(B)_0$ we are gauging, and the level--matrix of the Chern--Simons sector (that is, the gauge couplings). Consistency requires ${\cal F}(\Phi)$ to be a gauge invariant function on $\mathcal{M}$, namely
\begin{equation}
\Theta_{mn}\,\hbox{\Large\textit{\pounds}}_{K^n}{\cal F}=0.
\end{equation}
The action (\ref{N1glob_gauged}) is invariant under the following superconformal transformations:
\begin{eqnarray}
\delta\Phi^\alpha &=& \overline{\epsilon}\,\psi^\alpha\ ,
\nonumber\w2
\delta \psi^\alpha &=& {\cal D}_\mu\Phi^\alpha\,\gamma^\mu\epsilon
- G^{\alpha\beta}\,\partial_{\beta} {\cal F}\,\epsilon + \ft12 G^{\alpha\beta}\partial_\beta V \eta\ ,
\nonumber\w2
\delta A_\mu^m &=&
K^{\alpha m} \, \overline{\epsilon} \gamma_\mu \psi_\alpha
\ ,
\end{eqnarray}
where $V$ is an arbitrary function on the cone.
\subsection{Criteria for enhanced ${\cal N} $--superconformal symmetry}\label{supercongaugings}
In rigid supersymmetry, any ${\cal N}$--extended supersymmetric model can be seen as a special instance of the ${\cal N}=1$ theory. We have already written the most general ${\cal N}=1$ superconformal Chern--Simons--matter theory in terms of two homogeneous (gauge--invariant) functions, ${\cal K}$ and ${\cal F}$, and the gauging data $\Theta_{mn}$. It remains only to find the special functions ${\cal K}$, ${\cal F}$, and gauging data $\Theta_{mn}$ compatible with enhanced ${\cal N}$--superconformal symmetry. For the generalized K\"ahler potential, ${\cal K}$, we already know the answer, eqn.~\eqref{CKfunction}.
\smallskip
In order to enhance the ${\cal N}=1$ superconformal symmetry to an ${\cal N}$--extended one, it is enough to ensure that the $\mathrm{Spin}({\cal N})$ $R$--symmetry is actually a symmetry of the full Lagrangian. The action of $\mathrm{Spin}({\cal N})$ will then produce all the generators of the ${\cal N}$--extended superconformal algebra out of the ${\cal N}=1$ ones.
$\mathrm{Spin}({\cal N})$ is automatically a symmetry of the kinetic terms (since $\mathrm{Spin}({\cal N})$ acts on $\mathcal{M}$ by isometries), as well as a symmetry of the minimal gauge couplings (since we gauge a subgroup of $\mathrm{Iso}(B)_0$, whose generators commute with
$\mathrm{Spin}({\cal N})$), and also a trivial symmetry of the Chern--Simons sector (since the vectors are inert under $\mathrm{Spin}({\cal N})$).
Therefore, to get ${\cal N}$--superconformal invariance, it remains only to enforce the $\mathrm{Spin}({\cal N})$ $R$--symmetry in the Yukawa couplings. Then the invariance of the scalar potential will be automatic by the fundamental principles of supersymmetry.
As we saw above, the Yukawa couplings in the ${\cal N}=1$ case read
\begin{equation}\label{n1compyuk}
\bar\chi^\alpha\big( K_\alpha^m\, \Theta_{mn}\, K^n_\beta+ D_\alpha\partial_\beta{\cal F}\big)\chi^\beta,
\end{equation}
where $\chi^\alpha$ is the susy--partner of the scalar $\Phi^\alpha$.
Assume our Chern--Simons--matter model is invariant under an extended supersymmetry generated by the supercharges $Q^M$, $M=1,\dots, {\cal N}$. We may, in particular, view it as an ${\cal N}=1$ theory with respect to the ${\cal N}=1$ supersymmetry generated by the $M$--th supercharge, $Q^M$. We write $\chi^{M\alpha}$ for the fermionic superpartner of the scalar $\Phi^\alpha$ with respect to this particular ${\cal N}=1$ supersymmetry, \textit{i.e.}\! $\chi^{M\alpha}=[Q^M,\Phi^\alpha]$.
The action of $\mathrm{Spin}({\cal N})$ on the fermions gives the identity
\begin{equation}\label{defagfermions}
\chi^{M\alpha}\equiv {(\omega^{MN})^\alpha}_\beta\,\chi^{N\beta}\qquad \text{NOT summed over $N$!}
\end{equation}
By eqn.~\eqref{n1compyuk}, invariance of the Lagrangian with respect to the $M$--th ${\cal N}=1$ supersymmetry, $Q^M$, requires the Yukawa term to have the form
\begin{equation}
\bar\chi^{M\alpha}\big( K_\alpha^m\, \Theta_{mn}\, K^n_\beta+ D_\alpha\partial_\beta{\cal F}^M\big)\chi^{M\beta}\qquad \text{NOT summed over $M$!},
\end{equation}
for certain (real) superpotentials ${\cal F}^M$ (depending on the index $M$).
Of course, the physical Yukawa interactions cannot depend on which ${\cal N}=1$ sub--supersymmetry we choose to focus on. Equating the physical couplings computed using the ${\cal N}=1$ supersymmetries generated by $Q^M$ and $Q^N$, we get, \textit{for all pairs} $M,N$, the equalities
\begin{equation}
\bar\chi^{M\alpha}\big( K_\alpha^m\, \Theta_{mn}\, K^n_\beta+ D_\alpha\partial_\beta{\cal F}^M\big)\chi^{M\beta}=
\bar\chi^{N\alpha}\big( K_\alpha^m\, \Theta_{mn}\, K^n_\beta+ D_\alpha\partial_\beta{\cal F}^N\big)\chi^{N\beta}\ ,
\end{equation}
or, in view of eqn.~\eqref{defagfermions},
\begin{equation}
\begin{split}
& \big( K_\alpha^m\, \Theta_{mn}\, K^n_\beta+ D_\alpha\partial_\beta{\cal F}^M\big)={(\omega^{MN})_\alpha}^\gamma
{(\omega^{MN})_\beta}^\delta \big( K_\gamma^m\, \Theta_{mn}\, K^n_\delta+ D_\gamma\partial_\delta{\cal F}^N\big)\ ,
\label{consistencycond}
\end{split}
\end{equation}
\textit{with NO sum over repeated capital indices!} A gauging, specified by the embedding tensor $\Theta_{mn}$, has a full ${\cal N}$--invariant completion if and only if there exist superpotentials
${\cal F}^M$, $M=1,2,\dots, {\cal N}$, such that the consistency equation \eqref{consistencycond} holds for all $M$, $N$.
Let ${\cal F}^M$ and $\widetilde{{\cal F}}^M$ be two solutions to the consistency equation \eqref{consistencycond}. One has
\begin{equation}
D_\alpha\partial_\beta\big({\cal F}^M-\widetilde{{\cal F}}^M\big)={(\omega^{MN})_\alpha}^\gamma
{(\omega^{MN})_\beta}^\delta \,D_\gamma\partial_\delta\big({\cal F}^N-\widetilde{{\cal F}}^N\big)
\label{chaucryrieman}
\end{equation}
(again, NO sum over $N$!). This equation, which is independent of the gauging data $\Theta_{mn}$, has a simple interpretation. Recall that,
in the ${\cal N}=2$ case, ${(\omega^{12})_\alpha}^\gamma$ is simply the complex structure of the K\"ahler manifold $\mathcal{M}$. Then, in the ${\cal N}=2$ case, eqn.~\eqref{chaucryrieman} is simply the Cauchy--Riemann equation stating that
${\cal F}^1-\widetilde{{\cal F}}^1$ and ${\cal F}^2-\widetilde{{\cal F}}^2$ are, respectively, the real and imaginary part of a holomorphic function $\mathcal{W}$. In this way, we recover the well--known fact that, in the ${\cal N}=2$ case, the Yukawa couplings arise from two sources: the gauging and the superpotential $\mathcal{W}$ which is an arbitrary holomorphic function (as long as it is gauge invariant). Thus, in that case, the ${\cal F}^M$ are not uniquely determined by the gauging data, and the non--uniqueness is parametrized by a free holomorphic function $\mathcal{W}$, namely the superpotential.
Analogously, for ${\cal N}>2$, eqn.~\eqref{chaucryrieman} states that
${\cal F}^N-\widetilde{{\cal F}}^N$ is the real part of a holomorphic function with respect to \textit{all} the $({\cal N}-1)$ complex structures
${(\omega^{MN})_\alpha}^\gamma$ ($N$ fixed, any $M$). Since, for ${\cal N}\geq 3$, there are no non--trivial such functions, the solution to the consistency equation \eqref{consistencycond}, if it exists, is essentially\footnote{${\cal N}=3$ is somewhat special in that, in some cases, a residual non--uniqueness may still be present. This subtlety is immaterial for the \emph{superconformal} gaugings.} unique. That is: for ${\cal N}\geq 3$ the supersymmetric Lagrangian is fully determined by the geometry of the target space $\mathcal{M}$ and the gauging data $\Theta_{mn}$.
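The absence of such functions for ${\cal N}\geq 3$ can be seen by an elementary argument, which we sketch for completeness. Suppose $f$ is holomorphic with respect to two of these complex structures, ${(\omega^{M_1N})_\alpha}^\beta\partial_\beta f=-i\,\partial_\alpha f$ and ${(\omega^{M_2N})_\alpha}^\beta\partial_\beta f=-i\,\partial_\alpha f$, with $M_1\not=M_2$. The two structures anticommute (again by \eq{eq:cliffordsismasignma}), so $J\equiv\omega^{M_1N}\omega^{M_2N}$ squares to $-\mathbf{1}$, while on $\partial f$
\begin{equation*}
{J_\alpha}^\beta\,\partial_\beta f={(\omega^{M_1N})_\alpha}^\beta\,\big(-i\,\partial_\beta f\big)=(-i)^2\,\partial_\alpha f=-\,\partial_\alpha f\ .
\end{equation*}
Applying ${J_\alpha}^\beta$ once more and using $J^2=-\mathbf{1}$,
$-\partial_\alpha f={(J^2)_\alpha}^\beta\,\partial_\beta f={J_\alpha}^\gamma\big({J_\gamma}^\beta\partial_\beta f\big)=-{J_\alpha}^\gamma\,\partial_\gamma f=+\,\partial_\alpha f$, hence $df=0$: only the constants are holomorphic with respect to two anticommuting complex structures at once.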
\smallskip
It remains to find the solution to eqn.~\eqref{consistencycond}. The solution has a simple formulation in terms of the $T$--tensor $T^{MN,PQ}=\mu^{MN\, m}\,\Theta_{mn}\, \mu^{PQ\,n}$, which is a nice way to summarize the interplay between the geometry of $\mathcal{M}$ and the gauging data $\Theta_{mn}$.
The tensor $T^{MN,PQ}$ decomposes into the \textit{irreducible}
$\mathrm{Spin}({\cal N})$ representations given in terms of $SO({\cal N})$ Young tableaux as
\begin{equation}
\footnotesize{\left(\
\begin{tabular}{|c|}\hline \phantom{A\Big|}\\\hline \phantom{A\Big|}\\\hline\end{tabular}\,\bigodot\,
\begin{tabular}{|c|}\hline \phantom{A\Big|}\\\hline \phantom{A\Big|}\\\hline\end{tabular} \ \right)_{\rm sym.} \ \simeq\
\boldsymbol{1}\ \bigoplus\
\begin{tabular}{|c|c|}\hline \phantom{A\Big|} & \phantom{A\Big|}\\\hline\end{tabular}\ \bigoplus\ \begin{tabular}{|c|c|}\hline \phantom{A\Big|} & \phantom{A\Big|}\\\hline
\phantom{A\Big|} & \phantom{A\Big|}\\\hline\end{tabular}\ \bigoplus\
\begin{tabular}{|c|}\hline \phantom{A\Big|}\\\hline \phantom{A\Big|}\\\hline\phantom{A\Big|}\\\hline\phantom{A\Big|}\\\hline\end{tabular}
\label{eq:youngtableaus}}
\end{equation}
\textit{A solution ${\cal F}^M$ to the consistency condition \eqref{consistencycond} exists} (and is unique if ${\cal N}\geq 3$) \textit{if and only if the $\Large\boxplus$ component of the $T$--tensor vanishes.} In the rest of this section, we will prove this assertion, and as a byproduct we shall find an explicit expression for ${\cal F}^M$ in terms of the $T$--tensor.
\smallskip
To begin, let us assume $T^{MN,PQ}\big|_\boxplus=0$ or, explicitly,
\begin{equation}\label{eq:defTAB}
T^{MN,PQ}=\delta^{MP}\, T^{NQ}-\delta^{MQ}\, T^{NP}-\delta^{NP}\, T^{MQ}+\delta^{NQ}\, T^{MP}+T^{[MNPQ]}
\end{equation}
with $T^{MN}=T^{NM}$. Then, for $M\not=N$ (using $\delta^{MN}=0$ and the vanishing of the totally antisymmetric part with a repeated index, $T^{[MNMN]}=0$),
\begin{equation}\label{ttt=ee}
T^{MN,MN}=T^{MM}+T^{NN}.
\end{equation}
Now, recalling the basic differential identity for the $T$--tensor \eqref{eq:firstTident} and subtracting it from the consistency equation \eqref{consistencycond}, using eqn.~\eqref{ttt=ee}, we obtain
\begin{equation}
\begin{split}
{(\omega^{MN})_\alpha}^\gamma& {(\omega^{MN})_\beta}^\delta\,
D_\gamma\partial_\delta\left(2\,{\cal F}^N-T^{MM}-T^{NN}\right)=\\
&=
D_\alpha\partial_\beta\left(2\,{\cal F}^M-T^{MM}-T^{NN}\right)\ ,\\
&\qquad\qquad\qquad \text{NO sum over $M$, $N$!}\ .
\end{split}
\end{equation}
This equation, together with its $M\leftrightarrow N$ counterpart, just requires the function $(2{\cal F}^M-T^{MM}-T^{NN})$ to be the real part of a function holomorphic with respect to the complex structure ${(\omega^{MN})_\alpha}^\beta$. This condition has an obvious solution
\begin{equation}\label{cFsolution}
{\cal F}^M=T^{MM}\phantom{\Big|}
\end{equation}
Indeed, the function $T^{MM}-T^{NN}$ is the real part of a function holomorphic with respect to the complex structure ${(\omega^{MN})_\alpha}^\beta$. To see this, choose (for ${\cal N}\geq 3$) an index $P\not=M,N$. From eqn.~\eqref{eq:defTAB} we get
\begin{equation}\label{relttt}
T^{MP,MP}-T^{NP,NP}=(T^{PP}+T^{MM})-(T^{PP}+T^{NN})=T^{MM}-T^{NN}\ .
\end{equation}
The \textsc{lhs} is the real part of a function,
\begin{equation}
(T^{MP,MP}-T^{NP,NP})+2i\, T^{MP,NP}\ ,
\end{equation}
which is holomorphic with respect to the complex structure ${(\omega^{MN})_\alpha}^\beta$ by virtue of the identity
\eqref{eq:firstderTidentiti}. On the other hand, since we
know that the solution is unique, \eqref{cFsolution}
should be the general answer.\smallskip
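As a cross--check, one may verify directly that \eqref{cFsolution} solves the consistency condition \eqref{consistencycond}; only \eq{eq:firstTident}, \eq{ttt=ee}, and the pluriharmonicity of $T^{MM}-T^{NN}$ just established are needed:
\begin{equation*}
\begin{split}
{(\omega^{MN})_\alpha}^\gamma\,{(\omega^{MN})_\beta}^\delta&\Big(K^m_\gamma\,\Theta_{mn}\,K^n_\delta+D_\gamma\partial_\delta T^{NN}\Big)=\\
&={(\omega^{MN})_\alpha}^\gamma\,{(\omega^{MN})_\beta}^\delta\Big(K^m_\gamma\,\Theta_{mn}\,K^n_\delta+\tfrac12\,D_\gamma\partial_\delta\big(T^{MM}+T^{NN}\big)\Big)
-\tfrac12\,{(\omega^{MN})_\alpha}^\gamma\,{(\omega^{MN})_\beta}^\delta\,D_\gamma\partial_\delta\big(T^{MM}-T^{NN}\big)=\\
&=K^m_\alpha\,\Theta_{mn}\,K^n_\beta+\tfrac12\,D_\alpha\partial_\beta\big(T^{MM}+T^{NN}\big)+\tfrac12\,D_\alpha\partial_\beta\big(T^{MM}-T^{NN}\big)
=K^m_\alpha\,\Theta_{mn}\,K^n_\beta+D_\alpha\partial_\beta T^{MM}\ ,
\end{split}
\end{equation*}
where the first bracket was handled with \eq{eq:firstTident} (in the form \eq{ttt=ee}), and the second with the fact that the Hessian of the real part of a holomorphic function changes sign under conjugation by the complex structure. This is precisely \eqref{consistencycond} with ${\cal F}^M=T^{MM}$, ${\cal F}^N=T^{NN}$.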
Conversely, we have to show that if $T^{MN,PQ}\big|_\boxplus\not=0$ there does not exist any supersymmetric completion. First of all, we observe that this condition is empty for ${\cal N}\leq 3$, so for ${\cal N}\leq 3$ \emph{any} gauging is allowed and, in particular, for ${\cal N}=3$ we have a \emph{unique} susy completion for any gauging \cite{Kao:1992ig,Gaiotto:2008sd} given by eqn.~\eqref{cFsolution}. Hence we may assume ${\cal N}\geq 4$. The ${\cal N}\geq 4$ models are, in particular, ${\cal N}=3$ theories; choosing an ${\cal N}=3$ sub--supersymmetry, and forgetting for the moment the other $2({\cal N}-3)$ supercharges, we get precisely \textit{one} solution for ${\cal F}^M$. In order for this unique Lagrangian to actually give an ${\cal N}$--supersymmetric model, and not just an ${\cal N}=3$ one, we must have equalities between the Lagrangians obtained by different choices of the ${\cal N}=3$ sub--supersymmetry. Take, say, the two sets of supersymmetries generated, respectively, by $Q^1,Q^2, Q^3$ and $Q^1, Q^2, Q^4$. In the two cases, one gets, respectively, the following superpotentials (cfr.\! eqns.~\eqref{cFsolution} and \eqref{relttt})
\begin{gather}
\Big({\cal F}^1-{\cal F}^2\Big)_{Q^1,Q^2,Q^3}=T^{13,13}-T^{23,23}\ , \\
\Big({\cal F}^1-{\cal F}^2\Big)_{Q^1,Q^2,Q^4}=T^{14,14}-T^{24,24}\ .
\end{gather}
If the unique ${\cal N}=3$ gauging has to be ${\cal N}\geq 4$ supersymmetric, the \textsc{rhs} of the two above equations should be equal. But their difference
\begin{equation}
T^{13,13}-T^{23,23}-
T^{14,14}+T^{24,24} \in {\Huge\boxplus}
\end{equation}
and thus we have agreement precisely if $T\big|_\boxplus=0$. This completes the proof of the criterion \eqref{eq:youngtableaus}.
\smallskip
In the particular case of ${\cal N}=4$ models with flat target space and no twisted hypermultiplet, the condition \eqref{eq:youngtableaus} is equivalent to the beautiful Gaiotto--Witten Lie superalgebra criterion \cite{Gaiotto:2008sd}.
\smallskip
\subsection{${\cal N}=2$ theories}
In accordance with the results of the previous section, the gauged Lagrangian takes the explicit form
\begin{eqnarray}
e^{-1}\,{\cal L}
&=&
-{\cal K}\, R(h)
-\ft12
h^{\mu\nu}
\,
{\cal D}_\mu \Phi^\alpha\, {\cal D}_\nu \Phi^{\!*}{}^{\,\bar \alpha} \,G_{\alpha\bar \alpha}
+\ft12 \,
\bar{\psi}{}^{\bar \alpha} \gamma^\mu
{\cal D}_\mu\psi^{\alpha} \,
G_{\alpha\bar \alpha}
\nonumber\\
&&{}
-\ft12 \Big(
\tilde\psi^\alpha \psi^\beta \, D_\alpha \partial_\beta {\cal W} -
\tilde\psi^{\bar \alpha} \psi^{\bar \beta} \, D_{\bar \alpha} \partial_{\bar \beta} {\cal W}^{\!*}
+\overline\psi^{\bar \alpha} \psi^\beta K^m_{\bar \alpha} \Theta_{mn} K^n_\beta
\Big)
\nonumber\\
&&{}
- G^{\alpha\bar \alpha}\,\left(\partial_\alpha {\cal W} \partial_{\bar \alpha} {\cal W}^*+\partial_\alpha T \partial_{\bar{\alpha}} T\right)
\nonumber\\[.5ex]
&&{}
+ \varepsilon^{\mu\nu\rho}\,\Theta_{mn}\, A^m_\mu
\left( \partial_\nu A^n_\rho+\ft{1}{3} \Theta_{kp} f^{np}{}_l \,A^k_\nu A^l_\rho \right)
\ ,
\label{N2glob_gauged}
\end{eqnarray}
with
\begin{equation}
T=\mu^m \Theta_{mn} \mu^n\ ,
\end{equation}
and the moment map $\mu^m$ is given by
\begin{eqnarray}
\mu^m &=& K^{\alpha m} \partial_\alpha{\cal K} + \mbox{c.c.}
\end{eqnarray}
In ${\cal N}=1$ language, the Lagrangian (\ref{N2glob_gauged}) comes from
a real superpotential of the form
\begin{equation}
{\cal F}={\cal W}+{\cal W}^*+T\ ,
\end{equation}
as explained below \eq{chaucryrieman}. It is also useful to note that the Killing vectors involved here are (anti)holomorphic and they can be expressed as
\begin{eqnarray}
K^{\alpha m} &=& i G^{\alpha\bar{\alpha}} \partial_{\bar{\alpha}} \mu^m\;,\qquad
K^{\bar{\alpha} m} ~=~ -i G^{\alpha\bar{\alpha}} \partial_{\alpha} \mu^m\ .
\end{eqnarray}
Furthermore, the moment maps satisfy the relations
\begin{eqnarray}
\nabla_{(\alpha} \partial_{\beta)}\,\mu^m &=& 0\;,\qquad
\partial_{\bar{\alpha}} \partial_\beta \, \mu^m ~=~ 0\ .
\end{eqnarray}
The Lagrangian \eq{N2glob_gauged} has the ${\cal N}=2$ superconformal symmetry which, up to cubic fermion terms, is given by
\begin{eqnarray}
\delta\Phi^\alpha &=& i\,\bar{\epsilon}\,\psi^\alpha\ ,
\nonumber\\[.5ex]
\delta \psi^\alpha &=& {\cal D}_\mu\Phi^\alpha\,\gamma^\mu\epsilon
-G^{\alpha\bar \alpha}\,\partial_{\bar \alpha} {\cal W}^*\,B^*\epsilon^{*}
-G^{\alpha\bar \alpha}\,\partial_{\bar \alpha} {\cal K}^*\,B^*\eta^{*}
+ G^{\alpha\bar \alpha}\,\partial_{\bar{\alpha}} T\, \epsilon
\ ,
\nonumber\\[.5ex]
\delta A_\mu^m &=&
K^{\alpha m} \, \overline{\epsilon} \gamma_\mu \psi_\alpha + \mbox{c.c.}
\end{eqnarray}
\subsection{A puzzle and its solution}\label{solution}
At first, it may seem that there is room for a paradox here. As already mentioned, any theory at most quadratic in the vector's field--strengths can be put in the Chern--Simons--matter form~\cite{Nicolai:2003bp,deWit:2003ja}. This, in particular, is true for the usual ${\cal N}=4, 8$ super--Yang--Mills theories (with kinetic term $F_{\mu\nu}F^{\mu\nu}$), whose scalars' target space is conical (in fact, flat), but which are obviously \textit{not} superconformal invariant in $D=3$. One checks that the dual Chern--Simons--matter Lagrangian does satisfy the $T\big|_\boxplus=0$ criterion for extended
${\cal N}=4,8$ supersymmetry, as it should. So what is going wrong?
\smallskip
The point is that, in order to formulate the ordinary $D=3$ SYM as a Chern--Simons--matter model, we have to enlarge the usual compact gauge group $G$ to a non--semisimple gauge group of the form $G\ltimes A$, with $A$ an Abelian group whose generators transform according to the adjoint of $G$ \cite{Nicolai:2003bp,deWit:2003ja}. The Killing vectors, $K_A$, generating the isometries associated to the Abelian ideal $A$, although belonging to the subgroup $\mathrm{Iso}(\mathcal{M})_0$ as required by rigid supersymmetry, do not belong to the smaller subgroup $\mathrm{Iso}(B)_0$ (that is, they do not satisfy the condition in eqn.~\eqref{seztaniicond}). Therefore these Abelian gaugings, while supersymmetric,
are neither $\mathrm{Spin}({\cal N})$ invariant nor scale invariant
(that is $\pounds_{\mathcal{E}}K_A\not=0$). So superconformal invariance gets broken in a quite rude way.
In the context of M2 branes, this has been discussed in \cite{Ho:2008ei,Ezhuthachan:2008ch}.
\medskip
This mechanism also explains one way out of a `no--go' physical argument formulated in refs.\,\cite{Gaiotto:2008sd,Kao:1992ig}. Let us recall the logic: Consider the class of ${\cal N}$--supersymmetric models which
are obtained, \textit{via} the duality of \cite{Nicolai:2003bp,deWit:2003ja}, from $D=3$ theories whose vectors have both $F^2$ canonical kinetic terms
and Chern--Simons interactions. The gauge vectors get massive \cite{Deser:1981wh} and have (say) helicity
$+1$. The ${\cal N}$--\textsc{susy} algebra has ${\cal N}$ helicity lowering operators, so a massive vector supermultiplet should contain states with helicity $\lambda$
\begin{equation}
\lambda\ =\ 1,\ \frac{1}{2},\ 0,\ \cdots,\ 1-\frac{{\cal N}}{2}.
\end{equation}
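For instance, evaluating this chain for ${\cal N}=4$ and ${\cal N}=8$:
\begin{equation*}
{\cal N}=4:\quad \lambda=1,\ \tfrac12,\ 0,\ -\tfrac12,\ -1\ ;\qquad\qquad
{\cal N}=8:\quad \lambda=1,\ \tfrac12,\ 0,\ -\tfrac12,\ \dots,\ -3\ .
\end{equation*}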
In particular, for ${\cal N}\geq 4$, we have states with helicity $-1$ which are also massive vectors.
Since (rigid) supersymmetry commutes with the gauge symmetry, all the above states transform in the same way under
$G$, that is in the adjoint representation (which is the representation for the gauge vectors $\lambda=+1$). But, for ${\cal N}\geq 4$, we have also $\lambda=-1$ vectors in the supermultiplet, always in the adjoint representation. Thus the
vectors transform according to (at least) \emph{two} copies of the adjoint representation, but this is forbidden in
a non--Abelian gauge theory where the vectors should form a \textit{single} copy. Hence the paradox.
\smallskip
Above we have seen how Super--Yang--Mills cleverly avoids the paradox. The duality transformation which eliminates the $F^2$ kinetic terms also changes the gauge group
\begin{equation}
G\rightarrow G \ltimes A,
\end{equation}
with the effect of doubling the number of vectors. Both the generators of $G$ and $A$ transform in the adjoint representation of $G$, and hence the vector fields form precisely \emph{two} copies of the adjoint representation of the \emph{original} compact gauge group $G$, the only one which may be linearly realized on the spectrum/$S$--matrix. If we add a Chern--Simons term to give mass, both copies of the adjoint representation will give rise to physical massive helicity $\pm 1$ particles. So, the paradox is not really a paradox. It is just the magic of dualities and non--compact gaugings in $D=3$.
\medskip
\section{Relation to Poincar\'e supergravity} \label{sec:sugra}
In this final section, we show how the above results on
the construction of superconformal sigma models can be obtained
in a rather elegant way by taking particular truncations of three-dimensional (gauged) Poincar\'e supergravity. Evaluating the supergravity action with a particular truncation ansatz for the (off-shell) supergravity multiplet
on a background that admits conformal Killing spinors leads to theories with global supersymmetry which by construction are superconformal. The geometrical structure that was revealed in the direct construction above is directly induced by the geometry of the supergravity target spaces and their gaugings.
We first present the construction for the bosonic case (following~\cite{Sezgin:1994th}) and then extend the method to ${\cal N}=1$ and ${\cal N}=2$ supergravity. In the latter case, it turns out to be necessary to start from off-shell supergravity. In some sense, this procedure amounts to an ``inversion of the conformal program'', as we discuss in the introduction.
\subsection{Bosonic case}
We start from a gravity coupled sigma-model
with scalar potential
(the bosonic sector of a generic three-dimensional ungauged supergravity)
\begin{eqnarray}
{\cal L}_0 &=&
-\ft12 \sqrt{-g}\,
\Big(
R^{(g)}
+
g^{\mu\nu}
\partial_\mu \phi^i\,
\partial_\nu \phi^j \,
g_{ij}(\phi)
+U(\phi)
\Big)\ ,
\label{Lbos}
\end{eqnarray}
with space-time metric $g_{\mu\nu}$.
Note that compared to standard gravity, we have chosen here the wrong sign for
the Einstein-Hilbert term in the action.
It is with this choice of sign that we will obtain from (\ref{Lbos})
in the following a ghost-free action with global superconformal symmetry.
We now make the following ansatz for the space-time metric
\begin{eqnarray}
g_{\mu\nu} &=& e^{2\varphi}\,h_{\mu\nu}\ ,
\label{conformal_an}
\end{eqnarray}
where $\varphi$ is a dilaton field and $h_{\mu\nu}$ is a fixed background metric which admits a conformal Killing vector $\xi^\mu$,
i.e.\ satisfies~(\ref{conformalKilling}). The relation between the two metrics implies that
\begin{eqnarray}
\Gamma^\lambda_{\mu\nu}(g) &=&
\Gamma^\lambda_{\mu\nu}(h) + 2\delta^\lambda_{(\mu}\partial_{\nu)}\,\varphi
-h_{\mu\nu}\,h^{\lambda\tau}\,\partial_\tau\varphi\ ,
\nonumber\\
R^{(g)} &=& e^{-2\varphi} \left( R^{(h)} -2\, \partial_\mu \varphi\, \partial^\mu \varphi -4 \nabla^\mu \partial_\mu \varphi \right)\ ,
\end{eqnarray}
for the Christoffel symbols and the Ricci scalar, respectively.
Under the particular diffeomorphism generated by the conformal Killing vector $\xi^\mu$ (\ref{conformalKilling}) of the metric $h_{\mu\nu}$, the metric $g_{\mu\nu}$ transforms as
\begin{eqnarray}
\delta_\xi \,g_{\mu\nu} &=& (4\Omega
+2\xi^{\lambda} \partial_\lambda\varphi)\,g_{\mu\nu} \ .
\end{eqnarray}
This shows that combining this diffeomorphism with the transformation
\begin{eqnarray}
\delta_\xi \varphi &=& \xi^\mu \partial_\mu\varphi +2\Omega\ ,
\label{varphi}
\end{eqnarray}
of the dilaton field, leaves the ansatz (\ref{conformal_an}) invariant,
i.e.\ implies that $\delta h_{\mu\nu}=0$, in accordance with the role of $h_{\mu\nu}$ as a fixed background metric.
Thus, evaluating the Lagrangian (\ref{Lbos}) with the particular ansatz (\ref{conformal_an}) yields an action on a fixed background metric $h_{\mu\nu}$ which by construction is invariant under the conformal transformations
\begin{eqnarray}
\delta \phi^i &=& \xi^\mu \partial_\mu\phi^i\ ,\qquad
\delta \varphi ~=~ \xi^\mu \partial_\mu\varphi +2\Omega \ ,
\end{eqnarray}
as a consequence of the diffeomorphism invariance of the original action (\ref{Lbos}).
Explicitly, plugging (\ref{conformal_an}) into (\ref{Lbos}) leads to
\begin{eqnarray}
{\cal L}_0 &=& -\ft12 \sqrt{-h}\, \Big( r^2 R^{(h)}
+ h^{\mu\nu} \Big( 8 \partial_\mu r\,\partial_\nu r
+ r^2\,\partial_\mu \phi^i\, \partial_\nu \phi^j g_{ij}(\phi)\Big)\ +r^6\, U(\phi)\Big)\ ,
\label{Lbos_conf}
\end{eqnarray}
where we have defined $e^\varphi\equiv r^2$. We see that (upon rescaling of the target space metric and potential) this construction precisely reproduces the conformal action (\ref{Lconformal}) with potential of the form (\ref{VU})
and cone metric (\ref{cone}), the base of the cone being the target space of the gravity coupled sigma-model (\ref{Lbos}). Moreover, it is straightforward to see that starting from a gauged sigma-model in the gravitational action (\ref{Lbos}) the same procedure leads to a conformal gauged sigma-model in (\ref{Lbos_conf}) in which only isometries of the base manifold of the cone are gauged. This shows that all bosonic conformal sigma-models in three dimensions found above by direct construction can be obtained by this procedure.
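As a quick check of the coefficients in this reduction, note that with $e^\varphi=r^2$ one has, up to a total derivative (denoted by $\simeq$),
\begin{eqnarray}
\sqrt{-g}\,R^{(g)} &=& e^{\varphi}\sqrt{-h}\,\Big(R^{(h)}
-2\,\partial_\mu \varphi\,\partial^\mu\varphi
-4\,\nabla^\mu \partial_\mu \varphi\Big)
~\simeq~
e^{\varphi}\sqrt{-h}\,\Big(R^{(h)}
+2\,\partial_\mu \varphi\,\partial^\mu\varphi\Big)
\nonumber\\
&=& \sqrt{-h}\,\Big(r^2 R^{(h)} + 8\,\partial_\mu r\,\partial^\mu r\Big)\ ,
\end{eqnarray}
using $\sqrt{-g}=e^{3\varphi}\sqrt{-h}$ and $\partial_\mu\varphi = 2\,r^{-1}\partial_\mu r$, while the scalar kinetic term picks up a factor $e^{\varphi}=r^2$ and the potential a factor $e^{3\varphi}=r^6$, in agreement with (\ref{Lbos_conf}).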
\subsection{${\cal N}=1$ supergravity}
We now extend this construction to the supersymmetric case.
The general three-dimensional ${\cal N}=1$ (gauged) supergravity Lagrangian
has been given in~\cite{deWit:2003ja}. Here, we will start from an off-shell
version of the ungauged theory~\cite{Uematsu:1984zy,Uematsu:1986de,Andringa:2009yc}. The extension to gaugings is straightforward. The action is given by
\begin{eqnarray}
{\cal L}^{{\cal N}=1}_{{\rm off-shell}} &=&
-{\cal L}^{{\cal N}=1}_{{\rm sugra}}+{\cal L}^{{\cal N}=1}_{{\rm matter}}+{\cal L}^{{\cal N}=1}_{{F}}\ ,
\label{L_N1_off}
\end{eqnarray}
with
\begin{eqnarray}
E^{-1}{\cal L}^{{\cal N}=1}_{{\rm sugra}} &=&
\ft12 R -\ft12 \overline{\psi}{}_\mu \gamma^{\mu\nu\rho} D_\nu\psi_\rho-S^2
\ ,\nonumber\\[.5ex]
E^{-1}{\cal L}^{{\cal N}=1}_{{\rm matter}} &=&
-\ft12 \,
\partial^\mu \phi^i\,
\partial_\mu \phi^j\,g_{ij}-\ft12 \, \overline{\chi}{}_{i} \gamma^\mu
D_\mu\chi^{i} +\ft12 \,\overline{\chi}{}_{i} \gamma^\mu
\gamma^\nu \psi_\mu\,\partial_\nu\phi^i
+\ft12 f_i f^i+\ft14 S \overline{\chi}{}_i \chi^i \ ,
\nonumber\\[.5ex]
E^{-1}{\cal L}^{{\cal N}=1}_{{F}} &=&
\ft12 F(\phi) \overline{\psi}_\mu \gamma^{\mu\nu} \psi_\nu
+ \partial_i F(\phi) \overline{\psi}_\mu \gamma^\mu \chi^i
-D_i\partial_j F(\phi) \, \overline{\chi}^i \chi^j +4SF-2f^i \partial_i F
\ ,\nonumber
\end{eqnarray}
where $E=|{\rm det} E_\mu{}^r|$ now refers to the determinant of the vielbein associated with $g_{\mu\nu}$, the covariant derivative is defined as $D_\mu\chi^{i}\equiv\nabla_\mu\chi^{i} + \Gamma^i_{mn}(\phi) \partial_\mu\phi^m\chi^n$, and the function $F=F(\phi)$ is a real superpotential. As in the bosonic case, we must choose the wrong sign for the supergravity part of the Lagrangian in order to obtain a ghost-free
globally supersymmetric action in the following. Since we start from an off-shell formulation, the above action is supersymmetric for either choice of sign of ${\cal L}^{{\cal N}=1}_{{\rm sugra}}$.
The local ${\cal N}=1$ supersymmetry transformation rules are given by
\begin{eqnarray}
\delta E_\mu{}^r &=& \ft12\,\overline{\varepsilon}\gamma^r \psi_\mu
\;,\nonumber\\
\delta \psi_\mu &=& D_\mu \varepsilon + \ft12 S \,\gamma_\mu \varepsilon
\;,\nonumber\\
\delta S &=& \ft18 \overline{\varepsilon} \gamma^{\mu\nu}\psi_{\mu\nu}
-\ft14 S \overline{\varepsilon} \gamma^\mu \psi_\mu
\;,\nonumber\\
\delta \phi^i &=&
\ft12\overline{\varepsilon} \chi^i
\;,\nonumber\\
\delta \chi^i &=& \ft12 \gamma^\mu \varepsilon \partial_\mu \phi^i - \ft12 f^i \varepsilon
\;,\nonumber\\
\delta f^i &=& -\ft12 \overline{\varepsilon} \gamma^\mu D_\mu \chi^i
+\ft14 \overline{\varepsilon}\gamma^\mu\gamma^\nu\psi_\mu\partial_\nu\phi^i
+\ft14 S \overline{\varepsilon} \chi^i -\ft14\overline{\varepsilon} \gamma^\mu \psi_\mu f^i
\;,
\label{susy_N1_off}
\end{eqnarray}
where $\psi_{\mu\nu}=2D_{[\mu}\psi_{\nu]}$\,.
Next we study the emergence of an SCFT from this theory.
We start from a background metric $h_{\mu\nu}=\eta_{rs} e_\mu{}^r e_\nu{}^s$
that admits a conformal Killing spinor~(\ref{conformalspinor})
\begin{eqnarray}
\nabla_\mu \epsilon &=& \ft12 \gamma_\mu \eta
\;,
\label{CKS1}
\end{eqnarray}
and accordingly also a conformal Killing vector $\xi^\mu$, satisfying (\ref{conf_vielbein}).
For the fields of ${\cal N}=1$ off-shell
supergravity (\ref{L_N1_off}), we generalize (\ref{conformal_an}) to
the following ansatz for the vielbein and gravitino
\begin{eqnarray}
E_{\mu}{}^r &=& e^{\varphi}\,e_\mu{}^r \;,\nonumber\\
\psi_\mu &=& e^{\varphi/2}\, e_\mu{}^r\,\gamma_r\,\lambda
\;.
\label{conformal_N1}
\end{eqnarray}
In particular, this implies the relation
\begin{eqnarray}
\omega_\mu{}^{rs}(E) &=& \omega_\mu{}^{rs}(e)
+2 e_\mu{}^{[r}\,\partial^{s]} \varphi
\;,
\end{eqnarray}
between the spin connections of $E_\mu{}^r$ and $e_\mu{}^r$.
Under the combination of a diffeomorphism with the conformal Killing vector $\xi^\mu$ and a Lorentz transformation with parameter $\Lambda_{rs}=-e_{[r}{}^\mu \nabla_\mu \xi_{s]}-\xi^\mu \omega_\mu{}^{rs}(e)$, the supergravity vielbein transforms as
\begin{eqnarray}
\left(\delta_\xi +\delta_{\Lambda}\right) E_{\mu}{}^r &=&
e^\varphi\, {\cal L}_\xi e_\mu{}^r + \xi^\nu \partial_\nu \varphi\,E_\mu{}^r
-\Lambda^r{}_s\,E_\mu{}^s
~\equiv~ \delta\varphi\,E_{\mu}{}^r
\;,
\end{eqnarray}
which is compatible with (\ref{varphi}) assuming that the background metric $h_{\mu\nu}$ does not transform. From the action of the same combination of
diffeomorphism and Lorentz transformation on the gravitino we find that also the ansatz for the gravitino in (\ref{conformal_N1}) is consistent and implies
\begin{eqnarray}
\delta\varphi &=&
\xi^\mu\,\partial_\mu\varphi +2\Omega \ ,
\nonumber\\
\delta \lambda &=& \xi^\mu\,\nabla_\mu
\lambda + \ft14\, \nabla_{r} \xi_s \,\gamma^{rs}\lambda
+ \Omega \lambda\ .
\label{confN1}
\end{eqnarray}
Moreover, the supersymmetry transformations (\ref{susy_N1_off})
with the particular choice of parameter
\begin{eqnarray}
\varepsilon&=&e^{\varphi/2}\epsilon\ ,
\end{eqnarray}
with the conformal Killing spinor $\epsilon$ from (\ref{CKS1}),
when combined with a Lorentz transformation with parameter
$\Lambda_{rs}=\frac12\,\overline{\epsilon}\gamma_{rs} \lambda$
are compatible with the ansatz (\ref{conformal_N1}) provided the fields $\varphi$ and $\lambda$ transform as
\begin{eqnarray}
\delta \varphi &=&
\ft12\overline{\epsilon}\lambda\;,
\nonumber\\
\delta \lambda &=&
\ft12 \gamma^\mu \epsilon\,\partial_\mu\varphi
+\ft12 e^\varphi\,S \epsilon
+\ft12 \eta\ .
\label{suconfN1}
\end{eqnarray}
Thus, analogously to the bosonic case, evaluating the Lagrangian (\ref{L_N1_off}) with the particular ansatz (\ref{conformal_N1}) yields an action on a fixed background metric $h_{\mu\nu}$ which by construction is invariant under the conformal and superconformal transformations (\ref{confN1}), (\ref{suconfN1}), extended by the corresponding transformations of the matter and auxiliary fields, that are obtained from evaluating (\ref{susy_N1_off}) on the ansatz (\ref{conformal_N1}).
Plugging the ansatz (\ref{conformal_N1}) into the off-shell action (\ref{L_N1_off}) we obtain after some calculation
\begin{eqnarray}
e^{-1} {\cal L} &=& -\ft12 r^2 R
-4 \partial_\mu r \partial^\mu r
-\ft12 r^2 \partial_\mu \phi^i \partial^\mu \phi^j \,g_{ij}(\phi)
+\ft12 r^6 (f^i f_i +2S^2)
\nonumber\\
&&{}
-r^2 \overline{\lambda} \gamma^\mu D_\mu \lambda
-\ft 12 \, r^4 \overline{\chi}_i \gamma^\mu D_\mu \chi^i
-\ft12 r^3 \overline{\chi}_i \gamma^\mu \lambda \partial_\mu \phi^i
+\ft14 r^6 S \overline{\chi}^i \chi_i
\nonumber\\
&&{}
+r^6(4SF-2f^i \partial_i F)
-3r^4 F \overline{\lambda}\lambda
-3r^5 \overline{\lambda} \chi^i \partial_i F
-r^6 \overline{\chi}{}^i\chi^j D_i\partial_j F\ ,
\end{eqnarray}
with $e^\varphi=r^2$\,. By construction, this action is invariant under the following superconformal transformations
\begin{eqnarray}
\delta_{\rm sc} r &=&
\ft14 r \overline{\epsilon}\lambda \ ,
\nonumber\\
\delta_{\rm sc}\lambda&=&
r^{-1}\partial_\mu r \,\gamma^\mu \epsilon
+\ft12 r^2\,S \epsilon
+\ft12 \eta \ ,
\nonumber\\
\delta_{\rm sc} S &=& \ft12r^{-2} \overline{\epsilon} \gamma^\mu D_\mu \lambda
+ \ft12 r^{-3} \overline{\epsilon} \gamma^\mu\lambda \partial_\mu r
-\ft34 \overline{\epsilon} \lambda S \ ,
\label{N1_off_gauged}
\end{eqnarray}
and
\begin{eqnarray}
\delta_{\rm sc} \phi^i &=& \ft12 r \overline{\epsilon} \chi^i\ ,
\nonumber\\
\delta_{\rm sc} \chi^i &=& \ft12 r^{-1} \gamma^\mu \epsilon \partial_\mu \phi^i-\ft12r\epsilon f^i\ ,
\nonumber\\
\delta_{\rm sc} f^i &=& -\ft12 r^{-1} \overline{\epsilon} \gamma^\mu D_\mu \chi^i - \ft14 r^{-2} \overline{\epsilon} \gamma^\mu \lambda \partial_\mu \phi^i - r^{-2} \overline{\epsilon} \gamma^\mu \chi^i \partial_\mu r
\nonumber\\
&&{}
-\ft34 \overline{\epsilon} \lambda f^i
+\ft14 r \overline{\epsilon} \chi^i S\ .
\end{eqnarray}
This construction gives the off-shell version of the ${\cal N}=1$ superconformal sigma-model. It is straightforward to check that upon integrating out the auxiliary fields by virtue of their field equations
\begin{eqnarray}
S=-2F-\ft18 \overline{\chi}{}_i \chi^i \ ,\qquad
f^i = 2g^{ij} \partial_j F\ ,
\end{eqnarray}
we precisely reproduce the Lagrangian (\ref{N1glob_cone}) obtained above.\footnote{Upon rescaling $\lambda \rightarrow 2\lambda$, $\epsilon \rightarrow 2\epsilon$, $g_{ij}\rightarrow 8 g_{ij}$, ${\cal L}\rightarrow 8{\cal L}$, $F \rightarrow 4F$.} Moreover, as in the bosonic case it is straightforward to see how the procedure extends to the gaugings. In supergravity, any subgroup of isometries of the scalar target space can be gauged upon introduction of an additional Yukawa term~\cite{deWit:2003ja}.
Working through the same procedure then extends (\ref{N1_off_gauged})
to an off-shell version of the gauged ${\cal N}=1$ superconformal sigma-model (\ref{N1glob_gauged}), in which only isometries of the base manifold of the cone are gauged.
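As a consistency check of the elimination above, substituting $S=-2F$ and $f^i=2g^{ij}\partial_jF$ (fermionic bilinears suppressed) into the auxiliary-field terms of the Lagrangian yields the bosonic potential
\begin{eqnarray}
\ft12 r^6\big(f^i f_i +2S^2\big)+r^6\big(4SF-2f^i \partial_i F\big)
&=& r^6\big(2\,g^{ij}\partial_i F\,\partial_j F+4F^2\big)
- r^6\big(8F^2+4\,g^{ij}\partial_i F\,\partial_j F\big)
\nonumber\\
&=& -2\,r^6\big(g^{ij}\partial_i F\,\partial_j F+2F^2\big)\ ,
\end{eqnarray}
i.e.\ the familiar form of an ${\cal N}=1$ potential derived from a real superpotential, with the $r^6$ scaling required by conformal invariance in three dimensions.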
\subsection{${\cal N}=2$ supergravity}
Here, we extend the procedure to ${\cal N}=2$. As in the cases of lower ${\cal N}$ discussed above, the main ingredient in the construction is a consistent truncation ansatz for the supergravity multiplet, which allows one
to pass from Poincar\'e supergravity to a theory with global superconformal symmetry. In order to illustrate this structure for ${\cal N}=2$,
we restrict the discussion to the off-shell supergravity multiplet;
the extension to matter couplings is straightforward. In $3D$ the off-shell supergravity multiplet consists of the fields $(E_\mu{}^r, \psi_\mu, A_\mu, u)$
where the gravitino is a Dirac spinor, and the real vector field $A_\mu$ and the complex scalar $u$ are the auxiliary fields. The Lagrangian takes the form \cite{Rocek:1985bk}
\begin{eqnarray}
E^{-1} {\cal L} &=& -\ft12 R + \ft12 (\overline{\psi}_\mu \gamma^{\mu\nu\rho} D_\nu \psi_{\rho} + {\rm c.c.})
+|u|^2 - A_\mu A^\mu\ ,
\label{L_N2_off}
\end{eqnarray}
where for the same reasons as above we have chosen the wrong global sign.
Off-shell supersymmetry transformations of (\ref{L_N2_off}) are given by
\begin{eqnarray}
\delta E_\mu{}^r &=& \ft12 \overline{\varepsilon} \gamma^r \psi_\mu - \ft12 \overline{\psi}_\mu\gamma^r\varepsilon\ ,
\nonumber\\[.5ex]
\delta \psi_\mu &=& D_\mu \varepsilon - \ft{i}2 A_\nu \gamma^\nu\gamma_\mu \varepsilon
+\ft12 u \gamma_\mu (B\varepsilon)^*\ ,
\nonumber\\[.5ex]
\delta u &=& \ft14 \tilde{\varepsilon} \gamma^{\mu\nu}\widehat{\psi}_{\mu\nu}\ ,
\nonumber\\[.5ex]
\delta A_\mu &=& \ft{i}{8} \overline{\varepsilon} \gamma^{\nu\rho}\gamma_\mu \widehat{\psi}_{\nu\rho}+{\rm c.c.}\ ,
\label{N2off}
\end{eqnarray}
where $\tilde\varepsilon \equiv \overline{(B\varepsilon)^*}$ (cf.\ appendix A) and
\begin{eqnarray}
\widehat{\psi}_{\mu\nu} &\equiv& 2D_{[\mu}\psi_{\nu]} - i A_\rho\gamma^\rho \gamma_{[\mu}\psi_{\nu]}
+u\, \gamma_{[\mu} (B\psi_{\nu]})^*\ .
\end{eqnarray}
As above, we need to specify a consistent truncation ansatz for the fields of the supergravity multiplet. The following turns out to be the correct generalization of (\ref{conformal_N1})
\begin{eqnarray}
E_\mu{}^r &=& e^\varphi e_\mu{}^r\ ,
\nonumber\\[.5ex]
\psi_\mu &=& e^{(\varphi+i\tau)/2}\,\gamma_\mu \lambda \ ,
\nonumber\\[.5ex]
A_\mu &=& \ft12 \partial_\mu\tau-\ft12i \overline{\lambda}\gamma_\mu \lambda\ .
\label{conformal_N2}
\end{eqnarray}
Again, this ansatz is stable under diffeomorphisms with the conformal Killing vector $\xi^\mu$ of the background metric $h_{\mu\nu}=e_\mu{}^re_\nu{}^s\eta_{rs}$ (upon a compensating Lorentz transformation) provided that the fields transform as
\begin{eqnarray}
\delta_{\rm c}\varphi &=& \xi^\mu\,\partial_\mu\varphi +2\Omega\ ,
\nonumber\\
\delta_{\rm c} \tau &=& \xi^\mu\,\partial_\mu \tau\ ,
\nonumber\\
\delta_{\rm c} \lambda &=& \xi^\mu\,\nabla_\mu
\lambda\ + \ft14\,e_r{}^\mu e_s{}_\nu
\nabla_{\mu} \xi^\nu \,\gamma^{rs}\,\lambda + \Omega \lambda\ .
\end{eqnarray}
The real nontrivial check for the ansatz (\ref{conformal_N2}) is that it is also stable under the particular supersymmetry transformations
\begin{eqnarray}
\varepsilon &=& e^{(\varphi+i\tau)/2}\,\epsilon\ ,
\label{eps2}
\end{eqnarray}
where $\epsilon$ is a complex conformal Killing spinor of the background metric $h_{\mu\nu}$\,. As an example, let us consider the transformation of the auxiliary field $A_\mu$. Supersymmetry (\ref{N2off}) together with the ansatz (\ref{conformal_N2}) implies that
\begin{eqnarray}
\delta A_\mu &=& \ft{i}{8} \overline{\epsilon}\gamma^{\nu\rho}\gamma_\mu
\Big( \gamma_\rho\lambda\,\partial_\nu(\varphi+i\tau)
+2 \gamma_\rho D_\nu\lambda
+\gamma_{\nu\sigma}\gamma_\rho\lambda\,\partial^\sigma \varphi
-\ft12i\gamma^\sigma\gamma_{\nu\rho}\lambda\,\partial_\sigma\tau
\nonumber\\
&&{}\qquad\qquad
+ u e^{\varphi-i\tau}\gamma_{\nu\rho}B^*\lambda^*
\Big) + \mbox{c.c.}\ .
\end{eqnarray}
Upon some gamma-matrix algebra\footnote{ using relations like:
$\gamma^{\mu\nu}\gamma_{\mu\rho}\gamma_\nu = 0,\;\;$
$\gamma^{\mu\nu}\gamma_\rho\gamma_\mu = 2\delta^\nu_\rho,\;\;$
$\gamma^{\sigma\tau} \gamma_\mu \gamma_\nu \gamma_{\sigma\tau} = 2\gamma_{\mu\nu}-6 g_{\mu\nu},\;\;$
$\gamma^{\sigma\tau} \gamma_\mu \gamma_{\sigma\nu} \gamma_\tau = -2\gamma_{\mu\nu}+4 g_{\mu\nu}$\,.
} and the conformal spinor relation (\ref{conformalspinor}), this variation can be rewritten as
\begin{eqnarray}
\delta A_\mu &=&
-\ft12i \partial_\mu (\overline{\epsilon} \lambda) + \ft14 i \overline{\epsilon}\gamma_\nu\gamma_\mu\lambda\,
\partial^\nu(\varphi+\ft12i\tau)-\ft14i u^* e^{\varphi+i\tau} \, \tilde{\epsilon}\gamma_\mu\lambda
-\ft14i\overline{\eta}\gamma_\mu\lambda
+\mbox{c.c.}\ ,
\end{eqnarray}
which is manifestly compatible with the ansatz (\ref{conformal_N2}),
such that the transformations of $\tau$ and $\lambda$ can be determined from
\begin{eqnarray}
\delta A_\mu &=& \ft12 \partial_\mu \delta \tau-\ft12 (i \overline{\lambda}\gamma_\mu \delta \lambda + {\rm c.c.})\ .
\end{eqnarray}
A similar calculation for the variation of $\psi_\mu$, $E_\mu{}^r$ and $u$
determines the transformation rules of the remaining fields.
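The gamma-matrix identities quoted in the footnote above are representation independent and can be verified numerically; a minimal sketch in Python (the particular Pauli-matrix realization of the $(-++)$ Clifford algebra is an assumption, any other choice works equally well):

```python
import numpy as np

# Explicit representation of the 3D Clifford algebra for signature (-,+,+):
# gamma^0 = i*sigma_2, gamma^1 = sigma_1, gamma^2 = sigma_3 (this particular
# choice is an assumption; the identities below are representation independent).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

eta = np.diag([-1.0, 1.0, 1.0])            # background metric h_{mu nu}
G = [1j * s2, s1, s3]                      # gamma^mu (upper index)
Gl = [eta[m, m] * G[m] for m in range(3)]  # gamma_mu (lower index)
I2 = np.eye(2)

def G2(m, n):
    """gamma^{mu nu} = gamma^[mu gamma^nu] (both indices up)."""
    return 0.5 * (G[m] @ G[n] - G[n] @ G[m])

def G2l(m, n):
    """gamma_{mu nu} (both indices down)."""
    return eta[m, m] * eta[n, n] * G2(m, n)

# Clifford algebra: {gamma^mu, gamma^nu} = 2 eta^{mu nu}
for m in range(3):
    for n in range(3):
        assert np.allclose(G[m] @ G[n] + G[n] @ G[m], 2 * eta[m, n] * I2)

# gamma^{mu nu} gamma_{mu rho} gamma_nu = 0
for r in range(3):
    lhs = sum(G2(m, n) @ G2l(m, r) @ Gl[n]
              for m in range(3) for n in range(3))
    assert np.allclose(lhs, 0)

# gamma^{mu nu} gamma_rho gamma_mu = 2 delta^nu_rho
for n in range(3):
    for r in range(3):
        lhs = sum(G2(m, n) @ Gl[r] @ Gl[m] for m in range(3))
        assert np.allclose(lhs, 2.0 * (n == r) * I2)

# gamma^{st} gamma_mu gamma_nu gamma_{st} = 2 gamma_{mu nu} - 6 g_{mu nu}
# gamma^{st} gamma_mu gamma_{s nu} gamma_t = -2 gamma_{mu nu} + 4 g_{mu nu}
for m in range(3):
    for n in range(3):
        lhs3 = sum(G2(s, t) @ Gl[m] @ Gl[n] @ G2l(s, t)
                   for s in range(3) for t in range(3))
        assert np.allclose(lhs3, 2 * G2l(m, n) - 6 * eta[m, n] * I2)
        lhs4 = sum(G2(s, t) @ Gl[m] @ G2l(s, n) @ Gl[t]
                   for s in range(3) for t in range(3))
        assert np.allclose(lhs4, -2 * G2l(m, n) + 4 * eta[m, n] * I2)

print("all four gamma-matrix identities hold")
```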
In total, we find that the ansatz (\ref{conformal_N2})
is stable under the supersymmetry transformations (\ref{eps2}), provided the
parametrizing fields transform as
\begin{eqnarray}
\delta_{\rm sc}( \varphi +\ft12 i \tau)&=& \overline{\epsilon}\lambda\;,
\nonumber\\[.5ex]
\delta_{\rm sc}\lambda &=& \ft12 \gamma^\mu\epsilon\,
\partial_\mu( \varphi +\ft12 i \tau) +\ft12 e^{\varphi-i\tau} u (B\epsilon)^*
+\ft12\eta\ ,
\nonumber\\[.5ex]
\delta_{\rm sc} u &=&
e^{-\varphi+i\tau}
\tilde{\epsilon}\left[
\gamma^\mu D_\mu \lambda
+\ft12 \gamma^\mu \lambda\, \partial_\mu(\varphi+\ft12 i \tau)\right]\ .
\label{susyconfN2}
\end{eqnarray}
Having established consistency of the ansatz (\ref{conformal_N2}),
as for the models with lower supersymmetry above,
evaluating the off-shell action (\ref{L_N2_off})
for the ansatz (\ref{conformal_N2}) yields an action
which by construction is invariant under the
superconformal transformations (\ref{susyconfN2}).
Explicitly, we obtain
\begin{eqnarray}
e^{-1} {\cal L} &=& -\ft12 r^2 R
- 4 \partial_\mu r \partial^\mu r -r^2 \partial_\mu \tau \partial^\mu \tau +r^6 |u|^2
\nonumber\\[.5ex]
&&{}-r^2(\overline{\lambda}\gamma^\mu D_\mu \lambda +\mbox{c.c.})
-\ft12 i r^2 \overline{\lambda}\gamma^\mu \lambda \partial_\mu \tau\ .
\end{eqnarray}
Upon integrating out $u$ and performing a rescaling similar to that of the ${\cal N}=1$ case, this reproduces the Lagrangian~(\ref{N2glob})
truncated to $\phi^i=0$, cf.~(\ref{Kcone}).
Repeating the construction described in this section for the general matter coupled ${\cal N}=2$ supergravity, reproduces the
full Lagrangian~(\ref{N2glob}) with the $\phi^i$ parametrizing the supergravity target space.
Likewise, extending the construction to gauged ${\cal N}=2$ supergravity straightforwardly reproduces the action~(\ref{N2glob_gauged}). By construction, the resulting gauge group then is a subgroup of the supergravity isometries, thus of $\mathrm{Iso}(B)_0$.
In a similar construction for the ${\cal N}=3$ case, we expect that the conformal mode $r$ of the metric, together with three scalars arising from the $Sp(1)$-valued triplet of auxiliary vector fields of ${\cal N}=3$ supergravity (which originate from the Weyl multiplet) and the scalars of the quaternion-K\"ahler target space of the supergravity theory, should build up the desired hyper-K\"ahler cone (HKC), in accordance with \cite{deWit:2001bk,deWit:2001dj}.
\section{Conclusions}\label{conclusion}
In conclusion, we have seen that a supersymmetric Chern--Simons--matter model, in order to be ${\cal N}$--superconformal, should: \textit{i)} have a target space $\mathcal{M}$ which is an ${\cal N}$--K\"ahlerian cone (namely a cone over an ${\cal N}$--Sasakian manifold); \textit{ii)} (for ${\cal N}\leq 2$) have a superpotential with the right scaling with respect to the Euler vector of the cone; \textit{iii)} have a compatible gauging, that is, the gauge group is a subgroup of $\mathrm{Iso}(B)_0$ and the $T$--tensor satisfies the algebraic condition $T\big|_\boxplus=0$ spelled out in \eq{eq:defTAB}
(this is a non-trivial condition only for ${\cal N}\ge4$).
Once these conditions are fulfilled, we have shown that the Lagrangian and supersymmetry transformations are simply those of the ${\cal N}=1$ case with the real superpotential ${\cal F}$ set equal to $T^{MM}$ (for any choice of $M$). Moreover, we have constructed $T^{MM}$ by using the momentum maps $\mu^{MN\, m}$, for which we have given concrete expressions, see eqn. \eqref{mucone}.
We have moreover shown the emergence of the (super) CFTs from a suitable rigid limit of Chern--Simons--matter supergravities in $D=3$ \cite{deWit:2003ja,deWit:2004yr}. In particular, we have exhibited in detail this limiting procedure in pure gravity, and in off-shell formulations of ${\cal N}=1$ and ${\cal N}=2$ supergravities, in which we have shown that the auxiliary fields play an important role in the description of the resulting sigma models with underlying conic geometries.
In particular, the extra scalar fields that enhance the supergravity target space to the target space of the superconformal sigma model descend from particular modes of the off-shell supergravity multiplet.
A different rigid limit of three-dimensional Chern--Simons--matter supergravities has been analyzed in~\cite{Bergshoeff:2008ix,Bergshoeff:2008bh}
in which the supergravity sigma-model target spaces generically flatten out preserving their dimension. In contrast, the limit described in this paper
gives rise to a curved conic target space geometry which contains the supergravity target space as a particular subspace of the base manifold.
The conditions on the gauge group then arise automatically from the corresponding conditions in supergravity and the limit procedure.
\subsection*{Acknowledgments}
We acknowledge discussions with Chris Pope and Jan Rosseel. E.S. thanks Groningen University and Scuola Internazionale Superiore di Studi Avanzati in Trieste for hospitality where part of this work was done. The work of H.S. has been supported in part by the Agence Nationale de la Recherche (ANR), and the work of E.S. has been supported in part by NSF grants PHY-0555575 and PHY-0906222.
\newpage
\section*{Appendix}
\begin{appendix}
\section{Complex spinor conventions}
The three-dimensional space-time metric has signature $(-++)$. The
gamma matrices satisfy the Clifford algebra
$\{\gamma^{\mu},\gamma^{\nu}\} =2h^{\mu\nu}$ and obey the
identities
\begin{eqnarray}
\left(\gamma^{\mu}\right)^{\dagger} \ = \
\gamma_{0}\gamma^{\mu}\gamma_{0}\ , \qquad
\left(\gamma^{\mu}\right)^{T} \ = \
-C\gamma^{\mu}C^{-1}\ , \qquad
\left(\gamma^{\mu}\right)^{*} \ = \
B\gamma^{\mu}B^{-1}\ ,
\end{eqnarray}
where $C^T=-C$ is the charge conjugation matrix, $B^T=B$ and $B^\dagger B=1$.
It follows that $B^\star B=1$. Consistent with these, we work with $C^\dagger C=1$ and $C=B\gamma_0$. We then have $C^\star C = -1$. In general, we use Majorana spinors, i.e.\ we impose the reality condition $\psi^* = B\psi$\,.
For the ${\cal N}=2$ model, we prefer to use complex notation, i.e.~the spinor
fields are two-component Dirac spinors. The Dirac conjugate is defined as
\begin{eqnarray}\label{inv1}
\bar{\psi}^{\bar \alpha} &=& \left(\psi^\alpha\right)^{\dagger}i\gamma_{0}\;,
\end{eqnarray}
such that $G_{\alpha\bar \alpha}\,\bar{\psi}^{\bar \alpha} \psi^\alpha$ is a (real) Lorentz scalar. Note that the indices $\alpha, \bar\alpha$ are associated with the complex coordinates $(\Phi^\alpha, \Phi^{\!*}{}^{\,\bar \alpha})$ used in Section 3.3, and that the spinor indices are suppressed.
For Dirac spinors there is an alternative definition of the conjugate by
\begin{eqnarray}
\tilde{\psi}^\alpha &=& \overline{(B\psi^\alpha)^*}\ ,
\end{eqnarray}
which gives rise to a second bilinear invariant
\begin{eqnarray}\label{inv2}
{\tilde \psi}^\alpha\psi^\beta \ = \
i\left(\psi^{T}\right){}^\alpha C\psi^\beta\; \qquad {\rm and}\qquad
{\tilde \psi}^{\bar \alpha} \psi^{\bar \beta}\ = \
i\left(\psi^T\right){}\!^{\bar \alpha}\, C^{-1}\psi^{\bar \beta}\ ,
\end{eqnarray}
where $\psi^{\bar \alpha} \ = \ (\psi^{\alpha})^{\star}$ and the Dirac spinor indices have been suppressed.
The symplectic indices are raised and lowered as
\begin{equation}
\psi^a =\Omega^{ab}\psi_b\ , \quad \psi_a = \psi^b \Omega_{ba}\ ,\qquad \Omega_{ab}\Omega^{bc}=-\delta_a^c\ ,
\end{equation}
and similarly for fields carrying the $SU(2)$ doublet indices $A=1,2$.
\section{The $\mathfrak{spin}({\cal N})$ matrices $(\Sigma^{MN})_{AB}$}\label{appendixSigma}
The $\mathfrak{spin}({\cal N})$ matrices $\Sigma^{MN}=-\Sigma^{NM}$ are defined as
\begin{align}
\Sigma^{0\, I}&= -\Sigma^{I\, 0}= \Gamma^I\\
\Sigma^{I\,J}&= \frac{1}{2}\left(\Gamma^I\, \Gamma^J-\Gamma^J\,\Gamma^I\right)\qquad \text{for } I, J=1,2,\dots, {\cal N}-1,
\end{align}
where the $\Gamma^I$ are the Dirac matrices generating the Euclidean Clifford algebra $\mathbb{C}l({\cal N}-1)$ in $({\cal N}-1)$ dimensions:
\begin{equation}
\Gamma^I\Gamma^J+\Gamma^J\Gamma^I=-2\,\delta^{IJ}\qquad I, J=1,2,\dots, {\cal N}-1.
\end{equation}
The matrices $\Gamma^I$ are real and, being anti--Hermitian, antisymmetric. The $\Sigma^{MN}$ are then also real and antisymmetric. Moreover, each matrix $\Sigma^{MN}$ squares to $-\boldsymbol{1}$. More generally, they satisfy the Clifford relations
\begin{equation}\begin{split}\label{eq:cliffordsismasignma}
\Sigma^{MN}\Sigma^{PQ}&=(\delta^{NP}\delta^{MQ}-\delta^{MP}\delta^{NQ})\,\boldsymbol{1}+\\
&+\delta^{MP}\Sigma^{NQ}-
\delta^{MQ}\Sigma^{NP}-\delta^{NP}\Sigma^{MQ}+\delta^{NQ}\Sigma^{MP}+\Sigma^{MNPQ},
\end{split}\end{equation}
corresponding to the Clifford multiplication in $\mathbb{C}l_0({\cal N}-1)$. Here $\Sigma^{MNPQ}$ is the totally antisymmetrized product of the $\Gamma^M$ (with $\Gamma^{0}$ identified with $\boldsymbol{1}$).
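The properties just listed can be verified in an explicit example: for ${\cal N}=4$ the $\Gamma^I$ ($I=1,2,3$) may be realized as left multiplication by the imaginary quaternion units acting on $\mathbb{R}^4$ (one possible choice among many); a minimal numerical sketch:

```python
import numpy as np

# Gamma^I for N-1 = 3: left multiplication by the quaternion units i, j, k
# acting on R^4 (an explicit, non-unique choice of real representation).
Li = np.array([[0,-1,0,0],[1,0,0,0],[0,0,0,-1],[0,0,1,0]], dtype=float)
Lj = np.array([[0,0,-1,0],[0,0,0,1],[1,0,0,0],[0,-1,0,0]], dtype=float)
Lk = np.array([[0,0,0,-1],[0,0,-1,0],[0,1,0,0],[1,0,0,0]], dtype=float)
Gamma = [Li, Lj, Lk]
I4 = np.eye(4)

# Clifford algebra: Gamma^I Gamma^J + Gamma^J Gamma^I = -2 delta^{IJ}
for a in range(3):
    for b in range(3):
        assert np.allclose(Gamma[a] @ Gamma[b] + Gamma[b] @ Gamma[a],
                           -2.0 * (a == b) * I4)

def Sigma(M, N):
    """Sigma^{MN} for M,N = 0,...,3 as defined in the text."""
    if M == N:
        return np.zeros((4, 4))
    if M == 0:
        return Gamma[N - 1]
    if N == 0:
        return -Gamma[M - 1]
    return 0.5 * (Gamma[M-1] @ Gamma[N-1] - Gamma[N-1] @ Gamma[M-1])

# Sigma^{MN} (M != N) are real antisymmetric matrices squaring to -1
for M in range(4):
    for N in range(4):
        if M != N:
            S = Sigma(M, N)
            assert np.allclose(S.T, -S)
            assert np.allclose(S @ S, -I4)

print("Sigma^{MN} properties verified for N=4")
```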
\section{Geometry of conformal Killing spinors}\label{app:killingspinors}
In this appendix we address the question of which three--dimensional (pseudo)Riemannian spaces $M$ admit conformal Killing spinors (CKS), that is solutions $(\epsilon,\eta)$ to the equation
\begin{equation}\label{CKSEquiation}
D_\mu\epsilon=\gamma_\mu\eta.
\end{equation}
Notice that, in our definition, an ordinary Killing spinor is a special case of a conformal Killing spinor.\medskip
Let $N(g)$ be the number of linearly independent CKS on the manifold $M$ equipped with the Riemannian metric $g$. $N(g)$ depends only on the \emph{conformal class} $[g]$ of the metric $g$. Indeed, if $(\epsilon, \eta)$ is a CKS for the metric $g$,
\begin{equation}\label{confromalCFS}
(\tilde \epsilon,\tilde\eta)\equiv \left(e^{\phi/2}\epsilon,\ e^{\phi/2}\!\left(\eta+\frac{1}{2}\gamma^\mu\partial_\mu\phi\, \epsilon\right)\right)
\end{equation}
is a CKS for the conformally equivalent metric $\tilde g=e^{2\phi}\,g$.
\medskip
\subsection{Local solutions}
We begin by discussing $N(g)_\mathrm{local}$, that is, the number of \textit{local} solutions to the CKS equation \eqref{CKSEquiation} in a neighborhood of a point. In \cite{baumleitner} it was shown that, in \textit{Lorentzian} signature:
\begin{enumerate}
\item $N(g)_\mathrm{local}=4$ if and only if $M$ is conformally flat;
\item $N(g)_\mathrm{local}=1$ if and only if $M$ is locally conformally equivalent to a $pp$--wave metric
\begin{equation}\label{ppmetric}
ds^2= dx^+\, dx^- +f(x^+, y) (dx^+)^2+ dy^2;
\end{equation}
\item $N(g)_\mathrm{local}=0$ in all other cases.
\end{enumerate}
In some cases, the $pp$--wave may be seen as a `degenerate limit' of a conformally flat space in the following sense: the Lorentzian conformally--flat spaces are modelled on the $AdS_3$ space (see below) and the supersymmetric $pp$--waves arise as Penrose limits of $AdS_3$ \cite{figuroa}. Notice that the CKS in the metric \eqref{ppmetric} is parallel, hence an ordinary Killing spinor (that is $\eta=0$) \cite{baumleitner}.
\smallskip
In \textit{Euclidean} signature the local result is simpler:
\begin{enumerate}
\item $N(g)_\mathrm{local}=4$ if and only if $M$ is conformally flat;
\item $N(g)_\mathrm{local}=0$ otherwise.
\end{enumerate}
To understand these results, recall that in $d=3$ a metric $g$ is conformally flat if and only if its Cotton tensor,
\begin{equation}
C_{\nu\mu\rho}:=D_\rho\Big(R_{\mu\nu}-\tfrac{1}{4}g_{\mu\nu}R\Big)
-D_\mu\Big(R_{\rho\nu}-\tfrac{1}{4}g_{\rho\nu}R\Big),
\end{equation}
vanishes identically. In ref.~\cite{baumleitner} it was shown that the local integrability condition for the CKS equation \eqref{CKSEquiation} is (in any space--time signature)
\begin{equation}\label{cottongsammaepsilon}
C_{\nu\mu\rho}\gamma^{\nu}\epsilon=0.
\end{equation}
If $\epsilon\not=0$ this algebraic equation implies that, for all vectors $X^\mu$, $Y^\mu$, the vector $C_{\nu\mu\rho}X^\mu Y^\rho$ is \textit{null}.
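Explicitly, contracting \eqref{cottongsammaepsilon} with $X^\mu Y^\rho$ and writing $v_\nu\equiv C_{\nu\mu\rho}X^\mu Y^\rho$, one finds $v_\nu\gamma^\nu\epsilon=0$ and hence
\begin{equation}
0 \ = \ v_\sigma\gamma^\sigma\, v_\nu\gamma^\nu\,\epsilon \ = \ v_\nu v^\nu\,\epsilon\ ,
\end{equation}
so that $v_\nu v^\nu=0$ whenever $\epsilon\neq 0$.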
In Euclidean signature all null vectors vanish, so $C_{\nu\mu\rho}\equiv 0$. In Minkowski signature, a non--zero vector may be null. In this case the matrix $C_{\nu\mu\rho}X^\mu Y^\rho\,\gamma^\nu$ has a one--dimensional kernel, and since $\epsilon$ must lie in this kernel, we may have at most one linearly independent CKS if $C_{\nu\mu\rho}\not=0$. The case $N(g)_\mathrm{local}=1$ corresponds to spaces locally conformal to $pp$--waves \cite{baumleitner}.
The above result allows an explicit construction of all the \emph{local} solutions to the CKS equation. For simplicity, here we limit ourselves to the conformally flat case\footnote{\ Sometimes the $pp$--case may be reduced to this one by taking the Penrose limit.} (the only one in Euclidean signature). Since the metric is conformally flat, there exist local coordinates in which the metric $g$ takes the form $e^{2\phi} \eta_{\mu\nu}dx^\mu\, dx^\nu$; in each such coordinate patch we can use eqn.~\eqref{confromalCFS} to map the CKS's to the CKS of flat space.
Then the general {local} solution to the CKS equation in the conformally flat metric $g_{\mu\nu}=e^{2\phi}\eta_{\mu\nu}$ is
\begin{equation}\label{eq:localsolCKS}
\epsilon=e^{\phi/2}\big(x^\mu\gamma_\mu \epsilon_1+\epsilon_2\big) \qquad \epsilon_1, \epsilon_2\ \text{constant spinors}.
\end{equation}
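Indeed, for the flat representative $\phi=0$ a direct computation gives
\begin{equation}
\partial_\mu\big(x^\nu\gamma_\nu\,\epsilon_1+\epsilon_2\big) \ = \ \gamma_\mu\,\epsilon_1\ ,
\end{equation}
so that the CKS equation \eqref{CKSEquiation} holds with $\eta=\epsilon_1$; the general $\phi$ case then follows from the conformal covariance \eqref{confromalCFS}, and the two constant spinors $\epsilon_1$, $\epsilon_2$ account for the four linearly independent local solutions.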
However, these {four} local solutions need \textit{not} extend to global conformal Killing spinors. Given a conformally flat manifold, in general we get $N(g)\leq 4$ CKS, the actual number depending on how many of the four local solutions have a global extension. For instance, the $3$--torus $S^1\times S^1\times S^1$ with the usual flat metric is certainly conformally flat, but it has $N((S^1)^3)=2$, since only the local solutions \eqref{eq:localsolCKS} with $\epsilon_1=0$ are globally single--valued on the torus (the two surviving CKS correspond to the two parallel spinors of the flat connection).
We need to discuss the global topological properties which must be fulfilled in order to get well--defined global CKS. To do this it is convenient to introduce the conformal counterpart of the usual Riemannian normal coordinates.
\subsection{Conformal normal coordinates}
Let $\eta_{\mu\nu}$ ($\mu,\nu=1,2,3$) be the flat metric for the given signature $(p,q)$ of spacetime. Consider the following quadric in projective four-dimensional space
\begin{equation}
Q\colon \ \ \eta_{\mu\nu}X^\mu X^\nu-2 X^0 X^4=0,
\end{equation}
and let $\widetilde{Q}$ be its universal cover. $\widetilde{Q}=S^3$ in Euclidean signature and $\widetilde{Q}=\widetilde{AdS}_3$ in the Minkowski one.
$\widetilde{Q}$ has a natural `round' metric $g_\mathrm{can}$, of signature $(p,q)$, on which the group of projective rotations $SO(p+1,q+1)$ acts by conformal symmetries.
\medskip
Let $M$ be a complete \textit{conformally--flat} manifold of signature $(p,q)$. One can show \cite{kuiper} that there is an open set $U\subset \widetilde{Q}$ and a map
\begin{equation}
\varphi\colon U\rightarrow M,
\end{equation}
such that:
\begin{enumerate}
\item $\varphi$ is surjective;
\item in the neighborhood of each point $p\in M$, $\varphi$ is a local diffeomorphism, hence it defines local coordinates (conformal normal coordinates);
\item $\varphi^\ast g= e^{-2\omega}\, g_\mathrm{can}$, for some function $\omega$; \textit{i.e.}\ $\varphi$ is a \textit{conformal map};
\item $\varphi$ is unique up to a global $SO(p+1,q+1)$ rotation.
\end{enumerate}
However $\varphi$ is not one--to--one globally. Many points of $U\subset \widetilde{Q}$ may be mapped to the same point of $M$.
\subsection{Global solutions}
Locally, in the normal conformal coordinates, the solutions to the CKS equation are simply
\begin{equation}
\label{eq:pullback}
\epsilon_\mathrm{local} = e^{\omega/2}\,(\varphi^{-1})^*\epsilon_Q,
\end{equation}
where $\epsilon_Q$ are the canonical CKS on the quadric $\widetilde{Q}$ (constructed out of the Maurer--Cartan forms for $SO(p+1,q+1)$; compare with eqn.~\eqref{eq:localsolCKS}). In writing eqn.~\eqref{eq:pullback} we used the fact that $\varphi$ is a local diffeomorphism, so the map $\varphi^{-1}$ is locally defined.
However, $\varphi^{-1}$ is not globally defined (in general) since $\varphi$ is \textit{many--to--one} in the large. Then the inverse map
$\varphi^{-1}$ has many distinct branches. The global CKS are precisely those local solutions \eqref{eq:pullback} for which the different branches of $\varphi^{-1}$ agree. We have already seen an example of this phenomenon at the end of the previous subsection. The local solutions to the CKS equation on a flat $3$--torus are $x^\mu\gamma_\mu\epsilon_1+\epsilon_2$; the map $\varphi\colon \mathbb{R}^3\subset S^3\rightarrow (S^1)^3$ is given by $\vec x\mapsto \vec x \mod \mathbb{Z}^3$. A point $\vec x\in (S^1)^3$ has many preimages $\vec x+\vec n$, $\vec n\in\mathbb{Z}^3$, and the difference between the pull--backs \emph{via} different branches of $\varphi^{-1}$, namely $(\vec n-\vec m)\cdot\vec \gamma\, \epsilon_1$, vanishes precisely if $\epsilon_1=0$. Thus we get two global CKS rather than four.
In the Euclidean case, we have the maximum number of conformal Killing spinors, namely $4$, when the map $\varphi$ is a diffeomorphism (that is, \textit{one--to--one}). The Lorentzian case is slightly subtler since the quadric $Q= AdS_3$ is not simply connected. Thus the criterion for $N(g)_\mathrm{global}=4$ is that the inverse map $\varphi^{-1}$ exists and induces a covering map of a domain of $AdS_3$.
In conclusion, we have $N(g)_\mathrm{global}=4$ if:
\begin{enumerate}
\item In Euclidean signature:
\begin{enumerate}
\item $M$ is a conformal sphere;
\item $M$ is conformal to an open domain $U$ in $S^3$;
\end{enumerate}
\item In Lorentzian signature:
\begin{enumerate}
\item $M$ is conformally equivalent to one of the (infinitely many) covers of the $AdS_3$ space;
\item $M$ is conformal to an open domain in one of the above.
\end{enumerate}
\end{enumerate}
An example of (2b) is the Minkowski spacetime, while examples of (1b) are $\mathbb{R}^3$ and $\mathbb{H}^3$ with metrics conformal to the usual constant curvature ones. In these (1b) cases, the metric (if complete) takes the form
\begin{equation}\label{metricdomain}
ds^2=\frac{d\vec x\cdot d\vec x}{f(\vec x)^2},
\end{equation}
where $f(\vec x)$ is a positive function on the domain $U\subset \mathbb{R}^3$ which vanishes on the boundary $\partial U$. On $U$ we have four solutions to the CKS equation, namely $f^{-1/2}\,(x^\mu\,\gamma_\mu\epsilon_1+\epsilon_2)$ ($\epsilon_1$, $\epsilon_2$ constant spinors).
As a word of caution about the manifolds \eqref{metricdomain} (and their Minkowski counterparts), we stress that, although in these geometries we have four linearly independent solutions to the CKS equation, it is \textit{not true}, in general, that all four CKS may be used to generate superconformal symmetries of a sensible QFT living on $M$. Indeed, to define a QFT we need to impose boundary conditions on $\partial M$ and only the CKS's which respect these boundary conditions are true superconformal invariances of the physical theory. Thus $N(g)_\mathrm{physical}\leq N(g)_\mathrm{global}$. \medskip
Manifolds $M$ with fewer than the maximal number of global CKS's are harder to classify. We know only partial results for the Euclidean case.
$M$ should be conformally flat; in the compact, simply--connected, Euclidean--signature case we may then invoke the Kuiper theorem \cite{kuiper}: \textit{A conformally flat compact simply connected Riemannian manifold is conformally equivalent to the canonical sphere.} Hence,
in the simply--connected case, a compact manifold admitting non--trivial conformal Killing spinors is conformally equivalent to $S^3$ with the round metric, and there are no manifolds with $1\leq N(g)_\mathrm{global}\leq 3$.\medskip
There are, however, interesting examples of (Euclidean) \textit{non}--simply connected conformally--flat compact $3$--folds with $1\leq N(g)_\mathrm{global}\leq 3$. One example is the $3$--torus $(S^1)^3$. A more interesting example is $S^1\times S^2$ realized as
\begin{equation}
\label{othergeoemtry2}
S^1\times S^2\simeq \big(\mathbb{R}^3\setminus \{(0,0,0)\}\big)\Big/ \vec x\sim q\, \vec x,\qquad q\neq 1,
\end{equation}
with the metric
\begin{equation}
ds^2=\frac{d\vec x\cdot d\vec x}{\vec x\cdot\vec x}.
\end{equation}
The conformal pull--back formula gives (locally)
\begin{equation}
\label{secongeoconfk}
\epsilon(\vec x)= (\vec x\cdot\vec x)^{-1/4}\, (\vec x\cdot\vec\gamma\epsilon_1+\epsilon_2),\qquad \epsilon_1,\epsilon_2\ \text{constant spinors}.
\end{equation}
A spinor $\epsilon(\vec x)$ is globally defined in the geometry \eqref{othergeoemtry2} iff $\epsilon(q\,\vec x)= q^{1/2} \epsilon(\vec x)$. Thus only the spinors in eqn.~\eqref{secongeoconfk} having $\epsilon_2=0$ survive. We find $2$ linearly independent conformal Killing spinors.
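For completeness, the scaling behind the last statement follows directly from eqn.~\eqref{secongeoconfk}:

```latex
\begin{equation*}
\epsilon(q\,\vec x)
 = (q^2\,\vec x\cdot\vec x)^{-1/4}\,
   \big(q\,\vec x\cdot\vec\gamma\,\epsilon_1+\epsilon_2\big)
 = q^{1/2}\,(\vec x\cdot\vec x)^{-1/4}\,\vec x\cdot\vec\gamma\,\epsilon_1
  +q^{-1/2}\,(\vec x\cdot\vec x)^{-1/4}\,\epsilon_2.
\end{equation*}
```

The two terms scale with opposite weights $q^{\pm 1/2}$, so the condition $\epsilon(q\,\vec x)=q^{1/2}\,\epsilon(\vec x)$ selects precisely the $\epsilon_1$ term.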
\end{appendix}
\newpage
\providecommand{\href}[2]{#2}\begingroup\raggedright
\section{Introduction}
\label{intro}
Constraint programming (CP) has made significant progress over the last decades, and is now considered as one of the foremost paradigms for solving combinatorial problems. The basic assumption in CP is that the user models the problem and a solver is then used to solve it. Despite the many successful applications of CP on combinatorial problems from various domains, there are still challenges to be faced in order to make CP technology even more widely used.
A major bottleneck in the use of CP is modeling. Expressing a combinatorial problem as a constraint network requires considerable expertise in the field \cite{freuder1999modeling}. To overcome this obstacle, several techniques have been proposed for modeling a constraint problem automatically, and nowadays automated modeling is regarded as one of the most important aspects of CP \cite{freuder1999modeling,o2010automated,freuder2014grand,lombardi2018boosting,de2018learning,freuder2018progress}. Along these lines, an area of research that has started to attract a lot of attention is that of {\em constraint acquisition} where the model of a constraint problem is acquired (i.e. learned) using a set of examples that are posted to a human user or to a software system \cite{bessiere2017constraint,de2018learning}.
Constraint acquisition is an area where CP
meets machine learning, as it can be formulated as a
concept learning task.
Constraint acquisition can come in various flavours depending on factors such as whether the learner can post queries to the user dynamically, and the type of queries that can be posted and answered. In {\em passive} acquisition, examples of solutions and non-solutions are provided by the user. Based on these examples and their classification by the user as positive (solutions) or negative (non-solutions),
the system learns a set of constraints that correctly classifies all the given examples.
A limitation of passive acquisition (and passive learning in general) is the requirement, on the user's part, to provide diverse examples of solutions and non-solutions to the system, especially in problems with irregular structure.
In contrast, in {\em active} or {\em interactive} acquisition, the learner interacts with the user dynamically while acquiring the constraint network.
In such systems, the basic query is to ask the user to classify an example as a solution or a non-solution.
This ``yes/no'' type of question is called a membership query~\cite{angluin1988queries},
and this is the type of query that has received the most attention in active constraint acquisition.
A state-of-the-art interactive acquisition algorithm is QuAcq \cite{bessiere2013constraint}. For each example that is classified as negative by the user, QuAcq is able to learn one constraint of the target network by posting a series of partial queries to the user. An alternative algorithm, called MultiAcq, finds all the constraints of the target network violated by a generated example that is classified as negative \cite{arcangioli2016multiple}. However, MultiAcq needs a linear number of queries in the size of the example to learn each constraint, whereas QuAcq has a logarithmic complexity.
In general, active acquisition has several advantages. First of all, it decreases the number of examples necessary to converge to the target set of constraints. In addition, it does not require the existence of diverse examples of solutions or non-solutions to the problem. This is a significant advantage especially if the problem has not already been solved.
Another advantage is that the user does not need to be human. It might be a previous system developed to solve the problem. For example, active learning has been used to automatically acquire CSPs which model the elementary actions of a robot by asking queries to the simulator of the robot \cite{paulin2008automatic}.
However, active learning still presents computational challenges regarding the number of queries required and the CPU time needed to generate them.
Despite the good theoretical bound of QuAcq and QuAcq-like approaches in terms of the number of queries required to learn a network, the generation of a membership query is an NP-complete problem. Hence, it can be too time-consuming, and therefore annoying, especially if the system interacts with a human user. For example, QuAcq can take more than 30 minutes to generate a query for the model acquisition of Sudoku puzzles near convergence.
In this work, we present methods to deal with the challenges of interactive learning. We first introduce an algorithm, called MQuAcq, that blends the main idea of MultiAcq into QuAcq, achieving a better complexity bound than MultiAcq. This algorithm uses the reasoning of QuAcq when searching for constraints to learn once a negative query is encountered, but instead of focusing on one constraint, it learns a maximum number of constraints, just like MultiAcq does. But whereas MultiAcq learns constraints of the target network in a number of queries linear in the size of the example, our proposed approach finds constraints in a logarithmic number of queries.
We make a detailed theoretical analysis of MQuAcq, proving its correctness and its complexity in terms of required queries.
We also propose an optimization on the process of locating scopes that reduces the number of queries needed to learn the target constraint network significantly. This is done by avoiding posting redundant queries to the user.
Then we focus on the query generation process, which is an important step of constraint acquisition that has not been discussed in detail in the literature. We describe the query generation process of standard interactive acquisition systems in detail, and we propose heuristics that can be applied during query generation to boost the performance of constraint acquisition algorithms.
First, we present a heuristic that generalizes the idea of allowing partial queries to be posted to the user. Instead of using partial queries only when trying to focus on one or more constraints after a complete example has been classified as negative, we allow the generation of partial examples to be posted as partial queries to the user. As experiments demonstrate, this can reduce the time needed for the system to converge, helping to avoid premature convergence and reducing the total run time of the acquisition process. Next, we focus on the generation of more informative queries, i.e. queries that can help to reduce the version space faster.
We propose a variable ordering heuristic for the query generation process, aiming to generate queries that carry more information and to reduce the maximum CPU time needed for query generation. We then propose a value ordering heuristic that cuts down the number of queries required.
Experimental results with benchmark problems from various domains demonstrate that the integration of our methods results in an algorithm that considerably outperforms both QuAcq and MultiAcq as it generates significantly fewer queries, it is up to one order of magnitude faster in average query generation time, it is by far superior in total run time, and it largely alleviates the premature convergence problem from which both QuAcq and MultiAcq suffer.
In addition, experiments show that our proposed algorithm scales up quite well in terms of the number of queries required, while the time performance, being highly dependent on the size of the target network, can rise sharply. Also, it is shown that learning problems with expressive biases scales well with our proposed algorithm, even when a large language is used to construct the bias, especially regarding the number of generated queries.
The rest of this paper is organized as follows. Related work is presented in Section~\ref{sec:rel}. Section~\ref{sec:background} introduces the necessary background on constraint acquisition. In Section~\ref{sec:quacq} we review the algorithms QuAcq and MultiAcq. Section~\ref{sec:mquacq} describes the new algorithm that we propose. In Section~\ref{findScope2} we describe the optimization to the process of locating scopes.
Section~\ref{sec:query} details the query generation process. In Section~\ref{sec:heur} we describe the proposed heuristics.
An experimental evaluation is presented in Section~\ref{sec:experiments}. In Section~\ref{sec:disc} we discuss some important aspects of MQuAcq and point to future work, while Section~\ref{sec:conclusion} concludes the paper.
\section{Related Work}
\label{sec:rel}
An early approach to passive constraint acquisition is the algorithm ConAcq.1 \cite{bessiere2004leveraging,bessiere2005sat,bessiere2017constraint}. Based on the examples of solutions and non-solutions provided by the user, the system learns a set of constraints that correctly classifies all examples given so far. A passive learning method based on inductive logic programming was proposed in \cite{lallouet2010learning}. This system uses background knowledge on the structure of the problem to learn a representation of the problem, correctly classifying the examples given. Another approach to passive learning is the ModelSeeker system \cite{beldiceanu2012model}. In this approach, the user has to provide positive examples to the system, which are then arranged as a matrix. Then the system uses the global constraints catalog to identify specific global constraints that are present in the model. ModelSeeker has been shown to be very effective in extracting a model from highly structured problems, requiring only a few positive examples to learn the model of problems such as Sudoku.
Concerning active acquisition, an early related work is the {\em matchmaker} agent which interactively asks the user to explain why a proposed example is considered as incorrect, by providing one of the constraints that are violated \cite{freuder1998suggestion}.
An approach to interactive constraint acquisition using version spaces is presented in \cite{o2004study}. With this approach, examples are provided by the user that can be used to identify a version space of constraints and then the system attempts to generalize the user's examples by choosing a hypothesis from the current version space.
Another active learner presented in the literature is ConAcq.2 \cite{bessiere2007query,bessiere2017constraint} and its extension \cite{shchekotykhin2009argumentation}. Both these systems acquire constraint models using membership queries that are posted to the user \cite{angluin1988queries}.
A state-of-the-art interactive acquisition algorithm, again based on membership queries, is QuAcq \cite{bessiere2013constraint}. QuAcq is able to ask the user to classify partial queries, i.e. incomplete variable assignments, which may be easier for the user to answer. Also, asking partial queries gives the system the capability to focus on the scope of a constraint that is violated and hence learn the constraint. If the answer to a membership query posted by QuAcq is positive, the system reduces the search space by removing the set of constraints violated by this example. If the answer is negative, QuAcq asks a series of partial queries to locate the scope of one of the violated constraints of the target network.
An attempt to make QuAcq more efficient was presented by Arcangioli et al. with the MultiAcq algorithm \cite{arcangioli2016multiple}. Given that the generation of a useful membership query is not easy and that there may be several constraints that can be learned by each query, it is very likely that the system can learn more information from a negative query. So, instead of learning only one constraint, MultiAcq finds all the constraints of the target network violated by a generated example that is classified as negative. However, MultiAcq needs a linear number of queries in the size of the example to learn a constraint. On the other hand, QuAcq has a logarithmic complexity in terms of the number of queries.
Attempts to reduce the time needed to generate a membership query were presented in \cite{addi2018time,pquacq}. Apart from membership queries, other types of queries such as recommendation and generalization ones, have also been proposed to be used in interactive constraint acquisition \cite{daoudi2016constraint,bessiere2014boosting}. The use of such queries can reduce the total number of queries needed to learn a model, but the drawback is that they require a higher level of expertise from the user's part.
Active constraint acquisition is a special case of query-directed learning, also known as {\em exact learning} \cite{BSHOUTY1995146,bshouty2018exact}. Concept learning via queries has been widely studied in the theoretical machine learning literature. There are well-known results for several classes of concepts~\cite{bshouty1996asking,bshouty2018exact}, e.g. conjunctions of Horn clauses~\cite{angluin1992learning}, $k$-term DNF~\cite{blum1995fast} (or CNF) formula, decision trees~\cite{BSHOUTY1995146} etc.
In the learning model introduced in~\cite{angluin1988queries} and used by most approaches to exact learning, two types of queries are used. The {\em membership} query, mentioned above, which requests the user to classify a given example as positive or negative, and the {\em equivalence} query, which asks the user to decide whether the given concept is equivalent to the target concept. In case of a negative answer, the user must then provide a counterexample that proves why the two concepts are not equivalent.
As stated in~\cite{bessiere2017constraint}, in the context of constraint acquisition, posting equivalence queries to the user and expecting counterexamples to be returned is not feasible from a practical point of view as the assumption is that the user does not know the constraint network and does not have the skills to model the target concept directly. However, in the theoretical case where the user can answer equivalence queries, there exist theoretical results proving that a constraint network is learnable by equivalence queries alone~\cite{bessiere2017constraint}.
It has to be noted that constraint networks are quite complex to acquire and the results of generic concept learning algorithms cannot directly be compared to constraint acquisition algorithms. Also, operating with a bias of bounded arity constraints and without handling disjunctions of constraints, the current constraint acquisition algorithms cannot be applied to learn most of the boolean functions which have been studied in exact learning.
\section{Background}
\label{sec:background}
\subsection{Vocabulary and Constraint Networks}
\label{sec:voc}
The \textit{vocabulary} $(X, D)$ is a finite set of $n$ variables $X = \{x_1 , . . . , x_n \}$ and a set of domains $D = \{D(x_1), . . . , D(x_n)\}$, where $D(x_i) \subset \mathbb{Z}$ is the finite set of values for $x_i$. The vocabulary is the common knowledge shared by the user and the constraint acquisition system.
A \textit{constraint} $c$ is a pair (rel($c$), scope($c$)), where scope($c$) $\subseteq X$ is the \textit{scope} of the constraint and rel($c$) is a relation between the variables in scope($c$) that specifies which of their assignments are allowed. $|scope(c)|$ is called the \textit{arity} of the constraint. We denote by $c_{ij}$ a binary constraint between variables $x_i$ and $x_j$, with $c$ being the relation. A \textit{constraint network} is a set $C$ of constraints on the vocabulary $(X, D)$.
A constraint network that contains at most one constraint for each subset of variables (i.e. for each scope) is called a {\em normalized constraint network}.
An example $e_Y$ is an assignment on a set of variables $Y \subseteq X$. $e_Y$ is rejected by a constraint $c$ iff scope($c$) $\subseteq Y$ and the projection $e_{scope(c)}$ of $e_Y$ on the variables in scope($c$)
is not in rel($c$). An assignment $e_Y$ is a partial solution iff it is accepted by all the constraints $c \in C$ where $scope(c) \subseteq Y$. A complete assignment that is accepted by all the constraints in $C$ is a solution to the problem. $sol(C)$ denotes the set of solutions of $C$. A partial assignment $e_Y$ which is accepted by $C$ is not necessarily part of a complete solution.
A \textit{redundant} or \textit{implied} constraint $c \in C$ is a constraint that if removed from the constraint network, the set of solutions $sol(C)$ remains the same. In other words, if all the other constraints in $C$ are satisfied then $c$ is also satisfied.
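To make the definition concrete, here is a minimal brute-force check (an illustrative encoding of ours, not code from any acquisition system; the helper names \texttt{solutions}, \texttt{lt12}, etc.\ are hypothetical): with $x_1<x_2$ and $x_2<x_3$ in the network, the constraint $x_1<x_3$ is implied, since removing it leaves the solution set unchanged.

```python
# Toy check of an implied constraint over 3 variables with domain {1,2,3}.
from itertools import product

D = (1, 2, 3)

def solutions(constraints):
    """All complete assignments accepted by every constraint in the set."""
    return {e for e in product(D, repeat=3)
            if all(c(e) for c in constraints)}

lt12 = lambda e: e[0] < e[1]   # x1 < x2
lt23 = lambda e: e[1] < e[2]   # x2 < x3
lt13 = lambda e: e[0] < e[2]   # x1 < x3 (implied by the two above)

# Removing the implied constraint does not change sol(C).
assert solutions({lt12, lt23, lt13}) == solutions({lt12, lt23})
print(sorted(solutions({lt12, lt23})))   # -> [(1, 2, 3)]
```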
\subsection{Interactive Learning}
\label{sec:inter}
Using terminology from machine learning, a \textit{concept} is a Boolean function over $D^X = \prod_{x_i \in X} D(x_i) $, that assigns to each example $e \in D^X$ a value in $\{0, 1\}$, or in other words classifies it as negative or positive. The target concept $f_T$ is a concept that assigns 1 to $e$ if $e$ is a solution of the problem and 0 otherwise. In constraint acquisition, the target concept is the target constraint network $C_T$, such that $sol(C_T) = \{e \in D^X \mid f_T(e) = 1\}$. Hereafter, following the literature, we will assume that the target constraint network is normalized.
Besides the vocabulary, the learner has a \textit{language} $\Gamma$ consisting of {\em bounded arity} constraints.
The \textit{constraint bias} $B$ is a set of constraints on the vocabulary $(X, D)$, built using the constraint language $\Gamma$, from which the system can learn the target constraint network. $\kappa_B(e_Y)$ represents the set of constraints in $B$ that reject $e_Y$.
The classification question asking the user to determine if an example $e_X$ is a solution to the problem that the user has in mind is called a \textit{membership query} $ASK(e)$. The answer to a membership query is positive if $f_T(e) = 1$ and negative otherwise. A \textit{partial query} $ASK(e_Y)$, with $Y \subseteq X$, asks the user to determine if $e_Y$, which is an assignment in $D^Y$, is a partial solution or not, i.e. if it is accepted by all the constraints $c \in C$ where $scope(c) \subseteq Y$. A classified assignment $e_Y$ is labelled as positive or negative depending on the answer of the user to $ASK(e_Y)$. Following the literature, we assume that all queries are answered correctly by the user. From now on, we will sometimes use the terms query and example interchangeably.
A {\em minimal scope} in a negative example $e_Y$ is a subset of variables $S \subseteq Y$ such that ASK($e_S$) = ``no'' and for all $x_i \in S$, ASK($e_{S \setminus x_i}$) = ``yes''. Thus, a minimal scope $S$ is the scope of a violated constraint $c \in C_T$, such that there does not exist any violated constraint $c^{\prime} \in C_T$ with $scope(c^{\prime}) \subset S$.
To better understand the terms presented, consider the following example, which we use as a running example in the rest of the paper.
\begin{example}
\label{ex:back}
Consider a problem consisting of $8$ variables with domains $\{ 1, ... , 8 \}$. The vocabulary $(X,D)$ given to the system would be $X = \{ x_1, ..., x_8 \}$ and $D = \{D(x_1), . . . , D(x_8)\}$ with $D(x_i) = \{ 1, ... , 8 \}$. Assume that the problem the user has in mind has to satisfy the constraints $x_1 \neq x_2$, $x_1 \neq x_3$ and $x_3 \neq x_4$. Thus, the target network $C_T$ would be the set $\{ \neq_{12}, \neq_{13}, \neq_{34} \}$. Also, for simplicity assume that the language $(\Gamma)$ given to the system by the user contains only the relation $\{\neq$\}. In this case, the bias $B$ would contain the given relation for all the possible scopes. As it is a binary relation, $B = \{ \neq_{ij}$ $\mid$ $1 \leq i < 8 \land i < j \leq 8\}$. In addition, given an example $e = \{1,1,1,2,3,4,5,6\}$, $\kappa_B(e) = \{\neq_{12}, \neq_{13}, \neq_{23}\}$. The scopes $\{x_1,x_2\}$ and $\{x_1,x_3\}$ are minimal scopes: $e$ restricted to each of them is classified as negative, and removing either variable yields a positive partial assignment. The scope $\{x_2,x_3\}$ is not a minimal scope, because $C_T$ contains no constraint over it, so $e_{\{x_2,x_3\}}$ is classified as positive even though $\neq_{23}$ belongs to $\kappa_B(e)$. If $e$ is posted to the user to be classified as positive or negative then ASK($e$) is a complete membership query. If only a partial assignment, for instance $e_Y= \{1,1,1,2,-,-,-,-\}$ with $Y = \{x_1,x_2,x_3,x_4\}$, is posted to the user then $e_Y$ is a partial query.
\end{example}
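The quantities in Example~\ref{ex:back} are small enough to verify by brute force. The following sketch (an illustrative encoding of ours, not code from an acquisition system) builds the bias $B$ of the example and computes $\kappa_B(e)$:

```python
# Hypothetical encoding of the vocabulary and bias of Example 1.
from itertools import combinations

n = 8
X = list(range(1, n + 1))            # variables x_1 .. x_8
B = list(combinations(X, 2))         # bias: one != constraint per scope (i, j)

def violates_neq(e, i, j):
    """A != constraint on (x_i, x_j) rejects e iff the two values coincide."""
    return e[i - 1] == e[j - 1]

def kappa_B(e, bias):
    """Constraints of the bias rejected by the complete example e."""
    return [(i, j) for (i, j) in bias if violates_neq(e, i, j)]

e = (1, 1, 1, 2, 3, 4, 5, 6)
print(kappa_B(e, B))                 # -> [(1, 2), (1, 3), (2, 3)]
```

The printed scopes correspond to $\kappa_B(e) = \{\neq_{12}, \neq_{13}, \neq_{23}\}$ from the example.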
In interactive constraint acquisition the system generates a set $E$ of complete and partial examples, which are labelled by the user as positive or negative. A constraint network $C$ agrees with $E$ if $C$ accepts all examples labelled as positive in $E$ and rejects those labelled as negative. The \textit{learned network} $C_L$ has to agree with $E$.
A (complete or partial) query $q = e_Y$ is called \textit{irredundant} (or \textit{informative}) iff the answer to $q$ is not predictable. That is, $q$ is irredundant iff it is not accepted by all the constraints in the bias $B$, which means that $\kappa_B(e_Y)$ is not empty. At the same time, $q$ must be accepted by the learned network $C_L$; otherwise it is surely classified as negative.
Considering Example~\ref{ex:back}, the query $q$ = ASK($e$) is irredundant as it is not classified as positive by all the constraints from $B$. In this case, a query $q$ = ASK($e$), with $e = \{1,2,3,4,5,6,7,8\}$, would be redundant as it does not violate any constraint from $B$, thus it is surely positive.
The acquisition process has \textit{converged} on the learned network $C_L \subseteq B$ iff $C_L$ agrees with $E$ and for every other network $C \subseteq B$ that agrees with $E$, we have $sol(C) = sol(C_L)$. If the first condition is true ($C_L$ agrees with $E$) but the second condition has not been proved, we have \textit{premature convergence}.
If there does not exist a constraint network $C \subseteq B$ such that $C$ agrees with $E$ then the acquisition process has \textit{collapsed}. This happens when the target constraint network is not included in the bias, i.e. $C_T \nsubseteq B$.
\section{Algorithms for Interactive Constraint Acquisition}
\label{sec:quacq}
In this section we describe the state-of-the-art QuAcq and MultiAcq algorithms for interactive constraint acquisition.
State-of-the-art constraint acquisition algorithms are based on the version space learning paradigm. Initially, the system uses the given language $\Gamma$ to construct the bias $B$, containing the ``candidate'' constraints. Then the system iteratively posts a series of membership queries to the user in order to learn the constraints of the target network. Each example posted as a query must satisfy $C_L$, i.e. the network that has already been learned so far,
and violate at least one constraint from $B$. A query that satisfies the above criteria is called {\em informative}, as whatever the user's answer is, the version space will be pruned. In case the answer is positive, each constraint $c \in B$ that violates the posted example can be removed from $B$ (i.e. all the constraint networks containing $c$ are removed from the version space). In case the answer is negative, we know that one or more of the violated constraints are certainly in $C_T$. So, the system will search to find the scope of one or all of them, depending on the
algorithm used.
This is done through a function called {\em FindScope} in QuAcq. A similar function, called {\em FindAllScopes}, is used by MultiAcq.
Once a scope has been located, the function {\em FindC}~\cite{bessiere2013constraint} is used to learn the specific constraint (i.e. its relation).
\subsection{QuAcq}
QuAcq learns one constraint via each generated negative query. Once a generated example is classified as negative, QuAcq calls the recursive function {\em FindScope} to discover the scope of one of the violated constraints, as follows. It successively maps the problem of finding a constraint to a simpler problem by removing entire blocks of variables from the query and asking partial queries to the user. If after the removal of some variables the answer of the user to the partial query posted is ``yes'', then we know that the removed block contains at least one variable from the scope of a violated constraint. So, then {\em FindScope} focuses on this block. When, after repeatedly removing variables, the size of the considered block becomes 1 (i.e. the block contains a single variable), this variable certainly belongs to a violated constraint. {\em FindScope} achieves a logarithmic complexity in terms of the number of queries posted to the user by splitting the variables approximately in half after each removal.
QuAcq (Algorithm \ref{alg:quacq}) starts with an empty $C_L$ and a bias $B$ containing all the possible constraints that can be built using the constraint language $\Gamma$. The algorithm iteratively posts queries to the user, in the form of complete assignments. According to the classification of each query, the learned network $C_L$ is augmented with a new constraint from $B$ or some constraints are removed from $B$. If the algorithm terminates having learned the target network, it has converged. Otherwise, it has collapsed.
In more detail, QuAcq first checks if the currently learned network has at least one solution. If $C_L$ has no solution, then the problem that the user has in mind is unsolvable and the acquisition process collapses (line 3). Otherwise, QuAcq generates a complete assignment $e$ which satisfies the currently built $C_L$ and is rejected by at least one constraint in $B$ (line 4).
This is an important step that is not described in detail in the literature. We focus on the query generation step in Section~\ref{sec:query}.
If no such example exists, then the system has converged to the target network. After generating a suitable example $e$, this example is posted as a membership query to the user. If $e$ is classified as positive (i.e. it is a solution) then all constraints that violate it are removed from $B$ (line 6), as they definitely cannot be part of the target network. If $e$ is negative, the algorithm tries to find one constraint that is violated by $e$ to add to $C_L$ by calling functions {\em FindScope} and {\em FindC} (line 8). Once the system learns the constraint (line 10), if no collapse occurs (line 9), it returns to the query generation step.
\begin{algorithm}
\caption{QuAcq: Quick Acquisition}\label{alg:quacq}
\begin{footnotesize}
\begin{algorithmic}[1]
\Require $B$, $X$, $D$ ($B$: the bias, $X$: the set of variables, $D$: the set of domains)
\Ensure $C_L$ : a constraint network
\State $C_L \leftarrow \emptyset$;
\While {true}
\If{ $sol(C_L) = \emptyset$ } \Return ``collapse'';
\EndIf
\State Generate $e$ in $D^X$ accepted by $C_L$ and rejected by $B$;
\If{ $e$ = nil } \Return ``$C_L$ converged'';
\EndIf
\If{ $ASK(e)$ = ``yes'' } $B \leftarrow B \setminus \kappa_B(e) $;
\Else
\State $c \leftarrow FindC(e, FindScope( e, \emptyset, X, false ) )$;
\If{ $c$ = nil } \Return ``collapse'';
\Else \quad $C_L \leftarrow C_L \cup \{c\} $;
\EndIf
\EndIf
\EndWhile
\end{algorithmic}
\end{footnotesize}
\end{algorithm}
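The query generation step of line 4 can be made concrete on a toy vocabulary by naive generate-and-test (the toy network and helper names below are our own illustrative assumptions; real systems delegate this NP-complete step to a constraint solver, as discussed in Section~\ref{sec:query}):

```python
# Naive generate-and-test sketch of QuAcq's query generation (line 4):
# find a complete assignment accepted by C_L and rejected by at least
# one constraint of B. All constraints here are binary !=.
from itertools import product

X = [1, 2, 3, 4]
D = {x: (1, 2, 3) for x in X}
C_L = {(1, 2)}                        # scopes of already-learned != constraints
B = {(1, 3), (2, 4), (3, 4)}          # remaining candidate != constraints

def rejected_by(e, scopes):
    """Scopes whose != constraint rejects the complete assignment e."""
    return {(i, j) for (i, j) in scopes if e[i - 1] == e[j - 1]}

def generate_query(C_L, B):
    for e in product(*(D[x] for x in X)):
        if not rejected_by(e, C_L) and rejected_by(e, B):
            return e
    return None                       # nil: the acquisition has converged

print(generate_query(C_L, B))         # -> (1, 2, 1, 1)
```

The returned assignment satisfies the learned constraint $\neq_{12}$ while violating the candidate $\neq_{13}$, so whatever the user answers, the version space is pruned.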
{\em FindScope} (Algorithm \ref{alg:findscope}) takes as parameters an example $e$ that violates at least one constraint from the bias, two sets of variables $R$ and $Y$,
and a Boolean variable $ask\_query$. In the first call to {\em FindScope} from QuAcq, $e$ is the example generated in line 4 of QuAcq that is classified as negative, $R$ is the empty set and $Y$ the set of all the variables of the problem. $ask\_query$ is set to false as we already know that $e$ is a negative example.
\begin{algorithm}
\caption{{\em FindScope}}\label{alg:findscope}
\begin{algorithmic}[1]
\Require $e$, $R$, $Y$, $ask\_query$ ($e$: the example, $R$,$Y$: sets of variables, $ask\_query$: boolean)
\Ensure $Scope$ : a set of variables, the scope of a constraint in $C_T$
\Function{FindScope}{$e$, $R$, $Y$, $ask\_query$}
\If{ $ask\_query$ }
\If{ $ASK(e_R)$ = ``yes'' } $B \leftarrow B \setminus \kappa_B(e_R) $;
\Else \quad \Return $\emptyset$;
\EndIf
\EndIf
\If{ $|Y| = 1$ } \Return $Y$;
\EndIf
\State split $Y$ into $<Y_1, Y_2>$ such that $|Y_1| = \lceil |Y|/2 \rceil $;
\State $S_1 \leftarrow FindScope(e,R \cup Y_1, Y_2, true)$;
\State $S_2 \leftarrow FindScope(e,R \cup S_1, Y_1, (S_1 \neq \emptyset))$;
\State \Return $S_1 \cup S_2$;
\EndFunction
\end{algorithmic}
\end{algorithm}
An invariant of {\em FindScope} is that the example $e$ violates at least one constraint whose scope is a subset of $R \cup Y$.
If {\em FindScope} is called with $ask\_query$ = true it asks the user if $e_R$ is positive or not (line 3). If the answer is yes, it removes all the constraints from the bias that reject $e_R$.
Otherwise, it returns the empty set (line 3). {\em FindScope} reaches line 5 only when $e_R$ does not violate any constraint. Hence, since, as mentioned above, $e$ violates at least one constraint whose scope is a subset of $R \cup Y$, if $Y$ is a singleton then the variable it contains surely belongs to the scope of a violated constraint.
This is because $e_R$ violates no constraint (otherwise line 5 would not have been reached), whereas $e_{R \cup Y}$ is a negative example.
In this case the function returns $Y$.
If none of the return conditions is satisfied, the set $Y$ is split into two balanced parts (line 6) and the algorithm recursively searches for the scope of a violated constraint in the sets of variables built from these parts, in a logarithmic number of steps (lines 7-9).
Function {\em FindScope} posts partial queries to the user until it finds the scope of a violated constraint. A potential deficiency is the following: if a query posted to the user violates, say, 3 constraints from $B$ and the answer is negative, then after removing some variables from $Y$, if the partial query still violates the same 3 constraints from $B$, {\em FindScope} will ask the user to classify the partial query again. However, there is no need for this, as the partial query is certain to be classified as negative again. In Section~\ref{findScope2} we propose a fix to this problem.
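The dichotomic decomposition performed in lines 6-9 can be sketched in Python as follows. The simulated user, restricted to binary $\neq$ constraints given as index pairs, and the omission of the bias bookkeeping of line 3, are simplifying assumptions of this sketch.

```python
def find_scope(e, R, Y, ask_query, target):
    """Return the scope of one target constraint violated by e on R u Y.
    target: set of index pairs (i, j) encoding x_i != x_j; the simulated
    user answers "no" iff some pair inside the queried variables is
    violated.  Bias bookkeeping (line 3) is omitted."""
    def negative(variables):  # ASK(e_variables) == "no"
        return any(i in variables and j in variables and e[i] == e[j]
                   for (i, j) in target)
    if ask_query:
        if negative(R):       # "no": a violated scope lies entirely in R
            return set()
        # "yes": the full algorithm would prune kappa_B(e_R) here
    if len(Y) == 1:
        return set(Y)
    ys = sorted(Y)
    half = (len(ys) + 1) // 2             # |Y1| = ceil(|Y| / 2)
    y1, y2 = set(ys[:half]), set(ys[half:])
    s1 = find_scope(e, R | y1, y2, True, target)
    s2 = find_scope(e, R | s1, y1, bool(s1), target)
    return s1 | s2
```

On the running example (0-indexed), the sketch returns $\{0,1\}$, i.e. the scope corresponding to $\{x_1,x_2\}$.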
After the system has located the scope of a violated constraint, it calls function {\em FindC} (Algorithm \ref{alg:findc}) to find the violated constraint. {\em FindC} asks partial queries to the user in the scope returned by {\em FindScope} to locate the violated constraint.
Function {\em FindC} takes as parameters $e$ and $Y$, with $e$ being the negative example in which {\em FindScope} found that there is a violated constraint from the target network $C_T$, and $Y$ being the scope of that constraint.
\begin{algorithm}
\caption{{\em FindC}}\label{alg:findc}
\begin{algorithmic}[1]
\Require $e$, $Y$ ($e$: the example, $Y$: The scope to search)
\Ensure $c$ : a constraint in $C_T$
\Function{FindC}{$e$, $Y$}
\State $B \leftarrow B \setminus \{c_Y \mid C_L \models c_Y \}$;
\State $ \Delta \leftarrow \kappa_B(e_Y)$;
\If{ $\Delta = \emptyset$ } \Return $\emptyset$;
\EndIf
\While {true}
\State Generate $e^{\prime}$ in $D^Y$ accepted by $C_L$ and rejected by $\Delta$, with $\kappa_\Delta(e_Y) \neq |\Delta|$;
\If{ $e^{\prime} = nil$ }
\If{ $\Delta = \emptyset$ } \Return $\emptyset$;
\Else \quad \Return random $c$ in $\Delta$ ;
\EndIf
\EndIf
\If{ $ASK(e^{\prime})$ = ``yes'' }
\State $B \leftarrow B \setminus \kappa_B(e^{\prime}) $;
\State $\Delta \leftarrow \Delta \setminus \kappa_\Delta(e^{\prime}) $;
\Else \quad $\Delta \leftarrow \kappa_\Delta(e^{\prime}) $;
\EndIf
\EndWhile
\EndFunction
\end{algorithmic}
\end{algorithm}
In more detail, {\em FindC} first removes from the bias the constraints with scope $Y$ that are implied by the learned network $C_L$ (line 2). Next, the set $\Delta$ is initialized to the candidate constraints, i.e. the constraints from $B$ with scope $Y$ that are violated by $e$ (line 3). If there are no candidate constraints then the empty set is returned (line 4), resulting in a collapse for QuAcq. In line 5 it enters its main loop, in which it posts partial queries to the user. In line 6, a partial example $e^{\prime}$ is generated that is accepted by $C_L$ and rejected by $\Delta$, but not by all of its constraints. This is done to reduce the number of candidate constraints whatever the answer of the user may be. If no such example exists (line 7), this means
that any of the remaining constraints could be in $C_T$, so one constraint is randomly returned, except if $\Delta$ is empty (lines 8-9). If an example was found then it is posted as a query to the user (line 10). If the answer of the user is ``yes'' then all constraints rejecting it are removed from $B$ and $\Delta$ (lines 11-12), otherwise all constraints accepting it are removed from $\Delta$ (line 13).
Another version of the {\em FindC} function, which fixes a problem of Algorithm \ref{alg:findc}, is described in~\cite{bessiere2016new}. Namely, in case the target constraint network contains two constraints with scopes $S$ and $S^{\prime}$ such that $S \subset S^{\prime}$, Algorithm \ref{alg:findc} is not correct. This is because, if an example is classified as negative, when {\em FindC} removes from $\Delta$ all the constraints accepting it at line 13, it does so under the assumption that the example is rejected by a constraint whose scope is the one currently searched.
However, the example may have been rejected because of a constraint on a subscope and not by a constraint on the current scope being searched. In this case, the constraints accepting the example should not be removed from $\Delta$, but Algorithm \ref{alg:findc} will remove them. The {\em FindC} function introduced in~\cite{bessiere2016new} fixes this problem.
These versions of {\em FindC} can deal only with normalized constraint networks. In order to handle non-normalized constraints a different version of {\em FindC} should be used. However, developing such a method is not within the scope of this paper. Thus, we assume that the target constraint network is normalized, following the literature.
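The candidate-elimination loop of {\em FindC} can be sketched on a single binary scope as follows. The toy constraint language ($\neq$, $<$, $>$), the simulated user, and the omission of $C_L$ and of the global bias updates are assumptions of this sketch; the real function also returns a random constraint where the sketch returns a deterministic one.

```python
from itertools import product

def find_c(e_y, domain, candidates, user_says_yes):
    """candidates: dict mapping names to binary predicates on the scope Y;
    user_says_yes simulates ASK on partial examples over Y."""
    # Delta: candidate constraints violated by the negative example e_Y
    delta = {n: p for n, p in candidates.items() if not p(*e_y)}
    if not delta:
        return None                              # collapse
    while True:
        # an example violating some, but not all, constraints in Delta
        e2 = next((v for v in product(domain, repeat=2)
                   if 0 < sum(not p(*v) for p in delta.values()) < len(delta)),
                  None)
        if e2 is None:
            # remaining candidates are indistinguishable: return any one
            return sorted(delta)[0] if delta else None
        rejecting = {n for n, p in delta.items() if not p(*e2)}
        if user_says_yes(e2):
            for n in rejecting:    # "yes": constraints rejecting e2 are wrong
                del delta[n]
        else:
            delta = {n: delta[n] for n in rejecting}   # "no": keep only these
```

For instance, with the negative example $(2,2)$ and a target constraint $x_1 \neq x_2$, two distinguishing queries suffice to eliminate $<$ and $>$ and isolate $\neq$.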
\subsection{MultiAcq}
Given that several constraints from the target network may be violated by a generated membership query, it is very likely that the system can learn more information from a negative generated example (i.e. acquire more constraints). This is what MultiAcq tries to do, learning a maximum number of constraints from each negative example \cite{arcangioli2016multiple}. This is done using function {\em FindAllScopes}. After a negative answer to a query $e_Y$, it posts a series of partial queries, each built by removing one variable from $Y$. If the example is positive in all of these calls, then $Y$ is the scope of a violated constraint. Otherwise, it focuses on all the negative partial queries to find minimal scopes.
In more detail, MultiAcq (see Algorithm \ref{alg:multiacq}) takes as input a bias $B$ and returns a constraint network $C_L$ equivalent to the target network $C_T$ like QuAcq does. It iteratively generates an example like QuAcq and then the function {\em FindAllScopes} is called to learn a maximum number of constraints violated by the specific example (line 8). Before the call of {\em FindAllScopes}, it initializes the set $MSes$, in which it will store the minimal scopes found, to the empty set.
\begin{algorithm}
\caption{MultiAcq: Multiple Acquisition}\label{alg:multiacq}
\begin{footnotesize}
\begin{algorithmic}[1]
\Require $B$, $X$, $D$ ($B$: the bias, $X$: the set of variables, $D$: the set of domains)
\Ensure $C_L$ : a constraint network
\State $C_L \leftarrow \emptyset$;
\While {true}
\If{ $sol(C_L) = \emptyset$ } \Return ``collapse'';
\EndIf
\State Generate $e$ in $D^X$ accepted by $C_L$ and rejected by $B$;
\If{ $e$ = nil } \Return ``$C_L$ converged'';
\Else
\State $MSes \leftarrow \emptyset$;
\State $FindAllScopes(e, X, MSes)$
\Foreach {$Y \in MSes$}
\State $c_Y \leftarrow FindC(e, Y)$;
\If{ $c_Y$ = nil } \Return ``collapse'';
\Else \quad $C_L \leftarrow C_L \cup \{c_Y\} $;
\EndIf
\EndFor
\EndIf
\EndWhile
\end{algorithmic}
\end{footnotesize}
\end{algorithm}
The recursive function {\em FindAllScopes} (Algorithm \ref{alg:findallscopes}) takes as input a complete example $e$, a subset of variables $Y$ (equal to $X$ for the first call) and the set of minimal scopes already found (the empty set in the first call). If $Y$ is not a minimal scope already found (line 1) and does not contain a minimal scope already learned (line 3) and $e_Y$ contains at least one violated constraint from the bias (line 2), the system asks the user to classify the (partial) example $e_Y$ (line 4). If the answer is ``yes'' the constraints from $B$ violating the example are removed (line 5) and false is returned (line 6), as $Y$ does not contain any minimal scope. If the answer is ``no'', it means that there still exist violated constraints from $C_T$ in $Y$. So then {\em FindAllScopes} is called on each subset of $Y$ built by removing one variable from $Y$ (lines 8-9). If in all of these calls the example is positive then $Y$ is the scope of a violated constraint and it is added to the set $MSes$ (line 10). Then {\em FindAllScopes} returns true, as it has found a minimal scope.
Function {\em FindC} is then called by MultiAcq to find the constraint(s), like in QuAcq.
\begin{algorithm}
\caption{{\em FindAllScopes}}\label{alg:findallscopes}
\begin{footnotesize}
\begin{algorithmic}[1]
\Require $e$, $Y$, $MSes$ ($e$: an example, $Y$: a set of variables, $MSes$: the set of minimal scopes)
\Ensure a boolean : if $e_Y$ contains a minimal scope
\If{$Y \in MSes$} \Return true;
\EndIf
\If{ $\kappa_B(e_Y) = \emptyset$ } \Return false;
\EndIf
\If{ $\nexists M \in MSes$ $|$ $M \subset Y$}
\If{ $ASK(e_Y)$ = ``yes'' }
\State $B \leftarrow B \setminus \kappa_B(e_Y) $;
\State \Return false;
\EndIf
\EndIf
\State $flag \leftarrow$ false;
\Foreach {$x_i \in Y$}
\State $flag \leftarrow FindAllScopes(e, Y \setminus \{x_i\}, MSes) \lor flag$
\EndFor
\If{ $\neg flag$ } $MSes \leftarrow MSes \cup \{Y\}$;
\EndIf
\State \Return true;
\end{algorithmic}
\end{footnotesize}
\end{algorithm}
So, instead of focusing on the scope of only one constraint, MultiAcq learns all the constraints of the target network violated by a generated example. However, a disadvantage of MultiAcq is that it needs a linear number of queries in the size of the example to learn a constraint.
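A Python sketch of {\em FindAllScopes} with a simulated user over binary $\neq$ constraints looks as follows; the perfect oracle standing in for ASK and for the bias test of line 2, and the omitted bias pruning on ``yes'' answers, are simplifications of this sketch.

```python
def find_all_scopes(e, Y, mses, bias, target, counter):
    """Collect in mses the minimal scopes of target constraints violated
    by e_Y.  Y: frozenset of variable indices; bias/target: sets of index
    pairs (i, j) meaning x_i != x_j; counter: one-element list counting
    the queries posted to the simulated user."""
    def viol(pairs):
        return {(i, j) for (i, j) in pairs
                if i in Y and j in Y and e[i] == e[j]}
    if Y in mses:
        return True
    if not viol(bias):                  # kappa_B(e_Y) is empty
        return False
    if not any(m < Y for m in mses):    # no known minimal scope inside Y
        counter[0] += 1                 # ASK(e_Y)
        if not viol(target):            # the user answers "yes"
            return False
    flag = False
    for x in sorted(Y):                 # remove one variable at a time
        flag = find_all_scopes(e, Y - {x}, mses, bias, target, counter) or flag
    if not flag:
        mses.append(Y)                  # every sub-example was positive
    return True
```

Note how each level of the recursion removes one variable at a time, which is the source of the linear query cost per scope mentioned above.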
Now, we illustrate the behaviour of QuAcq and MultiAcq using our running example from Section~\ref{sec:inter}.
\begin{example}
\label{ex:quacq}
Recall that the vocabulary $(X,D)$ given to the system is $X = \{ x_1, ..., x_8 \}$ and $D = \{D(x_1), . . . , D(x_8)\}$ with $D(x_i) = \{ 1, ... , 8 \}$, the target network $C_T$ is the set $\{ \neq_{12}, \neq_{13}, \neq_{34} \}$ and $B = \{ \neq_{ij}$ $\mid$ $1 \leq i < 8 \land i < j \leq 8\}$. Assume that the example generated at line 4 of QuAcq or MultiAcq is $e = \{1,1,1,2,3,4,5,6\}$.
QuAcq will directly post it as a query to the user. The answer will be ``no'' as it violates the constraints $\{ \neq_{12}, \neq_{13} \}$ from the target network. Next, {\em FindScope} is called to find the scope of a violated constraint. Table~\ref{ta:ex-quacq} shows the recursive calls of {\em FindScope}. A dash (-) in columns $e_R$ and {\em ASK} means that no query is posted to the user, due to the condition at line 2. Also recall that queries are always only on the variables in $R$. As we can see, after 4 queries, {\em FindScope} will find the scope $\{x_1, x_2\}$. Then {\em FindC} will immediately return the constraint $\{\neq_{12}\}$ as it is the only constraint in $B$ with this scope. After the constraint is added to $C_L$, QuAcq will go back to line 3 and as no collapse occurs, it will generate a new example.
\begin{table}
\centering
\caption{Recursive calls of {\em FindScope} in Example~\ref{ex:quacq}}
\label{ta:ex-quacq}
{
\resizebox{\textwidth}{!}{%
\begin{tabular}{ |l|l|l|l|c|c| }
\hline
{\em call} & $R$ & $Y$ & $e_R$ & {\em ASK} & {\em return} \\
\hline
0 & $\emptyset$ & $x_1,x_2,x_3,x_4,x_5,x_6,x_7,x_8$ & - & - & $\{x_1,x_2\}$ \\
1 & $x_1, x_2, x_3, x_4$ & $x_5,x_6,x_7,x_8$ & $\{1,1,1,2,-,-,-,-\}$ & ``no'' & $\emptyset$ \\
2 & $\emptyset$ & $x_1, x_2, x_3, x_4$ & - & - & $\{x_1,x_2\}$ \\
2.1 & $x_1, x_2$ & $x_3, x_4$ & $\{1,1,-,-,-,-,-,-\}$ & ``no'' & $\emptyset$ \\
2.2 & $\emptyset$ & $x_1,x_2$ & - & - & $\{x_1,x_2\}$ \\
2.2.1 & $x_1$ & $x_2$ & $\{1,-,-,-,-,-,-,-\}$ & ``yes'' & $\{x_2\}$ \\
2.2.2 & $x_2$ & $x_1$ & $\{-,1,-,-,-,-,-,-\}$ & ``yes'' & $\{x_1\}$ \\
\hline
\end{tabular}
}
}
\end{table}
After generating the same example $e$, MultiAcq will directly give it to the function {\em FindAllScopes}. Table~\ref{ta:ex-multiacq} presents the trace of its recursive calls. After 8 queries it will find the scopes of both constraints from $C_T$. It will need one more query to remove the constraint $\{\neq_{23}\}$ from the bias and as no other constraint from $B$ is violated it will return. {\em FindC} will return immediately each of the constraints $\{\neq_{12}\}$ and $\{\neq_{13}\}$ as they are the only ones in $B$ with these scopes.
\begin{table}
\centering
\caption{Recursive calls of {\em FindAllScopes} in Example~\ref{ex:quacq}. }
\label{ta:ex-multiacq}
{
\resizebox{\textwidth}{!}{%
\begin{tabular}{ |l|l|l|c|c|c| }
\hline
{\em call} & $Y$ & $e_Y$ & {\em ASK} & $MSes$ & {\em return} \\
\hline
0 & $x_1,x_2,x_3,x_4,x_5,x_6,x_7,x_8$ & $\{1,1,1,2,3,4,5,6\}$ & ``no'' & $\emptyset$ & true\\
1 & $x_1,x_2,x_3,x_4,x_5,x_6,x_7$ & $\{1,1,1,2,3,4,5,-\}$ & ``no'' & $\emptyset$ & true\\
1.1 & $x_1,x_2,x_3,x_4,x_5,x_6$ & $\{1,1,1,2,3,4,-,-\}$ & ``no'' & $\emptyset$ & true\\
1.1.1 & $x_1,x_2,x_3,x_4,x_5$ & $\{1,1,1,2,3,-,-,-\}$ & ``no'' & $\emptyset$ & true\\
1.1.1.1 & $x_1,x_2,x_3,x_4$ & $\{1,1,1,2,-,-,-,-\}$ & ``no'' & $\emptyset$ & true\\
1.1.1.1.1 & $x_1,x_2,x_3$ & $\{1,1,1,-,-,-,-,-\}$ & ``no'' & $\emptyset$ & true\\
1.1.1.1.1.1 & $x_1,x_2$ & $\{1,1,-,-,-,-,-,-\}$ & ``no'' & $\{\{x_1,x_2\}\}$ & true\\
1.1.1.1.1.1.1 & $x_1$ & $\{1,-,-,-,-,-,-,-\}$ & - & $\{\{x_1,x_2\}\}$ & false\\
1.1.1.1.1.1.2 & $x_2$ & $\{-,1,-,-,-,-,-,-\}$ & - & $\{\{x_1,x_2\}\}$ & false\\
1.1.1.1.1.2 & $x_1,x_3$ & $\{1,-,1,-,-,-,-,-\}$ & ``no'' & $\{\{x_1,x_2\},\{x_1,x_3\}\}$ & true\\
1.1.1.1.1.2.1 & $x_1$ & $\{1,-,-,-,-,-,-,-\}$ & - & $\{\{x_1,x_2\},\{x_1,x_3\}\}$ & false\\
1.1.1.1.1.2.2 & $x_3$ & $\{-,-,1,-,-,-,-,-\}$ & - & $\{\{x_1,x_2\},\{x_1,x_3\}\}$ & false\\
1.1.1.1.1.3 & $x_2,x_3$ & $\{-,1,1,-,-,-,-,-\}$ & ``yes'' & $\{\{x_1,x_2\},\{x_1,x_3\}\}$ & false\\
\hline
\end{tabular}
}
}
\end{table}
\end{example}
In addition to the above, which are described in the relevant papers, QuAcq and MultiAcq take some extra steps during query generation\footnote{Personal communication with the authors of the algorithms.}. We detail these in Section~\ref{sec:query}.
\section{Efficient Multiple Constraint Acquisition}
\label{sec:mquacq}
As explained, the main difference between QuAcq and MultiAcq is that the latter tries to find multiple violated constraints once a query is classified as negative. However, MultiAcq needs a linear number of queries in the size of the example to locate the scope of each violated constraint. In contrast, QuAcq requires a logarithmic number of queries but finds only one violated constraint. Our intuition was to merge the idea of learning a maximum number of constraints from each generated negative example with the QuAcq reasoning of learning each constraint in a logarithmic number of steps.
Our resulting new algorithm, called Multi-QuAcq (MQuAcq for short), needs a logarithmic number of queries to discover each violated constraint, achieving the benefits of both QuAcq and MultiAcq.
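As a rough, illustrative quantification (ignoring constant factors and the queries spent outside scope location), compare the two bounds for a single scope of arity 2 among 100 variables:

```python
import math

n = 100      # number of variables in the generated example
arity = 2    # size of the scope to be located

# FindScope (QuAcq, MQuAcq): O(|S| log|Y|) queries per scope
per_scope_log = arity * math.ceil(math.log2(n))

# FindAllScopes (MultiAcq): up to a linear number of queries in |Y|
per_scope_linear = n

print(per_scope_log, per_scope_linear)   # prints: 14 100
```

The exact constants are illustrative; the asymptotic bounds come from the respective algorithms.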
\subsection{Multi-QuAcq description}
MQuAcq (Algorithm \ref{alg:all}) is an active learning algorithm which is based on QuAcq and extends it by incorporating the basic idea of MultiAcq. The main difference between QuAcq and MQuAcq is the fact that QuAcq finds one explanation (constraint) of why the user classified an example as negative, whereas MQuAcq tries to learn all the violated constraints. This is done by calling function {\em FindScope} (Algorithm~\ref{alg:findscope}) iteratively, while reducing the search space by removing variables from the scopes already found.
The main difference between MQuAcq and MultiAcq is that the former uses the QuAcq search method to find each scope through function {\em FindScope}, and in this way avoids some redundant searches (which can be very time-consuming), as well as some of the queries that MultiAcq makes with function {\em FindAllScopes}. Besides this, there are several other differences, particularly in how the algorithms operate to locate irredundant queries after learning a constraint from a generated negative example.
As a result, our proposed algorithm has a better complexity bound in terms of the number of queries.
MQuAcq finds all the violated constraints via the function {\em FindAllCons}. The main idea is that after finding a constraint $c$, using QuAcq's reasoning, we exploit the fact that for any other violated constraint $c^{\prime} \in C_T$, we have $scope(c) \setminus scope(c^{\prime}) \neq \emptyset$. This is because otherwise $scope(c)$ would not be a minimal scope. So, {\em FindAllCons} recursively acquires all the violated constraints from $e_{Y \setminus \{ x \}}$, for each $x \in scope(c)$. Hence, the reasoning of QuAcq is recursively used in these partial examples, in order to find multiple constraints, with the benefit (inherited from QuAcq) of finding the scope of each constraint with a logarithmic complexity.
\begin{algorithm}
\caption{The MQuAcq Algorithm}\label{alg:all}
\begin{algorithmic}[1]
\Require $B$, $X$, $D$ ($B$: the bias, $X$: the set of variables, $D$: the set of domains)
\Ensure $C_L$ : a constraint network
\State $C_L \leftarrow \emptyset$;
\State $collapse \leftarrow$ false;
\While {true}
\If{ $sol(C_L) = \emptyset$ } \Return ``collapse'';
\EndIf
\State Generate $e$ in $D^X$ accepted by $C_L$ and rejected by $B$;
\If{ $e$ = nil } \Return ``$C_L$ converged'';
\EndIf
\State $FindAllCons(e, X, \emptyset)$;
\If{ $collapse = $ true } \Return ``collapse'';
\EndIf
\EndWhile
\end{algorithmic}
\end{algorithm}
MQuAcq starts by initializing the $C_L$ network to the empty set (line 1) and the global variable $collapse$ to false (line 2). This variable is used within function {\em FindAllCons} as explained below. Next, the algorithm enters its main loop (line 3).
If $C_L$ is unsatisfiable, the algorithm collapses (line 4). Otherwise, a complete assignment $e$ is generated (line 5), satisfying $C_L$ and violating at least one constraint in $B$. This step is explained in detail in Section~\ref{sec:query}.
If such an example does not exist then we have converged (line 6). Otherwise, the function {\em FindAllCons} is called to find all the constraints that are violated by the example $e$ (line 7). If {\em FindAllCons} has detected a collapse then the algorithm terminates (line 8).
The recursive function {\em FindAllCons} is presented in Algorithm \ref{alg:allcons}. It takes as parameters an example $e$, a set of variables $Y$ and a set $Scopes$, which contains the scopes of the violated constraints on $e_Y$ already learned. It returns the set $\operatorname{NScopes}$ consisting of the scopes of the constraints acquired. {\em FindAllCons} adds to $C_L$ all the constraints that are violated by $e$ in $Y$.
The sets $Scopes$ and $\operatorname{NScopes}$ are used to store all the scopes of the constraints that have been found in $e_Y$, to avoid searching and asking partial queries with the scope of a violated constraint that has already been learned. Specifically, the set $Scopes$ stores the scopes of the constraints learned before the current call of the function, whereas the set $\operatorname{NScopes}$ stores the scopes of the constraints learned from the current call (or any of its sub-calls).
For example, if we have acquired 3 constraints from $e_Y$, 2 from a previous call and 1 from the current call of {\em FindAllCons}, then the set $Scopes$ will contain the scopes of the 2 constraints previously learned and the set $\operatorname{NScopes}$ will contain the scope of the constraint learned from the current call.
Our proposed approach is to search for partial queries in the given example that do not contain the scope of any constraint already found, so that the answer is not predictable. To achieve this, from each scope $S$ already found we build $|S|$ partial examples, one for each variable $x_i \in S$, with each such example involving the variables $Y^{\prime} = Y \setminus \{x_i\}$.
When a partial example that violates no constraint already learned but at least one from $B$ is found, {\em FindAllCons} uses {\em FindScope}, as in QuAcq, to learn a constraint from $C_T$.
\begin{algorithm}
\caption{FindAllCons}\label{alg:allcons}
\begin{algorithmic}[1]
\Require $e, Y, Scopes$ ($e$: the example, $Y$: set of variables, $Scopes$: a set of scopes already learned)
\Ensure $\operatorname{NScopes}$ : the set of scopes learned
\Function{FindAllCons}{$e$, $Y$, $Scopes$}
\If{ $collapse = $ true } \Return $\emptyset$;
\EndIf
\If { $\nexists scope(c) \neq S$ $|$ $c \in \kappa_{B \setminus C_L}(e_Y) \land S \in Scopes $ } \Return $\emptyset$;
\EndIf
\State $\operatorname{NScopes} \leftarrow \emptyset$;
\If{ $Scopes \neq \emptyset$ }
\State pick an $S \in Scopes$;
\Foreach{ $x_i \in S$ }
\State $\operatorname{NScopes} \leftarrow \operatorname{NScopes} \cup FindAllCons(e, Y \setminus \{x_i\}, \operatorname{NScopes} \cup (Scopes \setminus \{S\} ) )$;
\EndFor
\Else
\If{ ASK($e_Y$) = ``yes'' } $B \leftarrow B \setminus \kappa_B(e_Y) $;
\Else
\State $scope \leftarrow FindScope( e, \emptyset, Y, false ) $;
\State $c \leftarrow FindC(e, scope)$;
\If{ $c$ = nil }
\State $collapse \leftarrow$ true;
\State \Return $\emptyset$;
\Else \quad $C_L \leftarrow C_L \cup \{c\} $;
\EndIf
\State $\operatorname{NScopes} \leftarrow \operatorname{NScopes} \cup \{scope\}$;
\State $\operatorname{NScopes} \leftarrow \operatorname{NScopes} \cup FindAllCons(e, Y, \operatorname{NScopes})$;
\EndIf
\EndIf
\State \Return $\operatorname{NScopes}$;
\EndFunction
\end{algorithmic}
\end{algorithm}
{\em FindAllCons} starts by checking whether a collapse has occurred. If this is the case the empty set is returned (line 2). Then it checks if there exists any violated constraint in $B$ to learn, with a scope different from those of the constraints already acquired. If no such learnable constraint exists, it is implied that ASK($e_Y$) = ``yes''. Thus, again we return the empty set (line 3), because we assume that the bias is expressive enough to learn a $C_L$ equivalent to the target network $C_T$. This check is important because, as the recursive calls to {\em FindAllCons} remove variables from $Y$ (as explained below), we may end up in a case where $e_Y$ is surely positive and no search for a violated constraint is needed. This is because if ASK($e_Y$) = ``yes'' then for every $Y^{\prime} \subseteq Y$ we surely have ASK($e_{Y^{\prime}}$) = ``yes''. With this check the algorithm avoids a lot of redundant searches, reducing the number of nodes in the tree of recursive calls, and also avoids asking redundant queries.
In the case that neither of the two conditions is satisfied, {\em FindAllCons} will continue. At line 4, the set $\operatorname{NScopes}$ is initialized to the empty set.
After that, {\em FindAllCons} checks if the set $Scopes$ is not empty (line 5). If this is the case it means that we have not branched on all the scopes already found and we still have in $e_Y$ the scope of at least one violated constraint.
So we call {\em FindAllCons} recursively on each subset of $Y$ created by removing one of the variables of a scope $S \in Scopes$, removing the scope $S$ on which we branched from the set $Scopes$ given to the recursive calls (lines 6-8). We remove this scope because one of its variables has been removed, so it is no longer fully contained in the set of variables given to the recursive calls. We also pass to the function the set $\operatorname{NScopes} \cup (Scopes \setminus \{S\} )$ because, although the set $\operatorname{NScopes}$ is empty at the first call, it may contain scopes found in subsequent recursive calls.
In the case that the set $Scopes$ is empty (line 9), it means that we have finished branching and we have a partial example $e_Y$ that does not contain the scope of any violated constraint already learned. Hence, $e_Y$ violates at least one constraint of $B$ (otherwise the algorithm would have returned at line 3) and no violated constraint already found exists in $Y$. Therefore, the system again asks the user to classify the partial example as positive or negative
(line 10). If the answer is positive then the constraints in $B$ that reject $e_Y$ are removed. Otherwise, function {\em FindScope} is called to find the scope of one of the violated constraints (line 12). {\em FindC} then selects a constraint from $B$ with the discovered scope that is violated by $e_Y$ (line 13). If no constraint is found then the algorithm collapses (lines 14-16). Otherwise, the constraint returned by {\em FindC} is added to $C_L$ (line 17) and its scope is added to the set of found scopes (line 18). Then, we call {\em FindAllCons} again to continue searching in the partial examples created by removing the variables of the scope that was just found.
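The branching scheme of {\em FindAllCons} can be condensed into the following Python sketch, where a perfect oracle over a set of binary $\neq$ constraints (given as index pairs) stands in for the user, for {\em FindScope} and {\em FindC}, and for the bias-based test of line 3; all of these are simplifying assumptions.

```python
def find_all_cons(e, Y, scopes, target, found):
    """Acquire all target constraints violated by e_Y.  target: set of
    index pairs (i, j) meaning x_i != x_j; found records the scopes in
    the order they are learned; scopes plays the role of Scopes."""
    def violated(variables):
        return {(i, j) for (i, j) in target
                if i in variables and j in variables and e[i] == e[j]}
    if not (violated(Y) - scopes):      # oracle stand-in for the line-3 check
        return set()
    new = set()                          # NScopes
    if scopes:                           # branch on a scope already found
        S = next(iter(scopes))
        for x in sorted(S):
            new |= find_all_cons(e, Y - {x}, new | (scopes - {S}),
                                 target, found)
    else:                                # here ASK(e_Y) = "no" by construction
        scope = min(violated(Y))         # stand-in for FindScope + FindC
        found.append(scope)
        new.add(scope)
        new |= find_all_cons(e, Y, set(new), target, found)
    return new
```

On the example below ($e = \{1,1,1,1,2,3,4,5\}$, 0-indexed), the sketch learns the three scopes in the order $\{x_1,x_2\}$, $\{x_3,x_4\}$, $\{x_1,x_3\}$, matching Table~\ref{ta:ex-mquacq}.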
We now illustrate the behavior of {\em FindAllCons} in a simple problem using our running example.
\begin{example}
\label{ex:mquacq}
Recall that the problem consists of $8$ variables and suppose that the complete example $e = \{1,1,1,1,2,3,4,5\}$ is generated in line 5 of MQuAcq. The constraints from $C_T$ that are violated by $e$ are $\neq_{12}$, $\neq_{13}$ and $\neq_{34}$ (all the constraints from $C_T$ in this case). Table~\ref{ta:ex-mquacq} presents the trace of the recursive calls of {\em FindAllCons}.
\begin{table}
\centering
\caption{Recursive calls of {\em FindAllCons} in Example~\ref{ex:mquacq}}
\label{ta:ex-mquacq}
{
\resizebox{\textwidth}{!}{%
\begin{tabular}{ |l|l|l|c|c|c| }
\hline
{\em call} & $Y$ & $e_Y$ & {\em ASK} & $Scopes$ & {\em return} \\
\hline
0 & $x_1,x_2,x_3,x_4,x_5,x_6,x_7,x_8$ & $\{1,1,1,1,2,3,4,5\}$ & ``no'' & $\emptyset$ & $\{\{x_1,x_2\}, \{x_3,x_4\}, \{x_1,x_3\}\}$\\
1 & $x_1,x_2,x_3,x_4,x_5,x_6,x_7,x_8$ & $\{1,1,1,1,2,3,4,5\}$ & - & $\{\{x_1,x_2\}\}$ & $\{\{x_3,x_4\}, \{x_1,x_3\}\}$\\
1.1 & $x_2,x_3,x_4,x_5,x_6,x_7,x_8$ & $\{-,1,1,1,2,3,4,5\}$ & ``no'' & $\emptyset$ & $\{\{x_3,x_4\}\}$\\
1.1.1 & $x_2,x_3,x_4,x_5,x_6,x_7,x_8$ & $\{-,1,1,1,2,3,4,5\}$ & - & $\{\{x_3,x_4\}\}$ & $\emptyset$\\
1.1.1.1 & $x_2,x_4,x_5,x_6,x_7,x_8$ & $\{-,1,-,1,2,3,4,5\}$ & ``yes'' & $\emptyset$ & $\emptyset$\\
1.1.1.2 & $x_2,x_3,x_5,x_6,x_7,x_8$ & $\{-,1,1,-,2,3,4,5\}$ & ``yes'' & $\emptyset$ & $\emptyset$\\
1.2 & $x_1,x_3,x_4,x_5,x_6,x_7,x_8$ & $\{1,-,1,1,2,3,4,5\}$ & - & $\{\{x_3,x_4\}\}$ & $\{\{x_1,x_3\}\}$\\
1.2.1 & $x_1,x_4,x_5,x_6,x_7,x_8$ & $\{1,-,-,1,2,3,4,5\}$ & ``yes'' & $\emptyset$ & $\emptyset$\\
1.2.2 & $x_1,x_3,x_5,x_6,x_7,x_8$ & $\{1,-,1,-,2,3,4,5\}$ & ``no'' & $\emptyset$ & $\{\{x_1,x_3\}\}$\\
\hline
\end{tabular}
}
}
\end{table}
In the first call (call 0) to {\em FindAllCons}, $e$ will be posted as a query to the user. After the user answers ``no'', the algorithm will find the constraint $\neq_{12}$ using functions {\em FindScope} and {\em FindC}. Next, {\em FindAllCons} will be called to continue searching for the remaining constraints that are violated by $e$. In the next call (call 1) we have $Y = X$ and $Scopes = \{\{x_1,x_2\}\}$. As $Scopes \neq \emptyset$, we know that the answer to ASK($e_Y$) (with $e_Y = e_X$) will be ``no''. So, no query is posted to the user and {\em FindAllCons} will be called recursively on each subset of $Y$ built by removing one variable from a scope $S \in Scopes$ (i.e. $\{x_1,x_2\}$, as this is the only one), removing this scope from the set given to the recursive calls.
Thus, in the first recursive call (call 1.1) we have $Y^{\prime} = Y \setminus \{x_1\}$ and $Scopes = \emptyset$. This means that we have branched on all scopes found until now. Hence, the example $e_{Y^{\prime}} = \{-,1,1,1,2,3,4,5\}$ will be posted as a query to the user and the constraint $\neq_{34}$ will be learned because it is the only constraint from $C_T$ that is violated by $e_{Y^{\prime} }$. In the next call (call 1.1.1) of {\em FindAllCons} in line 14 no further constraint will be found, as no constraint from $C_T$ is violated.
So, we go back to the second call of line 5 (call 1.2). We have $Y^{\prime} = Y \setminus \{x_2\}$ and $Scopes = \{\{x_3,x_4\}\}$ (the scope of the constraint learned from call 1.1). As $Scopes \neq \emptyset$, we have another scope on which we have to branch. Hence, {\em FindAllCons} will be called recursively on each subset of $Y^{\prime}$ built by removing one variable from a scope $S \in Scopes$ (i.e. $\{x_3,x_4\}$, as this is the only one). This scope is also removed from the set of scopes given to the recursive calls.
In call 1.2.1, we have $Y^{\prime\prime} = Y^{\prime} \setminus \{x_3\}$ and $Scopes = \emptyset$. Because no constraint from $C_T$ is violated by $e_{Y^{\prime\prime}} = \{1,-,-,1,2,3,4,5\}$, the answer from the user will be ``yes'' and the empty set will be returned. In call 1.2.2, we have $Y^{\prime\prime} = Y^{\prime} \setminus \{x_4\}$ and $Scopes = \emptyset$. Thus, the example $e_{Y^{\prime\prime}} = \{1,-,1,-,2,3,4,5\}$ will be posted to the user and then the constraint $\neq_{13}$ will be learned. No further constraint will be found, as no other constraint from $C_T$ is violated by $e_Y$.
\end{example}
\subsection{Analysis}
In this section we prove the correctness (i.e. soundness and completeness) of MQuAcq. To obtain this proof we first prove some properties of functions {\em FindScope} and {\em FindC}. We also study the complexity of MQuAcq in terms of the number of queries it needs to converge to the target network.
\begin{lemma}
\label{lemma1}
If ASK($e_Y$) = ``yes'' then for any $Y^{\prime} \subseteq Y$ it holds that ASK($e_{Y^{\prime}}$) = ``yes''.
\end{lemma}
\begin{proof}
We know that for every $Y^{\prime} \subseteq Y$, the set of constraints from $C_T$ that are violated by $e_{Y^{\prime}}$ is a subset of the set of constraints rejecting $e_Y$
(i.e. $\kappa_{C_T}(e_{Y^{\prime}}) \subseteq \kappa_{C_T}(e_Y) $).
Thus, if we know that $\kappa_{C_T}(e_Y) = \emptyset$ (ASK($e_Y$) = ``yes'') then for every $Y^{\prime} \subseteq Y$ it holds that $\kappa_{C_T}(e_{Y^{\prime}}) = \emptyset$ which means that ASK($e_{Y^{\prime}}$) = ``yes''.
\end{proof}
\begin{lemma}
\label{lemma2}
If ASK($e_Y$) = ``no'' then for any $Y^{\prime} \supseteq Y$ it holds that ASK($e_{Y^{\prime}}$) = ``no''.
\end{lemma}
\begin{proof}
We know that for every $Y^{\prime} \supseteq Y$, the set of constraints from $C_T$ that are violated by $e_{Y^{\prime}}$ is a superset of the set of constraints rejecting $e_Y$
(i.e. $\kappa_{C_T}(e_{Y^{\prime}}) \supseteq \kappa_{C_T}(e_Y) $).
Thus, if we know that $\kappa_{C_T}(e_Y) \neq \emptyset$ (ASK($e_Y$) = ``no'') then for every $Y^{\prime} \supseteq Y$ it holds that $\kappa_{C_T}(e_{Y^{\prime}}) \neq \emptyset$ which means that ASK($e_{Y^{\prime}}$) = ``no''.
\end{proof}
Lemmas~\ref{lemma1} and~\ref{lemma2} have also been proved in~\cite{arcangioli2016multiple}, albeit slightly differently.
\begin{prop}
\label{prop:findscope}
Given the assumption that $C_T$ is representable by $B$, if FindScope is given an example $e_Y$ and returns a scope $S$, then there exists a violated constraint $c \in C_T$ with $scope(c) = S$. Also $S$ is a minimal scope.
\end{prop}
\begin{proof}
Recall that an invariant of {\em FindScope} is that the example $e$ violates at least one constraint whose scope is a subset of $R \cup Y$ (i.e. ASK($e_{R \cup Y}$) = ``no''). Also, it reaches line 5 only in the case that $e_R$ does not violate any constraint from $C_T$ (i.e. ASK($e_R$) = ``yes'', by Lemma~\ref{lemma1}). In addition, in {\em FindScope} variables are returned only at line 5, in the case that $Y$ is a singleton. Thus, for any $x_i \in S$ we know that ASK($e_S$) = ``no'' and ASK($e_{S \setminus \{x_i\}}$) = ``yes''. Hence, $S$ is the scope of a violated constraint from the target network. Also, as we have ASK($e_{S \setminus \{x_i\}}$) = ``yes'' for any $x_i \in S$, it holds that $S$ is a minimal scope.
\end{proof}
\begin{prop}
\label{prop:findc}
Given an example $e$, the scope $Y$ of a violated constraint of $C_T$ and a bias $B$ that can represent $C_T$, FindC will return a constraint $c \in C_T$ with $scope(c) = Y$ under the assumption that $C_T$ does not contain any other constraint with scope $Y' \subseteq Y$.
\end{prop}
\begin{proof}
$\forall c \in \Delta$ we know that $c$ is violated by $e_Y$ as $c \in \kappa_{B}(e_Y)$ (line 2). Thus, as $B$ can represent $C_T$, if $Y$ is the scope of a violated constraint $c \in C_T$, this constraint is surely included in $\Delta$. Now we will prove that this constraint is never removed from $\Delta$ but the constraints not in $C_T$ are. Constraints are removed from $\Delta$ only at lines 12 and 13.
When the user's answer to the generated query is ``yes'' then the constraint that we seek is not violated (Lemma~\ref{lemma1}), so $\Delta \leftarrow \Delta \setminus \kappa_\Delta(e^{\prime}) $ does not remove it. On the other hand, if the user's answer is ``no'' then the constraint $c \in C_T$ that we seek is surely violated as $C_T$ does not contain any other constraint with a scope $Y' \subseteq Y$. Therefore, the operation $\Delta \leftarrow \kappa_\Delta(e^{\prime})$ does not remove it.
Thus, an invariant of {\em FindC} is that $\Delta$ surely includes a constraint that made the user classify the example $e_Y$ given to the function as negative, as it is added to $\Delta$ and never removed from it. Hence, if an example accepting some constraints in $\Delta$ and rejecting others cannot be generated at line 6, all the constraints in $\Delta$ are equivalent w.r.t. $C_L$. Thus, whichever among them (if more than one) is returned, this constraint $c$ is surely included in $C_T$.
\end{proof}
\begin{thm}
\label{correctness}
Given a bias $B$ built from a language $\Gamma$, with bounded arity constraints, and a target network $C_T$ representable by $B$, MQuAcq is correct.
\end{thm}
\begin{proof}
{\em Soundness}.
MQuAcq learns constraints only via the function {\em FindAllCons}. {\em FindAllCons} learns a constraint using the function {\em FindC} after finding the scope of the constraint with the function {\em FindScope}. Given the assumption that the user's answers are correct and that the target network $C_T$ is representable by $B$, {\em FindScope} returns the scope of a violated constraint from $C_T$ (Proposition~\ref{prop:findscope}). Also, as {\em FindC} is called with the scope returned from {\em FindScope} and the example classified as negative by the user, it will return a violated constraint $c \in C_T$ (Proposition~\ref{prop:findc}). Thus, {\em FindAllCons} is sound, which means that MQuAcq is sound, as for every constraint $c$ added to $C_L$ it holds that $c \in C_T$.
{\em Completeness}.
MQuAcq learns constraints only via the function {\em FindAllCons}. Thus, if given an example $e_Y$ {\em FindAllCons} can acquire any violated constraint $c \in C_T$ then MQuAcq is complete. That is because MQuAcq iteratively generates examples that violate constraints from $B$ and gives them to {\em FindAllCons}. It stops only if no example can be generated at line 5. If this is the case, it means that the system has converged, as $C_L$ agrees with $E$ and for every other network $C \subseteq B$ that agrees with $E$, we have $sol(C) = sol(C_L)$.
Now we will prove that given an example $e_Y$ function {\em FindAllCons} can acquire any violated constraint $c \in C_T$.
Given the assumption that the target network $C_T$ is representable by $B$, if the condition at line 3 is satisfied then we know that no constraint from $C_T$ can be learned. Also, given an example $e_Y$, with $Y \subseteq X$, we know that if ASK($e_Y$) = ``yes'' then for any $Y^{\prime} \subseteq Y$ it is ASK($e_{Y^{\prime}}$) = ``yes'' (Lemma~\ref{lemma1}). Hence, in this case again no constraint from $C_T$ is violated by $e_Y$.
Thus, if a minimal scope $M$ exists in $Y$ then the condition at line 3 is not satisfied and the answer from the user at line 10 will be ``no'', as $Y \supseteq M$ (Lemma~\ref{lemma2}). Thus, if function {\em FindAllCons} is given an example $e_Y$ violating a constraint from $C_T$, it will reach lines 12-13 to learn the constraint using functions {\em FindScope} and {\em FindC}.
In addition, {\em FindAllCons} will surely search for any minimal scope $M$ in a $Y \supseteq M$. We know that if a constraint with scope $S$ is learned from an example, then {\em FindAllCons} will search for another violated constraint from $C_T$ in $Y \setminus \{x_i\}, \forall x_i \in S$. Thus, in each $e_{Y \setminus \{x_i\}}$ it will search for minimal scopes $M$ such that $x_i \not\in M$ and $M \subseteq Y \setminus \{x_i\}, \forall x_i \in S$. Generalizing this, function {\em FindAllCons} will find all the minimal scopes $M$ such that $S \nsubseteq M$ and $M \subseteq Y$. Hence, it will find all the minimal scopes in $Y$ and then learn the constraints by calling {\em FindC}.
\end{proof}
We now analyse the complexity of MQuAcq in terms of the number of queries it asks to the user.
\begin{thm}
\label{complexity}
Given a bias $B$ built from a language $\Gamma$, with bounded arity constraints, and a target network $C_T$,
MQuAcq uses $O(|C_T| \cdot (log|X| + |\Gamma|))$ queries to find the target network or to collapse and $O(|B|)$ queries to prove convergence.
\end{thm}
\begin{proof}
Queries are asked to the user in lines 10 of {\em FindAllCons}, 3 of {\em FindScope} and 9 of {\em FindC}.
We know that a scope of a constraint from $C_T$ is found in $O(|S| \cdot log|Y|)$ queries with the function {\em FindScope}, with $|S|$ being the arity of the scope and $|Y|$ the size of the example given to the function \cite{bessiere2013constraint}. As $Y \subseteq X$, {\em FindScope} needs at most $|S| \cdot log|X|$ queries to find the scope of a constraint, because in {\em FindAllCons}, in the worst case, only one constraint from $C_T$ will be violated by any complete example. Also, {\em FindC} needs at most $|\Gamma|$ queries to find a constraint from $C_T$ in the scope it takes as parameter, if one exists \cite{bessiere2013constraint}. If none exists, the system collapses. Hence, in the worst case, the number of queries necessary to find each constraint is
$O(|S| \cdot log|X|+|\Gamma|)$. Thus, the number of queries for finding all the constraints in $C_T$ or collapsing is at most $|C_T| \cdot (|S| \cdot log|X|+|\Gamma|)$ which is $O(|C_T| \cdot (log|X|+|\Gamma|))$ because $|S|$ is bounded.
Convergence is proved when $B$ is empty or contains only redundant constraints. Constraints are removed from $B$ when the answer from the user to a query is ``yes''. In the case that the example generated by the algorithm in line 5 contains only one violated constraint from $B$, each query leads to at least one constraint removal. This gives a total of $O(|B|)$ queries to prove convergence.
\end{proof}
The complexities of QuAcq and MultiAcq to find the target network are $O(|C_T| \cdot (log|X| + |\Gamma|))$ and $O(|C_T| \cdot (|X| + |\Gamma|))$ respectively. Hence, we achieve the same bound as QuAcq but a better one than MultiAcq, while discovering all the violated constraints from a negative example.
\section{FindScope-2}
\label{findScope2}
We now describe an optimization to function {\em FindScope}, aiming at asking fewer queries to the user by avoiding posting redundant queries. This results in a function we simply call {\em FindScope-2}, which can be used instead of {\em FindScope} either inside QuAcq or inside our proposed algorithm MQuAcq.
Let us first consider a simple example to show a deficiency of {\em FindScope}.
\begin{example}
Consider the behaviour of {\em FindScope} in the simple problem of Example~\ref{ex:quacq}. The recursive calls of {\em FindScope} are illustrated in Table~\ref{ta:ex-quacq}. The negative example given to {\em FindScope} is $e = \{1,1,1,2,3,4,5,6\}$. The constraints from $B$ that it violates are $\kappa_B(e) = \{\neq_{12}, \neq_{13}, \neq_{23}\}$. After the first call to {\em FindScope}, $R$ is equal to $\{x_1, x_2, x_3, x_4\}$, so the partial example $e_R$ that is then asked to the user is $e_R = \{ 1,1,1,2 \}$.
As we can see, the constraints from $B$ that are violated are still $\kappa_B(e_R) = \{\neq_{12}, \neq_{13}, \neq_{23}\}$. Therefore, this partial example is negative, as no violated constraint from $B$ that could be included in $C_T$ is removed. Thus, there is no point in posting it to the user.
In addition, given the assumption that the bias is expressive enough to learn $C_T$, in cases where $|\kappa_B(e_R)| = 0$ (i.e. there is no violated constraint in $B$), it is implied that ASK($e_R$) = ``yes''. For example, see the last two queries asked by {\em FindScope} in the current example. These are queries that include only one variable, but the bias does not include any unary constraint. Thus, it is implied that these are positive examples, so these queries were redundant.
\end{example}
To avoid such redundant queries made by {\em FindScope}, we modify this function (see Algorithm \ref{alg:findscope2}), adding a check that inspects whether the number of violated constraints from the bias is the same as in the last query asked. This is implemented using a global variable $rej$ to store this number. This check is done in line 3. If this is the case, it is implied that the answer will still be ``no'' and therefore we return the empty set. Before the first call to {\em FindScope-2}, $rej$ must be initialized to the number of constraints from $B$ that are violated by the complete query.
\begin{algorithm}
\caption{FindScope-2}\label{alg:findscope2}
\begin{algorithmic}[1]
\Require $e$, $R$, $Y$, $ask\_query$ ($e$: the example, $R$,$Y$: sets of variables, $ask\_query$: boolean)
\Ensure $Scope$ : a set of variables, the scope of a constraint in $C_T$
\Function{FindScope-2}{$e$, $R$, $Y$, $ask\_query$}
\If{ $ask\_query \land |\kappa_B(e_R)| > 0$}
\If{ $rej \neq |\kappa_B(e_R)|$ }
\If{ ASK($e_R$) = ``yes'' } $B \leftarrow B \setminus \kappa_B(e_R) $;
\Else
\State $ rej \leftarrow |\kappa_B(e_R)|$;
\State \Return $\emptyset$;
\EndIf
\Else \quad \Return $\emptyset$;
\EndIf
\EndIf
\If{ $|Y| = 1$ } \Return $Y$;
\EndIf
\State split $Y$ into $\langle Y_1, Y_2 \rangle$ such that $|Y_1| = \lceil |Y|/2 \rceil $;
\State $S_1 \leftarrow FindScope-2(e,R \cup Y_1, Y_2, true)$;
\State $S_2 \leftarrow FindScope-2(e,R \cup S_1, Y_1, (S_1 \neq \emptyset))$;
\State \Return $S_1 \cup S_2$;
\EndFunction
\end{algorithmic}
\end{algorithm}
As a further improvement to {\em FindScope}, in cases where $|\kappa_B(e_R)| = 0$ no query is asked to the user, as it is implied that ASK($e_R$) = ``yes''. So another check is performed in line 2. If the bias is not expressive enough to learn $C_T$, the system will collapse later, because it will not find any constraint to learn.
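To make the control flow concrete, the following is a minimal executable sketch of {\em FindScope-2}, with the user simulated by a known target network of binary not-equal constraints. The tiny instance, the variable names, and the way ASK is simulated are all illustrative assumptions, not part of the algorithm's specification.

```python
from math import ceil

# Illustrative instance (an assumption for this sketch): target network
# CT = {x0 != x1, x4 != x5}; bias B = all binary not-equal scopes over 6 variables.
CT = [(0, 1), (4, 5)]
B = {(i, j) for i in range(6) for j in range(i + 1, 6)}

def violated(scopes, e, R):
    """Scopes fully inside R whose (not-equal) constraint is violated by e."""
    R = set(R)
    return {s for s in scopes if set(s) <= R and e[s[0]] == e[s[1]]}

def ask(e, R):
    """Simulated user: 'yes' iff e_R violates no target constraint."""
    return not violated(CT, e, R)

queries = 0
rej = 0

def find_scope2(e, R, Y, ask_query):
    global rej, queries, B
    kappa = violated(B, e, R)
    if ask_query and kappa:                  # line 2: skip if |kappa_B(e_R)| = 0
        if rej != len(kappa):                # line 3: same count => answer known
            queries += 1
            if ask(e, R):
                B = B - kappa                # positive answer prunes the bias
            else:
                rej = len(kappa)
                return []
        else:
            return []
    if len(Y) == 1:
        return list(Y)
    half = ceil(len(Y) / 2)
    Y1, Y2 = Y[:half], Y[half:]
    S1 = find_scope2(e, list(R) + Y1, Y2, True)
    S2 = find_scope2(e, list(R) + S1, Y1, bool(S1))
    return S1 + S2

e = [1, 1, 2, 3, 4, 4]                       # violates both target constraints
rej = len(violated(B, e, range(6)))          # initialise rej with the complete example
scope = sorted(find_scope2(e, [], list(range(6)), False))
print(scope, queries)                        # → [0, 1] 1
```

On this instance a single query suffices to isolate the scope $\{x_0, x_1\}$: the $rej$ check and the $|\kappa_B(e_R)| = 0$ check skip the queries whose answers are already implied.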
\subsection{FindScope-2 analysis}
\label{findScope2-analysis}
\begin{prop}
\label{prop:findscope-2}
Given the assumption that $C_T$ is representable by $B$, if {\em FindScope-2} is given an example $e_Y$ and returns a scope $S$, then there exists a violated constraint $c \in C_T$ with $scope(c) = S$. Also $S$ is a minimal scope.
\end{prop}
\begin{proof}
Let us first prove that the invariant of {\em FindScope} that the example $e$ violates at least one constraint whose scope is a subset of $R \cup Y$ (i.e. ASK($e_{R \cup Y}$) = ``no'') applies to {\em FindScope-2} as well. The check added at line 2 does not affect this property, as in the case $|\kappa_B(e_R)| = 0$ we know that for any $Y^\prime \subseteq Y$ it holds that ASK($e_{Y^\prime}$) = ``yes'' (Lemma~\ref{lemma1}). Thus, it reaches line 9 only in the case that $e_R$ does not violate any constraint from $C_T$. Focusing on the check added at line 3, in the case $rej = |\kappa_B(e_R)|$ it returns $\emptyset$, because we know that the answer of the user would be negative.
Thus, for the same reason as in {\em FindScope}, for any $x_i \in S$, we know that ASK($S$) = ``no'' and ASK($S \setminus x_i$) = ``yes''. Hence, $S$ is the scope of a violated constraint from the target network. Also, as we have ASK($S \setminus x_i$) = ``yes'' for any $x_i \in S$, it holds that $S$ is a minimal scope.
\end{proof}
\begin{prop}
\label{prop:findscope-2_compl}
Given a negative example $e_Y$, {\em FindScope-2} posts $\Theta(|S| \cdot log|Y|)$ queries in order to find the scope of a violated constraint, in the worst case.
\end{prop}
\begin{proof}
The number of queries posted by {\em FindScope-2} is equal to the number of nodes of the tree of recursive calls in the worst case, in which a query is posted in each node. Now we will find the number of nodes of this tree in this case.
The branches of the tree that will find a variable in $S$ will have $log|Y|$ depth. The tree of recursive calls to {\em FindScope-2} will have $|S|$ such branches. Thus, in such a case the number of nodes of these branches is $n = |S| \cdot log|Y|$.
{\em FindScope-2} either makes two recursive calls in each call or it returns $\emptyset$. Thus, for each node on a branch that finds a variable in $S$, we have one sibling that is either a leaf, or starts another branch that will find another variable in $S$. Hence, the leaves that do not return a variable in $S$ will be $n - |S|$. As a result, the number of nodes will be $2 \cdot |S| \cdot log|Y| - |S|$. As a result, {\em FindScope-2} posts $\Theta(|S| \cdot log|Y|)$ queries to the user in the worst case.
\end{proof}
\section{Query Generation in Constraint Acquisition}
\label{sec:query}
In constraint acquisition we would ideally want every generated query to contain as much ``information'' to be learned as possible. That is, we would like to generate queries that violate as many constraints as possible.
To acquire this information we want a constraint to be violated and then, via the user's answers, the algorithm will decide either to learn the constraint or to remove it from the bias. We want to maximize the number of constraints from $B$ that reject the generated example because this can reduce the number of queries required to converge. That is because after a positive query, all the constraints rejecting the example are removed from $B$. MQuAcq, like QuAcq, has a worst case complexity in terms of the number of queries of $O(|C_T| \cdot (log|X| + |\Gamma|)) + O(|B|)$. Hence, it is desirable to prune from $B$ the constraints that are not included in $C_T$ with as few queries as possible. Ideally, we want each positive query to violate a maximum number of constraints from $B$. With this objective, as proved in~\cite{bessiere2013constraint}, we can bridge the gap to $O(|C_T| \cdot log|X|)$ queries for some simple languages, and avoid needing a number of queries up to $|B|$ to prove convergence.
The standard technique for query generation in constraint acquisition systems, such as QuAcq and MultiAcq, is based on the following basic idea: find a solution of the learned constraint network ($C_L$) that violates a maximum number of constraints from the bias. Although the query generation step is a very important step of the acquisition process, it is not described in detail in the literature. Here we focus on query generation and explain it in detail for the first time. We then propose heuristics to enhance its efficiency.
Query generation is typically viewed as an optimization problem that includes both {\em hard} and {\em soft} constraints. In general, a hard constraint represents a requirement that cannot be violated; all hard constraints must be satisfied in a solution. Soft constraints are used to formalize desired properties, i.e. preferences that should be satisfied as much as possible.
Query generation can be modelled in this way by considering the constraints from $C_L$ as {\em hard} constraints. The {\em soft} constraints can either (and equivalently) be the constraints from the bias or their complement (the set $\{\neg c $ $|$ $c \in B \setminus C_L $\}). In case the soft constraints are the complement of the constraints from $B$, the objective is to maximize the number of such constraints that are satisfied. Otherwise, the objective is to maximize the number of constraints from $B$ that are violated.
As CP solvers, whether they can handle soft constraints or not, cannot express this objective naturally, we reformulate the problem as maximizing the number of satisfied constraint negations.
Hence, we view the problem of query generation as a MAX-CSP that includes hard constraints (the ones from $C_L$) and soft ones (the complement of the constraints in $B$).
Thus, this is a simple case of (unweighted) MAX-CSP~\cite{khanna2001approximability}, with the requirement that hard constraints must be satisfied, giving all the soft constraints the same importance and the goal is to maximize the number of the satisfied soft constraints.
In the case of query generation there is also another requirement, that is, at least one soft constraint must be satisfied. This is because we want to generate an irredundant query, in which we do not already know the answer.
This is also the approach taken by QuAcq and MultiAcq. When solving this optimization problem, both these algorithms try to find a solution that satisfies all the constraints in $C_L$ and maximizes the satisfaction of the complementary constraints from $B$. This is known as the {\em max} heuristic \cite{bessiere2013constraint,arcangioli2016multiple}.
Apart from the above, there are some extra steps in query generation. This is because generating a query that maximizes the number of violated constraints from $B$ is an NP-hard problem, and therefore may be very time-consuming.
The query generation process (line 4 of QuAcq and MultiAcq, line 5 of MQuAcq and line 6 of {\em FindC}) is presented in Algorithm~\ref{alg:irr}. We denote the process described above as $QGen(C_h,C_s)$, with $C_h$ being the set of hard constraints and $C_s$ the set of soft constraints.
\begin{algorithm}
\caption{IrrGen: Generate an Irredundant Query}\label{alg:irr}
\begin{algorithmic}[1]
\Require $C_L$, $B$, $V$, $D$ ($C_L$: the learned network, $B$: the bias, $V$: the variables, $D$: the domains of the variables)
\Ensure $q$: an irredundant query
\State $C_s \leftarrow \{ \neg c $ $|$ $c \in B \setminus C_L \}$;
\State $e \leftarrow QGen(C_L, C_s)$;
\If{ $e \neq nil$ } \Return $e$;
\Else
\ForAll{ $c \in C_s$}
\State Generate $e$ in $sol(C_L \cup \{c\})$;
\If{ $e \neq nil$ } \Return $e$;
\EndIf
\EndFor
\EndIf
\State \Return nil;
\end{algorithmic}
\end{algorithm}
First, the set $C_s$ containing the soft constraints is initialized (line 1). Next (line 2), an example is generated that is a solution to the learned network $C_L$ and satisfies as many constraints as possible from $C_s$ (i.e. violates as many as possible from the bias).
There are two cutoffs imposed in {\em QGen}, to make sure that the query generation will run in acceptable time. We denote them as $cut_{min}$ and $cut_{max}$. If the query generator has found a query violating at least one constraint and the first cutoff ($cut_{min}$) is triggered, the best query found is returned. If not, it tries until the maximum time (defined by $cut_{max}$) has been reached.
If an irredundant query is found then it is returned (line 3). In case no example is found within this time limit then the system tries again, taking the constraints in $B$ one by one (lines 5-8). That is, for each constraint $c \in C_s$, it tries to find a solution of $C_L$ satisfying $c$. The second cutoff of {\em QGen} ($cut_{max}$) is again used for this process. This is done until a query violating at least one constraint from $B$ is found. However, setting any time limit to the query generation process means that the algorithm may reach {\em premature convergence}. That is because it is quite likely that no solution to $C_L$ that violates some constraints from $B$ is found within the time limit, at some point of the algorithm's execution, meaning that it has not been proved that $sol(C) = sol(C_L)$ for every other network $C \subseteq B$ that agrees with $E$.
As a result, all the algorithms suffer from this problem. This is something that has only very recently been pointed out \cite{addi2018time,mquacq}.
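The optimization at the heart of {\em QGen} can be sketched, for a toy instance, as a brute-force MAX-CSP over complete assignments; a real system would use a CP solver with the cutoffs described above. The instance below (four variables, not-equal scopes) and all names are illustrative assumptions.

```python
from itertools import product

# Illustrative instance: 4 variables over D = {1,2,3}; scopes denote
# not-equal constraints. C_L is the learned network, B the bias.
X = range(4)
D = [1, 2, 3]
C_L = [(0, 1)]
B = [(0, 1), (1, 2), (2, 3)]

def violates(scope, e):
    i, j = scope
    return e[i] == e[j]          # a not-equal constraint is violated by equality

def qgen(C_h, C_s):
    """Best complete assignment satisfying all hard constraints C_h while
    violating as many soft (bias) scopes in C_s as possible; returns None
    if no assignment violates any soft scope, mirroring QGen returning nil."""
    best, best_count = None, 0
    for e in product(D, repeat=len(X)):
        if any(violates(s, e) for s in C_h):   # hard constraints must hold
            continue
        count = sum(violates(s, e) for s in C_s)
        if count > best_count:                 # maximize bias violations
            best, best_count = e, count
    return best

q = qgen(C_L, [s for s in B if s not in C_L])
print(q)
```

Here the generated query satisfies $x_0 \neq x_1$ while violating both remaining bias scopes ($x_1 = x_2 = x_3$), i.e. it is the most ``informative'' query on this instance.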
\section{Heuristics for boosting query generation}
\label{sec:heur}
In this section we propose heuristics to improve the performance of constraint acquisition systems. In Section~\ref{sec:par} we propose a heuristic for the generation of partial queries by the algorithms. In Sections~\ref{sec:var} and~\ref{sec:value} we propose heuristics for value and variable ordering when trying to generate queries.
\subsection{Exploiting partial queries}
\label{sec:par}
Let us first note that although both QuAcq and MultiAcq allow for the use of partial queries to focus on the violated constraint(s) after an example has been classified as negative, they both always aim to generate complete examples.
However, as explained, generating a complete example requires finding a complete variable assignment that satisfies all constraints in $C_L$ and violates at least one constraint in $B$. Given that this is an NP-complete problem, the process can be very time-consuming, especially as the size of $C_L$ grows and the size of $B$ shrinks, i.e. when approaching convergence.
Experimental results that we have obtained with both QuAcq and MultiAcq demonstrate that when no time limit to the query generation process is set, both algorithms can take several minutes (more than 30 minutes) to generate a query as convergence is approached, even for small problems such as the $9 \times 9$ Sudoku.
This of course is unacceptable from the user's point of view, and therefore a time limit is necessary for the practical application of the algorithms. However, setting a time limit to the query generation process means that the algorithm may reach {\em premature convergence}, as explained before.
Another relevant issue is that of proving convergence in problems that contain redundant constraints. As the system cannot always know beforehand if some of the constraints in the bias are redundant, proving that no solution of $C_L$ violates at least one constraint in $B$ can be very time-consuming in the presence of redundant constraints. This is because if near the end of the process $B$ is left with redundant constraints only, no solution of $C_L$ can violate any of these constraints, simply because these constraints, being implied, will be surely satisfied.
Given the importance of query generation in the acquisition process, it is of primary importance that it is executed as efficiently as possible, and in a way such that the problem of premature convergence is avoided as much as possible. Towards this, we propose to exploit partial queries at this step of the process. Both QuAcq and MultiAcq, and also our proposed algorithm, assume that the user, be it human or machine, is able to answer partial queries, so there is no reason to limit the use of partial queries to the case where a complete query has been classified as negative.
Our proposal is to model the query generation problem as an optimization problem, like in the previous section, in which we seek to find a (partial) assignment of the variables that maximizes the number of violated constraints in $B$. That is, we again have
a MAX-CSP, with the difference being that the optimal solution does not necessarily involve an assignment to all the variables. This optimization problem can be formally stated as ``search for $(e_Y, Y)$ with $e_Y \in sol(C_L[Y]) \land Y \subseteq X$, maximizing $|\kappa_B(e_Y)|$''. We call this heuristic {\em max$_B$}. This is related to but is not the same as the {\em max} heuristic (described in the previous section) that was also used within QuAcq \cite{bessiere2013constraint}. As already explained, the {\em max} heuristic tries to generate a complete solution of $C_L$ that violates a maximum number of constraints from $B$. Hence, given a time limit, which is necessary for any algorithm to run in reasonable time as explained above, {\em max} will focus on finding complete assignments that satisfy all the constraints in $C_L$ and violate as many as possible from $B$, while {\em max$_B$} will focus on violating as many constraints as possible from $B$ without necessarily building a complete variable assignment.
Of course, finding a partial assignment $e_Y$ on a set of variables $Y \subseteq X$ not rejected by $C_L$ and violating constraints from the bias, does not mean that $e_Y$ can be extended to a solution of $C_L$, but this is not a problem under the assumption that the user can classify partial queries correctly.
Although the difference between {\em max$_B$} and {\em max} may not seem substantial, experimental results given below show that the use of {\em max$_B$} largely alleviates the danger of premature convergence and can have a significant impact on the total run time of the acquisition algorithm. This is because by using {\em max$_B$}, the system can also learn redundant constraints, thus it does not have to prove that a constraint cannot be violated. Learning the redundant constraints is necessary if we want to guarantee that the system will always converge.
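The redundancy argument can be made concrete with a small brute-force contrast between {\em max} (complete assignments only) and {\em max$_B$} (partial assignments allowed). The instance below, with ``less-than'' constraints and an implied bias constraint, is an illustrative assumption chosen so that no complete solution of $C_L$ can violate the redundant constraint.

```python
from itertools import combinations, product

# Illustrative instance: C_L = {x0 < x1, x1 < x2}; the bias still holds the
# implied (redundant) constraint x0 < x2.
X, D = range(3), [1, 2, 3]
C_L = [((0, 1), '<'), ((1, 2), '<')]
B = [((0, 2), '<')]

def violates(c, e):
    (i, j), _ = c
    return i in e and j in e and not e[i] < e[j]   # only fully covered scopes count

def best_query(partial):
    """Assignment satisfying C_L restricted to its variables that violates
    the most bias constraints; complete assignments only unless `partial`."""
    best, best_count = None, 0
    sizes = range(1, len(X) + 1) if partial else [len(X)]
    for k in sizes:
        for Y in combinations(X, k):
            for vals in product(D, repeat=k):
                e = dict(zip(Y, vals))
                if any(violates(c, e) for c in C_L):
                    continue
                n = sum(violates(c, e) for c in B)
                if n > best_count:
                    best, best_count = e, n
    return best

print(best_query(partial=False))   # None: no solution of C_L violates x0 < x2
print(best_query(partial=True))    # a partial query violating it: {0: 1, 2: 1}
```

With complete assignments, the redundant constraint can never be violated, so the query generator finds nothing irredundant; with {\em max$_B$}, the partial assignment $\{x_0 = 1, x_2 = 1\}$ violates it and lets the system learn it, avoiding premature convergence on this instance.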
\subsection{Variable ordering heuristic}
\label{sec:var}
Given that query generation is an optimization problem that is solved using a CP solver, an important question that must be answered is which variable/value ordering heuristic to use for this problem. One way is to simply apply the default strategies of the CP solver used. For example, dom/wdeg \cite{boussemart2004boosting} or a simpler heuristic like dom/ddeg (or even dom)
for variable ordering, and random or lexicographic value ordering. This is the path taken by all acquisition algorithms so far. The reasoning behind it is that standard heuristics will help find a complete solution to $C_L$ quite fast, as this problem is a typical CSP, and then the maximum number of violated constraints in $B$ will be sought within the time limit as a secondary requirement.
But, as explained before, in constraint acquisition we would ideally want every generated query to contain as much ``information'' as possible.
Given that the focus of our proposed heuristic {\em max$_B$}, is on violating as many constraints from $B$ as possible, and not on building a complete solution to $C_L$, it is likely that traditional variable and value ordering heuristics, that are efficient when seeking a solution to a CSP, are not the best choice.
This is because these heuristics focus on information (conflicts, degrees, domain sizes) obtained from the variables and constraints of $C_L$. But since {\em max$_B$} primarily focuses on the bias $B$, meaning that finding a complete solution to $C_L$ is not paramount, it is reasonable to use a variable ordering heuristic that exploits information obtained from the variables and constraints of $B$. Towards this, we propose a simple variable ordering heuristic that maximizes the constraint violations in $B$.
This heuristic, which we call {\em bdeg} (degree of variables in the bias $B$), selects the variable which participates in the maximum number of constraints present in $B \setminus C_L$. It can be seen as analogous to the classic variable ordering heuristic {\em deg} for CSPs, which selects the variable with the largest degree. But in contrast to standard CSPs where deg is not competitive at all, bdeg is very efficient when used for query generation, especially near convergence where it manages to drastically cut down the waiting times for the user.
Comparing bdeg to heuristics like dom/wdeg, we note that bdeg prefers variables belonging to the scope of many constraints from $B$, ignoring $C_L$, whereas standard heuristics do not treat the constraints from $C_L$ and $B$ differently when computing their metric, or even focus only on constraints from $C_L$. Hence, for example, a variable involved in many constraints from $C_L$ and only a few from $B$ is very likely to be preferred.
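The bdeg selection itself is a one-line maximization, sketched below on a hypothetical bias and learned network (scopes and instance are assumptions for illustration):

```python
# Illustrative scopes: C_L is the learned network, B the bias.
C_L = [(0, 1)]
B = [(0, 1), (1, 2), (2, 3), (2, 4)]

def bdeg(var, bias, learned):
    """Number of scopes in B \\ C_L in which var participates."""
    return sum(var in s for s in bias if s not in learned)

unassigned = [0, 1, 2, 3, 4]
chosen = max(unassigned, key=lambda v: bdeg(v, B, C_L))
print(chosen)   # → 2, which participates in (1,2), (2,3) and (2,4)
```

Variable $x_2$ has the largest degree in $B \setminus C_L$, so bdeg branches on it first, steering the search toward assignments that can violate many bias constraints.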
\subsection{Value Ordering Heuristic}
\label{sec:value}
First recall that a generated query should violate a large number of constraints from $B$ and some of these constraints may be added to $C_L$, while others may be removed from $B$, depending on the answers of the user to the partial queries posted by {\em FindAllCons}.
So, using a value ordering heuristic that picks values that are involved in a large number of conflicts (i.e. violate many constraints) in $B$ makes sense for the query generation step. This is because picking a value which violates many constraints from $B$ quickly leads to a (partial) assignment with a lot of information to be extracted.
Hence, we propose the {\em max$_v$} heuristic for value ordering, which selects the value that maximizes the number of conflicts (constraint violations) between the currently selected variable and the variables that have been already assigned.
We consider instantiated variables only because if {\em max$_B$} is used then the query generated may not include all the variables of the problem.
To better understand the practical effect of the {\em max$_v$} heuristic, let us consider its behaviour when generating a query in the problem from our running example.
\begin{example}
For the purposes of this example, the variable ordering heuristic is not important. Assume that variables are ordered lexicographically. In the beginning of the query generation process, the use of {\em max$_v$} will not have any effect at the first variable's instantiation. Assume that the value selected for the first variable is $x_1 = 1$. From now on, when choosing a value for the following variables, {\em max$_v$} will keep on choosing value 1, as in this example it is the only value that violates some constraints from $B$. Thus, the generated example will be $e = \{1,1,1,1,1,1,1,1\}$, which in this case violates all the constraints from the bias.
\end{example}
An important factor to consider regarding the effect of value ordering concerns the generation of the first query.
In this case all the variables are involved in the same number of constraints in $B$ (because nothing has been removed from $B$ yet) and $C_L$ is empty. Thus, the variable ordering is not important. On the other hand, value ordering seems to be very important.
Experiments have shown that using {\em max$_v$} for the generation of the first query often leads to a query which violates {\em all} of the constraints from $C_T$. Also, when the acquisition process is near convergence and only a few constraints are left in $B$, we want queries that remove a maximum number of constraints after a positive answer from the user. So, again the {\em max$_v$} heuristic is the best choice.
Given the importance of selecting values that maximize constraint violations, the lexico value ordering heuristic, which simply selects values in their lexicographic order and is very commonly used by CP solvers,
is not a good idea because the values that maximize conflicts may appear in the middle or near the end of a domain, meaning that values with low conflicts may be preferred instead. On the other hand, random value ordering is a better idea but still not as good as focusing on the values that maximize conflicts, as {\em max$_v$} does. As a downside, {\em max$_v$} is more expensive to compute as the conflicts caused by each value must be calculated before the selection is made.
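The conflict count that {\em max$_v$} maximizes can be sketched as follows; the bias scopes, the partial assignment, and the helper name are illustrative assumptions:

```python
# Illustrative bias of binary not-equal scopes and a partial assignment in progress.
B = [(0, 1), (1, 2), (2, 3)]
assigned = {0: 1, 1: 1}

def conflicts(var, val, assigned, bias):
    """Bias (not-equal) constraints that assigning var = val would violate
    together with the already-assigned variables."""
    total = 0
    for i, j in bias:
        other = j if i == var else (i if j == var else None)
        if other is not None and other in assigned and assigned[other] == val:
            total += 1
    return total

var, domain = 2, [1, 2, 3]
best_val = max(domain, key=lambda v: conflicts(var, v, assigned, B))
print(best_val)   # → 1: it clashes with x1 = 1 on scope (1, 2)
```

Only already-assigned variables are counted, matching the remark above that under {\em max$_B$} the generated query may not cover all the variables; lexicographic ordering would have picked value 1 here only by coincidence, while {\em max$_v$} picks it by construction.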
\section{Experimental Evaluation}
\label{sec:experiments}
To evaluate our proposed methods, we ran experiments on a system carrying an Intel(R) Xeon(R) E5-2667 CPU, 2.9 GHz,
with 8 GB of RAM. We compared the proposed methods to both QuAcq and MultiAcq, which were implemented as efficiently as possible using the strategies described in \cite{bessiere2013constraint,arcangioli2016multiple}.
To be precise:
\begin{itemize}
\item
The ``flawed'' {\em FindC} function of \cite{bessiere2013constraint} described in Section~\ref{sec:quacq} is used in all our methods. This does not affect our results as a situation in which it may fail (analyzed in Section~\ref{sec:quacq})
does not appear in any of the studied benchmarks. However, we have also implemented the corrected version of \cite{bessiere2016new} to deal with such cases when they arise.
\item
In all our methods, and also in QuAcq and MultiAcq, we set the $cut_{max}$ cutoffs of the query generation step (described in Section~\ref{sec:query}) to 5 seconds. This means that if no query is found within 5 seconds, Function {\em QGen} returns. Also, we set the $cut_{min}$ cutoff to 1 second, returning the best example found within this time limit, if any is found.
\item
To maximize the performance of MultiAcq we used the heuristic proposed in \cite{arcangioli2016multiple}:
A cutoff of 5 seconds is used in function {\em FindAllScopes}. After triggering the cutoff for the first time, {\em FindAllScopes} is called again on the same complete example with a reverse order of the variables. If the cutoff is triggered for a second time, we generate a new example and shuffle the variables' order. To ensure termination, {\em FindAllScopes} is forced to return at least one scope before cutting off.
\item
To evaluate our proposed variable and value ordering heuristics, we implemented them within QuAcq and MQuAcq and ran experiments using benchmark instances. We compared them with the use of {\em dom/wdeg} as variable ordering heuristic and {\em random} for value ordering, which are the standard options for existing acquisition algorithms.
\item In order to compare all the algorithms on the same scenario, all the experiments concern the extreme case where no background knowledge is used and thus $C_L$ is initially empty. This extreme scenario results in an overall number of queries that may seem too large for human users to answer without making errors. However, in real applications background knowledge can be used either by giving a frame of basic constraints to the system or by using some other method to extract some constraints from known solutions and non-solutions of the problem, e.g. ModelSeeker~\cite{beldiceanu2012model}.
\item
Each method was run 10 times and the means are presented.
\end{itemize}
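The two-cutoff query generation scheme described above can be sketched as follows. Here \texttt{candidate\_stream} and \texttt{violates\_bias} are hypothetical interfaces standing in for the solver (a real implementation would interrupt the solver mid-search, whereas this sketch only checks the clock between candidates):

```python
import time

def qgen(candidate_stream, violates_bias, cut_min=1.0, cut_max=5.0):
    """Sketch of the two-cutoff query generation scheme.

    candidate_stream yields assignments satisfying the learned network;
    violates_bias counts how many bias constraints an assignment violates.
    """
    start = time.monotonic()
    best, best_score = None, 0
    for example in candidate_stream:
        score = violates_bias(example)
        if score > best_score:
            best, best_score = example, score
        elapsed = time.monotonic() - start
        # After cut_min seconds, return the best example found so far;
        # after cut_max seconds, give up even if nothing was found.
        if (elapsed >= cut_min and best is not None) or elapsed >= cut_max:
            break
    return best  # None means no irredundant query was found in time
```

Returning \texttt{None} is what can ultimately lead to premature convergence, since the algorithm cannot distinguish "no irredundant query exists" from "none was found within the cutoff".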
We used the following benchmarks in our study:
\textbf{Sudoku}. The Sudoku puzzle is a 9 $\times$ 9 grid. It must be completed in such a way that all the rows, all the columns and the 9 non-overlapping 3 $\times$ 3 squares contain the numbers 1 to 9. The {\em vocabulary} for this problem has 81 variables and domains of size 9. The target network has 810 binary $\neq$ constraints on rows, columns and squares. The bias was initialized with 12,960 binary constraints from the language $ \Gamma = \{=, \neq, >, < \}$.
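The size of such a bias follows directly from posting one candidate constraint per unordered pair of variables and per relation in the language. The snippet below (our own arithmetic check, not from the benchmark generators) reproduces the bias sizes reported for several of the benchmarks in this section:

```python
from math import comb

# One candidate constraint per unordered pair of variables
# and per relation in the language Gamma.
def bias_size(n_vars, n_relations):
    return comb(n_vars, 2) * n_relations

assert bias_size(81, 4) == 12960    # Sudoku and GTSudoku: C(81,2) * |{=, !=, >, <}|
assert bias_size(100, 4) == 19800   # Latin square
assert bias_size(50, 10) == 12250   # RLFAP: 2 distance constraints * 5 values of y
```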
\textbf{Greater than Sudoku} (GTSudoku). This is a variant of Sudoku where instead of having only cliques of $\neq$ constraints, some neighboring variables are related via $>, <$ constraints. The number of variables and the maximum domain size are the same as in Sudoku, but there are no prefilled squares (i.e. assigned variables). We used the instance shown in Figure~\ref{fig:gtsudoku} and the same language and bias as in Sudoku. What is interesting with GTSudoku is that the introduction of inequality constraints breaks up the regular structure of Sudoku.
\begin{figure}[h]
\centerline{\includegraphics[width=2.2in]{gtsudoku2}}
\caption{Greater than Sudoku instance used in the experiments}
\label{fig:gtsudoku}
\end{figure}
\textbf{Latin Square}. The Latin square problem consists of an $n \times n$ table in which each element occurs once in every row and column. In our experiments we set $n$ to 10, meaning that we have 100 variables with domains of size 10. The target network has 900 binary $\neq$ constraints on rows and columns. The system was initialized with a bias of 19,800 binary constraints created from the language $\Gamma = \{=, \neq, >, < \}$.
\textbf{Zebra}. The Zebra problem has a single solution. The problem
consists of 25 variables with domains of size 5. The target network contains 50 $\neq$ constraints and 12 additional constraints given in the description of the problem. The bias was initialized with 1200 binary constraints from the language $\Gamma = \{=, \neq, >, <, x_i - x_j = 1, |x_i - x_j| = 1\}$.
\textbf{Murder}. Someone was murdered and there are 5 suspects, each one having an item, an activity and a motive for the crime. The problem is to find the murderer.
This problem consists of 20 variables (the 5 suspects and their items, activities and motives)
with domains of size 5. The target network contains 4 cliques of $\neq$ constraints and 12 additional binary constraints, given as clues in the description of the problem. The bias was initialized with 760 constraints based on the language $\Gamma = \{=, \neq, >, < \}$.
\textbf{Purdey’s general store}~\cite{purdey}. Four families stopped by Purdey’s general store, each one aiming to buy a different item and paying in a different way. Under a set of additional constraints given in the description of the problem, the goal is to match each family with the item they bought and how they paid for it. It has a single solution. It is modelled with 12 variables (for families, items and paying methods), with domains of size 4. The target network consists of 27 constraints. The bias was initialized with 264 constraints based on the language $\Gamma = \{=, \neq, >, < \}$.
\textbf{Allergy}. A problem crafted by the XCSP team. There are 4 people having allergies, and 8 products are given in the description of the problem. Based on the constraints given, the goal is to find who has an allergy on which product. It consists of 12 variables with domains of size 4 and 26 constraints. The bias was initialized with 264 constraints based on the language $\Gamma = \{=, \neq, >, < \}$.
\textbf{Golomb rulers}. The problem is to find a ruler where the distance between any two marks is different from that between any other two marks. We built a simplified version of a Golomb ruler with 12 marks, with the target network consisting only of quaternary constraints\footnote{The ternary constraints derived when $i = k$ or $j = l$ in $|x_i - x_j| \neq |x_k - x_l|$ were excluded from the target network.}. In total $C_T$ consists of 495 constraints.
The bias was created with the language $\Gamma = \{=, \neq, >, <, |x_i - x_j| = |x_k - x_l|, |x_i - x_j| \neq |x_k - x_l| \}$, including binary and quaternary constraints. In total $B$ contained 1254 constraints.
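The reported sizes of $C_T$ and $B$ for this benchmark are consistent with one quaternary scope per 4-subset of marks. The arithmetic below is our own interpretation of those counts, not taken from the paper:

```python
from math import comb

n = 12  # marks on the ruler

# Binary part of the bias: {=, !=, >, <} on every pair of marks.
binary_bias = comb(n, 2) * 4             # 66 * 4 = 264

# One quaternary scope per 4-subset of marks, carrying the two relations
# |x_i - x_j| = |x_k - x_l| and |x_i - x_j| != |x_k - x_l|.
quaternary_scopes = comb(n, 4)           # 495, matching |C_T|
quaternary_bias = quaternary_scopes * 2  # 990

assert quaternary_scopes == 495
assert binary_bias + quaternary_bias == 1254  # the reported size of B
```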
\textbf{Exam Timetabling Problem} (Exam TT). We used a simplified version of the exam timetabling problem of the Department of Electrical and Computer Engineering of the University of Western Macedonia, Greece. We considered 24 courses and 2 weeks of exams, meaning that there are 10 possible days for each course to be assigned. We assumed that there are 3 timeslots available in each day. This resulted in a model with 24 variables and domains of size 30. There are $\neq$ constraints between any two courses, assuming that only one course is examined during each time slot, i.e. there is only one available room for examination. Also, there exist constraints prohibiting courses of the same semester from being examined on the same day. As we assumed that there are 3 timeslots in each day, the constraint preventing two courses of the same semester from being examined on the same day was modeled as $|\lfloor x_i/3 \rfloor - \lfloor x_j/3 \rfloor| > 0$. Hence, the language used was $\Gamma = \{=, \neq, >, <, |\lfloor x_i/3 \rfloor - \lfloor x_j/3 \rfloor| > y\}$, with 5 different values for $y$. This resulted in a bias of 3864 constraints.
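The floor-based encoding of the same-day constraint can be checked in a few lines; \texttt{different\_day} is a hypothetical helper name used only for illustration:

```python
def different_day(xi, xj, slots_per_day=3):
    # Timeslot t belongs to day t // slots_per_day, so
    # |floor(xi/3) - floor(xj/3)| > 0 forbids two exams on the same day.
    return abs(xi // slots_per_day - xj // slots_per_day) > 0

assert not different_day(0, 2)  # slots 0 and 2 both fall on day 0
assert different_day(2, 3)      # slot 3 is the first slot of day 1
```

With 10 days of 3 slots each, the 30 timeslot values map to days 0 through 9, which is exactly the domain size used in the model.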
\textbf{Radio Link Frequency Assignment Problem}.
The RLFAP is the problem of providing communication channels from limited spectral resources~\cite{cabon1999radio}. We use a simplified version of the problem, which consists of 50 variables with domains of size 40. The target network contains 125 binary distance constraints. We built the bias using a language of 2 basic distance constraints ($\{|x_i - x_j| > y, |x_i - x_j| = y\}$) with 5 different possible values for $y$. This led to a language of 10 different distance constraints.
In total, $B$ contained 12250 constraints.
In our experiments we measure the size of the learned network $C_L$, the total number of queries $\#q$, the average size $\bar{q}$ of all queries, the number of complete queries $\#q_c$, the average waiting time $\bar{T}$ (in secs) for the user, the maximum waiting time $T_{max}$ (in secs) for the user, the time $T_{queries}$ taken from the start of the process until the last query, and the total time needed to converge, $T_{total}$. The difference between $T_{total}$ and $T_{queries}$ is the time needed to prove convergence or to reach premature convergence (because of the cutoffs). The size of $C_L$ is in some cases smaller than the size of the target network $C_T$ due to the presence of redundant constraints that some methods learn and others do not. In addition, we counted the times each method triggered either of the two cutoffs.
We first demonstrate the performance of MQuAcq and {\em FindScope-2} on these benchmarks, compared to the existing methods (Section~\ref{sec:mquacq-eval}). Then in Section~\ref{sec:heur-eval} we evaluate the proposed heuristics. In Section~\ref{sec:bias} we evaluate the effect of the size of the bias on the performance of MQuAcq. Finally, in Section~\ref{sec:scaling} we investigate our algorithm's scalability.
\subsection{MQuAcq and {\em FindScope-2} evaluation}
\label{sec:mquacq-eval}
For the experiments presented here all the methods compared, including QuAcq and MultiAcq, use the {\em max} heuristic (described in Section~\ref{sec:query}) for the query generation step, with {\em dom/wdeg} for variable ordering and {\em random} value ordering. In Table \ref{res:comp} we evaluate our proposed algorithm MQuAcq and the new function {\em FindScope-2} and we compare them against the existing methods. Hence, we give results from QuAcq, MultiAcq, MQuAcq, QuAcq with {\em FindScope-2} instead of {\em FindScope} and MQuAcq with {\em FindScope-2}.
We do not present results from the RLFAP benchmark for MultiAcq, as it did not manage to converge after running for 24 hours. This was due not only to its linear complexity in terms of the number of queries, but also because the bias contains many constraints in each possible scope and thus the condition at line 2 of {\em FindAllScopes} does not help to avoid redundant searches.
The other algorithms also suffer from high cpu times, but only when they try to generate queries near convergence. The large number of constraints in each scope, and particularly the existence of constraints that are implied by other constraints (e.g. $|x_i - x_j| > y_1$ implies $|x_i - x_j| > y_2$ if $y_1 > y_2$), can cause a large number of constraints in the bias that cannot be violated, resulting in high convergence times. A similar problem is present in GTSudoku, again because of implied constraints that appear in the bias.
As we explain in Section~\ref{sec:heur-eval}, our proposed heuristics from Section~\ref{sec:heur} can alleviate this problem.
\begin{table}[htbp]
\begin{footnotesize}
\centering
\caption{Results of MQuAcq and {\em FindScope-2}}
{
\resizebox{\textwidth}{!}{
\begin{tabular}{ |l|l|r|r|r|r|r|r|r|r| }
\hline
Benchmark & Algorithm & $|C_L|$ & $\#q$ & $\bar{q}$ & $\#q_c$ & $\bar{T}$ & $T_{max}$ & $T_{queries}$ & $T_{total}$ \\
\hline
& QuAcq & 648 & 11529 & 35 & 659 & 0.061 & 1.14 & 708.76 & 1529.78 \\
& MultiAcq & 796 & 14508 & 10 & 39 & 0.071 & 36.76 & 1034.52 & 1119.69 \\
& MQuAcq & 803 & 14935 & 26 & 37 & 0.010 & 20.49 & 154.47 & 194.57 \\
& QuAcq + {\em FindScope-2} & 648 & 5960 & 43 & 659 & 0.119 & 1.15 & 710.57 & 1531.58 \\
\multirow{-5}{*}{Sudoku} & MQuAcq + {\em FindScope-2} & 801 & 6865 & 32 & 40 & 0.026 & 15.33 & 175.14 & 225.15 \\
\hline
& QuAcq & 634 & 11325 & 35 & 649 & 0.82 & 1140.45 & 9235.54 & 11217.63 \\
& MultiAcq & 747 & 16324 & 15 & 70 & 0.78 & 1383.72 & 12522.16 & 13917.67 \\
& MQuAcq & 732 & 13912 & 26 & 45 & 0.40 & 905.68 & 5564.73 & 6959.69 \\
& QuAcq + {\em FindScope-2} & 636 & 5950 & 42 & 653 & 1.51 & 1582.52 & 8987.09 & 10920.54 \\
\multirow{-5}{*}{GTSudoku} & MQuAcq + {\em FindScope-2} & 742 & 6663 & 31 & 52 & 0.86 & 970.77 & 5720.09 & 7003.27 \\
\hline
& QuAcq & 855 & 15489 & 46 & 870 & 0.066 & 10.17 & 1020.83 & 1251.22 \\
& MultiAcq & 899 & 21079 & 11 & 52 & 0.163 & 20.27 & 3429.16 & 3439.18 \\
& MQuAcq & 899 & 17842 & 37 & 49 & 0.010 & 5.23 & 171.75 & 181.77 \\
& QuAcq + {\em FindScope-2} & 855 & 8115 & 55 & 873 & 0.127 & 10.15 & 1028.85 & 1259.23 \\
\multirow{-5}{*}{Latin} & MQuAcq + {\em FindScope-2} & 899 & 8228 & 46 & 50 & 0.023 & 10.30 & 189.34 & 199.38 \\
\hline
& QuAcq & 60 & 775 & 11 & 60 & 0.069 & 1.03 & 53.68 & 53.69 \\
& MultiAcq & 57 & 975 & 6 & 8 & 0.264 & 127.62 & 257.77 & 257.78 \\
& MQuAcq & 59 & 783 & 8 & 8 & 0.006 & 1.03 & 4.37 & 4.37 \\
& QuAcq + {\em FindScope-2} & 60 & 496 & 12 & 60 & 0.109 & 1.03 & 54.08 & 54.03 \\
\multirow{-5}{*}{Zebra} & MQuAcq + {\em FindScope-2} & 59 & 469 & 10 & 7 & 0.009 & 1.03 & 4.08 & 4.09 \\
\hline
& QuAcq & 52 & 599 & 9 & 52 & 0.085 & 1.01 & 51.16 & 51.48 \\
& MultiAcq & 52 & 704 & 5 & 8 & 0.025 & 3.65 & 17.34 & 17.65 \\
& MQuAcq & 52 & 619 & 6 & 8 & 0.012 & 1.01 & 7.28 & 7.75 \\
& QuAcq + {\em FindScope-2} & 52 & 356 & 10 & 52 & 0.144 & 1.01 & 51.31 & 51.63 \\
\multirow{-5}{*}{Murder} & MQuAcq + {\em FindScope-2} & 52 & 374 & 8 & 7 & 0.028 & 1.01 & 6.56 & 6.89 \\
\hline
& QuAcq & 26 & 282 & 5 & 26 & 0.07 & 1.00 & 19.15 & 19.16 \\
& MultiAcq & 27 & 234 & 4 & 5 & 0.01 & 1.01 & 2.21 & 2.22 \\
& MQuAcq & 26 & 269 & 4 & 5 & 0.01 & 1.00 & 2.06 & 2.08 \\
& QuAcq + {\em FindScope-2} & 26 & 170 & 6 & 26 & 0.11 & 1.00 & 19.23 & 19.25 \\
\multirow{-5}{*}{Purdey} & MQuAcq + {\em FindScope-2} & 26 & 149 & 5 & 5 & 0.01 & 1.00 & 2.13 & 2.14 \\
\hline
& QuAcq & 26 & 283 & 5 & 26 & 0.06 & 1.00 & 17.83 & 17.85 \\
& MultiAcq & 26 & 226 & 4 & 4.8 & 0.01 & 1.00 & 2.16 & 2.18 \\
& MQuAcq & 26 & 267 & 4 & 5 & 0.01 & 1.00 & 1.94 & 2.06 \\
& QuAcq + {\em FindScope-2} & 26 & 169 & 6 & 26 & 0.11 & 1.00 & 17.87 & 17.88 \\
\multirow{-5}{*}{Allergy} & MQuAcq + {\em FindScope-2} & 26 & 151 & 4 & 5 & 0.01 & 1.00 & 2.11 & 2.11 \\
\hline
& QuAcq & 495 & 7585 & 6 & 496 & 0.069 & 1.17 & 525.98 & 526.08 \\
& MultiAcq & 495 & 2368 & 6 & 64 & 0.029 & 1.18 & 68.82 & 68.92 \\
& MQuAcq & 495 & 6350 & 5 & 72 & 0.012 & 1.18 & 78.76 & 78.86 \\
& QuAcq + {\em FindScope-2} & 495 & 1552 & 9 & 496 & 0.338 & 1.18 & 524.97 & 525.07 \\
\multirow{-5}{*}{Golomb-12} & MQuAcq + {\em FindScope-2} & 495 & 961 & 8 & 69 & 0.082 & 3.87 & 79.10 & 79.20 \\
\hline
& QuAcq & 276 & 3856 & 11 & 277 & 0.07 & 1.14 & 281.11 & 576.17 \\
& MultiAcq & 276 & 3086 & 7 & 35 & 0.73 & 341.76 & 2264.81 & 2553.40 \\
& MQuAcq & 276 & 3747 & 9 & 36 & 0.08 & 126.13 & 311.70 & 592.94 \\
& QuAcq + {\em FindScope-2} & 276 & 1451 & 14 & 277 & 0.19 & 1.04 & 281.66 & 576.86 \\
\multirow{-5}{*}{Exam TT} & MQuAcq + {\em FindScope-2} & 276 & 1222 & 11 & 36 & 0.24 & 100.57 & 296.69 & 584.93 \\
\hline
& QuAcq & 102 & 1705 & 26 & 166 & 3.657 & 890.60 & 6,235.19 & 7,513.17 \\
& MultiAcq & - & - & - & - & - & - & - & - \\
& MQuAcq & 122 & 2492 & 24 & 107 & 2.067 & 933.00 & 5,150.23 & 6,308.14 \\
& QuAcq + {\em FindScope-2} & 102 & 1096 & 29 & 167 & 6.163 & 896.26 & 6,755.14 & 7,629.21 \\
\multirow{-5}{*}{RLFAP} & MQuAcq + {\em FindScope-2} & 122 & 1442 & 25 & 107 & 3.380 & 932.00 & 4,873.96 & 6,204.14 \\
\hline
\end{tabular}}
}
\label{res:comp}
\end{footnotesize}
\end{table}
Looking at the performance of MQuAcq, and comparing it to QuAcq, we observe that the use of {\em FindAllCons} to learn all the violated constraints from a negative example significantly reduces the average waiting time per query for the user, as well as the total execution time, in all benchmarks except RLFAP and GTSudoku, where the time needed is only slightly reduced (due to the nature of these problems, as mentioned above). Also, in Exam TT, QuAcq and MQuAcq have similar performance in terms of average time and total time. This is because, although MQuAcq learns most of the constraints faster, it needs considerably more time to generate the last queries: due to the structure of the problem, several constraints from the target network that have not yet been learned (i.e. they are still in the bias) are difficult to violate when the learned network is satisfied. This is confirmed by the maximum time that the user had to wait for a query to be generated and posted.
Regarding the rest of the problems, QuAcq is 8 times slower than MQuAcq in Sudoku and Allergy, 7 times in Latin square, 12 times in Zebra, 9 times in Purdey, and 6.5 times in Murder and Golomb rulers. This is because MQuAcq generates fewer new examples at line 5, as the algorithm is able to learn a maximum number of violated constraints from each negative example. This is validated by looking at column $\#q_c$, which shows that far fewer complete queries are generated. As a downside, MQuAcq requires more queries in total than QuAcq to converge in most cases, with the difference being more evident on Sudoku, GTSudoku and Latin. However, as we can see, on these problems MQuAcq learns a greater number of constraints of the target network than QuAcq, and the average size of the queries posted by MQuAcq is smaller. Also, we can observe that in Golomb rulers, which contains quaternary constraints, MQuAcq posted fewer queries to the user.
Comparing MQuAcq to MultiAcq, it is clear that the redundant searches made by MultiAcq greatly affect the average time per query and the total time needed for the system to converge. MQuAcq needs far less time to ask a query, and requires fewer queries to converge, on most problems. On the other hand, on Golomb Rulers, MultiAcq displays better performance both in number of queries and in total time. This can be explained by the fact that {\em FindScope}, which is used by MQuAcq, posts many redundant queries to the user, and also by the fact that the problem consists of only 12 variables, so the branching of MultiAcq is not very time-consuming.
Focusing on {\em FindScope-2} when used inside QuAcq, we can see that the number of queries posted to the user
was significantly lower compared to standard QuAcq with {\em FindScope}, because the former avoids asking several redundant queries. In terms of the number of queries, {\em FindScope-2} gives a gain of $35\%$ on the RLFAP problem, $36\%$ on the Zebra problem, $40\%$ on Murder, Purdey and Allergy, $48\%$ on Sudoku, GTSudoku and Latin square, $62\%$ on Exam TT and up to $80\%$ on Golomb Rulers.
Interestingly, it seems that the more variables a problem has, the bigger the gain in avoided queries. As a downside, {\em FindScope-2} increases the average waiting time between queries, but not the total time required to converge; the average rises simply because redundant queries are no longer posted. In addition, as the results on Golomb Rulers show, the reduction in the number of queries is even bigger in problems with higher-arity constraints.
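These percentage gains can be reproduced from the query counts in Table~\ref{res:comp}; the small check below is our own arithmetic (rounding conventions may differ by a point for some entries):

```python
def gain(q_old, q_new):
    # Percentage reduction in the number of queries when
    # FindScope-2 replaces FindScope.
    return round(100 * (1 - q_new / q_old))

# Query counts of QuAcq with FindScope vs. QuAcq with FindScope-2:
assert gain(775, 496) == 36     # Zebra
assert gain(11529, 5960) == 48  # Sudoku
assert gain(3856, 1451) == 62   # Exam TT
```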
The results obtained from MQuAcq with {\em FindScope-2} show that the use of {\em FindScope-2} has the same effect on MQuAcq as on QuAcq, cutting down the number of queries significantly, from $40\%$ (in Murder) up to $85\%$ (in Golomb). Compared to MultiAcq, the number of queries posted to the user is now considerably lower, from $33\%$ (in Allergy) up to $61\%$ (in Latin squares).
Regarding the cutoffs,
neither of the two cutoffs was triggered by any method on Zebra, Murder, Purdey, Allergy and Golomb. On Sudoku, QuAcq (resp. MultiAcq) triggered the first cutoff 2 (resp. 3) times on average and the second 170 (resp. 26) times. On Latin squares these numbers were 9 and 46 for QuAcq and 16 and 19 for MultiAcq. MQuAcq triggered the first cutoff 5 times on average on Sudoku and the second also 5 times. On Latin square these numbers were 11 and 5 respectively. Given that the triggering of the cutoffs is associated with the problem of premature convergence, as we explain at the end of Section~\ref{sec:quacq}, the lower numbers for MQuAcq indicate that it is less likely to terminate with premature convergence. On the other hand, on Exam TT, QuAcq triggered the first cutoff only once, while MQuAcq and MultiAcq triggered it 5 times. The second cutoff was triggered 57 times by QuAcq, 98 times by MultiAcq and 104 times by MQuAcq.
On RLFAP and GTSudoku, the cutoffs were triggered too many times due to the reasons explained above. On average, QuAcq triggered the first cutoff 21 times and the second 1444 times on RLFAP (resp. 98 and 1757 on GTSudoku). The corresponding numbers for MQuAcq were 20 and 1195 on RLFAP (resp. 17 and 1356 on GTSudoku). MultiAcq triggered the first cutoff 27 times and the second 2598 times on GTSudoku.
In the remainder of the experimental evaluation we compare our methods only against QuAcq, as it is clear that learning a maximum number of constraints from each generated query is more efficient with MQuAcq than with MultiAcq. Also, we present the results of both QuAcq and MQuAcq with the use of {\em FindScope-2} instead of {\em FindScope}.
\subsection{Evaluation of heuristics}
\label{sec:heur-eval}
In this section we first evaluate the heuristic {\em max$_B$} for the query generation step in tandem with {\em bdeg} for variable ordering. Next, we focus on the performance of the proposed value ordering heuristic.
\subsubsection{{\em max$_B$} for the query generation step}
\label{sec:maxb-eval}
Recall that the objective of the {\em max$_B$} heuristic is to find a (partial) assignment that maximizes the number of violated constraints from $B$ instead of focusing on finding a complete solution of $C_L$ as {\em max} does. Hence, if {\em max$_B$} is used to generate queries, the variable ordering heuristic should comply with this objective. Our intuition behind the proposed variable ordering heuristic {\em bdeg} is that standard heuristics like {\em dom/wdeg} are not suitable for use in conjunction with {\em max$_B$}.
On the other hand, such heuristics are better suited to be used in tandem with {\em max} whose objective is to find a complete solution of $C_L$ quickly.
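As an illustration of this intuition, {\em bdeg} can be rendered as ordering the unassigned variables by their degree in $B$. The sketch below is a hypothetical rendering for illustration (the precise definition of {\em bdeg} is the one given in Section~\ref{sec:heur}):

```python
def bdeg_order(unassigned, bias_scopes):
    """Order variables by their degree in the bias B (sketch of bdeg).

    bias_scopes: iterable of scopes (tuples of variables) of the
    constraints currently in B. Variables involved in more candidate
    constraints are assigned first, so that the generated (partial)
    query has the chance to violate as many of them as possible.
    """
    degree = {v: 0 for v in unassigned}
    for scope in bias_scopes:
        for v in scope:
            if v in degree:
                degree[v] += 1
    return sorted(unassigned, key=lambda v: degree[v], reverse=True)
```

In contrast, {\em dom/wdeg} weighs constraints of $C_L$ by their failure history, which says nothing about how many candidates of $B$ a variable can help violate.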
We use Sudoku as a sample problem to confirm the above assumptions. In Figures~\ref{fig:max-maxb-quacq} and~\ref{fig:max-maxb-mquacq} we report the cpu time performance of the QuAcq and MQuAcq algorithms using {\em max} and {\em max$_B$} with {\em bdeg} and {\em dom/wdeg}. In all cases we use random value ordering. Specifically, the figures depict the cpu time required by each combination of heuristics to learn an increasing portion of the target network (the x-axis gives the number of constraints learned).
The results confirm our intuition. When {\em max} is used for query generation within QuAcq (Figure~\ref{fig:max-maxb-quacq}a), the algorithm is considerably faster with {\em dom/wdeg} than with {\em bdeg}. In contrast, when {\em max$_B$} is used (Figure~\ref{fig:max-maxb-quacq}b), the choice of variable ordering heuristic does not affect the run time initially, but as convergence is approached, {\em bdeg} speeds up the process considerably: {\em dom/wdeg} takes too long to generate the last few queries, whereas {\em bdeg} finds partial assignments very quickly. Considering MQuAcq, when {\em max} is used (Figure~\ref{fig:max-maxb-mquacq}a), {\em bdeg} is slightly faster initially, but is outperformed by {\em dom/wdeg} near convergence. On the other hand, when {\em max$_B$} is used (Figure~\ref{fig:max-maxb-mquacq}b), {\em bdeg} and {\em dom/wdeg} are very close initially, but the former is again faster near convergence.
\begin{figure}[h]
\subfloat[]{\includegraphics[width=2.2in]{sudoku_quacq_max.png}}
\qquad
\subfloat[]{\includegraphics[width=2.2in]{sudoku_quacq_maxb.png}}
\caption{QuAcq using {\em max} and {\em max$_B$} with {\em bdeg} and {\em dom/wdeg} in the Sudoku problem}
\label{fig:max-maxb-quacq}
\end{figure}
\begin{figure}[h]
\subfloat[]{\includegraphics[width=2.2in]{sudoku_mquacq_max.png}}
\qquad
\subfloat[]{\includegraphics[width=2.2in]{sudoku_mquacq_maxb.png}}
\caption{MQuAcq using {\em max} and {\em max$_B$} with {\em bdeg} and {\em dom/wdeg} in the Sudoku problem}
\label{fig:max-maxb-mquacq}
\end{figure}
For a closer look at the difference between {\em bdeg} and {\em dom/wdeg} near convergence, Figure~\ref{fig:heur} displays the number of constraints from $B$ that are violated (y-axis) during each of the last 20 generated queries (x-axis). This data was obtained by applying MQuAcq with {\em max$_B$} on the Sudoku benchmark. It is clear that {\em bdeg}, as a heuristic that orders the variables with information obtained from $B$, violates considerably more constraints than {\em dom/wdeg}. Hence, the queries generated using {\em bdeg} are more ``informative'', which explains its good performance near convergence when used in tandem with {\em max$_B$}.
\begin{figure}[H]
\centerline{\includegraphics[width=3.5in]{heuristics_comparative_diagram}}
\caption{Number of constraints from the Bias that are violated when MQuAcq generates the last 20 queries in Sudoku.}
\label{fig:heur}
\end{figure}
To summarize, we have established that the use of {\em max$_B$} to generate queries requires the use of {\em bdeg} for variable ordering in order to maximize the performance of the acquisition algorithm, while if {\em max} is used to generate queries then {\em dom/wdeg} is a better option. We now compare these two strategies on all the considered benchmarks.
Table~\ref{res:maxb} displays the performance of
{\em max$_B$} (with {\em bdeg}) when used inside QuAcq and MQuAcq compared to {\em max} (with {\em dom/wdeg}). We can see that on small problems (Zebra, Murder, Purdey, Allergy and Golomb) {\em max$_B$} has similar performance to {\em max}. This is because such problems have only a few variables, meaning that in most cases both {\em max$_B$} and {\em max} can find complete solutions to $C_L$ that violate many constraints in $B$ within the time limit.
\begin{table}[htbp]
\begin{footnotesize}
\centering
\caption{Comparing {\em max$_B$} to {\em max}.}
{
\resizebox{\textwidth}{!}{
\begin{tabular}{ |l|l|r|r|r|r|r|r|r|r| }
\hline
Benchmark & Algorithm & $|C_L|$ & $\#q$ & $\bar{q}$ & $\#q_c$ & $\bar{T}$ & $T_{max}$ & $T_{queries}$ & $T_{total}$ \\
\hline
& QuAcq {\em max} & 648 & 5960 & 43 & 659 & 0.119 & 1.15 & 710.57 & 1531.58 \\
& MQuAcq {\em max} & 801 & 6865 & 32 & 40 & 0.026 & 15.33 & 175.14 & 225.15 \\
& QuAcq {\em max$_B$} & 810 & 6657 & 38 & 510 & 0.131 & 1.15 & 869.67 & 869.68 \\
\multirow{-4}{*}{Sudoku} & MQuAcq {\em max$_B$} & 810 & 6858 & 32 & 14 & 0.015 & 1.12 & 104.89 & 104.90 \\
\hline
& QuAcq {\em max} & 636 & 5950 & 42 & 653 & 1.51 & 1582.52 & 8987.09 & 10920.54 \\
& MQuAcq {\em max} & 742 & 6663 & 31 & 52 & 0.86 & 970.77 & 5720.09 & 7003.27 \\
& QuAcq {\em max$_B$} & 786 & 6493 & 24 & 278 & 0.19 & 5.37 & 1216.01 & 1219.98 \\
\multirow{-4}{*}{GTSudoku} & MQuAcq {\em max$_B$} & 787 & 6813 & 29 & 12 & 0.11 & 5.28 & 735.99 & 738.67 \\
\hline
& QuAcq {\em max} & 855 & 8115 & 55 & 873 & 0.127 & 10.15 & 1028.85 & 1259.23 \\
& MQuAcq {\em max} & 899 & 8228 & 46 & 50 & 0.023 & 10.30 & 189.34 & 199.38 \\
& QuAcq {\em max$_B$} & 900 & 7946 & 54 & 793 & 0.126 & 1.19 & 999.17 & 999.18 \\
\multirow{-4}{*}{Latin} & MQuAcq {\em max$_B$} & 900 & 8411 & 46 & 17 & 0.017 & 1.16 & 142.85 & 142.86 \\
\hline
& QuAcq {\em max} & 60 & 496 & 12 & 60 & 0.109 & 1.03 & 54.08 & 54.03 \\
& MQuAcq {\em max} & 59 & 469 & 10 & 7 & 0.009 & 1.03 & 4.08 & 4.09 \\
& QuAcq {\em max$_B$} & 60 & 481 & 12 & 56 & 0.110 & 1.03 & 52.84 & 52.84 \\
\multirow{-4}{*}{Zebra} & MQuAcq {\em max$_B$} & 60 & 480 & 10 & 6 & 0.009 & 1.03 & 4.53 & 4.54 \\
\hline
& QuAcq {\em max} & 52 & 356 & 10 & 52 & 0.144 & 1.01 & 51.31 & 51.63 \\
& MQuAcq {\em max} & 52 & 374 & 8 & 7 & 0.028 & 1.01 & 6.56 & 6.89 \\
& QuAcq {\em max$_B$} & 52 & 370 & 10 & 47 & 0.136 & 1.01 & 50.23 & 50.44 \\
\multirow{-4}{*}{Murder} & MQuAcq {\em max$_B$} & 52 & 357 & 8 & 4 & 0.019 & 1.01 & 6.82 & 7.07 \\
\hline
& QuAcq {\em max} & 26 & 170 & 6 & 26 & 0.11 & 1.00 & 19.23 & 19.25 \\
& MQuAcq {\em max} & 26 & 149 & 5 & 5 & 0.01 & 1.00 & 2.13 & 2.14 \\
& QuAcq {\em max$_B$} & 26 & 171 & 6 & 23 & 0.12 & 1.00 & 20.13 & 20.13 \\
\multirow{-4}{*}{Purdey} & MQuAcq {\em max$_B$} & 26 & 152 & 5 & 3 & 0.02 & 1.00 & 2.29 & 2.30 \\
\hline
& QuAcq {\em max} & 26 & 169 & 6 & 26 & 0.11 & 1.00 & 17.87 & 17.88 \\
& MQuAcq {\em max} & 26 & 151 & 4 & 5 & 0.01 & 1.00 & 2.11 & 2.11 \\
& QuAcq {\em max$_B$} & 26 & 169 & 6 & 23 & 0.12 & 1.00 & 19.46 & 19.47 \\
\multirow{-4}{*}{Allergy} & MQuAcq {\em max$_B$} & 26 & 150 & 4 & 3 & 0.01 & 1.00 & 2.13 & 2.13 \\
\hline
& QuAcq {\em max} & 495 & 1552 & 9 & 496 & 0.338 & 1.18 & 524.97 & 525.07 \\
& MQuAcq {\em max} & 495 & 961 & 8 & 69 & 0.082 & 3.87 & 79.10 & 79.20 \\
& QuAcq {\em max$_B$} & 495 & 1789 & 9 & 438 & 0.294 & 1.19 & 526.77 & 526.89 \\
\multirow{-4}{*}{Golomb-12} & MQuAcq {\em max$_B$} & 495 & 970 & 8 & 49 & 0.086 & 1.18 & 83.00 & 83.12 \\
\hline
& QuAcq {\em max} & 276 & 1451 & 14 & 277 & 0.19 & 1.04 & 281.66 & 576.86 \\
& MQuAcq {\em max} & 276 & 1222 & 11 & 36 & 0.24 & 100.57 & 296.69 & 584.93 \\
& QuAcq {\em max$_B$} & 276 & 1468 & 13 & 230 & 0.22 & 6.06 & 316.19 & 321.32 \\
\multirow{-4}{*}{Exam TT} & MQuAcq {\em max$_B$} & 276 & 1237 & 11 & 14 & 0.08 & 5.90 & 94.04 & 100.56 \\
\hline
& QuAcq {\em max} & 102 & 1096 & 29 & 167 & 6.163 & 896.26 & 6,755.14 & 7,629.21 \\
& MQuAcq {\em max} & 122 & 1442 & 25 & 107 & 3.380 & 932.00 & 4,873.96 & 6,204.14 \\
& QuAcq {\em max$_B$} & 106 & 1094 & 26 & 77 & 0.242 & 6.63 & 264.62 & 268.32 \\
\multirow{-4}{*}{RLFAP} & MQuAcq {\em max$_B$} & 124 & 1445 & 24 & 25 & 0.115 & 6.55 & 165.76 & 173.90 \\
\hline
\end{tabular}}
}
\label{res:maxb}
\end{footnotesize}
\end{table}
On the other hand, on the bigger and harder problems (Sudoku, GTSudoku, Latin, Exam TT and RLFAP) the average and maximum time per query of both QuAcq and MQuAcq are all reduced when {\em max$_B$} is used, and so is the number of complete queries posted to the user. Also, the differences in the maximum time per query are quite large.
Another observation, demonstrated by column $|C_L|$, is that for both QuAcq and MQuAcq the use of {\em max$_B$} helps learn not only the complete target network but also redundant constraints. As a side effect, this results in more queries being asked in some cases and in a greater $T_{queries}$ (e.g. QuAcq in Sudoku). On the other hand, $T_{total}$ is significantly reduced.
We can see that the use of {\em max$_B$} in QuAcq (resp. in MQuAcq) reduces the total time by $43\%$ ($53\%$) in Sudoku, $89\%$ ($89\%$) in GTSudoku, $20\%$ ($24\%$) in Latin, $44\%$ ($83\%$) in Exam TT and $96\%$ ($97\%$) in RLFAP.
These gains in average time per query and total cpu time can be explained by the cutoffs: in Sudoku and Latin, any method that used {\em max$_B$} never triggered a cutoff, meaning that an irredundant query was always found in time. Accordingly, in GTSudoku, Exam TT and RLFAP, when {\em max} is used the cutoffs are triggered very often, resulting in very high cpu times. On the other hand, when {\em max$_B$} is used, the second cutoff was never triggered in any of these problems, while the first cutoff was triggered on average 40 times by QuAcq and 53 times by MQuAcq on GTSudoku, only 7 times by both QuAcq and MQuAcq on Exam TT, and 15 times by QuAcq and 11 times by MQuAcq on RLFAP.
An issue that is not clearly visible from the data in the table is that of premature convergence. The difference between $T_{total}$ and $T_{queries}$ is in fact the time needed to reach (possibly premature) convergence, because of the cutoffs. In general, the use of {\em max$_B$} alleviates the problem of premature convergence: in all the benchmarks both algorithms proved convergence, because having learned the redundant constraints during the process, $B$ is empty in the end, and therefore the system does not have to prove that no solution of $C_L$ violates them.
\subsubsection{{\em max$_v$} for value ordering}
\label{sec:maxv-eval}
Now, let us focus on the use of the {\em max$_v$} value ordering heuristic. Table~\ref{res:maxv} illustrates the results obtained using random and {\em max$_v$} for value ordering alongside {\em bdeg} for variable ordering, in tandem with {\em max$_B$}.
\begin{table}[htbp]
\begin{footnotesize}
\centering
\caption{Comparing random value ordering to {\em max$_v$}.}
{
\resizebox{\textwidth}{!}{
\begin{tabular}{ |l|l|r|r|r|r|r|r|r|r| }
\hline
Benchmark & Algorithm & $|C_L|$ & $\#q$ & $\bar{q}$ & $\#q_c$ & $\bar{T}$ & $T_{max}$ & $T_{queries}$ & $T_{total}$ \\
\hline
& QuAcq {\em rand} & 810 & 6657 & 38 & 510 & 0.131 & 1.15 & 869.67 & 869.68 \\
& MQuAcq {\em rand} & 810 & 6858 & 32 & 14 & 0.015 & 1.12 & 104.89 & 104.90 \\
& QuAcq {\em max$_v$} & 810 & 7074 & 37 & 555 & 0.123 & 1.14 & 868.22 & 868.23 \\
\multirow{-4}{*}{Sudoku} & MQuAcq {\em max$_v$} & 810 & 5101 & 4 & 3 & 0.215 & 1.30 & 1,095.18 & 1,095.20 \\
\hline
 & QuAcq {\em rand} & 786 & 6493 & 24 & 278 & 0.19 & 5.37 & 1,216.01 & 1,219.98 \\
 & MQuAcq {\em rand} & 787 & 6813 & 29 & 12 & 0.11 & 5.28 & 735.99 & 738.67 \\
 & QuAcq {\em max$_v$} & 776 & 6598 & 23 & 259 & 0.18 & 5.37 & 1,217.91 & 1,223.17 \\
\multirow{-4}{*}{GTSudoku} & MQuAcq {\em max$_v$} & 810 & 5144 & 4 & 2 & 0.29 & 5.18 & 1,481.40 & 1,486.71 \\
\hline
& QuAcq {\em rand} & 900 & 7946 & 54 & 793 & 0.126 & 1.19 & 999.17 & 999.18 \\
& MQuAcq {\em rand} & 900 & 8411 & 46 & 17 & 0.017 & 1.16 & 142.85 & 142.86 \\
& QuAcq {\em max$_v$} & 900 & 8309 & 51 & 817 & 0.120 & 1.20 & 996.92 & 996.93 \\
\multirow{-4}{*}{Latin} & MQuAcq {\em max$_v$} & 900 & 6968 & 4 & 3 & 0.356 & 1.84 & 2,478.57 & 2,478.58 \\
\hline
& QuAcq {\em rand} & 60 & 481 & 12 & 56 & 0.110 & 1.03 & 52.84 & 52.84 \\
& MQuAcq {\em rand} & 60 & 480 & 10 & 6 & 0.009 & 1.03 & 4.53 & 4.54 \\
& QuAcq {\em max$_v$} & 60 & 494 & 12 & 59 & 0.112 & 1.03 & 55.21 & 55.21 \\
\multirow{-4}{*}{Zebra} & MQuAcq {\em max$_v$} & 61 & 491 & 6 & 3 & 0.008 & 1.03 & 3.69 & 3.69 \\
\hline
& QuAcq {\em rand} & 52 & 370 & 10 & 47 & 0.136 & 1.01 & 50.23 & 50.44 \\
& MQuAcq {\em rand} & 52 & 357 & 8 & 4 & 0.019 & 1.01 & 6.82 & 7.07 \\
& QuAcq {\em max$_v$} & 52 & 367 & 10 & 49 & 0.138 & 1.01 & 50.76 & 51.00 \\
\multirow{-4}{*}{Murder} & MQuAcq {\em max$_v$} & 52 & 365 & 4 & 3 & 0.007 & 1.01 & 2.67 & 2.67 \\
\hline
& QuAcq {\em rand} & 26 & 171 & 6 & 23 & 0.12 & 1.00 & 20.13 & 20.13 \\
& MQuAcq {\em rand} & 26 & 152 & 5 & 3 & 0.02 & 1.00 & 2.29 & 2.30 \\
& QuAcq {\em max$_v$} & 26 & 169 & 6 & 23 & 0.12 & 1.00 & 20.39 & 20.40 \\
\multirow{-4}{*}{Purdey} & MQuAcq {\em max$_v$} & 27 & 146 & 3 & 2 & 0.01 & 1.00 & 1.66 & 1.66 \\
\hline
& QuAcq {\em rand} & 26 & 169 & 6 & 23 & 0.12 & 1.00 & 19.46 & 19.47 \\
& MQuAcq {\em rand} & 26 & 150 & 4 & 3 & 0.01 & 1.00 & 1.85 & 1.89 \\
& QuAcq {\em max$_v$} & 26 & 176 & 6 & 24 & 0.11 & 1.00 & 19.87 & 19.88 \\
\multirow{-4}{*}{Allergy} & MQuAcq {\em max$_v$} & 26 & 149 & 3 & 3 & 0.01 & 1.00 & 1.72 & 1.72 \\
\hline
& QuAcq {\em rand} & 495 & 1789 & 9 & 438 & 0.294 & 1.19 & 526.77 & 526.89 \\
& MQuAcq {\em rand} & 495 & 970 & 8 & 49 & 0.086 & 1.18 & 83.00 & 83.12 \\
& QuAcq {\em max$_v$} & 495 & 1740 & 9 & 473 & 0.306 & 3.26 & 532.96 & 533.06 \\
\multirow{-4}{*}{Golomb-12} & MQuAcq {\em max$_v$} & 495 & 567 & 3 & 2 & 0.109 & 1.25 & 61.78 & 61.87 \\
\hline
& QuAcq {\em rand} & 276 & 1468 & 13 & 230 & 0.22 & 6.06 & 316.19 & 321.32 \\
& MQuAcq {\em rand} & 276 & 1237 & 11 & 14 & 0.08 & 5.90 & 94.04 & 100.56 \\
& QuAcq {\em max$_v$} & 276 & 1808 & 13 & 277 & 0.16 & 2.99 & 285.23 & 293.84 \\
\multirow{-4}{*}{Exam TT} & MQuAcq {\em max$_v$} & 276 & 1268 & 7 & 14 & 0.01 & 1.05 & 18.06 & 29.16 \\
\hline
& QuAcq {\em rand} & 106 & 1094 & 26 & 77 & 0.242 & 6.63 & 264.62 & 268.32 \\
& MQuAcq {\em rand} & 124 & 1445 & 24 & 25 & 0.115 & 6.55 & 165.76 & 173.90 \\
& QuAcq {\em max$_v$} & 108 & 1073 & 23 & 91 & 0.152 & 8.00 & 163.33 & 170.04 \\
\multirow{-4}{*}{RLFAP} & MQuAcq {\em max$_v$} & 123 & 1377 & 9 & 1 & 0.042 & 7.63 & 57.12 & 70.43 \\
\hline
\end{tabular}}
}
\label{res:maxv}
\end{footnotesize}
\end{table}
Comparing against the results of random value ordering, we can see that the use of {\em max$_v$} does not affect the results of QuAcq significantly in terms of cpu time, but it has a negative effect on the number of queries required for the larger problems. This is because QuAcq does not use all the information included in each generated query, learning only one violated constraint.
With respect to MQuAcq, we observe that in RLFAP, Exam TT, Zebra and Murder, which include a small number of variables, the use of {\em max$_v$} reduces the total time and the average time per query (up to $70\%$ in Exam TT). In addition, in Golomb rulers the number of queries is reduced significantly ($42\%$) and the total time of the acquisition process is also reduced. In Purdey and Allergy, which are the smallest problems, there is no difference.
In contrast, in the other three benchmarks, which have a much larger $C_T$ (810 for Sudoku and GTSudoku and 900 for Latin), we observe that although the number of queries posted to the user is considerably reduced (by $25\%$ for Sudoku and GTSudoku and $17.2\%$ for Latin),
the total time of the acquisition process and the average time per query are one order of magnitude higher compared to random value ordering. However, it can be seen that the maximum time the user has to wait for a query is not much higher. The total time and the average time per query are increased because of the branching that {\em FindAllCons} performs.
Another observation is that the use of {\em max$_v$} significantly reduces the average size per query as well as the number of complete queries posted to the user.
For a closer look at the behaviour of QuAcq and MQuAcq with different value ordering heuristics for query generation, we evaluated their performance in terms of the time elapsed and the size of the learned network $C_L$ in relation to the number of queries posted to the user.
Figures~\ref{fig:beh1} and~\ref{fig:beh2} illustrate the performance of the two algorithms on Sudoku and Latin when using random value ordering, while Figures~\ref{fig:beh3} and~\ref{fig:beh4} illustrate their performance when using {\em max$_v$} for value ordering.
\begin{figure}[h]
\subfloat[]{\includegraphics[width=2.2in]{sudoku_queries_time_randvalue.jpg}}
\qquad
\subfloat[]{\includegraphics[width=2.2in]{sudoku_queries_cl_randvalue.jpg}}
\caption{The behaviour of QuAcq and MQuAcq in Sudoku, using the {\em max$_B$} heuristic, {\em bdeg} for variable ordering and random value ordering.}
\label{fig:beh1}
\end{figure}
\begin{figure}[h]
\subfloat[]{\includegraphics[width=2.2in]{latin_queries_time_randvalue.jpg}}
\qquad
\subfloat[]{\includegraphics[width=2.2in]{latin_queries_cl_randvalue.jpg}}
\caption{The behaviour of QuAcq and MQuAcq in Latin, using the {\em max$_B$} heuristic, {\em bdeg} for variable ordering and random value ordering.}
\label{fig:beh2}
\end{figure}
\begin{figure}[h]
\subfloat[]{\includegraphics[width=2.2in]{sudoku_queries_time_maxvalue.jpg}}
\qquad
\subfloat[]{\includegraphics[width=2.2in]{sudoku_queries_cl_maxvalue.jpg}}
\caption{The behaviour of QuAcq and MQuAcq in Sudoku, using the {\em max$_B$} heuristic, {\em bdeg} for variable ordering and {\em max$_v$} for value ordering.}
\label{fig:beh3}
\end{figure}
\begin{figure}[h]
\subfloat[]{\includegraphics[width=2.2in]{latin_queries_time_maxvalue.jpg}}
\qquad
\subfloat[]{\includegraphics[width=2.2in]{latin_queries_cl_maxvalue.jpg}}
\caption{The behaviour of QuAcq and MQuAcq in Latin, using the {\em max$_B$} heuristic, {\em bdeg} for variable ordering and {\em max$_v$} for value ordering.}
\label{fig:beh4}
\end{figure}
In Figures~\ref{fig:beh1} and~\ref{fig:beh2} we can observe that, using random value ordering, QuAcq needs fewer queries than MQuAcq to learn a given proportion of the target network. In contrast, MQuAcq needs far less time, as it learns all the violated constraints from the target network from each generated query.
When using {\em max$_v$} for value ordering, these results are reversed (Figures~\ref{fig:beh3} and~\ref{fig:beh4}). That is, MQuAcq takes more time than QuAcq as the process unfolds, but it requires fewer queries to learn the target network. This reversal occurs because with the {\em max$_v$} heuristic MQuAcq can acquire more information (i.e. more constraints) from each generated query. This leads to fewer queries, but requires more time due to the branching of {\em FindAllCons}, as explained before.
A generic remark we can make regarding the value ordering heuristic in MQuAcq is that it can be selected depending on which metric of the constraint acquisition process is viewed as critical. If the only important factors are the number of queries posted to the user and the size of the queries, the {\em max$_v$} heuristic is a better option than random ordering. This is often the case when the user is human. On the other hand, if speeding up the acquisition process is more important, random value ordering should be preferred. This can occur in cases where the user is an existing software system. Concerning QuAcq, we can see that {\em max$_v$} does not improve the acquisition process in any metric. On the contrary, it increases the number of queries, as locating the scope of a violated constraint can result in a lot of small positive queries.
\subsection{The effect of the size of the bias}
\label{sec:bias}
We evaluated the effect of the size of the bias on MQuAcq, in terms of the number of queries posted and the time needed to converge. We used the constraint relations needed for each problem and increased the size of the bias progressively using the language $\{=, \neq, >, <, \leq, \geq, x_i - x_j = 1, |x_i - x_j| = 1, |x_i - x_j| > y, |x_i - x_j| = y, |\lfloor x_i/3 \rfloor - \lfloor x_j/3 \rfloor| > y\}$. We used Exam TT (Figure~\ref{fig:bias_tt}), Latin (Figure~\ref{fig:bias_latin}), Sudoku (Figure~\ref{fig:bias_sudoku}) and Golomb (Figure~\ref{fig:bias_golomb}) to evaluate the effect of the size of the bias on different problems.
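To make this setup concrete, the following minimal sketch (not the authors' implementation; the function and relation names are ours) enumerates a bias as all candidate binary constraints over pairs of variables, for a small subset of the language above. It shows why enlarging the language grows the bias linearly, while adding variables grows it quadratically.

```python
from itertools import combinations

# A small subset of the language above; the relation names are ours.
language = {
    "eq":  lambda a, b: a == b,
    "neq": lambda a, b: a != b,
    "lt":  lambda a, b: a < b,
    "gt":  lambda a, b: a > b,
}

def build_bias(n_vars, language):
    """Candidate constraints: every relation on every pair of variables."""
    return [(name, (i, j))
            for i, j in combinations(range(n_vars), 2)
            for name in language]

# 9 variables give C(9,2) = 36 scopes; 4 relations give a bias of 144.
print(len(build_bias(9, language)))  # 144
```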
\begin{figure}[H]
\subfloat[]{\includegraphics[width=2.2in]{queries_bias_tt.png}}
\qquad
\subfloat[]{\includegraphics[width=2.2in]{time_bias_tt.png}}
\caption{Performance of MQuAcq in Exam Timetabling with bias of different sizes.}
\label{fig:bias_tt}
\end{figure}
\begin{figure}[H]
\subfloat[]{\includegraphics[width=2.2in]{queries_bias_latin.png}}
\qquad
\subfloat[]{\includegraphics[width=2.2in]{time_bias_latin.png}}
\caption{Performance of MQuAcq in Latin squares with bias of different sizes.}
\label{fig:bias_latin}
\end{figure}
\begin{figure}[h]
\subfloat[]{\includegraphics[width=2.2in]{queries_bias_sudoku.png}}
\qquad
\subfloat[]{\includegraphics[width=2.2in]{time_bias_sudoku.png}}
\caption{Performance of MQuAcq in Sudoku with bias of different sizes.}
\label{fig:bias_sudoku}
\end{figure}
\begin{figure}[h]
\subfloat[]{\includegraphics[width=2.2in]{queries_bias_golomb.png}}
\qquad
\subfloat[]{\includegraphics[width=2.2in]{time_bias_golomb.png}}
\caption{Performance of MQuAcq in Golomb rulers with bias of different sizes.}
\label{fig:bias_golomb}
\end{figure}
Looking at the effect of the bias on the number of queries, we can observe that the size of the bias does not affect it considerably. In all the benchmarks, the number of queries increases logarithmically as the size of the bias is increased. This is more visible in Latin and Sudoku, where the increase in the bias size is larger, because of the larger number of variables in these benchmarks. These results agree with the corresponding results given in \cite{bessiere2013constraint}. The increase in the number of queries is very mild because, although in the worst case each positive query removes only one constraint from $B$ (in which case the increase in the number of queries would be substantial), in practice each positive query removes several constraints from $B$, even in the same scope. Thus, as the number of constraints in $B$ increases, so does the average number of constraints removed by each positive query, resulting in a mild increase in the total number of queries.
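This pruning mechanism can be sketched as follows (an illustrative toy, not the actual system): after a positive answer, every candidate constraint the example violates is removed from $B$ at once, since a violated constraint cannot belong to the target network.

```python
# Toy relations; the names and the tuple representation are ours.
relations = {"eq": lambda a, b: a == b, "neq": lambda a, b: a != b}

def prune_bias(B, example):
    """After a 'yes' answer, keep only the candidates the example satisfies:
    any candidate it violates cannot be in the target network."""
    return [(r, (i, j)) for (r, (i, j)) in B
            if relations[r](example[i], example[j])]

B = [("eq", (0, 1)), ("neq", (0, 1)), ("eq", (1, 2)), ("neq", (1, 2))]
B = prune_bias(B, [5, 5, 7])  # positive example with x0 = x1 and x1 != x2
# A single query removed two candidates: ("neq", (0, 1)) and ("eq", (1, 2)).
```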
Regarding the effect of the bias on the time required by MQuAcq, in problems with fewer variables, where the number of constraints in the bias is low even when the whole language is considered, the increase in the time needed to converge is very small (around 25s in Exam Timetabling and 20s in Golomb). However, in Sudoku and Latin, where the larger number of variables means that the size of the bias increases considerably when taking into account more relations in the language used, the increase in the time needed is sharper. But still, this increase is manageable. Overall, the results show that learning problems with expressive biases scales well, even when using a large language to construct the bias, especially regarding the number of generated queries.
\subsection{Scalability Analysis}
\label{sec:scaling}
Finally, we ran experiments to investigate the scalability of MQuAcq as the problem size increases. To this end, we used the following benchmarks, with instances of different sizes:
\textbf{Latin Square}. We used instances with the number of rows/columns $n = 6, ..., 12$. The language used is the same as above, i.e. $\Gamma = \{=, \neq, >, < \}$. Thus, the number of variables varied from 36 to 144 and the size of the target network from 180 to 1584 constraints.
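As a quick sanity check of the instance sizes quoted above (our own arithmetic, assuming one binary $\neq$ constraint per pair of cells in every row and column):

```python
from math import comb

def latin_target_size(n):
    # n rows and n columns, each contributing comb(n, 2) binary != constraints
    return 2 * n * comb(n, 2)

print(latin_target_size(6), latin_target_size(12))  # 180 1584
```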
\textbf{Exam Timetabling}. We used instances with the number of variables (number of courses) varying from 24 to 54. The size of the target network varied from 276 to 1431 constraints. The language used is the same as the one described above.
\textbf{Radio Link Frequency Assignment Problem}.
We used simplified versions of the problem, with 40, 45, 50, 55 and 60 variables. The size of the target network varied from 52 to 170
binary distance constraints. The language used was the same as above ($\{|x_i - x_j| > y, |x_i - x_j| = y\}$, with 5 different possible values for $y$).
We ran MQuAcq with {\em FindScope-2}, using max$_B$ as the optimization heuristic. We used {\em bdeg} for variable ordering with {\em random} value ordering. We evaluated our algorithm in terms of the number of queries posted to the user and its time performance. The results are shown in Figure~\ref{fig:scale_latin} for Latin, Figure~\ref{fig:scale_rlfap} for RLFAP and Figure~\ref{fig:scale_tt} for the Exam Timetabling problem.
\begin{figure}[h]
\subfloat[]{\includegraphics[width=2.2in]{queries_vars_latin.png}}
\qquad
\subfloat[]{\includegraphics[width=2.2in]{time_vars_latin.png}}
\caption{Performance of MQuAcq in Latin instances of different size}
\label{fig:scale_latin}
\end{figure}
\begin{figure}[h]
\subfloat[]{\includegraphics[width=2.2in]{queries_vars_rlfap.png}}
\qquad
\subfloat[]{\includegraphics[width=2.2in]{time_vars_rlfap.png}}
\caption{Performance of MQuAcq in RLFAP instances of different size}
\label{fig:scale_rlfap}
\end{figure}
\begin{figure}[h]
\subfloat[]{\includegraphics[width=2.2in]{queries_vars_tt.png}}
\qquad
\subfloat[]{\includegraphics[width=2.2in]{time_vars_tt.png}}
\caption{Performance of MQuAcq in Exam Timetabling instances of different size}
\label{fig:scale_tt}
\end{figure}
As we can observe, the increase in the number of queries is proportional to the increase in the number of variables for all benchmarks. This confirms our theoretical analysis. Focusing on the time performance, we can see that the time needed for convergence in Latin problems rises sharply as the number of variables grows beyond 100. This can be explained by the substantial increase in the number of constraints of the target network in these instances. On the other hand, in RLFAP, the increase in time is not very significant because the number of constraints remains relatively small. In the Exam Timetabling problem we see that the time needed follows the number of queries, and grows proportionally to the number of variables in the problem.
Hence, our proposed algorithm scales up quite well in terms of the number of queries required, while the time performance, being highly dependent on the size of the target network, can rise sharply, and even become unmanageable, for target networks with large numbers of constraints. We believe that methods that try to exploit the structure of the problem being learned may help alleviate this issue, and we intend to work on this in the immediate future.
\section{Discussion}
\label{sec:disc}
We now discuss certain aspects of MQuAcq with respect to its performance, theoretical guarantees, and applicability. First, we discuss the importance of partial queries, which is a strong point of the algorithm. Then we elaborate on a negative result by proving that MQuAcq cannot learn constraint networks with an optimal number of queries even for very simple languages. Finally, we discuss a weakness of all the proposed constraint acquisition algorithms which paves the way for future work.
\subsection{On partial queries}
Given the importance of partial queries in MQuAcq (and QuAcq), a question that arises is whether such queries are easier or harder for the user to classify than complete ones.
First of all, a partial example does not have to be part of a complete solution to be classified as positive. The user only needs to decide if the example at hand violates any requirement (constraint). Hence, it can be easier for the user to classify small examples with only a few variables instead of full assignments, simply because inspecting if a full assignment satisfies all the requirements can be very tedious. This is especially true when the problem is large, consisting of many variables.
Hence, partial queries can make the acquisition process easier for the user.
In addition, the smaller negative examples posted to the user require fewer queries from {\em FindScope-2} to locate the scope of the violated constraint.
Another important factor that supports the argument that partial queries are easier to classify is that many partial queries are subsets of the same complete negative query. Thus, they may be easier for the user to classify simply because the user has already seen the full query and has determined that it is not a solution.
As a result, generating a new example is not always the best choice if we have not acquired the desired information from the previously generated one.
Considering the above, the way MQuAcq operates, posting many partial sub-queries instead of always generating new examples, is favorable for the user, not only because of the reduced waiting time but also because it makes the queries easier to answer.
A drawback of MQuAcq is that searching for all the violated constraints of each generated example can lead to posting a lot of relatively small positive queries that violate only a few constraints from the bias. However, it is desirable to prune the bias from the constraints that are not included in $C_T$ with as few queries as possible. Thus, ideally, we want each positive query to violate a maximum number of constraints from $B$ to prune it with just a few queries. This drawback of MQuAcq is the main reason for the slightly increased number of queries it posts compared to QuAcq when both of them learn the complete target network including the redundant constraints with the {\em max$_B$} heuristic (Table~\ref{res:maxb}).
This problem could be avoided if the algorithm focused on some of the constraints violated by the generated example, instead of trying to acquire all of them. Non-random problems usually display some structure/pattern in their constraint network. However, this is not taken into account by the existing constraint acquisition algorithms. As future work, as we mentioned above, we plan to adjust the acquisition process to take into account the structure that is revealed as constraints are learned and hence target specific constraints.
\subsection{On the (non-)optimality of MQuAcq}
An interesting question about concept learning algorithms is whether they can learn certain types of concepts with an optimal number of queries. In~\cite{bessiere2013constraint} it was proved that QuAcq is guaranteed to converge after $O(|X| \cdot \log|X|)$ queries on the languages $\{=, \neq\}$ and $\{>\}$ on the Boolean domain. However, this is not the case for MQuAcq.
We now show that MQuAcq is not optimal even for very simple languages like the one that includes a basic binary relation.
\begin{prop}
\label{optimality-mquacq}
MQuAcq, using {\em max} for query generation, does not learn Boolean networks on the language $\{=\}$ with an optimal number of queries.
\end{prop}
\begin{proof}
In~\cite{bessiere2013constraint}, it is proved that the minimum number of queries required to learn a constraint network in this language is in $\Omega(|X| \cdot \log|X|)$. In such a network, the maximum number of constraints is equal to the number of $2$-combinations of the $|X|$ variables, which is $\frac{|X|\cdot(|X|-1)}{2}$. For each constraint, up to $2 \cdot |S| \cdot \log|X|$ queries may be needed by {\em FindScope-2} (Proposition~\ref{prop:findscope-2_compl}). Assume that MQuAcq generates an example that violates all the constraints. {\em FindAllCons} will find all the minimal scopes and learn all the constraints. Thus, in this case it will learn both the redundant and the non-redundant constraints, i.e. it will learn $\frac{|X|\cdot(|X|-1)}{2}$ constraints. As a result, in the worst case the number of queries MQuAcq needs to learn the constraint network is in $\Omega(|X|^2 \cdot \log|X|)$, which is not optimal.
\end{proof}
The following example illustrates this behaviour of MQuAcq, contrasting it to QuAcq.
\begin{example}
\label{ex:optim}
Consider a problem consisting of $4$ variables with domains $\{ 1, 2 \}$. Also, assume that the target network consists of a single clique of $=$ constraints, i.e. $C_T = \{ =_{12}, =_{13}, =_{14}, =_{23}, =_{24}, =_{34} \}$. Note that there exist equivalent networks to $C_T$ with fewer constraints. For instance, the first three constraints are enough to form an equivalent network, and in this case, the other three are implied (i.e. they are redundant).
As MQuAcq will learn the entire target constraint network, including the redundant constraints, it will need $|C_T| \cdot \log(|X|) = 6 \cdot 2 = 12$ queries to find the scopes of the constraints.
No queries need to be made by {\em FindC}, as we have $|\Gamma| = 1$. On the other hand, QuAcq will not learn the redundant constraints.
The number of non-redundant constraints in the above constraint network is equal to $|X| - 1 = 3$. Thus, QuAcq will need $3 \cdot 2 = 6$ queries to learn the constraint network and converge.
\end{example}
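The counts in the example can be reproduced mechanically (our own sketch; the $\log|X|$ factor per scope follows Proposition~\ref{prop:findscope-2_compl}):

```python
from itertools import combinations
from math import comb, log2

n = 4                                           # Boolean variables x1..x4
clique = list(combinations(range(n), 2))        # all "=" constraints in C_T
assert len(clique) == comb(n, 2) == 6
spanning_tree = [(0, j) for j in range(1, n)]   # 3 non-redundant constraints

per_scope = int(log2(n))                        # ~log|X| queries per scope
queries_mquacq = len(clique) * per_scope        # learns redundant ones too: 12
queries_quacq = len(spanning_tree) * per_scope  # only an equivalent tree: 6
```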
We now prove that if $max_B$ is used by either QuAcq or MQuAcq, allowing them to generate partial queries (e.g. at line 5 of MQuAcq or line 4 of QuAcq), then these algorithms are not optimal in terms of the number of queries posted to the user even on the very simple language $\{=\}$, on which QuAcq (with {\em max}) is optimal.
\begin{prop}
\label{optimality-partial}
Constraint acquisition algorithms QuAcq and MQuAcq do not learn Boolean networks on the language $\{=\}$ with an optimal number of queries, if partial queries can be generated.
\end{prop}
\begin{proof}
The minimum number of queries required to learn a constraint network in this language is in $\Omega(|X| \cdot \log|X|)$~\cite{bessiere2013constraint}. The maximum number of constraints is equal to the number of $2$-combinations of the $|X|$ variables, which is $\frac{|X|\cdot(|X|-1)}{2}$. For each constraint, up to $2 \cdot |S| \cdot \log|X|$ queries may be needed by {\em FindScope-2} (Proposition~\ref{prop:findscope-2_compl}). If partial queries can be generated, redundant constraints can be learned too, so in the worst case the number of queries needed to learn the constraint network is in $\Omega(|X|^2 \cdot \log|X|)$, which is not optimal.
\end{proof}
\subsection{On errors and omissions}
One significant issue that has not been addressed in the context of constraint acquisition is the possibility of omissions and/or errors in the answers of the user to the posted queries. All the constraint acquisition algorithms that have been proposed are guaranteed to operate correctly only under the assumption that the queries are answered correctly.
In the context of concept learning, the existence of omissions or errors has been studied for some classes of concepts. Angluin et al. presented an algorithm that can learn the target concept using equivalence and incomplete membership queries~\cite{angluin1994randomly}. In this model, the answers to some of the learner's membership queries may be unavailable. Extending this, in the exact learning model defined in~\cite{angluin1997malicious}, the learning system can exactly learn a target concept using equivalence and membership queries with at most some number $l$ of errors or omissions in the user's answers to the membership queries posted.
In this model, {\em limited} membership queries and {\em malicious} membership queries are introduced. A limited membership query may be answered either by classifying the example (correctly) or with a special answer, i.e. ``I don't know'', while in a malicious membership query the classification by the user may be wrong.
Although equivalence queries are more difficult and more expensive to answer than membership queries~\cite{bshouty1996asking}, in the above model the assumption is that the answers to equivalence queries remain correct, meaning that any counterexample returned is indeed a counterexample to the hypothesis of the learning algorithm.
Extending the above models to more classes, \cite{bisht2008learning} showed that for concepts that are closed under projection, both models are equivalent to the exact learning model without omissions and errors. In addition, the learner presented there can also handle errors in the equivalence queries: the {\em malicious} equivalence query (MEQ) is introduced, in which the user can return a wrong counterexample (an assignment that is not a counterexample to the hypothesis) for at most $l$ different assignments.
One relevant question regarding the answers to the queries is whether the omissions or errors are {\em persistent} or not. They are persistent if the same query on the same example always receives the same answer (even if that answer is an omission or is wrong). In the above models the assumption is that the answers are persistent.
A model of non-persistent errors is defined by Sakakibara~\cite{sakakibara1991learning}, in which the answer to each query may be wrong with some given probability. Repeated membership queries for the same example are considered independent events, so the answers may differ. For this setting, a general technique is introduced: repeating each query sufficiently often to establish the correct answer with high probability.
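The repetition technique can be illustrated with a short sketch (hypothetical code, not from any of the cited works; for reproducibility we use a deterministic toy oracle that errs on every fifth call, i.e. a 20\% error rate):

```python
from collections import Counter

def majority_query(ask, example, repeats):
    """Repeat a non-persistent, possibly erroneous membership query and
    return the majority answer, which is correct with high probability."""
    votes = Counter(ask(example) for _ in range(repeats))
    return votes.most_common(1)[0][0]

class NoisyUser:
    """Toy oracle: answers wrongly on every 5th call (20% error rate)."""
    def __init__(self, truth):
        self.truth, self.calls = truth, 0
    def ask(self, example):
        wrong = self.calls % 5 == 0
        self.calls += 1
        return (not self.truth[example]) if wrong else self.truth[example]

user = NoisyUser({"e1": True, "e2": False})
print(majority_query(user.ask, "e1", 51))  # True, despite 11 wrong answers
```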
Although in the constraint acquisition context we only have membership and partial queries (as explained, equivalence queries are considered too hard for the user to answer), the presence of omissions or errors has not been studied yet. We plan to deal with this significant issue in the future. Of course, in this context, limited and malicious partial queries will have to be examined too.
\section{Conclusion}
\label{sec:conclusion}
Constraint acquisition has started to receive increasing attention as a useful tool for automated problem modeling in CP. As a result, a number of both passive and active acquisition algorithms have been proposed, with QuAcq and MultiAcq being prime examples of active algorithms. However, two bottlenecks of such algorithms are the large number of queries required to converge to the target network, and the high cpu times needed to generate queries, especially near convergence. An additional side effect of the latter is the frequent occurrence of premature convergence in constraint acquisition systems.
We have presented new methods that can boost the performance of active constraint acquisition systems. We proposed the MQuAcq algorithm which extends QuAcq to discover all the violated constraints from a negative example, just like MultiAcq does, but with a better complexity bound in terms of the number of queries. We also proposed an optimization on the process of locating scopes that, as experiments demonstrate, helps reduce the number of queries by up to $85\%$ in some cases.
Another contribution of our work is that we focus on query generation which is a very important but rather overlooked part of the acquisition process. We described the algorithmic query generation process of standard interactive acquisition systems in detail, and we proposed several heuristics that can be applied during query generation to boost the performance of constraint acquisition algorithms.
Experimental results demonstrate that an algorithm which integrates all our methods significantly outperforms the state-of-the-art active constraint acquisition algorithms on all the important metrics. It not only generates considerably fewer queries than QuAcq and MultiAcq, but it is also by far faster than both of them, in average query generation time as well as in total run time.
Last but not least, our proposed heuristics for the query generation process support the generation of more ``informative'' queries and also largely alleviate the premature convergence problem.
As future work, it would be very interesting to design and build a hybrid system that integrates a passive learning method, specifically ModelSeeker, and an active one, such as MQuAcq. We believe that the two approaches are orthogonal, and combining their strengths may prove very beneficial in practice. ModelSeeker can learn constraints in highly structured problems using only very few examples, but this is not the case in problems with irregular structure. On the other hand, active methods, such as our own, need to generate a much larger number of examples to learn problems like Sudoku, but can handle irregularly structured problems. So, ideally, in the future we would like to have a hybrid system that takes as input a (small) set of examples, runs ModelSeeker to learn the basic constraints, and then completes the model using an active technique.
\bibliographystyle{spbasic}
In this appendix we discuss the limitations of traditional localization landscape methods for studying excited states.
For concreteness we focus on the method of Ref.~\onlinecite{Lemut2019}, which introduced a variation on the localization landscape based on the comparison matrix in order to study systems with inner degrees of freedom.
While it can in principle also be applied to middle-of-spectrum states, it generically fails at characterizing the localization of these states.
Here we discuss the reasons for this failure as it reveals some limitations of conventional localization landscape methods.
There exist two natural ways to study highly excited states using the comparison matrix, by introducing the Hamiltonians $H_1(\varepsilon)$ and $H_2(\varepsilon)$ defined by
\begin{equation}
H_1(\varepsilon) = H + i\varepsilon \text{Id},
\end{equation}
\begin{equation}
H_2(\varepsilon) = H^\dagger H + \varepsilon^2 \text{Id}.
\end{equation}
These two Hamiltonians admit the same eigenstates as $H$ (for $H$ Hermitian) and are both invertible for $\varepsilon\neq 0$.
They satisfy Eq.~\eqref{eq:LocLand-equality} with renormalized energies given by
\begin{equation}
E_1^\beta=\sqrt{(E^\beta)^2 + \varepsilon^2} \text{ and } E_2^\beta=(E^\beta)^2 + \varepsilon^2.
\end{equation}
Both Green functions $H_{\alpha}^{-1}$ are generally not real positive, despite $H_2$ being positive definite.
In particular, $H_{1}^{-1}$ is generally complex-valued.
Ostrowski's comparison matrix\citep{Ostrowski1937, Ostrowski1956} can be introduced to solve this issue and avoid the need to compute the full inverse\citep{Lemut2019}.
The comparison matrix $\overbrack{H}$ of a Hamiltonian $H$ is defined by
\begin{equation}
\overbrack{H}_{m, n} = 2 \lvert H_{m, m}\rvert \delta_{m, n} - \lvert H_{m, n}\rvert.
\end{equation}
If it is positive definite, then it satisfies
\begin{equation}
\lvert H_{m, n}^{-1} \rvert \leq \overbrack{H}^{-1}_{m, n}.
\end{equation}
We then have $\lvert\phi^\beta_m \rvert \leq E^\beta_\alpha \max\limits_n \lvert\phi_n^\beta \rvert u^\mathrm{CM}_{\alpha, m}$, where the localization landscape can be efficiently obtained by solving the equation $\overbrack{H}\,_\alpha u_\alpha^\mathrm{CM} = 1$.
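As an illustrative sketch (ours, not code from the paper), the comparison-matrix landscape for $H_2(\varepsilon)$ can be computed in a few lines; the Anderson-chain Hamiltonian, disorder strength, and candidate shifts below are assumptions chosen for demonstration, with $\varepsilon$ increased until $\overbrack{H}_2$ becomes positive definite:

```python
import numpy as np

rng = np.random.default_rng(0)
L, W = 100, 25.0  # chain length and disorder strength (illustrative values)

# 1D Anderson chain: nearest-neighbour hopping plus diagonal disorder.
H = np.diag(rng.uniform(-W / 2, W / 2, L))
H += np.diag(-np.ones(L - 1), 1) + np.diag(-np.ones(L - 1), -1)

# Increase eps until the comparison matrix of H_2(eps) = H^2 + eps^2 Id
# is positive definite (H is real symmetric, so H^dagger H = H @ H).
for eps in (0.1, 1.0, 10.0, 100.0):
    H2 = H @ H + eps**2 * np.eye(L)
    # Ostrowski comparison matrix: <A>_{mn} = 2|A_mm| delta_mn - |A_mn|.
    CM = 2 * np.diag(np.abs(np.diag(H2))) - np.abs(H2)
    if np.linalg.eigvalsh(CM).min() > 0:
        # Landscape: solve <H_2> u = 1 instead of inverting H_2.
        u = np.linalg.solve(CM, np.ones(L))
        break

# A positive-definite comparison matrix is an M-matrix, so u is entrywise positive.
assert np.all(u > 0)
```

At strong disorder the loop may exit for a small shift, while at weak disorder only a large $\varepsilon$ makes the comparison matrix positive definite, in line with the limitation discussed below.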
The key limitation of the method, like the original landscape method, is the need for $\overbrack{H}_\alpha$ to be positive definite, and the consequences of such a requirement on $u_\alpha$.
A naive but informative sufficient condition for positive definiteness of a real symmetric matrix $A$ with non-positive off-diagonal entries (such as the comparison matrix) is
\begin{equation}
\sum\limits_m A_{m, n} > 0 \text{ for all } n.
\end{equation}
For the comparison matrix, this translates into having $\overbrack{H}$ be diagonally dominant.
This condition can always be satisfied by choosing $\varepsilon$ large enough.
On the other hand, if $\varepsilon$ is too large, i.e., much larger than the typical mean level spacing or of the order of the bandwidth, the renormalized energies $E_\alpha$ become comparable for all low-energy states.
The localization landscapes $u_\alpha$ are then no longer a good predictor of the localization of low-energy eigenstates as too many eigenstates contribute with similar amplitudes.
An alternative interpretation is that the eigenvectors of the comparison matrix are no longer close to those of the original Hamiltonian, and the landscape obtained from $\overbrack{H}_\alpha$, which describes the localization of its eigenvectors, no longer describes the eigenstates of $H_\alpha$.
Conversely, when $\overbrack{H}$ is already diagonally dominant before introducing $\varepsilon$---for example, for well-chosen disorder distributions in the strong disorder limit---it proves to be a very efficient way to study the localization of the low-energy eigenstates.
Let us illustrate these statements in the Anderson model introduced in Eq.~\eqref{eq:AndersonModel}.
Fig.~\ref{fig:AndersonComparison-ComparisonMatrix} summarizes our results studying eigenstates at zero energy in a chain of $L=100$ sites, looking at the same disorder realizations as in Fig.~\ref{fig:AndersonComparison-L2}.
When the disorder is strong enough, the comparison matrix can typically be definite positive for $\varepsilon$ smaller than the typical level spacing.
The localization landscapes are then good predictors of the localization of the eigenstates.
Note that this is a finite-size effect: as the system size increases, one requires larger and larger disorder to reach that limit.
On the other hand, at low disorder, $\varepsilon$ needs to be much larger than the level spacing in order for $\overbrack{H}$ to be positive definite and the landscapes completely fail to predict the localization of the low-energy eigenstates.
\begin{figure}[tb!]
\includegraphics[width=\linewidth]{newFigs/uCM1-Anderson-both-J-both-L-100}
\caption{
Localization landscapes ((a-b) $u^{\mathrm{CM}}_1$, (c-d): $u^{\mathrm{CM}}_2$) and the four eigenstates closest to zero energy in the Anderson model for different disorder strengths ((a, c): $W=25$, (b, d): $W=2$).
We consider the same disorder realizations as in Fig.~\ref{fig:AndersonComparison-L2}.
At large disorder, due to the small size of the system, the comparison matrices can be (close to) positive definite and we can take $\varepsilon$ smaller than the typical level spacing.
Then peaks in $u^{\mathrm{CM}}_1$ and $u^{\mathrm{CM}}_2$ do correspond to the low-energy eigenstates, albeit the bound is not tight and the ordering of the heights of the peaks might not correspond to the ordering of the eigenstates.
At lower disorder, typical realizations require a much larger shift for the comparison matrix to be positive definite, and $\varepsilon$ becomes of the order of the bandwidth.
The peaks of the localization landscape are then no longer well-correlated with the localization of the low-lying states.
Note also that we had to normalize the landscape in order to represent it at the same scale as the normalized eigenstates.}
\label{fig:AndersonComparison-ComparisonMatrix}
\end{figure}
\section{Introduction}
Deep Neural Networks (DNNs) have obtained state-of-the-art performances on various tasks across a wide range of domains including computer vision \cite{girshick2015fast, redmon2016you, kroner2020contextual,clay2021learning} and natural language processing \cite{serban2016generating, andreas2016learning, joulin2016bag}.
Currently, there is increasing interest in implementing DNNs on portable devices, driven by strong demand in a wide spectrum of applications, notably including smart sensor networks, telehealthcare, and security.
However, as DNNs need large amounts of memory and computing power, it remains challenging to effectively implement DNNs on devices with constrained memory or limited computing power.\par
One solution to this problem is to reduce the bit-width of DNN weights, such as training DNNs with binary weights or ternary weights.
In this paper, we focus on training ternary weights without considering discrete activations.
Ternary weight networks differ from binary weight networks in that some values in the binary weights are set to zeros, which makes them sparser and thus more computationally efficient, at the cost of slightly more memory per weight.
Ternary weight networks can be divided into three categories, i.e., the ternary weight with values constrained to \{-1, 0, +1\}, the ternary weight with \{-$\alpha$, 0, +$\alpha$\}, and the ternary weight with \{-$\alpha$, 0, +$\beta$\} where $\alpha$ and $\beta$ are real numbers and $\alpha \neq \beta$.
Among these three kinds of ternary weights, the ternary weight with \{-1, 0, +1\} is the most efficient\footnote{\{-$\alpha$, 0, +$\alpha$\} is more efficient than \{-$\alpha$, 0, +$\beta$\} as it can be considered as \{-1, 0, +1\} $\times \alpha$.} as it avoids multiplications completely, and it is even more efficient than the binary counterpart as ternary weights \{-1, 0, +1\} can be considered as setting some values in the binary weights \{-1, +1\} to 0s.
Setting weights to 0s can be considered a form of network pruning \cite{frankle2018lottery} that makes networks sparse and thus reduces the computational cost.
In light of this, in this paper, we attempt to train DNNs with ternary weights \{-1, 0, +1\} that are able to reduce the computational complexity substantially, speed up the inference significantly, and lead to about 16$\times$ or 32$\times$ memory requirement reduction compared with the float (32-bit) or double (64-bit) precision counterparts. \par
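To make the efficiency claim concrete, the following minimal sketch (an illustration of ours, not the paper's code) shows that a matrix-vector product with weights in \{-1, 0, +1\} needs only additions and subtractions, and that zero entries are skipped entirely:

```python
import numpy as np

def ternary_matvec(W_ter, x):
    # y = W_ter @ x using only additions/subtractions: entries equal to +1
    # contribute +x_j, entries equal to -1 contribute -x_j, zeros are skipped.
    y = np.zeros(W_ter.shape[0])
    for i, row in enumerate(W_ter):
        y[i] = x[row == 1].sum() - x[row == -1].sum()
    return y

W_ter = np.array([[1, 0, -1], [0, 1, 1]])
x = np.array([2.0, 3.0, 5.0])
print(ternary_matvec(W_ter, x))  # [-3.  8.], identical to W_ter @ x
```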
\begin{table}[t]
\caption{Comparison between WDR and the existing regularizers}
\centering
\label{refdif}
\begin{tabular}{lcc}
\toprule
Methods & Controlling sparsity & Free from gradient estimators \\ \midrule
The regularizer in \cite{tang2017train} & \xmark & \xmark \\ \midrule
The regularizers in \cite{darabi2018bnn+} & \xmark & \xmark \\ \midrule
SCA & \cmark & \cmark
\\ \bottomrule
\end{tabular}
\end{table}
The challenge for training ternary or binary weight networks is that the gradients with respect to the discrete weights do not exist.
Substantial efforts have been made on addressing this issue \cite{zhang2018lq, leng2018extremely, zhou2017incremental, xu2019iterative}.
These approaches train discrete weight networks by using stochastic weights or using the straight-through estimator to estimate the gradients with respect to discrete weights.
However, we notice that most existing efforts on training ternary weight networks focus on training ternary weights with \{-$\alpha$, 0, +$\alpha$\} \cite{li2016ternary, leng2018extremely, zhou2018explicit} or \{-$\alpha$, 0, +$\beta$\} \cite{zhu2016trained,zhou2017incremental}, which are unable to avoid multiplications completely.
Most of these approaches still exhibit a large performance gap relative to their full-precision counterparts; some approaches \cite{shayer2017learning,zhou2017incremental} obtain promising results but need a complex training process, which has kept ternary weight networks from being widely used.
Moreover, none of these approaches is able to control the sparsity of the ternary weights, and thus they cannot fully exploit the advantage of ternary weights.
It is thus necessary and appealing for the development of an accurate, sparsity-controlling and easy-to-implement approach for converting full-precision networks to the ternary versions with weights \{-1, 0, +1\}.
\par
In the paper, we propose a simple yet effective approach (i.e., SCA) to training ternary weight \{-1, 0, +1\} networks and it also has the ability to control the sparsity of the weights.
SCA is developed based on the fact that the ternary solutions to a DNN are also included in the full-precision weight space.
SCA attempts to search for a ternary solution in the full-precision space.
However, it is difficult to find an appropriate ternary solution in such a large weight space.
To address this issue, at training time, SCA limits the full-precision weight to be in the range between -1 and +1 by parameterizing them with $\tanh(\Theta)$ where $\Theta$ are the parameters.
In this way, the weight space is largely reduced.
By taking advantage of the properties of the $\tanh(\Theta)$ function, we design a novel weight discretization regularization (WDR) to force weight ternarization and control the sparsity.
To this end, SCA simply trains a ternary weight network by optimizing a full-precision network loss plus a WDR term.
At test time, the ternary weights are obtained by simply rounding $\tanh(\Theta)$ to the nearest integers.
Despite its simplicity, it is able to outperform the state-of-the-art approaches and perform on a par with full-precision counterparts.
\par
SCA is a regularizer-based approach.
In the existing literature, three well-known regularizers \cite{tang2017train,darabi2018bnn+} have been proposed for training binary weight networks and they can be easily extended to training ternary weight networks.
We summarize the differences between the regularizer (i.e., WDR) in SCA and the existing regularizers in Table \ref{refdif}.
First, SCA is able to \textbf{control the sparsity} of the ternary weights while the existing regularizers cannot.
Second, SCA is able to use the real gradients to train ternary weight networks while the existing regularizers rely on the straight-through gradient estimator.
We detail the technical differences among these regularizers in Section \ref{difs}.
\par
The main contributions of our work can be summarized as follows:
\begin{itemize}
\item We have proposed a simple yet effective approach (i.e., SCA) to training ternary weight networks by simply minimizing a task loss plus a regularizer.
The simple nature of SCA makes it easy to implement and gives it a high code reuse rate when converting a full-precision network to the ternary version, which is of much practical value, considering the fact that the state-of-the-art models in various domains are still built on full-precision weight DNNs.
\item We have proposed a novel regularizer WDR for training ternary weights.
The shape controller $\alpha$ in WDR enables SCA to control the sparsity of the trained ternary weights.
We theoretically and empirically show that the sparsity of the ternary weights is positively related to $\alpha$.
\item
The existing literature has only studied how to control the sparsity of full-precision weights in a DNN.
To the best of our knowledge, this is the first work to explore controlling the sparsity of ternary weights in a DNN.
\item Extensive experiments on several benchmark datasets demonstrate that SCA outperforms the state-of-the-art approaches and matches the performances of full-precision counterparts.
\end{itemize}
\section{Related Work}
\label{headings}
Our work is related to the literature on model compression including network quantization and network pruning.
Thus, we first present an overview of the existing techniques on training DNNs with discrete weights and then review the existing literature on network pruning.
\subsection{Network Quantization}
Network quantization has drawn numerous research attentions due to its potential in various applications and low memory and computational power requirements.
The goal is to train DNNs with low-bit weights or activations.
These approaches train discrete neural networks by approximating full-precision weights or activations in each layer with scaling factors and discrete values \cite{rastegari2016xnor,li2016ternary,alemdar2017ternary,mellempudi2017ternary,lin2017towards,zhu2016trained,mcdonnell2018training,zhou2018explicit,zhuang2019structured,zhang2018lq,martinez2020training,stock2019and}, using stochastic weights \cite{soudry2014expectation,shayer2017learning,meng2020training,stamatescucritical,shekhovtsov2020path}, using a gradient estimator \cite{esser2019learned,li2019additive,bulat2020high}, using the straight-through estimator \cite{courbariaux2015binaryconnect,hubara2016binarized}, or using reinforcement learning \cite{wang2019learningCIBCNN}.
Among these approaches, our work is most related to these approaches to training binary or ternary weight neural networks.
\par
\subsubsection{Approaches to Training Binary Neural Networks}
Many approaches have been proposed for training binary neural networks.
Soudry et al. \cite{soudry2014expectation} propose to train binary neural networks through the variational Bayesian method that infers networks with binary weights and neurons.
BinaryConnect \cite{courbariaux2015binaryconnect} uses sign function to binarize the weights during the forward and backward propagation while using full-precision weights in the parameter update stage.
Binarized Neural Networks (BNNs) \cite{hubara2016binarized} and XNOR-net \cite{rastegari2016xnor} make some extensions to this method by binarizing both weights and activations.
BNNs \cite{hubara2016binarized} utilizes the sign function and additional scaling factors to binarize the real-value weights and the pre-activations at training time.
The straight-through estimator is used to back-propagate through the binarization operation.
XNOR-net takes BNNs one step further by approximating the real-valued tensor and activation tensor by a binary filter and a scaling factor.
ABC-nets \cite{lin2017towards} make an extension by using the linear combination of multiple binary tensors to approximate full-precision weights.
Moreover, to alleviate information loss, multiple binary activations are also employed.
McDonnell \cite{mcdonnell2018training} follows BinaryConnect \cite{courbariaux2015binaryconnect} and BNNs \cite{hubara2016binarized}, and applies a layer-dependent scaling to the sign of the weights.
Tang et al. \cite{tang2017train} explore how the learning rate, the scale factor, and the regularizer influence the performances of BNNs.
Shayer et al. \cite{shayer2017learning} propose LR-Net to train binary weight neural networks by using the central limit theorem and the local reparameterization trick.
They update the weight distribution at training time.
Binary weights are sampled from the learned distribution at test time.
PBNet \cite{peters2018probabilistic} extends the idea of LR-Net and uses a probabilistic method for training neural networks with both binary weights and binary activations.
Leng et al. \cite{leng2018extremely} propose to train low-bit neural networks by decoupling the continuous parameters from the discrete constraints and casting the original problem into several subproblems.
CI-BCNN \cite{wang2019learningCIBCNN} trains binary neural networks by using the channel graph structure and channel-wise interactions.
LQ-Nets \cite{zhang2018lq} jointly train the network parameters and its associated quantizers for DNNs.
Bi-real nets \cite{liu2018bireal} use the identity shortcut to connect the real activations to activations of the consecutive block, thus improving representational capability of binary neural networks.
ELQ \cite{zhou2018explicit} trains discrete weight DNNs through explicitly regularizing the weight approximation error and the loss perturbation.
Zhuang et al. \cite{zhuang2019structured} propose to divide the network into groups of which each can be reconstructed by using a set of binary branches.
Meng et al. \cite{meng2020training} propose to use the Bayesian rule to train binary weights for DNNs.
These approaches either have a large performance gap to the full-precision networks or need a complex training process.
\subsubsection{Approaches to Training Ternary Neural Networks}
Substantial research efforts have been made on training neural networks with ternary weights or activations.
Alemdar et al. \cite{alemdar2017ternary} use a two-stage teacher-student approach for training neural networks with ternary activations.
First, they train the teacher network with stochastically firing ternary neurons and then let the student network learn how to imitate the teacher's behavior using a layer-wise greedy algorithm.
Mellempudi et al. \cite{mellempudi2017ternary} propose to take advantage of a fine-grained quantization technique that involves multiple scaling factors to obtain a ternary neural network.
TTQ \cite{zhu2016trained} uses two full-precision scaling coefficients for each layer and quantizes the weights to three real values.
Li et al. \cite{li2016ternary} develop an approach to training ternary weight networks by minimizing the Euclidean distance between the full-precision weights and the ternary weights along with a non-negative scaling factor.
LR-Net \cite{shayer2017learning} attempts to train a stochastically ternary network by leveraging the local reparametrization trick and sampling ternary weights at the test time.
We notice that almost all the existing efforts on training ternary weight networks focus on training ternary weights with \{$-\alpha$, 0, $\alpha$\} \cite{li2016ternary, leng2018extremely, mellempudi2017ternary, zhou2018explicit} or \{$-\alpha$, 0, $\beta$\} \cite{zhu2016trained, zhou2017incremental} that are unable to avoid multiplications completely and thus are less efficient than \{-1, 0, +1\} in SCA.
More importantly, none of these approaches is able to control the sparsity of the ternary weights, and thus they cannot fully exploit the advantage of ternary weights.
\par
SCA trains ternary weight networks by using parameterization and a novel regularizer.
Thus, it is also related to regularizer based approaches for training discrete weight networks.
Tang et al. \cite{tang2017train} and BNN+ \cite{darabi2018bnn+} propose three regularizers for training binary neural networks, which can be easily extended to train ternary weight networks.
However, these regularizers cannot control the sparsity of the ternary weights and highly rely on the straight-through estimator to approximate the gradients with respect to the discrete weights.
SCA overcomes all these shortcomings.
\par
\subsection{Network Pruning}
On the other hand, our work is also related to the studies on network pruning, including but not limited to \cite{denton2014exploiting,han2015deep,hu2016network,he2017channel,luo2017thinet,wang2016cnnpack,zhuang2018discrimination, renda2020comparing}.
Network pruning aims to compress a DNN into a sparse version by making the full-precision weights sparser.
Luo et al. \cite{luo2017thinet} propose to use the statistics of next layer to prune weights.
However, weight pruning approaches require substantial iterations to converge and the pruning threshold needs to be set manually.
Low-rank decomposition has also been introduced to network compression.
These approaches \cite{denil2013predicting,kim2015compression,ren2018deep} use matrix decomposition
technique to decompose the weight in DNNs.
The limitation is that these approaches increase the number of layers in a DNN and are thus prone to gradient-vanishing issues during the training process.
SCA is different from all these existing studies in that SCA attempts to control the sparsity of ternary weights rather than full-precision weights, which is a more challenging problem.
To the best of our knowledge, SCA is the first work to control the sparsity of ternary weight networks.
\section{Framework}
\label{frame}
In this section, we introduce SCA for training DNNs with ternary weights whose values are constrained to \{-1, 0, +1\}.
SCA is developed based on the fact that the ternary solution to a DNN is also included in the full-precision weight space.
To illustrate the connection between ternary weights and full-precision weights, we first review the basic process for training full-precision weight networks.
Then we make further derivations of the basic process to show how to train ternary weight networks.
\subsection{Deep Neural Networks with Full-Precision Weights}
In this part, we review how to train a DNN with full-precision weights.
Given the training data $(X, Y)$ where $X$ are the inputs and $Y$ are the targets, the output of a neural network $f$ with weights $W$ is $f(X, W)$.
The loss can be written as:
\begin{equation}
\label{1}
L_c(W) = \mathcal{L}(f(X, W), Y)
\end{equation}
where $\mathcal{L}(.)$ is any loss function, such as mean square error or cross entropy, and $W=[w_1, w_2, ..., w_n]$ with $w_i$ representing the weights in the $i$th layer.
\par
With the loss function, we can use backpropagation to compute the gradients with respect to the weights $W$ and utilize gradient descent to minimize the loss function.
This can be done easily for continuous weight networks, while it is difficult for discrete weight networks because the gradients with respect to discrete weights do not exist.
Fortunately, the discrete solutions to a DNN are also included in the full-precision weight space.
We show below that with modifications to the basic training process above we can also train DNNs with ternary weights.
\subsection{Limiting the Weight Value Range with Parameterization}
\begin{wrapfigure}{r}{0.36\textwidth}
\vskip -0.5in
\centering
\includegraphics[width=0.36\textwidth]{tanh.pdf}
\vskip -0.1in
\caption{Graph of $\tanh(\theta)=\frac{\exp\left(\theta\right)-\exp\left(-\theta\right)}{\exp\left(\theta\right)+\exp\left(-\theta\right)}$}
\label{f1}
\end{wrapfigure}
DNNs with ternary weights are dramatically efficient at test time as they need less memory and no multiplication operations.
The challenge for training ternary weights is that the gradients with respect to the discrete weights are 0 or do not exist.
The existing studies utilize the straight-through estimator to solve this problem.
In this paper, we propose a novel method to use the real gradient instead of the estimator to do backpropagation.
As the ternary solution to a DNN is also in the full-precision weight space, we are inspired to maintain continuous value weights during training, enabling us to do normal backpropagation.
However, the full-precision weight space (usually 16 bits or 32 bits) is too large to find an appropriate ternary solution.
To alleviate this issue, we propose to compress the continuous weight space.
The possible values for ternary weights are -1, 0, and +1 that are between -1 and +1.
The range of the function $\tanh(.)$ is between -1 and +1 as shown in Figure \ref{f1}, so that we use $\tanh(\Theta)$ to parameterize the continuous weights $W$:
\begin{equation}
\label{2}
W = \tanh(\Theta)
\end{equation}
where $\Theta=[\theta_1, \theta_2, ..., \theta_n]$ are the parameters for computing $W$ and $\theta_i$ is used to compute $w_i$.\par
Then we replace $W$ in loss (\ref{1}) with $\tanh(\Theta)$, obtaining the following loss term:
\begin{equation}
\label{3}
L_c(\tanh(\Theta)) = \mathcal{L}(f(X, \tanh(\Theta)), Y)
\end{equation}
where $\tanh(\Theta)$ can be seen as the instantiations of $W$.
The advantage of doing this is that the weight space is significantly reduced and backpropagation can still be applied to computing the gradients with respect to $\Theta$, enabling us to search for a ternary solution in a much smaller, continuous weight space.
Note that although the $\tanh$ function has been widely used in deep learning, how to use it to control the sparsity of ternary weights has never been studied.
We show this below.
\par
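The parameterization step can be sketched as follows (a minimal illustration with assumed shapes and data, not the paper's implementation); the key point is that $W=\tanh(\Theta)$ stays in $(-1,1)$ and ordinary backpropagation reaches $\Theta$ through the chain rule:

```python
import numpy as np

rng = np.random.default_rng(0)
Theta = rng.normal(size=(3, 2))   # free parameters of one toy linear layer
X = rng.normal(size=(5, 3))       # a toy input batch

W = np.tanh(Theta)                # weights constrained to (-1, 1)
out = X @ W                       # forward pass uses W, not Theta
assert np.all(np.abs(W) < 1)

# Chain rule through the parameterization:
# dL/dTheta = dL/dW * (1 - tanh(Theta)^2), so no gradient estimator is needed.
dL_dW = np.ones_like(W)           # placeholder upstream gradient
dL_dTheta = dL_dW * (1 - W**2)
```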
\subsection{Weight Discretization Regularization and Objective Function}
Although the weight space is substantially reduced by using the parameterization $\tanh(\Theta)$, it is still difficult to find a discrete solution in this space.
If we directly solve the minimization problem with (\ref{3}) as the objective, we obtain a solution in the continuous interval (-1, 1) that may still be far from any discrete solution.
To address this issue, we develop a novel weight discretization regularizer (WDR) to force the weights to be ternary and to control the sparsity of the ternary weights by taking advantage of the properties of the $\tanh()$ function:
\begin{equation}
\begin{aligned}
\label{4}
R(\tanh(\Theta))= \sum_{i=1}^n \sum_{j=1}^{|\theta_i|} \left[\left(\alpha-\tanh^2\left(\theta_{ij}\right)\right)\tanh^2\left(\theta_{ij}\right)\right]
\end{aligned}
\end{equation}
where $|\theta_i|$ denotes the number of the elements in $\theta_i$; $\theta_{ij}$ represents the $j$th element in $\theta_i$; and $\alpha$ is the shape controller of function $R()$.
Note that one may have a \textbf{misunderstanding} at first glance that WDR enforces the weights to converge at \{$-\sqrt{\alpha}$, $+\sqrt{\alpha}$, 0\}.
We will show below (in Section \ref{wdr}) that this is \textbf{not true} and how WDR leads to the expected ternary weights \{-1, +1, 0\}.
More essentially, shape controller $\alpha$ is also the sparsity controller of the trained ternary weights (the sparsity here is measured by the percentage of 0s in the ternary weights).
We will theoretically (in Section \ref{wdr}) and empirically (in Section \ref{wdrexp1}) show that the sparsity of the ternary weights is positively related to $\alpha$.
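A direct implementation of WDR in (\ref{4}) is only a few lines; the sketch below (ours, with illustrative shapes and $\alpha$) also checks that the regularizer vanishes at the desired ternary values when $\alpha=1$:

```python
import numpy as np

def wdr(thetas, alpha):
    # Eq. (4): sum over all layers and entries of (alpha - tanh^2) * tanh^2.
    total = 0.0
    for theta in thetas:  # one parameter array per layer
        t2 = np.tanh(theta) ** 2
        total += np.sum((alpha - t2) * t2)
    return total

# At tanh(theta) = 0 the regularizer is exactly 0; as |theta| grows,
# tanh^2 -> 1 and each weight contributes alpha - 1 (0 when alpha = 1).
print(wdr([np.zeros((4, 4))], alpha=1.0))        # 0.0
print(wdr([np.full((2, 2), 20.0)], alpha=1.0))   # ~0.0
```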
We combine the loss term (\ref{3}) and WDR (\ref{4}) to obtain the final objective function of SCA:
\begin{equation}
\label{5}
J = L_c(\tanh(\Theta)) + \lambda R(\tanh(\Theta))
\end{equation}
where coefficient $\lambda$ is a hyperparameter to balance the contributions between the loss term $L_c(\tanh(\Theta))$ and the regularizer $R(\tanh(\Theta))$.
\par
At the training time, we solve for the minimization problem with objective (\ref{5}) using a gradient descent based optimizer as the gradients of $J$ with respect to $\Theta$ exist.
It is observed that SCA only needs two simple operations to convert a full-precision network to the ternary version, i.e., replacing weights $W$ with $\tanh(\Theta)$ and adding the WDR term $R(\tanh(\Theta))$.
Thus, it has an extremely high code reuse rate for converting full-precision networks to the ternary counterparts.
To further illustrate this point, we compare the pseudocode for training full-precision weight networks with that of SCA.
As shown in Figure \ref{code_com}, their differences lie in line 3 and line 10.
In line 3, SCA simply modifies the conv function by adding $\tanh()$ to parametrize the weights.
In line 10, SCA simply adds a regularizer to the loss to force the weights to be ternary.
These two small changes are all that is needed, which gives SCA a very high code reuse rate when converting a full-precision network to the ternary counterpart.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{code.pdf}
\vskip -0.1in
\caption{Pseudocode comparison. (a): Pseudocode for training full-precision networks based on TensorFlow. (b): Pseudocode for training ternary weight networks via SCA based on TensorFlow, where $b$ in line 10 is the balancing weight and $R()$ is the proposed regularizer WDR that can be simply implemented as a function.}
\label{code_com}
\end{figure}
\par
\subsection{Test-Time Inference}
After (\ref{5}) converges, we take the learned $\Theta$ to compute the ternary weights for inference.
Specifically, at the test time, we obtain the ternary weights $W_{ter}$ by rounding $\tanh(\Theta)$ to the nearest integer:
\begin{equation}
\label{6}
W_{ter}=round(\tanh(\Theta))
\end{equation}
Since the range of $\tanh(\theta_{ij})$ is between -1 and +1, the obtained nearest integer of $\tanh(\theta_{ij})$ can be -1, 0, or +1.
For given test data $x_t$, the prediction is obtained by using $W_{ter}$:
\begin{equation}
\label{7}
Pred=f(W_{ter}, x_t)
\end{equation}
Thanks to the regularizer WDR, simply rounding $\tanh(\Theta)$ to the nearest integers leads to almost no performance drop.
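Eq. (\ref{6}) amounts to a single rounding step, as in the following small sketch (ours, with assumed $\Theta$ values):

```python
import numpy as np

def ternarize(theta):
    # W_ter = round(tanh(theta)): since tanh(theta) lies in (-1, 1),
    # the nearest integer is always -1, 0, or +1.
    return np.rint(np.tanh(theta)).astype(int)

theta = np.array([-3.0, -0.2, 0.0, 0.4, 2.5])
print(ternarize(theta))  # [-1  0  0  0  1]
```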
We summarize SCA in Algorithm 1.
\begin{algorithm}
\caption{SCA}
\begin{algorithmic} [1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE Training data (X, Y), a neural network $f$ with initial parameters $\Theta_0$
\ENSURE Ternary weights $W_{ter}$
\STATE Construct objective function (\ref{5}) and minimize it with gradient descent to obtain $\Theta_{opt}$
\STATE Obtain ternary weights $W_{ter}$ by simply rounding $\tanh(\Theta_{opt})$ to nearest integers
\end{algorithmic}
\end{algorithm}
\begin{figure}[]
\begin{minipage}{0.34\textwidth}
\centering
\includegraphics[height=4cm]{Ra_0.pdf}
\caption{Function graph of $R(\tanh(\theta))$ \newline with $\alpha=0$}
\label{f2}
\end{minipage}\hfill
\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[height=4cm]{Ra_2.pdf}
\caption{Function graph of $R(\tanh(\theta))$\newline with $\alpha\geq2$}
\label{f3}
\end{minipage}\hfill
\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[height=4cm]{R0_a_2.pdf}
\caption{Function graph of $R(\tanh(\theta))$ \newline with $\alpha$ in (0, 2)}
\label{f4}
\end{minipage}
\end{figure}
\subsection{Theoretical Analysis}
\label{wdr}
In this part, we further examine the proposed regularizer WDR and theoretically show why $\alpha$ is able to control the sparsity of the trained ternary weights.
For clarity, we omit the subscripts $i$, $j$ of $\theta_{ij}$ and observe WDR w.r.t. each $\theta_{ij}$:
\begin{equation}
\label{8}
R(\tanh(\theta))=\left( \alpha- \tanh^2 \left( \theta \right) \right)\tanh^2\left(\theta\right)
\end{equation}
The derivative function of (\ref{8}) is expressed as:
\begin{equation}
\begin{aligned}
\label{9}
\frac{\partial R(\tanh(\theta))}{\partial \theta}=2\tanh\left(\theta\right)\left(1-\tanh^2\left(\theta\right)\right)\left(\alpha-2\tanh^2\left(\theta\right)\right)
\end{aligned}
\end{equation}
For training ternary weights \{-1, 0, +1\}, it is desired that $\tanh(\theta)=0$, $\tanh(\theta)=-1$ \footnote{Note that $\tanh(\theta)$ cannot equal -1 or 1, but can be infinitely close to -1 or 1.}, and $\tanh(\theta)=+1$ are three minimum points of the WDR term $R(\tanh(\theta))$.
We provide below the minimum and maximum points of $R(\tanh(\theta))$ in three different cases, i.e., $\alpha=0$, $0< \alpha < 2$, and $\alpha \geq 2$.
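Before examining the cases, (\ref{9}) can be checked numerically against a finite difference of (\ref{8}); the grid of $\theta$ and the $\alpha$ values below are arbitrary test choices of ours:

```python
import numpy as np

def R(theta, alpha):   # Eq. (8)
    t2 = np.tanh(theta) ** 2
    return (alpha - t2) * t2

def dR(theta, alpha):  # Eq. (9)
    t = np.tanh(theta)
    return 2 * t * (1 - t**2) * (alpha - 2 * t**2)

theta = np.linspace(-3, 3, 61)
h = 1e-6
for alpha in (0.0, 1.0, 2.5):
    # Central difference of R matches the closed-form derivative dR.
    fd = (R(theta + h, alpha) - R(theta - h, alpha)) / (2 * h)
    assert np.allclose(fd, dR(theta, alpha), atol=1e-6)
```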
\begin{proposition}
\label{t1}
When $\sqrt{\frac{\alpha}{2}}=0$ (i.e., $\alpha=0$), $R(\tanh(\theta))$ has three zero-gradient points, with $\tanh(\theta)=0$ as the maximum point, and $\tanh(\theta)=-1$ and $\tanh(\theta)=+1$ as the two minimum points.
\end{proposition}
{\em Proof of Proposition \ref{t1}}: When $\alpha=0$, the derivative function of $R(\tanh(\theta))$ is written as:
\begin{equation}
\begin{aligned}
\label{11}
\frac{\partial R(\tanh(\theta))}{\partial \theta}=-4\tanh^3\left(\theta\right)\left(1-\tanh^2\left(\theta\right)\right)
\end{aligned}
\end{equation}
By solving $\frac{\partial R(\tanh(\theta))}{\partial \theta}=0$, we obtain three zero-gradient points, i.e., $\tanh(\theta)$ = 0, $\tanh(\theta)$ = -1, and $\tanh(\theta)$ = +1.
In addition, $\left(1-\tanh^2\left(\theta\right)\right)$ on the right-hand side of (\ref{11}) is always positive as the range of $\tanh(\theta)$ is (-1, 1), as shown in Figure \ref{f1}.
Thus, the sign of (\ref{11}) is determined by $-4\tanh^3\left(\theta\right)$.
When $\tanh(\theta)\in(-1, 0)$, $-4\tanh^3\left(\theta\right)$ is positive so that (\ref{11}) is positive.
It means that $R(\tanh(\theta))$ is increasing when $\tanh(\theta)\in(-1, 0)$.
Similarly, when $\tanh(\theta)\in(0, 1)$, $-4\tanh^3\left(\theta\right)$ is negative so that (\ref{11}) is negative.
It means that $R(\tanh(\theta))$ is decreasing when $\tanh(\theta)\in(0, 1)$.
We provide the function graph of $R(\tanh(\theta))$ with $\alpha=0$ in Figure \ref{f2}.
Therefore, $\tanh(\theta)=0$ is the maximum point and $\tanh(\theta)=-1$ and $\tanh(\theta)=+1$ are two minimum points.
The above proves Proposition \ref{t1}.
\par
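The claim is easy to confirm numerically: with $\alpha=0$ the regularizer reduces to $-\tanh^4(\theta)$, so it peaks at $\tanh(\theta)=0$ and decreases toward $\tanh(\theta)=\pm1$ (a small illustrative check, not from the paper):

```python
import math

def R(theta, alpha):
    t = math.tanh(theta)
    return (alpha - t * t) * t * t

# alpha = 0: R = -tanh^4(theta), so tanh(theta) = 0 is the maximum and
# R decreases monotonically as |tanh(theta)| approaches 1
assert R(0.0, 0.0) == 0.0
assert R(5.0, 0.0) < R(1.0, 0.0) < R(0.0, 0.0)
```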
However, the case in Proposition \ref{t1} is not desired for training ternary weights as the ideal case for training ternary weights is that $\tanh(\theta)=0$, $\tanh(\theta)=-1$, and $\tanh(\theta)=+1$ are three minimum points.
\begin{proposition}
\label{t2}
When $\sqrt{\frac{\alpha}{2}}\geq1$ (i.e., $\alpha\geq2$), $R(\tanh(\theta))$ has three zero-gradient points with $\tanh(\theta)=0$ as the minimum point, and $\tanh(\theta)=-1$ and $\tanh(\theta)=+1$ as the two maximum points.
\end{proposition}
{\em Proof of Proposition \ref{t2}}: When $\sqrt{\frac{\alpha}{2}}\geq1$ (i.e., $\alpha\geq2$), by directly solving (\ref{9}) = 0, we obtain five zero-gradient points of $R(\tanh(\theta))$, i.e., $\tanh(\theta)$ = 0, $\tanh(\theta)$ = -1, $\tanh(\theta)$ = +1, $\tanh(\theta)$ = $-\sqrt{\frac{\alpha}{2}}$, and $\tanh(\theta)$ = $\sqrt{\frac{\alpha}{2}}$.
However, as $\sqrt{\frac{\alpha}{2}}\geq1$ but the range of $\tanh(\theta)$ is (-1, 1), zero-gradient points $\tanh(\theta)$ = $-\sqrt{\frac{\alpha}{2}}$ and $\tanh(\theta)$ = $\sqrt{\frac{\alpha}{2}}$ do not exist.
Thus, in this case, $\tanh(\theta)$ = 0, $\tanh(\theta)$ = -1, and $\tanh(\theta)$ = +1 are three zero-gradient points of $R(\tanh(\theta))$.
In addition, $\left(1-\tanh^2\left(\theta\right)\right)\left(\alpha-2\tanh^2\left(\theta\right)\right)$ in (\ref{9}) is always positive when $\sqrt{\frac{\alpha}{2}}\geq1$ (i.e., $\alpha\geq2$).
Thus, the sign of (\ref{9}) is determined by $2\tanh(\theta)$.
When $\tanh(\theta)\in(-1, 0)$, $2\tanh(\theta)$ is negative so that (\ref{9}) is negative.
It means that $R(\tanh(\theta))$ is decreasing when $\tanh(\theta)\in(-1, 0)$.
Similarly, when $\tanh(\theta)\in(0, 1)$, $2\tanh(\theta)$ is positive so that (\ref{9}) is positive.
It means that $R(\tanh(\theta))$ is increasing when $\tanh(\theta)\in(0, 1)$.
We provide the function graph of $R(\tanh(\theta))$ with $\alpha \geq2$ in Figure \ref{f3}.
Therefore, $\tanh(\theta)=0$ is the minimum point and $\tanh(\theta)=-1$ and $\tanh(\theta)=+1$ are two maximum points.
The above proves Proposition \ref{t2}.\par
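Again this is easy to check numerically: for $\alpha \geq 2$ the regularizer is smallest at $\tanh(\theta)=0$ and grows monotonically as $|\tanh(\theta)|$ approaches 1 (an illustrative check under the same conventions as above):

```python
import math

def R(theta, alpha):
    t = math.tanh(theta)
    return (alpha - t * t) * t * t

# alpha >= 2: tanh(theta) = 0 is the minimum; R increases as |tanh(theta)| -> 1
for alpha in (2.0, 4.0):
    assert R(0.0, alpha) < R(0.5, alpha) < R(2.0, alpha) < R(8.0, alpha)
```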
Obviously the case in Proposition \ref{t2} is not desired for training ternary weights, either.
Let us pay more attention to the last case.
\begin{proposition}
\label{t3}
When $0<\sqrt{\frac{\alpha}{2}}<1$ (i.e., $0<\alpha<2$), $R(\tanh(\theta))$ has five zero-gradient points.
$\tanh(\theta)=0$, $\tanh(\theta)=-1$, and $\tanh(\theta)=+1$ are three minimum points, and $\tanh(\theta)=-\sqrt{\frac{\alpha}{2}}$ and $\tanh(\theta)=\sqrt{\frac{\alpha}{2}}$ are two maximum points.
Moreover, the percentage of 0s in the resulting ternary weights by minimizing $R(\tanh(\theta))$ is positively related to $\alpha$.
\end{proposition}
{\em Proof of Proposition \ref{t3}}: When $0<\sqrt{\frac{\alpha}{2}}<1$, by directly solving (\ref{9}) = 0, we obtain five zero-gradient points for $R(\tanh(\theta))$, i.e., $\tanh(\theta)$ = 0, $\tanh(\theta)$ = -1, $\tanh(\theta)$ = +1, $\tanh(\theta)$ = $-\sqrt{\frac{\alpha}{2}}$, and $\tanh(\theta)$ = $\sqrt{\frac{\alpha}{2}}$, and all the five zero-gradient points are meaningful.
In addition, as $\left(1-\tanh^2\left(\theta\right)\right)$ in (\ref{9}) is always positive, the sign of (\ref{9}) is determined by $2\tanh(\theta)\left(\alpha-2\tanh^2\left(\theta\right)\right)$.
When $\tanh(\theta)\in(-1, -\sqrt{\frac{\alpha}{2}})$, $2\tanh(\theta)\left(\alpha-2\tanh^2\left(\theta\right)\right)$ is positive so that (\ref{9}) is positive.
It means that $R(\tanh(\theta))$ is increasing when $\tanh(\theta)\in(-1, -\sqrt{\frac{\alpha}{2}})$.
When $\tanh(\theta)\in(-\sqrt{\frac{\alpha}{2}}, 0)$, $2\tanh(\theta)\left(\alpha-2\tanh^2\left(\theta\right)\right)$ is negative so that (\ref{9}) is negative.
It means that $R(\tanh(\theta))$ is decreasing when $\tanh(\theta)\in(-\sqrt{\frac{\alpha}{2}}, 0)$.
When $\tanh(\theta)\in(0, \sqrt{\frac{\alpha}{2}})$, $2\tanh(\theta)\left(\alpha-2\tanh^2\left(\theta\right)\right)$ is positive so that (\ref{9}) is positive.
It means that $R(\tanh(\theta))$ is increasing when $\tanh(\theta)\in \left(0, \sqrt{\frac{\alpha}{2}}\right)$.
When $\tanh(\theta)\in(\sqrt{\frac{\alpha}{2}}, 1)$, $2\tanh(\theta)\left(\alpha-2\tanh^2\left(\theta\right)\right)$ is negative so that (\ref{9}) is negative.
It means that $R(\tanh(\theta))$ is decreasing when $\tanh(\theta)\in(\sqrt{\frac{\alpha}{2}}, 1)$.
We provide the function graph of $R(\tanh(\theta))$ with $0<\sqrt{\frac{\alpha}{2}}<1$ (i.e, $0<\alpha<2$) in Figure \ref{f4}.
Therefore, $\tanh(\theta)=0$, $\tanh(\theta)=-1$, and $\tanh(\theta)=+1$ are three minimum points, and $\tanh(\theta)=-\sqrt{\frac{\alpha}{2}}$ and $\tanh(\theta)=\sqrt{\frac{\alpha}{2}}$ are two maximum points.
Moreover, as shown in Figure \ref{f4}, the points with $\tanh(\theta)$ in $\left(-\sqrt{\frac{\alpha}{2}}, \sqrt{\frac{\alpha}{2}}\right)$ are inclined to move toward the minimum point $\tanh(\theta)=0$ due to gradient $g_0$, the points with $\tanh(\theta)$ in $\left(-1, -\sqrt{\frac{\alpha}{2}}\right)$ are inclined to move toward the minimum point $\tanh(\theta)=-1$ due to gradient $g_1$, and the points with $\tanh(\theta)$ in $\left(\sqrt{\frac{\alpha}{2}}, 1\right)$ are inclined to move toward the minimum point $\tanh(\theta)=+1$ due to gradient $g_2$.
Consequently, the percentage of 0s in the resulting ternary weights is positively related to the length of the range $\left(-\sqrt{\frac{\alpha}{2}}, \sqrt{\frac{\alpha}{2}}\right)$, i.e., $\sqrt{2\alpha}$, and is thus positively related to $\alpha$.
The above proves Proposition \ref{t3}.\par
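The positive correlation between $\alpha$ and sparsity can also be illustrated in isolation by running plain gradient descent on $R(\tanh(\theta))$ alone from random initializations and counting how many weights are pulled to the $\tanh(\theta)=0$ basin (a toy simulation; the thresholds, step counts, and initialization range are our assumptions, and real training also involves the task loss):

```python
import math, random

def dR(theta, alpha):
    # derivative of WDR w.r.t. theta, as in the closed form above
    t = math.tanh(theta)
    return 2 * t * (1 - t * t) * (alpha - 2 * t * t)

def zero_fraction(alpha, n=500, steps=1500, lr=0.1, seed=0):
    # gradient descent on R alone from random inits; count weights near 0
    rng = random.Random(seed)
    thetas = [rng.uniform(-2.0, 2.0) for _ in range(n)]
    for _ in range(steps):
        thetas = [th - lr * dR(th, alpha) for th in thetas]
    return sum(abs(math.tanh(th)) < 0.5 for th in thetas) / n

fracs = [zero_fraction(a) for a in (0.2, 1.0, 1.8)]
assert fracs[0] < fracs[1] < fracs[2]  # sparsity grows with alpha
```

The fraction of weights ending near 0 tracks the width of the basin $\left(-\sqrt{\alpha/2}, \sqrt{\alpha/2}\right)$, matching the argument in Proposition \ref{t3}.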
It is obvious that the case in Proposition \ref{t3} is desired for training ternary weight networks as $\tanh(\theta)=-1$, $\tanh(\theta)=0$, and $\tanh(\theta)=+1$ are three minimum points.
We also empirically verify the positive correlation between the sparsity of the ternary weights and the values of $\alpha$ in the experiment section.
\subsection{Differences between WDR in SCA and the Existing Regularizers}
\label{difs}
In this part, we describe the differences between the regularizer WDR in SCA and the regularizers in the existing literature.\par
Tang et al. \cite{tang2017train} proposed a regularizer $1-W^2$ for training binary weight networks, which can be easily extended to $(1-W^2)W^2$ for training ternary weight networks.
However, this simple regularizer cannot control the sparsity of the ternary weights and it relies on the straight-through estimator to estimate the gradients with respect to discrete weights $W$.
In contrast, WDR addresses these two limitations by introducing the combination of a controller $\alpha$ and parameterization $\tanh(\Theta)$ to control the sparsity of the ternary weights and allow for using the real gradients.
\par
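One way to see the relationship concretely: with $\alpha=1$ and $W=\tanh(\theta)$, WDR coincides term-by-term with the ternary extension $(1-W^2)W^2$ of Tang et al.'s regularizer, so the controller $\alpha$ is exactly the extra degree of freedom that the extension lacks (a short check; function names are illustrative):

```python
import math

def wdr(theta, alpha):
    # WDR with parameterization W = tanh(theta)
    t = math.tanh(theta)
    return (alpha - t * t) * t * t

def tang_ternary(w):
    # (1 - W^2) * W^2, the ternary extension of Tang et al.'s regularizer
    return (1 - w * w) * w * w

# at alpha = 1 the two regularizers agree for every weight
for theta in (-1.5, -0.2, 0.0, 0.7, 2.0):
    assert abs(wdr(theta, 1.0) - tang_ternary(math.tanh(theta))) < 1e-12
```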
BNN+ \cite{darabi2018bnn+} introduced two regularizers for training binary neural networks, i.e., $|\alpha-|W||$ and $(\alpha-|W|)^2$, which can be easily extended to $|\alpha-|W||*|W|$ and $(\alpha-|W|)^2*|W|^2$, respectively, for training ternary weight networks.
First, it is obvious that the straight-through estimator is still required to estimate the gradients with respect to discrete weights $W$.
The proposed WDR uses the real gradients by introducing $\tanh(\Theta)$.
Second, besides $|W|=0$, the other minimum point for these two regularizers in BNN+ is $|W|=\alpha$, which leads to ternary weights \{$-\alpha, 0, +\alpha$\}.
Consequently, BNN+ introduces $\alpha$ to \textbf{control the scaling} of $W$.
In contrast, $\alpha$ in our WDR is used to \textbf{control the sparsity} of the ternary weights, which results in more efficient ternary weight networks.
Moreover, \{$-\alpha, 0, +\alpha$\} in BNN+ is less efficient than \{-1, 0, +1\} in WDR.
\par
\section{Experiments}
In this section, we report extensive experiments for evaluating SCA on several benchmark datasets.
Through these experiments, we aim to address the following research questions:
\begin{itemize}
\item How much is the performance gap between a ternary weight network trained by SCA and the full-precision weight counterpart?
\item Does our approach SCA yield better performances than those of the existing approaches?
\item Is the sparsity of the trained ternary weights positively related to $\alpha$ on real datasets?
\item What is the difference between the features learned by SCA and the features of full-precision networks?
\end{itemize}
\subsection{Datasets}
We adopt five benchmark datasets: MNIST \cite{lecun-mnisthandwrittendigit-2010}, CIFAR-10 \cite{krizhevsky2009learning}, CIFAR-100
\cite{krizhevsky2009learning}, Tiny ImageNet \footnote{http://tiny-imagenet.herokuapp.com/}, and ImageNet (ILSVRC2012) \cite{deng2009imagenet}.
\textbf{MNIST} is a handwritten digit image classification dataset with 10 classes, containing 60,000 training images and 10,000 test images.
We do not use any data augmentation on MNIST.\par
$\textbf{CIFAR-10}$ is an image classification dataset with 10 classes, containing 50,000 training images and 10,000 test images with image size 32 $\times$ 32 in the RGB space.
We follow the standard data augmentation on CIFAR-10.
During training time, we pad 4 pixels on each side of an image and randomly flip it horizontally.
Then the image is randomly cropped to 32 $\times$ 32 size.
During test time, we only evaluate the single view of an original 32 $\times$ 32 image without padding or cropping.
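For concreteness, the pad-flip-crop pipeline described above can be sketched as follows (a minimal single-channel, stdlib-only illustration; real pipelines operate on RGB arrays with a proper image library):

```python
import random

def augment(img, pad=4, size=32, seed=None):
    # img: size x size list-of-lists (single channel for brevity);
    # pad each side with zeros, randomly flip horizontally, then random-crop
    rng = random.Random(seed)
    padded = [[0] * (size + 2 * pad) for _ in range(pad)]
    padded += [[0] * pad + row + [0] * pad for row in img]
    padded += [[0] * (size + 2 * pad) for _ in range(pad)]
    if rng.random() < 0.5:
        padded = [row[::-1] for row in padded]
    top = rng.randint(0, 2 * pad)
    left = rng.randint(0, 2 * pad)
    return [row[left:left + size] for row in padded[top:top + size]]
```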
$\textbf{CIFAR-100}$ comprises similar images to those in CIFAR-10, but has 100 classes.
We adopt the same data augmentation strategy as that in CIFAR-10.
$\textbf{Tiny ImageNet}$, i.e., a subset of ImageNet, is an image classification dataset with 200 classes, containing 100,000 training images and 10,000 test images with size 64 $\times$ 64 in the RGB space.
We adopt the standard data augmentation strategy on Tiny ImageNet, i.e., randomly padding, flipping, and cropping.
At test time, we only evaluate the original image.
$\textbf{ImageNet}$ is a large-scale image classification dataset with 1000 classes, containing 1.28 million training images and 50,000 validation images with different sizes in the RGB space.
On ImageNet, we use the standard scale and aspect ratio augmentation strategy from \cite{szegedy2015going}.
Test images are resized so that the shorter side is set to 256, and then are cropped to size 224 $\times$ 224.\par
\par
\subsection{Competitors}
For fair comparison, we only compare SCA with the existing approaches\footnote{We only compare SCA with the competitors in terms of test accuracy because almost all the competitors except TTQ \cite{zhu2016trained} only report the accuracy without sparsity.} for training DNNs with binary or ternary weights without discrete activations.
The competitors for training DNNs with binary weights include BinaryConnect \cite{courbariaux2015binaryconnect}, BWN \cite{rastegari2016xnor}, DoReFa \cite{zhou2016dorefa}, and BayesBiNN \cite{meng2020training}. The competitors for training DNNs with ternary weights include TWN \cite{li2016ternary}, TTQ \cite{zhu2016trained}, INQ \cite{zhou2017incremental}, LR-Net \cite{shayer2017learning}, ELQ \cite{zhou2018explicit}, and ELB \cite{leng2018extremely}.
As SCA is a regularizer-based approach, we also compare SCA with the regularizers in \cite{tang2017train, darabi2018bnn+}, which are extended to training ternary weight networks.
The performances of all the competitors are taken from their original papers or obtained from the author-released codes unless otherwise specified.
We follow the same convention as that used in the existing work to keep the first and last layers in full precision.
All the results below are reported based on 3 runs.
\par
\subsection{Comparison with State-of-the-art Approaches}
We report the performances of different approaches as well as the full-precision networks on different datasets.
\begin{table}
\centering
\caption{Test accuracies (\%) on MNIST}
\label{m1}
\begin{tabular}{lcc}
\toprule
Methods & Weight Types & Accuracy (\%) \\ \midrule
BinaryConnect &\multirow{2}{*}{Binary: \{-1 ,+1\} }
& 98.71 \\
BayesBiNN & & 98.86 \\
\midrule
BWN &Binary: \{-$\alpha$, +$\alpha$\} & 99.05 \\ \midrule
Extended \cite{tang2017train} &\multirow{3}{*}{Ternary: \{-1, 0, +1\}} & 99.11 \\
LR-Net & & 99.50 \\
SCA (Ours) & & \textbf{99.56$\pm$0.02} \\ \midrule
TWN &\multirow{2}{*}{Ternary: \{-$\alpha$, 0, +$\alpha$\}} & 99.35 \\
Extended \cite{darabi2018bnn+} & & 99.15 \\ \midrule
Full-precision network & Full-Precision & 99.56 \\ \bottomrule
\end{tabular}
\end{table}
\subsubsection{Performances on MNIST }
On MNIST, we use the same architecture as that in the competitors \cite{li2016ternary, shayer2017learning}, i.e., $(32-C5) + MP2 + (64-C5) + MP2 + 512FC + Softmax$, where ($32-C5$) is the convolutional layer containing 32 filters of size $5\times5$; $MP2$ is the max-pooling layer with stride 2; and $512FC$ denotes the fully connected layer with 512 nodes.
We adopt dropout \cite{JMLR:v15:srivastava14a} before the last layer with drop rate of 0.5.
$\lambda$ and $\alpha$ are set to 1e-7 and 1e-4, respectively.
The weights are initialized with Xavier initializer \cite{glorot2010understanding}.
The objective function is minimized with optimizer Adam \cite{kingma2014adam} and mini-batch size 128.
The initial learning rate is 0.01 and is divided by 10 at the 100th epoch and the 160th epoch.
We have trained the network for 200 epochs.
\par
The comparison results on MNIST are reported in Table \ref{m1}.
It is observed that SCA achieves the best performance among the approaches with ternary weights \{-1, 0 ,+1\}, also beats the other state-of-the-art approaches significantly, and even matches the performance of the full-precision weight counterpart.
This demonstrates that SCA is able to compress a full-precision weight network to the ternary counterpart without accuracy loss.
We also notice that the performance gap between SCA (99.56\%) and the best competitor (99.50\%) is not large.
The reason is that the performances of SCA and the best competitor are both very close to that of the full-precision network (99.56\%).
The performance of the full-precision network can be considered as the upper bound of the ternary weight network performances.
Both SCA and the best competitor can match this upper bound so that their performance gap is not large.
However, SCA still shows its superiority over the existing approaches as it achieves the same accuracy as that of the full-precision network.
\subsubsection{Performances on CIFAR-10}
On CIFAR-10, we adopt the same architectures as those in the competitors \cite{li2016ternary, shayer2017learning, zhu2016trained}: VGG-S \cite{li2016ternary, shayer2017learning}, ResNet-20 \cite{zhu2016trained, he2016deep}, and VGG-Variate \cite{cai2017deep}.
For VGG-S, $\lambda$ and $\alpha$ are 5e-8 and 0.1, respectively; dropout with drop rate of 0.5 is adopted; weight decay is used in the last layer with parameter 1e-5; the objective function is minimized with Adam with mini-batch size 128; the initial learning rate is 0.01 and is divided by 10 at the 200th epoch and the 370th epoch; we have trained the network for 450 epochs.
For ResNet-20, $\lambda$ and $\alpha$ are set to 5e-6 and 0.1, respectively; weight decay is used in the last layer with parameter 1e-5; the initial learning rate is 0.005 and is divided by 5 at the 150th epoch, then divided by 2 at the 450th epoch; we have trained the network for 700 epochs with mini-batch size 128.
For VGG-Variate, $\alpha$ is set to 0.1; dropout \cite{JMLR:v15:srivastava14a} with drop rate of 0.5 is adopted before the last layer; the initial $\lambda$ is set to 5e-8 and is multiplied by 50 after 270 epochs of training; weight decay is used in the last layer with parameter 1e-5; the objective function is minimized with Adam with mini-batch size 128; the initial learning rate is 0.005 and is divided by 10 at the 120th epoch and the 270th epoch; we have trained the network for 370 epochs.
We adopt the same initialization strategy as that in LR-Net \cite{shayer2017learning} and TTQ \cite{zhu2016trained} by using the pretrained full-precision weights as the initialization for VGG-Variate and all ResNets while VGG-S is initialized with Xavier initializer \cite{glorot2010understanding}.
\par
\begin{table*}[!t]
\centering
\setlength{\abovecaptionskip}{0.1cm}
\setlength{\belowcaptionskip}{0.1cm}
\caption{Test accuracies (\%) on CIFAR-10}
\label{m2}
\begin{tabular}{lcccc}
\toprule
Methods & Weight Types & VGG-S & ResNet-20 &VGG-Variate \\ \midrule
BinaryConnect &Binary: \{-1 ,+1\} & 91.10 & - & - \\ \midrule
BWN & \multirow{2}{*}{Binary: \{-$\alpha$, +$\alpha$\}} & 90.18 & - & - \\
DoReFa & &- &90.00 & - \\ \midrule
Extended \cite{tang2017train} &\multirow{3}{*}{Ternary: \{-1, 0, +1\}} & 90.17 & 89.97 & 91.06 \\
LR-Net & & 93.26 & 90.08 & 91.47 \\
SCA (Ours) & & \textbf{93.41$\pm$0.10} & \textbf{91.28 $\pm$ 0.15} & \textbf{92.75 $\pm$ 0.17} \\ \midrule
TWN &\multirow{2}{*}{Ternary: \{-$\alpha$, 0, +$\alpha$\} } & 92.56 & - & - \\
Extended \cite{darabi2018bnn+} & & 90.32 & 90.10 & 91.13 \\ \midrule
TTQ &Ternary: \{-$\alpha$, 0, +$\beta$\} & - & 91.13 & - \\ \midrule
Full-precision network & full-precision & 93.42 & 91.76 & 92.75 \\ \bottomrule
\end{tabular}
\end{table*}
Table \ref{m2} summarizes the comparison results on CIFAR-10.
We observe that SCA compresses the full-precision networks to the ternary versions almost without accuracy loss for VGG-S and VGG-Variate, but with a small accuracy drop for ResNet-20.
The reason is that VGG-S and VGG-Variate ($\geq$ 0.84M parameters) have many more parameters than ResNet-20 (0.27M parameters), and compressing over-parameterized networks may not hurt their performances.
On the other hand, despite its simplicity, SCA also significantly outperforms all the binary or ternary competitors on all the three networks, which demonstrates the effectiveness and superiority of SCA.
\begin{table*}[t]
\centering
\caption{Performances on ImageNet}
\label{m3}
\begin{tabular}{lccc|cc}
\toprule
& & \multicolumn{2}{c|}{ResNet-18} & \multicolumn{2}{c}{AlexNet} \\
Methods & Weight Types & TOP-1 (\%) & TOP-5 (\%) & TOP-1 (\%) & TOP-5 (\%) \\ \midrule
BWN & Binary: \{-$\alpha$, +$\alpha$\} & 60.8 & 83.0 & 56.8 & 79.4 \\ \midrule
Extended \cite{tang2017train} & \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Ternary: \{-1, 0, +1\} \end{tabular}} & 65.8 & 86.7 & 54.8 & 76.9 \\
LR-Net & &63.5 & 84.8 & 55.9 & 76.3 \\
SCA (Ours) & & \textbf{67.9$\pm$0.11} & \textbf{88.0$\pm$0.21} & \textbf{59.3$\pm$0.16} & \textbf{80.8$\pm$0.19} \\ \midrule
TWN & \multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Ternary: \{-$\alpha$, 0, +$\alpha$\} \end{tabular}} & 65.3 & 86.2 & 54.5 & 76.8 \\
ELB & & 67.0 &87.5 & 58.2 & 80.6 \\
ELQ & & 67.3 & \textbf{88.0} & 57.9 & 80.2 \\
Extended \cite{darabi2018bnn+} & & 66.0 & 86.3 & 54.9 & 76.6
\\ \midrule
TTQ & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Ternary: \{-$\alpha$, 0, +$\beta$\} \end{tabular}} & 66.6 & 87.2 & 57.5 & 79.7 \\
INQ & & 66.0 & 87.1 & - & - \\ \midrule
\multicolumn{1}{c}{Full-precision network} & \multicolumn{1}{c}{Full-Precision} & \multicolumn{1}{c}{69.5} & \multicolumn{1}{c|}{89.2} & \multicolumn{1}{c}{60.8} & \multicolumn{1}{c}{81.9} \\ \bottomrule
\end{tabular}
\end{table*}
\subsubsection{Performances on ImageNet}
To investigate the performances of SCA on large-scale datasets, we conduct a series of experiments on ImageNet.
Due to limited computational resources, we only adopt two networks on ImageNet.
We follow the competitors \cite{zhu2016trained,li2016ternary} and use 18-layer ResNet \cite{he2016deep} and AlexNet \cite{krizhevsky2012imagenet} architectures.
We adopt the same initialization strategy as that in TTQ \cite{zhu2016trained} and LR-Net \cite{shayer2017learning} by using the full-precision weights as the initialization.
For ResNet-18, $\lambda$ is initially set to 1e-9 and is multiplied by 1e2 and 10 at the 50th epoch and the 70th epoch, respectively; $\alpha$ is set to 0.3; weight decay is used in the last layer with parameter 1e-6; the objective function is minimized with Adam with mini-batch size 128; the initial learning rate is 0.005 and is divided by 10 after the 30th epoch, the 50th epoch, and the 70th epoch; we have trained the network for 90 epochs.
For AlexNet, $\lambda$ is initially set to 1e-9 and is multiplied by 1e4 and 1e3 at the 70th epoch and the 100th epoch, respectively; $\alpha$ is set to 1.0; weight decay is used in the last layer with parameter $1e-6$; the objective function is minimized with Adam; the initial learning rate is 0.005 and is divided by 2, 10, and 5 at the 50th epoch, the 70th epoch, and the 100th epoch, respectively; we have trained the network for 130 epochs.\par
Table \ref{m3} reports the comparison results \footnote{The results of LR-Net on AlexNet and Extended \cite{tang2017train, darabi2018bnn+} are obtained from our implementation based on the paper.
} on ImageNet.
Clearly, SCA outperforms the competitors with ternary weights \{-1, 0, +1\} in terms of both Top-1 and Top-5 accuracies by a large margin, and also beats the competitors with the other kinds of ternary weights (\{-$\alpha$, 0, +$\alpha$\} and \{-$\alpha$, 0, +$\beta$\}) significantly, which demonstrates the usefulness and applicability of SCA on large-scale datasets.
We also notice that even on the large-scale dataset ImageNet, the performances of the ternary weight networks trained by SCA are still close to those of the full-precision counterparts, which demonstrates the effectiveness of SCA for converting full-precision networks to the ternary versions on large-scale datasets.
\subsubsection{Performances on CIFAR-100 and Tiny ImageNet}
To further explore the performances of SCA, we evaluate it on more datasets, i.e., CIFAR-100 and Tiny ImageNet.
As the existing approaches did not report the results on these two datasets, we only compare the performances of SCA with the upper bound of ternary weight network performances, i.e., the performances of full-precision networks.
If SCA can match the upper bound, it means that SCA at least does not perform worse than the existing approaches.
On CIFAR-100, the values of the hyperparameters are the same as those on CIFAR-10 except that $\alpha$ is set to 1e-4 and 0.5 for VGG-S and VGG-variate, respectively.
On Tiny ImageNet, $\alpha$ is set to 0.1; $\lambda$ is initially set to 5e-7 and is multiplied by 1e2 at the 90th epoch; weight decay is used with parameter 1e-5; dropout with drop rate 0.5 is adopted; we have trained the networks for 120 epochs.
\par
Table \ref{mc100} reports the comparison results on CIFAR-100 and Tiny ImageNet.
It is observed that the ternary weight networks trained by SCA consistently match the performances of the full-precision counterparts on both datasets and both networks.
This demonstrates the effectiveness of SCA for compressing full-precision weight networks to the ternary weight counterparts.
\begin{table*}[!t]
\begin{minipage}[t]{0.55\textwidth}
\centering
\makeatletter\def\@captype{table}\makeatother\caption{Test accuracies (\%) on CIFAR-100 and Tiny ImageNet}
\label{mc100}
\resizebox{0.99\textwidth}{!}{%
\begin{tabular}{llcc}
\toprule
& & VGG-S & VGG-Variate \\ \midrule
\multirow{2}{*}{CIFAR-100} & SCA (Ours) & 72.2 & 70.3 \\
& Full-precision network & 72.2 & 70.4 \\ \midrule
\multirow{2}{*}{Tiny ImageNet} & SCA (Ours) & 55.2 & 52.1 \\
& Full-precision network & 55.6 & 52.4 \\ \bottomrule
\end{tabular}
}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.43\textwidth}
\centering
\makeatletter\def\@captype{table}\makeatother\caption{Comparison between SCA and SCA without parameterization on CIFAR-10}
\label{m4}
\resizebox{0.99\textwidth}{!}{%
\begin{tabular}{lccc}
\toprule
& VGG-S &ResNet-20 &VGG-Variate \\ \midrule
ES$\rm A_{w}$ & 10.00 & 10.00 & 10.00 \\
SCA &\textbf{93.41} &\textbf{91.28} &\textbf{92.75} \\ \bottomrule
\end{tabular}
}
\end{minipage}
\end{table*}
\subsection{Effectiveness of Parameterization with $\tanh()$}
\label{wdrexp}
Parameterization $\tanh(\Theta)$ in SCA is used to compress the full-precision weight space so that we can search for a ternary weight solution more easily.
In this part, we explore the effectiveness of the parameterization.
We compare the performance of SCA with that of SCA without using parameterization $\tanh(\Theta)$ (i.e., the objective function is reduced to $\mathcal{L}(f(X, W), Y)+\lambda*R(W)$).
We denote the latter by ES$\rm A_{w}$.
We use grid search to tune the hyperparameters in ES$\rm A_{w}$.
However, we find that the accuracy of ES$\rm A_{w}$ on CIFAR-10 is always 10\% as there are always some weights that are far from -1, 0, or +1 without the restriction of $\tanh()$.
As shown in Table \ref{m4}, the accuracies drop substantially on CIFAR-10 for all the three networks without parameterization $\tanh()$, which indicates the importance and the effectiveness of the parameterization $\tanh()$ for training ternary weight networks.
\begin{table*}[t]
\centering
\caption{Sparsity (\%) of ternary weights and accuracies (\%) on MNIST with different values of controller $\alpha$ for given $\lambda$s}
\label{m5}
\begin{tabular}{cccccccccc}
\toprule
& & $\alpha$ = 0 & $\alpha$ = 1e-4 &$\alpha$ = 1e-2 &$\alpha$ = 0.1 & $\alpha$ = 0.2 & $\alpha$ = 0.5 &$\alpha$ = 1 &$\alpha$ = 2 \\ \midrule
\multirow{2}{*}{$\lambda$ = 1e-5} & Sparsity & 0.008 & 0.096 & 2.89 & 29.69 & 61.31 & 99.63 & 99.89 & 99.97 \\
& Accuracy & 99.47 & 99.48 & 99.45 & 99.56 & 99.47 & 99.49 & 99.36 & 60.9 \\ \midrule
\multirow{2}{*}{$\lambda$ = 1e-7} & Sparsity & 0.21 & 0.48 & 3.25 & 11.00 & 15.15 & 43.39 & 93.94 & 97.44 \\
& Accuracy & 99.53 & 99.56 & 99.40 & 99.48 & 99.50 & 99.50 & 99.40 & 99.42 \\ \midrule
\multirow{2}{*}{$\lambda$ = 1e-9} & Sparsity & 11.48 & 12.65 & 13.41 & 18.30 & 25.86 & 33.46 & 51.99 & 67.21 \\
& Accuracy & 99.39 & 99.53 & 99.50 & 99.47 & 99.55 & 99.56 & 99.44 & 99.41 \\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}[t]
\centering
\setlength{\abovecaptionskip}{0.1cm}
\setlength{\belowcaptionskip}{0.1cm}
\caption{Sparsity (\%) of ternary weights and accuracies (\%) of VGG-S on CIFAR-10 with different values of $\alpha$ for given $\lambda$s}
\label{m6}
\resizebox{1\columnwidth}{!}{%
\begin{tabular}{ccccccccccc}
\toprule
& &$\alpha$ = 0 &$\alpha$ = 1e-4& $\alpha$ = 1e-2 & $\alpha$ = 0.1 &$\alpha$ = 0.2 &$\alpha$ = 0.5 &$\alpha$ = 1 &$\alpha$ = 1.5 &$\alpha$ = 2 \\ \midrule
\multirow{2}{*}{$\lambda$ = 1e-7} & Sparsity & 0.0143 & 0.099 & 1.001 &2.411 &6.278 & 29.17 & 79.65 & 93.31 & 97.21 \\
& Accuracy & 92.77 & 92.66 & 92.42 &92.83 & 92.68 &92.61 & 93.23 & 92.36 & 91.11 \\ \midrule
\multirow{2}{*}{$\lambda$ = 1e-8} & Sparsity & 0.29 & 0.32 & 1.58 & 3.91 &5.38 & 8.56 & 33.92 & 70.28 & 85.77 \\
& Accuracy & 93.38 & 93.25 & 93.14 &93.41 &93.00 & 93.29 & 93.15 & 93.36 & 92.75 \\ \midrule
\multirow{2}{*}{$\lambda$ = 1e-9} & Sparsity & 3.02 & 3.05& 4.34 & 7.10 &11.60 & 12.89 & 22.62 & 40.04 & 59.35 \\
& Accuracy & 93.05 & 92.90 & 92.97 & 92.96& 92.94 & 93.05 & 92.63 & 93.38 & 92.67 \\ \bottomrule
\end{tabular}
}
\vskip -0.1in
\end{table*}
\subsection{Sparsity Control}
\label{wdrexp1}
In this part, we investigate how the sparsity (i.e., the percentage of 0s) of the trained ternary weights varies with the values of controller $\alpha$ for given coefficients $\lambda$.
Table \ref{m5} and Table \ref{m6} report the results on MNIST and CIFAR-10, respectively.
It is observed that the ternary weights are becoming increasingly sparser as $\alpha$ increases for given $\lambda$s on both datasets, which verifies that the sparsity of the trained ternary weights is positively related to $\alpha$.
Notably, when the ternary weights become sparser, the accuracy only changes a little, which indicates that SCA is able to control the sparsity of the trained ternary weights through $\alpha$ with little or no accuracy change.
For example, on MNIST (Table \ref{m5}), for given $\lambda$ value 1e-5, when $\alpha$ is increased from 0 to 0.5, the sparsity is increased from 0.008\% (which means that there is almost no 0 in the ternary weights) to 99.63\% (which means that there are almost all 0s in the ternary weights), but the accuracy is almost not changed (i.e., from 99.47\% to 99.49\%).
Similarly, on CIFAR-10 (Table \ref{m6}), for given $\lambda$ value 1e-7, when $\alpha$ is increased from 0 to 1, the sparsity is increased from 0.0143\% (which means that there is almost no 0 in the ternary weights) to 79.65\% (which means that most of the values in the ternary weights are 0s), but the accuracy is only changed a little (i.e., from 92.77\% to 93.23\%).
Therefore, SCA can control the sparsity of the trained ternary weights through $\alpha$ with little or no accuracy changes.
We also notice an exception on MNIST that when $\lambda$=1e-5 and $\alpha$=2, the accuracy drops substantially due to over-sparsity with sparsity 99.97\%.
The reason is that 99.97\% 0s make the ternary weight network too sparse to fit the data.
\par
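The sparsity figures in Tables \ref{m5} and \ref{m6} count the percentage of 0s after quantization; a minimal sketch of that bookkeeping (the final rounding rule here, nearest of \{-1, 0, +1\}, is our assumption since this section does not restate the quantization step):

```python
import math

def ternarize(thetas):
    # assumed final quantization: round tanh(theta), which lies in (-1, 1),
    # to the nearest of {-1, 0, +1}
    return [round(math.tanh(th)) for th in thetas]

def sparsity_percent(weights):
    # percentage of 0s among the ternary weights
    return 100.0 * sum(w == 0 for w in weights) / len(weights)

w = ternarize([-3.0, -0.01, 0.02, 3.0])
assert w == [-1, 0, 0, 1]
assert sparsity_percent(w) == 50.0
```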
\begin{figure*}[!t]
\centering
\includegraphics[width=0.98\textwidth]{features.pdf}
\caption{Feature visualization of odd layers of ResNet-18 on ImageNet images}
\label{f5}
\end{figure*}
\subsection{Feature Visualization}
The experimental results on several benchmark datasets have shown that the ternary weight networks trained by SCA perform on a par with the full-precision counterparts.
The natural question that arises is whether their feature maps are similar.
We visualize the average feature map in each odd layer of ternary weight ResNet-18 and full-precision weight ResNet-18.
\par
Figure \ref{f5} shows the feature comparison between each of the odd layers of ternary ResNet-18 and the full-precision ResNet-18 on ImageNet images.
It is observed that their features in the shallow layers (e.g., layers 1-13) are extremely similar and those in the top layers differ a little, which indicates that the ternary weight networks trained by SCA are able to learn effective features similar to those of the full-precision counterparts, and thus demonstrates the effectiveness of SCA for feature extraction.
\subsection{Discussion}
We have shown that SCA is able to convert a full-precision network to the ternary version with no or little accuracy decrease.
Besides the performance improvements, the significance of SCA over the existing approaches is more at its simplicity, effectiveness, and sparsity-control ability.
Using a very simple method to match or even surpass the performances of the state-of-the-art approaches and to control the sparsity of the ternary weights is the main contribution of this work.
From a practical perspective, current state-of-the-art models in various applications are built on full-precision weight networks that require large amounts of computation and memory, which highly limits their deployment on resource-limited devices.
First, SCA paves the way for converting full-precision networks to the ternary versions simply and effectively with a very high code reuse rate, which makes deployment on resource-limited devices feasible.
Second, SCA is able to control the sparsity of the ternary weights to further accelerate the inference according to the demands on portable devices.
\section{Conclusion}
It is well known that full-precision DNNs have demanding memory and computation requirements.
In this work, we have proposed a simple yet effective approach, SCA, to training ternary weight \{-1, 0, +1\} networks that are dramatically more efficient at inference and require much less memory.
SCA is able to control the sparsity of the ternary weights, which is, to the best of our knowledge, the first work along this line.
We have theoretically and empirically shown that the sparsity of the ternary weights is positively related to controller $\alpha$.
Extensive experiments on five benchmark datasets have demonstrated the SCA outperforms the existing approaches significantly and even matches the performances of the full-precision counterparts.
\bibliographystyle{elsarticle-num-names}
\section{Introduction}
GR predicts that orbits about a massive central body suffer
periastron shifts yielding {\textit {rosette}} shapes. However,
the classical perturbing effects of other objects on inner orbits
give an opposite shift. Since the periastron advance depends
strongly on the compactness of the central body, the detection of
such an effect may give information about the nature of the
central body itself. This would apply for stars orbiting close to
the GC, where there is a \lq\lq dark object", the black hole
hypothesis being the most natural explanation of the observational
data. A cluster of stars in the vicinity of the GC (at a distance
$< 1\arcsec $) has been monitored by ESO and Keck teams for
several years (\cite{Genzel03,Schoedel03,Ghez03,Ghez04,Ghez05}).
In particular, Ghez et al. (\citeyear{Ghez03}) have reported on
observations of several stars orbiting close to the GC massive
black hole. Among those, the S2 star, with mass $M_{\rm S2}\simeq
15$ M$_{\odot}$, appears to be a main sequence star orbiting the
black hole with a Keplerian period of $\simeq 15$ yr. This yields
a mass estimate \citep{Ghez05} of $M_{\rm Sgr~A^*}\simeq
3.67\times 10^6 ~M_{\odot}$ within $4.87\times 10^{-3}$ pc, which
is the S2 semi-major axis.
Several authors have discussed the possibility of measuring the
GR corrections to Newtonian orbits for Sgr~A$^*$ (see e.g.
\citealt{Jaroszynski98, Jaroszynski99, Jaroszynski00, Fragile00,
Rubilar01, Weinberg05}), usually assuming that the central body is
a Schwarzschild black hole. However, since black holes generally
rotate, and there is no reason why they should not be rotating
fast, the Kerr metric should be used instead. Not only stellar
mass black holes but also supermassive black holes are believed to
be spinning. Indeed, X-ray observations of Seyfert galaxies,
microquasars and binary systems (\cite{fabian1, tanaka1, Fabi00,
Fabian04} and references therein) show that the data could be
explained by a rotating black hole model (see e.g.
\cite{zak_rep03_aa, Zak_rep03_AR,ZKLR02} and \cite{ ZR_ASR04}).
Further, supermassive black holes at the center of QSOs, AGNs and
galaxies show beamed jet emission implying that they have non zero
angular momentum. Hence, Kerr black holes may be fairly common in
nature. The relatively short orbital period of the S2 star
encourages a search for genuine GR effects like the orbital
periastron shift. Quite possibly, more suitable stars, close to
the GC black hole, will be found in the future. Bini et al.
(\citeyear{bini2005}) studied the GR periastron shift around
Sgr~A$^*$ and estimated it for various solutions belonging to the
Weyl class, including the Schwarzschild and Kerr black holes.
However, they did not take into account the presence of a stellar
cluster, which could in principle be sizable.
The purpose of this paper is to try to find limits for the extent
and density of the cluster about Sgr~A$^*$ and to determine
whether those limits permit a measurement of the black hole spin.
Clearly, a thorough knowledge of the cluster mass and density
distribution is necessary to be able to infer the mass and spin of
the black hole at the GC by measuring the periastron shift and
subtracting the Newtonian shift. Unfortunately, the star cluster
parameters are poorly known. \footnote{We remark that the star
cluster we are considering around the central black hole might
contain not only normal stars but also white dwarfs, neutron stars
and/or stellar mass black holes.} However, the measure of the
Brownian motion of the central black hole due to the surrounding
matter may be used to constrain the black hole to cluster mass
ratio.\footnote{Other methods for estimating the black hole
parameters (i.e. mass and angular momentum) based on gravitational
retrolensing have been proposed. For more details on this topic we
refer to \citet{depaolis1,depaolis2,depaolis3,depaolis4,ZNDI_04}
and reference therein.} The latest measurement of the Sgr~A$^*$
proper motion, $v_{Sgr~A^*}= (0.4\pm 0.9)$ km $\rm s^{-1}$
(\citealt{reid2004}), is much tighter than the earlier one of
$2\mathrm{-}20$ km $\rm{s}^{-1}$ (see Reid, Readhead, Vermeulen et
al. \citeyear{reid1999}).
For a test particle orbiting a Schwarzschild black hole of mass
$M_{\rm BH}$, the periastron shift is given by (see e.g. Weinberg,
\citeyear{Weinberg72})
\begin{equation}
\Delta \phi_S \simeq \frac{6\pi G
M_{BH}}{d(1-e^2)c^2}+\frac{3(18+e^2)\pi
G^2M_{BH}^2}{2d^2(1-e^2)^2c^4}~, \label{schshift}
\end{equation}
$d$ and $e$ being the semi-major axis and eccentricity of the test
particle orbit, respectively. For a rotating black hole with spin
parameter $a=|{\bf a}|=J/GM_{\rm BH}$, the space-time is described
by the Kerr metric and, in the most favorable case of equatorial
plane motion ({\bf a.v} = 0), the shift is given by (Boyer and
Price \citeyear{boyerprice}, but see also Bini et al.
\citeyear{bini2005} for more details)
\begin{equation}
\begin{array}{l}
\displaystyle{\Delta \phi_K \simeq \Delta \phi_S +\frac{8a\pi
M_{BH}^{1/2}G^{3/2}}{d^{3/2}(1-e^2)^{3/2}c^3}+\frac{3a^2\pi
G^2}{d^{2}(1-e^2)^{2}c^4}~,} \label{kershift}
\end{array}
\end{equation}
which reduces to eq. (\ref{schshift}) for $a\rightarrow 0$. In the
more general case, {\bf a.v} $\neq 0$, the expected periastron
shift has to be evaluated numerically.
The expected periastron shifts per revolution, $\Delta \phi$ (as
seen from the GC) and $\Delta \phi _E$ (as seen from Earth at the
distance $R_0\simeq~8$ kpc from the GC), for the Schwarzschild and
the extreme Kerr black holes turn out to be
$\Delta\phi^{S2}=6.3329\times 10^5$ and $6.4410\times 10^5$ mas
with $\Delta \phi _E^{S2}=0.661$ and $0.672$ mas for the S2 star,
and $\Delta\phi^{S16}=1.6428\times 10^6$ and $1.6881\times 10^6$
mas with $\Delta \phi _E^{S16}=3.307$ and $3.399$ mas for the S16
star. Recall that
\begin{equation}
\Delta \phi _E = \frac{d(1+e)}{R_0} \Delta \phi_{S,K}~.
\end{equation}
Notice that the difference between the periastron shifts for the
Schwarzschild and the maximally rotating Kerr black hole is at
most $0.01$ mas for the S2 star and $0.009$ mas for the S16 star.
In order to make these measurements with the required accuracy,
one needs to know the S2 orbit with a precision of at least $10$
$\mu$as.
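The numbers quoted above can be reproduced numerically. The sketch below (Python; our own illustration, not from the paper) writes the spin terms of eq. (\ref{kershift}) with a dimensionless spin parameter $\hat a \in [0,1]$ ($\hat a=1$ for an extreme Kerr hole), i.e. as $8\pi\hat a\,(GM)^{3/2}/\{[d(1-e^2)]^{3/2}c^3\}$ and $3\pi\hat a^2 (GM)^2/[d^2(1-e^2)^2c^4]$; this convention is an assumption on our part, chosen because it reproduces the quoted S2 values to within rounding:

```python
import math

# Constants (SI); rounded values assumed here, not taken from the paper
G, c = 6.674e-11, 2.998e8
M_sun, AU, pc = 1.989e30, 1.496e11, 3.086e16
RAD2MAS = 1.0e3 * 180.0 * 3600.0 / math.pi   # radians -> milliarcseconds

M_bh = 3.67e6 * M_sun          # mass within the S2 orbit
d, e = 919.0 * AU, 0.87        # S2 semi-major axis and eccentricity
R0 = 8.0e3 * pc                # distance to the Galactic Center

def shift_schw(M, d, e):
    """Schwarzschild periastron advance per revolution (rad), eq. (1)."""
    p = d * (1.0 - e * e)
    return (6.0 * math.pi * G * M / (p * c**2)
            + 3.0 * (18.0 + e * e) * math.pi * (G * M)**2 / (2.0 * p**2 * c**4))

def shift_kerr(M, d, e, a_hat):
    """Equatorial Kerr advance (rad) with dimensionless spin a_hat in [0, 1]."""
    p = d * (1.0 - e * e)
    return (shift_schw(M, d, e)
            + 8.0 * math.pi * a_hat * (G * M)**1.5 / (p**1.5 * c**3)
            + 3.0 * math.pi * a_hat**2 * (G * M)**2 / (p**2 * c**4))

dphi_s = shift_schw(M_bh, d, e) * RAD2MAS        # ~6.3e5 mas/rev from the GC
dphi_k = shift_kerr(M_bh, d, e, 1.0) * RAD2MAS   # extreme Kerr
proj = d * (1.0 + e) / R0                        # eq. (3) projection factor
dphi_sE, dphi_kE = proj * dphi_s, proj * dphi_k  # as seen from Earth (mas)
```

With these inputs the Earth-projected Kerr-Schwarzschild difference comes out close to the $0.01$ mas quoted for S2 in the text.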
There is a proposal to improve the angular resolution of VLTI with
the PRIMA facility (\cite{Rottgering03, Delplancke03,
Quirrenbach03} but see also the related
web-site\footnote{http://obswww.unige.ch/PRIMA/home/introduction.}),
which, by using a phase referenced imaging technique, will get
$\sim 10$ $\mu$as angular resolution. Hence, at least in
principle, the effect of a maximally rotating Kerr black hole on
the periastron shift of the S2 star can be distinguished from that
produced by a Schwarzschild black hole with the same mass.
The plan of the paper is as follows: In the next section we
briefly discuss the effect of a central star cluster on the
periastron advance. In Section 3 we use the Sgr~A$^*$ Brownian
motion to constrain the black hole to star cluster mass ratio.
Then we consider whether the detection of the spin of the black
hole from the periastron shift of the S2 star is possible, once
the cluster density and size have been adequately constrained. In
Section 5 we show how future measurements of the periastron shifts
for at least three stars close to the GC black hole may be used to
estimate the black hole mass and the star cluster mass density
distribution. In the next section we consider what the
observational requirements would be for adequate determination of
the cluster parameters to be able to resolve the Kerr effect.
Finally, in section 7, we present some concluding remarks.
\section{Retrograde shift due to a central stellar cluster}
The star cluster surrounding the central black hole in the GC
could be sizable. At least 17 members have been observed within 15
mpc up to now (\citealt{Ghez05}). However, the cluster mass and
density distribution, that is to say its mass and core radius, is
still unknown. The presence of this cluster affects the periastron
shift of stars orbiting the central black hole. The periastron
advance depends strongly on the mass density profile and
especially on the central density and typical length scale.
We model the stellar cluster by a Plummer model density profile
(\citealt{binneytremaine})
\begin{equation}
\rho_{CL}(r) = \rho_0 f(r)~,~~~~~~~~{\rm
with}~~~~~~~~~~f(r)={\left[1+\left(\frac{r}{r_c}\right)^2\right]^{-\alpha/2}}~,\label{plummer1}
\end{equation}
where the cluster central density $\rho _0$ is given by
\begin{equation}
\rho _0 = \frac{M_{CL}}{\int _0^{R_{CL}} 4\pi r^2 f(r)~dr}~,
\label{plummer2}
\end{equation}
$R_{CL}$ and $M_{CL}$ being the cluster radius and mass,
respectively. According to dynamical observations towards the GC,
we require that the total mass $M(r)=M_{BH}+M_{CL}(r)$ contained
within $r\simeq 5\times 10^{-3}$ pc is $M\simeq 3.67\times 10^
6~M_{\odot}$. Useful information is provided by the cluster mass
fraction, $\lambda_{CL}=M_{CL}/M$, and its complement,
$\lambda_{BH}=1-\lambda_{CL}$. As one can see, the requirement
given in eq. (\ref{plummer2}) implies that $M(r)\rightarrow
M_{BH}$ for $r\rightarrow 0$. The total mass density profile
$\rho(r)$ is given by
\begin{equation}
\rho(r) = \lambda_{BH} M \delta ^{(3)}(\overrightarrow{r}) +\rho_0
f(r)~ \label{totaldensity}
\end{equation}
and the mass contained within $r$ is
\begin{equation}
M(r) = \lambda_{BH}M + \int_0^r 4\pi r'^2\rho_0 f(r')~dr'~.
\end{equation}
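Eqs. (\ref{plummer1})-(\ref{plummer2}) and the enclosed-mass integral can be implemented directly. The following sketch (our illustration; the core radius, truncation radius and $\lambda_{BH}$ are example values) normalizes the $\alpha=5$ Plummer profile numerically and returns the mass inside $r$:

```python
import math

pc, M_sun = 3.086e16, 1.989e30

alpha = 5.0
r_c = 5.8e-3 * pc                # example core radius
R_cl = 4.87e-3 * pc              # truncation radius ~ S2 semi-major axis
M_tot = 3.67e6 * M_sun           # total mass within R_cl
lam_bh = 0.9                     # example black hole mass fraction
M_cl = (1.0 - lam_bh) * M_tot

def f(r):
    """Dimensionless Plummer profile, eq. (4)."""
    return (1.0 + (r / r_c) ** 2) ** (-alpha / 2.0)

def simpson(g, a, b, n=2000):
    """Composite Simpson rule (n must be even)."""
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
    return s * h / 3.0

# Eq. (5): central density from the normalization integral
rho0 = M_cl / simpson(lambda r: 4.0 * math.pi * r * r * f(r), 0.0, R_cl)

def M_enclosed(r):
    """Eq. (7): black hole plus cluster mass inside radius r."""
    return lam_bh * M_tot + simpson(
        lambda rp: 4.0 * math.pi * rp * rp * rho0 * f(rp), 0.0, r)
```

For $\alpha=5$ the integral has the closed form $\int_0^r 4\pi r'^2 f\,dr' = 4\pi r_c^3\, x^3/[3(1+x^2)^{3/2}]$ with $x=r/r_c$, which provides a check on the quadrature.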
\begin{figure}
\begin{center}
\includegraphics[scale=0.50]{f1.eps}\qquad
\end{center}
\caption{The cluster mass density profile is shown for different
values of $\lambda_{BH}$. Solid, dotted and dashed lines
correspond to $\lambda_{BH}= 0,~0.7,~0.9$, respectively. Thick,
red lines have been obtained for $r_c=3$ mpc with the same values
$\lambda_{BH}$ as given above.} \label{density}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.50]{f2.eps}\qquad
\end{center}
\caption{The mass enclosed within the distance $r$ is shown for
different fractions $\lambda_{BH}$ of the total mass $M$ which is
contained in the central black hole. Solid, dotted and dashed
lines correspond to $\lambda_{BH}= 0$, $\lambda_{BH}= 0.7$ and
$\lambda_{BH}=0.9$, respectively.
Note that the case corresponding to $\lambda_{BH} = 0$ is not
realistic as shown by some observations (\cite{shen}).}
\label{ratiomass}
\end{figure}
In Figure \ref{density} we show the cluster mass density profile
$\rho_{CL}(r)$ as given by eq. (\ref{plummer1}), for selected
values of $\lambda_{BH}$. The total mass $M(r)$ enclosed within
the radius $r$ is also shown in Figure \ref{ratiomass}. In both
Figures, solid, dotted and dashed lines correspond to
$\lambda_{BH}= 0,~0.7,~0.9$, and we have assumed $r_c=3$ mpc
(thick lines) and $r_c=5.8$ mpc (thin lines).
The Newtonian gravitational potential $\Phi_N$ at a distance $r$
due to the mass contained within it can be evaluated as
\begin{equation}
\Phi_N(r) = -G \int_r^{\infty} \frac{M(r')}{r'^2}~dr'~.
\label{gravitationalpotential}
\end{equation}
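Eq. (\ref{gravitationalpotential}) can be evaluated by splitting the integral at the cluster truncation radius, beyond which $M(r')$ is constant and the tail contributes $-GM/R_{CL}$. A sketch (our illustration, using the closed-form $\alpha=5$ enclosed mass and example radii):

```python
import math

G, M_sun, pc = 6.674e-11, 1.989e30, 3.086e16

M_tot = 3.67e6 * M_sun
r_c, R_cl = 3.0e-3 * pc, 4.87e-3 * pc   # example core and truncation radii

def M_of_r(r, lam_bh):
    """Enclosed mass, closed form of the alpha = 5 Plummer integral."""
    g = lambda x: x**3 / (1.0 + x * x) ** 1.5
    frac = min(g(r / r_c) / g(R_cl / r_c), 1.0) if r < R_cl else 1.0
    return (lam_bh + (1.0 - lam_bh) * frac) * M_tot

def phi_N(r, lam_bh, n=20000):
    """Eq. (8): Phi_N(r) = -G [ int_r^{R_cl} M(r')/r'^2 dr' + M_tot/R_cl ]."""
    h = (R_cl - r) / n                  # trapezoid rule on [r, R_cl]
    s = 0.5 * (M_of_r(r, lam_bh) / r**2 + M_tot / R_cl**2)
    for i in range(1, n):
        rp = r + i * h
        s += M_of_r(rp, lam_bh) / rp**2
    return -G * (h * s + M_tot / R_cl)
```

For $\lambda_{BH}=1$ this reduces to the point-mass potential $-GM/r$, a convenient check.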
\begin{figure}
\begin{center}
\includegraphics[scale=0.50]{f3.eps}\qquad
\end{center}
\caption{The gravitational potential at distance $r$ as due to the
mass $M(r)$ is shown for different fractions $\lambda_{BH}$ of the
total mass $M$. Solid, dotted and dashed lines correspond to
$\lambda_{BH}= 0$, $\lambda_{BH}= 0.7$ and $\lambda_{BH}=0.9$,
respectively. Thick red lines have been obtained for $r_c=3$ mpc
while thin black lines are for $r_c=5.8$ mpc.} \label{potential}
\end{figure}
In Figure \ref{potential}, the gravitational potential $\Phi_N(r)$
due to the mass density distribution in eq. (\ref{totaldensity})
is given for selected values of $\lambda_{BH}$.
According to GR, the motion of a test particle can be fully
described by solving the geodesic equations. Under the assumption
that the matter distribution is static and pressureless, the
equation of motion of the test particle becomes (see e.g. Weinberg
\citeyear{Weinberg72})
\begin{equation}
\frac{d\textbf{v}}{dt}\simeq-\nabla(\Phi_N +2\Phi_N
^2)+4\textbf{v}(\textbf{v}\cdot \nabla)\Phi_N -v^2\nabla \Phi_N~.
\end{equation}
For a spherically symmetric mass distribution\footnote{We note
that the dynamical state of the region around Sgr A$^*$ is known
to be complex, with a significant population of young stars of
unclear origin, so the assumption of an undisturbed spherical
cluster is likely incorrect. For a non spherically symmetric mass
distribution, the equation of motion can no longer be reduced to
an analytically tractable form like eq. (\ref{setode}). This
problem will be addressed in a subsequent work.} with a density
profile given by eq.
(\ref{plummer1}) and for a gravitational potential given by eq.
(\ref{gravitationalpotential}), the previous relation becomes (see
for details Rubilar et al. \citeyear{Rubilar01})
\begin{equation}
\frac{d\textbf{v}}{dt}\simeq-\frac{GM(r)}{r^3}\left[\left(1+\frac{4\Phi_N}{c^2}+
\frac{v^2}{c^2}\right)\textbf{r}-\frac{4\textbf{v}(\textbf{v}\cdot\textbf{r})}{c^2}\right]~,
\label{setode}
\end{equation}
$\textbf{r}$ and $\textbf{v}$ being the radius vector of the test
particle (with respect to the center of the stellar cluster) and
the velocity vector, respectively. Once the initial conditions for
distance and velocity are given, the orbit of a test particle can
be found by solving the set of ordinary differential equations in
eq. (\ref{setode}) numerically.
Now consider the S2 star, which is moving around the central
distribution of matter on an elliptic orbit of semi-major axis $d$
and eccentricity $e$ in the Newtonian approximation. We take a
frame with the origin in the GC, $X$-$Y$ plane on the orbital
plane and the $X$ axis pointing toward the periastron of the
orbit. Hence, we can choose the Newtonian initial conditions to be
(see e.g. \citealt{smart})
\begin{eqnarray}
r_x^0&=&d(1+e)~, \nonumber \\
r_y^0&=&0~,\label{cond1}
\end{eqnarray}
and
\begin{eqnarray}
v_x^0&=&0,\nonumber \\
v_y^0&=&\sqrt{GM(r_x^0)\left[\frac{2}{d(1+e)}-\frac{1}{d}\right]}~.
\label{cond2}
\end{eqnarray}
For the S2 star, $d$ and $e$ given in the literature are 919 AU
and 0.87 respectively. They yield the orbits of the S2 star for
different values of the black hole mass fraction $\lambda _{BH}$
shown in Figure \ref{orbite}. The Plummer model parameters are
$\alpha =5$, core radius $r_c\simeq 5.8$ mpc. Note that in the
case of $\lambda _{BH}=1$, the expected (prograde) periastron
shift is that given by eq. (\ref{schshift}), while the presence of
the stellar cluster leads to a retrograde periastron shift. For
comparison, the expected periastron shift for the S16 star is
given in Figure \ref{orbite2}. In the latter case, the binary
system orbital parameters were taken from Sch\"odel et al.
(\citeyear{Schoedel03}) assuming also for the S16 mass a
conservative value of $\simeq 10$ M$_{\odot}$.
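The integration just described can be checked in the pure black hole case ($\lambda_{BH}=1$), where the numerical shift must reproduce the leading term of eq. (\ref{schshift}). The sketch below (our illustration; the fixed-fraction adaptive step and classical RK4 integrator are implementation choices, not from the paper) integrates eq. (\ref{setode}) from the initial conditions of eqs. (\ref{cond1})-(\ref{cond2}), which place the star at $r=d(1+e)$, and measures the advance after one radial period:

```python
import math

# Units: AU, yr, M_sun, so that G*M_sun = 4*pi^2 AU^3 yr^-2
GM = 3.67e6 * 4.0 * math.pi ** 2   # pure point mass, lambda_BH = 1
C = 63241.0                        # speed of light in AU/yr
d, e = 919.0, 0.87                 # S2 orbital elements

def deriv(s):
    """Right-hand side of eq. (12) with Phi_N = -GM/r."""
    x, y, vx, vy = s
    r = math.hypot(x, y)
    v2 = vx * vx + vy * vy
    rv = x * vx + y * vy
    corr = 1.0 - 4.0 * GM / (r * C * C) + v2 / (C * C)
    pref = -GM / r ** 3
    return (vx, vy,
            pref * (corr * x - 4.0 * vx * rv / (C * C)),
            pref * (corr * y - 4.0 * vy * rv / (C * C)))

def rk4(s, dt):
    """One classical Runge-Kutta step for the 4-component state."""
    k1 = deriv(s)
    k2 = deriv(tuple(a + 0.5 * dt * b for a, b in zip(s, k1)))
    k3 = deriv(tuple(a + 0.5 * dt * b for a, b in zip(s, k2)))
    k4 = deriv(tuple(a + dt * b for a, b in zip(s, k3)))
    return tuple(a + dt / 6.0 * (p + 2 * q + 2 * u + w)
                 for a, p, q, u, w in zip(s, k1, k2, k3, k4))

# Initial conditions of eqs. (13)-(14): turning point at r = d(1+e)
s = (d * (1 + e), 0.0, 0.0,
     math.sqrt(GM * (2.0 / (d * (1 + e)) - 1.0 / d)))

shift, rv_prev, th_prev = None, None, 0.0
for _ in range(200000):
    x, y, vx, vy = s
    dt = 1.0e-3 * math.hypot(x, y) / math.hypot(vx, vy)  # adaptive step
    s = rk4(s, dt)
    x, y, vx, vy = s
    rv = x * vx + y * vy
    th = math.atan2(y, x)
    if rv_prev is not None and rv_prev > 0.0 >= rv:
        # next outer turning point: interpolate the polar angle there
        frac = rv_prev / (rv_prev - rv)
        shift = th_prev + frac * (th - th_prev)   # rad per revolution
        break
    rv_prev, th_prev = rv, th
```

The result agrees at the per-cent level with $6\pi GM/[c^2 d(1-e^2)]\simeq 3.06\times 10^{-3}$ rad per revolution ($\simeq 0.175$ degree), as expected.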
\begin{figure*}[htbp]
\vspace{0.2cm}
\begin{center}
$\begin{array}{c@{\hspace{0.1in}}c@{\hspace{0.1in}}c}
\epsfxsize=2.0in \epsfysize=2.0in \epsffile{f4.eps} &
\epsfxsize=2.0in \epsfysize=2.0in \epsffile{f5.eps} &
\epsfxsize=2.0in \epsfysize=2.0in \epsffile{f6.eps} \\
\end{array}$
\end{center}
\caption{Post Newtonian orbits for different values of the black
hole mass fraction $\lambda_{BH}$ are shown for the S2 star
(upper panels). Here, we have assumed that the Galactic central
black hole is surrounded by a stellar cluster whose density
profile follows a Plummer model with $\alpha=5$ and a core radius
$r_c\simeq 5.8$ mpc. The periastron shift value in each panel is
given in arcseconds.} \label{orbite}
\end{figure*}
\begin{figure*}[htbp]
\vspace{0.2cm}
\begin{center}
$\begin{array}{c@{\hspace{0.1in}}c@{\hspace{0.1in}}c}
\epsfxsize=2.0in \epsfysize=2.0in \epsffile{f7.eps} &
\epsfxsize=2.0in \epsfysize=2.0in \epsffile{f8.eps} &
\epsfxsize=2.0in \epsfysize=2.0in \epsffile{f9.eps} \\
\end{array}$
\end{center}
\caption{The same as in Figure \ref{orbite} but for the S16--Sgr
A$^*$ binary system. In this case, the binary system orbital
parameters were taken from Ghez et al. (\citeyear{Ghez05})
assuming for the S16 mass a conservative value of $\simeq 10$
M$_{\odot}$.} \label{orbite2}
\end{figure*}
In Figure \ref{shiftvscore} the S2 orbital shift $\Delta \Phi$ is
given as a function of the stellar cluster core radius $r_c$, for
different power law index values ($\alpha=4$ dashed line,
$\alpha=5$ dotted line and $\alpha=6$ solid line). In the left
panel, the black hole mass fraction is $\lambda_{BH}=0.8$ in order
to compare with Rubilar et al. (\citeyear{Rubilar01}) results,
while the right panel shows the $\lambda_{BH}=0.99$ case. Note
that for extremely compact clusters, $\Delta \Phi$ is quite small.
The same is true for large enough core radii, corresponding to
matter density profiles almost constant within the S2 orbit.
\begin{figure*}[htbp]
\vspace{0.2cm}
\begin{center}
$\begin{array}{c@{\hspace{0.1in}}c@{\hspace{0.2in}}c}
\epsfxsize=2.8in \epsfysize=2.8in \epsffile{f10.eps} &
\epsfxsize=2.8in \epsfysize=2.8in \epsffile{f11.eps} \\
\end{array}$
\end{center}
\caption{The expected S2 periastron shift as a function of the
stellar cluster core radius is shown. Here we have assumed a
Plummer density profile for the stellar cluster. Dashed, solid and
dotted lines correspond to $\alpha=4,~5$ and $6$, respectively.
The black hole mass fraction has been fixed to $\lambda_{BH}=0.8$
(left panel) and $\lambda_{BH}=0.99$ (right panel), respectively.
Note the existence of a maximum approximately corresponding to the
S2 semi-major axis.} \label{shiftvscore}
\end{figure*}
Figures \ref{orbite} and \ref{shiftvscore} show that the expected
S2 periastron shift depends strongly on the total mass of the
cluster. In particular, the shift due to the cluster is opposite
in sign (retrograde motion) to the relativistic effect due to the
black hole in the GC. Moreover, for each value of the cluster mass
and power law index, there exist two density profiles
(corresponding to two particular core radii) which have total
shift almost zero, implying that the periastron advance due to the
cluster is equal (in magnitude) to the periastron shift due to the
black hole. A numerical analysis shows that the transition from a
prograde shift (due to the black hole) to retrograde shift (due to
the extended mass) occurs at $\lambda_{BH} \simeq 0.9976$,
$0.9986$ and $0.9990$ for $\alpha =4$, $5$ and $6$, respectively.
This means that a small fraction of mass in the cluster
drastically changes the overall shift.
We would like to note that the assumption of the Plummer model to
describe the mass density distribution of the stellar cluster
around the central black hole is an oversimplification. Indeed,
one expects that in presence of a central black hole, the stellar
profile should follow a Bahcall-Wolf law with density distribution
$\rho_c(r) \propto r^{-7/4}$ \citep{bw,binneytremaine} at least up
to $\tilde{r}_H\ll r_H$, where $r_H=GM_{BH}/\sigma_*^2\simeq 0.5$
pc is the radius of the black hole influence sphere. In the
following, we call $\tilde{r}_H$ the distance ($\ll r_H$) up to
which the cluster mass density follows the Bahcall-Wolf law.
In order to study the effect of such a cusp on the expected S2
periastron shift, we consider three different cases {\it a)} the
cusp is entirely contained within the S2 periastron distance
$R_{S2}$ (i.e. $\tilde{r}_H\le R_{S2}$), {\it b)} the cusp extends
beyond the S2 periastron distance (thus making the S2 star move in
a mass gradient) and {\it c)} the stellar density profile follows
a cusp law up to the distance $\tilde{r}_H$ from the center and a
Plummer law for $r\geq \tilde{r}_H$. In cases {\it a)} and {\it
b)} all stars are in a cusp density profile. In any case we
require that the total mass enclosed within $4.87\times 10^{-3}$
pc is $M\simeq 3.67\times 10^6 ~M_{\odot}$.
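The independence from $\tilde{r}_H$ noted below for case {\it b)} follows directly from the normalization: a $\rho\propto r^{-7/4}$ cusp encloses $M_{cusp}(r)\propto r^{5/4}$, so fixing the mass inside $4.87\times 10^{-3}$ pc fixes the profile everywhere inside the orbit, however far the cusp extends. A minimal sketch (our illustration; $\lambda_{BH}$ is an example value):

```python
M_tot = 3.67e6          # solar masses enclosed within R_S2
R_S2 = 4.87e-3          # pc, S2 semi-major axis
lam_bh = 0.998          # example black hole mass fraction

def M_cusp(r):
    """Bahcall-Wolf cusp mass inside r (solar masses): rho ~ r^{-7/4}
    integrates to M ~ r^{5/4}, normalized at R_S2."""
    return (1.0 - lam_bh) * M_tot * (r / R_S2) ** 1.25

M_peri = M_cusp(R_S2 * (1.0 - 0.87))   # cusp mass inside the S2 periastron
```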
In case {\it a)}, the total S2 periastron shift is just the sum of
the shift due to the black hole and the shift caused by the
stellar cusp (that contributes with the same sign). Hence, the S2
shift turns out to be $\Delta \Phi \simeq 0.17$ degree per
revolution.
In case {\it b)}, by requiring that the total mass enclosed within
$4.87\times 10^{-3}$ pc is $M\simeq 3.67\times 10^6 ~M_{\odot}$,
we find that the dependence of the cusp mass and the induced S2
periastron shift on $\tilde{r}_H$ vanishes. Indeed, in Figure
\ref{cusp1}, we give the mass enclosed within the distance $r$ for
different values of $\lambda_{BH}$. Solid, dotted and dashed lines
correspond to $\lambda_{BH}= 0.1$, $\lambda_{BH}= 0.5$ and
$\lambda_{BH}=0.9$, respectively. Figure \ref{cusp2} shows the
expected S2 periastron shift as a function of $\lambda_{BH}$. As
noted before, the shift due to the cluster is opposite in sign
with respect to that due to the black hole. Moreover, for
$\lambda_{BH}\simeq 0.998$ the total shift turns out to be zero
since the contributions of the black hole and the cluster cancel
out. It is worth noticing that, since for cusp profiles the
density gradient is larger than for a usual ($\alpha=5$) Plummer
model, the S2 periastron shift generally takes larger values. Only
if the Plummer core radius is around $2$ mpc are the resulting S2
periastron shifts comparable in the two cases.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.50]{f16.eps}\qquad
\end{center}
\caption{The mass enclosed within the distance $r$ is shown for
different fractions $\lambda_{BH}$ of the total mass $M$ contained
within the S2 orbit. Solid, dotted and dashed lines correspond to
$\lambda_{BH}= 0.1$, $\lambda_{BH}= 0.5$ and $\lambda_{BH}=0.9$,
respectively. The stellar cluster is assumed to follow an
$r^{-7/4}$ density profile.} \label{cusp1}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.50]{f17.eps}\qquad
\end{center}
\caption{The expected S2 periastron shift as a function of the
mass ratio parameter $\lambda_{BH}$. Solid and dashed lines
correspond to the S2 shift due to the black hole and to the
stellar cusp, respectively. We note that the shift due to the
stellar cusp is independent of the $\tilde{r}_H$ value, which, in
this case, has been assumed to be larger than the S2 semi-major
axis (case {\it b}).} \label{cusp2}
\end{figure}
We have then considered the superposition of a Plummer model and a
Bahcall-Wolf profile (case {\it c}) extended up to $\tilde{r}_H$
such as the cusp density at $\tilde{r}_H$ equals that of the
Plummer model at the same distance. Here, if $\tilde{r}_H\ll
R_{S2}$ the S2 periastron shift will be practically equal to that
caused by the Plummer model (see right panel in Fig. 6) since in
this case the cusp will have a minor influence. On the contrary,
for an extended cusp ($\tilde{r}_H\gg R_{S2}$), the cusp effect on
the S2 periastron shift will dominate, recovering case {\it b)}.
As a last point, we mention that we have also considered the
effect due to an extrapolation of the observed stellar density
profile - the innermost point of which is the S2 star at a
distance of 0.1\arcsec - within $R_{S2}$. Following
\citet{Genzel03b} and assuming a cusp stellar density profile, we
find that the enclosed mass is in the range 30-300 $M_{\odot}$
(for a constant mass density or a power law with index
$\gamma=1.4$). Therefore, the cusp effect on the S2 periastron
shift is negligible since the corresponding $\lambda_{BH}$ is
always greater than 0.99992. However, we caution that the case
under investigation in the present paper is different with respect
to \citet{Genzel03b} since we are assuming that a fraction of the
mass contained within $R_{S2}$ may be in a stellar cluster. Hence,
the cluster mass content may be larger, thus providing a stronger
effect on the S2 periastron shift.
\section{Tightening mass limits of Sgr~A$^*$}
We know that the mass of Sgr~A$^*$ within the S2 orbit is
$3.67\times 10^6$ M$_{\odot}$ to a high accuracy. Though there is
nothing definite known about the mass distribution, there is
strong reason to believe that there is a black hole of several
million solar masses, possibly surrounded by a significant
cluster. In
principle the cluster mass could dominate over the black hole, be
comparable to it or be dominated by it. That there {\it is} a
cluster is highly likely on account of the large number of stars
observed near Sgr~A$^*$. Though these lie outside the S2 orbit,
many stars so far unseen probably do lie within the orbit as well.
In this section we use current data on the Brownian motion of
Sgr~A$^*$ and the evaporation time for the putative cluster to put
limits on the cluster mass and hence on the black hole mass.
Chatterjee, Hernquist and Loeb (\citeyear{chatterje}) have
developed a simple model to describe the dynamics of a massive
black hole surrounded by a dense stellar cluster. The total force
acting on the black hole is separated into two independent parts,
one of which is the slowly varying force due to the stellar
ensemble and the other the rapid stochastic force due to close
stellar encounters. In the case of a stellar system with a Plummer
distribution, the motion of the black hole is similar to that of a
Brownian particle in a harmonic potential. Thus the black hole
one-dimensional mean-square velocity is given by
\begin{equation}
<v_x^2>=\frac{2}{9}\frac{GM_{CL}m_*}{r_cM_{BH}}~, \label{eqvel1}
\end{equation}
where it has been assumed that the cluster is composed of objects
with equal mass, $m_{*}$. For a Plummer ($\alpha$ = 5) stellar
cluster, the total mass within $R$ is
\begin{equation}
M(R)=M_{BH}+\frac{M_{CL}R^3}{(R^2+r_c^2)^{3/2}}~. \label{eqmass1}
\end{equation}
Since $<v_x^2>$ is less than a certain maximum value
$<v_x^2>_{max}$, from eqs. (\ref{eqvel1}) and (\ref{eqmass1}) one
obtains
\begin{equation}
M_{BH} > M(R) \left\{1+\frac{9}{2} \left[ <v_x^2>_{max} \frac{r_c
R^3}{G (R^2+r_c^2)^{3/2}m_* } \right]\right\}^{-1}~,\label{eqlim1}
\end{equation}
the right hand side corresponding to a minimum black hole mass, as
constrained by the Brownian motion of the central black hole. In
Figure \ref{figlimiti1} the minimum black hole mass allowed by the
Brownian motion of Sgr~A$^*$ is given as a function of the stellar
cluster core radius, for two different proper motion velocities of
the black hole: 1.3 km s$^{-1}$ (dashed lines), and 2 km s$^{-1}$
(dotted lines). The total mass contained within $R=0.01$ pc of
Sgr~A$^*$ has been taken to be $M\simeq 3.67\times 10^6$
M$_{\odot}$.
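Eq. (\ref{eqlim1}) is straightforward to evaluate; the sketch below (our illustration; $m_*$ and $R$ are the values assumed in the text) returns the minimum black hole mass as a function of the core radius and the adopted velocity limit:

```python
# Minimum M_BH from the Brownian-motion bound, eq. (18)
G = 6.674e-11                 # SI units
M_sun, pc = 1.989e30, 3.086e16

M_R = 3.67e6 * M_sun          # total mass within R
R = 4.87e-3 * pc              # radius of the mass measurement
m_star = 1.0 * M_sun          # assumed single stellar mass

def M_bh_min(r_c, v_max):
    """Lower limit on M_BH for core radius r_c (m) and v_max (m/s)."""
    bracket = v_max ** 2 * r_c * R ** 3 / (
        G * (R ** 2 + r_c ** 2) ** 1.5 * m_star)
    return M_R / (1.0 + 4.5 * bracket)
```

As expected from the equation, a tighter velocity limit raises the minimum mass, and for $r_c\to 0$ the bound tends to $M(R)$.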
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.55]{f12.eps}\qquad
\end{center}
\caption{The minimum black hole mass allowed by the Brownian
motion of Sgr~A$^*$ is given as a function of the stellar cluster
core radius for the different black hole proper motion velocities.
We assume that a total mass $M\simeq 3.67\times 10^6$ M$_{\odot}$
is contained within $R_{S2}=4.87$ mpc. Dashed and dotted lines
have been obtained for velocities of $1.3$ km ${\rm s}^{-1}$ and
$2$ km $\rm{s}^{-1}$, respectively. For each given curve only the
region above it is allowed.}\label{figlimiti1}
\end{figure}
Chatterjee, Hernquist and Loeb (\citeyear{chatterje}) derived an
evaporation time for a cluster, but concentrated on the large
scale cluster $r_c\simeq$ 10 pc about Sgr~A$^*$, and hence assumed
that M$_{CL}$ $\gg$ M$_{BH}$. On the other hand Rauch and Tremaine
(\citeyear{rauch}) and Mouawad et al. (\citeyear{moawad}) consider
only the region interior to the orbit of S2 and assume M$_{CL}$
$\ll$ M$_{BH}$.
We need to allow for all possibilities while considering the
cluster interior to the orbit of S2, including M$_{CL}\simeq$
M$_{BH}$. For this purpose we consider a cluster of core radius
$r_c$ and mass M$_{CL}$ = M $-$ M$_{BH}$. We now need to obtain
the generalization of the formula of Chatterjee, Hernquist and
Loeb (\citeyear{chatterje}) for the median relaxation time in this
more general situation. For this purpose, as usual, we assume that
the cluster consists of components of the same mass $m_*$ and
evaluate the crossing time in the usual way to obtain the general
median relaxation time
\begin{equation}
T_{r}=\frac{0.14\,(1.3~r_cM)^{3/2}}{\sqrt{G}M_{CL}m_*\log(0.4M/m_*)}~.
\end{equation}
It is easy to verify that in the approximation M$_{CL}$ $\gg$
M$_{BH}$ we recover the formula of Chatterjee, Hernquist and Loeb
(\citeyear{chatterje}) and in the approximation M$_{CL}$ $\ll$
M$_{BH}$ we recover the formula of Rauch and Tremaine
(\citeyear{rauch}). The evaporation time is then $T_{evap}\simeq
300~T_{r}$ (\citealt{binneytremaine}, p.525).
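A sketch of the evaporation-time estimate (our illustration; the natural logarithm is assumed in the $T_r$ formula, and the cluster mass and core radius below are example values near the limits of Table \ref{tab3}):

```python
import math

G = 6.674e-11                  # SI units
M_sun, pc = 1.989e30, 3.086e16
yr = 3.156e7                   # seconds

M = 3.67e6 * M_sun             # total mass within the S2 orbit

def T_evap(M_cl, r_c, m_star=M_sun):
    """Evaporation time (s): 300 times the median relaxation time T_r."""
    T_r = 0.14 * (1.3 * r_c * M) ** 1.5 / (
        math.sqrt(G) * M_cl * m_star * math.log(0.4 * M / m_star))
    return 300.0 * T_r

t_gyr = T_evap(9.0e4 * M_sun, 4.0e-3 * pc) / (1.0e9 * yr)
```

With these inputs, a $9\times10^4$ M$_{\odot}$ cluster with $r_c\simeq 4$ mpc survives for $\sim 1$ Gyr, of the order of the limits in Table \ref{tab3}; a heavier cluster at fixed $r_c$ evaporates faster.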
One can assume different ``reasonable" values of the time that the
cluster would have been in existence and hence use the evaporation
time to further limit the black hole mass in the GC. It is clear
that 10$^8$ years = 0.1 Gyr is less than the minimum value that
could be regarded as reasonable, 1 Gyr is more reasonable and 10
Gyr is likely to be a good value to take. The results are given in
Table \ref{tab3} for $m_*$ = 1 M$_{\odot}$. Note that the tightest
bound gives a very stringent upper limit of
$9\times10^4$M$_{\odot}$ on the cluster mass. Also note that the
value decreases if the average $m_*$ is taken to be larger.
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline \rule{0pt}{3ex} $\rm{ T_{evap} (Gyr)}$ & ${\rm r_c^{1.3}
(mpc)}$ & ${\rm \lambda_{BH}^{1.3}}$ & ${\rm r_c^{2.0}
(mpc)}$ & ${\rm \lambda_{BH}^{2.0}}$ \\
\hline
\rule{0pt}{3ex}$0.1$ & $0.87$ & $0.762$ & $1.28$ & $0.645$ \\
\hline
\rule{0pt}{3ex}$1$ & $2.11$ & $0.919$ & $2.61$ & $0.876$ \\
\hline
\rule{0pt}{3ex}$10$ & $4.12$ & $0.975$ & $5.27$ & $0.964$ \\
\hline
\end{tabular}
\end{center}
\caption{The cluster core radius $r_c$ and minimum black hole mass
fraction $\lambda_{BH}$ for the limits obtained by
$<v>_{max}^2=1.3$ and 2.0, for $T_{evap}=0.1$, 1 and 10 Gyr.}
\label{tab3}
\end{table*}
\section{The spin of the black hole}
The periastron shift is the net contribution of the relativistic
prograde shift due to the black hole and the Newtonian retrograde
shift due to the surrounding cluster. Obviously, if the periastron
advance due to the stellar cluster were known, the contribution of
periastron advance due to the black hole could be obtained by
subtracting from the measured quantity. The question arises
whether the information obtained would be adequate to obtain both
the black hole mass and spin parameters. Though we can put
reasonably sharp bounds on the stellar cluster about the black
hole, is it good enough for our purpose? If so, we could use
eq.(\ref{kershift}) to obtain the spin of the black hole for
different values in the possible range for the periastron shift.
It is easy to see from Fig.~\ref{shiftvscore} that for
$\lambda_{BH} = 0.99$, allowing for the maximum range of unknown
values of $\alpha$ and $r_{c}$, one gets $1.8\times10^3<-\Delta
\phi<4.7\times10^3$ mas or $1.9\times10^{-3}<-\Delta
\phi_E<4.7\times10^{-3}$ mas. For the sharpest limit obtained,
$\alpha=5$, and Brownian motion 1.3 km s$^{-1}$, $\lambda_{BH} =
0.975$, we find that $\Delta \phi \simeq -4.47\times10^3$ mas or
$\Delta \phi_E \simeq -4.7\times10^{-3}$ mas. For Brownian motion
2.0 km s$^{-1}$, $\lambda_{BH} = 0.964$, $\Delta \phi \simeq
-5.75\times10^3$ mas or $\Delta \phi_E \simeq -6.0\times10^{-3}$
mas. This
is a factor of 5 {\it less} than the effect of the spin. Hence
this method {\it cannot} be used to determine the spin. For this
we need the cluster parameter values, rather than upper limits for
them. Alternatively, one would need to rely on the retrolensing
method suggested earlier (\citealt{depaolis4,depaolis5}).
\section{Determination of cluster parameters}
Using the stronger (1$\sigma$) limit of 1.3 km $\rm{s}^{-1}$ and
the weaker (2$\sigma$) limit of 2.0 km $\rm{s}^{-1}$ to limit the
Brownian motion of Sgr A$^*$ for our calculations and evaporation
times of 1 and 10 Gyr for the cluster, we obtained the minimum
black hole mass. For the stronger limits on the Brownian motion
and the evaporation time, it is $3.579\times 10^6$ M$_{\odot}$
corresponding to a $\lambda_{BH} \simeq 0.975$ for $\alpha =5$.
Our numerical analysis shows that the transition from a prograde
shift (due to the black hole) to a retrograde shift (due to the
extended mass assumed to be distributed with a Plummer density
profile) occurs at $\lambda_{BH} \simeq 0.9976$, $0.9986$ and
$0.9990$ for $\alpha =4$, $5$ and $6$, respectively. Hence, even a
small cluster around the central massive black hole limits the
possibility to observe and use the periastron shift of the S2
star.
Since we have modeled the star cluster density profile by a
Plummer model, the periastron shift contribution due to the
stellar cluster depends on three parameters: the central density
$\rho _0$ (or equivalently $\lambda _{BH}$); the core radius
$r_c$; and the power-law index $\alpha$. This degeneracy in the
determination of the stellar cluster parameters is due to the
measurement of the periastron shift of a single star. This is
easily seen by inspecting Figure \ref{fig_par0}, which has been
obtained for illustrative purposes for the S2 star by setting
$\lambda _{BH}=0.99$ and varying both the core radius $r_c$ and
power-law index $\alpha$ for the star cluster density profile.
Each contour line corresponds to a given S2 periastron shift in
units of degrees. To solve the parameter degeneracy and determine
the stellar cluster parameters (by studying the periastron advance
effect), the periastron shifts for at least three different stars
have to be measured with sufficient accuracy. Consider, for
example, the S16 star having an orbital period of $\simeq 36$ yr
and eccentricity $e\simeq 0.97$. Measuring its periastron shift
and comparing with the S2 result will give much tighter
information about the stellar cluster parameters. From Figure
\ref{fig_par1} it is evident that there are regions (intersections
between dashed and solid lines) in the $\alpha$-$r_c$ plane
compatible with the measured values of the periastron shift for both
the S2 and S16 stars. Obviously, there could be (as yet unobserved) stars
with orbit apocenters comparable to S2, but with different
eccentricities (for example larger than 0.87) or stars closer to
the GC black hole than S2 or S16 stars. Monitoring their orbits
and measuring their periastron shifts will be extremely helpful in
reconstructing the cluster density profile. As an example, in
Figure \ref{fig_par2} we compare the expected S2 periastron shift
(solid lines obtained for $\lambda _{BH}=0.99$) with the
periastron shift of a star whose orbit has an eccentricity of
$\simeq 0.87$ and semi-major axis 3 times smaller than that of S2.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.55]{f13.eps}\qquad
\end{center}
\caption{The expected S2 periastron shift for $\lambda _{BH}=0.99$
for different values of both the core radius $r_c$ and power-law
index $\alpha$ for the considered Plummer density profile. Each
contour line corresponds to a given S2 periastron shift in degree
units. Note that a degeneracy occurs since there exist different
values of the power law index and core radius corresponding to the
same periastron shift.} \label{fig_par0}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.55]{f14.eps}\qquad
\end{center}
\caption{The same as in Figure 8. Dotted lines show contours for
the S16 star. If the periastron shifts of both stars are
measured in the future, the intersection between the corresponding
contour lines will give information about the central stellar
density profile.} \label{fig_par1}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.55]{f15.eps}\qquad
\end{center}
\caption{The same as in Figure 8. Dotted lines show contours for a
star with orbital eccentricity $\simeq 0.87$ (the same as the S2 star)
but semi-major axis 3 times smaller than that of S2. If the
periastron shifts of both stars are observed in the future, the
intersection between the corresponding contour lines gives
information about the central stellar density profile.}
\label{fig_par2}
\end{figure}
As is evident from Figures 8 - 10, one can obtain estimates of
$r_c$, $\alpha$ and $\lambda _{BH}$, provided that three stars
have been observed to sufficient accuracy. Assume that we have
adequate accuracy of observation to see periastron shifts of
10$^{-2.5}$ mas, which is the value required to see the
relativistic periastron shift. To what accuracy have we limited
the cluster parameters? To determine this, we could just vary
$\lambda_{BH}$ for a given $r_c$. The effect of this change would
be less than the effect of changing $r_c$ {\it and}
$\lambda_{BH}$. As such, if we want to know how accurately the
cluster parameters are determined, we need to calculate the
maximum change in $r_c$ {\it along with} the change in
$\lambda_{BH}$, as allowed by the Brownian motion limit. By
varying $\lambda_{BH}$ by 10$^{-2}$ and $r_c$ maximally we find
that we get the required accuracy, for evaporation times from 1 to
10 Gyr and Brownian motion of 1.3 to 2.0 km s$^{-1}$. With this
accuracy we should also be able to separate out the classical
periastron shift of the stellar cluster and the relativistic
effect of a maximal (and even slightly less than maximal) Kerr
black hole. With better accuracy we should be able to get an
estimate, or at least an upper bound, for the black hole spin as
well. The question now is, what is required to achieve this
accuracy of observation of the periastron shift of three stars?
This point is discussed in the next section.
\section{Observational requirements for determination of black
hole spin and cluster parameters}
In the near future, observations using large diameter telescopes
in combination with adaptive optics may allow us to reach the
angular resolution needed to measure the periastron shift of stars
close to the GC black hole. Consider for example an instrument with
an angular resolution $\Delta \Phi _A$ and assume that the
relative position of stars can be determined to about $1/\epsilon$
of the achieved angular resolution, i.e. the position accuracy is
$\Delta \Phi _P\simeq \Delta \Phi _A/\epsilon$. The positional
accuracy can be increased by a factor $\sqrt{N}$, if $N$ reference
stars are used. In this case, the maximum positional accuracy is
simply given by (\citealt{Rubilar01})
\begin{equation}
\Delta \phi _{P} = \frac{\Delta \Phi _P}{\sqrt{N}}~.
\end{equation}
It follows that if the periastron position of a star shifts by an
amount $\Delta \Phi _E$ (as observed from Earth), to obtain the
desired accuracy we need at least that $\Delta \phi _{P} \simeq
\Delta \Phi _E$. In this case, the minimum number of reference
stars can be determined once both the instrument angular and
positional accuracies are known, i.e.
\begin{equation}
N_{max}=\left(\frac{\Delta \Phi _P}{\Delta \Phi _E}\right)^2~.
\end{equation}
As an example, the LBT interferometer has angular resolution
$\Delta \Phi _A\simeq 30$ mas and the relative position of stars
is conservatively estimated to be about 1/30 of that value
(\citealt{Rubilar01}). Therefore, to measure the periastron shift
with adequate accuracy to see the smallest shift allowed by the
cluster parameters, and thereby detect the relativistic shift, we
need $N$ to be $\simeq (6.6\times10^{-1}/2\times10^{-3})^2$ or
10$^5$ reference stars, which automatically provides the accuracy
required to see the maximal Kerr (spin) effect. For PRIMA (see
http://www.eso.org/projects/vlti/instru/prima/index\_prima.html)
the relative positional accuracy is planned to be $\simeq
10~\mu$as. As such, we only need a single reference star.
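As a quick arithmetic sketch of the LBT estimate above (taking the quoted numbers at face value, in the units used in the text):

```python
# Number of reference stars needed: N = (dPhi_P / dPhi_E)**2, per the
# equation above, with the values quoted in the text for the LBT case.
dPhi_P = 6.6e-1   # positional accuracy entering the estimate
dPhi_E = 2e-3     # periastron shift to be resolved

N = (dPhi_P / dPhi_E) ** 2
print(f"N ~ {N:.2e}")  # ~1.09e+05, i.e. of order 10^5 reference stars
```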
\section{Concluding remarks}
We have used the fact that the stellar cluster close to the
central black hole appears spherically symmetric to limit the
Brownian motion of Sgr A$^*$ by the observed proper motion. We
have taken the stronger (1$\sigma$) limit of 1.3 km $\rm{s}^{-1}$
and the weaker (2$\sigma$) limit of 2.0 km $\rm{s}^{-1}$ for our
calculations. We also used evaporation times of 1 to 10 Gyr for
the cluster, appropriately modified to incorporate the
gravitational well due to the black hole, to put further
constraints on the cluster mass. The results of our calculations
show that the stellar periastron shifts due to the cluster, even
limited to the extent considered, may totally swamp not only the
Kerr (spin) effect but also the Schwarzschild effect. However, the
discussion focused on the observations for a single star, S2. By
modelling the star cluster density profile with a Plummer law,
the periastron shift contribution due to the stellar cluster
depends on three parameters: the central density $\rho _0$ (or
equivalently $\lambda _{BH}$), the core radius $r_c$, and the
power-law index $\alpha$. Consequently, with observations of three
stars we should be able to determine the cluster parameters
adequately.\footnote{Note that, had we known that
the star cluster follows a Bahcall-Wolf profile, measuring the
periastron advance of only one star would suffice to determine
the single remaining parameter $\lambda_{BH}$ (from Fig. 8).} We have
addressed the question of what is required to obtain the desired
accuracy for observing the relativistic effect. It turns out that
we need about 10$^5$ reference stars with the LBT interferometer.
With the accuracy expected of PRIMA, it should be enough to use
only one reference star.
\acknowledgements
This work has been partially supported by MIUR (Programmi di
Ricerca Scientifica di Rilevante Interesse Nazionale (PRIN04) -
prot. 2004020323$\_$004). We would like to thank an anonymous
referee for very useful suggestions and comments that have
improved our paper. Two of us (AQ and AFZ) would like to thank the
Department of Physics of University of Lecce and {\it INFN}
(Italy) where this work has been initiated. FDP and AAN would like
to thank the $30^{th}$ International Nathiagali Summer College
(Pakistan), where the original version of this work has been
completed. AFZ is also grateful to the National Natural Science
Foundation of China (NNSFC) (Grant \# 10233050) and National Key
Basic Research Foundation (Grant \# TG 2000078404) for a partial
financial support of the work.
\section{Introduction}
Please see our new paper ``Tableaux combinatorics for the asymmetric
exclusion process and Askey-Wilson polynomials"
at \textsf{arXiv:0910.1858} instead.
\end{document}
\section{R\'enyi diagonal entropies are half of thermodynamic ones}
\label{app}
In this appendix we prove that the R\'enyi diagonal entropies are half of the thermodynamic ones for arbitrary index $\alpha$, generalizing the
result for $\alpha=1$ presented in the main text.
However, this proof turns out to be more cumbersome and technical than the one for the von Neumann entropy,
mainly because the thermodynamic R\'enyi entropies must be derived from the GGE, since there is no analogue of the Yang-Yang entropy.
In particular, the proof requires the manipulation of the explicit TBA equations that we avoided in the main text.
In TBA language, the saddle point equation (\ref{min}) is a set of coupled integral equations for the root and hole densities
$\rho_n(\lambda)$ and $\rho^{(h)}_n(\lambda)$ for each string of length $n$.
These equations are written in a more compact form in terms of the ratios $\eta_n(\lambda)\equiv\rho^{(h)}_n(\lambda)/\rho_n(\lambda)$,
assuming the standard form \cite{taka-book,caux-16}
\begin{equation}
\label{qa-tba}
\ln\eta_n(\lambda)=g_n(\lambda)+\sum\limits_{m=1}^\infty a_{nm}\star\ln (1+\eta_m^{-1})(\lambda),
\end{equation}
where the symbol $\star$ denotes the convolution and
we introduced the driving term $g_n(\lambda)$ from
\begin{equation}
\label{driving}
{2\cal E}[\pmb\rho]=L\sum_{n=1}^\infty\int_0^\infty d\lambda\rho_n(\lambda) g_n(\lambda).
\end{equation}
In Eq. (\ref{qa-tba}) the functions $a_{nm}(\lambda)$ encode the interaction in the thermodynamic limit and are simply read
from the Bethe equations of the specific model (see e.g. \onlinecite{taka-book}).
Consequently the saddle point equations for general $\alpha$ in Eq. (\ref{mina}) are simply
\begin{equation}
\label{qa-tba-a}
\ln\eta_n(\lambda)=\alpha g_n(\lambda)+\sum\limits_{m=1}^\infty a_{nm}\star\ln (1+\eta_m^{-1})(\lambda).
\end{equation}
The generalized Gibbs ensemble describing local observables after a quench to an integrable model has the form
\begin{equation}
\rho_{\rm GGE}=\exp\Big(-\sum_{k} \beta_k \hat I_k\Big) ,
\end{equation}
where the operators $\hat I_k$'s form a complete set of local and quasi-local integrals of motion and $\beta_k$ are fixed by the condition that
the charges assume the same value in the initial state and in the GGE, i.e.
$\langle \Psi_0| \hat I_k| \Psi_0\rangle= {\rm Tr} [\rho_{\rm GGE} \hat I_k]$.
The expectation values of local operator in the GGE are obtained by generalized TBA \cite{mc-12}.
Analogously to (\ref{qa-d-ensemble}) for the diagonal ensemble, we write the expectation value of a local operator $O$ as a path integral
\begin{equation}
\langle O\rangle_{\rm GGE}
=\int{\mathcal D}\pmb{\rho}\exp\Big[ -\sum_k \beta_k I_k(\pmb{\rho})+S_{YY}(\pmb{\rho})\Big]\langle\pmb{\rho}|{\mathcal O}|
\pmb{\rho}\rangle,
\end{equation}
where we used the fact that Bethe states $|\pmb \rho\rangle$ are eigenstates of the conserved charges with eigenvalues $I_k(\pmb{\rho})$, i.e.
\begin{equation}
\hat I_k |\pmb{\rho}\rangle= I_k(\pmb{\rho}) |\pmb{\rho} \rangle.
\end{equation}
The resulting saddle point equations are easily read from the above path integral \cite{mc-12}
\begin{equation}
\label{qa-gge}
\ln\eta_n(\lambda)=\sum_{k} \beta_k f_{kn}(\lambda)+\sum\limits_{m=1}^\infty a_{nm}\star\ln (1+\eta_m^{-1})(\lambda),
\end{equation}
where we introduced the functions $f_{kn}(\lambda)$ as
\begin{equation}
I_k(\pmb \rho)= L \sum_n \int_{-\infty}^\infty d\lambda f_{kn} (\lambda)\rho_n(\lambda).
\label{Ivslam}
\end{equation}
These functions $f_{kn}(\lambda)$ are the representation of the local and quasi-local integrals of motion in terms of the rapidities and are
generically known but can be cumbersome. For example in the case of the repulsive Lieb-Liniger model where there are no strings
(i.e. there is not the sum over $n$ above), these functions are simply $f_k(\lambda)=\lambda^k$, i.e.
$ I_k(\pmb \rho)= L \int d\lambda \lambda^k \rho(\lambda)$.
At this point, since the expectation values of all the local observables in the diagonal ensemble and in the GGE must be equal,
these must have the same saddle point root distribution $\pmb \rho$, which implies that Eqs. (\ref{qa-tba}) and (\ref{qa-gge}) coincide.
This equality leads to the identification
\begin{equation}
\sum_k \beta_k f_{kn}(\lambda)=g_n(\lambda)\,,
\label{id1}
\end{equation}
for $\lambda\geq0$ where $g_n(\lambda)$ is defined.
This identification implies
\begin{equation}
4 {\cal E}(\pmb\rho)= \sum_k \beta_k I_k (\pmb\rho)\,.
\label{id2}
\end{equation}
where we used the fact that the integral in (\ref{driving}) is on the positive axis, while the one in (\ref{Ivslam}) is on the entire real line.
Notice that those functions $f_{kn}(\lambda)$ which are odd in $\lambda$ do not contribute to the GGE because the considered initial state
is even and the corresponding $\beta_k$'s are vanishing. This consideration ensures the validity of (\ref{id2}).
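Explicitly, inserting (\ref{id1}) into (\ref{Ivslam}) and using the evenness of
the surviving integrands together with (\ref{driving}), we have
\[
\sum_k \beta_k I_k(\pmb\rho)
= L\sum_{n}\int_{-\infty}^{\infty} d\lambda\, \Big[\sum_k \beta_k f_{kn}(\lambda)\Big]\rho_n(\lambda)
= 2L\sum_{n}\int_{0}^{\infty} d\lambda\, g_n(\lambda)\rho_n(\lambda)
= 4 {\cal E}(\pmb\rho)\,.
\]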
We are finally ready to obtain thermodynamic R\'enyi entropies.
These are given by
\begin{equation}
\textrm{Tr}\rho^\alpha_{\rm GGE}=\int{\mathcal D}\pmb{\rho}e^{-\alpha \sum_k \beta_k I_k[\pmb \rho]+ S_{YY}[\pmb{\rho}]}.
\end{equation}
Thus, the saddle point equations are the same as in (\ref{qa-gge}) with the replacement $\beta_k\to \alpha \beta_k$ for all $k$.
Because of the identification (\ref{id1}), these generalised TBA equations for
the GGE are the same as those for the diagonal
not local quantities (and indeed we will soon find a factor 2 between the two).
At this point, the GGE R\'enyi entropy is read off from the saddle-point of the above path integral, in analogy to the
diagonal one (\ref{d-renyi-main}), as
\begin{equation}
S_{\rm GGE}^{(\alpha)}=
\frac{1}{1-\alpha}\Big[-\alpha \sum_k \beta_k I_k(\pmb{\rho}^*_\alpha)+S_{YY}(\pmb{\rho}^*_\alpha)\Big],
\end{equation}
which, using (\ref{id2}), is exactly twice the diagonal R\'enyi entropy (\ref{d-renyi-main}).
\section{Introduction}
There exists a range of methods for extracting primary low-level features from images.
These include edge, corner and blob detection. Edge detection methods vary from convolution
based measurement of local image gradient \cite{Canny86} to morphology based
methods \cite{Smith97}. Numerous corner and blob detectors now exist with noteworthy
methods being SIFT \cite{Lowe04} (local gradient based) and MSER \cite{Matas02robustwide} (area topology based).
Low level features are typically used to establish correspondence
between images or as indexing terms in object recognition or navigation.
The low level feature detector presented here is defined to be the 3-value-quantised difference between
a pixel value and an estimate of the local image mean. We have christened this the Sinclair-Town or
ST-transform for short. In spirit this falls between an edge detector and the MSER blob detector.
The transform is efficient to compute as rectangular areas can be used to compute the local mean
via the cumulative sum image. The range of corner and interest point detectors is sufficiently large
that comment on such is left to survey papers and this paper will focus on exposition of the ST transform,
however useful papers do include \cite{Dahl_findingthe}, \cite{FTombari08} \cite{DBLP:conf/bmvc/RichardsFJ92}.
Section \ref{section:1} gives details of the transform and examples of its output.
Section \ref{section:2} shows how the output of the ST-transform can be used to define various extended
features.
Section \ref{section:3} demonstrates the use of ST-transform features in image matching and section \ref{section:4}
offers wise and considered conclusions.
\section{The Sinclair-Town (ST) transform.}
\label{section:1}
We define the ST (Sinclair-Town) transform of an image I to be:
\begin{equation}
ST(I) = \left\{ \begin{array}{rl}
1 & \mbox{if } I - m(I,d) > k1, \\
-1 & \mbox{if } I - m(I,d) < -k2, \\
0 & \mbox{otherwise.}
\end{array} \right.
\end{equation}
Where m(I,d) is the local mean of I computed over a region of side length 2*d+1 centred on the pixel. k1 and k2 are
typically set to 4 for stereo matching and d to 12. The local mean is efficiently estimated using a cumulative sum
of image brightness.
Matlab code for the single scale ST transform.
\noindent
\begin{verbatim}
function trx = ST(im, d, k1, k2)
% Single-scale ST transform of image im: quantise each pixel against the
% local mean over a (2*d+1)x(2*d+1) window, computed efficiently from an
% integral (cumulative sum) image.
q = d*2+1;
D = d+1;
[nr,nc] = size(im);
s = cumsum(cumsum(im,2),1);   % 2-D inclusive prefix sums
br = nr-d-1;
bc = nc-d-1;
b0 = D+1;
trx = zeros(nr,nc);
nx = q*q;                     % number of pixels in the window
for r=b0:br
for c=b0:bc
% four-corner integral-image formula for the local window mean
v = (s(r-D,c-D)-s(r-D,c+d)-s(r+d,c-D)+s(r+d,c+d))/nx;
k = im(r,c)-v;
if k > k1
trx(r,c) = 1;
elseif k < -k2
trx(r,c) = -1;
end
end
end
\end{verbatim}
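As a sanity check (not part of the original Matlab), the four-corner cumulative-sum formula for the window mean can be verified against a direct average; a minimal NumPy sketch of the same indexing, 0-based:

```python
import numpy as np

# Verify the integral-image window mean used in ST() against a direct average.
rng = np.random.default_rng(0)
im = rng.integers(0, 256, size=(40, 50)).astype(float)
d = 3
q = 2 * d + 1                          # window side length

s = im.cumsum(axis=1).cumsum(axis=0)   # 2-D inclusive prefix sums

r, c = 20, 25                          # an interior pixel (0-based indices)
# four-corner formula; r-d-1 and c-d-1 play the role of Matlab's r-D, c-D
v = (s[r + d, c + d] - s[r - d - 1, c + d]
     - s[r + d, c - d - 1] + s[r - d - 1, c - d - 1]) / q**2
direct = im[r - d:r + d + 1, c - d:c + d + 1].mean()
assert np.isclose(v, direct)
```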
Figure~\ref{fig:ST_1} shows an image of some cars moving on a road and the associated ST transforms for d=6 and d=12, with k=4.
Increasing d gives a wider band around dark-light edges and increases tolerance to change in shape under correlation.
\begin{figure}
\centering$
\begin{array}{ll}
\includegraphics[width=.48\textwidth]{figures/images/img001874.eps}&
\includegraphics[width=.48\textwidth]{figures/images/img001874_ST6.eps} \\
a. & b. \\
\includegraphics[width=.48\textwidth]{figures/images/img001874_ST12.eps} \\
c. \\
\end{array}$
\caption{
\emph{a.} Image from a super interesting video sequence of passing cars. \emph{b.} ST-transform with \{d=6\}.
\emph{c.} ST-transform with \{d=12\}.
}
\label{fig:ST_1}
\end{figure}
The ST-transform can be extended to cope with detail on multiple scales by introducing more than one
estimate of local mean with differing sizes of domains of support (fig.~\ref{fig:ST_2}).
Sensitivity to local edges can also
be increased through the inclusion of local asymmetric mean estimates.
\begin{figure}
\centering$
\begin{array}{ll}
\includegraphics[width=.48\textwidth]{figures/images/UAE_0006b.eps}&
\includegraphics[width=.48\textwidth]{figures/images/UAE_0006b_ST18.eps}\\
a. & b.\\
\includegraphics[width=.48\textwidth]{figures/images/UAE_0006b_ST18_6.eps}\\
c.\\
\end{array}$
\caption{
\emph{ a.} Super interesting image of a car license plate. \emph{b.} single scale ST-transform.
\emph{c.} multi-scale ST-transform.
Adding a second scale to the ST transform allows fine detail to dominate over strong longer scale brightness differences.
}
\label{fig:ST_2}
\end{figure}
\section{Feature extraction from the ST transform.}
\label{section:2}
The ST transform produces a 3 valued quantised result representing areas that are either
1 (brighter than the local mean by a threshold), -1 (darker than the local mean by a threshold) or
0 (similar to the local mean).
The simplest feature to extract from the quantised image is areas of constant quantisation.
Figure ~\ref{fig:ST_3} shows regions of constant quantisation for the images in figure~\ref{fig:ST_1}
and ~\ref{fig:ST_2}.
\begin{figure}
\centering$
\begin{array}{l}
\includegraphics[width=.9\textwidth]{figures/images/img001874_ST12_lbd.eps}\\
\includegraphics[width=.9\textwidth]{figures/images/UAE_0006b_ST18_lbd.eps}\\
\end{array}$
\caption{
Matlab artificially coloured regions of constant quantisation value from figures \ref{fig:ST_1}
and \ref{fig:ST_2}.
}
\label{fig:ST_3}
\end{figure}
These area features are faster to compute than MSER regions or multiscale SIFT features
and are generally stable to mild change in viewpoint.
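The constant-quantisation areas described above can be extracted with a simple connected-component labelling; a minimal pure-Python sketch (hypothetical helper, not the paper's Matlab):

```python
# Label 4-connected regions of constant quantisation value (-1, 0, +1)
# in a 3-valued ST image via breadth-first flood fill.
from collections import deque

def st_regions(st):
    nr, nc = len(st), len(st[0])
    labels = [[0] * nc for _ in range(nr)]
    n = 0
    for r in range(nr):
        for c in range(nc):
            if labels[r][c]:
                continue          # already assigned to a region
            n += 1
            v = st[r][c]          # quantisation value of this region
            queue = deque([(r, c)])
            labels[r][c] = n
            while queue:
                i, j = queue.popleft()
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    a, b = i + di, j + dj
                    if 0 <= a < nr and 0 <= b < nc \
                            and not labels[a][b] and st[a][b] == v:
                        labels[a][b] = n
                        queue.append((a, b))
    return labels, n

st = [[1, 1, 0],
      [0, -1, -1],
      [0, -1, 1]]
_, n = st_regions(st)
print(n)  # 5 constant-quantisation regions in this toy image
```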
\subsection{Edge and corner features from the ST transform}
The ST-transform may be used as an edge detector by labelling as edges, for example, pixels on the
boundary of dark regions that are adjacent, or nearly adjacent, to light pixels: figure ~\ref{fig:ST_4}.
Equally a gradient threshold (original image gradient across the boundary) could be applied to boundary
pixels of dark regions.
\begin{figure}
\centering
\includegraphics[width=.9\textwidth]{figures/images/img001874_ST12_edges.eps}
\caption{
Dark ST edges of figure \ref{fig:ST_1}.
}
\label{fig:ST_4}
\end{figure}
Corners may be defined as maxima of curvature of dark (or light) edges: figure ~\ref{fig:ST_5}.
A length scale is required for curvature estimation, in this example +/- 4 pixels along the edge chain was used.
\begin{figure}
\centering
\includegraphics[width=.9\textwidth]{figures/images/img001874_ST12_corners.eps}
\caption{
Maxima of curvature of dark ST-edges from \ref{fig:ST_1} overlaid.
}
\label{fig:ST_5}
\end{figure}
\section{Matching with the ST-transform.}
\label{section:3}
The ST-transform of an extended area provides a stable basis for establishing correspondence
between areas of an image and other images in a sequence or from a stereo pair.
A range of metrics are possible for comparing either the intensity values or the ST-quantised values
of two image patches. For the purposes of this paper we demonstrate a sum of absolute differences metric.
Figure ~\ref{fig:ST_6} shows a pair of images from a stereo camera rig and their ST-transformed versions
with a mean area patch dimension of 25x25 pixels.
\begin{figure}
\centering$
\begin{array}{ll}
\includegraphics[width=.48\textwidth]{figures/images/lft0_00001.eps}&
\includegraphics[width=.48\textwidth]{figures/images/rgt0_00001.eps} \\
a. & b. \\
\includegraphics[width=.48\textwidth]{figures/images/lft0_00001_ST12.eps}&
\includegraphics[width=.48\textwidth]{figures/images/rgt0_00001_ST12.eps}\\
c. & d. \\
\end{array}$
\caption{
\emph{a.} Image from a plump, ripe stereo pair. \emph{b.} the other image from the pair.
\emph{c.} ST-transform of \emph{a} with \{d=12\} \emph{d.} ST-transform of \emph{b} with \{d=12\} .
}
\label{fig:ST_6}
\end{figure}
Figure ~\ref{fig:ST_7} shows the block matches between the stereo pair in figure ~\ref{fig:ST_6} together
with the images brought into registration via the projective transform of the plane of the road, estimated
from the matches \cite{Sinclair92}.
\begin{figure}
\centering$
\begin{array}{l}
\includegraphics[width=.75\textwidth]{figures/images/point_matches.eps}\\
\includegraphics[width=.9\textwidth]{figures/images/proj_match_overlay.eps} \\
\end{array}$
\caption{
Block matches created by the Matlab matching code, and a projectively rectified overlay of the image pair.
}
\label{fig:ST_7}
\end{figure}
The projective distortion of the driveway increases towards the bottom of the field of view and matching is degraded.
In reality it would be best to include an iterative projective rectification step during the matching process or use
affinely normalised indexing features.
Figure ~\ref{fig:ST_8} compares block matching done with the sum of absolute differences of ST-transformed images against
normalised zero-mean correlation of the original brightness function for the same patch.
For ease of comparison, the negated sum of absolute differences is plotted, so in both cases peaks represent better matches.
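The two metrics being compared can be sketched as follows (illustrative Python, not the Matlab used for the figures):

```python
import numpy as np

# Negated sum of absolute differences (for ST-quantised patches) and
# zero-mean normalised cross-correlation (for raw brightness patches).
# With both, larger values mean better matches.
def neg_sad(a, b):
    return -np.abs(a - b).sum()

def zncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

p = np.array([[1.0, 2.0], [3.0, 4.0]])
q = np.array([[2.0, 4.0], [6.0, 8.0]])   # same pattern, doubled contrast

assert neg_sad(p, p) == 0                # identical patches score best
assert np.isclose(zncc(p, q), 1.0)       # ZNCC is invariant to gain
assert neg_sad(p, q) < 0                 # plain SAD is not
```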
\begin{figure}
\centering$
\begin{array}{ll}
\includegraphics[width=.48\textwidth]{figures/images/lft0_00001_ST12_patch.eps}&
\includegraphics[width=.48\textwidth]{figures/images/lft0_00001_patch.eps} \\
a. & b. \\
\includegraphics[width=.48\textwidth]{figures/images/lft0_00001_ST12_erx9.eps}&
\includegraphics[width=.48\textwidth]{figures/images/lft0_00001_conv9.eps}\\
c. & d. \\
\includegraphics[width=.48\textwidth]{figures/images/lft0_00001_ST12_erx21.eps}&
\includegraphics[width=.48\textwidth]{figures/images/lft0_00001_conv21.eps}\\
e. & f. \\
\includegraphics[width=.48\textwidth]{figures/images/lft0_00001_ST12_erx41.eps}&
\includegraphics[width=.48\textwidth]{figures/images/lft0_00001_conv41.eps}\\
g. & h. \\
\end{array}$
\caption{
\emph{a.} local ST patch, \emph{b.} corresponding image patch.
\emph{c,e,g} Error surface for -sum of absolute difference metric for
convolution with kernel sizes of 9, 21 and 41. \emph{d,f,h} normalised zero mean correlation value surface
for kernel sizes of 9, 21, 41.
}
\label{fig:ST_8}
\end{figure}
Matlab code for a basic block matcher for a pair of ST-transformed images:
\noindent
\begin{verbatim}
function [pts, pts2] = block_match(bx1, bx2, maxR, maxC, pflag)
% Basic block matcher for a pair of ST-transformed images bx1 and bx2.
% The search window is +/-maxR rows by +/-maxC columns; pflag > 0 plots
% the matches overlaid on a colour composite of the two inputs.
[nr,nc] = size(bx1);
dim = 40;                 % half-size of the matching patch
br = nr-dim-2;
bc = nc-dim-2;
dog = zeros(maxR*2+1, maxC*2+1);
pts = zeros(0,2);
pts2 = zeros(0,2);
np = 0;
MR = maxR*2+1;
MC = maxC*2+1;
for r = dim+2:100:br
for c = dim+2:50:bc
m = bx1(r-dim:r+dim, c-dim:c+dim);
df = sum( abs(m(:)));
if df > 1050              % skip patches with too little ST structure
dog(:) = 7000;
for R = r-maxR:r+maxR
i=R-r+maxR+1;
for C = c-maxC:c+maxC
if( R>dim && C>dim && R<br && C<bc)
j=C-c+maxC+1;
m2 = bx2(R-dim:R+dim,C-dim:C+dim);
dog(i,j) = sum(abs(m(:)-m2(:)));   % sum of absolute differences
end
end
end
[v,idx] = min(dog(:));
if v < 6000               % accept only sufficiently good matches
[rr,cc] = ind2sub([MR,MC],idx);
np = np+1;
pts(np,:) = [r,c];
pts2(np,:) = [(r+rr-maxR), (c+cc-maxC)];
end
end
end
end
if( pflag > 0 )
cim = uint8(zeros(nr,nc,3));
cim(:,:,1) = bx1(:,:)*60 + 38;
cim(:,:,2) = bx2(:,:)*20 + 68;
cim(:,:,3) = bx1(:,:)*60 + 38;
figure( pflag + 10 );
imagesc( cim )
colormap gray
hold on
for i=1:np
r = pts(i,1);
c = pts(i,2);
mr = pts2(i,1);
mc = pts2(i,2);
plot( [c;mc], [r;mr],'y' );
plot( c, r,'mx' );
plot( mc, mr,'g+' );
end
hold off
end
return
\end{verbatim}
\subsection{Matching function stability.}
There is a trade-off between the increased cost of using a larger area to perform
correlation-type matching and the specificity of any match.
Figure ~\ref{fig:ST_8} shows the error surface for various sizes of patch centred about
a point in the image pair in figure ~\ref{fig:ST_6}; for comparison, standard zero-mean
cross-correlation surfaces for the local brightness function are included.
\section{Conclusions}
\label{section:4}
The ST-transform provides a fast and flexible basis for low level feature extraction from images.
The robust nature of the quantisation makes it a good basis for stereo matching and also for glyph
spotting or drivable region detection in applications like self driving cars or general OCR.
Matlab for some ST-transform methods is available at https://github.com/imensedave/ST-transform\_methods.
The methods used in this paper are used for an interactive highway sign reading demo at
https://www.imense.co.uk/HWAY.html
The ST-transform points out blank areas of an image that are not going to be great for matching. In general a
feature for frame-to-frame matching is going to need to contain a minimum turning angle in the boundary of an
ST-transform derived feature. On reflection this paper represents the last \emph{Hurrah} of
brutish correlation based matching with future methods liable to focus exclusively
on more direct convolution net based approaches.
\bibliographystyle{splncs}
\section{Introduction}
The observation
of jet quenching phenomenon
and hydrodynamical flow effects
in $AA$ collisions at RHIC and LHC
signals about formation of a hot quark-gluon plasma (QGP)
in the initial stage of $AA$ collisions. It seems likely
that the QGP formation goes via the thermalization of the
collective color fields of the so-called glasma stage
\cite{glasma1,glasma2} formed after multiple gluon exchanges between
two strongly Lorentz contracted nucleus disks.
It is believed that the QGP
should also reveal itself in thermal photon emission
that may be important in the low and
intermediate $k_T$ region \cite{Shuryak}.
However, the
photon production in $AA$ collisions
shows some inconsistency with
the QGP evolution supported by
the results of the jet quenching
analyses.
The data from RHIC and LHC on jet quenching in $AA$ collisions
can be explained in the picture
with radiative and collisional energy loss
for the hydrodynamical
QGP evolution with the QGP production time $\tau_0\sim 0.5$ fm
and the initial entropy determined from the measured
hadron multiplicities \cite{RAA12,UW_JQMC,CUJET}. However,
theoretical predictions for the thermal photon spectrum in this picture
obtained with a sophisticated viscous hydrodynamical model of
the fireball evolution \cite{Gale_best} underestimate the photon
spectrum measured at RHIC by PHENIX \cite{PHENIX_ph_PR}
in Au+Au collisions at $\sqrt{s}=0.2$ TeV
by a factor of $\sim 3$. Several mechanisms have been suggested
that can increase the photon emission in $AA$ collisions.
There were suggestions that the very strong
magnetic field created in noncentral $AA$ collisions
can increase the
photon emission due to the conformal anomaly \cite{Kharzeev1} and
the synchrotron radiation \cite{T1}. However, these mechanisms
require too high a magnitude of the magnetic field \cite{Z_syn2}, which contradicts
calculations for a realistic evolution of the plasma fireball \cite{Z_maxw}.
In Ref. \cite{Snigirev} it was suggested that a considerable additional
contribution to the photon
production may be due to the boundary bremsstrahlung
resulting from interaction of escaping quarks with collective
confining color field at the surface of the QGP.
In Refs. \cite{glasma_L,glasma_L2,glasma_W} it was
argued that the pre-equilibrium glasma phase also can give
large contribution to the photon emission in $AA$ collisions.
Unfortunately, uncertainties in the theoretical predictions
for the boundary photon emission \cite{Snigirev} and the photon
emission from the glasma \cite{glasma_L,glasma_L2,glasma_W} are rather large.
As compared to the glasma stage the photon production in the QGP
stage is better understood. However, even for the QGP phase the
theoretical uncertainties can be considerable, because
the available analyses are based on the pQCD picture of a weakly coupled QGP,
whose applicability to the QGP produced at RHIC and LHC may be questionable.
In the leading order (LO) pQCD the thermal photon emission from the QGP
is due to the
$2\to 2$ processes: $q(\bar{q})g\to \gamma q(\bar{q})$ (Compton)
and $q\bar{q}\to \gamma g$ (annihilation).
In the pQCD picture a significant contribution to the photon
emission also comes
from the higher order collinear processes $q\to \gamma q$ and $q\bar{q}\to
\gamma$ \cite{AMY1}. It turns out to be parametrically of the same
order as the $2\to 2$ processes \cite{AGZ2000}. The results of
Ref. \cite{AMY1}
show that at $k/T\gtrsim 3$ the contribution
of the collinear processes turns out to be close to that from
the LO $2\to 2$ mechanisms, and at $k/T\lesssim 2$
the collinear emission gives the dominant contribution to the photon emission
rate in the QGP.
The collinear photon radiation is due to
multiple scattering of thermal quarks in the QGP. This mechanism
is similar to that for the photon radiation
from hard quarks \cite{Z_phot}
and to the induced gluon radiation from fast partons that
dominates in the jet quenching phenomenon
\cite{GW,BDMPS,LCPI1,Z_1998rev}.
In \cite{AMY1} the collinear processes have been evaluated
for constant QCD coupling using the
thermal field theory methods within the hard thermal loop (HTL)
resummation scheme.
In the case of the
induced gluon emission from fast partons in the QGP
the results for constant and running $\alpha_s$ differ considerably.
For running $\alpha_s$ the energy dependence of the
radiative parton energy loss weakens \cite{Z_Ecoll}. The analyses of the
data on the nuclear modification factor $R_{AA}$ from RHIC and LHC
\cite{Z_Ecoll,RAA08,CUJET} show that running $\alpha_s$ allows one to
obtain a better agreement with the data. In \cite{Gale_best} the photon
emission has been addressed using the AMY \cite{AMY1} formulas
obtained for a fixed QCD coupling constant.
For an accurate confrontation of the QGP signals from jet quenching and from
photon production, it would be of great interest to perform calculations
of the collinear photon emission with running $\alpha_s$ consistent with
that used in the successful jet quenching analyses.
It would also be interesting to study the sensitivity of the collinear
photon emission to variation of the quark quasiparticle mass $m_q$.
The predictions of the pQCD analysis \cite{AMY1}, based on the HTL
resummation scheme, have been obtained for the standard pQCD
quark quasiparticle mass $m_q=gT/\sqrt{3}$. However, the analysis of
the lattice data within a quasiparticle model \cite{LH}
gives practically constant thermal quark mass $m_q\sim 300$ MeV.
In a more recent analysis
\cite{mq_sQGP} it was demonstrated that in a strongly coupled QGP
the thermal quark mass may be much smaller than that in the pQCD
HTL picture. A two-pole fit (with the normal and plasmino modes)
of the Euclidean lattice quark correlator
of Ref. \cite{Karsch_mq} also supports that the thermal quark mass
may be smaller than in the HTL scheme (by a factor of $\sim 2$).
Unfortunately, this fit is not very accurate due to
the insensitivity of the Euclidean correlator to the quark
spectral function at energies $\lesssim T$ \cite{Karsch_mq}.
The small thermal quark mass may increase the photon emission rate,
with a very small effect on the jet quenching, which is practically insensitive
to the quark quasiparticle mass \cite{BDMPS,LCPI1}.
Due to the theoretical uncertainties for the thermal quark mass,
it would be interesting to study the collinear photon
emission in a phenomenological picture without the HTL constraints
on the quark quasiparticle mass.
In the present paper we study the effect of running $\alpha_s$
and the role of variation of the quark quasiparticle mass on
the collinear photon emission in $AA$ collisions.
We treat quark multiple scattering in the QGP in the scheme we used previously
in successful jet quenching analyses \cite{RAA08,RAA12}.
There we used the Debye mass obtained in the lattice calculations
which, contrary to the HTL scheme, give nonzero magnetic screening
\cite{magnetic_Md} in the QGP.
We compare the results for this scenario with the
results for the HTL scheme with static $\alpha_s$.
We use the formalism of \cite{AZ}
based on the light-cone path integral (LCPI) approach
\cite{LCPI1,Z_1998rev}. The formulation given in \cite{AZ} reproduces
the results of the AMY \cite{AMY1} approach.
In \cite{AMY1} the photon emission rate has been expressed via
solution of an integral equation. In the present paper
the photon emission rate is expressed via solution
of a two-dimensional Schr\"odinger equation with a smooth boundary
condition. The method is convenient for numerical calculations.
\section{Theoretical framework}
The contribution of
the collinear processes $q\to \gamma q$ and $q\bar{q}\to \gamma$
to the photon emission
rate per unit time and volume in the plasma rest frame
can be written as \cite{AZ,AMY1}
\begin{equation}
\frac{dN}{dtdVd\kb}=
\frac{dN_{br}}{dtdVd\kb}
+\frac{dN_{an}}{dtdVd\kb}\,,
\label{eq:10}
\end{equation}
where the first term corresponds to $q\to \gamma q$ and the second one
to $q\bar{q}\to \gamma$.
The bremsstrahlung contribution can be written as \cite{AZ}
\vspace{0.25cm}\begin{eqnarray}
\frac{dN_{br}}{dtdVd\kb}=\frac{d_{br}}{k^{2}(2\pi)^{3}}
\sum_{s}
\int_{0}^{\infty} dp p^{2}n_{F}(p)
\nonumber\\ \times
[1-n_{F}(p-k)]\theta(p-k)
\frac{dP^{s}_{q\rightarrow \gamma q}(\pb,\kb)}{dk dL}\,,
\label{eq:20}
\end{eqnarray}
where
$d_{br}=4N_{c}$ is the number of the quark and antiquark states,
\begin{equation}
n_{F}(p)=\frac{1}{\exp(p/T)+1}\,
\label{eq:30}
\end{equation}
is the thermal Fermi distribution,
and
${dP^{s}_{q\rightarrow \gamma q}(\pb,\kb)}/{dk dL}$
is the photon emission probability distribution
per unit length for a quark of type $s$.
In the small angle approximation
we can take the vectors $\pb$ and $\kb$ parallel.
The annihilation contribution
can be expressed via the probability
distribution for the photon absorption
with the help of the detailed balance principle.
It leads to the formula \cite{AZ}
\vspace{0.25cm}\begin{eqnarray}
\frac{dN_{an}}{dtdVd\kb}=\frac{d_{an} }{(2\pi)^{3}}
\sum_{s}
\int_{0}^{\infty} dp n_{F}(p)
\nonumber\\ \times
n_{F}(k-p)\theta(k-p)
\frac{dP^{s}_{\gamma\rightarrow q\bar{q}}(\kb,\pb)}{dp dL}\,,
\label{eq:40}
\end{eqnarray}
where $d_{an}=2$ is the number
of the photon helicities,
$
{dP^{s}_{\gamma\rightarrow q\bar{q}}(\kb,\pb)}/{dp dL}
$ is the probability distribution per unit length
for the $\gamma \rightarrow q\bar{q}$ transition
($p$ is the quark momentum and $k-p$ is the antiquark momentum,
and similarly to $q\to \gamma q$ we can take the vectors
$\pb$ and $\kb$ parallel).
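For orientation, the thermal phase-space structure of (\ref{eq:20}) can be sketched numerically. In the illustrative Python fragment below only the statistical factors $p^{2}n_{F}(p)[1-n_{F}(p-k)]\theta(p-k)$ and the degeneracy $d_{br}=4N_c$ are taken from the text; the temperature and the constant value assumed for $dP/dk\,dL$ are toy inputs, not results of the full calculation.

```python
import math

T = 0.3          # QGP temperature in GeV (toy value for illustration)
N_c = 3
d_br = 4 * N_c   # number of quark and antiquark states, as in Eq. (20)

def n_F(p):
    """Thermal Fermi distribution of Eq. (30)."""
    return 1.0 / (math.exp(p / T) + 1.0)

def brems_rate(k, dP_dkdL=1.0, p_max=20.0, n_steps=4000):
    """Trapezoidal evaluation of the p-integral in Eq. (20) for a
    toy momentum-independent emission probability dP/(dk dL); the
    theta function restricts the integration to p > k."""
    h = (p_max - k) / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        p = k + i * h
        w = 0.5 if i in (0, n_steps) else 1.0
        total += w * p**2 * n_F(p) * (1.0 - n_F(p - k))
    return d_br * dP_dkdL * total * h / (k**2 * (2.0 * math.pi)**3)
```

The steep decrease of the resulting rate with $k$ reflects the thermal suppression $\sim e^{-k/T}$ of the quark distribution, which is why the high-$k_T$ spectrum computed in Sec.~III is dominated by the hottest stage of the fireball.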
In the LCPI formalism \cite{LCPI1}
the probability of the $q\to \gamma q$ transition
per unit length (in terms of the fractional photon momentum $x=k/p$)
can be written as
\vspace{0.25cm}\begin{eqnarray}
\frac{d P_{q\rightarrow \gamma q}}{d
x dL}=2\mbox{Re}
\int\limits_{0}^{\infty} d
z
\exp{\left(-i\frac{z}{L_{f}}\right)}
\hat{g}(x)
\nonumber\\ \times
\left[
{\cal K}(\ro_{2},z|\ro_{1},0)
-{\cal K}_{vac}(\ro_{2},z|\ro_{1},0)
\right]\bigg|_{\ro_{1,2}=0}\,,
\label{eq:50}
\end{eqnarray}
where $L_{f}=2M(x)/\epsilon^{2}$ with $M(x)=E_qx(1-x)$,
$\epsilon^{2}=m_{q}^{2}x^{2}+m_{\gamma}^{2}(1-x)$
(in general for $a\to b+c$ transition
$\epsilon^{2}=m_{b}^{2}x_{c}+m_{c}^{2}x_{b}-m_{a}^{2}x_{b}x_{c}$),
$\hat{g}$ is the vertex operator, given by
\begin{equation}
\hat{g}(x)=\frac{V(x)}{M^{2}(x)}\frac{\partial }{\partial \ro_{1}}\cdot
\frac{\partial }{\partial \ro_{2}}\,
\label{eq:60}
\end{equation}
with
\begin{equation}
V(x)=z_{q}^{2}\alpha_{em}(1-x+x^{2}/2)/x,
\label{eq:70}
\end{equation}
$\alpha_{em}=e^2/4\pi$ the fine-structure constant. In (\ref{eq:50})
$\cal{K}$ is the retarded Green's function of a two-dimensional
Schr\"odinger equation
with the Hamiltonian
\begin{equation}
\hat{\cal{H}}=-\frac{1}{2M(x)}
\left(\frac{\partial}{\partial \ro}\right)^{2}
+ v(\ro)\,,
\label{eq:80}
\end{equation}
and
${\cal{K}}_{vac}$ is the Green function for $v=0$.
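Note that the expression for $\epsilon^{2}$ quoted after Eq. (\ref{eq:50})
follows from the general $a\to b+c$ formula with $a=q$, $b=\gamma$
($x_{b}=x$, $m_{b}=m_{\gamma}$) and $c=q$ ($x_{c}=1-x$, $m_{c}=m_{q}$):
\[
\epsilon^{2}=m_{\gamma}^{2}(1-x)+m_{q}^{2}x-m_{q}^{2}x(1-x)
=m_{q}^{2}x^{2}+m_{\gamma}^{2}(1-x)\,,
\]
while the assignment $a=\gamma$, $b=q$ ($x_{b}=x$), $c=\bar{q}$
($x_{c}=1-x$) gives
$\epsilon^{2}=m_{q}^{2}-m_{\gamma}^{2}x(1-x)$, the value used below for the
$\gamma\to q\bar{q}$ transition.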
The potential $v$ can be written as
\begin{equation}
v=-i P(x\rho)\,,
\label{eq:90}
\end{equation}
where the function $P(\rho)$ describes interaction of the color singlet
$q\bar{q}$ dipole with the QGP. In the HTL scheme with static coupling
constant $g$
$P(\rho)$ can be written as \cite{AZ,PA_C}
\begin{equation}
P(|\ro|)= \frac{g^{2}C_{F}T}{(2\pi)^{2}}\int d\qbt [1-\exp(i\ro \qbt)]
C(\qbt)\,,
\label{eq:100}
\end{equation}
\begin{equation}
C(\qbt)=\frac{m_{D}^{2}}{\qbt^{2}(\qbt^{2}+m_{D}^{2})}\,,
\label{eq:110}
\end{equation}
where $C_F=4/3$ is the quark Casimir, $m_{D}=gT[(N_{c}+N_{F}/2)/3]^{1/2}$
is the Debye mass.
In the approximation of static color Debye-screened scattering centers
\cite{GW} the function $P(\rho)$ reads
\begin{equation}
P(\rho)=
\frac{n{\sigma}(\rho )}{2}\,,
\label{eq:120}
\end{equation}
where $n$ is the number density
of the color centers, and $\sigma(\rho )$
is the well-known dipole cross section. For running
$\alpha_s$ the dipole cross section reads \cite{NZ12}
\begin{equation}
\sigma(|\ro|)={C_{T}C_{F}}\int d\qbt \alpha_{s}^{2}(q_{T}^2)
\frac{[1-\exp(i\qbt\ro)]}{(\qbt^{2}+m_{D}^{2})^{2}}\,,
\label{eq:130}
\end{equation}
where $C_T$ is the color center Casimir.
The dipole form (\ref{eq:120}) was used in
our previous jet quenching analyses \cite{RAA08,RAA12} with $\alpha_s(q^2)$
frozen at some value $\alpha_{s}^{fr}$ at low momenta.
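For illustration, after the angular integration in (\ref{eq:130}) the factor $1-\exp(i\qbt\ro)$ is replaced by $1-J_{0}(q_{T}\rho)$, and the remaining radial integral can be evaluated numerically. In the sketch below the values of the Debye mass, of $\Lambda_{QCD}$, and the one-loop form of $\alpha_s(q^2)$ frozen at $\alpha_{s}^{fr}$ are illustrative assumptions, chosen only to exhibit the qualitative behaviour.

```python
import math

LAMBDA2 = 0.04              # Lambda_QCD^2 in GeV^2 (illustrative)
ALPHA_FR = 0.5              # frozen coupling, as in the jet quenching fits
M_D = 0.5                   # Debye mass in GeV (illustrative)
C_T, C_F = 3.0, 4.0 / 3.0   # gluon color-center and quark Casimirs
N_F = 2.5

def alpha_s(q2):
    """One-loop running coupling frozen at ALPHA_FR at low momenta."""
    b0 = (33.0 - 2.0 * N_F) / (12.0 * math.pi)
    if q2 <= LAMBDA2:
        return ALPHA_FR
    return min(1.0 / (b0 * math.log(q2 / LAMBDA2)), ALPHA_FR)

def bessel_j0(x, n=400):
    """J_0 from its integral representation (trapezoidal rule)."""
    h = math.pi / n
    s = sum((0.5 if i in (0, n) else 1.0) * math.cos(x * math.sin(i * h))
            for i in range(n + 1))
    return s * h / math.pi

def sigma_dipole(rho, q_max=10.0, n=2000):
    """Radial form of Eq. (130):
    sigma(rho) = 2 pi C_T C_F int dq q alpha_s^2(q^2)
                 [1 - J_0(q rho)] / (q^2 + M_D^2)^2."""
    h = q_max / n
    s = 0.0
    for i in range(1, n + 1):   # the integrand vanishes at q = 0
        q = i * h
        w = 0.5 if i == n else 1.0
        s += w * q * alpha_s(q * q)**2 * (1.0 - bessel_j0(q * rho)) / (q * q + M_D * M_D)**2
    return 2.0 * math.pi * C_T * C_F * s * h
```

The small-$\rho$ behaviour $\sigma(\rho)\propto \rho^{2}$ (color transparency) makes the emission rate sensitive to the transverse size of the dominant dipole configurations, and hence to the quark quasiparticle mass.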
For numerical calculations it is convenient to use the representation
of the spectrum
as a sum of the Bethe-Heitler term and an absorptive correction
due to higher order rescatterings (describing the Landau-Pomeranchuk-Migdal
suppression \cite{Z_1998rev})
\begin{equation}
\frac{d P_{q\to \gamma q}}{d
xdL}=
\frac{d P_{q\to \gamma q}^{BH}}{d x dL}+
\frac{d P_{q\to \gamma q}^{abs}}{d
x dL}\,.
\label{eq:140}
\end{equation}
It can be derived
by expanding the Green's function ${\cal K}$ in
(\ref{eq:50})
in a series in the potential
$v$ (see \cite{Z_SLAC1} for details).
The Bethe-Heitler contribution corresponds to the term linear in $v$.
It can be written as
\begin{equation}
\frac{d P_{q\to \gamma q}^{BH}}{d
xdL}=\frac{n}{2}
\sum\limits_{\{\lambda\}} \int
d\ro\,
|\Psi(x,\ro,\{\lambda\})|^{2}
\sigma(\rho
x)\,\,,
\label{eq:150}
\end{equation}
where $\{\lambda\}=(\lambda_{q},\lambda_{q'},
\lambda_{\gamma})$ is the set of helicities,
$
\Psi(x,\ro,\{\lambda\})
$
is the light-cone wave function for the $q\to \gamma q$ transition
(with $\lambda_{q'}=\lambda_{q}$).
The contribution of the higher order rescatterings reads
\vspace{0.25cm}\begin{eqnarray}
\frac{d P_{q\to \gamma q}^{abs}}{d
x dL}=-\frac{n^2}{4}\mbox{Re}
\sum\limits_{\{\lambda\}}
\int\limits_{0}^{\infty}
dz
\int
d\ro\,
\Psi^{*}(x,\ro,\{\lambda\})\nonumber\\
\times\sigma(\rho
x)
\Phi(x,\ro,\{\lambda\},z,0)
\exp\left(-\frac{iz}{L_{f}}\right)\,,
\label{eq:160}
\end{eqnarray}
where
\vspace{0.25cm}\begin{eqnarray}
\Phi(x,\ro,\{\lambda\},z_{2},z_1)=
\int d\ro'{\cal
K}(\ro,z_{2}|\ro',z_{1})
\nonumber\\ \times
\Psi(x,\ro',\{\lambda\})\,\sigma(\rho'
x)
\label{eq:170}
\end{eqnarray}
is the solution of the Schr\"odinger equation with the
boundary
condition
$
\Phi(x,\ro,\{\lambda\},z_{1},z_{1})=
\Psi(x,\ro,\{\lambda\})\sigma(\rho
x)\,.
$
For $\gamma\to q \bar{q}$ one can obtain similar formulas.
But now $M(x)=E_{\gamma}x(1-x)$ ($x$ is the quark fractional momentum),
$\epsilon^{2}=m_{q}^{2}-m_{\gamma}^{2}x(1-x)$, and
\begin{equation}
V(x)=z_{q}^{2}\alpha_{em}N_c[x^{2}+(1-x)^{2}]/2\,.
\label{eq:180}
\end{equation}
The formulas for the light-cone wave functions for the
$q\to \gamma q$ and $\gamma\to q\bar{q}$ transitions are similar to those
for the QED processes $e\to \gamma e$ and $\gamma\to e\bar{e}$
given in \cite{Z_1998rev}.
We will present the results for two versions
of the model: for the phenomenological scenario with running $\alpha_s$
and for the pQCD HTL scenario with static
coupling \cite{AMY1}.
For the scenario with running coupling, as in our jet quenching analyses,
we use the dipole formulas (\ref{eq:120}), (\ref{eq:130}).
In jet quenching analyses \cite{RAA08,RAA12} we used
the quark quasiparticle mass $m_{q}=300$ MeV.
For the relevant temperature region $T\sim (1-3)T_c$,
it is supported by the analysis of Ref. \cite{LH} of the lattice data in
the quasiparticle model.
For the induced gluon emission the results are practically
insensitive to the light quark mass. But for the photon emission
the value of the quark mass is important.
As was shown recently in Ref.~\cite{mq_sQGP}, in a strongly coupled
QGP the thermal quark mass may be much smaller
than the pQCD prediction based on the HTL scheme.
Therefore, for the phenomenological scenario we present the results for two
very different values of the thermal quark mass: $m_{q}=300$ MeV (as
obtained in Ref.~\cite{LH}) and $m_q=50$ MeV. The latter value is much smaller
than the thermal pQCD HTL quark mass, and the results should be close to
those for massless quarks, as supported by the analysis of \cite{mq_sQGP}.
As in jet quenching analyses, for the version with running $\alpha_s$ we use
the Debye mass obtained in the lattice calculations \cite{Bielefeld_Md}, which
give $m_{D}/T$ slowly decreasing with $T$
($m_{D}/T\approx 3$ at $T\sim 1.5T_{c}$, $m_{D}/T\approx 2.4$ at
$T\sim 4T_{c}$).
For the pQCD HTL scenario we use for the quark and Debye masses the standard
pQCD values ($m_q=gT/\sqrt{3}$, $m_{D}=gT[(N_{c}+N_{F}/2)/3]^{1/2}$),
and the formulas (\ref{eq:100}), (\ref{eq:110}) for the function $P(\rho)$.
To account for the mass suppression for strange quarks we take $N_f=2.5$
as in our jet quenching analyses \cite{RAA12}.
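For orientation, the standard HTL mass formulas quoted above can be evaluated directly; the values $\alpha_s=0.3$ and $T=0.3$ GeV in this short sketch are assumptions chosen to match the fixed-coupling scenario discussed below.

```python
import math

def pqcd_masses(alpha_s=0.3, T=0.3, N_c=3, N_f=2.5):
    """Standard pQCD HTL quasiparticle masses:
    m_q = g T / sqrt(3),  m_D = g T sqrt((N_c + N_f/2)/3),
    with g = sqrt(4 pi alpha_s)."""
    g = math.sqrt(4.0 * math.pi * alpha_s)
    m_q = g * T / math.sqrt(3.0)
    m_D = g * T * math.sqrt((N_c + N_f / 2.0) / 3.0)
    return m_q, m_D
```

For these inputs $m_q\approx 0.34$ GeV, i.e. close to the $m_q=300$ MeV of the phenomenological scenario and much larger than the $m_q=50$ MeV variant motivated by \cite{mq_sQGP}.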
\section{Numerical results for photon spectrum in $AA$ collisions}
In calculating the photon spectrum in $AA$ collisions
we perform the four volume integration
using the proper time $\tau$ and rapidity $Y$ variables
\begin{equation}
\tau=\sqrt{t^2-z^2}\,,\,\,\,\,
Y=\frac{1}{2}\ln\left(\frac{t+z}{t-z}\right)\,.
\label{eq:190}
\end{equation}
In these coordinates the photon spectrum reads
\begin{equation}
\frac{dN}{dy d\kb_T}=\int \tau d\tau dY d\ro\, \omega'
\frac{dN(T',k')}{dt'dV'd\kb'}\,,
\label{eq:200}
\end{equation}
where primed quantities correspond to the comoving frame, and
$\omega'=k'=|\kb'|$.
We describe the plasma fireball
at $\tau>\tau_0$
in the Bjorken model \cite{Bjorken} without the transverse expansion
that gives the entropy density $s\propto 1/\tau$.
We present the results for the ideal gas model (with
the temperature dependence of the entropy density $s\propto T^3$), that gives
$
T=T_0(\tau_0/\tau)^{1/3}
$ in the plasma phase. We also perform calculations for the temperature
dependence of the entropy density $s(T)$ obtained in the lattice simulation
\cite{EoS}.
As in jet quenching analyses \cite{RAA12} we take $\tau_0=0.5$ fm.
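The ideal-gas cooling law quoted above follows from combining $s\propto 1/\tau$ with $s\propto T^{3}$; a trivial numerical sketch (the value of $T_0$ is illustrative, not a fitted initial temperature):

```python
def bjorken_temperature(tau, T0=0.4, tau0=0.5):
    """Ideal-gas Bjorken cooling: s ~ 1/tau together with s ~ T^3
    gives T = T0 (tau0/tau)^(1/3); tau and tau0 in fm, T0 in GeV."""
    return T0 * (tau0 / tau) ** (1.0 / 3.0)
```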
To account qualitatively for the fact that
the process of the QGP production is not instantaneous,
we take the entropy density $\propto \tau$ in the interval $0<\tau<\tau_0$.
However, the contribution of this region is relatively small.
We calculate the initial
density profile of the QGP fireball
at the proper time $\tau_{0}$
assuming that the initial entropy is proportional to
the charged particle pseudorapidity
multiplicity density at $\eta=0$
calculated in the two component wounded nucleon
Glauber model \cite{KN}
(the details and model parameters can be found in \cite{Z_syn2,Z_MCGL}).
In the space-time integral (\ref{eq:200}) we drop the points
with $T_0<T_c$ (here $T_c=160$ MeV is the deconfinement temperature)
at $\tau=\tau_0$.
For the ideal gas model we treat the crossover region at
$T\sim T_c$ as a mixed phase,
and take the entropy density in this phase $\propto 1/\tau$ \cite{Bjorken}.
In the mixed phase we account for the photon emission only from
the QGP phase. Note that contribution of the space-time region with $T\sim T_c$
to the photon spectrum
(both for the ideal gas fireball model and for the lattice version of
the entropy
density) is relatively small
at $k_T\gtrsim 1.5-2$ GeV.
\begin{figure}
\begin{center}
\epsfig{file=Fig1.eps,height=6cm,angle=0}
\end{center}
\caption[.]{
The photon spectrum $(1/2\pi k_T)dN/dydk_T$
averaged over the azimuthal angle for Au+Au collisions at
$\sqrt{s}=0.2$ TeV in the $0-20$\% centrality range.
Solid: the sum of the $q\to \gamma q$ and $q\bar{q}\to \gamma$
processes for running coupling with $\alpha_s^{fr}=0.5$
for $m_q=300$ MeV,
dotted: the same as solid but for $m_q=50$ MeV,
dot-dashed: the sum of the $q\to \gamma q$ and $q\bar{q}\to \gamma$
processes for the HTL scheme for $\alpha_s=0.3$,
dashed: the sum of the collinear process with the LO $2\to 2$
processes for the HTL scheme for $\alpha_s=0.3$.
The theoretical curves are for the ideal gas model for $\tau_0=0.5$ fm.
Data points are from Ref.~\cite{PHENIX_ph_PR}.
}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=Fig2.eps,height=6cm,angle=0}
\end{center}
\caption[.]{
Same as in Fig.~1 but for calculations with the entropy density $s(T)$ from
the lattice simulation \cite{EoS}.
}
\end{figure}
For the phenomenological scenario with running QCD coupling
we assume that at low momenta $\alpha_s$ is
frozen at the value $\alpha_{s}^{fr}=0.5$,
that is supported by our jet quenching analyses of the
nuclear modification factors $R_{AA}$ \cite{RAA12}
and $I_{AA}$ \cite{Z_IAA}
in Au+Au collisions at
$\sqrt{s}=0.2$ TeV.
For the HTL scenario with fixed coupling we take $\alpha_s=0.3$.
In Fig.~1 we show our results for the ideal gas model of the QGP
for the photon
spectrum $dN/dy d\kb_T=(1/2\pi k_T)dN/dy dk_T$
(averaged over the azimuthal angle)
for Au+Au collisions at $\sqrt{s}=0.2$ TeV for
$0-20$\% centrality bin for the phenomenological scenario
with running coupling for $m_q=300$ and $50$ MeV and for the HTL scenario.
We compare our results with the data from PHENIX
\cite{PHENIX_ph_PR}.
The theoretical curves have been obtained by integrating
in (\ref{eq:200})
up to
$\tau_{max}=10$ fm.
At $k_T\gtrsim 1$ GeV the photon spectrum is only weakly sensitive
to $\tau_{max}$. This occurs because the main contribution at $k_T\gg T_0$ comes
from the hottest space-time region of the QGP with $\tau$ up to
several units of $\tau_0$.
For $\tau_{max}=R_A\approx 6.4$ fm the photon spectrum
at $k_T\sim 1$ GeV is reduced only by $\sim 10$\%
and for $k_T\gtrsim 2$ GeV the change in the spectrum is negligible.
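This insensitivity can be illustrated with a toy $\tau$-integrand for (\ref{eq:200}), modeling the local emission rate by a Boltzmann-like tail $\propto T^{2}e^{-k_T/T}$; the exponent and prefactor are illustrative assumptions, not the full collinear rate.

```python
import math

def bjorken_T(tau, T0=0.4, tau0=0.5):
    """Ideal-gas Bjorken cooling (T0 and tau0 are illustrative)."""
    return T0 * (tau0 / tau) ** (1.0 / 3.0)

def tau_integrand(tau, kT):
    """Toy tau-integrand of the spectrum integral (200):
    tau * T^2 * exp(-kT/T), meant only to show that for kT >> T0
    the emission is dominated by the early, hottest stage."""
    T = bjorken_T(tau)
    return tau * T * T * math.exp(-kT / T)
```

For $k_T=2$ GeV the integrand at $\tau=\tau_0$ exceeds that at $\tau=8$ fm by more than two orders of magnitude, in line with the weak $\tau_{max}$ dependence quoted above.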
For the HTL scenario we also present in Fig.~1 the sum of the
contributions from the collinear
processes and from the LO contribution due to $2\to 2$ processes
in the form obtained in \cite{AMY1}.
From Fig.~1 one can see that
the results for the phenomenological scenario with running coupling
and $m_q=300$ MeV are close to that for the HTL scenario with fixed coupling.
But for the phenomenological scenario with $m_q=50$ MeV the photon
spectrum is bigger than that for the HTL scenario by a factor of $\sim 2$.
Note that the photon yields obtained for $m_q=300$ and
$50$ MeV do not follow the power law $1/m_q^2$. This is due to
the Landau-Pomeranchuk-Migdal suppression,
described by the absorptive term on the right-hand side of (\ref{eq:140}),
that becomes very strong for small $m_q$. In this regime the quark mass
dependence becomes very weak. Our calculations show that the photon spectrum
for $m_q=100$ MeV is smaller than that for $m_q=50$ MeV only by $\sim 20$\%.
From Fig.~1 one can see that at $k_T\gtrsim 1.5-2$
GeV for the HTL scenario the theoretical curves
for the sum of the contribution from the collinear processes
$q\to \gamma q$ and $q\bar{q}\to \gamma$ and the LO mechanisms
underpredict the data typically by a factor of $\sim 5-7$.
Assuming that for the phenomenological scenario the relative
effect of the $2\to 2$ processes is similar to that for the HTL scenario
\footnote{The incorporation of running $\alpha_s$ for
the $2\to 2$ processes
has not been elaborated yet. However, calculations
in the HTL scheme with static $\alpha_s$ show that the contribution
of the LO processes has relatively low sensitivity to $\alpha_s$
(say, for $\alpha_s=0.3$ the growth of the LO contribution
as compared to that for $\alpha_s=0.2$ is $\lesssim 10-25$\%).
Therefore, the effect of the running coupling constant
on the $2\to 2$ processes should not be very large.
Note that even for the scenario with a very small thermal quark mass
\cite{mq_sQGP} the contribution of the $2\to 2$ processes should not
change significantly, because it depends only logarithmically on the
quark quasiparticle mass. Moreover, even for a vanishing quasiparticle mass
in an infinite QGP, for the $2\to 2$ processes in the expanding QGP
the effective
quark virtuality cannot be much smaller than $1/\tau_{ev}$, where
$\tau_{ev}\sim 1-4$ fm is the typical QGP evolution time
dominating the photon emission.
},
we can conclude that even for the version with $m_q=50$ MeV
the experimental spectrum will be underestimated by a factor of $\sim 3$.
The situation becomes better for the results obtained for
the lattice temperature dependence of the entropy density, that are shown
in Fig.~2. In this case the theoretical predictions are approximately
increased by a factor of $\sim 1.5-2$, and the disagreement with the data
becomes smaller. The inclusion of the hadron gas phase \cite{Gale_HG}
can improve the agreement with the data at low $k_T$ ($k_T\lesssim 1$
GeV). But it cannot increase significantly the photon spectrum
at $k_T\sim 2-3$ GeV. Thus, we can conclude that
in this high-$k_T$ region, even for the scenario with a small thermal
quark mass,
one cannot avoid some underestimation of the photon spectrum.
The agreement with the data at high $k_T$ can be improved
for a smaller value of the thermalization time $\tau_0$.
Our calculations for $\tau_0=0.25$ fm show that the theoretical predictions
increase by a factor of $\sim 2$ at $k_T\sim 2-3$ GeV.
However, such a small value of $\tau_0$ does not seem realistic,
because it is of the order of the inverse saturation scale $1/Q_s$
($Q_s\sim 1-1.5$ GeV for RHIC conditions \cite{Lappi_qs}).
In this time region the description in terms of the
pre-equilibrium glasma stage is more
appropriate. The considerable increase of the photon spectrum
for $\tau_0=0.25$ fm can be viewed as an indication that
the glasma phase contribution to the photon production at $k_T\gtrsim 2$ GeV
can be large. In fact, the glasma effect can be considerably bigger,
because the typical Lorentz force that quarks undergo
in the glasma is by a factor of $\sim 10-20$ bigger than that
for the Debye-screened color centers in the thermalized QGP \cite{AZ_glasma}.
However, due to the finite formation length of the collinear photon emission,
an accurate analysis of the collinear processes including the
pre-equilibrium glasma stage is a complicated task: owing to
the nonlocal nature of the photon emission, the photon spectrum
should be sensitive
to the whole process of the QGP formation, and one simply cannot distinguish
the photon emission from the glasma and from the QGP at $\tau\sim \tau_0$.
It is worth noting that the magnitude of the jet quenching is not
strongly affected by the variation of $\tau_0$ from $\sim 0.5$ fm
to $\sim 0.25$ fm. This is due to a strong reduction of the
radiative parton energy loss by the finite size effects for the first fm/c of
the matter evolution \cite{Z_OA}. For the same reason the glasma effect
on jet quenching also turns out to be small \cite{AZ_glasma}.
\section{Summary}
We have studied the role of running coupling and
the effect of variation of the thermal quark mass on
the contribution of the collinear processes
$q\to \gamma q$ and $q\bar{q}\to \gamma$ in the QGP phase to the photon
spectrum in $AA$ collisions in a phenomenological scheme
similar to that used in our previous successful jet quenching analyses
based on the LCPI approach \cite{LCPI1} to the induced gluon emission.
The analysis of the collinear photon emission is also performed within
the LCPI formalism \cite{LCPI1}.
We reduce the calculation of the photon emission rate
to solving a two-dimensional Schr\"odinger equation.
For the pQCD model with static coupling constant and the thermal quark mass
predicted by the HTL scheme our method
is equivalent to the well known AMY formalism \cite{AMY1}.
We found that for the model of the QGP evolution that allows one
to obtain a reasonable description of jet quenching
both models for the photon emission
considerably underestimate the photon spectrum
measured by PHENIX \cite{PHENIX_ph_PR}.
For the phenomenological scenario with running $\alpha_s$
with a very small thermal quark mass
($m_q=50$ MeV) the contribution of the higher order collinear processes
summed with the LO $2\to 2$ processes can
explain $\sim 50$\%
of the experimental photon yield from PHENIX \cite{PHENIX_ph_PR}
at $k_T\sim 2-3$ GeV.
Thus, we conclude that,
for the picture of the QGP evolution and for the model of
multiple parton scattering in the QGP consistent with data on jet quenching,
the photon emission from the QGP stage alone is not enough to fit
the data on the photon production in Au+Au collisions
at $k_T\sim 2- 3$ GeV even for the scenario with a very small thermal quark
mass.
\begin{acknowledgments}
This work has been supported by the RScF grant 16-12-10151.
\end{acknowledgments}
On a compact Riemannian manifold $({\bf M},g)$ certain
objects of geometric
interest, such as Killing vector fields and harmonic $1$-forms
must satisfy
additional differential equations when appropriate Levi-Civita curvature
conditions are imposed. This leads to obstructions to
the existence of
these objects, known as Bochner-type vanishing theorems.
For example, the
well known theorems of Bochner state that if the Ricci
tensor
is non-negative
(resp. non-positive) then every harmonic 1-form (resp. every
Killing vector
field) must be parallel and if the Ricci tensor is negative
(resp. positive) at
one point then there are no harmonic $1$-forms (resp. no
Killing vector
fields). By the Hodge theory the nonexistence of harmonic
$1$-forms leads to the vanishing of
the first de Rham
cohomology group $H^1({\bf M},{\bf R})$.
\par
On a Hermitian manifold $({\bf M},g,J)$ other objects of
geometric interest are holomorphic vector fields and
holomorphic forms. Closely
related to the hermitian structure is the Chern connection.
On a compact
Hermitian manifold holomorphic vector fields and holomorphic
$(p,0)$-forms
have
to satisfy additional differential equations when
appropriate curvature
conditions on the Chern connection are imposed. This also
leads to some
obstructions to the existence of these objects, the so-called
vanishing theorems
for holomorphic sections. For example, if the mean curvature
of the Chern
connection (in the sense of \cite{Ko1}) on a compact
Hermitian manifold is
non-negative (resp. non-positive) then every holomorphic
$(1,0)$-form (resp.
every holomorphic vector field) is parallel with respect to
the Chern
connection and if the mean curvature is positive (resp.
negative) at one point
then there are no holomorphic $(1,0)$-forms (resp. no
holomorphic vector
fields) (for more general formulations see \cite{KW,Ga}). By
the
Hodge theory (see
\cite{H}) this leads
to the vanishing of the Dolbeault cohomology group
$H^{1,0}({\bf M},{\bf C})$.
\par
In Kaehler geometry there is a close relation between
the metric objects
(Killing vector fields, harmonic $1$-forms) on one hand and
the holomorphic objects
(holomorphic vector fields, holomorphic $(1,0)$-forms) on
the other hand.
For a Kaehler manifold the Levi-Civita connection coincides
with the Chern
connection. Then the mean curvature of the Chern connection
is exactly the
Ricci tensor. Thus, on a compact Kaehler manifold the
positive (resp. negative)
definiteness of the Ricci tensor is an obstruction to the
existence not
only of harmonic $1$-forms (resp. Killing vector fields)
but also to the
existence of holomorphic $(1,0)$-forms (resp. holomorphic
vector fields).
Moreover, on a compact Kaehler manifold a $1$-form is
harmonic iff it is
analytic (i.e. its $(1,0)$-part is holomorphic) and every
Killing vector field is analytic (see \cite{Ya2}). So the
first Betti number
$b_1 = \dim H^1({\bf M},{\bf R})$ vanishes iff the Hodge
number $h^{1,0} = \dim
H^{1,0}({\bf M},{\bf C})$ vanishes.
\par
In general, on a compact Hermitian manifold there is no
remarkable relation
between Killing and holomorphic vector fields and between
harmonic $1$-forms
and holomorphic $(1,0)$-forms. \par
In this paper we consider compact balanced Hermitian
manifolds and try to find
a connection between the metric objects and the holomorphic
objects mentioned above.
Balanced manifolds have been studied intensively in
\cite{Mi,A_B1,A_B2,A_B3};
in \cite{Ga} they are called semi-Kaehler of special type.
This class of manifolds
includes the class of Kaehler manifolds but also many
important classes of
non-Kaehler manifolds, such as: complex solvmanifolds,
twistor spaces of oriented Riemannian 4-manifolds, 1-dimensional
families of Kaehler manifolds
(see \cite{Mi}), hermitian compact manifolds with flat Chern
connection (see
\cite{Ga}), twistor spaces of oriented distinguished Weyl
structure on compact self-dual 4-manifolds \cite{Ga1},
twistor spaces of quaternionic Kaehler manifolds
\cite{P,AGI}, manifolds obtained as modification of compact
Kaehler manifolds
\cite{A_B1} and of compact balanced manifolds \cite{A_B2},
see also
\cite{A_B3}.
\par
We construct on a Hermitian manifold a symmetric $(1,1)$-tensor
$H$ using the torsion and the curvature of the Chern
connection. On a compact balanced Hermitian
manifold we give in Theorem \ref{th42} necessary and
sufficient conditions in terms of the tensor $H$ for a
harmonic $1$-form to be analytic and for an analytic $1$-form
to be harmonic.
This allows us to obtain a vanishing theorem of Bochner type
on compact balanced Hermitian manifolds
(Theorem \ref{th46}). We prove that if $H$ is positive
definite then $b_1 = 0$
and $h^{1,0} = 0$. We obtain an obstruction to the
existence of Killing vector
fields in terms of the Ricci tensor (or the Chern form) of the
Chern connection. In
Theorem \ref{th39} we prove that if the Chern form of the
Chern connection on
a compact balanced Hermitian manifold is non-positive then
every Killing
vector field is analytic; if moreover the Chern form is
negative then there
are no Killing vector fields.\par
It is well known that on a compact Riemannian manifold a
smooth vector field is
Killing iff it is affine with respect to the Levi-Civita
connection \cite{Ya2}. Thus, on a compact Kaehler manifold
every affine vector field is analytic. On a compact
balanced Hermitian manifold we find
necessary and sufficient conditions in terms of the Lie
derivative of the Chern
connection for a smooth vector field to be analytic. In
particular we prove that every affine vector field with
respect to the Chern
connection is analytic
on a compact balanced Hermitian manifold.
\section{Preliminaries}
Let $({\bf M},J,g)$ be a $2n$-dimensional Hermitian manifold
with complex
structure $J$ and
Riemannian metric $g$. The algebra of all $C^{\infty }$
vector fields on {\bf
M} will be denoted by {\bf XM}. The complex structure $J$
on the tangent
bundle ${\bf TM}$ of ${\bf M}$ induces a splitting of the
complexified tangent
bundle ${\bf T}_c{\bf M}$ into two complementary subbundles,
conjugate to each
other: ${\bf T}_c{\bf M} = {\bf T}^{(1,0)}{\bf M} + {\bf
T}^{(0,1)}{\bf M}$. The elements of ${\bf T}^{(1,0)}{\bf M}$
(resp. ${\bf T}^{(0,1)}{\bf M}$) are the (complex)
tangent vectors of type $
(1,0)$ (resp. of type $(0,1)$). Each real tangent vector
field $X$ can be
expressed in a unique way as a sum: $X = U + \bar{U}$,
where $U =
\frac{1}{2} (X - \sqrt {-1}JX) \in {\bf T}^{(1,0)}{\bf M}$
and
$\bar{U} = \frac{1}{2} (X +
\sqrt {-1}JX) \in {\bf T}^{(0,1)}{\bf M}$. With respect to
local
holomorphic coordinates
$\{z^{\alpha }\}$, ($\alpha = 1,...,n$) we have $U =
X^{\alpha } \frac{\partial
}{\partial z^{\alpha }}, \bar{U} = X^{\bar{\alpha }}
\frac{\partial }{\partial
z^{\bar{\alpha }}}$ (summation
convention is assumed further in the paper). The induced
complex structure on
the cotangent bundle ${\bf T}^{*}{\bf M}$ (also denoted by
$J$) is defined by:
$(J\omega )(X) = - \omega (JX)$, where $\omega $ is a real
1-form and $X$ is a
real vector field on ${\bf M}$. For the complexified
cotangent bundle ${\bf
T}^{*}_c{\bf M}$ we have the splitting: ${\bf T}^{*}_c{\bf
M} = \Lambda ^{1,0}({\bf
M}) + \Lambda ^{0,1}({\bf M})$. The elements of $\Lambda
^{1,0}({\bf M})$ (resp.
$\Lambda ^{0,1}({\bf M})$) are the (complex) 1-forms of type
$(1,0)$ (resp. of
type $(0,1)$). Each real 1-form $\omega $ can be expressed
in a unique way
as a sum $\omega = \beta + \bar{\beta }$, where $\beta =
\frac{1}{2}(\omega -
\sqrt {-1}J\omega ) \in \Lambda ^{1,0}({\bf M}); \bar{\beta
} = \frac{1}{2}(\omega +
\sqrt {-1}J\omega ) \in \Lambda ^{0,1}({\bf M})$. With
respect to local
holomorphic coordinates we have $\beta = \omega _{\alpha
}dz^{\alpha },
\bar{\beta } = \omega _{\bar{\alpha }}dz^{\bar{\alpha }}$.
In the whole paper
all tensors and connections will be extended complex
multilinearly to the
complexification ${\bf T}_c{\bf M}$ of ${\bf T}{\bf M}$.
\par
The Kaehler form $\Omega $ on {\bf M} is
defined by $\Omega (X,Y) = g(JX,Y); X,Y \in {\bf XM}$. The
Lee form $\theta $ of
the Hermitian structure is defined by $\theta = - \delta
\Omega \circ J$.
\par
The Levi-Civita connection and the canonical Chern
connection (the Hermitian
connection) will be denoted by $\nabla $ and $D$,
respectively. We recall that
the Chern connection $D$ is the unique linear connection
preserving the metric
and the complex structure with torsion tensor $T$, having
the following
property: $T(JX,Y) = T(X,JY)$. This implies (e.g.
\cite{A-Z}):
\begin{equation}\label{1}
T(JX,Y) = JT(X,Y), \qquad X,Y \in {\bf XM}.
\end{equation}
The two connections are related by the following identity
\begin{equation}\label{2}
g(\nabla _XY,Z) = g(D_XY,Z) + \frac{1}{2} d\Omega (JX,Y,Z),
\qquad X,Y,Z \in
{\bf XM}.
\end{equation}
Let $e_{1},...,e_{2n}$ be an orthonormal local basis on {\bf
M}. We consider the
following Ricci-type tensors associated with the curvature
tensor $K$ of the
Chern connection:
$$k(X,Y) = - \frac{1}{2}\sum
_{j=1}^{2n}g(K(X,JY)e_{j},Je_{j});
\quad k^{*}(X,Y) = -
\frac{1}{2}\sum _{j=1}^{2n}g(K(e_{j},Je_{j})X,JY);$$
$$ s(X,Y) = \sum_{j=1}^{2n}g(K(e_{j},X)Y,e_{j}).$$
The $(1,1)$-form $\kappa $ corresponding to the tensor $k$
represents the first Chern
class of ${\bf M}$ (further we will call it the Chern form)
and the (1,1)-form $\kappa ^*$
corresponding to the tensor $k^*$ is the mean curvature of
the holomorphic
tangent bundle ${\bf T}^{(1,0)}{\bf M}$ with the hermitian
metric
$g$.
\par
Using the torsion tensor $T$ of the Chern connection we
construct another
remarkable symmetric (1,1) tensor as follows
$$
t(X,Y) = \sum_{\alpha ,\beta =1}^{n}g(T(E_{\alpha },E_{\beta
}),X)g(T(E_{\bar{\alpha }},E_{\bar{\beta}}),Y),
$$
where $E_1,...,E_n,E_{\bar{1}},...,E_{\bar{n}}$ is a
hermitian basis on ${\bf
T}_c{\bf M}$. From the definition and (\ref{1}) it follows
that $t$ is
a symmetric, $J$-invariant, positive semi-definite tensor, i.e. $t(X,Y) =
t(Y,X) = t(JX,JY);
\quad t(X,X) \ge 0, \quad X,Y \in {\bf XM}$.
\par
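For completeness, a short verification of the $J$-invariance of $t$, using (\ref{1}), the $J$-invariance of $g$, and $JE_{\alpha } = \sqrt{-1}E_{\alpha }$, $JE_{\bar{\alpha }} = -\sqrt{-1}E_{\bar{\alpha }}$:

```latex
% Since g(A,JX) = -g(JA,X) and, by (1), JT(E_a,E_b) = T(JE_a,E_b):
g(T(E_{\alpha },E_{\beta }),JX)
   = -g(T(JE_{\alpha },E_{\beta }),X)
   = -\sqrt{-1}\,g(T(E_{\alpha },E_{\beta }),X), \qquad
g(T(E_{\bar{\alpha }},E_{\bar{\beta }}),JY)
   = \sqrt{-1}\,g(T(E_{\bar{\alpha }},E_{\bar{\beta }}),Y),
% so the two factors of sqrt(-1) cancel in each summand of t, giving
t(JX,JY) = t(X,Y).
```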
As we shall see below, the tensor $t$ plays an important
role for the
coincidence of harmonic 1-forms with analytic 1-forms. \par
For a real 1-form $\omega $ we denote by $\omega ^{\#}$
the
corresponding
vector field defined by: $\omega (Y) := g(\omega ^{\#},Y), Y
\in {\bf XM}$ and
for
a real vector field $X$ we denote by $\omega _X$ the
corresponding real 1-form
defined by: $\omega _X(Y) := g(X,Y)$. If $\omega = \omega
_{\alpha
}dz^{\alpha }$ is an $(1,0)$-form (resp. $\omega
_{\bar{\alpha
}}dz^{\bar{\alpha
}}$ is a $(0,1)$-form) then $\omega ^{\#} = g^{\alpha
\bar{\beta }}\omega
_{\alpha
}\frac{\partial }{\partial z^{\bar{\beta }}}$ is the
corresponding
$(0,1)$-vector field
(resp. $\omega ^{\#} = g^{\alpha \bar{\beta }}\omega
_{\bar{\beta
}}\frac{\partial
}{\partial z^{\alpha }}$ is the $(1,0)$-vector field) and if
$X = X^{\alpha
}\frac{\partial }{\partial z^{\alpha }}$ is a
$(1,0)$-vector field (resp. $X
=
X^{\bar{\alpha }}\frac{\partial }{\partial z^{\bar{\alpha
}}}$ is a
$(0,1)$-vector
field) then $\omega _X = g_{\alpha \bar{\beta }}X^{\alpha
}dz^{\bar{\beta }}$
is a $(0,1)$-form (resp. $\omega _X = g_{\bar{\alpha }\beta
}X^{\bar{\alpha
}}dz^{\beta }$ is an $(1,0)$-form).
\par
For a real 1-form $\omega $ using (\ref{2}) we calculate
\begin{equation}\label{3}
(\nabla _X\omega )Y = (D_X\omega)Y - \frac{1}{2}d\Omega
(JX,Y,\omega ^{\#}),
\qquad X,Y \in {\bf XM}.
\end{equation}
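For the reader's convenience, (\ref{3}) follows from (\ref{2}) in one line; writing $\omega (Y) = g(\omega ^{\#},Y)$,

```latex
(\nabla _X\omega )Y = X\omega (Y) - g(\nabla _XY,\omega ^{\#})
   = X\omega (Y) - g(D_XY,\omega ^{\#})
     - \frac{1}{2}d\Omega (JX,Y,\omega ^{\#})
   = (D_X\omega )Y - \frac{1}{2}d\Omega (JX,Y,\omega ^{\#}).
```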
From (\ref{3}) it follows that
\begin{equation}\label{4}
\delta \omega = - \sum_{i=1}^{2n}(D_{e_i}\omega )e_i -
\theta (\omega
^{\#}). \end{equation}
\indent
We recall here the definition of a
balanced manifold and some characterizations given in
\cite{Mi} and \cite{Ga}, for
completeness: \par
DEFINITION: {\it A balanced} manifold {\bf M} is a compact
complex $n$-manifold
which satisfies one of the following equivalent
conditions:\par
i) {\bf M} admits a hermitian structure $(g,J)$ such that
$d\Omega ^{n-1} = 0$;
\par
ii) {\bf M} admits a hermitian structure $(g,J)$ such that
$\delta \Omega = 0$;
\par
iii) {\bf M} admits a hermitian structure $(g,J)$ such that
$\theta = 0$; \par
iv) {\bf M} admits a hermitian structure $(g,J)$ such that
$\Delta_{\partial}f =
\Delta_{\bar{\partial}}f = \frac{1}{2}\Delta_df$, for any
smooth function $f$
on
${\bf M}$, where $\Delta_{\partial},
\Delta_{\bar{\partial}}, \Delta_df$
denote
the Laplacians with respect to the operators $\partial,
\bar{\partial},d$,
respectively.
\par
v) there are no non-zero positive $(n-1,n-1)$-currents on
${\bf M}$ which are
$(n-1,n-1)$-components of boundaries. \par
In this paper we shall use essentially iii) and iv).\par
On balanced manifolds the first Ricci tensor $k$ coincides
with the third Ricci
tensor $s$ (see e.g. \cite{Ba}).\par
If $({\bf M},J,g)$ is a balanced manifold then the equality
(\ref{4}) takes
the form \begin{equation}\label{5}
\delta \omega = - \sum_{i=1}^{2n}(D_{e_i}\omega )e_i .
\end{equation}
\section{Analytic and Killing vector fields}
A real vector field $\xi $ is said to be {\it analytic} if
$L_{\xi}J = 0$, where
$L_{\xi }$ denotes the Lie derivative with respect to $\xi
$. The vector field
$\xi $ is analytic iff the $(2,0)$ part of $D\omega _{\xi }$
vanishes. In local
holomorphic coordinates this condition can be written as
follows
\begin{equation}\label{6}
D_{\alpha }\xi _{\beta } = 0,
\end{equation}
i.e. the $(1,0)$-part of $\xi $ is a holomorphic vector field.
We consider the following real 1-form $\omega $ defined by
$$
\omega = \xi ^{\beta}D_{\alpha }\xi _{\beta }dz^{\alpha } +
\xi ^{\bar{\beta
}}D_{\bar{\alpha }}\xi _{\bar{\beta }}dz^{\bar{\alpha }}.
$$
Using essentially that $DJ = 0$ and (\ref{3}), we find
$$
\delta \omega = - \Vert D_{\alpha }\xi _{\beta } \Vert^2 -
2Re\left[ \xi ^{\beta
}D^{\alpha }D_{\alpha }\xi _{\beta } + \xi ^{\beta }\theta
^{\alpha }D_{\alpha
}\xi _{\beta } \right],
$$ where $Re(f)$ denotes the real part of a complex-valued
function $f$ and $\Vert D_{\alpha }\xi _{\beta } \Vert^2$ is
the norm of the $(2,0)+(0,2)$-part of $D\omega _{\xi }$. The
norm of the $(1,1)$-part of $D\omega _{\xi }$ will be
denoted by $\Vert D_{\alpha }\xi _{\bar{\beta }} \Vert^2$.\\
Thus, on a compact Hermitian manifold we have the formula
\begin{equation}\label{7}
\int_{{\bf M}} \left\{ \Vert D_{\alpha }\xi _{\beta }
\Vert^2 + 2Re\left[ \xi
^{\beta
}D^{\alpha }D_{\alpha }\xi _{\beta } + \xi ^{\beta }\theta
^{\alpha }D_{\alpha
}\xi _{\beta } \right]\right\} \,dV = 0.
\end{equation}
Taking into account the Ricci formula
\begin{equation}\label{7*}
D^{\alpha }D_{\alpha }\xi _{\beta } = D_{\alpha }D^{\alpha
}\xi _{\beta } +
k^*_{\beta \bar{\sigma }}\xi ^{\bar{\sigma }}
\end{equation}
from (\ref{7}) we obtain
\begin{pro}\label{pro31}
Let $\xi $ be a real vector field on a compact Hermitian
manifold. The following
conditions are equivalent:
\par i) $\xi $ is analytic;
\par ii) $D^{\alpha }D_{\alpha }\xi _{\beta } + \theta
^{\alpha }D_{\alpha
}\xi _{\beta } = 0;$
\par iii) $D_{\alpha }D^{\alpha }\xi _{\beta } + k^*_{\beta
\bar{\sigma }}\xi
^{\bar{\sigma }} + \theta ^{\alpha }D_{\alpha }\xi _{\beta }
= 0.$
\end{pro}
Hence, on a compact balanced Hermitian manifold we have
\cite{A-Z}
\begin{pro}\label{pro32}
Let $\xi $ be a real vector field on a compact balanced
Hermitian
manifold. The following
conditions are equivalent:
\par i) $\xi $ is analytic;
\par ii) $D^{\alpha }D_{\alpha }\xi _{\beta } = 0;$
\par iii) $D_{\alpha }D^{\alpha }\xi _{\beta } + k^*_{\beta
\bar{\sigma }}\xi
^{\bar{\sigma }} = 0.$
\end{pro}
On every compact balanced Hermitian manifold the Ricci
formula
(\ref{7*}) leads to the
following integral formula
\begin{equation}\label{4.6}
\int_{{\bf M}} \Vert D_{\alpha }\xi _{\beta } \Vert^2 \,dV
= \int_{{\bf M}} \Vert
D_{\alpha }\xi _{\bar{\beta }} \Vert^2 \,dV - \int_{{\bf M}}
k^*(\omega
^{\#},\omega ^{\#}) \,dV.
\end{equation}
DEFINITION. A real vector field $\xi $ on a Hermitian
manifold is said to be {\it
affine Hermitian} if it is an affine vector field with respect
to the Chern connection
$D$, i.e. $L_{\xi }D = 0$. If the linear connection $L_{\xi
}D$ preserves the
complex structure $J$, i.e.
\begin{equation}\label{8}
(L_{\xi }D) \circ J = J \circ (L_{\xi }D) ,
\end{equation}
then we call $\xi $ a {\it complex Hermitian} vector field. \par
In local holomorphic coordinates the condition (\ref{8}) is
equivalent to the
equations
$$
(L_{\xi }D)^{\lambda}_{\alpha \bar{\beta }} = 0; \qquad
(L_{\xi
}D)^{\bar{\lambda }}_{\alpha \beta } = 0.
$$
Using the general formulas expressing $L_{\xi }D$ in terms of the
torsion and
curvature (see \cite{Ya1}) we find
$$
(L_{\xi }D)^{\lambda }_{\alpha \bar{\beta }} = D_{\alpha
}D_{\bar{\beta }}\xi
^{\lambda }; \qquad (L_{\xi }D)^{\bar{\lambda }}_{\alpha
\beta } = D_{\alpha
}D_{\beta }\xi ^{\bar{\lambda }}.
$$
Applying Proposition \ref{pro32} we obtain
\begin{th}\label{th33}
Let $\xi $ be a real vector field on a compact balanced
Hermitian manifold. The
following conditions are equivalent:
\par i) $\xi $ is analytic;
\par ii) $\xi $ is complex Hermitian.
\end{th}
In the Kaehler case the Chern connection coincides with the
Levi-Civita
connection. From Theorem \ref{th33} we have
\begin{co}\label{co34}
A real vector field $\xi $ on a compact Kaehler manifold is
analytic iff $L_{\xi
}\nabla $ preserves the complex structure.
\end{co}
Every affine Hermitian vector field is complex Hermitian.
From Theorem
\ref{th33} we obtain
\begin{th}
Every affine Hermitian vector field on a compact balanced
Hermitian manifold is
analytic.
\end{th}
This result extends the well known result that every affine
vector field on a
compact Kaehler manifold is Killing and hence, it is
analytic. \par
We recall that a real vector field $\xi $ is said to be a
Killing vector field if
$L_{\xi }g = 0$. This condition is equivalent to
\begin{equation}\label{9}
(\nabla _X\omega _{\xi })Y + (\nabla _Y\omega _{\xi })X = 0,
X,Y \in {\bf XM}.
\end{equation}
The condition (\ref{9}) implies $\delta \omega _{\xi } = 0$.
\par
Let $\xi $ be a real vector field. We consider the following
real 1-form
$$
\phi = \xi ^{\beta }D_{\beta }\xi _{\alpha }dz^{\alpha } +
\xi ^{\bar{\beta
}}D_{\bar{\beta }}\xi _{\bar{\alpha }}dz^{\bar{\alpha }}.
$$
Using (\ref{4}) we find
\begin{equation}\label{10}
\delta \phi = - 2Re \left[ D_{\beta }\xi _{\alpha }
D^{\alpha }\xi ^{\beta
}\right] -
2Re \left[ \xi ^{\beta }D^{\alpha }D_{\beta }\xi _{\alpha }
+ \xi ^{\beta
}\theta ^{\alpha }D_{\beta }\xi _{\alpha }\right].
\end{equation}
Using the Ricci identities and (\ref{4}) we calculate
\begin{equation}\label{11}
2Re\left[\xi ^{\beta }D^{\alpha }D_{\beta }\xi _{\alpha
}\right] = s(\xi
,\xi ) -
\frac{1}{2}\xi \delta \omega _{\xi } - \frac{1}{2}J\xi
\delta \omega _{J\xi } -
\frac{1}{2}\xi \theta (\xi ).
\end{equation}
On a compact Hermitian manifold we derive from (\ref{11})
and (\ref{10}) the
following formula
$$
\int_{{\bf M}} \left\{ 2Re\left[D_{\beta }\xi _{\alpha
}D^{\alpha }\xi ^{\beta
} \right] +
s(\xi ,\xi ) - \frac{1}{2}(\delta \omega _{\xi })^2 -
\frac{1}{2}(\delta
\omega
_{J\xi })^2\right\} \,dV -$$ $$- \int_{{\bf M}}\left\{
\frac{1}{2}\xi \theta
(\xi ) + 2Re\left[ \xi
^{\beta }\theta ^{\alpha }D_{\beta }\xi _{\alpha
}\right]\right\} \,dV = 0.
$$
Thus, we proved
\begin{pro}\label{pro36}
If $\xi $ is a real vector field on a compact balanced
Hermitian manifold, then
\begin{equation}\label{12}
\int_{{\bf M}} \left\{ 2Re\left[D_{\beta }\xi _{\alpha
}D^{\alpha }\xi ^{\beta
} \right] +
k(\xi ,\xi ) - \frac{1}{2}(\delta \omega _{\xi })^2 -
\frac{1}{2}(\delta
\omega _{J\xi })^2 \right\} \,dV = 0.
\end{equation}
\end{pro}
Now, let $\xi $ be a Killing vector field. Using (\ref{3})
from (\ref{9}) we get
\begin{equation}\label{13}
D_{\alpha }\xi _{\beta } + D_{\beta }\xi _{\alpha } = 0.
\end{equation}
Using (\ref{13}) from Proposition \ref{pro36} we get
\begin{pro}\label{pro37}
If $\xi $ is a Killing vector field on a compact balanced
Hermitian manifold, then
\begin{equation}\label{14}
\int_{{\bf M}} \left\{ \Vert D_{\beta }\xi _{\alpha }
\Vert^2 - k(\xi ,\xi )
+ \frac{1}{2}(\delta \omega _{J\xi })^2 \right\} \,dV = 0.
\end{equation}
\end{pro}
As a corollary we obtain the following theorem of Bochner
type
\begin{th}\label{th39}
Let $({\bf M},g,J)$ be a compact balanced Hermitian
manifold.
\par i) If the Chern form $\kappa $ is non-positive definite,
then
every
Killing vector field $\xi $
on {\bf M} is analytic and satisfies the equality $$k(\xi
,\xi) = \delta (\omega _{J\xi }) = 0.$$
\par ii) If the Chern form $\kappa $ is negative definite,
then
there are no Killing vector
fields other than zero, i.e. the group of isometries of ({\bf
M},g,J) is discrete.
\end{th}
\begin{rem}\label{rem1}
If the Chern form $\kappa $ is negative definite then the
first
Chern class is negative.
By the theorem of S.Kobayashi \cite{Ko} it follows that
there are no holomorphic
vector fields and hence there are no Killing vector fields
with respect to any
Kaehler metric on {\bf M}.
\end{rem}
EXAMPLE. Let ${\bf M} = G/\Gamma$ be a compact quotient of
a
complex Lie
group $G$ with respect to its discrete subgroup $\Gamma $.
It is well known that
every compact Hermitian manifold with flat Chern connection
is isomorphic with
${\bf M}$ endowed with its flat hermitian structure \cite{Go}.
If
the group $G$ is non-abelian, then ${\bf M}$ is a non-Kaehler
balanced
Hermitian manifold. Since $k = k^* = 0$, every Killing
vector field is
analytic and by (\ref{4.6}) it is parallel. (There
is a general result of P.Gauduchon \cite{Ga} that on a
compact Hermitian
manifold with negative semi-definite mean curvature $k^*$
every analytic vector
field is parallel (see also \cite{Ko1})).
\begin{rem}\label{rem2}
It is clear from above that Proposition \ref{pro37} and
Theorem \ref{th39} are
valid if we only assume the condition (\ref{13}) which is
weaker than the Killing
condition (\ref{9}).
\end{rem}
\section{Harmonic and analytic 1-forms}
A real 1-form $\omega $ is {\it analytic } if
its $(1,0)$-part is holomorphic. In terms of the Chern
connection this
condition is equivalent
to the condition
\begin{equation}\label{4.1}
D_{\alpha }\omega _{\bar{\beta }} = 0
\end{equation}
We recall that a real 1-form $\omega $ on a Riemannian
manifold is {\it
harmonic} if it is closed and co-closed, i.e.
\begin{equation}\label{4.2}
d\omega = 0; \qquad \delta \omega = 0.
\end{equation}
Using (\ref{3}) the condition $d\omega = 0$ is equivalent to
the following two
conditions:
\begin{equation}\label{4.3}
D_{\alpha }\omega _{\bar{\beta }} - D_{\bar{\beta }}\omega
_{\alpha } = 0.
\end{equation}
\begin{equation}\label{4.4}
D_{\alpha }\omega _{\beta } - D_{\beta }\omega _{\alpha } =
- T_{\alpha \beta
}^{\sigma }\omega _{\sigma }.
\end{equation}
The condition (\ref{4.3}) implies
\begin{equation}\label{4.5}
\delta (J\omega ) = 0.
\end{equation}
We are going to obtain necessary and sufficient conditions
for a
harmonic
1-form to be analytic and for a holomorphic 1-form to be
harmonic on a compact balanced Hermitian manifold. For this
purpose we need some
integral
formulas.
\begin{pro}\label{pro41}
For every real 1-form $\omega $ on a compact balanced
Hermitian manifold the
following integral formulas are valid:
\begin{equation}\label{4.7}
\int_{{\bf M}} \frac{1}{2} \Vert D_{\alpha }\omega _{\beta }
- D_{\beta }\omega
_{\alpha } + T_{\alpha \beta }^{\sigma }\omega _{\sigma }
\Vert^2 \,dV =
\end{equation} $$\int_{{\bf
M}} \left[\Vert D_{\alpha }\omega _{\bar{\beta }} \Vert^2 +
k(\omega
^{\#},\omega
^{\#}) - k^*(\omega ^{\#},\omega ^{\#}) + \frac{1}{2}
t(\omega ^{\#},\omega
^{\#}) \right] \,dV - $$
$$
- \int_{{\bf M}} \left\{ \frac{1}{2} (\delta \omega )^2 +
\frac{1}{2} (\delta (J\omega
))^2 - 2Re \left[ T_{\alpha \beta }^{\sigma }\omega _{\sigma
}(D^{\alpha }\omega
^{\beta } - D^{\beta }\omega ^{\alpha })\right]\right\} \,dV
= 0.
$$
\begin{equation}\label{4.8}
\int_{{\bf M}} \left\{ k(\omega ^{\#},\omega ^{\#}) -
k^*(\omega ^{\#},\omega
^{\#})
+ \frac{1}{2} 2Re \left[ T_{\alpha \beta }^{\sigma }\omega
_{\sigma }(D^{\alpha
}\omega ^{\beta } - D^{\beta }\omega ^{\alpha })\right]
\right\} \,dV +
\end{equation}
$$
+ \int_{{\bf M}} 2Re \left[ T_{\alpha \beta }^{\sigma
}D^{\alpha }\omega _{\sigma
}\omega ^{\beta } \right] \,dV = 0 $$
\end{pro}
\indent {\it Proof:} The formula (\ref{4.7}) follows
immediately from (\ref{12})
and (\ref{4.6}).
\par
To prove (\ref{4.8}) we consider the following real 1-form
$$
\psi = T_{\alpha \beta }^{\sigma }\omega _{\sigma }\omega
^{\beta }dz^{\alpha } +
T_{\bar{\alpha }\bar{\beta }}^{\bar{\sigma }}\omega
_{\bar{\sigma }}\omega
^{\bar{\beta }}dz^{\bar{\alpha }}.
$$
Applying (\ref{4}) we have
\begin{equation}\label{4.9}
- \delta \psi = 2Re \left[ D^{\alpha }T_{\alpha \beta
}^{\sigma }\omega _{\sigma
}\omega ^{\beta }\right] + \end{equation} $$ + 2Re \left[
T_{\alpha \beta
}^{\sigma }D^{\alpha }\omega
_{\sigma }\omega ^{\beta }\right] + Re \left[ T_{\alpha
\beta }^{\sigma }\omega
_{\sigma }(D^{\alpha }\omega ^{\beta } - D^{\beta }\omega
^{\alpha })\right]
$$
From the second Bianchi identity we get
\begin{equation}\label{4.10}
2Re \left[ D^{\alpha }T_{\alpha \beta }^{\sigma }\omega
_{\sigma }\omega ^{\beta
}\right] = s(\omega ^{\#},\omega ^{\#})
- k^*(\omega ^{\#},\omega ^{\#}).
\end{equation}
Substituting (\ref{4.10}) into (\ref{4.9}) and integrating
the obtained
equality over {\bf M} we obtain (\ref{4.8}). \hfill {\bf
Q.E.D.}
\par
We define the tensor $H$ by the equality
\begin{equation}\label{4.11}
H(X,Y) := k(X,Y) - k^*(X,Y) - \frac{1}{2}t(X,Y), \qquad X,Y
\in {\bf XM}
\end{equation}
From this definition it follows that the tensor $H$ is
symmetric and $J$-invariant.\\
We have
\begin{th}\label{th42}
Let $({\bf M},J,g)$ be a compact balanced Hermitian
manifold.\\[1mm]
\indent i) A harmonic 1-form $\omega $ is analytic iff
$\int_{{\bf M}}H(\omega
^{\#},\omega ^{\#}) \,dV = 0$;\\[2mm]
\indent ii) An analytic 1-form $\omega $ is harmonic iff
$\int_{{\bf M}}H(\omega ^{\#},\omega ^{\#}) \,dV = 0$.
\end{th}
\indent {\it Proof:} The theorem follows from the following
two lemmas:
\begin{lem}\label{lm43}
Let $\omega $ be a harmonic 1-form on a compact balanced
Hermitian
manifold. Then we have
\begin{equation}\label{4.12}
\int_{{\bf M}} \left[ \Vert D_{\alpha }\omega _{\bar{\beta
}} \Vert^2 + H(\omega
^{\#},\omega ^{\#}) \right] \,dV = 0.
\end{equation}
\end{lem}
\begin{lem}\label{lm44}
Let $\omega $ be an analytic 1-form on a compact balanced
Hermitian manifold.
Then we have
$$
\int_{{\bf M}} \left[ \frac{1}{2} \Vert D_{\alpha }\omega
_{\beta } - D_{\beta
}\omega _{\alpha } + T_{\alpha \beta }^{\sigma }\omega
_{\sigma } \Vert^2 +
H(\omega ^{\#},\omega ^{\#})\right] \,dV = 0.
$$
\end{lem}
The proof of Lemma \ref{lm43} follows after substitution of
(\ref{4.3}), (\ref{4.4})
and (\ref{4.5}) into (\ref{4.7}). Combining (\ref{4.8}) with
(\ref{4.7}) and using
(\ref{4.1}) we get the proof of Lemma \ref{lm44}. This
completes the proof of
Theorem \ref{th42}. \hfill {\bf Q.E.D.}
\begin{rem}\label{rem41}
On a Kaehler manifold the tensor $H$ vanishes identically
and Theorem
\ref{th42} implies the well known result that on a compact
Kaehler manifold every
harmonic 1-form is analytic and vice versa.
\end{rem}
Using Hodge theory (see e.g. \cite{Be}) from Theorem
\ref{th42} we obtain
\begin{co}\label{co45}
For a compact balanced Hermitian manifold with zero tensor
$H$ the de Rham
cohomology group $H^1({\bf M},{\bf R})$ is isomorphic to the
Dolbeault
cohomology group $H^{1,0}({\bf M},{\bf C})$.
\end{co}
Now we can state our main result (vanishing theorem of
Bochner type)
\begin{th}\label{th46}
Let $({\bf M},J,g)$ be a compact balanced Hermitian
manifold.
\par i) If the tensor $H$ is positive semi-definite then
every analytic $1$-form
$\omega $ is harmonic and vice versa, every harmonic
$1$-form $\omega $ is
analytic; moreover, in both cases $H(\omega ^{\#},\omega
^{\#}) = 0$.
\par ii) If the tensor $H$ is positive definite on {\bf M}
then:
\par - there are no harmonic $1$-forms on {\bf M};
\par - there are no analytic $1$-forms on {\bf M}.
\end{th}
\indent {\it Proof:} The proof of this theorem follows
immediately from Lemma
\ref{lm43} and Lemma \ref{lm44}. \hfill {\bf Q.E.D.}\\
Applying the Hodge theory we get
\begin{th}\label{th47}
Let $({\bf M},J,g)$ be a compact balanced Hermitian
manifold with positive
definite tensor $H$. Then
\par i) the first Betti number $b_1 = \dim H^1({\bf M},{\bf R}) = 0$;
\par ii) the Hodge number $h^{1,0} = \dim H^{1,0}({\bf M},{\bf C}) = 0$.
\end{th}
Using (\ref{4.6}) we can state Lemma \ref{lm43} as
\begin{lem}\label{l43'}
Let $\omega $ be a harmonic $1$-form on a compact balanced
Hermitian
manifold. Then the following formula is true
\begin{equation}\label{4.13}
\int_{{\bf M}} \left[ \Vert D_{\alpha }\omega _{\beta }
\Vert^2 + k(\omega
^{\#},\omega ^{\#}) - \frac{1}{2}t(\omega ^{\#},\omega
^{\#})\right] \,dV = 0
\end{equation}
\end{lem}
From (\ref{4.13}) we get the following vanishing theorem of
Bochner type
\begin{th}\label{th48}
Let $({\bf M},J,g)$ be a compact balanced Hermitian
manifold.
\par i) If the tensor $k - \frac{1}{2}t$ is positive semi-definite
then the
vector field $\omega ^{\#}$ corresponding to any harmonic
$1$-form
$\omega $ is a
holomorphic
vector field and $$(k - \frac{1}{2}t)(\omega ^{\#},\omega
^{\#}) = 0.$$
\par ii) If the tensor $k - \frac{1}{2}t$ is positive
definite on {\bf M} then there are
no harmonic $1$-forms on {\bf M} and consequently $b_1 = 0$.
\end{th}
\begin{rem}
If the tensor $k - \frac{1}{2}t$ is positive definite on
{\bf M} then the first Chern
form $\kappa $ is positive, by the properties of $t$. From
the
Calabi-Yau-Aubin theory
(see \cite{Be}) there exists a Kaehler metric on {\bf M} with
positive Ricci tensor
and the last conclusion in ii) of Theorem \ref{th48} follows
from the classical
vanishing theorem of Bochner.
\end{rem}
It is well known that on a compact Hermitian manifold if the
mean curvature
$k^*$ is positive semi-definite and it is positive definite
in one point then there
are no holomorphic $(p,0)$-forms (\cite{KW,Ga}, see also
\cite{Wu,Ko1}), in
particular
there are no holomorphic $(1,0)$-forms (the latter fact for
balanced manifolds
follows also from (\ref{4.6})). In view of Theorem
\ref{th46}, we can generalize
the latter fact for compact balanced manifolds as follows
\begin{th}\label{th49}
If on a compact balanced Hermitian manifold the mean
curvature satisfies the
condition
$$k^*(X,X) < k(X,X) - \frac{1}{2}t(X,X), \qquad X \in {\bf
XM}$$
then there are neither holomorphic $(1,0)$-forms nor
harmonic $1$-forms on
{\bf M}.
\end{th}
As a corollary we also have
\begin{th}\label{th10}
Let $({\bf M},J,g)$ be a compact balanced Hermitian
manifold. If the tensors
$H$ and $k^*$ are positive semi-definite on {\bf M} and
$k^*$ is positive definite
in one point then there are no harmonic $1$-forms on {\bf M}
and consequently
$b_1 = 0$.
\end{th}
In all real-world experiments we use the Adam optimizer with a default regularization (weight decay) of $3\times 10^{-4}$, except in the `no regularization' case, when it is set to 0. In all 'same data' experiments we split the data between the two agents by giving the bigger part of the data to agent 1; in the 'different data' experiments and in the heterogeneous data experiments where we vary the \textit{Alpha} hyperparameter, we split the data equally between the two agents. The non-equal split helps us to see another effect in the experiments: whether agent 2 (with less data) benefits from the communication with agent 1 (with more data). In the heterogeneous data experiments we explore how significantly data heterogeneity affects the behavior of the KD scheme. That is why we design an almost ideal setting in all such experiments: the 'same model' setting (except for the experiments with RF and MLP) and an equal split of the data between the agents in terms of its amount.
\paragraph{Toy experiments.}
The toy experiments solve a linear regression problem of the form $\mA \xx^\star = \bb$ where $\mA \in \R^{n \times d}$ and $\xx^\star \in \R^d$ is randomly generated for $n=1.5d$ and $d=100$. Then, the data $\mA$ and $\bb$ are split between the two agents at proportion $0.6 / 0.4$. This is done randomly in the `same data' case, whereas in the `different data' case the data is sorted according to $\bb$ before splitting to maximize heterogeneity. This experiment with the linear kernel is meant to check whether our theory is correct, in particular whether the EKD scheme really works.
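A minimal sketch of this data generation and splitting procedure (the function name and its arguments are illustrative, not taken from the released code):

```python
import numpy as np

def make_toy_split(d=100, hetero=False, seed=0):
    """Generate A x* = b with n = 1.5 d and split the rows between
    two agents at proportion 0.6 / 0.4; in the heterogeneous
    ('different data') case the rows are sorted by b first."""
    rng = np.random.default_rng(seed)
    n = int(1.5 * d)
    A = rng.standard_normal((n, d))
    x_star = rng.standard_normal(d)
    b = A @ x_star
    # 'different data': sort by b to maximize heterogeneity;
    # 'same data': a random permutation of the rows.
    order = np.argsort(b) if hetero else rng.permutation(n)
    split = int(0.6 * n)
    i1, i2 = order[:split], order[split:]
    return (A[i1], b[i1]), (A[i2], b[i2])
```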
\paragraph{Real world experiments.}
The real-world experiments are conducted using CNN and MLP networks on MNIST, MLP network and RF model on MNIST, and VGG16\footnote{This model is also a convolutional neural network, but in our experiments it is bigger than the CNN model.} \citep{simonyan2015deep} and CNN models on CIFAR10 datasets. Further, we split the training data randomly at proportion $0.7 / 0.3$ in the 'same data' setting. For the 'different data' setting, we split the data by labels: agent 1 has '0' to '4' labeled data points, agent 2 has '5' to '9'. Then we randomly take an $\textit{Alpha} = 0.1$ portion of the data from each agent, combine these portions, and randomly redistribute the data points of this combined dataset back to the two agents.
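A sketch of this label-based split with the \textit{Alpha} mixing step, assuming MNIST-style integer labels; the function and variable names are our own:

```python
import numpy as np

def heterogeneous_split(y, alpha=0.1, seed=0):
    """Split indices by label (agent 1: labels 0-4, agent 2: 5-9),
    then take a random `alpha` portion from each agent, combine the
    portions and randomly return the points to both agents."""
    rng = np.random.default_rng(seed)
    idx1 = np.where(y <= 4)[0]
    idx2 = np.where(y >= 5)[0]
    take1 = rng.choice(idx1, size=int(alpha * len(idx1)), replace=False)
    take2 = rng.choice(idx2, size=int(alpha * len(idx2)), replace=False)
    pool = np.concatenate([take1, take2])
    rng.shuffle(pool)
    keep1 = np.setdiff1d(idx1, take1)
    keep2 = np.setdiff1d(idx2, take2)
    # redistribute the combined pool randomly, half to each agent
    half = len(pool) // 2
    a1 = np.concatenate([keep1, pool[:half]])
    a2 = np.concatenate([keep2, pool[half:]])
    return a1, a2
```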
\section{Alternating KD with regularization} \label{Appendix: AKD}
\paragraph{Different models.}
Keeping the notation of Section \ref{AKD}, we can construct the following matrix:
\[
\mK =
\begin{pmatrix}
\mL & 0\\
0 & \mM
\end{pmatrix} =
\begin{pmatrix}
\mL_{11} & \mL_{12} & 0 & 0\\
\mL_{21} & \mL_{22} & 0 & 0\\
0 & 0 & \mM_{11} & \mM_{12}\\
0 & 0 & \mM_{21} & \mM_{22}
\end{pmatrix}.
\]
Notice that each of the sub-blocks $\mL$ and $\mM$ is a symmetric positive semi-definite matrix. This makes $\mK$ a symmetric positive semi-definite matrix, and it admits the eigendecomposition:
\[
\mK = \mV^\top \mD \mV \quad \text{ and } \quad \mV = \begin{pmatrix}
\mV_1 & \mV_2 & \tilde\mV_1 & \tilde\mV_2
\end{pmatrix}\,,
\]
where $\mD$ and $\mV$ are $\R^{2(N_1+N_2) \times 2(N_1+N_2)}$ diagonal and orthogonal matrices, respectively. This means that the blocks $\mV_1, \mV_2, \tilde\mV_1, \tilde\mV_2$ are also orthogonal to each other.
Then one can deduce:
\[
\mL_{ij} = \mV^\top_i \mD \mV_j, \quad \text{and} \quad \mM_{ij} = \tilde\mV^\top_i \mD \tilde\mV_j \quad \forall i,j = 1,2\,.
\]
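A quick numerical sanity check of this block structure (illustrative sizes only): we assemble $\mK$ from two PSD Gram matrices, verify positive semi-definiteness, and recover the blocks from the eigendecomposition; note numpy's convention $\mK = \mV \mD \mV^\top$ differs from the text's $\mV^\top \mD \mV$ only in which factor is transposed.

```python
import numpy as np

rng = np.random.default_rng(1)

def gram(n):
    # B B^T is a generic symmetric positive semi-definite Gram matrix
    B = rng.standard_normal((n, n))
    return B @ B.T

L, M = gram(4), gram(3)
K = np.block([[L, np.zeros((4, 3))],
              [np.zeros((3, 4)), M]])
w, V = np.linalg.eigh(K)            # K = V diag(w) V^T
assert w.min() > -1e-8              # K is positive semi-definite
R = V @ np.diag(w) @ V.T            # reconstruct K from V and D
assert np.allclose(R[:4, :4], L) and np.allclose(R[4:, 4:], M)
```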
In the AKD setting only agent 1 has labeled data. The solution learned from the dataset $\mathcal{D}_1$, evaluated at the set $\mathcal{X}_2$, is the following:
\begin{equation} \label{eq: KD_supervised}
g^1_0(\mathcal{X}_2) = \mL_{21} (c \mI + \mL_{11})^{-1} \yy_1 = \mV_2^\top( (\mV_1^\top (c \mI + \mD) \mV_1)^{-1} \mV_1^\top \mD)^\top \yy_1\,.
\end{equation}
Let us introduce notation:
\[
\mP_1 = \mV_1 (\mV_1^\top (c \mI + \mD) \mV_1)^{-1} \mV_1^\top (c \mI + \mD) \quad \text{and} \quad \tilde\mP_2 = \tilde\mV_2 (\tilde\mV_2^\top (c \mI + \mD) \tilde\mV_2)^{-1} \tilde\mV_2^\top (c \mI + \mD)\,.
\]
These are well-known oblique (weighted) projection matrices onto the subspaces spanned by the columns of the matrices $\mV_1$ and $\tilde\mV_2$, respectively, where the scalar product is defined by the Gram matrix $\mG = c \mI + \mD$. Similarly one can define $\mP_2$ and $\tilde\mP_1$.
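A small numerical check (sizes and names are illustrative) that $\mP_1$ is indeed the idempotent projector onto the span of $\mV_1$ that is orthogonal with respect to the scalar product defined by $\mG = c\mI + \mD$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, c = 6, 2, 0.5
V1 = np.linalg.qr(rng.standard_normal((n, n)))[0][:, :k]
D = np.diag(rng.uniform(0.0, 2.0, size=n))
G = c * np.eye(n) + D                # Gram matrix of the weighted inner product
P1 = V1 @ np.linalg.inv(V1.T @ G @ V1) @ V1.T @ G
assert np.allclose(P1 @ P1, P1)      # idempotent: a projector
x = rng.standard_normal(n)
r = x - P1 @ x                       # residual of the projection
assert np.allclose(V1.T @ G @ r, 0)  # G-orthogonal to span(V1)
```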
Given the introduced notation and using the fact that $\mV_2^\top \mV_1 = \0$, we can rewrite equation \eqref{eq: KD_supervised} as:
\begin{equation} \label{eq: Reg_KD_supervised}
g^1_{0}(\mathcal{X}_2) = \mV_2^\top \mP_1^\top \mV_1 \yy_1\,.
\end{equation}
Similarly, given $\mathcal{X}_1 \subseteq \mathcal{D}_1$, once agent 2 has learned from $\hat{\mathcal{Y}}_2 = g^1_{0}(\mathcal{X}_2)$, the prediction $\hat{\mathcal{Y}}_1 = g^1_{1}(\mathcal{X}_1)$ inferred by agent 2 can be written as:
\[
g^1_{1}(\mathcal{X}_1) = \tilde\mV_1^\top \tilde{\mP}_2^\top \tilde\mV_2 \mV_2^\top \mP_1^\top \zz_1, \quad \text{ where } \quad \zz_1 = \mV_1 \yy_1\,.
\]
At this point we need to introduce additional notation:
\[
\mC_1 = \mV_1 \tilde\mV_1^\top \quad \text{and} \quad \tilde\mC_2 = \tilde\mV_2 \mV_2^\top\,.
\]
These are matrices of contractive linear mappings.
One can now repeat the whole process again, replacing $\mathcal{Y}_1$ by $\hat{\mathcal{Y}}_1$. The predictions of agent 1 and agent 2 after $2t$ such rounds of AKD are:
\begin{gather} \label{eq: Degrad}
g^1_{2t}(\mathcal{X}_2) = \mV_2^\top \mP_1^\top \left(\mC_1 \tilde{\mP}_2^\top \tilde\mC_2 \mP_1^\top \right)^t \zz_1 \quad \text{and} \quad g^1_{2t+1}(\mathcal{X}_1) = \tilde\mV_1^\top \tilde{\mP}_2^\top \tilde\mC_2 \mP_1^\top \left(\mC_1 \tilde{\mP}_2^\top \tilde\mC_2 \mP_1^\top \right)^t \zz_1\,.
\end{gather}
If one considers the case where the agents have the same kernel $u_1 = u_2 = u$, then one should 'remove all tildes' in the above expressions, and the resulting expressions apply. This means $\mM = \mL$, $\mV_1 = \tilde\mV_1$ and $\mV_2 = \tilde\mV_2$. The crucial changes in this case are $\mC_1 = \tilde\mC_2 \to \mI$ and $\left(\mC_1 \tilde{\mP}_2^\top \tilde\mC_2 \mP_1^\top \right)^t \to \left(\mP_2^\top \mP_1^\top \right)^t$. The matrix $\left(\mP_2^\top \mP_1^\top \right)^t$ is an alternating projection operator after $t$ steps \citep{BoydAlternatingP}. Given two closed convex sets, the alternating projection algorithm in the limit finds a point in the intersection of these sets, provided they intersect \citep{BoydAlternatingP, lewis2007local}. In our case the operators $\mP_1$ and $\mP_2$ project onto the linear subspaces spanned by the columns of the matrices $\mV_1$ and $\mV_2$, respectively. These are orthogonal linear subspaces, hence the unique point of their intersection is the origin $\0$. This means that $\left(\mP_2^\top \mP_1^\top \right)^t \xrightarrow[t \to \infty]{} \0$, and in the limit of such an AKD procedure both agents predict $0$ for any data point.
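The contraction described above can be observed numerically; a minimal illustration with two one-dimensional subspaces of $\R^2$ at angle $\phi$ (when the subspaces are exactly orthogonal, as in our case, a single full round already maps everything to $\0$):

```python
import numpy as np

phi = 0.3                                    # angle between the two lines
u1 = np.array([1.0, 0.0])
u2 = np.array([np.cos(phi), np.sin(phi)])
P1, P2 = np.outer(u1, u1), np.outer(u2, u2)  # orthogonal projectors onto the lines
v = u2.copy()
norms = []
for _ in range(4):
    v = P2 @ (P1 @ v)                        # one full round of (P2 P1)
    norms.append(np.linalg.norm(v))
rate = norms[1] / norms[0]                   # per-round contraction factor
assert abs(rate - np.cos(phi) ** 2) < 1e-12  # matches the cos(phi) rate below
```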
\paragraph{Speed of degradation.} The speed of convergence of the alternating projections algorithm is known and is determined by the minimal angle $\phi$ between the corresponding sets \citep{aronszajn1950theory} (one has to consider only the non-zero elements of the sets):
\[
||\left(\mP_2 \mP_1 \right)^t \vv|| \le (\cos(\phi))^{2t-1} ||\vv|| = (\cos(\phi))^{2t-1} \quad \text{as} \quad ||\vv|| = 1\,,
\]
where $\vv$ is one of the columns of matrix $\mV_1$. In our case we can write the expression for the cosine:
\[
\cos(\phi) = \max_{\vv_1, \vv_2}(\frac{|\vv^\top_1(c\mI+\mD) \vv_2|}{\sqrt{(\vv^\top_1(c\mI+\mD) \vv_1) \cdot (\vv^\top_2(c\mI+\mD) \vv_2)}}) = \max_{\vv_1, \vv_2}(\frac{|\vv^\top_1 \mD \vv_2|}{\sqrt{(c + \vv^\top_1 \mD \vv_1) \cdot (c+\vv^\top_2 \mD \vv_2)}})\,.
\]
where $\vv_1$ and $\vv_2$ are vectors from the subspaces spanned by the columns of the matrices $\mV_1$ and $\mV_2$, respectively. Hence the speed of convergence depends on the elements of the matrix $\mL_{12}$. Intuitively, these elements play the role of a measure of 'closeness' between data points: the 'closer' the points of the sets $\mathcal{X}_1$ and $\mathcal{X}_2$, the larger the absolute values of the elements of the matrix $\mL_{12}$. Moreover, we see an inversely proportional dependence on the regularization constant $c$. All of this is summarized in the following proposition, which is the formal version of Proposition \ref{prop:akd-speed}:
\begin{proposition}[Formal]\label{prop:akd-speed formal}
The rate of convergence of $g^1_t(\xx)$ to $0$ gets faster if:
\begin{itemize}[nosep]
\item a larger regularization constant $c$ is used during training,
\item the eigenvalues of the matrix $\mV_1 \tilde\mV^\top_1$ are smaller, or
\item the absolute values of the entries of the off-diagonal block $\mL_{12}$ are smaller.
\end{itemize}
\end{proposition}
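The formal rate can be checked in the simplest possible instance: two lines in the plane at angle $\phi$, for which the bound $||(\mP_2 \mP_1)^t \vv|| \le (\cos\phi)^{2t-1}$ holds with equality. A minimal numerical sketch (the angle, dimension, and step count are arbitrary illustrative choices):

```python
import numpy as np

theta = np.pi / 3                              # angle between the two lines
u1 = np.array([1.0, 0.0])                      # spans subspace 1
u2 = np.array([np.cos(theta), np.sin(theta)])  # spans subspace 2
P1, P2 = np.outer(u1, u1), np.outer(u2, u2)    # projectors onto the lines

z = u1.copy()                                  # unit vector in subspace 1
for t in range(1, 21):
    z = P2 @ (P1 @ z)                          # one alternating-projection round
    # the bound ||(P2 P1)^t v|| <= cos(theta)^(2t-1) holds, here with equality
    assert abs(np.linalg.norm(z) - np.cos(theta) ** (2 * t - 1)) < 1e-9

assert np.linalg.norm(z) < 1e-10               # the iterates collapse to 0
```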
\section{Alternating KD without regularization} \label{Appendix: AKD wo reg}
We saw above that the models of both agents degrade if one uses regularization. A natural question is whether the models also degrade in the absence of regularization. Consider the problem \eqref{Hilbert_problem} with $c \to 0$, so that the regularization term cancels out. In the general case there are many possible solutions, as the kernel matrix $\mK$ may have $0$ eigenvalues. Motivated by the fact that Stochastic Gradient Descent (SGD) tends to find the solution with minimal norm \citep{wilson2017marginal}, we propose to analyze the minimal norm solution in the problem of alternating knowledge distillation with two agents, each with a private dataset. For simplicity, let us assume that the agents have the same model architecture. Then the solution of agent 1 evaluated at the set $\mathcal{X}_2$ can be written, with all the above notation, as:
\begin{gather} \label{min_norm}
g^\star_0(\mathcal{X}_2) = \mK_{21} \mK_{11}^{\dagger} \yy_1 = \mV_2^\top \mD \mV_1(\mV_1^\top \mD \mV_1)^{\dagger} \yy_1,
\end{gather}
where $\dagger$ stands for pseudoinverse.\\
In this section we consider three possible settings:
\begin{enumerate}
\item Self-distillation: The datasets and models of both agents are the same.
\item Distillation with $\mK > 0$: The datasets of both agents are different and private. Kernel matrix $\mK$ is positive definite.
\item Distillation with $\mK \ge 0$: The datasets of both agents are different and private. Kernel matrix $\mK$ is positive semi-definite.
\end{enumerate}
\paragraph{Self-distillation.}
Given the dataset $\mathcal{D} = \mathcal{X} \times \mathcal{Y}$, the solution of supervised learning evaluated at any $\xx \in \mathbb{R}^d$ is:
\begin{align*}
& g^\star_0(\xx) = \ll_{\xx}^\top (\mV^\top \mD \mV)^{\dagger} \yy.
\end{align*}
Then the expression for the self-distillation step with evaluation at any $\xx \in \mathbb{R}^d$ is:
\begin{align*}
& g^\star_1(\xx) = \ll_{\xx}^\top (\mV^\top \mD \mV)^{\dagger} \mV^\top \mD \mV(\mV^\top \mD \mV)^{\dagger} \yy = \ll_{\xx}^\top(\mV^\top \mD \mV)^{\dagger} \yy = g^\star_0(\xx),
\end{align*}
where we used the following property of the pseudoinverse: $\mA^{\dagger} \mA \mA^{\dagger} = \mA^{\dagger}$.\\
That is, a self-distillation round does not change the obtained model. One can repeat the self-distillation step $t$ times and there is still no change in the model:
\begin{align*}
& g^\star_{t}(\xx) = \ll_{\xx}^\top (\mV^\top \mD \mV)^{\dagger} \yy = g^\star_0(\xx).
\end{align*}
Hence, in the no-regularization setting, a self-distillation step does not change the obtained model.
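The idempotence of unregularized self-distillation follows from $\mA^{\dagger} \mA \mA^{\dagger} = \mA^{\dagger}$ and can be checked directly with a rank-deficient Gram matrix (a sketch with an arbitrary random linear-kernel setup of our own choosing):

```python
import numpy as np

rng = np.random.default_rng(1)
Phi = rng.standard_normal((6, 3))   # 6 points with 3-dimensional features
K = Phi @ Phi.T                     # PSD kernel matrix with zero eigenvalues
y = rng.standard_normal(6)

alpha0 = np.linalg.pinv(K) @ y                # minimum-norm fit on ground truth
alpha1 = np.linalg.pinv(K) @ (K @ alpha0)     # refit on own predictions K @ alpha0
assert np.allclose(alpha1, alpha0)            # the model is unchanged
```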
\paragraph{Distillation with $\mK > 0$.} \label{Distill K>0}
In this setting we have the identity $\mK^{\dagger} = \mK^{-1}$, and the results are quite similar to the setting with regularization, except that the Gram matrix of the scalar product in the linear space is $\mD$ instead of $c \mI + \mD$. By the assumption on the matrix $\mK$, it follows that $\mD$ is of full rank and the alternating projection algorithm converges to the origin $\0$. Therefore, in the limit of AKD steps, the predictions of both agents' models degrade towards $\0$.
\paragraph{Distillation with $\mK \ge 0$.}
In this setting the kernel matrix $\mK$ has at least one $0$ eigenvalue, and we analyze the minimal norm solution \eqref{min_norm}, which we can rewrite as:
\begin{gather*}
g^\star_0(\mathcal{X}_2) = \mV_2^\top \mD \mV_1(\mV_1^\top \mD \mV_1)^{\dagger} \yy_1 = \mV_2^\top \hat{\mP}_1^\top \zz_1,
\end{gather*}
where $\hat{\mP}_1 = \mV_1(\mV_1^\top \mD \mV_1)^{\dagger} \mV_1^\top \mD$ and $\zz_1 = \mV_1 \yy_1$.
One can notice the following projection properties of matrix $\hat{\mP}_1$:
\[
\hat{\mP}^2_1 = \hat{\mP}_1 \quad \text{and} \quad \hat{\mP}_1 \mV_1(\mV_1^\top \mD \mV_1)^{\dagger} = \mV_1(\mV_1^\top \mD \mV_1)^{\dagger}\,.
\]
That is, $\hat{\mP}_1$ is a projection matrix with eigenspace spanned by columns of the matrix $\mV_1(\mV_1^\top \mD \mV_1)^{\dagger}$. Similarly, one can define $\hat{\mP}_2$. Then the solution for the first AKD step evaluated at set $\mathcal{X}_1$ is:
\begin{gather*}
g^\star_1(\mathcal{X}_1) = \mV_1^\top \mD \mV_2(\mV_2^\top \mD \mV_2)^{\dagger} \mV_2^\top \hat{\mP}_1^\top \zz_1 = \mV_1^\top \hat{\mP}_2^\top \hat{\mP}_1^\top \zz_1,
\end{gather*}
and after $t$ rounds we obtain for agent 1 and agent 2, respectively:
\[
g^\star_{2t}(\mathcal{X}_2) = \mV_2^\top \hat{\mP}_1^\top (\hat{\mP}_2^\top \hat{\mP}_1^\top)^{t} \zz_1 \quad \text{and} \quad g^\star_{2t+1}(\mathcal{X}_1) = \mV_1^\top (\hat{\mP}_2^\top \hat{\mP}_1^\top)^{t+1} \zz_1.
\]
In the limit of distillation rounds, the operator $(\hat{\mP}_2 \hat{\mP}_1)^t$ tends to the projection onto the intersection of the two subspaces spanned by the columns of the matrices $\mV_1(\mV_1^\top \mD \mV_1)^{\dagger}$ and $\mV_2(\mV_2^\top \mD \mV_2)^{\dagger}$. We should highlight that, in general, the intersection in this case consists not only of the origin $\0$ but also of rays that lie simultaneously in the eigenspaces of both projectors $\hat{\mP}_1$ and $\hat{\mP}_2$. An illustrative example is shown in Fig. \ref{fig: Altern 3d}: performing the alternating projection algorithm between linear subspaces that have a ray in their intersection, we converge to some non-zero point on this ray. The intersection of the eigenspaces of the projectors $\hat{\mP}_1$ and $\hat{\mP}_2$ is the set of points $\xx \in \text{Span}(\mV_1(\mV_1^\top \mD \mV_1)^{\dagger})$ s.t. $\hat{\mP}_1\hat{\mP}_2 \xx = \xx$.
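A minimal numerical illustration of this phenomenon (our toy construction, unrelated to any particular kernel): two projectors in $\mathbb{R}^3$ whose ranges share one ray. The alternating products then converge to the component of the starting point on that shared ray rather than to $\0$:

```python
import numpy as np

# two orthogonal projectors in R^3 whose ranges share the ray spanned by e3
U1 = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]).T        # span{e1, e3}
v = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
U2 = np.stack([v, np.array([0.0, 0.0, 1.0])], axis=1)      # span{v, e3}
P1, P2 = U1 @ U1.T, U2 @ U2.T

z = np.array([2.0, -1.0, 3.0])
for _ in range(200):
    z = P2 @ (P1 @ z)
# the iterates converge to a point on the shared ray, not to the origin
assert np.allclose(z, [0.0, 0.0, 3.0])
```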
\begin{figure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.8\linewidth]{figures/Alternating_proj_3d.pdf}
\caption{Illustrative projection 3D}
\label{fig: Altern 3d}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.8\linewidth]{figures/Avg+_scheme.pdf}
\caption{PKD scheme}
\label{fig: PKD scheme}
\end{subfigure}
\end{figure}
\section{Averaged KD}
In this section we present a detailed analysis of the algorithm introduced in Section \ref{AKD Analysis}, keeping the notation of Section \ref{Appendix: AKD}. As a reminder, the models of both agents after round $t$ of the AvgKD algorithm are as follows:
\begin{gather*}
g^1_t(\mX_2) = \frac{1}{2} \mL_{21} (c \mI + \mL_{11})^{-1} (\yy_1+g^2_{t-1}(\mX_1)) = \frac{1}{2} \mV^\top_2 \mP^\top_1(\zz_1+g^2_{t-1}),\\
g^2_t(\mX_1) = \frac{1}{2} \mM_{12} (c \mI + \mM_{22})^{-1} (g^1_{t-1}(\mX_2)+\yy_2) = \frac{1}{2} \tilde\mV^\top_1 \tilde{\mP}^\top_2(g^1_{t-1}+\zz_2).
\end{gather*}
where
\begin{gather}
g^1_{t-1} = \frac{1}{2} (c \mI + \mD) \mV_1 (c \mI + \mL_{11})^{-1} (\yy_1 + g^2_{t-2}(\mX_1)), \quad \forall t \ge 2\\
g^2_{t-1} = \frac{1}{2} (c \mI + \mD) \tilde\mV_2 (c \mI + \mM_{22})^{-1} (g^1_{t-2}(\mX_2) + \yy_2), \quad \forall t \ge 2\\
t = 1: \quad g^{1}_0 = \mP^\top_1 \zz_1 \quad \text{and} \quad g^{2}_0 = \tilde{\mP}^\top_2 \zz_2, \quad \text{ where } \quad \zz_1 = \mV_1 \yy^1, \quad \zz_2= \tilde\mV_2 \yy^2\,.
\end{gather}
This process is illustrated in Fig. \ref{fig: Averaged GT scheme}.
Consider the sequence of solutions for agent 1 evaluated at $\mX_2$:
\begin{itemize}
\item Supervised Learning: \[g^1_0(\mX_2) = \mV^\top_2 \mP^\top_1 \zz_1\]
\item 1st round of KD: \[g^1_1(\mX_2) = \mV^\top_2 \frac{\mP^\top_1}{2} (\zz_1 + \mC_1 \tilde{\mP}^\top_2 \zz_2)\]
\item 2nd round of KD: \[g^1_2(\mX_2) = \mV^\top_2 \frac{\mP^\top_1}{2} (\zz_1 + \frac{\mC_1 \tilde{\mP}^\top_2}{2} \zz_2 + \frac{\mC_1 \tilde{\mP}^\top_2 \tilde\mC_2 \mP^\top_1}{2} \zz_1)\]
\item 3rd round of KD: \[g^1_3(\mX_2) = \mV^\top_2 \frac{\mP^\top_1}{2} (\zz_1 + \frac{\mC_1 \tilde{\mP}^\top_2}{2} \zz_2 + \frac{\mC_1 \tilde{\mP}^\top_2 \tilde\mC_2 \mP^\top_1}{4} \zz_1 + \frac{\mC_1 \tilde{\mP}^\top_2 \tilde\mC_2 \mP^\top_1 \mC_1 \tilde{\mP}^\top_2}{4} \zz_2)\]
\item $t$-th round of KD: \[g^1_{t}(\mX_2) = \mV^\top_2 \frac{\mP^\top_1}{2}(\sum^{t}_{i=0} (\frac{\mC_1 \tilde{\mP}^\top_2 \tilde\mC_2 \mP^\top_1}{4})^i) \zz_1 + \mV^\top_2 \frac{\mP^\top_1 \mC_1 \tilde{\mP}^\top_2}{4} (\sum^{t-1}_{i=0} (\frac{\tilde\mC_2 \mP^\top_1 \mC_1 \tilde{\mP}^\top_2}{4})^i) \zz_2\]
\item The limit of KD steps: \[g^1_{\infty}(\mX_2) = \mV^\top_2 \frac{\mP^\top_1}{2}(\mI - \frac{\mC_1 \tilde{\mP}^\top_2 \tilde\mC_2 \mP^\top_1}{4})^{\dagger} \zz_1 + \mV^\top_2 \frac{\mP^\top_1 \mC_1 \tilde{\mP}^\top_2}{4} (\mI - \frac{\tilde\mC_2 \mP^\top_1 \mC_1 \tilde{\mP}^\top_2}{4})^{\dagger} \zz_2,\] where $\dagger$ stands for pseudoinverse.
\end{itemize}
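The recursion above can also be checked numerically. In the sketch below (an arbitrary Gaussian-kernel setup of our own choosing, not the paper's experiments), iterating the AvgKD updates with kernel ridge regression converges to a non-degenerate fixed point that satisfies the update equations, instead of collapsing to zero:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

X1, X2 = rng.standard_normal((8, 2)), rng.standard_normal((8, 2))
y1, y2 = rng.standard_normal(8), rng.standard_normal(8)
c = 0.5
K11, K22 = rbf(X1, X1), rbf(X2, X2)
K12, K21 = rbf(X1, X2), rbf(X2, X1)
I = np.eye(8)

t1, t2 = y1.copy(), y2.copy()       # current training targets of each agent
for _ in range(400):
    p12 = K21 @ np.linalg.solve(K11 + c * I, t1)  # agent 1's predictions on X2
    p21 = K12 @ np.linalg.solve(K22 + c * I, t2)  # agent 2's predictions on X1
    t1, t2 = (y1 + p21) / 2, (y2 + p12) / 2

# non-degenerate fixed point: the targets do not collapse towards zero ...
assert np.linalg.norm(t1) > 0.1 * np.linalg.norm(y1)
# ... and the fixed-point equation t2 = (y2 + g^1(X2)) / 2 holds
p12 = K21 @ np.linalg.solve(K11 + c * I, t1)
assert np.allclose(t2, (y2 + p12) / 2, atol=1e-8)
```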
Let us analyze the limit solution and consider the first term of its expression. One can deduce the following identity: \footnote{In case of $c = 0$, the whole analysis can be repeated by replacing inverse sign with $\dagger$ sign and using the following fact for positive semidefinite matrices \citep{zhang2006schur}:
\[
\mV_2^\top \mD \mV_1 (\mV_1^\top \mD \mV_1)^{\dagger}(\mV_1^\top \mD \mV_1) = \mV_2^\top \mD \mV_1\,.
\]}
\begin{align}
&\mV^\top_2 \frac{\mP^\top_1}{2}(\mI - \frac{\mC_1 \tilde{\mP}^\top_2 \tilde\mC_2 \mP^\top_1}{4})^{\dagger} \zz_1 =\\ \label{Avg 1st term}
&\frac{\mV^\top_2 \mD \mV_1}{2} (c\mI + \mV_1^\top \mD \mV_1 - \frac{\tilde\mV_1^\top \mD \tilde\mV_2}{2} (c\mI + \tilde\mV_2^\top \mD \tilde\mV_2)^{-1} \frac{\mV_2^\top \mD \mV_1}{2})^{\dagger} \yy_1
\end{align}
In a similar manner, we can deal with the second term:
\begin{align*}
&\mV^\top_2 \frac{\mP^\top_1 \mC_1 \tilde{\mP}^\top_2}{4} (\mI - \frac{\tilde\mC_2 \mP^\top_1 \mC_1 \tilde{\mP}^\top_2}{4})^{\dagger} \zz_2 = \\
&\frac{\mV^\top_2 \mD \mV_1}{2} (c\mI + \mV_1^\top \mD \mV_1 - \frac{\tilde\mV_1^\top \mD \tilde\mV_2}{2} (c\mI + \tilde\mV_2^\top \mD \tilde\mV_2)^{-1} \frac{\mV_2^\top \mD \mV_1}{2})^{\dagger} \frac{\tilde\mV_1^\top \mD \tilde\mV_2}{2} (c\mI + \tilde\mV_2^\top \mD \tilde\mV_2)^{-1} \yy_2
\end{align*}
Now we notice the Schur complement expression in equation \eqref{Avg 1st term}: \[
(c\mI + \mV_1^\top \mD \mV_1 - \frac{\tilde\mV_1^\top \mD \tilde\mV_2}{2} (c\mI + \tilde\mV_2^\top \mD \tilde\mV_2)^{-1} \frac{\mV_2^\top \mD \mV_1}{2})\,.
\]
This means that we can consider the following problem:
\begin{gather} \label{Schur problem}
\begin{pmatrix}
\mL_{11} + c\mI & \frac{\mM_{12}}{2}\\
\frac{\mL_{21}}{2} & \mM_{22} + c\mI
\end{pmatrix}
\begin{pmatrix}
\bm\beta_1\\
\bm\beta_2
\end{pmatrix}
= \begin{pmatrix}
\mV_1^\top \mD \mV_1 + c\mI & \frac{\tilde\mV_1^\top \mD \tilde\mV_2}{2}\\
\frac{\mV_2^\top \mD \mV_1}{2} & \tilde\mV_2^\top \mD \tilde\mV_2 + c\mI
\end{pmatrix}
\begin{pmatrix}
\bm\beta_1\\
\bm\beta_2
\end{pmatrix} =
\begin{pmatrix}
\yy_1\\
-\yy_2
\end{pmatrix},
\end{gather}
and derive that $g^1_{\infty}(\mX_2) = \frac{\mV^\top_2 \mD \mV_1}{2} \bm\beta_1$ and $g^2_{\infty}(\mX_1) = - \frac{\tilde\mV^\top_1 \mD \tilde\mV_2}{2} \bm\beta_2$, where $\bm\beta_1, \bm\beta_2$ are defined as the solution to the problem \eqref{Schur problem}.
Given this, we can derive the following:
\begin{gather} \label{eq: Average_GT_final}
\mV^\top_1 \mD \mV_1 \bm\beta_1 + \frac{\tilde\mV^\top_1 \mD \tilde\mV_2}{2} \bm\beta_2 = \yy_1 - c\bm\beta_1 = 2 g^1_{\infty}(\mathcal{X}_1) - g^2_{\infty}(\mathcal{X}_1),\\ \label{eq: Average_GT_final2}
- \frac{\mV^\top_2 \mD \mV_1}{2} \bm\beta_1 - \tilde\mV^\top_2 \mD \tilde\mV_2 \bm\beta_2 = \yy_2 + c\bm\beta_2 = 2 g^2_{\infty}(\mX_2) - g^1_{\infty}(\mX_2).
\end{gather}
As one can see, there is a strong relation between the limit KD solutions and the solution of a linear system of equations with a modified matrix $\mK$ and right-hand side. Namely, we take the kernel matrix and divide its off-diagonal blocks by 2, which intuitively shows that our final model accounts for a reduction of the 'closeness' between the datasets $\mathcal{D}_1$ and $\mathcal{D}_2$. On the right-hand side, we see $-\yy_2$ instead of $\yy_2$, which is a rather 'artificial' effect. Overall, this results in the fact that the limit solutions of the two agents do not give ground truth predictions for $\mX_1$ and $\mX_2$ individually, as one can see from equation \eqref{eq: Average_GT_final} with $c=0$. That is, we need to combine the predictions of both agents in a specific way to get ground truth labels for the datasets $\mathcal{D}_1$ and $\mathcal{D}_2$. Moreover, the way we combine the solutions differs between the datasets $\mathcal{D}_1$ and $\mathcal{D}_2$, as one can see by comparing the right-hand sides of expressions \eqref{eq: Average_GT_final} and \eqref{eq: Average_GT_final2}. Given all the above, to predict optimal labels we need to change the way we combine the agents' models depending on the dataset, whereas the usual desire is to have one model that predicts ground truth labels for at least both training datasets.
To obtain the expressions for the case of identical models, one should 'remove all tildes' in the above expressions, setting $\mV_1 = \tilde\mV_1$, $\mV_2 = \tilde\mV_2$, $\mC_1 = \tilde\mC_2 \to \mI$ and $(\mC_1 \tilde{\mP}_2^\top \tilde\mC_2 \mP_1^\top)^t \to (\mP_2^\top \mP_1^\top)^t$.
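The correspondence between the AvgKD limit and the block system \eqref{Schur problem} can be verified numerically in the same-kernel case: iterate AvgKD to its fixed point and compare $g^1_{\infty}(\mathcal{X}_2)$ with $\frac{1}{2}\mL_{21}\bm\beta_1$ obtained by solving the system with halved off-diagonal blocks. A sketch with an arbitrary Gaussian kernel (our illustrative setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

X1, X2 = rng.standard_normal((6, 2)), rng.standard_normal((6, 2))
y1, y2 = rng.standard_normal(6), rng.standard_normal(6)
c, I = 0.5, np.eye(6)
K11, K22 = rbf(X1, X1), rbf(X2, X2)
K12, K21 = rbf(X1, X2), rbf(X2, X1)

# iterate AvgKD to (numerical) convergence
t1, t2 = y1.copy(), y2.copy()
for _ in range(500):
    g2_on_1 = K12 @ np.linalg.solve(K22 + c * I, t2)
    g1_on_2 = K21 @ np.linalg.solve(K11 + c * I, t1)
    t1, t2 = (y1 + g2_on_1) / 2, (y2 + g1_on_2) / 2
g1_lim = K21 @ np.linalg.solve(K11 + c * I, t1)   # g^1_inf evaluated at X2

# solve the modified block system with halved off-diagonal blocks
A = np.block([[K11 + c * I, K12 / 2], [K21 / 2, K22 + c * I]])
beta = np.linalg.solve(A, np.concatenate([y1, -y2]))
b1 = beta[:6]
assert np.allclose(g1_lim, K21 @ b1 / 2, atol=1e-6)   # matches (1/2) K21 beta_1
```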
\section{Parallel KD} \label{Appendix: PKD}
In this section, we theoretically analyze a slight modification of the AvgKD algorithm which we call Parallel KD (PKD). Keeping the notation of Sections \ref{Appendix: AKD} and \ref{Appendix: AKD wo reg}, denote the data on agent 1 as $\cD_1 = (\mX^1, \yy^1)$ where $\mX^1[i, :] = \xx_i^1$ and $\yy^1[i] = y_i^1$. Correspondingly, for agent 2 we have $\cD^2 = (\mX^2, \yy^2)$. Now, starting from $\hat\yy_0^1 = \yy^1, \hat\yy_0^2 = \yy^2$, in each round $t \geq 0$: \begin{enumerate}[label=\alph*., nosep]
\item Agents 1 and 2 train their model on datasets $(\mX^1, \hat\yy^1_t)$ and $(\mX^2, \hat\yy^2_t)$ to obtain $g^1_t$ and $g^2_t$.
\item Agents exchange $g^1_t$ and $g^2_t$ between each other.
\item Agents use exchanged models to predict labels $\hat\yy^1_{t+1} = \frac{\hat\yy^1_t + g^2_t(\mX^1)}{2}$, $\hat\yy^2_{t+1} = \frac{\hat\yy^2_t + g^1_t(\mX^2)}{2}$.
\end{enumerate}
The summary of the algorithm is depicted in Figure \ref{fig: PKD scheme}.
That is, we learn from the average of the two agents' predictions. For simplicity, we analyze the scheme without regularization and always take the minimum norm solution. Notice that there is no exchange of raw data but only of the trained models.
Let us analyze the KD algorithm where the solutions for agent 1 ($g^1$) and agent 2 ($g^2$) are obtained as:
\begin{gather}
g_t^1(\mathcal{X}_2) = \frac{1}{2} \mK_{21} (\mK_{11})^{\dagger} (g_{t-1}^1(\mathcal{X}_1)+g_{t-1}^2(\mathcal{X}_1)) = \frac{1}{2} \mV_2^\top \hat{\mP}^\top_1 (g_{t-1}^1+g_{t-1}^2),\\
g_t^2(\mathcal{X}_1) = \frac{1}{2} \mK_{12} (\mK_{22})^{\dagger} (g_{t-1}^1(\mathcal{X}_2)+g_{t-1}^2(\mathcal{X}_2))= \frac{1}{2} \mV_1^\top \hat{\mP}^\top_2 (g_{t-1}^1+g_{t-1}^2)\,,
\end{gather}
where
\begin{gather}
g_{t-1}^1 = \frac{1}{2} \mD \mV_1 (\mK_{11})^{\dagger} (g_{t-2}^1(\mathcal{X}_1) + g_{t-2}^2(\mathcal{X}_1)), \quad \forall t \ge 2\\
g_{t-1}^2 = \frac{1}{2} \mD \mV_2 (\mK_{22})^{\dagger} (g_{t-2}^1(\mathcal{X}_2) + g_{t-2}^2(\mathcal{X}_2)), \quad \forall t \ge 2\\
t = 1: \quad g^{1}_0 = \hat{\mP}^\top_1 \zz_1 \quad \text{and} \quad g_{0}^2 = \hat{\mP}^\top_2 \zz_2, \quad \text{ where } \quad \zz_i = \mV_i \yy_i, \quad i = 1,2\,.
\end{gather}
Consider $g_{t}^1$ for $t \geq 1$:
\begin{gather*}
g_{t}^1 = \frac{1}{2} \hat{\mP}^\top_1 (g_{t-1}^1+g_{t-1}^2) = \frac{1}{2} \hat{\mP}^\top_1(\frac{1}{2} \hat{\mP}^\top_1(g_{t-2}^1+g_{t-2}^2) + \frac{1}{2} \hat{\mP}^\top_2(g_{t-2}^1+g_{t-2}^2)) = \\
= \frac{1}{2} \hat{\mP}^\top_1(\frac{1}{2}(\hat{\mP}^\top_1 + \hat{\mP}^\top_2) (g_{t-2}^1+g_{t-2}^2)) = ... = \frac{1}{2^{t}} \hat{\mP}^\top_1(\hat{\mP}^\top_1 + \hat{\mP}^\top_2)^{t-1}(g^{1}_0+g_{0}^2) =\\
= \frac{1}{2^{t}} \hat{\mP}^\top_1(\hat{\mP}^\top_1 + \hat{\mP}^\top_2)^{t-1}(\hat{\mP}^\top_1 z_1+\hat{\mP}^\top_2 z_2)
\end{gather*}
The form of the solutions is reminiscent of the method of averaged projections \citep{lewis2007local} with operator $\hat{\mP}_1 + \hat{\mP}_2$, which, similarly to alternating projections, converges to the intersection point of the two subspaces\footnote{Actually, there is an explicit relation between alternating and averaged projections \citep{lewis2007local}.}. That is, similarly to AKD in the case with regularization, we expect the solution to converge to the origin $\0$ in the limit of distillation steps. As a result, after some point one expects steady degradation of the predictions of both agents in the PKD scheme.
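The averaged-projection dynamics can be illustrated with two lines in $\mathbb{R}^3$ (an arbitrary toy choice of subspaces): the iterates of the averaged operator collapse to the trivial intersection, mirroring the expected PKD degradation:

```python
import numpy as np

# orthogonal projectors onto two lines in R^3 that meet only at the origin
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([1.0, 2.0, 2.0]) / 3.0
P1, P2 = np.outer(u1, u1), np.outer(u2, u2)

z = np.array([1.0, -2.0, 0.5])
for _ in range(200):
    z = (P1 @ z + P2 @ z) / 2        # one averaged-projection step
assert np.linalg.norm(z) < 1e-8      # converges to the intersection {0}
```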
\section{Ensembled KD} \label{Append: EKD}
One can consider the following problem:
\begin{gather} \label{eq: optimal}
\begin{pmatrix}
\mL_{11} & \mM_{12}\\
\mL_{21} & \mM_{22}
\end{pmatrix}
\begin{pmatrix}
\bm\beta_1\\
\bm\beta_2
\end{pmatrix} =
\begin{pmatrix}
\mV_1^\top \mD \mV_1 & \tilde\mV_1^\top \mD \tilde\mV_2\\
\mV_2^\top \mD \mV_1 & \tilde\mV_2^\top\mD \tilde\mV_2
\end{pmatrix}
\begin{pmatrix}
\bm\beta_1\\
\bm\beta_2
\end{pmatrix} =
\begin{pmatrix}
\yy_1\\
\yy_2
\end{pmatrix},
\end{gather}
with the following identities:
\begin{equation} \label{eq: optimal identities}
\mV_1^\top \mD \mV_1 \bm\beta_1 + \tilde\mV_1^\top \mD \tilde\mV_2 \bm\beta_2 = \yy_1 \quad \text{and} \quad \mV_2^\top \mD \mV_1 \bm\beta_1 + \tilde\mV_2^\top \mD \tilde\mV_2 \bm\beta_2 = \yy_2\,.
\end{equation}
One can find $\bm\beta_1, \bm\beta_2$ and deduce the following prediction by the model associated with the system \eqref{eq: optimal} for $i = 1,2$:
\begin{equation}\label{Append: ekd eq}
\begin{gathered}
\mV_i^\top \mD \mV_1 \bm\beta_1 + \tilde\mV_i^\top \mD \tilde\mV_2 \bm\beta_2 =\\ \mV_i^\top \mP^\top_1(\mI - \mC_1 \tilde\mP^\top_2 \tilde\mC_2 \mP^\top_1)^{\dagger} \zz_1 - \mV_i^\top \mP^\top_1 \mC_1 \tilde\mP^\top_2(\mI - \tilde\mC_2 \mP^\top_1 \mC_1 \tilde\mP^\top_2)^{\dagger} \zz_2 + \\\tilde\mV_i^\top \tilde\mP^\top_2(\mI - \tilde\mC_2 \mP^\top_1 \mC_1 \tilde\mP^\top_2)^{\dagger} \zz_2 - \tilde\mV_i^\top \tilde\mP^\top_2 \tilde\mC_2 \mP^\top_1(\mI - \mC_1 \tilde\mP^\top_2 \tilde\mC_2 \mP^\top_1)^{\dagger} \zz_1 = \\ \mV_i^\top \mP^\top_1 \sum^{\infty}_{t=0}(\mC_1 \tilde\mP^\top_2 \tilde\mC_2 \mP^\top_1)^{t} \zz_1 -
\mV_i^\top \mP^\top_1 \mC_1 \tilde\mP^\top_2 \sum^{\infty}_{t=0}(\tilde\mC_2 \mP_1^\top \mC_1\tilde\mP_2^\top)^{t} \zz_2 + \\ \tilde\mV_i^\top \tilde\mP^\top_2 \sum^{\infty}_{t=0}(\tilde\mC_2 \mP_1^\top \mC_1 \tilde\mP_2^\top)^{t} \zz_2 - \tilde\mV_i^\top \tilde\mP^\top_2 \tilde\mC_2 \mP^\top_1 \sum^{\infty}_{t=0}(\mC_1 \tilde\mP_2^\top \tilde\mC_2 \mP_1^\top)^{t} \zz_1.
\end{gathered}
\end{equation}
From this one can easily deduce \eqref{eq: optimal identities}, which means that this model predicts ground truth labels for the datasets of both agents. To obtain the expressions for the case of identical models, one should 'remove all tildes' in all the above expressions, setting $\mV_1 = \tilde\mV_1$, $\mV_2 = \tilde\mV_2$, $\mC_1 = \tilde\mC_2 \to \mI$ and $(\mC_1 \tilde{\mP}_2^\top \tilde\mC_2 \mP_1^\top)^t \to (\mP_2^\top \mP_1^\top)^t$.
The last question is how one can construct a scheme of iterative KD rounds that obtains the above expression as its limit model. From the form of the prediction, we conclude that one should use the models obtained in the process of AKD. There are many possible schemes for combining these models to obtain the desired result. One of the simplest possibilities is presented in Section \ref{Optimal Analysis}.
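In the same-model, no-regularization case, the ground-truth recovery property of the alternating-sign ensemble can be verified numerically. The sketch below is our toy construction (linear kernels with hand-picked feature matrices whose subspaces meet only at the origin): it runs one AKD chain started from each agent, sums the chain models with alternating signs, and recovers $\yy_1$ and $\yy_2$ on the two training sets:

```python
import numpy as np

th = np.pi / 3        # principal angle between the agents' feature subspaces
F1 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]])
F2 = np.array([[np.cos(th), 0.0], [0.0, np.cos(th)],
               [np.sin(th), 0.0], [0.0, np.sin(th)]])
K11, K22 = F1.T @ F1, F2.T @ F2          # invertible linear-kernel Gram matrices
S21 = F2.T @ F1 @ np.linalg.inv(K11)     # labels on X1 -> predictions on X2
S12 = F1.T @ F2 @ np.linalg.inv(K22)     # labels on X2 -> predictions on X1
y1, y2 = np.array([1.0, -2.0]), np.array([0.5, 3.0])

def akd_chain(first_agent, y, T):
    """Predictions (on X1, on X2) of the AKD chain models for rounds 0..T.

    Without regularization each model interpolates its own training labels."""
    out, labels, agent = [], y, first_agent
    for _ in range(T + 1):
        if agent == 1:                   # model trained on (X1, labels)
            on1, on2 = labels, S21 @ labels
        else:                            # model trained on (X2, labels)
            on1, on2 = S12 @ labels, labels
        out.append((on1, on2))
        labels, agent = (on2, 2) if agent == 1 else (on1, 1)
    return out

T = 40                                   # even number of AKD rounds
A, B = akd_chain(1, y1, T), akd_chain(2, y2, T)
f_on1 = sum((-1) ** t * (A[t][0] + B[t][0]) for t in range(T + 1))
f_on2 = sum((-1) ** t * (A[t][1] + B[t][1]) for t in range(T + 1))
assert np.allclose(f_on1, y1) and np.allclose(f_on2, y2)   # ground truth
```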
\section{M-agent schemes}\label{sec:n-agents}
The natural question to ask is how one could extend the discussed schemes to the setting of $M$ agents. In this section, we address this question and explicitly describe each algorithm in the setting of $M$ agents.
\paragraph{Scalability and privacy.} Before diving into particular algorithms, we discuss some important concerns about our KD-based framework. A naive implementation of our methods requires each agent to share their model with all other agents. This incurs significant communication costs ($M^2$), storage costs (each agent has to store $M$ models), and is not compatible with secure aggregation \citep{bonawitz2016practical}, potentially leaking information. One potential approach to alleviating these concerns is to use a server and homomorphic encryption \citep{graepel2012ml}. Homomorphic encryption allows agent 1 to compute predictions on agent 2's data $f_1(\mX^2)$ without learning anything about $\mX^2$, i.e., there exists a procedure $\text{Hom}$ such that given encrypted data $\text{Enc}(\mX^2)$, we can compute
$$\text{Hom}(f_1, \text{Enc}(\mX^2)) = \text{Enc}(f_1(\mX^2))\,.$$
Given access to such a primitive, we can use a dedicated server (agent 0) to whom all models $f_1, \dots, f_M$ are sent. Let us define some weighted sum of the predictions as $f_{\alphav}(\mX) := \sum_{i=1}^M \alpha_if_i(\mX)\,.$
Then, using homomorphic encryption, each agent $i$ can compute $\text{Enc}(f_{\alphav}(\mX^i))$ in a private manner without leaking any information to agent 0. This makes the communication cost linear in $M$ instead of quadratic, and also makes the scheme more private and secure. A full-fledged investigation of the scalability, privacy, and security of such an approach is left for future work. With this caveat out of the way, we next discuss some concrete algorithms.
\paragraph{AKD with M agents.}
To extend the AKD scheme and the corresponding theory to a multi-agent setting, we start by recalling the alternating projection algorithm in the case of $M$ convex sets. Suppose we want to find the intersection point of $M$ affine sets $\mathcal{C}_i, \mbox{ for } i=1,...,M$. In terms of the alternating projection algorithm, we can write the following extension \citep{halperin1962product}:
\begin{equation}
\mP_{\cap_{i=1}^M \mathcal{C}_i}(\xx) = (\mP_{\mathcal{C}_M} \mP_{\mathcal{C}_{M-1}}...\mP_{\mathcal{C}_1})^{\infty}(\xx)
\end{equation}
For our algorithm, this means that agent 1 passes its model to agent 2, then agent 2 passes its model to agent 3, and so on until agent $M$, which passes its model back to agent 1; then the cycle repeats. That is, as before, we denote the data on agent 1 as $\cD_1 = (\mX^1, \yy^1)$ where $\mX^1[i, :] = \xx_i^1$ and $\yy^1[i] = y_i^1$; for all other agents we have $\cD^i = (\mX^i, \yy^i), \mbox{ for } i=2,...,M$. Now, starting from $\hat\yy_0^1 = \yy^1$, in each block of rounds $t,...,t+M-1$, $t \geq 0$: \begin{enumerate}[label=\alph*., nosep]
\item Agent 1 trains their model on dataset $(\mX^1, \hat\yy^1_t)$ to obtain $g_t^1$.
\item for $i=2,...,M$:\begin{enumerate}[label=b.\arabic*]
\item Agent $i$ receives $g^1_{t+i-2}$ and uses it to predict labels $\hat\yy^i_{t+i-1} = g^1_{t+i-2}(\mX^i)$.
\item Agent $i$ trains their model on dataset $(\mX^i, \hat\yy^i_{t+i-1})$ to obtain $g^1_{t+i-1}$.
\end{enumerate}
\item Agent 1 receives a model $g^1_{t+M-1}$ from agent M and predicts $\hat \yy^1_{t+M} = g^1_{t+M-1}(\mX^1)$.
\end{enumerate}
As before, there is no exchange of raw data but only of the trained models. Given all the results deduced before, all the models will start to degrade from some point on. The rate of convergence of such an algorithm is defined similarly to that of alternating projections in the case of two sets and can be found in \citet{smith1977practical}.
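A numerical sketch of the cyclic scheme with $M=3$ agents (arbitrary Gaussian kernels, sample sizes, and regularization constant, chosen for illustration) confirms the degradation:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

M, n, c = 3, 6, 0.5
X = [rng.standard_normal((n, 2)) for _ in range(M)]
y1 = rng.standard_normal(n)                      # agent 1's initial labels
Kinv = [np.linalg.inv(rbf(Xi, Xi) + c * np.eye(n)) for Xi in X]

labels = y1.copy()
for _ in range(100):                             # 100 full cycles over the ring
    for i in range(M):
        j = (i + 1) % M                          # agent i fits, agent j relabels
        labels = rbf(X[j], X[i]) @ (Kinv[i] @ labels)
assert np.linalg.norm(labels) < 1e-6 * np.linalg.norm(y1)  # degradation to 0
```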
\paragraph{PKD with M agents.}
The PKD scheme can be easily extended to the multi-agent setting, analogously to how the averaged projection algorithm extends to the multi-set setting \citep{lewis2007local}. Suppose we want to find the intersection point of $M$ affine sets $\mathcal{C}_i, \mbox{ for } i=1,...,M$; then in terms of averaged projections we have the following:
\begin{equation}
\mP_{\cap_{i=1}^M \mathcal{C}_i}(\xx) = (\frac{1}{M} \sum^M_{i=1}\mP_{\mathcal{C}_i})^{\infty}(\xx)
\end{equation}
This expression easily translates into the PKD algorithm for $M$ agents. Denote the data on all agents as $\cD^i = (\mX^i, \yy^i), \mbox{ for } i=1,...,M$. Now, starting from $\hat\yy_0^i = \yy^i, \mbox{ for } i=1,...,M$, in each round $t \geq 0$: \begin{enumerate}[label=\alph*., nosep]
\item for $i=1,...,M$:\begin{enumerate}[label=a.\arabic*]
\item Agent $i$ trains their model on dataset $(\mX^i, \hat\yy^i_t)$ to obtain $g^i_t$.
\end{enumerate}
\item for $i=1,...,M$:\begin{enumerate}[label=b.\arabic*]
\item Agent $i$ receives models $g^j_t, \mbox{ for } j=1,...,M, j \ne i$ from all other agents.
\item Agent $i$ uses the received models to predict $\hat\yy^i_{t+1} = \frac{\hat\yy^i_t + \sum^M_{j=1, j \ne i} g^j_t(\mX^i)}{M}$.
\end{enumerate}
\end{enumerate}
As in the case of 2 agents, there is no exchange of data between agents, but only of models. This scheme requires all-to-all communication.
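As with two agents, the averaged projection onto several subspaces contracts towards their intersection. A toy sketch with three lines in $\mathbb{R}^3$ meeting only at the origin (an arbitrary illustrative choice):

```python
import numpy as np

# averaged projections onto three lines in R^3 that meet only at the origin
U = [np.array([1.0, 0.0, 0.0]),
     np.array([0.0, 1.0, 0.0]),
     np.array([1.0, 1.0, 1.0]) / np.sqrt(3)]
P = [np.outer(u, u) for u in U]

z = np.array([1.0, 2.0, -1.0])
for _ in range(300):
    z = sum(p @ z for p in P) / len(P)   # one M-set averaged-projection step
assert np.linalg.norm(z) < 1e-8          # collapse to the intersection {0}
```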
\paragraph{AvgKD with M agents.}
Similarly to the PKD algorithm, we can extend the AvgKD algorithm as follows: starting from $\hat\yy_0^i = \yy^i, \mbox{ for } i=1,...,M$, in each round $t \geq 0$: \begin{enumerate}[label=\alph*., nosep]
\item for $i=1,...,M$:\begin{enumerate}[label=a.\arabic*]
\item Agent $i$ trains their model on dataset $(\mX^i, \hat\yy^i_t)$ to obtain $g^i_t$.
\end{enumerate}
\item for $i=1,...,M$:\begin{enumerate}[label=b.\arabic*]
\item Agent $i$ receives models $g^j_t, \mbox{ for } j=1,...,M, j \ne i$ from all other agents.
\item Agent $i$ uses the received models to predict $\hat\yy^i_{t+1} = \frac{\yy^i + \sum^M_{j=1, j \ne i} g^j_t(\mX^i)}{M}$.
\end{enumerate}
\end{enumerate}
That is, there is no exchange of data between agents, but only of models. This scheme, like PKD, requires all-to-all communication. That means the scheme cannot be scaled easily, but it is still useful for small numbers of agents (e.g., a collaboration between companies). The latter is motivated by the simplicity of the scheme, which needs no hyperparameter tuning, by its non-degrading behavior, and by its superiority over the FedAvg scheme in highly heterogeneous data regimes.
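A small simulation of the $M$-agent AvgKD updates with kernel ridge models (an arbitrary setup of our own with $M=3$; the kernel, sample sizes, and regularization constant are illustrative choices) confirms the non-degrading behavior:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

M, n, c = 3, 6, 0.5              # agents, points per agent, ridge constant
X = [rng.standard_normal((n, 2)) for _ in range(M)]
y = [rng.standard_normal(n) for _ in range(M)]
Kinv = [np.linalg.inv(rbf(Xi, Xi) + c * np.eye(n)) for Xi in X]

targets = [yi.copy() for yi in y]
for _ in range(300):
    coef = [Kinv[j] @ targets[j] for j in range(M)]      # each agent's fit
    targets = [(y[i] + sum(rbf(X[i], X[j]) @ coef[j]
                           for j in range(M) if j != i)) / M
               for i in range(M)]

# the targets settle at a non-degenerate fixed point: no collapse to zero
for i in range(M):
    assert np.linalg.norm(targets[i]) > 0.05 * np.linalg.norm(y[i])
```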
\paragraph{EKD with M agents.}
The EKD scheme in the case of 2 agents is based on the models obtained in the process of 2 simultaneous runs of the AKD algorithm. This means that the extension of EKD to the multi-agent setting is straightforward: one uses $M$ simultaneous runs of the AKD algorithm, each starting from a different agent, and sums the models obtained in the process as follows:
\begin{equation}
f_{\infty}(\xx) = \sum^\infty_{t=0} (-1)^t (\sum^M_{i=1} g_t^i(\xx))
\end{equation}
\section{Additional experiments}
\subsection{MNIST with varying data heterogeneity}
In this section, we present the results for the MNIST dataset with varying data heterogeneity in the 'different model' setting. The results are shown in Fig. \ref{fig: MNIST Non-iid diff}. There is a faster degradation trend for both the AKD and PKD schemes if different models for agents are used (Fig. \ref{fig: MNIST Non-iid diff}) compared to the 'same model' setting (Fig. \ref{fig: AvgKD Non-iid}) at all data heterogeneity regimes. The PKD scheme is a slight modification of the AvgKD scheme and is proven to degrade through rounds of distillation; we see the degradation trend for the PKD scheme that is suggested by our theory presented in App.~\ref{Appendix: PKD}. EKD does not improve with subsequent rounds in the 'different models' setting. The AvgKD scheme outperforms both PKD and AKD in all settings. However, its convergence is not stable in extremely heterogeneous settings, showing large oscillations. Investigating and mitigating this could be interesting future work.
\begin{figure}[!h]
\centering
\includegraphics[width=0.9\linewidth]{figures/New_Experiments/MNIST_2_agents_Diff.pdf}\vspace{-2mm}
\caption{Test accuracy on MNIST with varying data heterogeneity in the 'different model' setting. The performance of PKD and AKD degrades, with degradation speeding up as data heterogeneity increases. The performance of the AvgKD scheme converges to steady behavior in any regime of data heterogeneity. Both agents benefit from the AvgKD, PKD and EKD schemes in the early rounds of communication.}\vspace{-5mm}
\label{fig: MNIST Non-iid diff}
\end{figure}
\subsection{CIFAR10 with varying data heterogeneity}
In this section, we present the results for the CIFAR10 dataset with varying data heterogeneity. Note that we use the cross-entropy loss function here. The results, shown in Figs. \ref{fig: CIFAR} and \ref{fig: CIFAR diff}, again show that data heterogeneity plays a key role in the behavior of all the schemes. All the trends we saw on the MNIST dataset are repeated here except one: EKD does not improve in subsequent rounds in the 'same model' setting.
\begin{figure}[!h]
\centering
\includegraphics[width=0.9\linewidth]{figures/New_Experiments/CIFAR10_2_agents.pdf}\vspace{-2mm}
\caption{Test accuracy on CIFAR10 with varying data heterogeneity in the 'same model' setting. As on MNIST: the performance of PKD and AKD degrades, with degradation speeding up as data heterogeneity increases; the performance of the AvgKD scheme converges to steady behavior; both agents benefit from the AvgKD, PKD and EKD schemes in early rounds of communication.}\vspace{-5mm}
\label{fig: CIFAR}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=0.9\linewidth]{figures/New_Experiments/CIFAR10_2_agents_Diff.pdf}\vspace{-2mm}
\caption{Test accuracy on CIFAR10 with varying data heterogeneity in the 'different model' setting. All the schemes behave similarly to the 'same model' setting.}\vspace{-5mm}
\label{fig: CIFAR diff}
\end{figure}
\subsection{Cross-Entropy objective}
In this section, we present the results of experiments on MNIST with the Cross-Entropy (CE) loss for the 2 main schemes under investigation: AKD and AvgKD. In Figs. \ref{fig: AKD_CE} and \ref{fig: AvgKD_CE} one can see the results of the AKD and AvgKD schemes, respectively, for the CE loss. The results are aligned with our theory: in Fig. \ref{fig: AKD_CE} we see the degradation trend for AKD, which depends on the amount of regularization and on model and data heterogeneity. In Fig. \ref{fig: AvgKD_CE} we see the steady behavior of the AvgKD scheme for both agents' models: there is no degradation even if the models and data are different.
\begin{figure}[!h]
\centering
\includegraphics[width=.9\linewidth]{figures/Experiments/AKD_CE.pdf}
\caption{Test accuracy of AKD on MNIST using CE loss and model starting from agent 1 (blue) and agent 2 (red) with varying amount of regularization, model heterogeneity, and data heterogeneity.
In all cases, performance degrades with increasing rounds with degradation speeding up with the increase in regularization, model heterogeneity, or data heterogeneity.}
\label{fig: AKD_CE}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=.9\linewidth]{figures/Experiments/AvgKD_CE.pdf}
\caption{Test accuracy of AvgKD on MNIST using CE loss and model starting from agent 1 (blue) and agent 2 (red) with varying model heterogeneity, and data heterogeneity.
In all cases, there is no degradation of performance, though the best accuracy is obtained by agent 1 in round 1 with only local training.}
\label{fig: AvgKD_CE}
\end{figure}
\subsection{Collaboration between MLPs and Random Forests}
In this section, we present results on the MNIST dataset for Random Forest (RF) and MLP models with MSE loss. That is, the experiments are in the 'different model' setting, where agent 1 uses an MLP model and agent 2 uses an RF model. These experiments show how the AKD and AvgKD schemes behave when the models are fundamentally different. Figs. \ref{fig: RF acc} and \ref{fig: RF Acc heter} present the accuracy results. These results align with the theory and with the other experiments on deep learning models: there is a degradation trend for the AKD scheme which accelerates as data heterogeneity increases, there is no degradation for the AvgKD scheme, and the performance of both agents in the AvgKD scheme depends strongly on data heterogeneity.
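As a minimal stand-in for this setting (ours, not the actual MLP/RF pipeline used in the experiments), the sketch below runs AvgKD between a linear ridge model and a 1-nearest-neighbour regressor: two fundamentally different model families that share only the fit/predict interface. Data and hyperparameters are illustrative assumptions.

```python
import numpy as np

class Ridge:
    """Minimal linear model exposing only fit/predict."""
    def __init__(self, c=1e-2):
        self.c = c
    def fit(self, X, y):
        self.w = np.linalg.solve(X.T @ X + self.c * np.eye(X.shape[1]), X.T @ y)
        return self
    def predict(self, X):
        return X @ self.w

class OneNN:
    """1-nearest-neighbour regressor: a non-gradient-based model, like a forest."""
    def fit(self, X, y):
        self.X, self.y = X, y
        return self
    def predict(self, X):
        d = ((X[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)
        return self.y[d.argmin(axis=1)]

def avgkd_round(m1, X1, y1, m2, X2, y2, t1, t2):
    """One AvgKD round: each agent retrains on its current targets, then the
    new targets average the true labels with the other agent's predictions."""
    g1, g2 = m1.fit(X1, t1), m2.fit(X2, t2)
    return (y1 + g2.predict(X1)) / 2, (y2 + g1.predict(X2)) / 2

# toy heterogeneous data: each agent sees a different region of the line y = x
X1 = np.array([[0.0], [1.0]]); y1 = np.array([0.0, 1.0])
X2 = np.array([[2.0], [3.0]]); y2 = np.array([2.0, 3.0])
t1, t2 = y1, y2
for _ in range(10):
    t1, t2 = avgkd_round(Ridge(), X1, y1, OneNN(), X2, y2, t1, t2)
```

Despite the mismatched model families, the targets stay bounded across rounds rather than degrading, consistent with the AvgKD behavior reported above.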
\begin{figure}[!h]
\centering
\includegraphics[width=\linewidth]{figures/Experiments/Acc_RF.pdf}\vspace{-2mm}
\caption{Test accuracy of AKD and AvgKD on MNIST using the models MLP (blue) and RF (red) with varying data heterogeneity. For AKD, performance degrades with increasing rounds, and degradation accelerates as data heterogeneity increases. For AvgKD, there is no degradation of performance.}\vspace{-5mm}
\label{fig: RF acc}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=\linewidth]{figures/Experiments/AvgKD_RF_noniid.pdf}\vspace{-2mm}
\caption{Test accuracy of AvgKD on MNIST using the models MLP (blue) and RF (red) with varying data heterogeneity. Increasing data heterogeneity lowers the performance of both agents, without a degradation trend across rounds.}\vspace{-5mm}
\label{fig: RF Acc heter}
\end{figure}
\subsection{AvgKD with M agents}\label{subsec:avgkd-n-agents}
Unlike the AKD and PKD schemes, which degrade already in the case of 2 agents, the AvgKD scheme does not degrade. In this section, we present the results of the AvgKD scheme with M agents, using 5 agents on the MNIST dataset in the 'same model' setting with varying data heterogeneity. In the case of full data heterogeneity ($\textit{Alpha} = 0$) we assign the labels $(2(i-1), 2i-1)$ to agent $i$, for $i = 1, \dots, 5$. The results are presented in Fig. \ref{fig: AvgKD N Non-iid}. All agents follow the same behavior pattern in all cases of data heterogeneity. In cases of $\textit{Alpha} < 0.05$ (high data heterogeneity), early stopping is beneficial.
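The fully heterogeneous label assignment can be written down directly; the sketch below (function name is ours) splits a label array so that agent $i$ receives exactly the examples with labels $2(i-1)$ and $2i-1$.

```python
import numpy as np

def partition_fully_heterogeneous(labels, num_agents=5):
    """Alpha = 0 split from the text: agent i (1-indexed) receives the indices
    of all examples whose label is 2(i-1) or 2i-1."""
    return {i: np.where((labels == 2 * (i - 1)) | (labels == 2 * i - 1))[0]
            for i in range(1, num_agents + 1)}

# a small stand-in for MNIST labels
labels = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 9])
parts = partition_fully_heterogeneous(labels)
```

With 10 classes and 5 agents, the per-agent label sets are disjoint and jointly cover every example, which is what makes this split maximally heterogeneous.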
\begin{figure}[!h]
\centering
\includegraphics[width=\linewidth]{figures/Experiments/AvgKD_N_noniid.pdf}\vspace{-2mm}
\caption{Test accuracy of AvgKD with M agents on MNIST with varying data heterogeneity in the setting of 'same model'. All agents can benefit from the distilled knowledge in early rounds of communication.}\vspace{-5mm}
\label{fig: AvgKD N Non-iid}
\end{figure}
\section{Introduction}\vspace{-0.5em}
\begin{quote}
``I speak and speak, but the listener retains only the words he is expecting... It is not the voice that commands the story: it is the ear.'' - Invisible Cities, Italo Calvino.
\end{quote}\vspace{-0.5em}
Federated learning (and more generally collaborative learning) involves multiple data holders (whom we call agents) collaborating with each other to train their machine learning models over their collective data. Crucially, this is done without directly exchanging any of their raw data \citep{mcmahan2017communication,kairouz2019advances}. Thus, communication is limited to only what is essential for the training process, and the agents retain full ownership over their datasets.
Algorithms for this setting such as FedAvg or its variants all proceed in rounds \citep{wang2021field}. In each such round, the agents first train their models on their local data. Then, the knowledge from these different models is aggregated by \emph{averaging the parameters}. However, exchanging knowledge via averaging the model parameters is only viable if all the agents use the same model architecture. This fundamental assumption is highly restrictive. Different agents may have different computational resources and hence may want to use different model architectures. Further, directly averaging the model parameters can fail even when all clients have the same architecture \citep{wang2019adaptive,singh2020model,yu2021fed2}. This is because the loss landscape of neural networks is highly non-convex and has numerous symmetries, with different parameter values representing the same function. Finally, these methods are also not applicable when using models which are not based on gradient descent, such as random forests. To overcome such limitations, we need to take a \emph{functional view} of neural networks, i.e. we need methods that are agnostic to the model architecture and parameters. This motivates the central question investigated in this work:
\begin{center}
Can we design \emph{model agnostic} federated learning algorithms which would allow each agent to train their model of choice on the combined dataset?
\end{center}
Specifically, we restrict the algorithms to access the models using only two primitives (a universal model API): train on some dataset i.e. \emph{fit}, and yield predictions on some inputs i.e. \emph{predict}. Our goal is to be able to collaborate with and learn from any agent which provides these two functionalities.
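As a concrete illustration, the sketch below (ours; the class name, kernel choice, and defaults are illustrative assumptions, not part of any particular framework) wraps kernel ridge regression behind exactly these two primitives:

```python
import numpy as np

class KernelRegressor:
    """Toy model exposing only the two primitives of the text: fit and predict."""

    def __init__(self, width=1.0, c=0.1):
        self.width = width   # RBF bandwidth: the agent's private modeling choice
        self.c = c           # regularization constant

    def _kernel(self, A, B):
        # RBF kernel matrix between the rows of A and the rows of B
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.width ** 2))

    def fit(self, X, y):
        # kernel ridge regression: alpha = (cI + K)^{-1} y
        self.X = X
        K = self._kernel(X, X)
        self.alpha = np.linalg.solve(self.c * np.eye(len(X)) + K, y)
        return self

    def predict(self, X):
        return self._kernel(X, self.X) @ self.alpha

# a collaboration protocol only ever touches these two methods
model = KernelRegressor(width=1.0, c=1e-6)
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 0.0])
preds = model.fit(X, y).predict(X)
```

Any model with these two methods, whether a kernel machine, a neural network, or a random forest, can participate in the protocols studied below.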
\paragraph{Simple algorithms.} A naive model agnostic algorithm indeed exists---agents can simply transfer their entire training data to each other, and then each agent can train any model of choice on the combined dataset. However, transferring the dataset is disallowed in federated learning. Instead, we will replace the averaging primitive in federated learning with \emph{knowledge distillation} (KD) \citep{bucilua2006model,hinton2015distilling}. In KD, information is transferred from model A to model B by training model B on the predictions of model A on some data. Since we only access model A through its predictions, KD is a functional, model-agnostic protocol. The key challenge with KD, however, is that it is poorly understood and cannot be formulated in the standard stochastic optimization framework like established techniques \citep{wang2021field}. Thus, designing and analyzing algorithms that utilize KD requires developing an entirely new framework and approach.
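In our notation, a single KD step could be sketched as follows (a toy linear model stands in for models A and B; all names and data are our illustrative assumptions):

```python
import numpy as np

class Ridge:
    """Minimal linear model exposing only fit/predict."""
    def __init__(self, c=1e-6):
        self.c = c
    def fit(self, X, y):
        self.w = np.linalg.solve(X.T @ X + self.c * np.eye(X.shape[1]), X.T @ y)
        return self
    def predict(self, X):
        return X @ self.w

def distill(teacher, student, X):
    """One KD step: train the student on the teacher's predictions over X.
    The teacher is accessed purely through its `predict` primitive."""
    return student.fit(X, teacher.predict(X))

X = np.array([[1.0], [2.0], [3.0]])
teacher = Ridge().fit(X, np.array([2.0, 4.0, 6.0]))   # teacher learns y ~ 2x
student = distill(teacher, Ridge(), X)
```

Note that `distill` never inspects the teacher's parameters, which is precisely why KD works across arbitrary model architectures.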
\textbf{Our Contributions.} The main results in this work are
\begin{itemize}[nosep]
\item We formulate the model agnostic learning problem as two agents with local datasets wanting to perform kernel regression on their combined datasets. Kernel regression is both simple enough to be theoretically tractable and rich enough to capture non-linear function fitting thereby allowing each agent to have a different kernel (hence different models).
\item We analyze alternating knowledge distillation (AKD) and show that it is closely linked to the alternating projection method for finding the intersection of convex sets. Our analysis reveals that AKD sequentially loses information, leading to degradation of performance. This degradation is especially severe when the two agents have heterogeneous data.
\item Using the connection to alternating projection, we analyze other possible variants such as \emph{averaged} knowledge distillation (AvgKD) and attempt to construct an `optimal' scheme.
\item Finally, we evaluate all algorithms on real world deep learning models and datasets, and show that the empirical behavior closely matches our insights from the theoretical analysis. This demonstrates the utility of our framework for analyzing and designing new algorithms.
\end{itemize}
\section{Related work}
\textbf{Federated learning (FL).} In FL \citep{kairouz2019advances}, training data is distributed over several agents or locations. For instance, these could be several hospitals collaborating on a clinical trial, or billions of mobile phones involved in training a voice recognition application. The purpose of FL is to enable training on the union of all agents' individual data without needing to transmit any of the raw sensitive data. Typically, the training is coordinated by some trusted server. One can also instead use direct peer-to-peer communications \citep{nedic2020distributed}. A large body of work has designed algorithms for FL under the identical model setting where we either learn a single global model \citep{mcmahan2017communication,reddi2020adaptive,karimireddy2019scaffold,karimireddy2020mime,wang2021field}, or multiple personalized models \citep{wang2019federated,deng2020adaptive,mansour2020three,grimber2021goptimal}.
\textbf{Knowledge distillation (KD).} Initially, KD was introduced as a way to compress models i.e. as a way to transfer the knowledge of a large model to a smaller model~\citep{bucilua2006model,hinton2015distilling}. Since then, it has found much broader applications such as improving generalization performance via self-distillation, learning with noisy data, and transfer learning \citep{yim2017gift}. We refer to a recent survey \citep{gou2021knowledge} for progress in this vast area.
\textbf{KD in FL.} Numerous works propose to use KD to transfer knowledge from the agent models to a centralized server model \citep{seo2020federated,sattler2020communication,lin2020ensemble,li2020practical,wu2021fedkd}. However, all of these methods rely on access to some common public dataset which may be impractical. KD has also been proposed to combine personalization with model compression~\citep{ozkara2021quped}, but our focus is for the agents to learn on the combined data. In the closely related \emph{codistillation} setting \citep{zhang2018deep,anil2018large,sodhani2020closer}, an ensemble of students learns collaboratively without a central server model. While codistillation does not need additional unlabelled data, it is only suitable for distributed training within a datacenter since it assumes all agents have access to the same dataset. In FL however, there is both model and data heterogeneity. Further, none of these methods have a theoretical analysis.
\textbf{KD analysis.} Despite the empirical success of KD, it is poorly understood, with very little theoretical analysis. \citet{phuong2021understanding} derive a generalization bound for a distillation-trained linear model, \citet{tang2021understanding} conclude that KD re-weights the training examples for the student, and \citet{menon2020distillation} take a Bayesian view, showing that the student learns better if the teacher provides the true Bayes probability distribution. \citet{allen2020towards} show how ensemble distillation can preserve the ensemble's diversity in the student model. Finally, \citet{mobahi2020selfdistillation} consider self-distillation in a kernel regression setting, i.e. the model is retrained using its own predictions on the training data. They show that iterative self-distillation induces a strong regularization effect. We significantly extend their theoretical framework in our work to analyze KD in federated learning, where agents have different models and different datasets.
\section{Framework and setup}
\paragraph{Notation.} We denote a set as $\mathcal{A}$, a matrix as $\mA$, and a vector as $\aa$. $\mA[i,j]$ is the $(i,j)$-th element of matrix $\mA$, and $\aa[i]$ denotes the $i$-th element of vector $\aa$. $\norm{\aa}$ denotes the $\ell_2$ norm of vector $\aa$.
\paragraph{Centralized kernel regression (warmup).}
Consider, as a warm-up, the centralized setting with a training dataset $\mathcal{D} \subseteq \mathbb{R}^d \times \mathbb{R}$. That is, $\mathcal{D} = \cup^{N}_{n=1}\{(\xx_n, y_n)\}$, where $\xx_n \in \mathcal{X} \subseteq \mathbb{R}^d$ and $y_n \in \mathcal{Y} \subseteq \mathbb{R}$. Given the training set $\mathcal{D}$, our aim is to find the best function $f^\star \in \mathcal{F}: \mathcal{X} \to \mathcal{Y}$. To find $f^\star$ we solve the following regularized optimization problem:
\begin{gather} \label{Hilbert_problem}
f^\star := \arg \min_{f \in \mathcal{F}} \frac{1}{N} \sum_n(f(\xx_n) - y_n)^2 + c R_u(f), \text{ with}
\\ R_u(f) := \int_{\mathcal{X}}\int_{\mathcal{X}} u(\xx, \xx')f(\xx)f(\xx')d\xx d\xx'\,.\label{eq:Hilbert_problem_regualarizer}
\end{gather}
Here, $\cF$ is defined to be the space of all functions such that \eqref{eq:Hilbert_problem_regualarizer} is finite, $c$ is the regularization parameter, and $u(\xx, \xx')$ is a kernel function. That is, $u$ is symmetric, $u(\xx, \xx') = u(\xx', \xx)$, and positive, i.e.\ $R_u(f) > 0$ for all $f \neq 0$ and $R_u(f) = 0$ only when $f = 0$. Further, let $k(\xx, \tt)$ be the function s.t.
\begin{equation}\label{eq:greens_func}
\int_{\mathcal{X}}u(\xx, \xx')k(\xx', \tt)d\xx' = \delta(\xx - \tt)\,, \quad\text{ where $\delta(\cdot)$ is the Dirac delta function.}
\end{equation}
Now, we can define the positive definite matrix $\mK \in \mathbb{R}^{N \times N}$ and vector $\kk_{\xx} \in \mathbb{R}^N$ as:
\begin{equation}\label{eq:kernel_matr}
\mK[i,j] := \frac{1}{N} k(\xx_i, \xx_j) \quad \text{and} \quad \kk_{\xx}[i] := \frac{1}{N} k(\xx, \xx_i), \quad \text{for} \quad \xx_i \in \mathcal{D}, \forall i \in [N]\,.
\end{equation}
Note that $\kk_{\xx}$ is actually a vector-valued function which takes any $\xx \in \cX$ as input, and both $\kk_{\xx}$ and $\mK$ depend on the training data $\cD$. We can then derive a closed form solution for $f^\star$.
\begin{proposition}[\citet{scholkopf2001generalized}]\label{prop:closedform}
The $f^\star$ which minimizes \eqref{Hilbert_problem} is given by
\[
f^\star(\xx) = \kk_{\xx}^\top (c\mI + \mK)^{-1} \yy\,, \quad \text{ for } \quad \yy[i] := y_i, \forall i \in[N]\,.
\]
\end{proposition}
Note that on the training data $\mX \in \R^{N \times d}$ with $\mX[i,:] = \xx_i$, we have $f^\star(\mX) = \mK(c\mI + \mK)^{-1} \yy$.
Kernel regression for an input $\xx$ outputs a weighted average of the training $\{y_n\}$. These weights are computed using a learned measure of distance between the input $\xx$ and the training $\{\xx_n\}$. Intuitively, the choice of the kernel $u(\xx, \xx')$ creates an inductive bias and corresponds to the choice of a model in deep learning, and the regularization parameter $c$ acts similarly to tricks like early stopping, large learning rate, etc. which help in generalization. When $c=0$, we completely fit the training data and the predictions exactly recover the labels with $f^\star(\mX) = \mK(\mK)^{-1} \yy = \yy$. When $c > 0$ the predictions $f^\star(\mX) = \mK(c\mI + \mK)^{-1} \yy \neq \yy$ and they incorporate the inductive bias of the model. In knowledge distillation, this extra information carried by the predictions about the inductive bias of the model is popularly referred to as ``dark knowledge'' \citep{hinton2015distilling}.
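As a sanity check of this closed form, the following sketch (ours; the RBF kernel and toy data are illustrative assumptions) evaluates Proposition~\ref{prop:closedform} numerically: with $c = 0$ the predictions on the training inputs recover the labels exactly, while $c > 0$ shifts them away from the labels, which is the "dark knowledge" effect described above.

```python
import numpy as np

def rbf(A, B, width=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def f_star(X_train, y, X_test, c):
    """Closed form of the proposition: f*(x) = k_x^T (cI + K)^{-1} y,
    with the 1/N scaling of the definitions above applied to K and k_x."""
    N = len(X_train)
    K = rbf(X_train, X_train) / N
    k_x = rbf(X_test, X_train) / N
    return k_x @ np.linalg.solve(c * np.eye(N) + K, y)

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 0.0])
hard = f_star(X, y, X, c=0.0)   # c = 0: predictions exactly recover the labels
soft = f_star(X, y, X, c=0.5)   # c > 0: predictions carry the inductive bias
```

The gap between `soft` and `y` is exactly the extra information a student model receives during distillation.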
\paragraph{Federated kernel regression (our setting).} We have two agents, with agent 1 having dataset $\cD_1 = \cup_{i}^{N_1} \{(\xx_i^1, y_i^1)\}$ and agent 2 with dataset $\cD_2 = \cup_{i}^{N_2} \{(\xx_i^2, y_i^2)\}$. Agent 1 aims to find the best approximation mapping ${g^1}^\star \in \cF_1: \cX \rightarrow \cY$ using a kernel $u_1(\xx , \xx')$ and objective:
\begin{gather} \label{eq:federated_problem}
{g^1}^\star := \arg \min_{g \in {\cF_1}} \frac{1}{N_1 + N_2} \sum_n (g(\xx^1_n) - y^1_n)^2 + \frac{1}{N_1 +N_2} \sum_n (g(\xx^2_n) - y^2_n)^2 + c R_{u_1}(g) \text{, with}\\
R_{u_1}(g) := \int_{\mathcal{X}}\int_{\mathcal{X}} u_1(\xx, \xx')g(\xx)g(\xx')d\xx d\xx'\,.
\end{gather}
Note that the objective of agent 1 is defined using its \emph{individual kernel} $u_1(\xx,\xx')$, but over the \emph{joint dataset} $(\cD_1, \cD_2)$. Correspondingly, agent 2 also uses its own kernel function $u_2(\xx, \xx')$ to define the regularizer $R_{u_2}(g^2)$ over the space of functions $g^2 \in \cF_2$ and optimum ${g^2}^\star$.
Thus, our setting has model heterogeneity (different kernel functions $u_1$ and $u_2$) and data heterogeneity ($\cD_1$ and $\cD_2$ are not i.i.d.). Given that the setting is symmetric between agents 1 and 2, we can focus solely on error in terms of agent 1's objective \eqref{eq:federated_problem} without loss of generality.
Proposition~\ref{prop:closedform} can be used to derive a closed form for the function ${g^1}^\star$ minimizing objective \eqref{eq:federated_problem}. However, computing it requires access to the datasets of both agents. Instead, we ask: ``can we design an iterative federated learning algorithm which approximates ${g^1}^\star$?''
\section{Alternating knowledge distillation} \label{AKD}
In this section, we describe a popular iterative knowledge distillation algorithm and analyze its updates in our framework. Our analysis leads to some surprising connections between KD and projection onto convex sets, and shows some limitations of the current algorithm.
\paragraph{Algorithm.} Denote the data on agent 1 as $\cD_1 = (\mX^1, \yy^1)$ where $\mX^1[i, :] = \xx_i^1$ and $\yy^1[i] = y_i^1$. Correspondingly, we have $\cD_2 = (\mX^2, \yy^2)$. Now starting from $\hat\yy_0^1 = \yy^1$, in each pair of rounds $t$ and $t+1$: \begin{enumerate}[label=\alph*., nosep]
\item Agent 1 trains their model on dataset $(\mX^1, \hat\yy^1_t)$ to obtain $g^1_t$.
\item Agent 2 receives $g^1_t$ and uses it to predict labels $\hat\yy^2_{t+1} = g^1_t(\mX^2)$.
\item Agent 2 trains their model on dataset $(\mX^2, \hat\yy^2_{t+1})$ to obtain $g^1_{t+1}$.
\item Agent 1 receives a model $g^1_{t+1}$ from agent 2 and predicts $\hat \yy^1_{t+2} = g^1_{t+1}(\mX^1)$.
\end{enumerate}
Thus the algorithm alternates between training and knowledge distillation on each of the two agents. We also summarize the algorithm in Figure \ref{fig: Altern scheme}. Importantly, note that there is no exchange of raw data but only of the trained models. Further, each agent trains a model of their own choice on their own data, with agent 1 training the even-indexed models $\{g_{2t}^1\}$ and agent 2 the odd-indexed models $\{g_{2t+1}^1\}$. The superscript 1 indicates that AKD is started from agent 1.
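Steps (a)-(d) can be sketched with any fit/predict models; the toy kernel models, data, and bandwidths below are our illustrative assumptions.

```python
import numpy as np

class KernelRegressor:
    """RBF kernel ridge model with only fit/predict (toy stand-in for an agent's model)."""
    def __init__(self, width, c=0.1):
        self.width, self.c = width, c
    def _k(self, A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.width ** 2))
    def fit(self, X, y):
        self.X = X
        self.alpha = np.linalg.solve(self.c * np.eye(len(X)) + self._k(X, X), y)
        return self
    def predict(self, X):
        return self._k(X, self.X) @ self.alpha

def akd(X1, y1, X2, model1, model2, rounds):
    """Alternating KD started from agent 1: only models cross between
    agents, never the raw labels y1."""
    labels1, norms = y1, []
    for _ in range(rounds):
        g = model1.fit(X1, labels1)     # (a) agent 1 trains
        labels2 = g.predict(X2)         # (b) agent 2 relabels its inputs
        g = model2.fit(X2, labels2)     # (c) agent 2 trains
        labels1 = g.predict(X1)         # (d) agent 1 relabels its inputs
        norms.append(np.linalg.norm(labels1))
    return norms

X1 = np.array([[0.0], [1.0]]); y1 = np.array([1.0, 1.0])
X2 = np.array([[0.5], [1.5]])
norms = akd(X1, y1, X2, KernelRegressor(1.0), KernelRegressor(0.7), rounds=10)
```

On this toy instance the norm of agent 1's relabeled targets shrinks round after round, which is exactly the degradation analyzed next.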
\subsection{Theoretical analysis}
Similar to \eqref{eq:greens_func}, let us define functions $k_1(\xx, \xx')$ and $k_2(\xx, \xx')$ such that they satisfy
\[
\int_{\mathcal{X}}u_a(\xx, \xx')k_a(\xx', \tt)d\xx' = \delta(\xx - \tt)\quad \text{for } a \in \{1, 2\} \,.
\]
For such functions, we can then define the following positive definite matrix $\mL \in \R^{(N_1 + N_2)\times (N_1 + N_2)}$:
\[
\mL = \begin{pmatrix}
\mL_{11} & \mL_{12}\\
\mL_{21} & \mL_{22}\\
\end{pmatrix}, \quad \mL_{a,b}[i,j] = \frac{1}{N_1 + N_2} k_1(\xx_i^a, \xx_j^b) \text{ for } a,b \in \{1,2\} \text{ and } i \in [N_a], j \in [N_b]\,.
\]
Note $\mL$ is symmetric (with $\mL_{12}^\top = \mL_{21}$) and is also positive definite. Further, each component $\mL_{a,b}$ measures pairwise similarities between inputs of agent $a$ and agent $b$ using the kernel $k_1$. Correspondingly, we define $\mM \in \R^{(N_1 + N_2)\times (N_1 + N_2)}$ which uses kernel $k_2$:
\[
\mM = \begin{pmatrix}
\mM_{11} & \mM_{12}\\
\mM_{21} & \mM_{22}\\
\end{pmatrix}, \quad \mM_{a,b}[i,j] = \frac{1}{N_1 + N_2} k_2(\xx_i^a, \xx_j^b) \text{ for } a,b \in \{1,2\} \text{ and } i \in [N_a], j \in [N_b]\,.
\]
We can now derive the closed form of the AKD algorithm by repeatedly applying Proposition~\ref{prop:closedform}.
\begin{proposition}\label{prop:akd}
The model in round $2t$ learned by the alternating knowledge distillation algorithm is
\[
g_{2t}^1(\xx) = \ll_{\xx}^\top (c\mI + \mL_{11})^{-1}\left(\mM_{12}(c\mI + \mM_{22})^{-1}\mL_{21}(c\mI + \mL_{11})^{-1}\right)^{t}\yy^1\,,
\]
where $\ll_{\xx} \in \R^{N_1}$ is defined as $\ll_{\xx}[i] = \frac{1}{N_1 + N_2} k_1(\xx, \xx_i^1)$. Further, for any fixed $\xx$ we have
\[
\lim_{t \rightarrow \infty} g_t^1(\xx) = 0\,.
\]
\end{proposition}
First, note that if agents 1 and 2 are identical, with the same data and same model, we have $\mM_{12} = \mM_{22} = \mL_{11} = \mL_{21}$. This setting corresponds to \emph{self-distillation}, where the model is repeatedly retrained on its own predictions. Proposition~\ref{prop:akd} shows that after $2t$ rounds of self-distillation, we obtain a model of the form $g_{2t}^1(\xx) = \ll_{\xx}^\top \left(c\mI + \mL_{11} \right)^{-1} \left(\mL_{11}\left(c\mI + \mL_{11} \right)^{-1}\right)^{2t} \yy$. Here, the effect of $c$ is amplified as $t$ increases. Thus, repeated self-distillation induces a strong regularization effect, recovering the results of \citet{mobahi2020selfdistillation}.
Perhaps more strikingly, Proposition~\ref{prop:akd} shows that not only does AKD fail to converge to the actual optimal solution ${g^1}^\star$ defined in \eqref{eq:federated_problem}, it also slowly degrades and eventually converges to 0. We next expand upon and explain this phenomenon.
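A small numerical check of the proposition (toy data and RBF kernels are our illustrative assumptions; the $1/(N_1+N_2)$ scaling is absorbed into the kernel, which merely rescales $c$) compares the step-by-step simulation of AKD against the matrix-power expression:

```python
import numpy as np

def rbf(A, B, width):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# toy data for the two agents
X1 = np.array([[0.0], [1.0]]); y1 = np.array([1.0, -1.0])
X2 = np.array([[0.5], [1.5]])
c, I = 0.1, np.eye(2)

# kernel blocks built from the two agents' kernels k_1 (width 1.0) and k_2 (width 0.7)
L11, L21 = rbf(X1, X1, 1.0), rbf(X2, X1, 1.0)
M22, M12 = rbf(X2, X2, 0.7), rbf(X1, X2, 0.7)

# simulate AKD step by step: agent 1 trains and relabels X2, then agent 2 trains
# and relabels X1; record agent 1's labels after each double round
labels, history = y1, []
for _ in range(6):
    labels = L21 @ np.linalg.solve(c * I + L11, labels)
    labels = M12 @ np.linalg.solve(c * I + M22, labels)
    history.append(labels)

# the matrix-power expression from the proposition for the labels on X1
T = M12 @ np.linalg.inv(c * I + M22) @ L21 @ np.linalg.inv(c * I + L11)
closed = [np.linalg.matrix_power(T, t) @ y1 for t in range(1, 7)]
```

The two computations agree, and the norm of the labels shrinks with every double round, illustrating the convergence to 0.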
\subsection{Degradation and connection to projections}\label{AKD Analysis}
While Proposition~\ref{prop:akd} completely describes the AKD algorithm mathematically, it does not provide much intuition. In this section, we rephrase the result in terms of projections and contractions, which provides a more visual understanding of the method.
\paragraph{Oblique projections.} A projection operator $\mP$ linearly maps (projects) all inputs onto some linear subspace $\cA = \text{Range}(\mA)$. In general, we can always rewrite such a projection operation as\vspace{-1mm}
\[
\mP \xx = \arg \min_{\yy \in \text{Range}(\mA)} (\yy - \xx)^\top \mW(\yy - \xx)= \mA (\mA^\top \mW \mA)^{-1} \mA^\top \mW \xx\,.\vspace{-1mm}
\]
Here, $\mA$ defines an orthonormal basis of the space $\cA$ and $\mW$ is a positive definite weight matrix which defines the geometry $\inp{\xx}{\yy}_{\mW} := \xx^\top \mW \yy$. When $\mW = \mI$, we recover the familiar orthogonal projection. Otherwise, projections can happen `obliquely' following the geometry defined by $\mW$.
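A quick numerical sketch (ours; the basis and weight matrix are arbitrary illustrative choices) builds $\mP = \mA(\mA^\top \mW \mA)^{-1}\mA^\top \mW$ and checks the two defining properties of a projection:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2))        # basis of the subspace Range(A)
W = np.diag([1.0, 2.0, 3.0, 4.0])      # positive definite weight matrix

# oblique projection onto Range(A) in the geometry <x, y>_W = x^T W y
P = A @ np.linalg.solve(A.T @ W @ A, A.T @ W)

# P is idempotent and fixes every vector already in Range(A);
# since W != I, P is generally not symmetric, i.e. the projection is oblique
x = rng.standard_normal(4)
```

With $\mW = \mI$ the same construction reduces to the familiar orthogonal projector $\mA(\mA^\top\mA)^{-1}\mA^\top$.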
\paragraph{Contractions.} A contraction is a linear operator $\mC$ which contracts all inputs towards the origin:\vspace{-1mm}
\[
\norm{\mC \xx} \leq \norm{\xx} \quad \text{for any } \xx\,.
\]
Given these notions, we can rewrite Proposition~\ref{prop:akd} as follows.
\begin{proposition}\label{prop:akd-proj}
There exist oblique projection operators $\mP_1$ and $\mP_2$, contraction matrices $\mC_1$ and $\mC_2$, and orthonormal matrices $\mV_1$ and $\mV_2$ such that the model in round $2t$ learned by the alternating knowledge distillation algorithm is\vspace{-1mm}
\[
g_{2t}^1(\xx) = \ll_{\xx}^\top (c\mI + \mL_{11})^{-1} \mV_1^\top \left(\mC_1 \mP_2^\top \mC_2 \mP_1^\top\right)^t \mV_1\yy^1\,.\vspace{-1mm}
\]
In particular, the predictions on agent 2's inputs are\vspace{-1mm}
\[
g_{2t}^1(\mX^2) = \mV_2^\top \mP_1^\top \left( \mC_1 \mP_2^\top \mC_2 \mP_1^\top \right)^t \mV_1\yy^1\,.\vspace{-1mm}
\]
Further, the projection matrices satisfy $\mP_1^\top \mP_2^\top \xx = \xx$ only if $\xx = 0$\,.
\end{proposition}
The term $\left( \mC_1 \mP_2^\top \mC_2 \mP_1^\top \right)^t$ is the only one which depends on $t$ and so captures the dynamics of the algorithm. Two rounds of AKD (first to agent 1 and then back to 2) correspond to a multiplication with $\mC_1 \mP_2^\top \mC_2 \mP_1^\top$, i.e. the dynamics of AKD are exactly those of \emph{alternating projections} interspersed with contractions. The ranges of the projections $\mP_1^\top$ and $\mP_2^\top$ intersect only at the origin 0. As Fig. \ref{fig: Altern 2d} shows, alternating projections between such subspaces converge to the origin. The contractions pull the inputs even closer to the origin, speeding up the convergence.
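The picture of Fig. \ref{fig: Altern 2d} can be reproduced in a few lines; for simplicity, this sketch (ours) uses orthogonal projections onto two distinct lines through the origin, whose intersection is $\{0\}$:

```python
import numpy as np

def proj(u):
    """Orthogonal projector onto the line spanned by u."""
    u = u / np.linalg.norm(u)
    return np.outer(u, u)

# two distinct lines through the origin: their intersection is only {0}
P1 = proj(np.array([1.0, 0.0]))
P2 = proj(np.array([1.0, 1.0]))

x = np.array([2.0, 1.0])
norms = []
for _ in range(25):
    x = P1 @ (P2 @ x)      # one 'double round' of alternating projections
    norms.append(np.linalg.norm(x))
```

Each double round here contracts the iterate by $\cos^2(45^\circ) = 1/2$, so the norms decrease monotonically to 0, mirroring the degradation of AKD.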
\begin{remark}To understand the connection to projection more intuitively, suppose that we had a 4-way classification task with agent 1 having data only of the first two classes, and agent 2 with the last two classes. Any model trained by agent 1 will only learn about the first two classes and its predictions on the last two classes will be meaningless. Thus, no information can be transferred between the agents in this setting. More generally, the transfer of knowledge from agent 1 to agent 2 is mediated by its data $\mX_2$. This corresponds to a \emph{projection} of the knowledge of agent 1 onto the data of agent 2. If there is a mismatch between the two, knowledge is bound to be lost.
\end{remark}
\begin{figure*}
\begin{subfigure}{.55\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{figures/AKD_scheme.pdf}
\caption{Alternating KD starting from agent 1. We predict and train using predictions, alternating between the two agents.}
\label{fig: Altern scheme}
\end{subfigure}
\hfill
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics[width=.6\linewidth]{figures/Alternating_proj_2d.pdf}
\caption{Alternating oblique projections starting from $x$ converges to 0.}
\label{fig: Altern 2d}
\end{subfigure}
\end{figure*}
\paragraph{Speed of degradation.} The alternating projection algorithm converges to the intersection of the subspaces corresponding to the projection operators (their ranges) \citep{BoydAlternatingP}. In our particular case, the only common fixed point of $\mP_1^\top$ and $\mP_2^\top$ is 0, and hence this is the point the algorithm converges to. The contraction operations $\mC_1$ and $\mC_2$ only speed up the convergence to the origin 0 (also see Fig. \ref{fig: Altern 2d}). This explains the degradation process noted in Proposition~\ref{prop:akd}. We can go further and examine the rate of degradation using known analyses of alternating projections~\citep{aronszajn1950theory}.
\begin{proposition}[Informal]\label{prop:akd-speed}
The rate of convergence of $g_t^1(\xx)$ to $0$ gets faster if:
\begin{itemize}[nosep]
\item a stronger inductive bias is induced via a larger regularization constant $c$,
\item the kernels $k_1(\xx,\yy)$ and $k_2(\xx,\yy)$ are very different, or
\item the difference between the datasets $\cD_1$ and $\cD_2$ as measured by $k_1(\xx,\yy)$ increases.
\end{itemize}
\end{proposition}
In summary, both data and model heterogeneity may speed up the degradation, defeating the purpose of model agnostic FL. All formal proofs and theorem statements are deferred to the Appendix.
\section{Additional Variants}
In the previous section, we saw that the alternating knowledge distillation (AKD) algorithm suffers slow degradation over multiple iterations, eventually losing all information about the training data. In this section, we explore alternative approaches which attempt to correct this. We first analyze a simple way to re-inject the training data after every KD iteration, which we call averaged knowledge distillation. Then, we show an ensemble algorithm that can recover the optimal model ${g^1}^\star$.
\begin{figure*}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.6\linewidth]{figures/Avg_scheme.pdf}
\caption{AvgKD scheme}
\label{fig: Averaged GT scheme}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.9\linewidth]{figures/Optimal_intuition.pdf}
\caption{Intuition behind ensemble scheme. In each round, AKD alternates between overfitting the data of agent 1 or of agent 2. We can construct an ensemble out of these models to correct for this bias and quickly converge to the true optima.}
\label{fig: optimal 1d}
\end{subfigure}
\label{fig: Other schemes}
\end{figure*}
\subsection{Averaged knowledge distillation} \label{AvgKD Analysis}
As we saw earlier, each step of knowledge distillation loses some information about the training data, replacing it with the inductive bias of the model. One approach to counter this slow loss of information is to recombine the predictions with the original training data labels, as is commonly done in co-distillation \citep{sodhani2020closer}.
\paragraph{Algorithm.} Recall that agent 1 has data $\cD_1 = (\mX^1, \yy^1)$ and correspondingly agent 2 has data $\cD^2 = (\mX^2, \yy^2)$. Now starting from $\hat\yy_0^1 = \yy^1, \hat\yy_0^2 = \yy^2$, in each round $t \geq 0$: \begin{enumerate}[label=\alph*., nosep]
\item Agents 1 and 2 train their model on datasets $(\mX^1, \hat\yy^1_t)$ and $(\mX^2, \hat\yy^2_t)$ to obtain $g^1_t$ and $g^2_t$.
\item Agents exchange their models $g^1_t$ and $g^2_t$ between each other.
\item Agents use exchanged models to predict labels $\hat\yy^1_{t+1} = \frac{\yy^1 + g^2_t(\mX^1)}{2}$, $\hat\yy^2_{t+1} = \frac{\yy^2 + g^1_t(\mX^2)}{2}$.
\end{enumerate}
The algorithm is summarized in Figure \ref{fig: Averaged GT scheme}. Again, notice that there is no exchange of raw data but only of the trained models. The main difference between AKD and AvgKD (averaged knowledge distillation) is that we average the predictions with the original labels. This re-injects the information in $\yy^1$ and $\yy^2$ at every iteration, preventing degradation. We next characterize its dynamics theoretically in terms of the aforementioned contraction and projection operators.
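Steps (a)-(c) can be sketched with the same toy kernel models as before (data, bandwidths, and the regularization constant are our illustrative assumptions):

```python
import numpy as np

class KernelRegressor:
    """RBF kernel ridge model with only fit/predict (toy stand-in for an agent's model)."""
    def __init__(self, width, c=0.1):
        self.width, self.c = width, c
    def _k(self, A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.width ** 2))
    def fit(self, X, y):
        self.X = X
        self.alpha = np.linalg.solve(self.c * np.eye(len(X)) + self._k(X, X), y)
        return self
    def predict(self, X):
        return self._k(X, self.X) @ self.alpha

def avgkd(X1, y1, X2, y2, m1, m2, rounds):
    """AvgKD: each agent averages its true labels with the other agent's
    current predictions, re-injecting y1 and y2 every round."""
    t1, t2 = y1, y2
    for _ in range(rounds):
        g1, g2 = m1.fit(X1, t1), m2.fit(X2, t2)   # (a) local training
        t1 = (y1 + g2.predict(X1)) / 2            # (b)+(c) exchange and average
        t2 = (y2 + g1.predict(X2)) / 2
    return t1, t2

X1 = np.array([[0.0], [1.0]]); y1 = np.array([1.0, -1.0])
X2 = np.array([[3.0], [4.0]]); y2 = np.array([2.0, 0.5])
a = avgkd(X1, y1, X2, y2, KernelRegressor(1.0), KernelRegressor(0.7), 30)
b = avgkd(X1, y1, X2, y2, KernelRegressor(1.0), KernelRegressor(0.7), 31)
```

Running the sketch for 30 versus 31 rounds yields numerically identical targets: the iterates converge quickly to a nonzero limit rather than degrading to 0.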
\begin{proposition}\label{prop:avg_kd}
There exist oblique projection operators $\mP_1$ and $\mP_2$, contraction matrices $\mC_1$ and $\mC_2$, and orthonormal matrices $\mV_1$ and $\mV_2$ such that the model of agent 1 in round $t$ learned by the averaged knowledge distillation (AvgKD) algorithm is
\[
g^1_{t}(\xx) = \frac{\mF}{2}\left(\sum^{t-1}_{i=0} \left(\frac{\mC_1 \mP^\top_2 \mC_2 \mP^\top_1}{4}\right)^i\right) \zz_1 + \frac{\mF \mC_1 \mP^\top_2}{4} \left(\sum^{t-2}_{i=0} \left(\frac{\mC_2 \mP^\top_1 \mC_1 \mP^\top_2}{4}\right)^i\right) \zz_2\,,
\]
where $\mF = \ll_{\xx}^\top (c\mI + \mL_{11})^{-1} \mV_1^\top$.
Further, in the limit of rounds for any fixed $\xx$ we have
\[
\lim_{t \rightarrow \infty} g^1_{t}(\xx) = \frac{\mF}{2}\left(\mI - \frac{\mC_1 \mP^\top_2 \mC_2 \mP^\top_1}{4}\right)^{\dagger} \zz_1 + \frac{\mF \mC_1 \mP^\top_2}{4} \left(\mI - \frac{\mC_2 \mP^\top_1 \mC_1 \mP^\top_2}{4}\right)^{\dagger} \zz_2\,.
\]
\end{proposition}
This shows that the model learned through AvgKD does not degrade to 0, unlike AKD. Instead, it converges to a limit for which we can derive closed-form expressions. Unfortunately, this limit model is still not the same as our desired optimal model ${g^1}^\star$. We next try to overcome this using ensembling.
\subsection{Ensembled Knowledge Distillation} \label{Optimal Analysis}
We first analyze how the limit solution of AvgKD differs from the actual optimum ${g^1}^\star$. We will build upon this understanding to construct an ensemble that approximates ${g^1}^\star$. For simplicity, we assume that the regularization coefficient $c=0$.
\emph{Understanding AvgKD.} Consider the AvgKD algorithm. Then the following holds.
\begin{proposition}\label{prop: opt linear}
There exist matrices $\mA_1$ and $\mA_2$ such that the minimizer ${g^1}^\star$ of objective \eqref{eq:federated_problem} predicts ${g^1}^\star(\mX_i) = \mA_1 \bm\beta_1 + \mA_2 \bm\beta_2\ $ for $i \in \{1,2\}$, where $\bm\beta_1$ and $\bm\beta_2$ satisfy\vspace{-2mm}
\begin{gather}\vspace{-2mm}
\begin{pmatrix} \label{eq: Optimal Linear}
\mL_{11} & \mM_{12}\\
\mL_{21} & \mM_{22}
\end{pmatrix}
\begin{pmatrix}
\bm\beta_1\\
\bm\beta_2
\end{pmatrix} =
\begin{pmatrix}
\yy_1\\
\yy_2
\end{pmatrix}.\vspace{-2mm}
\end{gather}
In contrast, for the same matrices $\mA_1$ and $\mA_2$, the limit models of AvgKD predict $g_{\infty}^1(\mX_i) = \frac{1}{2}\mA_1 \bm\beta_1$ and $g_{\infty}^2(\mX_i) = -\frac{1}{2} \mA_2 \bm\beta_2$, for $i \in \{1,2\}$ for $\bm\beta_1$ and $\bm\beta_2$ satisfying\vspace{-2mm}
\begin{gather}
\begin{pmatrix} \label{eq: AvgKD Linear}
\mL_{11} & \frac{\mM_{12}}{2}\\
\frac{\mL_{21}}{2} & \mM_{22}
\end{pmatrix}
\begin{pmatrix}
\bm\beta_1\\
\bm\beta_2
\end{pmatrix} =
\begin{pmatrix}
\yy_1\\
-\yy_2
\end{pmatrix}.
\end{gather}\vspace{-2mm}
\end{proposition}
Comparing equations \eqref{eq: Optimal Linear} and \eqref{eq: AvgKD Linear} shows that the output $2(g_{\infty}^1(\xx) - g^2_{\infty}(\xx))$ is close to the output of ${g^1}^\star(\xx)$, except that the off-diagonal matrices are scaled by $\frac{1}{2}$ and we have $-\yy_2$ on the right hand side. Hence, we need two tricks to approximate ${g^1}^\star$: first, an ensemble using differences of models, and second, an additional correction for the bias in the algorithm.
\emph{Correcting bias using infinite ensembles.}
Consider the initial AKD (alternating knowledge distillation) algorithm illustrated in the Fig. \ref{fig: Altern scheme}. Let us run two simultaneous runs, ones starting from agent 1 and another starting from agent 2, outputting models $\{g_t^1\}$, and $\{g_t^2\}$ respectively.
Then, instead of just using the final models, we construct the following infinite ensemble. For an input $\xx$, we output:\vspace{-2mm}
\begin{equation} \label{eq: final optimal}
f_\infty(\xx) = \sum^\infty_{t=0} (-1)^t (g_t^1(\xx) + g_t^2(\xx))
\end{equation}\vspace{-2mm}
That is, we take the models from even steps $t$ with positive signs and those from odd steps with negative signs, and sum their predictions. We call this scheme Ensembled Knowledge Distillation (EKD).
The intuition behind our ensemble method is visualized schematically for the 1-dimensional case in Fig.~\ref{fig: optimal 1d}, where the numbers denote the variable $t$ in equation \eqref{eq: final optimal}. We start from the sum of both agents' models obtained after learning from the ground-truth labels (0-th round). Then we subtract the sum of both agents' models obtained after the first round of KD, add the sum of both agents' models obtained after the second round of KD, and so on. From Section \ref{AKD Analysis} we know that in the AKD process with regularization the model gradually degrades towards $\0$; intuitively, each model obtained in a later round therefore adds less value to the whole sum in the EKD scheme. In the absence of regularization, degradation towards $\0$ does not always occur (see App. \ref{Appendix: AKD wo reg}). But under such an assumption we gradually converge to the limit model, which is the point $\infty$ in Fig. \ref{fig: optimal 1d}. We formalize this and prove the following.
\begin{proposition}\label{prop:optimal}
The predictions of $f_\infty$ using \eqref{eq: final optimal} satisfy $f_\infty(\mX_i) = {g^1}^\star(\mX_i)$ for $i \in \{1,2\}$.
\end{proposition}
Thus, not only did we succeed in preventing degeneracy to 0, but we also managed to recover the predictions of the optimal model. However, note that this comes at a cost of an infinite ensemble. While we can approximate this using just a finite set of models (as we explore experimentally next), this still does not recover a \emph{single} model which matches ${g^1}^\star$.
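As a sanity check of Proposition \ref{prop:optimal}, the following sketch simulates the scheme for ridgeless linear regression, a stand-in for the kernel setting; the alternating fit-on-received-predictions dynamics (and all names below) are our own minimal reading of the scheme in Fig.~\ref{fig: Altern scheme}, not the experimental code. With $c=0$ the optimum ${g^1}^\star$ interpolates both datasets, so recovering its predictions on $\mX_1, \mX_2$ amounts to recovering $\yy_1, \yy_2$.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 60, 3                                   # heavily overparameterized agents
X1, X2 = rng.standard_normal((n, d)), rng.standard_normal((n, d))
y1, y2 = rng.standard_normal(n), rng.standard_normal(n)

fit = lambda X, z: np.linalg.pinv(X) @ z       # min-norm ridgeless least squares

def akd_run(Xa, ya, Xb, rounds=101):
    """Alternating distillation started from the agent holding (Xa, ya):
    at each step, the other agent re-fits on the current model's predictions."""
    models, w = [], fit(Xa, ya)
    for t in range(rounds):
        models.append(w)
        Xnext = Xb if t % 2 == 0 else Xa       # agents alternate
        w = fit(Xnext, Xnext @ w)
    return models

run1 = akd_run(X1, y1, X2)                     # models g_t^1
run2 = akd_run(X2, y2, X1)                     # models g_t^2

# Alternating-sign ensemble: sum_t (-1)^t (g_t^1 + g_t^2).
f = sum((-1) ** t * (g1 + g2) for t, (g1, g2) in enumerate(zip(run1, run2)))
print(np.allclose(X1 @ f, y1), np.allclose(X2 @ f, y2))  # True True
```

The even/odd signs make consecutive distillation rounds telescope, so the finite ensemble converges to the interpolating predictions on both agents' data.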
\section{Experiments}\vspace{-2mm}
\subsection{Setup}\vspace{-2mm}
We consider three settings corresponding to the cases of Proposition \ref{prop:akd-speed}, with the agents having
\begin{itemize}[nosep]
\item the same model architecture and close data distributions (Same model, Same data)
\item different model architectures and close data distributions (Different model, Same data)
\item the same model architecture and different data distributions (Same model, Different data).
\end{itemize}
The toy experiments solve a linear regression problem of the form $\mA \xx^\star = \bb$. The data $\mA$ and $\bb$ are split between the two agents randomly in the `same data' case, whereas in the `different data' case the data are sorted according to $\bb$ before splitting, to maximize heterogeneity.
The real world experiments are conducted using a Convolutional Neural Network (CNN), a Multi-Layer Perceptron network (MLP), and a Random Forest (RF). We use the squared loss since it is closer to the theoretical setting. In the `same model' setting both agents use the CNN model, whereas agent 2 instead uses an MLP in the `different model' setting. Further, we split the training data randomly in the `same data' setting. For the `different data' setting, we split the data by labels, take some portion \textit{Alpha} of the data from each agent, and randomly shuffle the taken points between the agents. By varying the hyperparameter \textit{Alpha} we control the level of data heterogeneity between the agents. Notice that if $\textit{Alpha} = 0$ the datasets are completely different between the agents, while if $\textit{Alpha} = 1$ we recover the `same data' setting with an i.i.d. split of the data between the two agents. All other details are presented in Appendix \ref{Appendix: Exp details}.
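For concreteness, the \textit{Alpha}-based split can be sketched as follows; this is a minimal illustration on synthetic labels, and the function name and exact exchange mechanics are our own reading of the procedure above, not the code used in the experiments.

```python
import numpy as np

def alpha_split(labels, alpha, rng):
    """Split indices between two agents by sorted label, then exchange a
    fraction `alpha` of each agent's points at random: alpha=0 gives
    (almost) disjoint label sets, alpha=1 an i.i.d. split."""
    order = np.argsort(labels, kind="stable")
    a1, a2 = (part.copy() for part in np.array_split(order, 2))
    k1, k2 = int(alpha * len(a1)), int(alpha * len(a2))
    t1 = rng.choice(len(a1), k1, replace=False)
    t2 = rng.choice(len(a2), k2, replace=False)
    pool = np.concatenate([a1[t1], a2[t2]])
    rng.shuffle(pool)
    a1[t1], a2[t2] = pool[:k1], pool[k1:]
    return a1, a2

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1000)            # stand-in for MNIST labels
h1, h2 = alpha_split(labels, alpha=0.0, rng=rng)   # heterogeneous split
i1, i2 = alpha_split(labels, alpha=1.0, rng=rng)   # i.i.d. split
print(len(set(labels[h1]) & set(labels[h2])))      # at most 1 shared boundary label
print(len(set(labels[i1]) & set(labels[i2])))      # all 10 labels shared
```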
We next summarize and discuss our results.\vspace{-2mm}
\begin{figure*}[!t]
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{figures/Experiments/EKD_toy_iid.pdf}
\label{fig: toy EKD_iid}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{figures/Experiments/EKD_toy_heter.pdf}
\label{fig: toy EKD_heter}
\end{subfigure}
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{figures/Experiments/EKD_toy_highreg.pdf}
\label{fig: toy EKD_reg}
\end{subfigure}\vspace{-3mm}
\caption{AKD, AvgKD, and EKD methods for linear regression on synthetic data with same data (left), different data (middle), and strong regularization (right). EKD (black) eventually matches the centralized performance (dashed green), whereas AvgKD (solid blue) is worse than local training alone (dashed blue and red). AKD (solid red) performs the worst and degrades with increasing rounds.}\vspace{-2mm}
\label{fig: toy}
\end{figure*}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.8\linewidth]{figures/Experiments/AKD_MSE.pdf}\vspace{-3mm}
\caption{Test accuracy of centralized training (dashed green) and of AKD on MNIST using the model starting from agent 1 (blue) and agent 2 (red), with varying amounts of regularization, model heterogeneity, and data heterogeneity.
In all cases, performance degrades with increasing rounds, with the degradation speeding up as regularization, model heterogeneity, or data heterogeneity increases.}
\label{fig: AKD}
\end{figure*}
\begin{figure*}[!t]
\centering
\includegraphics[width=.6\linewidth]{figures/Experiments/AvgKD_MSE.pdf}\vspace{-2mm}
\caption{Test accuracy of AvgKD on MNIST using the model starting from agent 1 (blue) and agent 2 (red) with varying model and data heterogeneity. Regularization is used during training.
In all cases, there is no degradation of performance, though the best accuracy is obtained by agent 1 in round 1 with only local training.}\vspace{-5mm}
\label{fig: AvgKD}
\end{figure*}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.9\linewidth]{figures/New_Experiments/MNIST_2_agents.pdf}\vspace{-2mm}
\caption{Test accuracy on MNIST with varying data heterogeneity in the `same model' setting. In the case of high data heterogeneity (small \textit{Alpha} values), both agents benefit from the AvgKD, PKD and EKD schemes. Moreover, AvgKD and EKD consistently outperform the FedAvg scheme.}\vspace{-5mm}
\label{fig: AvgKD Non-iid}
\end{figure*}
\subsection{Results and Discussion}\vspace{-2mm}
\textbf{AvgKD > AKD.} In all settings (both synthetic in Fig. \ref{fig: toy} and real world in Fig. \ref{fig: AKD}), we see that with an increasing number of rounds the performance of AKD significantly degrades, whereas that of AvgKD stabilizes (Fig. \ref{fig: AvgKD}) regardless of regularization, model heterogeneity, and data heterogeneity. Moreover, from the experiments on MNIST in Fig. \ref{fig: AKD}, we see that AKD degrades faster in the presence of regularization, model heterogeneity, or data heterogeneity between agents, with the last playing the most significant role. AvgKD, on the other hand, quickly converges in a few rounds and does not degrade. However, it fails to match the centralized accuracy as well.
\textbf{EKD works but needs large ensembles.} In the synthetic setting (Fig. \ref{fig: toy}), within 250 rounds (an ensemble of 500 models) EKD even matches the centralized model. However, in the real world setting (Fig. \ref{fig: AvgKD Non-iid}) the improvement is slower and it does not match the centralized performance. This might be due to the small number of rounds run (only 20). EKD is also the only method that improves with subsequent rounds. Finally, we observed that increasing regularization actually speeds up the convergence of EKD in the synthetic setting (Fig. \ref{fig: toy}).
\textbf{Data heterogeneity is the main bottleneck.} In all our experiments, both data and model heterogeneity degraded the performance of AKD, PKD (Parallel KD, introduced in App.~\ref{Appendix: PKD}) and AvgKD. However, data heterogeneity has a much stronger effect. This confirms our theory that the mismatch between the agents' data leads to loss of information when using knowledge distillation. Overcoming this data heterogeneity is the crucial challenge for practical model agnostic FL. Fig. \ref{fig: AvgKD Non-iid} shows how all the schemes behave as a function of data heterogeneity. Indeed, the higher the data heterogeneity, the faster the degradation of the AKD and PKD schemes. In the case of the AvgKD scheme, we see that the agents do improve over their local models, and this improvement is larger with more data heterogeneity.
\vspace{-2mm}
\subsection{Additional extensions and experiments}\vspace{-2mm}
In App.~\ref{sec:n-agents}, we extend our algorithms to $M$ agents and show experimentally in App.~\ref{subsec:avgkd-n-agents} for the AvgKD algorithm that the same trends hold there as well. Our conclusions also hold for the cross-entropy loss (Figs. \ref{fig: AKD_CE}, \ref{fig: AvgKD_CE}), for the highly heterogeneous model case with MLPs and Random Forests (Figs. \ref{fig: RF acc}, \ref{fig: RF Acc heter}), as well as on other datasets and models (VGG on CIFAR10 in Figs. \ref{fig: CIFAR}, \ref{fig: CIFAR diff}). In the latter, we see the same trends as on MNIST for all the schemes except EKD, which is probably due to the use of the cross-entropy loss function and, as a result, the models being further from the kernel regime. Moreover, the speed of degradation is higher if there is model heterogeneity (Figs. \ref{fig: MNIST Non-iid diff}, \ref{fig: CIFAR diff}), and EKD does not help even on the MNIST dataset (Fig. \ref{fig: MNIST Non-iid diff}). Finally, all the schemes are compared to the more standard FedAvg, which is not applicable in the `different model' setting and is outperformed by the AvgKD scheme in highly data-heterogeneous regimes. That is, AvgKD consistently outperforms all the methods in highly data-heterogeneous regimes, indicating it is the most promising variant.
\vspace{-2mm}
\section{Conclusion}\vspace{-2mm}
While the stochastic optimization framework has been very useful in analyzing and developing new algorithms for federated learning so far, it fundamentally cannot capture learning with different models. We instead introduced the federated kernel regression framework where we formalized notions of both model and data heterogeneity. Using this, we analyzed different knowledge distillation schemes and came to the conclusion that data heterogeneity poses a fundamental challenge limiting the knowledge that can be transmitted. Further, these theoretical predictions were exactly reflected in our deep learning experiments as well. Overcoming this data heterogeneity will be crucial to making KD based model agnostic federated algorithms practical.
We also utilized our framework to design a novel ensembling method motivated by our theory. However, this method can require very large ensembles (up to 500 models) in order to match the centralized performance. Thus, we view our method not as a practical algorithm, but more as a demonstration of how our framework can be leveraged. Similarly, our experiments are preliminary and do not use complex real-world datasets. We believe there is great potential in further exploring our results and framework, especially in investigating how to mitigate the effect of data heterogeneity in knowledge distillation.
\section*{Acknowledgements}
We are really grateful to Martin Jaggi for his insightful comments and support throughout this work. SPK is partly funded by an SNSF Fellowship and AA is funded by a research scholarship from MLO lab, EPFL headed by Martin Jaggi. SPK also thanks Celestine D\"{u}nner for conversations inspiring this project.
\section{Introduction}
\label{sec:intro} The promising coupling between electric and magnetic order parameters, and the potential to manipulate one by the application of the other, has attracted much attention in the past few decades. Research on materials with such coupling has grown because of their wide range of applications in multifunctional devices. \cite{fiebig2005revival,spaldin2005renaissance,cheong2007multiferroics,ramesh2007multiferroics,bibes2008multiferroics} The ultimate goal of this research is to obtain single-phase multiferroics with strong coupling between the ferroelectric and magnetic order parameters at room temperature. There has been considerable recent interest in developing lone-pair-based magnetoelectrics because of their high values of electrical polarization. Examples of such composites include BiFeO$_3 -$LaFeO$_3$\cite{gonzalez2012first}, BiFeO$_3 -$SrTiO$_3$\cite{ma2011enhanced}, BiFeO$_3 -$PbTiO$_3$\cite{zhu2008structural}, BiFeO$_3$--BiCoO$_3$\cite{dieguez2011first}, LaFeO$_3-$PbTiO$_3$\cite{singh2008magnetization}, BiFeO$_3 -$BaTiO$_3$\cite{hang2012dielectric}, BiCoO$_3 -$BaTiO$_3$\cite{patra2018metamagnetism}, etc. PbTiO$_3$ (PTO) is a perovskite ferroelectric material with a Curie temperature of 490$^\circ$C and a large tetragonal distortion, with $c/a$ = 1.06\cite{chen2015negative} at room temperature. But it is non--magnetic due to the absence of $d$ electrons. The Ti atom can be substituted with magnetic ions, as a result of which the new compound can be expected to exhibit both magnetic and ferroelectric behavior. PTO-based magnetoelectrics could be much more interesting because of the existence of two mechanisms for ferroelectricity, i.e. the lone-pair electrons of Pb$^{2+}$ and the $d^0-$ness of Ti$^{4+}$. So, here we examine the magnetoelectric properties of a PTO-based multiferroic series, i.e.
PbTi$_{1-x}$V$_x$O$_{3}$ ($x$ = 0, 0.25, 0.33, 0.50, 0.67, 0.75, 1), where the 6$s$ electrons of Pb$^{2+}$ and the $d^0$--ness of Ti$^{4+}$ stabilize the ferroelectricity and the V induces magnetism.
\par
On the other hand, PbVO$_3$ (PVO), which is a member of this series ($x=$ 1), has stirred a lot of interest in the last few years as a strong candidate multiferroic oxide due to its large electric polarization. Though it is isostructural with PTO, it shows a large tetragonal distortion ($c/a$ = 1.229). Isolated layers of corner-shared VO$_5$ pyramids form a layered perovskite-type structure with space group $P4mm$ \cite{shpanchenko2004synthesis}. PVO has been proposed to have antiferromagnetic ordering and a ferroelectric polarization as large as 152 $\mu$C/cm$^{2}$ due to the presence of large structural distortions\cite{okos2014synthesis,shpanchenko2004synthesis,belik2005crystallographic,uratani2009first,singh2006electronic}. To this point, however, the true magnetic structure and the multifunctional nature of this material remain controversial. Shpanchenko \textit{et al.}\cite{shpanchenko2004synthesis} found no long-range magnetic ordering in their neutron powder diffraction measurements down to 1.5\,K. Belik \textit{et al.}\cite{belik2005crystallographic} proposed that PVO is a two$-$dimensional spin-1/2 square-lattice strongly frustrated antiferromagnet due to the antiferromagnetic interactions of the next$-$nearest neighbours. With the help of magnetic susceptibility and specific heat measurements as well as band$-$structure calculations, Tsirlin\cite{tsirlin2008frustrated} confirmed that the S = 1/2 square lattice of V$^{4+}$ ions in PVO is strongly frustrated due to the next$-$nearest$-$neighbor antiferromagnetic interactions, and no long-range magnetic ordering was found down to 1.8\,K. Due to the presence of defects or ferromagnetic impurities in the samples, it has been very difficult to clarify the intrinsic magnetic properties of PVO. Oka \textit{et al.}\cite{oka2008magnetic} investigated the magnetic properties of PVO by preparing a multidomain single$-$crystal sample without any magnetic impurity.
The broad maximum centered around 180\,K in the temperature-dependent magnetization curve indicates the presence of two$-$dimensional antiferromagnetism. Muon spin rotation ($\mu$SR) measurements revealed the presence of long-range order below 43\,K. Epitaxial thin films of PVO have been grown\cite{martin2007growth} using pulsed laser deposition, which is a step forward towards synthesizing multiferroic materials outside of high$-$temperature and high$-$pressure techniques and realizing devices with multifunctionalities. In another study on PVO thin films by Kumar \textit{et al.}\cite{kumar2007polar}, a transition from a ferroelectric-only state to a ferroelectric and magnetic state was determined below 100$-$130\,K using second$-$harmonic generation and X$-$ray linear dichroism. Experimental electron energy loss spectroscopy (EELS) investigations of the V$-$L edge show that V in the PVO thin films is in the V$^{4+}$ state, resulting in a $d^{1}$ configuration\cite{chi2007studying}.
\par
PVO has also been investigated computationally by different researchers. First-principles calculations were made by Uratani \textit{et al.}\cite{uratani2009first} for PVO along with BiCoO$_3$. They found that the easy axes of spin are different: [110] in PVO and [001] in BiCoO$_3$, even though both have similar crystal structures. A spin-spiral structure was predicted by Solovyev\cite{solovyev2012magnetic} to explain the absence of long$-$range magnetic ordering in PVO. Calculations along with experiments were performed by Parveen \textit{et al.}\cite{parveen2012thermal} in 2012 to study the thermal properties of PVO, and the results confirmed the observations made by Tsirlin \textit{et al.}\cite{tsirlin2008frustrated}. Ming \textit{et al.}\cite{ming2010first} made a comparative study of the structural, electronic, magnetic and phase-transition properties using various exchange$-$correlation (XC) functionals, and found that PVO is a 2D $C-$AF in which the $d^1$ electron of the V$^{4+}$ ion occupies the $d_{xy}$ orbital. A ferroelectric-to-paraelectric phase transition at 1.75 GPa was also noticed. Zhou \textit{et al.}\cite{zhou2012structural} revisited the structural transitions in PVO with the help of a series of X$-$ray diffraction (XRD) measurements and first-principles calculations. They found that the $C-$AF insulating and NM metallic states are the ground states of the tetragonal and cubic phases, respectively. They also noticed that a noncentrosymmetric tetragonal to centrosymmetric cubic perovskite structural phase transition occurs in the pressure range of 2.7$-$6.4 GPa.
\par
Milo{\v{s}}evi{\'c} \textit{et al.}\cite{milovsevic2013ab} performed \textit{ab initio} calculations to study the electronic structure and optical properties of PVO (and BiCoO$_3$). With the help of first-principles DFT calculations, Xing \textit{et al.}\cite{ming2015first} showed a first-order tetragonal-to-cubic phase transition with a volume collapse of 11.6\% under a uniaxial pressure of 1.2 GPa, accompanied by a transition from a $C-$AF insulator to a NM metal. Kadiri \textit{et al.}\cite{kadiri2017calculated} studied the magnetic properties of PVO with the help of first-principles calculations and Monte Carlo simulations, and determined PVO to be an S = 1/2 antiferromagnet with a N\'eel temperature $T_N$ = 182\,K. Recently, Oka \textit{et al.}\cite{oka2018experimental} examined the cubic phase of PVO under high pressure on experimental and theoretical bases. They determined the transition pressure to be 3 GPa and showed that above this pressure a semiconductor$-$to$-$metal transition associated with the structural (tetragonal$-$to$-$cubic) transition occurs.
\par
There are some reports on transition-metal substitution at the V site of PVO. Tsuchiya \textit{et al.}\cite{tsuchiya2009high} synthesized PbFe$_{0.5}$V$_{0.5}$O$_3$ under high pressure and found the crystal structure to be a tetragonal perovskite with $c/a$ = 1.18. The magnetic study revealed that the compound is an antiferromagnet, and the electrical polarization was estimated to be as large as 88 $\mu$C\,cm$^{-2}$. Ar\'evalo \textit{et al.}\cite{arevalo2011structural} studied Ti and Cr substitutions at the V site in PVO. A tetragonal-to-cubic transition was observed as the Cr substitution level reaches 0.4, whereas the Ti substitution preserves the $P4mm$ symmetry. An interesting result observed by Ar\'evalo \textit{et al.}\cite{arevalo2011structural} was the temperature-induced phase transition from tetragonal to cubic in PbTi$_{0.8}$V$_{0.2}$O$_3$ at 730\,K. But PVO itself decomposes to Pb$_2$V$_2$O$_7$ at 570\,K, before reaching its transition temperature.\cite{okos2014synthesis} In another V-site Ti-substituted Pb$M$O$_3$ system (where $M$ stands for V substituted with Ti), Ricci\cite{ricci2013multiferroicity} predicted multiferroicity in PbTi$_{0.875}$V$_{0.125}$O$_3$ with a ferromagnetic$-$ferroelectric ground state and an electrical polarization of 95 $\mu$C/cm$^{2}$. Recently, Pan \textit{et al.}\cite{pan2017colossal} experimentally studied the Pb(Ti,V)O$_3$ system with the V substitution level varying from 0.1 to 0.6. They found that over the whole composition range the $P4mm$ symmetry was preserved, and that the tetragonality (and hence the spontaneous polarization) was abnormally enhanced by the replacement of Ti with V. Interestingly, they also observed an intrinsic giant volume contraction ($\sim$3.7\%) for Pb(Ti$_{0.7}$V$_{0.3}$)O$_3$ during the ferroelectric$-$to$-$paraelectric phase transition. These interesting findings in the substituted systems motivated the present study.
\par
In the present work, we show that V substitution in PTO can induce magnetism, which can lead to multiferroicity in the substituted system. Thus our ultimate aim of designing magnetoelectric materials from a non$-$magnetic system is achieved. The coupling between the electric and magnetic order parameters is strong due to the presence of two ferroelectric mechanisms (the lone pair of Pb$^{2+}$ as well as the $d^0-$ness of Ti$^{4+}$). The 2D magnetism in PVO arises from the $d_{xy}$-type orbital ordering in the ferroelectric ground state. On the other hand, the disorder induced by the substitution may release the frustration in PVO, which can lead to long-range magnetic ordering. We expect the substitution to reduce the tendency of the system to form a 2D arrangement of V cations, which may lead to 3D magnetic ordering.
\section{Computational details}
All the results presented here are obtained from \textit{ab initio} density functional calculations using the Vienna \textit{Ab initio} Simulation Package (VASP)\cite{kresse1996software} or the full-potential LAPW method (Wien2k). To obtain the ground-state structural parameters of PbTi$_{1-x}$V$_x$O$_{3}$, we have optimized the crystal structure by minimizing the force and stress acting on the system. The generalized gradient approximation (GGA)\cite{perdew1996generalized} in the scheme of Perdew$-$Burke$-$Ernzerhof (PBE)\cite{ernzerhof1999assessment} is employed to treat the exchange$-$correlation, as it gives better equilibrium structural parameters than the local density approximation. The ionic positions and the shape of the crystals were relaxed for different unit-cell volumes until the energy and force convergence criteria were reached, i.e. 10$^{-6}$\,eV per cell and 1\,meV\,\AA$^{-1}$ per atom, respectively. As the structural parameters are very sensitive to the electrical polarization, a very high plane-wave cut$-$off energy of 850 eV\cite{ravindran2006theoretical} was used. A 6$\times$6$\times$6 Monkhorst$-$Pack \textbf{k}$-$point\cite{monkhorst1976special} mesh was used for ferroelectric PTO, and a similar density of \textbf{k}$-$points was used for all other calculations. To find the ground-state magnetic ordering, we have considered non$-$magnetic (NM), ferromagnetic (FM), $A-$type antiferromagnetic ($A-$AF), $C-$type antiferromagnetic ($C-$AF), and $G-$type antiferromagnetic ($G-$AF) configurations\cite{patra2016electronic}. In order to account for the strong electron correlation associated with the V 3$d$ electrons, the GGA+$U$ method was used with $U_{eff}$ = 3 eV. The Born effective charges were calculated by the Berry-phase method with an 8$\times$8$\times$8 \textbf{k}$-$point mesh per f.u. using the modern theory of polarization\cite{resta1994macroscopic,king1993theory}. We have used a periodic supercell approach to simulate the substituted systems, i.e.
for the compositions with 33\% and 50\% V substitution, 1$\times$1$\times$3 and 1$\times$1$\times$2 supercells were created, respectively. Then one Ti atom was replaced with V to mimic the experimental conditions. The supercells were again doubled in each direction to incorporate the different magnetic orderings. A similar methodology was used to model the other compositions.
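The supercell bookkeeping can be illustrated as follows; the idealized cubic-like fractional coordinates below are only a stand-in (the actual $P4mm$ positions are relaxed in the calculations), and the helper names are our own.

```python
import numpy as np
from collections import Counter

# Idealized five-atom perovskite basis in fractional coordinates.
basis = {
    "Pb": [[0.0, 0.0, 0.0]],
    "Ti": [[0.5, 0.5, 0.5]],
    "O":  [[0.5, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.5]],
}

def supercell(basis, reps):
    """Repeat the basis reps = (na, nb, nc) times, rescaling coordinates."""
    na, nb, nc = reps
    symbols, positions = [], []
    for i in range(na):
        for j in range(nb):
            for k in range(nc):
                for species, sites in basis.items():
                    for r in sites:
                        symbols.append(species)
                        positions.append([(r[0] + i) / na,
                                          (r[1] + j) / nb,
                                          (r[2] + k) / nc])
    return symbols, np.array(positions)

symbols, pos = supercell(basis, (1, 1, 3))   # 1x1x3 cell: Pb3Ti3O9
symbols[symbols.index("Ti")] = "V"           # one Ti -> V: PbTi_{2/3}V_{1/3}O3
print(Counter(symbols))                      # Counter({'O': 9, 'Pb': 3, 'Ti': 2, 'V': 1})
```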
\par
The orbital moments at the transition-metal sites were calculated using the orbital polarization correction (SO + OP) method implemented in Wien2k\cite{blaha2001wien2k}. In addition, we have simulated X$-$ray magnetic circular dichroism (XMCD)\cite{erskine1975calculation} spectra using Wien2k and estimated the site-selective spin and orbital magnetic moments of the V ions using the XMCD sum rules\cite{carra1993x,thole1992x}. For simulating XMCD, we have used the full-potential linearized augmented plane wave (FP-LAPW) method based on density functional theory as implemented in the Wien2k code. The generalized gradient approximation within the PBE scheme was used for the exchange$-$correlation potential. The muffin-tin radii for Pb, V and O were chosen as 1.66, 2.49, and 1.50 a.u., respectively. The Brillouin-zone integration was carried out with 5000 \textbf{k}-points, and energy convergence was achieved with a convergence criterion of 10$^{-5}$. Spin$-$orbit coupling is considered in the calculations. As the ferromagnetic state is metallic, the plasma frequency was first calculated and then included in the XMCD calculation.
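For completeness, the sum rules employed here take the standard form of Refs.~\cite{thole1992x,carra1993x} for the $L_{2,3}$ edges (the expressions below are quoted from the standard literature, not rederived here). Defining $p=\int_{L_3}(\mu^{+}-\mu^{-})\,d\omega$, $q=\int_{L_3+L_2}(\mu^{+}-\mu^{-})\,d\omega$, $r=\int_{L_3+L_2}(\mu^{+}+\mu^{-})\,d\omega$, and $n_h$ as the number of $3d$ holes, the orbital and spin moments are
\begin{gather}
m_{\mathrm{orb}} = -\frac{4q}{3r}\,n_h\,\mu_B, \qquad
m_{\mathrm{spin}} \simeq -\frac{6p-4q}{r}\,n_h\,\mu_B,
\end{gather}
where the magnetic dipole term $\langle T_z\rangle$ is neglected in the spin sum rule; for early transition metals such as V, the partial overlap of the $L_3$ and $L_2$ edges makes the spin sum rule only approximate.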
\section{Results and Discussions}
\subsection{Structural phase stability}
PVO is a $C-$type antiferromagnet (see Fig.~\ref{afstr}) due to the presence of a $d$ electron with S = 1/2. In oxides, V$^{4+}$ can either occur at the center of an octahedral cage (CaVO$_3$, SrVO$_3$)\cite{nekrasov2005comparative} or form a strong vanadyl bond with one O with a short bond length ($\sim$1.55$-$1.60\,\AA)\cite{schindler2000crystal}. PVO is reported to exhibit a tetragonal$-$to$-$cubic phase transition under pressure, but no structural transition was observed from 0\,K up to its decomposition temperature\cite{belik2005crystallographic}. Moreover, the minimum (apical) V$-$O bond length in the VO$_5$ pyramid in tetragonal PVO is 1.68\,\AA. This relatively long bond indicates a comparatively weaker vanadyl bond than usual. The average of the four planar V$-$O bond lengths is 1.99\,\AA. In agreement with these findings, the calculated Goldschmidt tolerance factor for PVO shows a value of 1.036, which usually indicates a non-centrosymmetric distortion.
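The quoted value can be reproduced from the standard definition $t = (r_A + r_O)/\big(\sqrt{2}\,(r_B + r_O)\big)$ with tabulated Shannon radii; the radii below are our assumed choices (Pb$^{2+}$ in XII coordination, V$^{4+}$ in VI coordination), and slightly different choices shift the result in the third decimal.

```python
from math import sqrt

def tolerance(r_a, r_b, r_o):
    """Goldschmidt tolerance factor t = (r_A + r_O) / (sqrt(2) * (r_B + r_O))."""
    return (r_a + r_o) / (sqrt(2) * (r_b + r_o))

# Shannon ionic radii in Angstrom (illustrative choices, see lead-in):
r_pb, r_v, r_o = 1.49, 0.58, 1.40
t = tolerance(r_pb, r_v, r_o)
print(round(t, 3))  # 1.032; t > 1 favours a tetragonal (non-cubic) distortion
```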
\begin{figure}[!t]
\includegraphics[scale=0.5]{fig/PbVO3t_lok2.png}
\caption{ The $C-$type antiferromagnetic structure of PVO. Pb and V atoms are labelled in the illustration as green (big) and blue (medium) spheres, respectively. Oxygen atoms occur at the corners of the VO$_5$ square$-$pyramid and are labelled as red (small) spheres. Two types of oxygen atoms are present in PbVO$_3$: one at the apical (labelled O1) and the other at the planar (labelled O2) position. The optimized atomic positions are: Pb [0, 0, 0], V [0.5, 0.5, 0.5708 (0.5677)], O1 [0, 0, 0.2128 (0.2087)] and O2 [0, 0, 0.6902 (0.6919)]. The values in brackets are from experimental measurements.\cite{belik2005crystallographic}}
\label{afstr}
\end{figure}
\begin{figure}[!t]
\includegraphics[width=3.2in,height=2.5in]{fig/lattice_para.png}
\caption{ The evolution of the lattice parameters of PbTi$_{1-x}$V$_x$O$_{3}$ as a function of $x$. The rapidly increasing $c$ and slowly decreasing $a$ result in an unusual enhancement of the $c/a$ value. The inset shows how the octahedral environment changes into a pyramidal one due to the increase in tetragonal distortion with increasing $x$.}
\label{lat_para}
\end{figure}
\par
Here, we have performed structural optimizations for both the ferroelectric ($P4mm$) and paraelectric ($Pm\bar{3}m$) phases of PbTi$_{1-x}$V$_x$O$_{3}$ for the $x$ = 0, 0.25, 0.33, 0.50, 0.67, 0.75 and 1 compositions. The substitution of V for Ti in PTO does not affect the symmetry of the crystal, as it retains the tetragonal symmetry over the whole composition range\cite{pan2017colossal}. With increasing V content ($x$), the lattice parameter $c$ increases almost linearly and $a$ decreases gradually, as shown in Fig.~\ref{lat_para}. This results in an unusual enhancement of the tetragonality of PbTi$_{1-x}$V$_x$O$_{3}$, which can be understood on the basis of the ionic radii of the Ti$^{4+}$ and V$^{4+}$ ions. The coordination number of Ti$^{4+}$ changes from 6 to 5 with increasing vanadium concentration. The ionic radius of V$^{4+}$ is 0.53\,\AA, whereas the ionic radii of Ti$^{4+}$ are 0.61\,\AA~and 0.51\,\AA~for 6-fold and 5-fold co$-$ordination, respectively\cite{shannon1976revised}. So, with the decrease in ionic radius, the lattice parameter $a$($b$) decreases in order to accommodate the smaller cation. On the other hand, with increasing V concentration, the chance of vanadyl-bond formation increases, which leads to an increase in the length of the $c-$axis. This is also the reason behind the increase in volume with increasing $x$. It may be noted that PTO-modified ferroelectrics such as (1-$x$)PTO$-x$Bi(Ni$_{1/2}$Ti$_{1/2}$)O$_3$\cite{choi2005structure} and (1-$x$)PTO$-x$Bi(Mg$_{1/2}$Ti$_{1/2}$)O$_3$\cite{randall2004investigation} usually do not show a high value of the $c/a$ ratio. But some PTO-based compounds, i.e. (1-$x$)PTO$-x$BiFeO$_3$\cite{chen2013effectively}, BiInO$_3-$PTO\cite{eitel2001new} and (1-$x$)PTO$-x$Bi(Zn$_{1/2}$Ti$_{1/2}$)O$_3$\cite{suchomel2005enhanced}, show abnormally high tetragonality. In the present study, the maximum tetragonality ($c/a$) achieved for a PTO-based compound, i.e. PbTi$_{0.25}$V$_{0.75}$O$_3$, is 1.24.
This value is larger than those of some conventional perovskite ferroelectrics such as 0.5PTO$-$0.5BiFeO$_3$ ($c/a$ = 1.14), 0.85PTO$-$0.15BiInO$_3$ ($c/a$ = 1.08) and 0.6PTO$-$0.4Bi(Zn$_{1/2}$Ti$_{1/2}$)O$_3$ ($c/a$ = 1.11). PTO$-$based ferroelectric materials with large $c/a$ possess interesting physical properties, such as a high value of the ferroelectric polarization ({\bf P}$_S$), a high Curie temperature ($T_C$), and enhanced negative thermal expansion (NTE). In this study, we show that a PTO$-$based material with a large $c/a$ value possesses strong magnetoelectric coupling as well as a giant magnetovolume effect.
\par
The unit-cell volume is sensitive to the structural transition in PbTi$_{1-x}$V$_x$O$_{3}$, i.e. it contracts when the ferroelectric-to-paraelectric transition takes place. For $x$ = 0.25, the volume contracts by 4.2\% during the phase transition. Most interestingly, the volume contraction increases with increasing V content: it rises to 6.3\% for $x$ = 0.50 and 8.4\% for $x$ = 0.67. When $x$ = 1, the volume contracts by as much as 10.4\% at the ferroelectric-to-paraelectric phase-transition point. One of the main factors behind the volume contraction in this series of compounds at the phase transition is the change in the bonding between Pb$-$O and Ti/V$-$O, which is discussed below. PTO-based composites with a high value of volume contraction during the phase transition can give rise to interesting properties like a high NTE. Moreover, a temperature-induced phase transition is observed experimentally for $x \leq 0.3$\cite{pan2017colossal}. So it can be said that for higher amounts of Ti substitution in PbVO$_{3}$, the $T_C$ can be measured without decomposition.
\par
The cubic structure of PVO can be considered as a special case of the ideal tetragonal perovskite with $c/a = 1$. The ferroelectric polarization is very sensitive to the structural parameters. So, to calculate the $c/a$ ratio accurately for the various compositions, we have calculated the total energy as a function of $c/a$. We have performed this calculation with a 1$\times$1$\times$2 supercell, which means we should get the minimum energy at $c/a = 2$ for a cubic system. Consistent with this, our $E$ vs. $c/a$ graph (see Fig. S1 in the supporting information) for PVO ($x$ = 1) shows the minimum energy at $c/a = 2$ for the paraelectric phase, whereas for the ferroelectric phase we get a non$-$integer $c/a$. In order to find the possible polarization path, we have calculated the total energy as a function of the Pb displacement with respect to the V/Ti$-$O polyhedra. Figure \ref{fig:dist} helps us to find the easiest path of the ferroelectric polarization and the role of the Pb ions in the tetragonal distortion. We have plotted the displacement of the Pb ion with respect to the VO$_5$ pyramid along the [111] and [001] directions. The energy vs. displacement curves for both directions have double-well shapes. It can be seen that the lowest energy for the off$-$center displacements is along the [001] direction. This shows that the ferroelectric properties are greatly affected by the polarizability of Pb. The energy difference between the two directions is very small (about 2\,meV/f.u.) compared to the well$-$known multiferroic BiFeO$_3$\cite{ravindran2006theoretical}. So, in principle, the Curie temperature of PVO must be lower than that of BiFeO$_3$.
However, the $T_C$ for PVO has not been estimated yet, as it decomposes before reaching its paraelectric phase. But the $T_C$ of PbTi$_{0.9}$V$_{0.1}$O$_3$ was estimated to be 823\,K by Pan \textit{et al.}~\cite{pan2017colossal} and it showed an increasing trend with V concentration (the measured $T_C$ for PbTi$_{0.8}$V$_{0.2}$O$_3$ and PbTi$_{0.7}$V$_{0.3}$O$_3$ lie between 823\,K and 873\,K). Compositions with $x>$ 0.3 decompose before reaching their $T_C$.
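The double-well energy-vs-displacement curves discussed above are well described by the standard quartic Landau form $E(u)=au^{2}+bu^{4}$ with $a<0<b$, for which the off-center minimum and well depth follow analytically. A minimal sketch with invented coefficients (not fitted to our data):

```python
import numpy as np

# Quartic double well E(u) = a*u^2 + b*u^4, a < 0 < b: the usual minimal
# model for a ferroelectric off-center displacement.
def double_well_minimum(a, b):
    u_min = np.sqrt(-a / (2.0 * b))       # off-center displacement at the minimum
    depth = a * u_min**2 + b * u_min**4   # E(u_min) - E(0); negative (well depth)
    return u_min, depth

# Illustrative coefficients (arbitrary energy/displacement units):
u_star, e_well = double_well_minimum(a=-4.0, b=2.0)
```

Comparing the well depths obtained for displacements along [001] and [111] gives the small ($\sim$2\,meV/f.u.) anisotropy discussed in the text.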
\begin{figure}[!t]
\includegraphics[scale=0.5]{fig/001-111.eps}
\caption{(Color online) Total energy as a function of the displacement of the Pb ion along the [001] and [111] directions for ferroelectric PVO. The structural relaxations were considered through selective dynamics.}
\label{fig:dist}
\end{figure}
We have also performed nudged elastic band (NEB) calculations in order to find out the energetics of the ferroelectric to paraelectric phase transition and to identify a minimum energy path for it. To calculate the minimum energy path, we have considered the tetragonal phase of PbVO$_3$ as the initial structure and the cubic phase as the final structure. Different intermediate images were created and the energies of all the images were calculated, allowing both volume and shape variation. The energies obtained are plotted versus the images (referred to here as reaction coordinates). The graph given in Fig. \ref{fig:neb} results from the NEB calculations and shows that the ferroelectric--to--paraelectric transition involves an energy barrier of 127\,meV, which is higher than the difference between the ground-state total energies of these two phases, i.e. $\sim$70\,meV. The figure also shows the V--O coordination polyhedra. In the ferroelectric phase, the V--O polyhedron is highly distorted and forms a pyramidal arrangement. The intermediate images along the FE--to--PE transition path with higher energies than the FE phase possess comparatively less distorted polyhedra. Finally, the end product, i.e. the paraelectric phase, exhibits an undistorted octahedron. The high energy barrier for the phase transition is consistent with the high temperature of the FE--to--PE phase transition.
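The barrier extraction from the converged NEB profile reduces to taking the highest image energy relative to the initial state. A sketch with placeholder image energies shaped like our result (a $\sim$127\,meV barrier from the FE minimum, with the PE end point $\sim$70\,meV above it):

```python
# Image energies along the FE -> saddle -> PE path (meV, relative to the
# ferroelectric minimum). Values are illustrative placeholders.
energies_meV = [0.0, 45.0, 98.0, 127.0, 105.0, 70.0]

def neb_barrier(energies):
    """Forward barrier = highest image energy minus the initial-state energy."""
    return max(energies) - energies[0]

barrier = neb_barrier(energies_meV)
```

In practice one would interpolate between images (e.g. with a cubic spline of the energies and tangential forces) before locating the saddle, but the discrete maximum already bounds the barrier from below.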
\par
\subsection{Chemical Bonding}
The chemical bonding analysis in compounds similar to PVO, i.e. BaTiO$_3$ and PTO, shows that the ferroelectric instability arises due to the hybridization interaction between the Ti 3$d$ and O 2$p$ states\cite{cohen1992origin}. So it is interesting to analyze the chemical bonding between V/Ti and O in PbTi$_{1-x}$V$_x$O$_{3}$ in order to probe the origin of the ferroelectricity in depth. Experimental results and theoretical analysis also show that a significant hybridization is present between Pb and O\cite{kuroiwa2001evidence}. This also contributes to the Born effective charges (BEC) and consequently to the polarization.
\begin{figure}[!t]
\includegraphics[scale=0.3]{fig/NEB.png}
\caption{The ferroelectric to paraelectric transition path for PVO obtained from the nudged elastic band method. Both volume and shape variation during the transition are considered. The insets show the V and O coordination for the particular images marked by arrows. The atom colors are the same as in Fig.~\ref{afstr}.}
\label{fig:neb}
\end{figure}
\par
The charge density plots for the $x$ = 0, 0.50, and 1 compositions are given in Fig. \ref{charge}. It can be seen from Fig. \ref{charge}(a), (b), and (c) that the charge density between the $B-$site cation and the apical oxygen increases as the V concentration increases. In PVO, the presence of the strong V$-$O1 bond along with the lone pair electrons at the Pb$^{2+}$ site weakens the Pb$-$O1 bond. Because of this, the Pb atom shifts more towards O2 to maintain the charge balance. Hence, the covalent bonding interaction between Pb and O2 shows an increasing trend, which is evident from Fig. \ref{charge}(d), (e), and (f) as the width of the charge density increases with increasing $x$. These observations indicate that increasing the V concentration in PbTi$_{1-x}$V$_x$O$_{3}$ increases not only the covalent bonding between Ti/V$-$O1 but also that between Pb$-$O2. As a consequence, the tetragonal distortion increases with increasing $x$.
\begin{figure}[!t]
\includegraphics[scale=0.10]{fig/CHGCAR.jpg}
\caption{\label{fig:charge} Charge density between (a) Ti and O1 in PbTiO$_{3}$, (b) V and O1 in PbTi$_{0.5}$V$_{0.5}$O$_{3}$, (c) V and O1 bond in PbVO$_{3}$, (d) Pb and O2 in PbTiO$_{3}$, (e) in PbTi$_{0.5}$V$_{0.5}$O$_{3}$, and (f) in PbVO$_{3}$.}
\label{charge}
\end{figure}
\par
The charge density plots for PVO in the cubic and tetragonal phases are given in the upper and lower panels of Fig. \ref{charge_fp}(a), respectively. It can be seen that the charge density distribution at the Pb and O sites is essentially isotropic in nature for the paraelectric phase. On the other hand, a finite amount of charge can be seen between Pb and O in the tetragonal phase, showing an anisotropic charge density distribution. So it can be stated that in the cubic phase the Pb$-$O bond has a more ionic character, while the tetragonal phase has a mixed iono-covalent bonding character. On the other hand, the bonding between V and O is found to have substantial covalency in both the cubic and tetragonal phases, as we can find noticeable charge density between these atomic sites. But it may be noted that the covalent interaction between V and O in the tetragonal phase is stronger than that in the cubic case. Due to this, the V atom is shifted from the center of the octahedron, stabilizing the ferroelectric state.
\par
The charge transfer distribution shows an isotropic nature at the Pb and O sites (Fig.~\ref{charge_fp}(b)) in the cubic phase, confirming the ionic bonding between Pb$-$O. The anisotropic nature of the charge transfer distribution between V and O indicates the presence of finite covalent bonding between these atoms in the cubic phase. The charge transfer distribution between V and O as well as between Pb and O in the tetragonal phase shows anisotropic behavior, as shown in Fig. \ref{charge_fp}(b), indicating the presence of finite covalent bonding between them. It may be noted that, though the covalency between Pb$-$O is weaker than that between V$-$O in the tetragonal phase, it is of equal importance for the structural and ferroelectric properties.
\begin{figure}[!t]
\includegraphics[scale=0.3]{fig/cub_chg+elf.eps}
\newline
\includegraphics[scale=0.3]{fig/tet_chg+elf.eps}
\caption{(a) Charge density, (b) Charge transfer, and (c) Electron localization function for the paraelectric (upper panel) and ferroelectric (lower panel) structures.}
\label{charge_fp}
\end{figure}
\begin{figure}[!b]
\includegraphics[scale=0.2]{fig/ELFCAR1.png}
\caption{ (Color online) The valence electron localization function isosurface plotted at a value of 0.7 for PVO in the ferroelectric $P4mm$ structure.}
\label{ELF}
\end{figure}
\begin{figure*}
\includegraphics[width=2.5in,height=4in]{fig/cubpdos.png}
\includegraphics[width=2.5in,height=4in]{fig/tetpdos.png}
\caption{\label{fig:bnds}The site and orbital projected density of states (DOS) for PVO for the respective ground state magnetic structures in the (a) paraelectric and (b) ferroelectric phases. The Fermi level is set to zero.}
\label{partial_dos}
\end{figure*}
\par
Electron localization function (ELF) analysis also provides important information about the bonding interaction between the constituents of a compound. It can be seen from Fig. \ref{charge_fp}(c) that the ELF is maximum at the O sites and minimum at the Pb and V sites for both the paraelectric and ferroelectric phases. Finite ELF can be seen between V and O in the tetragonal phase, showing the presence of finite covalent bonding between them. The presence of stereochemically active lone pair electrons at the Pb site is also clearly visible in the ELF plot for the tetragonal case. So the bonding interactions between Pb and O as well as between V and O in the ferroelectric tetragonal phase can be concluded to have a dominant ionic character with finite covalent bonding. The difference in the bonding behaviour of the Ti/V$-$O and Pb$-$O bonds between the paraelectric and ferroelectric phases plays a crucial role in the large volume contraction during the ferroelectric-to-paraelectric phase transition. We have also plotted the isosurface of the valence electron localization function with an ELF value of 0.7 in Fig. \ref{ELF}. This figure shows the lobe-like lone pairs of hybridized 6$s$ electrons at the Pb site. The charge density and charge transfer plots show that a finite amount of electrons from the Pb sites covalently interacts with O 2$p$, confirming that the 6$s$ electrons of Pb are not completely chemically inert.
\par
The site and orbital projected DOS for the paraelectric and ferroelectric phases for their respective ground state magnetic configurations are given in Fig. \ref{partial_dos}(a) and (b), respectively. It can be seen that the V 3$d$ states and O 2$p$ states are spread over the valence band in a range from -7\,eV to the Fermi level (E$_F$). The energetically degenerate nature of these two states indicates the presence of strong covalent hybridization between V and O. This hybridization weakens the short range repulsion and lowers the energy of the ferroelectric phase. The filled O 2$s$ states form narrow bands around -18\,eV (see Fig. \ref{partial_dos}(b)). Above that, at around -9\,eV, the Pb 6$s$ lone pair electronic states are present with small contributions from 2$p$ electrons. This also confirms that the 6$s$ electrons are stereochemically active and not inert. Though the Pb 6$p$ states are well separated from its 6$s$ states and are mostly present in the CB, a small amount of Pb 6$p$ states can be seen that are degenerate with the O 2$p$ states, confirming the Pb$-$O covalency. The hybridization between these states enhances the stability of the ferroelectric phase by lowering the total energy of the system. The partial DOS for PbTi$_{0.5}$V$_{0.5}$O$_3$ calculated with GGA$+U$ ($U_{eff}$ = 3\,eV) is given as Fig. S2 in the supporting information. A similar DOS distribution can be seen for the Pb, V, and O atoms in PbTi$_{0.5}$V$_{0.5}$O$_3$ as well. The almost empty $d$ states indicate the $d^0-$ness of Ti$^{4+}$. Due to the covalent interaction between Ti and O, the charges redistribute among themselves, which is why we see a very small yet noticeable DOS in the Ti valence band.
\par
The nature and strength of the bonding can also be determined by crystal orbital Hamilton population (COHP) analysis, where negative and positive COHP values indicate bonding and anti$-$bonding interactions, respectively. We have calculated the COHP among the constituents of PVO (Pb$-$O1, Pb$-$O2, V$-$O1, V$-$O2, O1$-$O2) and given it in Fig. \ref{fig:cohp}. It can be seen that all the occupied states for the Pb$-$O and V$-$O interactions have bonding character for both the paraelectric and ferroelectric phases. The Pb$-$O2 interactions for the paraelectric and ferroelectric phases are almost the same. But the Pb$-$O1 interaction in the ferroelectric case has more bonding states, which are due to the hybridization between the O 2$p$ and Pb 6$s$/6$p$ states. This indicates the presence of stronger covalency between Pb and O1 in the ferroelectric phase than in the paraelectric one, which is also consistent with our charge density and partial DOS analyses. In the ferroelectric phase the V$-$O1 bonding interaction is stronger than the V$-$O2 bonding interaction, which is due to the off$-$center displacement of the V atom towards the O1 atom.
\begin{figure}[!t]
\includegraphics[scale=0.35]{fig/cohp.eps}
\caption{Crystal orbital Hamilton population (COHP) for PVO in the paraelectric and ferroelectric phases, describing the Pb$-$O1, Pb$-$O2, V$-$O1, V$-$O2, and O1$-$O2 interactions. O1 and O2 are the planar and apical oxygen atoms, respectively.}
\label{fig:cohp}
\end{figure}
\begin{figure*}
\includegraphics[scale=0.4]{fig/Tot_cub.eps}
\includegraphics[scale=0.4]{fig/Tot_tet.eps}
\caption{Total density of states for PVO in the nonspin polarized (NSP), ferromagnetic, $A-$, $C-$, and $G-$type antiferromagnetic configurations for the (a) paraelectric and (b) ferroelectric phases. The Fermi level is set to zero.}
\label{total_dos}
\end{figure*}
\subsection{Electronic Structure}
The total DOS given in Fig. \ref{total_dos}(a) for the paraelectric phase shows a metallic behavior for all magnetic configurations. From the total DOS of the ferroelectric phase (Fig. \ref{total_dos}(b)), it can be seen that the nonmagnetic phase shows a metallic state, whereas the ferromagnetic case is a half metal with a gap of $\sim$0.5\,eV in the down spin channel. But when antiferromagnetic ordering is introduced, the exchange potential produced by the exchange interaction pushes the V 3$d$ states towards lower energy, which results in an insulating state with a narrow gap. So a combination of spin polarization, magnetic ordering, and crystal structure plays an important role in stabilizing the ferroelectric phase of PVO. The broad band spreading from -7 to -2\,eV is mainly due to the strong hybridization between the V 3$d$ and O 2$p$ electrons. The localized states at around -9\,eV are generated from the hybridized Pb 6$s$ lone pair electrons. After including the correlation effect through the Hubbard $U$ in the Hamiltonian matrix, the states get more localized to give a larger band gap, and this localization of the $d$ electrons produces a larger magnetic moment.
\par
The total DOS for the $x$ = 0.33, 0.50, and 0.67 compositions were calculated with the GGA+$U$ method and are given as Fig. S3 in the supporting information. The band gap value decreases with increasing $x$. As it is known that the bond lengths and lattice parameters have a direct impact on band gaps, the decreasing band gap may be attributed to the increase in the average Ti/V$-$O bond length with $x$ in the Ti/V$-$O$_6$ polyhedra (the average Ti$-$O bond length is 2.07~\AA\ in PTO, whereas the average V$-$O bond length is 2.11~\AA\ in PVO). The lattice parameter $c$ also increases with $x$.
\par
The band structures for both the para$-$ and ferroelectric phases of the $x$ = 1 composition are given in Fig. \ref{bnds}. The \textbf{k}$-$path considered in the band structure is given as Fig. S4 in the supporting information. The lowest bands, seen around 9\,eV below E$_F$, are due to the Pb 6$s$ states. It is to be noted that these bands are broader in the paraelectric phase, whereas in the ferroelectric phase they are more localized in a small energy range. This reflects the effect of the 6$s$ lone pair in stabilizing the ferroelectric state. A manifold of occupied O 2$p$ bands is spread over a range from -7\,eV to -2\,eV. A pair of bands is separated from the O 2$p$ bands and localized just below $E_F$. These have V 3$d$ character with no dispersion along the $\Gamma-$Z direction and a very weak dispersion along the X$-$R as well as M$-$A directions, indicating a 2D characteristic in the $ab-$plane\cite{singh2006electronic}. Above these band features, a narrow gap with a value of 0.2\,eV is present in the ferroelectric phase. The conduction bands located in the vicinity of the CB edge are derived from the V 3$d$ states, but the bands at higher energy ($\sim$3\,eV) have mainly Pb 6$p$ character. We have also calculated the band structure with the GGA+$U$ method. The GGA+$U$ band structures for the $x$ = 0.5 and $x$ = 1 compositions are given in Figs. S5 and S6 in the supporting information. It can be seen that when the V concentration is lower, the vanadyl bond weakens and therefore the $d_{xy}$ band gets dispersed along the $\Gamma-$Z, X$-$R, and M$-$A directions. This indicates that the tendency of the system to form 2D magnetism is reduced for lower $x$ values.
\begin{figure*}
\includegraphics[width=3.0in,height=2.5in,angle=270]{fig/cub_band.eps}
\includegraphics[width=3.0in,height=2.5in,angle=270]{fig/tetra_band.eps}
\caption{Electronic band dispersion for PVO in the (left) paraelectric and (right) ferroelectric phases. The Fermi level is set to zero.}
\label{bnds}
\end{figure*}
\subsection{Magnetic Properties}
We have calculated total energies for the nonmagnetic and all the magnetic orderings mentioned above for PbTi$_{1-x}$V$_x$O$_{3}$ in both the paraelectric and ferroelectric phases; they are given in Table S1 in the supporting information. PTO is a well$-$known non$-$magnetic compound. Our total energy calculations reveal that $C-$AF is the ground state for the other compositions. It is to be noted that the $C-$AF and $G-$AF states are almost degenerate (see Table S1 in the supporting information). The present observation of the $C-$AF state as the magnetic ground state for PVO is consistent with the experimental observation of a two$-$dimensional $C-$AF phase and also with other theoretical studies\cite{shpanchenko2004synthesis, uratani2009first}. To identify the exact composition where the non$-$magnetic to magnetic transition happens, we have plotted $\Delta$E vs. $x$ (where $\Delta E=E_{C-AF}-E_{NM}$) for the ferroelectric phase, given in Fig. S7 in the supporting information. The energy difference grows linearly in magnitude as the concentration of V increases, due to the increase in the localized $d$-electron density per cell, making the antiferromagnetic state more stable. We have extrapolated the $\Delta$E vs. $x$ curve, and the magnetic transition point is found to be $x\sim$ 0.123. This is in agreement with the previous theoretical finding by Ricci \textit{et al.}\cite{ricci2013multiferroicity}, using similar calculations, that the magnetic ground state stabilizes in PbTi$_{1-x}$V$_x$O$_{3}$ for $x$ = 12.5\%.
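The extrapolation described above amounts to a linear fit of $\Delta E(x)$ and solving $\Delta E(x)=0$. A hedged sketch, with invented $(x,\Delta E)$ pairs standing in for our Table S1 data (the fit to the actual data yields $x\sim0.123$):

```python
import numpy as np

# Illustrative Delta E = E_CAF - E_NM values (eV/f.u.) vs composition x.
# These numbers are placeholders, not the paper's computed energies.
x = np.array([0.25, 0.50, 0.75, 1.00])
dE = np.array([-0.05, -0.15, -0.25, -0.35])

# Linear fit Delta E(x) = slope*x + intercept; the crossover composition
# is where Delta E changes sign.
slope, intercept = np.polyfit(x, dE, 1)
x_transition = -intercept / slope
```

With these placeholder energies the crossover lands at $x=0.125$, illustrating why the procedure is robust: any nearly linear $\Delta E(x)$ pins the transition composition to within the fit uncertainty.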
\par
The total energy shows a non$-$magnetic ground state for the paraelectric phase of PbTi$_{1-x}$V$_x$O$_{3}$. So the ferroelectric to paraelectric phase transition is associated with a magnetic to non$-$magnetic transition, which shows a strong coupling between the electric and magnetic order parameters for $x>$ 0.123. The large volume contraction during the structural transition can now also be attributed to the associated magnetic phase transition. So it can be said that this series of compounds possesses not only a giant magnetoelectric effect, but also a giant magnetovolume effect. It is to be noted that compounds with a strong magnetovolume effect can be used to convert magnetic energy into kinetic energy, or vice versa, and are used to build actuators and sensors.
\begin{figure}[!b]
\includegraphics[scale=0.4]{fig/orb1.png}
\caption{The orbital projected density of states for the 3$d$ electrons in the V site of PbVO$_3$ in the (a) paraelectric and (b) ferroelectric phases for their respective ground state magnetic structures. The Fermi level is set to zero.}
\label{orb}
\end{figure}
\begin{figure*}
\includegraphics[width=3in,height=3in]{fig/orbord_T.png}
\includegraphics[width=3in,height=3in]{fig/orbord_C.jpg}
\caption{Orbital ordering pattern of PVO in (left) the ferroelectric and (right) paraelectric phases as derived from full potential calculations.}
\label{orbord}
\end{figure*}
\par
The orbital projected DOS for the V 3$d$ electrons in both the paraelectric and ferroelectric phases of PVO are given in Fig. \ref{orb}(a) and (b), respectively. In the presence of a cubic octahedral crystal field, the $d$ level of a transition metal cation splits into $t_{2g}$ (with degenerate $d_{xy}$, $d_{yz}$, and $d_{zx}$ orbitals) and $e_g$ (with degenerate $d_{x^2-y^2}$ and $d_{z^2}$ orbitals) levels. The $e_g$ levels are completely empty, whereas the $t_{2g}$ states are partially filled in the paraelectric phase, resulting in a metallic state. The ferroelectric phase of PVO has a square pyramidal coordination (Fig. \ref{afstr}) because of the off$-$center displacement of V. Due to the presence of the VO$_5$ square pyramidal environment, the $t_{2g}$ level of the V 3$d$ electrons splits into $b_{2g}$ ($d_{xy}$) and doubly degenerate $e_g$ ($d_{yz}$, $d_{xz}$) levels with an energy difference of $\sim$1\,eV. Similarly, the degeneracy of the $e_g$ level breaks to produce separate $b_{1g}$ ($d_{x^2-y^2}$) and $a_{1g}$ ($d_{z^2}$) levels\cite{solovyev2012magnetic}. The orbital degeneracy is broken because of the second-order Jahn$-$Teller ordering. This was also noticed in YTiO$_3$, where the $d^1$ configuration of Ti$^{3+}$ is similar to that of V$^{4+}$ in our study\cite{goodenough2015varied}. The orbital projected DOS for the ferroelectric phase given in Fig. \ref{orb}(b) shows that the localized V 3$d$ electron occupies the lowest lying $d_{xy}$ orbital, which is separated from the other $d$ orbitals by crystal field splitting, producing a spin-1/2 antiferromagnetic ordering in the $ab$ plane. Due to the localized nature of this $d-$electron, there is a strong intra$-$atomic exchange splitting that shifts the unoccupied $d_{xy}$ states to relatively higher energy.
\par
We know that the electrons present in the vicinity of the Fermi level actively participate in electrical conduction and magnetic exchange interactions. So we have used the integrated values of the orbital projected DOS, i.e. from $E_F$ to -1\,eV, to study the orbital ordering. We have used the same procedure for the OO study as given in our previous studies\cite{vidya2008density,vidya2002spin}; such an analysis provides not only the OO pattern but also a pictorial illustration of the orientation of a particular $d$ orbital. The OO patterns for the ferroelectric and paraelectric phases are given in Fig. \ref{orbord}. It can be seen that the V atom has a $d_{xy}$-type OO in the ferroelectric phase. This type of orbital ordering in PVO strengthens the intraplanar V$-$O$-$V interactions and also stabilizes the 2D magnetism. Hence, the magnetic and orbital orderings are intricately coupled with the lattice distortion and ionic displacements, which results in strong magnetoelectric coupling. But in the cubic paraelectric case, the V$-$O$-$V interactions are strong enough to create an itinerant-electron band with all the $t_{2g}$ orbitals partially filled. So the OO pattern for the paraelectric phase does not show a particular manner of ordering from any of the $t_{2g}$ orbitals. This weakens the exchange coupling and stabilizes a non-magnetic ground state.
\par
The calculated magnetic moment at the V site is 0.93\,$\mu_B$ for the ferroelectric phase. The covalency present between V and O induces a magnetic moment of 0.05\,$\mu_B$ at the O site. The orbital magnetic moment at the V site evaluated using the SO + OP method is estimated to be $-$0.045\,$\mu_B$. This indicates that the orbital magnetic moment is aligned antiparallel to the spin magnetic moment. The magnitude of the orbital moment of PVO is almost 5 times smaller than that of BiCoO$_3$\cite{ravindran2008magnetic}. This is due to the presence of a relatively larger spin moment and stronger spin$-$orbit coupling in BiCoO$_3$.
\par
We have calculated the magnetic anisotropy energy\cite{brooks1940ferromagnetic,bozorth1937directional} for $x$ = 0.5 and 1 in their ground state $C-$AF configurations. For this, we have calculated the variation of the total energy with the direction of the magnetization using the force theorem. The calculated relative magnetic anisotropy energies with respect to the easy axis are 0, 0.125, 0.070, and 0.048\,meV/f.u. for $x$ = 0.5 and 0.592, 1.630, 0, and 0.185\,meV/f.u. for $x$ = 1 along the [001], [100], [110], and [111] directions, respectively. Therefore, the easy axes for PbTi$_{0.5}$V$_{0.5}$O$_3$ and PVO are [001] and [110], respectively. The change in the magnetic easy axis from $x$ = 0.5 to 1 can be attributed to the formation of the vanadyl bond with increasing V concentration, i.e. the $x$ = 0.5 composition is a BiCoO$_3$-like $C-$AF (so it shows a similar easy axis), and with increasing V concentration the tendency to form 2D antiferromagnetism increases (which is also reflected in our band structure).
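The easy axis identification above is simply the direction with the lowest relative anisotropy energy. A sketch using the relative MAE values quoted in the text (meV/f.u.):

```python
# Relative magnetic anisotropy energies from the force-theorem calculations,
# as quoted in the text (meV/f.u., measured from the easy axis).
mae = {
    "x=0.5": {"[001]": 0.0,   "[100]": 0.125, "[110]": 0.070, "[111]": 0.048},
    "x=1":   {"[001]": 0.592, "[100]": 1.630, "[110]": 0.0,   "[111]": 0.185},
}

def easy_axis(energies):
    # The easy axis is the magnetization direction of minimum energy.
    return min(energies, key=energies.get)

axes = {comp: easy_axis(e) for comp, e in mae.items()}
```

This reproduces the stated result: [001] for PbTi$_{0.5}$V$_{0.5}$O$_3$ and [110] for PVO.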
\par
In order to get a deeper understanding of the magnetic properties of PVO, we have simulated the XMCD and X-ray absorption spectra (XAS) at the $L_{2,3}$ edges of V in the ferromagnetic configuration of PVO. The XMCD and XAS spectra along with the left and right polarized spectra for the V atom in PVO are given in Fig.~\ref{sumrule}. By applying the sum rules\cite{carra1993x,thole1992x} given in the supporting information, we have evaluated the spin and orbital moments at the V site. The obtained orbital moment of -0.039\,$\mu_B$ for V is consistent with that obtained directly from the self-consistent calculations (-0.045\,$\mu_B$). The sum rule analysis results in a spin moment of 0.8\,$\mu_B$/V at the V $L_{2,3}$ edges. However, this differs noticeably from the value calculated by the SCF method, which is $\sim$1\,$\mu_B$/V. Detailed analyses of different 3$d$ compounds by Wende \textit{et al.}\cite{scherz2004fine} show that the deviation of the sum rule results from the SCF predictions is larger for lighter 3$d$ elements\cite{wende2004recent,scherz2005limitations,wende2007xmcd}, as the integral sum rule analysis ignores the spectral shape of the XMCD. So the detailed fine structure in the XMCD of V should be fitted with the multipole moment analysis as explained in Ref.~\cite{wende2007xmcd}.
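Once the spectra are integrated, the sum-rule moments reduce to simple algebra. A hedged sketch of the standard Thole--Carra sum rules, ignoring the magnetic-dipole term $\langle T_z\rangle$; the integrated-intensity inputs below are invented, not our computed spectra:

```python
# Inputs: p = integrated XMCD over L3 only; q = integrated XMCD over L3+L2;
# r = integrated isotropic XAS over L3+L2; n_h = number of 3d holes.
# Form of the sum rules with the <T_z> term neglected.
def sum_rule_moments(p, q, r, n_h):
    m_orb = -4.0 * q * n_h / (3.0 * r)       # orbital moment (mu_B)
    m_spin = -(6.0 * p - 4.0 * q) * n_h / r  # effective spin moment (mu_B)
    return m_orb, m_spin

# Illustrative integrals (arbitrary but dimensionally consistent units):
m_orb, m_spin = sum_rule_moments(p=-0.05, q=0.0075, r=1.0, n_h=4.0)
```

The small $q/r$ ratio controls the orbital moment, which is why the orbital sum rule is comparatively robust, while the spin rule inherits the $L_3$/$L_2$ overlap problem discussed in the text for light 3$d$ elements.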
\begin{table*}
\centering
\caption{Calculated diagonal components of the Born effective charge tensors for $x$ = 0, 0.5 and 1 of PbTi$_{1-x}$V$_x$O$_{3}$ in the ferroelectric phase with ground state magnetic ordering.}
\renewcommand{\arraystretch}{1}
\begin{tabularx}{\textwidth}{XXXXXXXXXX}
\hline\hline
\textbf{\textit{x~$\rightarrow$}} & & \textbf{0} & & & \textbf{0.5} & & & \textbf{1} & \\
\hline
\textbf{Atom} & \textbf{\textit{xx}} & \textbf{\textit{yy}} & \textbf{\textit{zz}} & \textbf{\textit{xx}} & \textbf{\textit{yy}} & \textbf{\textit{zz}} & \textbf{\textit{xx}} & \textbf{\textit{yy}} & \textbf{\textit{zz}}\\
\hline
\textbf{Pb} & 3.641 & 3.641 & 3.331 & 3.623 & 3.617 & 3.664 & 3.251 & 3.232 & 3.275 \\
\textbf{Ti} & 6.166 & 6.166 & 4.961 & 5.480 & 5.406 & 4.778 & $-$ & $-$ & $-$ \\
\textbf{V} & $-$ & $-$ & $-$ & 4.348 & 4.440 & 4.435 & 4.880 & 4.824 & 3.749 \\
\textbf{O1} &-1.938 &-1.938 &-4.245 &-1.148 &-1.154 &-3.098 &-1.442 &-1.525 &-3.720 \\
\textbf{O2} &-2.603 &-5.266 &-2.024 &-2.078 &-3.875 &-1.417 &-2.209 &-4.029 &-1.764 \\
\textbf{O3} &-5.266 &-2.603 &-2.024 &-3.716 &-1.899 &-1.423 &-3.877 &-2.513 &-1.724 \\ \hline\hline
\end{tabularx}
\label{BEC_table}
\end{table*}
\par
\subsection{Born effective charge and Spontaneous Polarization}
\label{sec:charge}
\par
We have calculated the BECs for all the compositions mentioned above. The diagonal components of the BECs for the $x$ = 0, 0.5, and 1.0 compositions are given in Table~\ref{BEC_table}. The formal valences of Pb, V, Ti, and O in PbTi$_{1-x}$V$_x$O$_{3}$ are +2, +4, +4, and -2, respectively. But the diagonal components of the calculated BEC tensors for the atoms are much larger than their nominal ionic values. This reveals that there is a large dynamic contribution superimposed on the static charge. This is due to the strong covalency effect, whereby a large amount of non-rigid delocalized charge flows across this compound during the ionic displacement. This results in additional charges with respect to the nominal ionic values (anomalous contribution) at the atomic sites. The large BEC values at the oxygen sites also suggest that a large force can be generated by applying a small field, which can stabilize the polarized state.
\begin{figure}[!t]
\includegraphics[scale=0.35]{fig/XMCD_final.png}
\caption{Absorption spectra at the V L$_{2,3}$ edges for right and left circular polarization of the exciting X-rays, together with the X-ray magnetic circular dichroism and X-ray absorption spectra of PVO.}
\label{sumrule}
\end{figure}
\begin{figure}[!b]
\includegraphics[scale=0.45]{fig/Graph4.png}
\caption{Calculated site projected spontaneous electric polarization of PbTi$_{1-x}$V$_x$O$_{3}$ for $x$ = 0 (upper panel), 0.5 (middle panel) and 1 (lower panel).}
\label{fig:ppol}
\end{figure}
In an effort to shed light on the polarization of PbTi$_{1-x}$V$_x$O$_{3}$ and its origin, we have evaluated the polarization for $x$ = 0 to 1. PTO and PVO are well-studied ferroelectrics with polarization values of 92\,$\mu$C/cm$^{2}$ and 147\,$\mu$C/cm$^{2}$, respectively. The calculated polarization values increase from 92\,$\mu$C/cm$^{2}$ for pure PTO to 105, 110, 118, 130, 135, and 147\,$\mu$C/cm$^{2}$ for $x$ = 0.25, 0.33, 0.5, 0.67, 0.75, and 1, respectively. This is due to the increase in tetragonal distortion with $x$, as discussed above. The polarization calculated with point charges showed comparatively low values. The reduction in the polarization values using point charges indicates that the covalency effect plays an important role in deciding the polarization of these materials.
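A BEC-based polarization estimate follows from $P_z = (e/V_{\mathrm{cell}})\sum_i Z^{*}_{zz}(i)\,\Delta z_i$, with $\Delta z_i$ the displacement of ion $i$ from its centrosymmetric position. In the sketch below the $Z^{*}_{zz}$ values are taken from Table~\ref{BEC_table} for $x$ = 1, but the displacements and cell volume are invented for illustration (they are not our relaxed-structure values):

```python
E_CHARGE = 1.602176634e-19  # elementary charge, C

def polarization_z(volume_A3, contributions):
    """contributions: list of (Z*_zz, dz in Angstrom). Returns P_z in C/m^2."""
    volume_m3 = volume_A3 * 1e-30
    dipole = sum(z * dz * 1e-10 for z, dz in contributions)  # units of e*m
    return E_CHARGE * dipole / volume_m3

# Z*_zz for Pb, V, O1, O2, O3 (Table values for x = 1) with made-up
# displacements and a made-up cell volume:
p_z = polarization_z(
    volume_A3=62.0,
    contributions=[(3.275, 0.30), (3.749, 0.20), (-3.720, -0.10),
                   (-2.209 - (-0.445), -0.10), (-1.724, -0.10)][:3]
    + [(-1.764, -0.10), (-1.724, -0.10)],
)
```

Multiplying the result (in C/m$^2$) by 100 converts it to $\mu$C/cm$^{2}$; the same decomposition underlies the atom-resolved contributions shown below for $x$ = 0, 0.5, and 1.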
\par
The atom$-$decomposed spontaneous polarization is given for the $x$ = 0, 0.5, and 1 compositions in Fig.~\ref{fig:ppol}. The major contribution to the polarization comes from the displacement of the O atoms from their centrosymmetric positions. It can be seen that the polarization contribution of Ti decreases towards higher $x$ values. In contrast, the opposite trend is seen for the V atom. This is because of the enhancement of the covalency between the B-site cation and O with V substitution at the Ti site.
\subsection{Conclusion}
PbTi$_{1-x}$V$_x$O$_{3}$ possesses strong coupling between the magnetic and electric order parameters for V concentrations $>$ 12.3\%. Compositions with lower substitution are non-magnetic ferroelectrics. Also, from the other side, the inclusion of Ti at the V site increases the chance of forming a 3D arrangement of V cations rather than a 2D one. The strong covalency between Ti/V$-$O and the noticeable covalency between Pb$-$O bring in the non$-$centrosymmetric distortion and stabilize the ferroelectric ground state. The ferroelectric-to-paraelectric transition is accompanied by a magnetic to non$-$magnetic transition for $x$ $>$ 0.123. There is a large volume contraction during this transition, indicating a strong lattice$-$ferroelectric coupling and a strong magnetovolume effect. The calculations show high values of the spontaneous electric polarization, which are mainly due to the displacement of the apical oxygen in the BO$_6$ octahedra of the paraelectric phase driven by the strong Ti/V$-$O covalent interaction. Therefore, PbTi$_{1-x}$V$_x$O$_{3}$ is a multifunctional series of compounds exhibiting a giant magnetoelectric effect, where one can change the magnetic properties drastically (even from magnetic to non$-$magnetic, as shown here) by applying an electric field and vice versa, a strong magnetovolume effect, electric-field-induced 2D magnetism and orbital ordering, strong ferroelectric$-$lattice coupling, etc. Also, recent experimental studies by Pan \textit{et al.}\cite{pan2017colossal} show that for lower V concentrations this series shows a good NTE response and also undergoes a temperature-induced tetragonal to cubic transition, which makes this series even more interesting for investigating its magneto/electro/multi$-$caloric properties. With all these properties coexisting in the series, PbTi$_{1-x}$V$_x$O$_{3}$ can be useful for many multifunctional device applications.
\label{sec:con}
\section{Acknowledgements}
The authors are grateful to the Research Council of Norway for providing computer time (under the project number NN2875k) at the Norwegian supercomputer consortium (NOTUR). This research was supported by the Indo$-$Norwegian Cooperative Program (INCP) via Grant No.F. 58-12/2014(IC). L. P. wishes to thank Prof. Anja Olafsen Sj{\aa}stad for her fruitful discussions. L. P. also thanks Mr Ashwin Kishore MR and Dr. A. Krishnamoorthy for critical reading of the manuscript.
\section{Associated content}
Energy vs. $c/a$ ratio curves, partial DOS for $x$ = 0.50 with the GGA+$U$ method, total DOS for $x$ = 0.33, 0.50, and 0.67 with the GGA+$U$ method, the Brillouin zone path used for band calculations, electronic band structures for $x$ = 0.50 and $x$ = 1 with the GGA+$U$ method, variation of the energy difference between the magnetic and nonmagnetic states with $x$, and the sum rule analysis. This information is available free of charge via the Internet at http://pubs.acs.org.
\section{Author Information}
\subsection{Corresponding Author}
*Email: raviphy@cutn.ac.in \newline
Tel: +91 94890 54267
\subsection{Notes}
The authors declare no competing financial interest.
\section*{Introduction}
There are many results related to the algebraic and geometric
classification
of low-dimensional algebras in the varieties of Jordan, Lie, Leibniz and
Zinbiel algebras;
for algebraic classifications see, for example,
\cite{ack, ikm19, kkk18};
for geometric classifications and descriptions of degenerations see, for example,
\cite{ack, bb14, fkkv19, jkk19, ikm19, kkk18, kama, kk20g, kkl20, klp19, maz79, S90}.
In the present paper, we give a geometric classification of nilpotent commutative $\mathfrak{CD}$-algebras.
This is a new class of non-associative algebras introduced in \cite{ack,kps19}.
The idea of the definition of a $\mathfrak{CD}$-algebra comes from the following property of Jordan and Lie algebras: {\it the commutator of any pair of multiplication operators is a derivation}.
This gives three identities of degree four, which reduce to only one identity of degree four in the commutative or anticommutative case.
Commutative and anticommutative $\mathfrak{CD}$-algebras are related to many interesting varieties of algebras.
Thus, the variety of anticommutative $\mathfrak{CD}$-algebras is a generalization of the variety of Lie algebras,
containing the intersection of the varieties of Malcev and Sagle algebras as a proper subvariety. Moreover, the following intersections of varieties coincide:
Malcev and Sagle algebras;
Malcev and anticommutative $\mathfrak{CD}$-algebras;
Sagle and anticommutative $\mathfrak{CD}$-algebras.
On the other hand,
the variety of anticommutative $\mathfrak{CD}$-algebras is a proper subvariety of
the varieties of binary Lie algebras
and almost Lie algebras \cite{kz}.
The variety of anticommutative $\mathfrak{CD}$-algebras coincides with the intersection of the varieties of binary Lie algebras and almost Lie algebras.
Commutative $\mathfrak{CD}$-algebras are a generalization of Jordan algebras,
which in turn are a generalization of associative commutative algebras.
On the other hand, the variety of commutative $\mathfrak{CD}$-algebras is also known as the variety of almost-Jordan algebras, and it lies inside the bigger variety of generalized almost-Jordan algebras \cite{arenas,hl,labra}.
The $n$-ary version of commutative $\mathfrak{CD}$-algebras was introduced in a recent paper by
Kaygorodov, Pozhidaev and Saraiva \cite{kps19}.
The variety of almost-Jordan algebras is the variety of commutative algebras
satisfying \[2((yx)x)x+yx^3=3(yx^2)x.\]
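For illustration (a direct check added here, not claimed in the original text), consider the algebra ${\mathfrak C}^{5}_{02}(\alpha)$ from the classification below, whose nonzero products are $e_1e_1=e_2$, $e_1e_2=e_3$, $e_1e_3=\alpha e_4$ and $e_2e_2=(\alpha+1)e_4$. Taking $x=y=e_1$, so that $x^2=e_2$ and $x^3=e_2e_1=e_3$, both sides of the identity equal $3\alpha e_4$:

```latex
\[
2((e_1e_1)e_1)e_1 + e_1e_1^3
  = 2(e_3e_1) + e_1e_3
  = 2\alpha e_4 + \alpha e_4
  = 3\alpha e_4
  = 3(e_1e_2)e_1
  = 3(e_1e_1^2)e_1.
\]
```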
This identity appeared in a paper of Osborn \cite{os65},
during the study of identities of degree less than or equal to $4$ of non-associative algebras; it is a linearized form of the Jordan identity.
The systematic study of almost-Jordan algebras was initiated in a subsequent paper of Osborn \cite{osborn65} and was continued in papers of Petersson \cite{petersson, petersson67}, Osborn \cite{osborn69}, and Sidorov \cite{Sidorov_1981}
(such algebras were sometimes called Lie triple algebras).
Hentzel and Peresi proved that every semiprime almost-Jordan ring is Jordan \cite{peresi}.
After that,
Labra and Correa
proved that a finite-dimensional almost-Jordan right-nilalgebra is nilpotent \cite{cl09,cl09-2}.
Assosymmetric algebras under the symmetric product give almost-Jordan algebras \cite{askar18}.
\medskip
\paragraph{\bf Motivation and contextualization}
Geometric properties of a variety of algebras have been an object of study since the 1970s. Gabriel~\cite{gabriel} described the irreducible components of the variety of $4$-dimensional unital associative algebras. Mazzola~\cite{maz79} classified algebraically and geometrically the variety of unital associative algebras of dimension $5$. Burde and Steinhoff~\cite{BC99} constructed the graphs of degenerations for the varieties of $3$-dimensional and $4$-dimensional Lie algebras over ${\mathbb C}$. Grunewald and O'Halloran~\cite{GRH} calculated the degenerations for the nilpotent Lie algebras of dimension up to $5$. Seeley~\cite{S90} solved the same problem for $6$-dimensional complex nilpotent Lie algebras.
Chouhy~\cite{chouhy} proved that, in the case of finite-dimensional associative algebras,
the $N$-Koszul property is preserved under the degeneration relation.
Degenerations have also been used to study a level of complexity of an algebra (see~\cite{g93,wolf1,wolf2, kh15}).
Given algebras ${\bf A}$ and ${\bf B}$ in the same variety, we write ${\bf A}\to {\bf B}$ and say that ${\bf A}$ {\it degenerates} to ${\bf B}$, or that ${\bf A}$ is a {\it deformation} of ${\bf B}$, if ${\bf B}$ is in the Zariski closure of the orbit of ${\bf A}$ (under the base-change action of the general linear group). The study of degenerations of algebras is very rich and closely related to deformation theory, in the sense of Gerstenhaber \cite{ger63}. It offers an insightful geometric perspective on the subject and has been the object of a lot of research.
In particular, there are many results concerning degenerations of algebras of small dimensions in a variety defined by a set of identities.
One of the main problems of the {\it geometric classification} of a variety of algebras is a description of its irreducible components. In the case of finitely-many orbits (i.e., isomorphism classes), the irreducible components are determined by the rigid algebras --- algebras whose orbit closure is an irreducible component of the variety under consideration.
The algebraic classification of complex $5$-dimensional nilpotent commutative $\mathfrak{CD}$-algebras was obtained in \cite{jkk20}, and in the present paper we continue the study of the variety by giving its geometric classification.
\section{The algebraic classification of complex $5$-dimensional nilpotent commutative $\mathfrak{CD}$-algebras}
The algebraic classification of $5$-dimensional nilpotent commutative $\mathfrak{CD}$-algebras has three steps:
the classification of all associative commutative algebras, given by Mazzola in 1979 \cite{maz79};
the classification of all non-associative Jordan algebras, given by Hegazi and Abdelwahab in 2016 \cite{ha16};
and the classification of all non-Jordan commutative $\mathfrak{CD}$-algebras, given by
Jumaniyozov, Kaygorodov and Khudoyberdiyev in 2021 \cite{jkk20}.
Let us give the list of algebras from the last part of this long classification:
\begin{theorem}\label{teor}
Let $\mathfrak{C}$ be a complex $5$-dimensional nilpotent commutative $\mathfrak{CD}$-algebra.
Then $\mathfrak{C}$ is a Jordan algebra or it is isomorphic to one algebra from the following list:
{\tiny
\begin{longtable}{lllllllllll}
$\mathfrak{C}^{5}_{01}$&$:$& $e_1 e_1 = e_2$ & $e_2 e_2=e_3$\\
$\mathfrak{C}^{5}_{02}(\alpha)$&$:$& $e_1 e_1 = e_2$ & $e_1 e_2=e_3$& $e_1e_3= \alpha e_4$ & \multicolumn{2}{l}{$e_2e_2= (\alpha +1)e_4$} \\
$\mathfrak{C}^{5}_{03}$& $: $& $e_1 e_1 = e_2$& $e_1e_3=e_4$& $e_2e_2=e_4$ \\
$\mathfrak{C}^{5}_{04}$& $: $& $e_1 e_1 = e_2$ & $e_2e_2=e_4$& $e_3e_3=e_4$ \\
$\mathfrak{C}^5_{05}$ & $: $ & $e_1 e_2 = e_3$ & $e_3 e_3=e_4$ \\
$\mathfrak{C}^5_{06}$ & $: $ & $e_1 e_1 = e_4$ & $e_1 e_2=e_3$ & $e_2e_2=e_4$& $e_3e_3=e_4$\\
$\mathfrak{C}^5_{07}$ & $: $ & $e_1 e_1 = e_4$ & $e_1 e_2=e_3$ & $e_3e_3=e_4$\\
$\mathfrak{C}^5_{08}$ & $: $ & $e_1e_1=e_2$ & $e_1e_3=e_4$ & $e_2e_2=e_5 $\\
$\mathfrak{C}^5_{09}$ & $: $& $e_1e_1=e_2$ & $e_1e_3=e_4$ & $e_2e_2=e_5$ & $e_3e_3=e_5 $\\
$\mathfrak{C}^5_{10}$ & $ :$ & $e_1e_1=e_2$ & $e_1e_2=e_4$ & $e_2e_2=e_5 $\\
$\mathfrak{C}^5_{11}$ & $ : $ & $e_1e_1=e_2$ & $e_1e_2=e_4$ & $e_1e_3=e_5$ & $e_2e_2=e_5 $\\
$\mathfrak{C}_{12}^5(\alpha)$&$:$&
$e_1e_1=e_2$ & $e_1e_2=e_3$ &$e_1e_3=(\alpha+1) e_5$ &$e_2e_2= \alpha e_5$ &$e_2e_4= e_5$\\
$\mathfrak{C}_{13}^5(\alpha, \beta)$&$:$&
$e_1e_1=e_2$ & $e_1e_2=e_3$ &$e_1e_3=(\alpha+1) e_5$ &$e_2e_2= \alpha e_5$ &$e_2e_4= \beta e_5$ &$e_4e_4= e_5$\\
$\mathfrak{C}^5_{14}$ & $ : $ & $e_1e_1=e_2$ & $e_2e_2=e_5$ & $e_3e_3=e_4 $\\
$\mathfrak{C}^5_{15}$ & $ : $ & $e_1e_1=e_2$ & $e_1e_3=e_5 $ & $ e_2e_2=e_5$ & $e_3e_3=e_4 $\\
$\mathfrak{C}_{16}^5(\alpha)$& $ : $ & $e_1e_1=e_2$ & $e_1e_2=e_4$ &$e_1e_4= (\alpha+1) e_5$& $e_2e_2=\alpha e_5$ &$e_3e_3=e_4$\\
$\mathfrak{C}^5_{17}$ & $ : $ & $e_1e_1=e_2$ & $e_1e_2=e_4$ & $e_1e_3=e_5$ & $e_2e_2=e_5$ & $e_3e_3=e_4 $\\
$\mathfrak{C}^5_{18}$ & $ : $ & $e_1e_1=e_2$ & $e_2e_2=e_5$ & $e_2e_3=e_4 $\\
$\mathfrak{C}^5_{19}$ & $ : $ & $e_1e_1=e_2$ & $e_2e_2=e_5$ & $e_2e_3=e_4$ & $e_3e_3=e_5 $\\
$\mathfrak{C}^5_{20}$ & $ : $ & $e_1e_1=e_2$ & $e_1e_3=e_5$ & $e_2e_2=e_5$ & $e_2e_3=e_4 $\\
$\mathfrak{C}^5_{21}$ & $ : $ & $e_1e_1=e_2$ & $e_1e_3=e_5$ & $e_2e_2=e_5$ & $e_2e_3=e_4$ & $e_3e_3=e_5 $\\
$\mathfrak{C}^5_{22}$ & $ : $ & $e_1e_1=e_2$ & $e_1e_2=e_5$ & $e_2e_2=e_5$ & $e_2e_3=e_4 $\\
$\mathfrak{C}^5_{23}$ & $ : $ & $e_1e_1=e_2$ & $e_1e_2=e_5$ & $e_2e_2=e_5$ & $e_2e_3=e_4$ & $e_3e_3=e_5 $\\
$\mathfrak{C}^5_{24}(\alpha)$ & $ : $ & $e_1e_1=e_2$ & $e_1e_2=e_5$ & $e_1e_3=e_5$ & $e_2e_2=e_5$ & $e_2e_3=e_4$ & $e_3e_3=\alpha e_5 $\\
$\mathfrak{C}_{25}^5$&$:$& $e_1e_1=e_2$ & $e_1e_2=e_3$ &$e_1e_3=e_4$ & $e_2e_2=e_5$\\
$\mathfrak{C}^5_{26}(\alpha,\beta)$ & $: $ & $e_1e_1=\alpha e_5$ & $e_1e_2=e_3$ & $e_2e_2=\beta e_5$ & $e_1e_3=e_4+e_5$ & $e_2e_3=e_4$ & $e_3e_3=e_5 $\\
$\mathfrak{C}^5_{27}(\alpha)$ & $ : $ & $e_1e_1=\alpha e_5$ & $e_1e_2=e_3$ & $e_2e_2=e_5$ & $e_1e_3=e_4$ & $e_2e_3=e_4$ & $e_3e_3=e_5 $\\
$\mathfrak{C}^5_{28}$ & $ : $ & $e_1e_1=e_5$ & $e_1e_2=e_3$ & $e_1e_3=e_4$ & $e_2e_3=e_4$ & $e_3e_3=e_5 $\\
$\mathfrak{C}^5_{29}$ & $ : $ & $e_1e_2=e_3$ & $e_1e_3=e_4$ & $e_2e_3=e_4$ & $e_3e_3=e_5 $\\
$\mathfrak{C}^5_{30}(\alpha)$ & $ : $ & $e_1e_1=e_4+\alpha e_5$ & $e_1e_2=e_3$ & $e_2e_2=e_5$ & $e_2e_3=e_4$ & $e_3e_3=e_5 $\\
$\mathfrak{C}^5_{31}$ & $ : $ & $e_1e_1=e_4+e_5$ & $e_1e_2=e_3$ & $e_2e_3=e_4$ & $e_3e_3=e_5 $\\
$\mathfrak{C}^5_{32}$ & $ : $ & $e_1e_1=e_4$ & $e_1e_2=e_3$ & $e_2e_3=e_4$ & $e_3e_3=e_5 $\\
$\mathfrak{C}^5_{33}$ & $ : $ & $e_1e_1=e_5$ & $e_1e_2=e_3$ & $e_2e_2=e_5$ & $e_2e_3=e_4$ & $e_3e_3=e_5 $\\
$\mathfrak{C}^5_{34}$ & $ : $ & $e_1e_1=e_5$ & $e_1e_2=e_3$ & $e_2e_3=e_4$ & $e_3e_3=e_5 $\\
$\mathfrak{C}^5_{35}$ & $ :$ & $e_1e_2=e_3$ & $e_2e_2=e_5$ & $e_2e_3=e_4$ & $e_3e_3=e_5 $\\
$\mathfrak{C}^5_{36}$ & $ :$ & $e_1e_2=e_3$ & $e_2e_3=e_4$ & $e_3e_3=e_5 $\\
$\mathfrak{C}^5_{37}$ & $ :$ & $e_1e_1=e_4+e_5 $ & $e_1e_2=e_3$ & $e_2e_2=e_4$ & $e_3e_3=e_5 $\\
$\mathfrak{C}^5_{38}$ & $ :$ & $e_1e_1=e_4$ & $e_1e_2=e_3$ & $e_2e_2=e_4$ & $e_3e_3=e_5 $\\
$\mathfrak{C}^5_{39}$ & $ :$ & $e_1e_1=e_5$ & $e_1e_2=e_3$ & $e_2e_2=e_4$ & $e_3e_3=e_5 $\\
$\mathfrak{C}^5_{40}$ & $ : $ & $e_1e_2=e_3$ & $e_2e_2=e_4$ & $e_3e_3=e_5 $ \\
$\mathfrak{C}_{41}^5$& $ : $ & $e_1e_1=e_2$ & $e_2e_2=e_5$ &$e_3e_4=e_5$\\
$\mathfrak{C}_{42}^5$& $ : $ & $e_1e_1=e_2$ & $e_1e_3=e_5$ &$e_2e_2=e_5$ &$e_4e_4= e_5$\\
$\mathfrak{C}_{43}^5$& $ : $ & $e_1e_1=e_5$ & $e_1e_2=e_3$ &$e_2e_4=e_5$ & $e_3e_3=e_5$\\
$\mathfrak{C}_{44}^5$& $ : $ & $e_1e_1=e_5$ & $e_1e_2=e_3$ & $e_2e_2=e_5$ & $e_3e_3=e_5$& $e_4e_4=e_5$\\
$\mathfrak{C}_{45}^5$& $ : $ & $e_1e_1=e_5$ & $e_1e_2=e_3$ & $e_3e_3=e_5$ & $e_4e_4=e_5$\\
$\mathfrak{C}_{46}^5$& $ : $ & $e_1e_2=e_3$ & $e_1e_4=e_5$ & $e_2e_4=e_5$ & $e_3e_3=e_5$\\
$\mathfrak{C}_{47}^5$& $ : $ & $e_1e_2=e_3$ & $e_2e_4=e_5$ & $e_3e_3=e_5$ \\
$\mathfrak{C}_{48}^5$& $ : $ & $e_1e_2=e_3$ & $e_3e_3=e_5$ & $e_4e_4=e_5$\\
$\mathfrak{C}_{49}^5(\alpha)$& $ : $ & $e_1e_1=e_3$ & $e_1e_2=e_5$ &$e_2e_2=e_4$ & $e_3e_3=\alpha e_5$ & $e_3e_4=e_5$& $e_4e_4=e_5$ \\
$\mathfrak{C}_{50}^5$& $ : $ & $e_1e_1=e_3$ & $e_1e_2=e_5$ &$e_2e_2=e_4$ & $e_3e_3=e_5$ & $e_4e_4=e_5$\\
$\mathfrak{C}_{51}^5$& $ : $ & $e_1e_1=e_3$ & $e_1e_2=e_5$ &$e_2e_2=e_4$ & $e_3e_4=e_5$\\
$\mathfrak{C}_{52}^5(\alpha)$& $ : $ & $e_1e_1=e_3$ & $e_1e_3=\alpha e_5$ &$e_2e_2=e_4$ &$e_2e_3=e_5$ &$e_3e_3=e_5$
&$e_3e_4=e_5$ &$e_4e_4=e_5$\\
$\mathfrak{C}_{53}^5$& $ : $ & $e_1e_1=e_3$ & $e_1e_3=e_5$ &$e_2e_2=e_4$ & $e_2e_3=e_5$ & $e_4e_4=e_5$\\
$\mathfrak{C}_{54}^5$& $ : $ & $e_1e_1=e_3$ & $e_1e_3=e_5$ &$e_2e_2=e_4$ & $e_4e_4=e_5$\\
$\mathfrak{C}_{55}^5$& $ : $ & $e_1e_1=e_3$ & $e_2e_2=e_4$ &$e_2e_3=e_5$ & $e_4e_4=e_5$\\
$\mathfrak{C}_{56}^5(\alpha)$& $ : $ & $e_1e_1=e_3$ &$e_2e_2=e_4$ & $e_3e_3=\alpha e_5$ & $e_3e_4=e_5$ & $e_4e_4=e_5$\\
$\mathfrak{C}_{57}^5$& $ : $ & $e_1e_1=e_3$ &$e_2e_2=e_4$ & $e_3e_3=e_5$ & $e_4e_4=e_5$\\
$\mathfrak{C}_{58}^5$& $ : $ & $e_1e_1=e_3$ &$e_2e_2=e_4$ & $e_3e_4=e_5$\\
$\mathfrak{C}_{59}^5$& $ : $ & $e_1e_1=e_3$ & $e_1e_2=e_4$ &$e_1e_3=e_5$ &$e_2e_2=e_5$ &$e_4e_4=e_5$\\
$\mathfrak{C}_{60}^5$& $ : $ & $e_1e_1=e_3$ & $e_1e_2=e_4$ &$e_1e_3=e_5$ &$e_2e_3=e_5$ &$e_4e_4=e_5$\\
$\mathfrak{C}_{61}^5$& $ : $ & $e_1e_1=e_3$ & $e_1e_2=e_4$ &$e_1e_3=e_5$ &$e_4e_4=e_5$\\
$\mathfrak{C}_{62}^5(\alpha)$& $ : $ & $e_1e_1=e_3$ & $e_1e_2=e_4$&$e_1e_4=e_5$ &$e_2e_2=\alpha e_5$ &$e_3e_3=e_5$\\
$\mathfrak{C}_{63}^5$& $ : $ & $e_1e_1=e_3$ & $e_1e_2=e_4$ &$e_2e_2=e_5$ &$e_3e_4=e_5$\\
$\mathfrak{C}_{64}^5$& $ : $ & $e_1e_1=e_3$ & $e_1e_2=e_4$ &$e_2e_3=e_5$ &$e_3e_3=e_5$ &$e_4e_4=e_5$\\
$\mathfrak{C}_{65}^5$& $ : $ & $e_1e_1=e_3$ & $e_1e_2=e_4$ &$e_2e_3=e_5$ &$e_4e_4=e_5$\\
$\mathfrak{C}_{66}^5$& $ : $ & $e_1e_1=e_3$ & $e_1e_2=e_4$ &$e_2e_4=e_5$ &$e_3e_3=e_5$\\
$\mathfrak{C}_{67}^5$& $ : $ & $e_1e_1=e_3$ & $e_1e_2=e_4$ &$e_3e_3=e_5$ &$e_4e_4=e_5$\\
$\mathfrak{C}_{68}^5$& $ : $ & $e_1e_1=e_3$ & $e_1e_2=e_4$ &$e_3e_4=e_5$\\
${\mathfrak{C}}_{69}^{5}(\alpha)$ & $ : $ & $e_1 e_1 = e_4$&$e_1e_2=\alpha e_5$ &$e_1e_3=e_5$ &$e_2e_2=e_5$
&$ e_2 e_3=e_4$ &$e_4e_4=e_5$\\
${\mathfrak{C}}_{70}^{5}$ & $ : $ & $e_1 e_1 = e_4$&$e_1e_2=e_5$ &$e_1e_3=e_5$ &$ e_2 e_3=e_4$ &$e_4e_4=e_5$ \\
${\mathfrak{C}}_{71}^{5}$& $ : $ & $e_1 e_1 = e_4$&$e_1e_2=e_5$&$ e_2 e_3=e_4$ &$e_4e_4=e_5$ \\
${\mathfrak{C}}_{72}^{5}$ & $ : $ & $e_1 e_1 = e_4$&$e_2e_2=e_5$&$ e_2 e_3=e_4+e_5$ &$e_4e_4=e_5$ \\
${\mathfrak{C}}_{73}^{5}$ & $ : $ & $e_1 e_1 = e_4$&$e_2e_2=e_5$&$ e_2 e_3=e_4$ &$e_4e_4=e_5$ \\
${\mathfrak{C}}_{74}^{5}$ & $ : $ & $e_1 e_1 = e_4$&$ e_2 e_3=e_4+e_5$ &$e_4e_4=e_5$ \\
${\mathfrak{C}}_{75}^{5}$ & $ : $ & $e_1 e_1 = e_4$&$ e_2 e_3=e_4$ &$e_4e_4=e_5$ \\
$\mathfrak{C}_{76}^5$& $ : $ & $e_1e_1=e_2$ & $e_1e_2=e_4$ &$e_1e_4= e_5$ &$e_2e_2= - 2 e_5$ &$e_3e_3=e_4+3e_5$\\
$\mathfrak{C}_{77}^5$& $ : $ & $e_1e_1=e_2$ & $e_1e_2=e_4$ &$e_1e_4= e_5$ &$e_2e_3= e_5$ &$e_3e_3=e_4$\\
$\mathfrak{C}_{78 }^5$& $ : $ & $e_1 e_1 = e_2$ & $e_1 e_2=e_3$ & $e_1e_3=e_4$ &$e_1e_4=e_5$&$e_2e_2=e_4+e_5$ &$e_2e_3=e_5$\\
$\mathfrak{C}_{79}^5 $& $ : $ &
$e_1 e_1 = e_2$&
$e_1 e_2=e_3$&
$e_1e_3= e_4$&
$e_1e_4= e_5$&
$e_2e_2= 2 e_4+e_5$&
$e_2e_3= 4 e_5$ \\
$\mathfrak{C}_{80}^5(\alpha)$& $ : $ &
$e_1 e_1 = e_2$&
$e_1 e_2=e_3$&
$e_1e_3= \alpha e_4$&
$e_1 e_4=e_5$&
$e_2e_2= (\alpha +1)e_4$&
\multicolumn{2}{l}{$e_2 e_3=(\alpha+3)e_5$}\\
$\mathfrak{C}_{81}^5 $& $ : $ &
$e_1 e_1 = e_2$&
$e_1 e_2=e_3$&
$e_1e_3= e_4$&
$e_2e_2= 2 e_4$&
$e_2e_4= e_5$ \\
\end{longtable}}
All algebras from the present list are pairwise non-isomorphic, except for the following isomorphisms:
\begin{longtable}{lcccccr}
$\mathfrak{C}_{13}^5(\alpha, \beta) \cong
\mathfrak{C}_{13}^5(\alpha,-\beta)$ & \ & $\mathfrak{C}^5_{26}(\alpha,\beta) \cong \mathfrak{C}^5_{26}(\beta, \alpha)$ & \ & $\mathfrak{C}^5_{27}(\alpha) \cong \mathfrak{C}^5_{27}(\frac 1 {\alpha})$ & \ &
${\mathfrak{C}}_{69}^{5}(\alpha) \cong {\mathfrak{C}}_{69}^{5}(\sqrt[3]{1}\alpha)$
\end{longtable}
\end{theorem}
\section{The geometric classification of complex $5$-dimensional nilpotent commutative $\mathfrak{CD}$-algebras}
\subsection{Degenerations of algebras}
Given an $n$-dimensional vector space ${\bf V}$, the set ${\rm Hom}({\bf V} \otimes {\bf V},{\bf V}) \cong {\bf V}^* \otimes {\bf V}^* \otimes {\bf V}$
is a vector space of dimension $n^3$. This space inherits the structure of the affine variety $\mathbb{C}^{n^3}.$
Indeed, let us fix a basis $e_1,\dots,e_n$ of ${\bf V}$. Then any $\mu\in {\rm Hom}({\bf V} \otimes {\bf V},{\bf V})$ is determined by $n^3$ structure constants $c_{i,j}^k\in\mathbb{C}$ such that
$\mu(e_i\otimes e_j)=\sum_{k=1}^nc_{i,j}^ke_k$. A subset of ${\rm Hom}({\bf V} \otimes {\bf V},{\bf V})$ is {\it Zariski-closed} if it can be defined by a set of polynomial equations in the variables $c_{i,j}^k$ ($1\le i,j,k\le n$).
The general linear group ${\rm GL}({\bf V})$ acts by conjugation on the variety ${\rm Hom}({\bf V} \otimes {\bf V},{\bf V})$ of all algebra structures on ${\bf V}$:
$$ (g * \mu )(x\otimes y) = g\mu(g^{-1}x\otimes g^{-1}y),$$
for $x,y\in {\bf V}$, $\mu\in {\rm Hom}({\bf V} \otimes {\bf V},{\bf V})$ and $g\in {\rm GL}({\bf V})$. Clearly, the ${\rm GL}({\bf V})$-orbits correspond to the isomorphism classes of algebras structures on ${\bf V}$. Let $T$ be a set of polynomial identities which is invariant under isomorphism. Then the subset $\mathbb{L}(T)\subset {\rm Hom}({\bf V} \otimes {\bf V},{\bf V})$ of the algebra structures on ${\bf V}$ which satisfy the identities in $T$ is ${\rm GL}({\bf V})$-invariant and Zariski-closed. It follows that $\mathbb{L}(T)$ decomposes into ${\rm GL}({\bf V})$-orbits. The ${\rm GL}({\bf V})$-orbit of $\mu\in\mathbb{L}(T)$ is denoted by $O(\mu)$ and its Zariski closure by $\overline{O(\mu)}$.
Let ${\bf A}$ and ${\bf B}$ be two $n$-dimensional algebras satisfying the identities from $T$ and $\mu,\lambda \in \mathbb{L}(T)$ represent ${\bf A}$ and ${\bf B}$ respectively.
We say that ${\bf A}$ {\it degenerates} to ${\bf B}$ and write ${\bf A}\to {\bf B}$ if $\lambda\in\overline{O(\mu)}$.
Note that in this case we have $\overline{O(\lambda)}\subset\overline{O(\mu)}$. Hence, the definition of a degeneration does not depend on the choice of $\mu$ and $\lambda$. If ${\bf A}\to {\bf B}$ and ${\bf A}\not\cong {\bf B}$, then ${\bf A}\to {\bf B}$ is called a {\it proper degeneration}. We write ${\bf A}\not\to {\bf B}$ if $\lambda\not\in\overline{O(\mu)}$ and call this a {\it non-degeneration}. Observe that the dimension of the subvariety $\overline{O(\mu)}$ equals $n^2-\dim\der({\bf A})$. Thus if ${\bf A}\to {\bf B}$ is a proper degeneration, then we must have $\dim\der({\bf A})>\dim\der({\bf B})$.
Let ${\bf A}$ be represented by $\mu\in\mathbb{L}(T)$. Then ${\bf A}$ is {\it rigid} in $\mathbb{L}(T)$ if $O(\mu)$ is an open subset of $\mathbb{L}(T)$.
Recall that a subset of a variety is called {\it irreducible} if it cannot be represented as a union of two non-trivial closed subsets. A maximal irreducible closed subset of a variety is called an {\it irreducible component}.
It is well known that any affine variety can be represented as a finite union of its irreducible components in a unique way.
The algebra ${\bf A}$ is rigid in $\mathbb{L}(T)$ if and only if $\overline{O(\mu)}$ is an irreducible component of $\mathbb{L}(T)$.
In the present work we use the methods applied to Lie algebras in \cite{GRH,GRH2}.
To prove
degenerations, we will construct families of matrices parametrized by $t$. Namely, let ${\bf A}$ and ${\bf B}$ be two algebras represented by the structures $\mu$ and $\lambda$ from $\mathbb{L}(T)$, respectively. Let $e_1,\dots, e_n$ be a basis of ${\bf V}$ and $c_{i,j}^k$ ($1\le i,j,k\le n$) be the structure constants of $\lambda$ in this basis. If there exist $a_i^j(t)\in\mathbb{C}$ ($1\le i,j\le n$, $t\in\mathbb{C}^*$) such that the elements $E_i^t=\sum_{j=1}^na_i^j(t)e_j$ ($1\le i\le n$) form a basis of ${\bf V}$ for any $t\in\mathbb{C}^*$, and the structure constants $c_{i,j}^k(t)$ of $\mu$ in the basis $E_1^t,\dots, E_n^t$ satisfy $\lim\limits_{t\to 0}c_{i,j}^k(t)=c_{i,j}^k$, then ${\bf A}\to {\bf B}$. In this case $E_1^t,\dots, E_n^t$ is called a {\it parametric basis} for ${\bf A}\to {\bf B}$.
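To make the limit computation concrete, the following pure-Python sketch (exact rational arithmetic; an illustration added here, not part of the original proof) verifies one entry of the list of degenerations given below, ${\mathfrak C}^5_{26}(0,0)\to {\mathfrak C}^5_{05}$, with the parametric basis $E_1^t=te_1$, $E_2^t=t^{-1}e_2$, $E_3^t=e_3$, $E_4^t=e_5$, $E_5^t=t^{-2}e_4$: the structure constants $c_{i,j}^k(t)$ in this basis differ from those of ${\mathfrak C}^5_{05}$ only by terms bounded by $t$.

```python
from fractions import Fraction as F

n = 5

# Structure constants of C^5_26(0,0) (commutative, so both orientations stored):
#   e1 e2 = e3,  e1 e3 = e4 + e5,  e2 e3 = e4,  e3 e3 = e5
mu = {}
def setprod(i, j, v):
    mu[(i, j)] = v
    mu[(j, i)] = v
setprod(1, 2, [0, 0, 1, 0, 0])
setprod(1, 3, [0, 0, 0, 1, 1])
setprod(2, 3, [0, 0, 0, 1, 0])
setprod(3, 3, [0, 0, 0, 0, 1])

def mult(u, v):
    """Bilinear extension of mu to arbitrary vectors written in the e-basis."""
    out = [F(0)] * n
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if u[i - 1] and v[j - 1] and (i, j) in mu:
                for k in range(n):
                    out[k] += u[i - 1] * v[j - 1] * mu[(i, j)][k]
    return out

def structure_constants(t):
    """c_{i,j}^k(t) of mu in the parametric basis:
       E1 = t e1, E2 = t^{-1} e2, E3 = e3, E4 = e5, E5 = t^{-2} e4."""
    E = [[t, 0, 0, 0, 0], [0, 1 / t, 0, 0, 0], [0, 0, 1, 0, 0],
         [0, 0, 0, 0, 1], [0, 0, 0, 1 / t**2, 0]]
    def back(w):  # rewrite an e-basis vector in the E-basis
        return [w[0] / t, w[1] * t, w[2], w[4], w[3] * t**2]
    return {(i, j): back(mult(E[i - 1], E[j - 1]))
            for i in range(1, n + 1) for j in range(i, n + 1)}

# Target C^5_05:  e1 e2 = e3,  e3 e3 = e4
target = {(1, 2): [0, 0, 1, 0, 0], (3, 3): [0, 0, 0, 1, 0]}

t = F(1, 1000)
for (i, j), c in structure_constants(t).items():
    want = target.get((i, j), [0] * n)
    # every structure constant equals the target up to an error bounded by t
    assert all(abs(a - b) <= t for a, b in zip(c, want)), (i, j)
```

Exact arithmetic over $\mathbb{Q}$ via `Fraction` avoids any floating-point ambiguity in reading off the limit $t\to 0$.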
To prove a non-degeneration ${\bf A}\not\to {\bf B}$ we will use the following lemma (see \cite{GRH}).
\begin{lemma}\label{main}
Let $\mathcal{B}$ be a Borel subgroup of ${\rm GL}({\bf V})$ and $\mathcal{R}\subset \mathbb{L}(T)$ be a $\mathcal{B}$-stable closed subset.
If ${\bf A} \to {\bf B}$ and ${\bf A}$ can be represented by $\mu\in\mathcal{R}$ then there is $\lambda\in \mathcal{R}$ that represents ${\bf B}$.
\end{lemma}
In particular, it follows from Lemma \ref{main} that ${\bf A}\not\to {\bf B}$, whenever $\dim({\bf A}^2)<\dim({\bf B}^2)$.
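Since $\dim({\bf A}^2)$ is just the rank of the span of all products of basis vectors, this necessary condition can be checked mechanically. The following sketch (added for illustration, with exact rational arithmetic) computes $\dim({\bf A}^2)$ for ${\mathfrak C}^5_{01}$ and for a representative member ${\mathfrak C}^5_{13}(1,1)$ of the family from Theorem \ref{teor}; since $\dim({\mathfrak C}^5_{01})^2=2<3=\dim({\mathfrak C}^5_{13}(\alpha,\beta))^2$, the degeneration ${\mathfrak C}^5_{13}(-1,0)\to{\mathfrak C}^5_{01}$ listed below cannot be reversed.

```python
from fractions import Fraction as F

def rank(rows):
    """Rank of a list of rational row vectors, by Gaussian elimination."""
    rows = [[F(x) for x in r] for r in rows]
    rk = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((r for r in range(rk, len(rows)) if rows[r][col]), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for r in range(len(rows)):
            if r != rk and rows[r][col]:
                f = rows[r][col] / rows[rk][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[rk])]
        rk += 1
    return rk

def dim_square(products):
    """dim(A^2): dimension of the span of all products e_i e_j."""
    return rank(list(products.values()))

# C^5_01:  e1 e1 = e2,  e2 e2 = e3   (products stored as coordinate vectors)
C01 = {(1, 1): [0, 1, 0, 0, 0], (2, 2): [0, 0, 1, 0, 0]}
# C^5_13(1,1): e1e1=e2, e1e2=e3, e1e3=2e5, e2e2=e5, e2e4=e5, e4e4=e5
# (the span {e2, e3, e5} does not depend on the parameters alpha, beta)
C13 = {(1, 1): [0, 1, 0, 0, 0], (1, 2): [0, 0, 1, 0, 0],
       (1, 3): [0, 0, 0, 0, 2], (2, 2): [0, 0, 0, 0, 1],
       (2, 4): [0, 0, 0, 0, 1], (4, 4): [0, 0, 0, 0, 1]}

print(dim_square(C01), dim_square(C13))  # -> 2 3
```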
When the number of orbits under the action of ${\rm GL}({\bf V})$ on $\mathbb{L}(T)$ is finite, the graph of primary degenerations gives the whole picture. In particular, the description of rigid algebras and irreducible components can be easily obtained. Since the variety of $5$-dimensional nilpotent commutative $\mathfrak{CD}$-algebras contains infinitely many non-isomorphic algebras, we have to carry out some additional work. Let ${\bf A}(*):=\{{\bf A}(\alpha)\}_{\alpha\in I}$ be a family of algebras and ${\bf B}$ be another algebra. Suppose that, for $\alpha\in I$, ${\bf A}(\alpha)$ is represented by a structure $\mu(\alpha)\in\mathbb{L}(T)$ and ${\bf B}$ is represented by a structure $\lambda\in\mathbb{L}(T)$. Then by ${\bf A}(*)\to {\bf B}$ we mean $\lambda\in\overline{\cup\{O(\mu(\alpha))\}_{\alpha\in I}}$, and by ${\bf A}(*)\not\to {\bf B}$ we mean $\lambda\not\in\overline{\cup\{O(\mu(\alpha))\}_{\alpha\in I}}$.
Let ${\bf A}(*)$, ${\bf B}$, $\mu(\alpha)$ ($\alpha\in I$) and $\lambda$ be as above. To prove ${\bf A}(*)\to {\bf B}$, it is enough to construct a family of pairs $(f(t), g(t))$ parametrized by $t\in\mathbb{C}^*$, where $f(t)\in I$ and $g(t)=\left(a_i^j(t)\right)_{i,j}\in {\rm GL}({\bf V})$. Namely, let $e_1,\dots, e_n$ be a basis of ${\bf V}$ and $c_{i,j}^k$ ($1\le i,j,k\le n$) be the structure constants of $\lambda$ in this basis. If we construct $a_i^j:\mathbb{C}^*\to \mathbb{C}$ ($1\le i,j\le n$) and $f: \mathbb{C}^* \to I$ such that $E_i^t=\sum_{j=1}^na_i^j(t)e_j$ ($1\le i\le n$) form a basis of ${\bf V}$ for any $t\in\mathbb{C}^*$, and the structure constants $c_{i,j}^k(t)$ of $\mu\big(f(t)\big)$ in the basis $E_1^t,\dots, E_n^t$ satisfy $\lim\limits_{t\to 0}c_{i,j}^k(t)=c_{i,j}^k$, then ${\bf A}(*)\to {\bf B}$. In this case, $E_1^t,\dots, E_n^t$ and $f(t)$ are called a {\it parametric basis} and a {\it parametric index} for ${\bf A}(*)\to {\bf B}$, respectively. In the construction of degenerations of this sort, we will write $\mu\big(f(t)\big)\to \lambda$, emphasizing that we are proving the assertion $\mu(*)\to\lambda$ using the parametric index $f(t)$.
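The same kind of mechanical check works for degenerations from a family. The sketch below (again pure Python with exact rational arithmetic, added for illustration) verifies ${\mathfrak C}^5_{26}(t^{-2},t^{-2})\to {\mathfrak C}^5_{06}$ from the list of degenerations below: the parametric index $f(t)=(t^{-2},t^{-2})$ is substituted into the structure constants before the basis change.

```python
from fractions import Fraction as F

n = 5

def mu26(alpha, beta):
    """Structure constants of C^5_26(alpha, beta):
       e1e1 = a e5, e1e2 = e3, e2e2 = b e5, e1e3 = e4+e5, e2e3 = e4, e3e3 = e5."""
    mu = {}
    def setprod(i, j, v):
        mu[(i, j)] = v
        mu[(j, i)] = v
    setprod(1, 1, [0, 0, 0, 0, alpha])
    setprod(1, 2, [0, 0, 1, 0, 0])
    setprod(2, 2, [0, 0, 0, 0, beta])
    setprod(1, 3, [0, 0, 0, 1, 1])
    setprod(2, 3, [0, 0, 0, 1, 0])
    setprod(3, 3, [0, 0, 0, 0, 1])
    return mu

def mult(mu, u, v):
    """Bilinear extension of mu to vectors written in the e-basis."""
    out = [F(0)] * n
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if u[i - 1] and v[j - 1] and (i, j) in mu:
                for k in range(n):
                    out[k] += u[i - 1] * v[j - 1] * mu[(i, j)][k]
    return out

def structure_constants(t):
    mu = mu26(1 / t**2, 1 / t**2)   # parametric index f(t) = (t^-2, t^-2)
    # parametric basis from the table:
    # E1 = t^-1 e1, E2 = t^-1 e2, E3 = t^-2 e3, E4 = t^-4 e5, E5 = t^-4 e4
    E = [[1 / t, 0, 0, 0, 0], [0, 1 / t, 0, 0, 0], [0, 0, 1 / t**2, 0, 0],
         [0, 0, 0, 0, 1 / t**4], [0, 0, 0, 1 / t**4, 0]]
    back = lambda w: [w[0] * t, w[1] * t, w[2] * t**2, w[4] * t**4, w[3] * t**4]
    return {(i, j): back(mult(mu, E[i - 1], E[j - 1]))
            for i in range(1, n + 1) for j in range(i, n + 1)}

# Target C^5_06:  e1e1 = e4, e1e2 = e3, e2e2 = e4, e3e3 = e4
target = {(1, 1): [0, 0, 0, 1, 0], (1, 2): [0, 0, 1, 0, 0],
          (2, 2): [0, 0, 0, 1, 0], (3, 3): [0, 0, 0, 1, 0]}

t = F(1, 1000)
for (i, j), c in structure_constants(t).items():
    want = target.get((i, j), [0] * n)
    assert all(abs(a - b) <= t for a, b in zip(c, want)), (i, j)
```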
\subsection{The geometric classification of $5$-dimensional nilpotent
commutative $\mathfrak{CD}$-algebras}
The geometric classification of $5$-dimensional nilpotent
commutative $\mathfrak{CD}$-algebras is based on some previous works:
namely, all irreducible components of $5$-dimensional nilpotent associative commutative algebras are given in \cite{maz79} and all degenerations between these algebras are
given in \cite{klp19};
all irreducible components of $5$-dimensional nilpotent Jordan algebras were described in \cite{kama}.
In the proof of the present theorem we give all necessary arguments for the description of all irreducible components of the variety of
$5$-dimensional nilpotent commutative $\mathfrak{CD}$-algebras.
\begin{theorem}\label{main-geo}
The variety of complex $5$-dimensional nilpotent commutative $\mathfrak{CD}$-algebras is $24$-dimensional and it has $10$ irreducible components.
In particular, there are $6$ rigid algebras:
the non-Jordan algebras ${\mathfrak C}^5_{69}, {\mathfrak C}^5_{72}, {\mathfrak C}^5_{76}, {\mathfrak C}^5_{77}, {\mathfrak C}^5_{81}$
and the Jordan algebra ${\mathcal J}_{21}.$
\end{theorem}
\begin{proof}[{\bf Proof}]
Thanks to \cite{kama} the algebras
$\epsilon_1,$ ${\mathcal J}_{21},$ ${\mathcal J}_{22},$
${\mathcal J}_{27}(\varepsilon,\phi)$ and ${\mathcal J}_{40}$
determine the irreducible components in the variety of complex
$5$-dimensional nilpotent Jordan algebras
(which is a proper subvariety of the variety of nilpotent commutative $\mathfrak{CD}$-algebras),
where
\begin{longtable}{lllllllllll}
$\epsilon_1$ &$ : $& $e_1e_1= e_2$ & $e_2e_2=e_4$& $e_1e_3=e_4$& $e_1e_2=e_3$ & $e_1e_4=e_5$ & $e_2e_3=e_5$\\
${\mathcal J}_{21}$ &$ : $ & $e_1e_1=e_5$ & $e_1e_2=e_4$ & $e_2e_2=e_5$ & $e_3e_3=e_4$ & $e_3e_4=e_5$ \\
${\mathcal J}_{22}$ &$ : $ & $e_1e_1=e_2$ & $e_1e_2=e_4$ & $e_1e_4=e_5$ & $e_2e_2=e_5$ & $e_3e_3=e_4$ \\
${\mathcal J}_{27}(\varepsilon,\phi)$ &$ : $ & $e_1e_1=e_3$ & $e_1e_3=\phi e_5$& $e_1e_4=e_5$ & $e_2e_2=e_4$ & $e_2e_3=e_5$ & $e_2e_4=\varepsilon e_5$ \\
${\mathcal J}_{40}$ &$ : $ & $e_1e_1=e_5$ & $e_1e_2=e_3$ & $e_1e_3=e_4$ & $e_2e_2=e_4$ & $e_2e_3=e_5$
\end{longtable}
Let us give the list of useful degenerations:
{\tiny
\begin{longtable}{l ll}
\hline
${\mathfrak C}^5_{78} \to \epsilon_1$ &
$E_1^t= t^2 e_1 $ & $E_2^t=t^4 e_2 $ \\
$E_3^t= -t^8 e_5 $ & $E_4^t= i t^5 e_4$ & $E_5^t= t^4 e_3$\\
\hline
${\mathfrak C}^5_{16}(t^{-2}) \to {\mathcal J}_{22}$ &
\multicolumn{2}{l}{ $E_1^t= t^2( 1+ t^2) e_1 - \frac{t^6 (1 + t^2)^2}{3 + 2 t^2} e_2 - t^4 (1 + t^2) e_3 - \frac{t^{10} (1 + t^2)^2}{2 (3 + 2 t^2)^2} e_4 $}\\
&
\multicolumn{2}{l}{ $E_2^t= t^4 (1 + t^2)^2 e_2 + t^4 (1 + t^2) e_3 + \frac{t^8 (1 + t^2)^2}{3 + 2 t^2} e_4 $} \\
$E_3^t= t^3 (1 + t^2) e_3 $ &
$E_4^t= t^6 (1 + t^2)^2 e_4 $ &
$E_5^t= t^6 (1 + t^2)^4 e_5$\\
\hline
${\mathfrak C}^5_{52}\left(\frac{\phi}{\sqrt{t}}\right) \to {\mathcal J}_{27}(\varepsilon,\phi)$ &
$E_1^t= t^{\frac{5}{2}} e_1 - \varepsilon t^4 e_3 + t^3 (1 + \varepsilon t) e_4 $ \\&
$E_2^t= t^2 e_2 + \varepsilon t^3 e_4 $ &
$E_3^t= t^5 e_3 - t^5 e_4 + t^6 (1 - 2 \varepsilon \phi - \varepsilon^2 t) e_5 $ \\&
$E_4^t= t^4 e_4 + \varepsilon^2 t^6 e_5$ &
$E_5^t= t^7 e_5$\\
\hline
${\mathfrak C}^5_{26}(4 t^3, -3) \to {\mathcal J}_{40}$ &
$E_1^t=-2 t e_1 $ &
$E_2^t= -2 t^2 e_2 + 2 t^2 e_3$ \\
$E_3^t= 4 t^3 e_3 - 4 t^3 e_4 - 4 t^3 e_5 $ &
$E_4^t=-8 t^4 e_4 - 8 t^4 e_5$ &
$E_5^t= 16 t^5 e_5 $\\
\hline
${\mathfrak C}^5_{13}(-1,0) \to {\mathfrak C}^5_{01}$ &
$E_1^t= t^{-1} e_1 $ & $E_2^t=t^{-2} e_2 $ \\
$E_3^t= t^{-3} e_3+t^{-3} e_4+t^{-3} e_5 $ & $E_4^t= t^{-4} e_4+t^{-4} e_5$ & $E_5^t= t^{-5} e_5$\\
\hline
${\mathfrak C}^5_{13}(-1-\mathbf{A}, 0) \to {\mathfrak C}^5_{02}(\mathbf{A})$ &
$E_1^t= e_1 - t e_4$ & $E_2^t= e_2 + t ^2 e_5 $ \\
$E_3^t= e_3 $ & $E_4^t=- e_5 $ & $E_5^t= t e_4$\\
\hline
${\mathfrak C}^5_{13}\left(\frac{1}{t-1},-\frac{\sqrt{t^3}}{2 \sqrt{t-1}} \right) \to {\mathfrak C}^5_{03}$ &
$E_1^t= \sqrt[4]{t-1} e_1$ & $E_2^t=\sqrt{t - 1} e_2 + \sqrt{t^3} e_4 $\\
$E_3^t= \frac{\sqrt[4]{(t - 1)^3}}{t} e_3 $ & $E_4^t= e_5 $ & $E_5^t= \sqrt{t} e_4$\\
\hline
${\mathfrak C}^5_{13}(-1,0 ) \to {\mathfrak C}^5_{04}$ &
$E_1^t= t e_1 $ & $E_2^t=t^2 e_2 $ \\
$E_3^t= t e_3 + i t^2 e_4 $ & $E_4^t= -t^4 e_5 $ & $E_5^t= i t^3 e_4$\\
\hline
${\mathfrak C}^5_{26}(0,0) \to {\mathfrak C}^5_{05}$ & $E_1^t=t e_1 $ & $E_2^t= t^{-1} e_2 $ \\
$E_3^t=e_3 $ & $E_4^t= e_5 $ & $E_5^t= t^{-2} e_4$\\
\hline
${\mathfrak C}^5_{26}(t^{-2}, t^{-2}) \to {\mathfrak C}^5_{06}$ &
$E_1^t= t^{-1} e_1$& $E_2^t= t^{-1} e_2$ \\
$E_3^t=t^{-2} e_3 $ & $E_4^t= t^{-4} e_5 $ & $E_5^t= t^{-4} e_4$\\
\hline
${\mathfrak C}^5_{26}(4 t^{-2},0 ) \to {\mathfrak C}^5_{07}$ & $E_1^t= t^{3} e_1$ & $E_2^t= t^{-1} e_2 $ \\
$E_3^t=t^2 e_3 $ & $E_4^t= t^4 e_5 $ & $E_5^t= e_4 $\\
\hline
${\mathfrak C}^5_{26}(0,1) \to {\mathfrak C}^5_{08}$ & $E_1^t= t e_1 - t e_2 - t e_3$ & $E_2^t= -2 t^2 e_3 $\\
$E_3^t= -t^2 e_2 - t^2 e_3 $ & $E_4^t= t^3 e_4 + t^3 e_5 $ & $E_5^t=5 t^4 e_5 $\\
\hline
${\mathfrak C}^5_{26}(5,4) \to {\mathfrak C}^5_{09}$ & $E_1^t=-t e_1 + t e_2 + t e_3 $ & $E_2^t=-2 t^2 e_3 + 8 t^2 e_5$\\
$E_3^t=t^2 e_2 $& $E_4^t= t^3 e_4 $ & $E_5^t=4 t^4 e_5 $\\
\hline
${\mathfrak C}^5_{26}(-1,0) \to {\mathfrak C}^5_{10}$ &$E_1^t=-t e_1 + t^{-1} e_2 + t e_3 $&$E_2^t= -2 e_3 + 2 e_4$\\ $E_3^t= e_2 $ & $E_4^t=( 2 t-2t^{-1}) e_4 $ & $E_5^t=4 e_5 $\\
\hline
${\mathfrak C}^5_{13}\left(\frac{1}{t-1},0 \right) \to {\mathfrak C}^5_{11}$ &
$E_1^t= e_1$ & $E_2^t= e_2$\\
$E_3^t= t^{-1} e_3 - \frac{\sqrt{t}}{\sqrt{t-1}} e_4 $ &$E_4^t= \frac{\sqrt{t^3}}{\sqrt{t-1}} e_4 $ & $E_5^t= \frac{1}{t-1}e_5$\\
\hline
${\mathfrak C}^5_{13}(\mathbf{A},t^{-\frac{1}{2}} ) \to {\mathfrak C}^5_{12}(\mathbf{A})$ & $E_1^t= e_1$ & $E_2^t= e_2$\\
$E_3^t=e_3 + t^{\frac{3}{2}} e_4 $ & $E_4^t= \sqrt{t} e_4 $ & $E_5^t= e_5$\\
\hline
${\mathfrak C}^5_{80}\left(\frac{3(1+\mathbf{A})}{\mathbf{B}\sqrt{\mathbf{B}^2-\mathbf{A}}-1-\mathbf{B}^2}\right) \to {\mathfrak C}^5_{13}(\mathbf{A},\mathbf{B})$ &
\multicolumn{2}{l}{ $E_1^t= te_1+
\frac{3 t (1+\mathbf{A}) \left(2-\mathbf{B}^2+\mathbf{B} \sqrt{\mathbf{B}^2-\mathbf{A}}\right)}{4 \left(4+3\mathbf{A}+(2\mathbf{A} +5 )\mathbf{B}^2+2 \mathbf{B}^4-\left(5 +3\mathbf{A} +2 \mathbf{B}^2 \right)\mathbf{B} \sqrt{\mathbf{B}^2-\mathbf{A}}\right)}e_2$}\\
& \multicolumn{2}{l}{$+\frac{27 t \left(-2+\mathbf{B}^2-\mathbf{B} \sqrt{\mathbf{B}^2-\mathbf{A}}\right)^3 \left(1+\mathbf{A}\right)^2\sqrt{\mathbf{B}^2-\mathbf{A}} \left((2+4 \mathbf{A}) \mathbf{B}-2 \mathbf{B}^3-\left(2+3 \mathbf{A}-2 \mathbf{B}^2 \right)\sqrt{\mathbf{B}^2-\mathbf{A}}\right)}{128 \left(1+\mathbf{B}^2-\mathbf{B}\sqrt{\mathbf{B}^2-\mathbf{A}}\right)^4 \left(4+3 \mathbf{A}+\mathbf{B}^2-\mathbf{B} \sqrt{\mathbf{B}^2-\mathbf{A}}\right)^3}e_3$}\\
& \multicolumn{2}{l}{$-\frac{3 t \left(2-\mathbf{B}^2+\mathbf{B} \sqrt{\mathbf{B}^2-\mathbf{A}}\right)^2 (1+\mathbf{A}) \left(2+3 \mathbf{A}-\mathbf{B}^2+\mathbf{B} \sqrt{\mathbf{B}^2-\mathbf{A}}\right)\left(\sqrt{\mathbf{B}^2-\mathbf{A}}-\mathbf{B}\right)}{32 \left(4+3\mathbf{A}+(2\mathbf{A} +5 )\mathbf{B}^2+2 \mathbf{B}^4-\left(5 +3\mathbf{A} +2 \mathbf{B}^2 \right)\mathbf{B} \sqrt{\mathbf{B}^2-\mathbf{A}}\right)^2}e_4$}\\
& \multicolumn{2}{l}{$E_2^t= t^2e_2-\frac{3 t^2\left(1+\mathbf{A}\right) \left(\sqrt{\mathbf{B}^2-\mathbf{A}}-\mathbf{B}\right)\left(-2+\mathbf{B}^2-\mathbf{B}\sqrt{\mathbf{B}^2-\mathbf{A}}\right)}{2 \left(1+\mathbf{B}^2-\mathbf{B}\sqrt{\mathbf{B}^2-\mathbf{A}}\right) \left(4+3 \mathbf{A}+B^2-\mathbf{B} \sqrt{\mathbf{B}^2-\mathbf{A}}\right)}e_4 $}\\
& \multicolumn{2}{l}{$E_3^t= \frac{9 t^2 (1+\mathbf{A}) \left(2 \mathbf{B} \left(5+\mathbf{B}^2\right) \left(-\mathbf{B}+\sqrt{\mathbf{B}^2-\mathbf{A}}\right)+\mathbf{A} \left(2-6 \mathbf{B}^2+7 \mathbf{B} \sqrt{\mathbf{B}^2-\mathbf{A}}\right)\right)}{4 \left(1+\mathbf{B}^2-\mathbf{B} \sqrt{\mathbf{B}^2-\mathbf{A}}\right)^2 \left(4+3 \mathbf{A}+\mathbf{B}^2-\mathbf{B} \sqrt{\mathbf{B}^2-\mathbf{A}}\right)}e_3+ $} \\
& \multicolumn{2}{l}{$t^2(\sqrt{\mathbf{B}^2-\mathbf{A}}-\mathbf{B})e_4+\frac{9 t^2 \left(1+\mathbf{A} \right){\bf Q}}{32 \left(1+\mathbf{B}^2-\mathbf{B} \sqrt{-\mathbf{A}+\mathbf{B}}\right)^3 \left(4+3 \mathbf{A}+\mathbf{B}^2-\mathbf{B} \sqrt{\mathbf{B}^2-\mathbf{A}}\right)^2}e_5$}\\
& \multicolumn{2}{l}{$E_4^t= -\frac{3 t^3 \left(1+\mathbf{A}\right)}{1+\mathbf{B}^2-\mathbf{B}\sqrt{\mathbf{B}^2-\mathbf{A}}}e_3+\frac{9 t^3 \left(\mathbf{B}-\sqrt{\mathbf{B}^2-\mathbf{A}}\right) (1+\mathbf{A}) \left(2 (1+\mathbf{A}) \mathbf{B}-\sqrt{\mathbf{B}^2-\mathbf{A}}\right)}{\left(1+\mathbf{B}^2-\mathbf{B}\sqrt{\mathbf{B}^2-\mathbf{A}}\right)^2 \left(4+3\mathbf{A}+\mathbf{B}^2-\mathbf{B} \sqrt{\mathbf{B}^2-\mathbf{A}}\right)}e_5 $} \\
&\multicolumn{2}{l}{$E_5^t=-\frac{3 t^4}{1+\mathbf{B}^2-\mathbf{B}\sqrt{\mathbf{B}^2-\mathbf{A}}}e_5$}\\
\multicolumn{3}{l}{${\bf Q} = -88\mathbf{A}-60\mathbf{A}^2+344 \mathbf{B}^2-204\mathbf{A} \mathbf{B}^2-558\mathbf{A}^2 \mathbf{B}^2-201\mathbf{A}^3 \mathbf{B}^2+1032 \mathbf{B}^4+966\mathbf{A} \mathbf{B}^4+233\mathbf{A}^2 \mathbf{B}^4+360 \mathbf{B}^6+232\mathbf{A} \mathbf{B}^6- 8\mathbf{B}^8 $}\\
\multicolumn{3}{l}{$ \sqrt{\mathbf{B}^2-\mathbf{A}} \left(-344 \mathbf{B}-312\mathbf{A} \mathbf{B}-60\mathbf{A}^2 \mathbf{B}-1032 \mathbf{B}^3-1146\mathbf{A} \mathbf{B}^3-346\mathbf{A}^2 \mathbf{B}^3-360 \mathbf{B}^5-228\mathbf{A} \mathbf{B}^5+8 \mathbf{B}^7\right)$}\\
\hline
${\mathfrak C}^5_{26}((t^2-1)^2t^{-4}, t^{-4}) \to {\mathfrak C}^5_{14}$ &
$E_1^t= - e_1 + e_2 + e_3 $&
$E_2^t= -2 e_3 + 2(1 - t^2)t^{-4} e_5 $\\
$E_3^t= t e_2 + t^{-1} e_3 $ &$E_4^t=2 e_4 + 2t^{-2} e_5 $ & $E_5^t= 4 e_5 $\\
\hline
${\mathfrak C}^5_{26}(1+t^{-4}-\frac{2}{ t^2}-8 t, t^{-4}) \to {\mathfrak C}^5_{15}$ &
$E_1^t= t e_1 - t e_2 - t e_3 $ &
$E_2^t= -2 t^2 e_3 - (2 - 2t^{-2} + 8 t^3) e_5 $\\
$E_3^t= -t^2 e_2 - e_3 $
& $E_4^t= 2 t^2 e_4 + 2 e_5 $& $E_5^t= 4 t^4 e_5 $\\
\hline
${\mathfrak C}^5_{26}\left(\frac{t^4-8 t^7+16 t^8-8 t^9}{(1-2 t)^2}, \frac{(t-1)^4}{(1-2 t)^2} \right) \to {\mathfrak C}^5_{17}$ &
\multicolumn{2}{l}{ $E_1^t= -\frac{(t-1)^2 t^3}{2 t-1} e_1 + \frac{(t-1 ) t^4}{2t-1} e_2 + \frac{(t-1 )^2 t^4}{(1 - 2 t)^2} e_3 $} \\&
\multicolumn{2}{l}{ $E_2^t=\frac{2 ( 1- t)^3 t^7}{(1 - 2 t)^2} e_3 + \frac{2 (1 - t)^3 t^7}{(1 - 2 t)^3} e_4 - \frac{ 2 (t-1 )^6 t^7 (4 t^6-1 - t )}{(1 - 2 t)^4} e_5 $}\\
$E_3^t= \frac{(1 - t) t^5}{1 - 2 t} e_2 - \frac{(t-1)^3 t^5}{(1 - 2 t)^2} e_3 $ &
$E_4^t= \frac{2 (t-1 )^4 t^{10}}{(1 - 2 t)^3} e_4 + \frac{ 2 (1 - t)^6 t^{10}}{(1 - 2 t)^4}e_5 $&
$E_5^t= \frac{4 (1 - t)^6 t^{14}}{(1 - 2 t)^4} e_5 $\\
\hline
${\mathfrak C}^5_{26}(1,0) \to {\mathfrak C}^5_{18}$ &
$E_1^t=t^{-1} e_1 - t^{-1} e_2 - t^{-1} e_3 $ & $E_2^t=-2t^{-2} e_3$\\
$E_3^t=- e_2 $ & $E_4^t=2t^{-2} e_4 $ & $E_5^t=4t^{-4} e_5 $\\
\hline
${\mathfrak C}^5_{26} ( 1+4t^{-4},4t^{-4}) \to {\mathfrak C}^5_{19}$ &
$E_1^t=t^{-1} e_1 - t^{-1} e_2 - t^{-1} e_3 $ &
$E_2^t=-2t^{-2} e_3 + 8t^{-6} e_5 $\\
$E_3^t=- e_2 $ &
$E_4^t=2t^{-2} e_4 $ & $E_5^t=4t^{-4} e_5 $\\
\hline
${\mathfrak C}^5_{26} (1+8t^{-3},0 ) \to {\mathfrak C}^5_{20}$ &
$E_1^t=t^{-1} e_1 - t^{-1} e_2 - t^{-1} e_3 $&
$E_2^t=-2t^{-2} e_3 + 8t^{-5} e_5 $\\
$E_3^t=- e_2 $ & $E_4^t=2t^{-2} e_4 $ & $E_5^t=4t^{-4} e_5 $\\
\hline
${\mathfrak C}^5_{26} \left(\frac{4-8 t+t^2+2 t^4+t^6}{t^6},\frac{4+t^2}{t^6} \right) \to {\mathfrak C}^5_{21}$ &
$E_1^t= t^{-2} e_1 - t^{-2} e_2 - t^{-2} e_3 $ \\&
$E_2^t=-2t^{-4} e_3 + 2t^{-4} e_4 + 2 (t-2)^2t^{-10} e_5$ &
$E_3^t=-t^{-1} e_2 + t^{-3} e_3 $ \\&
$E_4^t= 2 t^{-5} e_4 - 2 t^{-7}e_5 $& $E_5^t=4 t^{-8} e_5 $\\
\hline
${\mathfrak C}^5_{26}(0,0) \to {\mathfrak C}^5_{22}$ &
$E_1^t= -\frac{1}{2} e_1 + \frac{1}{2} e_2 $& $E_2^t=-\frac{1}{2} e_3$\\
$E_3^t= \frac{t}{2}e_2$ & $E_4^t= -\frac{t}{4} e_4$ & $E_5^t= \frac{1}{4} e_5 $\\
\hline
${\mathfrak C}^5_{26}(t^{-2},t^{-2}) \to {\mathfrak C}^5_{23}$ &
$E_1^t= -\frac{1}{2} e_1 + \frac{1}{2} e_2 $& $E_2^t=-\frac{1}{2} e_3 + \frac{1}{2 t^2} e_5$\\ $E_3^t= -\frac{t}{2}e_2$ & $E_4^t= \frac{t}{4} e_4$ & $E_5^t= \frac{1}{4} e_5 $\\
\hline
${\mathfrak C}^5_{26}((\mathbf{A}+2 t)t^{-2},\mathbf{A} t^{-2}) \to {\mathfrak C}^5_{24}(\mathbf{A})$ &
$E_1^t= -\frac{1}{2} e_1 + \frac{1}{2} e_2$& $E_2^t= -\frac{1}{2} e_3 + \frac{\mathbf{A} + t}{2 t^2} e_5$\\ $E_3^t= -\frac{t}{2}e_2$ & $E_4^t= \frac{t}{4} e_4$ & $E_5^t= \frac{1}{4} e_5 $\\
\hline
${\mathfrak C}^5_{80} \left( \frac{1}{t-1}\right) \to {\mathfrak C}^5_{25}$ &
$E_1^t= t e_1 + \frac{t}{8 - 4 t}e_2$\\&
$E_2^t= t^2 e_2 + \frac{t^2}{4 - 2 t} e_3 + \frac{t^3}{16 (t-2)^2 (t-1)} e_4$ &
$E_3^t= t^3 e_3 - \frac{t^3 (2 + t)}{4 (t-2) (t-1)} e_4 + \frac{ t^3 (7 t-4)}{16 (t-2)^2 (t-1)} e_5$\\
&
$E_4^t= \frac{t^4}{t-1} e_4 - \frac{t^5}{2 - 3 t + t^2} e_5 $ &
$E_5^t= t^4 e_5 $\\
\hline
${\mathfrak C}^5_{26}(1+\mathbf{A} t^{-2},t^{-2}) \to {\mathfrak C}^5_{27}(\mathbf{A})$ &
$E_1^t= t^{-1} e_1 - t^{-1} e_3 $& $E_2^t= t^{-1} e_2 $\\
$E_3^t= t^{-2} e_3 - t^{-2} e_4$ & $E_4^t= t^{-3} e_4$ & $E_5^t= t^{-4} e_5 $\\
\hline
${\mathfrak C}^5_{26}(1+4 t^{-2},0 ) \to {\mathfrak C}^5_{28}$ &
$E_1^t= ( 2t^{-1}-1) e_1 + e_2 + (t-2)t^{-1} e_3 $&
$E_2^t= 2 t^{-1} e_2 $\\
$E_3^t=(4 - 2 t)t^{-2} e_3 + 2 (t-2 )t^{-2} e_4 $ &
$E_4^t= (8 - 4 t)t^{-3} e_4 $ & $E_5^t= 4 (t-2)^2 t^{-4} e_5 $\\
\hline
${\mathfrak C}^5_{26}(1,0) \to {\mathfrak C}^5_{29}$ & $E_1^t= t^{-1} e_1 - t^{-1} e_3 $& $E_2^t= t^{-1} e_2 $\\ $E_3^t= t^{-2} e_3 - t^{-2} e_4 $ & $E_4^t= t^{-3} e_4 $ & $E_5^t= t^{-4} e_5 $\\
\hline
${\mathfrak C}^5_{26} \left(1+\mathbf{A} t^2+\frac{t^4}{4},\frac{t^4}{4} \right) \to {\mathfrak C}^5_{30}(\mathbf{A})$ &
$E_1^t=-\frac{t^2}{2} e_1 +\frac{ t^2}{2} e_2 + \frac{t^2}{2} e_3 $&
$E_2^t= t e_2 $\\
$E_3^t=-\frac{t^3}{2} e_3 + \frac{t^3}{2} e_4 + \frac{t^7}{8} e_5 $ &
$E_4^t=-\frac{t^4}{2} e_4 $ & $E_5^t= \frac{t^6}{4} e_5 $\\
\hline
${\mathfrak C}^5_{26}(1+4 t^2,0) \to {\mathfrak C}^5_{31}$ &
$E_1^t= -2 t^2 e_1 + 2 t^2 e_2 + 2 t^2 e_3 $&
$E_2^t=2 t e_2 $\\
$E_3^t= -4 t^3 e_3 + 4 t^3 e_4 $ &
$E_4^t= -8 t^4 e_4 $ & $E_5^t= 16 t^6 e_5 $\\
\hline
${\mathfrak C}^5_{26}(1,0) \to {\mathfrak C}^5_{32}$ &
$E_1^t=-\frac{t^2}{2} e_1 + \frac{t^2}{2} e_2 + \frac{t^2}{2} e_3 $&
$E_2^t=2 t e_2 $\\
$E_3^t= -\frac{t^3}{2} e_3 +\frac{ t^3}{2} e_4 $ &
$E_4^t= -\frac{t^4}{2} e_4 $ & $E_5^t= \frac{t^6}{4} e_5 $\\
\hline
${\mathfrak C}^5_{26}(2+t^2,t^2) \to {\mathfrak C}^5_{33}$ &
$E_1^t=t e_1 - t e_2 - t e_3 $&
$E_2^t= e_2 $\\
$E_3^t=t e_3 - t e_4 - t^3 e_5 $ &
$E_4^t= t e_4 $ & $E_5^t=t^2 e_5 $\\
\hline
${\mathfrak C}^5_{26}(2,0) \to {\mathfrak C}^5_{34}$ &
$E_1^t=t e_1 - t e_2 - t e_3 $&
$E_2^t= e_2 $\\
$E_3^t=t e_3 - t e_4 $ &
$E_4^t= t e_4 $ & $E_5^t=t^2 e_5 $\\
\hline
${\mathfrak C}^5_{26}(2,1) \to {\mathfrak C}^5_{35}$ &
$E_1^t= e_1 - e_2 - e_3 $&
$E_2^t= t^{-1} e_2 $\\
$E_3^t=t^{-1} e_3 - t^{-1} e_4-t^{-1}e_5 $ &
$E_4^t= t^{-2} e_4 $ & $E_5^t=t^{-2} e_5 $\\
\hline
${\mathfrak C}^5_{26}(1,0) \to {\mathfrak C}^5_{36}$ &
$E_1^t= t e_1 - t e_2 - t e_3 $&
$E_2^t= e_2 $\\
$E_3^t=t e_3 - t e_4 $ &
$E_4^t= t e_4 $ & $E_5^t=t^{2} e_5 $\\
\hline
${\mathfrak C}^5_{26}(1,0) \to {\mathfrak C}^5_{37}$ & $E_1^t=-2 t^3 e_1 + 2 t^3 e_2 + 2 t^3 e_3 $&
$E_2^t= -2 t^2 e_2 + 2 t^4 e_3 $\\
$E_3^t= 4 t^5 e_3 - 4 t^5 e_4 - 4 t^9 e_5 $ &
$E_4^t=-8 t^6 e_4 + 8 t^8 e_5 $ & $E_5^t= 16 t^{10} e_5 $\\
\hline
${\mathfrak C}^5_{26}((1+t^2)^2 , t^4 ) \to {\mathfrak C}^5_{38}$ &
$E_1^t= -2 t^3 e_1 + 2 t^3 e_2 + 2 t^3 e_3 $&
$E_2^t= -2 t^2 e_2 + 2 t^4 e_3 $\\
$E_3^t=4 t^5 e_3 - 4 t^5 e_4 - 4 t^9 e_5 $ &
$E_4^t = -8 t^6 e_4 + 8 t^8 e_5 $ & $E_5^t= 16 t^{10} e_5 $\\
\hline
${\mathfrak C}^5_{26}(1,1) \to {\mathfrak C}^5_{39}$ &
$E_1^t= -t e_1 + t e_2 + t e_3 $& $E_2^t= e_2 + e_3 $\\
$E_3^t= -t e_3 + t e_4 + t e_5$ & $E_4^t =2 e_4 + 2 e_5 $ & $E_5^t= t^2 e_5 $\\
\hline
${\mathfrak C}^5_{26}(0,1) \to {\mathfrak C}^5_{40}$ &
$E_1^t= -t e_1 + t e_2 + t e_3 $& $E_2^t= e_2 + e_3 $\\
$E_3^t= -t e_3 + t e_4 + t e_5$ &
$E_4^t =2 e_4 + 2 e_5 $ & $E_5^t= t^2 e_5 $\\
\hline
${\mathfrak C}^5_{56}(1+t^3) \to {\mathfrak C}^5_{41}$ &
$E_1^t= t e_1 $& $E_2^t= t^2 e_4 + t e_5 $\\
$E_3^t= -t e_3 + t e_4 + e_5$ &
$E_4^t =t^2 e_2 - e_3 + e_4 $ & $E_5^t=t^4 e_5 $\\
\hline
${\mathfrak C}^5_{56}(2) \to {\mathfrak C}^5_{42}$ &
$E_1^t=\sqrt{t} e_1 + i \sqrt{t} e_2 $&
$E_2^t= t e_3 - t e_4 $\\
$E_3^t=i t e_2$ &
$E_4^t = -t e_4 - t^{\frac{3}{2}} e_5 $ & $E_5^t= t^2 e_5 $\\
\hline
${\mathfrak C}^5_{52}(i ) \to {\mathfrak C}^5_{43}$ &
$E_1^t= it^{-2} e_1 + t^{-2} e_2 $&
$E_2^t= -t^{-1} e_2 + \frac{1}{2t^3} e_3 - \frac{1 + 2 t }{2 t^3} e_4 $\\
$E_3^t= -t^{-3} e_4 $ &
$E_4^t = -t^{-5} e_3 + t^{-5} e_4 - t^{-7} e_5 $ &
$E_5^t= t^{-6} e_5 $\\
\hline
${\mathfrak C}^5_{49}(1+t^4 ) \to {\mathfrak C}^5_{44}$ &
$E_1^t=-(-1)^{\frac{3}{4}} e_1 + (-1)^{\frac{1}{4}} e_2 $&
$E_2^t= (-1)^{\frac{1}{4}} t e_2 $\\
$E_3^t= i t e_4 + t e_5 $ &
$E_4^t = it^{-1} e_3 - it^{-1} e_4 - (2 + t^2)t^{-1} e_5 $ &
$E_5^t= -t^2 e_5 $\\
\hline
${\mathfrak C}^5_{56}\left(\frac{1+2 t^2-2 t^3+2 t^4-2 t^5}{(1+t^2-t^3)^2}\right) \to {\mathfrak C}^5_{45}$ &
$E_1^t= t \sqrt{t^3 - t^2 -1 } e_1 + t e_2 $&
$E_2^t= t^2 e_2 $\\
$E_3^t= t^3 e_4$ &
$E_4^t = t (t^3 - t^2 -1 ) e_3 + (t + t^3) e_4 - t^5 e_5 $ & $E_5^t= t^6 e_5 $\\
\hline
${\mathfrak C}^5_{52}(i (t-1) ) \to {\mathfrak C}^5_{46}$ &
$E_1^t= (t^3-1)^{\frac{3}{2}}t^{-2} e_1 + (t^{-2} - t) e_2 $&
$E_2^t= (t^{-1} - t^2) e_2 $\\
$E_3^t=(-2 + t^{-3} + t^3) e_4$ &
$E_4^t =(1 - t^3)^3t^{-5} e_3 - (1 - t^3)^2t^{-5} e_4 $ & $E_5^t= (1 - t^3)^4t^{-6} e_5 $\\
\hline
${\mathfrak C}^5_{52}\left(\frac{1}{\sqrt{t^3-1}}\right) \to {\mathfrak C}^5_{47}$ &
$E_1^t= t^{-2} e_1 + it^{-2} e_2 - it^{-4} e_3 + i t^{-4} e_4 $
&
$E_2^t= it^{-1} e_2 - t^{-2} e_4 $ \\
$E_3^t=-t^{-3} e_4 + t^{-5} e_5$ &
$E_4^t = -it^{-5} e_3 + it^{-5} e_4 - 2 i t^{-6} e_5 $ & $E_5^t= t^{-6} e_5 $\\
\hline
${\mathfrak C}^5_{56}(1+t^4) \to {\mathfrak C}^5_{48}$ &
$E_1^t= i t e_1 + t e_2 $& $E_2^t= t^2 e_2 $ \\
$E_3^t= t^3 e_4$ &
$E_4^t = -t e_3 + t e_4 $ & $E_5^t= t^6 e_5 $\\
\hline
${\mathfrak C}^5_{49}(1+t^{-4}) \to {\mathfrak C}^5_{50}$ &
\multicolumn{2}{l}{ $E_1^t=(-1)^{\frac{3}{4}} t^{\frac{3}{2}} e_1 + (-1)^{\frac{1}{4}} t^{\frac{3}{2}} e_2 - i t^4 e_3 +
i t^4 e_4 $}\\
& $E_2^t= (-1)^{\frac{1}{4}} t^{\frac{1}{2}} e_2 - i t^4 e_3 + i t^4 e_4 $ &
$E_3^t= -i t^3 e_3 + i t^3 e_4 $ \\
&
$E_4^t = i t e_4 - t^3 e_5 $ & $E_5^t= -t^2 e_5 $\\
\hline
${\mathfrak C}^5_{49}(0) \to {\mathfrak C}^5_{51}$ &
$E_1^t= -t^{-1} e_1 $& $E_2^t= -t e_2 $ \\
$E_3^t= t^{-2} e_3$ &
$E_4^t = t^2 e_4 $ & $E_5^t= e_5 $\\
\hline
${\mathfrak C}^5_{49}(1+t) \to {\mathfrak C}^5_{52}(\mathbf{A})$ &
$E_1^t= -t^{-1} e_1 $& $E_2^t= -t e_2 $ \\
$E_3^t= t^{-2} e_3$ &
$E_4^t = t^2 e_4 $ & $E_5^t= e_5 $\\
\hline
${\mathfrak C}^5_{52}(t^{-1}) \to {\mathfrak C}^5_{53}$ &
\multicolumn{2}{l}{$E_1^t=-\sqrt{\frac{t}{\mathbf{A} - t}} e_1 + \sqrt{\frac{t^3}{\mathbf{A} - t}} e_2 +
\frac{\mathbf{A}}{\mathbf{A} - t} e_3 - \frac{\mathbf{A}}{\mathbf{A} - t} e_4 $}\\
&
$E_2^t= \sqrt{\frac{t}{\mathbf{A} - t}} e_2 + \frac{1}{\mathbf{A} - t} e_3 + \frac{1}{t-\mathbf{A}} e_4 $ &
$E_3^t= \frac{t}{\mathbf{A} - t} e_3 + t e_5 $ \\ &
$E_4^t = \frac{t}{\mathbf{A} - t} e_4 + \frac{t}{(\mathbf{A} - t)^2} e_5 $ &
$E_5^t= \frac{t^2}{(\mathbf{A} - t)^2} e_5 $\\
\hline
${\mathfrak C}^5_{52}(t^{-4}) \to {\mathfrak C}^5_{54}$ & $E_1^t= e_1 $&$E_2^t= t^{-1} e_2 $\\
$E_3^t= e_3 $ & $E_4^t = t^{-2} e_4 $ & $E_5^t= t^{-4} e_5 $\\
\hline
${\mathfrak C}^5_{52}(0) \to {\mathfrak C}^5_{55}$ &
$E_1^t= t^3 e_1 $& $E_2^t= t^2 e_2 + t^5 e_4 $\\
$E_3^t= t^6 e_3 - t^6 e_4 - t^{12} e_5 $ &
$E_4^t = t^4 e_4 + t^{10} e_5 $ & $E_5^t= t^8 e_5 $\\
\hline
${\mathfrak C}^5_{49}(\mathbf{A}) \to {\mathfrak C}^5_{56}(\mathbf{A})$ &
$E_1^t= t^{-1} e_1 $& $E_2^t= t^{-1}e_2 $\\
$E_3^t= t^{-2} e_3$ &
$E_4^t = t^{-2} e_4 $ & $E_5^t= t^{-4} e_5 $\\
\hline
${\mathfrak C}^5_{56}(1+t^{-4}) \to {\mathfrak C}^5_{57}$ &
$E_1^t= t e_1 + i t e_2 $& $E_2^t= e_2 $\\
$E_3^t= t^2 e_3 - t^2 e_4$ &
$E_4^t =e_4 $ & $E_5^t= e_5 $\\
\hline
${\mathfrak C}^5_{56}(0) \to {\mathfrak C}^5_{58}$ &
$E_1^t= t e_1 $& $E_2^t= t^2 e_2 $\\
$E_3^t= t^2 e_3 $ &
$E_4^t = t^4e_4 $ & $E_5^t= t^6 e_5 $\\
\hline
${\mathfrak C}^5_{52}(2i) \to {\mathfrak C}^5_{59}$ &
$E_1^t= i t^{-2} e_1 + t^{-2} e_2 - t^{-6} e_3 + t^{-6} e_4 $ & $E_2^t= t^{-1} e_2 $\\
$E_3^t=-t^{-4} e_3 + t^{-4} e_4 + 2t^{-8} e_5 $ &
$E_4^t =t^{-3} e_4 - t^{-7} e_5 $ & $E_5^t= t^{-6} e_5 $\\
\hline
${\mathfrak C}^5_{52} \left(\frac{t -1}{\sqrt{t^2-1}} \right) \to {\mathfrak C}^5_{60}$ &
$E_1^t= (t^2 - 1)^{\frac{3}{2}} t^{-1} e_1 + (t - t^{-1}) e_2 $
&
$E_2^t= ( t^2 - 1) e_2 $ \\
$E_3^t= ( t^2 - 1)^3 t^{-2} e_3 + (t^{-1} - t)^2 e_4 $
&
$E_4^t = (t^{-1} - 2 t + t^3) e_4 $ &
$E_5^t= (t^2 - 1)^4 t^{-2} e_5 $\\
\hline
${\mathfrak C}^5_{52}(0) \to {\mathfrak C}^5_{61 }$ &
$E_1^t= -i t^{-2} e_1 - t^{-2} e_2 $&
$E_2^t= -t^{-1} e_2$\\
$E_3^t= -t^{-4} e_3 + t^{-4} e_4 $ &
$E_4^t = t^{-3} e_4 $ &
$E_5^t= t^{-6} e_5 $\\
\hline
${\mathfrak C}^5_{52}\left( -\frac{8+t}{t} \right) \to {\mathfrak C}^5_{62 }(\mathbf{A})$
&\multicolumn{2}{l}{ $E_1^t= 2 e_1 + 2 e_2 + 2 (4 - 4 \mathbf{A} + t) t^{-2} e_3 + ( 8 \mathbf{A}-8 + 6 t) t^{-2} e_4 $}\\
&$E_2^t= 4 t e_2 + 2 e_3 - 2 e_4 $&
$E_3^t=4 e_3 + 4 e_4 + 256 (\mathbf{A}-1)t^{-3} e_5 $ \\&
$E_4^t =8 t e_4 + (8 - 32 \mathbf{A} t^{-1}) e_5 $ &
$E_5^t= 64 e_5 $\\
\hline
${\mathfrak C}^5_{49}(1-t^{-4}) \to {\mathfrak C}^5_{63 }$ &
$E_1^t= i t^2 e_1 + i t e_2 + t^6 e_3 - t^6 e_4$
&
$E_2^t= i t^2 e_2 - t^6 e_3 + t^6 e_4$\\
&\multicolumn{2}{l}{ $E_3^t= -t^4 e_3 + t^2 ( t^2-1) e_4 + t^3 (t^2-2) e_5 $}\\
& $E_4^t=-t^3 e_4 - t^4 (1 + t^3) e_5 $ &
$E_5^t= t^5 e_5 $\\
\hline
\multicolumn{2}{l}{ ${\mathfrak C}^5_{49}(1+t^2-5 t^4+10 t^6-10 t^8+5 t^{10}-t^{12}) \to {\mathfrak C}^5_{64 }$} &
$E_1^t= -\frac{(-1)^{\frac{3}{4}}}{t^2-1} e_1 + (-1)^{\frac{1}{4}} e_2$\\
&
\multicolumn{2}{l}{ $E_2^t=(-1)^{\frac{1}{4}} t e_2 + \frac{i}{( t^2-1)^3} e_3 - \frac{i}{(t^2-1)^3} e_4 $}\\
&\multicolumn{2}{l}{ $E_3^t= -\frac{i}{(t^2-1)^2} e_3 +
i\left(t^{-2} + \frac{1}{(t^2-1)^{2}}\right) e_4 + \frac{2 - t^2}{3 t^2-1} e_5 $}\\
& $E_4^t= it e_4 + \frac{t}{t^2-1} e_5 $ &
$E_5^t=-t^2 e_5 $\\
\hline
${\mathfrak C}^5_{52}\left(\frac{1}{\sqrt{t^2-1}}\right) \to {\mathfrak C}^5_{65 }$ &
$E_1^t=-(t^2 - 1)^{\frac{3}{2}} t^{-1} e_1 + ( t-t^{-1}) e_2 $\\
&
$E_2^t= ( t^2-1) e_2$&
$E_3^t= ( t^2-1)^3t^{-2}e_3 + ( t^{-2} -2 + t^2) e_4 $ \\
& $E_4^t= (t^{-1} - 2 t + t^3) e_4 $ &
$E_5^t= ( t^2-1)^4 t^{-2} e_5 $\\
\hline
${\mathfrak C}^5_{52}( t^{-1} ) \to {\mathfrak C}^5_{66 }$ &
$E_1^t= t^5 e_1 - t^4 e_2$&
$E_2^t=-t^5 e_2 + \frac{t^9}{2} e_3 + (t^7 - \frac{t^9}{2}) e_4 $\\
$E_3^t= t^{10} e_3 + ( t^8 - t^{10}) e_4 $ &
$E_4^t=t^9 e_4 $ &
$E_5^t= t^{16} e_5 $\\
\hline
${\mathfrak C}^5_{64} \to {\mathfrak C}^5_{67}$ &
$E_1^t= t^{-1} e_1 $&
$E_2^t= t^{-1} e_2 $\\
$E_3^t= t^{-2} e_3 $ &
$E_4^t= t^{-2} e_4 $ &
$E_5^t= t^{-4} e_5 $\\
\hline
${\mathfrak C}^5_{56}( -t^{-4} ) \to {\mathfrak C}^5_{68 }$ &
$E_1^t= t e_1 + e_2 $&
$E_2^t= t e_2 $\\
$E_3^t= t^2 e_3 + (1 - t^2) e_4 $ &
$E_4^t= t e_4 $ &
$E_5^t= t e_5 $\\
\hline
${\mathfrak C}^5_{69}\left(\frac{1+2 t^2+t^3}{t^4}\right) \to {\mathfrak C}^5_{70}$ &
$E_1^t= t^{-1} e_1 + t^{-2} e_3 + t^{-1} e_4$&
$E_2^t= t e_2 $\\
$E_3^t=t^{-3} e_3 $ &
$E_4^t =t^{-2} e_4 + (2 + t)t^{-3} e_5 $ & $E_5^t= t^{-4} e_5 $\\
\hline
${\mathfrak C}^5_{69}\left(\frac{1+t^3}{2 \sqrt[3]{2} t^3} \right) \to {\mathfrak C}^5_{71}$ &
$E_1^t=\frac{1}{\sqrt[3]{2}t} e_1 + \frac{1}{2 t} e_3 $&
$E_2^t= \sqrt[3]{2} e_2 + \frac{1}{2} e_3 $\\
$E_3^t=\frac{1}{2 t^2} e_3 $ &
$E_4^t = \frac{1}{\sqrt[3]{2}t^2} e_4 + \frac{1}{\sqrt[3]{2} t^2} e_5 $ & $E_5^t= \frac{1}{2 \sqrt[3]{2} t^4} e_5 $\\
\hline
${\mathfrak C}^5_{69}(0) \to {\mathfrak C}^5_{73}$ &
$E_1^t= t^{-1} e_1 + t e_3 $&
$E_2^t= t^{-2} e_2 + t e_3 $\\
$E_3^t= e_3 $ &
$E_4^t =t^{-2}e_4 $ & $E_5^t= t^{-4} e_5 $\\
\hline
${\mathfrak C}^5_{72} \to {\mathfrak C}^5_{74}$ &
$E_1^t= e_1 + e_3 $&
$E_2^t= t e_2 $\\
$E_3^t= t^{-1} e_3 $ &
$E_4^t = e_4 $ & $E_5^t= e_5 $\\
\hline
${\mathfrak C}^5_{74} \to {\mathfrak C}^5_{75}$ &
$E_1^t=t^{-1} e_1 + e_3 $&
$E_2^t= e_2 $\\
$E_3^t= t^{-2} e_3 $ &
$E_4^t = t^{-2} e_4 $ & $E_5^t= t^{-4} e_5 $\\
\hline
${\mathfrak C}^5_{80}(t^{-1}) \to {\mathfrak C}^5_{78}$ &
$E_1^t=2 (t-1) t e_1 + (1 - t) t e_2 $&\\
&\multicolumn{2}{l}{ $E_2^t=4 (t-1)^2 t^2 e_2 - 4 (t-1)^2 t^2 e_3 + (t-1)^2 t (1 + t) e_4 $}\\
&\multicolumn{2}{l}{ $E_3^t= 8 (t-1)^3 t^3 e_3 - 4 (t-1)^3 t^2 (3 + t) e_4 + 2 (t-1)^3 t^2 (3 + 7 t) e_5$}\\
& $E_4^t = 16 (t-1)^4 t^3 e_4 - 32 (t-1)^4 t^3 (1 + t) e_5 $ &
$E_5^t= (t-1)^5 t^4 e_5 $\\
\hline
${\mathfrak C}^5_{80}\left(\frac{3}{3+t}\right) \to {\mathfrak C}^5_{79}$ &
\multicolumn{2}{l}{ $E_1^t= 4 t (3 + t) e_1 - 36 t (3 + t) e_2 - 54 t (3 + t) (6 + t) e_3$}\\
& \multicolumn{2}{l}{$E_2^t= 16 t^2 (3 + t)^2 e_2 - 288 t^2 (3 + t)^2 e_3 +
11664 t^2 (3 + t) (24 + 10 t + t^2) e_5 $}\\
& \multicolumn{2}{l}{$E_3^t= 64 t^3 (3 + t)^3 e_3 - 576 t^3 (3 + t)^2 (12 + t) e_4 -
2592 t^3 (3 + t)^2 (-24 - 2 t + t^2) e_5 $} \\
& \multicolumn{2}{l}{$E_4^t= 768 t^4 (3 + t)^3 e_4 - 9216 t^4 (3 + t)^3 (6 + t) e_5 $} \\
&\multicolumn{2}{l}{$E_5^t=3072 t^5 (3 + t)^4 e_5$}\\
\hline
\end{longtable}
}
By calculating the dimensions of the derivation algebras, we obtain the dimensions ($\mathrm{gdim}$) of the algebraic varieties
defined by the following algebras:
\begin{longtable}{l}
$\mathrm{gdim} \ {\mathfrak C}_{49}^5(\alpha) = 24$\\
$\mathrm{gdim} \ {\mathfrak C}_{26}^5(\alpha,\beta) = 23$\\
$\mathrm{gdim} \ {\mathcal J}_{21}=\mathrm{gdim} \ {\mathfrak C}_{16}^5(\alpha)= \mathrm{gdim} \ {\mathfrak C}_{69}^5=\mathrm{gdim} \ {\mathfrak C}_{72}^5= \mathrm{gdim} \ {\mathfrak C}_{80}^5(\alpha)= \mathrm{gdim} \ {\mathfrak C}_{81}^5= 22$\\
$\mathrm{gdim} \ {\mathfrak C}_{76}^5 = \mathrm{gdim} \ {\mathfrak C}_{77}^5 =21$\\
\end{longtable}
Thanks to the list of non-degeneration arguments presented below:
\begin{longtable}{|rcl|l|}
\hline
\multicolumn{3}{|c|}{\textrm{Non-degeneration}} & \multicolumn{1}{|c|}{\textrm{Arguments}}\\
\hline
\hline
${\mathfrak C}_{26}^5(\alpha,\beta)$ & $\not \to$&
$\begin{array}{l}
{\mathfrak C}_{16}^5(\mathbf{A}), {\mathfrak C}_{69}^5, {\mathfrak C}_{72}^5, {\mathfrak C}_{76}^5, \\
{\mathfrak C}_{77}^5, {\mathfrak C}_{80}^5(\mathbf{A}) ,{\mathfrak C}_{81}^5, {\mathcal J}_{21}\end{array} $
&
${\mathcal R}=
\left\{
\begin{array}{l} A_1A_4=0 \end{array} \right\} $
\\
\hline
${\mathfrak C}_{49}^5(\alpha)
$ & $\not \to$&
$\begin{array}{l}
{\mathfrak C}_{16}^5(\mathbf{A}), {\mathfrak C}_{26}^5(\mathbf{A},\mathbf{B}), {\mathfrak C}_{69}^5, {\mathfrak C}_{72}^5, {\mathfrak C}_{76}^5, \\
{\mathfrak C}_{77}^5, {\mathfrak C}_{80}^5(\mathbf{A}) ,{\mathfrak C}_{81}^5, {\mathcal J}_{21}\end{array} $
&
${\mathcal R}=
\left\{
\begin{array}{l}
A_1^2 \subseteq A_3, A_1A_2 \subseteq A_4, \\
A_1A_3 \subseteq A_5,
A_1A_5 \subseteq 0
\end{array} \right\} $
\\
\hline
${\mathfrak C}_{16}^5(\alpha)
$ & $\not \to$&
$\begin{array}{l}
{\mathfrak C}_{76}^5,
{\mathfrak C}_{77}^5 \end{array} $ &
${\mathcal R}=
\left\{
\begin{array}{l}
A_1^2 \subseteq A_3, \
A_1A_2 \subseteq A_4, \
A_1A_3 \subseteq A_5 \end{array} \right\} $ \\
\hline
${\mathfrak C}_{80}^5(\alpha), {\mathfrak C}_{81}^5
$ & $\not \to$&
$\begin{array}{l}
{\mathfrak C}_{76}^5,
{\mathfrak C}_{77}^5 \end{array} $ &
${\mathcal R}=
\left\{
\begin{array}{l}
A_3^2=0 \end{array} \right\} $ \\
\hline
${\mathfrak C}_{72}^5, {\mathfrak C}_{69}^5
$ & $\not \to$&
$\begin{array}{l}
{\mathfrak C}_{76}^5,
{\mathfrak C}_{77}^5 \end{array} $ &
${\mathcal R}=
\left\{
\begin{array}{l}
A_1^2 \subseteq A_4 \end{array} \right\} $ \\
\hline
${\mathcal J}_{21}
$ & $\not \to$&
$\begin{array}{l}
{\mathfrak C}_{76}^5,
{\mathfrak C}_{77}^5 \end{array} $ &
${\mathcal R}=
\left\{
\begin{array}{l}
{\mathcal J}_{21} \mbox{ is Jordan} \end{array} \right\} $ \\
\hline
\end{longtable}
we conclude that the algebras
\[ \Omega=\{ {\mathcal J}_{21}, {\mathfrak C}_{16}^5(\alpha), {\mathfrak C}_{26}^5(\alpha,\beta), {\mathfrak C}_{49}^5(\alpha), {\mathfrak C}_{69}^5, {\mathfrak C}_{72}^5, {\mathfrak C}_{76}^5, {\mathfrak C}_{77}^5, {\mathfrak C}_{80}^5(\alpha), {\mathfrak C}_{81}^5\}\]
give the irreducible components.
\end{proof}
Electron-neutrino and electron-anti-neutrino scattering off electrons
have played an important role in the searches for neutrino
oscillations. First hinted by the data from solar and atmospheric
neutrinos, oscillations have subsequently been confirmed with reactor
and accelerator data~\cite{fukuda:1998mi,ahmad:2002jz,eguchi:2002dm}.
Altogether, these experiments now give clear evidence that neutrinos
are massive~\cite{Maltoni:2004ei} and, therefore, expected to be
endowed with non-standard interactions that may violate leptonic
flavour and/or break weak universality~\cite{schechter:1980gr}.
Future experiments, such as BOREXINO~\cite{Alimonti:2000xc}, aim to
use the same reaction for detecting lower energy solar neutrinos.
The Standard Model cross section for this process has been known since
the 70's~\cite{Bardin:1970wr,'tHooft:1971ht,Chen:1972yi}, when the
first measurements were carried out~\cite{Reines:1976pv}.
Radiative corrections have been calculated more recently
in~\cite{Bahcall:1995mm} and there have been recent
experiments~\cite{Auerbach:2001wg,Amsler:2002tu}. Currently there are
many proposals to perform new experiments either at relatively high
energies~\cite{Conrad:2004gw}, in order to test the NuTeV
anomaly~\cite{Zeller:2001hh}, as well as at low
energies~\cite{Giomataris:2003pd,Giomataris:2003bp,Kopeikin:2003bx,neganov:2001bn},
motivated by the search for a possible non-zero transition neutrino
magnetic moment~\cite{Schechter:1981hw}.
As already mentioned, it has long been noticed that massive neutrinos
are expected to have non-standard interactions, which may arise either
from the structure of the charged and neutral current weak
interactions in seesaw--type models~\cite{schechter:1980gr}, or from
the exchange of scalar bosons, as present in radiative and/or
supersymmetric models of neutrino
mass~\cite{zee:1980ai,babu:1988ki}. The strength of the expected NSI
depends strongly on the model. Here we adopt a model independent
approach of simply analyzing their phenomenological implications in
neutrino electron scattering. For recent previous studies see
Refs.~\cite{Berezhiani:2001rs,Berezhiani:2001rt,Davidson:2003ha}.
This possibility has been revived recently as it was noted that both
solar and atmospheric neutrino data are consistent with sizable values
of the NSI
parameters~\cite{Friedland:2004pp,Guzzo:2004ue,Friedland:2004ah}. For
the case of neutrino interactions with the down--quark, it has been
shown that the presence of NSI brings in an ambiguous determination of
the solar neutrino oscillation parameters, with a new solution in the
so--called ``dark side'' (with $\sin^2\theta_{sol}\simeq
0.7$~\cite{Miranda:2004nb}), degenerate with the conventional one,
even after taking into account data from the KamLAND experiment. For
the case of $\nu_e e^-$ NSI the couplings are also allowed to be
large~\cite{Guzzo:2004ue}.
In this work we concentrate on a detailed study of $\nu_e e^-$ and
$\bar{\nu}_e e$ scattering in the presence of non-standard neutrino
interactions, which cannot be found in previous studies, e.g.,
Refs.~\cite{Berezhiani:2001rs,Davidson:2003ha}.
We focus on short baseline terrestrial experiments such as the LSND
$\nu_e e^-$ scattering and a variety of $\bar{\nu}_e e$ scattering
experiments using reactor neutrinos, exploiting their complementarity.
Our analysis is new in two ways. First we relax the conditions under
which the constraints on weak couplings have been previously derived.
Second, we update the study through the inclusion of more recent data,
such as the recent data from the MUNU
experiment~\cite{Daraktchieva:2003dr}. Also for completeness, we
include the results from the Rovno reactor~\cite{Derbin:1993wy}.
Moreover, the results from the Irvine~\cite{Reines:1976pv} experiment
will be analyzed considering the two energy bins that were reported in
the original article.
This paper is organized as follows: in Sec.~\ref{sec:SM} we recall the
basics of $\nu_e e$ scattering in the context of the Standard Model,
in Sec.~\ref{sec:NSI} we analyse the role of non-standard neutrino
interactions and in Sec.~\ref{sec:res-fut} we discuss prospects for
further improvements, stressing the role of future low energy
experiments using solar neutrino, as well as experiments using
radioactive neutrino sources.
\section{The neutrino electron scattering}
\label{sec:SM}
As a warm-up exercise, before considering the case of non-standard
neutrino interactions, let us briefly consider the restrictions placed
by current experiments within the context of the Standard Model.
\subsection{Preliminaries}
\label{sec:preliminaries}
In the Standard Model the $\nu_e e$ differential cross section
scattering involves both neutral and charged currents and is well
known~\cite{Bardin:1970wr} to be
\begin{equation}
\frac{d\sigma}{dT} = \frac{2 G_F^2 m_e}{\pi}
\big[
g^2_L + g^2_R(1 - \frac{T}{E_\nu})^2 - g_L g_R \frac{m_e T}{E^2_\nu}
\big]
\label{diff:cross:sec}
\end{equation}
where $G_F=1.166\times10^{-5}$~GeV$^{-2}$, $m_e$ is the electron mass,
$T$ is the kinetic energy of the recoil electron and $E_\nu$ is the
neutrino energy.
One can see explicitly that the differential cross section in
Eq.~(\ref{diff:cross:sec}) has a symmetry under the simultaneous
transformation $g_L\to -g_L$ and $g_R\to -g_R$. Apart from the last
term, it is also invariant under separate sign changes in $g_{L,R}$.
For the case of $\bar{\nu}_e e$ scattering one has to exchange $g_L$
by $g_R$.
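As a numerical cross-check, a minimal sketch of the differential cross section and of the sign-flip symmetry is given below; the coupling values $0.73$ and $0.23$ are illustrative, not fitted values:

```python
import math

G_F = 1.166e-5       # Fermi constant in GeV^-2
M_E = 0.511e-3       # electron mass in GeV
HBARC2 = 0.3894e-27  # (hbar*c)^2 in GeV^2 * cm^2, converts to cm^2

def dsigma_dT(T, E_nu, gL, gR):
    """d(sigma)/dT in cm^2/GeV; T and E_nu in GeV.

    For anti-neutrino scattering exchange gL and gR."""
    y = T / E_nu
    return (2.0 * G_F**2 * M_E / math.pi) * (
        gL**2 + gR**2 * (1.0 - y)**2 - gL * gR * M_E * T / E_nu**2
    ) * HBARC2

# The transformation (gL, gR) -> (-gL, -gR) leaves every term invariant,
# so scattering data alone cannot fix the overall sign of the couplings.
a = dsigma_dT(1e-3, 10e-3, 0.73, 0.23)
b = dsigma_dT(1e-3, 10e-3, -0.73, -0.23)
assert a == b and a > 0.0
```

This makes the irreducible sign degeneracy discussed below explicit: the couplings enter only through $g_L^2$, $g_R^2$ and the product $g_L g_R$.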
For a fixed neutrino energy, the determination of the weak coupling
constants $g_L$ and $g_R$ is ambiguous, since the same cross section in
Eq.~(\ref{diff:cross:sec}) is obtained for any $(g_L, g_R)$ values on an
ellipse with one axis given by $1$ and the other by $(1 -
\frac{T}{E_\nu})$.
However, measurements at different neutrino energies can potentially
lift this degeneracy, due to the last term in
Eq.~(\ref{diff:cross:sec}). For example, for sufficiently low
energies, comparable to the electron mass, the extra term rotates the
ellipse by a sizable angle
\begin{equation}
\tan 2\theta =
\frac{m_e}{(2 E_\nu - T)}.
\end{equation}
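The size of this rotation can be checked numerically; the energies used below are only illustrative of the reactor-like and LSND-like regimes:

```python
import math

M_E = 0.511  # electron mass in MeV

def tilt_angle_deg(E_nu, T):
    """Rotation angle theta (in degrees) of the equi-cross-section
    ellipse, from tan(2*theta) = m_e / (2*E_nu - T); energies in MeV."""
    return 0.5 * math.degrees(math.atan(M_E / (2.0 * E_nu - T)))

low = tilt_angle_deg(1.0, 0.5)    # MeV-scale anti-neutrinos: ~9 degrees
high = tilt_angle_deg(30.0, 10.0) # tens of MeV: well below 1 degree
assert low > 5.0 and high < 1.0
```

This is why low-energy experiments, with energies comparable to $m_e$, are the ones that can lift the degeneracy left by high-energy data.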
On the other hand, the anti-neutrino cross section defines another
ellipse, perpendicular to the one of the neutrino
case, since its axes are exchanged ($g_L
\leftrightarrow g_R$). Therefore, by judicious combinations of
energies and/or adding anti-neutrino data, one expects to lift the
above degeneracy, as we will see in the next subsection.
Within the Standard Model the coupling constants $g_L$ and $g_R$ are
expressed, at tree level, as
\begin{eqnarray}
\label{gLgR}
g_L &=& \frac12 + \sin^2\theta_W\\
g_R &=& \sin^2\theta_W
\end{eqnarray}
where $g_L \equiv 1+g_L^{SM}$, $g_L^{SM}$ being the conventional SM
definition. We have checked explicitly that for the present accuracy
of the experiments, the above simple formulae are sufficient, as there
is no sensitivity to the corresponding radiative corrections given
in~\cite{Bahcall:1995mm}.
\subsection{Analysis}
\label{sec:analysis}
In our global analysis of the $\nu_e e$ and $\bar{\nu}_e e$ scattering
we have included all current experiments, namely, the data from the
LSND measurement of the neutrino electron scattering cross
section~\cite{Auerbach:2001wg}; for the anti-neutrino electron
scattering we have considered the two bins measured in the Irvine
experiment~\cite{Reines:1976pv}, the results of the Rovno
experiment~\cite{Derbin:1993wy} and the more recent result from the
MUNU experiment~\cite{Daraktchieva:2003dr}. The experimental results
are summarized in Table~\ref{table:1}.
In order to perform the analysis we need the total cross section,
which for the anti-neutrino case we express as
\begin{equation}
\sigma = \int dT' \int dT \int dE_\nu
\frac{d\sigma}{dT} \lambda (E_\nu) R(T,T')
\label{diff:cross:sec-sm}
\end{equation}
where the anti-neutrino energy spectrum $\lambda(E_\nu)$ and the
detector energy resolution function $R(T,T')$ are convoluted with the
cross section given in Eq.~(\ref{diff:cross:sec}).
In particular for the most recent MUNU measurement from reactor
neutrinos~\cite{Daraktchieva:2003dr}, we use an anti-neutrino
energy spectrum given by
\begin{equation}
\lambda(E_\nu) = \sum_{k=1}^4 a_k \lambda_k(E_\nu)
\label{diff:cross:sec-NSI}
\end{equation}
where $a_k$ is the abundance of $^{235}$~$U$ ($k=1$), $^{239}$~$Pu$
($k=2$), $^{241}$~$Pu$ ($k=3$) and $^{238}$~$U$ ($k=4$) in the
reactor, $\lambda_k(E_\nu)$ is the corresponding neutrino energy
spectrum which we take from the parametrization given
in~\cite{Huber:2004xh}, with the appropriate fuel composition. For
energies below $2$~MeV there are only theoretical calculations for the
antineutrino spectrum which we take from Ref.~\cite{Kopeikin:1997ve}.
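The fuel-weighted sum of Eq.~(\ref{diff:cross:sec-NSI}) can be sketched as follows; both the fuel fractions and the spectrum coefficients below are placeholders (the real fractions depend on the reactor burn-up history, and the fitted coefficients are those of the cited parametrization, not these):

```python
import math

# Hypothetical fuel fractions for (235U, 239Pu, 241Pu, 238U).
A_K = [0.54, 0.33, 0.06, 0.07]

def lam_k(E, coeffs):
    """Single-isotope spectrum in an exponential-of-polynomial form,
    lambda_k(E) = exp(sum_j c_j E^j); coefficients are placeholders."""
    return math.exp(sum(c * E**j for j, c in enumerate(coeffs)))

# One placeholder coefficient set per isotope (falling spectra).
COEFFS = [[1.0, -0.3], [0.9, -0.35], [0.95, -0.33], [0.8, -0.4]]

def reactor_spectrum(E):
    """Fuel-weighted anti-neutrino spectrum, E in MeV."""
    return sum(a * lam_k(E, c) for a, c in zip(A_K, COEFFS))

assert reactor_spectrum(3.0) > 0.0
```

In the actual fit this weighted spectrum enters the convolution of Eq.~(\ref{diff:cross:sec-sm}) together with the detector resolution.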
For the case of the Irvine experiment we prefer to use the neutrino
energy spectrum used by the experimentalists at that
time~\cite{Avignone}.
Regarding the detector resolution function $R(T,T')$, for the case of
MUNU it was found to be 8\%, scaling with the power $0.7$ of the
energy~\cite{Daraktchieva:2003dr}.
For the other two anti-neutrino experiments included in our analysis
the resolution function was not reported, so we neglect resolution
effects.
\begin{table}[!t]
\begin{tabular}{|c|c|c|c|}
\hline
Experiment & Energy range (MeV) & events & measurement \\
\hline\hline
LSND $\nu_e e$ & 10-50 & 191 &
$\sigma=[10.1\pm1.5]\times
E_{\nu_e}({\rm MeV})\times10^{-45} {\rm cm}^2$ \\[.1cm]
Irvine $\bar{\nu}_e-e$ & 1.5 - 3.0 & 381 &
$\sigma=[0.86\pm0.25]\times \sigma_{V-A}$ \\[.1cm]
Irvine $\bar{\nu}_e-e$ & 3.0 - 4.5 & 77 &
$\sigma=[1.7\pm0.44]\times \sigma_{V-A}$ \\[.1cm]
Rovno $\bar{\nu}_e-e$ & 0.6 - 2.0 & 41 &
$\sigma=(1.26\pm 0.62)\times10^{-44} {\rm cm}^2 / {\rm fission}$ \\[.1cm]
MUNU $\bar{\nu}_e-e$ & 0.7 - 2.0 & 68 &
$1.07 \pm 0.34$ events day~$^{-1}$ \\[.1cm]
\hline
\end{tabular}
\caption{Current experimental data on (anti-)neutrino electron scattering.}
\label{table:1}
\end{table}
For the LSND electron neutrino experiment we use the theoretical
expectation for the total neutrino electron cross section, which is
\begin{equation}
\sigma(\nu_e e) = \frac{2m_e G_F^2 E_\nu}{\pi}[g_L^2 + \frac13 g_R^2].
\end{equation}
Notice that in this case the term $g_L g_R$ can be neglected, since
this experiment was done at high energies of tens of MeV. As a result
there is no tilt in the ellipse, as discussed in the previous section
(see also Fig.~\ref{fig:global}).
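A quick numerical sketch shows that the tree-level SM prediction is compatible with the LSND measurement of Table~\ref{table:1}; the value of $\sin^2\theta_W$ used below is an assumed input, not a result of this fit:

```python
import math

G_F = 1.166e-5       # GeV^-2
M_E = 0.511e-3       # GeV
HBARC2 = 0.3894e-27  # (hbar*c)^2 in GeV^2 * cm^2
SIN2_W = 0.231       # assumed weak mixing angle

gL = 0.5 + SIN2_W    # tree-level couplings for nu_e e scattering
gR = SIN2_W

# sigma(nu_e e) / E_nu in cm^2 per MeV (1 MeV = 1e-3 GeV)
sigma_per_MeV = (2.0 * M_E * G_F**2 / math.pi) \
                * (gL**2 + gR**2 / 3.0) * HBARC2 * 1e-3

# LSND measured (10.1 +/- 1.5) x 10^-45 cm^2 x E_nu(MeV); the SM value
# (~9.5e-45 cm^2/MeV) lies within one standard deviation.
assert abs(sigma_per_MeV - 10.1e-45) < 1.5e-45
```

The $(\hbar c)^2$ factor converts the natural-unit result to cm$^2$; at tens of MeV the neglected $g_L g_R$ term is suppressed by $m_e/E_\nu$.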
With this information we proceed to our $\chi^2$ analysis.
Altogether, we will have five observables, and therefore, it will be
possible to constrain up to four parameters simultaneously.
We neglect correlations between experiments; this is a good
approximation as the only possible correlation comes from the reactor
neutrino energy spectrum, estimated to be less than
2\%~\cite{Huber:2004xh}, small in view of the statistical errors.
Therefore we can define the $\chi^2$ simply as
\begin{equation}
\chi^2 = \sum_i\frac{(\sigma_i^{\rm theo}-\sigma_i^{\rm exp})^2}{\Delta_i^2}
\end{equation}
where the $\sigma_i^{\rm exp}$ are given by the measurements shown in
Table~\ref{table:1} and $\Delta_i$ are the corresponding errors, while
$\sigma_i^{\rm theo}$ is the theoretical expectation.
\subsection{The Standard Model parameters}
In this section we present the results of our fit first in terms of
the $g_L$ and $g_R$ coupling constants and, later, we will obtain the
value of the Standard Model weak mixing angle.
To obtain the allowed regions for the $g_L$ and $g_R$ coupling
constants we perform a $\chi^2$ analysis as discussed in the previous
subsection. These two parameters are determined by five measurements
and therefore we will have three degrees of freedom. The minimum
$\chi^2$ for this case was $0.52$.
The results are illustrated in Fig.~(\ref{fig:global}) at $90$\% C.L.
($\Delta \chi^2 = 4.61$). In this case one can clearly see
the existence of four possible regions for these parameters.
\begin{figure}
\begin{center}
\includegraphics[width=0.6\textwidth,angle=-90]{gl-gr.new.ps}
\caption{
  Allowed regions at $90$\% C.L. for $g_L$ and $g_R$ obtained by
  a global fit to neutrino and anti-neutrino electron
  scattering data. It is possible to see the existence of four
  allowed regions. The plot also shows the contribution from
  LSND neutrino electron scattering (horizontal ellipse) and
  combined data from reactor experiments (vertical
  ellipse). The tilted ellipse illustrates the potential of a
  future low-energy artificial neutrino source (Tritium
  proposal in Ref.~\cite{Giomataris:2003bp}). }
\label{fig:global}
\end{center}
\end{figure}
We overlay in the same figure the corresponding equi-cross section
regions for current neutrino and anti-neutrino experiments, which form
two perpendicular ellipses, as expected.
The neutrino LSND data gives rise to the horizontal ellipse, while the
combined anti-neutrino data lead to the vertical ellipse and therefore
reduce the allowed region by restricting the $g_L$ and $g_R$ values to
the intersection of the two.
Of the existing experiments the ones giving the main contribution to
the constraint are the LSND and the Irvine experiments, due to their
higher statistics. A more restrictive analysis from the MUNU
experiment might be possible by using its binned data, although this
is beyond the scope of the present work.
We also show in Fig.~(\ref{fig:global}) the case of a future
low-energy neutrino experiment, in which case the ellipse is tilted.
To illustrate the potential of future low energy experiments we
consider, for definiteness, the case of the NOSTOS proposal, where
anti-neutrinos come from an intense Tritium source with a maximum
energy of 18.6~keV~\cite{Giomataris:2003pd}.
For this case the anti-neutrino spectrum for the source is taken
as~\cite{Ianni:1999nk}
\begin{equation}
\lambda \left(E_{\nu}\right)=
A\frac{x}{1-e^{-x}}\left(Q+m_{e}-E_{\nu}\right) E_{\nu}^{2}
\sqrt{\left(Q+m_{e}-E_{\nu}\right)^{2}-m_{e}^{2}}
\, ,\label{lambda}
\end{equation}
where $A$ is a normalization factor, $Q=18.6$~keV, $m_{e}$ is the
electron mass, and
\begin{equation}
x=2\pi \alpha_{\rm e.m.} \frac{Q+m_{e}-E_{\nu}}
{\sqrt{\left(Q+m_{e}-E_{\nu}\right)^{2}-m_{e}^{2}}} \,
.\label{lambda2}
\end{equation}
This spectrum is convoluted with the anti-neutrino differential cross
section. The total number of events is set to be
$3500$~\cite{Giomataris:2003bp} (for one year of data taking).
We see that there is room for such future low-energy neutrino
experiments to provide useful input to resolve the current degenerate
determination of the weak coupling constants, improving the existing
measurements.
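The spectrum of Eq.~(\ref{lambda}) is easy to evaluate numerically. The sketch below (plain Python; energies in keV; the grid size and the midpoint-rule normalization to the quoted 3500 events per year are illustrative choices, not part of the proposal):

```python
import math

ALPHA = 1.0 / 137.036   # fine-structure constant
Q = 18.6                # Tritium endpoint energy [keV]
ME = 511.0              # electron mass [keV]

def spectrum(e_nu):
    """Unnormalized lambda(E_nu) of Eq. (lambda); the factor
    x/(1 - exp(-x)) is the Coulomb correction of Eq. (lambda2)."""
    w = Q + ME - e_nu                # total electron energy
    p = math.sqrt(w * w - ME * ME)   # electron momentum
    x = 2.0 * math.pi * ALPHA * w / p
    return x / (1.0 - math.exp(-x)) * w * e_nu ** 2 * p

# normalize to 3500 events/year with a midpoint rule on (0, Q)
n = 2000
grid = [Q * (i + 0.5) / n for i in range(n)]   # endpoints excluded
raw = [spectrum(e) for e in grid]
norm = sum(raw) * Q / n
events = [3500.0 * r / norm for r in raw]
```

The endpoints are excluded from the grid because the Coulomb factor is written as $0/0$ at $E_\nu=Q$ (its limit there is finite).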
Unfortunately, as discussed above, the symmetry of the cross section
under the transformations $g_L\to -g_L$ and $g_R\to -g_R$ cannot be
lifted by this method. Such a degeneracy is therefore irreducible.
This is not a merely academic ambiguity, as it bears on the validity
of the gauge theory description dictated by the Standard Model. In
order to estimate the future sensitivity we set the experimental
measurement to be exactly the SM prediction, and we consider only the
statistical error. With this experimental setup for NOSTOS we obtain
the tilted region shown in Fig.~\ref{fig:global}.
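The sign degeneracy can be seen directly in the cross section, where only $g_L^2$, $g_R^2$ and the product $g_Lg_R$ enter. A minimal numerical check in plain Python, using the standard tree-level shape of $d\sigma/dT$ for $\nu_e e$ elastic scattering (the sample values $g_L=0.73$, $g_R=0.23$ and kinematics are illustrative; the overall constant $2G_F^2m_e/\pi$ is dropped):

```python
def dsigma_dT_shape(gl, gr, e_nu, t, m_e=0.511):
    """Shape of d(sigma)/dT for nu_e-e elastic scattering in terms of
    g_L, g_R (MeV units; overall constant 2 G_F^2 m_e / pi dropped)."""
    return (gl ** 2
            + gr ** 2 * (1.0 - t / e_nu) ** 2
            - gl * gr * m_e * t / e_nu ** 2)

gl, gr, e_nu, t = 0.73, 0.23, 10.0, 3.0          # illustrative [MeV]
flip_both = dsigma_dT_shape(-gl, -gr, e_nu, t)   # (gL,gR) -> (-gL,-gR)
flip_one = dsigma_dT_shape(gl, -gr, e_nu, t)     # only gR flipped
```

Flipping both signs leaves the cross section unchanged at every energy, while flipping only one does not, which is why only the simultaneous sign flip survives as an irreducible ambiguity.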
Assuming the validity of the Standard Model, given by
Eqs.~(\ref{gLgR}), our results can also be presented directly in terms
of the weak mixing angle. In this case, the combined analysis of the
existing (anti)-neutrino-electron scattering experiments gives
$\sin^2\theta_W=0.27\pm 0.03$. The corresponding minimum for the
$\chi^2$ function was $\chi^2_{\rm min}= 0.89.$
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth,angle=-90]{sin.ps}
\caption{
$\Delta \chi^2$ for $\sin^2 \theta_W$ from $\nu_e e$ or
$\bar{\nu}_e e$ scattering. The contribution of each
experiment to the $\Delta \chi^2$ is also shown.
}
\label{fig:sin}
\end{figure}
The various contributions to $\Delta \chi^2$ from different individual
experiments are indicated in Fig.~\ref{fig:sin}. Note that the
present fit gives a central value higher than the world
average~\cite{Eidelman:2004wy}, though with a larger error, owing to
the small statistics of these scattering experiments relative to
collider experiments. Nevertheless, we find this to be interesting as
an independent and clean probe of the Standard Model.
\section{Non-standard interactions in $\nu_e e$ and $\bar{\nu}_e e$ scattering}
\label{sec:NSI}
Solar neutrino data are robust with respect to possible modifications
in solar physics involving various types of magnetic fields both in
the convective zone~\cite{Miranda:2004nz} as well as radiative
zone~\cite{Burgess:2003fj,Burgess:2003su}. If present, non-standard
effects are expected to be sub--leading insofar as providing an
explanation of the existing data~\cite{pakvasa:2003zv}. However, even
taking into account the crucial data from reactor experiments, the
current accepted interpretation of solar neutrino data is not yet
robust when neutrinos are endowed with non-standard
interactions~\cite{Miranda:2004nb}. In fact it has been shown that the
presence of NSI brings in an ambiguous determination of the solar
neutrino oscillation parameters, with a new ``dark side'' solution
(with $\sin^2\theta_{sol}\simeq 0.7$~\cite{Miranda:2004nb}),
essentially degenerate with the conventional one. Similarly, despite
the good description provided by oscillations of contained and upgoing
events which leads to limits on the strength of the NSI strength in a
two--neutrino scenario~\cite{fornengo:2001pm}, atmospheric neutrino
data are still consistent with sizable values of the NSI parameters
when three neutrinos are considered~\cite{Friedland:2004ah}.
Here we focus on the case of terrestrial experiments involving
electron--type neutrinos and anti-neutrinos.
\subsection{Cross section}
\label{sec:cross-section}
A model independent way of introducing such non standard interactions
is via the effective four fermion Lagrangian \cite{Berezhiani:2001rs}
\begin{equation}
-{\cal L}^{eff}_{\rm NSI} =
2\sqrt{2}\,G_F\,\varepsilon_{\alpha \beta}^{fP}\, (\bar{\nu}_\alpha \gamma_\rho L \nu_\beta)
( \bar {f} \gamma^\rho P f ) \label{lagrangian_nsi}
\end{equation}
where $f$ is a first generation SM fermion ($e,$ $u$ or $d$) and $P=L$ or
$R$ denotes a chiral projector. With the Lagrangian (\ref{lagrangian_nsi})
added to the Standard Model Lagrangian one can compute the
differential cross section for the process $\nu_e e\to \nu_\alpha e$ as
\begin{eqnarray}\label{cross-section}
\lefteqn{{d\sigma(E_{\nu}, T) \over dT}= {2 G_F^2 M_e \over \pi} [ (\tilde
g_L^2+\sum_{\alpha \neq e}
|\epsilon_{\alpha e}^{e L}|^2)+{} } \nonumber\\ & & {}+
(\tilde g_R^2+\sum_{\alpha \neq e}
|\epsilon_{\alpha e}^{e R}|^2)\left(1-{T \over E_{\nu}}\right)^2-
(\tilde g_L \tilde g_R+ \sum_{\alpha \neq e}|\epsilon_{\alpha e}^{e L}||
\epsilon_{\alpha e}^{e R}|)m_e {T \over E^2_{\nu}}]
\end{eqnarray}
with $\tilde g_L=g_L+\epsilon_{e e}^{e L}$ and $\tilde
g_R=g_R+\epsilon_{e e}^{e R}$. This equation contains six NSI
parameters: two of them correspond to non-universal (NU) NSI,
$\epsilon_{ee}^{e L}$ and $\epsilon_{ee}^{e R}$, and four to flavor
changing (FC) NSI, $\epsilon_{e\mu}^{e L,R}$ and
$\epsilon_{e\tau}^{e L,R}$. In view of the stringent (though
indirect) constraints on the FC parameters, $|\epsilon_{e\mu}^{e
L,R}|<7.7\times10^{-4}$~\cite{Davidson:2003ha}, we will, for
simplicity, neglect FC NSI involving muon neutrinos. This way we are
left with the two NU NSI parameters and the two FC parameters
$\epsilon_{e\tau}^{e L,R}$.
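The cross section (\ref{cross-section}) is straightforward to evaluate numerically. The sketch below (plain Python; overall constant $2G_F^2M_e/\pi$ dropped; real illustrative values are used for the couplings, which enter the FC terms only through their moduli) checks that the standard shape is recovered when all NSI couplings vanish:

```python
def dsigma_dT_nsi(gl, gr, eps_ee_l, eps_ee_r, eps_etau_l, eps_etau_r,
                  e_nu, t, m_e=0.511):
    """Shape of d(sigma)/dT with NU and e-tau FC NSI couplings,
    Eq. (cross-section); constant 2 G_F^2 M_e / pi omitted."""
    gl_t = gl + eps_ee_l    # tilde g_L
    gr_t = gr + eps_ee_r    # tilde g_R
    return ((gl_t ** 2 + eps_etau_l ** 2)
            + (gr_t ** 2 + eps_etau_r ** 2) * (1.0 - t / e_nu) ** 2
            - (gl_t * gr_t + abs(eps_etau_l) * abs(eps_etau_r))
            * m_e * t / e_nu ** 2)

gl, gr, e_nu, t = 0.73, 0.23, 10.0, 3.0          # illustrative [MeV]
sm_shape = (gl ** 2 + gr ** 2 * (1.0 - t / e_nu) ** 2
            - gl * gr * 0.511 * t / e_nu ** 2)
nsi_off = dsigma_dT_nsi(gl, gr, 0.0, 0.0, 0.0, 0.0, e_nu, t)
```

Note that the NU couplings shift $g_L$ and $g_R$ additively, while the FC couplings enter only quadratically, which is why the latter are constrained in absolute value.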
The agreement between $\nu_e e$ scattering experiments and the
Standard Model predictions had been previously studied in
Ref.~\cite{Berezhiani:2001rs,Berezhiani:2001rt,Davidson:2003ha} in
order to place restrictions on the magnitude of non-standard
interactions.
However, existing analyses either restricted the variation of the
parameters, which were considered only
one--at--a--time~\cite{Davidson:2003ha}, or considered the
combination of two NSI parameters (the non-universal couplings
$\epsilon_{e e}^{e L}$ and $\epsilon_{e e}^{e R}$) but using only two
experiments~\cite{Berezhiani:2001rs}.
Here we revisit this question generalizing the conditions under which
these constraints have been derived and, as we have already mentioned,
updating the study through the inclusion of more recent data, such as
the data from the MUNU experiment~\cite{Daraktchieva:2003dr}. Also for
completeness, we will consider the results from the Rovno
reactor~\cite{Derbin:1993wy}. Moreover, the results from the Irvine
experiment will be analyzed considering the two energy bins that were
reported in the original article.
Although the constraints are expected to be weaker in our case, they
will be more robust than the ones obtained when the parameters
are taken only one--at--a--time in the analysis.
However, as will be clear at the end of this section, by taking full
advantage of the combination of neutrino and anti-neutrino data we are
able to obtain more stringent bounds on ``right-handed'' NSI
parameters than previously.
\subsection{NSI Analysis}
\label{sec:nsi-analysis}
First we present the results for the case of non-universal NSI
($\epsilon^{eL}_{ee}$, $\epsilon^{eR}_{ee}$ ), with flavour changing
parameters set to zero. In Fig. (\ref{fig:NU-NSI-2par}) we show the
allowed regions at 90, 95 and 99\% C.L. ($\Delta \chi^2 = 4.61$,
$5.99$, $9.21$). The minimum $\chi^2$ was 0.52.
One can see that the determination of these parameters is improved
with respect to the current results, although a twofold ambiguity in
$\epsilon^{eL}_{ee}$ remains.
This follows from the discussion given in section \ref{sec:SM}, where
we stressed that the intersection from the neutrino and anti-neutrino
ellipses (see Fig. \ref{fig:global}) does not allow for a unique
discrimination of the coupling constant values. It is here that future
low energy experiments have a chance of improving their determination.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth,angle=-90]{non-universal.ps}
\caption{
Allowed regions at 90, 95 and 99\% C.L. for
$\varepsilon^{eL}_{ee}$ and $\varepsilon^{eR}_{ee}$ obtained by a global fit
to neutrino and anti-neutrino electron scattering. The flavor changing
NSI parameters were set to zero.
}
\label{fig:NU-NSI-2par}
\end{figure}
The same analysis can be performed for the case where we allow only
flavor changing NSI parameters ($\epsilon^{eL}_{e\tau}$,
$\epsilon^{eR}_{e\tau}$ ), or for the general case when we take into
account all four parameters simultaneously. The results of this
analysis are summarized in table \ref{table:results}. The left column
collects previously reported constraints~\cite{Davidson:2003ha},
determined under the assumption that only one NSI parameter was
allowed to take on nonzero values. In the second column, for
comparison, we present the result of our fit for the same case of a
one-parameter analysis. The third column gives our result for a
two-parameter analysis, where only NU or FC parameters are non-zero;
therefore the NU region corresponds to the one shown in
Fig.~\ref{fig:NU-NSI-2par}. Finally, the fourth column shows a more
general case in which, of the four parameters, we take a projection
onto two of them (either NU or FC), allowing the other two to take on
non-zero values. In this case, for 90\% C.L. we again have to
consider $\Delta\chi^2=4.61$, but the regions are wider, as can be seen
from the table. The minimum $\chi^2$ for this analysis was 0.49.
One can see that the constraints for the case when only one parameter
is considered are similar to the results previously
reported~\cite{Davidson:2003ha}, with the exception of
$\epsilon_{e\tau}^{eR}$ and $\epsilon_{ee}^{eR}$ where ours are clearly
better. This is natural to expect and follows from the fact that we
are combining the LSND neutrino electron scattering data with the
anti-neutrino electron scattering data. This allows us to obtain four
different regions for the left and right couplings as can also be seen
from Fig.~(\ref{fig:global}).
It is important to note, however, that when the four parameters are
taken as freely varying, our constraints are weaker than the existing
ones for the case of the ``left-handed'' couplings $\epsilon_{e\tau}^{eL}$ and
$\epsilon_{ee}^{eL}$, as expected (in fact they could be as large as order
unity). In contrast, for the ``right-handed'' NSI
couplings, our constraints are better than the previous limits. The
explanation of this apparent puzzle is that, in contrast to
previous work, we combine neutrino and anti-neutrino data. As we have
already seen, this has a great impact in constraining the
``right-handed'' NSI parameters.
\begin{table}[!t]
\begin{tabular}{|c|c|c|c|c|}
\hline {\rule[-3mm]{0mm}{8mm} } & Previous Limits & One parameter &
Two Parameters & All Parameters
\\ \hline \hline
$\epsilon_{ee}^{eL}$ &$-0.07 < \epsilon_{ee}^{eL} < 0.11 $
& $-0.05 < \epsilon_{ee}^{eL} < 0.12 $
& $-0.13< \epsilon_{ee}^{eL} < 0.12$
& $ -1.58 < \epsilon_{ee}^{eL} < 0.12$ \\ [.1cm]
\hhline{|-|-|-|||}
$\epsilon_{ee}^{eR}$ & $-1.0 < \epsilon_{ee}^{eR} < 0.5 $
& $-0.04 < \epsilon_{ee}^{eR} < 0.14 $
& $-0.07< \epsilon_{ee}^{eR} < 0.15 $
& $-0.61 < \epsilon_{ee}^{eR} < 0.15$ \\ [.1cm]
\hline
$\epsilon_{e\tau}^{eL}$ & $ |\epsilon_{e\tau}^{eL}| < 0.4 $
& $ |\epsilon_{e\tau}^{eL}| < 0.44 $
&$ |\epsilon_{e\tau}^{eL}| < 0.43$
& $|\epsilon_{e\tau}^{eL}| < 0.85$ \\ [.1cm]
\hhline{|-|-|-|||}
$\epsilon_{e\tau}^{eR}$ &$ |\epsilon_{e\tau}^{eR}| < 0.7 $
& $ |\epsilon_{e\tau}^{eR}| < 0.27 $
& $ |\epsilon_{e\tau}^{eR}| < 0.31 $
& $ |\epsilon_{e\tau}^{eR}| < 0.38$ \\ [.1cm]
\hline
\end{tabular}
\caption{Constraints on NSI parameters at 90\% C.L. In the first
column we show the previous constraints obtained
in~\cite{Davidson:2003ha}, while in the second we show the
corresponding results found in the present analysis. The last two
columns show the case in which two and four parameters are allowed to
vary simultaneously (see the text for explanation).}
\label{table:results}
\end{table}
\section{Summary and prospects}
\label{sec:res-fut}
We have presented a global analysis of non-standard neutrino
interactions in electron (anti)-neutrino scattering off electrons,
including all current experiments, such as the most recent MUNU
measurement from reactor neutrinos. We have discussed the resulting
constraints both in the context of the Standard Model as well as
extensions where non-standard neutrino interactions are present. We
obtained constraints on non-universal and flavor changing NSI and
compared our bounds with those obtained in previous analyses.
We find that substantial room for improvement is expected from
$\nu_e e$ or $\bar{\nu}_e e$ low-energy scattering experiments.
There are several proposals of this type, either using solar
neutrinos, such as BOREXINO~\cite{Alimonti:2000xc}, or experiments
using artificial neutrino sources, such as \cite{neganov:2001bn}, that
will be helpful in constraining NSI parameters as well as in probing
other types of new physics (see for
example~\cite{barabanov:1998bj,miranda:1998vs}). From the point of
view of pinning down the interactions of the $\nu_e$ and $\bar{\nu}_e$,
low-energy scattering experiments offer an alternative frontier that
complements information that comes from higher
energies~\cite{Chooz,Gouvea}.
In summary, cross section measurements by themselves, at a given
energy, lead to a degeneracy in the coupling constants and, therefore,
in the determination of the NSI parameters. This degeneracy can be
partially removed by considering both neutrino and anti-neutrino
scattering off electrons. Further improvements require low energy
neutrino experiments.
\acknowledgments We thank Nicolao Fornengo for reading the manuscript.
This work has been supported by Spanish grant BFM2002-00345,
Conacyt-M\'exico, and by the EC RTN network MRTN-CT-2004-503369.
C. A. M. is supported by AlBan Scholarship
no. E04D044701BR. J. B. would like to thank IFIC/CSIC for the kind
hospitality during the visit where part of this work was done.
\section{Introduction}
We consider the initial value problem
\[(P) \left \{ \begin{array}{ll}
i\,u_{t} + \omega \,u_{xx} + i\,\beta \,u_{xxx} + |u|^{2}\,u=0 &
\quad x,\,t\in \mathbb{R}\\
u(x,\,0) = u_{0}(x) &
\end{array}
\right. \] where $\omega ,\,\beta \in \mathbb{R},$ $\beta \neq 0 $
and $u=u(x,t)$ is a complex valued function. The above equation is a
particular case of the equation
\[(Q) \left \{ \begin{array}{ll}
i\,u_{t} + \omega \,u_{xx} + i\,\beta \,u_{xxx} + \gamma \,|u|^{2}\,u +
i\,\delta \,|u|^{2}\,u_{x} + i\,\epsilon \,u^{2}\,\overline{u}_{x}=0 &
\quad x,\,t\in \mathbb{R}\\
u(x,\,0) = u_{0}(x) &
\end{array}
\right. \] where $\omega ,\,\beta ,\,\gamma ,\,\delta $ are real
numbers with $\beta \neq 0.$ This equation was first proposed by A.
Hasegawa and Y. Kodama \cite{ha1} as a model for the propagation of
a signal in an optic fiber (see also \cite{ko1}). The equation $(Q)$
can be reduced to other well known equations. For instance, setting
$\omega =1,$ $\beta = \delta =\epsilon =0$ in $(Q)$ we have the
semilinear Schr\"{o}dinger equation, i. e.,
\begin{eqnarray*}
i\,u_{t} + u_{xx} + \gamma \,|u|^{2}\,u =0.\qquad (Q_{1})
\end{eqnarray*}
If we let $\beta = \gamma =0$ and $\omega =1$ in $(Q),$ we obtain
the derivative nonlinear Schr\"{o}dinger equation
\begin{eqnarray*}
i\,u_{t} + u_{xx} + i\,\delta \,|u|^{2}\,u_{x} + i\,\epsilon
\,u^{2}\,\overline{u}_{x}=0.\qquad (Q_{2})
\end{eqnarray*}
Letting $\omega = \gamma = \epsilon =0$ in $(Q),$ the equation that arises is the
complex modified Korteweg-de Vries equation,
\begin{eqnarray*}
i\,u_{t} + i\,\beta \,u_{xxx} + i\,\delta \,|u|^{2}\,u_{x} =0.\qquad
(Q_{3})
\end{eqnarray*}
The initial value problem for the equations $(Q_{1}),$ $(Q_{2})$ and
$(Q_{3})$ has been extensively studied in the last few years. See,
for instance, \cite{bi1,bi2,bo1,ca1,ca2,cr1,cr2,ka1,ke1,sa1,sj1} and
references therein. In 1992, C. Laurey \cite{la1} considered the
equation $(Q)$ and proved local well-posedness of the initial value
problem associated for data in $H^{s}(\mathbb{R}),$ $s>3/4,$ and
global well-posedness in $H^{s}(\mathbb{R}),$ $s\geq 1.$ In 1997, G.
Staffilani \cite{st1} for $(Q)$ established local well-posedness for
data in $H^{s}(\mathbb{R}),$ $s\geq 1/4$ improving Laurey's result.
A similar result was given in \cite{ca1,ca2}
with $\omega(t),$ $\beta(t)$ real functions.\\
Our aim in this paper is to study gain in regularity for the
equation $(P).$ Specifically, we prove conditions on $(P)$ under which
initial data $u_{0}$ possessing sufficient decay at infinity and a
minimal amount of regularity lead to a unique solution $u(t)\in
C^{\infty}(\mathbb{R})$ for $0<t<T,$ where $T$ is the existence time
of the solution. We do not consider the full equation $(Q)$ because
of the technique used here: as we shall see, the last two terms in
$(Q)$ are not prominent in the main inequality; indeed, they only
appear in the last two terms of the main inequality.\\
In 1986, N. Hayashi {\it et al.} \cite{ha1} showed that for the
nonlinear Schr\"{o}dinger equation (NLS): $i\,u_{t} + u_{xx} =
\lambda \,|u|^{p - 1}\,u,$ $(x,\,t)\in \mathbb{R} \times \mathbb{R}$
with initial condition $u(x,\,0)=u_{0}(x),$ $x\in \mathbb{R}$ and a
certain assumption on $\lambda $ and $p,$ all solutions of finite
energy are smooth for $t\neq 0$ provided the initial functions in
$H^{1}(\mathbb{R})$(or on $L^{2}(\mathbb{R})$) decay sufficiently
fast as $|x|\rightarrow \infty .$ The main tool is the operator $J$
defined by
$Ju=e^{i\,x^{2}/4\,t}\,(2\,i\,t)\,\partial_{x}(e^{-\,i\,x^{2}/4\,t}\,u)=
(x + 2\,i\,t\,\partial_{x})u$ which has the remarkable property that
it commutes with the operator $L$ defined by $L=(i\,\partial_{t} +
\partial_{x}^{2}),$ namely
$LJ - JL =[L,\,J]=0.$\\
For the Korteweg-de Vries type equation (KdV), J. C. Saut and R.
Temam \cite{sa1} remarked that a solution $u$ cannot gain or lose
regularity. They showed that if $u(x,\,0)=u_{0}(x)\in
H^{s}(\mathbb{R})$ for $s\geq 2,$ then $u(\,\cdot\,,\,t)\in
H^{s}(\mathbb{R})$ for all $t>0.$ For the KdV equation on the line,
Kato \cite{ka1} motivated by work of Cohen \cite{co1} showed that if
$u(x,\,0)=u_{0}(x)\in L_{b}^{2}\equiv H^{2}(\mathbb{R})\cap
L^{2}$($e^{bx}\,dx$)($b>0$) then the solution $u(x,\,t)$ of the KdV
equation becomes $C^{\infty}$ for all $t>0.$ A main ingredient in
the proof was the fact that formally the semi-group
$S(t)=e^{-\,t\,\partial_{x}^{3}}$ in $L_{b}^{2}(\mathbb{R})$ is
equivalent to $S_{b}(t)= e^{-\,t\,(\partial_{x} - b)^{3}}$ in
$L^{2}(\mathbb{R})$ when $t>0.$ One would be inclined to believe
that this was a special property of the KdV equation. However, this
is not the case. The effect is due to the dispersive nature of the
linear part of the equation. Kruzkov and Faminskii \cite{kr1} proved
that for $u(x,\,0)=u_{0}(x)\in L^{2}(\mathbb{R})$ such that
$x^{\alpha}\,u_{0}(x)\in L^{2}((0,\,+\infty)),$ the weak solution of
the KdV equation has $l$ continuous space derivatives for all $t>0$
if $l<2\,\alpha .$ The proof of this result is based on the
asymptotic behavior of the Airy function and its derivatives, and on
the smoothing effect of the KdV equation which was found in
\cite{ka1,kr1}. While the proof of Kato appears to depend on special
a priori estimates, some of this mystery has been solved by the
result of local gain of finite regularity for various others linear
and nonlinear dispersive equations due to Ginibre and Velo
\cite{gi1} and others.
However, all of them require growth conditions on the nonlinear term. \\
In 1992, W. Craig, T. Kappeler and W. Strauss \cite{cr1,cr2} proved
for the fully nonlinear KdV equation $u_{t} + $
$f(u_{xxx},\,u_{xx},\,u_{x},\,u,\,x,\,t)=0,$ $x\in \mathbb{R},$
$t>0$ and certain additional assumption over $f$ that $C^{\infty}$
solutions $u(x,\,t)$ are obtained for all $t>0$ if the initial data
$u_{0}(x)$ decays faster than polynomially on $\mathbb{R}^{+}=\{x
\in \mathbb{R}:\;x>0\}$ and has certain initial Sobolev regularity.
Following this idea, H. Cai \cite{ca0} studied the nonlinear
equation of KdV-type of the form $u_{t} + u_{xxx} +
a(x,\,t)\,f(u_{xx},\,u_{x},\,u,\,x,\,t)=0,$ where $a(x,\,t)$ is
positive and bounded, obtaining the same conclusion. Subsequent
works were given by O. Vera \cite{ve1,ve2,ve3,ve4} for a nonlinear
dispersive evolution equation, a KdV-Burgers type equation and a
KdV-Kawahara type equation, respectively. In more than one spatial
dimension, J. Levandosky \cite{le1}, proved infinite gain in
regularity results for nonlinear third-order equations. While
\cite{cr1} included local smoothing results for some $m$th-order
dispersive equations in $n$ spatial dimensions, their results and
techniques are different from those presented by Levandosky. First,
they consider equations with only mild nonlinearities, whereas
Levandosky considers equations with very general nonlinearities,
including a fully nonlinear equation of the form
\begin{eqnarray*}
& & u_{t} + f(D^{3}u,\,D^{2}u,\,Du,\,u,\,x,\,t)=0,\\
& & u(x,\,y,\,0)=u_{0}(x,\,y).
\end{eqnarray*}
Secondly, they indicate a local gain of finite regularity, while
Levandosky proved complementary results showing the relationship
between the decay at infinity of the initial data and the amount
of gain in regularity. More specifically, conditions are proved
under which an equation of the form
\begin{eqnarray*}
& & u_{t} + a\,u_{xxx} + b\,u_{xxy} + c\,u_{xyy} + d\,u_{yyy} +
f(D^{2}u,\,Du,\,u,\,x,\,t)=0,\\
& & u(x,\,y,\,0)=u_{0}(x,\,y),
\end{eqnarray*}
where $a,$ $b,$ $c,$ $d$ are assumed constant, gains regularity.
Indeed, Levandosky proved sufficient conditions on this equation for
which a solution $u$ will experience an infinite gain in regularity.
Specifically, he proved conditions for which initial data
$u_{0}(x,\,y)$ possessing sufficient decay at infinity and a minimal
amount of regularity will lead to a unique solution $u(t)\in
C^{\infty}(\mathbb{R}^{2})$ for
$0<t<T^{*},$ where $T^{*}$ is the existence time of solutions. According
to the characteristics of equation $(P),$ and considering the
particular cases $(Q_{1})$ and $(Q_{2}),$ we could hope that
equation $(P)$ has a gain in regularity following the steps of
N. Hayashi {\it et al.} \cite{ha1} or W. Craig {\it et al.} \cite{cr1}.\\
In our problem, the initial idea is to apply the technique given by
N. Hayashi {\it et al.} \cite{ha1,ha2} to obtain gain in regularity.
Firstly, using straightforward calculus we can see that the equation
$(P)$ has conservation of the energy, i. e.,
$||u||_{L^{2}(\mathbb{R})}=||u_{0}||_{L^{2}(\mathbb{R})}.$ On the
other hand, we look for estimates for $u_{x}$ that will help to
obtain a priori estimates, basically to obtain estimates in
$L^{\infty}(\mathbb{R}).$ Indeed, differentiating in the
$x$-variable the equation $(P)$ we have
\begin{eqnarray}
\label{e100}i\,u_{x\,t} + i\,\beta \,u_{xxxx} + \omega \,u_{xxx} +
(|u|^{2})_{x}\,u + |u|^{2}\,u_{x}=0,
\end{eqnarray}
and multiplying \eqref{e100} by $\overline{u}_{x}$
\begin{eqnarray*}
& & i\,\overline{u}_{x}\,u_{x\,t} + i\,\beta
\,\overline{u}_{x}\,u_{xxxx} + \omega\,\overline{u}_{x}\,u_{xxx} +
\,(|u|^{2})_{x}\,u\,\overline{u}_{x} + |u|^{2}\,|u_{x}|^{2}=0\\
& & -\,i\,u_{x}\,\overline{u}_{x\,t} - i\,\beta
\,u_{x}\,\overline{u}_{xxxx} + \omega \,u_{x}\,\overline{u}_{xxx}
+ \,(|u|^{2})_{x}\,\overline{u}\,u_{x} + |u|^{2}\,|u_{x}|^{2}=0.\;
(\mbox{applying conjugate})
\end{eqnarray*}
Subtracting and integrating over $x\in \mathbb{R},$ we have
\begin{eqnarray*}
\lefteqn{i\,\frac{d}{dt}\int_{\mathbb{R}}|u_{x}|^{2}dx + i\,\beta
\int_{\mathbb{R}}\overline{u}_{x}\,u_{xxxx}dx + i\,\beta
\int_{\mathbb{R}}u_{x}\,\overline{u}_{xxxx}dx }\\
& & +\;2\,i\,\omega \,Im
\int_{\mathbb{R}}\overline{u}_{x}\,u_{xxx}dx + 2\,i\,Im
\int_{\mathbb{R}}(|u|^{2})_{x}\,u\,\overline{u}_{x}dx=0.
\end{eqnarray*}
Performing integration by parts and straightforward calculations we
obtain
\begin{eqnarray*}
\frac{d}{dt}\int_{\mathbb{R}}|u_{x}|^{2}dx + 2\,Im
\int_{\mathbb{R}}(|u|^{2})_{x}\,u\,\overline{u}_{x}dx=0\qquad
(E_{1})
\end{eqnarray*}
that is,
\begin{eqnarray*}
\frac{d}{dt}\,||u_{x}||_{L^{2}(\mathbb{R})}^{2} + 2\,Im
\int_{\mathbb{R}}u^{2}\,\overline{u}_{x}^{2}dx=0\qquad (E_{2})
\end{eqnarray*}
or integrating by parts the second term in $(E_{1})$ we obtain
\begin{eqnarray*}
\frac{d}{dt}\,||u_{x}||_{L^{2}(\mathbb{R})}^{2} - 2\,Im
\int_{\mathbb{R}}|u|^{2}\,u\,\overline{u}_{xx}dx=0.\qquad (E_{3})
\end{eqnarray*}
Thus it is not possible to obtain an estimate in $H^{1}(\mathbb{R}),$
because a second term with two derivatives appears. The reason
for seeking an estimate on the derivative is related to the Sobolev
embedding: in one spatial dimension we have the embedding
$H^{1}(\mathbb{R})\hookrightarrow L^{\infty}(\mathbb{R}).$ It
seems that the term $i\,\beta\,u_{xxx}$ is crucial. It makes the
two ``top'' terms look like the KdV equation; that is, $u_{t} + u_{xxx}
+ \ldots .$ Of course, the solution is complex, so that the
equation
is like two coupled real KdV equations. \\
This was our motivation to obtain gain in regularity using the idea
of W. Craig {\it et al.} \cite{cr1}. We prove conditions on $(P)$
for which initial data $u_{0}(x)$ possessing sufficient decay at
infinity and a minimal amount of regularity will lead to a unique
solution $u(t)\in C^{\infty}(\mathbb{R})$ for $t>0.$ We use a
technique of nonlinear multipliers, generalizing Kato's original
method, together with ideas of Craig and Goodman \cite{cr0}. All the
physically significant dispersive equations and systems known to us
have linear parts displaying this local smoothing property. To
mention only a few, the KdV, Benjamin-Ono, intermediate long wave,
various Boussinesq, and Schr\"{o}dinger equation are included. This
paper is organized as follows: Section 2 outlines briefly the
notation and terminology to be used subsequently. In section 3 we
prove the main inequality. In section 4 we prove an important a
priori estimate. In section 5 we prove a basic-local-in-time
existence and uniqueness theorem. In section 6 we prove a basic
global existence theorem. In section 7 we develop a series of
estimates for solutions of equations $(P)$ in weighted Sobolev
norms. These provide a starting point for the a priori gain of
regularity. In section 8 we prove the
following theorem:\\
{\bf Theorem 1.1}(Main Theorem). {\it Let $|\omega |<3\,\beta ,$
$T>0$ and $u(x,\,t)$ be a solution of $(P)$ in the region
$\mathbb{R} \times [0,\,T]$ such that}
\begin{eqnarray}
\label{e101}u\in L^{\infty}([0,\,T]:\,H^{3}(W_{0\;L\;0}))
\end{eqnarray}
{\it for some $L\geq 2.$ Then }
\begin{eqnarray}
\label{e102}u\in L^{\infty}([0,\,T]:\,H^{3 + l}(W_{\sigma,\,L -
l,\,l}))\cap L^{2}([0,\,T]:\,H^{4 + l}(W_{\sigma,\,L - l - 1,\,l}))
\end{eqnarray}
{\it for all $0\leq l\leq L - 1$ and all $\sigma >0.$}\\
\\
{\it Remark.} We consider the Gauge transformation
\begin{eqnarray}
\label{e104}u(x,\,t) & = & e^{i\,d_{2}\,x + i\,d_{3}\,t}\,v\left(x -
d_{1}\,t,\,t\right)\equiv e^{\theta}\,v\left(\eta,\,\xi\right)
\end{eqnarray}
where $\;\theta = i\,d_{2}\,x + i\,d_{3}\,t,\;$ $\eta=x - d_{1}\,t$
and $\xi =t.$ Then
\begin{eqnarray*}
& & u_{t} = i\,d_{3}\,e^{\theta}\,v - d_{1}\,e^{\theta}\,v_{\eta} +
e^{\theta}\,v_{\xi}\quad :\quad
u_{x} = i\,d_{2}\,e^{\theta}\,v + e^{\theta}\,v_{\eta}\\
& & u_{xx} = -\;d_{2}^{2}\,e^{\theta}\,v + 2\,i\,d_{2}
\,e^{\theta}\,v_{\eta} + e^{\theta}\,v_{\eta \,\eta}\; :\; u_{xxx} =
-\;i\,d_{2}^{3}\,e^{\theta}\,v - 3\,d_{2}^{2}\,e^{\theta}\,v_{\eta}
+ 3\,i\,d_{2} \,e^{\theta}\,v_{\eta\eta} +
e^{\theta}\,v_{\eta\eta\eta}.
\end{eqnarray*}
Replacing in $(Q)$ we have
\begin{eqnarray*}
& & -\,d_{3}\,e^{\theta}\,v - i\,d_{1}\,e^{\theta}\,v_{\eta} +
i\,e^{\theta}\,v_{\xi} - \omega\,d_{2}^{2}\,e^{\theta}\,v +
2\,i\,\omega\,d_{2}\,e^{\theta}\,v_{\eta} +
\omega\,e^{\theta}\,v_{\eta\eta} \\
& & +\;\beta\,d_{2}^{3}\,e^{\theta}\,v -
3\,i\,\beta\,d_{2}^{2}\,e^{\theta}\,v_{\eta} -
3\,\beta\,d_{2}\,e^{\theta}\,v_{\eta\eta} +
i\,\beta\,e^{\theta}\,v_{\eta\eta\eta} + \gamma\,|v|^{2}\,e^{\theta}\,v\\
& & -\;\delta\,d_{2}\,|v|^{2}\,e^{\theta}\,v +
i\,\delta\,|v|^{2}\,e^{\theta}\,v_{\eta} +
\epsilon\,d_{2}\,e^{\theta}\,v^{2}\overline{v} +
i\,\epsilon\,e^{\theta}\,v^{2}\,v_{\eta}=0
\end{eqnarray*}
so that
\begin{eqnarray*}
& & i\,v_{\xi} + (\omega - 3\,\beta\,d_{2})\,v_{\eta\eta} +
i\,\beta\,v_{\eta\eta\eta} + (2\,i\,\omega\,d_{2} -
3\,i\,\beta\,d_{2}^{2} - i\,d_{1} + i\,\delta\,|v|^{2} +
i\,\epsilon\,v^{2})\,v_{\eta}\\
& & +\;(\beta\,d_{2}^{3} - \omega\,d_{2}^{2} - d_{3} + \gamma\,|v|^{2}
- \delta\,d_{2}\,|v|^{2})\,v + \epsilon\,d_{2}\,v^{2}\overline{v}=0
\end{eqnarray*}
then
\begin{eqnarray}
\label{e105}d_{1}=\frac{\omega^{2}}{3\,\beta}\quad :\quad
d_{2}=\frac{\omega}{3\,\beta}\quad :\quad
d_{3}=\frac{-\,2\omega^{3}}{27\,\beta^{2}}.
\end{eqnarray}
This way in $(Q)$ we obtain
\begin{eqnarray*}
i\,v_{\xi} + i\,\beta\,v_{\eta\eta\eta} + i\,(\delta\,|v|^{2} +
\epsilon\,v^{2})\,v_{\eta} + \left(\gamma -
\frac{\omega\,\delta}{3\,\beta}\right)|v|^{2}v +
\frac{\omega\,\epsilon}{3\,\beta}\,v^{2}\overline{v}=0,
\end{eqnarray*}
but $v^{2}\,\overline{v}=v\,v\,\overline{v}=|v|^{2}v,$ then using
the Gauge transformation we have the equivalent problem to $(Q)$
\[(\mathbb{Q}) \left \{ \begin{array}{ll}
i\,v_{\xi} + i\,\beta\,v_{\eta\eta\eta} +
i\,\delta\,|v|^{2}\,v_{\eta} + i\,\epsilon\,v^{2}\,v_{\eta} +
\left(\gamma + \frac{\omega\,\epsilon}{3\,\beta} -
\frac{\omega\,\delta}{3\,\beta}\right)|v|^{2}v =0 &
\quad \eta,\,\xi\in \mathbb{R}\\
v(\eta,\,0) = e^{-\,i\,\frac{\omega}{3\,\beta}\,\eta}u_{0}(\eta). &
\end{array}
\right. \] Here, rescaling the equation, we take $\beta =1.$
\[(\widetilde{\mathbb{Q}}) \left \{ \begin{array}{ll}
i\,v_{t} + i\,v_{xxx} + i\,\delta\,|v|^{2}\,v_{x} +
i\,\epsilon\,v^{2}\,v_{x} + \left(\gamma +
\frac{\omega\,\epsilon}{3} - \frac{\omega\,\delta}{3}\right)|v|^{2}v
=0 &
\quad x,\,t\in \mathbb{R}\\
v(x,\,0) = e^{-\,i\,\frac{\omega}{3}\,x}u_{0}(x). &
\end{array}
\right. \] The above Gauge transformation is a bicontinuous map
from $L^{p}([0,\,T]:\,H^{s}(W_{\sigma\,i\,k}))$ to itself for any
$0<T<+\infty$ and for the values of $p,$ $s,$ $\sigma,$ $i,$ $k$ used
in this paper. With this, the assumption $|\omega|<3\,\beta$ imposed
in Theorem 1.1 can be removed.
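The values (\ref{e105}) can be verified numerically: they must cancel the $v_{\eta\eta}$ coefficient and the field-independent parts of the $v_{\eta}$ and $v$ coefficients. A quick check in plain Python (the nonzero test values for $\omega$ and $\beta$ are arbitrary):

```python
omega, beta = 1.7, 0.6   # arbitrary nonzero test values

d1 = omega ** 2 / (3.0 * beta)
d2 = omega / (3.0 * beta)
d3 = -2.0 * omega ** 3 / (27.0 * beta ** 2)

# coefficient of v_eta_eta
c_vxx = omega - 3.0 * beta * d2
# field-independent part of the v_eta coefficient
c_vx = 2.0 * omega * d2 - 3.0 * beta * d2 ** 2 - d1
# field-independent part of the v coefficient
c_v = beta * d2 ** 3 - omega * d2 ** 2 - d3
```

All three residuals vanish (up to round-off), confirming the choice of $d_1,$ $d_2,$ $d_3.$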
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\setcounter{equation}{0}\section{Preliminaries}
We consider the initial value problem
\[(P)\left \{ \begin{array}{ll}
i\,u_{t} + \omega \,u_{xx} + i\,\beta \,u_{xxx} + |u|^{2}\,u=0, &
\quad x,\,t\in \mathbb{R} \\
u(x,\,0) = u_{0}(x) &
\end{array}
\right. \] where $\omega ,\,\beta \in \mathbb{R},$ $\beta \neq 0 $
and $u=u(x,\,t)$ is a
complex valued function.\\
\\
{\it Notation.} We write $\;\partial =\partial /\partial x,\;$
$\;\partial _{t}=\partial /\partial t\;$ and we abbreviate
$\;u_{j}=\partial ^{j}u.$\\
\\
{\it Definition 2.1.} A function $\xi = \xi (x,\,t)$ belongs to
the weight class $W_{\sigma \;i\; k}$ if it is a positive
$C^{\infty }$ function on $\mathbb{R}\times [0,\,T],$ $\partial
\xi
>0$ and there are constants $c_{j},$ $1\leq j\leq 5,$ such that
\begin{eqnarray}
\label{e201}& & 0<c_{1}\leq t^{-\,k}\,e^{-\,\sigma \,x}\,\xi
(x,\,t) \leq c_{2 }\qquad \forall
\;x<-1,\quad 0<t<T.\\
\label{e202}& & 0<c_{3}\leq t^{-\,k}\,x^{-\,i}\,\xi (x,\,t)\leq
c_{4}\qquad \forall
\;x>1,\quad 0<t<T.\\
\label{e203}& & \left(t\mid \partial_{t}\xi \mid + \mid
\partial^{j}\xi \mid \right)/ \xi \leq c_{5}\quad \forall
\;(x,\,t)\in \mathbb{R}\times [0,\,T],\;\forall\;j\in\mathbb{N}.
\end{eqnarray}
{\it Remark.} We shall always take $\sigma \geq 0,$ $i\geq 1$
and $k\geq 0.$\\
\\
{\it Example.} Let
\[ \xi (x)= \left \{\begin{array}{ll}
1 + e^{-1/x} & \mbox { for $\,x>0$ } \\
1 & \mbox { for $x\leq 0$ }
\end{array}
\right. \]
then $\xi \in W_{0\;i\;0}.$\\
\\
{\it Definition 2.2.} Let $N$ be a positive integer. By
$H^{N}(W_{\sigma \;i\; k})$ we denote the Sobolev space on
$\mathbb{R}$ with a weight; that is, with the norm
\begin{eqnarray*}
||v||_{H^{N}(W_{\sigma \;i\;k})}^{2}= \sum _{j=0}^{N}
\int_{\mathbb{R}}|
\partial ^{j}v(x)|^{2}\,\xi (x,\,t)\,dx<+\,\infty
\end{eqnarray*}
for any $\xi \in W_{\sigma \;i\;k}$ and $0<t<T.$
Even though the norm depends on the choice of $\xi,$ all such choices lead to equivalent norms. \\
\\
{\it Remark.} $H^{s}(W_{\sigma \;i\;k})\,$ depends on $t$ (because
$\xi =\xi (x,\,t)$).\\
\\
{\bf Lemma 2.1.} (See \cite{ca0}) For $\xi \in W_{\sigma \;i\;0}$
and $\sigma \geq 0,\, i\geq 0,$ there exists a constant $c>0$ such
that, for $u\in H^{1}(W_{\sigma \;i\;0}),$
\[\sup _{x\in \mathbb{R}}\,\xi \,|u|^{2}
\leq c\int _{\mathbb{R}}\left(\,|u|^{2} + |
\partial u|^{2}\,\right)\,\xi \,dx.\]
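As an informal numerical sanity check of Lemma 2.1 (not part of the proof), one can test the embedding for one admissible weight. The weight $\xi = 2+\tanh x$ and the Gaussian test function below are illustrative choices; the constant $c=2$ follows from the pointwise identity $\xi\,u^{2}(x)=\int_{-\infty}^{x}(\partial\xi\,u^{2}+2\,\xi\,u\,\partial u)\,dy$ together with $\partial\xi\leq\xi$ and $2\,|u\,\partial u|\leq|u|^{2}+|\partial u|^{2}.$

```python
import numpy as np

# Sanity check of Lemma 2.1: sup_x xi*|u|^2 <= c * int (|u|^2 + |u'|^2) xi dx
# for the illustrative weight xi = 2 + tanh(x), which has d(xi)/dx > 0 and
# d(xi)/dx <= xi, so the identity above gives c = 2.  u is a Gaussian.
x = np.linspace(-20.0, 20.0, 400001)
dx = x[1] - x[0]
xi = 2.0 + np.tanh(x)            # sample admissible weight
u = np.exp(-x**2)                # smooth, rapidly decreasing test function
du = np.gradient(u, dx)          # numerical derivative of u

lhs = np.max(xi * u**2)                            # sup_x xi |u|^2
rhs = 2.0 * np.sum((u**2 + du**2) * xi) * dx       # c = 2, Riemann sum
print(lhs, rhs, lhs <= rhs)
```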
{\bf Lemma 2.2} (The Gagliardo--Nirenberg inequality). Let $q,\,r$
be any real numbers satisfying $1\leq q,\,r\leq \infty $ and let
$j$ and $m$ be nonnegative integers such that $j\leq m.$ Then
\begin{eqnarray*}
||\partial^{j}u||_{L^{p}(\mathbb{R})}\leq
c\;||\partial^{m}u||_{L^{r}(\mathbb{R})}^{a}\;||u||_{L^{q}(\mathbb{R})}^{1
- a}
\end{eqnarray*}
where $\frac{1}{p}=j + a\,\left (\frac{1}{r} - m\right ) +
\frac{1 - a}{q}$ for all $a$ in the interval $\frac{j}{m}\leq a\leq 1,$ and
$c$ is a positive constant depending only on $m,$ $j,$ $q,$ $r$ and $a.$\\
\\
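For one admissible choice of exponents ($j=1,$ $m=2,$ $p=q=r=2,$ hence $a=1/2$) the inequality reads $||\partial u||_{L^{2}}\leq c\,||\partial^{2}u||_{L^{2}}^{1/2}\,||u||_{L^{2}}^{1/2},$ and in this particular case $c=1$ suffices (integrate by parts and apply Cauchy--Schwarz). A quick numerical check on a sample Gaussian, assuming NumPy:

```python
import numpy as np

# Gagliardo-Nirenberg check for j = 1, m = 2, p = q = r = 2, forcing a = 1/2:
#   ||u'||_2 <= ||u''||_2^{1/2} ||u||_2^{1/2}   (c = 1 works here).
x = np.linspace(-20.0, 20.0, 400001)
dx = x[1] - x[0]
u = np.exp(-x**2)                # sample test function
du = np.gradient(u, dx)
d2u = np.gradient(du, dx)

L2 = lambda f: np.sqrt(np.sum(f**2) * dx)   # Riemann-sum L^2 norm
lhs = L2(du)
rhs = np.sqrt(L2(d2u) * L2(u))              # a = 1/2 on both factors
print(lhs, rhs, lhs <= rhs)
```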
{\it Definition 2.3.} By $L^{2}([0,\,T]:\,H^{N}(W_{\sigma \;i\;k}))$
we denote the space of functions $v(x,\,t)$ with the norm ($N$ a
positive integer)
\begin{eqnarray*}
||v||_{L^{2}([0,\,T]:\,H^{N}(W_{\sigma \;i\;k}))}^{2}=
\int_{0}^{T}||v(x,\,t)||_{H^{N}(W_{\sigma
\;i\;k})}^{2}dt<+\,\infty
\end{eqnarray*}
{\it Remark.} The usual Sobolev space is $\,H^{N}(\mathbb{R}) =
H^{N}(W_{0\;0\;0})\,$
without a weight.\\
\\
{\it Remark.} We shall derive the a priori estimates assuming that
the solution is $C^{\infty},$ bounded as $x\rightarrow -\,\infty ,$
and rapidly decreasing as
$x\rightarrow +\,\infty ,$ together with all of its derivatives.\\
\\
Considering the above notation, the higher order nonlinear
Schr\"{o}dinger equation can be written as
\begin{eqnarray}
\label{e204}i\,u_{t} + i\,\beta\,u_{3} + \omega \,u_{2} +
|u|^{2}\,u=0,\quad x,\,t\in \mathbb{R}
\end{eqnarray}
where $\omega ,\,\beta \in \mathbb{R},$ $\beta \neq 0 $ and
$u=u(x,\,t)$ is a
complex valued function.\\
\\
Throughout this paper $c$ is a generic constant, not necessarily
the same at each occurrence (it may change from line to line), which
depends in an increasing way on the indicated quantities. In this
part, we only consider the case $t>0.$ The case $t<0$ can be
treated analogously.
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\setcounter{equation}{0}\section{Main Inequality} {\bf Lemma 3.1.}
{\it Let $|\omega |<3\;\beta .$ Let $u$ be a solution of
\eqref{e204} with enough Sobolev regularity (for instance, $u\in
H^{N}(\mathbb{R}),$ $N\geq \alpha + 3$), then}
\begin{eqnarray}
\label{e301}& & \partial_{t}\int_{\mathbb{R}}\xi
\,|u_{\alpha}|^{2}dx + \int_{\mathbb{R}}\eta \,|u_{\alpha +
1}|^{2}dx + \int_{\mathbb{R}}\theta \,|u_{\alpha}|^{2}dx +
\int_{\mathbb{R}}R_{\alpha}dx\leq 0
\end{eqnarray}
{\it where}
\begin{eqnarray*}
\eta & = & (3\,\beta - |\omega |)\,\partial \xi\qquad
\mbox{for}\qquad
|\omega |<3\;
\beta \\
\theta & = & -\;[\,\partial_{t}\xi + \beta \,\partial^{3}\xi +
|\omega|\,\partial \xi + c_{0}\,\xi \,] \quad \mbox{where}\quad
c_{0}=||u||_{L^{\infty}(\mathbb{R})}^{2}
\end{eqnarray*}
{\it and $R_{\alpha}=R_{\alpha}(|u_{\alpha }|,\,|u_{\alpha - 1}|,\,\ldots ).$}\\
\\
{\it Proof.} Differentiating \eqref{e204} $\alpha $-times (for
$\alpha \geq 0$) over $x\in \mathbb{R}$ leads to
\begin{eqnarray}
\label{e302}i\,u_{\alpha \,t} + i\,\beta \,u_{\alpha + 3} + \omega
\,u_{\alpha + 2} + (|u|^{2})_{\alpha }\,u + \sum_{m=1}^{\alpha -
1}{\alpha\choose m}\,(|u|^{2})_{\alpha - m}\,u_{m} +
|u|^{2}\,u_{\alpha } = 0.
\end{eqnarray}
Let $\xi = \xi(x,\,t),$ then multiplying \eqref{e302} by $\xi
\,\overline{u}_{\alpha } $
we have
\begin{eqnarray*}
\lefteqn{i\,\xi \,\overline{u}_{\alpha }\,u_{\alpha \,t} +
i\,\beta \,\xi \, \overline{u}_{\alpha }\,u_{\alpha + 3} + \omega
\,\xi \,\overline{u}_{\alpha }\,
u_{\alpha + 2} + (|u|^{2})_{\alpha }\,\xi \,u\,\overline{u}_{\alpha } } \\
& & +\sum_{m=1}^{\alpha - 1}{\alpha\choose m}\,(|u|^{2})_{\alpha -
m}\,
\xi \,u_{m}\,\overline{u}_{\alpha} + \xi \,|u|^{2}\,|u_{\alpha }|^{2} = 0 \\
\\
\lefteqn{-\,i\,\xi \,u_{\alpha }\,\overline{u}_{\alpha \,t} -
i\,\beta \,\xi \,u_{\alpha }\, \overline{u}_{\alpha + 3} + \omega
\,\xi \,u_{\alpha }\,\overline{u}_{\alpha + 2} +
(|u|^{2})_{\alpha }\,\xi \,\overline{u}\,u_{\alpha } } \\
& & +\sum_{m=1}^{\alpha - 1}{\alpha\choose m}\,(|u|^{2})_{\alpha -
m} \,\xi \,\overline{u}_{m}\,u_{\alpha} + \xi \,|u|^{2}\,|u_{\alpha
}|^{2} = 0 . \qquad (\mbox{applying conjugate})
\end{eqnarray*}
Subtracting and integrating over $x\in \mathbb{R}$ we have
\begin{eqnarray}
\lefteqn{i\,\partial_{t}\int_{\mathbb{R}}\xi \,|u_{\alpha }|^{2}dx
+ i\,\beta \int_{\mathbb{R}}\xi \,\overline{u}_{\alpha
}\,u_{\alpha + 3}dx + i\,\beta \int_{\mathbb{R}}\xi
\,u_{\alpha}\,\overline{u}_{\alpha + 3}dx
- i\int_{\mathbb{R}}\xi_{t}\,|u_{\alpha }|^{2}dx } \nonumber \\
& & +\;\omega \int_{\mathbb{R}}\xi
\,\overline{u}_{\alpha}\,u_{\alpha + 2}dx - \omega
\int_{\mathbb{R}}\xi \,u_{\alpha}\,\overline{u}_{\alpha + 2}dx +
2\,i\,Im\int_{\mathbb{R}}\xi \,(|u|^{2})_{\alpha
}\,u\,\overline{u}_{\alpha }dx \nonumber \\
\label{e303}& & +\;2\,i\sum_{m=1}^{\alpha - 1}{\alpha\choose
m}\;Im\int_{\mathbb{R}}\xi \,(|u|^{2})_{\alpha - m}
\,u_{m}\,\overline{u}_{\alpha}dx = 0.
\end{eqnarray}
We estimate the second term by integrating by parts
\begin{eqnarray*}
& & \int_{\mathbb{R}}\xi\,\overline{u}_{\alpha}\,u_{\alpha + 3}dx =
\int_{\mathbb{R}}\partial^{2}\xi \,\overline{u}_{\alpha}\,u_{\alpha
+ 1}dx + 2\int_{\mathbb{R}}\partial\xi \,|u_{\alpha + 1}|^{2}dx +
\int_{\mathbb{R}}\xi\,\overline{u}_{\alpha + 2}\,u_{\alpha + 1}dx.
\end{eqnarray*}
The other terms are calculated in a similar way. Hence, replacing in
\eqref{e303} and performing straightforward calculations we obtain
\begin{eqnarray*}
\lefteqn{i\,\partial_{t}\int_{\mathbb{R}}\xi \,|u_{\alpha }|^{2}dx +
i\,\beta
\int_{\mathbb{R}}\partial^{2}\xi\,\overline{u}_{\alpha}\,u_{\alpha +
1}dx + 2\,i\,\beta \int_{\mathbb{R}}\partial \xi\,|u_{\alpha +
1}|^{2}dx }\\
& & +\;i\,\beta \int_{\mathbb{R}}\xi \,\overline{u}_{\alpha +
2}\,u_{\alpha + 1}dx + i\,\beta \int_{\mathbb{R}}\partial^{2}\xi
\,u_{\alpha}\,\overline{u}_{\alpha + 1}dx
+ i\,\beta \int_{\mathbb{R}}\partial \xi\,|u_{\alpha + 1}|^{2}dx \\
& & -\;
i\,\beta \int_{\mathbb{R}}\xi\,u_{\alpha + 1}\,\overline{u}_{\alpha + 2}dx -
\omega \int_{\mathbb{R}}\partial \xi\,\overline{u}_{\alpha}\,u_{\alpha + 1}dx -
\omega \int_{\mathbb{R}}\xi \,|u_{\alpha + 1}|^{2}dx \\
& & +\;
\omega \int_{\mathbb{R}}\partial \xi\,u_{\alpha}\,\overline{u}_{\alpha + 1}dx
+ \omega \int_{\mathbb{R}}\xi \,|u_{\alpha + 1}|^{2}dx -
i\int_{\mathbb{R}}\partial_{t}\xi\,|u_{\alpha}|^{2}dx \\
& & +\;2\,i\,Im\int_{\mathbb{R}}\xi \,(|u|^{2})_{\alpha }\,
u\,\overline{u}_{\alpha }dx +
2\,i\sum_{m=1}^{\alpha - 1}{\alpha\choose m}Im\int_{\mathbb{R}}\xi \,
(|u|^{2})_{\alpha - m} \,u_{m}\,\overline{u}_{\alpha}dx = 0
\end{eqnarray*}
then
\begin{eqnarray*}
\lefteqn{\partial_{t}\int_{\mathbb{R}}\xi \,|u_{\alpha}|^{2}dx -
\beta \int_{\mathbb{R}}\partial^{3}\xi\,|u_{\alpha}|^{2}dx +
3\,\beta \int_{\mathbb{R}}\partial \xi\,|u_{\alpha + 1}|^{2}dx -
2\,\omega \,Im\int_{\mathbb{R}}\partial \xi\,
\overline{u}_{\alpha}\,u_{\alpha + 1}dx}\\
& & -\int_{\mathbb{R}}\partial_{t}\xi\,|u_{\alpha}|^{2}dx +
2\,Im\int_{\mathbb{R}}\xi\,(|u|^{2})_{\alpha
}\,u\,\overline{u}_{\alpha}dx + 2\sum_{m=1}^{\alpha -
1}{\alpha\choose m}Im\int_{\mathbb{R}}\xi \,(|u|^{2})_{\alpha - m}
\,u_{m}\,\overline{u}_{\alpha}dx = 0
\end{eqnarray*}
hence
\begin{eqnarray*}
\lefteqn{\partial_{t}\int_{\mathbb{R}}\xi\,|u_{\alpha}|^{2}dx -
\beta \int_{\mathbb{R}}\partial^{3}\xi\,|u_{\alpha}|^{2}dx +
3\,\beta \int_{\mathbb{R}}\partial \xi\,|u_{\alpha + 1}|^{2}dx +
2\,Im\int_{\mathbb{R}}(|u|^{2})_{\alpha}\,\xi
\,u\,\overline{u}_{\alpha}dx}\\
& & -\int_{\mathbb{R}}\partial_{t}\xi\,|u_{\alpha}|^{2}dx
+\,2\sum_{m=1}^{\alpha - 1}{\alpha\choose
m}Im\int_{\mathbb{R}}\xi\,(|u|^{2})_{\alpha - m}
\,u_{m}\,\overline{u}_{\alpha}dx =
2\,\omega\,Im\int_{\mathbb{R}}\partial \xi\,
\overline{u}_{\alpha}\,u_{\alpha + 1}dx \\
& \leq & |\omega |\int_{\mathbb{R}}\partial \xi\,|u_{\alpha}|^{2}dx
+ |\omega |\int_{\mathbb{R}}\partial \xi\,|u_{\alpha + 1}|^{2}dx
\end{eqnarray*}
therefore
\begin{eqnarray}
\lefteqn{\partial_{t}\int_{\mathbb{R}}\xi \,|u_{\alpha}|^{2}dx +
\int_{\mathbb{R}}[\,3\,\beta - |\omega |\,]\,\partial
\xi\;|u_{\alpha + 1}|^{2}dx - \int_{\mathbb{R}}[\,\partial_{t}\xi +
\beta \,\partial^{3}\xi + |\omega |\,\partial
\xi\,]\,|u_{\alpha}|^{2}dx }\nonumber \\
\label{e304}& & +\;2\,Im\int_{\mathbb{R}}(|u|^{2})_{\alpha}\,\xi
\,u\,\overline{u}_{\alpha}dx + 2\sum_{m=1}^{\alpha -
1}{\alpha\choose m}Im\int_{\mathbb{R}}\xi \,(|u|^{2})_{\alpha - m}
\,u_{m}\,\overline{u}_{\alpha}dx\leq 0.
\end{eqnarray}
But
\begin{eqnarray*}
(|u|^{2})_{\alpha} & = & (\,u\,\overline{u}\,)_{\alpha} =
\sum_{k=0}^{\alpha}{\alpha\choose k}u_{\alpha - k}\,
\overline{u}_{k} = \overline{u}\,u_{\alpha} + \sum_{k=1}^{\alpha -
1}{\alpha\choose k}u_{\alpha - k}\,\overline{u}_{k} +
u\,\overline{u}_{\alpha}
\end{eqnarray*}
then
\begin{eqnarray*}
(|u|^{2})_{\alpha}\,u\,\overline{u}_{\alpha} =
|u|^{2}|u_{\alpha}|^{2} + \sum_{k=1}^{\alpha - 1}{\alpha\choose
k}u_{\alpha - k}\, \overline{u}_{k}\,u\,\overline{u}_{\alpha} +
u^{2}\,\overline{u}_{\alpha}^{2}
\end{eqnarray*}
thus,
\begin{eqnarray}
\lefteqn{2\;Im\int_{\mathbb{R}}(\,|u|^{2})_{\alpha}\,\xi
\,u\,\overline{u}_{\alpha}dx = 2\sum_{k=1}^{\alpha -
1}{\alpha\choose k}Im\int_{\mathbb{R}}\xi\,u_{\alpha -
k}\,\overline{u}_{k}\,u\,\overline{u}_{\alpha}dx +
2\;Im\int_{\mathbb{R}}\xi \,
u^{2}\,\overline{u}_{\alpha}^{2}dx }\nonumber \\
& \leq & 2\sum_{k=1}^{\alpha - 1}{\alpha\choose
k}\int_{\mathbb{R}}\xi\,|u_{\alpha -
k}|\,|u_{k}|\,|u|\,|u_{\alpha}|dx + 2\int_{\mathbb{R}}\xi
\,|u|^{2}\,
|u_{\alpha}|^{2}dx \nonumber \\
& \leq & 2\sum_{k=1}^{\alpha - 1}{\alpha\choose
k}\int_{\mathbb{R}}\xi\,|u_{\alpha -
k}|\,|u_{k}|\,|u|\,|u_{\alpha}|dx +
2\,||u||_{L^{\infty}(\mathbb{R})}^{2}\int_{\mathbb{R}}\xi
\,|u_{\alpha}|^{2}dx \nonumber \\
\label{e305}& \leq &
2\,||u||_{L^{\infty}(\mathbb{R})}\sum_{k=1}^{\alpha -
1}{\alpha\choose k}\int_{\mathbb{R}}\xi \,|u_{\alpha -
k}|\,|u_{k}|\,|u_{\alpha}|dx +
2\,||u||_{L^{\infty}(\mathbb{R})}^{2}\int_{\mathbb{R}}\xi
\,|u_{\alpha}|^{2}dx\qquad \;
\end{eqnarray}
hence, in \eqref{e304} we have
\begin{eqnarray*}
\lefteqn{\partial_{t}\int_{\mathbb{R}}\xi \,|u_{\alpha}|^{2}dx +
\int_{\mathbb{R}}[3\,\beta - |\omega |\,]\,\partial \xi\,|u_{\alpha
+ 1}|^{2}dx - \int_{\mathbb{R}}[\partial_{t}\xi + \beta
\,\partial^{3}\xi +
|\omega|\,\partial \xi + c_{0}\,\xi \,]\, |u_{\alpha}|^{2}dx} \\
& & -\;2\,c\sum_{k=1}^{\alpha - 1}{\alpha\choose
k}\int_{\mathbb{R}}\xi \,|u_{\alpha - k}|\,|u_{k}|\,|u_{\alpha}|dx -
2\sum_{m=1}^{\alpha - 1}{\alpha\choose m}\int_{\mathbb{R}}\xi \,
|(|u|^{2})_{\alpha - m}| \,|u_{m}|\,|u_{\alpha}|dx\leq 0.
\end{eqnarray*}
Therefore, using straightforward calculations we obtain the {\it
main inequality}
\begin{eqnarray*}
& & \partial_{t}\int_{\mathbb{R}}\xi \,|u_{\alpha}|^{2}dx +
\int_{\mathbb{R}}\eta \,|u_{\alpha + 1}|^{2}dx +
\int_{\mathbb{R}}\theta \,|u_{\alpha}|^{2}dx +
\int_{\mathbb{R}}R_{\alpha}dx\leq 0
\end{eqnarray*}
where
\begin{eqnarray*}
\eta & = & (3\,\beta - |\omega |\,)\,\partial \xi\qquad
\mbox{for}\qquad
|\omega |<3\;\beta \\
\theta & = & -\;[\,\partial_{t}\xi + \beta \,\partial^{3}\xi +
|\omega|\,\partial \xi + c_{0}\,\xi \,] \quad \mbox{where}\quad
c_{0}=||u||_{L^{\infty}(\mathbb{R})}^{2}
\end{eqnarray*}
and $R_{\alpha}=R_{\alpha}(|u_{\alpha }|,\,|u_{\alpha - 1}|,\,\ldots ).$\\
\\
{\it Remark.} In \eqref{e304} using Young's estimate and assuming
that $\beta>0$ we have
\begin{eqnarray*}
2\,\omega\;Im\int_{\mathbb{R}}\partial \xi\,\overline{u}_{\alpha}\,u_{\alpha +
1}\,dx\leq
\frac{|\omega|^{2}}{2\,\beta}\int_{\mathbb{R}}\partial \xi\,|u_{\alpha}|^{2}\,dx +
2\,\beta\int_{\mathbb{R}}\partial \xi\,|u_{\alpha + 1}|^{2}\,dx.
\end{eqnarray*}
Then, in \eqref{e304} we obtain
\begin{eqnarray*}
\lefteqn{\partial_{t}\int_{\mathbb{R}}\xi\,|u_{\alpha}|^{2}dx -
\beta \int_{\mathbb{R}}\partial^{3}\xi\,|u_{\alpha}|^{2}dx + \beta
\int_{\mathbb{R}}\partial \xi\,|u_{\alpha + 1}|^{2}dx +
2\,Im\int_{\mathbb{R}}(|u|^{2})_{\alpha}\,\xi
\,u\,\overline{u}_{\alpha}dx}\\
& & -\int_{\mathbb{R}}\partial_{t}\xi\,|u_{\alpha}|^{2}dx
+\,2\sum_{m=1}^{\alpha - 1}{\alpha\choose
m}Im\int_{\mathbb{R}}\xi\,(|u|^{2})_{\alpha - m}
\,u_{m}\,\overline{u}_{\alpha}dx \\
& \leq &
\frac{|\omega|^{2}}{2\,\beta}\int_{\mathbb{R}}\partial \xi\,|u_{\alpha}|^{2}\,dx
\end{eqnarray*}
and the assumption that $|\omega|<3\,\beta$ can be removed.\\
\\
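The Young's estimate above is the elementary bound $2\,|\omega|\,a\,b\leq\frac{|\omega|^{2}}{2\,\beta}\,a^{2}+2\,\beta\,b^{2}$ applied pointwise. A direct numerical check on random nonnegative $a,\,b$ and sample parameter values (deliberately chosen with $3\,\beta<|\omega|,$ to stress that $|\omega|<3\,\beta$ is not needed for this step):

```python
import numpy as np

# Young's estimate: for beta > 0,
#   2|omega| a b <= (|omega|^2 / (2 beta)) a^2 + 2 beta b^2,
# since 2|omega| a b = 2 (|omega| a / sqrt(2 beta)) (sqrt(2 beta) b).
rng = np.random.default_rng(0)
omega, beta = 1.7, 0.3           # sample values; note 3*beta < |omega| here
a = rng.uniform(0.0, 10.0, 10000)
b = rng.uniform(0.0, 10.0, 10000)
lhs = 2.0 * abs(omega) * a * b
rhs = abs(omega)**2 / (2.0 * beta) * a**2 + 2.0 * beta * b**2
print(bool(np.all(lhs <= rhs + 1e-9)))
```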
{\bf Lemma 3.2.} {\it For $\eta \in W_{\sigma \;i\;k}$ an arbitrary
weight function and $|\omega |<3\,\beta ,$ there exists $\xi \in
W_{\sigma ,\;i + 1,\;k}$ that satisfies}
\begin{eqnarray}
\label{e306}\eta = (3\,\beta - |\omega |)\,\partial \xi \qquad
\mbox{for}\qquad |\omega |<3\;\beta .
\end{eqnarray}
Indeed, we have
\begin{eqnarray}
\label{e307}\xi = \frac{1}{(3\,\beta - |\omega
|)}\int_{-\,\infty}^{x}\eta (y,\,t)\,dy.
\end{eqnarray}
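Formula \eqref{e307} can be checked numerically: the discrete primitive of a weight $\eta$ recovers \eqref{e306} up to discretization error. The choice $\eta=\operatorname{sech}^{2}x$ and the values of $\beta,\,\omega$ below are purely illustrative.

```python
import numpy as np

# Lemma 3.2: xi(x) = (3 beta - |omega|)^{-1} * int_{-inf}^{x} eta(y) dy
# satisfies eta = (3 beta - |omega|) d(xi)/dx.
beta, omega = 1.0, 0.5                       # sample values, |omega| < 3 beta
x = np.linspace(-30.0, 30.0, 600001)
dx = x[1] - x[0]
eta = 1.0 / np.cosh(x)**2                    # sample integrable weight

xi = np.cumsum(eta) * dx / (3.0 * beta - abs(omega))     # formula (3.7)
recovered = (3.0 * beta - abs(omega)) * np.gradient(xi, dx)
err = np.max(np.abs(recovered - eta))
print(err)
```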
{\bf Lemma 3.3.} {\it The expression $R_{\alpha}$ in the inequality
of Lemma 3.1 is a sum of terms of the form}
\begin{eqnarray}
\label{e308}\xi
\,u_{\nu_{1}}\;\overline{u}_{\nu_{2}}\;\overline{u}_{\alpha}
\end{eqnarray}
{\it where $1\leq \nu_{1}\leq \nu_{2}\leq \alpha$ and }
\begin{eqnarray}
\label{e309}\nu_{1} + \nu_{2} = \alpha
\end{eqnarray}
{\it Proof.} It follows from \eqref{e305}.
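The Leibniz expansion of $(|u|^{2})_{\alpha}$ used in the proof of Lemma 3.1 (and behind Lemma 3.3) can be verified numerically for $\alpha=2,$ where it reads $(|u|^{2})_{2}=u_{2}\,\overline{u}+2\,u_{1}\,\overline{u}_{1}+u\,\overline{u}_{2}.$ The complex test function below is an arbitrary illustrative choice.

```python
import numpy as np

# Leibniz check of (|u|^2)_alpha = sum_k C(alpha,k) u_{alpha-k} conj(u)_k
# for alpha = 2 and a sample complex-valued, rapidly decreasing u.
x = np.linspace(-15.0, 15.0, 300001)
dx = x[1] - x[0]
u = np.exp(1j * x) * np.exp(-x**2)

d = lambda f: np.gradient(f, dx)             # numerical d/dx
lhs = d(d(np.abs(u)**2))                     # (|u|^2)_2 directly
rhs = np.real(d(d(u)) * np.conj(u)
              + 2.0 * d(u) * np.conj(d(u))
              + u * np.conj(d(d(u))))        # u_2 conj(u) + 2 u_1 conj(u_1) + u conj(u_2)
err = np.max(np.abs(lhs - rhs))
print(err)
```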
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\setcounter{equation}{0}\section{An a priori estimate} We show now a
fundamental a priori estimate used for a basic local-in-time
existence theorem. We construct a mapping ${\cal
Z}:L^{\infty}([0,\,T]:\,H^{s}(\mathbb{R}))\longmapsto
L^{\infty}([0,\,T]:\,H^{s}(\mathbb{R}))$ with the property:\\
Given $u^{(n)}={\cal Z}(u^{(n - 1)})$ and
$\mbox{ess}\sup_{t\in[0,\,T]}||u^{(n - 1)}||_{s}\leq c_{0},$ then
$\mbox{ess}\sup_{t\in[0,\,T]}||u^{(n)}||_{s}\leq c_{0},$ where $s$ and
$c_{0}>0$ are constants. This property tells us that ${\cal
Z}:\mathbb{B}_{c_{0}}(0)\longmapsto \mathbb{B}_{c_{0}}(0)$ where
$\mathbb{B}_{c_{0}}(0)=\{v(x,\,t):\;||v(\,\cdot\,,\,t)||_{s}\leq
c_{0}\}$ is a ball in $L^{\infty}([0,\,T]:\,H^{s}(\mathbb{R})).$
To guarantee this property, we will appeal to an a
priori estimate which is the main object of this section.\\
\\
Differentiating \eqref{e204} twice leads to
\begin{eqnarray}
\label{e401}i\,\partial_{t} u_{2} + i\,\beta \,u_{5} + \omega
\,u_{4} + (|u|^{2})_{2}\,u + 2\,(|u|^{2})_{1}\,u_{1} +
|u|^{2}\,u_{2} = 0.
\end{eqnarray}
Let $u=\wedge v$ where $\wedge =(I - \partial^{2})^{-1};$ that is,
$u=(I - \partial^{2})^{-1}v,$ so that $\;u - u_{2} = v\;$ and
$\;\partial _{t}u_{2} = u_{t} - v_{t}.$ \\
\\
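On the Fourier side the smoothing operator $\wedge=(I-\partial^{2})^{-1}$ is division of $\widehat{v}(k)$ by $1+k^{2},$ which gives a convenient way to compute $u=\wedge v$ and to confirm $(I-\partial^{2})u=v.$ A sketch using the FFT on a periodic grid; the datum $v$ and the grid parameters are illustrative.

```python
import numpy as np

# The operator \wedge = (I - d^2/dx^2)^{-1} acts as division by 1 + k^2
# on the Fourier side.  We check numerically that u - u'' = v.
n, L = 4096, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
v = np.exp(-x**2)                               # sample datum v

u = np.real(np.fft.ifft(np.fft.fft(v) / (1.0 + k**2)))   # u = \wedge v
u_xx = np.real(np.fft.ifft(-(k**2) * np.fft.fft(u)))     # spectral u''
err = np.max(np.abs((u - u_xx) - v))
print(err)
```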
Replacing in \eqref{e401} we have
\begin{eqnarray}
\lefteqn{-\,i\,v_{t} + i\,\beta \,\wedge v_{5} + \omega \,\wedge
v_{4} + (|\wedge v|^{2})_{2}\wedge v + 2\,(|\wedge v|^{2})_{1}\wedge
v_{1} }
\nonumber \\
\label{e402}& & +\;|\wedge v|^{2}\,\wedge v_{2} - (i\,\beta
\,\wedge v_{3} + \omega \,\wedge v_{2} + |\wedge v|^{2}\,\wedge
v)=0.
\end{eqnarray}
Equation \eqref{e402} is linearized by substituting a new
variable $z$ into each coefficient:
\begin{eqnarray}
\lefteqn{-\,i\,v_{t} + i\,\beta \,\wedge v_{5} + \omega \,\wedge
v_{4} + (|\wedge z|^{2})_{2}\wedge v + 2\,(|\wedge z|^{2})_{1}\wedge
v_{1} }
\nonumber \\
\label{e403}& & +\;|\wedge z|^{2}\,\wedge v_{2} - (i\,\beta
\,\wedge v_{3} + \omega \,\wedge v_{2} + |\wedge z|^{2}\,\wedge
v)=0.
\end{eqnarray}
The linear equation which is to be solved at each iteration is of
the form
\begin{eqnarray}
\label{e404}i\,\partial_{t}v=i\,\beta \,\wedge v_{5}^{(n)} + \omega
\,\wedge v_{4}^{(n)} - i\,\beta \,\wedge v_{3}^{(n)} - \omega
\,\wedge v_{2}^{(n)} + b^{(1)}
\end{eqnarray}
where $b^{(1)}=(|\wedge z|^{2})_{2}\,\wedge v + 2\,(|\wedge
z|^{2})_{1}\,\wedge v_{1} + |\wedge z|^{2}\,\wedge v_{2} - |\wedge
z|^{2}\,\wedge v.$ Equation \eqref{e404} is a linear equation at
each iteration which can be solved in any interval of time in which
the
coefficient is defined.\\
\\
We consider the following lemma that will help us set up the
iteration scheme.\\
\\
{\bf Lemma 4.1.} {\it Let $|\omega |<3\,\beta .$ Given initial data
$u_{0}(x)\in H^{\infty}(\mathbb{R}) = \bigcap _{N\geq
0}H^{N}(\mathbb{R})$ there exists a unique solution of \eqref{e404}
where $b^{(1)}$ is a smooth bounded coefficient with $z\in
H^{\infty}(\mathbb{R}).$ The solution is defined in any time
interval in which
the coefficient is defined.}\\
\\
{\it Proof.} Let $T>0$ be arbitrary and $M>0$ a constant. Let
\begin{eqnarray*}
\Gamma =\xi \,(\,i\,\partial_{t} - i\,\beta \,\wedge \partial^{5} -
\omega \,\wedge \partial^{4} + i\,\beta \,\wedge \partial^{3} +
\omega \,\wedge \partial^{2}\,)
\end{eqnarray*}
then in \eqref{e404} we have $\Gamma u=\xi \,b^{(1)}.$ We consider
the bilinear form ${\cal B} :{\cal D}\times {\cal D}\longmapsto
\mathbb{R},$
\begin{eqnarray*}
{\cal
B}(u,\,v)=<u,\,v>=Im\int_{0}^{T}\int_{\mathbb{R}}e^{-Mt}\,u\,
\overline{v}\,dx\,dt
\end{eqnarray*}
where ${\cal D}=\{u\in C_{0}^{\infty}(\mathbb{R} \times
[0,\,T]):\;u(x,\,0)=0\,\}.$ We have
\begin{eqnarray*}
\Gamma u\cdot \overline{u} & = & i\,\xi \,\overline{u}\,u_{t} -
i\,\beta \,\xi \, \overline{u}\,\wedge u_{5} - \omega \,\xi
\,\overline{u}\,\wedge u_{4} + i\,\beta \,\xi \,
\overline{u}\,\wedge u_{3} + \omega \,\xi \,\overline{u}\,\wedge u_{2}\\
\overline{\Gamma u\cdot \overline{u}} & = & -\,i\,\xi
\,u\,\overline{u}_{t} + i\,\beta \, \xi \,u\,\wedge \overline{u}_{5}
- \omega \,\xi \,u\,\wedge \overline{u}_{4} - i\,\beta \,\xi
\,u\,\wedge \overline{u}_{3} + \omega \,\xi \,u\,\wedge
\overline{u}_{2}.\; (\mbox{applying conjugate})
\end{eqnarray*}
Subtracting and integrating over $x\in \mathbb{R}$ we have
\begin{eqnarray*}
\lefteqn{2\,i\,Im\int_{\mathbb{R}}\Gamma u\cdot \overline{u}dx
= i\,\partial_{t}\int_{\mathbb{R}}\xi \,|u|^{2}dx -
i\int_{\mathbb{R}}\partial_{t}\xi\,|u|^{2}dx - i\,\beta
\int_{\mathbb{R}}\xi \,\overline{u}\,\wedge u_{5}dx -
i\,\beta \int_{\mathbb{R}}\xi \,u\,\wedge \overline{u}_{5}dx} \\
& & -\,\omega \int_{\mathbb{R}}\xi \,\overline{u}\,\wedge u_{4}dx +
\omega \int_{\mathbb{R}}\xi \,u\,\wedge \overline{u}_{4}dx +
i\,\beta \int_{\mathbb{R}}\xi \,\overline{u}\,\wedge u_{3}dx +
i\,\beta \int_{\mathbb{R}}\xi \,u\,\wedge \overline{u}_{3}dx \\
& & +\,\omega \int_{\mathbb{R}}\xi \,\overline{u}\,\wedge u_{2}dx -
\omega \int_{\mathbb{R}}\xi \,u\,\wedge \overline{u}_{2}dx.
\end{eqnarray*}
Each term is treated separately, integrating by parts
\begin{eqnarray*}
\lefteqn{\int_{\mathbb{R}}\xi \,\overline{u}\,\wedge u_{5}dx =
\int_{\mathbb{R}}\xi \,\wedge(I -
\partial^{2})\overline{u}\,\wedge u_{5}dx = \int_{\mathbb{R}}\xi
\,\wedge \overline{u}\,\wedge u_{5}dx -
\int_{\mathbb{R}}\xi \,\wedge \overline{u}_{2}\,\wedge u_{5}dx }\\
& = & \int_{\mathbb{R}}\partial^{4}\xi \,\wedge \overline{u}\,\wedge
u_{1}dx + \int_{\mathbb{R}}\partial^{3}\xi \,|\wedge u_{1}|^{2}dx -
3\int_{\mathbb{R}}\partial^{2}\xi \,\wedge \overline{u}_{1}\,\wedge
u_{2}dx - 2\int_{\mathbb{R}}\partial\xi \,|\wedge u_{2}|^{2}dx\\
& & +\int_{\mathbb{R}}\xi \,\wedge \overline{u}_{2}\,\wedge u_{3}dx
- \int_{\mathbb{R}}\partial^{2}\xi \,\wedge \overline{u}_{2}\,\wedge
u_{3}dx - \int_{\mathbb{R}}\partial\xi \,|\wedge u_{3}|^{2}dx +
\int_{\mathbb{R}}\xi \,\wedge \overline{u}_{3}\,\wedge u_{4}dx.
\end{eqnarray*}
The other terms are calculated in a similar way. Then
\begin{eqnarray*}
\lefteqn{2\,i\,Im\int_{\mathbb{R}}\Gamma u\cdot \overline{u}dx}\\
& = & i\,\partial_{t}\int_{\mathbb{R}}\xi\,|u|^{2}dx -
i\int_{\mathbb{R}}\partial_{t}\xi\,|u|^{2}dx - i\,\beta
\int_{\mathbb{R}}\partial^{4}\xi \,\wedge \overline{u}\,\wedge
u_{1}dx - i\,\beta \int_{\mathbb{R}}\partial^{3}\xi \,|\wedge
u_{1}|^{2}dx\\
& & +\;3\,i\,\beta \int_{\mathbb{R}}\partial^{2}\xi \,\wedge
\overline{u}_{1}\,\wedge u_{2}dx + 2\,i\,\beta
\int_{\mathbb{R}}\partial\xi \,|\wedge u_{2}|^{2}dx - i\,\beta
\int_{\mathbb{R}}\xi \,\wedge
\overline{u}_{2}\,\wedge u_{3}dx\\
& & +\;i\,\beta \int_{\mathbb{R}}\partial^{2}\xi \,\wedge
\overline{u}_{2}\, \wedge u_{3}dx + i\,\beta
\int_{\mathbb{R}}\partial\xi \,|\wedge u_{3}|^{2}dx - i\,\beta
\int_{\mathbb{R}}\xi \,\wedge
\overline{u}_{3}\,\wedge u_{4}dx \\
& & -\;i\,\beta \int_{\mathbb{R}}\partial^{4}\xi \,\wedge u\,\wedge
\overline{u}_{1}dx - i\,\beta \int_{\mathbb{R}}\partial^{3}\xi
\,|\wedge u_{1}|^{2}dx + 3\,i\,\beta
\int_{\mathbb{R}}\partial^{2}\xi \,\wedge
u_{1}\,\wedge \overline{u}_{2}dx\\
& & +\;2\,i\,\beta \int_{\mathbb{R}}\partial\xi \,|\wedge
u_{2}|^{2}dx - i\,\beta \int_{\mathbb{R}}\xi\,\wedge u_{2}\,\wedge
\overline{u}_{3}dx + i\,\beta\int_{\mathbb{R}}\partial^{2}\xi
\,\wedge u_{2}\,
\wedge\overline{u}_{3}dx\\
& & +\;2\,i\,\beta \int_{\mathbb{R}}\partial\xi \,|\wedge
u_{3}|^{2}dx + i\,\beta \int_{\mathbb{R}}\xi\,\wedge
\overline{u}_{3}\,\wedge u_{4}dx + \omega
\int_{\mathbb{R}}\partial^{3}\xi \,\wedge
\overline{u}\,\wedge u_{1}dx\\
& & +\;\omega \int_{\mathbb{R}}\partial^{2}\xi\,|\wedge
u_{1}|^{2}dx - 2\,\omega \int_{\mathbb{R}}\partial\xi\,\wedge
\overline{u}_{1}\,\wedge u_{2}dx - \omega \int_{\mathbb{R}}\xi\,|\wedge u_{2}|^{2}dx\\
& & -\;\omega
\int_{\mathbb{R}}\partial\xi\,\wedge\overline{u}_{2}\,\wedge u_{3}dx
- \omega \int_{\mathbb{R}}\xi \,|\wedge u_{3}|^{2}dx - \omega
\int_{\mathbb{R}}\partial^{3}\xi\,\wedge u\,\wedge
\overline{u}_{1}dx\\
& & - \omega \int_{\mathbb{R}}\partial^{2}\xi \,|\wedge
u_{1}|^{2}dx + 2\,\omega \int_{\mathbb{R}}\partial\xi
\,\wedge u_{1}\,\wedge \overline{u}_{2}dx +
\omega \int_{\mathbb{R}}\xi\,|\wedge u_{2}|^{2}dx\\
& & +\;\omega \int_{\mathbb{R}}\partial\xi\,\wedge u_{2}\,\wedge
\overline{u}_{3}dx + \omega \int_{\mathbb{R}}\xi \,|\wedge
u_{3}|^{2}dx + i\,\beta \int_{\mathbb{R}}\partial^{2}\xi \,\wedge
\overline{u}\,\wedge u_{1}dx\\
& & +\;i\,\beta \int_{\mathbb{R}}\partial\xi \,|\wedge
u_{1}|^{2}dx - i\,\beta \int_{\mathbb{R}}\xi \,\wedge
\overline{u}_{1}\,\wedge u_{2}dx -
i\,\beta\int_{\mathbb{R}}\xi\,\wedge
\overline{u}_{2}\,\wedge u_{3}dx\\
& & +\;i\,\beta \int_{\mathbb{R}}\partial^{2}\xi \,\wedge u\,\wedge
\overline{u}_{1}dx + i\,\beta \int_{\mathbb{R}}\partial\xi\,|\wedge
u_{1}|^{2}dx - i\,\beta \int_{\mathbb{R}}\xi\,\wedge u_{1}\,\wedge
\overline{u}_{2}dx\\
& & -\;i\,\beta \int_{\mathbb{R}}\xi \,\wedge u_{2}\,\wedge
\overline{u}_{3}dx - \omega \int_{\mathbb{R}}\partial \xi \,\wedge
\overline{u}\,\wedge u_{1}dx - \omega \int_{\mathbb{R}}\xi\,|\wedge
u_{1}|^{2}dx - \omega
\int_{\mathbb{R}}\xi \,|\wedge u_{2}|^{2}dx\\
& & +\;\omega \int_{\mathbb{R}}\partial \xi\,\wedge u\,\wedge
\overline{u}_{1}dx + \omega\int_{\mathbb{R}}\xi \,|\wedge
u_{1}|^{2}dx + \omega\int_{\mathbb{R}}\xi\,|\wedge u_{2}|^{2}dx
\end{eqnarray*}
hence
\begin{eqnarray*}
\lefteqn{2\,i\,Im\int_{\mathbb{R}}\Gamma u\cdot \overline{u}dx =
i\,\partial_{t}\int_{\mathbb{R}}\xi \,|u|^{2}dx -
i\int_{\mathbb{R}}\partial_{t}\xi\,|u|^{2}dx - i\,\beta
\int_{\mathbb{R}}\partial^{4}\xi
\,(|\wedge u|^{2})_{1}dx }\\
& & -\, 2\,i\,\beta \int_{\mathbb{R}}\partial^{3}\xi \,|\wedge
u_{1}|^{2}dx + 3\,i\,\beta \int_{\mathbb{R}}\partial^{2}\xi
\,(|\wedge u_{1}|^{2})_{1}dx + 4\,i\,\beta
\int_{\mathbb{R}}\partial\xi \,|\wedge
u_{2}|^{2}dx \\
& &-\,i\,\beta \int_{\mathbb{R}}\xi \,(|\wedge u_{2}|^{2})_{1}dx +
i\,\beta \int_{\mathbb{R}}\partial^{2}\xi \,(|\wedge
u_{2}|^{2})_{1}dx +
3\,i\,\beta \int_{\mathbb{R}}\partial\xi \,|\wedge u_{3}|^{2}dx \\
& & +\,2\,i\,\omega\,Im\int_{\mathbb{R}}\partial^{3}\xi\,\wedge
\overline{u}\,\wedge u_{1}dx - 4\,i\,\omega \,
Im\int_{\mathbb{R}}\partial\xi \,\wedge \overline{u}_{1}\,\wedge
u_{2}dx \\
& & -\;2\,i\,\omega \, Im\int_{\mathbb{R}}\partial\xi \,\wedge
\overline{u}_{2}\,\wedge u_{3}dx + i\,\beta
\int_{\mathbb{R}}\partial^{2}\xi \,(|\wedge u|^{2})_{1}dx +
2\,i\,\beta \int_{\mathbb{R}}\partial \xi \,|\wedge u_{1}|^{2}dx
\\
& & -\;i\,\beta \int_{\mathbb{R}}\xi \,(|\wedge u_{1}|^{2})_{1}dx -
i\,\beta \int_{\mathbb{R}}\xi \,(|\wedge u_{2}|^{2})_{1}dx -
2\,\omega \,Im\int_{\mathbb{R}}\partial \xi\,\wedge
\overline{u}\,\wedge u_{1}dx
\end{eqnarray*}
then, collecting similar terms and cancelling the common factor $i,$ we obtain
\begin{eqnarray*}
\lefteqn{2\,Im\int_{\mathbb{R}}\Gamma u\cdot \overline{u}\,dx =
\partial_{t}\int_{\mathbb{R}}\xi \,|u|^{2}dx -
\int_{\mathbb{R}}\partial_{t}\xi\,|u|^{2}dx + \beta
\int_{\mathbb{R}}\partial^{5}\xi \,|\wedge u|^{2}dx -
5\,\beta \int_{\mathbb{R}}\partial^{3}\xi \,|\wedge u_{1}|^{2}dx }\\
& & +\;6\,\beta \int_{\mathbb{R}}\partial\xi \,|\wedge u_{2}|^{2}dx
- \beta \int_{\mathbb{R}}\partial^{3}\xi\,|\wedge u_{2}|^{2}dx +
3\,\beta \int_{\mathbb{R}}\partial\xi\,|\wedge
u_{3}|^{2}dx \\
& & +\;2\,\omega\,Im\int_{\mathbb{R}}\partial^{3}\xi\,\wedge
\overline{u}\,\wedge u_{1}dx - 4\,\omega\,
Im\int_{\mathbb{R}}\partial\xi\,\wedge \overline{u}_{1}\,\wedge
u_{2}dx - 2\,\omega\,Im\int_{\mathbb{R}}\partial\xi \,\wedge
\overline{u}_{2}\,\wedge u_{3}dx \\
& & -\;\beta \int_{\mathbb{R}}\partial^{3}\xi \,|\wedge u|^{2}dx +
3\,\beta \int_{\mathbb{R}}\partial \xi \,|\wedge u_{1}|^{2}dx -
2\,\omega \,Im\int_{\mathbb{R}}\partial \xi \,\wedge
\overline{u}\,\wedge u_{1}dx
\end{eqnarray*}
then
\begin{eqnarray*}
\lefteqn{|\omega |\int_{\mathbb{R}}\partial \xi \,|\wedge
u_{3}|^{2}dx + |\omega |\int_{\mathbb{R}}\partial \xi \,|\wedge
u_{2}|^{2}dx + 2\,|\omega |\int_{\mathbb{R}}\partial \xi \,|\wedge
u_{1}|^{2}dx
+ 2\,|\omega |\int_{\mathbb{R}}\partial \xi \,|\wedge
u_{2}|^{2}\,dx }\\
& & +\;|\omega |\int_{\mathbb{R}}\partial \xi \,|\wedge u|^{2}dx +
|\omega |\int_{\mathbb{R}}\partial \xi \,|\wedge u_{1}|^{2}dx +
|\omega |\int_{\mathbb{R}}|\partial ^{3}\xi |\,|\wedge
u|^{2}dx\\
& & +\;|\omega |\int_{\mathbb{R}}|\partial ^{3}\xi |\,|\wedge
u_{1}|^{2}dx + \int_{\mathbb{R}}\partial_{t}\xi\,|u|^{2}dx +
2\,Im\int_{\mathbb{R}}\Gamma u\cdot \overline{u}dx\\
& \geq &
\partial_{t}\int_{\mathbb{R}}\xi \,|u|^{2}dx + 3\,\beta
\int_{\mathbb{R}}\partial\xi \,|\wedge u_{3}|^{2}dx -
\beta \int_{\mathbb{R}}\partial^{3}\xi \,|\wedge u_{2}|^{2}dx
+ 6\,\beta \int_{\mathbb{R}}\partial\xi \,|\wedge u_{2}|^{2}dx\\
& & -\;5\,\beta \int_{\mathbb{R}}\partial^{3}\xi \,|\wedge
u_{1}|^{2}dx + 3\,\beta \int_{\mathbb{R}}\partial \xi \,|\wedge
u_{1}|^{2}dx + \beta \int_{\mathbb{R}}\partial^{5}\xi \,|\wedge
u|^{2}dx - \beta \int_{\mathbb{R}}\partial^{3}\xi \,|\wedge u|^{2}dx
\end{eqnarray*}
hence
\begin{eqnarray*}
& & 3\;|\omega |\int_{\mathbb{R}}\partial \xi \,|\wedge
u_{2}|^{2}dx + |\omega |\int_{\mathbb{R}}[|\partial^{3}\xi | +
3\,\partial \xi ]\,|\wedge u_{1}|^{2}dx \\
& & +\;|\omega |\int_{\mathbb{R}}[|\partial^{3}\xi | +
\partial \xi + \partial_{t}\xi]\,|\wedge u|^{2}dx +
2\,Im\int_{\mathbb{R}}\Gamma u\cdot \overline{u}dx \\
& \geq &
\partial_{t}\int_{\mathbb{R}}\xi \,|u|^{2}dx +
\int_{\mathbb{R}}[3\,\beta - |\omega |]\,\partial \xi \,|\wedge
u_{3}|^{2}dx - \beta
\int_{\mathbb{R}}\partial^{3}\xi \,|\wedge u_{2}|^{2}dx \\
& & +\;6\,\beta \int_{\mathbb{R}}\partial\xi \,|\wedge u_{2}|^{2}dx
- 5\,\beta \int_{\mathbb{R}}\partial^{3}\xi \,|\wedge u_{1}|^{2}dx
+\,3\,\beta \int_{\mathbb{R}}\partial \xi \,|\wedge
u_{1}|^{2}dx \\
& & +\;\beta \int_{\mathbb{R}}\partial^{5}\xi \,|\wedge u|^{2}dx
- \beta \int_{\mathbb{R}}\partial^{3}\xi\,|\wedge u|^{2}dx \\
& \geq &
\partial_{t}\int_{\mathbb{R}}\xi \,|u|^{2}dx + \beta
\int_{\mathbb{R}}[-\partial^{3}\xi +
5\partial \xi ]\,|\wedge u_{2}|^{2}dx \\
& & +\,\beta \int_{\mathbb{R}}[-5\,\partial^{3}\xi + 3\partial\xi
]\,|\wedge u_{1}|^{2}dx + \beta \int_{\mathbb{R}}[\partial^{5}\xi -
\partial ^{3}\xi ]\,|\wedge u|^{2}dx
\end{eqnarray*}
using \eqref{e203}, $\wedge u_{n}=(I - (I -
\partial^{2}))\wedge u_{n - 2}=\wedge u_{n - 2} - u_{n - 2}$ for $n$
a positive integer and standard estimates we obtain
\begin{eqnarray*}
Im\int_{\mathbb{R}}\Gamma u\cdot \overline{u}\,dx \geq
\partial_{t}\int_{\mathbb{R}}\xi\,|u|^{2}\,dx - c\int_{\mathbb{R}}\xi
\,|u|^{2}\,dx.
\end{eqnarray*}
Multiplying this inequality by $e^{-Mt}$ and integrating with respect to
$t$ over $[0,\,T],$ for $u\in {\cal D},$ gives
\begin{eqnarray*}
\lefteqn{Im\int_{0}^{T}\int_{\mathbb{R}}e^{-Mt}\,\Gamma u\cdot
\overline{u}\,dx\,dt \geq \int_{0}^{T}e^{-Mt}\left
(\partial_{t}\int_{\mathbb{R}}\xi\,|u|^{2}dx\right)dt -
c\int_{0}^{T}\int_{\mathbb{R}}\xi\,e^{-Mt}\,|u|^{2}dx\,dt } \\
& = & e^{-Mt}\int_{\mathbb{R}}\xi\,|u|^{2}dx\;\Big|_{0}^{T} +
M\int_{0}^{T}\int_{\mathbb{R}}\xi\,e^{-Mt}\,|u|^{2}dx\,dt -
c\int_{0}^{T}\int_{\mathbb{R}}\xi\,e^{-Mt}\,|u|^{2}dx\,dt\\
& = & e^{-MT}\int_{\mathbb{R}}\xi(x,\,T)\,|u(x,\,T)|^{2}dx +
M\int_{0}^{T}\int_{\mathbb{R}}\xi\,e^{-Mt}\,|u|^{2}dx\,dt -
c\int_{0}^{T}\int_{\mathbb{R}}\xi\,e^{-Mt}\,|u|^{2}dx\,dt.
\end{eqnarray*}
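The step above is the elementary identity $\int_{0}^{T}e^{-Mt}\,f'(t)\,dt=e^{-MT}f(T)-f(0)+M\int_{0}^{T}e^{-Mt}\,f(t)\,dt,$ applied to $f(t)=\int_{\mathbb{R}}\xi\,|u|^{2}dx,$ which vanishes at $t=0$ for $u\in{\cal D}.$ A scalar numerical check; the function $f$ below is an arbitrary sample with $f(0)=0.$

```python
import numpy as np

# Check: int_0^T e^{-Mt} f'(t) dt = e^{-MT} f(T) + M int_0^T e^{-Mt} f(t) dt
# whenever f(0) = 0.  Integrals are approximated by Riemann sums.
M, T = 5.0, 2.0
t = np.linspace(0.0, T, 200001)
dt = t[1] - t[0]
f = np.sin(t)**2                  # sample function with f(0) = 0
df = np.sin(2.0 * t)              # its exact derivative

lhs = np.sum(np.exp(-M * t) * df) * dt
rhs = np.exp(-M * T) * f[-1] + M * np.sum(np.exp(-M * t) * f) * dt
print(abs(lhs - rhs))
```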
Thus
\begin{eqnarray*}
\lefteqn{<\Gamma
u,\,u>=Im\int_{0}^{T}\int_{\mathbb{R}}e^{-Mt}\,\Gamma u\cdot
\overline{u}\,dx\,dt} \\
& \geq &
e^{-MT}\int_{\mathbb{R}}\xi(x,\,T)\,|u(x,\,T)|^{2}dx +
(M - c)\int_{0}^{T}\int_{\mathbb{R}}\xi\,e^{-Mt}\,|u|^{2}dx\,dt \\
& \geq & \int_{0}^{T}\int_{\mathbb{R}}\xi\,e^{-Mt}\,|u|^{2}dx\,dt
\end{eqnarray*}
provided that $M$ is chosen large enough. Then $<\Gamma u,\,u>\geq
<u,\,u>,$ for all $u\in {\cal D}.$ Let $\Gamma ^{*}$ be the formal
adjoint of $\Gamma $ defined by $\Gamma^{*}=\xi(-i\,\partial_{t} -
i\,\beta\,\wedge \partial^{5} - \omega\,\wedge
\partial^{4} + i\,\beta \,\wedge \partial^{3} + \omega \,\wedge
\partial^{2}).$ Let ${\cal D}^{*}=\{w\in C_{0}^{\infty}(\mathbb{R} \times
[0,\,T]):\;w(x,\,T)=0\,\}.$ In a similar way we prove that
\begin{eqnarray*}
<\Gamma^{*}w,\,w> \;\geq \;<w,\,w>,\quad \forall \;w\in {\cal
D}^{*}.
\end{eqnarray*}
From this inequality, we have that $\Gamma^{*}$ is one-to-one.
Therefore, $<\Gamma^{*}w,\,\Gamma^{*}v>$ is an inner product on
${\cal D}^{*}.$ We denote by $X$ the completion of ${\cal D}^{*}$
with respect to this inner product. By Riesz's Representation
Theorem, there exists a unique solution $V\in X,$ such that for
any $w\in {\cal D}^{*},$ $<\xi
b^{(1)},\,w>=<\Gamma^{*}V,\,\Gamma^{*}w>$ where we use that $\xi
\,b^{(1)}\in X.$ Then if $v=\Gamma^{*}V$ we have
$<v,\,\Gamma^{*}w>=<\xi b^{(1)},\,w>$ or
$<\Gamma^{*}w,\,v>=<w,\,\xi b^{(1)}>.$ Hence, $v=\Gamma^{*}V$ is a
weak solution of $\Gamma v=\xi b^{(1)}$ with $v\in
L^{2}(\mathbb{R} \times [0,\,T])\simeq
L^{2}([0,\,T]:\,L^{2}(\mathbb{R})).$\\
\\
{\it Remark.} To obtain higher regularity of the solution, we repeat
the proof with higher derivatives. It is a standard approximation
procedure to obtain a result for general initial data.\\
\\
The next step is to estimate the corresponding solutions $v=v(x,\,t)$
of the equation \eqref{e403} via the coefficients of that equation.\\
\\
The following estimate is related to the existence theorem for
solutions.\\
\\
{\bf Lemma 4.2.} {\it Let $|\omega |<3\,\beta $ and $0< \gamma _{1}
\leq \xi \leq \gamma_{2},$ with $\gamma_{1},\,\gamma_{2}$ real
constants. Let $v,\,z\in C^{k}([0,\,+\infty):\;H^{N}(\mathbb{R}))$
for all $k,\,N$ which satisfy \eqref{e403}. For each integer $\alpha
$ there exist positive nondecreasing functions $G$ and $F$ such that
for all $t\geq 0$}
\begin{eqnarray}
\label{e405}\partial_{t}\int_{\mathbb{R}}\xi
\,|v_{\alpha}|^{2}dx\leq G(||z||_{\lambda})\,||v||_{\alpha}^{2} +
F(||z||_{\alpha})
\end{eqnarray}
{\it where $||\;\cdot \;||_{\alpha}$ is the norm in
$H^{\alpha}(\mathbb{R})$
and $\lambda =\max\{1,\,\alpha\}.$}\\
\\
{\it Proof.} Differentiating $\alpha $-times the equation
\eqref{e403}, for some $\alpha \geq 0$ we have
\begin{eqnarray}
\label{e406}-i\;\partial_{t}v_{\alpha} + i\,\beta\wedge v_{\alpha +
5} + \omega \wedge v_{\alpha + 4} - i\,\beta\wedge v_{\alpha + 3} +
\sum_{j=3}^{\alpha + 2}h^{(j)}\wedge v_{j} + (|z|^{2})_{\alpha +
2}\wedge v + p(\wedge z_{\alpha + 1},\,\ldots )=0
\end{eqnarray}
where $h^{(j)}$ is a smooth function depending on $|\wedge
z|^{2}$ and its derivatives up to order $2 + \alpha - j.$ For $\alpha> 2,$
$p(\wedge z_{\alpha + 1},\,\ldots )$ depends at most linearly on
$\wedge z_{\alpha + 1},$ while for $\alpha =2$ it
depends at most quadratically on $\wedge z_{\alpha + 1}.$\\
We multiply equation \eqref{e406} by $\xi\,\overline{v}_{\alpha}$
and integrate over $x\in \mathbb{R}$
\begin{eqnarray*}
\lefteqn{-\,i\int_{\mathbb{R}}\xi
\,\overline{v}_{\alpha}\,\partial_{t}v_{\alpha}dx +
i\,\beta\int_{\mathbb{R}}\xi \,\overline{v}_{\alpha}\wedge v_{\alpha
+ 5}dx + \omega \int_{\mathbb{R}}\xi \,\overline{v}_{\alpha}\,\wedge
v_{\alpha + 4}dx -
i\,\beta\int_{\mathbb{R}}\xi\,\overline{v}_{\alpha}\wedge
v_{\alpha + 3}dx} \\
& & +\sum_{j=3}^{\alpha + 2}h^{(j)}\int_{\mathbb{R}}\xi
\,\overline{v}_{\alpha}\wedge v_{j}dx +
\int_{\mathbb{R}}\xi\,(|z|^{2})_{\alpha +
2}\overline{v}_{\alpha}\wedge vdx
+
\int_{\mathbb{R}}\xi\,\overline{v}_{\alpha}
p(\wedge z_{\alpha + 1},\,\ldots )dx=0
\end{eqnarray*}
and taking the complex conjugate
\begin{eqnarray*}
\lefteqn{i\int_{\mathbb{R}}\xi
\,v_{\alpha}\,\partial_{t}\overline{v}_{\alpha}dx -
i\,\beta\int_{\mathbb{R}}\xi\,v_{\alpha}\wedge \overline{v}_{\alpha
+ 5}dx + \omega \int_{\mathbb{R}}\xi \,v_{\alpha}\wedge
\overline{v}_{\alpha + 4}dx +
i\,\beta\int_{\mathbb{R}}\xi\,v_{\alpha}\wedge
\overline{v}_{\alpha + 3}dx}\\
& & +\sum_{j=3}^{\alpha + 2}h^{(j)}\int_{\mathbb{R}}\xi
\,v_{\alpha}\wedge \overline{v}_{j}dx + \int_{\mathbb{R}}\xi
\,(|z|^{2})_{\alpha + 2}v_{\alpha}\wedge \overline{v}dx +
\int_{\mathbb{R}}\xi \,v_{\alpha}\,
p(\wedge z_{\alpha + 1},\,\ldots )dx=0.
\end{eqnarray*}
Subtracting, it follows that
\begin{eqnarray}
\lefteqn{-\,i\,\partial_{t}\int_{\mathbb{R}}\xi\,|v_{\alpha}|^{2}dx
+ i\,\int_{\mathbb{R}}\partial_{t}\xi\,|v_{\alpha}|^{2}dx +
i\,\beta\int_{\mathbb{R}}\xi\,\overline{v}_{\alpha}\wedge v_{\alpha
+ 5}dx + i\,\beta\int_{\mathbb{R}}\xi
\,v_{\alpha}\wedge\overline{v}_{\alpha + 5}dx } \nonumber \\
& & +\;\omega \int_{\mathbb{R}}\xi \,\overline{v}_{\alpha}\wedge
v_{\alpha + 4}dx - \omega \int_{\mathbb{R}}\xi \,v_{\alpha}\wedge
\overline{v}_{\alpha + 4}dx - i\,\beta\int_{\mathbb{R}}\xi
\,\overline{v}_{\alpha}\wedge v_{\alpha + 3}dx -
i\,\beta\int_{\mathbb{R}}\xi \,v_{\alpha}\wedge\overline{v}_{\alpha
+ 3}dx \nonumber \\
\label{e407}& & +\sum_{j=3}^{\alpha + 2}h^{(j)}\int_{\mathbb{R}}\xi
\,\overline{v}_{\alpha}\wedge v_{j}dx - \sum_{j=3}^{\alpha +
2}h^{(j)}\int_{\mathbb{R}}\xi \,v_{\alpha}\wedge\overline{v}_{j}dx +
\int_{\mathbb{R}}\xi
\,(|z|^{2})_{\alpha + 2}v_{\alpha}\wedge\overline{v}dx \\
& & -\int_{\mathbb{R}}\xi \,(|z|^{2})_{\alpha +
2}\overline{v}_{\alpha}\wedge v\,dx + \int_{\mathbb{R}}\xi
\,\overline{v}_{\alpha}p(\wedge z_{\alpha + 1},\,\ldots )\,dx -
\int_{\mathbb{R}}\xi\,v_{\alpha}\,p(\wedge z_{\alpha + 1},\,\ldots
)dx=0. \nonumber
\end{eqnarray}
Each term is treated separately, integrating by parts
\begin{eqnarray*}
\lefteqn{\int_{\mathbb{R}}\xi\,\overline{v}_{\alpha}\wedge v_{\alpha
+ 5}dx = \int_{\mathbb{R}}\xi\wedge(I -
\partial^{2})\overline{v}_{\alpha}
\wedge v_{\alpha + 5}dx } \\
& = & \int_{\mathbb{R}}\xi\,\wedge\overline{v}_{\alpha}\wedge
v_{\alpha + 5}dx -
\int_{\mathbb{R}}\xi\wedge\overline{v}_{\alpha + 2}\wedge v_{\alpha + 5}dx\\
& = & \int_{\mathbb{R}}\partial^{4}\xi\wedge\overline{v}_{\alpha
}\,\wedge v_{\alpha + 1}dx +
\int_{\mathbb{R}}\partial^{3}\xi\,|\wedge v_{\alpha + 1}|^{2}dx -
3\int_{\mathbb{R}}\partial^{2}\xi\wedge\overline{v}_{\alpha + 1}
\wedge v_{\alpha + 2}dx \\
& & -\,2\int_{\mathbb{R}}\partial\xi\,|\wedge\overline{v}_{\alpha +
2}|^{2}dx + \int_{\mathbb{R}}\xi\wedge\overline{v}_{\alpha +
2}\,\wedge v_{\alpha + 3}dx -
\int_{\mathbb{R}}\partial^{2}\xi\wedge\overline{v}_{\alpha + 2}
\wedge v_{\alpha + 3}dx \\
& & -\,2\int_{\mathbb{R}}\partial \xi \,|\wedge v_{\alpha +
3}|^{2}dx - \int_{\mathbb{R}}\xi\,\wedge\overline{v}_{\alpha +
4}\wedge v_{\alpha + 3}dx.
\end{eqnarray*}
The other terms are calculated in a similar way. Hence,
performing straightforward calculations as above, \eqref{e407}
becomes
\begin{eqnarray*}
\lefteqn{-\,\partial_{t}\int_{\mathbb{R}}\xi\,|v_{\alpha}|^{2}dx +
\int_{\mathbb{R}}\partial_{t}\xi\,|v_{\alpha}|^{2}dx - \beta
\int_{\mathbb{R}}\partial^{5}\xi \,|\wedge v_{\alpha}|^{2}dx +
2\,\beta \int_{\mathbb{R}}\partial^{3}\xi\,
|\wedge v_{\alpha + 1}|^{2}dx } \\
& & +\,3\,\beta \int_{\mathbb{R}}\partial^{3}\xi\,|\wedge v_{\alpha
+ 1}|^{2}dx - 4\,\beta \int_{\mathbb{R}}\partial\xi\,|\wedge
v_{\alpha + 2}|^{2}dx - \beta
\int_{\mathbb{R}}\partial\xi\,|\wedge v_{\alpha + 2}|^{2}dx\\
& & +\;\beta \int_{\mathbb{R}}\partial^{2}\xi\,|\wedge v_{\alpha +
2}|^{2}dx - 3\,\beta \int_{\mathbb{R}}\partial\xi\,|\wedge v_{\alpha
+ 3}|^{2}\,dx - 2\,\omega \,Im\int_{\mathbb{R}}\partial^{3}\xi
\,\wedge
\overline{v}_{\alpha}\wedge v_{\alpha + 1}dx \\
& & +\;4\,\omega \,Im\int_{\mathbb{R}}\partial\xi \wedge
\overline{v}_{\alpha + 1}\wedge v_{\alpha + 2}dx + 2\,\omega
\,Im\int_{\mathbb{R}}\partial\xi\wedge \overline{v}_{\alpha +
2}\wedge v_{\alpha + 3}dx \\
& & +\;2\,\beta \,Im\int_{\mathbb{R}}\partial \xi
\wedge\overline{v}_{\alpha}\wedge v_{\alpha + 2}dx + 2\,\beta
\,Im\int_{\mathbb{R}}\xi\wedge\overline {v}_{\alpha + 1}
\wedge v_{\alpha + 2}dx \\
& & -\,\beta \int_{\mathbb{R}}\partial \xi
\,|\wedge\overline{v}_{\alpha + 2}|^{2}dx + 2\,\sum_{j=3}^{\alpha +
2}h^{(j)}\,Im\int_{\mathbb{R}}\xi \,\overline{v}_{\alpha}\wedge
v_{j}dx \\
& & +\;2\,Im\int_{\mathbb{R}}\xi \,(|z|^{2})_{\alpha +
2}\,v_{\alpha}\wedge\overline{v}dx + 2\,Im\int_{\mathbb{R}}\xi
\,\overline{v}_{\alpha}p(\wedge z_{\alpha + 1},\,\ldots )\,dx=0
\end{eqnarray*}
then
\begin{eqnarray*}
\lefteqn{-\,\partial_{t}\int_{\mathbb{R}}\xi \,|v_{\alpha}|^{2}dx +
\int_{\mathbb{R}}\partial_{t}\xi\,|v_{\alpha}|^{2}dx - 3\,\beta
\int_{\mathbb{R}}\partial\xi\,|\wedge v_{\alpha + 3}|^{2}dx + \beta
\int_{\mathbb{R}}\partial^{2}\xi\,|\wedge v_{\alpha + 2}|^{2}dx } \\
& & -\,6\,\beta \int_{\mathbb{R}}\partial \xi
\,|\wedge\overline{v}_{\alpha + 2}|^{2}dx + 5\,\beta
\int_{\mathbb{R}}\partial^{3}\xi\,|\wedge v_{\alpha + 1}|^{2}dx -
\beta \int_{\mathbb{R}}\partial^{5}\xi\,|\wedge
v_{\alpha}|^{2}dx \\
& = & -\,2\,\omega\,Im\int_{\mathbb{R}}\partial\xi\,\wedge
\overline{v}_{\alpha + 2}\wedge v_{\alpha + 3}dx - 4\,\omega
\,Im\int_{\mathbb{R}}\partial\xi\wedge \overline{v}_{\alpha +
1}\wedge
v_{\alpha + 2}dx\\
& & -\,2\,\beta\,Im\int_{\mathbb{R}}\xi\wedge\overline {v}_{\alpha
+ 1}
\wedge v_{\alpha + 2}dx - 2\,\beta \,Im\int_{\mathbb{R}}\partial \xi
\wedge\overline{v}_{\alpha}\wedge v_{\alpha + 2}dx\\
& & +\,2\,\omega\,Im\int_{\mathbb{R}}\partial^{3}\xi \wedge
\overline{v}_{\alpha}\wedge v_{\alpha + 1}dx - 2\,\sum_{j=3}^{\alpha
+ 2}h^{(j)}\,Im\int_{\mathbb{R}}\xi \,
\overline{v}_{\alpha}\wedge v_{j}dx \\
& & -\,2\,Im\int_{\mathbb{R}}\xi \,(|z|^{2})_{\alpha +
2}v_{\alpha}\wedge\overline{v}dx - 2\,Im\int_{\mathbb{R}}\xi
\,\overline{v}_{\alpha}p(\wedge z_{\alpha + 1},\,\ldots )\,dx
\end{eqnarray*}
hence,
\begin{eqnarray*}
\lefteqn{\partial_{t}\int_{\mathbb{R}}\xi \,|v_{\alpha}|^{2}dx -
\int_{\mathbb{R}}\partial_{t}\xi\,|v_{\alpha}|^{2}dx + 3\,\beta
\int_{\mathbb{R}}\partial\xi\,|\wedge v_{\alpha + 3}|^{2}dx - \beta
\int_{\mathbb{R}}\partial^{2}\xi\,|\wedge v_{\alpha + 2}|^{2}dx } \\
& & +\, 6\,\beta \int_{\mathbb{R}}\partial\xi
\,|\wedge\overline{v}_{\alpha + 2}|^{2}dx - 5\,\beta
\int_{\mathbb{R}}\partial^{3}\xi\,|\wedge v_{\alpha + 1}|^{2}dx +
\beta
\int_{\mathbb{R}}\partial^{5}\xi \,|\wedge v_{\alpha}|^{2}dx \\
& = & 2\,\omega \,Im\int_{\mathbb{R}}\partial\xi\,\wedge
\overline{v}_{\alpha + 2}\wedge v_{\alpha + 3}dx + 4\,\omega
\,Im\int_{\mathbb{R}}\partial\xi\wedge \overline{v}_{\alpha +
1}\wedge
v_{\alpha + 2}dx\\
& & +\,2\,\beta \,Im\int_{\mathbb{R}}\xi\wedge\overline {v}_{\alpha
+ 1}
\wedge v_{\alpha + 2}dx + 2\,\beta \,Im\int_{\mathbb{R}}\partial \xi
\wedge\overline{v}_{\alpha}\wedge v_{\alpha + 2}dx\\
& & -\,2\,\omega \,Im\int_{\mathbb{R}}\partial^{3}\xi \wedge
\overline{v}_{\alpha}\wedge v_{\alpha + 1}dx + 2\,\sum_{j=3}^{\alpha
+ 2}h^{(j)}\,Im\int_{\mathbb{R}}\xi \,
\overline{v}_{\alpha}\wedge v_{j}dx \\
& & +\; 2\,Im\int_{\mathbb{R}}\xi \,(|z|^{2})_{\alpha +
2}v_{\alpha}\wedge\overline{v}dx + 2\,Im\int_{\mathbb{R}}\xi \,
\overline{v}_{\alpha}p(\wedge z_{\alpha + 1},\,\ldots )\,dx\\
& \leq & |\omega |\int_{\mathbb{R}}\partial\xi \,|\wedge v_{\alpha +
2}|^{2}\,dx + |\omega |\int_{\mathbb{R}}\partial\xi |\wedge
v_{\alpha + 3}|^{2}dx + 2\,|\omega |\int_{\mathbb{R}}\partial\xi
\,|\wedge v_{\alpha + 1}|^{2}dx \\
& & +\;2\,|\omega |\int_{\mathbb{R}}\partial\xi |\wedge v_{\alpha +
2}|^{2}dx + |\beta |\int_{\mathbb{R}}\xi \,|\wedge v_{\alpha +
1}|^{2}dx
+ |\beta |\int_{\mathbb{R}}\xi \,|\wedge v_{\alpha + 2}|^{2}dx\\
& & +\,|\beta |\int_{\mathbb{R}}\partial \xi |\wedge
v_{\alpha}|^{2}dx + |\beta |\int_{\mathbb{R}}\partial \xi |\wedge
v_{\alpha + 2}|^{2}dx + |\omega
|\int_{\mathbb{R}}\partial^{3}\xi |\wedge v_{\alpha}|^{2}dx\\
& & +\;|\omega |\int_{\mathbb{R}}\partial^{3}\xi |\wedge v_{\alpha
+ 1}|^{2}dx + 2\,\left |\sum_{j=3}^{\alpha +
2}h^{(j)}\int_{\mathbb{R}}\xi\,\overline{v}_{\alpha}\wedge
v_{j}dx\right | + 2\,\left |\int_{\mathbb{R}}\xi \,(|z|^{2})_{\alpha
+ 2}\,v_{\alpha}\wedge\overline{v}dx\right |
\\
& & +\;2\,\left |\int_{\mathbb{R}}\xi
\,\overline{v}_{\alpha}p(\wedge z_{\alpha + 1},\,\ldots )\,dx\right
|
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
\lefteqn{\partial_{t}\int_{\mathbb{R}}\xi \,|v_{\alpha}|^{2}dx } \\
& \leq & -\int_{\mathbb{R}}(3\,\beta - |\omega |)\partial \xi
\,|\wedge v_{\alpha + 3}|^{2}dx + \int_{\mathbb{R}}[\beta
\,\partial^{2}\xi - 6\,\beta \,\partial \xi + 3\,|\omega
|\,\partial \xi + |\beta |\,\partial \xi
+ |\beta |\,\xi ]\,|\wedge v_{\alpha + 2}|^{2}dx\\
& & +\int_{\mathbb{R}}[5\beta\partial^{3}\xi + |\omega
|\partial^{3}\xi + 2\,|\omega |\partial\xi + |\beta |\,\xi
]\,|\wedge v_{\alpha + 1}|^{2}dx +
\int_{\mathbb{R}}[\partial_{t}\xi + \beta \,\partial^{5}\xi +
|\omega |\,\partial^{3}\xi
+ |\beta |\,\partial \xi ]\,|\wedge v_{\alpha }|^{2}dx\\
& & +\;2\left |\sum_{j=3}^{\alpha + 2}h^{(j)}\int_{\mathbb{R}}\xi
\,\overline{v}_{\alpha}\wedge v_{j}dx\right | + 2\left
|\int_{\mathbb{R}}\xi \,(|z|^{2})_{\alpha +
2}\,v_{\alpha}\wedge\overline{v}dx\right | + 2\left
|\int_{\mathbb{R}}\xi \,\overline{v}_{\alpha}p(\wedge z_{\alpha +
1},\,\ldots )\,dx\right |.
\end{eqnarray*}
Using that $|\omega |<3\,\beta ,$ the first term on
the right-hand side of the above expression is nonpositive.
Hence,
\begin{eqnarray*}
\lefteqn{\partial_{t}\int_{\mathbb{R}}\xi \,|v_{\alpha}|^{2}dx } \\
& \leq & \int_{\mathbb{R}}[\beta \,\partial^{2}\xi - 6\,\beta
\,\partial \xi + 3\,|\omega |\,\partial \xi + |\beta |\,\partial
\xi + |\beta |\,\xi ]\,|\wedge v_{\alpha + 2}|^{2}dx \\
& & +\int_{\mathbb{R}}[5\,\beta \,\partial^{3}\xi + |\omega
|\,\partial^{3}\xi + 2\,|\omega |\,\partial \xi + |\beta |\,\xi
]\,|\wedge v_{\alpha + 1}|^{2}dx +
\int_{\mathbb{R}}[\partial_{t}\xi + \beta \,\partial^{5}\xi +
|\omega |\,\partial^{3}\xi
+ |\beta |\,\partial \xi ]\,|\wedge v_{\alpha }|^{2}dx\\
& & +\,2\left |\sum_{j=3}^{\alpha + 2}h^{(j)}\int_{\mathbb{R}}\xi
\,\overline{v}_{\alpha}\wedge v_{j}dx\right | + 2\left
|\int_{\mathbb{R}}\xi \,(|z|^{2})_{\alpha +
2}\,v_{\alpha}\wedge\overline{v}\,dx\right | + 2\left
|\int_{\mathbb{R}}\xi\,\overline{v}_{\alpha}p(\wedge z_{\alpha +
1},\,\ldots)\,dx\right |.
\end{eqnarray*}
Using that $\wedge v_{n} =\wedge v_{n - 2} - v_{n - 2}$ and a
standard estimate, the lemma follows.
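For instance, writing $\wedge v_{\alpha + 2}=\wedge v_{\alpha} -
v_{\alpha}$ and using that $\wedge $ has norm at most $1$ on
$L^{2}(\mathbb{R}),$ a typical weighted term is controlled as
follows (a sketch, using only the bound $\xi \leq \gamma_{2};$ the
terms involving derivatives of $\xi $ are treated analogously with
the corresponding bounds on $\partial^{k}\xi $):
\begin{eqnarray*}
\int_{\mathbb{R}}\xi \,|\wedge v_{\alpha + 2}|^{2}dx\leq
2\int_{\mathbb{R}}\xi \,|\wedge v_{\alpha}|^{2}dx +
2\int_{\mathbb{R}}\xi \,|v_{\alpha}|^{2}dx\leq
4\,\gamma_{2}\,||v||_{\alpha}^{2}.
\end{eqnarray*}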
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\setcounter{equation}{0}\section{Uniqueness and Existence of a Local
Solution} In this section, we study the uniqueness and the existence
of local strong solutions in the Sobolev space $H^{N}(\mathbb{R})$
for $N\geq 3$ for the problem \eqref{e204}. To establish the
existence of strong solutions for
\eqref{e204} we use the a priori estimate together with an
approximation procedure.\\
\\
{\bf Theorem 5.1} (Uniqueness). {\it Let $|\omega |<3\,\beta ,$
$u_{0}(x)\in H^{N}(\mathbb{R})$ with $N\geq 3$ and $0<T<+\infty .$
Then there is at most one strong solution $u\in
L^{\infty}([0,\,T]:\,H^{N}(\mathbb{R}))$ of \eqref{e204} with
initial data
$u(x,\,0)=u_{0}(x).$}\\
\\
{\it Proof.} Assume that $u,\,v\in
L^{\infty}([0,\,T]:\,H^{N}(\mathbb{R}))$ are two solutions of
\eqref{e204} with $u_{t},$ $v_{t}$ $\in L^{\infty}([0,\,T]:\,H^{N -
3}(\mathbb{R})),$ and with the same initial data. Then
\begin{eqnarray}
\label{e501}i\,(u - v)_{t} + i\,\beta \,(u - v)_{3} + \omega \,(u -
v)_{2} + |u|^{2}\,u - |v|^{2}\,v = 0
\end{eqnarray}
with $(u - v)(x,\,0)=0.$ By \eqref{e501}
\begin{eqnarray*}
i\,(u - v)_{t} + i\,\beta \,(u - v)_{3} +
\omega \,(u - v)_{2} + |u|^{2}\,(u - v) + (|u|^{2} - |v|^{2})\,v = 0
\end{eqnarray*}
or
\begin{eqnarray}
\label{e502}i\,(u - v)_{t} + i\,\beta \,(u - v)_{3} + \omega \,(u -
v)_{2} + |u|^{2}\,(u - v) + (|u| - |v|)\,(|u| + |v|)\,v = 0.
\end{eqnarray}
Multiplying \eqref{e502} by $\xi\overline{(u - v)}$ we have
\begin{eqnarray*}
\lefteqn{i\,\xi \,\overline{(u - v)}\,(u - v)_{t} + i\,\beta \,\xi
\, \overline{(u - v)}\,(u - v)_{3} + \omega \,\xi \,\overline{(u -
v)}\,
(u - v)_{2} } \\
& & +\,\xi \,|u|^{2}\,|u - v|^{2} +
\xi \,\overline{(u - v)}\,(|u| - |v|)\,(|u| + |v|)\,v = 0.\\
\\
\lefteqn{-\,i\,\xi \,(u - v)\,\overline{(u - v)}_{t} - i\,\beta \,
\xi \,(u - v)\,\overline{(u - v)}_{3} + \omega \,\xi \,
(u - v)\,\overline{(u - v)}_{2} } \\
& & +\,\xi \,|u|^{2}\,|u - v|^{2} + \xi \,(u - v)\,(|u| - |v|)\,(|u| +
|v|)\,\overline{v} = 0.\quad (\mbox{taking the complex conjugate})
Subtracting and integrating over $x\in \mathbb{R}$ we obtain
\begin{eqnarray}
\lefteqn{i\,\partial_{t}\int_{\mathbb{R}}\xi \,|u - v|^{2}dx -
i\int_{\mathbb{R}}\partial_{t}\xi\,|u - v|^{2}dx + i\,\beta
\int_{\mathbb{R}}\xi \,\overline{(u - v)}\,(u - v)_{3}dx } \nonumber \\
& & +\;i\,\beta \int_{\mathbb{R}}\xi \,(u - v)\,\overline{(u -
v)}_{3}dx +\,\omega \int_{\mathbb{R}}\xi \,\overline{(u - v)}\,(u -
v)_{2}dx \nonumber \\
\label{e503}& & -\,\omega \int_{\mathbb{R}}\xi \,(u -
v)\,\overline{(u - v)}_{2}dx + 2\,i\,Im\int_{\mathbb{R}}\xi
\,\overline{(u - v)}\,(|u| - |v|)\,(|u| + |v|)\,v\,dx = 0.\quad
\end{eqnarray}
Each term is treated separately, integrating by parts
\begin{eqnarray*}
\lefteqn{\int_{\mathbb{R}}\xi \,\overline{(u - v)}\,(u - v)_{3}dx} \\
& = & \int_{\mathbb{R}}\partial^{2}\xi\,\overline{(u - v)}\,(u -
v)_{1}dx + 2\int_{\mathbb{R}}\partial \xi \,|(u - v)_{1}|^{2}dx +
\int_{\mathbb{R}}\xi \,(u - v)_{1}\,\overline{(u - v)}_{2}dx.
\end{eqnarray*}
The other terms are calculated in a similar way. Hence in
\eqref{e503} we have
\begin{eqnarray*}
\lefteqn{i\,\partial_{t}\int_{\mathbb{R}}\xi \,|u - v|^{2}dx -
i\int_{\mathbb{R}}\partial_{t}\xi\,|u - v|^{2}dx + i\,\beta
\int_{\mathbb{R}}\partial^{2}\xi \,\overline{(u - v)}\,(u - v)_{1}dx } \\
& & +\,2\,i\,\beta \int_{\mathbb{R}}\partial \xi \,|(u -
v)_{1}|^{2}dx + i\,\beta \int_{\mathbb{R}}\xi\,(u -
v)_{1}\,\overline{(u - v)}_{2}dx + i\,\beta
\int_{\mathbb{R}}\partial^{2}\xi\,(u - v)\,\overline{(u -
v)}_{1}dx \\
& & +\,i\,\beta \int_{\mathbb{R}}\partial \xi\,|(u - v)_{1}|^{2}dx
- i\,\beta \int_{\mathbb{R}}\xi\,(u - v)_{1}\,\overline{(u -
v)}_{2}dx - \omega \int_{\mathbb{R}}\partial \xi \,\overline{(u -
v)}\,(u - v)_{1}dx
\\
& & -\,\omega \int_{\mathbb{R}}\xi\,|(u - v)_{1}|^{2}dx + \omega
\int_{\mathbb{R}}\partial \xi \,(u - v)\,\overline{(u - v)}_{1}dx
+ \omega
\int_{\mathbb{R}}\xi\,|(u - v)_{1}|^{2}dx \\
& & +\,2\,i\,Im\int_{\mathbb{R}}\xi \,\overline{(u - v)}\,(|u| -
|v|)\,(|u| + |v|)\,v\,dx = 0
\end{eqnarray*}
then
\begin{eqnarray*}
\lefteqn{i\,\partial_{t}\int_{\mathbb{R}}\xi \,|u - v|^{2}dx -
i\int_{\mathbb{R}}\partial_{t}\xi\,|u - v|^{2}dx + i\,\beta
\int_{\mathbb{R}}\partial^{2}\xi \,(|u - v|^{2})_{1}dx + 3\,i\,\beta
\int_{\mathbb{R}}\partial \xi \,|(u -
v)_{1}|^{2}dx } \\
& & -\;2\,i\,\omega \,Im\int_{\mathbb{R}}\partial \xi \,
\overline{(u - v)}\,(u - v)_{1}dx + 2\,i\,Im\int_{\mathbb{R}}\xi \,
\overline{(u - v)}\,(|u| - |v|)\,(|u| + |v|)\,v\,dx = 0
\end{eqnarray*}
that is,
\begin{eqnarray*}
\lefteqn{\partial_{t}\int_{\mathbb{R}}\xi \,|u - v|^{2}dx -
\int_{\mathbb{R}}\partial_{t}\xi\,|u - v|^{2}\,dx + \beta
\int_{\mathbb{R}}\partial^{2}\xi\,(|u - v|^{2})_{1}dx + 3\,\beta
\int_{\mathbb{R}}
\partial\xi\,|(u - v)_{1}|^{2}dx } \\
& = & 2\,\omega\,Im\int_{\mathbb{R}}\partial\xi\,\overline{(u -
v)}\, (u - v)_{1}dx - 2\,Im\int_{\mathbb{R}}\xi\,\overline{(u -
v)}\, (|u| - |v|)\,(|u| + |v|)\,v\,dx \\
& \leq & |\omega|\int_{\mathbb{R}}\partial\xi\,|u - v|^{2}dx +
|\omega |\int_{\mathbb{R}}\partial\xi\,|(u - v)_{1}|^{2}dx +
2\int_{\mathbb{R}}\xi \,|u - v|\,|\;|u| - |v|\;|\,(|u| +
|v|)\,|v|\,dx.
\end{eqnarray*}
Using that $|\;|u| - |v|\;|\leq |u - v|,$ \eqref{e203}
and standard estimates, we have
\begin{eqnarray*}
& & \partial_{t}\int_{\mathbb{R}}\xi \,|u - v|^{2}dx +
\int_{\mathbb{R}}[3\,\beta - |\omega|\,]\,\partial \xi \,|(u -
v)_{1}|^{2}dx \leq c\int_{\mathbb{R}}\xi \,|u - v|^{2}dx.
\end{eqnarray*}
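In other words, setting $\Phi(t)=\int_{\mathbb{R}}\xi \,|u -
v|^{2}dx$ and dropping the second term on the left-hand side,
which is nonnegative since $|\omega |<3\,\beta $ (a sketch,
assuming as in the construction of the weight that $\partial \xi
\geq 0$), we arrive at
\begin{eqnarray*}
\Phi'(t)\leq c\,\Phi(t),\qquad \Phi(0)=0.
\end{eqnarray*}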
Integrating in $t\in [0,\,T],$ using that $(u - v)$
vanishes at $t=0,$ and applying Gronwall's inequality, it follows
that $u=v.$ This proves the
uniqueness of the solution.\\
\\
We construct the mapping ${\cal
Z}:L^{\infty}([0,\,T]:\,H^{s}(\mathbb{R})) \longrightarrow
L^{\infty}([0,\,T]:\,H^{s}(\mathbb{R}))$ where the initial
condition is given by $u^{(n)}(x,\,0)=u_{0}(x)$ and the iterates
are defined by
\begin{eqnarray*}
u^{(0)} & = & u_{0}(x)\\
u^{(n)} & = & {\cal Z}(u^{(n - 1)})\qquad n\geq 1,
\end{eqnarray*}
where $u^{(n - 1)}$ plays the role of $z$ in equation \eqref{e403}
and $u^{(n)}$ that of the corresponding solution $v.$ That is
\begin{eqnarray*}
\lefteqn{-\,i\,u_{t}^{(n)} + i\,\beta \,\wedge u_{5}^{(n)} + \omega
\,\wedge u_{4}^{(n)} + (|\wedge u^{(n - 1)}|^{2})_{2}\wedge u^{(n)}
+ 2\,(|\wedge u^{(n - 1)}|^{2})_{1}\wedge
u_{1}^{(n)} } \\
& & +\;|\wedge u^{(n - 1)}|^{2}\,\wedge u_{2}^{(n)} - (i\,\beta
\,\wedge u_{3}^{(n)} + \omega \,\wedge u_{2}^{(n)} + |\wedge u^{(n -
1)}|^{2}\,\wedge u^{(n)})=0.
\end{eqnarray*}
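Note that, by Leibniz's rule, the three terms involving $|\wedge
u^{(n - 1)}|^{2}$ combine into $\partial^{2}(|\wedge u^{(n -
1)}|^{2}\,\wedge u^{(n)}),$ so that the above equation can be
written compactly (a reformulation, using also $\wedge
u_{5}^{(n)}=\partial^{2}\wedge u_{3}^{(n)}$ and $\wedge
u_{4}^{(n)}=\partial^{2}\wedge u_{2}^{(n)}$) as
\begin{eqnarray*}
i\,u_{t}^{(n)} + (I - \partial^{2})\left(i\,\beta \,\wedge
u_{3}^{(n)} + \omega \,\wedge u_{2}^{(n)} + |\wedge u^{(n -
1)}|^{2}\,\wedge u^{(n)}\right)=0,
\end{eqnarray*}
which is the form used later when passing to the limit.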
By Lemma 4.1, $u^{(n)}$ exists and is unique in $C((0,\,+\infty
):\,H^{N}(\mathbb{R})).$ A choice of $c_{0}$ and the use of the a
priori estimate in Section 4 shows that ${\cal
Z}:\mathbb{B}_{c_{0}}(0)\longmapsto \mathbb{B}_{c_{0}}(0)$ where
$\mathbb{B}_{c_{0}}(0)$ is a bounded ball in
$L^{\infty}([0,\,T]:\,H^{s}(\mathbb{R})).$\\
\\
{\bf Theorem 5.2} (Local solution). {\it Let $|\omega |<3\,\beta$
and $N$ an integer $\geq 3.$ If $u_{0}(x) \in H^{N}(\mathbb{R}),$
then there is $T>0$ and $u$ such that $u$ is a strong solution of
\eqref{e204},
$u\in L^{\infty}([0,\,T]:\,H^{N}(\mathbb{R}))$ and $u(x,\,0)=u_{0}(x).$}\\
\\
{\it Proof.} We prove that for $u_{0}(x)\in
H^{\infty}(\mathbb{R})=\bigcap_{k\geq 0}H^{k}(\mathbb{R})$ there
exists a solution $u\in L^{\infty}([0,\,T]:\,H^{N}(\mathbb{R}))$
with initial data $u(x,\,0)=u_{0}(x)$ where the time of existence
$T>0$ only depends on the norm of $u_{0}(x).$ We define a sequence
of approximations to equation \eqref{e403} as
\begin{eqnarray}
i\,v_{t}^{(n)} & = & i\,\beta \wedge v_{5}^{(n)} + \omega \,\wedge
v_{4}^{(n)} - i\,\beta \wedge v_{3}^{(n)}
- \omega \wedge v_{2}^{(n)} + |\wedge v^{(n - 1)}|^{2}\,\wedge v_{2}^{(n)} \nonumber \\
\label{e504}& & +\;O[\,(|\wedge v^{(n - 1)}|^{2})_{2},\,(|\wedge
v^{(n - 1)}|^{2})_{1},\,\ldots \,]
\end{eqnarray}
where the initial condition is $v^{(n)}(x,\,0) = u_{0}(x) -
\partial^{2}u_{0}(x).$ The first approximation is given by
$v^{(0)}(x,\,t) = u_{0}(x) - \partial^{2}u_{0}(x).$ Equation
\eqref{e504} is a linear equation at each iteration which can be
solved in any interval of time in which the coefficients are
defined. This is shown in Lemma 4.1. By Lemma 4.2, it follows that
\begin{eqnarray}
\label{e505}\partial_{t}\int_{\mathbb{R}}\xi
\,|v_{\alpha}^{(n)}|^{2}dx\leq G(||v^{(n -
1)}||_{\lambda})\,||v^{(n)}||_{\alpha}^{2} + F(||v^{(n -
1)}||_{\alpha}).
\end{eqnarray}
Choose $\alpha =1$ and let $c\geq ||u_{0} -
\partial^{2}u_{0}||_{1}\geq ||u_{0}||_{3}.$
For each iterate $n,$ $||v^{(n)}(\,\cdot\,,\,t)||_{1}$ is continuous
in $t\in [0,\,T]$ and $||v^{(n)}(\,\cdot\,,\,0)||_{1}\leq c.$ Define
$c_{0}=\frac{\gamma_{2}}{2\,\gamma_{1}}\,c^{2} + 1.$ Let
$T_{0}^{(n)}$ be the maximum time such that $||v^{(k)}(\,\cdot
\,,\,t)||_{1}\leq c_{0}$ for $0\leq t\leq T_{0}^{(n)},$ $0\leq
k\leq n.$ Integrating \eqref{e505} over $[0,\,t]$ we have that for
$0\leq t\leq T_{0}^{(n)}$ and $j=0,\,1$
\begin{eqnarray*}
\int_{0}^{t}\left(\partial_{s}\int_{\mathbb{R}}\xi
\,|v_{j}^{(n)}|^{2}dx\right)ds\leq \int_{0}^{t}G\left(||v^{(n -
1)}||_{1}\right)||v^{(n)}||_{j}^{2}ds + \int_{0}^{t}F\left(||v^{(n -
1)}||_{j}\right)ds.
\end{eqnarray*}
It follows that
\begin{eqnarray*}
\int_{\mathbb{R}}\xi(x,\,t)|v_{j}^{(n)}(x,\,t)|^{2}dx & \leq &
\int_{\mathbb{R}}\xi(x,\,0)|v_{j}^{(n)}(x,\,0)|^{2}dx +
\int_{0}^{t}G\left(||v^{(n - 1)}||_{1}\right)||v^{(n)}||_{j}^{2}ds
\\
& & + \int_{0}^{t}F\left(||v^{(n - 1)}||_{j}\right)ds
\end{eqnarray*}
hence
\begin{eqnarray*}
\gamma_{1}\int_{\mathbb{R}}|v_{j}^{(n)}(x,\,t)|^{2}dx & \leq &
\int_{\mathbb{R}}\xi(x,\,t)|v_{j}^{(n)}(x,\,t)|^{2}dx \\
& \leq & \int_{\mathbb{R}}\xi(x,\,0)|v_{j}^{(n)}(x,\,0)|^{2}dx +
\int_{0}^{t}G\left(||v^{(n - 1)}||_{1}\right)||v^{(n)}||_{j}^{2}ds
\\
& & + \int_{0}^{t}F\left(||v^{(n - 1)}||_{j}\right)ds
\end{eqnarray*}
and
\begin{eqnarray*}
\int_{\mathbb{R}}|v_{j}^{(n)}|^{2}dx \leq
\frac{\gamma_{2}}{\gamma_{1}}\int_{\mathbb{R}}|v_{j}^{(n)}(x,\,0)|^{2}dx
+ \frac{G(c_{0})}{\gamma_{1}}\,c_{0}^{2}\,t +
\frac{F(c_{0})}{\gamma_{1}}\,t
\end{eqnarray*}
and we obtain for $j=0,\,1$ that
\begin{eqnarray*}
||v^{(n)}||_{1}^{2}\leq \frac{\gamma_{2}}{\gamma_{1}}\,c^{2} +
\frac{G(c_{0})}{\gamma_{1}}\,c_{0}^{2}\,t + \frac{F(c_{0})}{\gamma_{1}}\,t.
\end{eqnarray*}
{\it Claim.} $T_{0}^{(n)}$ does not approach $0.$\\
On the contrary, assume that $T_{0}^{(n)}\rightarrow 0.$ Since
$||v^{(n)}(\,\cdot \,,\,t)||_{1}$ is continuous for $t\geq 0,$ there
exists $\tau \in [0,\,T_{0}^{(n)}]$ such that
$||v^{(k)}(\,\cdot\,,\,\tau)||_{1}=c_{0}$ for some $0\leq
k\leq n.$ Then
\begin{eqnarray*}
c_{0}^{2}\leq \frac{\gamma_{2}}{\gamma_{1}}\,c^{2} +
\frac{G(c_{0})}{\gamma_{1}}\,c_{0}^{2}\,T_{0}^{(n)} +
\frac{F(c_{0})}{\gamma_{1}}\,T_{0}^{(n)}
\end{eqnarray*}
Letting $n\rightarrow \infty ,$ we obtain
\begin{eqnarray*}
\left(\frac{\gamma_{2}}{2\,\gamma_{1}}\,c^{2} + 1\right)^{2} \leq
\frac{\gamma_{2}}{\gamma_{1}}\,c^{2}\quad\mbox{then}\quad
\frac{\gamma_{2}^{2}}{4\,\gamma_{1}^{2}}\,c^{4} + 1\leq 0
\end{eqnarray*}
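Indeed, writing $A=\frac{\gamma_{2}}{2\,\gamma_{1}}\,c^{2},$ so
that $c_{0}=A + 1$ and $\frac{\gamma_{2}}{\gamma_{1}}\,c^{2}=2\,A,$
the last step is the elementary equivalence
\begin{eqnarray*}
(A + 1)^{2}\leq 2\,A\qquad \Longleftrightarrow \qquad A^{2} + 1\leq 0
\end{eqnarray*}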
which is a contradiction. Consequently $T_{0}^{(n)}\not \rightarrow 0.$
Choosing $T=T(c)$ sufficiently small, and $T$ not depending on $n,$ one
concludes that
\begin{eqnarray}
\label{e506}||v^{(n)}||_{1}\leq C
\end{eqnarray}
for $0\leq t\leq T.$ This shows that $T_{0}^{(n)}\geq T.$ Hence,
from \eqref{e506} we deduce that there exists a subsequence
$v^{(n_{j})}\equiv v^{(n)}$ such that
\begin{eqnarray}
\label{e507}v^{(n)}\stackrel{*}\rightharpoonup v\quad
\mbox{weakly-$*$ in}\quad L^{\infty}([0,\,T]:\,H^{1}(\mathbb{R})).
\end{eqnarray}
{\it Claim.} $u=\wedge v$ is a solution.\\
\\
In the linearized equation \eqref{e504} we have
\begin{eqnarray*}
\wedge v_{5}^{(n)}=\wedge(I - (I - \partial^{2}))v_{3}^{(n)} =
\wedge v_{3}^{(n)} - v_{3}^{(n)} =
\partial^{2}(\underbrace{\wedge v_{1}^{(n)}}_{\in L^{2}(\mathbb{R})}) -
\underbrace{ \partial^{2}(v_{1}^{(n)})}_{\in H^{-2}(\mathbb{R})}\in
H^{-2}(\mathbb{R}).
\end{eqnarray*}
Since $\;\wedge= (I - \partial^{2})^{-1}\;$ is bounded in
$\;H^{1}(\mathbb{R}),\;$ $\;\wedge v_{5}^{(n)}\;$ belongs to
$\;H^{-2}(\mathbb{R}).\;$ Moreover, $\;v^{(n)}\;$ is bounded in
$L^{\infty}([0,\,T]:\;H^{1}(\mathbb{R}))\hookrightarrow
L^{2}([0,\,T]:\;H^{1}(\mathbb{R}))$ and since
$\wedge:L^{2}(\mathbb{R})\rightarrow H^{2}(\mathbb{R})$ is a bounded
operator,
\begin{eqnarray*}
||\wedge v_{1}^{(n)}||_{H^{2}(\mathbb{R})}\leq
c\,||v_{1}^{(n)}||_{L^{2}(\mathbb{R})}\leq c\,||
v^{(n)}||_{H^{1}(\mathbb{R})}.
\end{eqnarray*}
Consequently, $\wedge v_{1}^{(n)}$ is bounded in
$L^{2}([0,\,T]:\,H^{2}(\mathbb{R}))\hookrightarrow
L^{2}([0,\,T]:\,L^{2}(\mathbb{R})).$ It follows that
$\partial^{2}(\wedge v_{1}^{(n)})$ is bounded in
$L^{2}([0,\,T]:\,H^{-2}(\mathbb{R})),$ and
\begin{eqnarray}
\label{e508}\wedge v_{5}^{(n)}\quad \mbox{is bounded in}\quad
L^{2}([0,\,T]:\;H^{-2}(\mathbb{R})).
\end{eqnarray}
Similarly, the other terms are bounded. By \eqref{e504},
$v_{t}^{(n)}$ is a sum of terms, each of which is the product of a
coefficient uniformly bounded in $n$ and a function in
$L^{2}([0,\,T]:\,H^{-2}(\mathbb{R}))$ uniformly bounded in $n,$
so that $v_{t}^{(n)}$ is bounded in
$L^{2}([0,\,T]:\,H^{-2}(\mathbb{R})).$ On the other hand,
$H_{loc}^{1}(\mathbb{R})\stackrel{c}\hookrightarrow
H_{loc}^{1/2}(\mathbb{R})\hookrightarrow H^{-4}(\mathbb{R}).$ By
the Aubin--Lions compactness theorem \cite{li1} there is a
subsequence $v^{(n_{j})}\equiv v^{(n)}$ such that
$v^{(n)}\rightarrow v$ strongly on
$L^{2}([0,\,T]:\,H_{loc}^{1/2}(\mathbb{R})).$ Hence, for a
subsequence $v^{(n_{j})}\equiv v^{(n)},$ we have
$v^{(n)}\rightarrow v$ a.e. in
$L^{2}([0,\,T]:\,H_{loc}^{1/2}(\mathbb{R})).$ Moreover, from
\eqref{e508}, $\wedge v_{5}^{(n)}\rightharpoonup \wedge v_{5}$
weakly in $L^{2}([0,\,T]:\,H^{-2}(\mathbb{R})).$ Similarly,
$\wedge v_{2}^{(n)}\rightharpoonup \wedge v_{2}$ weakly in
$L^{2}([0,\,T]:\,H^{-2}(\mathbb{R})).$ Since $||\wedge
v^{(n)}||_{H^{2}(\mathbb{R})}\leq c\,||
v^{(n)}||_{L^{2}(\mathbb{R})}\leq c\,||
v^{(n)}||_{H^{1/2}(\mathbb{R})}$ and $v^{(n)}\rightarrow v$
strongly on $L^{2}([0,\,T]:\,H_{loc}^{1/2}(\mathbb{R}))$ then
$\wedge v^{(n)}\rightarrow \wedge v$ strongly in
$L^{2}([0,\,T]:\,H_{loc}^{2}(\mathbb{R})).$ Thus, the fifth term
on the right hand side of \eqref{e504}, $|\wedge v^{(n -
1)}|^{2}\,\wedge v_{2}^{(n)}\rightharpoonup |\wedge v|^{2}\,\wedge
v_{2}$ weakly in $L^{2}([0,\,T]:\,L_{loc}^{1}(\mathbb{R}))$ as
$\wedge v_{2}^{(n)}\rightharpoonup \wedge v_{2}$ weakly in
$L^{2}([0,\,T]:\,H^{-2}(\mathbb{R}))$ and $|\wedge v^{(n -
1)}|^{2}\rightarrow |\wedge v|^{2}$ strongly on
$L^{2}([0,\,T]:\,H_{loc}^{2}(\mathbb{R})).$ Similarly, the other
terms in \eqref{e504} converge to their limits, implying
$v_{t}^{(n)}\rightharpoonup v_{t}$ weakly in
$L^{2}([0,\,T]:\,L_{loc}^{1}(\mathbb{R})).$ Passing to the limit
\begin{eqnarray*}
i\,v_{t} & = & \partial^{2}(i\,\beta \,\wedge v_{3} + \omega \,\wedge v_{2} +
|\wedge v|^{2}\,\wedge v) - (i\,\beta \,\wedge v_{3} +
\omega \,\wedge v_{2} + |\wedge v|^{2}\,\wedge v)\\
& = & -(I - \partial^{2})(i\,\beta \,\wedge v_{3} + \omega
\,\wedge v_{2} + |\wedge v|^{2}\,\wedge v).
\end{eqnarray*}
Thus $i\,v_{t} + (I - \partial^{2})(i\,\beta \,\wedge v_{3} +
\omega \,\wedge v_{2} + |\wedge v|^{2}\,\wedge v)=0.$ This way, we
have \eqref{e204} for $u=\wedge v.$\\
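Explicitly, since $\wedge =(I - \partial^{2})^{-1}$ commutes with
$\partial_{t}$ and with $\partial ,$ applying $\wedge $ to this
identity yields, for $u=\wedge v,$
\begin{eqnarray*}
i\,u_{t} + i\,\beta \,u_{3} + \omega \,u_{2} + |u|^{2}\,u=0,
\end{eqnarray*}
which is precisely \eqref{e204}.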
\\
Now, we prove that there exists a solution of \eqref{e204} with
$u\in L^{\infty}([0,\,T]:\,H^{N}(\mathbb{R}))$ and $N\geq 4,$
where $T$ depends only on the norm of $u_{0}$ in
$H^{3}(\mathbb{R}).$ We already know that there is a solution
$u\in L^{\infty}([0,\,T]:\,H^{3}(\mathbb{R})).$ It suffices to
show that the approximating sequence $v^{(n)}$ is bounded in
$L^{\infty}([0,\,T]:\,H^{N - 2}(\mathbb{R})).$ Taking $\alpha = N
- 2$ and considering \eqref{e505} for $\alpha \geq 2,$ we define
$c_{N - 3}=\frac{\gamma_{2}}{2\,\gamma_{1}}\,||u_{0}(\cdot)||_{N}^{2}
+ 1.$ Let $T_{N - 3}^{(n)}$ be the largest time such that
$||v^{(k)}(\,\cdot\,,\,t)||_{\alpha}\leq c_{N - 3}$ for $0\leq
t\leq T_{N - 3}^{(n)},$ $0\leq k\leq n.$ Integrating \eqref{e505}
over $[0,\,t],$ for $0\leq t\leq T_{N - 3}^{(n)},$ we have
\begin{eqnarray*}
\int_{0}^{t}\left(\partial_{s}\int_{\mathbb{R}}\xi
\,|v_{\alpha}^{(n)}|^{2}dx\right)ds \leq \int_{0}^{t}G\left(||v^{(n
- 1)}||_{\alpha}\right)||v^{(n)}||_{\alpha}^{2}ds +
\int_{0}^{t}F\left(||v^{(n - 1)}||_{\alpha}\right)ds.
\end{eqnarray*}
It follows that
\begin{eqnarray*}
\int_{\mathbb{R}}\xi (x,\,t)\,|v_{\alpha}^{(n)}|^{2}dx & \leq &
\int_{\mathbb{R}}\xi (x,\,0)\, |v_{\alpha}^{(n)}(x,\,0)|^{2}dx +
\int_{0}^{t}G\left(||v^{(n - 1)}||_{\alpha}\right)
||v^{(n)}||_{\alpha}^{2}ds \\
& & + \int_{0}^{t}F\left(||v^{(n - 1)}||_{\alpha}\right)ds
\end{eqnarray*}
hence
\begin{eqnarray*}
\gamma_{1}\int_{\mathbb{R}}|v_{\alpha}^{(n)}|^{2}dx\leq
\int_{\mathbb{R}}\xi \,|v_{\alpha}^{(n)}|^{2}dx & \leq &
\int_{\mathbb{R}}\xi (x,\,0)\, |v_{\alpha}^{(n)}(x,\,0)|^{2}dx +
\int_{0}^{t}G
\left(||v^{(n - 1)}||_{\alpha}\right)||v^{(n)}||_{\alpha}^{2}ds \\
& & +\int_{0}^{t}F\left(||v^{(n - 1)}||_{\alpha}\right)ds
\end{eqnarray*}
then
\begin{eqnarray*}
\lefteqn{\int_{\mathbb{R}}|v_{\alpha}^{(n)}|^{2}dx\leq
\frac{\gamma_{2}}{\gamma_{1}}
\int_{\mathbb{R}}|v_{\alpha}^{(n)}(x,\,0)|^{2}dx + \frac{G(c_{N -
3})}{\gamma_{1}}\,c_{N - 3}^{2}\,t +
\frac{F(c_{N - 3})}{\gamma_{1}}\,t } \\
& \leq & \frac{\gamma_{2}}{\gamma_{1}}\,||v^{(n)}(\,\cdot\,,\,0)||_{\alpha}^{2}
+ \frac{G(c_{N - 3})}{\gamma_{1}}\,c_{N - 3}^{2}\,t +
\frac{F(c_{N - 3})}{\gamma_{1}}\,t\\
& \leq & \frac{\gamma_{2}}{\gamma_{1}}\,||u(x,\,0)||_{N}^{2} +
\frac{G(c_{N - 3})}{\gamma_{1}}\,c_{N - 3}^{2}\,t +
\frac{F(c_{N - 3})}{\gamma_{1}}\,t
\end{eqnarray*}
and we obtain
\begin{eqnarray*}
||v^{(n)}(\,\cdot \,,\,t)||_{\alpha}^{2}\leq
\frac{\gamma_{2}}{\gamma_{1}}\,||u(x,\,0)||_{N}^{2} + \frac{G(c_{N -
3})}{\gamma_{1}}\,c_{N - 3}^{2}\,t + \frac{F(c_{N -
3})}{\gamma_{1}}\,t.
\end{eqnarray*}
{\it Claim.} $T_{N - 3}^{(n)}$ does not approach $0.$\\
On the contrary, assume that $T_{N - 3}^{(n)}\rightarrow 0.$ Since
$||v^{(n)}(\,\cdot\,,\,t)||_{\alpha}$ is continuous for $t\geq 0,$
there exists $\tau \in [0,\,T_{N - 3}^{(n)}]$ such that
$||v^{(k)}(\,\cdot \,,\,\tau)||_{\alpha}=c_{N - 3}$ for some
$0\leq k\leq n.$ Then
\begin{eqnarray*}
c_{N - 3}^{2}\leq \frac{\gamma_{2}}{\gamma_{1}}\,||u(x,\,0)||_{N}^{2} +
\frac{G(c_{N - 3})}{\gamma_{1}}\,c_{N - 3}^{2}\,T_{N - 3}^{(n)} +
\frac{F(c_{N - 3})}{\gamma_{1}}\,T_{N - 3}^{(n)}
\end{eqnarray*}
Letting $n\rightarrow +\infty ,$ we obtain
\begin{eqnarray*}
\left(\frac{\gamma_{2}}{2\,\gamma_{1}}\,||u(x,\,0)||_{N}^{2} + 1\right )^{2}
\leq \frac{\gamma_{2}}{\,\gamma_{1}}\,||u(x,\,0)||_{N}^{2}\quad \mbox{then}\quad
\frac{\gamma_{2}^{2}}{4\,\gamma_{1}^{2}}\,||u(x,\,0)||_{N}^{4} + 1 \leq 0
\end{eqnarray*}
which is a contradiction. Then $T_{N - 3}^{(n)}\not\rightarrow 0.$
By choosing $T_{N - 3}=T_{N - 3}(||u(x,\,0)||_{N}^{2})$
sufficiently small, and $T_{N - 3}$ not depending on $n,$ we
conclude that
\begin{eqnarray}
\label{e509}||v^{(n)}(\,\cdot\,,\,t)||_{\alpha}^{2}\leq c_{N -
3}^{2}\quad \mbox{for all} \quad 0\leq t\leq T_{N - 3}.
\end{eqnarray}
This shows that $T_{N - 3}^{(n)}\geq T_{N - 3}.$ Thus,
\begin{eqnarray*}
v\in L^{\infty}([0,\,T_{N - 3}]:\,H^{\alpha}(\mathbb{R}))\equiv
L^{\infty}([0,\,T_{N - 3}]:\,H^{N - 2}(\mathbb{R})).
\end{eqnarray*}
Now, denote by $0\leq T_{N - 3}^{*}\leq +\infty $ the maximal
number such that for all $0<t\leq T_{N - 3}^{*},$ $u=\wedge v\in
L^{\infty}([0,\,t]:\,H^{N}(\mathbb{R})).$ In particular, $T_{N -
3}\leq T_{N - 3}^{*}$ for all $N\geq 4.$ Thus, $T$ can be chosen
depending only on the norm of $u_{0}$ in $H^{3}(\mathbb{R}).$
We approximate $u_{0}$ by $\{u_{0}^{(j)}\}\subset
C_{0}^{\infty}(\mathbb{R})$ such that $||u_{0} -
u_{0}^{(j)}||_{H^{N}(\mathbb{R})}\rightarrow 0$ as $j\rightarrow
+\infty.$ Let $u^{(j)}$ be a solution of \eqref{e204} with
$u^{(j)}(x,\,0)=u_{0}^{(j)}.$ According to the above argument,
there exists $T$ independent of $j$ and depending only on
$\sup_{j}||u_{0}^{(j)}||$ such that $u^{(j)}$ exists on
$[0,\,T],$ and a subsequence satisfies $u^{(j)}\stackrel{j\rightarrow
+\infty}\longrightarrow u$
in $L^{\infty}([0,\,T]:\,H^{N}(\mathbb{R})).$\\
\\
As a consequence of Theorems 5.1 and 5.2 and their proofs, one
obtains the
following result.\\
\\
{\bf Corollary 5.3.} {\it Let $|\omega |<3\,\beta $ and let
$u_{0}\in H^{N}(\mathbb{R})$ with $N\geq 3$ such that
$u_{0}^{(j)}\rightarrow u_{0}$ in $H^{N}(\mathbb{R}).$ Let $u$ and
$u^{(j)}$ be the corresponding unique solutions given by Theorems
5.1 and 5.2 in $L^{\infty}([0,\,T]:\,H^{N}(\mathbb{R}))$ with $T$
depending only on $\sup_{j}||u_{0}^{(j)}||_{H^{3}(\mathbb{R})}$
such that}
\begin{eqnarray*}
u^{(j)}\stackrel{*}\rightharpoonup u\quad weakly \;in
\quad L^{\infty}([0,\,T]:\,H^{N}(\mathbb{R})),\\
u^{(j)}\rightarrow u\quad strongly \;in \quad L^{2}([0,\,T]:\,H^{N +
1}(\mathbb{R})).
\end{eqnarray*}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\setcounter{equation}{0}\section{Existence of Global Solutions}
Here, we extend the local solution $u\in
L^{\infty}([0,\,T]:\,H^{N}(W_{0\;i\;0}))$ of \eqref{e204} obtained
in Theorem 5.2 to all $t\geq 0.$ A standard way to obtain such
extensions consists in deducing global estimates for the
$H^{N}(W_{0\;i\;0})$-norm of $u$ in terms of the
$H^{N}(W_{0\;i\;0})$-norm of $u(x,\,0)=u_{0}(x).$ These
estimates are frequently based on conservation laws which
control the $L^{2}$-norm of the solution and of its spatial
derivatives. That approach does not apply directly to the problem
of global existence here, because the
weight depends on both the $x$ and $t$ variables. To overcome this
difficulty we follow a different method, using Leibniz's
rule as in the proof of Theorem 3.1 of Bona and Saut \cite{bo1}.\\
\\
{\bf Theorem 6.1.} {\it For $|\omega |<3\,\beta$ there exists a
global solution to \eqref{e204} in the space
$H^{s}(\mathbb{R})\cap H^{N}(W_{0\;i\;0})$ with
$N$ integer $\geq 3$ and $s\geq 2.$}\\
\\
{\it Proof.} The first part was proved in \cite{bo1}.
Differentiating \eqref{e204} $\alpha $-times (for $\alpha \geq 0$)
over $x\in \mathbb{R}$ leads to
\begin{eqnarray}
\label{e601}i\,u_{\alpha \,t} + i\,\beta \,u_{\alpha + 3} + \omega
\,u_{\alpha + 2} + (|u|^{2})_{\alpha }\,u + \sum_{m=1}^{\alpha -
1}{\alpha\choose m}(|u|^{2})_{\alpha - m}\,u_{m} +
|u|^{2}\,u_{\alpha } = 0.
\end{eqnarray}
Let $\xi = \xi(x,\,t),$ then multiplying \eqref{e601} by $\xi
\,\overline{u}_{\alpha } $ we have
\begin{eqnarray*}
\lefteqn{i\,\xi \,\overline{u}_{\alpha }\,u_{\alpha \,t} + i\,\beta
\,\xi \, \overline{u}_{\alpha }\,u_{\alpha + 3} + \omega \,\xi \,
\overline{u}_{\alpha }\,u_{\alpha + 2} + (|u|^{2})_{\alpha }\,
\xi \,u\,\overline{u}_{\alpha } } \\
& & +\sum_{m=1}^{\alpha - 1}{\alpha\choose m} (|u|^{2})_{\alpha -
m}\,\xi \,u_{m}\,\overline{u}_{\alpha} + \xi \,|u|^{2}\,|u_{\alpha
}|^{2} = 0
\end{eqnarray*}
and
\begin{eqnarray*}
\lefteqn{-\,i\,\xi \,u_{\alpha }\,\overline{u}_{\alpha \,t} -
i\,\beta \,\xi \,u_{\alpha }\,\overline{u}_{\alpha + 3} + \omega
\,\xi \,u_{\alpha }\,\overline{u}_{\alpha + 2} + (|u|^{2})_{\alpha
}\,
\xi \,\overline{u}\,u_{\alpha } } \\
& & +\sum_{m=1}^{\alpha - 1}{\alpha\choose m}(|u|^{2})_{\alpha -
m}\,\xi \,\overline{u}_{m}\,u_{\alpha} + \xi \,|u|^{2}\,|u_{\alpha
}|^{2} = 0. \qquad (\mbox{taking the complex conjugate})
\end{eqnarray*}
Subtracting and integrating over $x\in \mathbb{R}$ we have
\begin{eqnarray}
\label{e602}\lefteqn{i\,\partial_{t}\int_{\mathbb{R}}\xi
\,|u_{\alpha }|^{2}dx - i\int_{\mathbb{R}}\partial_{t}\xi
\,|u_{\alpha }|^{2}dx + i\,\beta \int_{\mathbb{R}}\xi
\,\overline{u}_{\alpha }\,u_{\alpha + 3}dx + i\,\beta
\int_{\mathbb{R}}\xi \,u_{\alpha}\,\overline{u}_{\alpha + 3}dx +
\omega \int_{\mathbb{R}}\xi \,\overline{u}_{\alpha}\,u_{\alpha +
2}dx } \\
& & -\;\omega \int_{\mathbb{R}}\xi
\,u_{\alpha}\,\overline{u}_{\alpha + 2}dx +
2\,i\,Im\int_{\mathbb{R}}\xi \,(|u|^{2})_{\alpha
}\,u\,\overline{u}_{\alpha }dx + 2\,i\sum_{m=1}^{\alpha -
1}{\alpha\choose m}Im\int_{\mathbb{R}}\xi \,(|u|^{2})_{\alpha - m}
\,u_{m}\,\overline{u}_{\alpha}dx = 0. \nonumber
\end{eqnarray}
Each term is calculated separately, integrating by parts in the
second term we have
\begin{eqnarray*}
\int_{\mathbb{R}}\xi\,\overline{u}_{\alpha}\,u_{\alpha + 3}dx =
\int_{\mathbb{R}}\partial^{2}\xi
\,\overline{u}_{\alpha}\,u_{\alpha + 1}dx +
2\int_{\mathbb{R}}\partial\xi\,|u_{\alpha + 1}|^{2}dx +
\int_{\mathbb{R}}\xi\,\overline{u}_{\alpha + 2}\,u_{\alpha + 1}dx.
\end{eqnarray*}
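This integration-by-parts identity can be checked numerically for concrete rapidly decaying choices; in the sketch below, the hypothetical $f(x)=e^{-x^{2}+ix}$ stands for $u_{\alpha}$ and $\xi(x)=2+\tanh x$ is a sample bounded, increasing weight (both chosen only for illustration):

```python
import cmath, math

# hypothetical stand-ins: f plays the role of u_alpha, xi of the weight
def f(x):   return cmath.exp(-x * x + 1j * x)
def f1(x):  return (-2 * x + 1j) * f(x)                              # f'
def f2(x):  return ((-2 * x + 1j) ** 2 - 2) * f(x)                   # f''
def f3(x):  return ((-2 * x + 1j) ** 3 - 6 * (-2 * x + 1j)) * f(x)   # f'''
def xi(x):  return 2.0 + math.tanh(x)
def xi1(x): return 1.0 / math.cosh(x) ** 2                           # xi'
def xi2(x): return -2.0 * math.tanh(x) / math.cosh(x) ** 2           # xi''

def trapz(g, a=-8.0, b=8.0, n=16000):
    # trapezoidal rule; the integrands decay like exp(-2x^2), so
    # truncating at |x| = 8 is harmless
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b)) + sum(g(a + k * h) for k in range(1, n))
    return s * h

lhs = trapz(lambda x: xi(x) * f(x).conjugate() * f3(x))
rhs = (trapz(lambda x: xi2(x) * f(x).conjugate() * f1(x))
       + 2.0 * trapz(lambda x: xi1(x) * abs(f1(x)) ** 2)
       + trapz(lambda x: xi(x) * f2(x).conjugate() * f1(x)))
assert abs(lhs - rhs) < 1e-6
```

The boundary terms of the two integrations by parts vanish here because $f$ has Gaussian decay.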
The other terms are calculated in a similar way. Hence in
\eqref{e602}
\begin{eqnarray*}
\lefteqn{\partial_{t}\int_{\mathbb{R}}\xi \,|u_{\alpha}|^{2}dx -
\beta \int_{\mathbb{R}}\partial^{3}\xi\,|u_{\alpha}|^{2}dx +
3\,\beta \int_{\mathbb{R}}\partial \xi\,|u_{\alpha + 1}|^{2}dx -
2\,\omega \,Im\int_{\mathbb{R}}\partial
\xi\,\overline{u}_{\alpha}\,u_{\alpha + 1}dx } \\
& & -\int_{\mathbb{R}}\partial_{t}\xi\,|u_{\alpha}|^{2}dx +
2\,Im\int_{\mathbb{R}}\xi \,(|u|^{2})_{\alpha
}\,u\,\overline{u}_{\alpha }dx + 2\sum_{m=1}^{\alpha -
1}{\alpha\choose m}Im\int_{\mathbb{R}}\xi \,(|u|^{2})_{\alpha - m}
\,u_{m}\,\overline{u}_{\alpha}dx = 0
\end{eqnarray*}
such that
\begin{eqnarray*}
\lefteqn{\partial_{t}\int_{\mathbb{R}}\xi \,|u_{\alpha}|^{2}dx -
\beta \int_{\mathbb{R}}\partial^{3}\xi\,|u_{\alpha}|^{2}dx +
3\,\beta \int_{\mathbb{R}}\partial \xi\,|u_{\alpha + 1}|^{2}dx +
2\,Im\int_{\mathbb{R}}(|u|^{2})_{\alpha}\,\xi
\,u\,\overline{u}_{\alpha}dx}\\
& & - \int_{\mathbb{R}}\partial_{t}\xi\,|u_{\alpha}|^{2}dx +
2\sum_{m=1}^{\alpha - 1}{\alpha\choose m} Im\int_{\mathbb{R}}\xi
\,(|u|^{2})_{\alpha - m}
\,u_{m}\,\overline{u}_{\alpha}dx \\
& = & 2\,\omega \,Im\int_{\mathbb{R}}\partial \xi\,
\overline{u}_{\alpha}\,u_{\alpha + 1}dx \leq |\omega
|\int_{\mathbb{R}}\partial \xi\,|u_{\alpha}|^{2}dx + |\omega
|\int_{\mathbb{R}}\partial \xi\,|u_{\alpha + 1}|^{2}dx.
\end{eqnarray*}
Hence
\begin{eqnarray}
\lefteqn{\partial_{t}\int_{\mathbb{R}}\xi \,|u_{\alpha}|^{2}dx +
\int_{\mathbb{R}}[\,3\,\beta - |\omega |\,]\,\partial
\xi\;|u_{\alpha + 1}|^{2}dx - \int_{\mathbb{R}}[\,\partial_{t}\xi +
\beta \,\partial^{3}\xi
+ |\omega|\,\partial \xi\,]\,|u_{\alpha}|^{2}dx}\nonumber \\
\label{e603}& & +\;2\,Im\int_{\mathbb{R}}(|u|^{2})_{\alpha}\,\xi
\,u\,\overline{u}_{\alpha}dx + 2\sum_{m=1}^{\alpha -
1}{\alpha\choose m}Im\int_{\mathbb{R}}\xi \,(|u|^{2})_{\alpha - m}
u_{m}\,\overline{u}_{\alpha}dx\leq 0.
\end{eqnarray}
But
\begin{eqnarray*}
(|u|^{2})_{\alpha} & = & (\,u\,\overline{u}\,)_{\alpha} =
\sum_{k=0}^{\alpha}{\alpha\choose k}u_{\alpha - k}\,\overline{u}_{k}
= \overline{u}\,u_{\alpha} + \sum_{k=1}^{\alpha - 1}{\alpha\choose
k}u_{\alpha - k} \,\overline{u}_{k} + u\,\overline{u}_{\alpha}
\end{eqnarray*}
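The binomial expansion just used is the Leibniz rule; as a quick sanity check, it can be verified with exact integer arithmetic on polynomial coefficient lists (the sample polynomials are hypothetical stand-ins for $u$ and $\overline{u}$):

```python
from math import comb

def dmul(p, q):
    # multiply two polynomials given as coefficient lists
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def deriv(p, k=1):
    # k-th derivative of a coefficient-list polynomial
    for _ in range(k):
        p = [i * c for i, c in enumerate(p)][1:] or [0]
    return p

f = [3, -1, 4, 1, -5]            # arbitrary sample coefficients
g = [2, 7, 0, -3]
alpha = 4
N = len(f) + len(g) - 1
lhs = deriv(dmul(f, g), alpha)   # (f*g)^(alpha)
rhs = [0] * N                    # sum of binomial(alpha, m) f^(alpha-m) g^(m)
for m in range(alpha + 1):
    term = dmul(deriv(f, alpha - m), deriv(g, m))
    for i, c in enumerate(term):
        rhs[i] += comb(alpha, m) * c
lhs_p = lhs + [0] * (N - len(lhs))
assert lhs_p == rhs
```
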
then
\begin{eqnarray*}
(|u|^{2})_{\alpha}\,u\,\overline{u}_{\alpha} =
|u|^{2}\,|u_{\alpha}|^{2} + \sum_{k=1}^{\alpha - 1}{\alpha\choose
k}u_{\alpha - k}\, \overline{u}_{k}\,u\,\overline{u}_{\alpha} +
u^{2}\,\overline{u}_{\alpha}^{2}
\end{eqnarray*}
hence
\begin{eqnarray}
\lefteqn{2\;Im\int_{\mathbb{R}}(\,|u|^{2}\,)_{\alpha}\,\xi
\,u\,\overline{u}_{\alpha}dx = 2\sum_{k=1}^{\alpha -
1}{\alpha\choose k}Im\int_{\mathbb{R}}\xi \,u_{\alpha -
k}\,\overline{u}_{k}\,u\,\overline{u}_{\alpha}dx
+ 2\;Im\int_{\mathbb{R}}\xi \,u^{2}\,\overline{u}_{\alpha}^{2}dx } \nonumber \\
& \leq & 2\sum_{k=1}^{\alpha - 1}{\alpha\choose
k}\int_{\mathbb{R}}\xi \,|u_{\alpha -
k}|\,|u_{k}|\,|u|\,|u_{\alpha}|dx + 2\int_{\mathbb{R}}\xi\,|u|^{2}
\,|u_{\alpha}|^{2}dx \nonumber \\
& \leq & 2\sum_{k=1}^{\alpha - 1}{\alpha\choose
k}\int_{\mathbb{R}}\xi \,|u_{\alpha -
k}|\,|u_{k}|\,|u|\,|u_{\alpha}|dx +
2\,||u||_{L^{\infty}(\mathbb{R})}^{2}\int_{\mathbb{R}}\xi
\,|u_{\alpha}|^{2}dx
\nonumber \\
\label{e604}& \leq &
2\,||u||_{L^{\infty}(\mathbb{R})}\sum_{k=1}^{\alpha - 1}
{\alpha\choose k}\int_{\mathbb{R}}\xi \,|u_{\alpha -
k}|\,|u_{k}|\,|u_{\alpha}|dx +
2\,||u||_{L^{\infty}(\mathbb{R})}^{2}\int_{\mathbb{R}}\xi
\,|u_{\alpha}|^{2}dx
\end{eqnarray}
hence in \eqref{e603} we have
\begin{eqnarray*}
\lefteqn{\partial_{t}\int_{\mathbb{R}}\xi \,|u_{\alpha}|^{2}dx +
\int_{\mathbb{R}}[3\,\beta - |\omega |\,]\,\partial \xi\,|u_{\alpha
+ 1}|^{2}dx \leq \int_{\mathbb{R}}[\partial_{t}\xi + \beta
\,\partial^{3}\xi + |\omega|\,\partial \xi
+ c\,\xi \,]\,|u_{\alpha}|^{2}dx }\\
& & +\,2\,c\sum_{k=1}^{\alpha - 1}{\alpha\choose
k}\int_{\mathbb{R}}\xi \,|u_{\alpha -
k}|\,|u_{k}|\,|u|\,|u_{\alpha}|dx - 2\sum_{m=1}^{\alpha -
1}{\alpha\choose m}Im\int_{\mathbb{R}}\xi \,(|u|^{2})_{\alpha - m}
\,u_{m}\,\overline{u}_{\alpha}dx.
\end{eqnarray*}
Using \eqref{e203}, Gagliardo-Nirenberg's inequality and standard
estimates we get
\begin{eqnarray}
\label{e605}\partial_{t}\int_{\mathbb{R}}\xi\,|u_{\alpha}|^{2}dx +
[3\,\beta - |\omega |\,]\int_{\mathbb{R}}\partial \xi\,|u_{\alpha
+ 1}|^{2}\,dx \leq c\int_{\mathbb{R}}\xi \,|u_{\alpha}|^{2}dx.
\end{eqnarray}
Integrating \eqref{e605} over $[0,\,t]$ for $t\in [0,\,T_{max}=T]$ we obtain
\begin{eqnarray*}
\int_{\mathbb{R}}\xi\,|u_{\alpha}|^{2}dx + [3\,\beta - |\omega
|\,]\int_{0}^{t}\int_{\mathbb{R}}\partial \xi\,|u_{\alpha +
1}|^{2}dx\,ds \leq ||u_{0}(x)||_{\alpha}^{2} +
\int_{0}^{t}\left(c\int_{\mathbb{R}}\xi
\,|u_{\alpha}|^{2}dx\right)ds,
\end{eqnarray*}
and hence
\begin{eqnarray*}
\int_{\mathbb{R}}\xi \,|u_{\alpha}|^{2}dx \leq
||u_{0}(x)||_{\alpha}^{2} + \int_{0}^{t}\left(c\int_{\mathbb{R}}\xi
\,|u_{\alpha}|^{2}dx\right)ds.
\end{eqnarray*}
Using Gronwall's inequality
\begin{eqnarray*}
\int_{\mathbb{R}}\xi \,|u_{\alpha}|^{2}dx \leq
||u_{0}(x)||_{\alpha}^{2}\,e^{c\,t}\leq
||u_{0}(x)||_{\alpha}^{2}\,e^{c\,T}
\end{eqnarray*}
it follows that
\begin{eqnarray*}
\int_{\mathbb{R}}\xi \,|u_{\alpha}|^{2}dx \leq
c=c(T,\,||u_{0}(x)||_{\alpha}^{2}).
\end{eqnarray*}
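Gronwall's inequality as used here states that $y'\leq c\,y$ with $y(0)=y_{0}$ forces $y(t)\leq y_{0}\,e^{c\,t}.$ A minimal numerical illustration (forward Euler on a model scalar ODE; all constants are arbitrary sample values):

```python
from math import exp

# Model ODE y' = (c - 0.1) * y, which satisfies y' <= c * y;
# Gronwall then gives y(t) <= y0 * exp(c * t) for all t.
c, y0, T, n = 0.7, 2.0, 3.0, 30000
dt = T / n
y, t = y0, 0.0
violated = False
for _ in range(n):
    y += dt * (c - 0.1) * y
    t += dt
    if y > y0 * exp(c * t) + 1e-12:
        violated = True
assert not violated
```

Forward Euler satisfies $y_{n}=y_{0}(1+k\,dt)^{n}\leq y_{0}e^{k\,t}$ for $k\geq 0,$ so the discrete trajectory respects the Gronwall bound at every step.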
Then for any $T=T_{max}>0$ there exists
$c=c(T,\,||u_{0}(x)||_{\alpha}^{2})$ such that
\begin{eqnarray*}
||u||_{\alpha}^{2} + [3\,\beta - |\omega
|\,]\int_{0}^{t}\int_{\mathbb{R}}\partial\xi\,|u_{\alpha +
1}|^{2}dx\,ds \leq c.
\end{eqnarray*}
This concludes the proof.
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\setcounter{equation}{0}\section{Persistence Theorem} As a
starting point for the a priori gain of regularity results that
will be discussed in the next section, we need to develop some
estimates for solutions of the equation \eqref{e204} in weighted
Sobolev norms. The existence of these weighted estimates is often
called the persistence of a property of the initial data $u_{0}.$
We show that if $u_{0}\in H^{3}(\mathbb{R})\cap
H^{L}(W_{0\;i\;0})$ for $L\geq 0,$ $i\geq 1,$ then the solution
$u(\,\cdot \,,\,t)$ evolves in $H^{L}(W_{0\;i\;0})$ for $t\in
[0,\,T].$ The time interval of that persistence is at least as
long as the interval guaranteed
by the existence Theorem 5.2.\\
\\
{\bf Theorem 7.1} (Persistence). {\it Let $|\omega |<3\,\beta$ and
let $i\geq 1$ and $L\geq 0$ be non-negative integers, $0<T<+\infty.$
Assume that $u$ is the solution to \eqref{e204} in
$L^{\infty}([0,\,T]:\,H^{3}(\mathbb{R}))$ with initial data
$u_{0}(x)=u(x,\,0)\in H^{3}(\mathbb{R}).$ If $u_{0}(x) \in
H^{L}(W_{0\;i\;0})$ then}
\begin{eqnarray}
\label{e701}u\in L^{\infty}\left([0,\,T]:\,H^{3}(\mathbb{R})
\cap H^{L}(W_{0\;i\;0})\right)\\
\label{e702}\int_{0}^{T}\int_{\mathbb{R}}|\partial^{L +
1}u(x,\,t)|^{2}\,\eta\,dx\,dt<+\infty
\end{eqnarray}
{\it where $\sigma$ is arbitrary, $\eta\in W_{\sigma\;i\;0}$ for $i\geq 1.$}\\
\\
{\it Proof.} We use induction on $\alpha .$ Let
\begin{eqnarray*}
u\in L^{\infty}\left([0,\,T]:\,H^{3}(\mathbb{R})\cap
H^{\alpha}(W_{0\;i\;0})\right) \quad \mbox{for}\quad 0\leq \alpha
\leq L.
\end{eqnarray*}
We derive formally some a priori estimates for the solution, whose
bounds involve only the norms of $u$ in
$L^{\infty}([0,\,T]:\,H^{3}(\mathbb{R}))$ and the norms of $u_{0}$
in $H^{3}(W_{0\;i\;0}).$ We do this by approximating $u(x,\,t)$
through smooth solutions and the weight functions by smooth bounded
functions. By Theorem 5.2, we have
\begin{eqnarray*}
u(x,\,t)\in L^{\infty}([0,\,T]:\,H^{N}(\mathbb{R}))\quad
\mbox{with}\quad N=\mbox{max}\{L,\,3\}.
\end{eqnarray*}
In particular, $u_{j}(x,\,t)\in L^{\infty}([0,\,T]\times
\mathbb{R})$ for $0\leq j\leq N - 1.$ To obtain \eqref{e701} and
\eqref{e702} there are two ways of approximation. We approximate
general solutions by smooth solutions, and we approximate general
weight functions by bounded weight functions. The first of these
procedures has already been discussed,
so we shall concentrate on the second.\\
Given a smooth weight function $\eta (x)\in W_{\sigma ,\;i -
1,\;0}$ with $\sigma >0,$ we take a sequence $\eta^{\nu}(x)$ of
smooth bounded weight functions approximating $\eta (x)$ from
below, uniformly on any half line $(-\infty,\,c).$ Define the
weight functions for the $\alpha$-th induction step as
\begin{eqnarray*}
\xi _{\nu}=\frac{1}{(3\,\beta -
|\omega|)}\int_{-\infty}^{x}\eta^{\nu}(y,\,t)\,dy
\end{eqnarray*}
then the $\xi_{\nu}$ are bounded weight functions which approximate
a desired weight function $\xi\in W_{0\;i\;0}$ from below, uniformly
on a compact set. For $\alpha =0,$ multiplying \eqref{e204} by
$\xi_{\nu}\,\overline{u},$ we have
\begin{eqnarray*}
& & i\,\xi_{\nu}\,\overline{u}\,u_{t} + i\,\beta\,\xi_{\nu}\,
\overline{u}\,u_{3} + \omega \,\xi_{\nu}\,\overline{u}\,u_{2} +
\xi_{\nu}\,|u|^{4}=0 \\
& & -\,i\,\xi_{\nu}\,u\,\overline{u}_{t} - i\,\beta \,\xi
_{\nu}\,u\,\overline{u}_{3} + \omega\,\xi
_{\nu}\,u\,\overline{u}_{2} + \xi_{\nu}\,|u|^{4}=0. \quad
\mbox{(taking the complex conjugate)}
\end{eqnarray*}
Subtracting and integrating over $x\in \mathbb{R}$ we have
\begin{eqnarray}
& & i\,\partial_{t}\int_{\mathbb{R}}\xi_{\nu}\,|u|^{2}dx -
i\int_{\mathbb{R}}\partial_{t}\xi_{\nu}\,|u|^{2}dx + i\,\beta
\int_{\mathbb{R}}\xi_{\nu}\,\overline{u}\,u_{3}dx
+ i\,\beta \int_{\mathbb{R}}\xi_{\nu}\,u\,\overline{u}_{3}dx \nonumber \\
\label{e703}& & +\,\omega
\int_{\mathbb{R}}\xi_{\nu}\,\overline{u}\,u_{2}dx - \omega
\int_{\mathbb{R}}\xi_{\nu}\,u\,\overline{u}_{2}dx = 0.
\end{eqnarray}
Each term is treated separately, integrating by parts in the third
term we have
\begin{eqnarray*}
\int_{\mathbb{R}}\xi_{\nu}\,\overline{u}\,u_{3}dx & = &
\int_{\mathbb{R}}\partial^{2}\xi_{\nu}\,\overline{u}\,u_{1}dx +
2\int_{\mathbb{R}}\partial\xi_{\nu}\,|u_{1}|^{2}dx +
\int_{\mathbb{R}}\xi_{\nu}\,\overline{u}_{2}\,u_{1}dx.
\end{eqnarray*}
The other terms are calculated in a similar way. Hence in
\eqref{e703} we have
\begin{eqnarray*}
\lefteqn{\partial_{t}\int_{\mathbb{R}}\xi_{\nu}\,|u|^{2}dx -
\int_{\mathbb{R}}\partial_{t}\xi_{\nu}\,|u|^{2}dx - \beta
\int_{\mathbb{R}}\partial^{3}\xi_{\nu}\,|u|^{2}dx + 3\,\beta
\int_{\mathbb{R}}\partial \xi_{\nu}\,|u_{1}|^{2}dx } \\
& = & 2\,\omega \,Im\int_{\mathbb{R}}\partial
\xi_{\nu}\,\overline{u}\,u_{1}dx \leq |\omega
|\int_{\mathbb{R}}\partial \xi_{\nu}\,|u|^{2}dx + |\omega
|\int_{\mathbb{R}}\partial \xi_{\nu}\,|u_{1}|^{2}dx.
\end{eqnarray*}
Then, using \eqref{e203} we obtain
\begin{eqnarray*}
\lefteqn{\partial_{t}\int_{\mathbb{R}}\xi_{\nu}\,|u|^{2}dx +
\int_{\mathbb{R}}[3\,\beta - |\omega |]\,\partial
\xi_{\nu}\,|u_{1}|^{2}dx }\\
& \leq &\int_{\mathbb{R}}[\partial_{t}\xi_{\nu} + \beta
\,\partial^{3}\xi_{\nu} + |\omega |\,\partial \xi_{\nu}]\,|u|^{2}dx
\leq c\int_{\mathbb{R}}\xi_{\nu}\,|u|^{2}dx
\end{eqnarray*}
thus
\begin{eqnarray*}
\partial_{t}\int_{\mathbb{R}}\xi_{\nu}\,|u|^{2}dx \leq
c\int_{\mathbb{R}}\xi_{\nu}\,|u|^{2}dx.
\end{eqnarray*}
We apply Gronwall's Lemma to conclude that
\begin{eqnarray}
\label{e704}\int_{\mathbb{R}}\xi_{\nu}\,|u|^{2}dx \leq
c(T,\,||u_{0}||)
\end{eqnarray}
for $0\leq t\leq T.$ Since $c$ does not depend on $\beta >0,$
the weighted estimate remains true as $\beta \rightarrow 0.$\\
Now, we assume that the result is true for $(\alpha - 1)$ and we
prove that it is true for $\alpha .$ To prove this, we start from
the main inequality \eqref{e301} with $\xi$ and $\eta$ given by
$\xi_{\nu}$ and $\eta_{\nu}$ respectively.
\begin{eqnarray*}
& & \partial_{t}\int_{\mathbb{R}}\xi _{\nu}\,|u_{\alpha}|^{2}dx +
\int_{\mathbb{R}}\eta_{\nu}\,|u_{\alpha + 1}|^{2}dx +
\int_{\mathbb{R}}\theta_{\nu}\,|u_{\alpha}|^{2}dx +
\int_{\mathbb{R}}R_{\alpha}dx\leq 0
\end{eqnarray*}
where
\begin{eqnarray*}
\eta_{\nu} & = & (3\beta - |\omega |\,)\,\partial \xi_{\nu}
\qquad \mbox{for}\qquad |\omega |<3\;\beta \\
\theta_{\nu} & = & -\;[\,\partial_{t}\xi_{\nu} + \beta
\,\partial^{3}\xi_{\nu} + |\omega|\,\partial \xi_{\nu} +
c_{0}\,\xi_{\nu} \,]\qquad \mbox{where}\quad
c_{0}=||u||_{L^{\infty}(\mathbb{R})}^{2}\\
R_{\alpha} & = & R_{\alpha}(|u_{\alpha }|,\,|u_{\alpha -
1}|,\,\ldots \,)
\end{eqnarray*}
then
\begin{eqnarray*}
\lefteqn{\partial_{t}\int_{\mathbb{R}}\xi _{\nu}\,|u_{\alpha}|^{2}dx
+ \int_{\mathbb{R}}\eta_{\nu} \,|u_{\alpha + 1}|^{2}dx \leq
-\int_{\mathbb{R}}\theta_{\nu}\,|u_{\alpha}|^{2}dx -
\int_{\mathbb{R}}R_{\alpha}dx }\\
& \leq & \left|-\int_{\mathbb{R}}\theta_{\nu}\,|u_{\alpha}|^{2}dx
- \int_{\mathbb{R}}R_{\alpha}dx\,\right|\leq
\int_{\mathbb{R}}|\theta_{\nu}|\,|u_{\alpha}|^{2}dx +
\int_{\mathbb{R}}|R_{\alpha}|dx.
\end{eqnarray*}
Using \eqref{e203} in the first part of the right hand side we
obtain
\begin{eqnarray*}
\int_{\mathbb{R}}\theta_{\nu}\,|u_{\alpha}|^{2}dx \leq
c\int_{\mathbb{R}}\xi_{\nu}\,|u_{\alpha}|^{2}dx
\end{eqnarray*}
thus
\begin{eqnarray}
\label{e705}\partial_{t}\int_{\mathbb{R}}\xi
_{\nu}\,|u_{\alpha}|^{2}dx + \int_{\mathbb{R}}\eta_{\nu}
\,|u_{\alpha + 1}|^{2}dx \leq
c\int_{\mathbb{R}}\xi_{\nu}\,|u_{\alpha}|^{2}dx +
\int_{\mathbb{R}}|R_{\alpha}|dx.
\end{eqnarray}
According to \eqref{e308}, $\int_{\mathbb{R}}R_{\alpha}\,dx$
contains a term of the form
\begin{eqnarray}
\label{e706}
\int_{\mathbb{R}}\xi_{\nu}\,u_{\nu_{1}}\,\overline{u}_{\nu_{2}}\,
\overline{u}_{\alpha}dx.
\end{eqnarray}
We estimate the term
\begin{eqnarray}
\label{e707}\int_{\mathbb{R}}\xi_{\nu}\,u_{\nu_{1}}\,\overline{u}_{\nu_{2}}\,
\overline{u}_{\alpha}\,dx\quad \mbox{for}\quad \nu_{1} +
\nu_{2}=\alpha .
\end{eqnarray}
Let $\nu_{2}\leq \alpha - 2.$ Integrating by parts once in
\eqref{e707} we have
\begin{eqnarray*}
\int_{\mathbb{R}}\xi_{\nu}\,u_{\nu_{1}}\,\overline{u}_{\nu_{2}}\,
\overline{u}_{\alpha}\,dx & = &
-\int_{\mathbb{R}}\partial\xi_{\nu}\,u_{\nu_{1}}\,\overline{u}_{\nu_{2}}\,
\overline{u}_{\alpha - 1}\,dx -
\int_{\mathbb{R}}\xi_{\nu}\,u_{\nu_{1} +
1}\,\overline{u}_{\nu_{2}}\,
\overline{u}_{\alpha - 1}\,dx\\
& & -\int_{\mathbb{R}}\xi_{\nu}\,u_{\nu_{1}}\,\overline{u}_{\nu_{2}
+ 1}\, \overline{u}_{\alpha - 1}\,dx.
\end{eqnarray*}
We estimate the first term on the right hand side of this identity.
Using H\"older's inequality and standard estimates we obtain
\begin{eqnarray}
\label{e709}c\,\left[\left(\int_{\mathbb{R}}\xi_{\nu}\,|u_{\nu_{2} +
1}|^{2}dx\right)^{1/2} +
\left(\int_{\mathbb{R}}\xi_{\nu}\,|u_{\nu_{2}}|^{2}dx\right)^{1/2}
\right]\left(\int_{\mathbb{R}}\xi_{\nu}\,|u_{\alpha -
1}|^{2}dx\right)^{1/2}
\end{eqnarray}
where \eqref{e709} is bounded by hypothesis. The other terms are
estimated in a similar way. Now suppose that $\nu_{1}=\nu_{2}=
\alpha - 1,$ then in \eqref{e707} we have
\begin{eqnarray*}
\int_{\mathbb{R}}\xi_{\nu}\,u_{\alpha - 1}\,\overline{u}_{\alpha -
1}\,\overline{u}_{\alpha}dx,
\end{eqnarray*}
hence
\begin{eqnarray*}
\left |\int_{\mathbb{R}}\xi_{\nu}\,|u_{\alpha -
1}|^{2}\,\overline{u}_{\alpha}dx \right |\leq ||u_{\alpha -
1}||_{L^{\infty}(\mathbb{R})}\,
\left(\int_{\mathbb{R}}\xi_{\nu}\,|u_{\alpha -
1}|^{2}dx\right)^{1/2}\,
\left(\int_{\mathbb{R}}\xi_{\nu}\,|u_{\alpha}|^{2}dx\right)^{1/2}
\end{eqnarray*}
where $||u_{\alpha - 1}||_{L^{\infty}(\mathbb{R})}$ is bounded by
hypothesis, and the estimate is complete. In a similar way we
estimate all the other terms of $R_{\alpha}.$ Using these estimates
in \eqref{e705} and applying Gronwall's argument, we obtain for
$0\leq t\leq T$
\begin{eqnarray*}
\int_{\mathbb{R}}\xi_{\nu}\,|u_{\alpha}|^{2}dx +
\int_{0}^{t}\int_{\mathbb{R}}\eta_{\nu}\,|u_{\alpha + 1}|^{2}dx\,ds \leq
c_{0}\,e^{c_{1}\,t}\,\left(\int_{\mathbb{R}}\xi_{\nu}\,
|\partial^{\alpha}u_{0}(x)|^{2}dx + 1\right)
\end{eqnarray*}
where $c_{0}$ and $c_{1}$ are independent of $\nu ,$ so that,
letting the parameter $\nu \rightarrow 0,$ the desired estimate
\eqref{e702} is obtained.
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\setcounter{equation}{0}\section{Main Theorem} In this section we
state and prove our main theorem, which states that if the initial
data $u(x,\,0)$ decays faster than polynomially on
$\mathbb{R}^{+}=\{x\in \mathbb{R}:\;x>0\}$ and possesses certain
initial Sobolev regularity, then the solution $u(x,\,t)\in
C^{\infty}$ for all $t>0.$ \\
\\
If $\eta$ is an arbitrary weight function in $W_{\sigma\;i\;k},$
then by Lemma 3.2, there exists $\xi\in W_{\sigma,\;i + 1,\;k}$
which satisfies \eqref{e301}. For the main theorem, we take $4\leq
\alpha \leq L + 2$ and choose
\begin{eqnarray}
\label{e801}\eta \in W_{\sigma ,\,L - \alpha + 2,\,\alpha - 3}\;
\Longrightarrow \;\xi \in W_{\sigma ,\,L - \alpha + 3,\,\alpha - 3}.
\end{eqnarray}
{\bf Lemma 8.1}(Estimate of error terms). {\it Let $4\leq \alpha
\leq L + 2$ and the weight functions be chosen as in \eqref{e801},
then}
\begin{eqnarray}
\label{e802}\left|\int_{0}^{T}\int_{\mathbb{R}}(\theta
\,|u_{\alpha}|^{2} + R_{\alpha})dx\,dt\right| \leq c,
\end{eqnarray}
{\it where $c$ depends only on the norms of $u$ in }
\begin{eqnarray*}
L^{\infty}([0,\,T]:\,H^{\beta}(W_{\sigma,\,L - \beta + 3,\,\beta - 3}))
\cap L^{2}([0,\,T]:\,H^{\beta + 1}(W_{\sigma,\,L - \beta + 2,\,\beta - 3}))
\end{eqnarray*}
{\it for $3\leq \beta \leq \alpha - 1,$ and the norms of
$u$ in} $L^{\infty}([0,\,T]:\,H^{3}(W_{0\;L\;0})).$\\
\\
{\it Proof.} We must estimate both $R_{\alpha}$ and $\theta .$ We
begin with a term in $R_{\alpha}$ of the form
\begin{eqnarray}
\label{e803}\xi \,|u_{\nu_{1}}|\,\,|u_{\nu_{2}}|\,\,|u_{\alpha}|
\end{eqnarray}
assuming that $\nu_{1}\leq \alpha - 2.$ \\
\\
By the induction
hypothesis, $u$ is bounded in
$L^{\infty}([0,\,T]:\,H^{\beta}(W_{\sigma,\,L - (\beta - 3)^{+},
\,(\beta - 3)^{+}}))$ for $0\leq \beta \leq \alpha - 1.$ By Lemma
2.1,
\begin{eqnarray}
\label{e804}\sup_{t\geq 0} \,\sup_{x\in\mathbb{R}}\,\zeta
\,|u_{\beta}|^{2}<+\infty
\end{eqnarray}
for $0\leq \beta \leq \alpha - 2$ and $\zeta\in W_{\sigma ,\,L -
(\beta - 2)^{+},\,(\beta - 2)^{+}}.$ We estimate $|u_{\nu_{1}}|$
using \eqref{e804}. We estimate $|u_{\nu_{2}}|$ and $|u_{\alpha}|$
using the weighted $L^{2}$ bounds
\begin{eqnarray}
\label{e805}\int_{0}^{T}\int_{\mathbb{R}}\zeta
\,|u_{\nu_{2}}|^{2}dx\,dt<+\infty \quad \mbox{for}\quad \zeta \in
W_{\sigma ,\,L - (\nu_{2} - 3)^{+},\,(\nu_{2} - 4)^{+}}
\end{eqnarray}
and the same with $\nu_{2}$ replaced by $\alpha .$ It suffices to check the
powers of
$t,$ the powers of $x$ as $x\rightarrow +\infty $ and the exponential of
$x$ as $x\rightarrow -\infty .$\\
\\
First let $x>1.$ In the term \eqref{e803}, the factor $\xi $ contributes,
according to \eqref{e801},
\begin{eqnarray*}
\xi(x,\,t) = t^{\alpha - 3}\,x^{(L - \alpha + 3)}\,t^{-(\alpha -
3)}\,x^{-(L - \alpha + 3)}\xi(x,\,t)\leq c_{2}\,t^{\alpha -
3}\,x^{(L - \alpha + 3)}\quad (\mbox{using }\eqref{e203})
\end{eqnarray*}
then $\xi \,|u_{\nu_{1}}|\,|u_{\nu_{2}}|\,|u_{\alpha}|\leq
c_{2}\,t^{\alpha - 3}\,x^{(L - \alpha +
3)}|u_{\nu_{1}}|\,|u_{\nu_{2}}|\,|u_{\alpha}|.$ Moreover
\begin{eqnarray*}
|u_{\nu_{1}}|\,|u_{\nu_{2}}|\,|u_{\alpha}| & = &
t^{\frac{(\nu_{1} - 2)^{+}}{2}}\,
x^{\frac{L - (\nu_{1} - 2)^{+}}{2}}\,
t^{-\frac{(\nu_{1} - 2)^{+}}{2}}\,
x^{-\frac{L - (\nu_{1} - 2)^{+}}{2}}\,|u_{\nu_{1}}|\times \\
& & t^{\frac{(\nu_{2} - 4)^{+}}{2}}\, x^{\frac{L - (\nu_{2} -
3)^{+}}{2}}\,t^{-\frac{(\nu_{2} - 4)^{+}}{2}}\,x^{-\frac{L -
(\nu_{2} - 3)^{+}}{2}}\,
|u_{\nu_{2}}|\times \\
& & t^{\frac{(\alpha - 4)^{+}}{2}}\, x^{\frac{L - (\alpha -
3)^{+}}{2}}\,t^{-\frac{(\alpha - 4)^{+}}{2}}\,x^{-\frac{L -
(\alpha - 3)^{+}}{2}}\,|u_{\alpha}|.
\end{eqnarray*}
It follows that
\begin{eqnarray}
\lefteqn{\xi
\,|u_{\nu_{1}}|\,|u_{\nu_{2}}|\,|u_{\alpha}|}\nonumber\\
\label{e806}& & \leq c_{2}\,t^{M}\,x^{T}\,t^{\frac{(\nu_{1} -
2)^{+}}{2}}\, x^{\frac{L - (\nu_{1} - 2)^{+}}{2}}\,|u_{\nu_{1}}|\,
t^{\frac{(\nu_{2} - 4)^{+}}{2}}\, x^{\frac{L - (\nu_{2} -
3)^{+}}{2}}\,|u_{\nu_{2}}|\,t^{\frac{(\alpha - 4)^{+}}{2}}\,
x^{\frac{L - (\alpha - 3)^{+}}{2}}\,\,|u_{\alpha}|\quad
\end{eqnarray}
where
\begin{eqnarray*}
M=\alpha - 3 - \frac{1}{2}(\nu_{1} - 2)^{+} -
\frac{1}{2}(\nu_{2} - 4)^{+} - \frac{1}{2}(\alpha - 4)^{+}
\end{eqnarray*}
and
\begin{eqnarray*}
T=(L - \alpha + 3) - \frac{1}{2}(L - (\alpha - 3)^{+}) -
\frac{1}{2}(L - (\nu_{2} - 3)^{+}) - \frac{1}{2}(L - (\nu_{1} - 2)^{+}).
\end{eqnarray*}
{\it Claim.} $M\geq 0,$ so that the extra power of $t$ can be
omitted. Indeed, in the worst case $\nu_{1}\geq 2,$ $\nu_{2}\geq 4,$
\begin{eqnarray*}
2\,M & = & 2\,\alpha - 6 - (\nu_{1} - 2)^{+} - (\nu_{2} - 4)^{+} - (\alpha - 4)^{+}\\
& = & \alpha - 2 - (\nu_{1} - 2)^{+} - (\nu_{2} - 4)^{+}\\
& = & \alpha - 2 - \nu_{1} + 2 - \nu_{2} + 4
= \alpha + 4 - (\nu_{1} + \nu_{2})\\
& = & \alpha + 4 - \alpha = 4\geq 0.
\end{eqnarray*}
{\it Claim.} $T\leq 0,$ so that the extra power of $x$ can be
omitted. Indeed,
\begin{eqnarray*}
2\,T & = & 2\,L - 2\,\alpha + 6 - L + (\alpha - 3)^{+} - L +
(\nu_{2} - 3)^{+} - L + (\nu_{1} - 2)^{+}\\
& = & -\,L - \alpha + \nu_{1} + \nu_{2} - 2
= -\,L - \alpha + \alpha - 2\\
& = & -(L + 2) \leq 0.
\end{eqnarray*}
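The two sign claims can also be checked mechanically. The sketch below enumerates the admissible indices ($L\geq 2,$ $4\leq\alpha\leq L+2,$ $\nu_{1}\leq\alpha-2,$ $\nu_{1}+\nu_{2}=\alpha$; the finite range for $L$ is chosen only for illustration) and verifies $2M\geq 0$ and $2T\leq 0$ directly from their definitions:

```python
def pos(v):
    # (v)^+ = max(v, 0)
    return max(v, 0)

checked = 0
for L in range(2, 13):                       # illustrative range, L >= 2
    for alpha in range(4, L + 3):            # 4 <= alpha <= L + 2
        for nu1 in range(0, alpha - 1):      # nu_1 <= alpha - 2
            nu2 = alpha - nu1                # nu_1 + nu_2 = alpha
            M2 = 2 * alpha - 6 - pos(nu1 - 2) - pos(nu2 - 4) - pos(alpha - 4)
            T2 = (2 * L - 2 * alpha + 6 - L + pos(alpha - 3)
                  - L + pos(nu2 - 3) - L + pos(nu1 - 2))
            assert M2 >= 0 and T2 <= 0, (L, alpha, nu1)
            checked += 1
assert checked > 0
```
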
Now we study the behavior as $x\rightarrow -\infty .$ Since each
factor $u_{\nu_{j}}$ ($j=1,\,2$) grows slower than an
exponential $e^{\sigma'\,|x|}$ and $\xi$ decays as an exponential
$e^{-\sigma \,|x|},$ we simply need to choose the appropriate
relationship between $\sigma$ and $\sigma'$ at each induction step.
The analysis is completed with the case $\nu_{1}\geq \alpha - 1:$
then, in \eqref{e309}, we would need $2(\alpha - 1)\leq \alpha,$
i.e. $\alpha \leq 2,$ which contradicts $\alpha \geq 3.$ So this
possibility is impossible. For $x<1$ the estimate is similar,
except for an exponential weight. All the other terms of
$R_{\alpha}$ are estimated
in a similar form. This completes the estimate of $R_{\alpha}.$ \\
Now, we estimate the term $\theta \,|u_{\alpha}|^{2}$ where
$\theta $ is given in \eqref{e301}. We have that $\theta $
involves derivatives of $u$ only up to order one, and hence,
$\theta \,|u_{\alpha}|^{2}$ is a sum of terms of the same type
which we have already encountered in $R_{\alpha}.$ So, its
integral can be bounded in the same way. Indeed, \eqref{e301}
shows that $\theta$ depends on $\xi_{t},$ $\partial^{3}\xi$ and
derivatives of lower order. By using \eqref{e306}
we have the claim.\\
\\
{\bf Theorem 8.2}(Main Theorem). {\it Let $|\omega |<3\,\beta ,$
$T>0$ and $u(x,\,t)$ be a solution of \eqref{e204} in the region
$\mathbb{R} \times [0,\,T]$ such that}
\begin{eqnarray}
\label{e807}u\in L^{\infty}([0,\,T]:\,H^{3}(W_{0\;L\;0}))
\end{eqnarray}
{\it for some $L\geq 2.$ Then }
\begin{eqnarray}
\label{e808}u\in L^{\infty}([0,\,T]:\,H^{3 + l}(W_{\sigma,\,L -
l,\,l})) \cap L^{2}([0,\,T]:\,H^{4 + l}(W_{\sigma,\,L - l - 1,\,l}))
\end{eqnarray}
{\it for all $0\leq l\leq L - 1$ and all $\sigma >0.$}\\
\\
{\it Remark.} If the assumption \eqref{e807} holds for all $L\geq
2,$ the solution is infinitely differentiable in the $x$-variable.
From \eqref{e204} we have that the solution is $C^{\infty}$ in both
variables. We are also quantifying the gain of each derivative by
the degree of
vanishing of the initial data at infinity.\\
\\
{\it Proof.} We use induction on $\alpha.$ For $\alpha =3,$ let
$u$ be a solution of \eqref{e204} satisfying \eqref{e807}.
Therefore, $u_{t}\in L^{\infty}([0,\,T]:\,L^{2}(W_{0\;L\;0})),$
since $u\in L^{\infty}([0,\,T]:\,H^{3}(W_{0\;L\;0})).$ Then $u\in
C([0,\,T]:\;L^{2}(W_{0\;L\;0}))\cap
C_{w}([0,\,T]:\,H^{3}(W_{0\;L\;0})).$ Hence, $u:[0,\,T]\longmapsto
H^{3}(W_{0\;L\;0})$ is a weakly continuous function. In
particular, $u(\,\cdot \,,\,t)\in H^{3}(W_{0\;L\;0})$ for all $t.$
Let $t_{0}\in (0,\,T)$ and $u(\,\cdot\,,\,t_{0})\in
H^{3}(W_{0\;L\;0}),$ then there are $\{u_{0}^{(n)}\}\subseteq
C_{0}^{\infty}(\mathbb{R})$ such that $u_{0}^{(n)}(\,\cdot
\,)\rightarrow u(\,\cdot\,,\,t_{0})$ in $H^{3}(W_{0\;L\;0}).$ Let
$u^{(n)}(x,\,t)$ be a unique solution of \eqref{e204} with
$u^{(n)}(x,\,t_{0})=u_{0}^{(n)}.$ Then, by Theorems 5.1 and 5.2,
there exists a time interval $[t_{0},\,t_{0} + \delta],$ where
$\delta >0$ does not depend on $n,$ on which each $u^{(n)}$ is the
unique solution of \eqref{e204}, with $u^{(n)}\in
L^{\infty}([t_{0},\,t_{0} + \delta]:\,H^{3}(W_{0\;L\;0}))$ and
$u^{(n)}(x,\,t_{0})\equiv
u_{0}^{(n)}(x)\rightarrow u(x,\,t_{0})\equiv u_{0}(x)$ in
$H^{3}(W_{0\;L\;0}).$ Now, by Theorem 7.1, we have
\begin{eqnarray*}
u^{(n)}\in L^{\infty}([t_{0},\,t_{0} +
\delta]:\,H^{3}(W_{0\;L\;0}))\cap L^{2}([t_{0},\,t_{0} +
\delta]:\,H^{4}(W_{\sigma ,\,L - 1,\,0}))
\end{eqnarray*}
with a bound that depends only on the norm of $u_{0}^{(n)}$ in
$H^{3}(W_{0\;L\;0}).$ Furthermore, Theorem 7.1 guarantees the
non-uniform bounds
\begin{eqnarray*}
\sup_{[t_{0},\,t_{0} + \delta]}\sup_{x}\,(1 + |x_{+}|)^{k}\,|\,
\partial^{\alpha}u^{(n)}(x,\,t)\,|<+\infty
\end{eqnarray*}
for each $n,\,k$ and $\alpha .$ The main inequality \eqref{e301}
and the estimate \eqref{e802} are therefore valid for each
$u^{(n)}$ in the interval $[t_{0},\,t_{0} + \delta].$ $\eta $ may
be chosen arbitrarily in its weight class \eqref{e801} and then
$\xi$ is defined by \eqref{e307}, and the constants $c_{1},$
$c_{2},$ $c_{3},$ $c_{4}$ are independent of $n.$ From
\eqref{e301} and \eqref{e801} we have
\begin{eqnarray}
\label{e809}\sup_{[t_{0},\,t_{0} + \delta]}\int_{\mathbb{R}}\xi
\,|u_{\alpha}^{(n)}|^{2}dx + \int_{t_{0}}^{t_{0} +
\delta}\int_{\mathbb{R}}\eta \,|u_{\alpha + 1}^{(n)}|^{2}dx \leq c
\end{eqnarray}
where by \eqref{e802}, $c$ is independent of $n.$ The estimate
\eqref{e809} is proved by induction for $\alpha=3,\,4,\,5,\ldots $
Thus $u^{(n)}$ is also bounded in
\begin{eqnarray}
\label{e810}L^{\infty}([t_{0},\,t_{0} + \delta]:\,
H^{\alpha}(W_{\sigma,\,L - \alpha + 3,\,\alpha - 3})) \cap
L^{2}([t_{0},\,t_{0} + \delta]:\,H^{\alpha + 1}(W_{\sigma,\,L -
\alpha + 2,\,\alpha - 3}))
\end{eqnarray}
for $\alpha \geq 3.$ Since $u^{(n)}\rightarrow u$ in
$L^{\infty}([t_{0},\,t_{0} + \delta]:\,H^{3}(W_{0\;L\;0})),$ by
Corollary 5.3 it follows that $u$ belongs to the space \eqref{e810}.
Since $\delta $ is fixed, this result is valid over the whole
interval $[0,\,T].$
\makeatletter
\renewcommand\section{\@startsection {section}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\normalfont\bf\center }}
\renewcommand\subsection{\@startsection {subsection}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\normalfont\bf}}
\makeatother
\setlength{\topmargin}{-0.8cm}
\setlength{\oddsidemargin}{0.6cm}
\setlength{\evensidemargin}{0.65cm}
\setlength{\textheight}{23.5cm}
\setlength{\textwidth}{14.5cm}
\date{\today}
\raggedbottom
\newcommand{\iti}{\boldsymbol{1}}
\newcommand{\bbE}{\mathbb{E}}
\newcommand{\bbP}{\mathbb{P}}
\newcommand{\bR}{\mathbf{R}}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\e}{\varepsilon}
\newcommand{\pa}{\partial}
\newcommand{\kyosu}{\sqrt{-1}\,}
\newcommand{\I}{\operatorname{Id}}
\newcommand{\eil}{\overset{(\text{\rm law})}{=}}
\newcommand{\wt}{\widetilde}
\newcommand{\wh}{\widehat}
\newcommand{\res}{\operatorname{Res}}
\newtheoremstyle{new-thm}
{3pt}
{3pt}
{\it}
{0pt}
{\bf}
{.}
{.5em}
{}
\newtheoremstyle{new-def}
{3pt}
{3pt}
{\rm}
{0pt}
{\bf}
{.}
{.5em}
{}
\theoremstyle{new-thm}
\newtheorem{thm}{Theorem}
\newtheorem{cor}[thm]{Corollary}
\newtheorem{lemma}[thm]{Lemma}
\newtheorem{prop}[thm]{Proposition}
\theoremstyle{new-def}
\newtheorem{defn}{Definition}[section]
\newtheorem{example}[thm]{Example}
\newtheorem{rem}[thm]{Remark}
\pagestyle{plain}
\begin{document}
\vspace*{2cm}
\begin{center}
{\Large\bf Asymptotics of the probability distributions \\
of the first hitting times of Bessel processes}
\end{center}
\bigskip
\begin{center}
Yuji Hamana$^{\text{\rm a}}$ and
Hiroyuki Matsumoto$^{\text{\rm b}}$
\footnote{Corresponding author.\\
{\it E-mail address}: matsu(at)gem.aoyama.ac.jp (H.~Matsumoto)}
\end{center}
\bigskip
\begin{center}
$^{\text{\rm a}}$Department of Mathematics,
Kumamoto University,
Kurokami 2-39-1, Kumamoto 860-8555, Japan \\
$^{\text{\rm b}}$Department of Physics and Mathematics,
Aoyama Gakuin University, Fuchinobe 5-10-1, Sagamihara 252-5258,
Japan
\end{center}
\begin{quote} {\bf Abstract.}
The asymptotic behavior of the tail probabilities
for the first hitting times of the Bessel process
with arbitrary index
is shown without using the explicit expressions for the distribution
function obtained in the authors' previous works. \\
2010 {\it Mathematics Subject Classification}: 60G40 \\
{\it keywords}: Bessel process, hitting time, tail probability
\end{quote}
\section{Introduction and main results}
Let $\bbP_a^{(\nu)}$ be the probability law
on the path space $W=C([0,\infty);\mathbf{R})$
of a Bessel process with index $\nu\in\mathbf{R}$
or dimension $\delta=2(\nu+1)$ starting from $a>0$.
For $w\in W$ we denote the first hitting time to $b>0$
by $\tau_b=\tau_b(w)$:
\begin{equation*}
\tau_b=\inf\{t>0; w(t)=b\}.
\end{equation*}
\indent In recent works \cite{HM-I,HM-T} the authors have shown
explicit forms of the distribution function and the density
of the distribution of $\tau_b$ under $\bbP_a^{(\nu)}$
in the case where $0<b<a$.
The other case, which is easier since we do not need to consider
a natural boundary, is known from Kent \cite{Kent}.
See also Getoor--Sharpe \cite{GS}.
When $\nu>0$ and $b<a$, it is shown in \cite{HM-T} that
there exists a positive constant $C(\nu)$ such that
\begin{equation*}
\bbP_a^{(\nu)}(\tau_b>t) = 1-\Bigl(\frac{b}{a}\Bigr)^{2\nu} +
C(\nu) t^{-\nu} + o(t^{-\nu}).
\end{equation*}
The constant $C(\nu)$ may be expressed explicitly and
we could treat all the cases.
However, we need to consider separately the case
where $\delta$ is an odd integer and
the expression for $C(\nu)$ is different from the other cases.
This is because the expression for the distribution function
itself is different.
The aim of this note is to show that the constant $C(\nu)$ has
the same simple expression also when $\delta$ is an odd integer
by considering the asymptotics without using the explicit
expressions for the distribution functions
obtained in \cite{HM-T}.
When $a<b$ and $\nu>0$, the explicit expressions for
$\bbP_a^{(\nu)}(\tau_b>t)$ have been shown in \cite{Kent}
and, from his result, it is easily shown that
the tail probability decays exponentially.
Hence we concentrate on the case of $b<a$.
\begin{thm} \label{thm-1}
Let $\nu>0$ and $0<b<a.$
Then{\rm,} as $t\to\infty,$ it holds that
\begin{equation*}
\bbP_a^{(\nu)}(t<\tau_b<\infty)=
b^{2\nu}\Bigl\{1-\Bigl(\frac{b}{a}\Bigr)^{2\nu}\Bigr\}
\frac{1}{\Gamma(1+\nu)(2t)^{\nu}}
+O(t^{-\nu-\e})
\end{equation*}
for any $\e\in(0,\frac{\nu}{1+\nu}),$
where $\Gamma$ denotes the usual Gamma function{\rm.}
\end{thm}
\begin{thm} \label{thm-2}
Let $\nu<0$ and $0<b<a.$
Then{\rm,} as $t\to\infty,$ it holds that
\begin{equation*}
\bbP_a^{(\nu)}(\tau_b>t)=
a^{2|\nu|}\Bigl\{1-\Bigl(\frac{b}{a}\Bigr)^{2|\nu|}\Bigr\}
\frac{1}{\Gamma(1+|\nu|)(2t)^{|\nu|}}
+O(t^{-|\nu|-\e})
\end{equation*}
for any $\e\in(0,\frac{|\nu|}{1+|\nu|}).$
\end{thm}
When $\nu=0$, it is known that
\begin{equation*}
\bbP_a^{(0)}(\tau_b>t)=\frac{2\log(a/b)}{\log t}+
o((\log t)^{-1}).
\end{equation*}
This identity has been discussed in \cite{HM-T} and
we omit the details.
\section{Proof of Theorem \ref{thm-1}}
We assume $\nu>0$ in this section.
First we give some lemmas.
The first one is shown by Byczkowski and Ryznar \cite{BR}.
\begin{lemma} \label{l:rough-est}
There exists a constant $C$ such that
\begin{equation*}
\bbP_a^{(\nu)}(t<\tau_b<\infty) \leqq Ct^{-\nu}.
\end{equation*}
\end{lemma}
\begin{lemma} \label{l:exp-tail}
If $0<b<a,$ one has
\begin{equation} \label{e:exp-tail}
\bbP_a^{(\nu)}(\tau_b>t) = 1-\Bigl(\frac{b}{a}\Bigr)^{2\nu}+
\bbE_a^{(\nu)}\Bigl[ \Bigl(\frac{b}{R_t}\Bigr)^{2\nu}
\iti_{\{\inf_{0\leqq s \leqq t}R_s>b\}} \Bigr]
\end{equation}
for any $t>0,$ where $\bbE_a^{(\nu)}$ is the expectation
with respect to $\bbP_a^{(\nu)}$ and
$\{R_s\}_{s\geqq0}$ denotes the coordinate process{\rm.}
\end{lemma}
\noindent{\bf Proof.}\
It is well known that
\begin{equation*}
\bbP_a^{(\nu)}(\tau_b=\infty)=
\bbP_a^{(\nu)}(\inf_{s\geqq0}R_s>b)=
1-\Bigl(\frac{b}{a}\Bigr)^{2\nu}.
\end{equation*}
By the Markov property of Bessel processes,
we have for $t>0$
\begin{align*}
\bbP_a^{(\nu)}(\tau_b=\infty) & =
\bbP_a^{(\nu)}(\inf_{0\leqq s\leqq t}R_s>b \ \text{and}\
\inf_{s\geqq t}R_s>b) \\
& = \bbE_a^{(\nu)}[ \bbP_{R_t}^{(\nu)}(\tau_b=\infty)
\iti_{\{\inf_{0\leqq s \leqq t}R_s>b\}}] \\
& = \bbE_a^{(\nu)}\Bigl[
\Bigl\{ 1-\Bigl(\frac{b}{R_t}\Bigr)^{2\nu} \Bigr\}
\iti_{\{\inf_{0\leqq s \leqq t}R_s>b\}} \Bigr],
\end{align*}
which implies \eqref{e:exp-tail}.
\begin{lemma} \label{l:asymp-mom}
For any $a>0$ and $p$ with $0<p<1+\nu,$ it holds that
\begin{equation} \label{e:asymp-mom}
\frac{\Gamma(1+\nu-p)}{\Gamma(1+\nu)} \frac{1}{(2t)^{p}}
e^{-\frac{a^2}{2t}} \leqq
\bbE_a^{(\nu)}[(R_t)^{-2p}] \leqq
\frac{\Gamma(1+\nu-p)}{\Gamma(1+\nu)} \frac{1}{(2t)^{p}} +
Ct^{-1-p}
\end{equation}
for $t\geqq1,$ where $C$ is a positive constant
independent of $t.$
\end{lemma}
\noindent{\bf Proof.}\
By the explicit expression for the transition density
of the Bessel process, we have
\begin{equation*}
\bbE_a^{(\nu)}[(R_t)^{-2p}] = \int_0^\infty
y^{-2p} \frac{1}{t} \Bigl(\frac{y}{a}\Bigr)^\nu y
e^{-\frac{a^2+y^2}{2t}} I_\nu\Bigl(\frac{ay}{t}\Bigr) \dd y,
\end{equation*}
where $I_\nu$ is the modified Bessel function of the first kind
with index $\nu$ (cf.\ \cite{W}) given by
\begin{equation*}
I_\nu(z)=\Bigl(\frac{z}{2}\Bigr)^\nu \sum_{n=0}^\infty
\frac{(z/2)^{2n}}{\Gamma(n+1) \Gamma(1+\nu+n)}
\quad (z\in\bR\setminus(-\infty,0)).
\end{equation*}
Hence, it is easy to get
\begin{equation*}
\bbE_a^{(\nu)}[(R_t)^{-2p}] = \frac{1}{(2t)^{p}} e^{-a^2/2t}
\sum_{n=0}^\infty \frac{a^{2n} \Gamma(n+\nu+1-p)}
{\Gamma(n+1)\Gamma(1+n+\nu)(2t)^n}
\end{equation*}
and the assertion of the lemma.
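For completeness, we sketch the last step: since $0<p<1+\nu,$
every term of the series is positive, and
the bounds in \eqref{e:asymp-mom} follow from
\begin{align*}
\bbE_a^{(\nu)}[(R_t)^{-2p}]
& \geqq \frac{e^{-a^2/2t}}{(2t)^{p}}
\frac{\Gamma(1+\nu-p)}{\Gamma(1+\nu)}
\qquad (\text{the term with } n=0), \\
\bbE_a^{(\nu)}[(R_t)^{-2p}]
& \leqq \frac{\Gamma(1+\nu-p)}{\Gamma(1+\nu)(2t)^{p}}
+\frac{1}{(2t)^{p+1}} \sum_{n=1}^\infty
\frac{a^{2n}\Gamma(n+\nu+1-p)}{\Gamma(n+1)\Gamma(1+n+\nu)},
\end{align*}
where, for the upper bound, we have used $e^{-a^2/2t}\leqq1$ and
$(2t)^{-n}\leqq(2t)^{-1}$ for $n\geqq1$ and $t\geqq1$;
the last series converges and gives the constant $C.$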
\begin{rem}
The moments of $R_t$ for fixed $t$ have
explicit expressions by means of the Whittaker functions
(cf.\ \cite{GR}, p.709), but they do not seem useful here.
\end{rem}
We are now in a position to give a complete proof
of Theorem \ref{thm-1}.
\bigskip
\noindent{\bf Proof of Theorem \ref{thm-1}.}\
By Lemma \ref{l:exp-tail} we have
\begin{equation*}
\bbP_a^{(\nu)}(t<\tau_b<\infty)=
b^{2\nu}\bbE_a^{(\nu)}[(R_t)^{-2\nu}]-
b^{2\nu}\bbE_a^{(\nu)}[(R_t)^{-2\nu}\iti_{\{\tau_b\leqq t\}}].
\end{equation*}
For the first term we have shown in Lemma \ref{l:asymp-mom}
\begin{equation*}
\bbE_a^{(\nu)}[(R_t)^{-2\nu}]=\frac{1}{\Gamma(1+\nu)(2t)^\nu}
(1+O(t^{-1})).
\end{equation*}
Hence, if we could show
\begin{equation} \label{e:to-go}
\bbE_a^{(\nu)}[(R_t)^{-2\nu}\iti_{\{\tau_b\leqq t\}}]
=\frac{1}{\Gamma(\nu+1)(2t)^\nu} \Bigl(\frac{b}{a}\Bigr)^{2\nu}
+O\Bigl(\frac{1}{t^{\nu+\e}}\Bigr)
\end{equation}
for any $\e\in(0,\frac{\nu}{\nu+1})$,
we obtain the assertion of the theorem.
For this purpose, we let $\alpha\in(0,\frac{1}{\nu+1})$,
choose $p$ satisfying
\begin{equation*}
\frac{1}{1-\alpha}<p<\frac{1+\nu}{\nu}
\end{equation*}
and let $q$ be such that $p^{-1}+q^{-1}=1$.
We divide the expectation on the right hand side of
\eqref{e:to-go} into the sum of
\begin{equation*}
I_1=\bbE_a^{(\nu)}[(R_t)^{-2\nu}
\iti_{\{\tau_b\leqq t^{\alpha q}\}}]
\quad \text{and} \quad
I_2=\bbE_a^{(\nu)}[(R_t)^{-2\nu}
\iti_{\{t^{\alpha q}<\tau_b\leqq t\}}].
\end{equation*}
\indent We simply apply the H\"older inequality to $I_2$.
Then we get
\begin{align*}
I_2 & \leqq \bbE_a^{(\nu)}[(R_t)^{-2\nu}
\iti_{\{t^{\alpha q}<\tau_b<\infty\}}] \\
& \leqq \Bigl\{ \bbE_a^{(\nu)}[(R_t)^{-2\nu p}] \Bigr\}^{1/p}
\Bigl\{ \bbP_a^{(\nu)}(t^{\alpha q}<\tau_b<\infty) \Bigr\}^{1/q}
\end{align*}
and, by Lemmas \ref{l:rough-est} and \ref{l:asymp-mom},
we see that there exists a constant $C_1$ such that
\begin{equation*}
I_2 \leqq C_1 t^{-\nu-\alpha \nu}.
\end{equation*}
In the following we denote by $C_i$'s
the constants independent of $t$.
For $I_1$, the strong Markov property of Bessel processes implies
\begin{equation*}
I_1=\int_0^{t^{\alpha q}} \bbE_b^{(\nu)}[ (R_{t-s})^{-2\nu} ]
\bbP_a^{(\nu)}(\tau_b\in ds)=I_{11}+I_{12},
\end{equation*}
where
\begin{equation*}
I_{11}=\int_0^{t^{\alpha q}} \frac{1}{2^\nu \Gamma(\nu+1)}
\frac{1}{(t-s)^\nu} \bbP_a^{(\nu)}(\tau_b\in ds).
\end{equation*}
Since $\alpha q<1$, Lemma \ref{l:rough-est} implies
\begin{equation*}
|I_{12}| = |I_1-I_{11}| \leqq \int_0^{t^{\alpha q}}
\frac{C_2}{(t-s)^{\nu+1}} \bbP_a^{(\nu)}(\tau_b\in ds)
\leqq \frac{C_3}{t^{\nu+1}}.
\end{equation*}
\indent We divide $I_{11}$ into the sum of
\begin{align*}
& J_1=\int_{t^\alpha}^{t^{\alpha q}} \frac{1}{2^\nu\Gamma(\nu+1)}
\frac{1}{(t-s)^\nu} \bbP_a^{(\nu)}(\tau_b\in ds) \\
\intertext{and}
& J_2=\int_{0}^{t^{\alpha}} \frac{1}{2^\nu\Gamma(\nu+1)}
\frac{1}{(t-s)^\nu} \bbP_a^{(\nu)}(\tau_b\in ds).
\end{align*}
For $J_1$ we have by Lemma \ref{l:rough-est}
\begin{equation*}
0\leqq J_1 \leqq \frac{C_4}{(t-t^{\alpha q})^\nu}
\bbP_a^{(\nu)}(t^\alpha<\tau_b<\infty) \leqq
\frac{C_5}{t^{\nu+\alpha\nu}}.
\end{equation*}
\indent For $J_2$ we have
\begin{align*}
J_2 & \leqq \frac{1}{2^\nu \Gamma(\nu+1) (t-t^\alpha)^\nu}
\bbP_a^{(\nu)}(\tau_b\leqq t^\alpha) \\
& \leqq \frac{1}{\Gamma(\nu+1)(2t)^\nu}
\bbP_a^{(\nu)}(\tau_b<\infty)
\frac{1}{(1-t^{-(1-\alpha)})^\nu} \\
& \leqq \frac{1}{\Gamma(\nu+1)(2t)^\nu}
\Bigl(\frac{b}{a}\Bigr)^{2\nu}
\Bigl( 1+\frac{C_6}{t^{1-\alpha}} \Bigr).
\end{align*}
On the other hand we have by Lemma \ref{l:rough-est}
\begin{align*}
J_2 & \geqq \frac{1}{\Gamma(\nu+1)(2t)^\nu}
\bbP_a^{(\nu)}(\tau_b\leqq t^\alpha) \\
& = \frac{1}{\Gamma(\nu+1)(2t)^\nu} \Bigl\{
\bbP_a^{(\nu)}(\tau_b<\infty) -
\bbP_a^{(\nu)}(t^\alpha \leqq \tau_b<\infty) \Bigr\} \\
& \geqq \frac{1}{\Gamma(\nu+1)(2t)^\nu}
\Bigl(\frac{b}{a}\Bigr)^{2\nu} - \frac{C_7}{t^\nu t^{\alpha\nu}}.
\end{align*}
\indent Combining the above estimates, we obtain
\begin{equation*}
\bbE_a^{(\nu)}[(R_t)^{-2\nu} \iti_{\{\tau_b\leqq t\}}]
= \frac{1}{\Gamma(\nu+1)(2t)^\nu} \Bigl(\frac{b}{a}\Bigr)^{2\nu}
+ \frac{1}{t^{\nu}} O\Bigl(\frac{1}{t^{\alpha\nu}}+
\frac{1}{t}+\frac{1}{t^{1-\alpha}}\Bigr).
\end{equation*}
Since
\begin{equation*}
0<\alpha\nu<\frac{\nu}{\nu+1}<1-\alpha<1
\end{equation*}
and we can choose an arbitrary $\alpha$ satisfying this condition,
\begin{equation*}
\bbE_a^{(\nu)}[(R_t)^{-2\nu} \iti_{\{\tau_b\leqq t\}}]
= \frac{1}{\Gamma(\nu+1)(2t)^\nu} \Bigl(\frac{b}{a}\Bigr)^{2\nu}
+ O\Bigl(\frac{1}{t^{\nu+\e}}\Bigr)
\end{equation*}
holds for any $\e\in(0,\frac{\nu}{\nu+1})$.
Now we have shown \eqref{e:to-go} and
the assertion of Theorem \ref{thm-1}.
\section{Proof of Theorem \ref{thm-2}}
Theorem \ref{thm-2} is easily obtained from Theorem \ref{thm-1}.
We recall the explicit expressions for
the Laplace transforms of the distributions of $\tau_b$:
for $\nu\in\mathbf{R}$, it is known (\cite{GS, HM-T}) that
\begin{equation*}
\bbE_a^{(\nu)}[ e^{-\lambda \tau_b} ] =
\Bigl(\frac{b}{a}\Bigr)^\nu
\frac{K_\nu(a\sqrt{2\lambda})}{K_\nu(b\sqrt{2\lambda})},
\qquad \lambda>0,
\end{equation*}
where $K_\nu$ is the modified Bessel function of the second kind.
From this identity we easily obtain for $\nu>0$
\begin{equation*}
\bbP_a^{(-\nu)}(\tau_b\in dt) = \Bigl(\frac{a}{b}\Bigr)^{2\nu}
\bbP_a^{(\nu)}(\tau_b \in dt).
\end{equation*}
Hence we get from Theorem \ref{thm-1}
\begin{equation*} \begin{split}
\bbP_a^{(-\nu)}(\tau_b>t) & = \Bigl(\frac{a}{b}\Bigr)^{2\nu}
\bbP_a^{(\nu)}(t<\tau_b<\infty) \\
& = a^{2\nu} \Bigl\{ 1-\Bigl(\frac{b}{a}\Bigr)^{2\nu}
\Bigr\} \frac{1}{\Gamma(1+\nu)(2t)^{\nu}} (1+o(1)).
\end{split} \end{equation*}
\noindent{\bf Acknowledgement.}\
We thank Professor Yuu Hariya for valuable discussions.
\bigskip
\section{Introduction}
\label{intro}
Kubernetes is open-source software for automating the management of computerized services, such as containers~\cite{miles:book2020:kubernetes}. Practitioners use Kubernetes because it reduces repetitive manual processes involved in container deployment and management. Kubernetes is considered one of the most popular open-source container orchestration tools and it is used in organizations such as Adidas, Nokia, Spotify, and the U.S. Department of Defense (DoD)~\cite{k8s:case:studies, cncf:case:studies}. Benefits of Kubernetes usage have been documented: for example, usage of Kubernetes in the U.S. DoD resulted in reducing an eight-month software deployment effort down to one week~\cite{cncf:case:studies}. For Adidas, the load time for an e-commerce website was reduced by half, and release frequency increased from once every 4$\mathtt{\sim}$6 weeks to 3$\mathtt{\sim}$4 times a day~\cite{k8s:case:studies}.
Despite reported benefits, Kubernetes users have reported their concerns related to Kubernetes security. The Cloud Native Computing Foundation conducted a survey with 1,337 practitioners and reported 40\% of the survey participants to be concerned with Kubernetes security~\cite{cncf:survey}. Anecdotal evidence supports practitioner-reported concerns related to Kubernetes security. For example, in 2018, malicious users gained access to Tesla's Amazon Web Services (AWS) resources using an insecure Kubernetes console~\cite{tesla:k8s:attack}.
Systematizing available knowledge regarding Kubernetes security practices could support practitioners in securing their Kubernetes installations. Such systematization of knowledge can be beneficial for practitioners who (i) want to understand what activities need to be executed to secure Kubernetes components and (ii) can use the derived list of practices as a benchmark to compare their state of security practices.
Systematization of knowledge can be conducted by analyzing Internet artifacts, such as blog posts and video presentations. Instead of academic forums, such as research conferences, practitioners often report what practices they use in Internet artifacts~\cite{garousi2018smells:blog, glass2006software:book:blog}. In prior work, researchers have acknowledged the value of Internet artifacts in deriving practices, and analyzed Internet artifacts to summarize security practices used in DevOps~\cite{rahman:csed2017:devsecops}, practices used for continuous deployment~\cite{rahman:agile2015:cd}, and testing practices~\cite{garousi2018smells:blog}. Analysis of Internet artifacts can be useful for systematizing Kubernetes security knowledge, a research topic that remains underexplored~\cite{brad:rsa:k8s:underexplored}. We hypothesize that by systematically analyzing Internet artifacts related to Kubernetes security we can derive a list of security practices.
\textit{The goal of this paper is to help practitioners in securing their Kubernetes installations through a systematization of knowledge related to Kubernetes security practices.}
We answer the following research question: \textbf{\textit{RQ:} What Kubernetes security practices are reported by practitioners?}
We systematize knowledge related to Kubernetes security by conducting a grey literature review~\cite{grey:original} where we apply qualitative analysis on Internet artifacts. We collect required Internet artifacts using the Google search engine with three search strings. Next, we apply a set of filtering criteria and apply qualitative analysis~\cite{saldana2015coding} on 104 Internet artifacts, such as blog posts, to construct a list of security practices.
We list our contributions as following:
\begin{itemize}[leftmargin=*]
\item{A synthesized list of security practices for Kubernetes; and}
\item{A curated dataset\cite{k8s:dataset} with a mapping between Internet artifact and identified security practices.}
\end{itemize}
The rest of the paper is organized as follows: in Section \ref{meth}, we provide the methodology of our paper. In Section \ref{taxonomy-res}, we describe the derived Kubernetes security practices in detail. In Section \ref{related}, we discuss prior research on Kubernetes. We discuss our findings with implications for users and researchers and conclude the paper in Section \ref{discussion}.
\begin{figure*}[h]
\centering
\includegraphics[scale=0.95]{K8S_Background.pdf}
\caption{A brief overview of Kubernetes. Kubernetes users interact with the installation using the Kubernetes dashboard and `kubectl'. The purpose of master node is to maintain the desired cluster state and manage worker nodes. Worker nodes are used to run containerized applications inside the pod.}
\label{fig-k8s}
\end{figure*}
\section{Methodology}
\label{meth}
We first provide background on Kubernetes in Section~\ref{background}. Next, we provide methodology details in Section~\ref{taxonomy-meth}.
\subsection{Background}
\label{background}
Kubernetes is open-source software for automating the management of computerized services, such as containers~\cite{miles:book2020:kubernetes}. A Kubernetes installation is colloquially referred to as a Kubernetes cluster~\cite{miles:book2020:kubernetes}. Each Kubernetes cluster contains a set of worker machines defined as nodes. As shown in Figure~\ref{fig-k8s}, two types of nodes exist for Kubernetes: master nodes and worker nodes.
Each master node includes the following components: `API server', `scheduler', `controller', and `etcd'~\cite{miles:book2020:kubernetes}. The `API server' is responsible for orchestrating all the operations within the cluster. Kubernetes serves its functionality through an application program interface from the `API server'. The `controller' is a component on the master that watches the state of the cluster through the `API server' and changes the current state towards the desired state. The `scheduler' is the component in the control plane responsible for scheduling pods across multiple nodes. The `etcd' is a key-value based database that stores all configuration information for the Kubernetes cluster. Users use a command-line tool `Kubectl' to communicate with the `API server' in the master node.
The worker nodes host the applications that run on Kubernetes~\cite{miles:book2020:kubernetes}. The following components are included in the worker node: `kube-proxy', `kubelet' and `pod'. `kube-proxy' maintains the network rules on nodes. `kubelet' is an agent that ensures containers are running inside a pod. The pod is the smallest Kubernetes entity, which includes at least one active container. A container is a standard software unit that packages the code and associated dependencies to run in any computing environment~\cite{miles:book2020:kubernetes}.
\subsection{Methodology to Identify Kubernetes Security Practices}
\label{taxonomy-meth}
We synthesize Kubernetes security practices by conducting a grey literature review~\cite{grey:original}. A grey literature review is the process of reviewing and synthesizing content included in Internet artifacts, such as blog posts and video presentations~\cite{grey:original}. A grey literature review is different from a systematic mapping study or systematic literature review, as in these types of literature reviews, researchers use peer-reviewed scientific articles indexed in scholar databases. In prior work, researchers have reported that practitioners use Internet artifacts, such as blog posts, to report their experiences, recommendations, and the practices they follow. Previously, researchers have systematically studied Internet artifacts to identify challenges in microservices development, identify practices used in continuous deployment~\cite{rahman:agile2015:cd}, identify security practices used in organizations that have adopted DevOps~\cite{rahman:csed2017:devsecops}, and identify software testing practices~\cite{GAROUSI:grey:testing}. Our hypothesis is that by systematically analyzing Internet artifacts we can synthesize Kubernetes security practices reported by practitioners.
We conduct grey literature review using the following steps:
\noindent \textbf{\textit{Step\#\MYROMAN{1}}-Collect Internet Artifacts}: We use the Google search engine to collect our Internet artifacts. We use three search strings: `kubernetes security practices', `kubernetes security good practices', and `kubernetes security best practices'. We started with the search string `kubernetes security practices', and later added the other two search strings because, while collecting search results with the first string, we observed practices being referred to as `good practices' and `best practices'. After performing the search, we collect the first 100 search results, as Google displays the results in a sorted order based on relevance.
\noindent \textbf{\textit{Step\#\MYROMAN{2}}-Select Internet Artifacts}: We apply inclusion criteria on the collected search results to identify Internet artifacts that discuss security practices for Kubernetes. The inclusion criteria are listed below:
\begin{itemize}[leftmargin=*]
\item{The Internet artifact is not a duplicate};
\item{The Internet artifact is available for reading}; and
\item{The Internet artifact discusses security practices for Kubernetes.}
\end{itemize}
\noindent \textbf{\textit{Step\#\MYROMAN{3}}-Qualitative Analysis}: We use open coding~\cite{saldana2015coding}, a qualitative analysis technique, to determine the security practices for Kubernetes. In open coding, a rater observes and synthesizes patterns within unstructured text~\cite{saldana2015coding}. To determine the practices, the first author applied open coding on the content of the Internet artifacts. The first author is a graduate student with one year of professional experience in Kubernetes and one year of academic experience in software security.
\noindent \textbf{\textit{Step\#\MYROMAN{4}}-Verify Rating}: The process of determining the practices is susceptible to first author bias. We mitigate this bias by allocating another rater, the second author of the paper, who applied closed coding~\cite{crabtree:coding:book} on a randomly selected set of 50 Internet artifacts. Closed coding is the technique of mapping an entry to a pre-defined category~\cite{crabtree:coding:book}. For each of the 50 Internet artifacts, the second author examined whether the artifact of interest included a discussion related to the security practices identified by the first author. The second author has 3 years of experience in software security. We calculate the agreement rate between the first and second author for the 50 Internet artifacts using Cohen's Kappa~\cite{cohens:kappa}.
\section{Kubernetes Security Practices}
\label{taxonomy-res}
In this section we answer: \textbf{\textit{RQ: What Kubernetes security practices are reported by practitioners?}} After applying open coding on 104 Internet artifacts we derive 11 practices for Kubernetes security. Of the 104 Internet artifacts, 90.38\%, 4.81\%, and 4.81\% are blog posts, videos, and presentations, respectively. We describe each of these practices below, where the count of Internet artifacts is enclosed in parentheses:
\noindent \MYROMAN{1}. \textbf{Authentication and Authorization (82)}: The practice of applying authentication and authorization rules to prevent malicious users from getting access and performing unauthorized activities inside the Kubernetes cluster. Authentication in Kubernetes refers to the authentication of API requests through authentication plugins\cite{k8s:docs}. Authorization in Kubernetes refers to the evaluation of each authenticated API request against all policies to allow or deny the request\cite{k8s:docs}. Practitioners have reported a set of tasks to implement the practice of authentication and authorization:
\begin{itemize}[leftmargin=*]
\item{Anonymous access to the Kubernetes server needs to be disabled. By default, Kubernetes allows anonymous access to the Kubernetes API server~\cite{k8s:docs}.}
\item{Default authorization modes need to be disabled. }
\item{Admission controllers need to be enabled. In Kubernetes, an admission controller is a tool that intercepts requests to the Kubernetes API after the request is authenticated and authorized, but before the object is persisted.}
\item{Controlling the use of impersonation: Kubernetes allows one user to act as another user through impersonation headers\cite{k8s:docs}. The impersonation feature has benefits, for example, a user designated as an admin can use this feature to debug authorization by impersonating another user and checking if the request was denied. However, in case of failure to define limitations on who can impersonate and what the impersonated user can do, the impersonation feature can be detrimental to the security of Kubernetes.}
\item{Default configurations must be changed. The use of default configuration in authentication and authorization can allow any anonymous unauthenticated user to perform malicious activities. For example, a malicious user can guess the default configuration of an insecure admission, gain access to the admission controller, and run malicious commands.}
\end{itemize}
For authentication and authorization, practitioners suggest the use of OpenID~\footnote{https://openid.net/}, a standard protocol for authentication. The official Kubernetes documentation also provides guidelines on how to implement secure authorization using webhooks, role-based access control (RBAC) and attribute-based access control (ABAC)~\cite{k8s:docs}.
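As an illustrative sketch of RBAC-based authorization (the namespace, role, and user names below are hypothetical), read-only access to pods in a single namespace can be granted as follows:
\begin{verbatim}
# Role: read-only access to pods in the "dev" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
# RoleBinding: grant the Role above to the user "jane"
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
\end{verbatim}
Binding users to such narrowly scoped roles, instead of cluster-wide administrative roles, limits what an authenticated user can do.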
\noindent \MYROMAN{2}. \textbf{Implementing Kubernetes-specific Security Policies (81):} The practice of applying policies to secure Kubernetes components, pods and network of Kubernetes clusters to prevent security breaches.
\begin{itemize}[leftmargin=*]
\item{\textit{Network-specific policies}: The practice of applying a network policy to protect communication between Kubernetes pods from undesirable network communications. By default, all Kubernetes pods can communicate with other pods. Practitioners recommend policies that restrict traffic between pods, restrict API server access, and reduce network exposure to secure the network. If network policies are not defined and firewalls are not set, then anyone may attack the API server from any IP address. Practitioners also suggest imposing proper firewalls to block all undesirable network communication using network policy plugins like Calico~\footnote{https://www.projectcalico.org/calico-networking-for-kubernetes/} and configuring restricted access to a database for pods.}
\item{\textit{Pod-specific policies}: The practice of implementing a policy for pods to apply security context to pods and containers. Pod policies determine how the workloads should run in the Kubernetes cluster. Without defining a secure context for the pod, a container may run with root privilege and write permission into the root file system, which can make the Kubernetes cluster vulnerable. Practitioners recommend containers inside a pod must run as a non-root user with read-only permission and enabling Linux security modules. Practitioners also recommend that users install the minimal version of operating systems to reduce the attack surface.}
\item{\textit{Generic policies}: The practice of applying a generic security policy to protect Kubernetes cluster components from external malicious users. TCP ports for kubelet, API server, etcd, and network plugins should not be left open and should require authentication to have visibility. Every user in the system should have the least privilege by default. Public SSH access to Kubernetes cluster nodes should be restricted. Practitioners recommend that Kubernetes users create an audit policy for logging, and audit policies must be configured for each Kubernetes cluster at the API server level. }
\end{itemize}
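To illustrate (the namespace and image names are hypothetical), a network-specific policy that denies all ingress traffic to pods in a namespace, unless another policy explicitly allows it, can be sketched as:
\begin{verbatim}
# Default-deny: selects every pod in the namespace, allows no ingress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: dev
spec:
  podSelector: {}
  policyTypes:
  - Ingress
\end{verbatim}
Similarly, a pod-specific policy can be expressed through the pod's security context:
\begin{verbatim}
# Security context: non-root user, read-only root file system
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0
    securityContext:
      runAsNonRoot: true
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
\end{verbatim}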
\noindent \MYROMAN{3}. \textbf{Vulnerability scanning (63):} The practice of scanning Kubernetes components and continuous delivery (CD) components for vulnerabilities.
\begin{itemize}[leftmargin=*]
\item{Kubernetes components, such as containers, can contain vulnerabilities and malware. If vulnerabilities are present in a Kubernetes cluster, then the entire container orchestration system, and the provisioned applications, become susceptible to attacks. For example, in 2017, researchers found Docker images embedded with malware. Practitioners recommend scanning containers for vulnerabilities with tools, such as `Dockscan'~\footnote{https://github.com/kost/dockscan} and `CoreOS Clair'~\footnote{https://github.com/quay/clair}.}
\item{If images and deployment configurations within CD components are not inspected, then it can make the Kubernetes cluster vulnerable to malicious users. The malicious users can gain access at a later point when these images are deployed and may exploit the latent vulnerabilities in Kubernetes production environments. Practitioners recommend pulling images from a trusted private registry and checking for the vulnerability of code and images.}
\end{itemize}
\noindent \MYROMAN{4}. \textbf{Logging (47):} The practice of enabling and monitoring logs for the Kubernetes cluster. Practitioners recommend that logging should be enabled for (i) applications, (ii) the containers within each pod, and for (iii) Kubernetes clusters for system health checking. Without enabling logging and monitoring, users may face difficulty troubleshooting unexpected consequences, such as attacks from malicious users and outages. To implement the practice of logging, practitioners propose the following practices:
\begin{itemize}[leftmargin=*]
\item{Logs must be monitored at regular intervals.}
\item{Alerts must be set up for any drastic change in log metrics compared to previous log records.}
\end{itemize}
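As a brief sketch (the pod and container names are hypothetical), container logs can be inspected with `kubectl':
\begin{verbatim}
# Stream the logs of a container running inside a pod
kubectl logs app-pod -c app --follow
# Inspect the logs of the previous, crashed instance of the container
kubectl logs app-pod -c app --previous
\end{verbatim}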
\noindent \MYROMAN{5}. \textbf{Namespace separation (36):} The practice of separating namespaces so that the resources of one namespace are not shared with another. A `namespace' in Kubernetes is a logically isolated virtual cluster within the same physical cluster~\cite{k8s:docs}. Creation of separate namespaces enables resources to be isolated between namespaces. If a separate namespace is not created for a resource then the resource gets the `default' namespace. Practitioners recommend that each team in a company should have a separate namespace for better manageability and for running its development and production environments\footnote{https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-organizing-with-namespaces}. If there is only a `default' namespace and no separate namespaces for different teams, then any malicious user can perform an attack on the `default' namespace, making the entire set of resources vulnerable to that attack. Practitioners use the \textit{--namespace} flag in the kubectl command to separate namespaces.
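The separation can be sketched as follows (the team names are hypothetical):
\begin{verbatim}
# One namespace per team
kubectl create namespace team-a
kubectl create namespace team-b
# Deploy and query resources within a specific namespace
kubectl apply -f deployment.yaml --namespace=team-a
kubectl get pods --namespace=team-a
\end{verbatim}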
\noindent \MYROMAN{6}. \textbf{Encrypt and restrict access to etcd (34):} The practice of encrypting and restricting access to `etcd', the internal database used by Kubernetes~\cite{k8s:docs}. Practitioners recommend that `etcd' only be available from the API servers, and that it be isolated behind a firewall so that outsiders cannot get access via the API.
By default, Kubernetes stores secret data as plaintext in `etcd'\footnote{https://ubuntu.com/kubernetes/docs/encryption-at-rest}. In that case, if a malicious user gets access to `etcd', then the malicious user can retrieve sensitive information, such as database user names, passwords, and queries. Although Kubernetes can encrypt the data in `etcd', the encryption key is stored as plaintext in a configuration file on the master node. For that reason, practitioners recommend using secret management tools for additional security~\cite{k8s:docs}, such as `Vault'\footnote{https://www.vaultproject.io} for encryption.
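As a sketch, encryption at rest for secrets can be configured by passing an `EncryptionConfiguration' file to the API server (the key below is a placeholder):
\begin{verbatim}
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded 32-byte key>
  - identity: {}
\end{verbatim}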
\noindent \MYROMAN{7}. \textbf{Continuous update (28):} The practice of applying security patches to keep the Kubernetes cluster updated with the latest security fixes. Practitioners recommend that Kubernetes users apply updates to the cluster, as well as conduct continuous updates for the deployed applications within the Kubernetes pods. Without continuous updates, vulnerabilities might exist in the Kubernetes installation, which can give malicious users the opportunity to perform attacks.
Vulnerabilities in Kubernetes are not uncommon: for example, two vulnerabilities CVE-2019-16276~\cite{cve-2019-16276} and CVE-2019-11253~\cite{cve-2019-11253} were discovered in October 2019 in Kubernetes~\footnote{https://security.berkeley.edu/news/kubernetes-vulnerabilities-allow-authentication-bypass-dos-cve-2019-16276}. The vulnerability `CVE-2019-16276' was related to `CWE-444 Inconsistent Interpretation of HTTP Requests (`HTTP Request Smuggling')'. The vulnerability `CVE-2019-11253' was related to `CWE-20 Improper Input Validation'. The security patches for `CVE-2019-11253' and `CVE-2019-16276' were released on October 16, 2019 and October 22, 2019 respectively~\footnote{https://cloud.google.com/kubernetes-engine/docs/security-bulletins}. If any Kubernetes user does not install these security patches then the Kubernetes cluster will be susceptible to a denial of service attack.
For continuous updates, practitioners have also recommended the use of rolling updates, i.e., installing Kubernetes patches without disrupting the availability of the deployed applications.\footnote{https://k8s.vmware.com/kubernetes-security-best-practices/} Kubernetes provides tools, such as `kubectl', to perform rolling updates~\cite{k8s:docs}.
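A rolling update can be sketched with `kubectl' as follows (the deployment and image names are hypothetical):
\begin{verbatim}
# Update the container image; pods are replaced gradually
kubectl set image deployment/app-deployment \
    app=registry.example.com/app:1.1
# Watch the progress of the rollout
kubectl rollout status deployment/app-deployment
# Roll back if the update misbehaves
kubectl rollout undo deployment/app-deployment
\end{verbatim}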
\noindent \MYROMAN{8}. \textbf{Limit CPU and memory quota (18):} The practice of limiting the CPU and memory available to a pod or a namespace so that malicious attacks can be mitigated. By default, all resources in Kubernetes start with unbounded memory requests/limits and unbounded CPU access. If a malicious user starts a denial of service (DOS) attack within a pod in the Kubernetes cluster then, due to a high volume of requests, kube-scheduler will create a new pod and an instance of the container will start inside the new pod. This process continues until all available CPU resources and memory are consumed, leaving all the other applications in starvation. Hence, failure to define CPU and memory request limits for a pod or a namespace may result in the consumption of all available resources in the Kubernetes cluster, enabling a denial of service (DOS) attack.
Practitioners can configure the amount of resources by defining a maximum number of instances for a container, the number of CPU shares an application may consume, and the maximum amount of memory for a pod or namespace.
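As an illustrative config fragment (the names and amounts below are hypothetical, not taken from the surveyed artifacts), such bounds can be declared per container in the pod specification:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-server        # hypothetical pod name
spec:
  containers:
  - name: app
    image: app:1.0        # hypothetical image
    resources:
      requests:
        cpu: "250m"       # guaranteed share: a quarter of one CPU
        memory: "128Mi"
      limits:
        cpu: "500m"       # hard ceiling enforced at runtime
        memory: "256Mi"   # exceeding this terminates the container
```

A `ResourceQuota' object can impose analogous aggregate limits across an entire namespace.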
\noindent \MYROMAN{9}. \textbf{Enable SSL/TLS support (18):} The practice of enabling the secure sockets layer (SSL) or transport layer security (TLS) protocol to ensure secure and encrypted communication between Kubernetes components. Enabling TLS between the Kubernetes API server, `etcd', `kubelet', and `kubectl' ensures secure communication between cluster components. Practitioners accordingly suggest enabling TLS and SSL certificates for all Kubernetes components.
\noindent \MYROMAN{10}. \textbf{Separate sensitive workload (14):} The practice of running sensitive applications on a dedicated set of machines to limit the potential impact of a security breach. For example, if a malicious user gets access to a node's `kubelet' credentials, then the user can access the contents of secrets on that node and gain control of its entire file system, but the user will not be able to access the sensitive applications and associated secrets running elsewhere. Practitioners recommend Kubernetes-provided utilities, such as `taints and tolerations'~\cite{k8s:docs}, which can control where a pod may be deployed.
\noindent \MYROMAN{11}. \textbf{Secure metadata access (9):} The practice of securing the sensitive metadata of the Kubernetes cluster. Practitioners state that the Kubernetes metadata APIs provide a gateway to exposing `kubelet' admin credentials. Google recommends activating features such as `Workload Identity'\footnote{https://cloud.google.com/kubernetes-engine/docs/how-to/protecting-cluster-metadata} for Google Kubernetes Engine (GKE) to prevent sensitive information from leaking through the metadata service.
\textit{\textbf{Rater verification}}: The Cohen's Kappa between the two raters is 0.8, which is substantial according to Landis and Koch~\cite{Landis:Koch:Kappa:Range}.
\section{Threats to Validity}
\label{threats}
We discuss the limitations of our paper as follows:
\noindent \textbf{Conclusion Validity}: Our derived set of practices is limited to our collection of 104 Internet artifacts. Our search might have missed Internet artifacts that include practices not identified in our paper. We mitigate this limitation by collecting the set of 104 Internet artifacts systematically.
The identified practices are also susceptible to the biases of the rater who identified the practices by applying open coding. We mitigate this limitation by allocating another rater, who applied closed coding. The Cohen's Kappa between the two raters is 0.8, which is substantial~\cite{Landis:Koch:Kappa:Range}.
\noindent \textbf{Construct Validity}: Our identified categories are susceptible to experimenter bias. The first author, who derived the practices, has professional experience in Kubernetes. This professional experience can create expectations related to security practices for Kubernetes, which may influence the identified practices.
\noindent \textbf{External Validity}: Our findings might not be generalizable, as we might have excluded practices that are unique to proprietary domains and not discussed publicly in Internet artifacts.
\section{Related Work}
\label{related}
Our paper is related to prior research that has investigated usage and maintenance of Kubernetes. Burns et al.~\cite{k8s:original:google} described the evolution of container management systems at Google, and described how two initial internal systems called Borg and Omega evolved into Kubernetes. Brewer~\cite{k8s:brewer} conducted a case study on Kubernetes and discussed how key concepts of Kubernetes can be used to simplify scaling of containers. Medel et al.~\cite{medel:k8s:performance} used real data collected from Kubernetes and applied formal modeling to characterize performance and resource management in Kubernetes. Chang et al.~\cite{chang:k8s:monitoring} constructed a monitoring platform to dynamically provision cloud resources using Kubernetes. Vayghan et al.~\cite{vayghan:k8s:outage} investigated availability of Kubernetes using a set of experiments, and reported that service outages can occur frequently. Shah and Dubaria~\cite{shah:k8s:compare} compared orchestration management features of Docker Swarm, Kubernetes, and Google Cloud Platform, and observed Kubernetes to provide features, such as deployment, monitoring, and easy scalability. Takahashi et al.~\cite{takahashi:k8s:portable} proposed a portable load balancer for Kubernetes, and reported improved portability without sacrificing performance. Song et al.~\cite{song:k8s:api} used Kubernetes to construct an auto scaling system for API gateways. The authors~\cite{song:k8s:api} report that their constructed system improves utilization of system resources, while ensuring high availability. Muralidharan et al.~\cite{murali:k8s:iot} constructed a Kubernetes-based system to monitor and manage Internet of Things (IoT) applications for smart cities. Wei-guo et al.~\cite{wei:k8s:scheduling} constructed a resource scheduling algorithm for Kubernetes using ant colony and particle swarm optimization techniques.
The scheduling algorithm proposed by Wei-guo et al.~\cite{wei:k8s:scheduling} outperforms the original algorithm used in Kubernetes.
The above-mentioned discussion highlights Kubernetes research in two areas: (i) the use of Kubernetes in creating systems, such as monitoring systems, and (ii) case studies on Kubernetes related to performance and resource management. We observe a lack of research related to security practices for Kubernetes. We address this research gap by systematically synthesizing practitioner-reported security practices using a grey literature review.
\section{Discussion and Conclusion}
\label{discussion}
\noindent \textbf{With great power comes responsibility}: Kubernetes provides utilities for users to manage containers at scale. However, our description of the 11 practices in Section~\ref{taxonomy-res} shows that effective and secure usage of Kubernetes requires the implementation of security practices applicable to multiple components within a Kubernetes installation: containers, pods, the `etcd' database, etc. The application of the aforementioned 11 practices also needs a deep understanding of Kubernetes components and configurations. Our discussion in Section~\ref{taxonomy-res} can be helpful in two ways: \textit{first}, practitioners can understand the components to which security practices are applicable; \textit{second}, practitioners who already have Kubernetes in place can use our identified practices as a benchmark against which to compare their own usage of practices.
\noindent \textbf{Implications for researchers}: Our discussion in Section~\ref{related} shows Kubernetes security to be an under-explored research area. Our derived list of security practices can provide the groundwork for future research in Kubernetes security. Researchers can systematically quantify to what extent, and how frequently, the reported security practices are actually in use. Researchers can measure the attack surface associated with Kubernetes components and configurations, and can investigate mitigation strategies, such as static analysis and dynamic analysis, to inspect insecure practices in Kubernetes. Furthermore, the detection and mitigation of security misconfigurations that occur in Kubernetes could be of interest to researchers.
\noindent \textbf{Conclusion}\label{conclusion}: As Kubernetes usage becomes increasingly popular, securing Kubernetes is of paramount importance to practitioners. A systematization of knowledge related to practitioner-reported practices might be helpful to secure Kubernetes installations. We conducted a qualitative analysis of Internet artifacts, such as blog posts, to identify 11 security practices for Kubernetes. Our derived list of practices includes continuous updates, SSL/TLS support, vulnerability scanning, and logging. Our paper can help practitioners in securing Kubernetes installations. Further, our findings can lay the groundwork for research in Kubernetes security.
\balance
\bibliographystyle{IEEEtran}
\section{Introduction}
Simulation of quantum channels is a central tool in quantum
information theory~\cite{NielsenChuang,Preskill,RMP,SamRMPm}. One
of the first seminal ideas was introduced in Ref.~\cite{B2}, where
the channel simulation was based on the standard teleportation
protocol~\cite{tele,teleREVIEW}, but where the shared
maximally-entangled state was replaced by an arbitrary two-qubit
resource state. Later on, Ref.~\cite{BoBo} showed that this method
allows one to simulate any Pauli channel, i.e., any quantum
channel whose action on an input state can be expressed by a Kraus
decomposition in terms of Pauli operators~\cite{NielsenChuang}. In
Ref.~\cite{B2}, the teleportation simulation was used to transform
protocols of quantum communication through a (Pauli) channel into
protocols of entanglement distillation over the resource states.
The same technique was then exploited in Ref.~\cite{HoroTEL} to
show the reproducibility between (isotropic) states and (Pauli)
channels.
In 2001, Ref.~\cite{WernerTELE} described generalized
teleportation protocols in the context of discrete variable (DV)
systems, allowing for more general quantum measurements beyond
Bell detection. Following these steps, Ref.~\cite{Leung} took the
first steps in investigating teleportation covariance for DV
channels, i.e., the property of a quantum channel to commute
with the random unitaries of teleportation. This property has been
generalized by Ref.~\cite{Stretching} to quantum channels at any
dimension, including continuous variable (CV) channels. Thanks to
teleportation covariance, a quantum channel can be simulated by
teleporting over its Choi matrix. This result was re-stated in a
different form by a follow-up work~\cite{WildeFollowup}.
One crucial step introduced by Ref.~\cite{Stretching} has been the
removal of any restriction on the dimension of the quantum systems
involved in the simulation process. For this reason, one can
simulate DV channels, CV channels and even hybrid channels between
DVs and CVs. More generally, Ref.~\cite{Stretching} was not
limited to teleportation LOCCs (i.e., Bell detection and unitary
corrections), but considered completely general LOCCs which may
also be asymptotic, i.e., defined as suitable sequences. This more
general LOCC simulation allowed them to simulate \textit{any}
quantum channel. In particular, it allowed them to simulate, for
the first time in the literature, the amplitude damping channel
(which is a DV channel) by using the Choi matrix of a bosonic
lossy channel (which is a CV channel) and an LOCC based on hybrid
CV-DV teleportation maps~\cite{Nota1}.
One of the most powerful applications of channel simulation is
teleportation stretching~\cite{Stretching}. In this method, the
LOCC simulation of a quantum channel (with some resource state
$\sigma$) is used to completely simplify the structure of adaptive
protocols of quantum and private communication, which are based on
the use of adaptive LOCCs, i.e., local operations assisted by
unlimited and two-way classical communications (CCs). Any such
protocol can be re-organized in such a way to become a much
simpler block protocol, where the output state, after $n$ uses of
the channel, is expressed in terms of a tensor-product of the
resource states $\sigma^{\otimes n}$ up to a global LOCC. Contrary
to previous approaches~\cite{B2,Niset,Wolfnotes,AlexAPP}, the
method devised in Ref.~\cite{Stretching} does not reduce quantum
communication (over specific channels) into entanglement
distillation, but reduces \textit{any} adaptive protocol (over
\textit{any} channel at \textit{any} dimension) into an equivalent
block form, where the original task is perfectly preserved (e.g.,
so that adaptive key generation is transformed into block key
generation). For this reason, the technique has been also extended
beyond point-to-point quantum
communication~\cite{networkPIRS,Multipointa}, and also to simplify
adaptive protocols of quantum metrology and quantum channel
discrimination~\cite{Metro,Nota2}.
By using teleportation stretching and extending the notion of
relative entropy of entanglement
(REE)~\cite{RMPrelent,VedFORMm,Pleniom} from states to channels,
Ref.~\cite{Stretching} derived a simple single-letter bound for
the two-way quantum and private capacities of an arbitrary quantum
channel. This bound is shown to be achievable in many important
cases, so that Ref.~\cite{Stretching} established these capacities
for dephasing channels, erasure channels (see also
Refs.~\cite{ErasureChannel,GEWa}), quantum-limited amplifiers, and
bosonic lossy channels. The two-way capacity of the lossy channel,
also known as Pirandola-Laurenza-Ottaviani-Banchi (PLOB) bound,
completes an investigation started back in
2009~\cite{RevCohINFO,ReverseCAP}, and finally sets the ultimate
achievable limit for optical quantum communications in the absence
of quantum repeaters. This benchmark for quantum repeaters has
been already exploited in
literature~\cite{bench1,bench2,bench3,bench4,bench5}. Building on
most of the methods discovered by Ref.~\cite{Stretching} (i.e.,
channel's REE and teleportation stretching), the follow-up
work~\cite{WildeFollowup} later discussed the strong converse
property of the various bounds and two-way capacities established
in Ref.~\cite{Stretching}. See Ref.~\cite{Nota3} for
clarifications on the literature.
In this context, the present work brings several new insights. It
considers a minimal perturbation of the standard teleportation
protocol, where the noiseless classical communication channel
between the parties (Alice and Bob) is replaced by a noisy
classical channel, where the Bell outcomes $k$ are stochastically
mapped into a variable $l$ on the same alphabet, according to some
conditional probability distribution $p_{l|k}$. We show that this
already allows us to enlarge the class of simulable channels well
beyond that of Pauli channels. This is non-trivial because it is
achieved without changing the dimensions of Alice's and Bob's
local Hilbert spaces $\mathcal{H}_A$ and $\mathcal{H}_B$
associated with the resource state $\sigma=\sigma_{AB}$. In fact,
changing such dimensions is another way to generate non-Pauli
channels, an example being the erasure channel which can be
generated using a $2\times 3$ dimensional resource state (i.e., a
qubit entangled with a qutrit).
Adopting the vectorial Bloch sphere representation for
qubits~\cite{NielsenChuang}, we provide simple conditions to be
satisfied in order to simulate non-Pauli channels. A profitable
way to generate such kinds of channels is to start from the Choi
matrix of an amplitude damping channel as resource state for the
noisy teleportation protocol. In this way, we can generate
non-Pauli channels which are significantly far from the Pauli
class, as quantified by the trace norm and the diamond norm.
In particular, we identify a class of simulable channels that we
call ``Pauli-damping channels'' because they can be decomposed
into a Pauli and an amplitude damping part. For channels in this
class we compute lower and upper bounds for the two-way quantum
and private capacities, by adopting the methodology developed by
Ref.~\cite{Stretching}.
The paper is structured as follows. We start with discussing
preliminary notions in Sec.~\ref{SEC_preli}, including the basics
of quantum teleportation, channel simulation and teleportation
stretching and its application to derive upper bounds for the
two-way capacities. Then, in Sec.~\ref{Expanding}, we show how to
simulate non-Pauli channels via our noisy teleportation protocol.
This is further developed in Sec.~\ref{dampSEC}, where we consider
the channels simulated starting from the Choi matrix of the
amplitude damping channel and we also define the Pauli-damping
channels. The properties of these channels are studied in
Sec.~\ref{SECprop}. Finally, Sec.~\ref{SECconclu} is for
conclusions.
\section{Preliminaries}\label{SEC_preli}
\subsection{Quantum teleportation}\label{QuanTel}
Teleportation~\cite{tele,TeleCon,teleREVIEW,teleMORE1,teleMORE2,teleMORE3}
is one of the strangest and most intriguing results to come out of
quantum information. We shall outline the standard approach here,
so that the generalizations in the following sections are more
apparent. The basic version of the protocol is as follows. Alice
($A$) and Bob ($B$) share a maximally entangled state, e.g., a
Bell state of the form
\begin{equation}\label{maxENT}
\ket{\Phi}=\frac{1}{\sqrt{d}}\left(\sum_{i=0}^{d-1}\ket{i}_A\ket{i}_B\right)
\end{equation}
for DV systems, and the asymptotic EPR state~\cite{SamRMPm}
\begin{equation}
\lim_{r\rightarrow\infty}\sqrt{1-\tanh^2(r)}\sum_{n=0}^\infty[-\tanh(r)]^n\ket{n}_A\ket{n}_B
\end{equation}
for CV systems (where $\ket{n}$ is the number state), which
produces correlations $\hat{q}_A=\hat{q}_B$ for the
position-quadrature, and $\hat{p}_A=-\hat{p}_B$ for the
momentum-quadrature~\cite{TeleCon,teleMORE1}. For the qubit case
$d=2$ which we shall be focusing on later, the state of
Eq.~(\ref{maxENT}) is
$\frac{1}{\sqrt{2}}\left(\ket{00}+\ket{11}\right)$.
Alice also has an arbitrary state $\rho_C$ to be teleported to
Bob. To begin the process, Alice performs a Bell measurement on
her two systems, $AC$. In DVs this is done by using the $d$
dimensional Bell basis, consisting of the $d^2$ maximally
entangled states $\ket{\Phi_{\alpha,\beta}}$, with
$\alpha,\beta\in\left\{0,\ldots,d-1\right\}$. In operator notation
we describe the measurement by $\left\{M_{\alpha,\beta}\right\}$,
with
$M_{\alpha,\beta}=\ket{\Phi_{\alpha,\beta}}\bra{\Phi_{\alpha,\beta}}$
and
\begin{equation}
\ket{\Phi_{\alpha,\beta}}
\equiv(\mathbb{I}_d\otimes\sigma_x^\alpha\sigma_z^\beta)\ket{\Phi},
\end{equation}
where
\begin{align}
\sigma_x\ket{k}=\ket{k + 1\text{ mod }d},&&\sigma_z\ket{k}=\omega^k\ket{k},\;\;\;\omega=e^{\frac{\mathrm{i} 2\pi}{d}}.
\end{align}
The set $\{\sigma_x^\alpha\sigma_z^\beta\}$ is known as the $d$-dimensional Weyl-Heisenberg group.
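As a quick numerical sanity check (our own Python/NumPy sketch, not part of the cited protocols), the shift and clock operators can be built explicitly, and both the Weyl commutation relation $ZX=\omega XZ$ and the trace-orthogonality of the $d^2$ group elements verified:

```python
import numpy as np

def shift_clock(d):
    """Generalized Pauli (Weyl-Heisenberg) generators in dimension d."""
    omega = np.exp(2j * np.pi / d)
    X = np.roll(np.eye(d), 1, axis=0)      # X|k> = |k+1 mod d>
    Z = np.diag(omega ** np.arange(d))     # Z|k> = omega^k |k>
    return X, Z

d = 3
X, Z = shift_clock(d)
omega = np.exp(2j * np.pi / d)
assert np.allclose(Z @ X, omega * X @ Z)   # Weyl commutation relation

# The d^2 operators X^a Z^b are mutually trace-orthogonal
ops = [np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
       for a in range(d) for b in range(d)]
G = np.array([[np.trace(P.conj().T @ Q) for Q in ops] for P in ops])
assert np.allclose(np.abs(G), d * np.eye(d * d))
```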
In the qubit case, we use the usual set of Pauli operators~\cite{NielsenChuang}
\begin{align}
\mathrm{I}&=\left(\begin{array}{cc}
\phantom{-}1 & \phantom{-}0\\
\phantom{-}0 & \phantom{-}1
\end{array}\right)& \sigma_x&=\left(\begin{array}{cc}
\phantom{-}0 & \phantom{-}1\\
\phantom{-}1 & \phantom{-}0
\end{array}\right)\\
-\mathrm{i}\sigma_y\equiv\sigma_x\sigma_z&=\left(\begin{array}{cc}
\phantom{-}0 & -1\\
\phantom{-}1 & \phantom{-}0
\end{array}\right)& \sigma_z&=\left(\begin{array}{cc}
\phantom{-}1 & \phantom{-}0\\
\phantom{-}0 & -1
\end{array}\right).
\end{align}
In CVs, the measurement operator can be thought of as
\begin{equation}
M_{k}=(\mathbb{I}\otimes\hat{D}(k))\ket{\Phi}\bra{\Phi}(\mathbb{I}\otimes\hat{D}(k))^\dagger
\end{equation}
where $\hat{D}(k)=\exp(k\hat{a}^\dagger-k^*\hat{a})$ is the displacement operator with complex amplitude $k$, and $\hat{a}$ is the annihilation operator.
The effect of the Bell measurement, in which all outcomes occur
with equal probability, is to transform Bob's half of the
maximally entangled state into the teleported state up to a random
unitary. In DVs, the state of Bob (system $B$) takes the form
$\rho_{B|(\alpha,\beta)}=\sigma_x^\alpha\sigma_z^\beta\rho_C(\sigma_x^\alpha\sigma_z^\beta)^\dagger$,
for a given Bell outcome $(\alpha,\beta)$, while for CVs this
state is $\rho_{B|k}=\hat{D}(k)\rho_C\hat{D}(k)^\dagger$, given
the Bell outcome $k$. Since Alice communicates the Bell outcome to
Bob, he can undo the random unitary and recover Alice's input
state $\rho_C$. Note that Alice's CC to Bob is necessary to
reproduce the state; otherwise, the two remote users could
communicate faster than the speed of light. In the following, we
focus on DV systems and we discuss how the teleportation protocol
can be progressively modified to simulate more and more quantum
channels.
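The qubit protocol can be verified numerically. In the NumPy sketch below (ours, for illustration only), a random pure state is teleported over $\ket{\Phi^+}$; each Bell outcome occurs with probability $1/4$, and Bob's conditional state is $U\rho_C U^\dagger$ with $U=\sigma_x^\alpha\sigma_z^\beta$:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)    # |00> + |11>

rng = np.random.default_rng(0)
v = rng.normal(size=2) + 1j * rng.normal(size=2)
psi_C = v / np.linalg.norm(v)                  # random input state on C

state = np.kron(psi_C, phi)                    # ordering: C (x) A (x) B

for a in range(2):
    for b in range(2):
        U = np.linalg.matrix_power(sx, a) @ np.linalg.matrix_power(sz, b)
        bell = np.kron(I2, U) @ phi            # Bell vector |Phi_{a,b}> on CA
        M = np.kron(np.outer(bell, bell.conj()), I2)   # projector on CA
        out = M @ state
        p = np.vdot(out, out).real
        assert np.isclose(p, 0.25)             # all outcomes equiprobable
        rho = np.outer(out, out.conj()) / p
        # trace out C and A, leaving Bob's conditional state
        rho_B = (rho.reshape(2, 2, 2, 2, 2, 2)
                    .trace(axis1=0, axis2=3)   # trace over C
                    .trace(axis1=0, axis2=2))  # trace over A
        expected = U @ np.outer(psi_C, psi_C.conj()) @ U.conj().T
        assert np.allclose(rho_B, expected)    # Bob holds U rho_C U^dag
```

Note that the qubit Pauli corrections are real matrices, so the usual transpose subtlety of the teleportation identity does not arise and Bob's conditional state is exactly $U\rho_C U^\dagger$.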
\subsection{Changing the resource for teleportation}\label{ResChange}
From the protocol described in the previous section, a natural
question to ask is ``what is the consequence of changing the
resource state shared by Alice and Bob?" This was first considered
in~\cite{B2}, who looked into the scenario where Alice and Bob
instead share a generic mixed two-qubit state, which we can
express as \cite{Horodecki}
\begin{align}
\tau=\frac{1}{4}\bigg(&\mathrm{I}\otimes\mathrm{I}+\sum_{i=1}^3 a_i\sigma_i\otimes\mathrm{I}\nonumber\\
+&\sum_{j=1}^3\mathrm{I}\otimes b_j\sigma_j +
\sum_{i,j=1}^{3}t_{ij}\sigma_i\otimes
\sigma_j\bigg),\label{generaltwo}
\end{align}
in terms of Pauli operators
$\{\sigma_i\}_{i=0}^3=\{I,\sigma_x,\sigma_y,\sigma_z\}$, the
vectors $\mathbf{a}=\{a_{i}\}$, $\mathbf{b}=\{b_{i}\}$, and the
matrix $[T]_{ij}=t_{ij}$.
\begin{theorem}[\cite{BoBo}]\label{Bobothe}
The effect of teleportation over an arbitrary two-qubit state
$\tau$ as in Eq.~(\ref{generaltwo}) is the Pauli channel
\begin{equation}\label{Pauliform}
\mathcal{E}_P:\rho\rightarrow\sum_{i=0}^3 p_i\sigma_i\rho\sigma_i,
\end{equation}
where $p_i=\mathrm{Tr}\left(E_i\tau\right)$ and $E_i$ are the
projectors on the Bell states, i.e.,
\begin{align}
E_0&=\ket{\Phi^+}\bra{\Phi^+},~~\ket{\Phi^+}:=\frac{1}{\sqrt{2}}\left(\ket{00}+\ket{11}\right),\\
E_1&=\ket{\Psi^+}\bra{\Psi^+},~~\ket{\Psi^+}:=\frac{1}{\sqrt{2}}\left(\ket{01}+\ket{10}\right),\\
E_2&=\ket{\Psi^-}\bra{\Psi^-},~~\ket{\Psi^-}:=\frac{1}{\sqrt{2}}\left(\ket{01}-\ket{10}\right),\\
E_3&=\ket{\Phi^-}\bra{\Phi^-},~~\ket{\Phi^-}:=\frac{1}{\sqrt{2}}\left(\ket{00}-\ket{11}\right).
\end{align}
\end{theorem}
Using this theorem, we can view the standard teleportation
protocol of Sec.~\ref{QuanTel} in a new context, as simulating a
trivial Pauli channel (the identity channel from Alice to Bob). We
can re-state the previous theorem by using the \textit{Bloch
sphere representation} of qubit states.
\begin{definition}[\cite{NielsenChuang}]
In the computational basis, an arbitrary qubit state $\rho$ can be
represented by the density matrix\begin{equation} \rho
=\frac{1}{2}\left(\begin{array}{cc}
1+ z & x - \mathrm{i} y \\
x + \mathrm{i} y & 1 - z
\end{array}
\right).
\end{equation}
This is one-to-one with a Bloch vector, $\mathbf{r}=(x,y,z)$, with
Euclidean norm $||\mathbf{r}||\leq 1$ (equality for pure states).
We can thus represent the actions of qubit channels by their
effect on the Bloch vector of the sent state.
\end{definition}
Given a generic resource state of the form (\ref{generaltwo}), we
easily find that the Pauli channel simulated by teleportation over
this state corresponds to the transformation
\begin{equation}\label{TRASF}
\mathcal{E}:\left(x,y,z\right)\rightarrow\left(t_{11}x,-t_{22}y,t_{33}z\right),
\end{equation}
with the coefficients given in terms of the Pauli probabilities by
\begin{align}
t_{11}&=\phantom{-}p_0+p_1-p_2-p_3&&=\phantom{-}1-2p_2-2p_3,\\
t_{22}&=-p_0+p_1-p_2+p_3&&= -1+2p_1+2p_3,\\
t_{33}&=\phantom{-}p_0-p_1-p_2+p_3&&=\phantom{-}1-2p_1-2p_2.
\end{align}
It is also easy to verify that
\begin{align}
t_{11}+t_{22}+t_{33}&\leq 1,\\
t_{11}-t_{22}-t_{33}&\leq 1, \\
-t_{11}+t_{22}-t_{33}&\leq 1,\\
-t_{11}-t_{22}+t_{33}&\leq 1,
\end{align}
which means that the vector $(t_{11},t_{22},t_{33})$,
characterizing the Pauli channel, must belong to the tetrahedron
$\mathcal{T}$ defined by the convex combination of the four points
\begin{align}\label{tetra}
\mathbf{e}_0&=(\phantom{-}1,-1,\phantom{-}1),&\mathbf{e}_1&=(\phantom{-}1,\phantom{-}1,-1),\\
\mathbf{e}_2&=(-1,-1,-1),&\mathbf{e}_3&=(-1,\phantom{-}1,\phantom{-}1).\nonumber
\end{align}
According to Eq.~(\ref{TRASF}), there is a simple way to simulate
a Pauli channel with arbitrary probability distribution
$\left\{p_i\right\}$. One may just take the resource state
\begin{equation}\label{PauliChoi}
\rho=\frac{1}{4}\left(\mathrm{I}\otimes\mathrm{I}+\sum_{i=1}^3t_{ii}\sigma_i\otimes\sigma_i\right),
\end{equation}
with $t_{ii}$ being connected to $\left\{p_i\right\}$ by the
formulas above. Note that this resource state is \textit{Bell
diagonal}, i.e., a mixture of the four Bell states.
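These relations are straightforward to verify numerically; the NumPy sketch below (our illustration, with an arbitrarily chosen admissible point $(t_{11},t_{22},t_{33})$) builds the Bell-diagonal resource state, extracts $p_i=\mathrm{Tr}(E_i\tau)$, and checks the Bloch-vector action of Eq.~(\ref{TRASF}):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, sx, sy, sz]

t11, t22, t33 = 0.5, -0.3, 0.2       # a point inside the tetrahedron T
tau = 0.25 * (np.kron(I2, I2) + t11 * np.kron(sx, sx)
              + t22 * np.kron(sy, sy) + t33 * np.kron(sz, sz))

def bell_proj(u):                    # projector on (I (x) u)|Phi^+>
    v = np.kron(I2, u) @ np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    return np.outer(v, v.conj())

E = [bell_proj(I2), bell_proj(sx), bell_proj(sx @ sz), bell_proj(sz)]
p = [np.trace(Ei @ tau).real for Ei in E]
assert np.isclose(sum(p), 1) and min(p) >= 0   # a valid Pauli channel

# Bloch action of the simulated Pauli channel on an arbitrary input
x, y, z = 0.3, 0.4, 0.5
rho = 0.5 * (I2 + x * sx + y * sy + z * sz)
out = sum(pi * s @ rho @ s.conj().T for pi, s in zip(p, paulis))
bloch_out = [np.trace(out @ s).real for s in (sx, sy, sz)]
assert np.allclose(bloch_out, [t11 * x, -t22 * y, t33 * z])
```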
\subsection{Generalized channel simulation}\label{Simulation}
In general, the simulation of a quantum channel does not
necessarily need to be implemented through quantum teleportation
(even in some generalized form~\cite{WernerTELE}). In fact, we may
consider a completely arbitrary LOCC applied to some resource
state~\cite{NoteSIM}.
\begin{definition}[\cite{Stretching}]
A quantum channel $\mathcal{E}$ is called $\tau $-stretchable if
there exists an LOCC $\mathcal{S}$ and a resource state $\tau$
simulating the channel. More precisely, for any input state
$\rho$, we may write
\begin{equation}\label{ggg}
\mathcal{E}(\rho)=\mathcal{S}(\rho\otimes\tau)~.
\end{equation}
\end{definition}
Note that this is an extremely general idea. The dimension of the
Hilbert spaces involved can be finite or infinite, and equal or
different. Because of the generality of the LOCC, it is clear that
any channel is (trivially) simulable by a maximally entangled
state. In fact, it is sufficient to include the channel
$\mathcal{E}$ into Alice's LOs and then perform the standard
teleportation of the output. The point, rather, is to find the
best resource state $\tau$ among all the possible LOCC
simulations. Typically, the best case is when $\tau$ is
the Choi matrix of the channel
\begin{equation}
\chi_\mathcal{E}:=\mathbb{I}\otimes\mathcal{E}\left(\ket{\Phi}\bra{\Phi}\right).
\end{equation}
\begin{definition}[\cite{Stretching}]
A quantum channel $\mathcal{E}$ is called \textquotedblleft
Choi-stretchable" if it can be LOCC-simulated by using its Choi
matrix, i.e., we can write Eq.~(\ref{ggg}) with
$\tau=\chi_{\mathcal{E}}$.
\end{definition}
There is a simple condition that allows us to identify
Choi-stretchable channels: teleportation covariance.
\begin{definition}[\cite{Stretching}]
A quantum channel $\mathcal{E}$ is called
``teleportation covariant'' if, for any teleportation unitary $U$,
there exists some unitary $V$ such that
\begin{equation}
\mathcal{E}\left( U\rho U^{\dagger}\right) =V\mathcal{E}\left(
\rho\right)
V^{\dagger}.\label{covdef}%
\end{equation}
\end{definition}
Because of teleportation covariance we can simulate a quantum
channel by means of teleportation over its Choi matrix. In fact, let $\rho_{C}$ be an input state (owned by Alice) of channel $\mathcal{E}$ and consider the teleportation of $\rho_{C}$ using the maximally
entangled state $\ket{\Phi}_{AB}$. When Alice performs her Bell
measurement, if the outcome corresponding to the Bell state $(\mathbb{I}\otimes U)\ket{\Phi}$ is obtained, then the
state $U\rho_{C}U^\dagger$ is teleported to $B$. Applying a
teleportation covariant $\mathcal{E}$ to this state, we obtain
\begin{equation}
\mathcal{E}\left(U\rho_{C}U^\dagger\right)=V\mathcal{E}\left(\rho_{C}\right)V
^\dagger.
\end{equation}
Therefore, if the corrective unitary $V^{-1}$ is applied by Bob \textit{after} the channel for all the possible $U$, then he will obtain the final state $\mathcal{E}\left(\rho_{C}\right)$ irrespective of the Bell detection outcome.
This corresponds to simulation of $\mathcal{E}$ by teleportation. However,
because the Bell measurement on systems $AC$ is locally separated
from the application of $\mathcal{E}$ on system $B$, we can
commute these operations and the result is the simulation of
$\mathcal{E}$ by teleporting over its Choi matrix
$\chi_\mathcal{E}$. This leads to the following.
\begin{lemma}[\cite{Stretching}]
If a quantum channel $\mathcal{E}$ is teleportation covariant,
then it is Choi-stretchable via teleportation. This channel may
also be called a ``teleportation simulable'' channel.
\end{lemma}
All Pauli channels (regardless of dimension) are teleportation
covariant, and are therefore Choi-stretchable.
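As a concrete check (our own numerical sketch), the qubit dephasing channel is teleportation covariant, and the Pauli probabilities $p_i=\mathrm{Tr}(E_i\chi_{\mathcal{E}})$ recovered from its Choi matrix reproduce the channel, illustrating Choi-stretchability:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, sx, sx @ sz, sz]            # sx@sz is sigma_y up to a phase

q = 0.3
def deph(rho):                            # dephasing: (1-q) rho + q Z rho Z
    return (1 - q) * rho + q * sz @ rho @ sz

rng = np.random.default_rng(2)
m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = m @ m.conj().T
rho /= np.trace(rho)                      # random density matrix

# (i) teleportation covariance: E(U rho U^dag) = U E(rho) U^dag for Pauli U
for U in paulis:
    assert np.allclose(deph(U @ rho @ U.conj().T),
                       U @ deph(rho) @ U.conj().T)

# (ii) Choi matrix chi = (I (x) E)(|Phi><Phi|), here via the Kraus form
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
Phi = np.outer(phi, phi)
K = np.kron(I2, sz)
chi = (1 - q) * Phi + q * K @ Phi @ K.conj().T

# teleporting over chi yields the Pauli channel with p_i = Tr(E_i chi)
probs = [np.trace(np.outer(np.kron(I2, u) @ phi,
                           (np.kron(I2, u) @ phi).conj()) @ chi).real
         for u in paulis]
sim = sum(pi * s @ rho @ s.conj().T for pi, s in zip(probs, paulis))
assert np.allclose(sim, deph(rho))        # simulation matches the channel
```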
Note that in the previous lemma, we are stating a sufficient
condition only. We would like to modify the lemma into a
sufficient and necessary condition. Let us define the
Weyl-Heisenberg (WH) teleportation protocol. This is a
teleportation protocol over an arbitrary resource state where the
output corrective unitary is a unitary representation of the
Weyl-Heisenberg group associated with the Bell detection. This
protocol defines the WH-teleportation channels as follows.
\begin{definition}
We say that a quantum channel is a \textquotedblleft
WH-teleportation channel\textquotedblright\ if it can be written
in the form
\begin{equation}
\Gamma_{\tau}(\rho):=\sum_{g\in G}V_{B}^{\dagger}(g)\mathrm{Tr}_{CA}%
[E_{CA}(g)(\rho_{C}\otimes\tau_{AB})]V_{B}(g),\label{WHform}%
\end{equation}
where $\tau_{AB}$ is a preshared resource state between Alice and
Bob, $E_{CA}(g)=U_{A}^{\dagger}(g)\ket{\Phi}\bra{\Phi}U_{A}(g)$ is
a Bell detection operator with $U(g)\in\left\{
\sigma_{x}^{\alpha}\sigma_{z}^{\beta}\right\} $ belonging to the
$d$-dimensional Weyl-Heisenberg group, and $V(g)$ is a
(generally different) representation of the same group.
\end{definition}
Note that conventional teleportation may be written in the form of
Eq.~(\ref{WHform}) by setting $V(g)=U(g)$ and
$\tau_{AB}=\ket{\Phi}\bra{\Phi}$, the maximally entangled state.
In Appendix~\ref{App:proof1}, we then show the following
characterization.
\begin{theorem}\label{WHtheorem}
For DV systems, a channel is teleportation covariant iff it is a
WH-teleportation channel, i.e., Choi-stretchable via a
WH-teleportation protocol.
\end{theorem}
\subsection{Teleportation stretching and weak converse bounds for private
communication\label{Adaptive}}
The most general protocol for key generation (or private
communication) between two remote parties, connected by a quantum
channel $\mathcal{E}$, consists in the use of adaptive LOCCs
interleaved between each transmission through the channel. This
type of private protocol is very difficult to study due to the
presence of feedback that may be exploited to improve the inputs
to the channel in a real-time fashion. As Ref.~\cite{Stretching} has recently shown,
an adaptive protocol for private communication can be transformed
into a much simpler (non-adaptive) protocol by means of
teleportation stretching. This means that each use of channel
$\mathcal{E}$ is replaced by its simulation via an LOCC and a
corresponding resource state $\tau$. All the LOCCs, both the
original from the protocol and the new ones introduced by the
simulation, can be collapsed into a single (trace-preserving) LOCC
$\Lambda$. As a result, after $n$ transmissions, the output of the
protocol can be decomposed into the form
\begin{equation}
\rho_{n}=\Lambda(\tau^{\otimes n}).\label{LOCCsim}%
\end{equation}
To understand the huge simplification that this method brings, we
need to combine it with the use of the relative entropy of
entanglement (REE)~\cite{RMPrelent,VedFORMm,Pleniom}. Recall that
the relative entropy between two states $\rho$ and $\sigma$ is
defined as~\cite{RMPrelent}
\begin{equation}
S(\rho||\sigma):=\mathrm{Tr}(\rho\log\rho-\rho\log\sigma),
\end{equation}
and the REE of a state is given by the following minimization over
all separable states (SEP)~\cite{VedFORMm,Pleniom}
\begin{equation}
E_{R}\left( \rho\right)
:=\min_{\sigma\in\text{SEP}}S(\rho||\sigma
).\label{properties}%
\end{equation}
This is monotonic under trace-preserving LOCCs $\Lambda$, i.e.,
$E_{R}[\Lambda(\rho)]\leq E_{R}\left( \rho\right) $, and
sub-additive over tensor products, i.e., $E_{R}\left(
\rho\otimes\sigma\right) \leq E_{R}\left( \rho\right)
+E_{R}\left( \sigma\right) $.
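For concreteness, the relative entropy is easy to evaluate directly for small systems. The following sketch (with our own helper names, assuming full-rank states) checks the textbook value $S(\rho||\mathbb{I}/2)=1-H_{2}(3/4)$ for a diagonal qubit state, where $H_{2}$ is the binary entropy:

```python
import numpy as np

def logm_h(M):
    """Matrix log base 2 of a positive-definite Hermitian matrix."""
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.log2(vals)) @ vecs.conj().T

def relative_entropy(rho, sigma):
    """S(rho||sigma) = Tr[rho (log2 rho - log2 sigma)], in bits."""
    return float(np.real(np.trace(rho @ (logm_h(rho) - logm_h(sigma)))))

# Example: rho = diag(3/4, 1/4) against the maximally mixed qubit state.
rho = np.diag([0.75, 0.25])
sigma = np.eye(2) / 2
print(relative_entropy(rho, sigma))  # ≈ 0.1887 = 1 - H2(3/4)
```

The monotonicity and sub-additivity properties quoted above can be probed in the same way on small examples.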
Now consider the secret-key capacity $K$ of a quantum channel
(maximum number of secret bits per channel use which are generated
by adaptive protocols). This is equal to the two-way private
capacity $P_{2}$\ of the channel (maximum number of private bits
per channel use which are deterministically transmitted from Alice
to Bob by means of adaptive protocols) and greater than the
two-way quantum capacity $Q_{2}$ (maximum number of qubits per
channel use which are reliably sent from Alice to Bob by means of
adaptive protocols). We have the following.
\begin{theorem}
[\cite{Stretching}]The secret-key capacity of a channel must
satisfy the weak converse upper bound
\begin{equation}
K(\mathcal{E})\leq E_{R}^{\star}\left( \mathcal{E}\right) :=\sup
_{\mathcal{L}}\lim_{n\rightarrow\infty}\frac{E_{R}\left(
\rho_{n}\right)
}{n},\label{UBree}%
\end{equation}
where $\mathcal{L}$ is an adaptive protocol for key generation and
$\rho_{n}$ is its $n$-use output.
\end{theorem}
Now we can see that combining the REE bound in Eq.~(\ref{UBree})
with the stretching in Eq.~(\ref{LOCCsim}), and exploiting the
monotonicity and sub-additivity of the REE, we derive the
following.
\begin{theorem}[\cite{Stretching}]\label{singleletterupper} If a channel
$\mathcal{E}$ is $\tau$-stretchable, then its secret-key capacity
is upper bounded by the REE of its resource state $\tau$, i.e.,
\begin{equation}
K(\mathcal{E})\leq E_{R}\left( \tau\right) .
\end{equation}
In particular, for a Choi-stretchable channel, we write
\begin{equation}
K(\mathcal{E})\leq E_{R}\left( \chi_{\mathcal{E}}\right) ,
\end{equation}
where $\chi_{\mathcal{E}}$ is its Choi matrix.
\end{theorem}
\section{Simulating non-Pauli channels via ``noisy'' teleportation}\label{Expanding}
Whilst we have an extremely simple way of simulating Pauli
channels, i.e., just standard teleportation on a two-qubit mixed
state~\cite{B2,BoBo}, we would like to have a similarly easy way
for simulating non-Pauli channels. Here we show that this is
possible by means of a simple modification of the teleportation
protocol where we also include a classical channel in the CCs from
Alice to Bob. This is non-trivial because, until now, the only way
to generate non-Pauli channels via DV teleportation has been to change
the dimension of the Hilbert space between the systems $A$ and $B$
of the shared resource of Alice and Bob (e.g., using a
qubit-qutrit resource state, one may simulate an erasure channel).
In the following discussion, we shall limit ourselves to the case
where $\mathcal{E}$ maps qubits to qubits.
Consider a classical channel $\Pi$ from Alice's outcome $k$ for
the Bell measurement to Bob's variable $l$ for the corrective
Pauli unitary $U_{l}$. This is characterized by conditional
probability distribution~\cite{Shannon}
$\left\{ p_{l|k}\right\} $ such that%
\begin{equation}
p_{l|k}\geq0,~~\sum_{l=0}^{3}p_{l|k}=1,\;\;\;\forall
k\in\{0,1,2,3\}.
\end{equation}
What this means in practical terms is that when Alice obtains the
Bell outcome $k$, rather than Bob performing the corrective
unitary $U_{k}$ with certainty, instead he performs one of the
four unitaries $U_{l}$ with probability $p_{l|k}$. Using such a
noisy teleportation protocol, we prove the following.
\begin{theorem}
\label{FFormula} Consider a teleportation protocol based on a Bell
detection and Pauli correction unitaries but where the resource
state is a generic two-qubit state $\tau$ and the CCs from Alice
to Bob are subject to a classical channel $\Pi$ (``noisy
teleportation''). In this way, we simulate a quantum channel
$\mathcal{E}_{f}$ whose action on the Bloch sphere is described by
\begin{align}
\mathcal{E}_{f}:\left( x,y,z\right) \rightarrow( & f_{10}+f_{11}%
x+f_{12}y+f_{13}z,\nonumber\\
& f_{20}+f_{21}x+f_{22}y+f_{23}z,\nonumber\\
& f_{30}+f_{31}x+f_{32}y+f_{33}z)\label{fullform}%
\end{align}
where $f_{ij}$ is given by the formula
$f_{ij}=t_{ji}^{\prime}S_{ij}$, where
\begin{equation}
S_{ij}:=\frac{1}{4}\sum_{k,l=0}^{3}(-1)^{\delta_{k,0}+\delta_{j,2}+\delta
_{j,0}+\delta_{k,j}+\delta_{i,l}+\delta_{0,l}}p_{l|k},\label{Sdefinition}%
\end{equation}
and $T^{\prime}$ is defined as the \textquotedblleft augmented"
$T$ matrix,
\begin{equation}
t_{ji}^{\prime}=%
\begin{cases}
b_{i} & j=0\\
t_{ji} & j\in\left\{ 1,2,3\right\}
\end{cases}
i\in\left\{ 1,2,3\right\} ,
\end{equation}
taking $t_{ji}$ from the $T$ matrix of Eq.~(\ref{generaltwo}).
\end{theorem}
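As a consistency check of the formula for $S_{ij}$ (a numerical sketch; the function name and matrix convention are ours), noiseless classical communication, $p_{l|k}=\delta_{l,k}$, reproduces standard teleportation: the diagonal sums give $(S_{11},S_{22},S_{33})=(1,-1,1)$ and all constant terms $S_{i0}$ vanish, so that over a maximally entangled resource the simulated channel is the identity:

```python
import numpy as np

def S(i, j, p):
    """S_ij from the theorem: (1/4) * sum_{k,l} (-1)^(d(k,0) + d(j,2)
    + d(j,0) + d(k,j) + d(i,l) + d(0,l)) * p[l, k],
    where p[l, k] = p(l|k) is a column-stochastic 4x4 matrix."""
    d = lambda a, b: int(a == b)
    return sum((-1) ** (d(k, 0) + d(j, 2) + d(j, 0) + d(k, j)
                        + d(i, l) + d(0, l)) * p[l, k]
               for k in range(4) for l in range(4)) / 4

# Noiseless CC: Bob applies U_k whenever Alice obtains outcome k.
p_perfect = np.eye(4)
print([float(S(i, i, p_perfect)) for i in (1, 2, 3)])  # [1.0, -1.0, 1.0]
print([float(S(i, 0, p_perfect)) for i in (1, 2, 3)])  # [0.0, 0.0, 0.0]
```

Combined with the values of $T^{\prime}$ for the maximally entangled state in the sign convention of Sec.~\ref{ResChange}, the coefficients $f_{ij}$ then reduce to the identity map, as expected.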
By comparing Eq.~(\ref{TRASF}) with Eq.~(\ref{fullform}), we can see immediately that the inclusion of a classical channel
opens up a much wider variety of simulated quantum channels. In
fact, we may now have dependence on $x$, $y$ and $z$ in any part
of the transformed Bloch vector, and it is also possible to add
constant terms. This clearly allows us to go well beyond Pauli
channels (a specific class of non-Pauli channels will be discussed
in the next section). Here we may also state the following result
which is a no-go for the simulation of non-Pauli channels when the
noisy teleportation protocol is restricted to Bell diagonal
resource states.
\begin{theorem}\label{nogo}
Using a Bell diagonal resource state, i.e., of the form in
Eq.~(\ref{PauliChoi}), it is only possible to simulate Pauli
channels regardless of the classical channel in place between the
two parties.
\end{theorem}
\noindent\textbf{Proof.} From the structure of $S_{ij}$, we can
see it can only take values in $[-1,1]$. Making use
of~(\ref{fullform}), we see that the action of any channel
generated using resource state~(\ref{PauliChoi}) will be
\begin{equation}
\mathcal{E}:(x,y,z)\rightarrow(t_{11}S_{11}x,t_{22}S_{22}y,t_{33}S_{33}z)~.
\end{equation}
Looking at the structure of the sums $S_{ii}$ for $i\in\{1,2,3\}$
(given in Appendix~\ref{App:sumforms}), we find that any valid
$p_{l|k}$ term within the sum induces one of four transformations
\begin{align}
\mathcal{E}_{p_{l|k}}:(x,y,z) & \rightarrow(\phantom{-}t_{11}x,-t_{22}%
y,\phantom{-}t_{33}z)\label{idch}\\
& \rightarrow(\phantom{-}t_{11}x,\phantom{-}t_{22}y,-t_{33}z)\label{sxch}\\
& \rightarrow(-t_{11}x,-t_{22}y,-t_{33}z)\label{sych}\\
& \rightarrow(-t_{11}x,\phantom{-}t_{22}y,\phantom{-}t_{33}z)~,\label{szch}%
\end{align}
which are the four Pauli transformations induced by simulation
over the respective states defined by
\begin{align*}
& (\phantom{-}t_{11},\phantom{-}t_{22},\phantom{-}t_{33}), & &
(\phantom{-}t_{11},-t_{22},-t_{33}),\\
& (-t_{11},\phantom{-}t_{22},-t_{33}), & & (-t_{11},-t_{22}%
,\phantom{-}t_{33}),
\end{align*}
with perfect classical communication. We have assumed that $(t_{11}%
,t_{22},t_{33})$ is given by a convex weighting of our four Bell
states with some probabilities $p_{i}$, and it is easy to see
that we may obtain the other three states from the Bell states by
permuting these weights. Since the set $\{\frac{p_{l|k}}{4}\}$
sums to 1, this may also be thought of as a convex
weighting, and thus we may conclude that $(t_{11}S_{11},t_{22}S_{22}%
,t_{33}S_{33})\in\mathcal{T}$, and so induces a Pauli
channel.$~\square$
It is important to understand the difference between
Theorem~\ref{nogo} and Theorem~\ref{Bobothe}.
Theorem~\ref{Bobothe} tells us that an \textit{arbitrary} two
qubit resource state with \textit{perfect} CC from Alice to Bob
may only simulate Pauli channels, whereas Theorem~\ref{nogo}
states that a \textit{Bell diagonal} resource with an
\textit{arbitrary} classical channel for the CC from Alice to Bob
may only simulate Pauli channels. As a result, we have the
following corollary which will drive us in the choice of the
resource state in the next section.
\begin{corollary}\label{coro}
In order to simulate a non-Pauli channel via noisy teleportation,
the resource state $\tau$ of Eq.~(\ref{generaltwo}) must
have $\mathbf{b}\neq0$ or $T$ non-diagonal. This means $\tau$
cannot be the Choi matrix of a Pauli channel.
\end{corollary}
\section{Amplitude damping as a resource for simulating non-Pauli
channels}\label{dampSEC} Following Corollary~\ref{coro}, we will
explore resource states which are non-diagonal in the Bell basis.
A natural choice is to consider the Choi matrix of the amplitude
damping channel. This is the most studied (dimension preserving)
non-Pauli channel. It has the action
\begin{align}
\mathcal{E}_{\gamma}:\ket{0} & \rightarrow\ket{0},\\
\ket{1} & \rightarrow\sqrt{\gamma}\ket{0}+\sqrt{1-\gamma}\ket{1},
\end{align}
where $\gamma\in\lbrack0,1]$ is the probability of damping.
Alternatively, on the Bloch sphere, we have
\begin{equation}
\mathcal{E}_{\gamma}:(x,y,z)\rightarrow\left(
\sqrt{1-\gamma}x,\sqrt {1-\gamma}y,\gamma+(1-\gamma)z\right) .
\end{equation}
The Choi matrix of this channel is
\begin{equation}
\chi_{\gamma}=\left(
\begin{array}
[c]{cccc}%
\frac{1}{2} & 0 & 0 & \frac{\sqrt{1-\gamma}}{2}\\
0 & 0 & 0 & 0\\
0 & 0 & \frac{\gamma}{2} & 0\\
\frac{\sqrt{1-\gamma}}{2} & 0 & 0 & \frac{1-\gamma}{2}%
\end{array}
\right),
\end{equation}
which is a resource state of the form (\ref{generaltwo}), where
the non-zero entries are only
\begin{equation}
b_{3}=\gamma,~t_{11}=\sqrt{1-\gamma},~t_{22}=-\sqrt{1-\gamma},~t_{33}=1-\gamma.
\end{equation}
It is useful to define the \emph{F matrix} of a channel, which
compactly describes the action of the channel on the augmented
Bloch vector $(1,x,y,z)$.
\begin{definition}
\label{FMatrix} A quantum channel
$\mathcal{E}:(x,y,z)\rightarrow(x^{\prime
},y^{\prime},z^{\prime})$ can be described by its F matrix
$F_{\mathcal{E}}$, where
\begin{equation}
\left(
\begin{array}
[c]{c}%
1\\
x^{\prime}\\
y^{\prime}\\
z^{\prime}%
\end{array}
\right) =F_{\mathcal{E}}\left(
\begin{array}
[c]{c}%
1\\
x\\
y\\
z
\end{array}
\right) =\left(
\begin{array}
[c]{cccc}%
1 & 0 & 0 & 0\\
f_{10} & f_{11} & f_{12} & f_{13}\\
f_{20} & f_{21} & f_{22} & f_{23}\\
f_{30} & f_{31} & f_{32} & f_{33}%
\end{array}
\right) \left(
\begin{array}
[c]{c}%
1\\
x\\
y\\
z
\end{array}
\right) .
\end{equation}
\end{definition}
The F matrix of an amplitude damping channel $\mathcal{E}_{\gamma
}$ is
\begin{equation}
F_{\gamma }=\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & \sqrt{1-\gamma } & 0 & 0 \\
0 & 0 & \sqrt{1-\gamma } & 0 \\
\gamma & 0 & 0 & 1-\gamma
\end{array}%
\right) .
\end{equation}%
For a Pauli channel $\mathcal{E}:\left( x,y,z\right) \rightarrow
\left(
t_{11}x,-t_{22}y,t_{33}z\right) $, we may set $q_{i}:=t_{ii}$\ and write%
\begin{equation}
F_{P}=\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & q_{1} & 0 & 0 \\
0 & 0 & -q_{2} & 0 \\
0 & 0 & 0 & q_{3}%
\end{array}%
\right) ,
\end{equation}%
with $\mathbf{q}=(q_{1},q_{2},q_{3})$ belonging to the tetrahedron
$\mathcal{T}$ (see Sec.~\ref{ResChange}).
We are now ready to present the first of our two main results,
where we provide the general form of the channels that are
simulable by noisy teleportation over the Choi matrix
$\chi_{\gamma}$ of the amplitude damping channel.
\begin{theorem}\label{mainresult}
All channels that are simulable by noisy teleportation over the
amplitude damping Choi matrix $\chi_\gamma$ can be uniquely
decomposed in the following way
\begin{equation}\label{decomposition}
\mathcal{E}_\text{sim}=\sigma_x^{u}\circ\mathcal{E}_\eta\circ\mathcal{E}_{P}
\end{equation}
where $u=0$ or $1$, $\sigma_x$ is the Pauli unitary
$\sigma_x(\rho)=\sigma_x\rho\sigma_x^\dagger$,
$\mathcal{E}_\eta$ is an amplitude damping channel with parameter $\eta$, and $\mathcal{E}_{P}$ is a Pauli channel with suitable parameters $\mathbf{q}=(q_{1}%
,q_{2},q_{3})$ belonging to the tetrahedron $\mathcal{T}$.
\end{theorem}
\noindent\textbf{Proof.} Making use of the formula in
Eq.~(\ref{fullform}), we know that any channel
$\mathcal{E}_{\text{sim}}$ simulated with $\chi_\gamma$ will have
$F$ matrix
\begin{equation}
F_{\text{sim}}=\left(
\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & \sqrt{1-\gamma}S_{11} & 0 & 0\\
0 & 0 & -\sqrt{1-\gamma}S_{22} & 0 \\
\gamma S_{30} & 0 & 0 & (1- \gamma) S_{33}
\end{array}
\right).
\end{equation}
If two channels have identical $F$ matrices, then they are
equivalent. This is because they both enact the same action on an
arbitrary qubit state. Thus we aim to prove the theorem by
equating the above $F$ matrix of a simulated channel with that of our
decomposition defined in Eq.~(\ref{decomposition}). From the $F$ matrices
of $\mathcal{E}_{\eta }$ and $\mathcal{E}_{P}$, we derive that $\mathcal{E}%
_{+}:=\mathcal{E}_{\eta }\circ \mathcal{E}_{P}$ and
$\mathcal{E}_{-}:=\sigma _{x}\circ \mathcal{E}_{\eta }\circ
\mathcal{E}_{P}$ have $F$ matrices
\begin{align}
F_{+}& =\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & \sqrt{1-\eta }q_{1} & 0 & 0 \\
0 & 0 & -\sqrt{1-\eta }q_{2} & 0 \\
\eta & 0 & 0 & (1-\eta )q_{3}%
\end{array}%
\right) , \\
F_{-}& =\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & \sqrt{1-\eta }q_{1} & 0 & 0 \\
0 & 0 & \sqrt{1-\eta }q_{2} & 0 \\
-\eta & 0 & 0 & -(1-\eta )q_{3}%
\end{array}%
\right) ,
\end{align}%
where $(q_{1},q_{2},q_{3})\in \mathcal{T}$. Since $\eta\geq 0$,
while $\gamma S_{30}\in[-\gamma,\gamma]$, we propose that
\begin{equation}\label{equality}
F_{\text{sim}}
=\begin{cases}
F_+ &\text{if }S_{30}\geq 0 ,\\
F_- &\text{if }S_{30}\leq 0.
\end{cases}
\end{equation}
We will begin by considering the first case, where $S_{30}\geq 0$. Equating the $f_{30}$ components, it is clear
that we must set $\eta=\gamma S_{30}$. As $S_{30}\leq 1$, this is a valid value of $\eta$. Rearranging Eq.~(\ref{equality}) then gives
\begin{align}
\left(q_1,q_2,q_3\right)=\bigg(\sqrt{\frac{1-\gamma}{1-\gamma S_{30}}}&S_{11},\nonumber\\
\sqrt{\frac{1-\gamma}{1-\gamma S_{30}}}&S_{22},\nonumber\\
\frac{1-\gamma}{1-\gamma S_{30}}&S_{33}\bigg).\label{qvec}
\end{align}
The vector $\left(S_{11},S_{22},S_{33}\right)$ belongs to the
tetrahedron $\mathcal{T}$, which we prove by showing (in Appendix
\ref{App:proofs})
\begin{align*}
S_{11}+S_{22}+S_{33}&\leq 1\\
S_{11}-S_{22}-S_{33}&\leq 1\\
-S_{11}+S_{22}-S_{33}&\leq 1\\
-S_{11}-S_{22}+S_{33}&\leq 1.
\end{align*}
Moreover, the scaling of this vector seen in Eq.~(\ref{qvec}) simply maps it to another point still within the tetrahedron
(also proven in Appendix \ref{App:proofs}). Thus we may conclude, in the case where $S_{30}\geq 0$, that our decomposition is valid and unique,
since the equality defines a valid value for $\eta$ and a valid point in $\mathcal{T}$ defining $\mathcal{E}_{P}$, given by Eq.~(\ref{qvec}).\\
The proof for the case $S_{30}\leq 0$ is very similar to the first, so we include it in Appendix \ref{App:proofs}.
$~\square$
\subsection{Pauli-damping channels}\label{PossSim}
We have shown that all the channels simulable by noisy
teleportation over the resource state $\chi_{\gamma}$ are
necessarily of the form~(\ref{decomposition}). Here we discuss the
converse, i.e., we establish which channels of this form are
simulable, in terms of the region of parameters accessible in
the parametrization of Eq.~(\ref{decomposition}). This is the
content of the following theorem.
\begin{theorem}
\label{secondmain} Using noisy teleportation over the amplitude
damping Choi matrix $\chi_{\gamma}$, it is only possible to
simulate channels of the form in
Eq.~(\ref{decomposition}) where $\eta\in\lbrack0,\gamma]$ and $\mathbf{q}=(q_{1}%
,q_{2},q_{3})$ belongs to the convex space bounded by the points
\begin{align}
&\left( \phantom{-}a\phantom{b} , \pm ab
,\mp a^2b \right),\nonumber\\
&\left( \pm ab ,
\phantom{-}a\phantom{b}
,\mp a^2b \right),\nonumber\\
&\left( -a\phantom{b} , \pm ab
,\pm a^2b \right),\nonumber\\
&\left( \pm ab , -a\phantom{b}
,\pm a^2b \right),
\end{align}
with
\begin{equation*}
a=\sqrt{\frac{1-\gamma}{1-\eta}},\;b=1-\frac{\eta}{\gamma}.
\end{equation*}
These correspond to the extremal points of the tetrahedron
$\mathcal{T}$ truncated by the two planes $z=\pm b $, and shrunk by the transformation
\begin{equation}
(x,y,z)\rightarrow\left(
a x,a y,a^2 z\right) .
\end{equation}
\end{theorem}
This theorem motivates the following definition.
\begin{definition}
We define the Pauli-damping channels as the class of qubit
channels that are simulable by teleporting over the amplitude damping
Choi matrix $\chi_{\gamma}$ and using a classical channel $\Pi$
for the CCs. They admit the unique decomposition of
Theorem~\ref{mainresult} and must satisfy the criteria in
Theorem~\ref{secondmain}.
\end{definition}
\noindent\textbf{Proof.} First we consider $\mathcal{E}_{\eta}$.
Since $\eta=|\gamma S_{30}|$, and $S_{30}$ can take any value in
$[-1,1]$, we can conclude that $\eta\in\lbrack0,\gamma]$. A
slightly trickier question now arises: Given our resource has
parameter $\gamma$, and our amplitude damping channel within the
decomposition has parameter $\eta$, what Pauli channels are
attainable? We know that in our two cases (positivity/negativity
of $S_{30}$), the Pauli channel elements $\mathbf{q}$ are
\begin{align}
\text{case 1} & \text{:}\nonumber\\
\mathbf{q} & =\left(
\begin{array}
[c]{c}%
\frac{\sqrt{1-\gamma}}{\sqrt{1-\gamma S_{30}}}S_{11},\\
\phantom{-}\frac{\sqrt{1-\gamma}}{\sqrt{1-\gamma S_{30}}}S_{22},\\
\phantom{-}\frac{1-\gamma}{1-\gamma S_{30}}S_{33}%
\end{array}
\right) =\left(
\begin{array}
[c]{c}%
\frac{\sqrt{1-\gamma}}{\sqrt{1-|\gamma S_{30}|}}S_{11},\\
\phantom{-}\frac{\sqrt{1-\gamma}}{\sqrt{1-|\gamma S_{30}|}}S_{22},\\
\phantom{-}\frac{1-\gamma}{1-|\gamma S_{30}|}S_{33}%
\end{array}
\right), \\
\text{case 2} & \text{:}\nonumber\\
\mathbf{q} & =\left(
\begin{array}
[c]{c}%
\frac{\sqrt{1-\gamma}}{\sqrt{1+\gamma S_{30}}}S_{11},\\
-\frac{\sqrt{1-\gamma}}{\sqrt{1+\gamma S_{30}}}S_{22},\\
-\frac{1-\gamma}{1+\gamma S_{30}}S_{33}%
\end{array}
\right) =\left(
\begin{array}
[c]{c}%
\frac{\sqrt{1-\gamma}}{\sqrt{1-|\gamma S_{30}|}}S_{11},\\
-\frac{\sqrt{1-\gamma}}{\sqrt{1-|\gamma S_{30}|}}S_{22},\\
-\frac{1-\gamma}{1-|\gamma S_{30}|}S_{33}%
\end{array}
\right) .
\end{align}
Since we may prove that both
\begin{equation}
(S_{11},S_{22},S_{33}),(S_{11},-S_{22},-S_{33})\in\mathcal{T},
\end{equation}
(see Lemma~\ref{Sin} in Appendix \ref{App:proofs}), then we can
state with certainty that the class of possible Pauli channels
will be bound by the \textquotedblleft shrunk" tetrahedron
\begin{align}
& \left(
\phantom{-}\frac{\sqrt{1-\gamma}}{\sqrt{1-\eta}},\phantom{-}\frac
{\sqrt{1-\gamma}}{\sqrt{1-\eta}},-\frac{1-\gamma}{1-\eta}\right) \nonumber\\
& \left( \phantom{-}\frac{\sqrt{1-\gamma}}{\sqrt{1-\eta}},-\frac
{\sqrt{1-\gamma}}{\sqrt{1-\eta}},\phantom{-}\frac{1-\gamma}{1-\eta}\right)
\nonumber\\
& \left( -\frac{\sqrt{1-\gamma}}{\sqrt{1-\eta}},\phantom{-}\frac
{\sqrt{1-\gamma}}{\sqrt{1-\eta}},\phantom{-}\frac{1-\gamma}{1-\eta}\right)
\nonumber\\
& \left( -\frac{\sqrt{1-\gamma}}{\sqrt{1-\eta}},-\frac{\sqrt{1-\gamma}%
}{\sqrt{1-\eta}},-\frac{1-\gamma}{1-\eta}\right) .\label{positiveshrink}%
\end{align}
As well as this, we fixed the value of $S_{30}$ when choosing our
$\eta$ value. Since $S_{11},S_{22},S_{33}$ depend on the
same variables as $S_{30}$, this places some restrictions on the
values they may take. To obtain these, we first use vertex
enumeration~\cite{PANDA} to find all extremal probability
distributions of the space defined by
\begin{align}
\mathcal{P}_{\eta}^{\pm}=\bigg\{p_{l|k}\mid p_{l|k}\geq0, &
\;\;\sum
_{l=0}^{3}p_{l|k}=1,\\
& \;\;S_{30}=\pm\frac{\eta}{\gamma},\;\;k,l\in\left\{
0,1,2,3\right\} \bigg\},\nonumber
\end{align}
which we will denote $\left\{ Q_{m}^{\pm}\right\} $. Now we may
consider $(S_{11},S_{22},S_{33}),\,(S_{11},-S_{22},-S_{33})$ as
two linear functions, $\mathcal{S}_{+}$ and $\mathcal{S}_{-}$,
which map
\[
\mathcal{S}_{\pm}:\mathcal{P}_{\eta}^{\pm}\rightarrow\mathcal{T}.
\]
Thus, for a given probability distribution $\Pi$, we may calculate
this
transformation as%
\begin{equation}
\mathcal{S}_{\pm}(\Pi)=\mathcal{S}_{\pm}\left(
\sum_{m}\lambda_{m}Q_{m}^{\pm }\right)
=\sum_{m}\lambda_{m}\mathcal{S}_{\pm}\left( Q_{m}^{\pm}\right) ,
\end{equation}
with $\sum_{m}\lambda_{m}=1,\;\lambda_m\geq 0$. Therefore, we need only consider the
values of $\mathcal{S}_{\pm}$ at these extremal probability distributions
in order to obtain all allowable $S_{ii}$ values. These are easily
calculated, and we obtain that the
eight extremal points are%
\begin{align}
\bigg( & \phantom{-}1 & , & \pm\left(
1-\frac{\eta}{\gamma}\right) & &
,\mp\left( 1-\frac{\eta}{\gamma}\right) \bigg),\nonumber\\
\bigg( & \pm\left( 1-\frac{\eta}{\gamma}\right) & , &
\phantom{-}1 & &
,\mp\left( 1-\frac{\eta}{\gamma}\right) \bigg),\nonumber\\
\bigg( & -1 & , & \pm\left( 1-\frac{\eta}{\gamma}\right) & &
,\pm\left( 1-\frac{\eta}{\gamma}\right) \bigg),\nonumber\\
\bigg( & \pm\left( 1-\frac{\eta}{\gamma}\right) & , & -1 & &
,\pm\left( 1-\frac{\eta}{\gamma}\right) \bigg),
\end{align}
regardless of the case ($S_{30}$ positive or negative). These
points correspond to $\mathcal{T}$, truncated by two planes at
$S_{33}=\pm\left( 1-\frac{\eta}{\gamma}\right) $.~$\square$
An immediate consequence of this theorem is that we cannot
simulate the amplitude damping channel $\mathcal{E}_{\gamma}$
using its Choi matrix
$\chi_{\gamma}$.\ In fact, this would require $\eta=\gamma$ and $\mathcal{E}%
_{P}=\mathbb{I}$, corresponding to $\mathbf{q}=(1,-1,1)$. However,
when $\eta=\gamma$ our possible Pauli channels are limited from
both above and below by the same plane,
$q_{3}=\pm\frac{1-\gamma}{1-\gamma}\left(
1-\frac{\gamma}{\gamma}\right) =0$, and thus this is impossible.
Therefore the amplitude damping channel is not Choi-stretchable
even with the noisy teleportation protocol. The only exceptions to
this are the special cases where $\gamma=0$, which is simply the
identity channel, and when $\gamma=1$, which sends all qubit
states deterministically to $\ket{0}$. To fit our
decomposition, the latter can be written as the completely
depolarizing channel $\mathcal{E}_{D}$ with $\mathbf{q}=\mathbf{0}$,
which sends all states to the maximally mixed state
$\frac{\mathbb{I}}{2}$, followed by the fully damping channel
$\mathcal{E}_{\eta=1}$ (see Fig.~\ref{Truly}).
\begin{figure}[h!]
\begin{centering}
\includegraphics[width=8cm]{06shrunk05.pdf}
\vspace{-1cm} \caption{Possible Pauli channels when $S_{30}=0.5$
and $\gamma=0.6$, including the shrinking effect of
Eq.~(\ref{positiveshrink}). The hollow tetrahedron is
$\mathcal{T}$ characterizing all Pauli channels, whilst the shaded
region is the allowable values of $\mathbf{q}$ bounded by $
(\;\;\;\frac{2}{\sqrt{7}} ,\pm\frac{1}{\sqrt{7}} ,
\mp\frac{2}{7}), (\pm\frac{1}{\sqrt{7}} ,\;\;\;\frac{2}{\sqrt{7}}
, \mp\frac{2}{7}),
(-\frac{2}{\sqrt{7}} ,\pm\frac{1}{\sqrt{7}} , \pm\frac{2}{7}),\\
(\pm\frac{1}{\sqrt{7}} ,-\frac{2}{\sqrt{7}} , \pm\frac{2}{7})
$ for these particular values.
}\label{Truly}
\end{centering}
\end{figure}
\section{Properties and capacities of Pauli-damping
channels}\label{SECprop}
Now that we have shown what channels can be simulated, we study
some of the properties of these channels. First of all, we
quantify how distinguishable they are from their closest Pauli
equivalent. It turns out that the decomposition in
Theorem~\ref{mainresult} provides a simple answer to this problem:
the distance is simply $\eta$.
\subsection{Distance in trace norm}
The trace norm distance between two quantum channels
$\mathcal{E}_{1}$ and $\mathcal{E}_{2}$ can be defined as
\begin{equation}
||\mathcal{E}_{1}-\mathcal{E}_{2}||_{1}:=\sup_{\rho}||\mathcal{E}_{1}\left(
\rho\right) -\mathcal{E}_{2}\left( \rho\right) ||_{1}%
\end{equation}
where $||\sigma||_{1}=\mathrm{Tr}\sqrt{\sigma\sigma^{\dagger}}$.
For Hermitian matrices, this is equivalent to the sum of the
absolute values of the eigenvalues of $\sigma$. We then state the
following.
\begin{proposition}\label{postrace}
Given a decomposition
$\mathcal{E}_{\text{sim}}=\sigma_{x}^u\circ\mathcal{E}_{\eta}\circ\mathcal{E}_{P}$
characterized by $\eta$ and $(q_{1}%
,q_{2},q_{3})$, the trace norm between $\mathcal{E}%
_{\text{sim}}$ and the closest Pauli channel
$\mathcal{E}_{\text{cl}}$ is
simply $\eta$. Moreover, the closest Pauli channel has $(f_{11},f_{22}
,f_{33})=\begin{cases}
\left(
\sqrt{1-\eta}q_{1},-\sqrt{1-\eta}q_{2},\phantom{-}(1-\eta)q_{3}\right) & \mbox{ for } u=0,\\
\left(
\sqrt{1-\eta}q_{1},\phantom{-}\sqrt{1-\eta }q_{2},-(1-\eta)q_{3}\right)& \mbox{ for } u=1.
\end{cases}$
\end{proposition}
\noindent\textbf{Proof.} For qubits, the trace norm between
two states is simply the Euclidean distance between their Bloch
vectors. Therefore we have a very natural way to find the trace
norm between two single-qubit channels. When $u=0$, the Bloch vector of a state under $\mathcal{E}_{\text{sim}}$ is $\mathbf{r}_{\text{sim}}=(\sqrt{1-\eta}q_{1}x,-\sqrt{1-\eta
}q_{2}y,(1-\eta)q_{3}z+\eta)$, whilst under an arbitrary Pauli channel it is $\mathbf{r}_P=(c_1x,-c_2y,c_3z)$. Thus the problem we need to solve is
\begin{align}
& \min_{\left( c_{1},c_{2},c_{3}\right) \in\mathcal{T}}\;\;\max
_{x,y,z:x^{2}+y^{2}+z^{2}\leq1}\nonumber\\
& \left( (\sqrt{1-\eta}q_{1}-c_{1})x\right) ^{2}+\left(
-(\sqrt{1-\eta
}q_{2}-c_{2})y\right) ^{2}\nonumber\\
+ & \Big(((1-\eta)q_{3}-c_{3})z+\eta\Big)^{2},
\end{align}
which is the square of the trace norm. Let us first look at the
final term $\left( \left( (1-\eta)q_{3}-c_{3}\right)
z+\eta\right) ^{2}$. Since the maximum occurs at some fixed
$|z|$ value, this term will take the value
\begin{align}
\max\bigg\{ & \big(\left( (1-\eta)q_{3}-c_{3}\right) |z|+\eta
\big)^{2},\nonumber\\
& \big(-\left( (1-\eta)q_{3}-c_{3}\right) |z|+\eta\big)^{2}%
\bigg\}\nonumber\\
= & \big(|(1-\eta)q_{3}-c_{3}||z|+\eta\big)^{2}.
\end{align}
Clearly this is minimized when $c_{3}=\left( 1-\eta\right)
q_{3}$, and has value $\eta^{2}$.
The remaining two parts of the equation are simpler. Clearly we
want to set
\[
c_{1}=\sqrt{1-\eta}q_{1},~c_{2}=\sqrt{1-\eta}q_{2},
\]
to make these parts disappear, regardless of the values of $x$ and
$y$. Thus we obtain our closest Pauli channel to be
\begin{equation}
(x,y,z)\rightarrow(\sqrt{1-\eta}q_{1}x,-\sqrt{1-\eta}q_{2}y,(1-\eta)q_{3}z).
\end{equation}
We can be sure that this channel is Pauli as a consequence of Lemma~\ref{shrinking}
in Appendix~\ref{App:proofs}.\\
For the case when $u=1$, the proof is very similar, and given in Appendix \ref{App:proofs}.$~\square$
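Proposition~\ref{postrace} can be sanity-checked numerically (a sketch with illustrative parameters and $u=0$; all names are ours): the Bloch images under $\mathcal{E}_{\text{sim}}$ and the claimed closest Pauli channel differ by the constant shift $(0,0,\eta)$, so their Euclidean, i.e., trace-norm, distance is $\eta$ for every input state. Optimality over all Pauli channels is the content of the proof above.

```python
import numpy as np

rng = np.random.default_rng(1)
eta, q = 0.25, (0.7, -0.4, 0.2)  # illustrative decomposition parameters
s = np.sqrt(1 - eta)

def E_sim(v):
    """Bloch action of E_eta o E_P (the u = 0 case)."""
    x, y, z = v
    return np.array([s * q[0] * x, -s * q[1] * y, (1 - eta) * q[2] * z + eta])

def E_cl(v):
    """Bloch action of the claimed closest Pauli channel."""
    x, y, z = v
    return np.array([s * q[0] * x, -s * q[1] * y, (1 - eta) * q[2] * z])

for _ in range(100):
    v = rng.normal(size=3)
    v /= max(1.0, np.linalg.norm(v))  # a valid Bloch vector
    # Trace norm between qubit states = Euclidean Bloch distance.
    assert np.isclose(np.linalg.norm(E_sim(v) - E_cl(v)), eta)
```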
\subsection{Distance in diamond norm}
It is unwise to use the trace norm as a measure of the
distinguishability of channels, since it has been shown that
channels can in general be better distinguished by sending part of
an entangled state through them~\cite{Ent1,Ent2,Ent3,Ent4,Ent5,Ent6}. With this in
mind, we look to an alternative distance. The diamond norm
distance $||\mathcal{E}_{1}-\mathcal{E}_{2}||_{\diamond}$ is
defined as:
\begin{equation}
||\mathcal{E}_{1}-\mathcal{E}_{2}||_{\diamond}:=\sup_{\rho\in\kappa
\otimes\mathcal{H}}||\mathbb{I_{\kappa}}\otimes\mathcal{E}_{1}\left(
\rho\right) -\mathbb{I_{\kappa}}\otimes\mathcal{E}_{2}\left(
\rho\right) ||_{1},
\end{equation}
where $\kappa$ is an ancillary Hilbert space adjoined to the space
$\mathcal{H}$ acted upon by
$\mathcal{E}$. In general, one has $||\mathcal{E}%
_{1}-\mathcal{E}_{2}||_{\diamond}\geq||\mathcal{E}_{1}-\mathcal{E}_{2}||_{1}$.
Also we know that the diamond norm can be achieved with an
ancillary Hilbert space $\kappa$ with
$\text{dim}\;\kappa=\text{dim}\;\mathcal{H}~$\cite{Fano}.
Therefore, we need only consider a one-qubit ancillary space in our
case and state the following.
\begin{proposition}\label{posdiamond}
For a channel
$\mathcal{E}_\text{sim}=\sigma_{x}^u\circ\mathcal{E}_{\eta}\circ\mathcal{E}_{P}$,
the closest Pauli channel under the diamond norm is the same as
under the trace norm, given in Proposition \ref{postrace}, and the diamond norm distance is equal to
$\eta$.
\end{proposition}
\noindent\textbf{Sketch Proof.} (Full proof in Appendix
\ref{App:proofs}.) First, we know that
\begin{equation}
\min_{\mathcal{E}_2\in\text{Pauli}}||\mathcal{E}_{\text{sim}}-\mathcal{E}_2||_\diamond\leq||\mathcal{E}_{\text{sim}}-\mathcal{E}_{\text{cl}}||_\diamond.
\end{equation}
In order to find the diamond norm between $\mathcal{E}_\text{sim}$ and $\mathcal{E}_{\text{cl}}$, we look at
\begin{equation}
||\mathbb{I}_2\otimes\mathcal{E}_\text{sim}\left(\rho\right)-
\mathbb{I}_2\otimes\mathcal{E}_\text{cl}\left(\rho\right)||_1
\end{equation}
for an arbitrary two-qubit state $\rho$. We find the sum of the
absolute values of the eigenvalues of
$\mathbb{I}_2\otimes\mathcal{E}_\text{sim}\left(\rho\right)-
\mathbb{I}_2\otimes\mathcal{E}_\text{cl}\left(\rho\right)$ to be
independent of $\rho$ and equal to $\eta$. Thus we can conclude
that
\begin{equation}
||\mathcal{E}_{\text{sim}}-\mathcal{E}_{\text{cl}}||_\diamond=\eta=||\mathcal{E}_{\text{sim}}-\mathcal{E}_{\text{cl}}||_1.
\end{equation}
Using this, suppose there exists a Pauli channel $\mathcal{E}'$ at a strictly smaller diamond-norm distance than our closest channel. Then we have the chain of inequalities
\begin{equation}\label{chain}
||\mathcal{E}_\text{sim}-\mathcal{E}'||_1\leq||\mathcal{E}_\text{sim}-\mathcal{E}'||_\diamond<||\mathcal{E}_\text{sim}-\mathcal{E}_\text{cl}||_\diamond=||\mathcal{E}_\text{sim}-\mathcal{E}_\text{cl}||_1
\end{equation}
leading to a contradiction, since we know the closest channel under trace norm
to be $\mathcal{E}_\text{cl}$. Thus we are forced to conclude that the diamond norm
is smallest between $\mathcal{E}_\text{sim}$ and $\mathcal{E}_\text{cl}$, with distance $\eta$. $\square$\\
The consequence of this result is that we have a natural measure of the generalization allowed by the introduction of classical channels. Given a resource state $\chi_\gamma$, we are able to simulate channels at diamond-norm distance up to $\gamma$ from the set of Pauli channels, since $\gamma$ is the largest allowable value of $\eta$.
\subsection{Upper bound for the two-way private capacity}
Now that we have characterized the class of Pauli-damping
channels, we are interested in their quantum and private
communication capacities. As explained in the introduction, the
two-way assisted capacities are in general hard to calculate. Yet
because we have shown that these channels can be simulated with an
LOCC protocol (noisy teleportation) over a pre-shared resource
(the amplitude damping Choi matrix $\chi_{\gamma}$), we may use
teleportation stretching and Theorem~\ref{singleletterupper} to
upper-bound their two-way quantum ($Q_{2}$) and private capacities
($P_{2}=K$). In fact, for an arbitrary Pauli-damping channel
$\mathcal{E}$ with resource state $\chi_{\gamma}$, we may compute
the upper bound (weak converse)
\begin{gather}
Q_{2}(\mathcal{E})\leq P_{2}(\mathcal{E})=K(\mathcal{E})\leq
E_{R}\left(
\chi_{\gamma}\right) \nonumber\\
\leq\frac{1}{2}-\frac{1-\gamma}{2}\log_{2}\left(
\frac{1-\gamma}{2}\right)
+\frac{2-\gamma}{2}\log_{2}\left( \frac{2-\gamma}{2}\right) .\label{YBsss}%
\end{gather}
Within the Pauli-damping class, let us analyze the
\textquotedblleft squared\textquotedblright\ channel
$\mathcal{E}_{\text{sq}}$, whose F matrix is given by
\begin{equation}
F_{\text{sq}}=\left(
\begin{array}
[c]{cccc}%
1 & 0 & 0 & 0\\
0 & \sqrt{1-\gamma}\left( 1-\frac{\gamma}{2}\right) & 0 & 0\\
0 & 0 & \sqrt{1-\gamma}\left( 1-\frac{\gamma}{2}\right) & 0\\
\gamma^{2} & 0 & 0 & (1-\gamma)^{2}%
\end{array}
\right) .
\end{equation}
The decomposition of this channel into the
form~(\ref{decomposition}) of Theorem~\ref{mainresult} is $u=0$,
$\eta=\gamma^{2}$, and
\begin{equation}
\mathbf{q}=\left( \frac{\left( 1-\frac{\gamma}{2}\right) }{\sqrt{1+\gamma}%
},-\frac{\left( 1-\frac{\gamma}{2}\right)
}{\sqrt{1+\gamma}},\frac{1-\gamma }{1+\gamma}\right),
\end{equation}
where $\gamma$ is the damping parameter of the resource state. Its
two-way quantum and private capacities are upper bounded by using
Eq.~(\ref{YBsss}) and lower bounded by optimizing the coherent
information of the channel. The results are shown in
Fig.~\ref{Lower}.
\begin{figure}[h!]
\begin{centering}
\includegraphics[width=8cm,height=5.3cm]{Lower_Bound_Border3.pdf}
\vspace{-.3cm} \caption{Upper and lower bounds for the two-way
private capacity $P_2$ and the two-way quantum capacity $Q_2$ of
the squared channel $\mathcal{E}_{\text{sq}}$, in terms of its
parameter $\eta$ which is the square of the amplitude damping
parameter $\gamma$ associated with its resource
state.}\label{Lower}
\end{centering}
\end{figure}
\section{Conclusions}\label{SECconclu}
In this paper we have studied a particular design for the LOCC
simulation of quantum channels. This design is based on a modified
teleportation protocol where not only the resource state is
generally mixed (instead of maximally entangled) but also the
classical communication channel between the parties is noisy,
i.e., affected by a classical channel. The latter feature allows
us to simulate a family of quantum channels much larger than the
Pauli class, for which we have provided a characterization in
Theorem \ref{FFormula}.
Starting from the Choi matrix of an amplitude damping channel as a
resource state for the noisy teleportation protocol, we can easily
simulate non-Pauli channels. In particular, we have introduced a
new class of simulable channels, that we have called Pauli-damping
channels. Their distance from the set of Pauli channels can be
quantified in terms of the diamond norm and turns out to be directly
related to the damping probability associated with the
generating Choi matrix. For these Pauli-damping channels we have
then used the method of teleportation stretching to derive upper
bounds for their two-way quantum and private capacities.
In conclusion, our results shed new light on the area of channel
simulation, with direct implications for quantum and private
communication with qubit systems. Further developments
may include the study of Pauli-damping channels in the context of
adaptive quantum metrology \cite{Metro}, or in the setting of
secure quantum networks \cite{networkPIRS}.
\textbf{Acknowledgments}. This work has been supported by the
EPSRC via the `UK Quantum Communications Hub' (EP/M013472/1). T.C. and S.P.
would like to thank R. Laurenza and C. Ottaviani for useful discussions. L.H. would like to
thank the ERASMUS program, which allowed him to visit the University of
York, where this work has been carried out. T.C. acknowledges
funding from a White Rose Scholarship. L.B. has received
funding for this research from the European Research Council
under the European Union Seventh Framework Programme
(FP7/2007-2013)/ERC Grant Agreement No. 308253 PACOMANEDIA.
\onecolumngrid
\section{Introduction}
The large number of exoplanets detected thus far provides
significant observational constraints for theoretical studies on
planet formation and evolution. The population-wide distributions of
these exoplanets are closely related to their formation history. Based
on the core accretion paradigm
\citep{Lin1996,Weidenschilling1996,Ida2004,Mordasini2012a}, we can
generate synthetic planet populations that may explain the
statistical characteristics of exoplanets, such as the planet’s
semi-major axis versus its mass ($a$-$M$) \citep{Ida2004,Mordasini2009}
and its mass-radius diagram \citep{Mordasini2012b}.
Recently, the $Kepler$ mission detected thousands of
planetary candidates, the radii of which were measured using transit
observations \citep{Borucki2011}. These data may provide an important clue to the
size distribution of close-in planets
\citep{Howard2012,Dong2013,Petigura2013a}, which is an essential test
of the current planet formation theories. Previous planetary population
synthesis models by
\citet{Mordasini2012a,Mordasini2012b} reproduced a
synthetic planet population with a planet size distribution similar to
that observed in the $Kepler$ data at radii $>$ 2 $R_{\oplus}$.
However, for planets with radii $<$ 2 $R_{\oplus}$, a strong
decrease in the synthetic planet population is inconsistent with
the plateau in the planet size distribution of the
$Kepler$ data, after correction for
observational bias \citep{Howard2012,Dong2013,Petigura2013a}. The
divergence may be due to simplifications in the planetary
population synthesis models \citep{Mordasini2012a,Mordasini2012b}.
As a simplification, atmospheric evaporation after the protoplanetary disks have dissipated
\citep{Lammer2003,Baraffe2004,Murray-Clay2009},
which is important for close-in planets,
was not considered in those studies.
Nevertheless, atmospheric evaporation may play a vital role in the
thermal evolution of low-mass planets;
this can alter the size distribution of close-in planets
with radii $<$ 2 $R_{\oplus}$ \citep{Owen2012,Lopez2012,Lammer2014}.
Thus, in order to use the radius distribution of close-in
exoplanets to constrain planet formation models, which is our final goal,
it is necessary to include the effect of atmospheric escape.
In the planetary evolution stage after disk
dissipation, the atmospheric structure and thermal evolution
of a close-in planet are strongly influenced by intense irradiation
from the host star
\citep{Guillot2002,Fortney2005,Fortney2008,Hansen2008,Guillot2010}.
Moreover,
the incident stellar X-ray and extreme-ultraviolet (EUV) flux
can drive hydrodynamic evaporation in the atmosphere of close-in planets and
yield a substantially higher mass-loss rate than
for planets in the Jeans escape regime
\citep{Lammer2003,Baraffe2004,Murray-Clay2009,Owen2012,Lopez2012,Lammer2014}.
For example, oxygen, carbon, and magnesium were detected in the upper
atmosphere of HD 209458b at a distance of several planetary radii,
which indicates that its atmosphere is undergoing hydrodynamic escape
\citep{Vidal-Madjar2004,Vidal-Madjar2013}.
The estimated atmosphere
mass-loss rate for HD 209458b is $\gtrsim$ $10^{10}$\,g\,s$^{-1}$
\citep{Vidal-Madjar2004,Vidal-Madjar2008}. \citet{Wu2013} calculated
the masses of 22 sub-Jovian $Kepler$ planet pairs with orbital
periods of $\sim$ 8 days using TTV data and found that the mass-radius
relationship of these planets corresponds to a constant escape
velocity of $\sim$20\,km\,s$^{-1}$, which is similar to the
sound speed of hydrogen plasma at $10^{4}$ K, indicating that
hydrodynamic evaporation is likely.
The atmospheric mass loss due to evaporation during planetary
evolution can be estimated by considering an energy-limited model.
This scenario assumes that a portion of the heating energy of stellar
irradiation contributes to $P$d$V$ work that expands
the upper atmosphere \citep{Watson1981}. The
mass-loss rate in the energy-limited model depends on the strength
of stellar XUV irradiation, including both X-ray and
EUV flux, and the heating efficiency, which
describes how much heating is converted to $P$d$V$
work.
Following the temporal evolution of the XUV emission from a
sun-like star and assuming 100\% heating efficiency, early
studies showed that even a Jovian planet can lose
its entire initial envelope during its lifetime at close-in orbits
\citep{Lammer2003,Baraffe2004}.
However, later hydrodynamic simulations of
EUV-driven atmospheric escape show that although the EUV-driven mass
flow can produce the observed Lyman-$\alpha$ absorption signatures,
this mechanism can only evaporate a small portion of the mass of a
Jovian planet \citep{Tian2005,Yelle2004,Murray-Clay2009,Owen2012}.
Furthermore, \citet{Tian2005} showed that, in the energy-limited model,
the assumption that the incoming irradiation is absorbed inside a
single layer is inaccurate because the absorption depth of the
incident radiation depends on the wavelength.
\citet{Murray-Clay2009} found that
atmospheric evaporation is in a radiation-recombination-limited
regime for high EUV flux. In this regime, a large portion of the heating energy is
lost to cooling radiation, which decreases the mass-loss
rate. \citet{Owen2012} showed that atmospheric evaporation
can be either X-ray- or EUV-driven,
depending on the X-ray and EUV flux.
Most planets are in an X-ray-driven evaporation regime at the beginning
of evolution and then transition to an EUV-driven regime when the X-ray
flux falls below a critical value. This transition indicates that using the total XUV fluxes in an evaporation
model is insufficient because, in various regimes, the escape flow is dominated by
the stellar irradiation at different wavelengths, either X-ray or EUV \citep{Owen2012}.
Unlike Jovian planets, Neptune-like planets and super-Earths
are most likely to lose their
entire envelopes at small distances \citep{Lammer2009,Lopez2012,Owen2012}.
Because the
mass-loss timescale for planets scales with the planetary mass
$\times$ the planetary mean density, the planetary thermal
evolution coupled with atmospheric escape can elucidate the
observed threshold for low-density planets in the $M_{\rm p}^{2}/R_{\rm p}^{3}$ versus
distance$^{2}$ parameter space, above which no
low-mass, low-density transiting exoplanets have been found \citep{Jackson2012,Lopez2012,Owen2013}.
Moreover, \citet{Owen2013}
found that evaporation can create a bimodal
planet size distribution around $2 R_{\oplus}$. In an extensive
investigation of evaporation that included thousands of
planets with different sizes and incident fluxes, \citet{Lopez2013}
recently showed that the bimodal distribution near $2 R_{\oplus}$ was
unclear. In addition, they observed a diagonal region in the
semi-major axis versus radius ($a$-$R$) space,
where planets are relatively rare.
Herein, we simulate the thermal evolution of synthetic planet
populations that were obtained from a planet formation code
\citep{Alibert2005,Mordasini2012a,Mordasini2012b}. The atmospheric
mass-loss due to hydrodynamic evaporation is now included in the
planetary evolution. Thereby, we aim to determine how
evaporation statistically affects the entire planet population
and to make a more consistent comparison with the $Kepler$ data.
Our results show that the $a$-$R$ space is strongly influenced
by evaporation, which can modify the size distribution of the
planets within 1 AU.
The paper is organized as follows. In $\S$ \ref{sect:evapmodel}, we
describe the evaporation models used herein. Our
improved atmospheric model and numerical experiments on planet
evolution are shown in $\S$ \ref{sect:planetevolution}. In $\S$
\ref{sect:resultsII}, we present the mass and radius distributions of
the synthetic planet populations and compare the planet size
distribution of the synthetic populations with $Kepler$ candidates.
Finally, we provide a detailed discussion ($\S$ \ref{sect:discussion})
and brief summary ($\S$ \ref{sect:summary}).
\section{Evaporation models}
\label{sect:evapmodel}
The dominant heating source of hydrodynamic outflow
is either EUV or X-ray radiation,
which divides the hydrodynamic evaporation
into two distinct sub-regimes \citep{Owen2012}.
In this work, we use a model that includes both EUV-driven
and X-ray-driven evaporation through simple semi-analytical equations.
\subsection{X-ray-driven Evaporation}
X-ray photons have a smaller interaction cross-section than EUV photons;
thus, they penetrate deeper into the planetary atmosphere.
In the early stage of planetary evolution, X-ray irradiation is
an important heating source for planetary atmospheres due to
strong X-ray emissions from young stars \citep{Ribas2005,Jackson2012}.
In this work, the mass-loss rate in the X-ray-driven regime is calculated using
an energy-limited model, assuming that part of the
heating energy is converted to $P$d$V$ work with
the efficiency factor $\epsilon$ \citep{Jackson2012}:
\begin{equation}
\dot{m}=\epsilon\frac{\displaystyle{16 \pi F R^3_{\rm p}}}{\displaystyle{3 G M_{\rm p} K(\xi)}}
\label{Mdotxray}
\end{equation}
where $M_{\rm p}$ is the planetary mass,
$R_{\rm p}$ is the planet's radius at
the optical depth $\tau = 2/3$ at thermal wavelengths
($\tau$ is calculated using the grain-free Rosseland mean opacity
$\kappa_{\rm th}$ from \citet{Bell1994} and \citet{Freedman2008}),
$F$ is the X-ray flux in the wavelength range from
1 to 20 ${\rm \AA}$ from \citet{Ribas2005},
$\xi=R_{\rm roche}/{R_{\rm p}}$, and
\begin{equation}
K(\xi)=1-\frac{3}{2\xi}+\frac{1}{2\xi^3}
\end{equation}
accounts for the enhanced mass-loss rate by a factor of
$1/K(\xi)$ because the Roche lobe of a close-in planet can be
close to the planet's surface \citep{Erkaev2007}.
\citet{Lammer2009} discuss the possible values of the efficiency factor
in the energy-limited model and consider a realistic range of 0.1-0.25.
Considering that it is the X-rays from 5 to 10 ${\rm \AA}$
that are responsible for heating \citep{Owen2012}, the X-ray flux from 1 to 20
${\rm \AA}$ used in our model might be too high.
Thus, we set the nominal efficiency factor in the X-ray-driven regime to 0.1.
We also set it to 0.2 for a comparison population synthesis
to generate larger mass-loss rates.
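For concreteness, the energy-limited rate of Equation \ref{Mdotxray} together with the Roche-lobe factor $K(\xi)$ can be sketched as follows (a minimal illustration in cgs units; the flux, radii, and masses in the example are assumed values, not outputs of our model):

```python
import math

G = 6.674e-8  # gravitational constant [cgs]

def roche_factor(xi: float) -> float:
    """K(xi) of Erkaev et al. (2007); xi = R_roche / R_p."""
    return 1.0 - 1.5 / xi + 0.5 / xi**3

def mdot_xray(F: float, R_p: float, M_p: float, R_roche: float,
              epsilon: float = 0.1) -> float:
    """Energy-limited X-ray-driven mass-loss rate [g/s], Eq. (1).
    F: X-ray flux at the planet [erg cm^-2 s^-1]; radii and masses in cgs."""
    xi = R_roche / R_p
    return epsilon * 16.0 * math.pi * F * R_p**3 / (3.0 * G * M_p * roche_factor(xi))

# Illustrative (assumed) values for a hot Neptune:
R_earth, M_earth = 6.371e8, 5.972e27  # cgs
mdot = mdot_xray(F=1e4, R_p=4 * R_earth, M_p=20 * M_earth, R_roche=12 * R_earth)
print(f"{mdot:.2e} g/s")
```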
\subsection{EUV-driven Evaporation}
EUV photons ($h\nu\ga 13.6$ eV) can ionize the hydrogen in
the upper atmosphere through the photoelectric effect.
Because the H recombination cooling peak lies slightly above $10^{4}$ K,
photoionization-recombination will produce an equilibrium
temperature of $\sim$ $10^{4}$ K in the ionized region \citep{Dalgarno1972}.
Based on the strength of the incoming EUV radiation, EUV-driven
evaporation can be further divided into two sub-regimes \citep{Murray-Clay2009}:
At high EUV fluxes with $F_{\rm EUV} > F_{\rm crit}$,
a large fraction of the heating is lost to
cooling radiation.
The mass-loss rate increases slowly with the incoming flux as
$\dot{M} \propto F_{\rm EUV}^{0.6}$ \citep{Murray-Clay2009}.
Therefore, the mass-loss rate cannot be estimated using a linear
energy-limited equation of incident flux.
Assuming that the escape flow is isothermal, the mass-loss rate is simply
the mass flux at the sonic point in the flow \citep{Murray-Clay2009}:
\begin{equation}
\dot{M}_{\rm rr-lim} \sim 4 \pi \rho_{\rm s} c_{\rm s} r_{\rm s}^2
\end{equation}
where $c_{\rm s} = [kT/(m_{\rm H}/2)]^{1/2}$ is the isothermal sound speed,
$T = 10^4$ K is the temperature at the sonic point,
$r_{\rm s} = GM_{\rm p}/(2c_{\rm s}^2)$ is the sonic point where
the mass flow escapes the planet at the sound speed, and $\rho_{\rm s}$ is
the flow density at the sonic point, which was
calculated as described in \citet{Murray-Clay2009}.
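This sonic-point estimate can be sketched numerically as follows (a minimal illustration; the sonic-point density $\rho_{\rm s}$ is treated here as an assumed input rather than computed as in \citet{Murray-Clay2009}):

```python
import math

k_B = 1.381e-16   # Boltzmann constant [erg/K]
m_H = 1.673e-24   # hydrogen mass [g]
G = 6.674e-8      # gravitational constant [cgs]

def rr_limited_mdot(M_p: float, rho_s: float, T: float = 1e4) -> float:
    """Radiation-recombination-limited mass-loss rate [g/s].
    Isothermal sound speed of ionized hydrogen (mean mass m_H/2),
    sonic radius r_s = G M_p / (2 c_s^2); rho_s [g/cm^3] must be
    supplied by the user (an assumed input in this sketch)."""
    c_s = math.sqrt(k_B * T / (m_H / 2.0))   # ~13 km/s at 1e4 K
    r_s = G * M_p / (2.0 * c_s**2)
    return 4.0 * math.pi * rho_s * c_s * r_s**2

M_jup = 1.898e30  # g
# rho_s = 1e-18 g/cm^3 is an illustrative guess, not a model value.
print(f"{rr_limited_mdot(M_jup, rho_s=1e-18):.2e} g/s")
```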
For $F_{\rm EUV} < F_{\rm crit}$,
the mass-loss rate is calculated using an energy-limited model that is
similar to Equation \ref{Mdotxray}.
The difference is that the $F$ used here is given by
the EUV fluxes of a sun-like star from \citet{Ribas2005}.
$R_{\rm p}$ in the EUV-driven regime is set as the planetary radius
where the optical depth becomes unity
for EUV photons (20 eV),
at a pressure of $\sim$ 1 nanobar \citep{Murray-Clay2009}.
The efficiency factor is set by comparing Equation \ref{Mdotxray}
with Equation 19 in \citet{Murray-Clay2009},
which is an analytical equation used to estimate the mass-loss rates at
a low $F_{\rm EUV}$ and does not include the Roche lobe term
$\xi$.\footnote[1]{The Roche lobe term is also neglected in the energy-limited
EUV-driven regime in our evaporation model.}
The efficiency factor is 0.06 in the nominal evaporation model and
0.12 in the comparison population synthesis.
We adopt the criterion from \citet{Murray-Clay2009},
which states that, for $F_{\rm EUV}$
$> 10^4~ {\rm\,erg}~ {\rm\,cm}^{-2}~ {\rm\,s}^{-1}$,
evaporation is in the radiation-recombination-limited regime for
hot Jupiters orbiting main-sequence solar analogs.
In reality, the critical $F_{\rm EUV}$ ($F_{\rm crit}$) for the transition from
a radiation-recombination- to an energy-limited evaporation
is a function of the planetary mass and radius, among other characteristics.
In our work, we simply choose a constant $F_{\rm crit}$
$= 10^4~ {\rm\,erg}~ {\rm\,cm}^{-2}~ {\rm\,s}^{-1}$ for each planet.
The variations of this critical value and its influence on
the final planet population will be described in $\S$ \ref{modelruns}.
\subsection{Transition Between the X-ray- and EUV-driven Regimes}
Whether evaporation is X-ray- or EUV-driven depends on
the intensity of the EUV and X-ray irradiation received by a planet.
We use the criterion from \citet{Owen2012},
which separates X-ray- and EUV-driven regimes
based on whether the X-ray-driven flow can reach the sonic point
before it enters the ionization front.
This criterion can be determined based on the EUV luminosity of the host star.
The threshold EUV luminosity for the escape
flow in an EUV-driven regime is \citep{Owen2012}:
\begin{equation}
\begin{split}
\Phi_* &\ge 10^{40} {\rm\, s}^{-1} \left(\frac{a}{0.1\rm { \,AU}}\right)^2\left(\frac{\dot{m}_{\rm X}}{10^{12}\rm { \,g\;\, s}^{-1}}\right)^2\left(\frac{A}{1/3}\right) \\
&\quad \times \left(\frac{\beta}{1.5}\right)\left(\frac{R_{\rm p}}{10 R_{\oplus}}\right)^{-3}.
\end{split}
\end{equation}
where $\Phi_*$ is the EUV luminosity (in photons per second) of the host star,
$A$ (typically $\approx$ 1/3) is a geometric factor that approximates the steepness of
the density fall-off in the ionized portion of the
X-ray-heated flow \citep{Johnstone1998},
$\beta$ is the ratio of the X-ray sonic surface to the planetary radius
(it is of order unity \citep{Owen2012}; we set it to 1.5 herein),
and $\dot{m}_{\rm X}$ is the mass flux of the X-ray-driven flow that enters the ionization front.
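As an illustration of this criterion, the threshold luminosity of \citet{Owen2012} can be evaluated directly (a minimal sketch; the example planet parameters are assumed values):

```python
def phi_threshold(a_AU: float, mdot_x: float, R_p_earth: float,
                  A: float = 1.0 / 3.0, beta: float = 1.5) -> float:
    """Threshold EUV photon luminosity [photons/s] above which the
    escape flow is EUV-driven (Owen & Jackson 2012 scaling).
    a_AU: semi-major axis [AU]; mdot_x: X-ray flow mass flux [g/s];
    R_p_earth: planet radius in Earth radii."""
    return (1e40 * (a_AU / 0.1)**2 * (mdot_x / 1e12)**2
            * (A / (1.0 / 3.0)) * (beta / 1.5) * (R_p_earth / 10.0)**-3)

# Illustrative: planet at 0.05 AU, X-ray flow of 1e10 g/s, R_p = 4 R_earth.
print(f"{phi_threshold(0.05, 1e10, 4.0):.2e} photons/s")
```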
\section{Planetary evolution}
\label{sect:planetevolution}
\subsection{The Synthetic Planet Populations}
The synthetic planet populations adopted herein are based on
a planet formation model
\citep{Alibert2005,Mordasini2012a,Mordasini2012b}, in which we
simulate the accretion of solid/gas materials and the disc-based
migration of a planet in combination with the evolution of a
protoplanetary gas disk. At 8 Myr, nearly all protoplanetary gas
disks have disappeared \citep{Mordasini2009}; thereafter, the
planets enter the evolution stage, where both gas accretion and
disk-driven migration end.
Planet-planet scattering and Kozai migration are not included in
the model because we use the one-embryo-per-disk approximation.
We take a
snapshot of each planet in a population at 10 Myr and set it as
the initial condition of the evolution stage.
We use the following parameters
for each planet: core mass, envelope mass, luminosity,
mass-fraction of the ice in the core, initial deuterium fraction, and
semi-major axis.
The population-wide effect of atmospheric evaporation is our main
focus. We consider two variable quantities in population
synthesis: one is the description of the evaporation model that varies
with the dominant heating source (X-ray, EUV, or both) and
heating efficiency, and the other is the ISM
grain opacity reduction factor used during the formation stage.
The grain opacity during the formation phase controls
the amount of H/He a core of given mass and luminosity
can accrete \citep{Movshovitz2010,Mordasini2014a}.
We follow the evolution of different synthetic planet populations
using different evaporation models. The details for each simulation
are listed in Table \ref{tab:simulist}. All populations are formed around
a 1 $M_{\odot}$ star and using the one-embryo-per-disk approximation.
A description of the concurrent formation of several planets can be found in \citet{Alibert2013}.
We define the nominal evaporation
model as one that includes both X-ray-and EUV-driven regimes.
The nominal synthetic population is defined as a population calculated
using the isothermal type I migration rate
\citep{Tanaka2002}
with a reduction factor
of 0.1 and a grain opacity reduction factor
($f_{\rm opa}$) of 0.003 compared with the full ISM value. The
low-mass, close-in planets in this population are formed based on a
protoplanetary disk that includes the stellar irradiation for
calculating the disk's temperature structure, and the combined
viscous and thermal criteria for the transition from type I to type
II migration. These assumptions are more realistic than the assumptions in
\citet{Mordasini2009}, wherein non-irradiated disks were used with only the
thermal criterion and a type I migration rate with a reduction
factor of 0.001. The inner boundary of the disk in our model is
0.1 AU due to limitations in the stellar irradiation description
for the disk \citep{Fouchet2012}. Radial velocity and transit surveys
have detected many planets within 0.1 AU, which demonstrates that
even close-in planets with orbital periods less than 10 days are
common \citep{Mayor2011,Dong2013,Petigura2013a}. Planets can reach
close-in orbits through disk-driven migration
\citep{Goldreich1980,Lin1996,Zhou2005} or migration due to
planetesimal disk dynamics
\citep{Terquem2007,Ji2011,Ormel2012}. A
close-in planet may also be formed through Kozai migration with tidal circularization
\citep{Wu2003,Fabrycky2007}, but this process can take considerably longer, and the
intense atmospheric evaporation stage may have passed. Because the inner boundary (0.1 AU) in the semi-major axis distribution of
our planet populations is too large to reasonably study
hydrodynamic evaporation, we manually shift the semi-major axes
of all planets inward by 0.04 AU; thus, the planet populations begin
at 0.06 AU.
\begin{table}
\centering
\caption{Details of the different simulations}
\label{tab:simulist}
\begin{tabular}{lccc}
\hline
Simulation & Type I migration & $f_{\rm opa}$ & Evaporation model \\
run & rate & & \\
\hline
XE & 0.1$\times$isothermal-rate & 0.003 & X-ray + EUV \\
NoEV & 0.1$\times$isothermal-rate & 0.003 & No Evap \\
SatE & 0.1$\times$isothermal-rate & 0.003 & EUV (Saturation) \\
XE2 & 0.1$\times$isothermal-rate & 0.003 & XE $\times$ 2 \\
L12 & 0.1$\times$isothermal-rate & 0.003 & \citet{Lopez2012} \\
B04 & 0.1$\times$isothermal-rate & 0.003 & \citet{Baraffe2004} \\
\hline
NIOpa003 & non-isothermal & 0.003 & X-ray + EUV \\
NIOpa0 & non-isothermal & 0.0 & X-ray + EUV \\
NIOpa1 & non-isothermal & 1.0 & X-ray + EUV \\
\hline
\vspace{0.05cm}
\end{tabular}
\end{table}
\subsection{Planet Structure Model}
A planet’s structure consists of a core and
a gaseous envelope. The planetary core mass is constant
during evolution. The core radius is determined by its mass,
its mass-fraction of ice, and the pressure at the bottom of the
gaseous envelope \citep{Mordasini2012b}. We assume spherical
symmetry in the planetary envelope and solve its structure by
combining the following one-dimensional hydrostatic equations:
\begin{subequations}
\begin{minipage}{.22\textwidth}
\begin{align}
\frac{{\rm d}m}{{\rm d}r}=4\pi r^{2}\rho \\
\frac{{\rm d}P}{{\rm d}r}=-\frac{Gm}{r^{2}}\rho
\end{align}
\end{minipage}
\hfill
\begin{minipage}{.22\textwidth}
\begin{align}
\frac{{\rm d}\tau}{{\rm d}r}=\kappa_{\rm th}\rho \\
\frac{{\rm d}L}{{\rm d}r}=0
\end{align}
\end{minipage}
\end{subequations}
\\ where $r$ is the radius as measured from the planetary center,
$m$ is the cumulative mass inside $r$,
$\rho$ is the density in each spherical shell, $P$ the pressure,
$G$ the gravitational constant,
and $L$ is the planetary luminosity, which
includes radiogenic heating from the solid core.
We assume that the luminosity is constant with radius,
which does not significantly affect
the evolution as discussed in \citet{Mordasini2012a}.
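As a toy illustration of how the mass-continuity and hydrostatic-balance equations above are integrated (not the actual model, which uses the \citet{Saumon1995} equation of state and tabulated opacities), the sketch below integrates an isothermal, ideal-gas envelope outward from the core:

```python
import math

G, k_B, m_H = 6.674e-8, 1.381e-16, 1.673e-24  # cgs

def integrate_envelope(M_core, R_core, P_bottom, T, mu=2.35, dr=1e7):
    """Toy outward Euler integration of dm/dr = 4*pi*r^2*rho and
    dP/dr = -G*m*rho/r^2 for an isothermal, ideal-gas H/He envelope
    (NOT the Saumon et al. 1995 EOS used in the actual model).
    Returns (r, m, P) profiles until P drops below 1 dyn/cm^2."""
    r, m, P = R_core, M_core, P_bottom
    prof = [(r, m, P)]
    while P > 1.0:
        rho = P * mu * m_H / (k_B * T)        # ideal-gas EOS
        m += 4.0 * math.pi * r**2 * rho * dr  # mass continuity
        P += -G * m * rho / r**2 * dr         # hydrostatic balance
        r += dr
        prof.append((r, m, P))
    return prof

M_earth, R_earth = 5.972e27, 6.371e8
# Assumed example: 5 M_earth core, hot (1500 K) envelope, 10 kbar base pressure.
prof = integrate_envelope(5 * M_earth, 1.6 * R_earth, P_bottom=1e10, T=1500.0)
print(f"envelope extends to {prof[-1][0] / R_earth:.2f} R_earth")
```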
The temperature gradient in the gaseous envelope depends on both
the optical depth and heat transfer mechanism
(convective or radiative) at each envelope layer.
We separate the gaseous envelope into two parts:
the atmosphere where most of the stellar irradiation is absorbed and
the envelope that lies below the atmosphere.
If an atmospheric layer is convectively stable,
we adopt the globally averaged temperature profile
from Equation (49) in the semi-grey model of \citet{Guillot2010}
(which is derived using the Eddington approximation):
\begin{equation}
\begin{split}
T^4 &={3T_{\rm int}^4\over 4}\left\{{2\over 3}+\tau\right\}+ {3T_{\rm eq}^4\over 4}\left\{{2\over 3}+\right. \\
& \quad \left. {2\over 3\gamma}\left[1+\left({\gamma\tau\over 2}-1\right)e^{-\gamma\tau}\right]+ {2\gamma\over 3}\left(1-{\tau^2\over 2}\right)E_2(\gamma\tau)\right\}
\end{split}
\label{2bdglobal}
\end{equation}
where $T_{\rm int}=(L/(4\pi\sigma_{B}R_{\rm p}^2))^{1/4}$ is the intrinsic temperature
that characterizes the heat flux from the planet's interior
($\sigma_{B}$ is the Stefan-Boltzmann constant),
$T_{\rm eq}=T_*(R_*/(2D))^{1/2}$ is the equilibrium temperature obtained by averaging
the stellar radiation over the entire planet surface ($T_*$ is the stellar temperature,
$R_*$ is the stellar radius, and $D$ is the distance from the planet to the star),
and $\gamma=\kappa_{\rm v}/\kappa_{\rm th}$ is the ratio of the visible opacity to
the thermal opacity \citep{Guillot2010}.
The visible opacity $\kappa_{\rm v}$ is not explicitly calculated but
is incorporated in the model by $\gamma$.
$E_2(\gamma\tau)$ is the exponential integral $E_n(z)\equiv\int_1^\infty t^{-n}e^{-zt}dt$ with $n=2$.
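For reference, the globally averaged profile of Equation \ref{2bdglobal} is straightforward to evaluate numerically; the sketch below (a minimal illustration with a simple quadrature for $E_2$) recovers the deep isothermal limit $T^4 \rightarrow T_{\rm eq}^4\,[1/2 + 1/(2\gamma)]$ for negligible $T_{\rm int}$, which is the quantity later used to calibrate $\gamma$:

```python
import math

def E2(z: float, n_steps: int = 2000) -> float:
    """Second exponential integral via E_2(z) = int_0^1 exp(-z/u) du
    (midpoint rule; adequate for this illustration)."""
    h = 1.0 / n_steps
    return h * sum(math.exp(-z / ((i + 0.5) * h)) for i in range(n_steps))

def guillot_T(tau: float, T_int: float, T_eq: float, gamma: float) -> float:
    """Globally averaged T(tau) of Guillot (2010), Eq. (49)."""
    g = gamma
    T4 = (0.75 * T_int**4 * (2.0 / 3.0 + tau)
          + 0.75 * T_eq**4 * (2.0 / 3.0
              + 2.0 / (3.0 * g) * (1.0 + (g * tau / 2.0 - 1.0) * math.exp(-g * tau))
              + 2.0 * g / 3.0 * (1.0 - tau**2 / 2.0) * E2(g * tau)))
    return T4**0.25

# Deep-atmosphere temperature for assumed example values (gamma = 0.18
# is the tabulated value near 1267 K):
T_deep = guillot_T(tau=100.0, T_int=50.0, T_eq=1000.0, gamma=0.18)
print(f"T_deep = {T_deep:.0f} K")
```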
The boundary between the atmosphere and envelope
should be at the optical depth in
visible wavelengths $\tau_{\rm v} \gg$ 1.
Based on the $\gamma$ defined in the semi-grey model,
we have $\tau \gg 1/(\sqrt{3}\gamma)$ at the transition \citep{Rogers2011}.
Because most of the starlight is absorbed at pressures less than 10 bar
\citep{Guillot2002}, we set the atmosphere/envelope boundary
at $\tau = 100/(\sqrt{3}\gamma)$ (which corresponds to a pressure of $\sim$ 10 bar).
If the envelope at $\tau > 100/(\sqrt{3}\gamma)$ is convectively stable,
the radiative temperature gradient is calculated using the diffusion
approximation that only includes the planet's intrinsic luminosity:
\begin{equation}
\frac{{\rm d}T}{{\rm d}r}=-\frac{3\kappa_{\rm th}\rho L}{64\pi\sigma_{B}T^3 R^2}
\end{equation}
On the other hand, if a layer is convectively unstable
(i.e., the adiabatic temperature gradient is less steep than the radiative temperature gradient),
we use the adiabatic temperature profile instead:
\begin{equation}
\frac{{\rm d}T}{{\rm d}r}=\frac{T}{P}\frac{{\rm d}P}{{\rm d}r}\left(\frac{{\rm d}\ln T}{{\rm d}\ln P}\right)_{\rm ad}
\end{equation}
where the adiabatic temperature gradient
is calculated using the equation of state of \citet{Saumon1995}.
This convective adjustment is also used in the atmosphere, although
we do not allow detached convective zones there.
Planetary evolution is modeled using the framework of
\citet{Mordasini2012a,Mordasini2012b}, wherein
the planetary luminosity $L$ and its temporal evolution is
derived through energy conservation as $L = -{\rm d}E_{\rm tot}/{\rm d}t$,
where ${\rm d}E_{\rm tot}$ is the energy change due to gravitational contraction
and release of internal heat (neglecting planetary rotation).
The luminosity $L$ at each timestep controls the
planetary structure and the changes of the interior adiabat,
and hence the temporal evolution of the planet.
\begin{table}
\centering
\caption{The $\gamma=\kappa_{\rm v}/\kappa_{\rm th}$ used in the atmospheric model.}
\begin{tabular}{cc}
\hline
Temperature (K) & $\gamma$ \\
\hline
260 & 0.005 \\
388 & 0.008 \\
584 & 0.027 \\
861 & 0.07 \\
1267 & 0.18 \\
1460 & 0.19 \\
1577 & 0.18 \\
1730 & 0.185 \\
1870 & 0.2 \\
2015 & 0.22 \\
2255 & 0.31 \\
2777 & 0.55 \\
\hline
\end{tabular}\\
\label{table:gamma}
\end{table}
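The interpolation of $\gamma$ between the tabulated values above can be sketched as follows (the linear-in-temperature scheme and the edge clamping are assumptions of this illustration):

```python
import bisect

# (equilibrium temperature [K], gamma) pairs from the table above
GAMMA_TABLE = [(260, 0.005), (388, 0.008), (584, 0.027), (861, 0.07),
               (1267, 0.18), (1460, 0.19), (1577, 0.18), (1730, 0.185),
               (1870, 0.2), (2015, 0.22), (2255, 0.31), (2777, 0.55)]

def gamma_of_T(T_eq: float) -> float:
    """Linear interpolation of gamma in equilibrium temperature,
    clamped to the edge values outside the tabulated range."""
    temps = [t for t, _ in GAMMA_TABLE]
    if T_eq <= temps[0]:
        return GAMMA_TABLE[0][1]
    if T_eq >= temps[-1]:
        return GAMMA_TABLE[-1][1]
    i = bisect.bisect_right(temps, T_eq)
    (t0, g0), (t1, g1) = GAMMA_TABLE[i - 1], GAMMA_TABLE[i]
    return g0 + (g1 - g0) * (T_eq - t0) / (t1 - t0)

print(gamma_of_T(1000.0))
```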
\begin{figure}
\includegraphics[width=9cm]{fig1.eps}\\
\caption{Pressure-temperature models from \citet{Fortney2008} (points) and
the corresponding analytical fits (solid line) for different semi-major axes.
The modeled planets have $g = 15$ m s$^{-2}$ and $T_{\rm int} = 200$ K,
and they orbit a sun-like star.
The discrepancy in the upper part of the atmosphere is due to non-grey effects
\citep{Parmentier2014m,Parmentier2013s}.}
\label{fig:gamma}
\end{figure}
\begin{figure*}
\includegraphics[width=17.0cm]{fig2}
\caption{
Mass-radius relationships for low-mass planets with
different envelope fractions $f_{\rm env/tot}$ ($M_{\rm env}/M_{\rm p}$).
The radii plotted here are the planetary radii at $\tau = 2/3$.
There are three groups of lines: The green dotted lines
are calculated using the fixed $\gamma = 0.6\sqrt{T_{\rm irr}/2000{\rm K}}$
and the planetary boundary at $\tau = 2/3$;
these settings are the same as in \citet{Rogers2011}.
The other two groups use the interpolated $\gamma$ at equilibrium temperatures of
500 and 1000 K, and the planetary boundary at $\tau = 0.01$
(the planetary radii plotted in the figure are at $\tau = 2/3$).
The green dot-dashed lines use the isotropic temperature profile (Equation 27 in \citet{Guillot2010});
they are shown for comparison with the thick black lines, which
use the globally averaged temperature profile (Equation 49 in \citet{Guillot2010}).
From bottom to top, the $f_{\rm env/tot}$ in each group is
0.001, 0.01, 0.1, 0.3, and 0.5, respectively.
The red line at the bottom is the mass-radius relationship of
the solid cores for planets with $f_{\rm env/tot}=0.001$.
In all of the runs, the luminosity of each planet is
set according to $L_{\rm p}/M_{\rm p} = 10^{-10.5}$ W\,kg$^{-1}$.
}
\label{fig:mrRg}
\end{figure*}
\subsection{Atmosphere Calibration}
\label{sect:resultsI}
One important parameter in the semi-grey model, $\gamma$,
determines how much of the incoming flux is absorbed in the upper
atmosphere. The $\gamma$ values used in our simulation are shown
in Table \ref{table:gamma} and are determined by comparing the
temperature in the deep isothermal zone of the analytical model
(Equation \ref{2bdglobal}) with the results from the line-by-line
radiative transfer models \citep{Fortney2005,Fortney2008,Parmentier2013s}. The
line-by-line EGP (Extrasolar Giant Planet) code was initially
developed by \citet{Mckay1989} to study Titan's atmosphere.
Since then, it has been extensively modified and adapted for
studies on giant planets \citep{Marley1999}, brown dwarfs
\citep{Marley1996,Marley2002,Burrows1997}, and hot Jupiters
\citep[e.g.,][]{Fortney2005,Fortney2008,Showman2009}.
The $\gamma$ at a specific equilibrium temperature can be determined through
interpolation between these tabulated values. Figure \ref{fig:gamma}
shows the $PT$ profiles obtained by two different models.
The numerical profiles are calculated assuming
a clear-sky, solar-composition atmosphere in which TiO and VO
have rained out of the atmosphere and are therefore neglected in the model
\citep[see][]{Parmentier2013a,Parmentier2014m}. The effects of
a non-solar composition may be important but are not
considered in this study.
The analytical solution is highly consistent with the line-by-line model in the deep atmosphere.
For pressures lower than 1 bar,
the discrepancy is due to non-grey effects \citep{Parmentier2014m}.
\begin{figure*}
\includegraphics[width=17.0cm]{fig3}
\caption{
Temporal evolution of two close-in planets at 0.03 AU.
The left column shows the evolution of a Neptunian planet with
a 15 $M_{\oplus}$ core and a 10 $M_{\oplus}$ envelope.
The right column shows the evolution of a Jovian
planet with a 25 $M_{\oplus}$ core and a 290 $M_{\oplus}$ envelope.
The thick blue line is the experiment that includes both
X-ray- and EUV-driven evaporation; the dashed part of the line indicates
the X-ray-driven regime, the solid part indicates
the radiation-recombination-limited EUV-driven regime,
and the dotted part indicates the energy-limited EUV-driven regime.
The thin green, yellow, and brown lines are the experiments
that include only EUV-driven evaporation;
the solid part indicates the radiation-recombination-limited regime,
and the dotted part indicates the energy-limited regime.
The yellow and brown lines show the evolution of the
Neptunian planet using an $F_{\rm crit}$ for the transition
from a radiation-recombination-limited to an energy-limited regime
of $2 \times 10^4~ {\rm\,erg}~ {\rm\,cm}^{-2}~ {\rm\,s}^{-1}$
and $0.5 \times 10^4~ {\rm\,erg}~ {\rm\,cm}^{-2}~ {\rm\,s}^{-1}$, respectively.
The red dash-dotted lines in the right column use the
energy-limited model with a 100\% heating efficiency \citep{Baraffe2004}.
}
\label{fig:single}
\end{figure*}
\subsection{Mass-radius Relationship of Low-Mass Planets}
We then compare the mass-radius relationships of the low-mass planets
from our model with those of \citet{Rogers2011}.
These low-mass planets have different envelope mass fractions
$f_{\rm env/tot}$ (the ratio of the envelope mass to the total planetary mass)
and are at an equilibrium temperature of 500 or 1000 K.
\citet{Rogers2011} use a fixed $\gamma$ of
$0.6\sqrt{T_{\rm irr}/2000{\rm K}}$ based on the fit in \citet{Guillot2010},
where $T_{\rm irr}=T_*(R_*/D)^{1/2}$ is the irradiation temperature.
\citet{Parmentier2014m} examine the fitted
$\gamma = 0.6\sqrt{T_{\rm irr}/2000{\rm K}}$ and find that it
is poor for temperatures lower than 1000 K due to
the disappearance of absorption by alkali species (sodium, potassium, etc.).
\citet{Rogers2011} also set the planetary boundary at
an optical depth of $\tau = 2/3$.
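As an illustration, the fitted scaling above can be evaluated directly. The following is a minimal sketch; the stellar parameters in the example are hypothetical and the function names are ours:

```python
import math

def irradiation_temperature(t_star, r_star_over_d):
    """T_irr = T_* (R_*/D)^(1/2), with T_* the stellar effective
    temperature and R_*/D the ratio of stellar radius to orbital distance."""
    return t_star * math.sqrt(r_star_over_d)

def gamma_fit(t_irr):
    """Fitted visible-to-thermal opacity ratio from Guillot (2010):
    gamma = 0.6 * sqrt(T_irr / 2000 K)."""
    return 0.6 * math.sqrt(t_irr / 2000.0)

# Hypothetical example: a Sun-like star (T_* = 5780 K, R_* = 1 R_sun)
# seen from 0.1 AU, so R_*/D ~ 6.957e8 m / 1.496e10 m.
t_irr = irradiation_temperature(5780.0, 6.957e8 / 1.496e10)
gamma = gamma_fit(t_irr)
```

Note that, as discussed above, this fit degrades below 1000 K, where the tabulated $\gamma$ should be preferred.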
As shown in Figure \ref{fig:mrRg}, the planetary radius is closely related
to the $\gamma$ value and the planetary boundary.
The mass-radius relationships calculated using the same $\gamma$ and
planetary boundary as \citet{Rogers2011} are highly consistent with their results.
We also show two other groups of mass-radius relationships that are
calculated using the tabulated $\gamma$ at 500 and 1000 K.
In addition to the different $\gamma$,
the planetary boundary for these two groups is set at
$\tau = 0.01$ (i.e., the atmospheric structure is integrated from $\tau = 0.01$;
notably, we adopt the radius at the optical depth $\tau = 2/3$
as the planetary radius, and for the figures herein,
the plotted planetary radii are at $\tau = 2/3$).
These two groups of mass-radius relationships are
considerably different from those of \citet{Rogers2011},
especially for low-mass planets
with a high $f_{\rm env/tot}$.
The difference between these two groups is the
temperature profile used in the semi-grey model
in the upper atmosphere.
One group uses the isotropically averaged temperature profile
in \citet{Rogers2011},
and the other group uses the globally averaged temperature profile, which
considers
the advective transport of energy to a certain extent \citep{Guillot2010}.
In contrast to the large discrepancies from the different
$\gamma$ and planetary boundary,
the differences between the mass-radius relationships
created by the isotropically averaged temperature profile and
the globally averaged profile are small.
The planetary radii calculated using the globally averaged temperature
profile are slightly larger than the radii calculated using the isotropic profile.
For an equilibrium temperature of 500 K and an $f_{\rm env/tot}$ of 0.1,
the radius of a 3 $M_{\oplus}$ planet calculated using the globally averaged
temperature profile is $\sim$ 4\% larger than the radius
for the isotropic profile,
and the difference between the radii from the two temperature profiles
is $<$ 1\% for planets larger than 6 $M_{\oplus}$.
The isotropic temperature profile is only used for
the comparison group shown in Figure \ref{fig:mrRg}.
In our population syntheses, planetary evolution is calculated
using the globally averaged temperature profile.
The red lines in Figure \ref{fig:mrRg} show the radii of the
solid cores of the planets with an $f_{\rm env/tot}$ of 0.001.
The large differences between the radii of the cores and the
total planetary radii show that the planetary envelope with only
0.1\% of the planetary mass produces a large increase in
the planetary radius, which is a well-known effect \citep[e.g.,][]{Adams2008}.
For example, at an equilibrium temperature of 500 K,
a planet with a total mass of approximately 1 $M_{\oplus}$ and
an envelope of 0.1\% of the planetary mass
will have a radius that is greater than 2 $R_{\oplus}$.
The planet's atmosphere is bloated due to its low surface gravity
and the heating from the incoming irradiation.
Consequently, the planetary radius will decrease dramatically after its
entire envelope is removed through evaporation.
\subsection{Illustrative Model Runs: Planetary Evolution with Escape}
\label{modelruns}
\begin{figure*}
\includegraphics[width=17.0cm]{fig4}
\caption{
A parameter study of planetary evolution coupled
with evaporation in the $M_{\rm core}$ versus
semi-major axis plane.
All planets orbit around a 1 $M_{\odot}$ star.
There are four planets (different point sizes) with
different choices of $f_{\rm env/core}$ at each grid node,
as indicated in the top right corner of the figure.
The color of each point shows how much of the initial envelope was lost.
The left column and the top panel in the right column show the
temporal evolution of the simulation using the nominal evaporation
model that includes both X-ray- and EUV-driven regimes.
The three panels in the right column compare the final results
of three simulations using different evaporation models:
both X-ray- and EUV-driven are included (nominal), EUV-driven only,
and the energy-limited model from \citet{Lopez2012}.
}
\label{fig:grid}
\end{figure*}
Figure \ref{fig:single} shows the evolution of
a Neptunian planet and a Jovian planet located at 0.03 AU.
The atmospheric escape is included such that
the planetary envelope mass is reduced at each timestep
based on the calculated mass-loss rate.
The Neptunian planet, with a 15 $M_{\oplus}$ core and 10 $M_{\oplus}$ envelope,
loses its entire initial envelope in the first 220 Myr when both
X-ray- and EUV-driven evaporation are included.
If only EUV-driven evaporation is included,
the planet can retain an envelope of $\sim$ 2.9 $M_{\oplus}$
after 10 Gyr of evolution.
X-ray-driven evaporation in the early stage of
planetary evolution is efficient at removing gas from a planet
because the planetary atmosphere is bloated and the
X-ray flux from the host star is intense \citep{Ribas2005}.
The mass-loss rates in the first 1 Myr of planetary evolution
are approximately $4 \times 10^{-6}\,M_{\oplus}$\,yr$^{-1}$.
From $\sim 1.45 \times 10^{8}$ years onward,
evaporation transitions to the EUV-dominated
radiation-recombination-limited regime.
At this time, the planet only has a 0.085 $M_{\oplus}$ envelope remaining.
Soon thereafter, the planet loses its entire envelope and becomes a
15 $M_{\oplus}$ rocky core with a $\sim$ $2 R_{\oplus}$ radius.
In contrast, if only EUV-driven evaporation is included, in its early stage,
the planet is in the radiation-recombination-limited regime wherein most
incoming energy is lost to cooling radiation.
The mass-loss rates in this regime are determined by the density of
the escape flow at the sonic point; they are below
$10^{-8}\,M_{\oplus}$\,yr$^{-1}$ in the first 100 Myr of planetary evolution.
The transition from radiation-recombination-limited evaporation to
energy-limited evaporation occurs at approximately $5.1 \times 10^{8}$ years,
when the EUV flux from the star drops below
$10^4~ {\rm\,erg}~ {\rm\,cm}^{-2}~ {\rm\,s}^{-1}$.
After 10 Gyr, the final bulk composition of this planet includes
a 15 $M_{\oplus}$ core with a 2.9 $M_{\oplus}$ envelope,
and the planetary radius is 5.67 $R_{\oplus}$.
The discontinuous change in the mass-loss rate for this experiment demonstrates
that a single, constant critical EUV flux delimiting the radiation-recombination-limited
and energy-limited evaporation regimes is, strictly speaking, inappropriate for this Neptunian planet.
In Figure \ref{fig:single},
we also show the evolution of the same planet but using different
$F_{\rm crit}$ at $2 \times 10^4$ and
$ 0.5 \times 10^4 ~ {\rm\,erg}~ {\rm\,cm}^{-2}~ {\rm\,s}^{-1}$.
These two experiments generate a final planetary radius of 5.34 $R_{\oplus}$
($F_{\rm crit} =
2 \times 10^4 ~ {\rm\,erg}~ {\rm\,cm}^{-2}~ {\rm\,s}^{-1}$)
and 5.81 $R_{\oplus}$ ($F_{\rm crit} =
0.5 \times 10^4 ~ {\rm\,erg}~ {\rm\,cm}^{-2}~ {\rm\,s}^{-1}$),
which correspond to changes in the radius of 5.8\% and 2.5\%, respectively.
Notably, these two experiments only include EUV-driven evaporation.
When X-ray-driven evaporation is included in the model,
most of the planets that can be evaporated to bare cores
lose their entire envelopes while the evaporation is still
in the X-ray-driven regime.
Thus, the constant $F_{\rm crit}$ used for
the low-mass planets in our model will not excessively affect the
population-wide radius distribution, as shown in the following section.
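The role of the critical flux can be summarized in a minimal sketch; the function name is ours, and the default $F_{\rm crit} = 10^4~{\rm erg~cm^{-2}~s^{-1}}$ is the constant value adopted for the low-mass planets in our model:

```python
def euv_regime(f_euv, f_crit=1.0e4):
    """Select the EUV-driven evaporation regime from the incident EUV
    flux (erg cm^-2 s^-1): above F_crit most incoming energy is lost to
    cooling radiation (radiation-recombination-limited); below F_crit
    the evaporation is energy-limited."""
    return "radiation-recombination-limited" if f_euv > f_crit else "energy-limited"

# Early in the evolution the stellar EUV flux is high ...
assert euv_regime(5.0e4) == "radiation-recombination-limited"
# ... and drops below F_crit after a few 10^8 yr of evolution.
assert euv_regime(3.0e3) == "energy-limited"
```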
For the Jovian planet, the evaporation models with or without X-ray-driven
mass-loss do not show significantly different final results.
The modeled planet has a 25 $M_{\oplus}$ core and a 290 $M_{\oplus}$ envelope.
As shown in the right column of Figure \ref{fig:single}, the evolution
of this planet under the different evaporation models overlaps
throughout the 10 Gyr of evolution.
The only notable difference between the two runs is that, in the first 100 Myr,
the mass-loss rate in the X-ray-driven regime is approximately 18-fold greater than
in the EUV-driven, radiation-recombination-limited regime.
This difference is insufficient for producing a notable change in the planetary
mass and radius because the total mass of the envelope lost
only composes a small fraction of the planetary mass.
In the end, the experiment that also included X-ray-driven evaporation
lost $\sim$ 2.29 $M_{\oplus}$ of the envelope, and the other run, which did not
include X-ray-driven evaporation, lost $\sim$ 0.72 $M_{\oplus}$ of the envelope.
The only way for a Jovian planet to lose a significant portion of
its envelope is by assuming an energy-limited evaporation model
with a 100\% heating efficiency \citep{Baraffe2004}.
In this case, the Jovian planet begins with a mass-loss rate greater than
$10^{-6} M_{\oplus}$\,yr$^{-1}$.
With this high mass-loss rate,
the planetary radius expands at approximately $10^8$ years due to the
entropy change in the outer radiative layers.
Detailed descriptions of this interesting process can be found in \citet{Baraffe2004,Hjellming1987}.
In turn, the larger planetary radius results in an even greater mass-loss rate.
Beginning at $\sim 1.5\times10^{8}$ years, the planet reaches
a runaway mass-loss stage, as observed by \citet{Baraffe2004}.
Eventually, the planet loses its entire initial envelope after
$\sim 1.72\times10^{8}$ years of evolution.
\subsection{A Parameter Study}
In Figure \ref{fig:grid}, we show a parameter study on evaporation.
We consider three parameters: the planetary semi-major axis,
the planetary core mass,
and the ratio of the planetary envelope mass to the core mass
($f_{\rm env/core}$).
The parameter space is similar to that of \citet{Lopez2013}, who use
the incident stellar flux instead of the planetary semi-major axis;
the two are equivalent here because, for a fixed stellar type, the distance
from a planet to its host star essentially determines the incident flux it receives.
The modeled planets are located at 0.03-0.5 AU with a core of 1-40 $M_{\oplus}$.
At each semi-major axis and each core mass, four planets have
different $f_{\rm env/core}$, i.e., 1\%, 10\%, 20\%, and 80\%.
We do not consider planets with a larger $f_{\rm env/core}$.
Due to the long Kelvin-Helmholtz timescale,
it is unlikely that a small core will accrete
a significant amount of gas during the formation stage.
A massive core may accrete a large amount of gas,
but the effect of evaporation on gas giants is small,
as shown in Figure \ref{fig:single}; thus,
gas giants are not included in this parameter study.
Because all planets in this parameter study are artificial bodies without
an initial luminosity from the
self-consistent planet formation stage, we set their
initial luminosities to the value that corresponds to an entropy
at the core-envelope boundary that equals
$7.11\times(M_{\rm core}/M_{\oplus})^{0.0422}\times (f_{\rm env/core})^{0.0175}$.
This is an empirical fit of the central entropies of the planets
with a 1-40 $M_{\oplus}$ core and a $f_{\rm env/core}$ of 1\%-80\%
in our synthetic population at 10 Myr.
This fit shows that the initial entropy of a planet
increases with the core mass and envelope to core mass fraction.
The initial entropy of low-mass planets is an interesting subject
that will be investigated separately in future work.
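The empirical fit above can be written as a short helper. This is a sketch; the function name is ours, and the entropy is in the (dimensionless) units of the fit as quoted:

```python
def initial_entropy(m_core, f_env_core):
    """Empirical fit to the entropy at the core-envelope boundary of the
    synthetic planets at 10 Myr:
    S = 7.11 * (M_core / M_earth)^0.0422 * (f_env/core)^0.0175,
    with m_core in Earth masses and f_env_core dimensionless."""
    return 7.11 * m_core**0.0422 * f_env_core**0.0175

# The entropy increases with both the core mass and the
# envelope-to-core mass fraction, as stated in the text:
assert initial_entropy(40.0, 0.80) > initial_entropy(1.0, 0.01)
```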
\begin{figure*}
\includegraphics[width=17.0cm]{fig5}
\caption{
Temporal evolution of the mass-loss rates of the reference simulation
in the planetary semi-major axis versus mean density plane.
All planets orbit around a 1 $M_{\odot}$ star.
Each point in the figure corresponds to a planet in the synthetic population.
The color of each point shows the mass-loss rate of the planet.
The black points are the planets that have lost all their initial envelopes.
The parallel dashed lines show the loci of identical
mass-loss rates in the energy-limited evaporation regime,
i.e., points along each line will have the same mass-loss rate.
}
\label{fig:arhoMdotfid}
\end{figure*}
\begin{figure*}
\includegraphics[width=17.0cm]{fig6}
\caption{
The temporal evolution of the planetary mass and semi-major axis distribution
of the reference simulation.
The color of each point shows how much of the initial envelope was lost
($M_{\rm lost}/M_{\rm initial}$).
The black points are the planets that have lost all their initial envelopes.
Note the large population of close-in, low-mass planets that
have evaporated to bare cores.
}
\label{fig:aMfid}
\end{figure*}
The left column of Figure \ref{fig:grid} shows the evolution of
the simulation using the nominal evaporation model, which includes
both the X-ray- and EUV-driven mechanisms.
Four snapshots of this simulation (at 20 Myr, 110 Myr, 1 Gyr, and 5 Gyr)
are presented.
At 20 Myr (after 10 Myr of evolution), all of the planets in the bottom left corner have lost their entire envelopes and have become bare rocky cores.
There are two reasons for this:
first, the planetary radii are large owing to heating from the
intense stellar irradiation and the large intrinsic entropies of the planets
during the early stage; and
second, the manually fixed $f_{\rm env/core}$ for small cores
can be substantially higher than predicted by the formation calculations.
In particular, the formation calculations do not produce
cores of 1, 2, or 5 $M_{\oplus}$ with
a $f_{\rm env/core}$ of 80\% at 0.03 or 0.05 AU.
The high temperature at the outer boundary and
large initial luminosities for these small planets with a large
$f_{\rm env/core}$ appear to produce an unstable envelope structure
that cannot be modeled using the hydrostatic equilibrium approximation.
The only structure that we find for these planets is a bloated
atmosphere that expands beyond the Hill sphere, which is unstable.
Thus, their initial atmospheres evaporate in a short time.
\citet{Kurokawa2014} show comparable cases of low-mass planets undergoing similar
runaway mass-loss.
After the early, intense, X-ray-driven evaporation stage,
the evaporation transitions to the moderate radiation-recombination-limited
or energy-limited EUV-driven regimes, and the snapshots at 1 Gyr and 5 Gyr differ only slightly.
The three panels in the right column compare the final status
of the three simulations, with
the only difference between them being the evaporation model.
One uses the nominal model, which includes both X-ray- and EUV-driven mechanisms,
another only uses the EUV-driven evaporation model, and a third
uses the same energy-limited evaporation model as that in \citet{Lopez2012}.
The final configurations of these three groups are similar:
the planets in the bottom left corner become bare rocky cores,
while planets with large cores at large semi-major axes
retain most of their initial envelopes.
The differences between these three groups lie in the diagonal region
of each panel.
X-ray-driven evaporation in the early stage is more
effective at removing planetary atmosphere than EUV-driven evaporation;
thus, the diagonal region in the
simulation using the nominal evaporation model includes more bare cores than
the simulation using only the EUV-driven mechanism.
The simulation
using the energy-limited evaporation model in \citet{Lopez2012} produces the greatest mass-loss.
Consequently, it leaves the fewest planets retaining portions of their initial envelopes, and the $f_{\rm env/core}$
of each planet at 5 Gyr is the smallest of the three simulations.
\begin{figure*}
\includegraphics[width=17.0cm]{fig7}
\caption{
Temporal evolution of the planetary radius and semi-major axis
distribution of the reference simulation.
The color of each point shows how much of the initial envelope was lost.
The black points are the planets that have lost all their initial envelopes.
Above the black points, a separating ``evaporation valley" that runs
diagonally downward from 0.06 to 0.5 AU at 5 Gyr is clearly visible.
The empty region at intermediate orbital distances (extending at 0.02 Gyr from 0.2 to 2 AU)
is in contrast an artifact of assuming a minimal planetary mass of 1 $M_{\oplus}$,
and has no physical meaning.
Note that all planets start with a primordial H/He envelope.
In reality, this is likely not the case for all low-mass planets.
}
\label{fig:aRfid}
\end{figure*}
\section{The population-wide impact of evaporation}
\label{sect:resultsII}
We then couple planetary population synthesis with
different evaporation models to determine the population-wide
effect of atmospheric evaporation.
First, in Figures \ref{fig:arhoMdotfid}, \ref{fig:aMfid},
and \ref{fig:aRfid}, we show the evolution of the mass-loss rates and
the mass and radius distributions of our nominal planetary population
using the nominal evaporation model.
We then investigate the influence of the efficiency of the evaporation mechanism.
In Figures \ref{fig:6arhoMdot}, \ref{fig:6aM}, \ref{fig:6aR}, and \ref{fig:6mr},
we compare the mass-loss rates, mass and radius distributions,
and mass-radius relationships using
different evaporation models (or without evaporation).
The effect of the different grain opacities used during the
planetary formation stage is demonstrated by the mass-radius relationships of
three different planet populations in Figure \ref{fig:nio}.
Finally, we compare the radius distribution of our synthetic populations
with the $Kepler$ data in Figures \ref{fig:3histo} and \ref{fig:9histo}.
\subsection{Synthetic Planets: A, the Reference Simulation}
\label{sect:nominal}
\begin{figure*}
\includegraphics[width=18.0cm]{fig8}
\caption{
The mass-loss rates of the nominal planet population using
different evaporation models (Table \ref{tab:simulist}) at 5 Gyr
in the $M_{\rm p}\cdot\bar{\rho}$ versus semi-major axis plane.
The black points are the planets that have lost all their initial envelopes.
The minimal planetary mass is 1 $M_{\oplus}$.
The solid line in each panel indicates the evaporation threshold.
}
\label{fig:6arhoMdot}
\end{figure*}
In Simulation XE (see Table \ref{tab:simulist}),
we apply the nominal evaporation model
(X-ray + EUV) to the nominal planetary population;
therefore, it is referred to as our reference simulation.
In Figure \ref{fig:arhoMdotfid}, we plot the temporal evolution of
the planet mass-loss rates for Simulation XE
in the planet's semi-major axis versus mean density plane.
The evaporation rates of close-in planets are large at 0.02 Gyr and 0.11 Gyr.
At 0.02 Gyr, close-in planets at 0.06 AU can have a mass-loss rate of
$\sim$ $10^{-6}$ $M_{\oplus}$ yr$^{-1}$.
The mass-loss rates for most of the planets beyond 1 AU are
less than $10^{-10}$ $M_{\oplus}$ yr$^{-1}$.
At 1 Gyr, nearly all planets that retain portions of their
envelope have a mass-loss rate of less than $10^{-10}$ $M_{\oplus}$ yr$^{-1}$,
which explains why the envelope mass fractions and radius distributions
in Figure \ref{fig:aMfid} and Figure \ref{fig:aRfid} (in the following section)
barely change after 1 Gyr.
The parallel dashed lines show loci of identical mass-loss rates
for a purely energy-limited evaporation model (i.e., planets along each line
would have the same mass-loss rate in the energy-limited regime); in this regime, the
mass-loss rate is a function only of the planet's mean density and the incoming flux.
We use an energy-limited model in both the X-ray and the low EUV regimes;
thus, most of the color belts in Figure \ref{fig:arhoMdotfid} are parallel to these dashed lines.
At 0.02 Gyr, a considerable portion of the planets are
in the radiation-recombination-limited regime at $\sim$ 0.2 to 1 AU;
hence, the yellow and orange belts slightly deviate
from the parallel dashed lines.
From 1 Gyr, nearly all of the planets are in the energy-limited,
EUV-driven regime, and the color belts closely follow the parallel dashed lines.
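Why these loci are parallel can be seen from a purely energy-limited sketch (cgs units; the heating efficiency and example numbers below are illustrative, not the model's actual parameters): substituting $M_{\rm p} = (4/3)\pi R_{\rm p}^{3}\bar{\rho}$ shows that the rate depends only on the mean density and the incoming flux.

```python
import math

G = 6.674e-8  # gravitational constant, cgs units

def mdot_energy_limited(f_xuv, r_p, m_p, eps=0.1):
    """Energy-limited mass-loss rate (g/s): Mdot = eps*pi*F*R^3 / (G*M)."""
    return eps * math.pi * f_xuv * r_p**3 / (G * m_p)

def mdot_from_density(f_xuv, rho_bar, eps=0.1):
    """Equivalent form after substituting M = (4/3)*pi*R^3*rho_bar:
    Mdot = 3*eps*F / (4*G*rho_bar)."""
    return 3.0 * eps * f_xuv / (4.0 * G * rho_bar)

# The two forms agree for a consistent (R, M, rho_bar) triple:
r_p, rho_bar = 6.4e8, 3.0                      # ~1 R_earth, 3 g/cm^3
m_p = 4.0 / 3.0 * math.pi * r_p**3 * rho_bar
assert abs(mdot_energy_limited(1.0e3, r_p, m_p) /
           mdot_from_density(1.0e3, rho_bar) - 1.0) < 1e-12
# Since F scales as a^-2, loci of constant Mdot satisfy
# rho_bar proportional to a^-2: parallel lines in the
# log(a) versus log(rho_bar) plane.
```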
Figure \ref{fig:aMfid} shows the temporal evolution of the mass versus
the semi-major axis ($a$-$M$) distribution of the reference simulation
(the XE simulation). The $a$-$M$ distribution shows typical
sub-populations, such as the numerous low-mass ``failed cores" that only
accrete a limited quantity of gas, as well as many giant
planets that preferentially form outside of the snow line
\citep{Ida2004,Mordasini2009}. The color of each point in Figure
\ref{fig:aMfid} denotes the fraction of the initial envelope that
is lost for this planet (i.e., 1.0 indicates that a planet
has lost all of its initial H/He, and 0 means that the planet retains
its entire envelope). The black region in the bottom right corner corresponds to
the planets that have lost their entire initial envelope. The snapshot
at 0.11 Gyr presents a greatly increased black bare-core region;
however, the further increase is small for 1 and 5 Gyr.
The critical planetary mass below which a planet
can lose its entire envelope ($M_{\rm crit}$) has the form
$M_{\rm crit}(t,a)=M_{\rm crit}(t,a_{0}=0.06 ~{\rm AU})\,(a/a_0)^{-1}$. For
example, at t = 10 Gyr the critical planetary mass is $M_{\rm crit}(a=0.06 ~{\rm AU})\simeq 10 M_{\oplus}$, while
$M_{\rm crit}(a=0.7 ~{\rm AU})\simeq 1 M_{\oplus}$. Most of the
fully depleted planets are low-mass ``failed cores" ($M \leq 10
M_{\oplus}$) that only accrete a small envelope during the
formation stage. Although these planets have evaporated to bare, rocky
cores, they do not lose a substantial amount of their total mass because their initial envelope masses are
substantially lower than their core masses. Neptunian planets have
greater initial envelope mass fractions, but it is difficult for them to
lose a large portion of their envelopes. All of the Jovian planets retain
most of their envelopes. Thus, the
$a$-$M$ distribution of the entire planet population exhibits nearly no change: the four
snapshots in Figure \ref{fig:aMfid} all have similar
shapes. Notably, herein we only include planets with $a \geq 0.06 {\rm
AU}$; these results will differ for planets at very close-in orbits.
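The quoted scaling of the critical mass can be sketched as follows; this is an illustration built from the numbers above, not the model itself, and the function name is ours:

```python
def m_crit(a, m_crit_ref=10.0, a_ref=0.06):
    """Critical planetary mass (M_earth) below which a planet at
    semi-major axis a (AU) loses its entire envelope:
    M_crit(a) = M_crit(a_ref) * (a / a_ref)^-1.
    The reference value ~10 M_earth holds at a_ref = 0.06 AU, t = 10 Gyr."""
    return m_crit_ref * (a / a_ref)**-1

assert abs(m_crit(0.06) - 10.0) < 1e-12
# At 0.7 AU the scaling gives ~0.9 M_earth, consistent with the
# quoted M_crit(a = 0.7 AU) ~ 1 M_earth.
```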
\begin{figure*}
\includegraphics[width=18.0cm]{fig9}
\caption{
The planetary mass versus semi-major axis distributions of
the nominal planet population
using different evaporation models at 5 Gyr (Table \ref{tab:simulist}).
The color of each point shows how much of the
initial envelope was lost.
The black points are the planets that have lost all their initial envelopes.
}
\label{fig:6aM}
\end{figure*}
\begin{figure*}
\includegraphics[width=18.0cm]{fig10}
\caption{
The planetary radius versus semi-major axis distributions of
the nominal planet population
using different evaporation models at 10 Gyr (Table \ref{tab:simulist}).
The color of each point shows the ratio of the planetary envelope mass
to the core mass, $f_{\rm env/core}$.
The black points are the planets that have lost all their initial envelopes.
Simulation B04 is clearly incompatible with the observed $a$-$R$ relationship.
}
\label{fig:6aR}
\end{figure*}
However, the $a$-$R$ distribution
of the entire planetary population is clearly modified by evaporation.
Figure \ref{fig:aRfid} shows the $a$-$R$ distribution of the same
reference simulation.
First, certain features are related to the
planet formation and evolution model.
For example, the empty region from 0.2 to 2 AU at 0.11 Gyr
is an artifact of using a minimal core mass of 1 $M_{\oplus}$
in the formation calculations.
This region is left empty because protoplanets inside the ice line quickly accrete all planetesimals
in their feeding zones.
Therefore, their luminosities are low
and are mostly attributed to gas accretion.
For a fixed core mass under these circumstances,
the envelope mass increases with orbital distance \citep[see][]{Ikoma2012}
because more gas can be bound at lower nebula temperatures,
which translates into a larger radius and raises the upper edge of
the empty region at larger distances (out to approximately 2 AU).
Outside of this distance, another effect becomes dominant:
the solid accretion timescale is longer, which means that
certain planets have high luminosities (due to planetesimal accretion)
at the end of the disk lifetime.
These planets can only hold tenuous envelopes,
whose masses no longer correlate with the orbital distance.
In reality, there is no minimal core mass; therefore,
this empty region should not exist in the actual $a$-$R$ plot.
A very similar artifact can be seen in Fig. 7 of \citet{Owen2013}.
One real, visible effect of planetary evolution is the decrease of the planetary radii of the entire population due to planet cooling.
Note the sharp upper limit for the planetary radius, which
is artificially sharp because, first, no special bloating mechanisms
are included \citep[e.g., ohmic heating,][]{Batygin2011},
and second, the opacity, which can affect cooling \citep{Vazan2013},
is identical for each planet.
The features related to evaporation are shown by the color of each point,
which represents the fraction of the initial envelope
that evaporated.
Here, we use black points to indicate the planets that
have lost their entire envelopes (the bottom left region of each panel).
As indicated, when a planet becomes a bare rocky core,
it settles to the bottom of the $a$-$R$ plane and detaches
from the planets that retain at least a portion of their initial envelopes;
this settling leads to the formation of an empty region that runs diagonally downward
in the $a$-$R$ plane between 0.06 and 0.5 AU.
This empty diagonal belt separates the bare rocky cores from the
planets that retain an envelope,
which, henceforth, we refer to as an ``evaporation valley".
The evaporation valley occurs because the radius of
a purely rocky planet is substantially smaller than that of a planet
with both a core and gaseous envelope.
For example, an envelope at only 0.1\% of the planetary mass
can dramatically enhance the planetary radius (Figure \ref{fig:mrRg}).
Additionally, the last 0.1\% of the envelope is lost on a
short timescale, $\sim 10^{5}$ yr; therefore, we are unlikely to detect
a planet when it lies in the evaporation valley.
Thus, an empty valley appears in the $a$-$R$ plane after
many low-mass planets have become bare, rocky cores.
At 0.02 Gyr, the valley only appears within $\sim$ 0.1 AU,
and rapidly extends to $\sim$ 0.3 AU at 0.11 Gyr.
Clearly, the empty valley is only expected if all
low-mass planets begin with significant H/He envelopes, which
is unlikely. We discuss this topic further in $\S$ \ref{sect:dis41}.
The separated distribution of low-mass planets within 0.5 AU
suggests a bimodal size distribution of the close-in low-mass planets,
which was first theoretically predicted by \citet{Owen2013}.
A similar but weaker structure was also demonstrated by \citet{Lopez2013}.
We show the size distributions for our
synthetic planet populations in $\S$ \ref{sect:comparison}.
\subsection{Synthetic Planets: B, Influence of Parameters}
\label{sect:parameters}
To determine how our results depend on the evaporation description,
we simulate the evolution of the same nominal planet population but using different evaporation models.
Table \ref{tab:simulist} lists the details for these simulations.
Simulation XE is our reference.
Simulation NoEV is planetary evolution without evaporation.
Simulation SatE includes only EUV-driven evaporation and assumes that
the stellar EUV emissions are saturated during the first 100 Myr of planetary evolution.
Simulation XE2 uses the nominal evaporation model, but the heating efficiency
in both the X-ray ($\epsilon = 0.2$) and energy-limited EUV regime
($\epsilon = 0.12$) are twice as high compared with the XE simulation.
Simulation L12 uses the energy-limited model from \citet{Lopez2012}, which
uses the total flux between 1 and 1200 ${\rm \AA}$ as the incoming energy
and 0.1 as the heating efficiency.
Simulation B04 uses the energy-limited model from \citet{Baraffe2004},
which includes a different temporal evolution of the XUV emission from
a sun-like star and adopts a 100\% heating efficiency.
All the simulations are evolved for 10 Gyr.
\begin{figure*}
\includegraphics[width=18.0cm]{fig11}
\caption{
The mass-radius relationship of the planets between
0.06 and 1 AU at 5 Gyr for different simulations (Table \ref{tab:simulist}).
The color of each point shows how much of the initial envelope was lost.
The black points are planets that have lost all their initial envelopes.
All the planets have an identical, solar composition opacity,
a pure H/He envelope, and no bloating mechanisms are included.
This means that the vertical width of the mass-radius relation is
likely underestimated.
The mass of the host star is 1 $M_{\odot}$.
All rocky planetary cores have a terrestrial composition (2:1 silicate-iron ratio).
}
\label{fig:6mr}
\end{figure*}
Because we are more interested in the potentially observable influence
of these different evaporation models, in Figure
\ref{fig:6arhoMdot}, we plot the mass-loss rates of the six simulations at 5
Gyr in the semi-major axis versus planetary mass $\times$ mean density
($M_{\rm p}^{2}/R_{\rm p}^{3}$) plane. At 5 Gyr, only
Simulation B04 leads to planets with evaporation rates
greater than $10^{-8}$ $M_{\oplus}$ yr$^{-1}$. Simulation B04 is
also the only simulation in which even a Jovian planet can evaporate
to a rocky core. At 110 Myr, the largest mass-loss rates of the gas
giants within 0.1 AU for Simulation B04 are $\sim$ $10^{-6}$
$M_{\oplus}$ yr$^{-1}$ (not shown in the plot), which indicates that these
planets lose at least 100 $M_{\oplus}$ in the first 100 Myr
of planetary evolution because the mass-loss rate of a planet decreases with time. Thus, many planets in Simulation B04, including certain close-in Jovian
planets, eventually lose their entire envelope. The mass-loss
rates for the other four simulations are significantly smaller; the
typical rate for a planet within 0.1 AU at 5 Gyr is $\sim$
$10^{-10}$ $M_{\oplus}$ yr$^{-1}$ and is $\sim$
$10^{-8}$ $M_{\oplus}$ yr$^{-1}$ at 110 Myr (not shown in the plot).
The evaporation timescale $M_{\rm p}/\dot{M}$ is
proportional to $M_{\rm p}^{2}/(R_{\rm p}^{3}F)$ ($F$ is the
incoming flux) in an energy-limited regime; thus, an evaporation threshold is
expected in the semi-major axis versus planetary mass $\times$ the mean
density ($M_{\rm p}\cdot\bar{\rho}$) plane
\citep{Jackson2012,Lopez2012,Owen2013}. Low-mass, low-density planets
below this threshold at the beginning of planetary
evolution will be evaporated to bare, rocky cores at high mean
densities, and eventually, no planets with H/He are below the
threshold, as demonstrated by the solid lines and black dots in Figure
\ref{fig:6arhoMdot}.
A similar threshold is also apparent in
the NoEV simulation, which does not include evaporation; however, in this instance, the cut-off is not as sharp as in the evaporation-inclusive simulations, and the threshold lies at a lower value of $M_{\rm
p}\cdot\bar{\rho}$.
This lower limit is an artifact of using a minimal planetary
mass of 1 $M_{\oplus}$, combined with the fact that the envelope
mass for a fixed core mass increases with orbital distance, as
discussed above. Evaporation renders this
threshold clearer and raises it in the $M_{\rm
p}\cdot\bar{\rho}$ plane. With a substantially stronger evaporation model,
the evaporation threshold is so high that it intersects the
bare, rocky cores; hence, the evaporation threshold becomes
blurred, as demonstrated in the B04 simulation. If planets with masses lower
than 1 $M_{\oplus}$ are included in our synthetic population,
then the evaporation threshold can also be blurred because the
low-mass bare rocky cores would extend to lower values of $M_{\rm
p}\cdot\bar{\rho}$ below the threshold. Examples of such low-mass
cores include $Kepler$-10b and CoRoT-7b (see Figure 6 in
\citet{Lopez2012}).
\begin{figure*}
\includegraphics[width=18.0cm]{fig12}
\caption{
The mass-radius relationship of the planets between 0.06 and 1 AU
at 5 Gyr of the three planetary populations
calculated using non-isothermal type I migration rates and
different grain opacities (Table \ref{tab:simulist}).
The color of each point shows how much of the initial envelope was lost.
The black points show the planets that have lost all their envelopes.
All of the planets orbit around a sun-like star.
}
\label{fig:nio}
\end{figure*}
Figure \ref{fig:6aM} shows the $a$-$M$ distributions at 5 Gyr.
With the exception of Simulation B04, the remaining five simulations present
similar $a$-$M$ distributions.
The difference for B04 is the severe depletion of
planets with masses of 30-50 $M_{\oplus}$ within 0.2 AU.
At distances smaller than those modeled here ($a \lesssim$ 0.03 AU),
the observed planet population includes a desert that may be related to evaporation
\citep{Beauge2013,Kurokawa2014}.
In each simulation, the black bottom left corner corresponds to
the planets that have lost their entire envelope.
Although the heating efficiency in XE2 is twice that of XE, the black corner in XE2 differs little from that in XE.
These data indicate that long-term evolution of the entire planet population
is not extremely sensitive to the heating efficiency of the evaporation model,
at least within the framework of the models listed here.
However, if a substantially more violent evaporation model is used, as for B04,
the $a$-$M$ distribution of the entire planet population noticeably changes.
Figure \ref{fig:6aR} shows the $a$-$R$ distributions of
the six simulations at 10 Gyr.
The color of each point indicates the ratio of
the envelope mass to the core mass, $f_{\rm env/core}$.
Here, all of the simulations that include evaporation show a
distinct feature compared with NoEV:
an evaporation valley in the radius distribution at $\sim$ 2 $R_{\oplus}$.
The $f_{\rm env/core}$ of each planet does not change in the NoEV simulation;
thus, based on the colors in the NoEV snapshot,
the initial $f_{\rm env/core}$ of a planet generally scales with its core mass.
This finding is expected based on the long Kelvin-Helmholtz timescales for gas accretion of low-mass cores.
For example, only the planets within 0.1 AU with a core larger than 10 $M_{\oplus}$
can have a $f_{\rm env/core}$ that exceeds 80\%;
low-mass planets with a core smaller than 2 $M_{\oplus}$ typically
have a $f_{\rm env/core}$ that is less than 10\%.
Thus, most low-mass planets with cores at
1-3 $M_{\oplus}$ can be quickly evaporated to bare, rocky cores
after evolution has begun (even in simulation SatE,
which includes the weakest evaporation model).
However, this clear valley only occurs if all these low-mass planets
begin with a primordial H/He envelope, which is
unlikely in reality.
In the actual formation process, some low-mass planets may reach their final mass only after the
dissipation of the gaseous nebula,
leading to planets without significant primordial H/He envelopes.
This is in contrast with
our synthetic population where no growth via giant impact
is included after the protoplanetary disk has dissipated.
When the low-mass planets become bare, rocky cores,
they settle to the bottom of the $a$-$R$ panel and are
separated from the planets that retain some H/He.
The evaporation valley becomes a large void region for B04,
for which the mass-loss rates are so high that most planets within 0.2 AU have evaporated to bare cores,
including certain gas giants with $M_{\rm p} \lesssim 400 M_{\oplus}$.
Such strong planet depletion between $\sim$ 2 and $\sim$ 10 $R_{\oplus}$
is not present in the observed data (including $Kepler$ candidates), which
implies that a 100\%-efficient, energy-limited evaporation model
is incompatible with the observed radius distribution of the extrasolar planets.
\subsection{Mass-Radius Relationship of Close-in Planets}
\label{sect:mr}
Figure \ref{fig:6mr} compares the mass-radius relationship
of the planets between 0.06 and 1 AU at 5 Gyr
for the six simulations using different evaporation models.
The planetary radii plotted in the figure are
at the optical depth $\tau = 2/3$.
Compared with the old grey model \citep{Mordasini2012a}, the semi-grey model used
for the atmosphere increases the radius of close-in planets
due to stellar irradiation \citep{Guillot2010}.
This effect is demonstrated by the low-mass
planets at small semi-major axes,
such as planets with masses of 1-2 $M_{\oplus}$ but
radii of 3-5 $R_{\oplus}$ in the simulation NoEV, which
does not include an evaporation model.
These low-mass planets have low densities.
For example, the mean density of a 1.5 $M_{\oplus}$,
4 $R_{\oplus}$ planet is only $\sim$ 0.13 g ${\rm\,cm}^{-3}$.
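The quoted mean density is easy to verify; the only inputs are Earth's mass and radius:

```python
import math

M_EARTH = 5.972e27  # g
R_EARTH = 6.371e8   # cm

# Mean density of a 1.5 M_earth, 4 R_earth planet, as quoted above.
m = 1.5 * M_EARTH
r = 4.0 * R_EARTH
rho = m / (4.0 / 3.0 * math.pi * r**3)
print(f"{rho:.2f} g/cm^3")  # -> 0.13 g/cm^3
```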
The evaporation valley in Figure \ref{fig:6aR} and evaporation
threshold in Figure \ref{fig:6arhoMdot} can also be observed in the
mass-radius relationships in Figure \ref{fig:6mr}. The black bar at
the bottom of each panel in the five simulations that included evaporation
corresponds to the rocky cores of the planets that
have lost their entire envelopes.
Note that with the migration model used here,
all close-in low-mass planets that become bare cores
have only accreted inside of the iceline,
giving them a rocky interior.
This could be different for higher migration rates,
or if several planets form concurrently in one disk
\citep{Alibert2013}.
In the current model, we assume that all
rocky cores have an identical composition (2:1 silicate:iron ratio,
as in Earth). In reality, this ratio (and the composition of
refractory elements) depends on the stellar composition and a
planet's formation history, such as large impacts. The black bar
creates a gap at $\sim$ 2 $R_{\oplus}$, which separates the bare
cores from the planets that retain at least a portion of their
initial envelopes. This feature clearly corresponds to
the evaporation valley in Figure \ref{fig:6aR}. The length of this
black bar is related to the efficiency of the evaporation mechanism. Simulation
SatE includes the lowest mass-loss rates; consequently, it produces the
shortest black bar. Simulation B04 produces the longest black bar,
extending to $\sim 20 M_{\oplus}$, and certain black points are
massive cores from stripped gas giants. In Simulation B04,
no planets with $M_{\rm p} \lesssim 5 M_{\oplus}$ within 1 AU maintain their primordial H/He.
Another difference between NoEV and
the other five simulations is the disappearance of low-mass, very
low-density planets. This result is similar to the evaporation threshold in
Figure \ref{fig:6arhoMdot}. When evaporation is included in
planetary evolution, the bloated envelopes of close-in, low-mass
planets are rapidly removed. Thus, an upper
threshold appears in the bottom left corner of the mass-radius distribution
(for planets within 1 AU). Planets that are initially above this
threshold will lose at least a portion of their initial envelopes until
they are sufficiently dense that they lie below the threshold.
This threshold is also related to the efficiency of the evaporation mechanism;
in Simulation B04, the threshold occurs at larger planetary
masses than in Simulation SatE.
\subsection{Synthetic Planets: Non-isothermal Migration and Different Grain Opacities}
\label{sect:nonnorminal}
Various evaporation models have been applied to the nominal
planet population, which is calculated using the isothermal type I migration rate
of \citet{Tanaka2002} reduced by a factor of 0.1.
This is an artificial factor that prevents most synthetic planets
from falling into the star \citep{Benz2014}.
Recent studies have shown that, depending on the temperature profile of the disc,
type I migration can also induce outward migration \citep[e.g.,][]{Masset2006,Paardekooper2008,Kley2009,Uribe2011}.
Thus, in principle, the artificial reduction factor may be eliminated,
even if the resulting migration still seems to be too rapidly directed inward,
mainly due to saturation of the corotation torques \citep{Benz2014}.
Here, we apply the nominal evaporation model to three synthetic populations that are all calculated using the full non-isothermal type I migration rate
from \citet{Dittkrist2014}.
The difference between these three populations is the ISM grain opacity
reduction factor, $f_{\rm opa}$, which is used during the formation stage.
For the planet population in Simulation NIOpa003, we use an $f_{\rm opa}$
of 0.003 (the nominal value) during formation;
in Simulation NIOpa0, the $f_{\rm opa}$ equals 0 (no grain opacity),
and in Simulation NIOpa1, the $f_{\rm opa}$ equals 1 (full ISM grain opacity).
The details are in Table \ref{tab:simulist} and \citet{Mordasini2014a}.
A lower grain opacity during the formation phase yields a higher envelope mass for
a given core mass because the liberated potential energy is
radiated away more efficiently,
allowing the envelope to contract faster.
This means that, at low $f_{\rm opa}$, planets with lower mean densities
are formed, which is indicated by
a larger maximum radius for a given mass in the mass-radius diagram.
In contrast, the atmospheric opacity during planetary evolution is identical
for all three populations and is given via the opacity of a grain-free gas
with a solar composition \citep{Freedman2008}.
The features that are related to evaporation, such as the black bottom left corner
in the $a$-$M$ diagram, the evaporation valley in the $a$-$R$ distribution,
and the black bar, which indicates purely rocky cores in the mass-radius plot,
have been detailed above
and are similar among these three populations.
Here, we focus on the features related to the effect of $f_{\rm opa}$.
Figure \ref{fig:nio} shows the mass-radius relationships of
the planets within 1 AU for the NIOpa003, NIOpa0, and NIOpa1 simulations.
One of the effects of the different grain opacities during formation
is the number of giant planets \citep{Mordasini2014a}.
As shown in the figure, the NIOpa0 group includes the most gas giants, while the NIOpa1 group shows the opposite result.
This effect is more clearly demonstrated by the histogram of
planet size distributions in Figure \ref{fig:9histo} in the following subsection.
Another effect is that, in Simulation NIOpa0, the largest overall radius
(at a mass of $\sim$ 4 $M_{\rm Jupiter}$) is slightly larger than for the other two simulations.
At $f_{\rm opa} = 0$, even very low-mass cores can become
supercritical and trigger giant planet formation \citep{Movshovitz2010,Mordasini2014a}.
These giant planets with low-mass cores
yield large planetary radii.
A third effect, as extensively discussed in \citet{Mordasini2014a}, is
the effect of $f_{\rm opa}$ on the mass-radius relationship of low-mass planets.
With a low $f_{\rm opa}$, even cores with only a few $M_{\oplus}$ can accrete
significant envelope masses, thereby producing large radii.
This result is clearly visible in Figure \ref{fig:nio}.
For example, at 10 $M_{\oplus}$, the maximum radii are approximately 3.5, 5.5,
and 7.5 $R_{\oplus}$ for $f_{\rm opa}$ = 1, 0.003, and 0, respectively.
The effect of the grain opacity during formation on the observable
mass-radius relationship as discussed in \citet{Mordasini2014a} (who neglected evaporation) is also found here,
even if the very low-mass,
low-density planets are removed by evaporation.
\begin{figure*}
\includegraphics[width=18.0cm]{fig13}
\caption{
The normalized planet size distribution of
the XE simulation at 0.1, 1, and 5 Gyr.
The red solid line corresponds to the sub-population
of the planets within 1 AU.
In the 5 Gyr panel, the black dashed line shows the
size distribution of all $Kepler$ candidates,
while the green dotted line shows the size distribution of
$Kepler$ candidates within 0.1 AU.
Note that at radii $\lesssim 2 R_{\oplus}$, the $Kepler$ data are
affected by observational bias; the size distribution of
$Kepler$ candidates corrected for survey completeness shows
a plateau at 1-3 $R_{\oplus}$ \citep{Dong2013,Petigura2013a}.
}
\label{fig:3histo}
\end{figure*}
\begin{figure*}
\includegraphics[width=18.0cm]{fig14}
\caption{
The normalized planet size distributions of all the simulations
at 5 Gyr (Table \ref{tab:simulist}).
In each panel, the red solid line shows the
synthetic population with 0.06 $< a$/AU $<$ 1,
and the black dashed line shows the size distribution of all $Kepler$ candidates.
}
\label{fig:9histo}
\end{figure*}
\subsection{Comparisons with $Kepler$ Candidates}
\label{sect:comparison}
Figure \ref{fig:3histo} shows the temporal evolution of the
normalized size distribution for Simulation XE.
For comparison with the observations, we only show the size distributions
for the planets within 1 AU.
There are three peaks in the planet size distributions,
all of which form at an early evolutionary stage.
The first peak is at $\sim$ 1 $R_{\oplus}$, which corresponds to the bare, rocky cores
of the low-mass planets that have entirely lost their initial envelopes.
The third peak, at 11-12 $R_{\oplus}$ ($\sim 1 R_{\rm Jupiter}$), indicates
the sub-population of Jovian planets and is
a consequence of electron degeneracy in their interiors;
all giant planets more massive than $\sim$ Saturn have
approximately the same radius of $\sim 1 R_{\rm Jupiter}$.
Compared with the actual population, this effect must be overestimated in
our results because all planets evolved with the same opacity,
and bloating mechanisms are not included.
The middle peak at 2-4 $R_{\oplus}$ corresponds to the sub-population of
super-Earths and Neptunian planets that retain an envelope.
\citet{Owen2013} found that evaporation leads to a bimodal distribution
in planetary size with a planet deficit at approximately 2 $R_{\oplus}$.
Our results show a similar bimodal distribution with a minimum at
approximately 2 $R_{\oplus}$; however, in our case, the deficit is substantially more severe, spanning $\sim$ 1.2-2 $R_{\oplus}$.
The normalized size distribution of all $Kepler$ candidates
(released Feb 26, 2012), most of which are within 1 AU,
are also plotted for comparison in the 5 Gyr panel.
Notably, the $Kepler$ data are biased and incomplete for $R \lesssim 2 R_{\oplus}$
because the transit detection efficiency and sensitivity
decrease towards smaller planetary radii and larger semi-major axes.
The planetary occurrence of $Kepler$ candidates with a correction for
observational bias produces a nearly flat distribution at 1-3 $R_{\oplus}$
\citep{Howard2012,Dong2013,Petigura2013a,Petigura2013b}.
Figure \ref{fig:3histo} shows that the wide evaporation valley
in our results is not compatible with the $Kepler$ data.
There are three possible reasons for this discrepancy.
(1) Our evaporation model either overestimates evaporation or is too
deterministic because all stars in our simulation have the same mass and XUV flux as a function of time.
(2) We assume that all low-mass planets begin
with a primordial H/He envelope.
In reality, there is likely a population of
close-in, low-mass planets that formed without H/He via
planet impacts after the dissipation of the nebula
\citep{Terquem2007,Ormel2012}.
(3) The distinct evaporation valley is related to our identical
core composition (2:1 silicate:iron ratio).
As mentioned, the migration model \citep[0.1 times the isothermal migration rate of][]{Tanaka2002}
predicts that all close-in low-mass planets losing their entire envelope have a rocky interior (migration only inside of the iceline).
In reality, the actual core composition
of close-in planets might be highly diverse (i.e., contains some ice).
In the 5 Gyr panel, we also plotted the normalized size distribution
for $Kepler$ candidates within 0.1 AU,
which are planets that are more likely to be eroded by evaporation.
These candidates accumulate at $\sim$ 1-3 $R_{\oplus}$,
while the close-in $Kepler$ candidates
corrected for the observational bias have
a nearly flat distribution at 1-3 $R_{\oplus}$
\citep{Dong2013,Petigura2013a,Petigura2013b}.
Thus, the evaporation valley in our synthetic planet populations
is not observed in the $Kepler$ data.
In a following paper, we show that, by varying the ice fraction of the
planetary cores, the radii of the bare, rocky cores can be substantially larger;
the dip in the evaporation valley is thus eliminated.
This means that important observational constraints for planet formation and migration theory can be
deduced from evolutionary models with atmospheric escape \citep[cf.][]{Lopez2013}.
Figure \ref{fig:9histo} compares the normalized size distribution of
$Kepler$ candidates with our non-nominal simulations.
With the exception of NoEV, which does not include evaporation, and
B04, which has a 100\% heating efficiency in the energy-limited regime,
the remaining seven simulations show a bimodal distribution at small sizes.
As mentioned, an additional peak is at 1 $R_{\rm Jupiter}$ for the gas giants.
Simulation NoEV yields only one peak at 3-4 $R_{\oplus}$; this occurs
because the low-mass planets maintain their envelopes and, hence,
have large planetary radii.
Simulation B04 is the other extreme case, which
yields a single peak at 1-2 $R_{\oplus}$.
Most of the low-mass planets in Simulation B04
evaporated to bare, rocky cores; thus, the number of Neptunian planets
that retain an envelope is too small to produce a second peak.
In the other seven simulations, the inner peak in the
bimodal distribution is at 1.0-1.2 $R_{\oplus}$, which
corresponds to bare, rocky cores with a 2:1 silicate:iron ratio.
For different simulations, the outer peaks in the bimodal distribution
differ slightly in both magnitude and location.
A stronger evaporation model will produce a lower outer peak
and move the position of the peak to smaller planet sizes,
as shown in the top two rows of Figure \ref{fig:9histo}, where
the only difference between these simulations is the evaporation model.
The effect of the grain opacity reduction factor, $f_{\rm opa}$,
is apparent in the bottom row of Figure \ref{fig:9histo}.
Because planets grow faster at smaller $f_{\rm opa}$,
the NIOpa0 group contains the largest number of gas giants.
Consequently, the NIOpa0 group has the highest peak at $\sim$ 12 $R_{\oplus}$; the NIOpa1 group is the opposite.
\begin{figure*}
\includegraphics[width=17.0cm]{fig15}
\caption{
The $a$-$R$ distributions of the XE and GREY simulations at 0.11 and 5 Gyr.
The color of each point shows how much of the initial envelope was lost.
The black points are the planets that have lost all their initial envelopes.
The planetary radii are smaller and the evaporation valley is also narrower
in the GREY simulation,
but the general, population-wide impact of evaporation is
similar in the two simulations.
}
\label{fig:aRcomp}
\end{figure*}
\begin{figure}
\includegraphics[width=8cm]{fig16.eps}\\
\caption{
The size distributions of the planets
within 1 AU in the XE and GREY simulations at 5 Gyr.
The blue dotted line shows the XE simulation.
The red solid line shows the GREY simulation.
The black dashed line shows the size distribution of all $Kepler$ candidates.
}
\label{fig:1histo}
\end{figure}
\section{Discussion}
\label{sect:discussion}
\subsection{The Bimodal Distribution and Evaporation Valley}
\label{sect:dis41}
The bimodal distribution of planet sizes at approximately 2 $R_{\oplus}$
was first found by \citet{Owen2013}, who studied the
hydrodynamic evaporation of a theoretical planet population.
\citet{Lopez2013} observe a diagonal band, on which planets
are relatively rare; the bimodal distribution
near $2 R_{\oplus}$ is less clear in their results.
In our results, the diagonal band of \citet{Lopez2013}
becomes a distinct evaporation valley.
This evaporation valley separates the bare, rocky cores
from the planets that retain an envelope (Figure \ref{fig:6aR}).
The valley is $\sim$ 0.5 $R_{\oplus}$ wide and occurs at different
planet sizes (from $\sim$ 1 to $\sim$ 2.5 $R_{\oplus}$) depending on the semi-major axis.
We find that such empty valleys are closely related to the initial
characteristics of the synthetic planet population. In our synthetic
planet population, all close-in planets begin with a primordial H/He
envelope, and their rocky cores have a 2:1 silicate:iron ratio. Due to the long Kelvin-Helmholtz timescales
of low-mass cores, low-mass planets can only have a small initial
$f_{\rm env/core}$, which is proportional to the planetary core
mass. As shown in the NoEV simulation snapshot in Figure
\ref{fig:6aR}, low-mass planets within 0.5 AU with a radius $<$ 4
$R_{\oplus}$ typically have an $f_{\rm env/core}$ $<$ 10\%. Only
a planet with $>$ 6 $R_{\oplus}$ can have a $f_{\rm env/core}$
$\geq$ 80\%. Thus, low-mass planets are vulnerable to evaporation
to bare rocky cores due to their small envelopes and small
gravitational binding energies; these planets form a peak
at about 1 $R_{\oplus}$ in the bimodal radius distribution of low-mass planets (trimodal, if the giants are included).
Most close-in Neptunian planets can keep at least a portion of their
initial envelopes at the end of evolution; they form the second peak
at about 2-3 $R_{\oplus}$ in the bimodal distribution. In a forthcoming
paper, we show that the dip of the evaporation valley can be
removed by varying the ice fractions of planetary cores: the
sizes of bare low-mass icy cores can be $\sim$ 2 $R_{\oplus}$.
\subsection{Insensitivity to the Boundary Conditions}
Although the semi-grey model is a significant improvement
over the grey model used in \citet{Mordasini2012a},
it does not include the effects of
non-grey thermal opacities \citep{Parmentier2014m,Parmentier2013s}.
Neglecting non-grey effects in the atmospheric boundary condition
may lead to inaccurate planetary radii and, hence, mass-loss rates.
Fortunately, we find that the final radius distribution of the entire
planet population is not very sensitive to the outer boundary condition.
The exact planetary radius and mass-loss rate are clearly important
for individual planets but not so much for the overall statistical impact
on the planet population.
To clarify this, we perform a comparative numerical experiment,
the GREY simulation, using our nominal planet population but
with the previous grey atmospheric model from \citet{Mordasini2012a}.
The grey model assumes that stellar irradiation is absorbed
in the upper atmosphere, where the optical depth equals $2/3$.
For the GREY simulation, we also use the nominal evaporation model (i.e.,
both X-ray- and EUV-driven evaporation are included).
Figure \ref{fig:aRcomp} compares the $a$-$R$ distributions
of the GREY and XE simulations.
The planetary radii are smaller for the GREY model, as expected \citep{Guillot2010},
but the global effects of evaporation in general
and for the evaporation valley in particular
remain visible in a similar manner in the $a$-$R$ space.
The GREY simulation produces a narrower gap in the radius distribution
because the planetary envelopes from this numerical experiment are less bloated than
in simulations using the semi-grey model.
Figure \ref{fig:1histo} compares the planet size distributions
from the GREY and XE simulations at 5 Gyr.
The GREY simulation clearly shows a bimodal distribution at approximately 2 $R_{\oplus}$.
The difference is that, compared with XE,
the peak of the bare, rocky cores in the GREY simulation is less prominent,
and the outer peak in the bimodal distribution occurs at a smaller size.
\subsection{Simplifications in the Evaporation Models}
An accurate method to calculate the mass-loss rate due to
hydrodynamic escape is to solve the mass, energy,
and momentum conservation equations \citep{Tian2005,Penz2008,Murray-Clay2009,Owen2012,Lammer2013}.
In our work, we instead used approximate
mass-loss formulae in the energy-limited and radiation-recombination-limited regimes, as well as criteria for determining whether the outflow is EUV- or
X-ray-driven, adopted from different authors \citep{Murray-Clay2009,Jackson2012,Owen2012}.
Thus, our evaporation model includes significant simplifications.
In reality, the evaporation efficiency, which characterizes
how much of the heating energy is converted into $P\,{\rm d}V$ work,
depends on the characteristics of a planet
and changes with time \citep{Yelle2004,Tian2005,Owen2012}.
Moreover, the direct transition from an X-ray regime to
an EUV-driven regime, as used in our model, can produce
a discontinuous change in the mass-loss rate,
as shown in Figure \ref{fig:single}.
Due to these limitations, we performed several groups of
population syntheses using different evaporation models, which allowed us to obtain a likely range of mass-loss rates.
Our results show that the evolution and final structure of
a single planet may be less accurate due to the evaporation model simplifications but that the statistical information
for the entire planet population was not greatly influenced.
For example, the XE2 simulation yields results
that are similar to those of the XE simulation even though the heating efficiencies
in the XE2 simulation are twice those of XE.
The final mass and radius distributions are not
sensitive to heating efficiency for the following reasons.
At a fixed incident flux, the evaporation threshold scales as
$M_{\rm env}M_{\rm p}/(\epsilon R_{\rm p}^{3})$.
As shown in the NoEV panel in Figure \ref{fig:6aR},
the initial $M_{\rm env}$ of a planet increases quickly
with the planetary mass.
For example, the initial $f_{\rm env/core}$ of a 4 $M_{\oplus}$
planet is typically $\sim$ 10\%, whereas for a
2 $M_{\oplus}$ planet, $\sim$ 1\% is typical.
For $R_{\rm p}$, the NoEV panel in Figure \ref{fig:6mr}
shows that the planetary radius increases only weakly
with planetary mass
for low-mass planets with $M_{\rm p} < 10 M_{\oplus}$.
Thus, planetary mass plays a dominant role in determining the evaporation threshold.
Moreover, atmospheric evaporation is only intense in the
early stage for a short time (with a timescale $\sim$ 100 Myr).
Increasing the heating efficiency in an evaporation model therefore
only slowly raises the critical planetary mass below which a planet will
be evaporated to a bare rocky core.
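This weak dependence can be made explicit with a toy power law. Taking the envelope fractions quoted above ($\sim$1\% at 2 $M_{\oplus}$, $\sim$10\% at 4 $M_{\oplus}$) as an assumed scaling $M_{\rm env} \propto M_{\rm p}^{k}$ and treating $R_{\rm p}$ as roughly constant, the threshold condition $M_{\rm env}M_{\rm p}/(\epsilon R_{\rm p}^{3}) = {\rm const}$ gives a critical mass scaling as $\epsilon^{1/(k+1)}$; the fit below is an illustrative assumption, not part of the population-synthesis model:

```python
import math

# Assumed fit to the numbers in the text: f_env grows from ~1% at 2 M_earth
# to ~10% at 4 M_earth, so M_env ~ Mp^k with k = 1 + log2(10) ~ 4.32.
k = 1.0 + math.log2(10.0)

def critical_mass_factor(eps_ratio):
    """Factor by which the critical mass grows when eps grows by eps_ratio,
    assuming M_env ~ Mp^k and a roughly constant Rp."""
    return eps_ratio ** (1.0 / (k + 1.0))

# Doubling the heating efficiency raises the critical mass by only ~14%.
print(f"{critical_mass_factor(2.0):.2f}")  # -> 1.14
```

The steep envelope-mass scaling absorbs most of the change in $\epsilon$, consistent with the small differences between the XE and XE2 simulations.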
As shown in Figure \ref{fig:9histo}, the final planet size distributions
from the XE and XE2 simulations only show limited differences.
One exception is the B04 simulation, where even Jovian planets
lose a significant portion of their envelopes;
in this case, the bimodal feature at the small planet size disappears
because many Neptunian or Jovian planets evaporate
to bare, rocky cores. These massive cores finally
fill the deficit at $\sim$ 2 $R_{\oplus}$.
Another simplification in our models is that we
apply hydrodynamic evaporation to all planets,
even for those at large semi-major axes.
In fact, the atmospheric escape of planets at large distances
should be treated in the Jeans escape regime.
Whether an escape flow is in the hydrodynamic or Jeans regime
can be assessed by comparing the mean free path of gas molecules
with the local scale height of the flow
(\citet{Johnson2013} demonstrate how to implement this criterion
in the energy-limited model).
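A minimal sketch of this criterion compares the local mean free path with the scale height (a Knudsen-number test); the cross-section and atmospheric values below are order-of-magnitude assumptions for illustration only:

```python
K_B = 1.381e-16    # Boltzmann constant, erg/K
M_H = 1.673e-24    # hydrogen mass, g
SIGMA = 1e-15      # assumed H collision cross-section, cm^2

def knudsen(n, temp, mu, g):
    """Mean free path over scale height; << 1 means hydrodynamic flow."""
    mean_free_path = 1.0 / (n * SIGMA)          # cm
    scale_height = K_B * temp / (mu * M_H * g)  # cm
    return mean_free_path / scale_height

# Dense, hot upper atmosphere of a close-in planet (illustrative values):
kn = knudsen(n=1e12, temp=3000.0, mu=1.0, g=1000.0)
print(f"Kn = {kn:.1e}")  # Kn << 1: hydrodynamic regime
```

At large orbital distances, the density in the escaping layer drops and the same ratio rises towards and above unity, which is where a Jeans treatment becomes appropriate.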
\citet{Owen2012} show that, for close-in planets with semi-major axes
smaller than 0.1 AU, the dominant mass-loss process is hydrodynamic
evaporation. They found that at very small distances, such as $<$ 0.05 AU,
even gas giants with a few Jupiter masses can lose mass hydrodynamically.
They also show that massive planets with large densities are too
gravitationally bound for hydrodynamic outflow.
For example, a planet that is more massive than 1 $M_{\rm Jupiter}$
with a density greater than 1 g\,cm$^{-3}$ can no longer
undergo hydrodynamic evaporation at $\sim$ 1 AU.
Thus, our models overestimate the mass-loss rates
of the planets at large distances by assuming
that the planets undergo hydrodynamic evaporation.
However, this assumption does not affect the main statistical results for
the entire planet population because only
low-mass planets with tenuous envelopes can be evaporated
to bare cores at large distances due to the overestimated mass-loss rates.
We also did not include non-thermal ion escape in our
models (e.g., planetary ions picked up by the stellar wind).
The influence of non-thermal escape on the statistical results is
also weak because the mass-loss rate of ion pick-up escape is typically
several times smaller than for thermal atmospheric escape
\citep{Kislyakova2013}.
\subsection{Dependence on Stellar Type}
Most $Kepler$ candidates orbit a host star
with a mass of 0.8-1.1 $M_{\odot}$.
During the formation and evolution of our synthetic populations,
all planets orbit around a sun-like star with a mass of exactly 1 $M_{\odot}$.
The key effect of different stellar types is that they
lead to different disk properties, which finally determine the
characteristics of the synthetic planet populations \citep{Ida2005,Alibert2011}.
Massive F/G-type stars are found to yield fewer Neptunian planets
but more massive Jovian planets.
K/M-dwarfs are predicted to yield more Neptunian planets
but fewer Jovian planets \citep{Ida2005,Alibert2011}.
However, for the low-mass planets,
which are sensitive to evaporation
and form a bimodal size distribution,
the fraction does not strongly depend on the mass of the central star \citep{Ida2005,Alibert2011}.
Thus, at least for 0.8-1.1 $M_{\odot}$, comparisons between $Kepler$ candidates
and our synthetic planet populations are feasible.
The stars in our simulations all have the same $L_{\rm EUV}$ and $L_{\rm X}$,
which follow the temporal evolution of the X-ray and EUV emissions of
a sun-like star.
In reality,
the EUV flux (360-920 ${\rm \AA}$) of a sun-like star
is not directly constrained by observations at any age, with the exception of the present Sun.
The temporal evolution of the EUV flux adopted in this work is
from \citet{Ribas2005}, which was derived by scaling the EUV flux with
the temporal evolution of the stellar flux in other wavelength ranges.
The accuracy of this method is approximately 10\% - 20\% \citep{Ribas2005}.
Considering that the EUV evaporation contributes typically less than 10\% of
the total mass-loss of a planet \citep{Owen2013}, the low accuracy of
the EUV fluxes at different times does not affect the statistical results.
On the other hand, the stellar X-ray emissions are highly diverse \citep{Guedel2004}.
The ratio of X-rays to bolometric luminosity during the early saturated phase
decreases from $10^{-3.1}$ for late K dwarfs to $10^{-4.3}$
for early F-type stars (0.29 $\leq$ $(B - V)_{0} <$ 1.4) \citep{Jackson2012}.
Thus, our model is too deterministic because we use a fixed evolution track of
stellar X-ray emission.
M dwarfs can have strong chromospheric activity
and are very bright in hard radiation, which
can strongly erode planetary envelopes.
However, planets around K/M-dwarfs also have a less effective
Roche lobe effect \citep{Lammer2009,PenzMicela2008}.
For planets surrounding F-, G-, K-, and M-type stars,
\citet{Lammer2009} show that evaporation can only remove
a limited portion of the initial gas giant envelopes,
and only Neptunian and terrestrial planets are significantly affected by evaporation.
\section{Summary}
\label{sect:summary}
We combine models of hydrodynamic atmospheric escape with planet population syntheses that include both planet formation and evolution. Our global planet formation model is constructed based on the core accretion paradigm. We find that atmospheric escape adds characteristic features to the radius distribution of the synthetic planetary populations. The most interesting imprints are an ``evaporation valley'' in the radius-distance diagram and a bimodal planet size distribution \citep[cf.][]{Owen2013}. These features are consequences of evaporation, but their properties are also related to the characteristics of the initial planet population and, thus, the planet formation process.
In our synthetic populations, the initial envelope fraction of low-mass planets with sizes of less than 4 $R_{\oplus}$ is normally $<$ 10\% (especially for planets at small distances). The initial mass fraction of planetary envelopes also scales with the planetary core mass and is typically $<$ 5\% for Earth-size planets with a 1 $M_{\oplus}$ core. Due to their low gravity, such very low-mass planets are sensitive to evaporation and are evaporated to bare cores with radii of about 1 $R_{\oplus}$ at sufficiently small orbital distances. However, planets with larger core masses also have larger envelopes, as is typical for the core accretion paradigm. At the end of evolution, such more massive super-Earths or Neptunian planets retain at least a portion of their H/He envelopes. They will have substantially larger radii because even 0.1\% by mass of H/He significantly increases the radius. The threshold core mass where a complete loss of the initial H/He envelope occurs decreases with orbital distance.
As a result, an ``evaporation valley'' running diagonally downward in the orbital distance - planetary radius plane appears. It separates bare cores from planets retaining some primordial H/He. As the process of losing the last, e.g., 0.1\% of H/He occurs on a short timescale, the valley is sparsely populated with planets at any given moment at late times (e.g., 5 Gyrs). At this time, the evaporation valley runs diagonally downward from about 2 $R_{\oplus}$ at 0.06 AU to 1 $R_{\oplus}$ at 0.5 AU.
Corresponding to this valley, the one-dimensional radius distribution of close-in low-mass planets is bimodal, with a local maximum at about 1 $R_{\oplus}$, a local minimum at about 1.5 $R_{\oplus}$ and another maximum at 2-3 $R_{\oplus}$. The lower maximum in the bimodal distribution corresponds to the bare cores of planets that have lost their entire initial H/He envelope. The minimum corresponds to the ``evaporation valley''. The second maximum corresponds to low-mass planets that have kept some primordial H/He.
No such very prominent features (deep diagonal evaporation valley and strong depletion at about 1.5 $R_{\oplus}$ in the radius histogram) can be seen in the $Kepler$ data for $R_{\rm p} < 2 R_{\oplus}$, although a small local minimum may also be present in the observational data \citep{Owen2013,Petigura2013a}. This difference could be due to the following reasons:
-First, our evaporation model might be too deterministic (identical mean XUV flux as a function of time for all stars) and/or overestimate the impact of evaporation.
-Second, in our formation model, all low-mass planets start with a primordial H/He envelope and reach their final mass during the presence of the gaseous disk. In reality, at least some terrestrial planets will reach their final mass only after the dissipation of the gaseous nebula through a series of giant impacts, as was likely the case for the Earth itself. This population of ``late'' planets is not expected to start with a significant H/He envelope (a few percent), and therefore will not exhibit the imprints of evaporation (no significant radius evolution in time). Such planets would fill in the valley. For such planets, the transition from solid planets to planets with H/He should likely not be a clear function of the semi-major axis, in contrast to the case that the transition is due to evaporation as studied here. This is an important constraint for formation models.
-Third, the composition of the bare cores might be different from that in the model here: in our synthetic populations, the sizes of the bare evaporated cores range between 1.0 and 1.2 $R_{\oplus}$ because all of the low-mass planets within 1 AU have rocky cores with a zero ice fraction, even though orbital migration is included in the formation model: they only migrate within the ice line. If the migration of individual planets is more efficient than assumed here, or if more massive planets push lower-mass planets closer in due to capture in mean-motion resonance in multiple systems (neither included here), planets that have formed beyond the ice line may migrate to close-in orbits during the formation stage. Such planets will accrete high amounts of ice during formation and, hence, have a large, icy core. In a forthcoming paper, we use a synthetic planet population with icy planetary cores to show that the sizes of the bare icy cores will be significantly larger and that these bare icy cores can fill in the minimum at $\sim 1.5 R_{\oplus}$. Thus, if close-in planets have both rocky and icy (or mixed) cores, this can lead to an approximately flat radius distribution for $R_{\rm p} < 2-3 R_{\oplus}$, as observed in the bias-corrected radius distribution of the $Kepler$ sample \citep[e.g.,][]{Fressin2013,Petigura2013a}. Hence, a diversity in the core composition combined with the consequences of evaporation may provide an explanation for the observed radius plateau up to 2-3 $R_{\oplus}$ and the decrease at larger radii. Clearly, the ice content of close-in low-mass planets is another fundamental constraint for formation (migration) models.
The specific shape and location of the second maximum at 2-3 $R_{\oplus}$ in the bimodal distribution (planets that have retained primordial H/He) is related to envelope evaporation. Stronger evaporation produces a lower outer maximum and moves the peak to smaller radii. Most of our evaporation models lead to a similar outer peak, which is approximately consistent with the size distribution of the $Kepler$ candidates in the radius range of about 2-8 $R_{\oplus}$. However, in two extreme cases, the NoEV simulation without any evaporation and the B04 simulation with very strong evaporation (100\% heating efficiency), the final planet size distribution shows clear differences compared with the $Kepler$ data in this range. This indicates that evaporation is indeed important in shaping the characteristics of close-in, low-mass planets \citep{Lopez2012,Owen2013}. Evaporation should therefore be taken into account when comparing observations with the predictions of formation models for such planets. Other major findings are as follows: We find that in contrast to the radius distribution, the mass distribution of the entire planet population is barely affected by evaporation at $a >$ 0.06 AU: low-mass planets may lose all H/He, but its initial mass fraction is low anyway. Giant planets in contrast do not lose much H/He in our nominal evaporation model - their strong gravity protects them.
We demonstrate the importance of the core mass using a parameter study that is similar to the study described in \citet{Lopez2012}. We confirm the evaporation threshold in the $M_{\rm p}^{2}/R_{\rm p}^{3}$ versus distance$^{2}$ plane \citep{Jackson2012,Lopez2012,Owen2013}. Furthermore, we find that this evaporation threshold is also apparent in the mass-radius relationship of close-in planets. For the simulations that include evaporation, the mass-radius relationship clearly contains a threshold, and the very low-mass, very low-density planets that are initially above the threshold will lose at least a portion of their envelopes until they are below this threshold. Finally, the impact of the grain opacity in the outer radiative zone of protoplanets during the formation stage on the mass-radius relationship at 5 Gyrs remains clearly visible even when evaporation is included.
Our study shows that several important observational constraints can be inferred from the comparison of observational results with theoretical formation and evolution models that include atmospheric escape. This is of high interest in view of several future high-precision photometric missions like TESS \citep{Ricker2010} or CHEOPS \citep{Broeg2013}. Our results in particular predict a temporal evolution of the planetary population in terms of the radii (or composition). In principle, such a temporal evolution could be observed directly with PLATO 2.0 \citep{Rauer2013}, which determines the ages of the host stars. This would open a new perspective to understand the nature of close-in planets.
\acknowledgements We thank Dr. Jonathan Fortney for the atmospheric
structures used for the comparison with the semi-grey model. We
also thank Dr. Helmut Lammer, Kai-Martin Dittkrist, and
Gabriel-Dominique Marleau for helpful discussions. S. Jin
acknowledges the financial support of the Chinese Academy of
Sciences and the Max-Planck-Gesellschaft. This work was also
supported by the National Natural Science Foundation of China
(Grants No. 11273068, 11473073), the Natural Science Foundation of Jiangsu
Province (Grant No. BK20141509), the innovative and
interdisciplinary program by CAS (Grant No. KJZD-EW-Z001), and the
Foundation of Minor Planets of the Purple Mountain Observatory. J.
H. Ji acknowledges the Strategic Priority Research Program - The
Emergence of Cosmological Structures of the Chinese Academy of
Sciences (Grant No. XDB09000000). C. Mordasini thanks the
Max-Planck-Gesellschaft for the Reimar-L\"ust Fellowship.
We thank the referee for comments that helped to improve the manuscript.
\section{Apparent equations of mass and momentum conservation}
\label{sec:ApparentProperties}
We begin by recalling the equations of mass and momentum conservation for the anisotropic material $\Omega$:
\begin{subequations}
\begin{align}
& \textrm{i}\omega P\big/ B= \nabla\cdot \mathbf{V} ,
\label{eq:GoverningEqnsA}\\
& \textrm{i}\omega \mathbf{V}= \boldsymbol{H} \nabla P ,
\label{eq:GoverningEqnsB}%
\end{align}
\label{eq:GoverningEqns}%
\end{subequations}
where the symmetric tensor $\boldsymbol{H}=\boldsymbol{\rho}^{-1}$ is the inverse of the density tensor.
We denote by $\boldsymbol{H}_\Gamma = [H_{ij}]$ with $(i,j)\in\{1,2\}^2$ the restriction of $\boldsymbol{H}$ to the in-plane directions, and by
$\textbf{q}= q_1\mathbf{e}_1+q_2\mathbf{e}_2$ the vector with components:
\begin{equation}
q_1 = H_{13}/H_{33},
\qquad\text{and}\qquad
q_2 = H_{23}/H_{33}.
\label{eq:ApparentParameters}
\end{equation}
Furthermore, the pressure and velocity fields are taken in the following form prescribed by the incident plane wave, the linearity of the system, and the homogeneity of the layer:
\begin{equation}
P=\widehat{P}(x_3)e^{\textrm{i}k_1 x_1+\textrm{i}k_2 x_2} \ \ \text{and} \ \ \mathbf{V}=\widehat{\mathbf{V}}(x_3) e^{\textrm{i}k_1 x_1+\textrm{i}k_2 x_2},
\label{eq:FieldsInLayer}
\end{equation}
where $\widehat{P}(x_3)$ and $\widehat{\mathbf{V}}(x_3) $ still depend on the coordinate $x_3$.
The in-plane wave vector is defined by $\textbf{k}_\Gamma = k_1 \mathbf{e}_1 + k_2 \mathbf{e}_2 $. The in-plane and normal components of the velocity $\widehat{\mathbf{V}}(x_3) $ are also identified as
\begin{equation}
\widehat{\mathbf{V}} = \widehat{\mathbf{V}}_\Gamma + \widehat{V}_3\mathbf{e}_3,
\label{eq:VelocityComponents}
\end{equation}
where $\widehat{\mathbf{V}}_\Gamma = \widehat{V}_1\mathbf{e}_1 + \widehat{V}_2\mathbf{e}_2,$ and $\widehat{V}_j = \widehat{\mathbf{V}} \cdot \mathbf{e}_j$ for $j\in\{1,2,3\}$.
Using the fact that differentiation of the fields $P$ and $\mathbf{V}$ in Eq.~(\ref{eq:FieldsInLayer}) with respect to $x_j$ is equivalent to their multiplication by $\textrm{i} k_j$ for $j\in\{1,2\}$, the conservation equations in Eq.~(\ref{eq:GoverningEqns}) can be re-written as follows, where Eq.~(\ref{eq:GoverningEqns2A}) is equivalent to Eq.~(\ref{eq:GoverningEqnsA}) while
Eqs.~(\ref{eq:GoverningEqns2B}) and (\ref{eq:GoverningEqns2C}) are equivalent to Eq.~(\ref{eq:GoverningEqnsB}) in the in-plane and normal directions respectively:
\begin{subequations}
\begin{align}
& \frac{\textrm{i}\omega \widehat{P}}{ B }= \textrm{i} \textbf{k}_\Gamma \cdot \widehat{\mathbf{V}}_\Gamma + \frac{\partial \widehat{V}_3}{\partial x_3},
\label{eq:GoverningEqns2A}\\
& \textrm{i}\omega \widehat{\mathbf{V}}_\Gamma = \textrm{i}\boldsymbol{H}_\Gamma .\textbf{k}_\Gamma \widehat{P} + H_{33} \frac{\partial \widehat{P} }{\partial x_3} \textbf{q},
\label{eq:GoverningEqns2B} \\
& \textrm{i}\omega \widehat{V}_3 = H_{33} \( \textrm{i} \textbf{q}\cdot \textbf{k}_\Gamma \widehat{P} + \frac{\partial \widehat{P} }{\partial x_3} \).
\label{eq:GoverningEqns2C}%
\end{align}
\label{eq:GoverningEqns2}%
\end{subequations}
Equation (\ref{eq:GoverningEqns2C}) yields the apparent momentum conservation in the normal direction, and involves the apparent density $\widetilde{\rho}\, $:
\begin{equation}
\textrm{i} \omega \widetilde{\rho}\, \widehat{V}_3 = \textrm{i} (\textbf{q}\cdot \textbf{k}_\Gamma ) \widehat{P} + \frac{ \partial \widehat{P}}{\partial x_3}
\ \ \text{where}\ \
\widetilde{\rho}\, = \frac{1}{H_{33}}.
\label{eq:ApparentMomentumConservation}
\end{equation}
Besides, substitution of Eq.~(\ref{eq:GoverningEqns2B}) in (\ref{eq:GoverningEqns2A}) provides the relation:
\begin{equation}
\frac{\textrm{i}\omega \widehat{P}}{ B }=
\textrm{i} \textbf{k}_\Gamma \cdot \textbf{q} \frac{ H_{33}}{\textrm{i} \omega} \frac{\partial \widehat{P} }{\partial x_3}
- \frac{ \widehat{P} }{\textrm{i} \omega} \textbf{k}_\Gamma \cdot (\boldsymbol{H}_\Gamma .\textbf{k}_\Gamma)
+ \frac{\partial \widehat{V}_3}{\partial x_3},
\label{eq:ApparentMassConservation0}
\end{equation}
while, according to Eq.~(\ref{eq:GoverningEqns2C}), the $x_3$-derivative of $\widehat{P}$ reads:
\begin{equation}
\frac{\partial \widehat{P} }{\partial x_3} = \frac{ \textrm{i}\omega }{H_{33}} \widehat{V}_3 - \textrm{i} \textbf{q}\cdot \textbf{k}_\Gamma \widehat{P} .
\label{eq:DerivativeP}
\end{equation}
Substitution of Eq.~(\ref{eq:DerivativeP}) in (\ref{eq:ApparentMassConservation0}) yields the apparent equation of mass conservation:
\begin{equation}
\frac{\textrm{i} \omega \widehat{P}}{ \widetilde{B} } = \textrm{i} (\textbf{q}\cdot \textbf{k}_\Gamma ) \widehat{V}_3 + \frac{\partial \widehat{V}_3 }{ \partial x_3 } ,
\label{eq:ApparentMassConservation}
\end{equation}
where the apparent bulk modulus $ \widetilde{B}$ satisfies:
\begin{equation}
\frac{1}{ \widetilde{B} } = \frac{1}{ B } - \frac{ 1 }{ \omega^2 } \( \:^t\textbf{k}_\Gamma . \boldsymbol{H}_\Gamma .\textbf{k}_\Gamma - H_{33} (\textbf{k}_\Gamma \cdot \textbf{q} )^2 \) .
\label{eq:ApparentBulkModulus}
\end{equation}
Expansion of the right-hand side of Eq.~(\ref{eq:ApparentBulkModulus}) gives:
\begin{equation}
\begin{array}{l}
\displaystyle
\:^t\textbf{k}_\Gamma . \boldsymbol{H}_\Gamma .\textbf{k}_\Gamma - H_{33} (\textbf{k}_\Gamma \cdot \textbf{q} )^2
= H_{11} k_1^2 + H_{22} k_2^2 \\
\displaystyle
\quad + 2H_{12} k_1 k_2- H_{33}\( q_1^2 k_1^2 + 2 q_1q_2k_1k_2 + q_2^2 k_2^2 \).
\end{array}
\label{eq:Expand}
\end{equation}
Substitution of Eq.~(\ref{eq:Expand}) into (\ref{eq:ApparentBulkModulus}) yields
\begin{subequations}
\begin{align}
&\frac{\omega^2}{B} -\frac{\omega^2}{ \widetilde{B}(k_1,k_2) } = \xi_{11} k_1^2 + \xi_{22}k_2^2 +2 \, \xi_{12} k_1k_2 ,
\label{eq:ApparentBulk2A}\\
&\text{with}\quad \xi_{ij} = H_{ij} - H_{33} \, q_i \, q_j ,\ \ \forall (i,j)\in\{1,2\}^2.
\label{eq:ApparentBulk2B}%
\end{align}
\label{eq:ApparentBulk2}%
\end{subequations}
Equations (\ref{eq:ApparentMomentumConservation}), (\ref{eq:ApparentMassConservation}) and (\ref{eq:ApparentBulk2}) provide the results of this section.
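As a side note, the algebraic reduction leading to Eq.~(\ref{eq:ApparentBulk2}) is straightforward to verify numerically. The following sketch (purely illustrative, with a randomly drawn symmetric tensor $\boldsymbol{H}$ and random in-plane wavenumbers) checks that the expansion in Eq.~(\ref{eq:Expand}) matches the quadratic form built from the coefficients $\xi_{ij}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric tensor H (inverse density) and in-plane wavenumbers.
A = rng.standard_normal((3, 3))
H = A + A.T + 6.0 * np.eye(3)   # symmetric; positive definiteness not needed here
k1, k2 = rng.standard_normal(2)
kG = np.array([k1, k2])

# Apparent parameters, Eqs. (ApparentParameters) and (ApparentBulk2B).
q = np.array([H[0, 2] / H[2, 2], H[1, 2] / H[2, 2]])
xi = H[:2, :2] - H[2, 2] * np.outer(q, q)

# Left-hand side of Eq. (Expand) and quadratic form of Eq. (ApparentBulk2A).
lhs = kG @ H[:2, :2] @ kG - H[2, 2] * (kG @ q) ** 2
rhs = xi[0, 0] * k1**2 + xi[1, 1] * k2**2 + 2 * xi[0, 1] * k1 * k2

assert np.isclose(lhs, rhs)
```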
\section{Nicolson--Ross--Weir formula in anisotropic media}
\label{sec:Nicolson}
To derive the Nicolson--Ross--Weir formula in anisotropic media, we start with the following system stating the boundary conditions at the interfaces of the layer with the surrounding medium:
\begin{equation}
\begin{Bmatrix} R+1\\ \\ \displaystyle \frac{R-1}{\widetilde{Z}_0} \\ \end{Bmatrix}
= \textbf{U} \cdot
\begin{bmatrix} e^{ \textrm{i} k_3^{-}L} & 0\\ \\ 0 & e^{\textrm{i} k_3^{+} L}\\ \end{bmatrix}
\cdot
\textbf{U}^{-1}
\cdot
\begin{Bmatrix} T \\ \\ \displaystyle \frac{-T }{\widetilde{Z}_0} \\ \end{Bmatrix},
\label{eq:Sol2}
\end{equation}
where matrices $\textbf{U}$ and $\textbf{U}^{-1}$ are given by
\begin{equation}
\textbf{U} =
\begin{bmatrix}
\widetilde{Z} & \widetilde{Z} \\
-1 & 1 \\
\end{bmatrix}, \quad \text{and }\quad
\textbf{U}^{-1} = \frac{1}{2}
\begin{bmatrix}
1\big/\widetilde{Z} & -1\\
1\big/\widetilde{Z} & 1 \\
\end{bmatrix},
\label{eq:Sol}%
\end{equation}
while wavenumbers in the layer read as follows in the $x_3$-direction:
\begin{equation}
k_3^{\pm} = -\textbf{q}\cdot \textbf{k}_\Gamma \pm\widetilde{k}.
\label{eq:LayerWavenumber}
\end{equation}
Substitution of Eq.~(\ref{eq:Sol}) into (\ref{eq:Sol2}) yields the following two equations once developed:
\begin{subequations}
\begin{align}
&& 2(R+1) ~~ &= T e^{ \textrm{i} k_3^{+}L} (1-\sigma) + T e^{ \textrm{i} k_3^{-}L} (1+\sigma),
\label{eq:TheTwoEqnsA}\\
&& 2(R-1) \sigma &= T e^{ \textrm{i} k_3^{+}L} (1-\sigma) -T e^{ \textrm{i} k_3^{-}L} (1+\sigma) ,
\label{eq:TheTwoEqnsB}%
\end{align}
\label{eq:TheTwoEqns}%
\end{subequations}
where $\sigma=\widetilde{Z}/\widetilde{Z}_0$ is the ratio of apparent impedances. Summing side by side Eqs.~(\ref{eq:TheTwoEqnsA}) and (\ref{eq:TheTwoEqnsB}) while using the expression (\ref{eq:LayerWavenumber}) of the wavenumber $ k_3^{+}$ provides:
\begin{equation}
( T e^{- \textrm{i} \textbf{q}\cdot \textbf{k}_\Gamma L } ) e^{+ \textrm{i} \widetilde{k}L} = 1 + \frac{1+\sigma}{1-\sigma} R.
\label{eq:SumTwoEqns}%
\end{equation}
Subtracting Eq.~(\ref{eq:TheTwoEqnsB}) from (\ref{eq:TheTwoEqnsA}) side by side provides
\begin{equation}
( T e^{- \textrm{i} \textbf{q}\cdot \textbf{k}_\Gamma L } ) e^{- \textrm{i} \widetilde{k}L} = 1 + \frac{1-\sigma}{1+\sigma} R.
\label{eq:DifTwoEqns}%
\end{equation}
Multiplying side by side Eqs.~(\ref{eq:SumTwoEqns}) and (\ref{eq:DifTwoEqns}) leads to:
\begin{equation}
( T e^{- \textrm{i} \textbf{q}\cdot \textbf{k}_\Gamma L } )^2 = \( 1 + \frac{1-\sigma}{1+\sigma} R \)\(1 + \frac{1+\sigma}{1-\sigma} R \).
\label{eq:MultwoEqns}%
\end{equation}
Equation (\ref{eq:MultwoEqns}) can be re-written in the form
\begin{equation}
\frac{1+\sigma^2}{1-\sigma^2} = \frac{( T e^{- \textrm{i} \textbf{q}\cdot \textbf{k}_\Gamma L } )^2-1-R^2}{2R}.
\label{eq:MultwoEqns2}%
\end{equation}
Solving for $\sigma^2$ in Eq.~(\ref{eq:MultwoEqns2}) yields
\begin{equation}
\sigma^2 = \frac{1+R^2-( T e^{- \textrm{i} \textbf{q}\cdot \textbf{k}_\Gamma L } )^2 +2R }{1+R^2-( T e^{- \textrm{i} \textbf{q}\cdot \textbf{k}_\Gamma L } )^2 -2R}.
\label{eq:Sigma}%
\end{equation}
Taking the square root in Eq.~(\ref{eq:Sigma}) provides the apparent impedance:
\begin{equation}
\widetilde{Z}=\pm \widetilde{Z}_0
\sqrt{ \frac{(1+R)^2-( T e^{- \textrm{i} \textbf{q}\cdot \textbf{k}_\Gamma L } )^2 }{(1-R)^2-( T e^{- \textrm{i} \textbf{q}\cdot \textbf{k}_\Gamma L } )^2}},
\label{eq:MyZZ}%
\end{equation}
while Eqs.~(\ref{eq:TheTwoEqnsA}) and (\ref{eq:TheTwoEqnsB}) can be cast in the single relation
\begin{equation}
e^{ \mp \textrm{i} \widetilde{k}L} = \( 1 + \frac{1 \mp\widetilde{Z}/\widetilde{Z}_0}{ 1 \pm\widetilde{Z}/\widetilde{Z}_0 } R \) \frac{1}{T e^{- \textrm{i} \textbf{q}\cdot \textbf{k}_\Gamma L }}.
\label{eq:TwoEqnsSingle}%
\end{equation}
Equations (\ref{eq:MyZZ}) and (\ref{eq:TwoEqnsSingle}) provide the results of this section.
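The retrieval procedure can be checked for consistency as follows. For given $\sigma$ and $\widetilde{k}L$ (arbitrary illustrative values below), Eqs.~(\ref{eq:SumTwoEqns}) and (\ref{eq:DifTwoEqns}) form a linear system for $R$ and $T' = T e^{- \textrm{i} \textbf{q}\cdot \textbf{k}_\Gamma L}$; solving it and applying the inversion formulas must recover the impedance ratio $\sigma^2 = ((1+R)^2-T'^2)/((1-R)^2-T'^2)$ and the phase factor $e^{-\textrm{i}\widetilde{k}L}$. A minimal numerical sketch:

```python
import numpy as np

# Arbitrary (hypothetical) layer parameters for the check.
sigma = 0.7 + 0.2j        # impedance ratio Z~/Z~0
kL = 1.3 + 0.05j          # normalized wavenumber k~ L

# Forward step: Eqs. (SumTwoEqns)-(DifTwoEqns) are linear in R and
# T' = T exp(-i q.k_Gamma L):  T' E = 1 + a R,  T'/E = 1 + R/a,
# with a = (1+sigma)/(1-sigma) and E = exp(i k~ L).
a = (1 + sigma) / (1 - sigma)
E = np.exp(1j * kL)
M = np.array([[E, -a], [1 / E, -1 / a]])
Tp, R = np.linalg.solve(M, np.array([1.0, 1.0]))

# Inverse step: impedance ratio recovered from R and T'.
sigma2 = ((1 + R) ** 2 - Tp**2) / ((1 - R) ** 2 - Tp**2)
assert np.isclose(sigma2, sigma**2)

# Eq. (TwoEqnsSingle) with the upper signs recovers exp(-i k~ L).
phase = (1 + (1 - sigma) / (1 + sigma) * R) / Tp
assert np.isclose(phase, np.exp(-1j * kL))
```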
\section{Coefficients of the density tensor}
\label{sec:DensityTensor}
Here, we derive the expression of the six coefficients of the density tensor $ \boldsymbol{\rho} $ from the knowledge of the following parameters:
\begin{subequations}
\begin{align}
&\ q_1 = H_{13}/H_{33}, \quad q_2 = H_{23}/H_{33},\quad \widetilde{\rho}\, =1/H_{33} ,
\label{eq:InitialParametersA}\\
&\xi_{ij} = H_{ij} - H_{33} \, q_i \, q_j ,\ \ \forall (i,j)\in\{1,2\}^2.
\label{eq:InitialParametersB}%
\end{align}
\label{eq:InitialParameters}%
\end{subequations}
Since $ \boldsymbol{H} = \boldsymbol{\rho}^{-1}$, tensors $\boldsymbol{H}$ and $\boldsymbol{\rho}$ satisfy
\begin{equation}
\boldsymbol{H} = \frac{ \:^t\text{comat}(\boldsymbol{\rho})}{\det(\boldsymbol{\rho})},
\quad\text{and}\quad
\boldsymbol{\rho} = \det(\boldsymbol{\rho}) \:^t\text{comat}(\boldsymbol{H}),
\label{eq:MutualInverse}
\end{equation}
where $\text{comat}(\cdot)$ stands for the comatrix. Since $ \boldsymbol{H} $ and $ \boldsymbol{\rho}$ are symmetric, calculation of their comatrices and of the determinant of the density tensor provides:
\begin{widetext}
\begin{equation}
\boldsymbol{H} = \frac{1}{\det(\boldsymbol{\rho})}
\begin{bmatrix}
\quad& \rho_{22} \rho_{33} - \rho_{23}^2
&\qquad& \rho_{13} \rho_{23} - \rho_{33}\rho_{12}
&\quad& \rho_{12} \rho_{23} - \rho_{22}\rho_{13}
& \quad
\\ \\
\quad& \rho_{13} \rho_{23} - \rho_{33} \rho_{12}
&\qquad& \rho_{11} \rho_{33} - \rho_{13}^2
&\qquad& \rho_{12}\rho_{13} - \rho_{11}\rho_{23}
& \quad
\\ \\
\quad& \rho_{12}\rho_{23} - \rho_{13}\rho_{22}
&\qquad& \rho_{12}\rho_{13} - \rho_{11}\rho_{23}
&\qquad& \rho_{11}\rho_{22} - \rho_{12}^2
& \quad
\\
\end{bmatrix},
\label{eq:Comat}
\end{equation}
\begin{equation}
\boldsymbol{\rho} = \det(\boldsymbol{\rho}) \cdot
\begin{bmatrix}
\quad& H_{22} H_{33} - H_{23}^2
&\qquad& H_{13} H_{23} - H_{33}H_{12}
&\quad& H_{12} H_{23} - H_{22}H_{13}
& \quad
\\ \\
\quad& H_{13} H_{23} - H_{33} H_{12}
&\qquad& H_{11} H_{33} - H_{13}^2
&\qquad& H_{12}H_{13} - H_{11}H_{23}
& \quad
\\ \\
\quad& H_{12}H_{23} - H_{13}H_{22}
&\qquad& H_{12}H_{13} - H_{11}H_{23}
&\qquad& H_{11}H_{22} - H_{12}^2
& \quad
\\
\end{bmatrix},
\label{eq:ComatH}
\end{equation}
\begin{equation}
\text{det}(\boldsymbol{\rho}) =
\rho_{13}\,( \rho_{12}\rho_{23}-\rho_{13}\rho_{22} ) + \rho_{23}\,(\rho_{12}\rho_{13}-\rho_{11}\rho_{23}) + \rho_{33}\,( \rho_{11} \rho_{22}-\rho_{12}^2).
\label{eq:Deter}
\end{equation}
\end{widetext}
In what follows, we use the coefficient:
\begin{equation}
\Delta_{12} = \rho_{11} \rho_{22}- \rho_{12}^2 = H_{33} \det(\boldsymbol{\rho}).
\label{eq:Delta12}
\end{equation}
Combining Eqs.~(\ref{eq:InitialParametersA}) to (\ref{eq:Delta12}), the following relations are found immediately:
\begin{subequations}
\begin{align}
& q_1 = \frac{H_{13}}{H_{33}} = \frac{ \rho_{12} \rho_{23} - \rho_{22} \rho_{13} }{\Delta_{12}},
\label{eq:FirstRelationsA} \\
& q_2 = \frac{H_{23}}{H_{33}} = \frac{ \rho_{12} \rho_{13} - \rho_{11} \rho_{23} }{\Delta_{12}},
\label{eq:FirstRelationsB} \\
& \det(\boldsymbol{\rho}) = \Delta_{12} \( \rho_{13} q_1 + \rho_{23} q_2 + \rho_{33} \),
\label{eq:FirstRelationsC} \\
& \widetilde{\rho}\, =\frac{1}{H_{33} } = \frac{\det(\boldsymbol{\rho})}{ \Delta_{12}} = \rho_{13} q_1 + \rho_{23} q_2 + \rho_{33} .
\label{eq:FirstRelationsD}
\end{align}
\label{eq:FirstRelations}%
\end{subequations}
Now, the parameters $\xi_{ij}$ in Eq.~(\ref{eq:InitialParametersB}) can be re-written
\begin{subequations}
\begin{align}
&& \xi_{11} = H_{11} - H_{33} \, q_1^2 &= \frac{H_{11}H_{33}-H_{13}^2}{H_{33}} , \\
&& \xi_{22} = H_{22} - H_{33} \, q_2^2 &= \frac{H_{22}H_{33}-H_{23}^2}{H_{33}} , \\
&& \xi_{12} = H_{12} - H_{33} \, q_1q_2 &= \frac{H_{12}H_{33}-H_{13}H_{23}}{H_{33}} .
\end{align}
\label{eq:Xi1}%
\end{subequations}
Besides, equation (\ref{eq:ComatH}) provides the following coefficients of the density tensor:
\begin{subequations}
\begin{align}
& \rho_{11} = \( H_{22}H_{33}-H_{23}^2 \) \text{det}(\boldsymbol{\rho}), \\
& \rho_{22} = \( H_{11}H_{33}-H_{13}^2 \) \text{det}(\boldsymbol{\rho}), \\
& \rho_{12} = \( H_{13}H_{23}-H_{33}H_{12} \) \text{det}(\boldsymbol{\rho}) .
\end{align}
\label{eq:Xi2}%
\end{subequations}
Combination of Eqs.~(\ref{eq:Delta12}), (\ref{eq:Xi1}) and (\ref{eq:Xi2}) yields
\begin{equation}
\xi_{11} = \frac{ \rho_{22} }{\Delta_{12}} ,
\quad
\xi_{22} = \frac{\rho_{11}}{ \Delta_{12}} ,
\quad
\xi_{12} = \frac{ - \,\rho_{12} }{\Delta_{12}} .
\label{eq:Xi3}%
\end{equation}
These parameters obviously satisfy the equality
\begin{equation}
\xi_{11} \xi_{22} - \xi_{12}^2 = \frac{1}{\Delta_{12}} = \frac{1}{\rho_{11}\rho_{22}-\rho_{12}^2}.
\label{eq:Xi4}%
\end{equation}
As a result, Eq.~(\ref{eq:Xi3}) can be inverted to give
\begin{subequations}
\begin{align}
&\rho_{11} = \ \ \xi_{22} \Big/( \xi_{11} \xi_{22} - \xi_{12}^2 ) ,\\
& \rho_{22} = \ \ \xi_{11} \Big/( \xi_{11} \xi_{22} - \xi_{12}^2 ) ,\\
&\rho_{12}= - \, \xi_{12} \Big/( \xi_{11} \xi_{22} - \xi_{12}^2) .
\end{align}
\label{eq:RhoInPlane}%
\end{subequations}
Now combining Eqs.~(\ref{eq:FirstRelations}a,b) and (\ref{eq:Xi3}), the parameters $q_1$ and $q_2$ read:
\begin{equation}
q_1 = -( \xi_{11} \rho_{13} + \xi_{12} \rho_{23} ) ,
\quad
q_2 = -( \xi_{22} \rho_{23} + \xi_{12} \rho_{13} ) .
\label{eq:q1q2}%
\end{equation}
Solving for $\rho_{13}$ and $\rho_{23}$ in Eq.~(\ref{eq:q1q2}) yields:
\begin{subequations}
\begin{align}
&\rho_{13} = (\xi_{12} q_2 - \xi_{22} q_1) \Big/( \xi_{11} \xi_{22} - \xi_{12}^2 ) ,\\
& \rho_{23} = (\xi_{12} q_1 - \xi_{11} q_2) \Big/( \xi_{11} \xi_{22} - \xi_{12}^2 ) .
\end{align}
\label{eq:RhoCoupling}%
\end{subequations}
Finally, substitution of Eq.~(\ref{eq:RhoCoupling}a,b) into (\ref{eq:FirstRelationsD}) leads to:
\begin{equation}
\rho_{33} = \widetilde{\rho}\, + ( \xi_{22} q_1^2 +\xi_{11} q_2^2 -2\, \xi_{12} q_1 q_2 ) \Big/( \xi_{11} \xi_{22} - \xi_{12}^2 ).
\label{eq:Rho33}%
\end{equation}
Equations (\ref{eq:RhoInPlane}a,b,c), (\ref{eq:RhoCoupling}a,b) and (\ref{eq:Rho33}) provide the results of this section.
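The reconstruction can be verified by a round trip: draw a random symmetric positive-definite density tensor, compute the parameters of Eq.~(\ref{eq:InitialParameters}) from $\boldsymbol{H}=\boldsymbol{\rho}^{-1}$, and rebuild $\boldsymbol{\rho}$ from Eqs.~(\ref{eq:RhoInPlane}), (\ref{eq:RhoCoupling}) and (\ref{eq:Rho33}). A minimal sketch of this check:

```python
import numpy as np

rng = np.random.default_rng(2)

# Random symmetric positive-definite density tensor (hypothetical values).
A = rng.standard_normal((3, 3))
rho = A @ A.T + 3.0 * np.eye(3)
H = np.linalg.inv(rho)

# Measurable parameters, Eq. (InitialParameters).
q1, q2 = H[0, 2] / H[2, 2], H[1, 2] / H[2, 2]
rho_t = 1.0 / H[2, 2]
xi = H[:2, :2] - H[2, 2] * np.outer([q1, q2], [q1, q2])
d = xi[0, 0] * xi[1, 1] - xi[0, 1] ** 2

# Reconstruction, Eqs. (RhoInPlane), (RhoCoupling) and (Rho33).
rho11, rho22, rho12 = xi[1, 1] / d, xi[0, 0] / d, -xi[0, 1] / d
rho13 = (xi[0, 1] * q2 - xi[1, 1] * q1) / d
rho23 = (xi[0, 1] * q1 - xi[0, 0] * q2) / d
rho33 = rho_t + (xi[1, 1] * q1**2 + xi[0, 0] * q2**2
                 - 2 * xi[0, 1] * q1 * q2) / d

rec = np.array([[rho11, rho12, rho13],
                [rho12, rho22, rho23],
                [rho13, rho23, rho33]])
assert np.allclose(rec, rho)
```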
\section{Cell problems in the homogenization process}
\label{sec:CellProblems}
In this section, the definitions of the Johnson--Champoux--Allard--Lafarge \cite{Johnson1987,Lafarge1997} parameters are provided in terms of volume and surface averages of periodic potential fields over the unit cell of the porous medium. These potential fields satisfy periodic cell problems, the full derivation of which from the theory of two-scale asymptotic homogenization \cite{SanchezPalencia1980,Auriault2009} is not recalled here.
In what follows, $\Omega_\textrm{p}$ denotes the unit cell of the porous medium, $\Omega_\textrm{f}$ denotes the fluid domain inside the unit cell, and $\Gamma$ denotes the fluid/solid interface with $\vect{n}$ as normal vector. Moreover, the volume average of a field defined in the fluid domain of the unit cell is denoted by
\begin{equation}
\langle \cdot \rangle = \frac{1}{| \Omega_\textrm{p}|} \int_{\Omega_\textrm{f}}{ \cdot \, \text{d}\Omega},
\end{equation}
where $| \Omega_\textrm{p}| $ is the overall volume of the unit cell. In the present case of a rectangular parallelepiped cell with side lengths $l_I$, $l_{I\!\!I}$, and $l_{I\!\!I\!\!I}$ in the directions $(\mathbf{e}_{I},\mathbf{e}_{I\!\!I},\mathbf{e}_{I\!\!I\!\!I})$, the volume of the unit cell is $| \Omega_\textrm{p}| = l_I \, l_{I\!\!I} \, l_{I\!\!I\!\!I}$. Furthermore, the porosity of the porous medium reads
\begin{equation}
\phi= \langle 1 \rangle = | \Omega_\textrm{f} | \big/ | \Omega_\textrm{p}|.
\end{equation}
\subsection{Visco-inertial cell problem}
The frequency-dependent visco-inertial cell problem consists of the incompressible Stokes flow through the unit cell in response to a unit pressure gradient applied in the direction $\mathbf{e}_{J}$, with $J=I,I\!\!I,I\!\!I\!\!I$:
\begin{equation}
\left\lbrace
\begin{array}{l}
\displaystyle
\text{div}( \vect{k}_J ) = 0, \\
\displaystyle
\text{div}( \textbf{grad}(\vect{k}_J) ) + \frac{\textrm{i} \omega \rho_0}{ \eta} \vect{k}_J = \textbf{grad}( \zeta_J ) - \vect{e}_J , \\
\displaystyle
\vect{k}_J \equiv \vect{0}\text{\quad on }\Gamma, \\
\displaystyle
\langle \zeta_J \rangle \equiv 0, \\
\displaystyle
\vect{k}_J,\quad \zeta_J \quad \Omega\text{-periodic},\\
\end{array}
\right.
\label{eq:ViscoInertialCellPb}
\end{equation}
where $\vect{k}_J$ plays the role of the velocity field and $\zeta_J$ of its associated pressure field. The visco-inertial frequency-dependent permeability tensor reads:
\begin{equation}
\tens{K} = \langle \vect{e}_J \otimes \vect{k}_J \rangle,
\end{equation}
with implicit summation on $J$, and with $\otimes $ being the tensor product. Due to symmetries in the unit cell, directions $(\mathbf{e}_{I},\mathbf{e}_{I\!\!I},\mathbf{e}_{I\!\!I\!\!I})$ are identified as principal axes of the visco-inertial permeability tensor, which is therefore diagonal in this coordinate system $\tens{K} = \textbf{diag}(K_J)$ where $K_J=\langle \vect{k}_J \cdot \vect{e}_J \rangle$.
The visco-static permeability tensor corresponds to the tensor $\tens{K}$ at the frequency $\omega=0$. The fields $(\vect{k}_J,\zeta_J)=(\vect{k}^0_J,\zeta^0_J)$ at $\omega=0$ satisfy the geometric cell problem:
\begin{equation}
\left\lbrace
\begin{array}{l}
\displaystyle
\text{div}( \vect{k}_J^0 ) = 0, \\
\displaystyle
\text{div}( \textbf{grad}(\vect{k}_J^0) ) = \textbf{grad}( \zeta_J ^0) - \vect{e}_J , \\
\displaystyle
\vect{k}_J^0 \equiv \vect{0}\text{\quad on }\Gamma, \\
\displaystyle
\langle \zeta_J^0 \rangle \equiv 0, \\
\displaystyle
\vect{k}_J^0,\quad \zeta_J^0 \quad \Omega\text{-periodic}, \\
\end{array}
\right.
\label{eq:ViscoInertialCellPb0}
\end{equation}
and the visco-static permeability $K_J^0 $ is real and positive, and is given by:
\begin{equation}
K_J^0 = \langle \vect{k}_J^0 \cdot \vect{e}_J \rangle.
\label{eq:Kj0}
\end{equation}
At high frequencies $\omega\rightarrow\infty$, the fields are denoted $(\vect{k}_J,\zeta_J)=(\vect{k}^\infty_J,\zeta^\infty_J)$ and satisfy the cell problem:
\begin{equation}
\left\lbrace
\begin{array}{l}
\displaystyle
\text{div}( \vect{k}_J ^\infty) = 0, \\
\displaystyle
\frac{\textrm{i} \omega \rho_0}{ \eta} \vect{k}_J^\infty = \textbf{grad}( \zeta_J^\infty ) - \vect{e}_J , \\
\displaystyle
\vect{k}_J^\infty \cdot \vect{n} = 0\text{\quad on }\Gamma, \\
\displaystyle
\langle \zeta_J^\infty \rangle \equiv 0, \\
\displaystyle
\vect{k}_J^\infty,\quad \zeta_J^\infty \quad \Omega\text{-periodic}.\\
\end{array}
\right.
\label{eq:ViscoInertialCellPbInfty}
\end{equation}
The high-frequency limit of the principal permeability then reads:
\begin{equation}
K_J^\infty = \frac{\phi \eta}{-\textrm{i} \omega \rho_0 \alpha_J^\infty}
\quad\text{where}\quad
\alpha_J^\infty = \frac{\phi}{\phi - \displaystyle \left\langle \frac{ \partial \zeta_J^\infty}{\partial x_J } \right\rangle} ,
\label{eq:KjInf}
\end{equation}
with $\alpha_J^\infty$ being the high-frequency tortuosity. Besides, the characteristic viscous length $\Lambda_J$ is related to the high-frequency asymptotic limit of the permeability $K_J$ when the viscous boundary layer at the fluid/solid interface is accounted for \cite{Johnson1987}. It reads:
\begin{equation}
\Lambda_J = 2 \frac{ \int_{\Omega_\textrm{f}}{ \vect{k}_J^\infty \cdot \vect{k}_J^\infty \,\text{d}\Omega}}{ \int_{\Gamma}{ \vect{k}_J^\infty \cdot \vect{k}_J^\infty \,\text{d}\Gamma} }.
\label{eq:Lvis}
\end{equation}
Equations (\ref{eq:ViscoInertialCellPb0}) and (\ref{eq:ViscoInertialCellPbInfty}) provide the cell problems solved to obtain the JCAL parameters $K_J^0$, $\alpha_J^\infty$ and $\Lambda_J $ given in Eqs. (\ref{eq:Kj0}), (\ref{eq:KjInf}), and (\ref{eq:Lvis}).
\subsection{Thermo-acoustic cell problem}
The frequency-dependent thermo-acoustic cell problem consists of the heat conduction through the unit cell in response to a spatially uniform applied pressure:
\begin{equation}
\left\lbrace
\begin{array}{l}
\displaystyle
\text{div}( \textbf{grad}(\vartheta) ) + \frac{\textrm{i} \omega \rho_0 c_p }{ \kappa } \vartheta = -1 , \\
\displaystyle
\vartheta \equiv 0 \text{\quad on }\Gamma, \\
\displaystyle
\vartheta \quad \Omega\text{-periodic},\\
\end{array}
\right.
\label{eq:ThermocCellPb}
\end{equation}
where $\vartheta$ plays the role of the temperature field. The thermo-acoustic frequency-dependent permeability reads:
\begin{equation}
\Theta = \langle \vartheta \rangle.
\end{equation}
The thermostatic permeability corresponds to the permeability $\Theta$ at the frequency $\omega=0$. The field $ \vartheta= \vartheta^0$ at $\omega=0$ satisfies the geometric cell problem:
\begin{equation}
\left\lbrace
\begin{array}{l}
\displaystyle
\text{div}( \textbf{grad}(\vartheta^0) ) = -1 , \\
\displaystyle
\vartheta^0 \equiv 0 \text{\quad on } \Gamma, \\
\displaystyle
\vartheta^0 \quad \Omega\text{-periodic}.\\
\end{array}
\right.
\label{eq:ThermocCellPb0}
\end{equation}
and the thermostatic permeability $\Theta^0 $ is given by:
\begin{equation}
\Theta^0 = \langle \vartheta^0 \rangle.
\label{eq:Theta0}
\end{equation}
At high frequencies $\omega\rightarrow\infty$, the thermal field is denoted $\vartheta= \vartheta^\infty $ and is uniform over the cell. It satisfies:
\begin{equation}
\frac{\textrm{i} \omega \rho_0 c_p }{ \kappa } \vartheta^\infty = -1
\quad\text{and}\quad
\Theta^\infty = \langle \vartheta^\infty \rangle = \frac{\phi \kappa }{-\textrm{i} \omega \rho_0 c_p } .
\label{eq:ThetaInf}
\end{equation}
In addition, the characteristic thermal length $\Lambda'$ is related to the high-frequency asymptotic limit of the permeability $\Theta$ when the thermal boundary layer at the fluid/solid interface is accounted for. It reads:
\begin{equation}
\Lambda' = 2 \frac{ \int_{\Omega_\textrm{f}}{ \vartheta^\infty \cdot \vartheta^\infty \,\text{d}\Omega}}{ \int_{\Gamma}{ \vartheta^\infty \cdot \vartheta^\infty \,\text{d}\Gamma} }
= 2 \frac{|\Omega_\textrm{f}|}{| \Gamma|}.
\label{eq:Lthe}
\end{equation}
Equation (\ref{eq:ThermocCellPb0}) provides the cell problem solved to obtain the JCAL parameters $\Theta^0$ and $\Lambda' $ given in Eqs. (\ref{eq:Theta0}) and (\ref{eq:Lthe}).
\pagebreak
\section{Introduction}
With the rapid development of acoustic metamaterials and transformation acoustics, efficient characterization methods are needed to estimate the unprecedented effective acoustic properties of structured materials. Characterization methods based on the inversion of the scattering matrix \cite{Nicolson1970,Weir1974} have been extensively developed in the field of metamaterials \cite{Smith2002} and acoustic materials \cite{Song2000}. They are of particular interest in the design of acoustic metamaterials \cite{Popa2009,Zigoneanu2011,Jiang2011} since they directly provide their effective density and bulk modulus. Alternatively, these methods also turn out to be well-suited to retrieve the effective parameters of periodic arrangements of unit cells \cite{Fokin2007}, provided that their effective material supports only one propagative mode in the frequency range of interest and that Drude layers at its boundaries are accounted for at high frequencies \cite{Simovski2007}.
However, many acoustic metamaterials may be described as effective anisotropic fluids \cite{Christensen2012,Torrent2008} notably to achieve acoustic cloaking \cite{Norris2015}.
Admittedly, characterization methods have been extended to three-dimensional anisotropic materials with principal directions belonging to the layer plane interface \cite{Li2009,Jiang2011}, or two-dimensional anisotropic materials with principal directions arbitrarily tilted with respect to the reference coordinate system \cite{Castanie2014, Park2016}. Nevertheless, no specific method seems to have been developed to characterize fully anisotropic acoustic materials in three dimensions (3-D). Our aim here is to present a general retrieval method to extract the bulk modulus and all six components of the 3-D symmetric anisotropic density tensor from a limited number of characterization tests. To do so, we build upon past work to extend methods based on plane-wave reflection and transmission through a layer sample. Here, the general 3-D case of a fully anisotropic fluid material having principal axes tilted in a priori unknown directions is considered.
The article is organized as follows. In Sec. \ref{sec:DirectProblem}, the direct problem is solved via a state-vector formalism to yield the reflection and transmission coefficients. In particular, the transmission coefficient is shown to exhibit phase delays which are related to the orientation of the material with respect to the layer interfaces. These phase delays will prove to be of paramount importance in the retrieval method. In Sec. \ref{sec:InverseProblem}, the inverse problem is studied and the general retrieval method is presented. It provides analytical expressions of the bulk modulus and all six coefficients of the density tensor as functions of the reflection and transmission coefficients obtained from interrogation of the layer by incident plane waves at specific angles of incidence and orientations of the incidence plane. In Sec. \ref{sec:Application}, the efficiency of the procedure is demonstrated in the case of sound propagation through an anisotropic viscothermal fluid layer made of an orthorhombic lattice of overlapping ellipsoids. The effective poroacoustic properties of the array are first derived from the theory of two-scale asymptotic homogenization \cite{SanchezPalencia1980,Auriault2009} and the retrieval method is then applied to the homogenized anisotropic layer. All seven material parameters (the six coefficients of the symmetric density tensor and the bulk modulus) are accurately retrieved. They provide insight into the orientation of the material microstructure, through the recovery of the three principal directions and principal densities.
\section{Direct problem}
\label{sec:DirectProblem}
In this section, the plane wave propagation through a layer made of homogeneous anisotropic fluid material $\Omega$ is studied, see Fig. \ref{fig:FIG1}.
The layer has the thickness $L$ and its constitutive material $\Omega $ has the bulk modulus $B$ and density tensor $\boldsymbol{\rho}$. In the reference Cartesian coordinate system $\mathcal{R}_0=\left(O,\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3 \right)$ with position coordinates $(x_1,x_2,x_3)$, the mutually parallel plane boundaries $\Gamma_0$ and $\Gamma_L$ of the layer are given by the equations $x_3=0$ and $x_3=L$ respectively. The layer is surrounded on both sides $x_3\leq 0$ and $x_3\geq L$ by a homogeneous isotropic fluid $\Omega_0$ of scalar density $\rho_0$ and bulk modulus $B_0$. It leads to the sound speed $c_0=\sqrt{B_0/\rho_0}$ and characteristic impedance $Z_0=\rho_0 c_0$ in the surrounding medium $\Omega_0$. Here, the analysis is performed in the linear harmonic regime at the circular frequency $\omega$ with the implicit time dependence $e^{-\textrm{i}\omega t}$. In this system, the pressure and particle velocity fields $(P, \mathbf{V})$ in the layer and $(p, \mathbf{v})$ in the surrounding medium, are governed by the equations of mass and momentum conservation:
\begin{subequations}
\begin{align}
&& \textrm{i}\omega P\big/ B= \nabla \cdot \mathbf{V} \quad \text{and}\quad \textrm{i} \omega \boldsymbol{\rho} \cdot \mathbf{V}=\nabla P \quad \text{in $\Omega$,}
\label{eq:GoverningEqnsA}\\
&& \textrm{i} \omega p\big/B_0= \nabla \cdot \mathbf{v} \quad \text{and}\quad \textrm{i} \omega \rho_0 \mathbf{v}=\nabla p \quad \text{in $\Omega_0$.}
\label{eq:GoverningEqnsB}%
\end{align}
\label{eq:GoverningEqns}%
\end{subequations}
Equations (\ref{eq:GoverningEqnsA}) and (\ref{eq:GoverningEqnsB}) show that the anisotropy of the material $\Omega$ in the layer is accounted for by the tensorial character of its density.
As usual for passive media, the density tensor $\boldsymbol{\rho}$ is symmetric, that is $\:^t\boldsymbol{\rho}=\boldsymbol{\rho}$ where $\:^t$ denotes transposition.
In particular, the orthonormal coordinate system $\mathcal{R}_{\rho}=(\mathbf{e}_{I},\mathbf{e}_{I\!\!I},\mathbf{e}_{I\!\!I\!\!I})$ of its principal directions with coordinates $(x_{I},x_{I\!\!I},x_{I\!\!I\!\!I})$ can be defined so that the density matrix is diagonal in this system. In other words, the density tensor can be written as $\boldsymbol{\rho} = \boldsymbol{\rho}^{\star}=\textbf{diag}\left(\rho_I,\rho_{I\!\!I},\rho_{I\!\!I\!\!I} \right)$ in $\mathcal{R}_{\rho}$,
where $\rho_I$, $\rho_{I\!\!I}$ and $\rho_{I\!\!I\!\!I} $ are the principal densities. As a result, when expressed in the reference coordinate system $\mathcal{R}_0$, the density tensor reads $\boldsymbol{\rho}= \mathbf{R}\cdot \boldsymbol{\rho}^{\star} \cdot \,^t\mathbf{R}$ where $\mathbf{R}=\mathbf{R}_3\left(\theta_{I\!\!I\!\!I} \right)\mathbf{R}_2\left(\theta_{I\!\!I} \right) \mathbf{R}_1\left(\theta_{I} \right)$ is the rotation matrix between the two coordinate systems, with $\mathbf{R}_1$, $\mathbf{R}_2$, $\mathbf{R}_3$ being elementary matrices of rotations and $\theta_I$, $\theta_{I\!\!I}$ and $\theta_{I\!\!I\!\!I}$ the roll, pitch, and yaw angles. Moreover, it is worth recalling that, as effective properties, the bulk modulus $B$ and density tensor $ \boldsymbol{\rho}$ can be complex-valued and frequency-dependent.
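The construction $\boldsymbol{\rho}= \mathbf{R}\cdot \boldsymbol{\rho}^{\star} \cdot \,^t\mathbf{R}$ can be sketched numerically as follows, with illustrative principal densities and angles; the elementary rotations are assumed to be the standard ones about $\mathbf{e}_1$, $\mathbf{e}_2$ and $\mathbf{e}_3$:

```python
import numpy as np

def rot1(t):  # roll: rotation about e1
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot2(t):  # pitch: rotation about e2
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot3(t):  # yaw: rotation about e3
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

rho_star = np.diag([1.5, 2.0, 3.0])        # principal densities (illustrative)
R = rot3(0.3) @ rot2(0.2) @ rot1(0.1)      # R = R3(yaw) R2(pitch) R1(roll)
rho = R @ rho_star @ R.T                   # density tensor in the reference frame
assert np.allclose(rho, rho.T)             # symmetric, as required for passive media
assert np.allclose(np.linalg.eigvalsh(rho), [1.5, 2.0, 3.0])
```

The eigenvalues of the rotated tensor recover the principal densities, which is the property the retrieval method exploits at the very end of Sec. \ref{sec:InverseProblem}.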
\begin{figure}[tbp]
\centering
\includegraphics[width=8.5cm]{Fig1-SchemaLayer_2.pdf}
\caption{\label{fig:FIG1} (Color online) Conceptual view of the homogeneous anisotropic fluid layer of thickness $L$. The principal directions of the anisotropic fluid are denoted by $(x_{I},x_{I\!\!I},x_{I\!\!I\!\!I})$.}
\end{figure}
Now, the layer is submitted to the incident plane wave $p^i=e^{\textrm{i} k_1 x_1+\textrm{i} k_2 x_2-\textrm{i} k_3 \left(x_3-L\right)}$ propagating with a unit amplitude in the domain $x_3\geq L$ with the wavenumbers
\begin{subequations}
\begin{align}
&k_1=-k_0 \sin{\varphi}\cos{\psi},
\label{eq:IncidentWavenumbersA}\\
&k_2=-k_0 \sin{\varphi}\sin{\psi},
\label{eq:IncidentWavenumbersB}\\
&k_3=\sqrt{k_0^2-k_1^2-k_2^2}=k_0 \cos{\varphi},
\label{eq:IncidentWavenumbersC}
\end{align}
\label{eq:IncidentWavenumbers}%
\end{subequations}
where $k_0=\omega/c_0$ is the acoustic wavenumber in $\Omega_0$, while $\psi$ and $\varphi$ are the azimuthal and elevation angles measured from $(O,x_1)$ and $(O,x_3)$ respectively.
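A minimal numerical check of Eqs.~(\ref{eq:IncidentWavenumbers}a-c), with illustrative frequency and angles:

```python
import numpy as np

c0, omega = 343.0, 2*np.pi*1000.0               # illustrative sound speed and frequency
k0 = omega/c0
psi, phi = np.deg2rad(30.0), np.deg2rad(40.0)   # azimuthal and elevation angles
k1 = -k0*np.sin(phi)*np.cos(psi)
k2 = -k0*np.sin(phi)*np.sin(psi)
k3 = np.sqrt(k0**2 - k1**2 - k2**2)
assert np.isclose(k3, k0*np.cos(phi))           # consistent with Eq. (IncidentWavenumbersC)
assert np.isclose(k1**2 + k2**2 + k3**2, k0**2) # the wavevector lies on the sound cone
```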
Due to the linearity in the system, the Snell-Descartes Law holds: the in-plane wavevector $ \textbf{k}_\Gamma = k_1\mathbf{e}_1+k_2\mathbf{e}_2$ of the incident field is prescribed to the fields in both $\Omega_0$ and $\Omega$. In the surrounding medium $\Omega_0$, this gives rise to the specularly reflected and transmitted waves $p^R$ and $p^T$ in the form
\begin{equation}
p^R=R e^{\textrm{i} k_3 \left(x_3-L\right)} e^{\textrm{i} \textbf{k}_\Gamma \cdot \mathbf{x}_\Gamma },
\qquad
p^T=T e^{-\textrm{i} k_3 x_3} e^{\textrm{i} \textbf{k}_\Gamma \cdot \mathbf{x}_\Gamma },
\label{eq:ReflectedTransmitted}
\end{equation}
in domains $x_3\geq L$ and $x_3\leq 0$ respectively, where $R$ and $T$ are the pressure reflection and transmission coefficients, while $\mathbf{x}_\Gamma = x_1\mathbf{e}_1+x_2\mathbf{e}_2$ is the in-plane position vector.
In the layer $\Omega$, the Snell-Descartes Law implies that the pressure and velocity fields take the form
\begin{equation}
P=\widehat{P}(x_3)e^{\textrm{i} \textbf{k}_\Gamma \cdot \mathbf{x}_\Gamma }
\quad \text{and} \quad
\mathbf{V}=\widehat{\mathbf{V}}(x_3) e^{\textrm{i} \textbf{k}_\Gamma \cdot \mathbf{x}_\Gamma },
\label{eq:FieldsInLayer}
\end{equation}
where $\widehat{P}(x_3)$ and $\widehat{\mathbf{V}}(x_3) $ are independent of $\mathbf{x}_\Gamma$ due to the homogeneity of the layer, but still depend on the coordinate $x_3$. Substitution of Eq. (\ref{eq:FieldsInLayer}) into (\ref{eq:GoverningEqnsA}) leads to the following equations of apparent mass and momentum conservation involving $\widehat{P}(x_3)$ and the normal velocity $\widehat{V}_3=\mathbf{e}_3\cdot \widehat{\mathbf{V}}(x_3)$:
\begin{subequations}
\begin{align}
& \textrm{i} \omega \widehat{P} \big/ \widetilde{B} = \textrm{i} (\textbf{q}\cdot \textbf{k}_\Gamma ) \widehat{V}_3 + \partial \widehat{V}_3 \big/\partial x_3 ,
\label{eq:ApparentMassMomentumA}\\
& \textrm{i} \omega \widetilde{\rho} \, \widehat{V}_3 = \textrm{i} (\textbf{q}\cdot \textbf{k}_\Gamma ) \widehat{P} + \partial \widehat{P} \big/\partial x_3.
\label{eq:ApparentMassMomentumB}
\end{align}
\label{eq:ApparentMassMomentum}%
\end{subequations}
Details about the derivation of Eqs.~(\ref{eq:ApparentMassMomentum}a,b) are provided in the Supplementary Material.
In these equations, the dimensionless vector $\textbf{q}= q_1\mathbf{e}_1+q_2\mathbf{e}_2$ is induced by the anisotropic material which couples in-plane and normal directions, while scalars $\widetilde{B}$ and $\widetilde{\rho}\, $ are the apparent bulk modulus and density. Denoting the inverse of the density tensor by the symmetric tensor $\boldsymbol{H}=\boldsymbol{\rho}^{-1}$, the coefficients $q_1$, $q_2$, and the apparent density $ \widetilde{\rho}\, $ are found to depend only on the (inverse) density tensor:
\begin{equation}
q_1 = H_{13}\big/H_{33},\quad
q_2 = H_{23}\big/H_{33},\quad
\widetilde{\rho}\, =1\big/H_{33} ,
\label{eq:ApparentParameters}
\end{equation}
while the apparent bulk modulus $ \widetilde{B} $ depends on the (inverse) density tensor, the bulk modulus $B$ and, more importantly, on the in-plane wavevector $\textbf{k}_\Gamma$ according to
\begin{subequations}
\begin{align}
&\frac{\omega^2}{B} -\frac{\omega^2}{ \widetilde{B}(k_1,k_2) } = \xi_{11} k_1^2 + \xi_{22}k_2^2 +2 \, \xi_{12} k_1k_2 ,
\label{eq:ApparentBulk2A}\\
&\text{with}\quad \xi_{ij} = H_{ij} - H_{33} \, q_i \, q_j ,\ \ \forall (i,j)\in\{1,2\}^2.
\label{eq:ApparentBulk2B}%
\end{align}
\label{eq:ApparentBulk}%
\end{subequations}
It is worth noting that apparent density $\widetilde{\rho}_0=\rho_0$ and bulk modulus $ \widetilde{B}_0 = B_0 k_0^2 \big/[k_0^2- k_1^2-k_2^2] $
in the isotropic surrounding medium $\Omega_0$ also display similar features, but the coupling vector $\textbf{q}$ and the coefficient $\xi_{12}$ are zero in $\Omega_0$.
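The apparent parameters can be assembled directly from any symmetric density tensor, as in the following sketch with illustrative values: $\boldsymbol{H}=\boldsymbol{\rho}^{-1}$ yields $q_1$, $q_2$ and $\widetilde{\rho}$ from Eq.~(\ref{eq:ApparentParameters}), the coefficients $\xi_{ij}$ from Eq.~(\ref{eq:ApparentBulk2B}), and then $\widetilde{B}$ from Eq.~(\ref{eq:ApparentBulk2A}).

```python
import numpy as np

rho = np.array([[1.6, 0.2, 0.1],
                [0.2, 2.1, 0.3],
                [0.1, 0.3, 2.9]])           # illustrative symmetric density tensor
B = 1.4e5                                   # illustrative bulk modulus
H = np.linalg.inv(rho)                      # inverse density tensor
q = np.array([H[0, 2], H[1, 2]])/H[2, 2]    # coupling coefficients q1, q2
rho_tilde = 1.0/H[2, 2]                     # apparent density
xi = H[:2, :2] - H[2, 2]*np.outer(q, q)     # coefficients xi_ij

def B_tilde(omega, k1, k2):
    # 1/B_tilde = 1/B - (xi11 k1^2 + xi22 k2^2 + 2 xi12 k1 k2)/omega^2
    rhs = xi[0, 0]*k1**2 + xi[1, 1]*k2**2 + 2*xi[0, 1]*k1*k2
    return 1.0/(1.0/B - rhs/omega**2)

omega = 2*np.pi*500.0
assert np.isclose(B_tilde(omega, 0.0, 0.0), B)   # normal incidence recovers B
assert B_tilde(omega, 3.0, 2.0) != B             # oblique incidence shifts B_tilde
```

Note that $\xi_{ij}$ is the Schur complement of $H_{33}$ in $\boldsymbol{H}$, which is what makes the closed-form reconstruction of $\boldsymbol{\rho}$ in Sec. \ref{sec:InverseProblem} possible.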
Introducing now the state vector $\boldsymbol{W}=\,^t\lbrace \widehat{P},\,\widehat{V}_3\rbrace$, the differential system in Eqs.~(\ref{eq:ApparentMassMomentum}a,b) can be cast in the following form, which is close to that of a homogeneous isotropic fluid, with the exception of non-zero diagonal terms induced by the anisotropic material:
\begin{equation}
\frac{\partial \boldsymbol{W}}{\partial x_3} = \textbf{M}\cdot \boldsymbol{W}
\quad \text{with}\quad \textbf{M} =
\left[
\begin{array}{cc}
-\textrm{i} \textbf{q}\cdot \textbf{k}_\Gamma & \textrm{i} \omega \widetilde{\rho}\, \\
\displaystyle \textrm{i} \omega \big/ \widetilde{B} & -\textrm{i} \textbf{q}\cdot \textbf{k}_\Gamma
\end{array}
\right].
\label{eq:Sol1}
\end{equation}
This system is solved by means of the matrix exponential,
\begin{equation}
\boldsymbol{W}(L)=e^{\textbf{M}L}\cdot\boldsymbol{W}(0) .
\label{eq:Sol1b}
\end{equation}
Due to the continuity of the pressure and normal component of the particle velocity at the layer boundaries $\Gamma_0$ and $\Gamma_L$, the values of the state vector $\boldsymbol{W}$ at the layer boundaries $x_3=0$ and $x_3=L$ read
\begin{equation}
\boldsymbol{W}(L)= \begin{Bmatrix} R+1\\ \\ \displaystyle \frac{R-1}{\widetilde{Z}_0} \\ \end{Bmatrix}
\quad\text{and}\quad
\boldsymbol{W}(0)= \begin{Bmatrix} T \\ \\ \displaystyle \frac{-T }{\widetilde{Z}_0} \\ \end{Bmatrix},
\label{eq:ValueStateVectorBoundary}%
\end{equation}
where $\widetilde{Z}_0=(\widetilde{\rho}_0\widetilde{B}_0)^{1/2} = Z_0/\cos \varphi$ is the apparent impedance of air in the direction $(O,\mathbf{e}_3)$.
To calculate the exponential of the constitutive matrix $\textbf{M}$ in Eq.~(\ref{eq:Sol1b}), the latter is diagonalized according to
\begin{equation}
\textbf{M}~= \textbf{U}\cdot \boldsymbol{\Sigma} \cdot\textbf{U}^{-1} ,
\label{eq:ConstitutiveMatrix}
\end{equation}
where $\boldsymbol{\Sigma}$ is the diagonal matrix of eigenvalues and $\textbf{U}$ the matrix of associated eigenvectors:
\begin{subequations}
\begin{align}
&\boldsymbol{\Sigma}=
\begin{bmatrix}
\textrm{i} k_3^{-}& 0\\
0 &\textrm{i} k_3^{+}\\
\end{bmatrix} \quad \text{with}\quad k_3^{\pm} = -\textbf{q}\cdot \textbf{k}_\Gamma \pm\widetilde{k} ,
\label{eq:SolA}\\
&
\textbf{U} = \frac{1}{\sqrt{2}}
\begin{bmatrix}
\widetilde{Z} & \widetilde{Z} \\
-1 & 1 \\
\end{bmatrix}, \quad \text{with}\quad
\textbf{U}^{-1} =
\frac{1}{\sqrt{2}}
\begin{bmatrix}
1\big/\widetilde{Z} & -1\\
1\big/\widetilde{Z} & 1 \\
\end{bmatrix}.
\label{eq:SolB}%
\end{align}
\label{eq:Sol}%
\end{subequations}
Here, the wavenumber $\widetilde{k}$ and impedance $\widetilde{Z} $ are built from the apparent density $\widetilde{\rho}\, $ and bulk modulus $\widetilde{B}$ as
\begin{equation}
\widetilde{k}= \omega\sqrt{\widetilde{\rho}\, \big/\widetilde{B}}
\qquad\text{and}\qquad
\widetilde{Z} =\sqrt{\widetilde{\rho}\, \widetilde{B}}.
\label{eq:k3andZ3}
\end{equation}
The eigenvalues $k_3^{\pm}$ of the matrix $\textbf{M}$ in Eq.~(\ref{eq:SolA}) provide the dispersion relation in the anisotropic fluid. Indeed, the pressure field $\widehat{P}(x_3)$ takes the form $\widehat{P}=\widehat{P}^+ e^{ \textrm{i} k_3^{+}x_3} +\widehat{P}^- e^{ \textrm{i} k_3^{-}x_3} $, where $\widehat{P}^{\pm}$ are complex amplitudes and $\widehat{P}^\pm e^{ \textrm{i} k_3^{\pm}x_3}$ represent waves propagating in the direction $\pm x_3$. In contrast with isotropic media, the wavenumbers $k_3^{\pm}$ are not necessarily opposite: their sum yields $k_3^{-}+k_3^{+}=-2\,\textbf{q}\cdot \textbf{k}_\Gamma$, which will be at the origin of phase delays in the transmission coefficient, as shown later. This effect is due to the coupling between the directions of the reference coordinate system operated by the anisotropic material when the density matrix is fully populated. However, the coupling vector $\textbf{q}$ between in-plane and normal directions is zero when one principal direction of the (inverse) density tensor coincides with the direction $(O,x_3)$ normal to the boundaries of the layer. In that case, the anisotropy of the material $\Omega$ only influences the apparent bulk modulus $\widetilde{B}$ according to Eq.~(\ref{eq:ApparentBulk}), which then depends only on the rotation of the principal directions around $(O,x_3)$.
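The eigenvalues $\textrm{i}k_3^{\pm}$ of Eq.~(\ref{eq:SolA}) can be checked numerically against a direct eigenvalue computation of the matrix $\textbf{M}$ of Eq.~(\ref{eq:Sol1}), here with illustrative complex-valued apparent parameters:

```python
import numpy as np

omega = 2*np.pi*800.0
rho_t, Bt = 1.9 + 0.05j, 1.35e5 + 2e3j     # illustrative apparent density / bulk modulus
qdotk = 4.2                                 # q . k_Gamma (illustrative)
kt = omega*np.sqrt(rho_t/Bt)                # apparent wavenumber
M = np.array([[-1j*qdotk, 1j*omega*rho_t],
              [1j*omega/Bt, -1j*qdotk]])
eig = np.sort_complex(np.linalg.eigvals(M))
expected = np.sort_complex(np.array([1j*(-qdotk - kt), 1j*(-qdotk + kt)]))
assert np.allclose(eig, expected)           # eig(M) = i k3^{+/-} with k3 = -q.k +/- kt
# their sum (the trace of M) gives the phase-delay term of Eq. (SolA)
assert np.isclose(eig.sum(), -2j*qdotk)
```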
Finally, substitution of the boundary conditions (\ref{eq:ValueStateVectorBoundary}) and of the diagonalized form (\ref{eq:ConstitutiveMatrix}) of the constitutive matrix $\textbf{M}$ into Eq.~(\ref{eq:Sol1b}) leads to the following linear system to solve for the reflection and transmission coefficients:
\begin{equation}
\begin{Bmatrix} R+1\\ \\ \displaystyle \frac{R-1}{\widetilde{Z}_0} \\ \end{Bmatrix}
= \textbf{U} \cdot
\begin{bmatrix} e^{ \textrm{i} k_3^{-}L} & 0\\ \\ 0 & e^{\textrm{i} k_3^{+} L}\\ \end{bmatrix}
\cdot
\textbf{U}^{-1}
\cdot
\begin{Bmatrix} T \\ \\ \displaystyle \frac{-T }{\widetilde{Z}_0} \\ \end{Bmatrix}.
\label{eq:Sol2}
\end{equation}
Resolution of this linear system yields the following reflection and transmission coefficients:
\begin{subequations}
\begin{align}
&R = \frac{ -\textrm{i} \left(\widetilde{Z}\big/\widetilde{Z}_0 - \widetilde{Z}_0\big/\widetilde{Z} \right) \sin (\widetilde{k}L )
}{ 2 \cos (\widetilde{k}L ) -\textrm{i} \left( \widetilde{Z}\big/\widetilde{Z}_0 + \widetilde{Z}_0\big/\widetilde{Z} \right) \sin (\widetilde{k}L ) },
\label{eq:CoeffR} \\
&T=\frac{ 2\, e^{\textrm{i} (\textbf{q}\cdot \textbf{k}_\Gamma ) L}
}{ 2 \cos(\widetilde{k}L) -\textrm{i} \left( \widetilde{Z}\big/\widetilde{Z}_0 + \widetilde{Z}_0\big/\widetilde{Z} \right) \sin(\widetilde{k}L) }.
\label{eq:CoeffT}%
\end{align}%
\label{eq:CoeffRandT}%
\end{subequations}%
As mentioned previously, Eq.~(\ref{eq:CoeffT}) shows that the transmission coefficient $T$ is affected by the phase delay
$ \textbf{q}\cdot \textbf{k}_\Gamma L $ due to the anisotropy of the material. This property will be of paramount importance when presenting the retrieval method in the next section.
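The closed forms (\ref{eq:CoeffRandT}a,b) can be verified against a direct numerical solution of the state-vector system (\ref{eq:Sol1b})-(\ref{eq:ValueStateVectorBoundary}), here with a matrix exponential computed by eigendecomposition and illustrative parameter values:

```python
import numpy as np

# Illustrative apparent parameters (assumed values, not taken from the paper)
omega, Lthick = 2*np.pi*700.0, 0.03
rho_t, Bt = 1.8 + 0.1j, 1.3e5 + 5e3j       # apparent density and bulk modulus
qdotk = 5.0                                 # q . k_Gamma
Z0t = 450.0                                 # apparent impedance of the surrounding fluid
kt, Zt = omega*np.sqrt(rho_t/Bt), np.sqrt(rho_t*Bt)

def expm(A):
    """Matrix exponential via eigendecomposition (M is diagonalizable here)."""
    w, V = np.linalg.eig(A)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

# state-vector propagation W(L) = expm(M L) W(0), then solve for (R, T)
M = np.array([[-1j*qdotk, 1j*omega*rho_t],
              [1j*omega/Bt, -1j*qdotk]])
E = expm(M*Lthick)
A = np.array([[1.0, -(E[0, 0] - E[0, 1]/Z0t)],
              [1.0/Z0t, -(E[1, 0] - E[1, 1]/Z0t)]])
R_num, T_num = np.linalg.solve(A, np.array([-1.0, 1.0/Z0t]))
# closed-form coefficients, including the phase delay in T
D = 2*np.cos(kt*Lthick) - 1j*(Zt/Z0t + Z0t/Zt)*np.sin(kt*Lthick)
R_ana = -1j*(Zt/Z0t - Z0t/Zt)*np.sin(kt*Lthick)/D
T_ana = 2*np.exp(1j*qdotk*Lthick)/D
assert np.allclose([R_num, T_num], [R_ana, T_ana])
```

The numerical route and the closed forms agree to machine precision, and only the transmission coefficient carries the anisotropy-induced phase factor $e^{\textrm{i}(\textbf{q}\cdot\textbf{k}_\Gamma)L}$.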
\section{Retrieval method}
\label{sec:InverseProblem}
The problem now consists in retrieving the six components of the symmetric density tensor, and the value of the bulk modulus from the knowledge of the thickness $L$ of the layer and the reflection and transmission coefficients,
$R$ and $T$, at specific azimuthal and elevation angles $\psi$ and $\varphi$, that is at specific in-plane wavenumbers $\textbf{k}_\Gamma=(k_1,k_2)$, see Eq.~(\ref{eq:IncidentWavenumbers}). Once estimated, the material parameters will be marked by the superscript $^{\dag}$ in the form $(\rho_{11}^{\dag},\rho_{12}^{\dag},\rho_{13}^{\dag},\rho_{22}^{\dag},\rho_{23}^{\dag},\rho_{33}^{\dag}, B^{\dag})$.
Central to the retrieval method is the fact that the apparent impedance $\widetilde{Z} $ and wavenumber $\widetilde{k}$, and subsequently the apparent density $\widetilde{\rho}$ and bulk modulus $\widetilde{B}$, can be directly retrieved from the reflection and transmission coefficients, assuming the prior knowledge of the phase delay $ \textbf{q}\cdot \textbf{k}_\Gamma L $ affecting the transmission coefficient in Eq.~(\ref{eq:CoeffT}).
Indeed, inverting the system given by Eqs.~(\ref{eq:Sol2}) with the matrices expressed in Eqs.~(\ref{eq:Sol}a,b), the Nicolson--Ross--Weir procedure \cite{Nicolson1970,Weir1974} can be extended to oblique incidence and anisotropic media as follows, see Supplementary Material for details:
\begin{subequations}
\begin{align}
& \widetilde{Z}^\dag= \pm\widetilde{Z}_0
\sqrt{\frac{
\left(Te^{-\textrm{i} \textbf{q}\cdot \textbf{k}_\Gamma L} \right)^2-\left(1+R \right)^2
}{
\left(Te^{-\textrm{i} \textbf{q}\cdot \textbf{k}_\Gamma L} \right)^2-\left(1-R \right)^2}} ,
\label{eq:CoeffZ} \\
& e^{\mp \textrm{i} \widetilde{k}^\dag L}=\, \chi^{\mp} =
\( 1 + \displaystyle \frac{ \widetilde{Z}_0 \mp \widetilde{Z}^\dag }{\widetilde{Z}_0 \pm \widetilde{Z}^\dag} R \)
\frac{1}{ T e^{-\textrm{i} \textbf{q}\cdot \textbf{k}_\Gamma L}} .
\label{eq:ExpKL}
\end{align}%
\label{eq:CoeffZandExpKL}%
\end{subequations}%
While the sign in Eq.~(\ref{eq:CoeffZ}) is actually determined by the passivity constraint $\textrm{Re}(\widetilde{Z}^\dag )\geq0$,
both signs in Eq.~(\ref{eq:ExpKL}) are physically sound. However, inverting $e^{- \textrm{i} \widetilde{k}^\dag L}$ is preferred here since the negative $x_3$-going waves usually carry more energy than the positive ones for a negative $x_3$-going incident wave. This provides
\begin{equation}
\widetilde{k}^\dag= \left(-\textrm{ang}\left(\chi^- \right)+\textrm{i} \log |\chi^-|+2 n \pi \right) /L ,
\label{eq:Retrievedk3}
\end{equation}
where $\textrm{ang}$ denotes the phase angle and $\log$ the natural logarithm. In Eq.~(\ref{eq:Retrievedk3}), the term $2 n \pi$ with integer $n\in \mathbb{Z}$ is used to unwrap the phase of $\chi^-$ so that
$\widetilde{k}^\dag$ is continuous over the frequencies, with $\widetilde{k}^\dag=0$ at the frequency $\omega=0$. The integer $n$ has to be determined and depends on the nature of the material, but
it is usually zero when initiating the procedure at very low frequency. Further, using Eq.~(\ref{eq:k3andZ3}) and the values of $\widetilde{Z} ^\dag$ and $\widetilde{k}^\dag$, the apparent density and bulk modulus are retrieved as
\begin{equation}
\widetilde{\rho}^\dag = \widetilde{Z}^\dag \widetilde{k}^\dag \big/ \omega
\qquad\text{and}\qquad
\widetilde{B}^\dag = \omega \, \widetilde{Z}^\dag \big/ \widetilde{k}^\dag.
\label{eq:Rho3andB3}
\end{equation}
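A numerical round trip illustrates this inversion: synthesize $R$ and $T$ from known apparent parameters via Eqs.~(\ref{eq:CoeffRandT}a,b), then recover $\widetilde{Z}^\dag$, $\widetilde{k}^\dag$, $\widetilde{\rho}^\dag$ and $\widetilde{B}^\dag$ from Eqs.~(\ref{eq:CoeffZ})-(\ref{eq:Rho3andB3}) with $n=0$. The values below are illustrative and chosen so that $|\textrm{Re}(\widetilde{k})L|<\pi$:

```python
import numpy as np

# Illustrative "true" apparent parameters (assumed values)
omega, Lthick, Z0t = 2*np.pi*600.0, 0.03, 430.0
rho_t, Bt = 1.7 + 0.08j, 1.32e5 + 4e3j
kt, Zt = omega*np.sqrt(rho_t/Bt), np.sqrt(rho_t*Bt)
qdotkL = 0.12                                    # phase delay q.k_Gamma * L, assumed known

# forward problem: closed-form R and T
D = 2*np.cos(kt*Lthick) - 1j*(Zt/Z0t + Z0t/Zt)*np.sin(kt*Lthick)
R = -1j*(Zt/Z0t - Z0t/Zt)*np.sin(kt*Lthick)/D
T = 2*np.exp(1j*qdotkL)/D

# inversion: phase-corrected transmission, impedance, wavenumber (n = 0 branch)
Tc = T*np.exp(-1j*qdotkL)
Z_r = Z0t*np.sqrt((Tc**2 - (1 + R)**2)/(Tc**2 - (1 - R)**2))
if Z_r.real < 0:
    Z_r = -Z_r                                   # passivity constraint: Re(Z) >= 0
chi_m = (1 + (Z0t - Z_r)/(Z0t + Z_r)*R)/Tc       # chi^- = e^{-i k L}
k_r = (-np.angle(chi_m) + 1j*np.log(np.abs(chi_m)))/Lthick
rho_r, B_r = Z_r*k_r/omega, omega*Z_r/k_r
assert np.allclose([Z_r, k_r, rho_r, B_r], [Zt, kt, rho_t, Bt])
```

All four apparent quantities are recovered exactly; in practice the $n=0$ branch holds as long as the procedure is initiated at sufficiently low frequency, as stated above.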
It is important here to emphasize that Eqs.~(\ref {eq:CoeffZ}) to (\ref{eq:Retrievedk3}) actually hold for any in-plane wavevector $\textbf{k}_\Gamma=(k_1,k_2)$, provided that the coefficients $ q_1 $ and $q_2$ are known to calculate the phase delay $ \textbf{q}\cdot \textbf{k}_\Gamma L $ in Eq.~(\ref{eq:CoeffT}). Since $ q_1 $ and $q_2$ are independent of the wavevector $\textbf{k}_\Gamma=(k_1,k_2)$, see Eq.~(\ref{eq:ApparentParameters}), they can be retrieved by using two pairs of transmission coefficients as follows. Choosing $T(k'_1,0)$, $T(-k'_1,0)$ and $T(0,k'_2)$, $T(0,-k'_2)$ with wavenumbers $k'_1\neq0$ and $k'_2\neq0$ ensures the equality of the denominators of $T(k'_1,0)$ and $T(-k'_1,0)$ on the one hand, and those of $T(0,k'_2)$ and $T(0,-k'_2)$ on the other hand, see Eq. (\ref{eq:CoeffT}). This can be explained by the fact that denominators of reflection and transmission coefficients actually represent the dispersion relation of the anisotropic layer modes. Using this property, the coefficients $q_1^\dag$ and $q_2^\dag$ are retrieved from the following relations:
\begin{equation}
e^{2\textrm{i} q_1^\dag k'_1 L} =\frac{T(k'_1,0)}{T(-k'_1,0)};
\qquad
e^{2\textrm{i} q_2^\dag k'_2 L} =\frac{T(0,k'_2)}{T(0,-k'_2)}.
\label{eq:Retrievedqj}
\end{equation}
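For instance, since the denominators of the pair $T(k'_1,0)$ and $T(-k'_1,0)$ coincide, $q_1^\dag$ follows from the phase of their ratio. The sketch below uses an illustrative real-valued $q_1$ and a common denominator chosen arbitrarily; for a complex-valued $q_1$ the full complex logarithm of the ratio would be used instead of the phase alone:

```python
import numpy as np

q1, k1p, Lthick = 0.35, 9.0, 0.03   # illustrative coupling coefficient, wavenumber, thickness
D = 1.8 - 0.6j                      # common denominator of the pair (illustrative)
T_plus = 2*np.exp(1j*q1*k1p*Lthick)/D    # T(k1', 0)
T_minus = 2*np.exp(-1j*q1*k1p*Lthick)/D  # T(-k1', 0)
# ratio = e^{2 i q1 k1' L}: the denominators cancel out
q1_r = np.angle(T_plus/T_minus)/(2*k1p*Lthick)
assert np.isclose(q1_r, q1)
```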
At this stage, from the four transmission coefficients using the wavevectors $\textbf{k}_\Gamma=\pm k'_1 \mathbf{e}_1$ and $\textbf{k}_\Gamma=\pm k'_2 \mathbf{e}_2$ with any non-zero wavenumbers $k'_1\neq0$ and $k'_2\neq0$, the coefficients $q_1^\dag$ and $q_2^\dag$ have been retrieved. Therefore, the apparent parameters $\widetilde{Z}^\dag$ and $\widetilde{k}^\dag$, and consequently $\widetilde{\rho}^\dag$ and $\widetilde{B}^\dag$, are not only known for $\textbf{k}_\Gamma=\pm k'_1 \mathbf{e}_1$ and $\textbf{k}_\Gamma=\pm k'_2 \mathbf{e}_2$, but can now be assessed for any incident wave, using Eqs. (\ref{eq:CoeffZ}-\ref{eq:Rho3andB3}). This central property is used in what follows to retrieve the bulk modulus $B$ and all six components of the density tensor $\boldsymbol{\rho}$. Note that to obtain $\widetilde{B}^{\dag1}\equiv \widetilde{B}^{\dag} (k_1',0)$ and $\widetilde{B}^{\dag2}\equiv\widetilde{B}^{\dag} (0,k_2')$, which are respectively equal to $\widetilde{B}^\dag (-k_1',0)$ and $\widetilde{B}^\dag (0,-k_2')$, only two additional reflection coefficients are needed, i.e., $R(k'_1,0)$ and $R(0,k'_2)$.
To gain access to the physical bulk modulus $B$, a fifth characterization test at normal incidence $(k_1,k_2)=(0,0)$ is here considered, which provides, according to Eq.~(\ref{eq:ApparentBulk2A}),
\begin{equation}
B^{\dag} = \widetilde{B}^\dag (0,0).
\label{eq:RetrievedBulk}
\end{equation}
Once the bulk modulus $B^{\dag}$ is determined, it is straightforward to retrieve the coefficients $\xi_{11}$ and $\xi_{22} $ from the four characterization tests already performed with the wavevectors $\textbf{k}_\Gamma=\pm k'_1 \mathbf{e}_1$ and $\textbf{k}_\Gamma=\pm k'_2 \mathbf{e}_2$. Indeed, using Eq.~(\ref{eq:ApparentBulk2A}), the following relation holds:
\begin{equation}
\forall j \in \{1,2\}, \quad
\xi_{jj}^{\dag}= \frac{\omega^2}{(k'_j)^2} \( \frac{1}{B^{\dag}} - \frac{1}{\widetilde{B}^{\dag j}} \).
\label{eq:RetrievedXiJJ}
\end{equation}
To be in a position to retrieve all coefficients of the density tensor, a sixth and final characterization test with the wavevector $\textbf{k}_\Gamma=(k''_1,k''_2)$ such that $k''_1\neq 0$ and $k''_2\neq 0$ is considered. As it happens, with the knowledge of $B^{\dag} $, $\xi_{11}^{\dag}$ and $\xi_{22}^{\dag}$, equation ~(\ref{eq:ApparentBulk2A}) is solved for $\xi_{12}$ to yield:
\begin{equation}
\xi^{\dag}_{12} =
\frac{\omega^2}{2 k''_1 k''_2} \( \frac{1}{B^{\dag}} - \frac{ 1}{\widetilde{B}^{\dag3}} \)
- \frac{\xi_{11}^{\dag} \, k''_1 }{2 k''_2} - \frac{\xi_{22}^{\dag} \, k''_2 }{2 k''_1 },
\label{eq:RetrievedXi12}
\end{equation}
with $\widetilde{B}^{\dag3}\equiv\widetilde{B}^{\dag}(k''_1,k''_2)$. The coefficients of the inverse density tensor $\boldsymbol{H}$ are retrieved from Eqs.~(\ref{eq:ApparentParameters}) and (\ref{eq:ApparentBulk2B}) to provide the following relations, where $i,\, j\in\{1,2\}$:
\begin{equation}
H_{33}^{\dag} = \frac{1}{\widetilde{\rho}^\dag},
\quad
H_{ij}^{\dag} = \xi_{ij}^{\dag} + H_{33}^{\dag} q_i^\dag q_j^\dag ,
\quad
H_{i3}^{\dag} = q_i^\dag H_{33}^{\dag}.
\label{eq:InverseDensity}%
\end{equation}
Finally, the density tensor can be obtained by inverting $\boldsymbol{H}$ numerically, or by using the following expressions derived in detail in the Supplementary Material:
\begin{subequations}
\begin{align}
& \rho_{11}^{\dag} = \frac{ \xi_{22}^{\dag} }{\Delta_\xi^{\dag}} ,
\qquad \rho_{22}^{\dag} = \frac{\xi_{11}^{\dag} }{\Delta_\xi^{\dag} } ,
\qquad \rho_{12}^{\dag} = \frac{ - \xi_{12}^{\dag} }{ \Delta_\xi^{\dag} } ,
\label{eq:RetrievedRhoInPlane} \\
& \rho_{13}^{\dag} = \frac{ \xi_{12}^{\dag} q_2^{\dag} - \xi_{22}^{\dag} q_1^{\dag}}{ \Delta_\xi^{\dag}} ,
\qquad
\rho_{23}^{\dag} = \frac{ \xi_{12}^{\dag} q_1^{\dag} - \xi_{11}^{\dag} q_2^{\dag}}{\Delta_\xi^{\dag}} ,
\label{eq:RetrievedRhoCoupling} \\
& \rho_{33}^{\dag} = \widetilde{\rho}\, ^{\dag} + \frac{1}{\Delta_\xi^{\dag}}\Big( \xi_{22}^{\dag} (q_1^{\dag})^2 +\xi_{11}^{\dag} (q_2^{\dag})^2 -2\, \xi_{12}^{\dag} q_1^{\dag} q_2^{\dag} \Big) ,
\label{eq:RetrievedRho33}
\end{align}
\label{eq:RetrievedRho}%
\end{subequations}
where $ \Delta_\xi^{\dag} = \xi_{11}^{\dag} \xi_{22}^{\dag} - ( \xi_{12}^{\dag} )^2$. This brings an end to the retrieval procedure. The seven rheological parameters characterizing the anisotropic fluid material in the layer (the bulk modulus and the six coefficients of the density tensor) have been retrieved from six transmission coefficients and four associated reflection coefficients. They derive from tests performed at normal incidence $\varphi\equiv0$; at oblique incidence ($\varphi\neq0$) with opposite pairs of in-plane wavevectors $\textbf{k}_\Gamma=(\pm k'_1,0)$ and $\textbf{k}_\Gamma=(0,\pm k'_2)$ oriented along the axes $\mathbf{e}_1$ and $\mathbf{e}_2$ of the reference coordinate system; and at oblique incidence ($\varphi\neq0$) with in-plane wavevector $\textbf{k}_\Gamma=(k''_1,k''_2)$ out of the axes of the reference coordinate system, that is $k''_1 k''_2 \neq 0$.
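The reconstruction formulas (\ref{eq:RetrievedRho}a-c) can be checked by a round trip: start from an arbitrary symmetric density tensor, compute $q_1$, $q_2$, $\widetilde{\rho}$ and $\xi_{ij}$ from $\boldsymbol{H}=\boldsymbol{\rho}^{-1}$, and rebuild $\boldsymbol{\rho}$ (illustrative values):

```python
import numpy as np

rho = np.array([[1.6, 0.2, 0.1],
                [0.2, 2.1, 0.3],
                [0.1, 0.3, 2.9]])           # illustrative symmetric density tensor
H = np.linalg.inv(rho)                      # inverse density tensor
q1, q2 = H[0, 2]/H[2, 2], H[1, 2]/H[2, 2]   # coupling coefficients
rho_t = 1.0/H[2, 2]                         # apparent density
xi = H[:2, :2] - H[2, 2]*np.outer([q1, q2], [q1, q2])
Dxi = xi[0, 0]*xi[1, 1] - xi[0, 1]**2       # Delta_xi
# Eqs. (RetrievedRho a-c)
r11, r22, r12 = xi[1, 1]/Dxi, xi[0, 0]/Dxi, -xi[0, 1]/Dxi
r13 = (xi[0, 1]*q2 - xi[1, 1]*q1)/Dxi
r23 = (xi[0, 1]*q1 - xi[0, 0]*q2)/Dxi
r33 = rho_t + (xi[1, 1]*q1**2 + xi[0, 0]*q2**2 - 2*xi[0, 1]*q1*q2)/Dxi
rho_rec = np.array([[r11, r12, r13],
                    [r12, r22, r23],
                    [r13, r23, r33]])
assert np.allclose(rho_rec, rho)            # the full tensor is recovered
```

This also makes explicit that Eqs.~(\ref{eq:RetrievedRho}a-c) are nothing but the block inversion of $\boldsymbol{H}$ with $\boldsymbol{\xi}$ playing the role of the Schur complement of $H_{33}$.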
\begin{figure*}
\includegraphics[width=\textwidth]{Fig2-Results_2.pdf}
\caption{\label{fig:FIG2}{(Color online) Schematic view of the unit cell (a). Reconstructed (${\color{blue}{\times}}$) and initial ($\color{blue}{\rule[0.5ex]{1em}{1pt}}$) real and imaginary parts of the normalized bulk modulus (b-c). Reconstructed ($\times$) and initial ($\rule[0.5ex]{1em}{1pt}$) real and imaginary parts of the six normalized components of the symmetric density tensor (d-e). The inset in panel (e) indicates the color coding of the different components. Initial and reconstructed principal directions (f): $x_I$ ($\color{blue}{\rule[0.5ex]{1em}{1pt}}$), $x_{I\!\!I}$ ($\color{green}{\rule[0.5ex]{1em}{1pt}}$), and $x_{I\!\!I\!\!I}$ ($\color{red}{\rule[0.5ex]{1em}{1pt}}$); $x_I^\dag$ ($\color{blue}{--}$), $x_{I\!\!I}^\dag$ ($\color{green}{--}$), and $x_{I\!\!I\!\!I}^\dag$ ($\color{red}{--}$). Initial and reconstructed real and imaginary parts of the normalized densities in the principal directions (g-h): $\rho_I$ ($\color{blue}{\rule[0.5ex]{1em}{1pt}}$), $\rho_{I\!\!I}$ ($\color{green}{\rule[0.5ex]{1em}{1pt}}$), and $\rho_{I\!\!I\!\!I}$ ($\color{red}{\rule[0.5ex]{1em}{1pt}}$); $\rho_I^\dag$ ($\color{blue}{\times}$), $\rho_{I\!\!I}^\dag$ ($\color{green}{\square}$), and $\rho_{I\!\!I\!\!I}^\dag$ ($\color{red}{\circ}$).}}
\end{figure*}
It is interesting to note that the apparent density $ \widetilde{\rho}\, $ (or equivalently $H_{33}=1/\widetilde{\rho}\, $) can actually be estimated independently in each of the six tests with the prior knowledge of the coefficients $q_1^{\dag}$ and/or $q_2^{\dag}$, see Eqs.~(\ref{eq:CoeffZ}) to (\ref{eq:Retrievedk3}). This property can be used to assess the accuracy of the retrieved parameters (all tests should provide the same apparent density $ \widetilde{\rho} $), and it can also be used to estimate $ \widetilde{\rho} $ precisely by averaging all its retrieved values.
In the same vein, it is worth noting that the retrieval method has been presented on the basis of only six characterization tests. However, the wavenumbers $k'_1$ and $k'_2$ used for tests at oblique incidence in the directions $\mathbf{e}_1$ and $\mathbf{e}_2$, and the wavenumbers $k''_1$ and $k''_2$ used for tests at oblique incidence out of the directions $\mathbf{e}_1$ and $\mathbf{e}_2$, are actually not specified. Several repetitions of these tests with various values of $k'_1$, $k'_2$, $k''_1$ and $k''_2$ (that is, various angles of incidence $\varphi$ for all tests and various azimuthal angles $\psi$ for the last test) can therefore be performed. This increases the number of tests, but it allows averaging the various retrieved values of the material parameters, and hence smoothing experimental or numerical noise in the initial data.
Finally, once the full symmetric tensor of (inverse) density is retrieved, it can be diagonalized to yield $\boldsymbol{\rho}^\dag= \mathbf{R}^\dag\cdot \boldsymbol{\rho}^{\star\dag} \cdot \:^t\mathbf{R}^\dag$ where $ \boldsymbol{\rho}^{\star\dag}$ is the diagonal matrix of the retrieved principal densities, see Sec. \ref{sec:DirectProblem}. In particular, the columns of the retrieved rotation tensor $\mathbf{R}^\dag$ represent the coordinates of the right-handed orthonormal eigenvectors of the density tensor in the reference coordinate system $\mathcal{R}_0$. This requires the eigenvectors to be normalized and correctly oriented. The latter provide valuable insight into the orientation of the material axes with respect to $\mathcal{R}_0$, which can be related to the orientation of the anisotropic layer microstructure. It is worth recalling also that such an orientation of the material axes can be characterized by the roll, pitch, and yaw angles $\theta_I$, $\theta_{I\!\!I}$ and $\theta_{I\!\!I\!\!I}$. However, these Euler angles are not unique without additional constraints on the range of their values, and without a convention specifying in which order the principal densities are sorted in the diagonal density matrix $ \boldsymbol{\rho}^{\star\dag}$. Indeed, the order of the principal densities in $ \boldsymbol{\rho}^{\star\dag}$ determines which eigenvector of the density tensor actually plays the role of the vector $\mathbf{e}_{I}$, $\mathbf{e}_{I\!\!I}$ or $\mathbf{e}_{I\!\!I\!\!I}$ in the material coordinate system $\mathcal{R}_\rho$.
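A sketch of this final diagonalization step, with illustrative data; the sign convention below simply enforces a right-handed eigenvector basis:

```python
import numpy as np

rho_star = np.diag([1.5, 2.0, 3.0])                 # illustrative principal densities
c, s = np.cos(0.4), np.sin(0.4)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])   # rotation about e3
rho = R @ rho_star @ R.T                            # "retrieved" tensor to diagonalize
w, V = np.linalg.eigh(rho)      # ascending principal densities, orthonormal eigenvectors
if np.linalg.det(V) < 0:
    V[:, 0] *= -1               # flip one eigenvector to make the basis right-handed
assert np.allclose(V @ np.diag(w) @ V.T, rho)
assert np.allclose(w, [1.5, 2.0, 3.0])              # principal densities recovered
assert np.isclose(np.linalg.det(V), 1.0)            # proper rotation
```

As stated above, the correspondence between the sorted eigenvalues and the labels $I$, $I\!\!I$, $I\!\!I\!\!I$ remains a matter of convention.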
\section{Application and validation}
\label{sec:Application}
In this section, the ability of the retrieval method to accurately estimate the bulk modulus and all six components of the density tensor is demonstrated.
Here, the procedure is applied to an anisotropic viscothermal fluid layer of thickness $L=3\textrm{ cm}$ under ambient conditions.
For numerical applications, the air density, adiabatic constant, dynamic viscosity, specific heat capacity at constant pressure, and thermal conductivity at equilibrium are
$\rho_0=1.213\textrm{ kg.m}^{-3}$, $\gamma=1.4$, $\eta=1.839\times 10^{-5} \textrm{ Pa.s}$, $c_p=1.005\times 10^3 \textrm{ J.kg$^{-1}$.K$^{-1}$}$,
and $\kappa=2.5\times 10^{-2}\textrm{ W.m$^{-1}$.K$^{-1}$}$ respectively, while the atmospheric pressure is $P_0=1.013 \times 10^5 \textrm{ Pa}$.
This leads to the bulk modulus $B_0=\gamma P_0$.
The material in the layer is assumed to be made of a periodic orthorhombic lattice of overlapping ellipsoids filled with air.
As shown in Fig. \ref{fig:FIG2} (a), the rigid frame in its unit cell is obtained by extrusion of the ellipsoid having semi-axes
$r_I=0.66\,\ell $, $r_{I\!\!I}=1.32\,\ell$, and $r_{I\!\!I\!\!I}=1.98\,\ell$ in the directions $(\mathbf{e}_{I},\mathbf{e}_{I\!\!I},\mathbf{e}_{I\!\!I\!\!I})$, from the rectangular parallelepiped having edge lengths
$l_I=\ell$, $l_{I\!\!I}=2\,\ell$, and $l_{I\!\!I\!\!I}=3\,\ell$ in that same coordinate system.
Under the condition of the scale separation $3 k_0 \ell \ll 1$, the theory of two-scale asymptotic homogenization \cite{SanchezPalencia1980,Auriault2009} can be applied to the lattice to describe it as an effective fluid material satisfying the governing equations (\ref{eq:GoverningEqnsA}). Due to symmetries in the unit cell, the effective density tensor is diagonal in the Cartesian coordinate system $\mathcal{R}_\rho=(\mathbf{e}_{I},\mathbf{e}_{I\!\!I},\mathbf{e}_{I\!\!I\!\!I})$. Each principal density $\rho_J$ with $J=I,I\!\!I,I\!\!I\!\!I$ of this diagonal density tensor, as well as the effective bulk modulus $B$ are then approximated according to the following Johnson--Champoux--Allard--Lafarge (JCAL) formulas \cite{Johnson1987,Lafarge1997},
\begin{subequations}
\begin{align}
&\frac{\rho_J}{\rho_0}
= \frac{ \alpha^\infty_J}{\phi} + \textrm{i} \, \frac{ \eta / K^0_J }{ \omega \rho_0 }\sqrt{1-\frac{\textrm{i} \omega\rho_0}{\eta} \left(\frac{2\alpha^{\infty}_J K^0_J}{\phi\Lambda_J}\right)^2} ,
\label{eq:JCALa} \\
&\frac{\gamma-1}{ \gamma - \displaystyle \frac{B_0}{ \phi B} }
= 1+ \textrm{i} \, \frac{ \phi \kappa / \Theta^0 }{\omega \rho_0 c_p }\sqrt{ 1-\frac{\textrm{i} \omega \rho_0 c_p }{\kappa }\left(\frac{2 \Theta^0 }{\phi\Lambda'}\right)^2}.
\label{eq:JCALb} %
\end{align}
\label{eq:JCAL}%
\end{subequations}
Here, $\phi$ is the porosity, $ K^0_J$, $\alpha^\infty_J$ and $\Lambda_J$ are the visco-static permeability, the high-frequency tortuosity, and the characteristic viscous length in the direction $\mathbf{e}_{J}$ with $J=I,I\!\!I,I\!\!I\!\!I$, and $ \Theta^0$ and $\Lambda'$ are the thermo-static permeability and the characteristic thermal length. All these parameters are defined from periodic cell problems provided by the theory of two-scale asymptotic homogenization \cite{SanchezPalencia1980,Auriault2009}. The latter, recalled in the Supplementary Material, are solved numerically by means of the Finite Element Method using the software COMSOL Multiphysics\,\textregistered . The values of the calculated JCAL parameters are
\begin{subequations}
\begin{align}
& K^0_{I} = 0.11\, \ell^2,
&& K^0_{I\!\! I} = 0.08\, \ell^2,
&& K^0_{I\!\!I\!\!I} = 0.09\, \ell^2,
\label{eq:JCALparamA} \\
& \alpha^\infty_{I} = 1.18 ,
&& \alpha^\infty_{I\!\! I} = 1.06 ,
&& \alpha^\infty_{I\!\!I\!\!I} = 1.04 ,
\label{eq:JCALparamB} \\
& \Lambda_{I} = 1.02\, \ell,
&& \Lambda_{I\!\! I} = 1.20\, \ell,
&& \Lambda_{I\!\!I\!\!I} = 1.34\, \ell,
\label{eq:JCALparamC} \\
& \phi=0.91,
&& \Theta^0=0.20\, \ell^2,
&& \Lambda'=1.84\, \ell.
\label{eq:JCALparamD} %
\end{align}
\label{eq:JCALparam}%
\end{subequations}
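As a numerical illustration of Eqs.~(\ref{eq:JCAL}), the following Python sketch (our own helper functions, using the air constants and JCAL parameters quoted above) evaluates the normalized dynamic density and bulk modulus for one principal direction. It is a direct transcription of the formulas, not the paper's code.

```python
import cmath

# Air constants and characteristic pore size quoted in the text.
RHO0, GAMMA, ETA = 1.213, 1.4, 1.839e-5
CP, KAPPA, P0 = 1.005e3, 2.5e-2, 1.013e5
ELL = 200e-6

def jcal_density(omega, phi, alpha_inf, K0, Lam, rho0=RHO0, eta=ETA):
    """Normalized dynamic density rho_J / rho_0 (first JCAL formula)."""
    root = cmath.sqrt(1 - 1j * omega * rho0 / eta
                      * (2 * alpha_inf * K0 / (phi * Lam)) ** 2)
    return alpha_inf / phi + 1j * (eta / K0) / (omega * rho0) * root

def jcal_bulk(omega, phi, Theta0, Lamp, rho0=RHO0, cp=CP,
              kappa=KAPPA, gamma=GAMMA, p0=P0):
    """Normalized bulk modulus B / P_0, by inverting the second JCAL formula."""
    root = cmath.sqrt(1 - 1j * omega * rho0 * cp / kappa
                      * (2 * Theta0 / (phi * Lamp)) ** 2)
    F = 1 + 1j * (phi * kappa / Theta0) / (omega * rho0 * cp) * root
    # (gamma-1)/(gamma - B0/(phi*B)) = F  =>  B = B0 / (phi*(gamma - (gamma-1)/F))
    return gamma * p0 / (phi * (gamma - (gamma - 1) / F)) / p0
```

A quick sanity check of such an implementation: at high frequency the density tends to $\alpha^\infty_J/\phi$ and the bulk modulus to the adiabatic value $\gamma/\phi$, while at low frequency the bulk modulus tends to the isothermal value $1/\phi$.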
In what follows, the characteristic pore size $\ell=200\textrm{ $\mu$m}$ has been chosen so that $3 k_0\ell \approx 0.11$ at $ 10\textrm{ kHz}$,
which guarantees that the condition of scale separation is sharply satisfied over the frequency range $[10\textrm{ Hz}, 10\textrm{ kHz}]$ in which the retrieval method will be applied.
With such a sharp separation of scales, the Drude boundary layers at the layer interfaces $\Gamma_0$ and $\Gamma_L$ can be neglected \cite{Levy1977,SanchezPalencia1980}. Moreover, the
layer includes 150 unit cells in its thickness, which guarantees a bulk behaviour of the material.
Now, the diagonal density matrix $\boldsymbol{\rho}^{\star} $ with principal densities given by Eq.~(\ref{eq:JCALa}) is rotated by the roll, pitch, and yaw angles $\theta_{I}=\pi/6$, $\theta_{I\!\!I}=\pi/4$, and $\theta_{I\!\!I\!\!I}=\pi/3$ to yield the fully-anisotropic density tensor $\boldsymbol{\rho}= \mathbf{R}\cdot \boldsymbol{\rho}^{\star} \cdot \,^t\mathbf{R}$ with $\mathbf{R}=\mathbf{R}_3\left(\theta_{I\!\!I\!\!I} \right)\mathbf{R}_2\left(\theta_{I\!\!I} \right) \mathbf{R}_1\left(\theta_{I} \right)$. Reflection and transmission coefficients related to incident plane waves impinging on the layer are then computed
by means of the Finite Element Method using the software COMSOL Multiphysics\,\textregistered, thus ensuring that the inverse crime is not committed.
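For concreteness, the construction of the rotated density tensor can be sketched as follows (Python/NumPy). Here $\mathbf{R}_1$, $\mathbf{R}_2$, $\mathbf{R}_3$ are taken to be the elementary rotations about the first, second, and third axes, which is our reading of the notation, and the principal densities are placeholders rather than the complex JCAL values.

```python
import numpy as np

def R1(t):  # rotation about e_1 (roll)
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def R2(t):  # rotation about e_2 (pitch)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def R3(t):  # rotation about e_3 (yaw)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Full rotation R = R3(yaw) R2(pitch) R1(roll), as in the text.
R = R3(np.pi / 3) @ R2(np.pi / 4) @ R1(np.pi / 6)
rho_star = np.diag([1.0, 2.0, 3.0])   # placeholder principal densities
rho = R @ rho_star @ R.T              # fully anisotropic, symmetric tensor
```

By construction $\mathbf{R}$ is orthogonal with unit determinant, and $\boldsymbol{\rho}$ is symmetric with the prescribed principal values.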
To illustrate the generality of the retrieval method, different angles of incidence have been considered for the different tests, although they can very well be identical in practice.
To form the pair of in-plane wavevectors $ \textbf{k}_\Gamma = \pm k'_1 \mathbf{e}_1$ having orientations of the incident plane given by the angles $\psi=0$ and $\psi=\pi$, the angle of incidence
$\varphi=\pi/3$ has been used. To form the pair of in-plane wavevectors $ \textbf{k}_\Gamma = \pm k'_2 \mathbf{e}_2 $ in the planes of incidence oriented by $\psi=\pm\pi/2$,
the angle of incidence $\varphi=\pi/6$ has been used. Finally, the angle of incidence $\varphi=\pi/4$ in the plane of incidence oriented by $\psi=\pi/3$ has been considered to
yield the in-plane wavevector $ \textbf{k}_\Gamma = k''_1 \mathbf{e}_1 + k_2'' \mathbf{e}_2$, see Sec.~\ref{sec:InverseProblem}.
The retrieval method has been applied to give access to the bulk modulus and the six components of the density tensor. The reconstructed normalized bulk modulus $B^{\dag}/P_0$ and normalized components of the density tensor $\rho^{\dag}_{ij}/\rho_0$, $i,j \in [1,2,3]$, are given in Fig. \ref{fig:FIG2}(b-c) and Fig. \ref{fig:FIG2}(d-e). The retrieved parameters are in excellent agreement with those used in the direct problem. However, the real and imaginary parts of the non-diagonal components do not have the expected signs to comply with causality. This apparent feature is actually due to the fact that the reference coordinate system does not coincide with the principal directions of the material. To lift this ambiguity, the density tensor has been diagonalized, and the normalized principal densities $\rho_I^{\dag}/\rho_0$, $\rho_{I\!\!I}^{\dag}/\rho_0$ and $\rho_{I\!\!I\!\!I}^{\dag}/\rho_0$ are given in Fig. \ref{fig:FIG2}(g-h). These principal densities comply with the causality conditions, $\textrm{Re}\left(\rho_J^{\dag}\right)\geq0$ and $\textrm{Im}\left(\rho_J^{\dag}\right)\geq0$, $J=I, I\!\!I, I\!\!I\!\!I$.
To sort the densities along the principal directions, an orthonormal basis $(\mathbf{x}_{I}^{\dag},\mathbf{x}_{I\!\!I}^{\dag},\mathbf{x}_{I\!\!I\!\!I}^{\dag})$ is first built from the eigenvector matrix reconstructed at the highest frequency, which is compared with the prescribed principal directions in Fig. \ref{fig:FIG2}(f). The eigenvectors at each frequency are then compared to this basis (via a simple scalar product with the reconstructed vectors) to sort the densities. Note that the reconstructed principal directions are rotated when compared with the input data but are correctly estimated. The additional recovery of the high-frequency limit of the tortuosities $\alpha_J^{\infty}$, viscous characteristic lengths $\Lambda_J$, and static viscous permeabilities $K_J^0$ in the principal directions $J=I, I\!\!I, I\!\!I\!\!I$, as well as the open porosity $\phi$, thermal characteristic length $\Lambda'$, and static thermal permeability $\Theta^0$, is beyond the scope of the present article but may be achieved by adapting existing methods such as Ref. \cite{Niskanen2017}.
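A minimal sketch of this sorting step, assuming real-valued unit eigenvectors (the names and encoding are ours): each reference direction is matched to the retrieved eigenvector with which it has the largest scalar product in absolute value, which also fixes the sign ambiguity of the eigenvectors.

```python
import numpy as np

def sort_to_reference(w, V, V_ref):
    """Reorder retrieved eigenpairs (w, columns of V) so that column j
    matches reference direction j (largest |scalar product|), and flip
    signs so the matched scalar products are positive."""
    order, signs = [], []
    for j in range(3):
        dots = V.T @ V_ref[:, j]          # scalar products with all retrieved vectors
        k = int(np.argmax(np.abs(dots)))
        order.append(k)
        signs.append(np.sign(dots[k]))
    return w[np.array(order)], V[:, order] * np.array(signs)
```

For nearly orthonormal bases this matching is unambiguous, since exactly one scalar product per reference direction is close to $\pm 1$.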
\section{Conclusion}
Anisotropic fluids are of growing interest, mainly due to the rapid development of acoustic metamaterials, but also because many acoustic materials can be modeled as anisotropic fluids. A general method to characterize anisotropic fluid layers is developed and validated on simulated data in this work. This method extends existing ones based on the inversion of the scattering matrix to general three-dimensional anisotropic fluids whose principal directions are unknown and possibly tilted relative to the layer coordinate system. The method relies on the measurement of both transmission and reflection coefficients of the layer at a small number of incidence angles. From the transmission coefficients at two pairs of angles, the phase terms due to the possible out-of-plane principal directions are first recovered. Then, four pairs of transmission/reflection coefficients (possibly involving the same previous transmission coefficients) are required to recover analytically the six components of the symmetric density matrix and the bulk modulus. The density matrix is finally diagonalized to estimate the principal directions and the densities along the principal directions. This procedure is successfully applied to recover the principal directions, density matrix and bulk modulus of a simulated anisotropic viscothermal fluid material made of the orthorhombic lattice of overlapping ellipsoids. This procedure paves the way for the characterization and design of three-dimensional anisotropic metamaterials and acoustic materials.
\begin{acknowledgments}
The authors gratefully acknowledge ANR \textit{Chaire industrielle} MACIA (ANR-16-CHIN-0002) %
and RFI Le Mans Acoustique (R\'egion Pays de la Loire) PavNat project. %
This article is based upon work from COST Action DENORMS CA15125, supported by COST (European Cooperation in Science and Technology).
\end{acknowledgments}
\section*{Acknowledgments}
This work was supported by NSF CNS-1111520 and gifts from Huawei, Intel, and Cisco. We thank our SIGCOMM'16 shepherd, Sujata Banerjee, and the anonymous SIGCOMM'16 reviewers for their thoughtful feedback; Changhoon Kim, Nick \nohyphens{McKeown}, Arjun Guha, and Anirudh Sivaraman for helpful discussions; and Nick Feamster, Ronaldo \nohyphens{Ferreira}, Srinivas Narayana, and Jennifer Gossels for feedback on earlier drafts.
\section{Formal Semantics of SNAP\xspace}
\label{app:semantics}
\begin{figure}[ht!]
\begin{mdframed}
\scriptsize
\centering
\begin{minipage}{\textwidth}
\[\begin{array}{rcl}
v \in \mathsf{Val} &::=&
\textsf{IP addresses} \, | \,
\textsf{TCP ports} \, | \,
\dots \, | \,
\overset{\rightharpoonup}{v} \\
l \in \mathsf{Log} & ::= & \mathsf{E} \, | \, R \, s \cup l \, | \, W \, s \cup l \\
&& \\
E \cup l &=& l \\
(R \, s , l_1) \cup l_2 &=& l_1 \cup (R \, s, l_2) \\
(W \, s , l_1) \cup l_2 &=& l_1 \cup (W \, s, l_2)
\end{array} \]
\end{minipage}
\,
\fbox{$\mathsf{eval}_e : \mathsf{Expr} \rightarrow \mathsf{Packet} \rightarrow \mathsf{Val}$}
\[\begin{array}{rcl}
\mathsf{eval}_e(v,pkt) &=& v \\
\mathsf{eval}_e(f,pkt) &=& pkt.f \\
\mathsf{eval}_e(\overset{\rightharpoonup}{e},pkt) &=& \mathsf{eval}_e(e_1,pkt),\dots,\mathsf{eval}_e(e_n,pkt) \\
\multicolumn{3}{r}{\text{where } \overset{\rightharpoonup}{e} = e_1,\dots,e_n}
\end{array}\]
\fbox{$\mathsf{eval} : \mathsf{Pol} \rightarrow \mathsf{Store} \rightarrow \mathsf{Packet} \rightarrow \mathsf{Store} \times 2^{\mathsf{Packet}} \times \mathsf{Log}$}
\[\begin{array}{lll}
\mathsf{eval}(0, m, pkt) & = & (m, \emptyset, \mathsf{E}) \\
\mathsf{eval}(1, m, pkt) & = & (m, \set{pkt}, \mathsf{E}) \\
\mathsf{eval}(f = v, m, pkt) & = &
(m,
\begin{cases}
\set{pkt} & pkt.f = v \\
\emptyset & \text{otherwise}
\end{cases},
\mathsf{E}) \\
\mathsf{eval}(s[e_1] = e_2, m, pkt) & = &
(m,
\begin{cases}
\set{pkt} & m ~ s ~ \mathsf{eval}_e(e_1, pkt) = \mathsf{eval}_e(e_2, pkt) \\
\emptyset & \text{otherwise}
\end{cases},
R\, s) \\
\mathsf{eval}(\neg a, m, pkt) & = &
\text{ let } (\_, PKT, l) = \mathsf{eval}(a, m, pkt) \text{ in }
(m, \set{pkt} \setminus PKT, l) \\
\mathsf{eval}(f \leftarrow v, m, pkt) & = & (m, \set{pkt[f \mapsto v]}, \mathsf{E}) \\
\mathsf{eval}(s[e_1] \leftarrow e_2, m, pkt) & = &
(\lambda s'. \lambda e'.
\begin{cases}
\mathsf{eval}_e(e_2,pkt) & s = s' \land e' = \mathsf{eval}_e(e_1, pkt) \\
m ~ s' ~ e' & \text{otherwise}
\end{cases},
\set{pkt},
W\, s) \\\\
\mathsf{eval}(s[e_1]\text{\codeb{++}}, m, pkt) & = &
(\lambda s'. \lambda e'.
\begin{cases}
(m ~ s' ~ e') + 1& s = s' \land e' = \mathsf{eval}_e(e_1, pkt) \\
m ~ s' ~ e' & \text{otherwise}
\end{cases},
\set{pkt},
W\, s) \\\\
\mathsf{eval}(s[e_1]\text{\codeb{-{}-}}, m, pkt) & = &
(\lambda s'. \lambda e'.
\begin{cases}
(m ~ s' ~ e') - 1 & s = s' \land e' = \mathsf{eval}_e(e_1, pkt) \\
m ~ s' ~ e' & \text{otherwise}
\end{cases},
\set{pkt},
W\, s) \\
\mathsf{eval}(\IfElse{a}{p}{q}, m, pkt) & = &
\text{let } (m', PKT, l) = \mathsf{eval}(a, m, pkt) \text{ in } \\
& & \text{let } (m'', PKT', l') = \begin{cases}
\mathsf{eval}(p, m', pkt) & PKT = \set{pkt} \\
\mathsf{eval}(q, m', pkt) & PKT = \emptyset \end{cases} \\
& & \text{in } (m'', PKT', l' \cup l) \\\\
\mathsf{consistent}(l_1, l_2) & = &
\forall s, (W\, s \in l_1 \implies (R\, s \notin l_2 \land W\, s \notin l_2)) \\
& & \land (W\, s \in l_2 \implies (R\, s \notin l_1 \land W\, s \notin l_1)) \\
\merge(m, m_1, m_2) & = &
\lambda s. \begin{cases}
m_2 \, s & \forall e, \, m_1 \, s \, e = m \, s \, e \\
m_1 \, s & \text{otherwise}
\end{cases} \\
\merge(m, m_1, m_2, \dots, m_k) & = & \merge(m, m_1, \merge(m, m_2, \dots, m_k)) \\
\mathsf{eval}(p + q, m, pkt) & = &
\text{let } (m_1, PKT_1, l_1) = \mathsf{eval}(p, m, pkt) \text{ in } \\
&& \text{let } (m_2, PKT_2, l_2) = \mathsf{eval}(q, m, pkt) \text{ in }\\
&& \qquad \begin{cases}
(\merge(m, m_1, m_2), PKT_1 \cup PKT_2, l_1 \cup l_2) & \mathsf{consistent}(l_1, l_2) \\
\bot & \text{otherwise}
\end{cases} \\
\mathsf{eval}(p ; q, m, pkt)
& = & \text{let } (m_1, PKT_1, l_1) = \mathsf{eval}(p, m, pkt) \text{ in } \\
&& \text{let } (m_{21}, PKT_{21}, l_{21}),\dots,(m_{2n}, PKT_{2n}, l_{2n}) = \\
&& \mathsf{eval}(q, m_1, pkt_1 \in PKT_1),\dots,\mathsf{eval}(q, m_1, pkt_n \in PKT_1) \text{ in } \\
&& \qquad \begin{cases}
(\merge(m,m_{21},\dots,m_{2n}), \bigcup_{i=1}^n PKT_{2i}, l_1 \cup (\bigcup_{i=1}^n l_{2i}))
& \forall i \ne j, ~ \mathsf{consistent}(l_{2i},l_{2j}) \\
\bot & \text{otherwise}
\end{cases} \\
\mathsf{eval}(\mathsf{atomic}(p), m , pkt) &=& \mathsf{eval}(p, m, pkt)
\end{array}\]
\end{mdframed}
\captionsetup{skip = 3mm}
\caption{SNAP\xspace Semantics}
\label{fig:semantics}
\end{figure}
\twocolumn
\clearpage
\section{State Dependency Algorithm}\label{app:st-dep}
\normalsize
\begin{figure}[h!]
\scriptsize
\mdfsetup{
innerleftmargin=0mm,
skipabove=0mm,
innertopmargin=-2mm,
skipbelow = 3mm,
}
\begin{mdframed}
\[\begin{array}{rcl}
\textproc{st-dep}(\union{p}{q}) & = & \textproc{st-dep}(p) \cup \textproc{st-dep}(q) \\
\textproc{st-dep}(\seq{p}{q})
& = & (\textproc{r}(p) \times \textproc{w}(q)) \cup {} \\
&& \textproc{st-dep}(p) \cup \textproc{st-dep}(q) \\
\multicolumn{3}{l}{\textproc{st-dep}(\IfElse{a}{p}{q}) = {(\textproc{r}(a) \times (\textproc{w}(p) \cup \textproc{w}(q)))}} \\
& & \cup~\textproc{st-dep}(p) \cup \textproc{st-dep}(q) \\
\textproc{st-dep}(\mathsf{atomic}(p))
&=& (\textproc{r}(p) \cup \textproc{w}(p)) \times (\textproc{r}(p) \cup \textproc{w}(p)) \\
\textproc{st-dep}(p) & = & \emptyset \text{ otherwise}\\[0.5em]
\textproc{r}(p) & : & \text{set of state variables read by $p$}\\
\textproc{w}(p) & : & \text{set of state variables written by $p$}
\end{array}\]
\end{mdframed}
\caption{\textproc{st-dep}\ function for determining ordering constraints against state variables.}
\label{fig:st-graph}
\end{figure}
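The same traversal is easy to state in code. The following Python sketch (our tuple encoding of policy ASTs, not the paper's implementation) computes the read/write sets and the resulting ordering constraints:

```python
def rw(p):
    """Return (reads, writes): the state variables read / written by policy p."""
    op = p[0]
    if op == "stest":                          # s[e1] = e2 reads s
        return {p[1]}, set()
    if op in ("smod", "incr", "decr"):         # s[e1] <- e2, s[e1]++, s[e1]-- write s
        return set(), {p[1]}
    if op in ("seq", "union"):
        r1, w1 = rw(p[1])
        r2, w2 = rw(p[2])
        return r1 | r2, w1 | w2
    if op == "if":                             # if a then p1 else p2
        ra, wa = rw(p[1])
        rp, wp = rw(p[2])
        rq, wq = rw(p[3])
        return ra | rp | rq, wa | wp | wq
    if op == "atomic":
        return rw(p[1])
    return set(), set()                        # stateless primitives

def st_dep(p):
    """Ordering constraints (pairs of state variables), following the figure."""
    op = p[0]
    if op == "union":
        return st_dep(p[1]) | st_dep(p[2])
    if op == "seq":
        r1, _ = rw(p[1])
        _, w2 = rw(p[2])
        return {(a, b) for a in r1 for b in w2} | st_dep(p[1]) | st_dep(p[2])
    if op == "if":
        ra, _ = rw(p[1])
        _, wp = rw(p[2])
        _, wq = rw(p[3])
        return ({(a, b) for a in ra for b in (wp | wq)}
                | st_dep(p[2]) | st_dep(p[3]))
    if op == "atomic":
        r, w = rw(p[1])
        return {(a, b) for a in (r | w) for b in (r | w)}
    return set()
```

For example, reading `s` and then writing `t` in sequence yields the single constraint `(s, t)`, while wrapping the same policy in `atomic` makes every pair over `{s, t}` a constraint.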
\section{Extended State Sharding}\label{app:sharding}
Consider $s[inport]$ for instance. The compiler
partitions $s$ into $s_1$ to $s_k$, where $s_i$ stores
$s$ for port $i$. The MILP can be used as before to decide
placement and routing, this time with
the option of placing the $s_i$ at different locations without worrying about synchronization, as
the $s_i$ store \emph{disjoint} parts of $s$.
The same idea can be used
for distributing $t[srcip]$, where $t_1$ to $t_k$ are $t$'s partitions for
disjoint subsets of IP addresses $ip_1$ to $ip_k$. In this case,
each port $u$ in the OBS should be replaced with $u_1$ to $u_k$, with $u_i$
handling $u$'s traffic with source IP $ip_i$.
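As a small illustration of this range-based sharding (a sketch with our names, treating addresses as integers), mapping a source IP to its partition $t_i$ is a binary search over the sorted upper bounds of the disjoint ranges:

```python
import bisect

def shard_of(srcip, boundaries):
    """Return the index i of partition t_i for a numeric srcip, where
    `boundaries` holds the sorted, inclusive upper bounds of the
    disjoint address ranges ip_1 .. ip_k."""
    return bisect.bisect_left(boundaries, srcip)
```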
\section{Deciding Egress Ports}\label{appendix}
One might worry that it isn't always possible to decide the egress
port $v$ for a given packet upon entry because it depends on the
state.
Suppose a packet arrives at port 1 in our example topology
and the user policy specifies that its outport should be assigned to
either 5 or 6 based on state variable $s$, located at $C_6$.
Assume the MILP assigns the path $p_1$ to $(1, 5)$ traffic and the
path $p_2$ to $(1, 6)$. The ingress switch ($I_1$) cannot determine
whether the packet belongs to $(1, 5)$ or $(1, 6)$ to forward it on
$p_1$ or $p_2$ respectively. But it does not actually matter! Both
paths go through $C_6$ because both kinds of traffic need $s$.
In order to ensure better usage of resources, we can choose which of
$p_1$ and $p_2$ to send the packet over in proportion to each path's
capacity. But whichever path we take, the packet will make its way to
$C_6$ and its processing continues from there.
More formally, the MILP outputs the
optimized path for the traffic between each ingress port $u$ and egress port $v$. However, the policy
may not be able to determine $v$ at the ingress switch. Suppose that
$v_1, \cdots, v_k$ are the possible
outports for packets that enter from $u$. From the packet-state mapping (section~\ref{sec:psm}),
we know that the packets from $u$ to each $v_i$ need a sequence (as they are now ordered) of state variables
$\sequence{s_{i1}, \cdots, s_{ip}}$.
Therefore, the designated path for this traffic goes through the sequence of
nodes $\sequence{u, n_{i1}, \cdots, n_{ip}, v_i}$ where $n_{ij}$ is the switch holding $s_{ij}$.
Now suppose that the policy starts processing a packet from inport $u$ and gets stuck on
a statement containing $s$. If $s$ only appears in $v_i$'s state sequence, the policy's
getting stuck on $s$ implies that the packet belongs to the traffic from $u$ to $v_i$, so we
can safely forward the packet
on its designated path. However, it may be the case that
$s$ appears in the state sequences of multiple $v_i$s, each at index $l_i$.
Thus, we have multiple paths to the switch holding $s$, where the path assigned to $(u, v_i)$'s traffic
is $\sequence{u, n_{i1}, \cdots, n_{il_i}}$ and is capable of carrying at least $d_{uv_i}$
volume of traffic. Let $V_s$ denote the set of $v_i$s whose traffic needs $s$. The observation here is that at most
$\sum_{v_i \in V_s} d_{uv_i}$ worth of traffic entering from $u$ needs state $s$, and the
total capacity of the designated paths from $u$ to $n_{il_i}$, where $s$ is held, is also equal to
$\sum_{v_i \in V_s} d_{uv_i}$. Therefore, we just send the traffic that needs $s$ over
one of these paths in proportion to their capacity. The packet will make its way to the switch
holding $s$, and its processing will continue from there. A similar technique is used whenever a switch
gets stuck on the processing of a packet because of a state variable that is not locally available.
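The proportional splitting described above amounts to a weighted random choice over the candidate paths toward the switch holding $s$. A minimal sketch (our helper, not part of SNAP):

```python
import random

def pick_path(paths, rng=None):
    """paths: list of (path, capacity) pairs. Pick one path with
    probability proportional to its capacity."""
    rng = rng or random
    (path,) = rng.choices([p for p, _ in paths],
                          weights=[c for _, c in paths])
    return path
```

Since the total capacity of these paths equals the total demand that needs $s$, splitting in proportion to capacity keeps each path within its provisioned load in expectation.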
\clearpage
\input{fdd-seq}
\clearpage
\input{app-examples}
\section{SNAP\xspace Policy Examples}
\label{app:examples}
\newenvironment{snappolicy}[1]
{
\def\caption{#1}{\caption{#1}}
\def\label{alg:#1}{\label{alg:#1}}
\begin{algorithm}[h!]
\begin{algorithmic}[1]%
\scriptsize
}
{\end{algorithmic}
\caption{#1}
\label{alg:#1}
\end{algorithm}}
\snaptitle{Number of domains that share the same IP address.}
Suppose an attacker tries to prevent access to a malicious IP from being blocked through a specific DNS domain by frequently changing the domain name associated with that IP~\cite{chimera}.
Detection of this behavior is implemented by policy~\ref{alg:many-ip-domains}.
\begin{snappolicy}{many-ip-domains}
\If{$srcport = 53$}
\If{$\neg$domain-ip-pair[DNS.rdata][DNS.qname]}
\State num-of-domains[DNS.rdata]++;
\State domain-ip-pair[DNS.rdata][DNS.qname] $\gets$ True;
\If{num-of-domains[DNS.rdata] =threshold}
\State mal-ip-list[DNS.rdata]$\gets$ True
\Else \State id
\EndIf
\Else \State id
\EndIf
\Else \State id
\EndIf
\end{snappolicy}
\snaptitle{Number of distinct IP addresses per domain name.}
Too many distinct IPs under the same domain name may indicate malicious activity~\cite{chimera}. Policy~\ref{alg:many-domain-ips} counts the number of distinct IPs for the same domain name and checks whether it crosses a threshold.
\begin{snappolicy}{many-domain-ips}
\If{srcport = 53}
\If{$\neg$ip-domain-pair[DNS.qname][DNS.rdata]}
\State num-of-ips[DNS.qname]++;
\State ip-domain-pair[DNS.qname][DNS.rdata] $\gets$ True;
\If{num-of-ips[DNS.qname] =threshold}
\State mal-domain-list[DNS.qname]$\gets$ True
\Else \State id
\EndIf
\Else \State id
\EndIf
\Else \State id
\EndIf
\end{snappolicy}
\snaptitle{Stateful firewall.}
A stateful firewall for the CS department, implemented by policy~\ref{alg:stateful-fw}, allows only connections initiated from within ip$_6$, which is the CS department.
\begin{snappolicy}{stateful-fw}
\If{srcip=ip$_6$}
\State {established[srcip][dstip] $\gets$ True}
\Else
\If{dstip=ip$_6$}
\State {established[dstip][srcip]}
\Else \State {id}
\EndIf
\EndIf
\end{snappolicy}
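The behaviour of this policy can be mimicked in a few lines of ordinary code; the sketch below is ours, representing the `established` state variable as a set of (inside, outside) pairs:

```python
def stateful_fw(pkt, established, inside="ip6"):
    """Return True if the packet is allowed. Connections may only be
    initiated from `inside`; inbound packets pass only for pairs already
    recorded in `established`."""
    if pkt["srcip"] == inside:
        established.add((pkt["srcip"], pkt["dstip"]))  # established[srcip][dstip] <- True
        return True
    if pkt["dstip"] == inside:
        return (pkt["dstip"], pkt["srcip"]) in established
    return True                                        # traffic not involving ip6: id
```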
\snaptitle{DNS TTL change tracking}. The frequency of TTL changes in the
DNS response for a domain is a feature that can help identify a
malicious domain~\cite{chimera}. Policy~\ref{alg:dns-ttl-change}
keeps track of the number of changes in the announced TTL for each
domain in the \codeb{ttl-change} state variable. This state
variable can be used in the subsequent parts of the policy to
blacklist a domain.
\begin{snappolicy}{dns-ttl-change}
\If{srcport = 53}
\If{$\neg$seen[dns.rdata]}
\State seen[dns.rdata]$\gets$True;
\State last-ttl[dns.rdata]$\gets$dns.ttl;
\State ttl-change[dns.rdata]$\gets$0
\Else
\If{last-ttl[dns.rdata] = dns.ttl}
\State id
\Else
\State last-ttl[dns.rdata]$\gets$ dns.ttl;
\State ttl-change[dns.rdata]++
\EndIf
\EndIf
\Else \State id
\EndIf
\end{snappolicy}
\snaptitle{FTP monitoring.}
Policy~\ref{alg:ftp-monitoring} tracks the states of FTP control channel and allows data channel traffic
only if there has been a signal on the control channel.
The policy assumes FTP standard mode, where the client announces the data port (ftp.PORT);
other, more complicated modes may be implemented as well.
\begin{snappolicy}{ftp-monitoring}
\If{dstport=21}
\State {ftp-data-chan[srcip][dstip][ftp.PORT]$\gets$True}
\Else
\If{srcport=20}
\State {ftp-data-chan[dstip][srcip][ftp.PORT]}
\Else \State {id}
\EndIf
\EndIf
\end{snappolicy}
\snaptitle{Phishing/spam detection.}
To detect suspicious Mail Transfer Agents (MTAs), policy~\ref{alg:spam-detection} detects new MTAs and then checks whether any of them sends a large number of mails in its first 24 hours.
We assume state variables will be reset every 24 hours.
\begin{snappolicy}{spam-detection}
\If{MTA-dir[smtp.MTA] = Unknown}
\State MTA-dir[smtp.MTA] $\gets$ Tracked;
\State mail-counter[smtp.MTA] = 0
\Else \State id;
\EndIf
\If{MTA-dir[smtp.MTA] = Tracked}
\State mail-counter[smtp.MTA]++;
\If{mail-counter[smtp.MTA] = threshold}
\State MTA-dir[smtp.MTA] $\gets$ Spammer
\Else \State {id}
\EndIf
\Else \State id
\EndIf
\end{snappolicy}
\snaptitle{Heavy hitter detection.}
Policy~\ref{alg:heavy-hitter-detection} keeps a counter per flow and
marks those passing a threshold as heavy hitters.
To detect and block heavy hitters, one could use the following policy:
\codeb{heavy-hitter-detection; \\ (heavy-hitter[srcip] = False)}
\begin{snappolicy}{heavy-hitter-detection}
\If{tcp.flags = SYN \& $\neg$heavy-hitter[srcip]}
\State hh-counter[srcip] ++;
\If{hh-counter[srcip] = threshold}
\State {heavy-hitter[srcip] $\gets$ True}
\Else \State {id}
\EndIf
\Else \State {id}
\EndIf
\end{snappolicy}
\snaptitle{Sidejack detection.} Sidejacking occurs when an attacker
steals the session id information from an unencrypted HTTP cookie and
uses it to impersonate the legitimate user. Sidejacking can be
detected by keeping track of the client IP address and user agent for
each session id, and checking subsequent packets for that session id
to make sure they are coming from the client that started the
session~\cite{chimera}. This procedure is implemented in SNAP\xspace in policy~\ref{alg:sidejacking}.
\begin{snappolicy}{sidejacking}
\If{(dstip = server) \& $\neg$(sid\footnote{\codeb[scriptsize]{sid = http.session-id}} = null)}
\If{$\neg$active-session[sid]}
\State atomic(active-session[sid]$\gets$True;
\State sid2ip[sid]$\gets$srcip;
\State sid2agent[sid]$\gets$http.user-agent)
\Else
\If{sid2ip[sid] = srcip \& sid2agent[sid] = http.user-agent}
\State {id}
\Else \State {drop}
\EndIf
\EndIf
\Else \State {id}
\EndIf
\end{snappolicy}
\snaptitle{Super-spreader detection.}
Policy~\ref{alg:super-spreader-detection} increases a counter on SYNs and decreases it on FINs per IP address that initiates the connection.
If an IP address creates too many connections without closing them, it is marked as a super spreader.
\begin{snappolicy}{super-spreader-detection}
\If{tcp.flags=SYN}
\State spreader[srcip]++;
\If{spreader[srcip] = threshold}
\State {super-spreader[srcip] $\gets$ True}
\Else \State {id}
\EndIf
\Else
\If{tcp.flags=FIN}
\State {spreader[srcip]-{}-}
\Else \State {id}
\EndIf
\EndIf
\end{snappolicy}
\snaptitle{Sampling based on flow-size.}
Policy~\ref{alg:sampling-based-flow-size} uses policy~\ref{alg:flow-size-detect} to classify flows by size, keeping a per-flow counter, and selects a sampling rate based on the counter value. It then uses one of the three sampler policies~\ref{alg:sample-small},~\ref{alg:sample-medium}, or~\ref{alg:sample-large} for differentiated sampling.
\begin{snappolicy}{flow-size-detect}
\State flow-size[flow-ind\footnotemark]++;
\If{flow-size[flow-ind]=1}
\State {flow-type[flow-ind]$\gets$SMALL}
\Else \If{flow-size[flow-ind]=100}
\State {flow-type[flow-ind]$\gets$MEDIUM}
\Else \If{flow-size[flow-ind]=1000}
\State {flow-type[flow-ind]$\gets$LARGE}
\Else \State {id}
\EndIf
\EndIf
\EndIf
\end{snappolicy}
\begin{snappolicy}{sampling-based-flow-size}
\State flow-size-detect;
\If{flow-type[flow-ind\footnotemark[\value{footnote}]]=SMALL}
\State {sample-small}
\Else \If{flow-type[flow-ind]=MEDIUM}
\State {sample-medium}
\Else \State {sample-large}
\EndIf
\EndIf
\end{snappolicy}
\begin{snappolicy}{sample-small}
\State small-sampler[flow-ind\footnotemark[\value{footnote}]]++;
\If{small-sampler[flow-ind]=5}
\State {small-sampler[flow-ind]$\gets$0}
\Else \State {drop}
\EndIf
\end{snappolicy}
\begin{snappolicy}{sample-medium}
\State medium-sampler[flow-ind\footnotemark[\value{footnote}]]++;
\If{medium-sampler[flow-ind]=50}
\State {medium-sampler[flow-ind]$\gets$0}
\Else \State {drop}
\EndIf
\end{snappolicy}
\begin{snappolicy}{sample-large}
\State large-sampler[flow-ind\footnotemark[\value{footnote}]]++;
\If{large-sampler[flow-ind]=500}
\State {large-sampler[flow-ind]$\gets$0}
\Else \State {drop}
\EndIf
\end{snappolicy}
\footnotetext{[flow-ind] = [srcip][dstip][srcport][dstport][port]}
\snaptitle{Selective packet dropping.}
Policy~\ref{alg:selective-packet-dropping} drops differentially-encoded B frames in an MPEG encoded stream if the dependency (preceding I frame) was dropped.
\begin{snappolicy}{selective-packet-dropping}
\If{mpeg.frame-type=Iframe}
\State {dep-count[srcip][dstip][srcport][dstport]$\gets$14}
\Else \If{dep-count[srcip][dstip][srcport][dstport]=0}
\State {drop}
\Else \State {dep-count[srcip][dstip][srcport][dstport]-{}-}
\EndIf
\EndIf
\end{snappolicy}
\snaptitle{Connection Affinity.}
Policy~\ref{alg:conn-affinity} uses the TCP state machine to distinguish ongoing connections from new ones, assuming the composition \codeb{basic-tcp-reassembly ; conn-affinity} and
that we want to do per-connection load balancing using \codeb{lb}.
\begin{snappolicy}{conn-affinity}
\If{tcp-state[dstip][srcip][dstport][srcport][proto] = ESTABLISHED $|$ tcp-state[srcip][dstip][srcport][dstport][proto] = ESTABLISHED}
\State {lb}
\Else \State {id}
\EndIf
\end{snappolicy}
\snaptitle{SYN flood detection.}
To detect SYN floods, we should count the number of SYNs without any matching ACK from the sender side and, if a sender crosses a certain threshold, block it.
This can be implemented in a similar way to the super-spreader-detection policy (policy~\ref{alg:super-spreader-detection}).
\snaptitle{Elephant flow detection.}
Suppose the attacker launches legitimate
but very large flows. One could detect abnormally
large flows, flag them as attack flows, and then
randomly drop packets from these
large flows.
This policy can actually be implemented by a composition of previously implemented
policies: \\ \codeb{flow-size-detect;sample-large policy}.
\snaptitle{DNS amplification mitigation}
In a DNS amplification attack, the attacker sends out many DNS queries spoofed with the IP address of the victim. Large answers are thus
sent back to the victim, which can lead to denial of service if the victim is a server.
Policy~\ref{alg:dns-amplification} detects this attack by tracking the DNS queries that the server has actually sent out and dropping DNS responses
that do not match any of them.
\begin{snappolicy}{dns-amplification}
\If{dstport=53}
\State {benign-request[srcip][dstip]$\gets$True}
\Else \If{srcport=53 \& $\neg$benign-request[dstip][srcip]}
\State {drop}
\Else \State {id}
\EndIf
\EndIf
\end{snappolicy}
\snaptitle{UDP flood mitigation.}
Policy~\ref{alg:udp-flood} identifies
source IPs that send an anomalously high number
of UDP packets and uses this to categorize each
packet as either attack or benign.
\begin{snappolicy}{udp-flood}
\If{proto = UDP \& $\neg$udp-flooder[srcip]}
\State udp-counter[srcip] ++;
\If{udp-counter[srcip] = threshold}
\State udp-flooder[srcip] $\gets$ True;
\State drop
\Else \State {id}
\EndIf
\Else \State {id}
\EndIf
\end{snappolicy}
\snaptitle{Snort flowbits.}
The Snort IPS rules~\cite{Snort}
include both stateless and stateful rules. Snort uses a tag called \emph{flowbits} to mark a boolean state of a ``5-tuple''.
The following example shows how flowbits are used for application identification:
\mdfsetup{
skipabove = 3mm,
skipbelow = 3mm,
}
\begin{mdframed}
\centering
\codeb[scriptsize]{pass tcp HOME-NET any -> EXTERNAL-NET 80 \\
(flow:established; content:"Kindle/3.0+"; \\ \textbf{flowbits:set,kindle};) \\}
\end{mdframed}
The same rule can be expressed in SNAP\xspace terms as can be seen in policy~\ref{alg:snort-flowbits}:
\begin{snappolicy}{snort-flowbits}
\State \match{srcip}{HOME-NET};
\State \match{dstip}{EXTERNAL-NET};
\State \match{dstport}{80};
\State \match{established[srcip][dstip][srcport][dstport][proto]}{True};
\State \match{content}{"Kindle/3.0+"} ;
\State \textbf{kindle[srcip][dstip][srcport][dstport][proto]} $\gets$ \textbf{True}
\end{snappolicy}
Note that Snort's flowbits are more restricted than SNAP\xspace state variables in the sense that they can only be defined per 5-tuple, i.e.\ the index to the state is fixed.
\newpage
\snaptitle{Basic TCP state machine.}
Policy~\ref{alg:basic-tcp-reassembly} implements a basic bump-on-the-wire TCP state machine.
\onecolumn
\begin{snappolicy}{basic-tcp-reassembly}
\If{tcp.flags=SYN \& tcp-state[srcip][dstip][srcport][dstport][proto] =CLOSED}
\State {tcp-state[srcip][dstip][srcport][dstport][proto] $\gets$ SYN-SENT}
\Else \If{tcp.flags=SYN-ACK \&
tcp-state[dstip][srcip][dstport][srcport][proto]=SYN-SENT}
\State {tcp-state[dstip][srcip][dstport][srcport][proto]$\gets$SYN-RECEIVED}
\Else \If{tcp.flags=ACK \&
tcp-state[srcip][dstip][srcport][dstport][proto]=SYN-RECEIVED}
\State {tcp-state[srcip][dstip][srcport][dstport][proto]$\gets$ESTABLISHED}
\Else \If{tcp.flags=FIN \&
tcp-state[srcip][dstip][srcport][dstport][proto]=ESTABLISHED}
\State {tcp-state[srcip][dstip][srcport][dstport][proto]$\gets$FIN-WAIT}
\Else \If{tcp.flags=FIN-ACK \&
tcp-state[dstip][srcip][dstport][srcport][proto]=FIN-WAIT}
\State {tcp-state[dstip][srcip][dstport][srcport][proto]$\gets$FIN-WAIT2}
\Else \If{tcp.flags=ACK \&
tcp-state[srcip][dstip][srcport][dstport][proto]=FIN-WAIT2}
\State {tcp-state[srcip][dstip][srcport][dstport][proto]$\gets$CLOSED}
\Else \If{tcp.flags=RST \&
tcp-state[dstip][srcip][dstport][srcport][proto]=ESTABLISHED}
\State {tcp-state[dstip][srcip][dstport][srcport][proto]$\gets$CLOSED}
\Else \State {tcp-state[dstip][srcip][dstport][srcport][proto]=ESTABLISHED + tcp-state[srcip][dstip][srcport][dstport][proto]=ESTABLISHED}
\EndIf \EndIf \EndIf \EndIf \EndIf \EndIf \EndIf
\end{snappolicy}
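A compact way to read Policy~\ref{alg:basic-tcp-reassembly} is as a transition table over a per-connection state dictionary. The sketch below (with a hypothetical packet representation) mirrors the policy's transition branches, including the orientation of the 5-tuple key, which is always indexed with the connection initiator first:

```python
# Bump-on-the-wire TCP state machine, keyed by the 5-tuple of the
# connection initiator: (ip1, ip2, port1, port2, proto).
tcp_state = {}

def key_fwd(p):  # packet sent by the initiator
    return (p["srcip"], p["dstip"], p["srcport"], p["dstport"], p["proto"])

def key_rev(p):  # packet sent toward the initiator
    return (p["dstip"], p["srcip"], p["dstport"], p["srcport"], p["proto"])

# (flag, key orientation, required state, next state), in policy order
TRANSITIONS = [
    ("SYN",     key_fwd, "CLOSED",       "SYN-SENT"),
    ("SYN-ACK", key_rev, "SYN-SENT",     "SYN-RECEIVED"),
    ("ACK",     key_fwd, "SYN-RECEIVED", "ESTABLISHED"),
    ("FIN",     key_fwd, "ESTABLISHED",  "FIN-WAIT"),
    ("FIN-ACK", key_rev, "FIN-WAIT",     "FIN-WAIT2"),
    ("ACK",     key_fwd, "FIN-WAIT2",    "CLOSED"),
    ("RST",     key_rev, "ESTABLISHED",  "CLOSED"),
]

def process(pkt):
    """Apply the first matching transition; return the connection state."""
    for flag, keyf, need, nxt in TRANSITIONS:
        k = keyf(pkt)
        if pkt["flags"] == flag and tcp_state.get(k, "CLOSED") == need:
            tcp_state[k] = nxt
            return nxt
    return tcp_state.get(key_fwd(pkt), "CLOSED")  # no transition fires
```

The final else of the policy, which filters on established connections in either direction, is a read-only test and is therefore omitted from the transition table.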
\section{Background on NetKAT}\label{sec:background}
This section provides some background on NetKAT~\cite{NetKAT}, a domain-specific language for specifying and reasoning about networks, from which SNAP\xspace borrows its basis.
To start with the reasons for choosing NetKAT: as mentioned earlier, SNAP\xspace is expected to express a large set of policies, each of which involves a set of packet tests and transformations. Therefore, a mechanism that supports combining these policies into a single one is required. NetKAT offers primitives for \emph{parallel and sequential composition}, making it a natural choice for this task.
NetKAT has attracted considerable academic attention over the last few years, including the design of an efficient compiler~\cite{FastNetKATCompiler} whose constructs we have adopted.
\textbf{Expressions} --- NetKAT is designed to express the forwarding policy based on a packet's location and the information in its headers.
A NetKAT expression can be thought of as a function that receives a packet and outputs a set of zero or more packets.
There are two main types of expressions: predicates, which act as filters, and policies, which include predicates and also support actions such as field modification and packet duplication.
\textbf{Filters} ---
include a set of primitives: identity (1), which outputs the singleton set containing the input packet; drop (0), which returns the empty set; and test ($f = v$), which acts like identity when the value of field $f$ in the input packet is $v$ and
returns the empty set otherwise.
They also include a set of operators for combining them: negation ($\neg$), conjunction ($;$), and disjunction ($+$).
\textbf{Policies} ---
include predicates and a primitive action for field modification (\modify{f}{v}). Fields include a set of common header fields as well as two additional ones that indicate the packet's location: switch and port. For example, the policy \code{sw=1;\modify{pt}{2}} filters out all packets except those at switch 1 and sends them to its port 2. Policies also include operators for combining
policies --- parallel composition ($+$) and sequential composition ($;$).
\textbf{Parallel composition} --- $(p+q)$ provides the programmer with an abstraction for combining two policies without specifying an order of execution. $(p+q)$ executes $p$ and $q$ separately and outputs the union of
the results. In a basic example with a single input packet, where $p$ leaves the packet unchanged and $q$ modifies a field, $(p + q)$ produces two packets in its output. A more illustrative example is multicasting a packet to seven different output ports, accomplished by the parallel composition of seven policies, each modifying the output port of the packet to a different one.
\textbf{Sequential composition} --- $(p;q)$ takes the output of $p$ and uses it as input to $q$. If $p$ generates multiple packets, $q$ is executed
on each separately and the union of the results is returned.
$p*$ may be thought of as the sequential composition of $p$ with itself zero or more times. It is usually used to express packet traversal over the network topology.
Figure~\ref{fig:syntax} describes SNAP\xspace 's syntax, which follows NetKAT's syntax except for the iteration operator (since SNAP\xspace uses a One Big Switch abstraction that hides the topology, iteration is not needed); SNAP\xspace 's additions are marked in bold.
\section{Compilation}\label{sec:compilation}
\begin{figure}[t!]
\centering
\includegraphics[width = .75\columnwidth]{compiler-overview.png}
\caption{Overview of the compiler phases.}
\label{fig:compilation}
\end{figure}
To implement a SNAP\xspace program specified on one big switch,
we must fill in two critical details: \emph{traffic
routing} and \emph{state placement}. The physical topology may
offer many paths between edge ports, and many possible locations for
placing state.\footnote{\fontsize{8}{10} \selectfont In this work, we assume each state
variable resides in one place, though it is conceivable
to distribute it (see \cref{sec:milp} and \cref{subsec:extensions}).}
The routing and placement problems interact: if two flows (with
different input and output OBS ports) both need some state variable
$s$, we should select routes for the two flows such that they pass
through a common location where we place $s$.
Further complicating the situation, the OBS program may specify that
certain flows read/write multiple state variables in a particular
order. The routing and placement on the physical topology must respect
that order. In \Tunnel, for instance, routing must ensure that
packets reach wherever \codeb{orphan} is placed before
\codeb{susp-client}.
In some cases, two different flows may depend on
the same state variables, but in different orders.
We have designed a \emph{compiler} that translates OBS programs
into forwarding rules and state placements for a given topology.
As shown in Figure~\ref{fig:compilation},
the two key phases are (i) translation to \emph{extended forwarding decision
diagrams} (xFDDs)---used as the intermediate representation of the program and to
calculate which flows need which state
variables---and (ii) optimization via \emph{mixed integer linear program}
(MILP)---used to decide routing and state placement.
In the rest of this section, we present the compilation process in
phases, first discussing the analysis of state dependencies, followed
by the translation to xFDDs and the packet-state mapping, then the
optimization problems, and finally the generation of rules sent to the
switches.
\subsection{State Dependency Analysis}
\label{subsec:statedep}
Given a program, the compiler first performs \emph{state
dependency analysis} to determine the ordering constraints on its state variables.
A state variable $t$ \emph{depends} on a state variable $s$ if the program writes to $t$
after reading from $s$. Any realization of the program on a
concrete network must ensure that $t$ does not come before $s$.
Parallel composition, $p + q$, introduces no dependencies: if $p$
reads or writes state, then $q$ can run independently of that.
Sequential composition $p;q$, on the other hand, introduces
dependencies: whatever reads are in $p$ must happen before writes
in $q$.
In explicit conditionals ``$\IfElse{a}{p}{q}$'', the writes in $p$ and $q$
depend on the condition $a$.
Finally, atomic sections $\mathsf{atomic}(p)$ say that all state in $p$ is
inter-dependent. In \Tunnel, for instance,
\codeb{blacklist} is dependent on \codeb{susp-client}, itself
dependent on \codeb{orphan}.
This information is encoded as a dependency graph on
state variables and is used to order the xFDD\xspace structure (\cref{sec:fdds}),
and in the MILP (\cref{sec:milp}) to drive state placement.
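The dependency rules above can be sketched as a recursive pass over a policy AST. The AST encoding below is hypothetical, but the four cases mirror the rules for parallel composition, sequential composition, conditionals, and atomic sections:

```python
# Nodes: ("read", s), ("write", s), ("seq", p, q), ("par", p, q),
# ("if", a, p, q), ("atomic", p). Hypothetical encoding.

def reads(p):
    tag = p[0]
    if tag == "read":   return {p[1]}
    if tag == "write":  return set()
    if tag == "atomic": return reads(p[1])
    if tag == "if":     return reads(p[1]) | reads(p[2]) | reads(p[3])
    return reads(p[1]) | reads(p[2])          # seq / par

def writes(p):
    tag = p[0]
    if tag == "write":  return {p[1]}
    if tag == "read":   return set()
    if tag == "atomic": return writes(p[1])
    if tag == "if":     return writes(p[2]) | writes(p[3])
    return writes(p[1]) | writes(p[2])        # seq / par

def deps(p, edges):
    """Collect edges (s, t): t's writes must come after s's reads."""
    tag = p[0]
    if tag == "seq":
        deps(p[1], edges); deps(p[2], edges)
        edges |= {(s, t) for s in reads(p[1]) for t in writes(p[2])}
    elif tag == "par":
        deps(p[1], edges); deps(p[2], edges)  # no cross dependencies
    elif tag == "if":
        deps(p[2], edges); deps(p[3], edges)
        edges |= {(s, t) for s in reads(p[1])
                  for t in writes(p[2]) | writes(p[3])}
    elif tag == "atomic":
        vs = reads(p[1]) | writes(p[1])       # all state inter-dependent
        edges |= {(s, t) for s in vs for t in vs if s != t}
    return edges
```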
\subsection{Extended Forwarding Decision Diagrams}
\label{sec:fdds}
\newcommand{\textproc{to-xfdd}}{\textproc{to-xfdd}}
\begin{figure}[t!]
\scriptsize
\mdfsetup{
innerleftmargin=0cm,
innertopmargin= -1mm,
skipbelow=-3mm,
rightmargin=0mm,
leftmargin=0mm,
}
\begin{mdframed}
\[
\begin{array}{rclr}
d &::=& t~?~d_1 : d_2 \,|\, \set{as_1, \dots, as_n} & \text{xFDDs} \\
t &::=& f = v \,|\, f_1 = f_2 \,|\, s[e_1] = e_2 & \text{tests} \\
as & ::= & a \,|\, a;a & \text{action sequences} \\
a &::=& id \,|\, drop \,|\, f \leftarrow v \,|\, s[e_1] \gets e_2 & \text{actions} \\
& & \,|\, s[e_1]\text{\codeb{++}} \,|\, s[e_1]\text{\codeb{-{}-}} \\
\end{array}\]
\end{mdframed}
\mdfsetup{
rightmargin=0mm,
innertopmargin= -2mm,
skipabove=0mm,
skipbelow=2mm,
}
\begin{mdframed}
\[
\arraycolsep=1.2pt
\begin{array}{rcl}
\textproc{to-xfdd}(a) &=& \set{ a } \\
\textproc{to-xfdd}(f = v) &=& f = v ~?~ \set{id} ~:~ \set{drop} \\
\textproc{to-xfdd}(\neg x) &=& \ominus \textproc{to-xfdd}(x) \\
\textproc{to-xfdd}(s[e_1] = e_2) &=& s[e_1] = e_2 ~?~ \set{id} ~:~ \set{drop} \\
\textproc{to-xfdd}(\mathsf{atomic}(p)) &=& \textproc{to-xfdd}(p) \\
\textproc{to-xfdd}(p + q) &=& \textproc{to-xfdd}(p) \oplus \textproc{to-xfdd}(q) \\
\textproc{to-xfdd}(p;q) &=& \textproc{to-xfdd}(p) \odot \textproc{to-xfdd}(q) \\
\textproc{to-xfdd}(\IfElse{x}{p}{q})
&=& (\textproc{to-xfdd}(x) \odot \textproc{to-xfdd}(p)) \\
&\oplus {}& (\ominus \textproc{to-xfdd}(x) \odot \textproc{to-xfdd}(q))
\end{array} \]
\end{mdframed}
\caption{xFDD\xspace syntax and translation.}
\label{fig:fdd-syntax}
\end{figure}
\begin{figure*}[t!]
\scriptsize
\begin{tabularx}{\textwidth}{|c|X|}
\hline
\arraycolsep=0.7pt
$\begin{array}{rcl}
\set{as_{11}, \cdots, as_{1n}} \oplus \set{as_{21}, \cdots, as_{2m}} & = & \set{as_{11}, \cdots, as_{1n}} \cup \set{as_{21}, \cdots, as_{2m}} \\
(t~?~d_1 : d_2) \oplus \set{as_1, \cdots, as_n} & = & (t~?~ d_1 \oplus \set{as_1, \cdots, as_n} : d_2 \oplus \set{as_1, \cdots, as_n}) \\\\
(t_1~?~ d_{11} : d_{12}) \oplus (t_2~?~d_{21} : d_{22}) & = &
\begin{cases}
(t_1 ~?~ d_{11} \oplus d_{21} : d_{12} \oplus d_{22}) & t_1 = t_2 \\
(t_1 ~?~ d_{11} \oplus (t_2~?~d_{21}:d_{22}) : d_{12} \oplus (t_2~?~d_{21}:d_{22})) & t_1 \sqsubset t_2 \\
(t_2 ~?~ d_{21} \oplus (t_1~?~d_{11}:d_{12}) : d_{22} \oplus (t_1~?~d_{11}:d_{12})) & t_2 \sqsubset t_1
\end{cases}
\end{array}$
&
\arraycolsep=0.7pt
$\begin{array}{rcl}
\ominus \set{id} &= & \set{drop} \\
\ominus \set{drop} & = & \set{id}\\
\ominus (t?d_1:d_2) & = & (t?\ominus d_1:\ominus d_2)
\end{array}$
\\ \hline
\end{tabularx}
\begin{tabularx}{\textwidth}{|XcX|c|}
\hline
&
\arraycolsep=3pt
$\begin{array}{lcl}
as \odot \set{as_1, \cdots, as_n} & = & \set{as \odot as_1, \cdots, as \odot as_n}\\
as \odot (t~?~d_1:d_2) & = & \text{(see explanations in \cref{sec:fdds})} \\
\set{as_1, \cdots, as_n} \odot d & = & (as_1 \odot d) \oplus \cdots \oplus (as_n \odot d) \\
(t~?~d_1:d_2) \odot d & = & (d_1 \odot d)|_t \oplus (d_2 \odot d)|_{\sim t} \\
\end{array}$
& &
\arraycolsep=1pt
$\begin{array}{rcl}
\set{as_1, \cdots, as_n}|_t & = & (t~?~\set{as_1, \cdots, as_n} : \set{drop})\\\\
(t_1~?~d_1:d_2)|_{t_2} & = &
\begin{cases}
(t_1~?~d_1:\set{drop}) & t_1 = t_2 \\
(t_2~?~(t_1~?~d_1:d_2):\set{drop}) & t_2 \sqsubset t_1 \\
(t_1~?~d_1|_{t_2}:d_2|_{t_2}) & t_1 \sqsubset t_2
\end{cases}
\end{array}$
\\
\hline
\end{tabularx}
\caption{Definitions of xFDD\xspace composition operators.}
\label{fig:fdd-normalization}
\end{figure*}
\begin{figure*}
\scriptsize
\newcommand{\textproc{refine}}{\textproc{refine}}
\newcommand{\kw}[1]{\text{\textbf{#1}}}
\mdfsetup{
skipbelow = 0.05mm,
}
\begin{mdframed}
$ \begin{array}{rcl}
\oplus(\set{as_{11}, \cdots, as_{1n}}, \set{as_{21}, \cdots, as_{2m}}, context) & = & \set{as_{11}, \cdots, as_{1n}} \cup \set{as_{21}, \cdots, as_{2m}} \\
\oplus((t~?~d_1 : d_2), \set{as_1, \cdots, as_n}, context) & = & \kw{let } c_T = context.add(t) \kw{ in }\\
& & \kw{let } brch_T = \oplus (d_1, \set{as_1, \cdots, as_n}, c_T) \kw{ in }\\
& & \kw{let } c_F = context.add(\neg t) \kw{ in }\\
& & \kw{let } brch_F = \oplus (d_2, \set{as_1, \cdots, as_n}, c_F) \kw{ in }\\
& & (t~?~ brch_T : brch_F) \\\\
\oplus(d_1, d_2, context) & = & \kw{let } (t_1~?~ d_{11} : d_{12}) = \textproc{refine}(d_1, context) \kw{ in }\\
& & \kw{let } (t_2~?~d_{21} : d_{22}) = \textproc{refine}(d_2, context) \kw{ in } \\
& & \kw{let } c_T = \kw{ if } t_1 \sqsubset t_2 \kw{ then } context.add(t_1) \kw{ else } context.add(t_2) \kw{ in }\\
& & \kw{let } c_F = \kw{ if } t_1 \sqsubset t_2 \kw{ then } context.add(\neg t_1) \kw{ else } context.add(\neg t_2) \kw{ in }\\
& & \begin{cases}
(t_1 ~?~ \oplus (d_{11}, d_{21}, c_T) : \oplus(d_{12}, d_{22}, c_F)) & t_1 = t_2 \\
(t_1 ~?~ \oplus(d_{11}, (t_2~?~d_{21}:d_{22}), c_T) : \oplus(d_{12}, (t_2~?~d_{21}:d_{22}), c_F)) & t_1 \sqsubset t_2 \\
(t_2 ~?~ \oplus(d_{21}, (t_1~?~d_{11}:d_{12}), c_T) : \oplus(d_{22}, (t_1~?~d_{11}:d_{12}), c_F)) & t_2 \sqsubset t_1
\end{cases}
\end{array} $
\end{mdframed}
\mdfsetup{
skipabove = 0mm,
skipbelow = 3mm,
}
\begin{mdframed}
\[\begin{array}{rcl}
\textproc{refine}(\set{as_1, \cdots, as_n}, context) & = & \set{as_1, \cdots, as_n} \\
\textproc{refine}((t ~ ? ~ d_1 : d_2), context) & = & \kw{if } context.imply(t) \kw{ then } \textproc{refine}(d_1, context) \\
& & \kw{else if } context.imply(\neg t) \kw{ then } \textproc{refine}(d_2, context) \\
& & \kw{else } (t ~ ? ~ d_1 : d_2) \\
\end{array} \]
\end{mdframed}
\caption{A closer look at $\oplus$.}
\label{fig:fdd-normalization-detailed}
\end{figure*}
\cam{The input to the compiler is a SNAP\xspace program, which can be a composition of several smaller
programs. The output, on the other hand, is the distribution of the original policy across the network.
Thus, in between, we need an intermediate representation for SNAP\xspace programs that is both composable
and easily partitioned. This intermediate representation helps the compiler compose
small program pieces into a unified representation, which can then be partitioned for distribution
across the network. Extended forwarding decision diagrams (xFDDs\xspace), introduced in this section,
are our internal representation of SNAP\xspace programs and have both desired properties.
They also simplify the analysis of SNAP\xspace programs for extracting the packet-state mapping, which we discuss in~\cref{sec:psm}.}
Formally (see Figure~\ref{fig:fdd-syntax}), an xFDD is either a \emph{branch} $(t ~?~ d_1 : d_2)$, where $t$
is a test and $d_1$ and $d_2$ are xFDDs, or a \emph{set} of action sequences
$\set{as_1, \dots, as_n}$.
Each branch can be thought of as a conditional: if the test $t$ holds
on a given packet $pkt$, then the xFDD continues processing $pkt$ using
$d_1$; if not,
processes $pkt$ using $d_2$.
There are three kinds of tests.
The
\emph{field-value test} $f=v$ holds when $pkt.f$ is
equal to $v$.
The \emph{field-field test} $f_1=f_2$ holds when the
values in $pkt.f_1$ and $pkt.f_2$ are equal.
Finally, the \emph{state test} $s[e_1] = e_2$ holds
when the state variable $s$ at index $e_1$ is equal to $e_2$.
The last two tests are our extensions to FDDs. The state tests support
our stateful primitives, and as we show later in this section, the field-field
tests are required for correct compilation.
Each leaf in an xFDD is a set of action sequences, with each action being
either the identity, drop, field-update
$f \leftarrow v$, or state update $s[e_1] \gets e_2$, which is another extension to the original FDD.
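A minimal concrete representation of xFDDs, with an evaluator that walks a packet from the root to a leaf, might look as follows. The test encodings here are hypothetical; state is modeled as one dictionary per state variable:

```python
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Branch:
    test: Tuple        # ("fv", f, v) | ("ff", f1, f2) | ("state", s, f, v)
    hi: "XFDD"         # taken when the test holds
    lo: "XFDD"         # taken otherwise

Leaf = frozenset       # set of action sequences, e.g. {("id",)}
XFDD = Union[Branch, Leaf]

def holds(test, pkt, state):
    kind = test[0]
    if kind == "fv":                         # field-value test: f = v
        return pkt[test[1]] == test[2]
    if kind == "ff":                         # field-field test: f1 = f2
        return pkt[test[1]] == pkt[test[2]]
    _, s, idx_field, val = test              # state test: s[e1] = e2
    return state[s].get(pkt[idx_field]) == val

def evaluate(d, pkt, state):
    """Walk the diagram; return the leaf's set of action sequences."""
    while isinstance(d, Branch):
        d = d.hi if holds(d.test, pkt, state) else d.lo
    return d
```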
A key property of xFDDs\xspace is that the order of their tests ($\sqsubset$)
must be defined in advance.
\cam{This ordering is necessary to ensure that each test is present at most once on any
path in the final tree when merging two xFDDs\xspace into one. Thus, xFDD\xspace composition can be done
efficiently without creating redundant tests.}
In our xFDDs\xspace, we ensure
that all field-value tests precede
all field-field tests, themselves preceding all state tests.
Field-value tests themselves are ordered by fixing an arbitrary order on
fields and values. Field-field tests are ordered similarly.
For state tests,
we first define a total order on state variables by looking at the dependency
graph from \cref{subsec:statedep}. We break \cam{the dependency graph} into strongly
connected components (SCCs) and fix an arbitrary order on state
variables within each SCC. For every edge from one SCC to
another, i.e., where some state variable in the second SCC depends on some
state variable in the first, $s_1$ precedes $s_2$ in the order,
where $s_2$ is the minimal element in the second SCC and $s_1$
is the maximal element in the first SCC. The state tests are then ordered
based on the order of state variables.
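The ordering scheme for state tests can be sketched as follows: compute the SCCs of the dependency graph, topologically sort the condensation, and order variables first by their SCC's position and arbitrarily (here, alphabetically) within an SCC. The sketch uses a standard Kosaraju SCC pass:

```python
from graphlib import TopologicalSorter

def sccs(nodes, edges):
    """Kosaraju: map each node to a representative of its SCC."""
    adj = {n: [] for n in nodes}
    radj = {n: [] for n in nodes}
    for s, t in edges:
        adj[s].append(t)
        radj[t].append(s)
    order, seen = [], set()
    def dfs1(n):
        seen.add(n)
        for m in adj[n]:
            if m not in seen:
                dfs1(m)
        order.append(n)                # post-order: finish time
    for n in nodes:
        if n not in seen:
            dfs1(n)
    comp = {}
    def dfs2(n, c):
        comp[n] = c
        for m in radj[n]:
            if m not in comp:
                dfs2(m, c)
    for n in reversed(order):          # decreasing finish time, reversed graph
        if n not in comp:
            dfs2(n, n)
    return comp

def state_order(nodes, edges):
    """Total order on state variables respecting SCC dependencies."""
    comp = sccs(nodes, edges)
    preds = {c: set() for c in set(comp.values())}
    for s, t in edges:
        if comp[s] != comp[t]:
            preds[comp[t]].add(comp[s])   # s's SCC precedes t's SCC
    topo = TopologicalSorter(preds).static_order()
    rank = {c: i for i, c in enumerate(topo)}
    return sorted(nodes, key=lambda n: (rank[comp[n]], n))
```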
We translate a program to an xFDD using the $\textproc{to-xfdd}$ function (Figure~\ref{fig:fdd-syntax}), which
translates small parts of a program directly to xFDDs.
Composite programs get recursively translated and then
composed using a corresponding \cam{composition operator for xFDDs\xspace}: we use
$\oplus$ for $p + q$,
$\odot$ for $p$ ; $q$, and $\ominus$ for $\neg p$.
Figure~\ref{fig:fdd-normalization} gives a high-level
definition of the semantics of these operators.
For example, $d_1 \oplus d_2$
tries to merge similar test
nodes recursively by merging their true branches together and false ones together.
If the two tests are not the same and $d_1$'s test comes first in the total order, both of its subtrees
are merged recursively with $d_2$. The other case is similar.
$d_1 \oplus d_2$ for leaf nodes is the union of their action sets.
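Restricting attention to the high-level definition in Figure~\ref{fig:fdd-normalization}, i.e., ignoring the well-formedness bookkeeping, $\oplus$ can be sketched as a recursive merge over the fixed test order. The node encoding is hypothetical: branches are tuples, leaves are frozensets of action sequences:

```python
# Branches are ("br", test, hi, lo); leaves are frozensets of action
# sequences. lt(t1, t2) is the global test order (the paper's sqsubset).

def lt(t1, t2):
    # Hypothetical order: lexicographic on test tuples. Conveniently,
    # "ff" < "fv" < "state", so state tests come last as required.
    return t1 < t2

def oplus(d1, d2):
    leaf1, leaf2 = isinstance(d1, frozenset), isinstance(d2, frozenset)
    if leaf1 and leaf2:
        return d1 | d2                        # union of action sets
    if leaf1:
        return oplus(d2, d1)                  # symmetric case
    _, t1, h1, l1 = d1
    if leaf2:                                 # push the leaf down both sides
        return ("br", t1, oplus(h1, d2), oplus(l1, d2))
    _, t2, h2, l2 = d2
    if t1 == t2:                              # same test: merge branchwise
        return ("br", t1, oplus(h1, h2), oplus(l1, l2))
    if lt(t1, t2):                            # t1 first: merge d2 into both sides
        return ("br", t1, oplus(h1, d2), oplus(l1, d2))
    return ("br", t2, oplus(h2, d1), oplus(l2, d1))
```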
The hardest case is surely for $\odot$, where we try to add in an
action sequence $as$ to an xFDD $(t ~?~ d_1 : d_2)$.
Suppose we want to compose
$f \gets v_1$ with $(f = v_2~?~d_1 : d_2)$. The result of this xFDD composition
should behave as if we first do the update and then the condition on $f$. If
$v_1 = v_2$, the composition should continue only on $d_1$, and if not,
only on $d_2$. Now let's look at a similar example including state, composing
$s[srcip] \gets e_1$ with $(s[dstip] = e_2 ~?~ d_1 : d_2)$.
If $srcip$ and $dstip$ are equal (rare but not impossible) and $e_1$ and $e_2$
always evaluate to the same value,
then the whole composition reduces to just $d_1$.
The field-field tests are introduced to let us answer these equality questions, and that is why
they always precede state tests in the tree.
The trickiness in the algorithm comes from generating proper
field-field tests, by keeping
track of the information in the xFDD,
to \cam{properly answer the equality tests of interest}.
The full algorithm is given in appendix~\ref{app:st-dep}.
Note that the actual definition of the xFDD\xspace composition operators is a bit more involved than
the one in Figure~\ref{fig:fdd-normalization} as we have to make sure, while composing
FDDs, that the resulting FDD is \emph{well-formed}. An FDD is defined to be well-formed
if its tests conform to the pre-defined total order ($\sqsubset$) and do not contradict the previous
tests in the FDD.
Figure~\ref{fig:fdd-normalization-detailed} contains a more detailed
definition of $\oplus$ as an example.
To detect possible contradictions, we accumulate both the equalities
and inequalities implied by previous tests in an argument called $context$
and pass it through recursive calls to $\oplus$. Before applying $\oplus$ to the input FDDs,
we first run each of the FDDs through a function called \textproc{refine}, which removes
both redundant and contradicting tests from top of the input FDD based on the input $context$
until it reaches a non-redundant and non-contradicting test. After both input FDDs are
``refined'', we continue with the merge as before.
Finally, recall from \cref{sec:language} that inconsistent use of state variables
is prohibited by the language semantics when composing programs.
We enforce
the semantics by
looking for these violations while merging the xFDDs of composed
programs
and raising a compile error if the final xFDD contains
a leaf with parallel updates to the same state variable.
\subsection{Packet-State Mapping}
\label{sec:psm}
\newcommand{\textproc{PSM}}{\textproc{PSM}}
For a given program $p$, the corresponding xFDD $d$ offers
an explicit and complete specification of the way $p$
handles packets. We analyze $d$, using an algorithm called
\emph{packet-state mapping}, to determine which
\emph{flows} use which states. This information is further used in the
optimization problem (\cref{sec:milp}) to decide the correct
routing for each flow.
Our default definition of a flow is those packets that travel between
any given pair of ingress/egress ports in the OBS, though
we can use other notions of flow (see
\cref{sec:milp}).
Traversing from $d$'s root down to the action sets at $d$'s leaves, we
can gather information associating each flow with the set of state
variables read or written.
See appendix~\ref{app:fdd-seq} for the full algorithm.
Furthermore, network operators can give hints to the compiler by specifying
their network \emph{assumptions} in a separate policy:
{
\mdfsetup{
skipabove=2mm,
skipbelow=3mm,
rightmargin=0.3cm,
leftmargin = 0.3cm,
}
\begin{mdframed}
\raggedright
\codeb[scriptsize]{
\hspace{-2mm}assumption = (\match{srcip}{10.0.1.0/24} \& \match{inport}{1}) \\
\hspace{1.45cm} + (\match{srcip}{10.0.2.0/24} \& \match{inport}{2}) \\
\hspace{1.45cm} + ... \\
\hspace{1.45cm} + (\match{srcip}{10.0.6.0/24} \& \match{inport}{6}) \\
}
\end{mdframed}
}
We require the assumption policy to be a predicate over packet header
fields, only passing the packets that match the operator's assumptions.
\codeb{assumption} is then sequentially composed with the rest of the program,
enforcing the assumption by dropping packets that do not match the assumption.
Such assumptions benefit the packet-state mapping. Consider our
example xFDD in Figure~\ref{fig:ex-fdd3}.
Following the xFDD's tree structure, we can infer that all the packets
going to port 6 need all the three state variables in \Tunnel. We can
also infer that all the packets coming from the 10.0.6.0/24 subnet need
\codeb{orphan} and \codeb{susp-client}. However, there is nothing in the program to
tell the compiler that these packets can only enter the network from port 6.
Thus, the above assumption policy can help the compiler to identify this relation
and place state more efficiently.
\subsection{State Placement and Routing}
\label{sec:milp}
\cam{At this stage, the compiler has enough information to fill in the details abstracted away from the programmer:
where and how each state variable should be placed, and how the traffic should be routed in the network.
There are two general approaches for deciding state placement and routing. One is to keep
\emph{each} state variable at one location and route the traffic through the
state variables it needs. The other is to keep multiple copies of the same state variable on different switches and
partition and route the traffic through them.
The second approach requires mechanisms to keep different copies of the same state variable consistent. However, it is not possible to provide strong consistency guarantees when distributed updates are made on a packet-by-packet basis at line rate. Therefore, we chose the first approach, which locates each state variable at one physical switch.}
To decide state placement and routing, we
generate an optimization problem, a \emph{mixed-integer linear
program} (MILP) that is an extension of the multi-commodity flow
linear program. The MILP has three key inputs: the concrete network
topology, the state dependency graph $G$, and the
packet-state mapping, and two key outputs: routing
and state placement (Table~\ref{tab:milp-inout}).
\cam{Since route selection depends on state placement and each
state variable is constrained to one physical location, we need to make sure
the MILP picks \emph{correct} paths without degrading network
performance.
Thus, the MILP minimizes the sum of link utilization in the network as a measure of congestion.
However, other objectives or constraints are conceivable
to customize the MILP to other kinds of performance requirements.}
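On a toy topology, the interaction between placement and routing can be illustrated by brute force: place a state variable $s$ on the node that minimizes the total length of the routes of all flows needing $s$, with each flow forced through that node. This is only an illustration; the compiler solves the full MILP with link capacities, state dependencies, and co-location constraints:

```python
import heapq

def shortest(adj, a, b):
    """Unit-weight Dijkstra; returns the hop count from a to b."""
    dist, pq = {a: 0}, [(0, a)]
    while pq:
        d, n = heapq.heappop(pq)
        if n == b:
            return d
        if d > dist.get(n, float("inf")):
            continue  # stale queue entry
        for m in adj[n]:
            if d + 1 < dist.get(m, float("inf")):
                dist[m] = d + 1
                heapq.heappush(pq, (d + 1, m))
    return float("inf")

def place(adj, flows):
    """Place s on the node minimizing total u -> s -> v path length,
    where flows is a list of (u, v) pairs that all need s."""
    def cost(n):
        return sum(shortest(adj, u, n) + shortest(adj, n, v)
                   for u, v in flows)
    return min(adj, key=cost)
```

In the test topology below, two flows share a state variable, and the best placement is the node their paths naturally have in common.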
\snaptitle{Inputs.}
The topology is defined in terms of the following inputs to the MILP:
\begin{inlinelist}
\item the nodes, some distinguished as edges (ports in OBS),
\item expected traffic $d_{uv}$ for every pair of edge nodes $u$ and $v$, and
\item link capacities $c_{ij}$ for every pair of nodes $i$ and $j$.
\end{inlinelist}
State dependencies in $G$ are translated
into input sets $dep$ and $tied$. $tied$
contains pairs of state variables which are in the same SCC
in $G$, and must be co-located.
$dep$ identifies state variables with dependencies that do not need to be co-located;
in particular, $(s,t) \in dep$ when $s$ precedes $t$ in variable ordering, and they are not
in the same SCC in $G$.
The packet-state mapping is used as
the input variables $S_{uv}$, identifying the set of
state variables needed on flows between nodes $u$ and $v$.
\begin{table}[t!]
\small
\centering
\begin{tabular}{|c|l|}\hline
\multicolumn{1}{|c}{\bf Variable} & \multicolumn{1}{|c|}{\bf Description}\\ \hline
$u, v$ & \text{edge nodes (ports in OBS)} \\
$n$ & \text{physical switches in the network}\\
$i, j$ & \text{all nodes in the network} \\
$d_{uv}$ & \text{traffic demand between $u$ and $v$} \\
$c_{ij}$ & \text{link capacity between $i$ and $j$} \\
$dep$ & \text{state dependencies} \\
$tied$ & \text{co-location dependencies }\\
$S_{uv}$ & \text{state variables needed for flow $uv$} \\
\hline
$R_{uvij}$ & fraction of $d_{uv}$ on link $(i,j)$\\
$P_{sn}$ & 1 if state $s$ is placed on $n$, 0 otherwise\\
$P_{suvij}$ & $d_{uv}$ fraction on link ($i,j$) that has passed $s$ \\
\hline
\end{tabular}
\caption{Inputs and outputs of the optimization problem.
\label{tab:milp-inout}}
\end{table}
\snaptitle{Outputs and Constraints.} The routing outputs are variables
$R_{uvij}$, indicating what fraction of the flow from edge node $u$
to $v$ should traverse the link between nodes $i$ and $j$. The
constraints on $R_{uvij}$ (left side of
Table~\ref{tab:milp-constraints}) follow the multi-commodity flow
problem closely, with standard link capacity and flow conservation
constraints, and edge nodes distinguished as sources and sinks of
traffic.
State placement is determined by the variables $P_{sn}$, which
indicate whether the state variable $s$ should be placed on the physical switch
$n$.
Our constraints here are specific to our setting.
First, every state variable $s$ can be placed on exactly one
switch, a choice we discussed earlier in this section.
Second, we must ensure that flows that need a given state variable $s$ traverse
that switch.
Third, we must ensure that each flow traverses states in the order
specified by the $dep$ relation; this is what the variables
$P_{suvij}$ are for. We require that $P_{suvij} = R_{uvij}$ when the
traffic from $u$ to $v$ that goes over the link $(i, j)$ has already
passed the switch with the state variable $s$, and zero otherwise. If
$dep$ requires that $s$ should come before some other state variable $t$---and
if the $(u, v)$ flow needs both $s$ and $t$---we can use $P_{suvij}$
to make sure that the $(u, v)$ flow traverses the switch with $t$ only
after it has traversed the switch with $s$ (the last state constraint in
Table~\ref{tab:milp-constraints}).
Finally, we must make sure that state variables $(s,t) \in tied$ are located on
the same switch.
\cam{Note that only state variables that are \emph{inter-dependent} are required
to be located on the same switch. Two variables $s$ and $t$ are inter-dependent if
a read from $s$ is required before a write to $t$ \emph{and vice versa}.
Placing them on different switches will result in a forwarding
loop between the two switches which is not desirable in most networks.
Therefore, in order to synchronize reads and writes to inter-dependent variables correctly,
they are always placed on the same switch.}
\cam{Although the current prototype chooses the same path for the traffic between the same
ports,} the MILP can be configured to decide paths for more fine-grained notions of flows.
Suppose packet-state mapping finds that only packets with $srcip=x$
need state variable $s$. We refine the MILP input to have two edge nodes per
port, one for traffic with $srcip=x$ and one for the rest, so
the MILP can choose different paths for them.
\cam{Finally, the MILP makes a \emph{joint} decision for state placement and routing.
Therefore, path selection is tied to state placement. To have more freedom in picking
forwarding paths, one option is to first use common traffic engineering techniques to decide
routing, and then optimize the placement of state variables with respect to the selected paths.
However, this approach may require replicating state variables and maintaining consistency
across multiple copies, which as mentioned earlier, is not possible at line rate for distributed
packet-by-packet updates to state variables.
}
\subsection{Generating Data-Plane Rules}
\label{sec:rulegen}
Rule generation happens in two phases and combines
information from the xFDD and MILP to configure the
network switches.
We assume each packet is
augmented with a SNAP\xspace-header upon entering the network,
which contains its original OBS inport and future outport,
and the id of the last processed xFDD node, the purpose of
which will be explained shortly.
This header is stripped off \cam{by the egress switch} when the packet exits the network.
We use \codeb{\Tunnel[footnotesize];assign-egress} from \cref{sec:example} as a running example,
with its xFDD in Figure~\ref{fig:ex-fdd3}. For the sake of the example,
we assume that all the state variables are stored on $C_6$
instead of $D_4$.
\begin{table}[t!]
\scriptsize
\centering
\begin{tabular}{|>{$}l<{$}|>{$}l<{$}|}
\hline
\text{\textbf{Routing Constraints}} & \text{\textbf{State Constraints}} \\
\hline
& \sum_{n} P_{sn} = 1\\
\sum_j R_{uvuj} = 1 & \forall u, v.~ \forall s \in S_{uv}.~\sum_i R_{uvin} \ge P_{sn}\\
\sum_i R_{uviv} = 1& \forall (s, t)\in tied.~P_{sn} = P_{tn}\\
\sum_{u,v} R_{uvij}d_{uv} \le c_{ij} & P_{suvij} \leq R_{uvij} \\
\sum_i R_{uvin} = \sum_j R_{uvnj} & P_{sn} + \Sigma_{i} P_{suvin} = \Sigma_{j} P_{suvnj} \\
\sum_i R_{uvin} \leq 1 & \forall s \in S_{uv}.~P_{sv} + \sum_i P_{suviv} = 1 \\
& P_{sn} + \Sigma_i P_{suvin} \ge P_{tn} \\
\hline
\end{tabular}
\caption{Constraints of the optimization problem.}
\label{tab:milp-constraints}
\end{table}
In the first phase, we break the xFDD down into `per-switch' xFDDs, since not every switch
needs the entire xFDD to process packets.
Splitting the xFDD is straightforward given placement
information: stateless tests and actions can happen anywhere, but
reads and writes of state variables must happen on switches storing them.
For example, edge switches ($I_1$ and $I_2$, and $D_1$ to $D_4$)
only need to process packets up to the state tests, e.g., tests 3 and
8, and write the test number in the packet's SNAP\xspace-header showing
how far into the xFDD they progressed.
Then, they send the packets to $C_6$, which has the corresponding state
variables, \codeb{orphan} and \codeb{susp-client}. $C_6$, on the other
hand, does not need the top part of the xFDD. It just needs the
subtrees containing its state variables to continue processing the
packets sent from the edges.
The per-switch xFDDs are then translated to switch-level configurations, by
a straightforward traversal of the xFDD (See \cref{sec:implementation}).
In the second phase, we generate a set of match-action
rules that take packets through the paths decided by the MILP. These
paths comply with the state ordering used in the xFDD,
thus they get packets to switches with the right states in the right order.
Note that packets contain the path identifier (the OBS inport and outport,
$(u, v)$ pair in this case)
and the ``routing'' match-action rules are generated in terms of this identifier
to forward them on the correct path.
Additionally, note that it may not always be possible to decide the egress
port $v$ for a packet upon entry if its outport depends on state.
We observe that in that case, all the paths for the possible
outports of the packet pass through the state variables it needs. We
\cam{load-balance over} these paths in proportion to their capacity and show, in
appendix~\ref{appendix}, that traffic on
these paths remains within their capacity limits.
\cam{To see an example of how packets are handled by generated rules,}
consider a DNS response with source IP 10.0.1.1 and destination IP
10.0.6.6, entering the network from port 1. The rules on $I_1$
process the packet up to test 8 in the xFDD, tag the packet with the
path identifier (1, 6) and number 8. The packet is then sent to $C_6$.
There, $C_6$ will process the packet from test 8, update state variables
accordingly, and send the packet to $D_4$ to exit the network from port 6.
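The same walk-through can be simulated in a few lines of Python; the dict-based packet format and stage functions are illustrative, not the generated match-action rules themselves:

```python
# Toy simulation of how generated rules tag and steer the example DNS
# response from I1 through C6 to D4 (switch and field names follow the
# example above; the packet representation is illustrative).

def edge_stage(pkt):
    # I1: process the xFDD up to state test 8, then tag and forward.
    pkt["path_id"] = (pkt["inport"], 6)   # OBS (inport, outport) pair
    pkt["resume_at"] = 8                  # progress into the xFDD
    pkt["next_hop"] = "C6"                # owner of the state variables
    return pkt

def core_stage(pkt, susp_client):
    # C6: resume from test 8, update state, send toward the egress edge.
    assert pkt["resume_at"] == 8
    susp_client[pkt["dstip"]] = susp_client.get(pkt["dstip"], 0) + 1
    pkt["next_hop"] = "D4"                # exits the network from port 6
    return pkt

state = {}
pkt = {"inport": 1, "srcip": "10.0.1.1", "dstip": "10.0.6.6",
       "srcport": 53}
pkt = core_stage(edge_stage(pkt), state)
```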
\section{Conclusion}\label{sec:conclusion}
In this paper, we introduced a stateful SDN programming model with a one-big-switch abstraction, persistent global arrays, and network transactions. We developed algorithms for analyzing and compiling programs, and distributing their state across the network. Based on these ideas, we prototyped and evaluated the SNAP language and compiler on numerous sample programs. \cam{We also explored several possible extensions to SNAP\xspace to support a wider range of stateful applications. Each of these extensions introduces new and interesting research problems to extend our language, compilation algorithms, and prototype.}
\section{Discussion}
\label{sec:discussion}
This section discusses data-plane implementation strategies
for SNAP\xspace's stateful operations, how SNAP\xspace relates to middleboxes,
and possible extensions to our techniques to enable a broader range
of applications.
\subsection{Stateful Operations in the Data Plane}
A state variable (array) in SNAP\xspace is a key-value mapping, or a \emph{dictionary},
on header fields, persistent across multiple packets.
When the key (index) range is small, it is feasible to pre-allocate all the memory
the dictionary needs and implement it using an array.
A large but \textit{sparse} dictionary
can be implemented using a \textit{reactively}-populated table,
similar to a MAC learner table.
It contains a single default entry in the beginning,
and as packets fly by and change the state variable,
it \textit{reactively} adds/updates the corresponding entries.
In software, there are efficient techniques to implement a dictionary in either approach, and some software switches already support similar reactive ``learning'' operations, either atomically~\cite{netasm} or
with small periods of inconsistency~\cite{openvswitch}.
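The reactively-populated table above can be sketched in plain Python (this is an illustration of the idea, not a switch implementation):

```python
# A sparse dictionary that starts with a single default value and
# reactively adds concrete entries only as packets modify the state,
# similar in spirit to a MAC-learner table.

class ReactiveDict:
    def __init__(self, default):
        self.default = default
        self.table = {}                 # starts with no concrete entries

    def __getitem__(self, key):
        return self.table.get(key, self.default)

    def __setitem__(self, key, value):
        self.table[key] = value         # reactively add/update an entry

orphan = ReactiveDict(False)
assert orphan[("10.0.6.6", "1.2.3.4")] is False   # default entry
orphan[("10.0.6.6", "1.2.3.4")] = True            # a packet updates state
```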
The options for current hardware are:
\begin{inlinelist}
\item arrays of registers, which are already supported in
emerging switch interfaces~\cite{p4}.
They can be used to implement small dictionaries, as well as
Bloom Filters and hash tables as sparse dictionaries.
In the latter case, it is possible for two different keys to hash to the same dictionary entry.
However, there are applications such as load balancing and flow-size-based sampling
that can tolerate such collisions \cite{FAST}.
\item Content Addressable Memories (CAMs) are typically present in today's
hardware switches and can be modified by a software agent running on the switch.
Since CAM updates triggered by a packet are not immediately available to
the following packets,
CAMs may be used for applications that tolerate small periods of state inconsistency, such as
a MAC learner, DNS tunnel detection, and others from Table~\ref{fig:example-list}.
\end{inlinelist}
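As a concrete illustration of the register-array option above, the following sketch hashes keys into a fixed-size array; a CRC stands in for the hardware hash, and the array size is made up:

```python
import zlib

# A register array used as a sparse dictionary: keys are hashed into a
# fixed-size array, so two distinct keys may collide and share one
# counter -- tolerable for applications such as flow-size-based
# sampling or load balancing.

N_REGISTERS = 8
registers = [0] * N_REGISTERS

def increment(key):
    idx = zlib.crc32(key.encode()) % N_REGISTERS  # hardware: CRC of fields
    registers[idx] += 1                           # collisions share a slot
    return idx

increment("10.0.1.1:53")
increment("10.0.2.2:80")
```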
Our NetASM implementation (\cref{sec:implementation}) takes the CAM-based approach.
NetASM's software switch supports atomic updates to the tables in the data plane and therefore can
perform \emph{consistent} stateful operations.
\cam{At the time of writing this paper, we are not aware of any hardware switch
that can implement an \emph{arbitrary} number of SNAP\xspace's stateful operations both at \emph{line rate}
and with \emph{strong consistency}. Therefore, we use \nohyphens{NetASM's} low-level primitives as the
compiler's backend so that we can specify data-plane primitives that are required for an efficient and
consistent implementation of SNAP\xspace's operations. If one is willing to relax one of the above constraints
for a specific application, i.e., operating at line rate or strong consistency,
it would be possible to implement SNAP\xspace on today's switches.
If strong consistency is relaxed, CAMs/TCAMs can be programmed using languages such as P4~\cite{p4}
to implement SNAP\xspace's stateful operations as described above.
If line-rate processing is relaxed, one can use software switches, FPGAs, or programmable hardware
switching devices such as those in the OpenNFP project, which allow insertion of Micro-C code extensions
into P4 programs at the expense of processing speed~\cite{opennfp}.}
\subsection{SNAP\xspace and Middleboxes}
Networks traditionally rely on middleboxes for
advanced packet processing, including stateful functionalities.
However, advances in switch technology enable stateful packet processing
in the data plane, which naturally makes the switches capable of
subsuming a subset of middlebox functionality.
SNAP\xspace provides a \emph{high-level programming
framework} to exploit this ability; hence, it can express a wide range of stateful programs
that are typically relegated to middleboxes (see Table~\ref{fig:example-list} for examples).
This helps the programmer
think about a single, explicit network policy, as opposed to a disaggregated, implicit network policy
spread across middleboxes, and therefore gain more control and customization over a variety of
simpler stateful functionalities.
This also makes SNAP\xspace subject to challenges similar
to those of managing stateful middleboxes.
For example, many network functions must observe all traffic pertaining to
a connection \emph{in both directions}.
In SNAP,
if traffic in both directions
uses a shared state variable, the MILP optimizer
forces traffic in both directions through the same node.
Moreover, previous work such as
Split/Merge~\cite{SplitMerge} and OpenNF~\cite{OpenNF}
show how to migrate \emph{internal} state from one network
function to another, and Gember-Jacobson et
al.~\cite{GJ} manage to migrate state without buffering
packets at the controller.
SNAP\xspace currently focuses on static state placement. However,
since SNAP\xspace's state variables are explicitly declared as part of the policy, rather
than hidden inside blackbox software,
SNAP\xspace is well situated to adopt these algorithms to support
smooth transitions of state variables in dynamic state
placement.
Additionally, the SNAP\xspace compiler can easily analyze a program to
determine whether a switch modifies packet fields
to ensure correct traffic steering---something that is challenging today with
blackbox middleboxes~\cite{FlowTags,Simple}.
While SNAP\xspace goes a step beyond previous high-level languages
to incorporate stateful programming into SDN,
we neither claim that it is as expressive
as all stateful middleboxes, nor that it can replace
them.
To interact with middleboxes,
SNAP may adopt techniques such as
FlowTags~\cite{FlowTags} or SIMPLE~\cite{Simple} to
direct traffic through middlebox chains by
tagging packets to mark their progress.
\cam{Since SNAP\xspace has its own tagging and steering to keep track
of the progress of packets through the policy's xFDD\xspace, this adoption
may require integrating tags in the middlebox framework with SNAP\xspace's tags.
As an example, we will describe below how SNAP\xspace and FlowTags can be used
together on the same network.}
\cam{In FlowTags,
users specify which class of traffic should pass which chain of middleboxes under what conditions.
For instance, they can ask for web traffic to go to an intrusion detection system (IDS) after a firewall if
the firewall marks the traffic as suspicious. The controller keeps a mapping between the tags and the flow's original
five tuple plus the contextual information of the last middlebox, e.g., suspicious vs. benign in the case of a firewall. The tags are used
for steering the traffic through the right chain of middleboxes and preserving the original information of the flow in case it is
changed by middleboxes.}
\cam{To use FlowTags with SNAP\xspace, we can treat middlebox contexts as state variables and
transform FlowTags policies to SNAP\xspace programs. Thus, they can
be easily composed with other SNAP\xspace policies.
Next, we can fix the placement of middlebox state variables to the actual location of the middlebox
in the network in SNAP\xspace's MILP. This way, SNAP\xspace's compiler can decide state placement and routing for
SNAP\xspace's own policies while making sure that the paths between different middleboxes in
the FlowTags policies exist in the network. Thus, steering happens using SNAP\xspace-generated tags.
Middleboxes can still use tags from FlowTags to learn the flow's original information or the context of the previous middlebox.
}
Finally, we focus on \emph{programming} networks;
if verification is of interest in future work, one might adopt techniques such as
RONO~\cite{RONO} to verify isolation properties
in the presence of stateful middleboxes. In summary,
interacting with existing middleboxes is no harder or easier
in SNAP\xspace than it is in other \emph{global} SDN languages, \cam{stateless or stateful}, such as
NetKAT~\cite{netkat} or Stateful NetKAT~\cite{StatefulNetKAT}.
\cam{
\subsection{Extending SNAP\xspace}
\label{subsec:extensions}
\snaptitle{Sharding state variables.}
The MILP assigns each state variable to
\emph{one} physical switch to avoid the overhead
of synchronizing multiple instances of the same variable.
Still, distributing a state variable remains a valid option. For instance, the compiler
can partition $s[inport]$ into $k$ \emph{disjoint} state variables, each storing
$s$ for one port. The MILP can decide
placement and routing as before, this time with the option of distributing partitions of $s$
with no concerns for synchronization. See appendix~\ref{app:sharding} for more details.
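The partitioning idea can be sketched as follows; the port list and shard placement are illustrative, and SNAP\xspace's compiler performs this symbolically rather than at run time:

```python
# Partition a state variable indexed by inport into k disjoint
# per-port shards. Because the shards are disjoint, a placement
# algorithm may assign them to different switches independently,
# with no cross-shard synchronization.

def shard_by_inport(ports):
    return {p: {} for p in ports}     # one disjoint dictionary per port

shards = shard_by_inport([1, 2, 6])
shards[1]["10.0.1.1"] = 5             # a write at inport 1 touches shard 1 only
placement = {1: "I1", 2: "I2", 6: "D4"}   # hypothetical per-shard homes
```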
\snaptitle{Fault-Tolerance.}
SNAP\xspace's current prototype does not implement any particular fault tolerance mechanism in case a switch holding a state variable fails.
Therefore, the state on the failed switch will be lost.
However, this problem is not inherent or unique to SNAP\xspace and will happen in existing solutions with middleboxes too if the state of the middlebox is not replicated.
Applying common fault tolerance techniques to switches with state to avoid state loss in case of failure can be an interesting direction for future work.
\snaptitle{Modifying fields with state variables.}
An interesting extension to SNAP\xspace is allowing a packet field
to be directly modified with the value of a state variable at a
specific index:
\codeb{\modify{f}{s[e]}}.
This action can be used in applications such as NATs and proxies,
which can store connection mappings in state variables and modify
packets accordingly as they fly by.
Moreover, this action would enable SNAP\xspace programs to modify a field
by the output of an arbitrary function on a set of packet fields,
such as a hash function.
Such a function is nothing but a fixed mapping between input header fields and
output values. Thus, when analyzing the program, the compiler can treat
these functions as fixed state variables with the function's input fields as index
for the state variable
and place them on switches with proper capabilities when distributing
the program across the network.
However, adding this action results in
complicated dependencies between program statements, which is
interesting to explore as future work.
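A NAT-like use of the proposed action can be sketched as follows; the mapping and field names are hypothetical example data:

```python
# Sketch of the proposed f <- s[e] action: dstip is overwritten with
# the value a state variable stores at index dstip, as a NAT or proxy
# might do for connection mappings.

nat = {"203.0.113.7": "10.0.6.6"}     # state variable s, keyed by dstip

def apply_rewrite(pkt):
    # f <- s[e] with f = dstip and e = dstip
    if pkt["dstip"] in nat:
        pkt["dstip"] = nat[pkt["dstip"]]
    return pkt

pkt = apply_rewrite({"srcip": "1.2.3.4", "dstip": "203.0.113.7"})
```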
\snaptitle{Deep packet inspection (DPI).} Several applications such as intrusion detection require
searching the packet's payload for specific patterns.
SNAP\xspace can be extended
with an extra field called \emph{content}, containing the packet's payload.
Moreover, the semantics
of tests on the content field can be extended to match on regular expressions.
The compiler can also be modified to assign content tests
to switches with DPI capabilities.
\snaptitle{Resource constraints.}
SNAP\xspace's compiler
optimizes state placement and routing for link utilization.
%
However, other resources such as switch memory and processing power in terms of maximum number of
complicated operations on packets (such as stateful updates, increments, or decrements) may limit the possible computations on
a switch.
An interesting direction for future work would be to augment the SNAP\xspace compiler with the ability to optimize for these additional resources.
\snaptitle{Cross-packet fields.}
Layer 4-7 fields are useful for classifying flows in
stateful applications, but are often scattered across multiple physical packets.
Middleboxes typically perform session reconstruction to extract these fields.
Although the SNAP\xspace language is agnostic to the chosen set of fields,
the compiler currently supports fields stored \emph{in the packet itself}
and the state associated with them. However, it may be interesting to explore
abstractions for expressing how multiple packets (e.g., in a session) can form
``one big packet'' and use its fields.
The compiler can further
place sub-programs that use cross-packet fields on devices that are capable of reconstructing
the ``one big packet''.
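The reconstruction step can be sketched as follows; the offsets and payloads are illustrative, and a real reassembler must also handle gaps, overlaps, and retransmissions:

```python
# Form a "one big packet" from per-segment payloads of one session so
# that a cross-packet field (e.g., a header split across segments)
# could be tested against.

def reassemble(segments):
    # segments: iterable of (byte_offset, payload) pairs
    return b"".join(payload for _, payload in sorted(segments))

big_packet = reassemble([(7, b"World"), (0, b"Hello, ")])
```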
\snaptitle{Queue-based policies.}
SNAP\xspace currently has no notion of queues and therefore cannot be used to express queue-based
performance-oriented policies such as active queue management, queue-based load balancing, and
packet scheduling.
There is ongoing research on finding the right set of primitives for expressing such
policies~\cite{pifo}, which is largely orthogonal and complementary to SNAP\xspace's current goals.
}
\section{Evaluation}
\label{sec:evaluation}
\newcommand{P1}{P1}
\newcommand{P2}{P2}
\newcommand{P3}{P3}
\newcommand{P4}{P4}
\newcommand{P5}{P5}
\newcommand{P6}{P6}
This section evaluates SNAP\xspace in terms of language expressiveness and
compiler performance.
\subsection{Language Expressiveness}
\begin{table}[t!]
\centering
\scriptsize
\begin{tabular}{| l | l |} \cline{2-2}
\multicolumn{1}{c|}{} &
\multicolumn{1}{c|}{\bf Application} \\ \hline
\multirow{5}{1.3cm}{Chimera~\cite{chimera}}& \# domains sharing the same IP address \\
& \# distinct IP addresses under the same domain \\
& DNS TTL change tracking \\
& DNS tunnel detection\\
& Sidejack detection\\
& Phishing/spam detection\\ \hline
\multirow{6}{1cm}{FAST~\cite{FAST}}& Stateful firewall\\
& FTP monitoring\\
& Heavy-hitter detection\\
& Super-spreader detection\\
& Sampling based on flow size\\
& Selective packet dropping (MPEG frames)\\
& Connection affinity\\ \hline
\multirow{4}{1cm}{Bohatei~\cite{bohatei}}& SYN flood detection \\
& DNS amplification mitigation\\
& UDP flood mitigation\\
& Elephant flows detection\\ \hline
\multirow{2}{1cm}{Others} & Bump-on-the-wire TCP state machine \\
& Snort flowbits~\cite{Snort}\\ \hline
\end{tabular}
\caption{Applications written in SNAP\xspace.}
\label{fig:example-list}
\end{table}
We have implemented in SNAP\xspace several stateful network functions (Table~\ref{fig:example-list}) that are typically relegated to middleboxes. Examples were taken from the
Chimera~\cite{chimera}, FAST~\cite{FAST}, and Bohatei~\cite{bohatei} systems.
The code can be found in appendix~\ref{app:examples}.
Most examples use protocol-related fields in fixed
packet-offset locations, which are parsable by emerging programmable parsers.
Some fields
require session reassembly.
However, this is orthogonal to the
language expressiveness; as long as these fields are available to the
switch, they can be used in SNAP\xspace
programs.
To make them available,
one could extract these fields by placing a ``preprocessor''
before the switch pipeline, similar to middleboxes.
For instance, Snort~\cite{Snort} uses preprocessors
\cam{to extract fields for use in the detection engine.}
\subsection{Compiler Performance}
The compiler goes through several phases upon the system's cold start,
yet most events require only some of them.
Table~\ref{tab:phase-per-change} summarizes these
phases and their sensitivity to network and policy changes.
\snaptitle{Cold Start.} When the very first
program is compiled, the compiler
goes through all phases, including MILP model
creation, which happens \emph{only once} in the lifetime
of the network.
Once created, the model supports incremental additions and modifications of
variables and constraints in a few milliseconds.
\snaptitle{Policy Changes.} Compiling a \emph{new} program requires executing the three program analysis phases and rule generation as well as
\emph{both} state placement and routing, which are
decided using the MILP in~\cref{sec:milp}, denoted by ``ST''.
Policy changes are considerably
\emph{less frequent} in SNAP\xspace~(\cref{subsec:compiler-overview})
since most dynamic changes
are captured by state variables that reside in the data plane.
The policy, and consequently switch configurations,
\emph{do not} change upon state changes.
Thus, we expect policy changes to happen infrequently, and be planned in
advance. The Snort rule set, for instance, gets updated every few
days~\cite{snort-blog}.
\snaptitle{Topology/TM Changes.}
Once the policy is compiled, we fix the decided
state placement, and only re-optimize routing in response to network
events such as failures.
For that, we formulated a variant of ST, denoted as ``TE'' (traffic engineering), that
receives state placement as input, and decides forwarding paths while
satisfying state requirement constraints.
\cam{\emph{We expect TE to run every few minutes}} since in a typical network,
the traffic matrix is fairly stable and traffic engineering happens on the timescale of \emph{minutes}~\cite{google-b4, tm-reloaded, nucci2005problem,
suchara2011network}.
\begin{table}[t!]
\setlength{\belowcaptionskip}{-2mm}
\setlength\tabcolsep{3pt}
\scriptsize
\centering
\begin{tabular}{| l | l | l | c | c | c |}
\hline
\textbf{ID} & \multicolumn{2}{c|}{\textbf{Phase}} & \begin{tabular}{@{}c@{}}\textbf{\tiny Topo/TM} \\ \textbf{\tiny Change}\end{tabular} &
\begin{tabular}{@{}c@{}}\textbf{\tiny Policy} \\ \textbf{\tiny Change}\end{tabular} & \begin{tabular}{@{}c@{}} \textbf{\tiny Cold} \\ \textbf{\tiny Start}\end{tabular} \\
\hline
P1 & \multicolumn{2}{c |}{State dependency} & - & \checkmark & \checkmark\\
P2 & \multicolumn{2}{c |}{xFDD generation} & - & \checkmark & \checkmark\\
P3 & \multicolumn{2}{c |}{Packet-state map} & - & \checkmark & \checkmark \\
\hline
P4 & \multicolumn{2}{c |}{MILP creation} & - & - & \checkmark\\ \hline
\multirow{2}{*}{P5} & \multirow{2}{*}{\begin{tabular}{@{}c@{}} \text{MILP} \\ \text{solving}\end{tabular}} &
\begin{tabular}{@{}c@{}} \text{State placement } \\ \text{and routing (ST)}\end{tabular} & - & \checkmark & \checkmark\\ \cline{3-6}
& & Routing (TE) & \checkmark & - & -\\
\hline
P6 & \multicolumn{2}{c |}{Rule generation} & \checkmark & \checkmark & \checkmark\\
\hline
\end{tabular}
\caption{Compiler phases. For each scenario, phases that get executed are checkmarked.}
\label{tab:phase-per-change}
\end{table}
\subsubsection{Experiments}
We evaluated performance
based on applications listed in Table~\ref{fig:example-list}.
Traffic matrices are synthesized
using a gravity model~\cite{tm-synthesis}. We used an
Intel Xeon E3, 3.4 GHz, 32GB server, and PyPy
compiler \cite{pypy}.
\begin{table}
\scriptsize
\centering
\begin{tabular}{|l|c|c|c|} \hline
\textbf{Topology} & \textbf{\# Switches} & \textbf{\# Edges} & \textbf{\# Demands} \\ \hline
Stanford & 26 & 92 & 20736 \\
Berkeley & 25 & 96 & 34225 \\
Purdue & 98 & 232 & 24336 \\
\hline
AS 1755 & 87 & 322 & 3600 \\
AS 1221 & 104 & 302 & 5184 \\
AS 6461 & 138 & 744 & 9216 \\
AS 3257 & 161 & 656 & 12544 \\
\hline
\end{tabular}
\caption{Statistics of evaluated enterprise/ISP topologies.}
\label{tab:enterprise-summary}
\end{table}
\begin{figure*}[t!]
\setlength{\abovecaptionskip}{4mm}
\setlength{\belowcaptionskip}{-2mm}
\begin{minipage}[t]{0.32\textwidth}
\captionsetup{justification=centering, font=scriptsize}
\includegraphics[width= .9\linewidth]{te-vs-org.pdf}
\caption{Compilation time of \Tunnel[scriptsize] with routing on enterprise/ISP networks.\label{fig:enterprise-te}}
\end{minipage} \hfill
\begin{minipage}[t]{0.32\textwidth}
\captionsetup{justification=centering, font=scriptsize}
\includegraphics[width=0.9\linewidth]{topo-times.pdf}
\caption{Compilation time of \Tunnel[scriptsize] with routing on IGen topologies.}
\label{fig:eval_topo}
\end{minipage} \hfill
\begin{minipage}[t]{0.32\textwidth}
\captionsetup{justification=centering, font=scriptsize}
\includegraphics[width= .9\linewidth]{state-times.pdf}
\caption{Compilation time for policies from Table~\ref{fig:example-list} incrementally
composed on a 50-switch network.}
\label{fig:eval_state}
\end{minipage}
\caption{Compiler runtimes for scenarios in Table~\ref{tab:phase-per-change} on various policies and topologies. Once compiled for the first time (cold start, policy change),
a policy reacts to traffic using its state variables. Topology/TM changes result in reoptimizing forwarding paths.}
\end{figure*}
\begin{table}
\centering
\scriptsize
\begin{tabular}{| l | c | c | c | c | c |} \cline{2-6}
\multicolumn{1}{c|}{} & \multirow{2}{*}{\textbf{P1-P2-P3} (s)} &
\multicolumn{2}{c|}{\textbf{P5} (s)}&
\multirow{2}{*}{\textbf{P6}(s)} &
\multirow{2}{*}{\textbf{P4} (s)} \\
\cline{3-4}
\multicolumn{1}{c|}{} & & \textbf{ST} & \textbf{TE} & & \\
\hline
Stanford & 1.1 & 29 & 10 & 0.1 & 75 \\
Berkeley & 1.5 & 47 & 18 & 0.1 & 150\\
Purdue & 1.2 & 67 & 27 & 0.1 & 169\\ \hline
AS 1755 & 0.6 & 19 & 6 & 0.04 & 22 \\
AS 1221 & 0.7 & 21 & 7 & 0.04 & 32 \\
AS 6461 & 0.8 & 116 & 47 & 0.1 & 120 \\
AS 3257 & 0.9 & 142 & 74 & 0.2 & 163 \\ \hline
\end{tabular}
\caption{Runtime of compiler phases when compiling \Tunnel[scriptsize] with routing on enterprise/ISP topologies.}
\label{tab:enterprise-results}
\end{table}
\snaptitle{Topologies.}
We used a set of three campus
networks and four inferred ISP topologies from
RocketFuel~\cite{rocketfuel} (Table~\ref{tab:enterprise-summary}).\footnote{\fontsize{8}{10} \selectfont The publicly available
Mininet instance of Stanford campus topology has
10 extra dummy switches to implement multiple links between two routers.} For ISP networks, we considered 70\% of the
switches with the lowest degrees as edge switches to form OBS external
ports.
The ``\# Demands'' column shows the number
of distinct OBS ingress/egress pairs. We assume directed
links.
Table~\ref{tab:enterprise-results} shows compilation time for the DNS tunneling example (\cref{sec:example}) on each network, broken down by compiler phase.
Figure~\ref{fig:enterprise-te} compares the compiler runtime for different
scenarios, combining the runtimes of phases relevant for each.
\snaptitle{Scaling with topology size.}
We synthesize networks
with 10--180 switches using IGen~\cite{igen}. In each network, 70\% of
the switches with the lowest degrees are chosen as edges and the DNS tunnel policy is
compiled with that network as a target. Figure~\ref{fig:eval_topo} shows
the compilation time for different scenarios, combining the
runtimes of phases relevant for each.
Note that as the topology size increases,
the policy size also grows in the \codeb{assign-egress} and \codeb{assumption} parts.
\snaptitle{Scaling with number of policies.}
\cam{The performance of several phases of the compiler, especially xFDD\xspace generation, is a function of
the size and complexity of the input policy. Therefore, we
evaluated how the compiler's performance scales with policy size using the example
programs from Table~\ref{fig:example-list}. Given that these programs are taken from recent
papers and tools in the literature~\cite{chimera, FAST, bohatei, Snort}, we believe they form
a fair benchmark for our evaluation. Except for TCP state machine, the example programs are similar
in size and complexity to the DNS tunnel example (\cref{sec:example}).
We use the 50-switch network from the previous experiment and start with the first program in
Table~\ref{fig:example-list}.
We then gradually increase the size of the final policy by combining this program with more programs from Table~\ref{fig:example-list} using the parallel composition operator.
Each additional component program affects traffic destined to a separate egress port.
Figure~\ref{fig:eval_state} depicts the compilation time as a function of the number of
components from Table~\ref{fig:example-list} that form the final policy.}
The $10$-second jump from 18 to 19 components occurs when the TCP state machine
policy is added, which is considerably more complex than the others.
\cam{The increase in the compilation time mostly comes from the xFDD\xspace generation phase.
In this phase, the composed programs are transformed
into separate xFDDs\xspace, which are then combined to form the xFDD\xspace for the whole policy (\cref{sec:fdds}).
The cost of xFDD composition depends on the size of the operands, so as more components are put together, the cost grows.
The cost may also depend on the order of xFDD\xspace composition.
Our current prototype composes xFDDs\xspace in the same order as the programs themselves are composed
and leaves finding the optimal order to compose xFDDs\xspace to future work.
}
\cam{The last data point in Figure~\ref{fig:eval_state} shows the compilation time of a policy composed of
all the 20 examples in Table~\ref{fig:example-list}, with a total of 35 state variables. These policies are
composed using parallel composition, which does not introduce read/write dependencies between
state variables. Thus, the dependency graph for the final policy is a collection of the
dependency graphs of the composed policies.
Each of the composed policies affects the traffic to a separate egress
port, which is detected by the compiler in the packet-state mapping phase.
Thus, when compiled to the 50-switch network, state variables for each policy are placed on the switch
closest to the egress port whose traffic the policy affects. If a policy were to affect a larger
portion of traffic, e.g., the traffic of a set of ingress/egress ports, SNAP\xspace would place state variables
in an optimal location where the aggregated traffic of interest is passing through. }
\subsubsection{Analysis of Experimental Results}
\emph{Creating the MILP takes longer than solving it}, in most cases,
and much longer than other phases. Fortunately, this
is a \emph{one-time} cost. After creating the MILP instance,
incrementally adding or removing variables and constraints (as the
topology and/or state requirements change) takes just a few milliseconds.
\emph{Solving the ST MILP unsurprisingly takes longer than the
rest of the phases} as the topology grows. It takes $\scriptsize \sim$ 2.5
minutes for the biggest synthesized topology and $\scriptsize \sim$ 2.3 minutes for
the biggest RocketFuel topology. The curve is close to
exponential as the problem is inherently computationally
hard. However, this phase takes place only
in cold start or upon a \emph{policy} change, which are infrequent
and planned in advance.
\emph{Re-optimizing routing
with fixed state placement is much faster}. In response to network events
(e.g., link failures), TE MILP can
recompute paths in around a minute across all our experiments,
\cam{\emph{which is the timescale we initially expected for this phase} as it runs
in the topology/TM change scenarios}.
Moreover, it can be used even on \emph{policy} changes,
if the user settles for a sub-optimal state placement using
heuristics rather than ST MILP. We plan to explore such
heuristics.
Given the kinds of events that require complete (policy change) or
partial (network events) recompilation, we believe that our compilation techniques
meet the requirements of enterprise networks and medium-size ISPs.
Moreover, if needed,
our compilation procedure could be combined with
traffic-engineering techniques once the state placement is decided, to
avoid re-solving the original or even TE MILP on small timescales.
\section{SNAP System Overview}\label{sec:example}
\newenvironment{snappolicy}[1][htb]
{\renewcommand{\algorithmcfname}{SNAP-Policy}
\begin{algorithm}[#1]%
}{\end{algorithm}}
This section overviews the key concepts in our language
and compilation process using example programs.
\subsection{Writing SNAP\xspace Programs}
\label{sec:e2e_example}
\begin{figure}[t!]
\mdfsetup{
skipabove = 0cm,
skipbelow = 0.1mm,
innerrightmargin=-2mm}
\begin{mdframed}
\centering
\codeb[footnotesize]{\textbf{DNS-tunnel-detect}}\\
\end{mdframed}
\mdfsetup{
skipabove = 0cm,
skipbelow = 3mm,
innerrightmargin=-2mm}
\begin{mdframed}
\raggedright
\begin{internallinenumbers}
\setlength\linenumbersep{-0.5mm}
\setlength\leftskip{0.2cm}
\codeb[scriptsize]{
\boldifelse{\inters{\match{dstip}{10.0.6.0/24}}{\match{srcport}{53}}}
{\\ \hspace{0.3cm}
\seq{\seq{\modify{orphan[dstip][dns.rdata]}{True}}\\ \hspace{0.3cm}
{susp-client[dstip]++}}\\ \hspace{0.3cm}
{\boldifelse{\match{susp-client[dstip]}{\emph{threshold}}}
{\\ \hspace{0.8cm}\modify{blacklist[dstip]}{True}\\ \hspace{0.3cm}}
{id\\}
}
\hspace{-0.2cm}
}
{ \\ \hspace{0.3cm}
\boldifelse{\inters{\match{srcip}{10.0.6.0/24}}
{orphan[srcip][dstip]}\\\hspace{0.3cm}}
{\seq{\modify{orphan[srcip][dstip]}{False}}{\\ \hspace{1cm}
susp-client[srcip]\text{\codeb{-{}-}} \\ \hspace{0.1cm}
}}
{id\\}
}
}
\end{internallinenumbers}
\end{mdframed}
\caption{SNAP\xspace implementation of \Tunnel.}
\label{fig:tunnel-code}
\end{figure}
\snaptitle{DNS tunnel detection.} The DNS protocol is designed to resolve information about domain names.
\cam{Since it is not intended for general data transfer, DNS often
draws less attention in terms of security monitoring than other
protocols, and is used by attackers to bypass security policies and leak information. }
Detecting DNS tunnels is one of many
real-world scenarios that require
state to track the properties of network flows~\cite{chimera}.
The following steps can be used to detect DNS tunneling~\cite{chimera}:
\begin{enumerate}
\item For each client, keep track of the IP addresses resolved by DNS responses.
\item For each DNS response, increment a counter. This counter
tracks the number of resolved IP addresses that a client does not use.
\item When a client sends a packet to a resolved IP address,
decrement the counter for the client.
\item Report tunneling for clients that exceed a threshold for resolved, but unused IP addresses.
\end{enumerate}
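The four steps above can be sketched in plain Python as follows; the threshold, state names, and event interface are illustrative rather than taken from the SNAP\xspace program:

```python
# State: orphan maps (client, resolved_ip) -> bool; susp_client counts
# resolved-but-unused IPs per client; THRESHOLD is an example value.
THRESHOLD = 3
orphan, susp_client, blacklist = {}, {}, set()

def on_dns_response(client, resolved_ip):
    orphan[(client, resolved_ip)] = True                    # step 1
    susp_client[client] = susp_client.get(client, 0) + 1    # step 2
    if susp_client[client] > THRESHOLD:                     # step 4
        blacklist.add(client)

def on_client_packet(client, dst_ip):
    if orphan.get((client, dst_ip)):                        # step 3
        orphan[(client, dst_ip)] = False
        susp_client[client] -= 1

for i in range(4):                        # four unused DNS responses
    on_dns_response("10.0.6.6", "1.2.3.%d" % i)
on_client_packet("10.0.6.6", "1.2.3.0")   # client uses one address
```

The SNAP\xspace program in Figure~\ref{fig:tunnel-code} expresses the same logic as a packet-processing policy over global state variables.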
Figure~\ref{fig:tunnel-code} shows a SNAP\xspace implementation of the above steps
that detects DNS tunnels to/from the %
CS department subnet 10.0.6.0/24 (see Figure~\ref{fig:example-topo}).
Intuitively, a SNAP program can be thought of as
a function that takes in a packet plus the current state of the
network and produces a set of transformed packets as well as updated state.
The incoming packet is read and written by referring to its fields
(such as \codeb{dstip} and \codeb{dns.rdata}).
The ``state'' of the network is read and written by referring to user-defined, array-based, global variables
(such as \codeb{orphan} or \codeb{susp-client}).
Before explaining the program in detail, note that it does not refer to specific network
device(s) on which it is implemented. SNAP\xspace programs are
expressed as if the network was \emph{one-big-switch} (OBS) connecting
edge ports directly to each other.
Our compiler automatically distributes the program across
network devices, freeing programmers from such details and making SNAP programs portable across
topologies.
The \Tunnel program examines two kinds of packets: incoming DNS responses
(which may lead to possible DNS tunnels) and outgoing packets to resolved IP addresses.
Line 1 checks whether the input packet is a DNS response
to the CS department.
The condition in the \codeb{if} statement is an example
of a simple \emph{test}. Such tests can involve any boolean combination of
packet fields.\footnote{\fontsize{8}{10} \selectfont {The design of the language is unaffected by the chosen set of fields.
For the purposes of this paper, we assume a rich set of fields, e.g. DNS response data.
New architectures such as P4~\cite{p4} have programmable parsers that allow users to customize
their applications to the set of fields required.}}
If the test succeeds, the packet could potentially belong to a DNS tunnel,
and will go through the detection steps (Lines 2--6).
Lines 2--6 use three global variables to keep track of DNS queries. Each variable
is a mapping between keys and values, persistent across multiple packets.
The \codeb{orphan} variable, for example,
maps each pair of IP addresses to a boolean value. If \codeb{orphan[c][s]} is
\codeb{True} then \codeb{c} has received a DNS
response for IP address \codeb{s}. The variable \codeb{susp-client}
maps the client's IP to the number of
DNS responses it has received
but not accessed yet.
If the packet is not a DNS response, a different test is performed,
which includes a stateful test over \codeb{orphan} (Line 8).
If the test succeeds,
the program updates \codeb{orphan[srcip][dstip]} to \codeb{False} and
decrements \codeb{susp-client[srcip]} (Lines 10--11).
This step changes the global state and thus affects the processing of future packets.
Otherwise, the packet is left unmodified --- \codeb{id} (Line 12) is a no-op.
\snaptitle{Routing.} \Tunnel
cannot stand on its own---it does not explain where to forward packets.
In SNAP\xspace, we can easily \textit{compose} it with a forwarding policy.
Suppose our target network is the simplified campus topology depicted in
Figure~\ref{fig:example-topo}. Here, $I_1$ and $I_2$ are connections to the Internet,
and $D_1$--$D_4$ represent edge switches in the departments, with $D_4$ connected to the CS building.
$C_1$--$C_6$ are core routers connecting the edges.
External ports (marked in red) are numbered 1--6
and IP subnet \codeb{10.0.i.0/24} is attached to port \codeb{i}.
The \codeb{assign-egress} program assigns outports to packets based on their destination IP
address:
{
\mdfsetup{
skipabove=4mm,
skipbelow=4mm,
rightmargin=.35cm,
leftmargin=.35cm,
}
\begin{mdframed}
\raggedright
\codeb[scriptsize]{
\hspace{-0.3cm} assign-egress = \boldifelse{\match{dstip}{10.0.1.0/24}\\}
{\modify{outport}{1}\\}
{\boldifelse{\match{dstip}{10.0.2.0/24}}
{\modify{outport}{2}\\}
{...}\\
}
\textbf{else }\boldifelse{\match{dstip}{10.0.6.0/24}}
{\modify{outport}{6} \\}
{drop \\}
}
\end{mdframed}
}
Note that the policy is independent of the internal network structure,
and recompilation is needed only if
the topology changes.
By combining \Tunnel with \codeb{assign-egress}, we have implemented a useful
end-to-end program:
\codeb[footnotesize]{
\seq{\Tunnel[footnotesize]}{assign-egress}}.
\snaptitle{Monitoring.} Suppose the operator wants to monitor packets
entering the network at each ingress port (ports 1--6). She
might use an array indexed by \codeb{inport} and increment the corresponding element
on packet arrival: \codeb[footnotesize]{count[inport]++}.
Monitoring should take place \emph{alongside} the rest of the program; thus, she might combine it using parallel
composition (\codeb[footnotesize]{+}): \codeb[footnotesize]{(\Tunnel[footnotesize] + } \codeb[footnotesize]{count[inport]++);} \codeb[footnotesize]{assign-egress}.
Intuitively, \codeb[footnotesize]{p + q} makes a copy of the incoming
packet and executes both \codeb{p} and \codeb{q} on it simultaneously.
Note that it is not always legal to compose two programs in parallel.
For instance, if one writes to the same global variable that the other reads,
there is a race condition, which leads to ambiguous state in the final program.
Our compiler detects such race conditions and rejects ambiguous programs.
\snaptitle{Network Transactions.}
Suppose that an operator sets up a
honeypot at port 3 with IP subnet 10.0.3.0/25. The following program records,
per inport, the IP and dstport of the last packet destined to the honeypot: \\
{
\mdfsetup{
skipabove=0mm,
skipbelow=5mm,
rightmargin=1cm,
leftmargin=1cm,
}
\begin{mdframed}
\raggedright
\codeb[scriptsize]{
\hspace{-2mm}\boldifelse{\match{dstip}{10.0.3.0/25} \\}
{\seq{\modify{hon-ip[inport]}{srcip}}{ \\ \hspace{0.6cm} \modify{hon-dstport[inport]}{dstport}} \\}
{id} \\
}
\end{mdframed}
}
Since this program processes many packets simultaneously,
it has an implicit race condition: if packets $p_1$ and $p_2$, both
destined to the honeypot, enter the network from port 1 and get reordered,
each may visit \codeb{hon-ip} and \codeb{hon-dstport}
in a different order (if the variables reside in different locations). Therefore, it is possible that \codeb{hon-ip[1]} contains the
source IP of $p_1$ and \codeb{hon-dstport[1]}
the destination \cam{port} of $p_2$ while the operator's intention was
that both variables refer to the same packet.
To establish such properties for a
collection of state variables, programmers can use \emph{network transactions} by
simply enclosing a series of statements in an \codeb{atomic} block. Atomic
blocks co-locate their enclosed state variables so that a series of updates can be made to appear atomic.
\begin{figure}[t!]
\centering
\includegraphics[width = 0.5\columnwidth]{example-topo.png}
\caption{Topology for the running example.}
\label{fig:example-topo}
\end{figure}
\subsection{Realizing Programs on the Data Plane}
\label{subsec:compiler-overview}
Consider
\codeb[footnotesize]{\Tunnel[footnotesize];}
\codeb[footnotesize]{assign-egress}.
To distribute this program across network
devices, the SNAP\xspace compiler should
decide (i) where to place state variables (\codeb{orphan}, \codeb{susp-client},
and \codeb{blacklist}), and (ii) how packets should be routed across the
physical network.
These decisions should be made in such a way
that each packet passes through
devices storing \emph{every} state variable it \emph{needs},
in the correct \emph{order}.
Therefore, the compiler needs information about which packets
need which state variables.
In our example program, for instance, packets with
\codeb{\match{dstip}{10.0.6.0/24}} and \codeb{\match{srcport}{53}}
need to pass all three state variables,
with \codeb{blacklist} accessed after the other two.
\snaptitle{Program analysis.} To extract the above information, we transform the program to an
intermediate representation called \emph{extended forwarding} \emph{decision diagram (xFDD)}
(see Figure~\ref{fig:ex-fdd3}).
FDDs were originally introduced in an earlier
work~\cite{FastNetKATCompiler}. We extended FDDs
in SNAP\xspace to support stateful packet processing.
An xFDD is like a binary decision diagram: each intermediate node is a
test on either packet fields or state variables.
The leaf nodes are sets of action sequences, rather than merely `true'
and `false' as in a BDD~\cite{bdds}. Each interior node has two successors:
\emph{true} (solid line), which determines the rest of the
forwarding decision process for inputs passing the test, and
\emph{false} (dashed line) for failed cases.
xFDDs are constructed compositionally; the xFDDs for different parts of
the program are combined to construct the final xFDD.
Composition is more involved with
stateful operations: the same state variable may be referenced in two xFDDs
with different header fields, e.g., once as \codeb{s[srcip]} and then
as \codeb{s[dstip]}. How can we know whether or not those fields are
equal in the packet?
We add a new kind of test,
over pairs of packet fields (\codeb{srcip = dstip}), and new
ordering requirements on the xFDD structure.
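As a rough illustration (not the compiler's actual data structures), an xFDD can be modeled as interior test nodes with true/false successors and leaves holding sets of action sequences:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Node:
    field: str            # packet field being tested
    value: object
    true_branch: "XFDD"   # solid edge: test passed
    false_branch: "XFDD"  # dashed edge: test failed

XFDD = Union[Node, frozenset]  # leaves: sets of action sequences

def evaluate(d, pkt):
    """Walk the diagram for one packet and return the leaf's actions."""
    while isinstance(d, Node):
        d = d.true_branch if pkt.get(d.field) == d.value else d.false_branch
    return d

# Tiny diagram: DNS responses to 10.0.6.0/24 reach a stateful leaf.
diagram = Node("dstip", "10.0.6.0/24",
               Node("srcport", 53,
                    frozenset({("update-state", "fwd")}),
                    frozenset({("fwd",)})),
               frozenset({("drop",)}))
```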
Once the program is transformed to an xFDD\xspace,
we analyze the xFDD\xspace to extract information about
which groups of packets need which state variables.
In Figure~\ref{fig:ex-fdd3}, for example, leaf number 10 is on the true branch of
\codeb{dstip=10.0.6.0/24} and \codeb{srcport=53}, which indicates that all packets
with this property may end up there. These packets need
\codeb{orphan}, because it is modified, and \codeb{susp-client}, because it
is both tested and modified on the path.
We can also deduce that these packets can enter the network
from any port, and that those not dropped will exit port 6.
Thus, we can \cam{use the xFDD\xspace to figure out which packets
need which state variables,
aggregate this information across OBS
ports,}
and choose paths for traffic between these ports accordingly.
\begin{figure}[t!]
\centering
\includegraphics[width = 0.78\columnwidth]{fdd3.png}
\caption{The equivalent xFDD\xspace for \\ \codeb[scriptsize]{\Tunnel[scriptsize]; assign-egress}
}
\label{fig:ex-fdd3}
\end{figure}
\snaptitle{Joint placement and routing.}
At this stage, the compiler has the information it needs to
distribute the program. It uses a mixed-integer linear
program (MILP) that solves an extension of the multi-commodity flow
problem to \emph{jointly} decide state placement and routing while
minimizing network congestion.
The constraints in the MILP guarantee that
the selected paths for each pair of OBS ports take
corresponding packets through
devices storing every state variable that they need,
in the correct order. Note that the xFDD\xspace analysis
can identify cases in which both directions of a
connection need the same state variable $s$, so the MILP
ensures they both traverse the device holding $s$.
In our example program,
the MILP places all state variables on D\textsubscript{4},
which is the optimal location as all packets to and from the protected
subnet must flow through D\textsubscript{4}.\footnote{
\fontsize{8}{10} \selectfont
State can be spread out across the network.
It just happens that in this case, one location turns out to be optimal.
} Note that this is not obvious
from the \Tunnel code alone, but rather from its \emph{combination} with
\codeb{assign-egress}.
This highlights the fact that in SNAP\xspace, program components can be written in a modular way,
while the compiler makes globally optimal decisions using information from all parts.
The optimizer also decides forwarding paths between
external ports. For instance, traffic from
$I_1$ and $D_1$ will go through $C_1$ and $C_5$ to reach $D_4$. The path from $I_2$ and $D_2$
to $D_4$ goes through $C_2$ and $C_6$, and $D_3$ uses $C_5$ to reach $D_4$. The paths between
the rest of the ports are also determined by the MILP in a way that minimizes
link utilization. The compiler takes state placement and routing results from the
MILP, partitions the program's intermediate representation (xFDD\xspace) among switches,
and generates rules for the controller to push to all stateless and stateful switches in the
network.
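To give a flavor of the optimization problem (a brute-force toy, not the MILP itself), consider placing a single state variable and routing two flows on a four-node diamond topology while minimizing the maximum link load. The topology and names here are illustrative only:

```python
import itertools
from collections import defaultdict

# Diamond topology: A-B-C on one side, A-D-C on the other.
links = [("A", "B"), ("B", "C"), ("A", "D"), ("D", "C")]
adj = defaultdict(list)
for u, v in links:
    adj[u].append(v)
    adj[v].append(u)

def simple_paths(src, dst, seen=()):
    """All simple paths from src to dst."""
    if src == dst:
        yield (dst,)
        return
    for n in adj[src]:
        if n not in seen:
            for rest in simple_paths(n, dst, seen + (src,)):
                yield (src,) + rest

flows = [("A", "C"), ("C", "A")]  # both directions need variable s

def congestion(paths):
    """Maximum number of unit-demand flows crossing any one link."""
    load = defaultdict(int)
    for path in paths:
        for u, v in zip(path, path[1:]):
            load[frozenset((u, v))] += 1
    return max(load.values())

best = None
for placement in list(adj):                       # candidate node hosting s
    options = [list(simple_paths(s, d)) for s, d in flows]
    for combo in itertools.product(*options):
        if all(placement in p for p in combo):    # every flow must cross s
            cost = congestion(combo)
            if best is None or cost < best[0]:
                best = (cost, placement, combo)
```

Placing the state at an endpoint both flows already traverse lets the two directions take disjoint sides of the diamond, so the best solution has maximum link load 1.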
\snaptitle{Reacting to network events.}
The above phases only run if the operator
changes the OBS program.
Once the program compiles, and to respond to network events
such as failures or traffic shifts, we use
a simpler and much faster version of the MILP that given
the current state placement, only re-optimizes for routing.
Moreover, with state on the data plane, policy changes become considerably
\emph{less frequent} because the policy, and consequently switch configurations,
\emph{do not} change upon changes to state.
In \Tunnel[footnotesize], for instance,
attack detection and mitigation are both captured in the program itself, happen
\emph{on the data plane}, and therefore react rapidly to malicious activities in the
network.
This is in contrast to the case where all the state is kept on the
controller. There, the policy must change and be recompiled multiple times,
during both detection and mitigation, to reflect the controller's state
changes in the rules on the data plane.
\section{FDD Sequential Composition}
\label{app:fdd-seq}
Figure~\ref{alg:fdd_seq} contains high-level pseudocode for the base case of sequential composition, namely composing one action sequence with another FDD. Apart from the composition operands, function \textproc{seq} has a third argument, $T$, which we call the \emph{context}. The context is a set of pairs, where each pair consists of a test and its result ($y$ for yes if the test holds, and $n$ for no). While recursively composing the action sequence with the FDD, we accumulate the resulting tests and their results in $T$, and use them deeper in the recursion to determine whether two fields are equal, or whether a field equals a specific value.
\textproc{seq} uses several helper functions, pseudocode for most of which is included in this section; we omit the details of some helpers for simplicity. More specifically, \textproc{update} takes a context and a mapping from fields to values, and updates the context according to the mapping. For instance, if $f$ is mapped to $v$ in the input mapping, the context is updated to include $(f = v, y)$. \textproc{infer} takes a context, a test, and a test result ($y$ or $n$), and returns true if the specified result can be inferred from the context for the given test. \textproc{value} takes a context and a field $f$; if the context implies $f = v$, it returns $v$, and otherwise returns $f$. Finally, \textproc{reverse} reverses the input list.
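A minimal sketch of how these helpers might look, representing tests as \codeb{('eq', field, value)} or \codeb{('feq', field1, field2)} tuples (illustrative only, not the compiler's code):

```python
def fields_of(test):
    """Packet fields a test mentions."""
    return {test[1], test[2]} if test[0] == "feq" else {test[1]}

def update(ctx, mapping):
    """Fold field := value assignments into the context."""
    # Drop tests that mention a reassigned field, then record the new values.
    ctx = {(t, r) for (t, r) in ctx if not (fields_of(t) & set(mapping))}
    for f, v in mapping.items():
        ctx.add((("eq", f, v), "y"))
    return ctx

def infer(ctx, test, result):
    """True if `result` ('y' or 'n') for `test` follows from the context."""
    if (test, result) in ctx:
        return True
    if test[0] == "eq":
        for (t, r) in ctx:
            if t[0] == "eq" and t[1] == test[1] and r == "y":
                # Knowing field = t[2] decides any equality test on it.
                return (t[2] == test[2]) == (result == "y")
    return False

def value(ctx, f):
    """Return v if the context implies f = v, otherwise f itself."""
    for (t, r) in ctx:
        if t[0] == "eq" and t[1] == f and r == "y":
            return t[2]
    return f
```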
\begin{figure}[h!]
\includegraphics[width=\textwidth]{fdd-seq-main.pdf}
\caption{Base Case for Sequential Composition of FDDs.\label{alg:fdd_seq}}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=\textwidth]{fdd-seq-helpers1.pdf}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=\textwidth]{fdd-seq-helpers2.pdf}
\end{figure}
\twocolumn
\section{Implementation}\label{sec:implementation}
The compiler is mostly implemented in Python, except for
the state placement and routing phase (\cref{sec:milp})
which uses the Gurobi Optimizer~\cite{gurobi} to solve the MILP.
The compiler's output for each switch
is a set of switch-level instructions
in a low-level language called \nohyphens{NetASM}~\cite{netasm}, which
comes with a software switch capable of executing
those instructions.
NetASM is an assembly language for programmable data planes
designed to serve as the ``narrow waist'' between high-level
languages, such as SNAP\xspace and NetCore~\cite{NetCore},
and programmable switching architectures such as RMT~\cite{RMT}, FPGAs,
network processors and Open vSwitch.
As described in \cref{sec:rulegen}, each switch processes the packet
by its customized per-switch xFDD\xspace, and then forwards it based on
the fields of the SNAP\xspace-header using a match-action table.
To translate the switch's xFDD\xspace to \nohyphens{NetASM} instructions, we
traverse the xFDD\xspace and generate a \emph{branch} instruction for each
test node, which jumps to the instruction of either the true or false branch
based on the test's result.
Moreover, we generate instructions to create two tables for each
state variable, one for the indices and one for the values.
In the case of a state test in the xFDD\xspace, we first retrieve the value corresponding
to the index that matches the packet, and then perform the branch.
For xFDD\xspace leaf nodes, we generate \emph{store} instructions that modify
the packet fields and state tables accordingly.
Finally, we use NetASM support for atomic execution of multiple instructions
to guarantee that operations on state tables happen atomically.
While NetASM was useful for testing our compiler, any programmable device
that supports match-action tables,
branch instructions, and stateful operations
can be a SNAP\xspace target.
The prioritized rules in match-action tables, for instance, are effectively branch instructions. Thus,
one can use multiple match-action tables to implement an xFDD\xspace in the data plane, generating
a separate rule for each path in the xFDD\xspace.
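Sketched in Python (a simplification: real rule generation must also handle priorities across tables and state tests):

```python
def to_rules(d, match=()):
    """Flatten an xFDD into prioritized (match, action) rules, one per
    root-to-leaf path; earlier rules have higher priority."""
    if not isinstance(d, tuple) or len(d) != 4:
        return [(match, d)]            # leaf: the action to install
    field, value, t, f = d             # interior node: a test
    return (to_rules(t, match + ((field, "==", value),)) +
            to_rules(f, match + ((field, "!=", value),)))

# Two-test diagram: nodes are (field, value, true_branch, false_branch),
# leaves are action strings (names here are hypothetical).
xfdd = ("dstip", "10.0.6.0/24",
        ("srcport", 53, "inspect", "fwd(6)"),
        "fwd-by-dst")
rules = to_rules(xfdd)
```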
Several emerging switch interfaces support stateful
operations~\cite{p4, openstate, pof, openvswitch}. We discuss possible
software and hardware implementations for SNAP\xspace stateful
operations in~\cref{sec:discussion}.
\section{Introduction}\label{sec:intro}
The first generation of programming languages for software-defined networks (SDNs)~\cite{nox,frenetic,maple,pyretic,Kinetic} was built on top of OpenFlow 1.0, which offered simple match-action processing of packets. As a result, these systems were partitioned into (1) a stateless packet-processing part that could be analyzed statically, compiled, and installed on OpenFlow switches, and (2) a general stateful component that ran on the controller.
This ``two-tiered'' programming model can support any network functionality by running the stateful portions of the program on the controller and modifying the stateless packet-processing rules accordingly. However, simple stateful programs, such as detecting SYN floods or DNS amplification attacks, cannot
be implemented \emph{efficiently} because packets must go back-and-forth to the controller, incurring significant delay. Thus, in practice, stateful controller programs are limited to those that do not require \emph{per-packet} stateful processing.
Today, however, SDN technology has advanced considerably: there is a raft of new proposals for switch interfaces that \emph{expose persistent state on the data plane}, including those in P4~\cite{p4}, OpenState~\cite{openstate}, POF~\cite{pof}, \cam{Domino~\cite{domino}}, and Open vSwitch~\cite{openvswitch}.
Stateful programmable data planes enable us to \emph{offload} programs that require \emph{per-packet} stateful processing
onto switches, subsuming a variety of functionality normally relegated to middleboxes.
However, the mere existence of these stateful mechanisms does not make networks of these devices
easy to program. In fact, programming distributed collections of stateful devices is typically one of the
most difficult kinds of programming problems. We need new languages and abstractions to help us manage the
complexity and optimize resource utilization effectively.
For these reasons, we have developed SNAP\xspace, a new language that allows programmers to mix
primitive stateful operations with pure packet processing. However, rather than ask programmers
to program a large, distributed collection of independent, stateful devices manually, we provide
the abstraction that the network is \emph{one} big switch (OBS).
Programmers can allocate persistent arrays on that OBS, and do not have to
worry about where or how such arrays are stored in the physical network.
Such arrays can be indexed by fields in incoming packets and modified over time as network conditions change. Moreover, if multiple arrays must be updated simultaneously, we provide a form of \emph{network transaction} to ensure such updates occur atomically. As a result,
it is easy to write SNAP\xspace programs that learn about the network environment and record its state,
store per-flow information or statistics, or implement a variety of stateful mechanisms.
While it simplifies programming,
the OBS model, together with the stateful primitives,
generates implementation challenges.
In particular, multiple flows may depend upon the same \cam{state}.
To process these flows correctly and efficiently, the compiler must simultaneously determine which flows
depend upon which components, how to route those flows, and where to place the
components.
Hence, to map OBS programs to concrete topologies, the SNAP\xspace compiler discovers read-write dependencies between statements. It then translates the program into an xFDD\xspace, a variant of forwarding decision diagrams (FDDs)~\cite{FastNetKATCompiler} extended to incorporate stateful operations. Next, the compiler generates a system of integer-linear equations that jointly optimizes array placement and traffic routing. Finally, the compiler generates the switch-level configurations from the xFDD\xspace and the optimization results. We assume that the switches chosen for array placement
support persistent programmable state; other switches
can still play a role in routing flows efficiently through the state variables.
Our main contributions are:
\begin{itemize}[leftmargin=*]
\item A stateful and compositional SDN programming language with persistent
global arrays, a one-big-switch programming model, and network transactions. (See \cref{sec:example}
for an overview and \cref{sec:language} for more technical details.)
\item Algorithms for compiling SNAP\xspace programs into low-level switch mechanisms (\cref{sec:compilation}): (i)
an algorithm for compiling SNAP\xspace programs into an intermediate representation that detects
program errors, such as race conditions introduced by parallel access to stateful components, using
our extended forwarding decision diagrams (xFDD\xspace) and (ii)
an algorithm to generate
a mixed integer-linear program, based on the xFDD\xspace,
which jointly decides array placement and routing while minimizing network congestion
and satisfying the constraints necessary for network transactions.
\item An implementation and evaluation of our language and compiler
using about 20 applications.
(\cref{sec:implementation}, \cref{sec:evaluation}).
\end{itemize}
We discuss various data-plane implementations for SNAP\xspace, how \cam{SNAP\xspace} relates to middleboxes, and possible extensions in \cref{sec:discussion}, discuss related work in
\cref{sec:related}, and conclude in \cref{sec:conclusion}.
\section{SNAP\xspace}\label{sec:language}
SNAP\xspace is a high-level language with two key features: programs
are \emph{stateful} and are written in terms of an abstract network
topology comprising a \emph{one-big-switch} (OBS). It has an algebraic
structure patterned on the NetCore/NetKAT family of
languages~\cite{NetCore,netkat},
with each program comprising one or more \emph{predicates} and
\emph{policies}.
SNAP\xspace's syntax is in Figure~\ref{fig:syntax}. Its semantics is
defined through an evaluation function ``$\mathsf{eval}{}$.'' $\mathsf{eval}{}$ determines,
in mathematical notation, how an input packet should be processed
by a SNAP\xspace program. Note that this is part of the \emph{specification} of the language,
\emph{not} the implementation. Any implementation of SNAP\xspace, including ours,
should ensure that packets are processed as defined by the
$\mathsf{eval}{}$ function: when we talk about
``running'' a program on a packet, we mean calling $\mathsf{eval}{}$ on
that program and packet.
We discuss $\mathsf{eval}{}$'s most interesting cases here; see Appendix~\ref{app:semantics} for a full definition.
$\mathsf{eval}{}$ takes the SNAP\xspace term of interest,
a packet, and a starting
state and yields a set of packets and an output state.
To properly define the semantics of multiple updates
to state when programs are composed, we need to know
the reads and writes to state variables performed by each program while
evaluating the packet. Thus, $\mathsf{eval}{}$ also returns a \emph{log}
containing this information.
It adds ``$R\,s$'' to the log whenever a read from
state variable $s$ occurs,
and ``$W\,s$'' on writes.
Note that these logs are part of our formalism, but not our
implementation.
We express the program state as a dictionary that maps state
variables to their contents. The contents of each state variable is
itself a mapping from values to values.
Values are defined as
packet-related fields (IP address, TCP ports, MAC addresses,
DNS domains) along with integers, booleans and
vectors of such values.
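As a loose illustration of this semantics (not the formal definition in the appendix), the state-test and state-update cases of $\mathsf{eval}$ might look like the following, with the state as a dictionary from variable names to index-to-value mappings:

```python
def ev_expr(e, pkt):
    """An expression is a field (looked up in the packet) or a literal.
    For simplicity, anything not found in the packet is treated as a literal."""
    return pkt.get(e, e) if isinstance(e, str) else e

def eval_state_test(s, e1, e2, pkt, state, log):
    """s[e1] = e2: pass or drop the packet; record the read in the log."""
    log = log + [("R", s)]
    passed = state.get(s, {}).get(ev_expr(e1, pkt)) == ev_expr(e2, pkt)
    return ([pkt] if passed else []), state, log

def eval_state_update(s, e1, e2, pkt, state, log):
    """s[e1] <- e2: pass the packet through; record the write in the log."""
    log = log + [("W", s)]
    table = dict(state.get(s, {}))
    table[ev_expr(e1, pkt)] = ev_expr(e2, pkt)
    return [pkt], {**state, s: table}, log
```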
\begin{figure}
\scriptsize
\mdfsetup{
innertopmargin = -1mm,
innerleftmargin = 1mm,
skipbelow = 2mm,
}
\begin{mdframed}
\[\begin{array}{rcll}
e \in \mathsf{Expr} & ::= & v \, | \, f \, | \, \overset{\rightharpoonup} e & \\
x, y \in \mathsf{Pred} & ::= & id & \text{Identity} \\
& | & drop & \text{Drop} \\
& | & f = v & \text{Test} \\
& | & \neg x & \text{Negation} \\
& | & x | y & \text{Disjunction} \\
& | & y \& x & \text{Conjunction} \\
& | & \highlight{s[e] = e} & \textbf{State Test} \\
p, q \in \mathsf{Pol} & ::= & x & \text{Filter} \\
& | & f \leftarrow v & \text{Modification} \\
& | & p + q & \text{Parallel comp.} \\
& | & p ; q & \text{Sequential comp.} \\
& | & \highlight{s[e] \gets e} & \textbf{State Modification} \\
& | & \highlight{s[e]\codeb{++}} & \textbf{Increment value} \\
& | & \highlight{s[e]\text{\codeb{-{}-}}} & \textbf{Decrement value} \\
& | & \highlight{\IfElse{a}{p}{q}} & \textbf{Conditional} \\
& | & \highlight{\mathsf{atomic}(p)} & \textbf{Atomic blocks}
\end{array}\]
\end{mdframed}
\caption{SNAP\xspace's syntax. \highlighttext{Highlighted} items are not in
NetCore.}
\label{fig:syntax}
\end{figure}
\snaptitle{Predicates.}
Predicates have a constrained semantics: they never update the
state (but may read from it), and either return the empty
set or the singleton set containing the input packet. That is,
they either pass or drop the input packet.
$id$ passes the packet and
$drop$ drops it.
The test $f=v$ passes a packet $pkt$ if the field
$f$ of $pkt$ is $v$.
These predicates yield empty logs.
The novel predicate in SNAP\xspace is the \emph{state test}, written $s[e_1]
= e_2$ and read ``state variable (array) $s$ at index $e_1$ equals $e_2$''.
Here $e_1$ and $e_2$ are \emph{expressions}, where an expression is
either a value $v$ (like an IP address or TCP port), a field $f$,
or a vector of them $\overset{\rightharpoonup} e$.
For $s[e_1] = e_2$, function $\mathsf{eval}$ evaluates $e_1$ and $e_2$ on
the input packet
to yield two values
$v_1$ and $v_2$.
The packet can pass if state variable $s$
indexed at $v_1$ is equal to $v_2$,
and is dropped otherwise.
The returned log will include $R\,s$,
to record that the predicate read from the state variable $s$.
We evaluate negation $\neg x$ by running $\mathsf{eval}{}$ on $x$
and then
complementing the result,
propagating whatever log $x$ produces.
$x | y$ (disjunction) unions the
results of running $x$ and $y$ individually, doing the reads
of both $x$ and $y$.
$x\&y$ (conjunction) intersects the results of
running $x$ and $y$ while doing the reads of $x$ and then $y$.
\snaptitle{Policies.}
Policies can modify packets and the state. Every predicate is a
policy---it simply makes no modifications.
Field modification $f \leftarrow v$
takes an input packet $pkt$ and yields a new packet,
$pkt'$, such that $pkt'.f = v$ but otherwise $pkt'$ is the same as
$pkt$. State update $s[e_1] \gets e_2$ passes the input packet through while (i)
updating the state so that $s$ at $\mathsf{eval}(e_1)$ is
set to $\mathsf{eval}(e_2)$, and (ii) adding $W\,s$ to the log. The $s[e]\codeb{++}$ (resp. $\text{\codeb{-{}-}}$) operators increment (decrement) the value of $s[e]$
and add $W\,s$ to the log.
Parallel composition $p + q$ runs $p$ and $q$ in parallel and tries to
merge the results. If the logs indicate a state read/write or
write/write conflict for $p$ and $q$ then there is
no consistent semantics we can provide, and we leave the semantics
undefined.
Take for example $(s[0] \gets 1) + (s'[0] \gets 2)$.
There is no conflict if $s \neq s'$.
However,
the state updates conflict if $s = s'$.
There is no good choice here, so we leave
the semantics undefined and raise \emph{compile error} in the implementation.
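Given the logs, the conflict check itself is straightforward; a sketch:

```python
def conflicting(log_p, log_q):
    """p + q is rejected if one side writes a state variable that the
    other side reads or writes (read/write or write/write conflict)."""
    reads  = lambda log: {s for (op, s) in log if op == "R"}
    writes = lambda log: {s for (op, s) in log if op == "W"}
    wp, wq = writes(log_p), writes(log_q)
    return bool((wp & wq) | (wp & reads(log_q)) | (wq & reads(log_p)))
```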
Sequential composition $p;q$ runs $p$ and then runs $q$ on each packet
that $p$ returned, merging the final results. We must
ensure the runs of $q$ are pairwise consistent, or else we will have a
read/write or write/write conflict.
For example, let $p$ be $(f \leftarrow 1 + f \leftarrow 2)$,
and $pkt[f \mapsto v]$ denote ``update $pkt$'s $f$ field to $v$''.
Given a packet $pkt$, the policy $p$
produces two packets: $pkt_1 = pkt[f \mapsto 1]$ and $pkt_2 = pkt[f
\mapsto 2]$.
Let $q$ be $s[0] \gets f$;
then running $p;q$ fails because
running $q$ on $pkt_1$ and $pkt_2$ updates $s[0]$ differently.
However, $p;q$ runs fine for $q = g \leftarrow 3$.
We have an explicit conditional ``$\IfElse{a}{p}{q}$,'' which indicates that \emph{either} $p$ or $q$ is executed. Hence, both $p$ and $q$ can perform reads and writes to the same state.
We have a notation for \emph{atomic blocks}, written
$\mathsf{atomic}(p)$. As described in \cref{sec:example}, there is
a risk of inconsistency between state variables residing on
different switches on a real network
when many packets are in flight concurrently.
When compiling $\mathsf{atomic}(p)$, our compiler ensures that
all the state in $p$ is updated atomically (\cref{sec:compilation}).
\section{Related Work}\label{sec:related}
\snaptitle{Stateful languages.}
Stateful NetKAT~\cite{StatefulNetKAT}, developed concurrently with SNAP\xspace,
is a stateful language
for ``event-driven'' network programming, which guarantees
consistent update when transitioning between configurations in response to events.
SNAP\xspace's source language is richer and exponentially more
compact than stateful NetKAT's, as it
contains \emph{multiple arrays} (as opposed to one)
that can be indexed and updated by the contents of \emph{packet headers} (as opposed to
constant integers only).
Moreover, they place multiple copies of state at the edge, proactively generate rules for all configurations, and optimize
for rule space,
while we distribute state
and optimize for congestion.
Kinetic~\cite{Kinetic} provides a per-flow state machine abstraction,
and NetEgg~\cite{NetEgg} synthesizes stateful programs from user's examples.
However, they both keep the state at the controller.
\snaptitle{Compositional languages.}
NetCore \cite{NetCore}, and other similar languages \cite{pyretic, frenetic, netkat},
have primitives for tests and modifications on packet fields as well as composition operators
to combine programs. SNAP\xspace builds on these languages by adding primitives
for stateful programming (\cref{sec:language}).
To capture the joint intent of two policies, sometimes the programmer
needs to decompose them into their constituent pieces, and then reassemble them
using \codeb{;} and \codeb{+}.
PGA \cite{pga} allows programmers to specify access control and service chain policies
using graphs as the basic building block, and
tackles this challenge by defining a new type of composition.
However, PGA does not have linguistic primitives for stateful programming,
such as those that read and write the contents of global arrays.
Thus, we view SNAP\xspace and PGA as complementary research
projects, with each treating different aspects of the language design
space.
\snaptitle{Stateful switch-level mechanisms.}
FAST~\cite{FAST} and OpenState~\cite{openstate}
propose flow-level state machines as a primitive for a \emph{single}
switch. SNAP\xspace offers a network-wide OBS programming model, with
a compiler to distribute the programs across the network. Thus,
although SNAP\xspace is exponentially more compact than a state machine
in cases where state is indexed by contents of packet header fields, both FAST
and OpenState can be used as a target for a subset of SNAP\xspace programs.
\snaptitle{Optimizing placement and routing.}
Several projects have
explored optimizing placement of middleboxes and/or routing traffic through
them.
These projects and SNAP\xspace share the mathematical problem of placement
and routing on a graph.
Merlin programs specify
service chains as well as optimization objectives \cite{Merlin},
and the compiler uses an MILP to
choose paths for traffic with respect to the specification.
However, it does not decide the placement of service boxes itself.
Rather, it chooses the paths to pass through the existing instances of
the services in the physical network.
Stratos \cite{Stratos} explores middlebox placement and distributing
flows amongst them to minimize inter-rack traffic, and
Slick \cite{Slick} breaks middleboxes into
fine-grained elements and distributes them across the network while
minimizing congestion.
However, both use a separate algorithm for placement.
In Stratos, the placement results are used in an ILP that decides the distribution of flows.
Slick uses a virtual topology on the placed elements with heuristic link weights,
and finds shortest paths between traffic endpoints.
\section{Introduction}
Many contemporary reviews of elementary particle physics start by
celebrating (or lamenting!) the success of the Standard Model.
Indeed, with some nineteen%
\footnote{I assume the neutrinos are massless, but count the ``vacuum
angle'' of QCD.}
parameters the SU(3)$\times$SU(2)$\times$U(1) gauge theory explains an
enormous array of experiments.
Even a terse compendium\cite{PDG94} of the experiments is more than big
enough to fill a phone book.
A glance at Table~\ref{table:sm} shows, however, that roughly half of
the parameters are not so well determined.
To test the Standard Model stringently, and thus to gain an inkling of
what lies beyond, we must learn the values of these parameters more
precisely.
\begin{table}
\caption[table:sm]{Parameters of the standard model and lattice
calculations that will help determine them.
Numerical values taken from the 1994 Review of Particle
Properties,\cite{PDG94} except $m_t$ (\refcite{CDF94}), $\sin\delta$,
and $\theta_{\rm QCD}$. The strong coupling $\alpha_S$ refers to the
${\overline{\rm MS}}$ scheme at $M_Z$.
Adapted from \refcite{Kro93}.}\label{table:sm}
\begin{center} \begin{tabular}{c@{\hspace{2.0em}}c@{\hspace{2.0em}}c}
\hline \hline
\hspace{1em}parameter & value or range & related lattice calculations \\
\hline
\multicolumn{3}{l}{\em gauge couplings}\\
$\alpha_{\rm em}$ & $1/137.036$ & \\
$10^5G_F$ & 1.166 GeV$^{-2}$ & \\
$\alpha_S$ & $0.116\pm0.005$ &
$\Delta m_{\mbox{\scriptsize 1P--1S}}$; scaling \\
\multicolumn{3}{l}{\em electroweak masses}\\
$m_Z$ & 91.19 GeV & \\
$m_H$ & $>58$ GeV & \\
\multicolumn{3}{l}{\em lepton masses}\\
$m_e$ & 0.51100 MeV & \\
$m_\mu$ & 105.66 MeV & \\
$m_\tau$ & 1777 MeV & \\
\multicolumn{3}{l}{\em quark masses}\\
$m_u$ & 2--8 MeV & $m_\pi^2$, $m_K^2$ \\
$m_d$ & 5--15 MeV & $m_\pi^2$, $m_K^2$ \\
$m_s$ & 100--300 MeV & $m_K^2$ \\
$m_c$ & 1.0--1.6 GeV & $m_{J/\psi}$ \\
$m_b$ & 4.1--4.5 GeV & $m_{\Upsilon}$ \\
$m_t$ & $174\pm10^{+13}_{-12}$ GeV & \\
\multicolumn{3}{l}{\em CKM matrix}\\
$s_{12}$ & 0.218--0.224 & $K\to \pi e\nu$ \\
$s_{23}$ & 0.032--0.048 & $B\to D^* l\nu$ \\
$s_{13}$ & 0.002--0.005 & $B\to \pi l\nu$ \\
$\sin \delta$ & $\neq 0$ & $B_K$, $B_B$, $B_{B_s}$ \\
\multicolumn{3}{l}{\em QCD vacuum angle}\\
$\theta_{\rm QCD}$ & $<10^{-9}$ & $d_n$ \\
\hline \hline
\end{tabular} \end{center}\end{table}
Except for the mass of the Higgs boson (or any other undiscovered
remnant of electroweak symmetry breaking), the poorly known parameters
all involve quarks.
Other than top,\cite{CDF94} which decays too quickly for confinement
to play a role, the masses of the quarks are a bit better than wild
guesses.
The information on the Cabibbo-Kobayashi-Maskawa (CKM) quark-mixing
matrix is spotty, especially when one relaxes the assumption of
three-generation unitarity, as shown in Table~\ref{table:ckm}.
\begin{table}
\caption[table:ckm]{Ranges for CKM matrix elements $|V_{qr}|$ assuming
unitarity but {\em not\/} three generations.
Numerical values taken from the 1994 Review of Particle Properties.
In three generations $|V_{ud}|=s_{12}$, $|V_{cb}|=s_{23}$, and
$|V_{ub}|=s_{13}$, to excellent approximation.}\label{table:ckm}
\begin{center} \begin{tabular}{c@{\hspace{2.0em}}c@{\hspace{2.0em}}c}
\hline \hline
parameter & value or range & related lattice calculations \\
\hline
$|V_{ud}|$ & 0.974 & \\
$|V_{us}|$ & 0.218--0.224 & $K\to \pi e\nu$ \\
$|V_{ub}|$ & 0.002--0.005 & $B\to \pi l\nu$ \\
$|V_{cd}|$ & 0.180--0.228 & $D\to \pi l\nu$ \\
$|V_{cs}|$ & 0.800--0.975 & $D\to K l\nu$ \\
$|V_{cb}|$ & 0.032--0.048 & $B\to D^* l\nu$ \\
$|V_{td}|$ & 0.0--0.13 & $f_B^2B_B$; $B_K$ \\
$|V_{ts}|$ & 0.0--0.56 & $f_{B_s}^2B_{B_s}$ \\
$|V_{tb}|$ & 0.0--0.9995 & \\
\hline \hline
\end{tabular} \end{center}\end{table}
They are poorly determined simply because experiments measure
properties not of quarks, but of the hadrons inside which they
are confined.
Of course, everyone knows what to do: calculate with QCD, the part
of the Standard Model that describes the strong interactions.
But then, the strong coupling is known only at the 5\% level;
not bad, but nothing like the fine structure or Fermi constants.
Moreover, the binding of quarks into hadrons is nonperturbative---the
calculations cannot be done on the back of an envelope.
The most systematic technique for understanding nonperturbative QCD is
lattice gauge theory.
The lattice provides quantum field theory with a consistent and
mathematically well-defined ultraviolet regulator.
At fixed lattice spacing, the quantities of interest are straightforward
(combinations of) functional integrals.
These integrals can be approximated by a variety of techniques borrowed
from statistical mechanics.
Especially promising is a numerical technique, the Monte Carlo method
with importance sampling, which has become so pre-eminent that the
young and uninitiated probably haven't heard of any other.
Results from lattice-QCD Monte Carlo calculations have begun to
influence Table~\ref{table:sm}.
The world average for the SU(3) gauge coupling $\alpha_S$
includes results from lattice calculations of the quarkonium
spectrum,\cite{Kha92,Lep94,Kha95} and at the time of this conference an
even more precise result had appeared.\cite{Dav95}
The same calculations are also providing some of the best information
on the charm\cite{Kha94} and bottom\cite{Dav94} masses.
This is an auspicious beginning.
Over the next several years the lattice QCD calculations will mature.
They will help to determine the other unknowns---light quark masses
and the CKM matrix.
The third column of Tables~\ref{table:sm} and~\ref{table:ckm} lists
relevant quantities or processes, and the rest of this talk explains
how the program fits together.
Sect.~\ref{LGT} gives the non-expert some perspective on the conceptual
and numerical strengths and weaknesses of lattice QCD.
Sect.~\ref{QCD} reviews 1) the status of the light hadron spectrum and
the prospects for extracting $m_u$, $m_d$, and $m_s$; and 2) results for
the quarkonium spectrum, which yield $\alpha_S$, $m_b$, and $m_c$.
Sect.~\ref{EW} outlines lattice QCD calculations of electroweak,
hadronic matrix elements that are needed to pin down the unitarity
triangle of the CKM matrix.
There are, of course, many other interesting applications to electroweak
phenomenology; for more comprehensive reviews the reader can consult
some of the papers listed in the bibliography.\cite{Kro93,EWR94}
\section{Rudiments of Lattice Gauge Theory}\label{LGT}
In quantum field theory physical measurements are related to vacuum
expectation values $\langle {\cal O}\rangle$.
Feynman's functional integral representation is
\begin{equation}\label{eq:path-integral}
\langle {\cal O}\rangle = \lim_{L\to\infty} \lim_{a\to0} Z^{-1}
\int\prod_{x,\mu}dA_\mu(x)\prod_{x,a} d\psi_a(x)d\bar{\psi}_a(x)
\,{\cal O}\,e^{-S(A_\mu,\psi,\bar{\psi})}.
\end{equation}
The formula is easier to understand if read from right to left.
$S$ is the action---in our case the action of QCD, so it depends on the
gluons $A_\mu$ and the quarks $\psi$ and anti-quarks $\bar{\psi}$.
${\cal O}$ depends on the physics under investigation; the most useful kinds
of ${\cal O}$'s are given below.
The integration over all components and positions of the basic fields,
with weight $e^{iS}$, would reproduce the familiar Schr\"odinger or
Heisenberg formulations of quantum mechanics.
The more convergent weight $e^{-S}$ provides some benefits and imposes
some restrictions---see below.
$Z$ is the same integral without ${\cal O}$ in it,
so that $\langle1\rangle=1$.
The limits are there to satisfy the mathematicians; without them the
integrals are not well defined.
These ``cutoffs'' also have a physical significance: we do not claim to
understand physics either at distances smaller than $a$ (the ultraviolet
cutoff), or at distances larger than $L$ (the infrared cutoff).
The limit $a\to0$ requires the renormalization group; it must be carried
out holding $L$ and physical, infrared scales fixed.
In particular, the integration variables $A_\mu(x)$, $\psi(x)$, and
$\bar{\psi}(x)$ really represent all degrees of freedom in a block of
size $a^4$.
The limit ``$a\to0$'' can be obtained not only literally, but also by
improving the action of the blocked fields.
These observations apply to any cutoff scheme for quantum field theory.
A nice introduction to the renormalization-group aspects is a
summer-school lecture by Lepage.\cite{Lep90}
In lattice gauge theory $a$ is nothing but the spacing between lattice
sites.
If there are $N$ sites on a side, $L=Na$.
For given $N$ one can compute the integrals numerically.
With the $10^7$--$10^{10}$-dimensional integrals that arise, the only
viable technique is a statistical one:
Monte Carlo with importance sampling.
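The flavor of eq.~(\ref{eq:path-integral}) can be captured in a toy: a single-site integral sampled with the Metropolis algorithm. The quartic action, step size, and sweep count below are arbitrary choices for illustration, not taken from any lattice code.

```python
import math
import random

random.seed(1)

def S(phi):
    # toy single-site "action" standing in for S(A, psi, psibar)
    return 0.5 * phi**2 + 0.1 * phi**4

def metropolis(n_sweeps, step=1.0):
    """Estimate <phi^2> with weight e^{-S} by Metropolis sampling."""
    phi, total = 0.0, 0.0
    for _ in range(n_sweeps):
        prop = phi + random.uniform(-step, step)
        # accept with probability min(1, e^{-(S(prop) - S(phi))})
        if random.random() < math.exp(min(0.0, S(phi) - S(prop))):
            phi = prop
        total += phi * phi          # accumulate the observable O = phi^2
    return total / n_sweeps

est = metropolis(200_000)
print(round(est, 2))
```

Importance sampling is what makes the real problem tractable: with $10^7$--$10^{10}$ integration variables, only configurations near the peak of $e^{-S}$ contribute.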
To compute masses one takes the observable ${\cal O}=\Phi(t)\Phi^\dagger(0)$, where
$\Phi(t)$ is an operator at time $t$ with the flavor and
angular-momentum quantum numbers of the state of interest.
One can construct such operators using symmetry alone.
The radial quantum number would require a solution of the theory,
but that's what we're after.
The ``two-point function'' takes the form
\begin{equation}\label{eq:two-point}
\langle \Phi(t)\Phi^\dagger(0) \rangle =
\sum_n \left|\langle0|\Phi|n\rangle\right|^2 e^{-m_nt},
\end{equation}
where the sum is over the radial quantum number.
The exponentials are a happy consequence of the weight $e^{-S}$
in eq.~(\ref{eq:path-integral}); they are advantageous because at long
times $t$ only the lowest-lying state survives.
In a numerical calculation masses are obtained by fitting two-point
functions, once single-exponential behavior is verified.
Since $\Phi$ is largely arbitrary, some artistry enters: if
single-exponential behavior sets in sooner, the statistical quality of
the mass estimate is better.
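As a sketch of how eq.~(\ref{eq:two-point}) is used in practice, the snippet below builds a synthetic two-point function from two states and applies the standard ``effective mass'' diagnostic, which plateaus at the ground-state mass once the excited state has died away. The masses and amplitudes are invented for illustration.

```python
import math

# synthetic two-point function: ground state m0 plus one excited state
m0, m1 = 0.5, 1.2     # made-up masses in lattice units
A0, A1 = 1.0, 0.7     # made-up |<0|Phi|n>|^2 amplitudes

def C(t):
    return A0 * math.exp(-m0 * t) + A1 * math.exp(-m1 * t)

# "effective mass": log ratio of neighboring time slices
for t in range(0, 16, 5):
    m_eff = math.log(C(t) / C(t + 1))
    print(t, round(m_eff, 4))
# m_eff approaches m0 = 0.5 as the excited state dies away
```

A better interpolating operator corresponds to a smaller $A_1/A_0$, so the plateau sets in at smaller $t$, where the statistical errors of a real calculation are smaller.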
To compute a matrix element of part of the electroweak Hamiltonian,
${\cal H}$, one takes the observable ${\cal O}=\Phi_\pi(t_\pi){\cal H}(t_h)\Phi_B^\dagger(0)$
for the transition from hadron ``$B$'' to hadron ``$\pi$.''
At long times $t_h$ and $t_\pi-t_h$ the ``three-point function''
\begin{equation}\label{eq:three-point}
\langle \Phi_\pi(t_\pi){\cal H}(t_h)\Phi_B^\dagger(0) \rangle \approx
\langle0|\Phi_\pi|\pi\rangle e^{-m_\pi(t_\pi-t_h)}
\langle\pi|{\cal H}|B\rangle e^{-m_Bt_h} \langle B |\Phi_B^\dagger|0\rangle,
\end{equation}
plus excited-state contributions.
If, as in decays of hadrons to leptons, the hadronic final state is the
vacuum, a two-point function will do:
\begin{equation}\label{eq:H-point}
\langle {\cal H}(t)\Phi_B^\dagger(0) \rangle =\sum_n
\langle0|{\cal H}|B_n\rangle e^{-m_nt}
\langle B_n|\Phi_B^\dagger|0\rangle.
\end{equation}
The desired matrix elements $\langle\pi|{\cal H}|B\rangle$ and
$\langle0|{\cal H}|B\rangle$ can be obtained from eq.~(\ref{eq:three-point})
and~(\ref{eq:H-point}), because the masses and $\Phi$-matrix elements
are obtained from eq.~(\ref{eq:two-point}).
To obtain good results from
eqs.~(\ref{eq:two-point})--(\ref{eq:H-point}), it is crucial to devise
nearly optimal operators in the two-point analysis.
Consumers of numerical results from lattice QCD should be wary of
results, still too prevalent in the literature, that are contaminated by
unwanted states.
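A minimal sketch of how eqs.~(\ref{eq:two-point}) and~(\ref{eq:three-point}) combine: once the masses and $\Phi$-matrix elements are known from the two-point fits, dividing them out of the three-point function isolates $\langle\pi|{\cal H}|B\rangle$. All numbers below are invented, and excited-state contamination is omitted.

```python
import math

# ground-state parameters, as would come from two-point fits (made-up numbers)
m_B, m_pi = 2.0, 0.4
Z_B, Z_pi = 1.3, 0.9      # <0|Phi|state> amplitudes
M = 0.25                  # the target matrix element <pi|H|B>

def three_point(t_pi, t_h):
    # ground-state saturation of eq. (3); no excited states in this toy
    return Z_pi * math.exp(-m_pi * (t_pi - t_h)) * M * Z_B * math.exp(-m_B * t_h)

# divide out the masses and amplitudes known from the two-point analysis
t_pi, t_h = 16, 8
ratio = three_point(t_pi, t_h) / (
    Z_pi * math.exp(-m_pi * (t_pi - t_h)) * Z_B * math.exp(-m_B * t_h))
print(ratio)   # recovers M exactly in this excited-state-free toy
```

In a real calculation the ratio plateaus only for $t_h$ and $t_\pi-t_h$ both large, which is why the warning about unwanted states applies to three-point functions with extra force.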
In the numerical work that mostly concerns us here, the integrals are
computed at a sequence of fixed $a$'s and $L$'s.
One adopts a standard mass, say $m_\rho$, and defines
\begin{equation}\label{eq:units}
a=\frac{(am_\rho)^{\rm lQCD}}{m_\rho^{\rm expt}}
\end{equation}
to obtain the lattice spacing in physical units, and other quantities are
predicted via
\begin{equation}
m_B=\frac{(am_B)^{\rm lQCD}}{a}.
\end{equation}
For continuum-limit, infinite-volume results this is the same as
extrapolating dimensionless ratios, e.g.\
\begin{equation}
\frac{m_B}{m_\rho}= \lim_{L\to\infty} \lim_{a\to0}
\frac{am_B(a,L)}{am_\rho(a,L)}.
\end{equation}
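The unit-setting arithmetic of eq.~(\ref{eq:units}) is simple enough to spell out. The lattice numbers below are invented for illustration; only $m_\rho$ is physical.

```python
# made-up Monte Carlo outputs in lattice units
am_rho_lat = 0.48        # (a m_rho)^{lQCD}
am_B_lat   = 3.25        # (a m_B)^{lQCD}

m_rho_expt = 770.0       # MeV; sets the scale via eq. (4)

a_inv = m_rho_expt / am_rho_lat      # 1/a in MeV
m_B_pred = am_B_lat * a_inv          # predicted mass in MeV

print(round(a_inv), round(m_B_pred, 1))
```

Choosing a different standard mass changes $a$ at finite lattice spacing; only in the continuum, infinite-volume limit do all choices agree.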
There is theoretical guidance for both limits.
According to general properties of massive quantum field theories in
finite boxes,\cite{Lue86} the infinite-volume limit is reached rapidly
for $m_\pi L\gg1$; for masses the finite-volume corrections fall
exponentially.
In non-Abelian gauge theories the renormalization-group $a\to0$ limit is
controlled by asymptotic freedom.
The main strength of lattice QCD is that it {\em is\/} QCD.
It has $1+n_f$ adjustable parameters, corresponding to the gauge
coupling and the quark masses.
From the renormalization group, the adjustment of the gauge
coupling is equivalent to setting the lattice spacing in physical
units, eq.~(\ref{eq:units}).
Once the parameters are determined by $1+n_f$ experimental inputs,
QCD should predict all other strong-interaction phenomena.
There is no need to introduce condensates, as in ITEP sum rules,
or non-renormalizable couplings, as in chiral perturbation theory or
heavy-quark effective theory.
If theory and experiment disagree, it is a signal of new physics.
There are some disadvantages.
A practical, though not conceptual, problem is that large-scale
computational work is more labor-intensive than traditional theoretical
physics.
Careful work is needed to estimate the uncertainties reliably.
The improvements in computer power and algorithms of recent years have
helped practitioners understand their uncertainties better and better.
As the consumers of their results become commensurately sophisticated,
this trend will continue.
After all, in the context of Table~\ref{table:sm}, meaningful error bars
are just as important as the central value.
\subsection{The quenched approximation}\label{sect:quenched}
The biggest disadvantage of most of the numerical results mentioned in
this talk is something called the ``quenched'' approximation.
A meson consists of a valence quark and anti-quark exchanging any number
of gluons.
The gluons can turn into virtual quark loops and back again.
The latter process costs a factor of 100--1000 in computer time, so many
Monte Carlo programs just omit the virtual quark loops.
To accentuate the positive---gluons and valence quarks are treated
better than in non-QCD models of hadrons---the omission is sometimes
called the {\em valence\/} approximation.
To admit the negative, it is less often called the {\em loopless\/}
approximation.
But most often lattice mavens borrow a jargon from condensed-matter
physics and call it the {\em quenched\/} approximation.
If quenched QCD makes any sense, it is as a kind of model or effective
theory.
The parameters of quenched QCD can be tuned to reproduce physics at one
scale.
But the $\beta$ function of quenched and genuine QCD differ, as one sees
in perturbation theory, so one cannot expect agreement at all
scales.
As with any model, only in special cases can one argue that these
effects are correctable or negligible; these cases will be highlighted
in the rest of the talk.
\section{From Hadron Spectra to the QCD Parameters}\label{QCD}
\subsection{Light hadrons and light-quark masses}
Over the past few years a group at IBM has carried out a systematic
calculation of the light-hadron spectrum using the dedicated
supercomputer GF11.\cite{But93}
They have numerical data for 5 different combinations of $(a,L)$.
At $L\approx2.3$~fm there are three lattice spacings varying by a factor
of $\sim2.5$.
At the coarsest lattice spacing ($a^{-1}\approx1.4$~GeV) there are three
volumes, up to almost 2.5~fm.
A variety of quark masses are used, and the physical strange quark is
reached by interpolation, whereas the light (up and down) quarks are
reached through extrapolation.
The mass dependence is assumed linear, as expected from weakly broken
chiral symmetry, and the data substantiate the assumption.
The units (i.e.\ lattice spacing) have been fixed with $m_\rho$ and the
quark masses with $m_\pi$ and $m_K$.
The final results, after extrapolation to the continuum limit and
infinite volume, are shown in Fig.~\ref{fig:spectrum} for two vector
mesons and six baryons.
(The quark-mass interpolation could reach only the combination
$m_\Xi+m_\Sigma-m_N$.)
Despite the quenched approximation the agreement with experiment is
spectacular.
\begin{figure}
\epsfxsize=\textwidth \epsfbox{gf11bw.eps}
\caption[fig:spectrum]{The spectrum and decay constants of the light
hadrons. Error bars are from lattice calculations in the quenched
approximation,\cite{But93,But94} and $\bullet$ denotes
experiment.}\label{fig:spectrum}
\end{figure}
Fig.~\ref{fig:spectrum} also includes results from the same
investigation for decay constants.
The agreement of $f_\pi/m_\rho$ and $f_K/m_\rho$ is not as good as for
the masses.
Because of the quenched approximation, this is not entirely unexpected.
Recall the argument concerning distance scales and effective theories in
sect.~\ref{sect:quenched}.
The binding mechanism responsible for the masses encompasses distances
out to the typical hadronic radius.
The decay constant, on the other hand, is proportional to the
wavefunction at the origin and thus is more sensitive to shorter
distance scales.
One sees better agreement when forming the ratio $f_K/f_\pi$,
which---recall eq.~(\ref{eq:units}) and subsequent discussion---is
like retuning to the shorter distance.
One would like to use the hadron masses to extract the quark masses.
Because of confinement, the quark mass is more like a renormalized
coupling than the classical concept of mass.
Calculations like the one described above yield immediately the
bare mass of the lattice theory.
More useful to others would be the ${\overline{\rm MS}}$ scheme of dimensional
regularization.
A one-loop perturbative calculation can be used to convert from one
scheme to another.\cite{GHS84,Mor93,Kro94,Kha94}
For the light quarks it is convenient to discuss the combinations
$\hat{m}=\mbox{\small $\frac{1}{2}$}(m_d+m_u)$, $\Delta m^2_{du}=m^2_d-m^2_u$, and $m_s$.
Ratios of the light-quark masses are currently best estimated using
chiral perturbation theory.\cite{Gas82}
To set the overall scale requires a dynamical calculation in QCD.
In lattice QCD, $\hat{m}$ and $m_s$ can be extracted from the variation
in the square of the pseudoscalar mass between $m_\pi^2$ and $m_K^2$.
The most difficult quark-mass combination is $\Delta m^2_{du}$, which
causes the isospin-violating part of the splittings in hadron multiplets.
Since chiral perturbation theory provides a formula for
$\Delta m^2_{du}/m_s^2$ with only second-order corrections, it is
likely that the best determination of $\Delta m^2_{du}$ will come from
combining the formula with a lattice QCD result for $m_s$.
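A back-of-the-envelope version of this program, assuming only the leading-order chiral-perturbation-theory relation $M^2_{\rm PS}=B(m_{q_1}+m_{q_2})$, so that $m_\pi^2=2B\hat{m}$ and $m_K^2=B(\hat{m}+m_s)$; the lattice value for $m_s$ used to anchor the scale is the one quoted below in the text.

```python
# leading-order chiral perturbation theory: M_PS^2 = B (m_q1 + m_q2)
m_pi = 138.0   # MeV, isospin-averaged pion mass
m_K  = 496.0   # MeV, isospin-averaged kaon mass

ratio = 2.0 * (m_K / m_pi) ** 2 - 1.0   # m_s / mhat at leading order
print(round(ratio, 1))

# anchoring the scale with a lattice-style value for m_s (illustrative)
m_s_lat = 127.0                          # MeV
print(round(m_s_lat / ratio, 1))         # the corresponding mhat in MeV
```

This is only the leading order; the second-order corrections mentioned above shift the ratio, which is why a genuine lattice determination of the overall scale matters.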
Using the compilation of quenched and unquenched results of
Ukawa,\cite{Ukawa} Mack\-en\-zie\cite{Mac94} has estimated
$\hat{m}_{\overline{\rm MS}}(1~{\rm GeV})\sim2.3$~MeV and
$m_{s,{\overline{\rm MS}}}(1~{\rm GeV})\sim65$~MeV.
The symbol $\sim$ stresses the lack of error bar.
This is outside the ranges of 3.5--11.5~MeV and 100--300~MeV
indicated in Table~\ref{table:sm}.
A more recent analysis of the strange quark finds
$m_{{\overline{\rm MS}},s}(2~{\rm GeV})=127\pm18$~MeV,\cite{All94} in the lower
part of the range in Table~\ref{table:sm}.
None of these results should be taken seriously until a more complete
error analysis exists, but it is intriguing that the conventional
estimates might be too high.
\subsection{Quarkonia, $\alpha_S$, and heavy-quark masses}
Quarkonia are bound states of a heavy quark and heavy anti-quark.
Three families of states exist: charmonium ($\eta_c$, $J/\psi$, etc.),
bottomonium ($\eta_b$, $\Upsilon$, etc.), and the as yet unobserved $B_c$
($b\bar{c}$ and $\bar{b}c$ bound states).
Compared to light hadrons, these systems are simple.
The quarks are nonrelativistic, and potential models give an excellent
empirical description.
But a fundamental treatment of these systems requires nonperturbative
QCD, i.e.\ lattice QCD.%
\footnote{The utility of quarkonia for testing the methodology of
lattice gauge theory and the theory of QCD has been stressed over and
over by Peter Lepage.}
Potential models can be exploited, however, to estimate lattice
artifacts, and in the quenched approximation they can be used to make
corrections.
Many states have been observed in the lab, providing cross-checks of
the methodology of uncertainty estimation.
Once the checks are satisfactory, one can use the spectra to determine
$\alpha_S$, $m_c$, and $m_b$.
One can also have some confidence in further applications, such as the
phenomenology of $D$ and $B$ mesons discussed in sect.~\ref{EW}.
For charm, and especially for bottom, the quark mass is close to the
ultraviolet cutoff, $1/a$ or $\pi/a$, of present-day numerical
calculations.
Originally lattice gauge theory was formulated assuming $m_qa\ll1$,
so quarks with $m_qa\sim1$ require some reassessment.
There are four ways to react.
The patient, stolid way is to wait ten years, until computers are
powerful enough to reach a cutoff of 20~GeV---not very inspiring.
The naive way is to extrapolate from smaller masses, assuming the
$m_qa\ll1$ interpretation of the lattice theory is adequate; history
shows that naive extrapolations can lead to naive and, thus,
unacceptable error estimates.
The insightful way is to formulate an effective theory for heavy quarks
with a lattice cutoff;\cite{Lep87,Eic87} this is the computationally
most efficient approach, and when the effectiveness of the heavy-quark
expansion is {\em a~priori\/} clear, it is the method of choice.
The compulsive way is to examine a wide class of lattice theories without
assuming either $m_qa\ll1$ or $m_q\gg(\Lambda_{\rm QCD},a^{-1})$; by imposing
physical normalization conditions on masses and matrix elements,
it is possible to interpret the correlation functions at {\em any\/}
value of $m_qa$.\cite{KKM9?}
The underlying reason is that the lattice theory is completely
compatible with the heavy-quark limit, so the mass-dependent
interpretation connects smoothly onto both the insightful method for
$m_qa\gg1$ and the standard method for $m_qa\ll1$.
Fig.~\ref{fig:cbarc} shows the charmonium spectrum, on a scale
appropriate to the spin-averaged spectrum.
\begin{figure}
\epsfxsize=\textwidth \epsfbox{ccbar_comp.eps}
\caption[fig:cbarc]{A comparison of the charmonium spectrum as
calculated in lattice QCD, using two different methods.
\refcite{NRQ94}: $\circ$, \refcite{Kha93}: $\Box$.
From \refcite{Kha95}.}\label{fig:cbarc}
\end{figure}
Light quark loops are quenched in these calculations.\cite{NRQ94,Kha93}
The agreement with experimental measurements is impressive, but
Fig.~\ref{fig:cbarc} barely displays the attainable precision.
Fig.~\ref{fig:hyperfine} shows the fine and hyperfine structure of the P
states, now for bottomonium.\cite{NRQ94}
\begin{figure}
\input b_hyper.ltx
\caption[fig:hyperfine]{Lattice QCD results for the spin-dependent
splittings of the lowest-lying P states in bottomonium.
The dashed lines are the experimental values, where available.
Energies are measured relative to the spin average of the $\chi$ states.
From \refcite{NRQ94}.}\label{fig:hyperfine}
\end{figure}
(The $^1$P$_1$ state $h_b$ has not been observed in the lab; the
$h_c$ has been seen.)
The authors of \refcite{NRQ94} also have results with the virtual quark
loops from two light quarks, i.e.\ up and down are no longer quenched,
but strange still is.
The agreement is comparable.\cite{Slo94}
To obtain these results only two parameters have been adjusted.
The standard mass in eq.~(\ref{eq:units}) is $\Delta m_{\mbox{\scriptsize 1P--1S}}$, the spin-averaged
splitting of the 1P and 1S states, which is insensitive to the quark
mass.
By the renormalization group, this is equivalent to eliminating the bare
gauge coupling, or to determining $\Lambda_{\rm QCD}$.
The bare quark mass is adjusted to obtain the spin average of the 1S
states that is measured in the lab.
Otherwise figs.~\ref{fig:cbarc} and~\ref{fig:hyperfine} represent
predictions of quenched QCD.
The success of these calculations permits one to extract the basic
parameters, $\alpha_S$ and $m_q$.
There are four steps:
\begin{enumerate}
\item\label{compute}
Compute the charm- and bottomonium spectra with $n_{f,\rm MC}=0,2$ or 3
flavors of virtual quark loops.
($n_{f,\rm MC}=0$ corresponds to the quenched approximation;
$n_{f,\rm MC}=2$ quenches just the strange quark;
$n_{f,\rm MC}=3$ would be the real world.)
\item\label{convert}
With perturbation theory, convert the bare lattice coupling
$\alpha_0^{(n_{f,\rm MC})}$ to the quark-potential ($V$) or ${\overline{\rm MS}}$
scheme; convert the bare lattice mass $(m_0a)^{(n_{f,\rm MC})}$ to the
pole or ${\overline{\rm MS}}$ scheme.
The natural scale for this conversion is near
(but not quite\cite{Lep93}) $\pi/a$.
\item\label{correct}
Unless $n_{f,\rm MC}=3$, correct for the quenched approximation.
\item\label{amps}
Eliminate $a$ from $\alpha_{\overline{\rm MS}}(\pi/a)$ and $am_{\overline{\rm MS}}(\pi/a)$ using
\begin{equation}
a=\frac{a\Delta m_{\mbox{\scriptsize 1P--1S}}}{460~{\rm MeV}},
\end{equation}
where the numerator is the 1P--1S splitting in lattice units.
\end{enumerate}
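Steps \ref{convert} and \ref{amps} lean on the perturbative renormalization group. A numerical sketch of the two-loop running, with the standard coefficients $b_0=(33-2n_f)/12\pi$ and $b_1=(153-19n_f)/24\pi^2$, looks as follows; the starting coupling and scales are illustrative, not actual lattice output.

```python
import math

def run_alpha(alpha, mu_from, mu_to, n_f, n_steps=10_000):
    """Two-loop running of alpha_s in ln(mu), by simple Euler integration."""
    b0 = (33 - 2 * n_f) / (12 * math.pi)
    b1 = (153 - 19 * n_f) / (24 * math.pi ** 2)
    dt = math.log(mu_to / mu_from) / n_steps     # t = ln mu
    for _ in range(n_steps):
        # mu d(alpha)/d(mu) = -2 (b0 alpha^2 + b1 alpha^3)
        alpha += -2.0 * (b0 * alpha**2 + b1 * alpha**3) * dt
    return alpha

# illustrative numbers only: run a quenched-like (n_f = 0) coupling down
# from a cutoff-like scale pi/a ~ 8 GeV toward a quarkonium scale ~ 2 GeV
alpha_cut = run_alpha(0.15, 8.0, 2.0, n_f=0)
print(round(alpha_cut, 3))   # the coupling grows toward the infrared
```

The matching step then sets the $n_f=3$ coupling equal to this value at the quarkonium scale and runs back up with the physical number of flavors.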
Steps \ref{compute} and \ref{amps} are explained above.
Step~\ref{convert} requires one-loop perturbation theory, suitably
optimized.\cite{Lep93}
Step~\ref{correct} is crucial, because without it the results have no
business in Table~\ref{table:sm}.
Consider first $\alpha_S$, and recall the idea of treating the quenched
approximation as an effective theory.
One sees that the couplings are implicitly matched at some scale $q_Q$
characteristic of quarkonia.
So the matching hypothesis, supported by figs.~\ref{fig:cbarc}
and~\ref{fig:hyperfine}, asserts
\begin{equation}
\alpha_S^{(n_{f,\rm MC})}(q_Q)=\alpha_S^{(3)}(q_Q).
\end{equation}
Potential models tell us that $200<q_c<800$~MeV and $200<q_b<1400$~MeV.
Step 3 yields $\alpha_S^{(n_{f,\rm MC})}(\pi/a)$, so one can use the
two-loop perturbative renormalization group to run from $\pi/a$ to
$q_Q$.
The perturbative running is an overestimate if $q_Q$ is taken at the
lower end of these ranges.%
\footnote{For light hadrons, $q_{\rm light}\sim\Lambda_{\rm QCD}$, so there would
be no perturbative control whatsoever.}
This argument was used for the original lattice determinations of the
strong coupling,\cite{Kha92,Lep94} and its reliability was confirmed in
$n_{f,\rm MC}=2$ calculations.\cite{Aok95}
Currently the most accurate result is from \refcite{Dav95},
\begin{equation}\label{eq:alpha_V}
\alpha_V^{(3)}(8.2~{\rm GeV})=0.196\pm0.003,
\end{equation}
based on $n_{f,\rm MC}=0$ and $n_{f,\rm MC}=2$ results, with an
extrapolation in $n_f$.
The $V$ scheme is preferred for the matching argument, not only for
physical reasons, but also because of its empirical scaling
behavior.\cite{Lep93}
The scaling behavior implies that one can run with the two-loop
renormalization group to high scales and convert to other schemes.
For comparison to other determinations,
eq.~(\ref{eq:alpha_V}) corresponds to
\begin{equation}\label{eq:alpha_MSbar}
\alpha_{\overline{\rm MS}}(M_Z)=0.115\pm0.002.
\end{equation}
The quoted uncertainty is smaller than that reported from any other
method.
The largest contributor is the quenched correction; the second largest
is the perturbative conversion $0\to V\to{\overline{\rm MS}}$.
To determine the quark mass one applies the same
renormalization-group argument.
But, quark masses don't run below threshold!\cite{Dav94}
Hence, for heavy quarks%
\footnote{For light quarks the threshold is deep in brown muck, and all
bets are off.}
$m_Q^{(n_{f,\rm MC})}(m_Q)=m_Q^{(n_{f,\rm MC})}(q_Q)=%
m_Q^{(3)}(q_Q)=m_Q^{(3)}(m_Q)$.
The only corrections are perturbative, from lattice conventions to
${\overline{\rm MS}}$ or physical conventions.
Using the convention of the perturbative ``pole mass''
\begin{equation}\label{eq:mq}
\begin{array}{r@{\,=\,}l@{\;\rm GeV\;}l}
m_c & 1.5\pm0.2 & \mbox{(\refcite{Kha94}, preliminary)},\\[1.0em]
m_b & 5.0\pm0.2 & \mbox{(\refcite{Dav94})}.
\end{array}
\end{equation}
At the nonperturbative level confinement wipes out the pole, so the name
``pole mass'' should not be taken too literally.
The perturbative pole mass is like a running mass, except that it
has a fixed scale built into the definition.
It is useful, because it is thought\cite{Dav95} to correspond to the
mass of phenomenological models that do not probe energies less than
$\Lambda_{\rm QCD}$, such as potential models.
In other contexts---such as the study of Yukawa couplings in unification
scenarios---the ${\overline{\rm MS}}$ convention may be more appropriate.
Eq.~(\ref{eq:mq}b) corresponds to $m_{b,{\overline{\rm MS}}}(m_b)=4.0\pm0.1$~GeV.
\section{From Matrix Elements to the CKM Matrix}\label{EW}
Electroweak decays of flavored hadrons follow the schematic formula
\begin{equation}\label{eq:factors}
\left( \begin{array}{c} {\rm experimental} \\
{\rm measurement} \end{array} \right) =
\left[ \begin{array}{c} {\rm known} \\ {\rm factors} \end{array} \right]
\left( \begin{array}{c} {\rm QCD} \\ {\rm factor} \end{array} \right)
\left( \begin{array}{c} {\rm CKM} \\ {\rm factor} \end{array} \right)
\end{equation}
North American, Japanese, and European taxpayers provide us with lots of
money for the relevant experiments, because they want to know the CKM
factors.
But unless we calculate the inherently nonperturbative QCD factor,
they will be sorely disappointed.
It is convenient to start with the assumption of three-generation
unitarity.
Then
\begin{equation}\label{eq:triangle}
V_{ud}V_{ub}^* + V_{cd}V_{cb}^* + V_{td}V_{tb}^* = 0,
\end{equation}
an equation that prescribes a triangle in the complex plane.
Dividing by $V_{cd}V_{cb}^*$ and writing
$V_{ud}V_{ub}^*/V_{cd}V_{cb}^*=\bar{\rho}+i\bar{\eta}$, one sees that
unitarity predicts
\begin{equation}
\frac{V_{td}V_{tb}^*}{V_{cd}V_{cb}^*} = 1-\bar{\rho}-i\bar{\eta}.
\end{equation}
The notation\cite{BLO94} $(\bar{\rho},\bar{\eta})$ is to distinguish
these parameters from the standard Wolfenstein parameters
$(\rho,\eta)=(\bar{\rho},\bar{\eta})/|V_{ud}|$.
The standard CKM phase in Table~\ref{table:sm} is
$\delta=\tan^{-1}(\eta/\rho)=\tan^{-1}(\bar{\eta}/\bar{\rho})$.
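Given an apex $(\bar{\rho},\bar{\eta})$, the angles of the triangle of eq.~(\ref{eq:triangle}) follow from plane geometry, since its vertices sit at $0$, $1$, and $\bar{\rho}+i\bar{\eta}$. The apex value below is purely illustrative, not a fit.

```python
import math

# illustrative apex of the unitarity triangle, not a fitted value
rho_bar, eta_bar = 0.2, 0.35

gamma = math.atan2(eta_bar, rho_bar)        # angle at the origin; equals delta
beta  = math.atan2(eta_bar, 1.0 - rho_bar)  # angle at the vertex (1, 0)
alpha = math.pi - beta - gamma              # angles of a triangle sum to pi

for name, ang in (("alpha", alpha), ("beta", beta), ("gamma", gamma)):
    print(name, round(math.degrees(ang), 1))
```

Measuring the sides (via the lattice matrix elements below) and the angles (via $CP$ asymmetries) then overconstrains the triangle.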
One would like to determine $(\bar{\rho},\bar{\eta})$ through as many
physical processes as possible.
For example, the strength of $CP$ violation in $b$-flavored hadrons is
related to the angles of the triangle, hence the high interest in $B$
factories.
If all experiments agree, the test verifies the CKM explanation of $CP$
violation; if not, the discrepency would have to be explained by physics
beyond the Standard Model.
Meanwhile, lattice QCD is useful for measuring the sides of the
unitarity triangle.
Depending on the shape of the triangle, the precision may be good enough
to predict the angles before the $B$ factories have been built.
Let us consider the CKM matrix elements in eq.~(\ref{eq:triangle}).
Assuming three-generation unitarity, $1-|V_{tb}|$ is too small to
worry about, and $|V_{cd}|-|V_{us}|$ is also very small.
In principle, however, $|V_{us}|$ and $|V_{cd}|$ can be determined from
lattice QCD and measurements of the semi-leptonic decays $K\to\pi e\nu$
and $D\to\pi l\nu$, respectively.
The technique is the same as for $|V_{ub}|$ from $B\to\pi l\nu$,
discussed below.
It is unlikely that lattice QCD can, or will need to, improve on
$|V_{ud}|=0.9744\pm0.0010$ during the period relevant to this
discussion.
The most poorly known elements of eq.~(\ref{eq:triangle}) are
$|V_{cb}|$, $|V_{ub}|$, and $|V_{td}|$.
In principle, the first two can be determined from leptonic decays
$B_q\to\tau\nu$, $q=u,\,c$, but the experimental prospects are bleak.
Semi-leptonic decays are more promising.
Near $q^2_{\rm max}=(m_B-m_{D^*})^2$ the differential decay rate for
$B\to D^*l\nu$ is
\begin{equation}\label{eq:semi-vector}
\frac{d\Gamma}{dq^2}= \left[\frac{G_F^2q^2}{64\pi^3m_B}
\Big((q^2_{\rm max}-q^2)(4m_{D^*}m_B + q^2_{\rm max}-q^2)\Big)^{1/2}
\right] |A_1(q^2)|^2|V_{cb}|^2,
\end{equation}
where $q^2$ is the invariant mass-squared of the lepton system.
One must carry out a nonperturbative QCD calculation to obtain
the form factor $A_1(q^2)$.
By heavy-quark symmetry, however, $A_1(q^2_{\rm max})$ obeys a
normalization condition, up to $1/m_{D^*}^2$ corrections\cite{Luk90}
(estimated to be small) and known radiative corrections.
Other form factors, which are phase-space suppressed near
$q^2_{\rm max}$, are also related by heavy-quark symmetry to $A_1(q^2)$.
Hence, eq.~(\ref{eq:semi-vector}) provides an essentially
model-independent\cite{Neu91} way to determine $|V_{cb}|$.
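As a concreteness check of the kinematics, the bracketed phase-space factor in eq.~(\ref{eq:semi-vector}) can be evaluated numerically. The short sketch below uses assumed meson masses and illustrative values of $A_1$ and $|V_{cb}|$; none of these numbers are inputs of this talk.

```python
import math

# Illustrative constants (GeV); assumed values, not inputs of the text.
G_F = 1.1664e-5    # Fermi constant, GeV^-2
m_B = 5.279        # B meson mass
m_Dstar = 2.010    # D* meson mass
Q2_MAX = (m_B - m_Dstar) ** 2

def dGamma_dq2(q2, A1, Vcb):
    """Differential rate of eq. (semi-vector), in GeV^-1."""
    phase = (Q2_MAX - q2) * (4.0 * m_Dstar * m_B + Q2_MAX - q2)
    if phase <= 0.0:
        return 0.0
    kin = G_F**2 * q2 / (64.0 * math.pi**3 * m_B) * math.sqrt(phase)
    return kin * abs(A1)**2 * abs(Vcb)**2
```

In particular, the rate vanishes at the normalization point $q^2=q^2_{\rm max}$.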
The difficulty with the model-independent analysis is that the decay
rate vanishes at $q^2_{\rm max}$.
To aid experimentalists' extrapolation to that point, several
groups\cite{Ber93,Boo94,Man94}
have used quenched lattice QCD to compute the slope of $A_1$.
A typical analysis is to fit the slope to lattice-QCD numerical data,
and then fit the normalization to CLEO's experimental data, as shown in
Fig.~\ref{fig:simone}.
\begin{figure}
\epsfxsize=\textwidth \epsfbox{simone.eps}
\caption[fig:simone]{The Isgur-Wise function $\xi(\omega)$
(essentially the form factor $A_1$ of the text) from lattice QCD
and CLEO. The kinematic variable
$\omega=v_a\cdot v_b=1-(q^2_{\rm max}-q^2)/2m_Bm_{D^*}$.
From \refcite{Sim94}.}\label{fig:simone}
\end{figure}
For example, Simone of the UKQCD Collaboration finds\cite{Sim94}
\begin{equation}\label{eq:Vcb}
|V_{cb}|=0.034^{+3+2}_{-2-2}\sqrt{\frac{\tau_B}{1.49~\rm ps}}.
\end{equation}
The first error is experimental; the second is from the lattice-QCD
slope.
Unfortunately, it is not clear how to correct for the quenched
approximation, and the associated uncertainty has not been estimated.
Moreover, consistency checks of varying lattice spacing, volume, etc.,
are still in progress.
Nevertheless, the overall consistency with experiment, shown in
Fig.~\ref{fig:simone}, is encouraging.
$|V_{ub}|$ can be obtained from the semi-leptonic decays
$B\to\rho l\nu$ and $B\to\pi l\nu$.
Expanding in $q^2$ near $q^2_{\rm max}=(m_B-m_\pi)^2$, the differential
decay rate for $B\to\pi l\nu$ reads
\begin{equation}\label{eq:semi-pseudoscalar}
\frac{d\Gamma}{dq^2} =
\left[\frac{G_F^2(q^2_{\rm max}-q^2)^{3/2}}{24\pi^3}
\left(\frac{m_\pi}{m_B}\right)^{3/2}\right]
|f_+(q^2_{\rm max})|^2|V_{ub}|^2
\left(1+{\rm O}(q^2_{\rm max}-q^2)\right),
\end{equation}
where $f_+(q^2)$ is the form factor that must be calculated in lattice
QCD.
Now, however, heavy-quark symmetry does not restrict
$f_+(q^2_{\rm max})$, so a calculation is needed to make any progress.
These calculations are underway at Fermilab, and, presumably, many
other places.
The third row of the CKM matrix can be probed via the box diagrams
responsible for neutral meson mixing.
The $CP=+$ admixture of the $K_L$ is parameterized by
$|\varepsilon_K|=2.26\times10^{-3}$.
The Standard Model predicts
\begin{equation}\label{eq:ek}
\begin{array}{l}
|\varepsilon_K|=
\left[\dfrac{\sqrt{2}G_F^2m_W^2}{16\pi^2m_K\Delta m_K}\right]
\mbox{\small$\frac{8}{3}$}m_K^2f_K^2\hat{B}_K
|V_{ud}V_{us}|^2|V_{cb}|^2 \times\\[1.0em] \hspace{6em}
\bar{\eta}\Big( |V_{cb}|^2(1-\bar{\rho}) y_t\eta_2f_2(y_t) +
y_c(\eta_3f_3(y_t)-\eta_1) \Big),
\end{array}
\end{equation}
where $y_q=m_q^2/m_W^2$.
This formula assumes three-generation unitarity and neglects the
deviation of $|V_{cs}|$ and $|V_{tb}|$ from unity.
The $\eta_i$ and $f_i$ multiplying the CKM factors arise from box
diagrams and their QCD corrections.\cite{Ina81}
The nonperturbative QCD factor is
$\mbox{\small$\frac{8}{3}$}m_K^2f_K^2\hat{B}_K$, which is the $K$--$\bar{K}$
transition matrix element of a $\Delta S=2$ operator.
The best result for $B_K$ is\cite{Sha94,Ish93}
\begin{equation}\label{eq:B_K}
\begin{array}{r@{\,\pm\,}l}
B_K({\rm NDR,~2~GeV})=0.616 & 0.020\pm0.014\pm0.009\pm0.004\pm0.002
\\[0.7em] & ({\rm few~\%}) \pm3\%.
\end{array}
\end{equation}
The many error bars are exhibited to show how mature the uncertainty
analysis has become.
The first is statistical and the others are systematic.
The ``few~\%'' are for the quenched approximation.
This estimate comes from repeating some of the numerical computations
for full QCD,\cite{Kil92} though not enough to obtain the
other error bars, and from an analysis of chiral
logarithms.\cite{Sha90}
The latter study is reassuring only for degenerate quarks, so the
calculations are done with both quarks at $\mbox{\small $\frac{1}{2}$} m_s$.
The 3\% uncertainty is an estimate of ${\rm O}(m_s-m_d)$ contributions.
Combining the errors and converting to the renormalization-group
invariant that appears in eq.~(\ref{eq:ek}), one finds\cite{Sha94}
\begin{equation}\label{eq:hatB_K}
\begin{array}{r@{\,=\,}l}
\hat{B}_K & (\alpha_{\overline{\rm MS}}(\mu))^{-6/25}B_K(\rm NDR,~\mu) \\[0.7em]
& 0.825\pm0.027(\,{\rm stat.}) \pm0.023(\,{\rm syst.})
\pm({\rm few~\%}) \pm3\%.
\end{array}
\end{equation}
This result places a high standard on calculations of $\hat{B}_K$,
whether by lattice QCD or any other method.
Would-be competitors must not only reach 10\% uncertainties, they must
do so with an error analysis as thorough and forthright as
\refcite{Sha94}.
Mixing in the $B^0$--$\bar{B}^0$ system is also sensitive to $V_{td}$.
In the Standard Model the mass splitting is given by
\begin{equation}\label{eq:xd}
x_d=\frac{\Delta m_{B_d}}{\Gamma_{B_d}}=
\left[\frac{G_F^2m_t^2\tau_{B_d}}{16\pi^2m_{B_d}}
\eta_Bf_2(y_t) \right]
\mbox{\small$\frac{8}{3}$}m_{B_d}^2f_{B_d}^2\hat{B}^{ }_{B_d}
|V_{td}^*V_{tb}|^2.
\end{equation}
The same formula holds for the $B_s$--$\bar{B}_s$ system, but with the
$d$ quark replaced by $s$
(i.e.\ $B_d\mapsto B_s$, $V_{td}\mapsto V_{ts}$.)
The nonperturbative QCD factor is
$\mbox{\small$\frac{8}{3}$}m_{B_q}^2f_{B_q}^2\hat{B}_{B_q}$,
which is the ${B_q}$--$\bar{B}_q$ transition matrix element of a
$\Delta B=2$ operator.
The calculation of the decay constant $f_B$ has received a great deal of
attention over the last several years,\cite{Som95} but the matrix
element needed here,
$\mbox{\small$\frac{8}{3}$}m_B^2f_B^2\hat{B}_B$,
has been mostly neglected.
(There are some older, exploratory papers.\cite{Ber88})
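To get a feeling for the sensitivity, eq.~(\ref{eq:xd}) can be inverted for $|V_{td}|$. The sketch below uses illustrative inputs ($m_t=175$~GeV, $f_B=175$~MeV, $\hat{B}_B=1$, $\eta_B=0.55$, $\tau_B=1.49$~ps) and the standard box-diagram function; all of these values are assumptions of the illustration, not results quoted in the text.

```python
import math

# Illustrative inputs (GeV unless noted); assumed for this sketch,
# not numbers quoted in the text.
G_F = 1.1664e-5                     # Fermi constant, GeV^-2
m_W, m_t, m_B = 80.4, 175.0, 5.279  # W, top, B_d masses
tau_B = 1.49e-12 / 6.582e-25        # B_d lifetime, converted to GeV^-1
eta_B, f_B, B_B = 0.55, 0.175, 1.0  # QCD correction, decay constant, bag parameter

def f2(y):
    """Standard box-diagram (Inami-Lim) function, normalized as in eq. (xd)."""
    return (0.25 + 2.25 / (1.0 - y) - 1.5 / (1.0 - y)**2
            - 1.5 * y**2 * math.log(y) / (1.0 - y)**3)

def x_d(vtd):
    """Mass splitting over width for B_d mixing, eq. (xd), with |V_tb| = 1."""
    y_t = (m_t / m_W)**2
    bracket = G_F**2 * m_t**2 * tau_B / (16.0 * math.pi**2 * m_B) * eta_B * f2(y_t)
    return bracket * (8.0 / 3.0) * m_B**2 * f_B**2 * B_B * vtd**2

# Inverting x_d = 0.72 for |V_td| gives a value of order 10^-2 with these inputs.
v_td = math.sqrt(0.72 / x_d(1.0))
```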
It is interesting to see how the lattice results influence the unitarity
triangle.
Fig.~\ref{fig:vtd} shows constraints from eqs.~(\ref{eq:ek}),
$|V_{ub}/V_{cb}|$, and $x_d/x_s$, taking for the masses
\begin{equation}
\begin{array}{r@{\,=\,}l}
m_{c,{\overline{\rm MS}}} & 1.3\pm 0.2~{\rm GeV}, \\
m_{t,{\overline{\rm MS}}} & 175\pm 15~{\rm GeV},
\end{array}
\end{equation}
for the hadronic matrix elements
\begin{equation}
\begin{array}{r@{\,=\,}l}
\hat{B}_K & 0.825\pm0.050, \\
|f_{B_d}/f_{B_s}| & 0.90\pm0.05, \\
|B_{B_d}/B_{B_s}| & 1.0\pm0.2,
\end{array}
\end{equation}
for ``experimental''%
\footnote{Nonperturbative QCD is needed to extract these results!}
CKM results
\begin{equation}
\begin{array}{r@{\,=\,}l}
|V_{cb}| & 0.040\pm0.005, \\
|V_{ub}/V_{cb}| & 0.08\pm0.02,
\end{array}
\end{equation}
and for neutral $B$ mixing measurements
\begin{equation}
\begin{array}{r@{\,=\,}l}
x_d & 0.72\pm0.08, \\
x_s & 15\pm5.
\end{array}
\end{equation}
\begin{figure}
\epsfxsize=\textwidth \epsfbox{vtd.eps}
\caption[fig:vtd]{Constraints on $(\bar{\rho},\bar{\eta})$ from
$|\varepsilon_K|$ (solid hyperbolae),
$|V_{ub}/V_{cb}|$ (dashed circles with origin (0,0)),
and $x_d/x_s$ (dash-dotted circles with origin (1,0)), and contemporary
uncertainties.}\label{fig:vtd}
\end{figure}
Other inputs are as in \refcite{BLO94}.
I've made two wild guesses: $|B_{B_d}/B_{B_s}|$ and $x_s$.
But note that I take the uncertainty estimate in $\hat{B}_K$ seriously;
doubling it would not make much difference, in view of the uncertainties
in $m_t$ and $|V_{cb}|$.
Alas, these and the other uncertainties are too large to make
Fig.~\ref{fig:vtd} interesting.
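The geometry behind Fig.~\ref{fig:vtd} is simple to encode. A minimal sketch follows, assuming the Wolfenstein parameter $\lambda\simeq|V_{us}|=0.2205$, taking $m_{B_s}/m_{B_d}\simeq1.017$, and neglecting the $B_d$:$B_s$ lifetime difference (all assumptions of this illustration): it computes the radii of the two circular bands from $|V_{ub}/V_{cb}|$ and $x_d/x_s$.

```python
import math

LAM = 0.2205  # Wolfenstein lambda ~ |V_us|; assumed value

def vub_radius(vub_over_vcb):
    """Radius about (0,0): |V_ub/V_cb| ~ lam*sqrt(rhobar^2 + etabar^2),
    to leading order in lam."""
    return vub_over_vcb / LAM

def xdxs_radius(xd, xs, fBd_over_fBs, BBd_over_BBs, mBs_over_mBd=1.017):
    """Radius about (1,0): the ratio of eq. (xd) for B_d and B_s gives
    |V_td/V_ts|^2 = (x_d/x_s)*(m_Bs f_Bs^2 B_Bs)/(m_Bd f_Bd^2 B_Bd)
    for equal lifetimes, and |V_td/V_ts| ~ lam*sqrt((1-rhobar)^2+etabar^2)."""
    ratio = (xd / xs) * mBs_over_mBd / (fBd_over_fBs**2 * BBd_over_BBs)
    return math.sqrt(ratio) / LAM
```

With the contemporary inputs above, both radii come out of order one, which is why the bands cross the whole $(\bar{\rho},\bar{\eta})$ plane of interest.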
What if the lattice-QCD calculations and the experiments improve?
Consider for the masses
\begin{equation}
\begin{array}{r@{\,=\,}l}
m_{c,{\overline{\rm MS}}} & 1.3\pm 0.1~{\rm GeV}, \\
m_{t,{\overline{\rm MS}}} & 175\pm 5~{\rm GeV},
\end{array}
\end{equation}
for the hadronic matrix elements
\begin{equation}
\begin{array}{r@{\,=\,}l}
\hat{B}_K & 0.825\pm0.027, \\
|f_{B_d}/f_{B_s}| & 0.90\pm0.02, \\
|B_{B_d}/B_{B_s}| & 1.0\pm0.1,
\end{array}
\end{equation}
in particular eliminating almost all the statistical error in
$\hat{B}_K$; for ``experimental'' CKM results
\begin{equation}
\begin{array}{r@{\,=\,}l}
|V_{cb}| & 0.035\pm0.002, \\
|V_{ub}/V_{cb}| & 0.080\pm0.004~~\mbox{``low,''} \\
& 0.091\pm0.004~~\mbox{``high,''} \\
\end{array}
\end{equation}
and for neutral $B$ mixing measurements
\begin{equation}
\begin{array}{r@{\,=\,}l}
x_d & 0.72\pm0.04,\\
x_s & 18\pm2.
\end{array}
\end{equation}
Fig.~\ref{fig:vtd5} shows how this 5--10\% level of precision improves
the limits on $(\bar{\rho},\bar{\eta})$.
\begin{figure}
\epsfxsize=\textwidth \epsfbox{vtd5.eps}
\caption[fig:vtd5]{Constraints on $(\bar{\rho},\bar{\eta})$ from
$|\varepsilon_K|$ (solid hyperbolae),
low $|V_{ub}/V_{cb}|$ (dashed circles with origin (0,0)) or
high $|V_{ub}/V_{cb}|$ (dotted circles with origin (0,0)),
and $x_d/x_s$ (dash-dotted circles with origin (1,0)), and improved
(5--10\%) uncertainties.}\label{fig:vtd5}
\end{figure}
The wildest guess remains $x_s$, so ignore the dash-dotted
curves momentarily.
The region allowed by the hyperbolic band from $\varepsilon_K$ and the
circular band from $|V_{ub}/V_{cb}|$ shrinks if $|V_{ub}|$ is too small.
The tension between these two constraints is partly a consequence of the
low value of $|V_{cb}|$ suggested by eq.~(\ref{eq:Vcb}).
Increasing $|V_{cb}|$ brings the hyperbolic band down more rapidly than
it shrinks the circular band.
If the real-world values of $|V_{cb}|$ and $|V_{ub}/V_{cb}|$ allow a
sizable region, as for the dotted circles in Fig.~\ref{fig:vtd5},
neutral $B$ mixing becomes crucial.
The constraint becomes more restrictive as $x_s$ increases.
Unfortunately, the experimental measurement becomes more difficult as
$x_s$ increases.
If it proves impossible to obtain useful information on $x_s$, one can
return to eq.~(\ref{eq:xd}), and focus on $x_d$ alone.
The lattice-QCD calculations of
$\mbox{\small$\frac{8}{3}$}m_B^2f_B^2\hat{B}_B$ will carry larger
uncertainties, however, than the $B_d:B_s$ ratio.
\section{Conclusions}
This talk has examined several ways in which lattice QCD can aid the
determination of standard-model couplings.
The quenched lattice calculations may be divided into several classes,
according to the maturity of the error analysis and the presumed
reliability of the quenched approximation.
One class consists of the light-hadron and quarkonia spectra
and the $K$--$\bar{K}$ mixing parameter $B_K$.
For them the straightforward uncertainties (statistics, $a$, $L$,
excited states, perturbation theory) seem fairly estimated.
The quenched approximation is another matter.
In quarkonia, one can correct for it with potential models, yielding
determinations of $\alpha_S$ and the charm and bottom masses.
The quenched error in $B_K$ is also thought to be under control,
and---taking the error bars at face value---$B_K$ is no longer the
limiting factor in the $|\varepsilon_K|$ constraint on the unitarity
triangle.
A second class consists of $f_B$, the semi-leptonic form factors of $K$
and $D$ mesons (not discussed in this talk, but see \refcite{Som95}),
and the Isgur-Wise function.
These quantities are essential for direct determinations of the first
two rows of the CKM matrix.
The quenched-approximation calculations are in good shape, but the
corrections to it cannot be simply estimated.
A third class consists of the semi-leptonic decay $B\to\pi l\nu$
and neutral $B$ mixing, for which only exploratory work has appeared.
Nevertheless, all QCD quantities discussed here will follow a
conceptually clear path to ever-more-precise results.
The next ten years or so will almost certainly witness computing and
other technical improvements that will allow for wide-ranging
calculations without the quenched approximation.
By then the most efficient techniques for extracting the most relevant
information will have been perfected.
\section*{Acknowledgements}
The Adriatic glistened in the moonlight as it lapped against the quay.
In the bar of the Hotel Neptun a winsome lounge singer cooed
``\ldots strangers in the night, exchanging glances,\ldots.''
I looked up at the waiter and said, ``Molim pivo,'' when a man strolled
into the bar, clapped me on the back, and cried, ``Ay, you kook, what's
new?''
In a different place and a different time he had rescued my sanity,
if not my life.
I peered into his eyes and replied, ``It's the same old story, always
the same.
And even when it changes, it always ends the same way:
Fermilab is operated by Universities Research Association, Inc.,
under contract DE-AC02-76CH03000 with the U.S. Department of Energy.''
\section*{References}
\section{Introduction}
Targeted sentiment classification is a fine-grained sentiment analysis task, which aims at determining the sentiment polarities (e.g., negative, neutral, or positive) of a sentence over ``opinion targets'' that explicitly appear in the
sentence. For example, given a sentence \textit{``I hated their service, but their food was great''}, the sentiment polarities for the target \textit{``service''} and \textit{``food''} are negative and positive respectively.
A target is usually an entity or an entity aspect.
In recent years, neural network models are designed to automatically learn useful low-dimensional representations from targets and contexts and obtain promising results ~\cite{dong2014adaptive,tang2016effective}.
However, these neural network models are still in their infancy when dealing with the fine-grained targeted sentiment classification task.
Attention mechanism, which has been successfully used in
machine translation \cite{bahdanau2014neural}, is incorporated to enforce the model to pay more attention to context words with closer semantic relations with the target.
Some studies already use attention to generate target-specific sentence representations \cite{wang2016attention,ma2017interactive,chen2017recurrent}
or to transform sentence representations according to target words \cite{li2018transformation}.
However, these studies depend on complex recurrent neural networks (RNNs)
as sequence encoder to compute hidden semantics of texts.
The first problem with previous works is that the modeling of text relies on RNNs.
RNNs, such as LSTM, are very expressive, but they are hard to parallelize and backpropagation through time (BPTT) requires large amounts of memory and computation.
Moreover, essentially every training algorithm for RNNs is truncated BPTT, which affects the model's ability to capture dependencies over longer time scales \cite{werbos1990backpropagation}.
Although LSTM can alleviate the vanishing gradient problem to a certain extent and thus maintain long distance information,
this usually requires a large amount of training data.
Another problem that previous studies ignore is the label unreliability issue,
since \textit{neutral} sentiment is a fuzzy sentimental state and brings difficulty for model learning.
As far as we know, we are the first to raise the label unreliability issue in the targeted sentiment classification task.
This paper proposes an attention-based model to solve the problems above.
Specifically, our model eschews recurrence and employs attention as a competitive alternative to draw the introspective and interactive semantics between target and context words.
To deal with the label unreliability issue, we employ a label smoothing regularization
to encourage the model to be less confident with fuzzy labels.
We also apply the pre-trained BERT \cite{devlin2018bert}
to this task and show that our model enhances the performance of the basic BERT model.
Experimental results on three benchmark datasets show that the proposed model achieves competitive performance and is a lightweight alternative of the best RNN based models.
The main contributions of this work are presented as follows:
\begin{enumerate}
\item We design an attentional encoder network to draw the hidden states and semantic interactions between target and context words.
\item We raise the label unreliability issue and add an effective label smoothing regularization term to the loss function for encouraging the model to be less confident with the training labels.
\item We apply the pre-trained BERT to this task; our model enhances the performance of the basic BERT model and obtains new state-of-the-art results.
\item We evaluate the model sizes of the compared models and show the lightweight of the proposed model.
\end{enumerate}
\section{Related Work}
Research approaches to the targeted sentiment classification task include traditional machine learning methods and neural network methods.
Traditional machine learning methods, including rule-based methods \cite{ding2008holistic} and statistic-based methods \cite{jiang2011target}, mainly focus on extracting a set of features like sentiment lexicons features and bag-of-words features to train a sentiment classifier \cite{rao2009semi}.
The performance of these methods highly depends on the effectiveness of the feature engineering works, which are labor intensive.
In recent years, neural network methods have received more and more attention as they do not need handcrafted features and can encode sentences with low-dimensional word vectors in which rich semantic information is retained.
In order to incorporate target words into a model,
Tang et al. \shortcite{tang2016effective} propose TD-LSTM to extend LSTM by using two single-directional LSTM to model the left context and right context of the target word respectively.
Tang et al. \shortcite{tang2016aspect} design MemNet which consists of a multi-hop attention mechanism with an external memory to capture the importance of each context word concerning the given target. Multiple attention is paid to the memory represented by word embeddings to build higher semantic information.
Wang et al. \shortcite{wang2016attention} propose ATAE-LSTM which concatenates target embeddings with word representations and let targets participate in computing attention weights.
Chen et al. \shortcite{chen2017recurrent} propose RAM which adopts multiple-attention mechanism on the memory built with bidirectional LSTM and nonlinearly combines the attention results with gated recurrent units (GRUs).
Ma et al. \shortcite{ma2017interactive} propose IAN which learns the representations of the target and context with two attention networks interactively.
\section{Proposed Methodology}
Given a context sequence $\mathbf{w^c} = \{w_1^c, w_2^c, ..., w_n^c\}$
and a target sequence $\mathbf{w^t} = \{w_1^t, w_2^t, ..., w_m^t\}$,
where $\mathbf{w^t}$ is a sub-sequence of $\mathbf{w^c}$.
The goal of this model is to predict the sentiment polarity of the
sentence $\mathbf{w^c}$ over the target $\mathbf{w^t}$.
Figure \ref{fig:model} illustrates the overall architecture of the proposed \textbf{A}ttentional \textbf{E}ncoder \textbf{N}etwork (AEN), which mainly consists of an embedding layer, an attentional encoder layer, a target-specific attention layer, and an output layer.
Embedding layer has two types: GloVe embedding and BERT embedding.
Accordingly, the models are named \textbf{AEN-GloVe} and \textbf{AEN-BERT}.
\subsection{Embedding Layer}
\subsubsection{GloVe Embedding}
Let $L \in \mathbb{R}^{d_{emb} \times |V|}$ be the pre-trained GloVe \cite{pennington2014glove} embedding matrix,
where $d_{emb}$ is the dimension of word vectors and $|V|$ is the vocabulary size.
Then we map each word $w^i \in \mathbb{R}^{|V|}$ to its corresponding embedding vector $e_i \in \mathbb{R}^{d_{emb} \times 1}$,
which is a column in the embedding matrix $L$.
\subsubsection{BERT Embedding}
BERT embedding uses the pre-trained BERT to generate word vectors of a sequence.
In order to facilitate the training and fine-tuning of BERT model,
we transform the given context and target to
``[CLS] + context + [SEP]'' and ``[CLS] + target + [SEP]'' respectively.
\begin{figure}
\centering
\includegraphics[scale=0.5]{model.pdf}
\caption{Overall architecture of the proposed AEN.}
\label{fig:model}
\end{figure}
\subsection{Attentional Encoder Layer} \label{Attentional Encoder}
The attentional encoder layer is a parallelizable and interactive alternative to LSTM
and is applied to compute the hidden states of the input embeddings.
This layer consists of two submodules:
the \textbf{M}ulti-\textbf{H}ead \textbf{A}ttention (MHA) and the \textbf{P}oint-wise \textbf{C}onvolution \textbf{T}ransformation (PCT).
\subsubsection{Multi-Head Attention} \label{sec:MHA}
\textbf{M}ulti-\textbf{H}ead \textbf{A}ttention (MHA) is attention that can perform multiple attention functions in parallel.
Different from the Transformer \cite{vaswani2017attention}, we use \textbf{Intra-MHA} for introspective context words modeling
and \textbf{Inter-MHA} for context-perceptive target words modeling, which is more lightweight, and the target is modeled according to a given context.
An attention function maps a key sequence $\mathbf{k} = \{k_1, k_2, ..., k_n\}$ and
a query sequence $\mathbf{q} = \{q_1, q_2, ..., q_m\}$ to an output sequence $\mathbf{o}$:
\begin{align}
Attention(\mathbf{k}, \mathbf{q}) &= softmax(f_{s}(\mathbf{k}, \mathbf{q})) \mathbf{k}
\end{align}
where $f_{s}$ denotes the alignment function which learns the semantic relevance between $q_j$ and $k_i$:
\begin{align}
f_{s}(k_i, q_j) &= tanh([k_i; q_j] \cdot W_{att})
\end{align}
where $W_{att} \in \mathbb{R}^{2d_{hid}}$ are learnable weights.
MHA can learn $n_{head}$ different scores in parallel subspaces and is very powerful for alignments.
The $n_{head}$ outputs are concatenated and projected to the specified hidden dimension $d_{hid}$, namely,
\begin{align}
MHA(\mathbf{k}, \mathbf{q}) &= [\mathbf{o}^1; \mathbf{o}^2; \ldots; \mathbf{o}^{n_{head}}] \cdot W_{mh} \\
\mathbf{o}^h &= Attention^h(\mathbf{k}, \mathbf{q})
\end{align}
where ``$;$'' denotes vector concatenation, $W_{mh} \in \mathbb{R}^{d_{hid} \times d_{hid}}$,
$\mathbf{o}^h = \{o_1^h, o_2^h, ..., o_m^h\}$ is the output of the $h$-th head attention and $h \in [1, n_{head}]$.
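For concreteness, the attention and MHA definitions above can be sketched in plain Python with toy dimensions. One assumption of this sketch: for the concatenation of the $n_{head}$ head outputs to typecheck, the projection matrix is given $n_{head}\cdot d_{hid}$ rows.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(k, q, w_att):
    """Score each pair with tanh([k_i; q_j] . W_att), then return, for each
    query position, the softmax-weighted sum of the keys."""
    out = []
    for qj in q:
        weights = softmax([math.tanh(dot(ki + qj, w_att)) for ki in k])
        out.append([sum(w * ki[d] for w, ki in zip(weights, k))
                    for d in range(len(k[0]))])
    return out

def mha(k, q, heads, w_mh):
    """Run each head's attention, concatenate the outputs, project with W_mh.
    Here w_mh has n_head*d_hid rows so the concatenation typechecks."""
    outs = [attention(k, q, w) for w in heads]
    cat = [sum((o[j] for o in outs), []) for j in range(len(q))]
    return [[sum(row[i] * w_mh[i][d] for i in range(len(row)))
             for d in range(len(w_mh[0]))] for row in cat]
```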
\textbf{Intra-MHA}, or multi-head self-attention,
is a special situation for typical attention mechanism that $\mathbf{q} = \mathbf{k}$.
Given a context embedding $\mathbf{e^c}$, we can get the introspective context representation $\mathbf{c^{intra}}$ by:
\begin{align}
\mathbf{c^{intra}} = MHA(\mathbf{e^c}, \mathbf{e^c})
\end{align}
The learned context representation
$\mathbf{c^{intra}}=\{c_1^{intra}, c_2^{intra}, ..., c_n^{intra}\}$ is aware of long-term dependencies.
\textbf{Inter-MHA} is the generally used form of attention mechanism that $\mathbf{q}$ is different from $\mathbf{k}$.
Given a context embedding $\mathbf{e^c}$ and a target embedding $\mathbf{e^t}$,
we can get the context-perceptive target representation $\mathbf{t^{inter}}$ by:
\begin{align}
\mathbf{t^{inter}} = MHA(\mathbf{e^c}, \mathbf{e^t})
\end{align}
After this interactive procedure,
each given target word $e_j^t$ will have a composed representation selected from context embeddings $\mathbf{e^{c}}$.
Then we get the context-perceptive target words modeling $\mathbf{t^{inter}}=\{t_1^{inter}, t_2^{inter}, ..., t_m^{inter}\}$.
\subsubsection{Point-wise Convolution Transformation} \label{sec:PCT}
A \textbf{P}oint-wise \textbf{C}onvolution \textbf{T}ransformation (PCT)
can transform contextual information gathered by the MHA.
Point-wise means that the kernel sizes are 1 and
the same transformation is applied to every single token belonging to the input.
Formally, given an input sequence $\mathbf{h}$, PCT is defined as:
\begin{align}
PCT(\mathbf{h}) &= \sigma(\mathbf{h} * W_{pc}^1 + b_{pc}^1) * W_{pc}^2 + b_{pc}^2
\end{align}
where $\sigma$ stands for the ELU activation,
$*$ is the convolution operator,
$W_{pc}^1 \in \mathbb{R}^{d_{hid} \times d_{hid}}$ and $W_{pc}^2 \in \mathbb{R}^{d_{hid} \times d_{hid}}$
are the learnable weights of the two convolutional kernels,
$b_{pc}^1 \in \mathbb{R}^{d_{hid}}$ and $b_{pc}^2 \in \mathbb{R}^{d_{hid}}$
are biases of the two convolutional kernels.
Given $\mathbf{c^{intra}}$ and $\mathbf{t^{inter}}$,
PCTs are applied to get the output hidden states of the attentional encoder layer
$\mathbf{h^c}=\{h_1^c, h_2^c, ..., h_n^c\}$
and $\mathbf{h^t}=\{h_1^t, h_2^t, ..., h_m^t\}$
by:
\begin{align}
\mathbf{h^c} &= PCT(\mathbf{c^{intra}}) \\
\mathbf{h^t} &= PCT(\mathbf{t^{inter}})
\end{align}
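Because the kernel sizes are 1, PCT amounts to two shared per-token affine maps with an ELU in between. A minimal plain-Python sketch (toy dimensions, illustrative weights):

```python
import math

def elu(x, alpha=1.0):
    """ELU activation used as sigma in the PCT definition."""
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

def pct(h, w1, b1, w2, b2):
    """Kernel size 1: the same two affine maps are applied to every token
    independently, with an ELU between them."""
    out = []
    for token in h:
        mid = [elu(sum(token[i] * w1[i][j] for i in range(len(token))) + b1[j])
               for j in range(len(b1))]
        out.append([sum(mid[i] * w2[i][j] for i in range(len(mid))) + b2[j]
                    for j in range(len(b2))])
    return out
```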
\subsection{Target-specific Attention Layer}
After we obtain the introspective context representation $\mathbf{h^c}$ and
the context-perceptive target representation $\mathbf{h^t}$,
we employ another MHA to obtain the target-specific context representation $\mathbf{h^{tsc}}=\{h_1^{tsc}, h_2^{tsc}, ..., h_m^{tsc}\}$ by:
\begin{align}
\mathbf{h^{tsc}} = MHA(\mathbf{h^c}, \mathbf{h^t})
\end{align}
The multi-head attention function here also has its independent parameters.
\subsection{Output Layer}
We get the final representations of the previous outputs by average pooling,
concatenate them as the final comprehensive representation $\mathbf{\tilde{o}}$,
and use a fully connected layer to project the concatenated vector into the space of the targeted $C$ classes.
\begin{align}
\mathbf{\tilde{o}} &= [h_{avg}^c; h_{avg}^t; h_{avg}^{tsc}] \\
x &= \tilde{W_o}^T{{\mathbf{\tilde{o}}}}+\tilde{b_o} \\
y &= softmax(x) \\
&= \frac{exp(x)}{\sum_{k=1}^{C} exp(x_k)}
\end{align}
where $y \in \mathbb{R}^{C}$ is the predicted sentiment polarity distribution,
$\tilde{W_o} \in \mathbb{R}^{3d_{hid} \times C}$ and $\tilde{b_o} \in \mathbb{R}^{C}$ are learnable parameters.
\subsection{Regularization and Model Training} \label{sec:LSR}
Since \textit{neutral} sentiment is a very fuzzy sentimental state, training samples which labeled \textit{neutral} are unreliable.
We employ a \textbf{L}abel \textbf{S}moothing \textbf{R}egularization (LSR) term in the loss function, which penalizes low entropy output distributions \cite{szegedy2016rethinking}.
LSR reduces overfitting by preventing the network from assigning the full probability to each training example; it replaces the 0 and 1 targets for a classifier with smoothed values like 0.1 or 0.9.
For a training sample $x$ with the original ground-truth label distribution $q(k|x)$,
we replace $q(k|x)$ with the smoothed distribution
\begin{align}
q'(k|x) = (1-\epsilon) q(k|x) + \epsilon u(k)
\end{align}
where $u(k)$ is the prior distribution over labels,
and $\epsilon$ is the smoothing parameter.
In this paper, we set the prior label distribution to be uniform $u(k) = 1/C$.
LSR is equivalent to the KL divergence between the prior label distribution $u(k)$ and the network's predicted distribution $p_\theta$.
Formally, LSR term is defined as:
\begin{align}
\mathcal{L}_{lsr} = - D_{KL}(u(k) \| p_\theta)
\end{align}
The objective function (loss function) to be optimized is the cross-entropy loss with $\mathcal{L}_{lsr}$ and $\mathcal{L}_2$ regularization, which is defined as:
\begin{align}
\mathcal{L}(\theta) = - \sum_{c=1}^{C} \hat{y}^c \log (y^c) + \mathcal{L}_{lsr} + \lambda \sum_{\theta \in \Theta} {\theta}^2
\end{align}
where $\hat{y} \in \mathbb{R}^C $ is the ground truth represented as a one-hot vector,
$y$ is the predicted sentiment distribution vector given by the output layer,
$\lambda$ is the coefficient for $\mathcal{L}_2$ regularization term, and $\Theta$ is the parameter set.
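The label smoothing step and its effect on the loss can be sketched as follows (illustrative $\epsilon$ and class count, plain Python):

```python
import math

def smoothed_targets(one_hot, eps, num_classes):
    """q'(k|x) = (1 - eps) * q(k|x) + eps * u(k), with uniform u(k) = 1/C."""
    return [(1.0 - eps) * q + eps / num_classes for q in one_hot]

def cross_entropy(target, pred):
    """Cross-entropy of the (smoothed) target distribution against predictions."""
    return -sum(t * math.log(p) for t, p in zip(target, pred))
```

With $\epsilon=0.2$ and $C=3$ the hard target $(0,1,0)$ becomes roughly $(0.07, 0.87, 0.07)$, so the model is no longer pushed toward full confidence on fuzzy \textit{neutral} labels.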
\section{Experiments}
\subsection{Datasets and Experimental Settings}
We conduct experiments on three datasets: SemEval 2014 Task 4 \footnote{The detailed introduction of this task can be found at \url{http://alt.qcri.org/semeval2014/task4}.} \cite{pontiki2014semeval} dataset composed of \emph{Restaurant} reviews and \emph{Laptop} reviews, and ACL 14 \emph{Twitter} dataset gathered by Dong et al. \shortcite{dong2014adaptive}. These datasets are labeled with three sentiment polarities: \emph{positive}, \emph{neutral} and \emph{negative}.
Table \ref{tab:stat} shows the number of training and test instances in each category.
Word embeddings in AEN-GloVe do not get updated in the learning process,
but we fine-tune pre-trained BERT
\footnote{We use uncased BERT-base from \url{https://github.com/google-research/bert}.} in AEN-BERT.
Embedding dimension $d_{emb}$ is 300 for GloVe and is 768 for pre-trained BERT.
Dimension of hidden states $d_{hid}$ is set to 300.
The weights of our model are initialized with Glorot initialization \cite{glorot2010understanding}.
During training, we set label smoothing parameter $\epsilon$ to 0.2 \cite{szegedy2016rethinking}, the coefficient $\lambda$ of $\mathcal{L}_2$ regularization item is $10^{-5}$ and dropout rate is 0.1.
Adam optimizer \cite{kingma2014adam} is applied to update all the parameters.
We adopt the \emph{Accuracy} and \emph{Macro-F1} metrics to evaluate the performance of the model.
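Macro-F1 averages the per-class F1 scores with equal weight, which matters here because the class sizes in Table~\ref{tab:stat} are unbalanced. A minimal sketch (class labels encoded as integers, an assumption of this illustration):

```python
def macro_f1(y_true, y_pred, classes=(0, 1, 2)):
    """Unweighted mean of per-class F1; treats minority classes equally."""
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```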
\begin{table}[tp]
\small
\centering
\begin{threeparttable}
\caption{Statistics of the datasets.}
\begin{tabular}{ccccccc}
\toprule
\multirow{2}{*}{\textbf{Dataset}}&
\multicolumn{2}{c}{\textbf{Positive}}&\multicolumn{2}{c}{\textbf{Neutral}}&\multicolumn{2}{c}{\textbf{Negative}}\cr
\cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7}
&Train&Test&Train&Test&Train&Test \cr
\midrule
Twitter &1561 &173 &3127 &346 &1560 &173 \cr
Restaurant &2164 &728 &637 &196 &807 &196 \cr
Laptop &994 &341 &464 &169 &870 &128 \cr
\bottomrule
\end{tabular}
\label{tab:stat}
\end{threeparttable}
\end{table}
\subsection{Model Comparisons}
In order to comprehensively evaluate and analyze the performance of AEN-GloVe,
we list 7 baseline models and design 4 ablations of AEN-GloVe.
We also design a basic BERT-based model to evaluate the performance of AEN-BERT.
~\\
\textbf{Non-RNN based baselines:}
$\bullet$ \textbf{Feature-based SVM} \cite{kiritchenko2014nrc} is a traditional support vector machine based model with extensive feature engineering.
$\bullet$ \textbf{Rec-NN} \cite{dong2014adaptive} firstly uses rules to transform the dependency tree and put the opinion target at the root, and then
learns the sentence representation toward target via semantic composition using Recursive NNs.
$\bullet$ \textbf{MemNet} \cite{tang2016aspect} uses multi-hops of attention layers on the context word embeddings for sentence representation to explicitly capture the importance of each context word.
~\\
\textbf{RNN based baselines:}
$\bullet$ \textbf{TD-LSTM} \cite{tang2016effective} extends LSTM by using two LSTM networks to model the left context with target and the right context with target respectively. The left and right target-dependent representations are concatenated for predicting the sentiment polarity of the target.
$\bullet$ \textbf{ATAE-LSTM} \cite{wang2016attention} strengthens the effect of target embeddings, which appends the target embeddings with each word embeddings and use LSTM with attention to get the final representation for classification.
$\bullet$ \textbf{IAN} \cite{ma2017interactive} learns the representations of the target and context with two LSTMs and attentions interactively, which generates the representations for targets and contexts with respect to each other.
$\bullet$ \textbf{RAM} \cite{chen2017recurrent} strengthens MemNet by representing memory with bidirectional LSTM and using a gated recurrent unit network to combine the multiple attention outputs for sentence representation.
~\\
\textbf{AEN-GloVe ablations:}
$\bullet$ \textbf{AEN-GloVe w/o PCT} ablates PCT module.
$\bullet$ \textbf{AEN-GloVe w/o MHA} ablates MHA module.
$\bullet$ \textbf{AEN-GloVe w/o LSR} ablates label smoothing regularization.
$\bullet$ \textbf{AEN-GloVe-BiLSTM} replaces the attentional encoder layer with two bidirectional LSTMs.
~\\
\textbf{Basic BERT-based model:}
$\bullet$ \textbf{BERT-SPC} feeds sequence ``[CLS] + context + [SEP] + target + [SEP]''
into the basic BERT model for sentence pair classification task.
\subsection{Main Results}
\begin{table*}[tp]
\small
\centering
\begin{threeparttable}
\caption{Main results.
The results of baseline models are retrieved from published papers.
``-'' means not reported.
Top 3 scores are in \textbf{bold}.}
\begin{tabular}{cccccccc}
\toprule
\multirow{2}{*}{ }&\multirow{2}{*}{\textbf{Models}}&
\multicolumn{2}{c}{\textbf{Twitter}}&\multicolumn{2}{c}{\textbf{Restaurant}}&\multicolumn{2}{c}{\textbf{Laptop}}\cr
\cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8}
&&Accuracy&Macro-F1&Accuracy&Macro-F1&Accuracy&Macro-F1\cr
\midrule
\multirow{4}*{\textbf{RNN baselines}}
&TD-LSTM &0.7080&0.6900 &0.7563&- &0.6813&- \cr
&ATAE-LSTM &-&- &0.7720&- &0.6870&- \cr
&IAN &-&- &0.7860&- &0.7210&- \cr
&RAM &0.6936&0.6730 &0.8023&0.7080 &\textbf{0.7449}&\textbf{0.7135} \cr
\midrule
\multirow{3}*{\textbf{Non-RNN baselines}}
&Feature-based SVM &0.6340&0.6330 &0.8016&- &0.7049&- \cr
&Rec-NN &0.6630&0.6590 &-&- &-&- \cr
&MemNet &0.6850&0.6691 &0.7816&0.6583 &0.7033&0.6409 \cr
\midrule
\multirow{4}*{\textbf{AEN-GloVe ablations}}
&AEN-GloVe w/o PCT &0.7066&0.6907 &0.8017&0.7050 &0.7272&0.6750 \cr
&AEN-GloVe w/o MHA &0.7124&0.6953 &0.7919&0.7028 &0.7178&0.6650 \cr
&AEN-GloVe w/o LSR &0.7080&0.6920 &0.8000&0.7108 &0.7288&0.6869 \cr
&AEN-GloVe-BiLSTM &0.7210&\textbf{0.7042} &0.7973&0.7037 &0.7312&0.6980 \cr
\midrule
\multirow{3}*{\textbf{Ours}}
&AEN-GloVe &\textbf{0.7283}&0.6981 &\textbf{0.8098}&\textbf{0.7214} &0.7351&0.6904 \cr
&BERT-SPC &\textbf{0.7355}&\textbf{0.7214} &\textbf{0.8446}&\textbf{0.7698} &\textbf{0.7899}&\textbf{0.7503} \cr
&AEN-BERT &\textbf{0.7471}&\textbf{0.7313} &\textbf{0.8312}&\textbf{0.7376} &\textbf{0.7993}&\textbf{0.7631} \cr
\bottomrule
\end{tabular}
\label{tab:result}
\end{threeparttable}
\end{table*}
Table \ref{tab:result} shows the performance comparison of AEN with other models.
BERT-SPC and AEN-BERT obtain substantial accuracy improvements,
which shows the power of pre-trained BERT on small-data tasks.
The overall performance of AEN-BERT is better than BERT-SPC,
which suggests that it is important to design a downstream network customized to a specific task.
As the prior knowledge in the pre-trained BERT is not specific to any particular domain,
further fine-tuning on the specific task is necessary for releasing the true power of BERT.
The overall performance of TD-LSTM is not good, since it treats the target words only coarsely.
ATAE-LSTM, IAN, and RAM are attention-based models; they consistently exceed TD-LSTM on the \emph{Restaurant} and \emph{Laptop} datasets.
RAM is better than the other RNN-based models, but it does not perform well on the \emph{Twitter} dataset,
which might be because bidirectional LSTMs are not good at modeling short and ungrammatical text.
Feature-based SVM is still a competitive baseline, but it relies on manually designed features.
Rec-NN obtains the worst performance among all neural-network baselines,
as dependency parsing is not guaranteed to work well on ungrammatical short texts such as tweets and comments.
Like AEN, MemNet also eschews recurrence, but its overall performance is not good,
since it does not model the hidden semantics of the embeddings, and the result of the last attention is essentially a linear combination of word embeddings.
\subsection{Model Analysis}
As shown in Table \ref{tab:result}, the AEN-GloVe ablations underperform
AEN-GloVe in both accuracy and macro-F1.
This result shows that all of the discarded components are crucial for good performance.
Comparing the results of AEN-GloVe and AEN-GloVe w/o LSR, we observe that the accuracy of AEN-GloVe w/o LSR drops significantly on all three datasets.
We could attribute this phenomenon to the unreliability of the training samples with \textit{neutral} sentiment.
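The label smoothing regularization (LSR) whose ablation is discussed above replaces hard one-hot targets with a softened distribution. A minimal NumPy sketch (the smoothing factor $\epsilon = 0.2$ and the three-class setting are illustrative assumptions, not necessarily the hyperparameters used in the paper):

```python
import numpy as np

def smooth_labels(y, n_classes, eps=0.2):
    """Replace a hard one-hot target with a label-smoothed distribution:
    the true class gets 1 - eps, the remaining mass eps is spread uniformly."""
    dist = np.full(n_classes, eps / n_classes)
    dist[y] += 1.0 - eps
    return dist

def smoothed_cross_entropy(logits, y, eps=0.2):
    """Cross-entropy between the predicted log-softmax and the smoothed target."""
    logits = logits - logits.max()                    # numerical stability
    log_p = logits - np.log(np.exp(logits).sum())     # log-softmax
    return -(smooth_labels(y, logits.size, eps) * log_p).sum()
```

With $\epsilon > 0$ the loss puts a small penalty on every class, which discourages over-confident predictions and is what makes the model less sensitive to unreliable \textit{neutral} labels.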
The overall performance of AEN-GloVe and AEN-GloVe-BiLSTM is relatively close;
AEN-GloVe performs better on the \emph{Restaurant} dataset.
More importantly, AEN-GloVe has fewer parameters and is easier to parallelize.
To figure out whether the proposed AEN-GloVe is a lightweight alternative to recurrent models, we compare the size of each model on the \emph{Restaurant} dataset.
Statistical results are reported in Table \ref{tab:result2}.
We implement all the compared models based on the same source code infrastructure,
use the same hyperparameters, and run them on the same GPU
\footnote{NVIDIA GTX 1080 Ti.}.
RNN-based and BERT-based models indeed have larger model sizes.
ATAE-LSTM, IAN, RAM, and AEN-GloVe-BiLSTM are all attention-based RNN models;
memory optimization for these models is more difficult,
as the encoded hidden states must be kept in memory simultaneously in order to perform the attention mechanisms.
MemNet has the smallest model size, as it has only one shared attention layer and two linear layers and does not compute hidden states of the word embeddings.
AEN-GloVe ranks second in lightweightness,
since it needs somewhat more parameters than MemNet to model the hidden states of sequences.
As a comparison, the model size of AEN-GloVe-BiLSTM is more than twice that of AEN-GloVe, but does not bring any performance improvements.
\begin{table}[tp]
\small
\centering
\caption{Model sizes. Memory footprints are evaluated on the Restaurant dataset. Lowest 2 are in \textbf{bold}.}
\begin{tabular}{ccc}
\toprule
\multirow{2}{*}{\textbf{Models}}&
\multicolumn{2}{c}{\textbf{Model size}}\cr
\cmidrule(lr){2-3}
&Params $\times 10^6$ & Memory (MB) \cr
\midrule
TD-LSTM &1.44 &12.41\\
ATAE-LSTM &2.53 &16.61\\
IAN &2.16 &15.30\\
RAM &6.13 &31.18\\
MemNet &\textbf{0.36} &\textbf{7.82}\\
\midrule
AEN-BERT &112.93 &451.84\\
AEN-GloVe-BiLSTM &3.97 &22.52\\
AEN-GloVe &\textbf{1.16} &\textbf{11.04}\\
\bottomrule
\end{tabular}
\label{tab:result2}
\end{table}
\section{Conclusion}
In this work, we propose an attentional encoder network for the targeted sentiment classification task,
which employs attention-based encoders to model the interaction between context and target.
We raise the label unreliability issue and add a label smoothing regularization
to encourage the model to be less confident with fuzzy labels.
We also apply pre-trained BERT to this task and obtain new state-of-the-art results.
Experiments and analysis demonstrate the effectiveness and the lightweight nature of the proposed model.
\bibliographystyle{acl_natbib}
\section{Introduction}
The \kepler{} space-mission showed that hot-Jupiters are usually lone planets
that do not show transit time variation (TTV) signals \citep{Agol2005MNRAS.359..567A,HolmanMurray2005Sci...307.1288H,Steffen2012PNAS..109.7982S}.
The occurrence rate of companions to hot-Jupiters is currently uncertain, and existing estimates are unreliable \citep{Huang2016ApJ...825...98H}.
On the other hand, almost $50\%$ of warm-Jupiters
(gas giant planets with orbital periods between $\sim$8 and 200 days)
of the \kepler{} sample are found in multi-planet systems \citep{Huang2016ApJ...825...98H}.
These warm-Jupiters show a wide variety of orbital configurations
possibly resulting from different formation and migration mechanisms \citep{Wu2018AJ....156...96W,Kley2019},
i.e. disk migration \citep{Lin1996Natur.380..606L,Baruteau2016SSRv..205...77B}
or high-eccentricity migration \citep{Nagasawa2008ApJ...678..498N}.
The measurement of the sky-projected orientation of
the planet orbit with respect to the spin axis of the star
(the so called projected spin-orbit angle $\lambda$)
can help to discern between these two models.
Misaligned warm-Jupiters could be formed by high-eccentricity migration,
while circular and aligned warm-Jupiters,
potentially in a mean motion resonance (MMR) with an outer companion,
are expected to be the result of disk-driven migration process \citep{Baruteau2016SSRv..205...77B}.
In this scenario, the outer companion is expected
to be less massive than the inner warm-Jupiter,
if produced by convergent migration \citep{Kley2019}.
Although the orbits should be nearly circular and well aligned,
a mild eccentricity of the outer planet is expected to build up because of the resonant perturbations
\citep{Baruteau2016SSRv..205...77B}.
The TTVs of a resonant pair of planets are particularly strong
and might be found even if the companion has a significantly lower mass
that cannot be easily detected using high-precision radial velocity (RV) measurements
\citep{Agol2005MNRAS.359..567A,HolmanMurray2005Sci...307.1288H,Steffen2012PNAS..109.7982S}.
\par
Observing an outer perturber on possibly eccentric and inclined orbit
in a system where an eccentric (and misaligned) warm-Jupiter is present
would be the hint for a high-eccentricity mechanism,
driven by planet-planet (P--P) scattering \citep{Marzari2019AA...625A.121M}
followed by tidal interactions with the host star.
\par
Finding planetary perturbers of known transiting exoplanets
can provide precious insights onto the architecture and the evolution of planetary systems
\citep{Malavolta2017,Teyssandier2020AA...643A..11T,MacDonald2020ApJ...891...20M,Kane2019AJ....157..171K,Masuda2020AJ....159...38M,Poon2020MNRAS.491.5595P}.
Detecting a TTV signal of a known transiting warm-Jupiter
induced by a perturber of planetary nature would help to understand their evolution path,
which is expected to be different from that of hot-Jupiters
\citep{Huang2016ApJ...825...98H,Frewen2016MNRAS.455.1538F}.
\par
The CHaracterising ExOPlanet Satellite \citep[CHEOPS,][]{Benz2020ExA...tmp...53B}
was launched on December 18, 2019, and
it started observations in April, 2020.
CHEOPS is a follow-up mission that aims at characterising exoplanets
known to transit their host star using high-precision photometry.
It already demonstrated its performances
improving the precision on the planetary parameters
of KELT-11 b \citep{Benz2020ExA...tmp...53B}.
\citet{Lendl2020AA...643A..94L} used the transit and the occultation observed with CHEOPS
to characterise the atmosphere and the spin-orbit obliquity of the highly-irradiated WASP-189 b,
measuring the asymmetry of the transit shape due to the stellar gravity darkening.
Furthermore, CHEOPS has been already used to characterise
two multiple-planet systems,
improving the ephemerides and the orbital parameters of the system TOI-1233 \citep{Bonfanti2021AA...646A.157B}
and solving the orbital configuration of TOI-178 \citep{Leleu2021arXiv210109260L}.
\par
As part of the CHEOPS Guaranteed Time Observation (GTO) programme,
we are currently searching for TTV signals in a selected sample of known transiting warm-Jupiters (Section~\ref{sec:warmJup}).
The purpose of this work is to demonstrate CHEOPS' capability to schedule multiple observations and
obtain transit time measurements with sufficient accuracy to allow detection of TTV signals.
In Section~\ref{sec:timing} we present the first 17 CHEOPS transit light curves of seven targets of our GTO programme,
we describe the strategy and planning of our observations,
and the data analysis of single and multiple transits for each target.
We summarise and discuss the results in Section~\ref{sec:results}
and draw our conclusions in Section~\ref{sec:conclusions}.
\par
\section{Target selection of the sample}
\label{sec:warmJup}
The planets in our sample have significantly non-zero eccentricity
measured from Doppler observations and,
when possible, measured spin-orbit angle, $\lambda$,
from observations of the Rossiter-McLaughlin (RM) effect \citep{Ohta2005ApJ...622.1118O} or
Doppler tomography \citep[e.g.,][]{Brown2012ApJ...760..139B}.
We based our initial sample selection on the TEPCAT catalogue \citep{Southworth2011MNRAS.417.2166S},
then we checked if each candidate target was observable with CHEOPS using
the Feasibility Checker (FC) provided by the Consortium
\footnote{Available through ESA website; for more information see \url{https://www.cosmos.esa.int/web/cheops-guest-observers-programme}.}.
\par
The possible high mutual inclination ($\Delta i$) of the perturber
expected from a P--P scattering event
implies an almost null transit probability
and reduces the RV semi-amplitude ($K_\mathrm{RV}$).
Nevertheless, the mass of the perturber, coupled with the eccentricity and the inclination,
is expected to induce a detectable TTV signal of the known transiting warm-Jupiter.
The lack of a TTV signal in highly eccentric and misaligned transiting warm-Jupiters
would indicate that P--P scattering is not efficient in producing eccentric and
misaligned warm gas giant planets.
We expect to observe 15 transits per target during 3.5 years, the nominal duration of the CHEOPS mission.
After the first five transits we should be able to find hints of, or rule out, the presence of a TTV,
but only with the full 15 transits we will be able to sample the TTV period and amplitude
and draw conclusions about the existence of a perturber and
on the formation path (P--P scattering or disk migration).
\par
We estimated the expected amplitude of the TTV signal ($A_\mathrm{TTV}$)
produced by an outer perturber on a transiting warm-Jupiter,
following a procedure similar to that used by \citet{Borsato2021arXiv210309239B}.
We used the parameters of the transiting warm-Jupiter from the literature and
we assumed the existence of a hypothetical outer planetary companion.
The main parameters of the perturber that influence the period and the amplitude of the TTV
are the mass ($M_\mathrm{perturber}$), the period ($P_\mathrm{perturber}$),
the eccentricity ($e_\mathrm{perturber}$), and the mutual inclination ($\Delta i_\mathrm{perturber}$)
of the perturber,
as widely demonstrated analytically and numerically by, e.g.,
\citet{Agol2005MNRAS.359..567A, HolmanMurray2005Sci...307.1288H} and \cite{Nesvorny2009ApJ...701.1116N}.
We created different TTV maps based on different initial values of this set of four parameters of the perturber.
We computed the orbits with \trades{}\footnote{Publicly available at \url{https://github.com/lucaborsato/trades}.}
\citep{Borsato2014AA...571A..38B,Nespral2017,Malavolta2017,Borsato2019MNRAS.484.3233B}
over a grid of mass and period values of the perturber
with 30 log-spaced values of masses,
ranging from $1\, M_{\earth}$ to $1\, M_\mathrm{Jup}$,
and 30 log-spaced values of different orbital periods.
The period grid of the perturber
ranged from slightly longer values than the period of the transiting planet to 100 days.
We used \trades{} to integrate the orbits for 3.5 years
(i.e., the nominal duration of the CHEOPS mission) and computed transit times ($T_0$) and linear ephemerides.
We then selected 15 random transits (without replacement),
i.e., the expected maximum number of transits to be obtained for each target
during the CHEOPS nominal mission,
re-computed the linear ephemeris,
and calculated the $A_\mathrm{TTV}$ as the semi-amplitude of the $O-C$
(selected transit times, $O$, minus the newly computed linear ephemeris, $C$).
This was done for each simulation and repeated for 100 times.
The final $A_\mathrm{TTV}$ was computed as the median of the $A_\mathrm{TTV}$ of the 100 repetitions.
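The bootstrap estimate of $A_\mathrm{TTV}$ described above can be sketched as follows. This is a simplified, stand-alone version: the simulated transit times here carry a toy sinusoidal TTV rather than a \trades{} N-body output, and all specific numbers (period, amplitude, number of epochs) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def ttv_amplitude(t0s, n_select=15, n_rep=100):
    """Median TTV semi-amplitude from randomly selected transit times.

    For each repetition: draw n_select transit times without replacement,
    refit a linear ephemeris T(N) = Tref + P * N by least squares, and take
    the semi-amplitude of the O - C residuals. Return the median over reps."""
    epochs = np.arange(len(t0s))
    amps = []
    for _ in range(n_rep):
        idx = np.sort(rng.choice(len(t0s), size=n_select, replace=False))
        coeff = np.polyfit(epochs[idx], t0s[idx], 1)    # [P, Tref]
        oc = t0s[idx] - np.polyval(coeff, epochs[idx])  # O - C residuals
        amps.append(0.5 * (oc.max() - oc.min()))
    return np.median(amps)

# toy example: a 10-day period with a 2-minute sinusoidal TTV over 60 epochs
n = 60
t0_sim = (2458000.0 + 10.0 * np.arange(n)
          + (2.0 / 1440.0) * np.sin(2 * np.pi * np.arange(n) / 17.0))
```

For this toy input the recovered median semi-amplitude is close to the injected 2-minute value, while a strictly linear set of transit times yields an amplitude consistent with zero.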
We obtained a map of the $A_\mathrm{TTV}$ as a function of mass ($M_\mathrm{perturber}$)
and period ($P_\mathrm{perturber}$) of the perturber.
It is well known that the eccentricity of the perturber ($e_\mathrm{perturber}$)
boosts the $A_\mathrm{TTV}$ \citep{Agol2005MNRAS.359..567A,HolmanMurray2005Sci...307.1288H}.
We also took into account the effect of mutual inclination ($\Delta i$).
We repeated the same analysis
with different sets of initial conditions of the perturber:
$e_\mathrm{perturber} = 0.0$ and 0.1,
$\Delta i_\mathrm{perturber} = 0\degr$ and $60\degr$
(see Figures from \ref{fig:grid_hatp17} to \ref{fig:grid_k2-287} in Appendix~\ref{apdx:ttv}
for a selection of simulation outcomes).
\par
We found that a perturber less massive than the transiting planet
on an external orbit can induce a TTV with amplitude of a few minutes,
detectable with about 15 transits.
Finally, combining information on planet characterisation,
target visibility with CHEOPS,
and dynamical simulations,
we selected a sample of eight warm-Jupiters to follow-up with CHEOPS and
measure their transit times with the purpose of detecting possible TTV signals.
In this work we present the analysis of the timing of
CHEOPS observations obtained so far
within the context of TTV search of the warm-Jupiters.
\par
\section{Exploiting transit timing from CHEOPS data}
\label{sec:timing}
We present the analysis of 17 CHEOPS single visits
of the transits of seven targets
(HAT-P-17 b, KELT-6 b, WASP-8 b, WASP-38 b, WASP-106 b, WASP-130 b, K2-287 b)
out of the eight targets of our sample,
with the purpose of investigating the timing performance of CHEOPS
during the first year of observations.
Currently, for five targets (HAT-P-17 b, WASP-8 b, WASP-38 b, WASP-130 b, K2-287 b)
we have multiple visits (from two to four),
that is, multiple transit observations.
Four targets, HAT-P-17 b, WASP-8 b, WASP-130 b, and K2-287 b
have been observed with an exposure time of $60$~s,
while we used an exposure of $55$~s for WASP-38.
We used the CHEOPS Exposure Time Calculator (ETC\footnote{Available at \url{https://cheops.unige.ch/pht2/exposure-time-calculator/}.}) to determine the exposure time of each target.
\par
\subsection{Observing strategy}
\label{sec:strategy}
The CHEOPS orbit \citep[with period of 98.77 minutes, for more details see][]{Benz2020ExA...tmp...53B}
affects the scheduling and
the strategy of the observations.
Each CHEOPS observation is called a visit.
We aimed to collect CHEOPS data with visit duration ($\mathrm{dur_{vis}}$) that covers
the transit event with an out-of-transit baseline long enough
to sample astrophysical and instrumental noise sources (systematics).
Furthermore, to increase the chance to schedule a transit observation,
it is advisable
to allow for some level of flexibility in the start time of the visit
including a start lag ($l$),
defined as
the difference between an earliest and latest starting phase
($\phi_\mathrm{start,earliest}$ and $\phi_\mathrm{start,latest}$, respectively).
We defined the starting phase ($\phi_\mathrm{start}$)
at half visit duration with respect to the expected centre of the transit,
but the observation can start between
$\phi_\mathrm{start} - l/2$ and $\phi_\mathrm{start} + l/2$.
We used a start lag, $l$, of half the transit duration,
enough to take into account
the uncertainties on the transit duration and the linear ephemeris
and the possible presence of a TTV,
and to make the visit scheduling more flexible.
Our definition of the visit duration evolved with time, following the analysis of the collected data
and the updates of the planetary parameters.
We found that
a possible good choice for the visit duration, especially in case of short transits,
is given by
$\mathrm{dur_{vis}} = \mathrm{max} ( T_{14} + l + n_c \times c_\mathrm{o}, 2.5 \times T_{14})$,
where
$T_{14}$ is the total transit duration \citep[elapsed time from first to fourth contact, eq. 30 of][]{Kipping2010MNRAS.407..301K},
$c_\mathrm{o} = 98.77$~min is CHEOPS orbit duration
and $n_c$ is the minimum number of CHEOPS orbits to cover the out-of-transit light curve.
We need at least one CHEOPS orbit before and one after the transit to sample the
possible systematics,
so we decided to set $n_c = 3$ to have a more robust analysis.
We remind the reader that this definition of the visit duration is indicative and specific
for our targets,
and it must be computed carefully based on the characteristics of the transiting exoplanet
and on the purpose of the observation.
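As a worked example, the visit-duration rule above can be evaluated as follows (the transit durations in the example calls are arbitrary illustrative values, not those of our targets):

```python
CHEOPS_ORBIT_MIN = 98.77  # CHEOPS orbital period in minutes

def visit_duration(t14_min, n_c=3):
    """Indicative visit duration (minutes) following the rule in the text:
    dur_vis = max(T14 + l + n_c * c_o, 2.5 * T14),
    with start lag l = T14 / 2 and n_c CHEOPS orbits of out-of-transit baseline."""
    lag = 0.5 * t14_min
    return max(t14_min + lag + n_c * CHEOPS_ORBIT_MIN, 2.5 * t14_min)
```

For a 3-hour transit this gives $\mathrm{dur_{vis}} \simeq 566$~min (about 9.4 hours), dominated by the $n_c$ baseline orbits; only for very long transits does the $2.5 \times T_{14}$ term take over.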
With the aim of precisely measuring the transit time ($T_0$)
we need high temporal sampling of the ingress and egress phases.
The global efficiency of a CHEOPS visit (\ensuremath{\mathrm{G}_{\mathrm{EFF}}}),
defined as the ratio between the time effectively spent on target and the total visit duration,
depends on the satellite pointing
exclusion angles,
Earth occultations,
straylight conditions,
and passages through the South Atlantic Anomaly (SAA).
A low \ensuremath{\mathrm{G}_{\mathrm{EFF}}}{} translates into periodic gaps in the light curve that
for a minimum \ensuremath{\mathrm{G}_{\mathrm{EFF}}}{} of $50\%$
can be as long as
about half an orbit in duration each.
This impacts how well we can sample the ingress and egress phases of the transit
(critical phase ranges efficiency, \ensuremath{\mathrm{cpr}_{\mathrm{EFF}}}),
and so it greatly affects the precision on the mid-time of transit $T_0$.
However, the \ensuremath{\mathrm{G}_{\mathrm{EFF}}}{} and \ensuremath{\mathrm{cpr}_{\mathrm{EFF}}}{} predicted by the feasibility checker
can be inaccurate as the CHEOPS orbit implemented in the FC
is an approximation to the satellite's true orbit on the date of the observation.
The uncertainty on CHEOPS's exact position along its orbit makes the predicted timing
of these gaps obsolete beyond a few weeks.
As the FC is not updated on a weekly basis to take the revised CHEOPS orbit into account,
we cannot predict \ensuremath{\mathrm{G}_{\mathrm{EFF}}}{} and \ensuremath{\mathrm{cpr}_{\mathrm{EFF}}}{} far in advance.
The precision and the accuracy on the transit linear ephemeris
and on the parameters of the exoplanet also impact the \ensuremath{\mathrm{cpr}_{\mathrm{EFF}}}.
We set as minimum value of the global efficiency \ensuremath{\mathrm{G}_{\mathrm{EFF}}}{} $\ge 50\%$.
When possible, we selected the \ensuremath{\mathrm{cpr}_{\mathrm{EFF}}}{} transit-by-transit,
favouring events with
at least one \ensuremath{\mathrm{cpr}_{\mathrm{EFF}}}{} (ingress or egress) $\ge 70\%$ and the other one at least $\ge 30\%$,
or both \ensuremath{\mathrm{cpr}_{\mathrm{EFF}}}{} $\ge 50\%$.
The selection of the visits evolved in time and with updated
planetary parameters and FC version.
Furthermore, some of the predicted \ensuremath{\mathrm{cpr}_{\mathrm{EFF}}}{} from FC mismatched
the sampling of the ingress and egress phases of the transit observations,
as we will explain in Section~\ref{sec:results}.
It would also be advisable to have non-consecutive transits
to increase the temporal baseline for the TTV identification and analysis,
but, due to all constraints and to the automatic scheduling,
in a few cases CHEOPS observed consecutive transits of the same exoplanet.
\par
\subsection{Data analysis}
\label{sec:data_analysis}
For all the visits we used the light curve extracted by the CHEOPS Data Reduction Pipeline
\citep[DRP version 12,][]{Hoyer2020A&A...635A..24H}
with the default aperture size of 25 pixels (which corresponds to $25\arcsec$).
We used the same aperture size for all the visits of all the targets for consistency.
The DRP extracts the flux,
the error on flux measurement,
the background,
the centroid position (and the offset position in $x$ and $y$ pixel coordinates),
the contamination,
and the roll angle of the satellite
\citep[for further details see][]{Hoyer2020A&A...635A..24H,Bonfanti2021AA...646A.157B,Leleu2021arXiv210109260L}.
We clipped out the outliers by filtering out values
5 times the mean absolute deviation away
from the median-smoothed\footnote{We used \texttt{scipy.signal.medfilt}.} light curve.
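A minimal sketch of this clipping step (the median-filter kernel size is an illustrative assumption; only the 5-MAD threshold is from the text):

```python
import numpy as np
from scipy.signal import medfilt

def clip_outliers(flux, kernel_size=11, n_mad=5):
    """Flag points more than n_mad mean absolute deviations away from a
    median-smoothed version of the light curve; returns a boolean keep-mask."""
    smooth = medfilt(flux, kernel_size=kernel_size)
    resid = flux - smooth
    mad = np.mean(np.abs(resid - np.mean(resid)))  # mean absolute deviation
    return np.abs(resid) <= n_mad * mad
```

Points flagged by the mask are discarded before fitting the transit model.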
\subsubsection{Stellar parameters}
We obtained
the stellar effective temperature $T_{\mathrm{eff}}$,
surface gravity $\log g$,
and the metallicity \feh{}
from SWEET-Cat \citep[e.g.,][]{Santos2013AA...556A.150S,Sousa2018AA...620A..58S}.
For K2-287 the spectroscopic parameters were reviewed with more recent
spectroscopic data within the CHEOPS Stellar Characterization working group.
The parameters were derived with \texttt{ARES+MOOG} \citep{Sneden1973PhDT.......180S,Sousa2015A&A...577A..67S}
following the same procedure as for SWEET-Cat \citep[e.g.,][]{Sousa2014dapb.book..297S, Bonfanti2021AA...646A.157B}.
We used the infrared flux method \citep[IRFM;][]{Blackwell1977MNRAS.180..177B}
to determine the stellar radius $R_{\star}$ of the targets in this study via a comparison between
optical and infrared broadband fluxes and synthetic photometry of stellar atmospheric models,
using known relationships between stellar angular diameter,
effective temperature, and parallax (\gaia{} DR2).
This is conducted in a Markov chain Monte Carlo (MCMC) approach
by taking the spectral parameter values derived above
as priors on stellar spectral energy distribution selection to be used for synthetic photometry.
We retrieved broadband photometry from the most recent data releases for the following bandpasses:
\textit{Gaia} $G$, $G_\mathrm{BP}$, and $G_\mathrm{RP}$, 2MASS $J$, $H$, and $K$,
and \textit{WISE} W1 and W2 \citep{Gaia2016_A&A...595A...1G,GaiaDR2_2018yCat.1345....0G,Skrutskie2006AJ....131.1163S, Wright2010AJ....140.1868W},
and we used the \textit{Gaia} DR2 parallax and \textsc{atlas} Catalogues \citep{Castelli2003IAUS..210P.A20C} of models.
Stellar mass $M_{\star}$ and age
values were determined
by combining the results coming from two different sets of stellar evolutionary models,
namely PARSEC v1.2S \citep[PAdova \& TRieste Stellar Evolutionary Code,][]{marigo17} and
CLES \citep[Code Li\`{e}geois d'\'{E}volution Stellaire,][]{scuflaire08}.
The adopted input parameters were
$T_{\mathrm{eff}}$, metallicity \feh, and $R_{\star}$.
In particular, the results from PARSEC were inferred employing the isochrone placement algorithm
described in \citet{bonfanti15,bonfanti16},
which interpolates within a pre-computed grid of models to retrieve the best-fit parameters.
Instead, the results from CLES are retrieved by directly modelling the star with CLES code
following a Levenberg-Marquardt minimisation \citep{Salmon2020arXiv201114932S}.
The final adopted values for $M_{\star}$ and age $t_{\star}$
derive from
a careful combination of the two pairs of outputs,
as described in detail in \citet{Bonfanti2021AA...646A.157B}.
\par
Of all the stellar properties of all the targets,
we found that three values agree with the literature within 3-$\sigma$,
four values agree within 2-$\sigma$,
and all the others agree within 1-$\sigma$.
\par
\subsubsection{Light curves analysis}
We analysed all single and multiple visits with
\pycheops\footnote{Publicly available at \url{https://github.com/pmaxted/pycheops}. We used version 0.9.3 of \pycheops.}
\citep[][Maxted et al., submitted]{Benz2020ExA...tmp...53B},
a custom python package developed to manage and analyse CHEOPS datasets.
\paragraph{Single-visit analysis}
The fitting parameters of the single-visit transit model within \pycheops{} are:
the transit time ($T_0$),
the orbital period ($P$),
the transit depth ($D$)\footnote{The transit depth $D$ is defined as the square of the planet-star radius ratio ($k$):
$D = k^2 = \left( \frac{R_\mathrm{p}}{R_\star} \right)^2$.},
the transit duration \citep[$W$, eq.~16 of][]{SeagerMallenOrnelas2003ApJ...585.1038S} in unit of $P$,
the impact parameter ($b$)\footnote{Impact parameter for the circular case $b = \frac{a}{R_\star}\cos i$},
the combination of eccentricity ($e$) and argument of pericenter ($\omega$) in the form
$\sqrt{e}\cos \omega$ and $\sqrt{e}\sin \omega$.
\pycheops{} implements the algorithm \texttt{qpower2} \citep{Maxted2019AA...622A..33M}
for the power-2 law for the limb-darkening (LD)
with parameters $h_1$ and $h_2$,
but constrained in the ($0, 1$) uniform space of
the fitting parameters $q_1$ and $q_2$
\citep{Maxted2018AA...616A..39M,Short2019RNAAS...3..117S}.
The program takes into account trends and/or patterns using
detrending parameters, such as
first and second order derivative in time (linear $\mathrm{d} f/\mathrm{d} t$ and quadratic $\mathrm{d}^2 f/\mathrm{d} t^2$ term),
first and second order derivative of the centroid offset in $x$ and $y$ pixel coordinates
$(\mathrm{d} f/\mathrm{d} x,\ \mathrm{d}^2 f/\mathrm{d} x^2,\ \mathrm{d} f/\mathrm{d} y,\ \mathrm{d}^2 f/\mathrm{d} y^2)$,
background $(\mathrm{d} f/\mathrm{d} \mathrm{bg})$,
contamination $(\mathrm{d} f/\mathrm{d} \mathrm{contam})$,
and the first three harmonics of the roll angle (in $\cos \phi$ and $\sin \phi$).
It has an additional term called glint, which models internal reflections
as a smooth function of the roll angle;
this parameter can be modelled by measuring the roll angle relative to the apparent Moon distance
(that is, when the glint is caused by moonlight).
It also models the stellar activity, i.e., the stellar granulation, with a
Gaussian process \citep[GP,][]{Rasmussen2006gpml.book.....R}
with the \texttt{SHOTerm} kernel,
with a fixed quality factor $Q=1/\sqrt{2}$,
implemented in \texttt{celerite} \citep{Harvey1985ESASP.235..199H,Kallinger2014AA...570A..41K,celerite,Barros2020AA...634A..75B}.
The \texttt{SHOTerm} kernel describes a stochastically-driven, damped harmonic oscillator,
characterised by a damping time scale equal to $\tau = 2 Q / \omega_0$ and
a standard deviation of the process $\sigma_\mathrm{GP} = \sqrt{S_0 \omega_0 Q}$.
The fitting hyper-parameters used in the kernel are $\log S_0$ and $\log \omega_0$.
A jitter term was always added in quadrature to the flux errors
and fitted as $\log \sigma_j$;
a constant term ($c$) was also included in the detrending model.
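For reference, the quantities derived from the \texttt{SHOTerm} hyper-parameters can be computed as follows (a sketch assuming natural logarithms for $\log S_0$ and $\log \omega_0$):

```python
import numpy as np

Q = 1.0 / np.sqrt(2.0)  # fixed quality factor of the SHOTerm kernel

def sho_derived(log_s0, log_w0):
    """Damping time scale and process standard deviation of the
    stochastically-driven damped harmonic oscillator kernel,
    from the fitted hyper-parameters log S0 and log omega0."""
    s0, w0 = np.exp(log_s0), np.exp(log_w0)
    tau = 2.0 * Q / w0               # damping time scale
    sigma = np.sqrt(s0 * w0 * Q)     # standard deviation of the process
    return tau, sigma
```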
\par
During the single-visit analysis we did not fit all the parameters.
We fixed $P$, $\sqrt{e}\cos \omega$, and $\sqrt{e}\sin \omega$
to the values found in literature.
For each visit we compared
the Bayesian Information Criterion (BIC)
for two transit models,
i.e., fitting the parameters of the transit shape,
that is $D$, $W$, and $b$,
or fixing them.
The physical parameters of the planets taken from the literature
are used to compute the initial parameters and
the Gaussian priors for the fitting parameters $D$, $W$, and $b$.
For all the detrending parameters we used Uniform priors between -1 and 1,
only the glint parameter was bounded between 0 and 2.
From the determinant of the Jacobian matrix
we constrained the model to have uniform priors on
$\cos i$, $\log k$, and $\log a/R_\star$.
During the fit, \pycheops{} computes the $\log$ of the stellar density ($\log \rho_{\star}$)
from $k$, $b$, $W$, and $P$ and
it applies a prior determined from the stellar parameters, i.e. mass and radius.
Also the LD power-2 law coefficient values and priors are computed from the stellar parameters
in the form $h_1$ and $h_2$, defined in \citet{Maxted2018AA...616A..39M}.
\par
We did the analysis using as initial points the parameters from the literature,
fitted with the Levenberg-Marquardt algorithm \citep[based on MINPACK,][]{MINPACK} implemented in \texttt{lmfit}\footnote{\url{https://lmfit.github.io/lmfit-py/}},
and then we ran an MCMC analysis with
the affine-invariant sampler \citep{GoodmanWeare2010CAMCS...5...65G}
implemented in the \emcee{} package \citep{DFM2013ascl.soft03002F, DFM2019JOSS....4.1864F}.
First, we used only the detrending parameters without the GP,
then we fixed the transit shape (if fitted) and the $T_0$
training the GP on the residuals.
The posteriors of the hyperparameters obtained in this way were then used to define the priors
for the subsequent analyses, with widths of
twice the error computed from the posterior distribution.
We re-ran the full analysis (transit model, detrending parameters, and GP)
with physical and hyperparameter priors.
So, for each visit we ran the analysis both fitting and fixing the transit shape,
with different combinations of detrending parameters of the same kind
(e.g., linear and quadratic trends in time,
first- and second-order derivatives of the $x$ and $y$ pixel offsets, etc.),
with an additional set of detrending parameters determined
with a least-squares fit on the out-of-transit part,
and with and without the GP.
For each of these analyses we computed the BIC,
and we visually inspected each single fit to avoid overcompensation
by the GP,
looking for transit-like features (also upside-down ones).
In addition,
we computed the Pearson's correlation $r$\footnote{Implemented within \texttt{SciPy} at \url{https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pearsonr.html}.}
between the flux and the best-fit transit model ($r_\mathrm{tra}$),
and the flux and the best-fit GP model ($r_\mathrm{GP}$) without the transit model.
We found that all the transit models are strongly and significantly
correlated with the flux
($r_\mathrm{tra} > 0.9,\,p\mathrm{-value} < 0.05$),
while $r_\mathrm{GP}$ did not show any correlation.
We also tried to evaluate the possible level of overcompensation by adding a scaled transit model
(from the best-fit without the GP) to the GP model and
computing the correlation coefficient $r_\mathrm{GP,scaled}$.
We tested a scale factor ranging from $2.5\%$ to $0.5\%$ in steps of $0.5\%$.
We found that all the $r_\mathrm{GP}$ values were lower than $r_\mathrm{GP,scaled}$ even with the scale factor at $0.5\%$,
allowing us to conclude that each GP model could contribute to the transit model by less than $0.5\%$.
Even if this analysis is not conclusive, it is a further indication that we are not introducing a strong bias in our
transit model and parameter estimation.
This allowed us to determine the best-fit combination of transit, detrending, and GP parameters for each visit as the model with the lowest BIC.
In the \emcee{} analysis we used 128 walkers
and we fine-tuned the number of steps and burn-in for each visit,
repeating the analysis with an increased number of steps
if the chains did not converge
(convergence was checked by visual inspection of all the chains).
\par
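To make the correlation test above concrete, here is a minimal, self-contained sketch with synthetic data (the box-shaped transit, sinusoidal GP curve, noise level, and scale factors are illustrative stand-ins, not the actual CHEOPS light curves or pipeline models):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n = 500
phase = np.linspace(-0.05, 0.05, n)

# Toy stand-ins: a box-shaped transit and a smooth, low-amplitude "GP" curve.
transit = np.where(np.abs(phase) < 0.02, -0.01, 0.0)
gp_model = 5e-4 * np.sin(2.0 * np.pi * phase / 0.08)
flux = 1.0 + transit + gp_model + rng.normal(0.0, 2e-4, n)

# Correlation of the flux with the transit model and with the GP model alone.
r_tra, p_tra = pearsonr(flux, transit)
r_gp, _ = pearsonr(flux, gp_model)

# Overcompensation check: add a scaled transit to the GP model and see
# whether the correlation rises above r_gp even at the smallest scale.
r_scaled = {s: pearsonr(flux, gp_model + s * transit)[0]
            for s in (0.025, 0.02, 0.015, 0.01, 0.005)}
```

With these toy inputs the transit model correlates strongly with the flux while the GP model alone does not, and even a $0.5\%$-scaled transit added to the GP model raises the correlation above $r_\mathrm{GP}$, mirroring the behaviour described above.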
\paragraph{Multi-visit analysis}
For targets already observed by CHEOPS multiple times,
we used \pycheops{} to combine
the best-fit results of the single-visit analyses
and analyse all the visits simultaneously.
We fitted a common transit and LD model,
as for the single visits
($D$, $W$, $b$, $h_1$, and $h_2$).
We also used the detrending parameters of each single visit,
and a common \texttt{SHOTerm} GP kernel \citep{Foreman-Mackey2017AJ....154..220F}
with two common hyperparameters ($\log S_0$ and $\log \omega_0$).
Since the GP can absorb linear trends in time,
we used very tight priors on $\mathrm{d} f/\mathrm{d} t$ when such a trend was present.
The priors on the hyperparameters were determined as the average (with error propagation)
of the single-visit hyperparameters;
we used the default values of the GP hyperparameters when the GP was not used in the single-visit analysis.
In the multi-visit analysis the roll-angle model within \pycheops{}
is not part of the detrending model, unlike in the single-visit analysis.
The detrending parameters of the roll angle (and its harmonics)
are treated as nuisance parameters following the recipe by \citet{Luger2017RNAAS...1....7L}
and are marginalised away through a \texttt{celerite CosineTerm} kernel
(see Maxted et al., in prep. for further details)
added to the covariance matrix.
This method implicitly assumes that the roll angle is a linear function
of time within each visit,
that is, that the rate of change of the roll angle is constant.
\par
First, we fitted the common transit parameters,
the detrending and GP hyperparameters,
and a linear ephemeris whose parameters are
the reference time ($T_{0,\mathrm{ref}}$) and the period ($P$).
Then, we took the best-fit parameters from the posterior distribution and
repeated the analysis,
fixing $T_{0,\mathrm{ref}}$ and $P$ and
fitting $\Delta T_{0,n}$\footnote{Also referred to within \pycheops{} as $\mathrm{ttv}_n$, with $n$ the visit number.}
for each visit $n$,
that is, the deviation of each transit time from the linear ephemeris,
$T_{0,n} = T_{0,\mathrm{ref}} + E \times P + \Delta T_{0,n}$,
with $E$ the epoch, an integer number that identifies the transit.
We found that using a number of walkers (or chains) between 64 and 128
(depending on the number of fitting parameters)
was enough to reach convergence
for the multi-visit analysis with \emcee{},
because we start from the previous single-visit analyses.
\par
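As a numerical illustration of this relation (only $T_{0,\mathrm{ref}}$ and $P$ are taken from Table~\ref{tab:hatp17} as example values; the observed transit time below is invented):

```python
# Linear ephemeris with per-visit timing deviations,
# T_0,n = T_0,ref + E * P + Delta T_0,n.
# T0_ref and P are example values (HAT-P-17 b); t_obs is invented.
T0_ref = 2122.67790   # reference transit time (BJD_TDB - 2457000)
P = 10.338524         # orbital period (days)

def epoch(t_obs, t_ref=T0_ref, period=P):
    """Integer epoch E of the transit closest to t_obs."""
    return round((t_obs - t_ref) / period)

def delta_t0(t_obs, t_ref=T0_ref, period=P):
    """Deviation Delta T_0,n of t_obs from the linear ephemeris (days)."""
    return t_obs - (t_ref + epoch(t_obs, t_ref, period) * period)

# A made-up transit observed three epochs after the reference,
# arriving 0.0012 d (~104 s) late.
t_obs = T0_ref + 3 * P + 0.0012
```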
For targets with multiple visits we computed the so-called observed-minus-calculated ($O-C$) plot,
where $O$ is the observed $T_0$ and
$C$ is the transit time computed from the linear ephemeris.
The $O-C$ diagram is a simple tool to identify a possible TTV signal.
We computed two sets of $O-C$ values,
one from the $T_{0,n}$ of the single visits with the ephemeris obtained from the multi-visit analysis,
and a second one as a direct output of the multi-visit analysis,
that is $(O-C)_{n} = \Delta T_{0,n}$.
In this way we were also able to visually assess the improvement
in the transit-timing measurement from the simultaneous multi-visit analysis.
\par
For all the single- and multi-visit analyses
we adopted as best-fit solution the maximum likelihood estimation (MLE),
that is, the set of parameters in the posterior distribution that maximises the likelihood.
As the error, $\sigma$, on the best fit we took the semi-interval of
the highest density interval (HDI\footnote{
Based on the implementation of \texttt{TraceAnalysis.hpd} within the \texttt{PyAstronomy} package,
available at \url{https://pyastronomy.readthedocs.io/en/latest/}.
})
at $68.27\%$ of the posterior,
which is equivalent to the semi-interval defined by the $16$-th and $84$-th percentiles
in the case of a Gaussian distribution\footnote{The error of the fitted parameters computed as the semi-difference between
the $84$-th and the $16$-th percentile is the default method within \pycheops.}.
\par
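A generic sketch of this error estimate, i.e. the smallest interval containing $68.27\%$ of the posterior samples (this is not the \texttt{PyAstronomy} implementation, just the same idea applied to a toy Gaussian posterior):

```python
import numpy as np

def hdi(samples, prob=0.6827):
    """Smallest interval containing a fraction `prob` of the samples
    (the highest density interval for a unimodal posterior)."""
    s = np.sort(np.asarray(samples))
    n_in = int(np.ceil(prob * len(s)))           # samples inside the interval
    widths = s[n_in - 1:] - s[: len(s) - n_in + 1]
    i = int(np.argmin(widths))                   # narrowest window wins
    return s[i], s[i + n_in - 1]

rng = np.random.default_rng(0)
posterior = rng.normal(10.0, 2.0, 200_000)       # toy Gaussian posterior
lo, hi = hdi(posterior)
sigma = 0.5 * (hi - lo)                          # semi-interval used as the error
```

For a Gaussian posterior with $\sigma = 2$, the recovered semi-interval is $\approx 2$, matching the $16$th-$84$th percentile definition.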
\subsection{HAT-P-17 b}
\label{sec:lc_hatp17}
HAT-P-17 is an early K dwarf (see Table~\ref{tab:hatp17} for stellar parameters)
that hosts two exoplanets;
it was the second multi-planet system detected by a ground-based facility \citep{Howard2012ApJ...749..134H}.
The outer planet, HAT-P-17 c, has a poorly constrained orbit with a period
that could be anywhere in the range between 10 and 36 years \citep{Fulton2013ApJ...772...80F}.
It does not appear to transit.
HAT-P-17 b is a transiting exoplanet with mass and radius of about
$0.5\, M_\mathrm{Jup}$ and $1\, R_\mathrm{Jup}$, respectively,
and an orbital period of 10.3 days.
The planet has a high eccentricity, $e = 0.342$,
suggesting that a perturbation process was responsible for the formation of the system,
even though the spin-orbit misalignment ($\lambda = 19^{+14}_{-16}\degr$) was not significantly detected
by \citet{Fulton2013ApJ...772...80F}.
The same authors, from an adaptive-optics analysis, ruled out the presence of
a distant ($> 50$~au) and massive object ($M \sim 80\ M_\mathrm{Jup}$).
This suggests that the Kozai-Lidov process was not responsible for the formation of the system.
Detecting a TTV signal from a fourth, lighter object on a mutually inclined orbit
would provide evidence that P-P scattering could be the main process
in the evolution of the HAT-P-17 system.
\par
CHEOPS observed HAT-P-17 from August 2020 to October 2020,
obtaining three transits of planet b with \ensuremath{\mathrm{G}_{\mathrm{EFF}}}{} of $65.8\%$, $57.4\%$, and $48.5\%$, respectively.
The third visit covers almost none of the ingress and egress,
lowering the precision on the transit time of this visit.
\par
Before observing, we realised that there are two stars of magnitude
$G = 14.6$ and $15.7$ ($4-5$ magnitudes fainter than the target)
located close to the edge of the photometric aperture (aperture radius of $25\arcsec$).
These two stars, at a projected distance of about $26\arcsec$,
are not physically bound to HAT-P-17,
as we can infer from their parallax ($\pi$) and proper motions ($\mu_\alpha,\, \mu_\delta$)
from \gaia{} EDR3\footnote{
Only in this case did we use EDR3 instead of DR2, because of the updated values of the
parallaxes and proper motions of the neighbouring stars at the time of writing.
Using EDR3 parallaxes for the stellar properties of all the targets
would not have affected the results of our work.
}
\citep[][Gaia Collaboration et al., in prep.]{GaiaPLX_2020arXiv201201742L},
i.e.,
$\pi= 1.29\pm0.02$~mas,
$\mu_\alpha= 5.21\pm0.02,\, \mu_\delta= -14.19\pm0.02$~mas/yr and
$\pi= 0.53\pm0.05$~mas,
$\mu_\alpha= 6.31\pm0.04,\, \mu_\delta= -9.53\pm0.04$~mas/yr,
respectively,
while HAT-P-17 has
$\pi = 10.82 \pm 0.02$~mas and
$\mu_\alpha= -80.28\pm0.02,\, \mu_\delta= -127.04\pm0.02$~mas/yr.
We estimated that the flux contribution from the contaminants is about $2.5\%$,
but we were able to model it with \pycheops.
\par
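The quoted flux contribution can be checked with a back-of-the-envelope conversion of magnitude differences into flux ratios; the sketch below assumes that all of the light of both neighbours falls inside the aperture, so it gives an upper limit of $\sim2.6\%$, consistent with the $\sim2.5\%$ modelled with \pycheops:

```python
# Upper limit on the contaminant flux fraction from the Gaia G magnitudes,
# assuming all the light of both neighbours falls inside the aperture.
G_target = 10.3                 # HAT-P-17 (G magnitude, summary table)
G_neighbours = [14.6, 15.7]     # the two stars near the aperture edge

flux_fraction = sum(10.0 ** (-0.4 * (g - G_target)) for g in G_neighbours)
```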
For each visit we modelled the light curve by fitting the transit shape and
the systematics, with the contaminant parameter in the detrending model and
the GP kernel;
these were the models with the lowest BIC.
Table~\ref{tab:hatp17} lists literature values used as initial guess and priors.
See Fig.~\ref{fig:visits_hatp17} for the three single visits of HAT-P-17 b,
with best-fit model (transit, detrending, and GP).
From the single-visit analyses we obtained errors on the transit times of
$\sigma_{T_0} = 87$~s, 82~s, and 97~s, respectively.
We used the single-visit analyses as input for a simultaneous, combined multi-visit analysis (Fig.~\ref{fig:mv_hatp17}).
We report in Table~\ref{tab:hatp17} the best-fit solution of the multi-visit analysis.
The $O-C$ plot of the three visits is shown in Fig.~\ref{fig:mv_hatp17} with
the linear ephemeris from the first iteration of the multi-visit analysis
(see $T_{0,\mathrm{ref}}$ and $P$ in Table~\ref{tab:hatp17}).
We found that the first two visits show an improvement in $\sigma_{T_0}$ of $\sim30$~s
($40\%$ for the first visit, $35\%$ for the second),
and they agree with the linear ephemeris at the 1-$\sigma$ level.
On the other hand, the $T_0$ of the third visit improved only by 3~s and
shows a deviation from the linear ephemeris.
Moreover, for this visit the $T_0$ from the multi-visit analysis
agrees with the single-visit analysis only at the 2-$\sigma$ level.
As shown by \citet{Barros2013MNRAS.430.3032B}, the uncertainties from partial transits
are usually underestimated, which explains the discrepancies found.
To confirm or rule out the TTV signal in the $O-C$ diagram of Fig.~\ref{fig:mv_hatp17},
we would need to analyse the CHEOPS observations together with literature and TESS data,
but this is beyond the purpose of this work.
\begin{figure*}
\centering
\includegraphics[width=0.99\columnwidth]{HAT-P-17/HAT-P-17_id03_v1_lc_emcee_mle_gp.png}
\includegraphics[width=0.99\columnwidth]{HAT-P-17/HAT-P-17_id03_v2_lc_emcee_mle_gp.png}
\includegraphics[width=0.99\columnwidth]{HAT-P-17/HAT-P-17_id03_v3_lc_emcee_mle_gp.png}
\caption{HAT-P-17 b single visit analysis.
Maximum Likelihood Estimation (MLE, orange line) from the posterior distribution as the best-fit model
(lowest BIC)
with 128 random samples as green lines
(un-detrended and detrended in the first and second panel, respectively, of each figure);
black line as the transit model (with out-of-transit set to 1 by default).
If a Gaussian process (GP) was used, an additional panel shows the residuals with the best-fit GP model over-plotted (red line).
The last panel shows the residuals with respect to the best-fit model
with the photometric jitter term (fitted as $\log \sigma_j$)
added in quadrature to the photometric errors.
\textit{Upper-left:} first visit, fitted transit shape and detrending against contaminants and GP;
\textit{upper-right:} second visit, same fitting and detrending parameters of first visit;
\textit{lower:} third visit, same model parameters as the first and second visits.
}
\label{fig:visits_hatp17}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1.1\columnwidth]{HAT-P-17/MV_HAT-P-17.png}
\includegraphics[width=0.9\columnwidth]{HAT-P-17/OC_HAT-P-17.png}
\caption{Multi-visit analysis of HAT-P-17 b.
\textit{Left}: three CHEOPS visits in phase, $\phi$,
with respect to the linear ephemeris and
taking into account possible TTV signal
by fitting $\Delta T_{0,n}$;
data points plotted as white, gray and black circles for first, second, and third visit, respectively;
coloured circles represent the model for different visit;
from top to bottom panels: first panel shows the raw light curves,
second panel shows the detrended light curves, also corrected with the Gaussian process,
third panel shows the residuals.
\textit{Right}: $O-C$ diagram with values from
the single-visit analysis (squares)
and the multi-visit analysis (circles).
We used a common linear ephemeris (on top of the figure)
from the first iteration of the multi-visit analysis as calculated $C$
and the $T_0$s of the single-visit analysis as observed $O$.
The $O-C$ values for the multi-visit analysis correspond to the directly fitted $\Delta T_{0,n}$, with $n$ the visit number.
}
\label{fig:mv_hatp17}
\end{figure*}
\begin{table}
\centering
\caption{HAT-P-17 summary table of stellar and planetary (planet b) parameters.
Input and priors planetary parameters from \citet{Howard2012ApJ...749..134H}
and \citet{Fulton2013ApJ...772...80F}.
Best-fit solution (MLE and semi-interval HDI at $68.27\%$) from the simultaneous analysis of the three visits.
}
\label{tab:hatp17}
\resizebox{0.9\columnwidth}{!}{%
\begin{tabular}{lcc}
\hline
Parameters & Input/priors & Source\\
\hline
HAT-P-17 & \multicolumn{2}{c}{Gaia DR2 1849786481031300608} \\
RA (J2000) & 21:38:08.73 & Simbad\\
DEC (J2000) & +30:29:19.4 & Simbad\\
$\mu_\mathrm{\alpha}$~(mas/yr) & $-80.4 \pm 0.2$ & \gaia{} DR2\\
$\mu_\mathrm{\delta}$~(mas/yr) & $-127.0 \pm 0.2$ & \gaia{} DR2\\
age (Gyr) & $7 \pm 2$ & This work\\
parallax (mas) & $10.80 \pm 0.06$ & \gaia{} DR2\\
$V$~(mag) & 10.4 & Simbad\\
$G$~(mag) & 10.3 & \gaia{} DR2\\
$M_{\star}\, (M_{\sun})$ & $0.88 \pm 0.04$ & This work\\
$R_{\star}\, (R_{\sun})$ & $0.84 \pm 0.01$ & This work\\
$\rho_{\star}\, (\rho_{\sun})$ & $1.1 \pm 0.5$ & This work\\
$T_\mathrm{eff}$~(K) & $5332 \pm 55$ & SWEET-Cat\\
$\log g$ & $4.45 \pm 0.13$ & SWEET-Cat\\
\feh~(dex) & $+0.05 \pm 0.03$ & SWEET-Cat\\
\hline
HAT-P-17 b & & \\
Model & Input/priors & Multi-visit (MLE \& HDI) \\
$T_{0,\mathrm{ref}}^{(a)}$~(days) & $-2198.8306 \pm 0.0002$& $2122.67790 \pm 0.00008$ \\
$P$~(days) & $10.338523 \pm 0.000009$ & $10.338524 \pm 0.000009$ \\
$D = k^2$ & & $ 0.0153 \pm 0.0002$\\
$W$~(unit of $P$) & & $ 0.01609 \pm 0.00008$\\
$b$ & $0.31 \pm 0.07$ & $0.47 \pm 0.02$\\
$h_1$ & $0.71 \pm 0.01$ & $0.70 \pm 0.01$\\
$h_2$ & $0.44 \pm 0.05$ & $0.47 \pm 0.05$\\
$T_{0,1}^{(a)}$~(days) & $2091.66222 \pm 0.00100$ & $2091.66185 \pm 0.00060$ \\
$T_{0,2}^{(a)}$~(days) & $2122.67674 \pm 0.00095$ & $2122.67751 \pm 0.00062$ \\
$T_{0,3}^{(a)}$~(days) & $2143.36047 \pm 0.00112$ & $2143.35789 \pm 0.00109$ \\
$\log \sigma_j$ & - & $-8.22 \pm 0.07$\\
Derived/physical & & \\
$k=R_\mathrm{b}/R_{\star}$ & $0.124 \pm 0.001$ & $0.1237 \pm 0.0007$\\
$R_\mathrm{b}\, (R_\mathrm{Jup})$ & - & $1.04 \pm 0.02$ \\
$a/R_{\star}$ & $22.6 \pm 0.5$ & $20.2 \pm 0.3$\\
$i\, (\degr)$ & $89.2 \pm 0.2$ & $88.7 \pm 0.1$\\
$T_{14}^{(b)}$~(days) & $0.1690 \pm 0.0009$ & $0.1664 \pm 0.0008$\\
$e$ & $0.342 \pm 0.005$ & fixed \\
$\omega\, (\degr)$ & $201.5 \pm 1.6$ & fixed \\
$K_\mathrm{RV}$~(m\,s$^{-1}$) & $58.6 \pm 0.7$ & - \\
$M_\mathrm{b}\, (M_\mathrm{Jup})$ & - & $0.54 \pm 0.02$\\
$\rho_\mathrm{b}$~(g\,cm$^{-3}$) & - & $0.44 \pm 0.02$\\
$\lambda^{(c)}\, (\degr)$ & $19^{+14}_{-16}$& \\
GP hyperparameters & & \\
$\log S_0$ & - & $-18.9 \pm 0.2$ \\
$\log \omega_0$ & - & $4.78 \pm 0.07$\\
\hline
\end{tabular}
}
\\
\textbf{Notes:}
$^{(a)}$: Transit times in BJD$_\mathrm{TDB}-2457000$.
$T_{0,n}$ single visit output in the input/priors column,
while they are the linear ephemeris plus $\Delta T_{0,n}$ from multi-visit analysis.
$^{(b)}$: Total duration. The equation used depends on the literature source.
The multi-visit duration is equal to $T_{14} = W \times P$.
$^{(c)}$: spin-orbit angle measured from the Rossiter-McLaughlin effect.
\end{table}
\subsection{KELT-6 b}
\label{sec:lc_kelt6}
KELT-6 is a late F-type star that hosts two exoplanets,
one transiting, KELT-6 b \citep{Collins2014AJ....147...39C}, with a period of 7.85~d
and an outer more massive non-transiting planet, KELT-6 c \citep{Damasso2015AA...581L...6D}.
\citet{Damasso2015AA...581L...6D} proposed that
the system could be the result of
P-P scattering among more than two planets followed by
a coplanar high-eccentricity migration \citep[CHE,][]{Petrovich2015ApJ...805...75P}.
Detecting a TTV signal induced by a lighter planet on an outer coplanar orbit in (or close to) MMR
with planet b would imply a disk-driven migration
instead of P-P scattering, which would result in a perturber on a mutually inclined orbit
outside an MMR.
\par
We collected only one CHEOPS visit of KELT-6 b on May 6, 2020,
with a \ensuremath{\mathrm{G}_{\mathrm{EFF}}}{} of about $69\%$.
The visit duration was too short to sample correctly the post-egress part,
and the egress phase was completely missed (Fig.~\ref{fig:visits_kelt6}).
In this case we ran only the single-visit analysis,
and the BIC analysis favoured the model
fitting for the transit shape and
detrending for the first three harmonics of the roll angle
(as $\mathrm{d} f/\mathrm{d} \cos (n \times \phi)$, $\mathrm{d} f/\mathrm{d} \sin(n \times \phi)$ with
$n = 1, 2, 3$ that identifies the harmonic)
and with the GP.
We obtained an error on the $T_0$ of about 114~s,
dominated by the lack of points in the egress.
However, with only one CHEOPS visit
we were able to improve the parameters of KELT-6 b (see Table~\ref{tab:kelt6}).
More transits covering all the phases of the transit are needed to run a combined analysis
and improve the precision on the transit time.
\par
\begin{figure}
\centering
\includegraphics[width=0.99\columnwidth]{KELT-6/KELT-6_id05c_v1_lc_emcee_mle_gp.png}
\caption{KELT-6 b single visit analysis (see Fig.~\ref{fig:visits_hatp17} for description).
The model, with the lowest BIC,
contains the fitted transit shape and detrending against the first three harmonics of the satellite roll angle.
}
\label{fig:visits_kelt6}
\end{figure}
\begin{table}
\centering
\caption{KELT-6 summary table of stellar and planetary (planet b) parameters.
Input and priors planetary parameters from \citet{Collins2014AJ....147...39C}
and \citet{Damasso2015AA...581L...6D}.
Best-fit solution (MLE and semi-interval HDI at $68.27\%$) from the single-visit analysis.
}
\label{tab:kelt6}
\resizebox{0.9\columnwidth}{!}{%
\begin{tabular}{lcc}
\hline
Parameters & Input/priors & Source\\
\hline
KELT-6 & \multicolumn{2}{c}{Gaia DR2 1464700950221781504} \\
RA (J2000) & 13:03:55.65 & Simbad \\
DEC (J2000) & +30:38:24.28 & Simbad \\
$\mu_\mathrm{\alpha}$~(mas/yr) & $-5.11 \pm 0.05$ & \gaia{} DR2\\
$\mu_\mathrm{\delta}$~(mas/yr) & $15.64 \pm 0.05$ & \gaia{} DR2\\
age (Gyr) & $5 \pm 1$ & This work\\
parallax (mas) & $4.13 \pm 0.03$ & \gaia{} DR2\\
$V$~(mag) & 10.3 & Simbad \\
$G$~(mag) & 10.2 & \gaia{} DR2\\
$M_{\star}\, (M_{\sun})$ & $1.11 \pm 0.06$ & This work\\
$R_{\star}\, (R_{\sun})$ & $1.34 \pm 0.06$ & This work\\
$\rho_{\star}\, (\rho_{\sun})$ & $0.4 \pm 0.2$ & This work\\
$T_\mathrm{eff}$~(K) & $6246 \pm 88$ & SWEET-Cat\\
$\log g$ & $4.22 \pm 0.09$ & SWEET-Cat\\
\feh~(dex) & $-0.22 \pm 0.06$ & SWEET-Cat\\
\hline
KELT-6 b & & \\
Model & Input/priors & Single-visit (MLE \& HDI) \\
$P$~(days) & $7.845582 \pm 0.000007$ & fixed \\
$D=k^2$ & $ 0.0060 \pm 0.0002$ & $0.0058 \pm 0.0001$\\
$W$~(unit of $P$) & $0.0311 \pm 0.003$ & $0.0310 \pm 0.0004$ \\
$b$ & $0.22 \pm 0.17$ & $0.43 \pm 0.07$ \\
$h_1$ & $0.76 \pm 0.01$ & $0.76 \pm 0.01$\\
$h_2$ & $0.46 \pm 0.05$ & $0.46 \pm 0.05$\\
$T_{0,1}^{(a)}$~(days) & - & $1976.0773 \pm 0.0013$\\
$\log \sigma_j$ & - & $-7.78 \pm 0.07$\\
Derived/physical & & \\
$k=R_\mathrm{b}/R_{\star}$ & $0.077 \pm 0.001$ & $0.0764 \pm 0.0008$\\
$R_\mathrm{b}\, (R_\mathrm{Jup})$ & - & $1.02 \pm 0.05$\\
$a/R_{\star}$ & $10.8 \pm 0.9$ & $10.1 \pm 0.4$\\
$i\, (\degr)$ & $88.8 \pm 0.9$ & $87.6 \pm 0.5$\\
$T_{14}^{(b)}$~(days) & - & $0.243 \pm 0.003$\\
$e$ & $0.029 \pm 0.016$ & fixed \\
$\omega\, (\degr)$ & $308 \pm 272$ & fixed \\
$K_\mathrm{RV}$~(m\,s$^{-1}$) & $41.8 \pm 1.1$ & - \\
$M_\mathrm{b}\, (M_\mathrm{Jup})$ & - & $0.44 \pm 0.02$\\
$\rho_\mathrm{b}$~(g\,cm$^{-3}$) & - & $0.27 \pm 0.04$\\
$\lambda^{(c)}\, (\degr)$ & $-36 \pm 11$ & \\
GP hyperparameters & & \\
$\log S_0$ & - & $-23 \pm 2$\\
$\log \omega_0$ & - & $8 \pm 2$\\
\hline
\end{tabular}
}
\\
\textbf{Notes:}
$^{(a)}$: Transit times in BJD$_\mathrm{TDB}-2457000$.
$T_{0,n}$ single visit output.
$^{(b)}$: Total duration equal to $T_{14} = W \times P$.
$^{(c)}$: spin-orbit angle measured from the Rossiter-McLaughlin effect.
\end{table}
\subsection{WASP-8 b}
\label{sec:lc_wasp8}
WASP-8 b is an exoplanet with a radius similar to Jupiter and a mass of about 2~$M_\mathrm{Jup}$.
It has an eccentric, retrograde orbit ($\lambda = - 143\degr$),
with a period of about 8.16~d
\citep{Queloz2010AA...517L...1Q,Knutson2014ApJ...785..126K,Bourrier2017AA...599A..33B}.
\par
The host star, WASP-8 A, has a physical stellar companion, B,
at about $4\farcs5$ \citep{GaiaDR2_2018yCat.1345....0G}.
WASP-8 B lies within the CHEOPS point spread function of WASP-8 A,
but is four magnitudes fainter than A (in $G$-band),
and its contribution to the flux (less than $2\%$) in the aperture is almost negligible.
The presence of the stellar companion affects the transit depth,
but without changing the symmetry with respect to the $T_0$, and hence does not bias its measurement.
For these reasons, we did not take a dilution factor into account in the transit analysis;
this will be done in future work.
\par
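As a rough illustration of this dilution effect (the $4$~mag contrast is from the text; the depth value is taken from Table~\ref{tab:wasp8} purely as an example):

```python
# Third-light dilution: an unresolved companion adds constant flux,
# so the measured depth is D_obs = D_true / (1 + F_B/F_A).
delta_mag = 4.0                               # WASP-8 B vs A in the G band
f_ratio = 10.0 ** (-0.4 * delta_mag)          # F_B/F_A, ~2.5%

true_depth = 0.0136                           # example: D = k^2 from the fit
observed_depth = true_depth / (1.0 + f_ratio)
bias = 1.0 - observed_depth / true_depth      # fractional underestimate of D
```

A $\sim2.5\%$ third-light contribution biases the measured depth low by roughly the same fraction, while leaving the transit symmetry, and hence the $T_0$, unchanged.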
\citet{Knutson2014ApJ...785..126K} found that WASP-8 B,
with a mass of about $0.5\ M_{\sun}$
and a sky-projected separation greater than 390~au,
cannot explain the observed RV trend and modulation.
The authors suggested that two massive planets on outer orbits are needed,
so this system cannot be the result of disk-driven migration.
Instead, a Kozai or a P-P scattering mechanism was invoked \citep{Knutson2014ApJ...785..126K},
making WASP-8 b a good candidate for our purpose.
\par
The transit of WASP-8 b was observed twice by CHEOPS,
with one visit in July and one in October 2020,
with a \ensuremath{\mathrm{G}_{\mathrm{EFF}}}{} of $57.8\%$
and $67.8\%$, respectively.
The first visit shows low (almost null) coverage of the ingress phase and a good egress,
while the second visit has a \ensuremath{\mathrm{cpr}_{\mathrm{EFF}}}~$> 50\%$ for both ingress and egress.
We ran the analysis and found that the BIC favoured models
fitting the shape of both transits,
with detrending against the background plus a GP for the first visit, and
against all the parameters (except glint) plus a GP for the second visit.
The single-visit analysis provides a $\sigma_{T_0}$ of 53~s and 31~s
for the first and second visit, respectively (Fig.~\ref{fig:visits_wasp8}).
From the multi-visit analysis we obtained an improvement of only about 4~s for both visits
(see Table~\ref{tab:wasp8} for a summary of the results),
which can be understood because the gaps of the two visits are in phase,
lowering the effective sampling of the transit,
i.e., the start of the ingress phase is missing.
With further visits with different \ensuremath{\mathrm{G}_{\mathrm{EFF}}}, \ensuremath{\mathrm{cpr}_{\mathrm{EFF}}}, and gap phases,
we will be able to reach a higher precision on the $T_0$,
improving also the preliminary result of the $O-C$ (see Fig.~\ref{fig:mv_wasp8}).
\par
\begin{figure*}
\centering
\includegraphics[width=0.99\columnwidth]{WASP-8/WASP-8_id02_v1_lc_emcee_mle_gp.png}
\includegraphics[width=0.99\columnwidth]{WASP-8/WASP-8_id07a_v2_lc_emcee_mle_gp.png}
\caption{WASP-8 b single visits analysis (see Fig.~\ref{fig:visits_hatp17} for description).
\textit{Left:} first visit, fitted transit shape and detrending against background and GP;
\textit{right:} second visit, fitted transit shape and detrending against all parameters (but glint effect) and GP.
}
\label{fig:visits_wasp8}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1.1\columnwidth]{WASP-8/MV_WASP-8.png}
\includegraphics[width=0.9\columnwidth]{WASP-8/OC_WASP-8.png}
\caption{As in Fig~\ref{fig:mv_hatp17}, but for WASP-8 b.
\textit{Left:} multi-visit phase plot of two CHEOPS visits;
\textit{right:} O-C diagram.
}
\label{fig:mv_wasp8}
\end{figure*}
\begin{table}
\centering
\caption{WASP-8 summary table of stellar and planetary (planet b) parameters.
Input and priors planetary parameters from \citet{Queloz2010AA...517L...1Q}, \citet{Knutson2014ApJ...785..126K} and \citet{Bourrier2017AA...599A..33B}.
Best-fit solution (MLE and semi-interval HDI at $68.27\%$) from the multi-visit analysis of the two visits.
}
\label{tab:wasp8}
\resizebox{0.9\columnwidth}{!}{%
\begin{tabular}{lcc}
\hline
Parameters & Input/priors & Source\\
\hline
WASP-8 & \multicolumn{2}{c}{Gaia DR2 2312679845530628096} \\
RA (J2000) & 23:59:36.07 & Simbad\\
DEC (J2000) & -35:01:52.92 & Simbad\\
$\mu_\mathrm{\alpha}$~(mas/yr) & $109.75 \pm 0.06$ & \gaia{} DR2\\
$\mu_\mathrm{\delta}$~(mas/yr) & $7.61 \pm 0.06$ & \gaia{} DR2\\
age (Gyr) & $3 \pm 1$ & This work\\
parallax (mas) & $11.09 \pm 0.05$ & \gaia{} DR2\\
$V$~(mag) & 9.9 & Simbad\\
$G$~(mag) & 9.6 & \gaia{} DR2\\
$M_{\star}\, (M_{\sun})$ & $1.07 \pm 0.04$ & This work\\
$R_{\star}\, (R_{\sun})$ & $0.96 \pm 0.03$ & This work\\
$\rho_{\star}\, (\rho_{\sun})$ & $0.9 \pm 0.6$ & This work\\
$T_\mathrm{eff}$~(K) & $5690 \pm 36$ & SWEET-Cat\\
$\log g$ & $4.42 \pm 0.15$ & SWEET-Cat\\
\feh~(dex) & $0.29 \pm 0.03$ & SWEET-Cat\\
\hline
WASP-8 b & & \\
Model & Input/priors & Multi-visit (MLE \& HDI) \\
$T_{0,\mathrm{ref}}^{(a)}$~(days) & $-2320.6661 \pm 0.0005$ & $2093.20574 \pm 0.00016$ \\
$P$~(days) & $8.15872 \pm 0.00002$ & $8.15872 \pm 0.00001$ \\
$D=k^2$ & $0.0127 \pm 0.0003$ & $0.0136 \pm 0.0002$\\
$W$~(unit of $P$) & $0.018 \pm 0.001$ & $0.0179 \pm 0.0003$\\
$b$ & $0.46 \pm 0.06$ & $0.47 \pm 0.02$\\
$h_1$ & $0.72 \pm 0.01$ & $0.72 \pm 0.01$\\
$h_2$ & $0.44 \pm 0.05$ & $0.48 \pm 0.05$\\
$T_{0,1}^{(a)}$~(days) & $2052.41241 \pm 0.00061$ & $2052.41218 \pm 0.00058$ \\
$T_{0,2}^{(a)}$~(days) & $2134.00000 \pm 0.00036$ & $2133.99944 \pm 0.00032$ \\
$\log \sigma_j$ & - & $-8.1 \pm 0.1$ \\
Derived/physical & & \\
$k=R_\mathrm{b}/R_{\star}$ & $0.1130 \pm 0.0015$ & $0.11685 \pm 0.0009$\\
$R_\mathrm{b}\, (R_\mathrm{Jup})$ & - & $1.12 \pm 0.04$\\
$a/R_{\star}$ & $18.2 \pm 0.8$ & $18.3 \pm 0.5$\\
$i\, (\degr)$ & $88.6 \pm 0.2$ & $88.5 \pm 0.1$\\
$T_{14}^{(b)}$~(days) & $0.144 \pm 0.008$ & $0.146 \pm 0.003$\\
$e$ & $0.304 \pm 0.004$ & fixed \\
$\omega\, (\degr)$ & $274.2 \pm 0.1$ & fixed \\
$K_\mathrm{RV}$~(m\,s$^{-1}$) & $221.1 \pm 1.2$ & - \\
$M_\mathrm{b}\, (M_\mathrm{Jup})$ & - & $2.19 \pm 0.06$\\
$\rho_\mathrm{b}$~(g\,cm$^{-3}$) & - & $2.1 \pm 0.2$\\
$\lambda^{(c)}\, (\degr)$ & $-143.0^{+1.6}_{-1.5}$ & \\
GP hyperparameters & & \\
$\log S_0$ & - & $-21.8 \pm 0.1$\\
$\log \omega_0$ & - & $6.7 \pm 0.1$\\
\hline
\end{tabular}
}
\\
\textbf{Notes:}
$^{(a)}$: Transit times in BJD$_\mathrm{TDB}-2457000$.
$T_{0,n}$ single visit output in the input/priors column,
while they are the linear ephemeris plus $\Delta T_{0,n}$ from multi-visit analysis.
$^{(b)}$: Total duration is equal to $T_{14} = W \times P$.
$^{(c)}$: spin-orbit angle measured from the Rossiter-McLaughlin effect.
\end{table}
\subsection{WASP-38 b}
\label{sec:lc_wasp38}
WASP-38 is the brightest star of our current sample, with $G=9.2$ and $V=9.4$.
It hosts a rather massive ($2.7\ M_\mathrm{Jup}$) warm Jupiter, WASP-38 b,
on a slightly eccentric orbit ($e = 0.028\pm0.003$) with a period of about 6.9 days
\citep{Barros2011AA...525A..54B, Simpson2011MNRAS.414.3023S, Brown2012ApJ...760..139B, Bonomo2017AA...602A.107B}.
The orbit of WASP-38 b is aligned (within $2\sigma$)
with the stellar spin \citep{Brown2012ApJ...760..139B},
even though it was expected to be misaligned
given its eccentricity and mass \citep{Simpson2011MNRAS.414.3023S}.
Table~\ref{tab:wasp38} summarises the parameters from literature
that we used in our analysis.
The lack of an RV trend due to an external massive planet or stellar companion would rule out
the Kozai and P-P scattering mechanisms in the formation process,
making this system the result of a disk-driven migration or
of a more complex scenario.
\par
We collected four visits with CHEOPS,
spanning an observing period of only two months from May to July 2020.
The first three visits have very high \ensuremath{\mathrm{G}_{\mathrm{EFF}}}{} ($>91\%$) and
high temporal sampling of both ingress and egress.
Only the egress of the first visit has a low coverage ($\sim 30\%$).
The fourth visit has a \ensuremath{\mathrm{G}_{\mathrm{EFF}}}{} of $62.2\%$,
but both ingress and egress were sampled with a high \ensuremath{\mathrm{cpr}_{\mathrm{EFF}}}.
The BIC favoured models fitting the shape of all four transits,
with the following detrending:
for the first visit, a linear trend in time, the $x$ and $y$ pixel offsets, the first harmonic of the roll angle, and a GP ($\sigma_{T_0} = 24$~s);
for the second visit, the background, contaminants, a quadratic term in time, second-order $x$ and $y$ pixel offsets, two harmonics of the roll angle, and a GP ($\sigma_{T_0} = 13$~s);
for the third visit, a linear trend and the $x$ and $y$ pixel offsets, without a GP ($\sigma_{T_0} = 16$~s);
and for the fourth visit, the background, contaminants, the $y$ pixel offset, the first harmonic of the roll angle, and a GP ($\sigma_{T_0} = 16$~s).
See Fig.~\ref{fig:visits_wasp38} for the single-visit plots and fits.
This analysis allowed us to determine the $T_0$ of the transits with the highest precision
of our current whole data sample.
From the multi-visit analysis
(see Fig.~\ref{fig:mv_wasp38} and Table~\ref{tab:wasp38} for the summary of the results),
we obtained $\sigma_{T_0} = 20$, $16$, $17$, and $17$~s for the four visits, respectively.
Only the first visit shows a slightly improved $\sigma_{T_0}$ ($\sim 17\%$),
thanks to the partial egress, whose phase is covered in the joint analysis,
while $\sigma_{T_0}$ worsened for the last three visits
(by $22\%$, $12\%$, and $2\%$, respectively).
The third visit was not detrended with the GP
(see the lower-left plot in Fig.~\ref{fig:visits_wasp38}),
so we suspect that
the common GP kernel in the multi-visit analysis could have introduced extra noise
through overfitting.
However, this aspect will be analysed in detail in a future work.
The second single-visit analysis used the GP,
but the signal appears (see the upper-right plot in Fig.~\ref{fig:visits_wasp38}) to
be a modulation rather than a short-timescale variation
(i.e., stellar granulation),
and, as for the third visit, the common GP kernel could have introduced some noise,
increasing the uncertainty in the transit-time determination.
\par
Unfortunately, the first three visits were scheduled consecutively,
reducing the time span available to identify a TTV signal.
The third visit shows a slight departure from the linear ephemeris
(see $O-C$ plot in Fig.~\ref{fig:mv_wasp38}),
but it is still within $2\sigma$.
We cannot draw any conclusion on the existence of a TTV signal
based on the current dataset,
and we need to extend the temporal baseline of the observations.
\par
\begin{figure*}
\centering
\includegraphics[width=0.99\columnwidth]{WASP-38/WASP-38_iddecorr_v1_lc_emcee_mle_gp.png}
\includegraphics[width=0.99\columnwidth]{WASP-38/WASP-38_iddecorr_v2_lc_emcee_mle_gp.png}\\
\includegraphics[width=0.99\columnwidth]{WASP-38/WASP-38_iddecorr_v3_lc_emcee_mle.png}
\includegraphics[width=0.99\columnwidth]{WASP-38/WASP-38_iddecorr_v4_lc_emcee_mle_gp.png}
\caption{WASP-38 b single visits analysis (see Fig.~\ref{fig:visits_hatp17} for description).
\textit{Upper-left:} first visit, fitted transit shape and detrending against
linear trend, $x$ and $y$ pixel offset, first harmonic of the roll angle, and GP;
\textit{upper-right:} second visit, fitted transit shape and detrending against
the background, contaminants, quadratic term, second order of $x$ and $y$ pixel offset,
two harmonics of the roll angle, and GP;
\textit{lower-left:} third visit, fitted transit shape and detrending against
the linear trend and $x$ and $y$ pixel offset without the GP;
\textit{lower-right:} fourth visit, fitted transit shape and detrending against
the background, contaminants, $y$ pixel offset, first harmonic of the roll angle, and GP.
}
\label{fig:visits_wasp38}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1.1\columnwidth]{WASP-38/MV_WASP-38.png}
\includegraphics[width=0.9\columnwidth]{WASP-38/OC_WASP-38.png}
\caption{As in Fig~\ref{fig:mv_hatp17}, but for WASP-38 b.
\textit{Left:} multi-visit phase plot of four CHEOPS visits;
\textit{right:} O-C diagram.
}
\label{fig:mv_wasp38}
\end{figure*}
\begin{table}
\centering
\caption{WASP-38 summary table of stellar and planetary (planet b) parameters.
Input and priors planetary parameters from \citet{Brown2012ApJ...760..139B}
and \citet{Bonomo2017AA...602A.107B}.
Best-fit solution (MLE and semi-interval HDI at $68.27\%$) from the multi-visit analysis of the four visits.
}
\label{tab:wasp38}
\resizebox{0.9\columnwidth}{!}{%
\begin{tabular}{lcc}
\hline
Parameters & Input/priors & Source\\
\hline
WASP-38 & \multicolumn{2}{c}{Gaia DR2 4453211899986180352} \\
RA (J2000) & 16:15:50.37 & Simbad\\
DEC (J2000) & +10:01:57.28 & Simbad\\
$\mu_\mathrm{\alpha}$~(mas/yr) & $-31.07 \pm 0.05$ & \gaia{} DR2\\
$\mu_\mathrm{\delta}$~(mas/yr) & $-39.17 \pm 0.04$ & \gaia{} DR2\\
age (Gyr) & $2.8 \pm 0.6$ & This work\\
parallax (mas) & $7.31 \pm 0.04$ & \gaia{} DR2\\
$V$~(mag) & 9.4 & Simbad\\
$G$~(mag) & 9.2 & \gaia{} DR2\\
$M_{\star}\, (M_{\sun})$ & $1.28 \pm 0.05$ & This work\\
$R_{\star}\, (R_{\sun})$ & $1.35 \pm 0.03$ & This work\\
$\rho_{\star}\, (\rho_{\sun})$ & $0.52 \pm 0.04$ & This work\\
$T_\mathrm{eff}$~(K) & $6436 \pm 60$ & SWEET-Cat\\
$\log g$ & $ 4.8 \pm 0.07$ & SWEET-Cat\\
\feh~(dex) & $0.06 \pm 0.04$ & SWEET-Cat\\
\hline
WASP-38 b & & \\
Model & Input/priors & Multi-visit (MLE \& HDI) \\
$T_{0,\mathrm{ref}}^{(a)}$~(days) & $-1664.0795 \pm 0.0007$ & $2005.51241 \pm 0.00008$\\
$P$~(days) & $6.87182 \pm 0.00005$ & $6.87187 \pm 0.00003$\\
$D=k^2$ & $0.0069 \pm 0.0001$ & $0.00633 \pm 0.00003$\\
$W$~(unit of $P$) & $0.02865 \pm 0.00015$ & $0.02915 \pm 0.00004$ \\
$b$ & $0.12 \pm 0.08$ & $0.35 \pm 0.02$\\
$h_1$ & $0.8 \pm 0.1$ & $0.78 \pm 0.01$\\
$h_2$ & $0.5 \pm 0.1$ & $0.29 \pm 0.05$\\
$T_{0,1}^{(a)}$~(days) & $1991.76886 \pm 0.00028$ & $1991.76881 \pm 0.00023$ \\
$T_{0,2}^{(a)}$~(days) & $1998.64072 \pm 0.00015$ & $1998.64066 \pm 0.00018$ \\
$T_{0,3}^{(a)}$~(days) & $2005.51210 \pm 0.00018$ & $2005.51216 \pm 0.00020$ \\
$T_{0,4}^{(a)}$~(days) & $2033.00014 \pm 0.00019$ & $2033.00013 \pm 0.00019$ \\
$\log \sigma_j$ & - & $-8.47 \pm 0.03$ \\
Derived/physical & & \\
$k=R_\mathrm{b}/R_{\star}$ & $0.0831 \pm 0.0006$ & $0.0796 \pm 0.0002$ \\
$R_\mathrm{b}\, (R_\mathrm{Jup})$ & - & $1.07 \pm 0.02$ \\
$a/R_{\star}$ & $12.1 \pm 0.1$ & $11.17 \pm 0.07$ \\
$i\, (\degr)$ & $89.4 \pm 0.4$ & $88.2 \pm 0.1$ \\
$T_{14}^{(b)}$~(days) & $0.197 \pm 0.001$ & $0.2003 \pm 0.0003$ \\
$e$ & $0.028 \pm 0.003$ & fixed \\
$\omega\, (\degr)$ & $338 \pm 9$ & fixed \\
$K_\mathrm{RV}$~(m\,s$^{-1}$) & $246.6 \pm 1.2$ & - \\
$M_\mathrm{b}\, (M_\mathrm{Jup})$ & - & $2.7 \pm 0.1$ \\
$\rho_\mathrm{b}$~(gcm$^{-3}$) & - & $2.3 \pm 0.1$ \\
$\lambda^{(c)}\, (\degr)$ & $7.5^{+4.7}_{-6.1}$ & \\
GP hyperparameters & & \\
$\log S_0$ & - & $-24.0 \pm 0.3$ \\
$\log \omega_0$ & - & $5.0 \pm 0.2$ \\
\hline
\end{tabular}
}
\\
\textbf{Notes:}
$^{(a)}$: Transit times in BJD$_\mathrm{TDB}-2457000$.
$T_{0,n}$ are the single-visit outputs in the input/priors column,
while in the multi-visit column they are the linear ephemeris plus $\Delta T_{0,n}$ from the multi-visit analysis.
$^{(b)}$: Total duration. The equation used for the input value depends on the literature source.
The multi-visit duration is equal to $T_{14} = W \times P$.
$^{(c)}$: spin-orbit angle measured from the Rossiter-McLaughlin effect.
\end{table}
\subsection{WASP-106 b}
\label{sec:lc_wasp106}
WASP-106 is the faintest target of our sample in the $G$ band ($G = 11.4$, $V=11.2$),
and it hosts a warm-Jupiter planet (b) with a mass about twice that of Jupiter
and a radius slightly larger than Jupiter's.
WASP-106 b was discovered by \citet{Smith2014AA...570A..64S}
and has a circular orbit with a period of about 9.3 days.
The same authors found that tidal forces cannot have circularised the planetary orbit
within the system lifetime, so the orbit must have remained almost circular since formation.
This could hint at disk-driven migration as the main process in
the evolution of the system \citep{Smith2014AA...570A..64S}.
\par
We observed the transit of WASP-106 b only once with CHEOPS in April 2020.
We obtained a light curve with a \ensuremath{\mathrm{G}_{\mathrm{EFF}}}{} of about $66.3\%$ and
with ingress and egress sampled with an efficiency of about $56\%$ and $60\%$, respectively.
We modelled the light curve, based on BIC statistics,
fitting the shape of the transit and
detrending for the $x$ and $y$ pixel offset, without GP
(see Fig.~\ref{fig:visits_wasp106} and Table~\ref{tab:wasp106}).
We obtained $\sigma_{T_0} = 60$~s; this relatively low precision is probably due to the noisy data,
the short visit, and the poor sampling of the pre-ingress phase,
which made it difficult to properly constrain the detrending parameters
during the model fit.
More visits are needed to assess the prospects of detecting a TTV for this target.
\par
\begin{figure}
\centering
\includegraphics[width=0.99\columnwidth]{WASP-106/WASP-106_id04a_v1_lc_emcee_mle.png}
\caption{WASP-106 b single visit analysis (see Fig.~\ref{fig:visits_hatp17} for description).
The model contains the fitted transit shape and detrending against $x$ and $y$ pixel offset.
}
\label{fig:visits_wasp106}
\end{figure}
\begin{table}
\centering
\caption{WASP-106 summary table of stellar and planetary (planet b) parameters.
Input and priors planetary parameters from \citet{Smith2014AA...570A..64S}.
Best-fit solution (MLE and semi-interval HDI at $68.27\%$) from the single-visit analysis.
}
\label{tab:wasp106}
\resizebox{0.9\columnwidth}{!}{%
\begin{tabular}{lcc}
\hline
Parameters & Input/priors & Source\\
\hline
WASP-106 & \multicolumn{2}{c}{Gaia DR2 3788394461991295488} \\
RA (J2000) & 11:05:43.14 & Simbad\\
DEC (J2000) & -05:04:45.94 & Simbad\\
$\mu_\mathrm{\alpha}$~(mas/yr) & $-24.818 \pm 0.077$ & \gaia{} DR2\\
$\mu_\mathrm{\delta}$~(mas/yr) & $-13.294 \pm 0.060$ & \gaia{} DR2\\
age (Gyr) & $2.5 \pm 0.6$ & This work\\
parallax (mas) & $2.81 \pm 0.05$ & \gaia{} DR2\\
$V$~(mag) & 11.2 & Simbad\\
$G$~(mag) & 11.4 & \gaia{} DR2\\
$M_{\star}\, (M_{\sun})$ & $1.26 \pm 0.05$ & This work\\
$R_{\star}\, (R_{\sun})$ & $1.42 \pm 0.02$ & This work\\
$\rho_{\star}\, (\rho_{\sun})$ & $0.81 \pm 0.15$ & This work\\
$T_\mathrm{eff}$~(K) & $6265 \pm 36$ & SWEET-Cat\\
$\log g$ & $4.38 \pm 0.04$ & SWEET-Cat\\
\feh~(dex) & $+0.15 \pm 0.03$ & SWEET-Cat\\
\hline
WASP-106 b & & \\
Model & Input/priors & Single-visit (MLE \& HDI) \\
$P$~(days) & $9.28972 \pm 0.00001$ & fixed\\
$D=k^2$ & $0.00642 \pm 0.00018$ & $0.00607 \pm 0.00016$\\
$W$~(unit of $P$) & $0.0240 \pm 0.0008$ & $0.0247 \pm 0.0003$\\
$b$ & $0.13 \pm 0.16$ & $0.57 \pm 0.07$\\
$h_1$ & $0.75 \pm 0.01$ & $0.75 \pm 0.01$\\
$h_2$ & $0.46 \pm 0.05$ & $0.46 \pm 0.05$\\
$T_{0,1}^{(a)}$~(days) & - & $1962.68825 \pm 0.00069$ \\
$\log \sigma_j$ & - & $-7.27 \pm 0.06$\\
Derived/physical & & \\
$k=R_\mathrm{b}/R_{\star}$ & $0.080 \pm 0.001$ & $0.078 \pm 0.001$\\
$R_\mathrm{b}\, (R_\mathrm{Jup})$ & - & $1.10 \pm 0.02$\\
$a/R_{\star}$ & $14.2 \pm 0.4$ & $11.8 \pm 0.7$\\
$i\, (\degr)$ & $89.5 \pm 0.6$ & $87.2 \pm 0.5$\\
$T_{14}^{(b)}$~(days) & $0.223 \pm 0.008$ & $0.229 \pm 0.003$\\
$e$ & 0 & fixed \\
$\omega\, (\degr)$ & $90$ & fixed \\
$K_\mathrm{RV}$~(m\,s$^{-1}$) & $165.3 \pm 4.3$ & - \\
$M_\mathrm{b}\, (M_\mathrm{Jup})$ & - & $2.00 \pm 0.08$\\
$\rho_\mathrm{b}$~(gcm$^{-3}$) & - & $1.14 \pm 0.22$\\
$\lambda^{(c)}\, (\degr)$ & - & \\
\hline
\end{tabular}
}
\\
\textbf{Notes:}
$^{(a)}$: Transit times in BJD$_\mathrm{TDB}-2457000$.
$T_{0,n}$ single visit output.
$^{(b)}$: Total duration equal to $T_{14} = W \times P$.
$^{(c)}$: spin-orbit angle measured from the Rossiter-McLaughlin effect.
\end{table}
\subsection{WASP-130 b}
\label{sec:lc_wasp130}
WASP-130 was classified
as a metal-rich G6 star, with magnitude V=11.1,
by \citet{Hellier2017MNRAS.465.3693H}.
The same authors discovered WASP-130 b,
a warm-Jupiter with period of about 11.6~d and a circular orbit.
There is no evidence of an RV trend due to a planetary or stellar companion,
so this target, too, will be part of the sample
used to test the disk-driven migration process.
\par
We obtained three visits with CHEOPS in May and June 2020.
The first visit has a \ensuremath{\mathrm{G}_{\mathrm{EFF}}}{} of $61.8\%$
and good sampling of ingress and egress,
but it is too short and strongly affected by systematic effects.
The \ensuremath{\mathrm{G}_{\mathrm{EFF}}}{} of both the second and third visits is $54.3\%$.
Furthermore, the second visit has no coverage of the ingress and egress,
and the third visit covered only about $50\%$ of the ingress (see Fig.~\ref{fig:visits_wasp130}).
For these reasons, in the single-visit analysis of the first
and second visits
we obtained the best-fit transit model with fixed shape parameters.
For these two visits we used as detrending parameters the background with GP (first visit) and
the $x$ and $y$ pixel offset with GP (second visit).
We fitted the shape of the transit of the third visit,
detrending for the first harmonic of the roll angle with the GP.
See Fig.~\ref{fig:visits_wasp130} for the single-visit light curves with models.
From the single-visit analysis we obtained $\sigma_{T_0} = 82$, 251, and 45 seconds,
for the first, the second, and third visits, respectively.
In the multi-visit analysis we fit the shape of the transit
(as already mentioned in Sec.~\ref{sec:data_analysis}),
and used the detrending parameters and GP information from the single-visit analysis
(see the phase folded light curve of the multi-visit analysis in Fig.~\ref{fig:mv_wasp130} and
the summary of the results in Table~\ref{tab:wasp130}).
We obtained an improvement in the $\sigma_{T_0}$ of
the first ($\sigma_{T_0}=44$~s) and
second visits ($\sigma_{T_0}=198$~s),
and a worsening by 20~s for the third visit.
The latter occurs because the multi-visit detrending model
does not include the roll-angle harmonic of the third visit:
the multi-visit GP kernel should already incorporate this effect,
but it does not do so efficiently in this case.
A more careful and detailed analysis will be required.
The effect of the large $\sigma_{T_0}$ is clearly visible in
the $O-C$ diagram in Fig.~\ref{fig:mv_wasp130},
which shows no hint of a TTV with the current dataset.
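The $O-C$ residuals shown in these diagrams follow directly from the linear ephemeris. As a standalone illustration (not the fitting pipeline itself), the sketch below recomputes them from the WASP-130 b multi-visit values of Table~\ref{tab:wasp130}:

```python
# O-C residuals against the linear ephemeris T(n) = T_ref + n * P,
# using the WASP-130 b multi-visit values of Table 'tab:wasp130'.
# All times are in BJD_TDB - 2457000 (days).
T_ref, P = 2000.31939, 11.55098                 # reference epoch and period
observed_T0 = [1977.21727, 2000.31935, 2011.86962]

oc_seconds = []
for T0 in observed_T0:
    n = round((T0 - T_ref) / P)                 # nearest integer epoch number
    calculated = T_ref + n * P                  # predicted transit time
    oc_seconds.append((T0 - calculated) * 86400.0)
    print(f"epoch {n:+d}: O-C = {oc_seconds[-1]:+.1f} s")
```

The residuals stay within roughly the quoted $\sigma_{T_0}$ of each visit, consistent with a purely linear ephemeris.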
\par
\begin{figure*}
\centering
\includegraphics[width=0.99\columnwidth]{WASP-130/WASP-130_id02_v1_lc_emcee_mle_gp.png}
\includegraphics[width=0.99\columnwidth]{WASP-130/WASP-130_id04a_v2_lc_emcee_mle_gp.png}
\includegraphics[width=0.99\columnwidth]{WASP-130/WASP-130_id05a_v3_lc_emcee_mle_gp.png}
\caption{WASP-130 b single visits analysis (see Fig.~\ref{fig:visits_hatp17} for description).
\textit{Upper-left:} first visit, fixed transit shape and detrending against background and GP;
\textit{upper-right:} second visit, fixed transit shape and detrending against $x$ and $y$ pixel offset and GP;
\textit{lower:} third visit, fitted transit shape and detrending against the first harmonics of the satellite roll angle and GP.
}
\label{fig:visits_wasp130}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1.1\columnwidth]{WASP-130/MV_WASP-130.png}
\includegraphics[width=0.9\columnwidth]{WASP-130/OC_WASP-130.png}
\caption{As in Fig.~\ref{fig:mv_hatp17}, but for WASP-130 b.
\textit{Left:} multi-visit phase plot of three CHEOPS visits;
\textit{right:} O-C diagram.
}
\label{fig:mv_wasp130}
\end{figure*}
\begin{table}
\centering
\caption{WASP-130 summary table of stellar and planetary (planet b) parameters.
Input and priors planetary parameters from \citet{Hellier2017MNRAS.465.3693H}.
Best-fit solution (MLE and semi-interval HDI at $68.27\%$) from the multi-visit analysis of the three visits.
}
\label{tab:wasp130}
\resizebox{0.9\columnwidth}{!}{%
\begin{tabular}{lcc}
\hline
Parameters & Input/priors & Source\\
\hline
WASP-130 & \multicolumn{2}{c}{Gaia DR2 6112606840179716096} \\
RA (J2000) & 13:32:25.44 & Simbad\\
DEC (J2000) & -42:28:30.97 & Simbad\\
$\mu_\mathrm{\alpha}$~(mas/yr) & $6.11 \pm 0.08$ & \gaia{} DR2\\
$\mu_\mathrm{\delta}$~(mas/yr) & $-1.24 \pm 0.08$ & \gaia{} DR2\\
age (Gyr) & $3.2 \pm 0.7$ & This work\\
parallax (mas) & $5.78 \pm 0.05$ & \gaia{} DR2\\
$V$~(mag) & 11.1 & Simbad\\
$G$~(mag) & 11.0 & \gaia{} DR2\\
$M_{\star}\, (M_{\sun})$ & $1.06 \pm 0.04$ & This work\\
$R_{\star}\, (R_{\sun})$ & $1.02 \pm 0.01$ & This work\\
$\rho_{\star}\, (\rho_{\sun})$ & $1.0 \pm 0.2$ & This work\\
$T_\mathrm{eff}$~(K) & $5667 \pm 34$ & SWEET-Cat\\
$\log g$ & $4.43 \pm 0.05$ & SWEET-Cat\\
\feh~(dex) & $0.31 \pm 0.03$ & SWEET-Cat\\
\hline
WASP-130 b & & \\
Model & Input/priors & Multi-visit (MLE \& HDI) \\
$T_{0,\mathrm{ref}}^{(a)}$~(days) & $-78.85693 \pm 0.00025$ & $2000.31939 \pm 0.00023$ \\
$P$~(days) & $11.55098 \pm 0.00001$ & $11.55098 \pm 0.00001$\\
$D=k^2$ & $0.00916 \pm 0.00014$ & $0.0092 \pm 0.0001$\\
$W$~(unit of $P$) & $0.01342 \pm 0.00009$ & $0.01347 \pm 0.00007$\\
$b$ & $0.53 \pm 0.03$ & $0.49 \pm 0.02$\\
$h_1$ & $0.72 \pm 0.01$ & $0.72 \pm 0.01$\\
$h_2$ & $0.44 \pm 0.05$ & $0.46 \pm 0.05$\\
$T_{0,1}^{(a)}$~(days) & $1977.21671 \pm 0.00094$ & $1977.21727 \pm 0.00051$ \\
$T_{0,2}^{(a)}$~(days) & $2000.32184 \pm 0.00291$ & $2000.31935 \pm 0.00228$ \\
$T_{0,3}^{(a)}$~(days) & $2011.86986 \pm 0.00052$ & $2011.86962 \pm 0.00075$ \\
$\log \sigma_j$ & - & $-7.40 \pm 0.04$\\
Derived/physical & & \\
$k=R_\mathrm{b}/R_{\star}$ & $0.0957 \pm 0.0007$ & $0.0961 \pm 0.0005$\\
$R_\mathrm{b}\, (R_\mathrm{Jup})$ & - & $0.98 \pm 0.01$\\
$a/R_{\star}$ & $22.7 \pm 2.4$ & $23.2 \pm 0.3$\\
$i\, (\degr)$ & $88.66 \pm 0.12$ & $88.79 \pm 0.07$\\
$T_{14}^{(b)}$~(days) & $0.155 \pm 0.001$ & $0.1556 \pm 0.0008$ \\
$e$ & $0$ & fixed \\
$\omega\, (\degr)$ & $90$ & fixed \\
$K_\mathrm{RV}$~(m\,s$^{-1}$) & $108 \pm 2$ & - \\
$M_\mathrm{b}\, (M_\mathrm{Jup})$ & - & $1.25 \pm 0.04$\\
$\rho_\mathrm{b}$~(gcm$^{-3}$) & - & $2.2 \pm 0.1$\\
$\lambda^{(c)}\, (\degr)$ & - & \\
GP hyperparameters & & \\
$\log S_0$ & - & $-21.1 \pm 0.2$ \\
$\log \omega_0$ & - & $5.5 \pm 0.1$ \\
\hline
\end{tabular}
}
\\
\textbf{Notes:}
$^{(a)}$: Transit times in BJD$_\mathrm{TDB}-2457000$.
$T_{0,n}$ are the single-visit outputs in the input/priors column,
while in the multi-visit column they are the linear ephemeris plus $\Delta T_{0,n}$ from the multi-visit analysis.
$^{(b)}$: Total duration. The equation used for the input value depends on the literature source.
The multi-visit duration is equal to $T_{14} = W \times P$.
$^{(c)}$: spin-orbit angle measured from the Rossiter-McLaughlin effect.
\end{table}
\subsection{K2-287 b}
\label{sec:lc_k2-287}
K2-287 is a V=11.3 star (the faintest in the V band, $G=11.1$)
observed by \kepler/\textit{K2} \citep{Howell2014PASP..126..398H}
during campaign 15.
This star hosts K2-287 b,
a warm-Saturn ($M_\mathrm{b}=0.3\ M_\mathrm{Jup}$, $R_\mathrm{b}=0.8\ R_\mathrm{Jup}$)
recently discovered by \citet{Jordan2019AJ....157..100J}.
Although this planet is classified as a warm Saturn,
we included it in our sample because
it lies on an eccentric ($e=0.478$) orbit with a period of about 15 days.
The authors suggested that this planet needs more follow-up observations
to better understand the evolution process responsible for its orbital configuration.
In particular, they suggested long-term RV monitoring,
RM analysis, and search for TTV signal due to close companions that
migrated with K2-287 b.
The long period and transit duration of K2-287 b make it difficult
to schedule and observe with ground-based facilities.
\par
We obtained three visits spanning two months of CHEOPS observations,
with \ensuremath{\mathrm{G}_{\mathrm{EFF}}}{} of $88\%$, $71.9\%$, and $57.3\%$ for the first, second, and third visit, respectively.
We observed many strong dips in the first visit, with amplitudes greater than the transit depth,
and we found that they were caused by the background.
We removed these points with a $5\sigma$-clipping above the median of the background flux,
reducing the effective \ensuremath{\mathrm{G}_{\mathrm{EFF}}}{} by about $30\%$.
These dips also affected the \ensuremath{\mathrm{cpr}_{\mathrm{EFF}}}{} of the egress, lowering it to less than $30\%$.
Furthermore, the pre-ingress part is very short in the first visit.
We did not find these background features in the second and third visits.
The \ensuremath{\mathrm{cpr}_{\mathrm{EFF}}}{} of both the ingress and egress of the second visit is below $30\%$,
as is the \ensuremath{\mathrm{cpr}_{\mathrm{EFF}}}{} of the ingress of the third visit.
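The one-sided clipping applied to the first visit can be sketched as follows. The helper name and the MAD-based scale estimate are our assumptions, since the exact scale estimator used in the pipeline is not specified; a robust estimate is used here because strong outliers would inflate a plain standard deviation.

```python
import statistics

def clip_above_median(values, nsigma=5.0):
    """One-sided clipping: drop points more than nsigma above the median.
    A robust scale estimate (1.4826 * MAD) stands in for the standard
    deviation, which strong outliers would otherwise inflate; the exact
    scale estimator used in the actual pipeline is an assumption here."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    threshold = med + nsigma * 1.4826 * mad
    return [v for v in values if v <= threshold]

# Synthetic background series (arbitrary units) with two strong dips
# appearing as high background outliers.
background = [10.0, 10.2, 9.9, 10.1, 10.0, 55.0, 10.3, 9.8, 60.0, 10.1]
kept = clip_above_median(background)
print(f"kept {len(kept)} of {len(background)} background points")
```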
\par
In the best-fit model of the single-visit analysis
we fixed the transit shape for all three visits.
We used as detrending the background with GP in the first visit,
only the GP in the second visit,
and the first two harmonics of the roll angle with GP for the third visit.
See the best-fit modelling in Fig.~\ref{fig:visits_k2-287}.
We obtained precisions $\sigma_{T_0}=85$, 226, and 71~s for the first,
second, and third visits, respectively.
The lack of both ingress and egress coverage and the low \ensuremath{\mathrm{G}_{\mathrm{EFF}}}{} of the second visit
strongly degrade the determination of the transit time.
In the multi-visit analysis we fitted the transit shape and
the background of the first visit,
while the GP incorporates the roll-angle harmonics of the third visit
(see the best-fit model in Fig.~\ref{fig:mv_k2-287} and the final parameters in Table~\ref{tab:k2-287}).
We obtained $\sigma_{T_0} = 80$, $129$, and $103$~s,
with a slight improvement of about 5~s ($\sim6\%$) on the $\sigma_{T_0}$ of the first transit,
a huge improvement of 97~s ($\sim43\%$) for the second visit,
and a worsening by 32~s ($\sim46\%$) for the third transit.
As seen for WASP-130 b, the implementation of the GP kernel in the multi-visit analysis
cannot properly model the roll angle of the third visit,
reducing the precision on the $T_0$.
However, the $T_0$ values of the single-visit and of the multi-visit analysis
are all consistent within $1\sigma$, as shown in the $O-C$ plot in Fig.~\ref{fig:mv_k2-287}.
There is no evidence of a TTV, but the baseline is short
and two of the visits (second and third) are consecutive,
so it is still too early to draw any conclusion.
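The GP hyperparameters $\log S_0$ and $\log \omega_0$ quoted in the summary tables are characteristic of a stochastically driven, damped harmonic-oscillator kernel (the SHOTerm of celerite). Assuming that standard form with quality factor $Q = 1/\sqrt{2}$ (an assumption: the pipeline's exact kernel configuration is not stated here), its power spectral density can be evaluated directly:

```python
from math import sqrt, pi, exp

# PSD of a stochastically driven, damped harmonic oscillator (the SHOTerm
# kernel of celerite) with quality factor Q = 1/sqrt(2), where it reduces
# to S(w) = sqrt(2/pi) * S0 * w0**4 / (w**4 + w0**4).
# That this exact form matches the pipeline's kernel is an assumption.
def sho_psd(w, log_S0, log_w0):
    S0, w0 = exp(log_S0), exp(log_w0)
    return sqrt(2.0 / pi) * S0 * w0**4 / (w**4 + w0**4)

# Hyperparameters reported for K2-287 in Table 'tab:k2-287'.
log_S0, log_w0 = -21.5, 6.5
w0 = exp(log_w0)

# The PSD is flat well below w0 and falls as w**-4 above it.
ratio = sho_psd(0.01 * w0, log_S0, log_w0) / sho_psd(100.0 * w0, log_S0, log_w0)
print(f"S(0.01 w0) / S(100 w0) = {ratio:.3g}")
```

With this form, $\omega_0$ sets the frequency above which correlated noise is suppressed, which is why short-timescale systematics such as the roll-angle modulation may or may not be absorbed efficiently.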
\par
\begin{figure*}
\centering
\includegraphics[width=0.99\columnwidth]{K2-287/K2-287_id02_v1_lc_emcee_mle_gp.png}
\includegraphics[width=0.99\columnwidth]{K2-287/K2-287_id00_v2_lc_emcee_mle_gp.png}
\includegraphics[width=0.99\columnwidth]{K2-287/K2-287_id05b_v3_lc_emcee_mle_gp.png}
\caption{K2-287 b single visits analysis (see Fig.~\ref{fig:visits_hatp17} for description).
\textit{Upper-left:} first visit, fixed transit shape and detrending against background and GP;
\textit{upper-right:} second visit, fixed transit shape and detrending with only GP;
\textit{lower:} third visit, fixed transit shape and detrending against the first two harmonics of the satellite roll angle and GP.
}
\label{fig:visits_k2-287}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1.1\columnwidth]{K2-287/MV_K2-287.png}
\includegraphics[width=0.9\columnwidth]{K2-287/OC_K2-287.png}
\caption{As in Fig.~\ref{fig:mv_hatp17}, but for K2-287 b.
\textit{Left:} multi-visit phase plot of three CHEOPS visits;
\textit{right:} O-C diagram.
}
\label{fig:mv_k2-287}
\end{figure*}
\begin{table}
\centering
\caption{K2-287 summary table of stellar and planetary (planet b) parameters.
Input and priors planetary parameters from \citet{Jordan2019AJ....157..100J}.
Best-fit solution (MLE and semi-interval HDI at $68.27\%$) from the multi-visit analysis of the three visits.
}
\label{tab:k2-287}
\resizebox{0.9\columnwidth}{!}{%
\begin{tabular}{lcc}
\hline
Parameters & Input/priors & Source\\
\hline
K2-287 & \multicolumn{2}{c}{Gaia DR2 6239702034929248512} \\
RA (J2000) & 15:32:17.85 & Simbad\\
DEC (J2000) & -22:21:29.76 & Simbad\\
$\mu_\mathrm{\alpha}$~(mas/yr) & $-4.59 \pm 0.11$ & \gaia{} DR2\\
$\mu_\mathrm{\delta}$~(mas/yr) & $-17.90 \pm 0.07$ & \gaia{} DR2\\
age (Gyr) & $6.6 \pm 1.5$ & This work\\
parallax (mas) & $6.29 \pm 0.05$ & \gaia{} DR2\\
$V$~(mag) & 11.3 & Simbad\\
$G$~(mag) & 11.1 & \gaia{} DR2\\
$M_{\star}\, (M_{\sun})$ & $1.03 \pm 0.04$ & This work\\
$R_{\star}\, (R_{\sun})$ & $1.10 \pm 0.01$ & This work\\
$\rho_{\star}\, (\rho_{\sun})$ & $0.7 \pm 0.3$ & This work\\
$T_\mathrm{eff}$~(K) & $5625 \pm 64$ & SWEET-Cat\\
$\log g$ & $4.32 \pm 0.11$ & SWEET-Cat\\
\feh~(dex) & $+0.27 \pm 0.04$ & SWEET-Cat\\
\hline
K2-287 b & & \\
Model & Input/priors & Multi-visit (MLE \& HDI) \\
$T_{0,\mathrm{ref}}^{(a)}$~(days) & $1001.72138 \pm 0.00015$ & $1999.5651 \pm 0.0004$ \\
$P$~(days) & $14.893291 \pm 0.000025$ & $14.893289 \pm 0.000025$\\
$D=k^2$ & $0.00642 \pm 0.00016$ & $0.0064 \pm 0.0001$\\
$W$~(unit of $P$) & $0.0100 \pm 0.0006$ & $0.0098 \pm 0.0003$\\
$b$ & $0.78 \pm 0.04$ & $0.80 \pm 0.03$\\
$h_1$ & $0.72 \pm 0.01$ & $0.72 \pm 0.01$\\
$h_2$ & $0.45 \pm 0.05$ & $0.42 \pm 0.05$\\
$T_{0,1}^{(a)}$~(days) & $1969.77881 \pm 0.00098$ & $1969.77816 \pm 0.00093$ \\
$T_{0,2}^{(a)}$~(days) & $1999.56614 \pm 0.00262$ & $1999.56494 \pm 0.00149$ \\
$T_{0,3}^{(a)}$~(days) & $2014.45800 \pm 0.00082$ & $2014.45821 \pm 0.00119$ \\
$\log \sigma_j$ & - & $-7.31 \pm 0.04$\\
Derived/physical & & \\
$k=R_\mathrm{b}/R_{\star}$ & $0.08014 \pm 0.00098$ & $0.0799 \pm 0.0006$\\
$R_\mathrm{b}\, (R_\mathrm{Jup})$ & - & $0.88 \pm 0.01$\\
$a/R_{\star}$ & $23.87 \pm 0.31$ & $23.6 \pm 0.6$\\
$i\, (\degr)$ & $88.1 \pm 0.1$ & $88.1 \pm 0.1$\\
$T_{14}^{(b)}$~(days) & $0.15 \pm 0.01$ & $0.146 \pm 0.005$\\
$e$ & $0.478 \pm 0.026$ & fixed \\
$\omega\, (\degr)$ & $10.1 \pm 4.6$ & fixed \\
$K_\mathrm{RV}$~(m\,s$^{-1}$) & $28.8 \pm 2.3$ & - \\
$M_\mathrm{b}\, (M_\mathrm{Jup})$ & - & $0.31 \pm 0.03$\\
$\rho_\mathrm{b}$~(gcm$^{-3}$) & - & $0.63 \pm 0.07$\\
$\lambda^{(c)}\, (\degr)$ & - & \\
GP hyperparameters & & \\
$\log S_0$ & - & $-21.5 \pm 0.1$\\
$\log \omega_0$ & - & $6.5 \pm 0.1$\\
\hline
\end{tabular}
}
\\
\textbf{Notes:}
$^{(a)}$: Transit times in BJD$_\mathrm{TDB}-2457000$.
$T_{0,n}$ are the single-visit outputs in the input/priors column,
while in the multi-visit column they are the linear ephemeris plus $\Delta T_{0,n}$ from the multi-visit analysis.
$^{(b)}$: Total duration is equal to $T_{14} = W \times P$.
$^{(c)}$: spin-orbit angle measured from the Rossiter-McLaughlin effect.
\end{table}
\section{Results and discussion}
\label{sec:results}
From the current dataset of 17 transits of seven warm-Jupiters
we obtained a wide range of timing precision $\sigma_{T_0}$,
summarised in Table~\ref{tab:results}.
The best timing precision is about 13~s, for the brightest target, WASP-38,
and the worst is about 250~s (in the single-visit analysis),
for WASP-130.
Beyond the stellar brightness and the global efficiency,
another major contributor, or limiting factor,
to the precision of the transit time is the efficiency of the critical phase ranges (\ensuremath{\mathrm{cpr}_{\mathrm{EFF}}}),
that is the coverage of the transit ingress and egress.
In cases of sparse temporal sampling of the ingress/egress phases (\ensuremath{\mathrm{cpr}_{\mathrm{EFF}}}~$< 30\%-50\%$)
we improved the timing precision by combining multiple visits.
In the case of WASP-8 b, we had almost no improvement from the multi-visit analysis,
because the combined transits (see Fig.~\ref{fig:mv_wasp8}) did not fully cover
both ingress and egress phases.
\par
We have to take into account that
the higher the requested \ensuremath{\mathrm{cpr}_{\mathrm{EFF}}}{} in both ingress and egress,
the lower the probability of scheduling that particular visit with CHEOPS,
simply because fewer visits are available for scheduling
that actually satisfy the stringent constraints on the critical phase ranges.
Nonetheless, to ensure an appropriate time sampling of the TTV signal
we have to request visits
with high efficiencies in the critical phase ranges.
We compared the expected \ensuremath{\mathrm{G}_{\mathrm{EFF}}}{} and \ensuremath{\mathrm{cpr}_{\mathrm{EFF}}}{} from the FC
with the observed \ensuremath{\mathrm{G}_{\mathrm{EFF}}}{} and \ensuremath{\mathrm{cpr}_{\mathrm{EFF}}}{} in actual CHEOPS visits.
We remind the reader that the FC was meant as a statistically indicative tool
and not as a planning tool for the mission.
A few early visits were scheduled without checking the \ensuremath{\mathrm{cpr}_{\mathrm{EFF}}},
but we computed the critical phase ranges (cpr) for all the targets
with the parameters from the literature, propagating the errors
\footnote{We used \textit{Uncertainties:
a Python package for calculations with uncertainties},
Eric O. Lebigot,
\url{http://pythonhosted.org/uncertainties/}.}
and we ran the FC to obtain the expected \ensuremath{\mathrm{cpr}_{\mathrm{EFF}}}.
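For a linear ephemeris, the error propagation that the \textit{Uncertainties} package automates reduces to a simple quadrature sum. The sketch below applies it to the WASP-130 b multi-visit ephemeris of Table~\ref{tab:wasp130}; the 30-epoch extrapolation is purely illustrative, not a planned observation.

```python
from math import sqrt

# For a linear ephemeris T(n) = T_ref + n * P with independent errors on
# T_ref and P, first-order propagation gives
#   sigma_T(n) = sqrt(sigma_Tref**2 + n**2 * sigma_P**2),
# i.e. what the `uncertainties` package automates for this simple case.
def predicted_time(n, T_ref, sig_Tref, P, sig_P):
    return T_ref + n * P, sqrt(sig_Tref**2 + (n * sig_P)**2)

# WASP-130 b multi-visit ephemeris of Table 'tab:wasp130' (days).
T_ref, sig_Tref = 2000.31939, 0.00023
P, sig_P = 11.55098, 0.00001

# Uncertainty on a mid-transit time 30 epochs (~1 yr) later; the epoch
# number is illustrative, not a planned observation.
t, s = predicted_time(30, T_ref, sig_Tref, P, sig_P)
print(f"T(30) = {t:.5f} +/- {s * 86400:.0f} s")
```

The growth of the second term with epoch number $n$ is what makes refined ephemerides essential for placing the cpr correctly in future visits.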
We computed the observed \ensuremath{\mathrm{G}_{\mathrm{EFF}}}{} of a visit as the ratio of the number of data points,
that is the number of actual exposures,
to the maximum possible number of exposures given the visit duration.
We computed the observed \ensuremath{\mathrm{cpr}_{\mathrm{EFF}}}{} in the same way as the \ensuremath{\mathrm{G}_{\mathrm{EFF}}},
but taking into account only the length and the data within the phase ranges of the ingress and egress.
Then, we computed the maximum value of
the absolute difference
between the expected and the observed efficiency.
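These efficiency definitions reduce to a simple ratio of exposure counts. A minimal sketch with illustrative numbers (the helper name is ours, not part of the pipeline):

```python
# Observed efficiency of a visit: the number of actual exposures divided
# by the maximum number of exposures that fit in the visit duration.
# Restricting both counts to the ingress/egress phase ranges gives the
# observed cpr_EFF. Numbers below are illustrative, not from a real visit.
def observed_efficiency(n_exposures, duration_s, texp_s):
    max_exposures = duration_s // texp_s
    return n_exposures / max_exposures

g_eff = observed_efficiency(n_exposures=300,
                            duration_s=6 * 3600,    # a 6 h visit
                            texp_s=60)              # 60 s cadence
print(f"G_EFF = {g_eff:.1%}")
```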
For cpr timescales of the order of $\sim30$~min,
which is the typical duration of the ingress and/or egress
of the transit of a warm-Jupiter,
we found that the predicted \ensuremath{\mathrm{cpr}_{\mathrm{EFF}}}{}s agree with the observed ones
within $\sim10\%$.
The difference in the \ensuremath{\mathrm{G}_{\mathrm{EFF}}}{} is of the same order.
We expect that these differences will increase with time,
because the orbit file in the FC will not be updated.
\par
We found that for some targets
the cpr of our visits did not cover the observed ingress and egress phases.
After re-computing the cpr with the updated linear ephemeris and parameters from this work,
we found that all cpr exactly match the ingress and egress of all the visits.
The mismatch in the positions of the cpr does not seem
to depend on the difference between the FC's orbit and the actual orbit,
but rather on the accuracy and precision of the ephemeris and transit parameters,
which are fundamental to prepare CHEOPS observations.
\par
In the best cases, our timing precision would allow us
to detect the full expected range of TTV signals ($\geq 1$~min),
probing all the possible, and realistic, regions of the parameter space of a perturber.
Our worst cases, WASP-106 and K2-287,
have an average $\sigma_{{T_0},\mathrm{multi}}$ of less than 2~min with only three transits.
This would limit the range of the detectable TTV signals,
but the possible orbital configurations
of the system with a further planet (see Fig.~\ref{fig:grid_wasp106} and \ref{fig:grid_k2-287})
are so numerous and extended that the current study is still feasible.
We can affirm that, in general, CHEOPS will be able to detect TTV signals
with amplitudes below 1~min for targets brighter than $G = 11$--$12$,
provided that the multiple visits cover the ingress and egress phases with high efficiency.
\par
It is worth noting that one of the few hot-Jupiter hosts known to have planetary companions,
WASP-47 \citep{Becker2015ApJ...812L..18B}, also falls within this magnitude range and
is well observable by CHEOPS.
It is actually included in another GTO subprogram (Nascimbeni et al., in prep.).
By applying the same techniques described in this work, its 40-s TTV \citep{Becker2015ApJ...812L..18B}
is expected to be detectable.
\par
\begin{table}
\centering
\caption{Summary of the $\sigma_{T_0}$ in seconds of all targets and visits
(columns V1, V2, V3, and V4).
In case of multi-visit analysis:
$\sigma_{{T_0},\mathrm{multi}}\ (\sigma_{{T_0},\mathrm{single}})$;
if only single-visit analysis: $\sigma_{{T_0},\mathrm{single}}$.
}
\label{tab:results}
\begin{tabular}{lcccc}
\hline
& \multicolumn{4}{c}{$\sigma_{T_0}$~(seconds)}\\
target & V1 & V2 & V3 & V4 \\
\hline
HAT-P-17 b & $52\ (87)$ & $53\ (82)$ & $94\ (97)$ & \\
KELT-6 b & $114$ & & & \\
WASP-8 b & $50\ (53)$ & $28\ (31)$ & & \\
WASP-38 b & $20\ (24)$ & $16\ (13)$ & $17\ (16)$ & $17\ (16)$ \\
WASP-106 b & $60$ & & & \\
WASP-130 b & $44\ (81)$ & $197\ (251)$ & $65\ (45)$ & \\
K2-287 b & $80\ (85)$ & $128\ (226)$ & $103\ (71)$ & \\
\hline
\end{tabular}
\end{table}
\section{Conclusions}
\label{sec:conclusions}
The main purpose of this work was to demonstrate the CHEOPS capability to schedule multiple observations
and to obtain transit times with sufficient accuracy to allow the detection of TTV signals.
In this context, we presented one of the CHEOPS GTO programs,
aimed at the detection of possible TTV signals
with amplitudes of the order of a few minutes in warm-Jupiter exoplanets,
due to gravitational interaction with a planetary companion
on an outer orbit.
\par
We collected 17 light curves of transits of seven out of eight targets of our sample,
and presented the observing strategy and the data analysis.
We demonstrated the impact and the importance of a good sampling of the ingress and egress phases of a transit
on the precision of the transit time,
but also of the pre- and post-transit portions to properly detrend the light curve
for the systematic effects.
We showed the improvement in timing precision $\sigma_{T_0}$ obtained by combining the multiple visits of five targets:
HAT-P-17 b, WASP-8 b, WASP-38 b, WASP-130 b, and K2-287 b.
The precision $\sigma_{T_0}$ ranges from about ten seconds (i.e., WASP-38 b)
to a couple of minutes (i.e., WASP-130 b and K2-287 b)
for visits with high and low temporal sampling
of both ingress and egress phases,
respectively.
\par
These observations were very helpful for understanding how to properly prepare the next observations,
in particular how to set the visit duration and the required efficiency of each transit phase.
A simulation of the feasible visits with the updated linear ephemeris and
stellar and planetary parameters is mandatory to increase the efficiency
of the CHEOPS observations.
\par
With the current dataset,
we cannot draw any conclusions about the existence of a TTV signal
in our target sample due to the short temporal span of our observations,
but this was not the purpose of this work,
focused on the demonstration of the timing capabilities of the CHEOPS mission.
We aim to collect further visits for each target
to reach at least five visits covering about a year of CHEOPS mission,
with the goal of 15 visits in the total nominal mission duration of 3.5~yr.
For each target we will analyse CHEOPS data simultaneously
with literature photometric and spectroscopic data to detect a TTV signal
on a long temporal baseline.
This will help us to improve the planetary parameters and
to reduce the error on the ephemeris,
necessary to increase the efficiency of further follow-up
with current and future ground- and space-based
facilities,
e.g.,
HARPS \citep{Mayor2003Msngr.114...20M},
HARPS-N \citep{Cosentino2012SPIE.8446E..1VC},
ESPRESSO \citep{Pepe2010SPIE.7735E..0FP,Pepe2020arXiv201000316P},
the European Extremely Large Telescope (ELT),
JWST \citep{Gardner2006SSRv..123..485G},
and
ARIEL \citep{Pascale2018SPIE10698E..0HP,Pilbratt2019ESS.....450304P,Puig2018ExA....46..211P,Tinetti2018ExA....46..135T}.
\par
\section*{Acknowledgements}
CHEOPS is an ESA mission in partnership with Switzerland
with important contributions to the payload and the ground segment from
Austria, Belgium, France, Germany, Hungary, Italy, Portugal, Spain, Sweden, and the United Kingdom.
The CHEOPS Consortium would like to gratefully acknowledge the support received
by all the agencies, offices, universities, and industries involved.
Their flexibility and willingness to explore new approaches
were essential to the success of this mission.
KGI is the ESA CHEOPS Project Scientist and
is responsible for the ESA CHEOPS Guest Observers Programme.
She does not participate in, or contribute to,
the definition of the Guaranteed Time Programme of the CHEOPS mission
through which observations described in this paper have been taken,
nor to any aspect of target selection for the programme.
The Swiss participation to CHEOPS has been supported by
the Swiss Space Office (SSO) in the framework of the Prodex Programme and
the Activite Nationales Complementaires (ANC), the Universities of Bern and Geneva
as well as of the NCCR PlanetS and the Swiss National Science Foundation.
The early support for CHEOPS by Daniel Neuenschwander is gratefully acknowledged.
GPi, VN, GSs, IPa, LBo, GLa, and RRa acknowledge the funding support from Italian Space Agency (ASI)
regulated by ``Accordo ASI-INAF n. 2013-016-R.0 del 9 luglio 2013 e integrazione del 9 luglio 2015 CHEOPS Fasi A/B/C''.
GLa acknowledges support by CARIPARO Foundation, according to the agreement
CARIPARO-Universit{\`a} degli Studi di Padova (Pratica n. 2018/0098),
and scholarship support by the ``Soroptimist International d'Italia'' association (Cortina d'Ampezzo Club).
VVG is an FRS-FNRS Research Associate.
VVG, LD and MG thank the Belgian Federal Science Policy Office (BELSPO)
for the provision of financial support in the framework of the PRODEX Programme
of the European Space Agency (ESA) under contract number PEA 4000131343.
DG, MF, SC, XB, and JL acknowledge their roles as ESA-appointed CHEOPS science team members.
ZG was supported by the Hungarian NKFI grant No. K-119517 and the GINOP
grant No. 2.3.2-15-2016-00003 of the Hungarian National Research
Development and Innovation Office, by the City of Szombathely under
agreement No. 67.177-21/2016, and by the VEGA grant of the Slovak
Academy of Sciences No. 2/0031/18.
This work was supported by FCT - Funda\c{c}\~ao para a Ci\^encia e a Tecnologia
through national funds and by FEDER through COMPETE2020 -
Programa Operacional Competitividade e Internacionaliza\c{c}\~ao by these grants:
UID/FIS/04434/2019; UIDB/04434/2020; UIDP/04434/2020; PTDC/FIS-AST/32113/2017 \&
POCI-01-0145-FEDER-032113; PTDC/FIS-AST/28953/2017 \&
POCI-01-0145-FEDER-028953; PTDC/FIS-AST/28987/2017 \&
POCI-01-0145-FEDER-028987.
A.C.C. and T.G.W. acknowledge support from STFC consolidated grant number ST/M001296/1.
SH acknowledges CNES funding through the grant 837319.
O.D.S.D. is supported in the form of work contract (DL 57/2016/CP1364/CT0004)
funded by national funds through Funda\c{c}\~{a}o para a Ci\^{e}ncia e Tecnologia (FCT).
This project has received funding from the European Research Council (ERC)
under the European Union’s Horizon 2020
research and innovation programme (project {\sc Four Aces}; grant agreement No 724427).
\section*{Data Availability}
Data will be available at CDS.
Data type: default aperture data and best-fit model in an ASCII file.
\bibliographystyle{mnras}
\section{Introduction}
Hawkes processes \cite{hawkes1971} are self-exciting point processes
that have found applications in fields such
as neuroscience \cite{cardanobile},
genomics analysis \cite{reynaud-bouret},
as well as finance \cite{embrechts2}, or social media \cite{younglee}.
\medskip
The analysis of the statistical properties of Hawkes processes is made
difficult by their recursive nature, which complicates the computation
of moments.
In \cite{jovanovic}, a tree-based method for the computation of cumulants
has been introduced, with an explicit computation of third order cumulants.
Ordinary differential equation (ODE) methods have been applied in
\cite{dassios-zhao2} to the computation of
the moment and probability generating functions of
(generalized) Hawkes processes and their intensity, with the computation
of first and second moments in the stationary case, see also
\cite{errais}, and \cite{cui} and \cite{daw} for other ODE-based
approaches.
\medskip
In \cite{bacry}, stochastic calculus and martingale arguments have been
applied to the computation of first and second order moments,
however, those approaches seem difficult to generalize to higher-order
moments.
In \cite{vargas},
cumulant recursion formulas have been obtained
for general random variables
using martingale brackets.
Third-order cumulant expressions for Hawkes processes
have been used in \cite{achab} for the
analysis of order books in finance,
and in
\cite{ocker},
\cite{montangie}
for neuronal networks.
\medskip
In \cite{hawkescumulants},
the cumulants of Hawkes processes have been computed
using Bell polynomials, based on a recursive relation for
the Probability Generating Functional (PGFl) of self-exciting
point processes started from a single point.
This provides a closed-form alternative to the tree-based approach of \cite{jovanovic}.
\medskip
In this note we apply the algorithm of \cite{hawkescumulants}
to the recursive computation of joint moments of all orders
of Hawkes processes, and present the
corresponding codes written in Maple and Mathematica.
The algorithm uses sums over partitions and Bell polynomials to compute
joint cumulants in the case of an
exponential branching intensity on $[0,\infty )$.
\medskip
We proceed as follows.
After reviewing some combinatorial identities in Section~\ref{s2},
we will consider the computation of the
joint cumulants of self-exciting Hawkes Poisson cluster processes
in Section~\ref{s3}.
Explicit computations for the time-dependent
joint third and fourth cumulants
of Hawkes processes with exponential kernels are presented
in Section~\ref{s4},
and are confirmed by Monte Carlo estimates.
\section{Joint moments and cumulants}
\label{s2}
In this section we present background combinatorial results
that will be needed in the sequel.
Given the Moment Generating Function (MGF)
\aimention{Thiele, T.N.}
\begin{align}
\nonumber
&
M_X (t_1,\ldots , t_n) : =
\mathbb{E} \big[ \mathrm{e}^{t_1X_1+\cdots + t_n X_n} \big]
\\
\nonumber
& \qquad =
1 + \sum_{k_1,\ldots , k_n \geq 1}
\frac{t^{k_1}_1\cdots t^{k_n}_n}{k_1!\cdots k_n!}
\mathbb{E} [ X^{k_1}_1 \cdots X^{k_n}_n ],
\end{align}
of a random vector $X=(X_1,\ldots , X_n)$,
the joint {cumulants} of $(X_1,\ldots , X_n)$ of orders
$(l_1,\ldots , l_n)$ are the coefficients
$\kappa_{l_1,\ldots , l_n} (X)$ appearing in the log-MGF expansion
\begin{align}
\label{cgf}
&
\hskip-0.4cm
\log M_X (t_1,\ldots , t_n)
=
\log \big( \mathbb{E}\big[\mathrm{e}^{t_1X_1+\cdots + t_n X_n}\big] \big)
\\
\nonumber
& =
\sum_{l_1,\ldots , l_n\geq 1} \frac{t^{l_1}_1\cdots t^{l_n}_n}{l_1! \cdots l_n!}
\kappa_{l_1,\ldots , l_n} (X_1,\ldots , X_n),
\end{align}
for $(t_1,\ldots , t_n)$ in a neighborhood of zero in $\mathbb{R}^n$.
In the sequel we let
$$
\kappa (X_1,\ldots , X_n)
:= \kappa_{1,\ldots , 1} (X_1,\ldots , X_n),
\quad
n\geq 1,
$$
and
$$
\kappa^{(n)} (X)
:= \kappa_{1,\ldots , 1} (X,\ldots , X),
\quad n\geq 1.
$$
The joint moments of $(X_1,\ldots , X_n)$
are then given by the joint moment-cumulant relation
\begin{equation}
\nonumber
\mathbb{E} [ X_1 \cdots X_n ]
=
\sum_{l=1}^n
\sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\}}
\prod_{j=1}^l
\kappa^{(|\pi_j|)} \big( (X_i)_{i \in \pi_j} \big),
\end{equation}
where the sum runs over the partitions
$\pi_1,\ldots , \pi_l$ of the set $\{ 1 , \ldots , n \}$.
By the multivariate Fa\`a di Bruno formula, \eqref{cgf} can be inverted as
\begin{align*}
& \kappa (X_1,\ldots , X_n)
\\
& =
\sum_{l=1}^n
(l-1)!
(-1)^{l-1}
\hskip-0.1cm
\sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\}}
\prod_{j=1}^l
\mathbb{E} \Bigg[ \prod_{i\in \pi_j} X_i \Bigg].
\end{align*}
In the univariate case, the moments $\mathbb{E}[X^n]$ of a random variable $X$
are linked to its cumulants $\big(\kappa^{(n)}(X)\big)_{n\geq 1}$
through the relation
\begin{align}
\nonumber
\mathbb{E} [ X^n ]
& =
B_n \big( \kappa^{(1)}(X), \ldots , \kappa^{(n)} (X) \big)
\\
\nonumber
& =
\sum_{k=1}^n
B_{n,k} \big( \kappa^{(1)} (X), \ldots , \kappa^{(n-k+1)} (X) \big),
\end{align}
where
\begin{align}
\nonumber
&
\hskip-0.3cm
B_{n,k} ( a_1 , \ldots , a_{n-k+1} )
=
\frac{n!}{k!}
\sum_{l_1+\cdots + l_k=n \atop
l_1\geq 1,\ldots ,l_k \geq 1
}
\frac{a_{l_1}}{l_1!}
\cdots
\frac{a_{l_k}}{l_k!}
\\
\label{dfjkl0}
& =
\sum_{\pi_1 \cup \cdots \cup \pi_k = \{ 1, \ldots , n \}}
a_{|\pi_1|} \cdots a_{|\pi_k|},
\end{align}
$1\leq k \leq n$, is the partial Bell polynomial
of order $(n,k)$,
where
the first sum runs over the integer compositions
$(l_1,\ldots ,l_k)$ of $n$, and the sum in \eqref{dfjkl0} runs over the
partitions of $\{ 1, \ldots , n \}$ into $k$ blocks,
see e.g. Relation~(2.5) in \cite{elukacs},
and
\begin{align*}
B_n ( a_1 , \ldots , a_n )
& =
\sum_{k=1}^n
B_{n,k} ( a_1 , \ldots , a_{n-k+1} )
\end{align*}
is the complete Bell polynomial of degree $n \geq 1$.
We also have the inversion relation
\begin{align}
\nonumber
& \kappa^{(n)} (X)
\\
\nonumber
& = \sum_{k=0}^{n-1}
k! (-1)^k
B_{n,k+1} \big(
\mathbb{E} \big[ X \big] , \mathbb{E} \big[ X^2 \big] , \ldots , \mathbb{E} \big[ X^{n-k} \big]
\big)
\end{align}
$n \geq 1$,
see e.g. Theorem~1 of \cite{elukacs},
and also \cite{leonov},
Relations~(2.8)-(2.9) in \cite{mccullagh}, or
Corollary~5.1.6 in \cite{stanley}.
\medskip
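As a concrete check of this inversion relation, the following sketch (a hypothetical Python transcription using exact rational arithmetic, not part of the original codes) recovers cumulants from moments: taking $m_1=2$, $m_2=7$, $m_3=26$, $m_4=115$, which are the moments of $\mathcal{N}(2,3)$, the third and fourth cumulants must vanish.

```python
from fractions import Fraction
from math import comb, factorial

def bell_partial(n, k, a):
    # Partial Bell polynomial B_{n,k}(a[0], a[1], ...) via the standard
    # recurrence B_{n,k} = sum_i C(n-1, i-1) a_i B_{n-i, k-1}.
    if n == 0 and k == 0:
        return Fraction(1)
    if n == 0 or k == 0:
        return Fraction(0)
    return sum(comb(n - 1, i - 1) * a[i - 1] * bell_partial(n - i, k - 1, a)
               for i in range(1, n - k + 2))

def cumulant_from_moments(n, m):
    # kappa^{(n)} = sum_{k=0}^{n-1} k! (-1)^k B_{n,k+1}(m_1, ..., m_{n-k})
    return sum(factorial(k) * (-1) ** k * bell_partial(n, k + 1, m)
               for k in range(n))

moments = [Fraction(2), Fraction(7), Fraction(26), Fraction(115)]
```

Here the second cumulant evaluates to the variance $3$, while orders three and four give $0$, as expected for a Gaussian.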
As an example, we consider the recursive computation of Borel cumulants.
Let $(X_n)_{n\geq 0}$ be a branching process started at $X_0=1$
with Poisson distributed offspring count $N$ of parameter $\mu \in (0,1)$,
and let $X$ denote the total number of offspring generated by $(X_n)_{n\geq 0}$.
It is known,
see \cite{polya-szego}
and \S~3.2 of \cite{consul},
that $X$ has the {Borel distribution}
\index{Lagrangian distribution}
\index{distribution!Lagrangian}
\index{Borel distribution}
\index{distribution!Borel}
\aimention{Consul, P.C.}
\aimention{Famoye, F.}
\aimention{P{\'o}lya, G.}
\aimention{Szeg{\"o}, G.}
$$
\mathbb{P} ( X = n )
= \mathrm{e}^{-\mu n}\frac{(\mu n)^{n-1}}{n!},
\qquad n \geq 1.
$$
We have $\kappa^{(1)} (X)= 1/(1-\mu)$ and the induction relation
\begin{align}
\nonumber
& \kappa^{(n)}(X)
=
\frac{\mu}{1-\mu}
\big( B_n \big( \kappa^{(1)}(X), \ldots , \kappa^{(n)}(X) \big)
- \kappa^{(n)}(X) \big)
\\
\label{fjls}
& =
\frac{\mu}{1-\mu}
\sum_{k=2}^n
B_{n,k} \big( \kappa^{(1)}(X), \ldots , \kappa^{(n-k+1)}(X) \big)
,
\end{align}
$n\geq 2$, see \S~8.4.3 in \cite{consul}
and Proposition~2.1 in \cite{hawkescumulants}.
The recursion \eqref{fjls} is implemented in the following Maple
code.
\medskip
\begin{lstlisting}[language=Maple]
c := proc(n, mu) local tmp, k, z1; option remember; if n = 1 then return 1/(1 - mu); end if;
tmp := 0; z1 := []; for k from n by -1 to 2 do z1 := [op(z1), c(n - k + 1, mu)]; tmp := tmp + IncompleteBellB(n, k, op(z1)); end do;
return mu*tmp/(1 - mu); end proc;
m := proc(n, mu) local tmp, z, k; option remember; if n = 0 then return 1; end if;
tmp := 0; z := []; for k from n by -1 to 1 do z := [op(z), c(n - k + 1, mu)]; tmp := tmp + IncompleteBellB(n, k, op(z)); end do;
return tmp; end proc;
\end{lstlisting}
\vspace{-0.6cm}
\noindent
In particular, the command ${\rm c}(2,\mu)$ in Maple yields
the second cumulant $\kappa^{(2)}(X) = \mu / (1-\mu)^3$,
and by the commands ${\rm c}(3,\mu)$ and ${\rm c}(4,\mu)$
we find
$\kappa^{(3)}(X)
=
\mu ( 1+2\mu)/(1-\mu)^5$,
and
$
\kappa^{(4)}(X)
= \mu ( 1 + 8\mu + 6 \mu^2 )/(1-\mu)^7$,
see also (8.85) page 159 of \cite{consul}.
Those results can be recovered from
the commands ${\rm c}[2,\mu]$,
${\rm c}[3,\mu]$, and ${\rm c}[4,\mu]$
using the following Mathematica code.
\medskip
\begin{lstlisting}[language=Mathematica]
c[n_, mu_] := c[n, mu] = (Module[{tmp, k, z1}, If[n == 1, Return[1/(1 - mu)]]; tmp = 0; z1 = {};
For[k = n, k >= 2, k--, z1 = Append[z1, Block[{i = n - k + 1}, c[i, mu]]]; tmp += BellY[n, k, z1]];
Simplify[mu*tmp/(1 - mu)]]);
m[n_, mu_] := (Module[{tmp, z, k}, tmp = 0; If[n == 0, Return[1]];
z = {}; For[k = n, k >= 1, k--, z = Append[z, Block[{i = n - k + 1}, c[i, mu]]]; tmp += BellY[n, k, z]]; Simplify[tmp]])
\end{lstlisting}
\vspace{-0.8cm}
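For readers without Maple or Mathematica, the recursion \eqref{fjls} can also be sketched in plain Python with exact rational arithmetic (a hypothetical transcription, not part of the original codes; {\tt bell\_partial} implements the standard recurrence for partial Bell polynomials).

```python
from fractions import Fraction
from math import comb

def bell_partial(n, k, a):
    # Partial Bell polynomial B_{n,k}(a[0], a[1], ...)
    if n == 0 and k == 0:
        return Fraction(1)
    if n == 0 or k == 0:
        return Fraction(0)
    return sum(comb(n - 1, i - 1) * a[i - 1] * bell_partial(n - i, k - 1, a)
               for i in range(1, n - k + 2))

def borel_cumulant(n, mu):
    # Recursion (fjls): kappa^{(n)} = mu/(1-mu) * sum_{k=2}^n B_{n,k}(kappa^{(1)}, ...)
    if n == 1:
        return 1 / (1 - mu)
    kappas = [borel_cumulant(j, mu) for j in range(1, n)]
    return mu / (1 - mu) * sum(bell_partial(n, k, kappas) for k in range(2, n + 1))
```

At $\mu = 1/3$ this reproduces $\kappa^{(2)} = \mu/(1-\mu)^3 = 9/8$ exactly, and similarly the third and fourth cumulants above.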
\section{Joint Hawkes cumulants}
\label{s3}
In the cluster process framework of \cite{hawkes},
we consider a real-valued self-exciting point process on $[0,\infty )$,
with Poisson
offspring intensity $\gamma (dx)$ and Poisson
immigrant intensity $\nu (dx)$ on $[0,\infty )$, built on the space
\begin{eqnarray*}
\lefteqn{
\Omega = \big\{
\xi = \{ x_i \}_{i\in I} \subset [0,\infty ) \ : \
}
\\
& &
\#( A \cap \xi ) < \infty
\mbox{ for all compact } A\subset [0,\infty )
\big\}
\end{eqnarray*}
of locally finite configurations on $[0,\infty )$, whose elements
$\xi \in \Omega$ are identified with the Radon point measures
$\displaystyle \xi (dz) = \sum_{x\in \xi} \epsilon_x (dz)$,
where $\epsilon_x$ denotes the Dirac measure at $x\in \mathbb{R}_+$.
In particular, any initial immigrant point $y \in \mathbb{R}_+$ branches into a Poisson
random sample denoted by
$\xi_\gamma (y + d z ) \displaystyle = \sum_{x\in \xi} \epsilon_{x+y} (d z )$
and centered at $y$, with intensity measure $\gamma (y + d z )$ on $[0,\infty )$.
Figure~\ref{fig00} presents a graph of the point measure $\xi (dz)$ followed by
the corresponding sample paths of the self-exciting counting process
$\displaystyle X_t (\xi ) := \xi ( [0,t]) = \sum_{x\in \xi} {\bf 1}_{[0,t]}(x)$
and its stochastic intensity $\lambda_t$, $t\in [0,10]$,
in the exponential kernel example
of the next section.
\medskip
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{figure1}
\caption{Sample paths of $X_t$ and of the intensity $\lambda_t$.}
\label{fig00}
\end{figure}
\vskip-0.2cm
\noindent
In the sequel, we assume that $\gamma ( [0,\infty ) ) < 1$ and
consider the integral operator $\Gamma$ defined as
$$
\Gamma f (z) = \int_0^\infty f(z+y ) \gamma (dy), \qquad z\in \mathbb{R}_+,
$$
and the inverse operator $(I_d-\Gamma)^{-1}$ given by
\begin{align*}
&
(I_d-\Gamma)^{-1} f(z )
= f(z)
\\
&
\quad + \sum_{m=1}^\infty \int_{\mathbb{R}_+^m} f(z+x_1+\cdots + x_m) \gamma ( dx_1)
\cdots \gamma ( dx_m),
\end{align*}
with
\begin{align*}
&
(I_d-\Gamma)^{-1} \Gamma f(z )
= (I_d-\Gamma)^{-1} f(z ) - f(z )
\\
&
\quad = \sum_{m=1}^\infty \int_{\mathbb{R}_+^m} f(z+x_1+\cdots + x_m) \gamma ( dx_1)
\cdots \gamma ( dx_m),
\end{align*}
$z\in \mathbb{R}_+$.
The first cumulant
$\kappa_z^{(1)}(f)$ of $\displaystyle \sum_{x\in \xi} f(x)$
given that $\xi$ is started
from a single point at $z\in \mathbb{R}_+$
is given by
$\kappa_z^{(1)}(f) = (I_d-\Gamma )^{-1} f(z)$.
The next proposition provides a way to compute the
higher order cumulants $\kappa_z^{(n)}(f)$ of $\displaystyle \sum_{x\in \xi} f(x)$
given that $\xi$ is started
from a single point at $z\in \mathbb{R}_+$
by an induction relation based on set partitions,
see Proposition~3.5 in \cite{hawkescumulants}.
\begin{prop}
\label{djklds}
For $n\geq 2$, the joint cumulants
$\kappa_z^{(n)}(f_1,\ldots , f_n)$
of $\displaystyle \sum_{x\in \xi} f_1(x), \ldots , \sum_{x\in \xi} f_n(x)$
given that $\xi$ is started
from a single point at $z\in \mathbb{R}_+$
are given by the induction relation
\begin{align}
\label{fjkl}
& \kappa_z^{(n)}(f_1,\ldots , f_n)
\\
\nonumber
&
=
\sum_{k=2}^n
\sum_{\pi_1\cup \cdots \cup \pi_k = \{1,\ldots , n\}}
(I_d-\Gamma )^{-1} \Gamma \prod_{j=1}^k \kappa_z^{(|\pi_j|)} ((f_i)_{i\in \pi_j})
,
\end{align}
$n \geq 2$, where the above sum is over set partitions
$\pi_1\cup \cdots \cup \pi_k=\{1,\ldots , n\}$, $k=2,\ldots , n$,
and $|\pi_i|$ denotes the cardinality of the set $\pi_i \subset \{1,\ldots , n\}$.
\end{prop}
\noindent
The joint cumulants
$\kappa^{(n)} (f_1,\ldots , f_n)$
of $\displaystyle \sum_{x\in \xi} f_1(x), \ldots , \sum_{x\in \xi} f_n(x)$
can be obtained as a consequence
of Proposition~\ref{djklds}, by the combinatorial summation
\begin{align}
\label{al}
&
\kappa^{(n)} (f_1,\ldots , f_n)
\\
\nonumber
& =
\sum_{k=1}^n
\sum_{\pi_1\cup \cdots \cup \pi_k = \{1,\ldots , n\}}
\int_0^\infty \prod_{j=1}^k
\kappa_z^{(|\pi_j|)}((f_i)_{i\in \pi_j})
\nu ( dz ),
\end{align}
see Corollary~3.4 and Proposition~3.5 in \cite{hawkescumulants}.
Joint moments can then be recovered by the joint moment-cumulant relation
\begin{align}
\label{mc}
& \hskip-0.3cm
\mathbb{E} \Bigg[
\sum_{x\in \xi} f_1(x)
\cdots
\sum_{x\in \xi} f_n(x)
\Bigg]
\\
\nonumber
& =
\sum_{l=1}^n
\sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\}}
\prod_{j=1}^l
\kappa^{(|\pi_j|)} ((f_i)_{i\in \pi_j}),
\end{align}
which can be inverted as
\begin{align*}
& \kappa^{(n)} (f_1,\ldots , f_n )
\\
& \hskip-0.05cm
=
\sum_{l=1}^n
(l-1)!
(-1)^{l-1}
\hskip-0.5cm
\sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\}}
\prod_{j=1}^l
\mathbb{E} \Bigg[
\prod_{i\in \pi_j} \sum_{x\in \xi} f_i(x)
\Bigg].
\end{align*}
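All of the partition sums above require enumerating the set partitions of $\{1,\ldots,n\}$; the appendix codes do this with Maple's {\tt Iterator:-SetPartitions} and Mathematica's {\tt SetPartitions}. A minimal stand-alone generator (a hypothetical Python sketch) is the following.

```python
def set_partitions(elements):
    # Yield every partition of `elements` into non-empty blocks.
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partition in set_partitions(rest):
        # insert `first` into each existing block ...
        for i in range(len(partition)):
            yield partition[:i] + [[first] + partition[i]] + partition[i + 1:]
        # ... or open a new singleton block
        yield [[first]] + partition
```

The number of partitions produced is the Bell number, i.e.\ $1, 2, 5, 15, 52$ for $n = 1,\ldots,5$, which is also the number of summands in \eqref{mc} before any terms are merged.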
\section{Joint Hawkes moments with exponential kernel}
\label{s4}
\noindent
In this section we consider the exponential kernel
$\gamma (dx) = a {\bf 1}_{[0,\infty )} (x) \mathrm{e}^{-bx} dx$, $0< a < b$,
and constant Poisson intensity $\nu (dz) = \nu dz$, $\nu >0$. In this case,
$$
\displaystyle X_t (\xi ) := \xi ( [0,t]) = \sum_{x\in \xi} {\bf 1}_{[0,t]}(x),
\quad t\in \mathbb{R}_+,
$$
defines the self-exciting Hawkes process with stochastic intensity
$$
\lambda_t := \nu + a \int_0^t \mathrm{e}^{-b(t-s)} dX_s, \qquad t\in \mathbb{R}_+.
$$
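The Monte Carlo estimates used in this section can be reproduced with a short cluster-based simulation (a hypothetical Python sketch, not the implementation behind the figures): immigrants arrive as a Poisson process of rate $\nu$ on $[0,T]$, and every point independently generates a ${\rm Poisson}(a/b)$ number of children at i.i.d.\ ${\rm Exp}(b)$ delays.

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's product-of-uniforms Poisson sampler
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def hawkes_count(nu, a, b, T, rng):
    # Number of points of the Hawkes cluster process in [0, T]
    stack = [rng.uniform(0.0, T) for _ in range(poisson_sample(nu * T, rng))]
    count = 0
    while stack:
        s = stack.pop()
        count += 1
        for _ in range(poisson_sample(a / b, rng)):
            child = s + rng.expovariate(b)
            if child <= T:  # children born after T do not contribute to X_T
                stack.append(child)
    return count

rng = random.Random(12345)
nu, a, b, T = 1.0, 0.5, 1.0, 2.0
runs = 20000
mc_mean = sum(hawkes_count(nu, a, b, T, rng) for _ in range(runs)) / runs
closed = (b**2 * T + a * (math.exp((a - b) * T) - b * T - 1.0)) / (a - b) ** 2
```

With $\nu=1$, $a=0.5$, $b=1$, $T=2$ (the parameter values used for the figures below), the sample mean agrees with the closed-form first moment $m_1(T) \approx 2.736$.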
In this case, the integral operator $\Gamma$ satisfies
$$
\Gamma f (z) = a \int_0^\infty f(z+y ) \mathrm{e}^{-by} dy, \quad z\in \mathbb{R}_+,
$$
and the recursive calculation of joint moments and cumulants
will be performed by evaluating
$(I_d-\Gamma )^{-1} \Gamma$
in Proposition~\ref{djklds}
on the family of functions $e_{p,\eta,t}$ of the form
$e_{p,\eta, t}(x) := x^p \mathrm{e}^{\eta x} {\bf 1}_{[0,t]}(x)$,
$\eta < b$, $p\geq 0$,
as in the next lemma.
\begin{lemma}
\label{l1}
For $f$ in the linear span generated by the functions
$e_{p,\eta, t}$, $p\geq 0$, $\eta \in \mathbb{R}$,
the operator
$( I_d - \Gamma )^{-1} \Gamma$ is given by
$$
( I_d - \Gamma )^{-1} \Gamma
f ( z )
=
a \int_0^{t-z}
f(z+y) \mathrm{e}^{( a -b)y}
dy,
$$
$z\in [0,t]$.
\end{lemma}
\begin{Proof}
For all $p,\eta \geq 0$ we have the equality
\begin{align*}
& ( I_d - \Gamma )^{-1} \Gamma
e_{p,\eta, t} ( z )
\\
& =
\sum_{n=1}^\infty
\int_{[0,t]^n}
e_{p,\eta, t} ( z+x_1+\cdots + x_n )
\gamma ( dx_1 ) \cdots \gamma ( dx_n )
\\
& =
\sum_{n=1}^\infty
\frac{a^n}{(n-1)!}
\int_0^{t-z}
(z+y)^p
\mathrm{e}^{\eta (z+y)}
y^{n-1} \mathrm{e}^{-by}
dy
\\
& =
a
\mathrm{e}^{\eta z} \int_0^{t-z}
(z+y)^p \mathrm{e}^{(\eta + a -b)y}
dy, \quad z\in [0,t],
\end{align*}
which follows from the fact that the sum $\tau_1+\cdots +\tau_n$
of $n$ exponential random variables with parameter $b>0$ has
a gamma distribution with shape parameter $n\geq 1$ and rate parameter
$b>0$.
\end{Proof}
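This computation can be checked numerically in the special case $f = {\bf 1}_{[0,t]}$ (i.e.\ $p=\eta=0$): the truncated Neumann series of gamma integrals must match the closed form $a\int_0^{t-z} \mathrm{e}^{(a-b)y}\,dy$. A hypothetical Python sketch:

```python
import math

def gamma_term(m, a, b, L, n=4000):
    # a^m/(m-1)! * int_0^L y^(m-1) e^(-b y) dy, by the trapezoid rule
    h = L / n
    s = 0.0
    for i in range(n + 1):
        y = i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * y ** (m - 1) * math.exp(-b * y)
    return a**m / math.factorial(m - 1) * s * h

a, b, t, z = 0.5, 1.0, 2.0, 0.3
L = t - z
series = sum(gamma_term(m, a, b, L) for m in range(1, 30))
closed = a / (a - b) * (math.exp((a - b) * L) - 1.0)
```

Both sides evaluate to $1 - \mathrm{e}^{-0.85} \approx 0.5726$ at these (illustrative) parameter values.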
Using Lemma~\ref{l1}, we can rewrite \eqref{fjkl}
for $t_1<\cdots < t_n$ as
\begin{align}
\label{fdsf}
& \kappa_z^{(n)}\big({\bf 1}_{[0,t_1]},\ldots , {\bf 1}_{[0,t_n]}\big)
\\
\nonumber
&
\hskip-0.1cm
=
\sum_{k=2}^n
\sum_{\pi_1\cup \cdots \cup \pi_k = \{1,\ldots , n\}}
\hskip-0.1cm
\int_0^{t_1-z}
\hskip-0.5cm
a
\mathrm{e}^{( a -b)y}
\prod_{j=1}^k \kappa_{z+y}^{(|\pi_j|)} \big( \big( {\bf 1}_{[0,t_i]}\big)_{i\in \pi_j} \big)
dy,
\end{align}
with
\begin{align*}
\kappa_z^{(1)}({\bf 1}_{[0,t]}) & = (I_d-\Gamma )^{-1} {\bf 1}_{[0,t]}(z)
\\
& =
\frac{b}{b-a} +\frac{a}{a-b} \mathrm{e}^{(a-b)(t-z)},
\quad z\in [0,t] ,
\end{align*}
if $a\not= b$, and
$$
\kappa_z^{(1)}({\bf 1}_{[0,t]}) = (I_d-\Gamma )^{-1} {\bf 1}_{[0,t]}(z)
= 1 + a (t-z),
$$
$z\in [0,t]$, if $a=b$.
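The two expressions are consistent: as $a \to b$, the first tends to the degenerate form $1 + a(t-z)$, as the following hypothetical numerical check illustrates.

```python
import math

def kappa1(z, a, b, t):
    # kappa_z^{(1)}(1_{[0,t]}) in the case a != b
    return b / (b - a) + a / (a - b) * math.exp((a - b) * (t - z))

z, b, t = 0.5, 1.0, 2.0
near = kappa1(z, b + 1e-6, b, t)    # a slightly above b
degenerate = 1.0 + b * (t - z)      # the a = b formula
```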
The recursive computation of
$\kappa_z^{(n)}\big({\bf 1}_{[0,t_1]},\ldots , {\bf 1}_{[0,t_n]}\big)$
in \eqref{fdsf} is implemented using Lemma~\ref{l1}
in the function ${\rm kz}$ of the Maple and Mathematica codes listed in the appendices.
\medskip
\noindent
The computation of joint Hawkes cumulants by
the relation \eqref{al} is then implemented in the function ${\rm c}$.
\medskip
\noindent
Finally, joint moments are computed from the joint moment-cumulant relation \eqref{mc},
which is implemented in the function ${\rm m}$.
The joint moments $\mathbb{E}[ X_{t_1} \cdots X_{t_n} ]$
of $X_{t_1}, \ldots , X_{t_n}$ are
obtained using the command
${\rm m}(a,b,[t_1,\ldots , t_n])$ in Maple or
${\rm m}[a,b,\{t_1,\ldots , t_n\}]$ in Mathematica.
Figures~\ref{fig0} to \ref{fig3-b}
are plotted with $\nu=1$, $a=0.5$, $b=1$, $T=2$, and one million
Monte Carlo samples, and
Figure~\ref{fig0} presents the first moment
$m_1(t) = {\rm m}(a,b,[t]) = {\rm m}[a,b,\{t\}]$.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{figure2}
\caption{First moment.}
\label{fig0}
\end{figure}
\vskip-0.2cm
\noindent
For example, the second joint moment of $(X_t,X_T)$ is obtained by the command
$m_2(t,T) = {\rm m}(a,b,[t,T])$ in Maple or
$m_2(t,T) = {\rm m}[a,b,\{t,T\}]$ in Mathematica
presented in the appendices,
which yields
\begin{align*}
& \mathbb{E} [ X_t X_T ] =
\\ &
\frac{1}{(a - b)^4}
\left(\frac{1}{2}
\mathrm{e}^{-a t -
b (t + T)} (2 b^4 t \mathrm{e}^{a t + b (t + T)}
\right.
\\ &
+ a^2 b (-\mathrm{e}^{a (2 t + T)} + \mathrm{e}^{2 b t + a T} +
2 b t \mathrm{e}^{2 a t + b T}
\\ &
+ 2 b t \mathrm{e}^{b t + a (t + T)} ) +
2 a^3 (\mathrm{e}^{a (2 t + T)} - b t \mathrm{e}^{2 a t + b T}
\\ &
-
\mathrm{e}^{b t + a (t + T)} (1 + b t)) -
2 a b^2 (\mathrm{e}^{2 b t + a T} - 2 \mathrm{e}^{2 a t + b T}
\\ &
- \mathrm{e}^{
b t + a (t + T)} +
\mathrm{e}^{a t + b (t + T)} (2 + b t)))
\\ &
+ (b^2 t +
a ( \mathrm{e}^{(a - b) t} - b t -1))
\\ &
\left.
\quad \times (b^2 T +
a ( \mathrm{e}^{(a - b) T} - b T -1))\right),
\end{align*}
see Figure~\ref{fig1}.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{figure3}
\caption{Second joint moment.}
\label{fig1}
\end{figure}
\vskip-0.2cm
\noindent
Figures~\ref{fig2-a} and \ref{fig2-b}
show the numerical evaluation of third and fourth joint moments,
obtained from
$m_3(t_1,t_2,t_3) = {\rm m}(a,b,[t_1,t_2,t_3])= {\rm m}[a,b,\{t_1,t_2,t_3\}]$
and
$m_4(t_1,t_2,t_3,t_4) ={\rm m}(a,b,[t_1,t_2,t_3,t_4])={\rm m}[a,b,\{t_1,t_2,t_3,t_4\}]$.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{figure4}
\caption{Third joint moments.}
\label{fig2-a}
\end{figure}
\vskip-0.2cm
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{figure5}
\caption{Fourth joint moments.}
\label{fig2-b}
\end{figure}
\vskip-0.2cm
\noindent
The following tables count the (approximate) numbers of summands
appearing in joint cumulant and moment expressions when expanded as
a linear combination of terms of the form
$$
\sum_{k,l,p_1,\ldots, p_n \geq 0\atop
q_1,\ldots , q_n, r_1,\ldots , r_n \geq 0}
\hskip-0.7cm
a^k b^l t_1^{p_1}\cdots t_n^{p_n} \mathrm{e}^{q_1 a t_1 + \cdots + q_n a t_n
+
r_1 b t_1 + \cdots + r_n b t_n},
$$
excluding factorizations and simplifications of expressions.
\begin{table}[H]
\centering
\begin{tabular}{c|c|c|c|c|c|c|}
\cline{2-3}
& \multicolumn{2}{c|}{One variable}
\\
\cline{1-5}
\multicolumn{1}{|c|}{Cumulant} & Time & Count & \multicolumn{2}{c|}{All variables}
\\
\cline{1-5}
\multicolumn{1}{|c|}{Sixth } & 64s & 671 & Time & Count
\\
\cline{1-5}
\multicolumn{1}{|c|}{Fifth } & 11s & 226 & 2403s & 3288
\\
\cline{1-5}
\multicolumn{1}{|c|}{Fourth } & 1.7s & 81 & 31s & 536
\\
\cline{1-5}
\multicolumn{1}{|c|}{Third } & 0.5s & 35 & 1.6s & 91
\\
\cline{1-5}
\multicolumn{1}{|c|}{Second } & 0.2s & 12 & 0.3s & 14
\\
\cline{1-5}
\multicolumn{1}{|c|}{First} & 0.06s & 4
\\
\cline{1-3}
\end{tabular}
\caption{Counts of summands and cumulant computation times in Maple.}
\label{t1}
\end{table}
\vskip-0.2cm
The tables also display the corresponding computation times on an
8-core laptop computer with 8GB RAM.
Symbolic computation appears faster with Maple, although
computation times become similar at the order six and above.
\noindent
Figures~\ref{fig3-a} and \ref{fig3-b} show the
numerical evaluation of the fifth and sixth joint moments
$m_5(t_1,t_2,t_3,t_4,t_5)$ and $m_6(t_1,t_2,t_3,t_4,t_5,t_6)$.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{figure6}
\caption{Fifth joint moments.}
\label{fig3-a}
\end{figure}
\vskip-0.2cm
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{figure7}
\caption{Sixth joint moments.}
\label{fig3-b}
\end{figure}
\vskip-0.2cm
\noindent
Computation times are presented in seconds for symbolic calculations
using the variables $a,b,t_1,\ldots , t_n$,
and can be significantly shorter when the variables are set to specific
numerical values.
Moment computation times in Table~\ref{t2} are similar to those
of Table~\ref{t1}, and can be sped up if cumulant functions are memoized
after repeated calls.
\noindent
\begin{table}[H]
\centering
\begin{tabular}{c|c|c|c|c|c|c|}
\cline{2-3}
& \multicolumn{2}{c|}{One variable}
\\
\cline{1-5}
\multicolumn{1}{|c|}{Moment} & Time & Count & \multicolumn{2}{c|}{All variables}
\\
\cline{1-5}
\multicolumn{1}{|c|}{Sixth} & 66s & 2159 & Time & Count
\\
\cline{1-5}
\multicolumn{1}{|c|}{Fifth} & 11s & 762 & 2544s & 27116
\\
\cline{1-5}
\multicolumn{1}{|c|}{Fourth} & 1.9s & 265 & 35s & 2236
\\
\cline{1-5}
\multicolumn{1}{|c|}{Third} & 0.5s & 88 &
1.8s & 266
\\
\cline{1-5}
\multicolumn{1}{|c|}{Second} & 0.2s & 22 & 0.5s & 29
\\
\cline{1-5}
\multicolumn{1}{|c|}{First} & 0.05s & 4
\\
\cline{1-3}
\end{tabular}
\caption{Counts of summands and moment computation times in Maple.}
\label{t2}
\end{table}
\vskip-0.2cm
\noindent
One-variable examples are computed using the following codes
which use Bell polynomials instead of sums over partitions.
\medskip
\begin{lstlisting}[language=Maple]
kz := proc(z, n, a, b, t) local tmp, z1, k; option remember;
if n = 1 then if a = b then return 1 + a*(t - z); else return b/(b - a) + a*exp((a - b)*(t - z))/(a - b); end if; end if;
tmp := 0; z1 := []; for k from n by -1 to 2 do z1 := [op(z1), kz(y + z, n - k + 1, a, b, t)]; tmp := tmp + IncompleteBellB(n, k, op(z1)); end do;
return int(a*exp((a - b)*y)*tmp, y = 0 .. t - z); end proc;
c := proc(a, b, t, n) local y, k, z, temp; option remember; temp := 0; y := [];
for k from n by -1 to 1 do y := [op(y), kz(z, n - k + 1, a, b, t)]; temp := temp + IncompleteBellB(n, k, op(y)); end do;
return int(temp, z = 0 .. t); end proc;
m := proc(a, b, t, n) local tmp, z, k; option remember;
tmp := 0; if n = 0 then return 1; end if; z := []; for k from n by -1 to 1 do z := [op(z), c(a, b, t, n - k + 1)]; tmp := tmp + IncompleteBellB(n, k, op(z)); end do;
return tmp; end proc;
\end{lstlisting}
\vskip-0.6cm
\noindent
For example, the second moment of $X_t$ is obtained in Maple
by the command
${\rm m}(a,b,t,2)$, which yields
\begin{align*}
\mathbb{E} [ X_t^2 ] = &
\frac{(b^2 t + a (-1 + \mathrm{e}^{(a - b) t} - b t))^2}{(a - b)^4}
\\
&
+ \frac{1}{2 (a - b)^4}
\big(
\mathrm{e}^{-2 b t} (2 b^4 \mathrm{e}^{2 b t} t
\\
&
+
a^2 b (-\mathrm{e}^{2 a t} + \mathrm{e}^{2 b t} + 4 b \mathrm{e}^{(a + b) t} t)
\\
&
-
2 a b^2 (-3 \mathrm{e}^{(a + b) t} + \mathrm{e}^{2 b t} (3 + b t))
\\
& +
2 a^3 (\mathrm{e}^{2 a t} - \mathrm{e}^{(a + b) t} (1 + 2 b t)))\big).
\end{align*}
The same result can be obtained in Mathematica from the command
${\rm m}[a,b,t,2]$ using the code below.
\medskip
\begin{lstlisting}[language=Mathematica]
kz[z_, n_, a_, b_, t_] :=
kz[z, n, a, b, t] = (Module[{tmp, k, y, z1},
If[n == 1, If[a === b, Return[1 + a*(t - z)],
Return[b/(b - a) + a*E^((a - b)*(t - z))/(a - b)]]]; tmp = 0;
z1 = {}; For[k = n, k >= 2, k--, z1 = Append[z1, Block[{i = n - k + 1, u = y + z}, kz[u, i, a, b, t]]];
tmp += BellY[n, k, z1];]; a*Integrate[E^((a - b)*y)*tmp, {y, 0, t - z}]]);
c[a_, b_, t_, n_] := c[a, b, t, n] = (Module[{y, k, z, temp}, temp = 0; y = {}; For[k = n, k >= 1, k--,
y = Append[y, Block[{i = n - k + 1, u = z}, kz[u, i, a, b, t]]]; temp += BellY[n, k, y]]; Return[Integrate[temp, {z, 0, t}]]]);
m[a_, b_, t_, n_] := m[a, b, t, n] = (Module[{tmp, z, k}, tmp = 0; If[n == 0, Return[1]]; z = {}; For[k = n, k >= 1, k--, z = Append[z, c[a, b, t, n - k + 1]]; tmp += BellY[n, k, z]]; tmp])
\end{lstlisting}
\vspace{-0.8cm}
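As a sanity check on this closed form, a direct numerical transcription (hypothetical Python, with $\nu = 1$) confirms that setting $a = 0$ recovers the Poisson second moment $t + t^2$, and that self-excitation increases the second moment at the parameter values used for the figures.

```python
import math

def second_moment(a, b, t):
    # E[X_t^2] for nu = 1, transcribed from the closed-form expression above
    m1 = (b**2 * t + a * (-1.0 + math.exp((a - b) * t) - b * t)) / (a - b) ** 2
    bracket = (2 * b**4 * math.exp(2 * b * t) * t
               + a**2 * b * (-math.exp(2 * a * t) + math.exp(2 * b * t)
                             + 4 * b * math.exp((a + b) * t) * t)
               - 2 * a * b**2 * (-3 * math.exp((a + b) * t)
                                 + math.exp(2 * b * t) * (3 + b * t))
               + 2 * a**3 * (math.exp(2 * a * t)
                             - math.exp((a + b) * t) * (1 + 2 * b * t)))
    return m1**2 + math.exp(-2 * b * t) * bracket / (2 * (a - b) ** 4)
```

For $a = 0$ (no self-excitation) and $b = 1$ one gets $\mathbb{E}[X_t^2] = t + t^2$ exactly, while $a = 0.5$ gives a strictly larger value.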
\section*{Appendix A - Maples codes}
\begin{lstlisting}[language=Maple]
kz := proc(z, a, b, t::list) local pm, pp2, p, pp, tmp, k, y, h, i, j, ii, u, n, zz, c; option remember; n := nops(t);
if n = 1 then if a = b then return 1 + a*(t[1] - z); else return b/(b - a) + a*exp((a - b)*(t[1] - z))/(a - b); end if; end if;
tmp := 0; pp2 := Iterator:-SetPartitions(n); for pp in pp2 do p := pp2:-ToSets(pp);
if 2 <= nops(p) then c := 1; for i to nops(p) do c := c*kz(y + z, a, b, map(op, convert(p[i], list), t)); end do;
tmp := tmp + c; end if; end do; return a*int(exp((a - b)*y)*tmp, y = 0 .. t[1] - z); end proc;
\end{lstlisting}
\vskip-0.6cm
\begin{lstlisting}[language=Maple]
c := proc(a, b, t::list) option remember; local y, e, k, pm, tmp, p2, pp, p, c, i, zz, j, u, ii, n; n := nops(t);
tmp := kz(y, a, b, t); if 2 <= n then pm := Iterator:-SetPartitions(n); for pp in pm do p := pm:-ToSets(pp);
if 2 <= nops(p) then e := 1; for i to nops(p) do e := e*kz(y, a, b, map(op, convert(p[i], list), t)); end do; tmp := tmp + e;
end if; end do; end if; return int(tmp, y = 0 .. t[1]); end proc;
\end{lstlisting}
\vskip-0.6cm
\begin{lstlisting}[language=Maple]
m := proc(a, b, t::list) option remember; local y, e, k, u, ii, pm, tmp, p2, pp, p, i, zz, j, n; n := nops(t); tmp := c(a, b, t);
if 2 <= n then pm := Iterator:-SetPartitions(n); for pp in pm do p := pm:-ToSets(pp); if 2 <= nops(p) then e := 1; for i to nops(p) do e := e*c(a, b, map(op, convert(p[i], list), t)); end do; tmp := tmp + e;
end if; end do; end if; return tmp; end proc;
\end{lstlisting}
\vspace{-1.1cm}
\section*{Appendix B - Mathematica codes}
\begin{lstlisting}[language=Mathematica]
Needs["Combinatorica`"];
kz[z_, a_, b_, t__] :=
kz[z, a, b, t] = (Module[{tmp, y, i, c, n}, n = Length[t];
If[n == 1, If[a === b, Return[1 + a*(t[[1]] - z)],
Return[b/(b - a) + a*E^((a - b)*(t[[1]] - z))/(a - b)]]];
tmp = 0; Do[c = 1; If[Length[p] >= 2, For[i = 1, i <= Length[p], i++,
c = c*Block[{u = y + z, v = t[[p[[i]]]]}, kz[u, a, b, v]]];
tmp += c], {p, SetPartitions[n]}];
Return[a*Integrate[E^((a - b)*y)*tmp, {y, 0, t[[1]] - z}]]]);
\end{lstlisting}
\vskip-0.6cm
\begin{lstlisting}[language=Mathematica]
c[a_, b_, t__] :=
c[a, b, t] = (Module[{y, e, tmp, n, i}, n = Length[t]; tmp = 0;
Do[e = 1; For[i = 1, i <= Length[p], i++,
e = e*Block[{u = y, v = t[[p[[i]]]]}, kz[u, a, b, v]]];
tmp += Flatten[{e}][[1]],
{p, SetPartitions[n]}];
Return[Integrate[tmp, {y, 0, t[[1]]}]]]);
\end{lstlisting}
\vskip-0.6cm
\begin{lstlisting}[language=Mathematica]
m[a_, b_, t__] := (Module[{n, e, i, tmp}, tmp = 0; n = Length[t]; If[n == 0, Return[1]];
Do[e = 1; For[i = 1, i <= Length[p], i++, e = e*c[a, b, t[[p[[i]]]]]];
tmp += e, {p, SetPartitions[n]}]; Flatten[{tmp}][[1]]])
\end{lstlisting}
\vspace{-1.1cm}
\footnotesize
\section{Introduction}
\label{secintro}
The goal of this paper is to present the theoretical underpinnings for
computer-assisted branch validation using functional analytic techniques,
including the constructive implicit function theorem and Neumann series methods,
in such a way that pointwise estimates yield the validation of solution branches.
While the individual proof techniques presented here are not novel,
we present this approach in a modular way so that it is flexible, adaptable,
and as computationally feasible as possible in more than one space dimension.
In particular, we apply this methodology in the case of the Ohta--Kawasaki
model for diblock copolymers~\cite{ohta:kawasaki:86a}. Diblock copolymers
are formed by the chemical reaction of two linear polymers (known as
blocks) which contain different monomers. Whenever the blocks are
thermodynamically incompatible, the blocks are forced to separate
after the reaction, but since the blocks are covalently bonded they
cannot separate on a macroscopic scale. The competition between these
long-range and short-range forces causes {\em microphase separation},
resulting in pattern formation on a mesoscopic scale.
We study the Ohta--Kawasaki equation in the case of homogeneous Neumann
boundary conditions on rectilinear domains $\Omega$ in dimensions one,
two, and three, which is given by
\begin{eqnarray*}
w_t &=& -\Delta ( \Delta w + \lambda f(w) ) - \lambda \sigma (w - \mu)
\quad\mbox{ in } \Omega \;, \\[1.5ex]
& & \frac{ \partial w }{\partial \nu } =
\frac{\partial (\Delta w)}{\partial \nu }= 0
\quad\mbox{ on } \partial \Omega \; .
\end{eqnarray*}
The notation~$\nu$ denotes the unit outward normal on the boundary
of~$\Omega$ --- corresponding to homogeneous Neumann boundary conditions.
The quantity $w(t,x)$ is the local average density of the two blocks.
The parameter $\mu$ is the space average of~$w$, meaning it is a measure
of the relative total proportion of the two polymers, which we tersely
refer to as the {\em mass} of the system. The equation obeys a mass
conservation, implying that~$\mu$ is time-invariant. A large value of
parameter~$\lambda$ corresponds to a large short-range repulsion,
while a large value of the parameter~$\sigma$ corresponds to large
long-range elasticity forces. We refer the reader
to~\cite{johnson:etal:13a} for a detailed description of
how~$\lambda$ and~$\sigma$ are defined.
The nonlinear function $f: {\mathbb R} \to {\mathbb R}$ is often assumed to be $f(u) = u-u^3$, but
the results in this paper still apply as long as $f$ is a $C^2$-function.
Finally, note that the second
boundary condition is necessary since this is a fourth order equation.
In this paper, we focus on equilibrium solutions $w = w(x)$.
For notational convenience, we reformulate our equation slightly.
For a solution~$w$ of the diblock copolymer equation, we define
$u = w - \mu$. Since the space average of~$w$ is~$\mu$, the average
of the shifted function~$u$ is zero. Therefore the equilibrium equation
becomes
\begin{eqnarray} \label{eqn:dbcp}
-\Delta ( \Delta u + \lambda f(u+\mu) ) - \lambda \sigma u &=& 0 \quad\mbox{ in } \Omega \;, \nonumber\\[1.5ex]
\frac{ \partial u }{\partial \nu }= \frac{ \partial (\Delta u) }{\partial \nu }&=& 0 \quad\mbox{ on } \partial \Omega \;, \\[1.5ex]
\int_\Omega u \;dx &=& 0 \nonumber \;.
\end{eqnarray}
We will use this version of the equation for the rest of the paper. We focus
on solutions to this equation as we vary any of the three parameters: the
degree of short-range repulsion $\lambda$, the mass $\mu$, and the degree of
long-range elasticity $\sigma$. Our main goal is to establish bounds that
make it possible to use a functional analytic approach to rigorous validation
using the point of view of the constructive implicit function theorem which we
have already developed in previous work~\cite{sander:wanner:16a, wanner:16a,
wanner:17a, wanner:18a}. Our bounds are derived mostly using theoretical
techniques; in the case of the Sobolev embeddings, however, they are
obtained using computer-assisted means. This method is designed for validated
continuation of branches of solutions which depend on a parameter, in the
spirit of the numerical method of pseudo-arclength continuation, such as
seen in the software packages AUTO~\cite{doedel:81a} and Matcont~\cite{matcont}.
Successive application of this theorem allows us to validate branches of
equilibrium solutions by giving precise bounds on both the branch
approximation error and isolation. This is much more powerful than
only validating individual solutions along a branch, since it allows
us to guarantee that a set of solutions lie along the same connected
branch component.
In order to establish what is new in this paper, we give a brief discussion
of previous results. A number of papers have previously considered numerical
computation of bifurcation diagrams for the Ohta-Kawasaki and Cahn-Hilliard
equations, such as for example~\cite{choksi:etal:11a, choksi:peletier:09a,
choksi:ren:03a, choksi:ren:05a, desi:etal:11a, johnson:etal:13a, maier:etal:08a,
maier:etal:07a}. There are also several decades of results on computer validation
for dynamical systems and differential equations solutions which combine fixed
point arguments and interval arithmetic; see for example~\cite{arioli:koch:10a,
day:etal:07b, gameiro:etal:08a, plum:95a, plum:96a, plum:09a, rump:10a, wanner:16a,
wanner:17a}. A constructive implicit function theorem was formulated in the work
of Chierchia~\cite{chierchia:03a}. Our approach follows most closely the work of
Plum~\cite{nakao:etal:19a, plum:95a, plum:96a,plum:09a}, in which functional
analytic approaches are given for establishing needed a priori bounds. Such
methods have also been applied by Yamamoto~\cite{yamamoto:98a, yamamoto:etal:11a}.
In our previous work on the constructive implicit function theorem, our goal has
been to give a systematic procedure for adapting these works to the context of
parameter continuation. There are several papers that have already considered
rigorous validation of parameter-dependent solutions for the Ohta-Kawasaki
model~\cite{cai:watanabe:19a, cyranka:wanner:18a, lessard:sander:wanner:17a,
sander:wanner:16a, vandenberg:williams:17a, vandenberg:williams:p19b,
vandenberg:williams:19a, wanner:16a, wanner:17a, wanner:18a}. Many of these
papers also include methods of bounding the terms in a generalized Fourier
series, as well as the estimates on the tail. However, it was necessary to
make quite substantial ad hoc calculations in order to establish the needed
bounds before it was possible to proceed with numerical validation.
Our goal in the current paper is to establish a set of flexible
bounds on the size of the inverse of the derivative, the required
truncation dimension, Lipschitz bounds on the equations with respect to all
parameters, as well as constructive Sobolev embedding constant bounds for
comparison to the $L^\infty$-norm, meaning that equilibrium verifications
along branch segments can be done without having to resort to ad hoc
calculations which crucially depend on the specific nonlinearity.
More precisely, we obtain the following:
\begin{itemize}
\item The approach of this paper derives general estimates that work in one,
two, and three space dimensions, and under the natural homogeneous Neumann
boundary conditions. This is in contrast to~\cite{cai:watanabe:19a}
and~\cite{wanner:17a}, which only considered the case of one-dimensional
domains, or to~\cite{vandenberg:williams:p19b, vandenberg:williams:19a}, which
considered the three-dimensional case only under periodic boundary conditions
and symmetry constraints.
\item Our approach uses the natural functional analytic setting for the
diblock copolymer evolution equation, which is based on the Sobolev space
of twice weakly differentiable functions. This is in contrast
to~\cite{vandenberg:williams:p19b, vandenberg:williams:19a}, which seek
the equilibria in spaces of analytic functions.
\item As part of our approach, we obtain accurate upper bounds for the
operator norm of the inverse of the diblock copolymer Fr\'echet derivative.
For this estimate, we use the natural Sobolev norms of the underlying
problem. In contrast to~\cite{kinoshita:etal:19a,
watanabe:etal:19a, watanabe:etal:16a} our method is based on Neumann
series.
\end{itemize}
Throughout this paper, we focus on the theoretical underpinnings
which allow one to apply the constructive implicit function
theorem~\cite{sander:wanner:16a}. Due to space constraints, we leave
the practical application of these results to path-following with
slanted boxes as in~\cite{sander:wanner:16a}, as well as extensions
to pseudo-arclength continuation, for future work. Nevertheless, while
this paper is focused only on the Ohta-Kawasaki model, the general
approach can be used for other parabolic partial differential
equations as well.
The remainder of this paper is organized as follows. In
Section~\ref{sec:tbd}, we introduce the necessary functional analytic
framework, while Section~\ref{subsec:die} is devoted to finding bounds
on the operator norm of the inverse of the linearized operator. After
that, Section~\ref{sec:cift} establishes Lipschitz bounds on the diblock
copolymer operator for continuation with respect to any of the three
parameters~$\lambda$, $\sigma$, and~$\mu$, before in Section~\ref{sec:eg}
we give a brief numerical illustration of how this method rigorously
establishes a variety of equilibrium branch pieces for the Ohta-Kawasaki
model in multiple dimensions. Finally, in Section~\ref{sec:theend} we
wrap up with conclusions and future plans.
\section{Basic definitions and setup}
\label{sec:tbd}
In this section, we establish notation and crucial auxiliary bounds. In
Section~\ref{sec:ciftstatement} we recall the constructive implicit function
theorem, before in Section~\ref{sec:fun} we define the function spaces that will
be used in our computer-assisted proofs. These spaces are particularly adapted for
the use with Fourier series expansions to represent functions with Neumann
boundary conditions and zero average. In Section~\ref{sec:sob}, we
collect a set of Sobolev embedding results giving precise rigorous
bounds on the similarity constants for passing between equivalent norms
on these function spaces. Finally, in Section~\ref{sec:proj} we
introduce the necessary finite-dimensional spaces and associated
projection operators that are used in our computer-assisted proofs.
\subsection{The constructive implicit function theorem}
\label{sec:ciftstatement}
In this section we state a constructive implicit function theorem that makes
it possible to validate a branch of solutions changing with respect to a
parameter. This theorem appears in~\cite{sander:wanner:16a}, where we
demonstrated the validation of solutions for the lattice Allen-Cahn equation.
The theorem is based on previous work of Plum~\cite{plum:09a} and
Wanner~\cite{wanner:17a}. To put this in context, our overarching goal
is to find a connected curve of values~$(\alpha,x)$ in the zero set for a
specific nonlinear operator~${\mathcal G}(\alpha,x)$. In this paper, the zero set
consists of the equilibria of the Ohta-Kawasaki equation. Starting at a point
for which the operator~${\mathcal G}$ is close to zero, we use the theorem as the
iterative step in a validated continuation. That is, we iteratively validate
small portions along the solution curve, each time using the constructive
implicit function theorem which is stated below. We also validate that these
portions combine to create a piece of a single connected solution curve,
and show that it is isolated from any other branch of the solution curve.
Rather than getting bogged down in the details of the iterative process, we
first concentrate on the single iterative step and the estimates needed in
order to perform it. Specifically, we consider solutions to the equation
\begin{equation} \label{nift:nleqn}
{\mathcal G}(\alpha,x) = 0 \; ,
\end{equation}
where ${\mathcal G} : {\mathcal P} \times {\mathcal X} \to {\mathcal Y}$ is a Fr\'echet differentiable
nonlinear operator between two Banach spaces~${\mathcal X}$ and~${\mathcal Y}$, and
the parameter~$\alpha$ is taken from a Banach space~${\mathcal P}$. The
norms on these Banach spaces are denoted by~$\|\cdot\|_{\mathcal P}$,
$\|\cdot\|_{\mathcal X}$, and~$\|\cdot\|_{\mathcal Y}$, respectively. One possible
choice of~${\mathcal G}$ would be to directly use the nonlinear operator
associated with~(\ref{eqn:dbcp}), but this is not a numerically
viable option for validation of a branch of solutions. Instead we
will introduce an extended system which gives a validated version of
pseudo-arclength continuation. The system not only contains the
Ohta-Kawasaki model equilibrium equation, but is also designed
to minimize the number of required validation steps.
In order to present the constructive implicit function theorem in
detail, we begin by making the following hypotheses.
For the classical implicit function theorem, the existence of constants
satisfying the hypotheses given below is sufficient. In contrast, since we wish to
use a computer assisted proof to validate existence of equilibria with
specified error bounds, we require
explicit values for each of the constants in (H1)--(H4).
\begin{itemize}
\item[(H1)] Unlike the traditional implicit function theorem, we assume
only an approximate solution to the equation. That is, assume that we
are given a pair~$(\alpha^*,x^*) \in {\mathcal P} \times {\mathcal X}$ which is an
approximate solution of the nonlinear problem~(\ref{nift:nleqn}).
More precisely, the residual of the nonlinear operator~${\mathcal G}$ at the
pair~$(\alpha^*,x^*)$ is small, i.e., there exists a constant
$\rho > 0$ such that
\begin{displaymath}
\left\| {\mathcal G}(\alpha^*,x^*) \right\|_{\mathcal Y} \le \rho \; .
\end{displaymath}
\item[(H2)] Assume that the operator~$D_x{\mathcal G}(\alpha^*,x^*)$
is invertible and not very close to being singular. That is,
the Fr\'echet derivative~$D_x{\mathcal G}(\alpha^*,x^*) \in {\mathcal L}({\mathcal X},{\mathcal Y})$,
where~${\mathcal L}({\mathcal X},{\mathcal Y})$ denotes the Banach space of all bounded
linear operators from~${\mathcal X}$ into~${\mathcal Y}$, is one-to-one and
onto, and its inverse~$D_x{\mathcal G}(\alpha^*,x^*)^{-1} : {\mathcal Y} \to {\mathcal X}$
is bounded and satisfies
\begin{displaymath}
\left\| D_x{\mathcal G}(\alpha^*,x^*)^{-1} \right\|_{{\mathcal L}({\mathcal Y},{\mathcal X})} \le K
\; ,
\end{displaymath}
where~$\| \cdot \|_{{\mathcal L}({\mathcal Y},{\mathcal X})}$ denotes the operator norm
in~${\mathcal L}({\mathcal Y},{\mathcal X})$.
\item[(H3)] For~$(\alpha,x)$ close to~$(\alpha^*,x^*)$,
the Fr\'echet derivative~$D_x{\mathcal G}(\alpha,x)$ is locally
Lipschitz continuous in the following sense. There exist
positive real constants~$L_1$, $L_2$, and~$\ell_x$, as well as a
constant~$\ell_\alpha \ge 0$, such that for all pairs $(\alpha,x)
\in {\mathcal P} \times {\mathcal X}$ with $\| x - x^* \|_{\mathcal X} \le \ell_x$ and
$\|\alpha - \alpha^*\|_{\mathcal P} \le \ell_\alpha$ we have
\begin{displaymath}
\left\| D_x{\mathcal G}(\alpha,x) -
D_x{\mathcal G}(\alpha^*,x^*) \right\|_{{\mathcal L}({\mathcal X},{\mathcal Y})} \le
L_1 \left\| x - x^* \right\|_{\mathcal X} +
L_2 \left\|\alpha - \alpha^* \right\|_{\mathcal P} \; .
\end{displaymath}
To verify this condition, as well as the next one, we will
give specific Lipschitz bounds on the Ohta-Kawasaki operator.
We will then show the precise way to combine these bounds in
order to get the constants $L_k$.
\item[(H4)] For~$\alpha$ close to~$\alpha^*$, the
Fr\'echet derivative~$D_\alpha {\mathcal G}(\alpha,x^*)$ satisfies
a Lipschitz-type bound. More precisely, there exist positive
real constants~$L_3$ and~$L_4$, such that for all $\alpha \in {\mathcal P}$
with $\|\alpha - \alpha^*\|_{\mathcal P} \le \ell_\alpha$ one has
\begin{displaymath}
\left\| D_\alpha {\mathcal G}(\alpha,x^*) \right\|_{{\mathcal L}({\mathcal P},{\mathcal Y})} \le
L_3 + L_4 \left\| \alpha - \alpha^* \right\|_{\mathcal P} \; ,
\end{displaymath}
where~$\ell_\alpha$ is the constant that was chosen in~(H3).
\end{itemize}
Keeping these hypotheses in mind, the constructive implicit
function theorem can then be stated as follows.
\begin{theorem}[Constructive Implicit Function Theorem]
\label{nift:thm}
Let~${\mathcal P}$, ${\mathcal X}$, and~${\mathcal Y}$ be Banach spaces, suppose that the
nonlinear operator ${\mathcal G} : {\mathcal P} \times {\mathcal X} \to {\mathcal Y}$
is Fr\'echet differentiable, and assume that the
pair~$(\alpha^*,x^*) \in {\mathcal P} \times {\mathcal X}$ satisfies
hypotheses~(H1), (H2), (H3), and~(H4).
Finally, suppose that
\begin{equation} \label{nift:thm1}
4 K^2 \rho L_1 < 1
\qquad\mbox{ and }\qquad
2 K \rho < \ell_x \; .
\end{equation}
Then there exist pairs of constants~$(\delta_\alpha,\delta_x)$ with
$0 \le \delta_\alpha \le \ell_\alpha$ and $0 < \delta_x \le \ell_x$,
which satisfy
\begin{equation} \label{nift:thm2}
2 K L_1 \delta_x + 2 K L_2 \delta_\alpha \le 1
\qquad\mbox{ and }\qquad
2 K \rho + 2 K L_3 \delta_\alpha + 2 K L_4 \delta_\alpha^2
\le \delta_x \; ,
\end{equation}
and for each such pair the following holds. For every~$\alpha \in {\mathcal P}$
with $\|\alpha - \alpha^*\|_{\mathcal P} \le \delta_\alpha$ there exists a uniquely
determined element~$x(\alpha) \in {\mathcal X}$ with $\| x(\alpha) - x^* \|_{\mathcal X}
\le \delta_x$ such that ${\mathcal G}(\alpha, x(\alpha)) = 0$.
In other words, if we define
\begin{displaymath}
{\mathcal B}_\delta^{\mathcal X} = \left\{ \xi \in {\mathcal X} \; : \;
\left\| \xi - x^* \right\|_{\mathcal X} \le \delta \right\}
\quad\mbox{ and }\quad
{\mathcal B}_\delta^{\mathcal P} = \left\{ p \in {\mathcal P} \; : \;
\left\| p - \alpha^* \right\|_{\mathcal P} \le \delta \right\}
\; ,
\end{displaymath}
then all solutions of the nonlinear problem ${\mathcal G}(\alpha,x)=0$ in the
set ${\mathcal B}_{\delta_\alpha}^{\mathcal P} \times {\mathcal B}_{\delta_x}^{\mathcal X}$ lie on the graph
of the function $\alpha \mapsto x(\alpha)$. In addition, the following
two statements are satisfied.
\begin{itemize}
\item For all pairs $(\alpha,x) \in {\mathcal B}_{\delta_\alpha}^{\mathcal P}
\times {\mathcal B}_{\delta_x}^{\mathcal X}$ the Fr\'echet derivative~$D_x{\mathcal G}(\alpha,x)
\in {\mathcal L}({\mathcal X},{\mathcal Y})$ is a bounded invertible linear operator, whose
inverse is in~${\mathcal L}({\mathcal Y},{\mathcal X})$.
\item If the mapping~${\mathcal G} : {\mathcal P} \times {\mathcal X} \to {\mathcal Y}$ is $k$-times
continuously Fr\'echet differentiable, then so is the solution
function $\alpha \mapsto x(\alpha)$.
\end{itemize}
\end{theorem}
Throughout the remainder of this paper, we concentrate on finding
computationally accessible versions of hypotheses~(H2), (H3), and~(H4)
for the Ohta-Kawasaki model.
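Before turning to the Ohta-Kawasaki setting, it may help to see the constants of the theorem in action in a deliberately simple situation. The following Python sketch treats the scalar toy problem ${\mathcal G}(\alpha,x) = x^2 - \alpha$ with ${\mathcal P} = {\mathcal X} = {\mathcal Y} = {\mathbb R}$; all numerical choices below are illustrative and are not taken from the paper.

```python
import math

# Toy problem G(alpha, x) = x^2 - alpha near the approximate zero
# (alpha*, x*) = (1, 1.001); the true branch is x(alpha) = sqrt(alpha).
alpha_s, x_s = 1.0, 1.001

rho = abs(x_s**2 - alpha_s)        # (H1): residual bound
K = 1.0 / abs(2.0 * x_s)           # (H2): D_x G = 2 x*, so K = 1/|2 x*|
L1, L2 = 2.0, 0.0                  # (H3): |2x - 2x*| = 2 |x - x*|
L3, L4 = 1.0, 0.0                  # (H4): D_alpha G = -1
ell_x, ell_alpha = 0.1, 0.05

# Smallness conditions (nift:thm1)
assert 4 * K**2 * rho * L1 < 1 and 2 * K * rho < ell_x

# Choose radii satisfying (nift:thm2); the factor 0.9 leaves a safety margin
delta_x = 0.05
delta_alpha = min(0.9 * (delta_x - 2 * K * rho) / (2 * K * L3), ell_alpha)
assert 2 * K * L1 * delta_x + 2 * K * L2 * delta_alpha <= 1
assert 2 * K * rho + 2 * K * L3 * delta_alpha + 2 * K * L4 * delta_alpha**2 <= delta_x

# Conclusion of the theorem, checked against the known branch: for every
# |alpha - alpha*| <= delta_alpha the unique zero x(alpha) = sqrt(alpha)
# stays within delta_x of x*
for a in (alpha_s - delta_alpha, alpha_s, alpha_s + delta_alpha):
    assert abs(math.sqrt(a) - x_s) <= delta_x
```

Here the validation radius~$\delta_x$ is chosen first and~$\delta_\alpha$ is then solved for from the second inequality; in an actual continuation run this trade-off governs how far one can step in the parameter before a new validation is required.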
\subsection{Function spaces}
\label{sec:fun}
Throughout this paper, we let $\Omega = (0,1)^d$ denote the unit cube
in dimension $d = 1,2,3$, and define the constants
\begin{displaymath}
c_0 = 1
\quad\mbox{ and }\quad
c_\ell = \sqrt{2}
\quad\mbox{ for }\quad
\ell \in {\mathbb N} \; .
\end{displaymath}
If $k \in {\mathbb N}_0^d$ denotes an arbitrary multi-index of the form
$k = (k_1,\dots,k_d)$, then let
\begin{displaymath}
c_k = c_{k_1} \cdot \ldots \cdot c_{k_d} \; .
\end{displaymath}
If we then define
\begin{equation}
\label{eqn:phik}
\phi_k(x) = c_k \prod_{i=1}^d \cos ( k_i \pi x_i )
\quad\mbox{ for all }\quad
x = (x_1,\ldots,x_d) \in \Omega \; ,
\end{equation}
then the function collection $\{ \phi_k \}_{k \in {\mathbb N}_0^d}$ forms a complete orthonormal
basis for the space~$L^2(\Omega)$. Any measurable and square-integrable function
$u : \Omega \to {\mathbb R}$ can be written in terms of its Fourier cosine series
\begin{equation} \label{eqn:ucossum}
u(x) = \sum_{k \in {\mathbb N}_0^d} \alpha_k \phi_k(x) \; ,
\end{equation}
where $\alpha_k \in {\mathbb R}$ are the Fourier coefficients of~$u$. Finally, we
define
\begin{displaymath}
|k| = (k_1^2 + \dots + k_d^2)^{1/2}
\quad\mbox{ and }\quad
|k|_\infty = \max( k_1, \dots , k_d ) \; .
\end{displaymath}
Each function $\phi_k(x)$ is an eigenfunction of the negative Laplacian. The corresponding eigenvalue
is given by $\kappa_k$, defined via the equation
\begin{displaymath}
-\Delta \phi_k(x) = \kappa_k \phi_k(x)
\qquad\mbox{ with }\qquad
\kappa_k = \pi^2 \left(k_1^2 + k_2^2 + \dots + k_d^2\right) =
\pi^2 |k|^2 \; .
\end{displaymath}
A direct computation shows that each~$\phi_k(x)$ satisfies
the homogeneous Neumann boundary condition $\partial \phi_k /\partial \nu = 0$.
In addition, as a result of being an eigenfunction of~$-\Delta$, each function $\phi_k(x)$
also satisfies the second boundary condition in~(\ref{eqn:dbcp}), since the
identity $\partial (\Delta \phi_k) /\partial \nu = -\kappa_k \partial
\phi_k /\partial \nu = 0$ holds. Therefore any finite Fourier series as
above automatically satisfies both boundary conditions of the diblock copolymer
equation.
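These facts are easy to check numerically. The following short Python sketch (illustrative only, using the example multi-index $k = (2,3)$ in dimension two) confirms the eigenvalue relation by finite differences and the Neumann condition on a face of the square:

```python
import math

k1, k2 = 2, 3                      # example multi-index, both entries nonzero
c_k = math.sqrt(2) * math.sqrt(2)  # c_k = c_{k1} c_{k2}

def phi(x1, x2):
    return c_k * math.cos(k1 * math.pi * x1) * math.cos(k2 * math.pi * x2)

def neg_laplacian(x1, x2, h=1e-4):
    # second-order central differences in each coordinate direction
    d11 = (phi(x1 + h, x2) - 2 * phi(x1, x2) + phi(x1 - h, x2)) / h**2
    d22 = (phi(x1, x2 + h) - 2 * phi(x1, x2) + phi(x1, x2 - h)) / h**2
    return -(d11 + d22)

kappa = math.pi**2 * (k1**2 + k2**2)   # predicted eigenvalue pi^2 |k|^2

p1, p2 = 0.3, 0.7                      # an interior sample point
ratio = neg_laplacian(p1, p2) / phi(p1, p2)   # should be close to kappa

# Neumann condition on the face x1 = 0: the normal derivative vanishes
dphi_normal = -c_k * k1 * math.pi * math.sin(0.0) * math.cos(k2 * math.pi * p2)
```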
Based on our construction, the family $\{ \phi_k \}_{k \in {\mathbb N}_0^d}$ is a
complete orthonormal basis for the space~$L^2(\Omega)$. Thus, if~$u$ is
given as in~(\ref{eqn:ucossum}) one can easily see that
\begin{displaymath}
\| u \|_{L^2} = \left( \sum_{k \in {\mathbb N}_0^d} \alpha_k^2 \right)^{1/2} \; .
\end{displaymath}
For our application to the diblock copolymer model, we need to work
with suitable subspaces of the Sobolev spaces~$H^k(\Omega) = W^{k,2}(\Omega)$,
see for example~\cite{adams:fournier:03a}. These subspaces have to
reflect the required homogeneous Neumann boundary conditions and they can be
introduced as follows.
For $\ell \in {\mathbb N}$ consider the space
\begin{displaymath}
{\mathcal H}^\ell =
\left\{ u = \sum_{k \in {\mathbb N}_0^d} \alpha_k \phi_k : \| u \|_{{\mathcal H}^\ell}
< \infty \right\} \;,
\end{displaymath}
where
\begin{displaymath}
\|u\|_{{\mathcal H}^\ell} = \left( \sum_{k \in {\mathbb N}_0^d} \left( 1 +
\kappa_k^\ell \right) \alpha_k^2 \right)^{1/2} \; .
\end{displaymath}
One can easily verify that this is equivalent to the definition
\begin{displaymath}
\|u\|^2_{{\mathcal H}^\ell} =
\|u \|_{L^2}^2 + \left\| (-\Delta)^{\ell/2} u \right\| ^2_{L^2} \;,
\end{displaymath}
where~$\| \cdot \|_{L^2}$ denotes the standard $L^2(\Omega)$-norm
on the domain~$\Omega$ as mentioned above, and the fractional Laplacian
for odd~$\ell$ is defined spectrally via the eigenfunction expansion.
We note that we have incorporated the boundary conditions
of~(\ref{eqn:dbcp}) into our definition of the spaces
${\mathcal H}^\ell$. For example,
\begin{eqnarray*}
{\mathcal H}^1 &=& H^1(\Omega), \\
{\mathcal H}^2 &=& \left\{ u \in H^2(\Omega): \frac{\partial u}{\partial \nu} =0 \right\} \;, \quad\mbox{ and } \\
{\mathcal H}^4 &=& \left\{ u \in H^4(\Omega): \frac{\partial u}{\partial \nu} = \frac{\partial \Delta u}{\partial \nu} =0 \right\} \;,
\end{eqnarray*}
where the boundary conditions in the second and third equations are considered in the sense of the trace operator.
The first identity follows as a special case from the results
in~\cite{huseynov:shykhammedov:12a, muradov:salmanov:14a}, the second
identity has been established in~\cite[Lemma~3.2]{maier:wanner:00a}, and the third
identity can be verified in the same way as in~\cite[Lemma~3.2]{maier:wanner:00a}.
For the sake of simplicity
we further define ${\mathcal H}^0 = L^2(\Omega)$.
While the spaces~${\mathcal H}^\ell$ incorporate the boundary conditions
of~(\ref{eqn:dbcp}), recall that we have reformulated the diblock
copolymer equation in such a way that solutions satisfy the integral
constraint $\int_\Omega u \; dx = 0$, since the case of nonzero average
has been absorbed into the placement of the parameter~$\mu$. In order
to treat this additional constraint, we therefore need to restrict the
spaces~${\mathcal H}^\ell$ further. Consider now an {\em arbitrary integer\/}~$\ell
\in {\mathbb Z}$ and define the space
\begin{equation} \label{defhlbar}
\overline{{\mathcal H}}^\ell =
\left\{ u = \sum_{k \in {\mathbb N}_0^d, \; |k|>0} \alpha_k \phi_k
\; : \; \| u \|_{\overline{{\mathcal H}}^\ell} < \infty \right\} \;,
\end{equation}
where we use the modified norm
\begin{equation} \label{defhlbarnorm}
\| u \|_{\overline{{\mathcal H}}^\ell} =
\left( \sum_{k \in {\mathbb N}_0^d, \; |k|>0} \kappa_k^\ell \alpha_k^2
\right)^{1/2} \;.
\end{equation}
Notice that for $\ell = 0$ this definition reduces to the subspace
of~$L^2(\Omega)$ of all functions with average zero equipped with its
standard norm, since we removed the constant basis function from the
Fourier series. For $\ell > 0$ one can easily see that
$\overline{{\mathcal H}}^\ell \subset {\mathcal H}^\ell$, and that the new norm
is equivalent to our norm on~${\mathcal H}^\ell$. We still need to shed
some light on the new definition~(\ref{defhlbar}) for
negative integers $\ell < 0$. In this case, the series
in~(\ref{eqn:ucossum}) is interpreted formally, i.e., the element
$u \in \overline{{\mathcal H}}^\ell$ for $\ell < 0$ is identified with the
sequence of its Fourier coefficients. Moreover, one can easily see
that in this case~$u$ acts as a bounded linear functional
on~$\overline{{\mathcal H}}^{-\ell}$. In fact, for all~$\ell < 0$ the
space~$\overline{{\mathcal H}}^\ell$ can be considered as a subspace of
the negative exponent Sobolev space~$H^\ell(\Omega) =
W^{\ell,2}(\Omega)$, see again~\cite{adams:fournier:03a}.
Finally, for every $\ell \in {\mathbb Z}$ the space~$\overline{{\mathcal H}}^\ell$ is
a Hilbert space with inner product
\begin{displaymath}
(u,v)_{\overline{{\mathcal H}}^\ell} =
\sum_{k \in {\mathbb N}_0^d, \; |k|>0} \kappa_k^\ell \alpha_k \beta_k \;,
\end{displaymath}
where
\begin{displaymath}
u = \sum_{k \in {\mathbb N}_0^d, \; |k|>0} \alpha_k \phi_k
\in \overline{{\mathcal H}}^\ell
\qquad\mbox{ and }\qquad
v = \sum_{k \in {\mathbb N}_0^d, \; |k|>0} \beta_k \phi_k
\in \overline{{\mathcal H}}^\ell \;.
\end{displaymath}
The above spaces form the functional analytic backbone of this
paper, and they allow us to reformulate the equilibrium problem
for~(\ref{eqn:dbcp}) as a zero finding problem. Note first, however,
that the functions~$\phi_k$ can also be used to obtain an orthonormal
basis in~$\overline{{\mathcal H}}^\ell$. In fact, we only have to drop the
constant function~$\phi_0$ and apply the following rescaling.
\begin{lemma} \label{lemma:orthoghell}
The set $\left\{ \kappa_k^{-\ell/2} \phi_k(x) \right\}_{k \in {\mathbb N}_0^d,
\; |k|>0}$
forms a complete orthonormal set for the Hilbert space~$\overline{{\mathcal H}}^\ell$.
\end{lemma}
We close this section by briefly showing how the diblock copolymer
equilibrium problem can be stated as a zero set problem in our functional
analytic setting. For this, consider the operator
\begin{displaymath}
F : {\mathbb R}^3 \times X \to Y \; ,
\qquad\mbox{ with }\qquad
X = \overline{{\mathcal H}}^2
\quad\mbox{ and }\quad
Y = \overline{{\mathcal H}}^{-2} \; ,
\end{displaymath}
which is defined as
\begin{equation} \label{eqn:deffoperator}
F(\lambda,\sigma,\mu, u) =
-\Delta \left( \Delta u + \lambda f(u + \mu) \right)
- \lambda \sigma u \; .
\end{equation}
The problem is now formulated weakly, and in particular, the second boundary
condition $\partial (\Delta u) / \partial \nu = 0$ is no longer explicitly
stated in this weak formulation. Note, however, that the first boundary
condition $ \partial u / \partial \nu = 0$ has been incorporated into the
space~$X = \overline{{\mathcal H}}^2$.
The fact that $f$ is $C^2$ is sufficient to guarantee that the function~$F$
maps $X$ to $Y$, since we only consider domains up to dimension three.
Then for fixed parameters, an equilibrium solution~$u$ to the diblock
copolymer equation~(\ref{eqn:dbcp}) is a function which satisfies the
identity $F(\lambda,\sigma,\mu, u) = 0$. Moreover, the Fr\'echet derivative
of the operator~$F$ with respect to~$u$ at this equilibrium is given by
\begin{equation} \label{eqn:deffrechetderivative}
D_u F(\lambda, \sigma, \mu, u) [v] =
- \Delta \left( \Delta v + \lambda f'(u + \mu)v \right)
- \lambda \sigma v \; .
\end{equation}
In our formulation, the boundary and integral conditions which are
part of~(\ref{eqn:dbcp}) have been incorporated into the choice of
the domain $X = \overline{{\mathcal H}}^2$ of the nonlinear operator~$F$.
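To illustrate formulas~(\ref{eqn:deffoperator}) and~(\ref{eqn:deffrechetderivative}) concretely, the following Python sketch implements a crude one-dimensional spectral discretization of~$F$ for the standard choice $f(u) = u - u^3$, and compares the derivative formula against a central finite difference quotient. All parameter values and truncation sizes are illustrative, and the trapezoidal projection is a simple stand-in, not the rigorous machinery developed later in the paper.

```python
import math

N, M = 6, 512                       # Fourier modes / quadrature points
lam, sig, mu = 4.0, 2.0, 0.1        # illustrative parameter values
f  = lambda w: w - w**3             # nonlinearity
fp = lambda w: 1.0 - 3.0 * w**2     # its derivative

xs = [j / M for j in range(M + 1)]
def basis(k, x):                    # phi_k(x) = sqrt(2) cos(k pi x), k >= 1
    return math.sqrt(2) * math.cos(k * math.pi * x)

def to_grid(a):                     # u(x) = sum_k a_k phi_k(x) on the grid
    return [sum(a[k - 1] * basis(k, x) for k in range(1, N + 1)) for x in xs]

def project(vals, k):               # trapezoidal approximation of <vals, phi_k>
    w = [0.5] + [1.0] * (M - 1) + [0.5]
    return sum(wi * v * basis(k, x) for wi, v, x in zip(w, vals, xs)) / M

def F(a):           # coefficients of -Lap(Lap u + lam f(u + mu)) - lam sig u
    vals = [f(v + mu) for v in to_grid(a)]
    g = [project(vals, k) for k in range(1, N + 1)]
    return [(math.pi * k)**2 * (-(math.pi * k)**2 * a[k - 1] + lam * g[k - 1])
            - lam * sig * a[k - 1] for k in range(1, N + 1)]

def DF(a, b):       # action of the Frechet derivative at u = a on v = b
    ug, bg = to_grid(a), to_grid(b)
    vals = [fp(u + mu) * v for u, v in zip(ug, bg)]
    g = [project(vals, k) for k in range(1, N + 1)]
    return [(math.pi * k)**2 * (-(math.pi * k)**2 * b[k - 1] + lam * g[k - 1])
            - lam * sig * b[k - 1] for k in range(1, N + 1)]

a = [0.3, -0.2, 0.1, 0.05, -0.04, 0.02]     # coefficients of u
b = [0.1, 0.2, -0.1, 0.03, 0.02, -0.01]     # direction v
eps = 1e-6
a_p = [ai + eps * bi for ai, bi in zip(a, b)]
a_m = [ai - eps * bi for ai, bi in zip(a, b)]
fd = [(p - m) / (2 * eps) for p, m in zip(F(a_p), F(a_m))]
an = DF(a, b)                               # analytic derivative formula
```

The finite-difference quotient and the analytic formula agree up to discretization and rounding error, which is exactly the consistency asserted by the derivative formula.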
\subsection{Constructive Sobolev embedding and Banach algebra constants}
\label{sec:sob}
For classical Sobolev embedding theorems, it is sufficient to write
statements such as ``the Sobolev space~${\mathcal H}^2$ can be continuously embedded
into~$L^\infty(\Omega)$,'' without worrying about the specific constants needed
to do so. However, for the purpose of computer-assisted proofs, such
statements are insufficient. Instead we need specific numerical bounds
to compare the norms of a function or product of functions when
considered in different spaces. Parallel to the name constructive
implicit function theorem, we refer to the bounds on the constants as
{\em constructive Sobolev embedding constants\/}. In addition, we will
need a {\em constructive Banach algebra estimate\/} on the relationship between
$\| u v \|_{{\mathcal H}^2}$ and the product $\|u\|_{{\mathcal H}^2} \|v\|_{{\mathcal H}^2}$. In
particular, we require explicit values for the constants $C_m$, $\overline{C}_m$, and
$C_b$ in one, two, and three dimensions appearing in the following
estimates:
\begin{eqnarray}
\|u \|_\infty &\le& C_m \; \| u \|_{{\mathcal H}^2} \;,
\qquad\qquad\mbox{ for all } u \in {\mathcal H}^2 \;,
\nonumber \\[1ex]
\| u \|_\infty &\le& \overline{C}_m \; \| u \|_{\overline{{\mathcal H}}^2} \;,
\qquad\qquad\mbox{ for all } u \in \overline{{\mathcal H}}^2
\label{eqn:cmcb} \;,\\[1.5ex]
\| u v \|_{{\mathcal H}^2} &\le& C_b \; \| u \|_{{\mathcal H}^2} \| v \|_{{\mathcal H}^2}
\;, \quad\;\;\;\mbox{ for all } u,v \in {\mathcal H}^2 \nonumber \;.
\end{eqnarray}
The values of~$C_m$ and~$C_b$ in dimensions~$1$, $2$, and~$3$ were
established in~\cite{wanner:18b} using rigorous computational techniques.
The values of~$\overline{C}_m$ can be obtained by adapting the approach
of that paper, as outlined in the next lemma. Table~\ref{table1}
summarizes the values of all necessary constants.
\begin{table}
\begin{center}
\begin{tabular}{|c||c|c|c|}
\hline
Dimension $d$ & $1$ & $2$ & $3$ \\ \hline\hline
Sobolev Embedding Constant $C_m$ & $1.010947$ &
$1.030255$ & $1.081202$ \\ \hline
Sobolev Embedding Constant $\overline{C}_m$ &
$0.149072$ & $0.248740$ & $0.411972$ \\ \hline
Banach Algebra Constant $C_b$ &
$1.471443$ & $1.488231$ & $1.554916$ \\ \hline
\end{tabular}
\vspace*{0.3cm}
\caption{\label{table1}
These values are rigorous upper bounds for the embedding
constants in~(\ref{eqn:cmcb}).}
\end{center}
\end{table}
\begin{lemma}[Sobolev embedding for the zero mass case]
\label{lemma:sobzeromass}
For all functions $u \in \overline{{\mathcal H}}^2$ we have the estimate
\begin{equation} \label{lemma:sobzeromass1}
\| u \|_\infty \quad\le\quad
\| u \|_{\overline{{\mathcal H}}^2} \cdot \left( \sum_{k \in {\mathbb N}_0^d, \; |k|>0}
c_k^2 \kappa_k^{-2} \right)^{1/2} \quad\le\quad
\overline{C}_m \| u \|_{\overline{{\mathcal H}}^2} \; ,
\end{equation}
where the value of the constant~$\overline{C}_m$ is given
in Table~\ref{table1}.
\end{lemma}
\begin{proof}
Suppose that $u \in \overline{{\mathcal H}}^2$ is given by
$u = \sum_{k \in {\mathbb N}_0^d, \; |k|>0} \alpha_k \phi_k$.
According to the definition of the functions~$\phi_k$
we have $\| \phi_k \|_\infty = c_k$, which immediately implies
for all $x \in \Omega$ the estimate
\begin{eqnarray*}
|u(x)| & \le & \sum_{k \in {\mathbb N}_0^d, \; |k|>0} \left| \alpha_k \right|
\, \left| \phi_k(x) \right|
\; \le \;
\sum_{k \in {\mathbb N}_0^d, \; |k|>0} \left| \alpha_k \right| c_k
\; = \;
\sum_{k \in {\mathbb N}_0^d, \; |k|>0} \left| \alpha_k \right|
\kappa_k \cdot \frac{c_k}{\kappa_k} \\[2ex]
& \le & \left( \sum_{k \in {\mathbb N}_0^d, \; |k|>0} \alpha_k^2
\kappa_k^2 \right)^{1/2} \cdot
\left( \sum_{k \in {\mathbb N}_0^d, \; |k|>0} c_k^2 \kappa_k^{-2} \right)^{1/2} \; ,
\end{eqnarray*}
and together with~(\ref{defhlbarnorm}) this immediately establishes the
first estimate in~(\ref{lemma:sobzeromass1}).
In order to complete the proof one only has to find a rigorous upper
bound on the second factor in the last line of the above estimate.
For this, one can first use the proof of~\cite[Corollary~3.3]{wanner:18b}
to establish the tail bound
\begin{displaymath}
\sum_{k \in {\mathbb N}_0^d, \; |k| \ge N} c_k^2 \kappa_k^{-2}
\; \le \;
\frac{2^d}{\pi^4} \cdot \gamma_d(N) \; ,
\end{displaymath}
where~$\gamma_d(N)$ is explicitly defined in~\cite[Equation~(16)]{wanner:18b}.
This in turn yields the estimate
\begin{displaymath}
\sum_{k \in {\mathbb N}_0^d, \; |k|>0} c_k^2 \kappa_k^{-2}
\quad\le\quad
\sum_{k \in {\mathbb N}_0^d, \; 0<|k| < N} c_k^2 \kappa_k^{-2} \; + \;
\frac{2^d}{\pi^4} \cdot \gamma_d(N) \; .
\end{displaymath}
Evaluating the finite sum and the tail bound using interval arithmetic
and $N = 1000$ then furnishes the constant in Table~\ref{table1}.
\end{proof}
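The truncated sums in the above proof are easy to reproduce in plain floating-point arithmetic. The following Python sketch evaluates the finite part of the series for modest truncation sizes; unlike the interval computation in the proof it carries no rigorous error control, so it only approximately reproduces the upper bounds of Table~\ref{table1}:

```python
import math
import itertools

def cbar_m_truncated(d, N):
    """Square root of sum_{0 < |k|, k in {0,..,N-1}^d} c_k^2 / kappa_k^2."""
    total = 0.0
    for k in itertools.product(range(N), repeat=d):
        n2 = sum(i * i for i in k)
        if n2 == 0:
            continue                       # skip the constant mode
        ck2 = 1.0
        for i in k:
            ck2 *= 1.0 if i == 0 else 2.0  # c_0^2 = 1 and c_l^2 = 2
        total += ck2 / (math.pi**2 * n2) ** 2
    return math.sqrt(total)

# In one dimension the full series is 2 zeta(4) / pi^4 = 1/45 in closed form
val1 = cbar_m_truncated(1, 2000)           # approx sqrt(1/45) = 0.1490712...
val2 = cbar_m_truncated(2, 300)            # approx 0.24874
val3 = cbar_m_truncated(3, 80)             # approx 0.41; converges slowly
```

The slow convergence in dimension three is the reason the proof combines a large finite sum with the explicit tail bound rather than relying on truncation alone.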
The next lemma derives explicit bounds for the norm equivalence
of the norms on the Hilbert spaces~$\overline{{\mathcal H}}^2$ and on~${\mathcal H}^2$,
which contain functions of zero and nonzero average, respectively.
\begin{lemma}[Norm equivalence between zero and nonzero mass]
\label{lemma:sobnobar2bar}
For all $u \in \overline{{\mathcal H}}^2$ we have
\begin{displaymath}
\| u \|_{\overline{{\mathcal H}}^2} \le
\| u \|_{{\mathcal H}^2} \le
\frac{\sqrt{1 + \pi^4}}{\pi^2} \| u \|_{\overline{{\mathcal H}}^2} \;.
\end{displaymath}
\end{lemma}
\begin{proof}
The first inequality is clear from the definitions of the two
norms in the last section, since $\kappa_k^2 \le 1 + \kappa_k^2$.
For the second inequality, note that for $|k|>0$ one has the
inequality $\kappa_k = \pi^2 |k|^2 \ge \pi^2$, and therefore
\begin{displaymath}
1+ \kappa_k^2 =
\kappa_k^2 \left( 1 + \frac{1}{\kappa_k^2} \right) \le
\kappa_k^2 \left(1 + \frac{1}{\pi^4} \right) =
\kappa_k^2 \, \frac{1 + \pi^4}{\pi^4} \;.
\end{displaymath}
This in turn implies
\begin{displaymath}
\| u \|_{{\mathcal H}^2}^2 =
\sum_{k \in {\mathbb N}_0^d, \; |k|>0} (1 + \kappa_k^2) \alpha_k^2 \le
\frac{1 + \pi^4}{\pi^4} \sum_{k \in {\mathbb N}_0^d, \; |k|>0}
\kappa_k^2 \alpha_k^2 =
\frac{1 + \pi^4}{\pi^4} \| u \|_{\overline{{\mathcal H}}^2}^2 \; ,
\end{displaymath}
which completes the proof of the lemma.
\end{proof}
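The sandwich inequality of the lemma can be sanity-checked on randomly chosen coefficient sequences. The following Python sketch is a plain floating-point illustration of the statement in one space dimension and is not part of the rigorous argument:

```python
import math
import random

random.seed(0)
ks = range(1, 31)                        # 1-D modes k = 1..30 (zero average)
alpha = {k: random.uniform(-1.0, 1.0) for k in ks}
kappa = {k: math.pi**2 * k**2 for k in ks}

norm_bar = math.sqrt(sum(kappa[k]**2 * alpha[k]**2 for k in ks))
norm_h2  = math.sqrt(sum((1.0 + kappa[k]**2) * alpha[k]**2 for k in ks))
factor   = math.sqrt(1.0 + math.pi**4) / math.pi**2
# Lemma: norm_bar <= norm_h2 <= factor * norm_bar
```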
Note that from the above lemma one could conclude
$\overline{C}_m \le (\sqrt{1 + \pi^4}/\pi^2) C_m$, but the results
given in Lemma~\ref{lemma:sobzeromass} are around an order of
magnitude better.
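As a quick numerical aside, the equivalence constant in the lemma above is barely larger than one, so on zero-average functions the two norms are nearly identical. A minimal floating-point check (illustrative only, not part of any proof):

```python
import math

# Norm equivalence constant sqrt(1 + pi^4) / pi^2 from the lemma above.
c = math.sqrt(1.0 + math.pi**4) / math.pi**2
print(c)   # approximately 1.00512
```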
Our specific norm choice on the spaces~$\overline{{\mathcal H}}^\ell$
has some convenient implications for its relation to the
Laplacian operator~$\Delta$. Clearly for any function
$u \in \overline{{\mathcal H}}^\ell$ we have both $\Delta u \in
\overline{{\mathcal H}}^{\ell-2}$ and $\Delta^{-1} u \in
\overline{{\mathcal H}}^{\ell+2}$. Furthermore, if~$u$
is of the form
\begin{displaymath}
u = \sum_{k \in {\mathbb N}_0^d, \; |k|>0} \alpha_k \phi_k \;,
\qquad\mbox{ then }\qquad
-\Delta u = \sum_{k \in {\mathbb N}_0^d, \; |k|>0}
\kappa_k \alpha_k \phi_k \;,
\end{displaymath}
and we obtain the representation for $-\Delta^{-1}u$ if we
replace~$\kappa_k$ in the last sum by~$\kappa_k^{-1}$. This
immediately yields
\begin{eqnarray*}
\| \Delta u \|_{\overline{{\mathcal H}}^{\ell-2}}^2 &=&
\sum_{k \in {\mathbb N}_0^d, \; |k|>0} \kappa_k^{\ell-2}
\kappa_k^2 \alpha_k^2 \;, \\[1.5ex]
\| u \|_{\overline{{\mathcal H}}^\ell}^2 &=&
\sum_{k \in {\mathbb N}_0^d, \; |k|>0} \kappa_k^\ell
\alpha_k^2 \;, \\[1.5ex]
\| \Delta^{-1} u \|_{\overline{{\mathcal H}}^{\ell+2}}^2 &=&
\sum_{k \in {\mathbb N}_0^d, \; |k|>0} \kappa_k^{\ell+2}
\kappa_k^{-2} \alpha_k^2 \; ,
\end{eqnarray*}
and altogether we have verified the following lemma.
\begin{lemma}[The Laplacian is an isometry] \label{lemma:lapiso}
For every $\ell \in {\mathbb Z}$ the Laplacian operator~$\Delta$ is an isometry
from~$\overline{{\mathcal H}}^\ell$ to~$\overline{{\mathcal H}}^{\ell-2}$, i.e., we have
\begin{eqnarray*}
\| \Delta^{-1} u \|_{\overline{{\mathcal H}}^{\ell+2}} =
\| u \|_{\overline{{\mathcal H}}^\ell} =
\| \Delta u \|_{\overline{{\mathcal H}}^{\ell-2}} \; .
\end{eqnarray*}
\end{lemma}
To close this section we present a final result which relates the
standard norm in the Hilbert space~$\overline{{\mathcal H}}^\ell$ to the norm
in~$\overline{{\mathcal H}}^m$ if~$\ell \le m$. This inequality will turn out
to be useful later on.
\begin{lemma}[Relating the norms in $\overline{{\mathcal H}}^\ell$ and $\overline{{\mathcal H}}^m$]
\label{lemma:bscaleest}
For all $u \in \overline{{\mathcal H}}^m$ and all $\ell \le m$ we have the estimate
\begin{displaymath}
\| u \|_{\overline{{\mathcal H}}^\ell} \; \le \;
\frac{1}{\pi^{m-\ell}} \, \| u \|_{\overline{{\mathcal H}}^m}\;.
\end{displaymath}
Furthermore, note that in the special case $\ell = 0 \le m$ we have
$\| u \|_{\overline{{\mathcal H}}^0} = \| u \|_{L^2}$.
\end{lemma}
\begin{proof}
Suppose that $u \in \overline{{\mathcal H}}^m$ is given by
$u = \sum_{k \in {\mathbb N}_0^d, \; |k|>0} \alpha_k \phi_k$.
Then we have
\begin{displaymath}
\| u \|_{\overline{{\mathcal H}}^\ell}^2 \; = \; \sum_{k \in {\mathbb N}_0^d, \; |k|>0}
\frac{\kappa_k^m \alpha_k^2}{\kappa_k^{m-\ell}} \; \le \;
\frac{1}{\pi^{2(m-\ell)}} \sum_{k \in {\mathbb N}_0^d, \; |k|>0}
\kappa_k^m \alpha_k^2 \; = \;
\frac{1}{\pi^{2(m-\ell)}} \| u \|_{\overline{{\mathcal H}}^m}^2 \; ,
\end{displaymath}
since for all $|k| > 0$ one has $\kappa_k \ge \pi^2$.
\end{proof}
\subsection{Projection operators}
\label{sec:proj}
In order to establish computer-assisted existence proofs for
equilibrium solutions of~(\ref{eqn:dbcp}) one needs to work
with suitable finite-dimensional approximations. In our framework,
we use truncated cosine series, and this is formalized in the current
section through the introduction of suitable projection operators.
For this, let~$N \in {\mathbb N}$ denote a positive integer, and
consider $u \in {\mathcal H}^\ell$ for $\ell \in {\mathbb N}_0$, or alternatively
$u \in \overline{{\mathcal H}}^\ell$ for $\ell \in {\mathbb Z}$, of the form
$u = \sum_{k \in {\mathbb N}_0^d} \alpha_k \phi_k$, where in the latter
case $\alpha_0 = 0$. Then we define the projection
\begin{equation} \label{eqn:defpn}
P_N u = \sum_{k \in {\mathbb N}_0^d, \; |k|_\infty< N} \alpha_k \phi_k \; .
\end{equation}
Note that in this definition we use the $\infty$-norm of the
multi-index~$k$, since this simplifies the implementation of our
method. The so-defined operator~$P_N$ is a bounded linear operator
on~${\mathcal H}^\ell$ with induced operator norm~$\| P_N \| = 1$, and one
can easily see that it leaves the space~$\overline{{\mathcal H}}^\ell$
invariant if $\ell \in {\mathbb Z}$. Furthermore, it is straightforward to
show that for any $N \in {\mathbb N}$ we have
\begin{displaymath}
\dim P_N {{\mathcal H}}^\ell = N^d
\qquad\mbox{ and }\qquad
\dim P_N \overline{{\mathcal H}}^\ell = N^d - 1 \;.
\end{displaymath}
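The dimension counts above amount to a simple enumeration: the modes kept by~$P_N$ are exactly the multi-indices in $\{0,\dots,N-1\}^d$, and the zero-average spaces additionally drop the constant mode $k = 0$. A short sketch:

```python
from itertools import product

# The modes kept by P_N are the multi-indices k in {0, ..., N-1}^d,
# i.e. those with |k|_inf < N; for the zero-average spaces the constant
# mode k = 0 is removed, which accounts for exactly one dimension.
def dim_PN(N, d, zero_mass=False):
    modes = [k for k in product(range(N), repeat=d)
             if not (zero_mass and max(k) == 0)]
    return len(modes)

print(dim_PN(4, 2))                   # 16 = 4^2
print(dim_PN(4, 2, zero_mass=True))   # 15 = 4^2 - 1
```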
For all $\ell \in {\mathbb N}_0$ we would like to point out that
$(I - P_1) {{\mathcal H}}^\ell = \overline{{\mathcal H}}^\ell$. Since this is
an especially useful operator, we introduce the abbreviation
\begin{equation}
\overline{P} = I - P_1 \; .
\end{equation}
The operator~$\overline{P}$ satisfies the following useful
identity.
\begin{lemma}
\label{lemma:pbarl2prod}
For arbitrary $u \in {\mathcal H}^0$ and $v \in \overline{{\mathcal H}}^0$ we
have the equality
\begin{displaymath}
\left( \overline{P} u, v \right)_{L^2} =
(u,v)_{L^2} \; .
\end{displaymath}
\end{lemma}
\begin{proof}
This result can be established via direct calculation. Note that
\begin{eqnarray*}
\left( \overline{P} u, v \right)_{L^2} &=&
(u - \alpha_0 \phi_0, v)_{L^2} =
(u,v)_{L^2} - \alpha_0 (\phi_0,v)_{L^2} \\
&=& (u,v)_{L^2} - \alpha_0 \int_\Omega v(x) \; dx =
(u,v)_{L^2} - 0 \; ,
\end{eqnarray*}
where for the last step we used the fact that $v \in \overline{{\mathcal H}}^0$.
\end{proof}
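A discrete analogue of this identity is easy to check numerically. In the following non-rigorous sketch, $\overline{P}$ becomes subtraction of the grid mean, and the pairing against a zero-average function is indeed unchanged:

```python
import numpy as np

# Discrete analogue of the lemma: subtracting the mean of u (the grid
# version of Pbar applied to u) leaves the L2 pairing with any
# zero-average v unchanged. Illustrative sketch on a midpoint grid.
rng = np.random.default_rng(7)
x = (np.arange(2000) + 0.5) / 2000.0
u = rng.standard_normal(5) @ np.array([np.cos(k * np.pi * x) for k in range(5)])
v = np.cos(2 * np.pi * x) + 0.3 * np.cos(3 * np.pi * x)   # zero average
Pbar_u = u - u.mean()
assert abs(np.mean(Pbar_u * v) - np.mean(u * v)) < 1e-9
```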
We close this section by deriving a norm bound for the
infinite cosine series part that is discarded by the
projection~$P_N$ in terms of a higher-regularity norm.
More precisely, we have the following.
\begin{lemma}[Projection tail estimates]
\label{lemma:projtailest}
Consider two integers $\ell \le m$ and let the function $u \in \overline{{\mathcal H}}^m$
be arbitrary. Then the projection tail~$(I - P_N)u$ satisfies
\begin{displaymath}
\| (I - P_N) u \|_{\overline{{\mathcal H}}^\ell} \; \le \;
\frac{1}{\pi^{m - \ell} N^{m - \ell}} \,
\| (I - P_N) u \|_{\overline{{\mathcal H}}^m} \; \le \;
\frac{1}{\pi^{m - \ell} N^{m - \ell}} \,
\| u \|_{\overline{{\mathcal H}}^m} \;.
\end{displaymath}
\end{lemma}
\begin{proof}
Suppose that $u \in \overline{{\mathcal H}}^m$ is given by
$u = \sum_{k \in {\mathbb N}_0^d, \; |k|>0} \alpha_k \phi_k$.
Then we have
\begin{eqnarray*}
\| (I-P_N) u \|_{\overline{{\mathcal H}}^\ell}^2 & = &
\sum_{k \in {\mathbb N}_0^d, \;|k|_\infty \ge N} \kappa_k^\ell \alpha_k^2 \; = \;
\sum_{k \in {\mathbb N}_0^d, \;|k|_\infty \ge N} \frac{\kappa_k^m \alpha_k^2}
{\kappa_k^{m-\ell}} \\[1.5ex]
& \le & \sum_{k \in {\mathbb N}_0^d, \;|k|_\infty \ge N}
\frac{\kappa_k^m \alpha_k^2}{(\pi^2 N^2)^{m-\ell}} \; = \;
\frac{1}{(\pi^2 N^2)^{m-\ell}} \;
\| (I-P_N) u \|_{\overline{{\mathcal H}}^m}^2 \; ,
\end{eqnarray*}
since the estimate $|k|_\infty \ge N$ yields $|k| \ge N$.
\end{proof}
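The tail estimate can be sanity-checked numerically in one spatial dimension. The following floating-point sketch (illustrative only) uses $\kappa_k = \pi^2 k^2$ and an arbitrary decaying coefficient sequence:

```python
import math, random

# One-dimensional floating-point check of the tail estimate with
# kappa_k = pi^2 k^2: the Hbar^l norm of (I - P_N) u is bounded by
# (pi N)^{-(m - l)} times the Hbar^m norm of u.
random.seed(0)
N, m, l = 8, 2, 0
alpha = {k: random.uniform(-1.0, 1.0) / k**3 for k in range(1, 200)}
kappa = lambda k: math.pi**2 * k**2

tail_l = math.sqrt(sum(kappa(k)**l * a**2 for k, a in alpha.items() if k >= N))
norm_m = math.sqrt(sum(kappa(k)**m * a**2 for k, a in alpha.items()))
assert tail_l <= norm_m / (math.pi * N)**(m - l)
```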
\section{Derivative inverse estimate}\label{subsec:die}
This section is devoted to establishing the derivative inverse bound in
hypothesis~(H2), which is required for Theorem~\ref{nift:thm}, the
constructive implicit function theorem. More precisely, our goal in
the following is to derive a constant~$K$ such that
\begin{displaymath}
\left\| (D_u F)^{-1} \right\|_{{\mathcal L}(Y,X)} \le K \; ,
\end{displaymath}
i.e., we need to find a bound on the operator norm of the inverse
of the Fr\'echet derivative of~$F$ with respect to~$u$. We divide
the derivation of this estimate into four parts. In
Section~\ref{sec:investoutline} we give an outline of our approach,
introduce necessary definitions and auxiliary results, and present
the main result of this section. This result will be verified in the
following three sections. First, we discuss the finite-dimensional
projection of~$D_u F$ in Section~\ref{sec:findim}. Using this
finite-dimensional operator, we then construct an approximative
inverse to the Fr\'echet derivative in Section~\ref{sec:approxinverse},
before everything is assembled to provide the desired estimate in
the final Section~\ref{sec:comprof}.
\subsection{General outline and auxiliary results}
\label{sec:investoutline}
For convenience of notation in the subsequent discussion, for
fixed parameters and~$u$ we abbreviate the Fr\'echet derivative
of~$F$ by
\begin{equation} \label{eqn:ldef}
L v = D_u F(\lambda, \sigma, \mu, u) [v] \; , \quad
L \in {\mathcal L}( X, Y ) \; , \quad\mbox{ with }\quad
X = \overline{{\mathcal H}}^{2} \; , \quad
Y = \overline{{\mathcal H}}^{-2} \; .
\end{equation}
Standard results imply that~$L$ is a bounded linear
operator $L \in {\mathcal L}(\overline{{\mathcal H}}^{2},\overline{{\mathcal H}}^{-2})$, which
explicitly is given by
\begin{equation}\label{eqn:ldefinition}
L v = -\Delta ( \Delta v + \lambda \, f'(u + \mu) v ) -
\lambda \sigma v \; .
\end{equation}
More precisely, note that since the nonlinearity~$f$ is twice continuously differentiable,
and in view of Sobolev's imbedding recalled in~(\ref{eqn:cmcb}), the
function~$f'(u + \mu)$ is continuous on~$\overline{\Omega}$, which
makes the product $\lambda f'(u + \mu) v$ an $L^2(\Omega)$-function,
and therefore $- \Delta(\lambda f'(u + \mu) v) \in \overline{{\mathcal H}}^{-2}$.
We will also use the abbreviation
\begin{equation} \label{eqn:defq}
q(x) = \lambda f'(u(x) + \mu) \;.
\end{equation}
As mentioned earlier, the constructive implicit function theorem
crucially relies on being able to find a bound~$K$ such that
$\|L^{-1}\| \le K$. Our goal is to do so by using a finite-dimensional
approximation for~$L$, since that can be analyzed via rigorous
computational means. Our finite-dimensional approximation for~$L$
is given as follows. For fixed $N \in {\mathbb N}$ define the finite-dimensional
spaces
\begin{displaymath}
X_N = P_N X
\qquad\mbox{ and }\qquad
Y_N = P_N Y \; ,
\end{displaymath}
where the projection operator is given in~(\ref{eqn:defpn}).
Define $L_N: X_N \to Y_N$ by
\begin{equation} \label{eqn:defln}
L_N = \left. P_N L \right|_{X_N} \; .
\end{equation}
Let~$K_N$ be a bound on the inverse of the finite-dimensional
operator~$L_N$, i.e., suppose that
\begin{equation} \label{eqn:defkn}
\left\| L_N^{-1} \right\|_{{\mathcal L}(Y_N,X_N)} \le K_N \; ,
\end{equation}
where the spaces~$X_N$ and~$Y_N$ are equipped with the norms
of~$X$ and~$Y$, respectively. We will discuss further details
on appropriate coordinate systems and the actual computation
of both~$L_N$ and~$K_N$ in Section~\ref{sec:findim}. Our main
result for this section is as follows.
\begin{theorem}[Derivative inverse estimate]
\label{thm:k}
Assume there is a constant~$\tau > 0$ and an integer $N \in {\mathbb N}$ such that
\begin{displaymath}
\frac{1}{\pi^2 N^2} \sqrt{ K_N^2 \, \| q \|_\infty^2 +
C_b^2 \, \frac{1+\pi^4}{\pi^4} \, \| q \|_{{\mathcal H}^2}^2}
\; \le \; \tau \; < \; 1 \; ,
\end{displaymath}
where~$K_N$ and~$q$ are defined in~(\ref{eqn:defkn})
and~(\ref{eqn:defq}), respectively. Then the derivative
operator~$L$ in~(\ref{eqn:ldefinition}) satisfies
\begin{displaymath}
\left\| L^{-1} \right\|_{{\mathcal L}(Y,X)} \le
\frac{\max ( K_N, 1) }{1-\tau} \; .
\end{displaymath}
\end{theorem}
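In practice, the hypothesis of the theorem is verified by computing~$\tau$ from rigorously obtained constants. The following sketch shows the arithmetic only; all input values are placeholders and are not taken from any actual proof:

```python
import math

# Sketch of checking the hypothesis of the theorem above and forming
# the resulting bound K = max(K_N, 1) / (1 - tau). All input values
# below are placeholders; in the actual proofs they come from rigorous
# interval computations.
def derivative_inverse_bound(K_N, q_inf, q_H2, C_b, N):
    tau = math.sqrt(K_N**2 * q_inf**2
                    + C_b**2 * (1.0 + math.pi**4) / math.pi**4 * q_H2**2)
    tau /= math.pi**2 * N**2
    if tau >= 1.0:
        raise ValueError("hypothesis of the theorem fails; increase N")
    return max(K_N, 1.0) / (1.0 - tau)

print(derivative_inverse_bound(K_N=1.3, q_inf=6.0, q_H2=8.0, C_b=1.5, N=20))
```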
Before we begin to prove this main theorem, we state a
necessary result which is based on a Neumann series argument
to derive bounds on the operator norm of an inverse of an
operator. This is a standard functional-analytic technique,
which we state here for the reader's convenience. A proof
can be found in~\cite[Lemma~4]{sander:wanner:16a}.
\begin{proposition}[Neumann series inverse estimate]
\label{prop:neumann}
Let ${\mathcal A} \in {\mathcal L}(X,Y)$ be an arbitrary bounded linear operator
between two Banach spaces, and let ${\mathcal B} \in {\mathcal L}(Y,X)$ be one-to-one.
Assume that there exist positive constants~$\rho_1$ and~$\rho_2$
such that
\begin{displaymath}
\| I - {\mathcal B} {\mathcal A} \|_{{\mathcal L}(X,X)} \le \rho_1 < 1
\qquad\mbox{ and }\qquad
\|{\mathcal B}\|_{{\mathcal L}(Y,X)} \le \rho_2 \;.
\end{displaymath}
Then ${\mathcal A}$ is one-to-one and onto, and
\begin{displaymath}
\| {\mathcal A}^{-1}\|_{{\mathcal L}(Y,X)} \le \frac{\rho_2}{1-\rho_1} \;.
\end{displaymath}
\end{proposition}
In subsequent discussions, we will refer to~${\mathcal B}$ as an
{\em approximate inverse}.
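For readers who prefer a concrete finite-dimensional illustration, the following sketch applies the proposition to plain matrices. The matrices and the diagonal approximate inverse are ad hoc choices made for this example only:

```python
import numpy as np

# Finite-dimensional illustration with ad hoc matrices: B is an
# approximate inverse of A (here, the inverse of the unperturbed
# diagonal part). If rho1 = ||I - B A|| < 1 and rho2 = ||B||, then
# the proposition yields ||A^{-1}|| <= rho2 / (1 - rho1).
rng = np.random.default_rng(1)
A = np.diag([2.0, 3.0, 5.0]) + 0.05 * rng.standard_normal((3, 3))
B = np.diag([1.0 / 2.0, 1.0 / 3.0, 1.0 / 5.0])

rho1 = np.linalg.norm(np.eye(3) - B @ A, 2)
rho2 = np.linalg.norm(B, 2)
actual = np.linalg.norm(np.linalg.inv(A), 2)
assert rho1 < 1.0
assert actual <= rho2 / (1.0 - rho1)
```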
We are now ready to proceed with the proof of the main result
of the section, Theorem~\ref{thm:k}. For this, we fix all
parameters, as well as $u \in \overline{{\mathcal H}}^2$. Our goal is
to prove that~$L$ is one-to-one, onto, and has an inverse whose
operator norm is bounded by the value $K = \max(K_N,1)/(1-\tau)$.
\subsection{Finite-dimensional projections of the linearization}
\label{sec:findim}
In this section, we consider~$L_N$, the finite-dimensional projection
of the operator~$L$. The linear map~$L_N$ is tractable by rigorous
computational methods, since the inverse of a finite-dimensional
operator can be computed and bounded using numerical linear algebra. To
derive~$L_N$ in more detail, we recall the definitions of the
following projection spaces, all of which are Hilbert spaces:
\begin{displaymath}
\begin{array}{rclcrclcrcl}
X & = & \overline{{\mathcal H}}^2 \;, & \quad &
X_N & = & P_N X \; , & \quad &
X_\infty & = & (I- P_N) X \; , \\[1ex]
Y & = & \overline{{\mathcal H}}^{-2} \;, & \quad &
Y_N & = & P_N Y \; , & \quad &
Y_\infty & = & (I- P_N) Y \;.
\end{array}
\end{displaymath}
Recall that in~(\ref{eqn:defln}) we defined $L_N: X_N \to Y_N$ via
$L_N = \left. P_N L \right|_{X_N}$. In order to work with this operator
in a straightforward computational manner, we need to find its matrix
representation. Since both~$X_N$ and~$Y_N$ have the basis~$\phi_k$
for all $k \in {\mathbb N}_0^d$ with~$0 < |k|_\infty < N$, one obtains such
a matrix~$B = (b_{k,\ell}) \in {\mathbb R}^{(N^d-1) \times
(N^d-1)}$ via the definition
\begin{displaymath}
b_{k,\ell} = (L \phi_\ell,\phi_k)_{L^2} =
(L_N \phi_\ell,\phi_k)_{L^2} \; ,
\end{displaymath}
where~$k,\ell \in {\mathbb N}_0^d$ satisfy $0 < |k|_\infty < N$ and
$0 < |\ell|_\infty < N$.
The above matrix representation characterizes~$L_N$ on the
algebraic level in the following sense. If we consider a function
$v_N \in X_N$, introduce the representations
\begin{displaymath}
v_N = \sum_{k \in {\mathbb N}_0^d, \, 0 < |k|_\infty < N}
\alpha_k \phi_k(x)
\qquad\mbox{ and }\qquad
L_N v_N = \sum_{k \in {\mathbb N}_0^d, \, 0 < |k|_\infty < N}
\beta_k \phi_k(x) \; ,
\end{displaymath}
and if we collect the numbers~$\alpha_k$ and~$\beta_k$ in
vectors~$\alpha$ and~$\beta$ in the straightforward way, then
we have
\begin{displaymath}
\beta = B \alpha \; .
\end{displaymath}
This natural algebraic representation has one drawback. We would like
to use the regular Euclidean norm on real vector spaces, as well as
the induced matrix norm, to study the ${\mathcal L}(X_N,Y_N)$-norm of~$L_N$.
To achieve this, we recall Lemma~\ref{lemma:orthoghell} which shows that
the collection~$\{ \kappa_k^{-1} \phi_k(x) \}$ with~$k$ as above is
an orthonormal basis in $X_N \subset X$, and~$\{ \kappa_k \phi_k(x) \}$
is an orthonormal basis in $Y_N \subset Y$. Thus, we need to use the
representations
\begin{displaymath}
v_N = \sum_{k \in {\mathbb N}_0^d, \, 0 < |k|_\infty < N}
\tilde{\alpha}_k \kappa_k^{-1} \phi_k(x)
\qquad\mbox{ and }\qquad
L_N v_N = \sum_{k \in {\mathbb N}_0^d, \, 0 < |k|_\infty < N}
\tilde{\beta}_k \kappa_k \phi_k(x)
\end{displaymath}
instead of the ones given above. In order to pass back and forth
between these two representations we define the diagonal
matrix~$D \in {\mathbb R}^{(N^d-1) \times (N^d-1)}$, which carries the
values~$\kappa_k$ for $0 < |k|_\infty < N$ on its diagonal in the
chosen enumeration of the multi-indices; in the one-dimensional
case it takes the form
\begin{displaymath}
D = \left( \begin{array}{cccc}
\kappa_1 & 0 & \cdots & 0 \\
0 & \kappa_2 & \ddots & \vdots \\
\vdots & \ddots & \ddots & 0 \\
0 & \cdots & 0 & \kappa_{N-1}
\end{array} \right) \; .
\end{displaymath}
One can easily see that on the level of vectors we have
\begin{displaymath}
\alpha = D^{-1} \tilde{\alpha}
\quad\mbox{ and }\quad
\beta = D \tilde{\beta} \; ,
\quad\mbox{ and therefore }\quad
\tilde{\beta} = D^{-1} B D^{-1} \tilde{\alpha} \; .
\end{displaymath}
In view of Lemma~\ref{lemma:orthoghell} one then obtains
\begin{displaymath}
\left\| L_N \right\|_{{\mathcal L}(X_N,Y_N)} =
\| \tilde{B} \|_2
\qquad\mbox{ with }\qquad
\tilde{B} = D^{-1} B D^{-1} \; ,
\end{displaymath}
where~$\| \cdot \|_2$ denotes the regular induced $2$-norm of a matrix.
Moreover, one can verify that we also have the identity
\begin{equation} \label{eqn:lninversenorm}
\left\| L_N^{-1} \right\|_{{\mathcal L}(Y_N,X_N)} =
\left\| \tilde{B}^{-1} \right\|_{2} \; .
\end{equation}
In other words, this formula allows us to establish a rigorous upper
bound on the norm of this finite-dimensional inverse using interval
arithmetic.
So far our considerations applied to any bounded linear operator between
the spaces~$X$ and~$Y$. Specifically for the linearization of the diblock
copolymer equation we can derive an explicit formula for the matrix
entries~$b_{k,\ell}$. Recall that~$\phi_k$ as defined in~(\ref{eqn:phik})
is an eigenfunction for the negative Laplacian~$-\Delta$ with
eigenvalue~$\kappa_k$. Therefore, for all multi-indices $k,\ell \in {\mathbb N}_0^d$
with $0 < |k|_\infty < N$ and $0 < |\ell|_\infty < N$ one obtains
\begin{eqnarray}
b_{k,\ell} & = & (L \phi_\ell,\phi_k)_{L^2} \; = \;
(-\kappa_k^2 - \lambda \sigma) (\phi_k,\phi_\ell)_{L^2} -
(\Delta(\lambda f'(u+\mu) \phi_\ell),\phi_k)_{L^2}
\nonumber \\[1ex]
& = & (-\kappa_k^2 - \lambda \sigma) \delta_{k,\ell} -
(\Delta(q \phi_\ell),\phi_k)_{L^2} \nonumber \\[1ex]
& = & (-\kappa_k^2 - \lambda \sigma) \delta_{k,\ell} -
(q \phi_\ell,\Delta \phi_k)_{L^2} \nonumber \\[1ex]
& = & -\left( \kappa_k^2 + \lambda \sigma \right) \delta_{k,\ell} +
\kappa_k \left( q \phi_\ell, \phi_k \right)_{L^2} \;.
\label{eqn:defbkell}
\end{eqnarray}
The above formula explicitly gives the entries of the matrix~$B$.
For our computer-assisted proof, we are however interested in the
scaled matrix~$\tilde{B} = D^{-1} B D^{-1}$. One can immediately
verify that its entries~$\tilde{b}_{k,\ell}$ are given by
\begin{equation} \label{eqn:deftildebkell}
\tilde{b}_{k,\ell} \; = \;
-\left(1 + \frac{\lambda \sigma}{\kappa_k^2} \right) \delta_{k,\ell} +
\frac{1}{\kappa_\ell} (q \phi_\ell, \phi_k)_{L^2}
\quad\mbox{ with }\quad
q(x) = \lambda f'(\mu + u(x)) \; .
\end{equation}
In view of~(\ref{eqn:lninversenorm}), this formula will allow us
to bound the operator norm of the inverse of the finite-dimensional
projection~$L_N$ using techniques from interval arithmetic.
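The computation just described can be sketched in one spatial dimension as follows. This is a plain floating-point illustration, not the rigorous interval-arithmetic computation. The basis normalization $\phi_k(x) = \sqrt{2}\cos(k\pi x)$ on $\Omega = (0,1)$, the parameter values, and the profile~$q$ (built here from the cubic $f'(s) = 1 - 3s^2$) are assumptions made for this example:

```python
import numpy as np

# Floating-point sketch (not the rigorous interval computation) of the
# scaled matrix B~ and the bound K_N in one space dimension.
# Assumptions for this example: Omega = (0,1), normalized cosine basis
# phi_k(x) = sqrt(2) cos(k pi x), kappa_k = pi^2 k^2, sample parameter
# values, and a profile q built from the cubic f'(s) = 1 - 3 s^2.
lam, sigma, N, M = 6.0, 6.0, 12, 4000
x = (np.arange(M) + 0.5) / M                       # midpoint nodes on (0,1)
q = lam * (1.0 - 3.0 * (0.1 * np.cos(np.pi * x))**2)

phi = lambda k: np.sqrt(2.0) * np.cos(k * np.pi * x)
kappa = lambda k: np.pi**2 * k**2
inner = lambda f, g: float(np.mean(f * g))         # midpoint rule, |Omega| = 1

ks = range(1, N)
Bt = np.array([[-(1.0 + lam * sigma / kappa(k)**2) * (k == l)
                + inner(q * phi(l), phi(k)) / kappa(l)
                for l in ks] for k in ks])
K_N = float(np.linalg.norm(np.linalg.inv(Bt), 2))  # matrix 2-norm bound
print(K_N)
```

In the actual proofs, the inner products and the norm of the inverse are evaluated with interval arithmetic rather than in floating point.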
\subsection{Construction of an approximative inverse}
\label{sec:approxinverse}
The crucial part in the derivation of our norm bound for the
inverse of~$L$ is the application of Proposition~\ref{prop:neumann}.
For this, we need to construct an approximative inverse of this
operator. Since this construction has to be explicit, we will
approach it in two steps. The first has already been accomplished
in the last section, where we considered a finite-dimensional
projection of~$L$, which can easily be inverted numerically.
In this section, we complement this finite-dimensional part with
a consideration of the infinite-dimensional complementary space.
For this, we refer the reader again to the definition of the
matrix representation~$B$ in~(\ref{eqn:defbkell}). As $N \to \infty$,
this representation leads to better and better approximations of
the operator~$L$. Note in particular that the entry~$b_{k,\ell}$
is the sum of two terms. The first of these is a diagonal matrix,
and its entries clearly dominate the second term in~(\ref{eqn:defbkell}).
We therefore use the inverse of the first term in order to complement
the inverse of~$L_N$.
To describe this procedure in more detail, suppose that the function
$v \in Y$ is given by
\begin{displaymath}
v = \sum_{k \in {\mathbb N}_0^d, \, |k|_\infty > 0}
\alpha_k \phi_k(x)
= v_N + v_{\infty} \in
Y_N \oplus Y_\infty \; ,
\end{displaymath}
where we define
\begin{displaymath}
Y_N = P_N Y
\qquad\mbox{ and }\qquad
Y_\infty = \left( I - P_N \right) Y \; .
\end{displaymath}
Using this representation the approximative inverse~$S \in {\mathcal L}(Y,X)$
of~$L \in {\mathcal L}(X,Y)$ is defined via the formula
\begin{displaymath}
S v = L_N^{-1} v_N -
\sum_{k \in {\mathbb N}_0^d, \, |k|_\infty \ge N} \frac{\alpha_k}
{\kappa_k^2 + \lambda \sigma} \, \phi_k \; .
\end{displaymath}
In addition, consider the operator $T = S|_{Y_\infty}$, i.e., let
\begin{displaymath}
T \sum_{k \in {\mathbb N}_0^d, \, |k|_\infty\ge N} \alpha_k \phi_k =
-\sum_{k \in {\mathbb N}_0^d, \, |k|_\infty\ge N} \frac{\alpha_k}
{\kappa_k^2 + \lambda \sigma} \phi_k \; .
\end{displaymath}
One can easily see that~$T : Y_\infty \to X_\infty = (I-P_N)X$
is one-to-one and onto, and in fact we have the identity
\begin{displaymath}
T^{-1} \sum_{k \in {\mathbb N}_0^d, \, |k|_\infty\ge N} \alpha_k \phi_k =
-\sum_{k \in {\mathbb N}_0^d, \, |k|_\infty\ge N} \left( \kappa_k^2 +
\lambda \sigma \right) \alpha_k \phi_k \; ,
\end{displaymath}
which can be rewritten in the form
\begin{equation} \label{eqn:tinv}
T^{-1} v_\infty = -\left( \Delta^2 v_\infty +
\lambda \sigma v_\infty \right) \; .
\end{equation}
Also, from the definition of~$S$ we get the
alternative representation
\begin{equation} \label{eqn:approxinv}
Sv = L_N^{-1} v_N + T v_\infty \; .
\end{equation}
To close this section, we now derive a bound on the
operator norm of~$S$, since this will be needed in the
application of Proposition~\ref{prop:neumann}. As a first
step, we show that $\| T v_\infty \|_X \le \| v_\infty \|_Y$
for all $v_\infty \in Y_\infty$, which follows readily from
\begin{eqnarray*}
\left\| T \sum_{k \in {\mathbb N}_0^d, \, |k|_\infty\ge N}
\alpha_k \phi_k \right\|_{X}^2 & = &
\left\| \sum_{k \in {\mathbb N}_0^d, \, |k|_\infty\ge N}
\frac{\alpha_k}{\kappa_k^2 + \lambda \sigma} \phi_k
\right\|_{\overline{{\mathcal H}}^2}^2 \\[2ex]
& = &
\sum_{k \in {\mathbb N}_0^d, \, |k|_\infty\ge N}
\frac{\alpha_k^2 \kappa_k^2}{(\kappa_k^2 +
\lambda \sigma)^2} \\[2ex]
& \le &
\sum_{k \in {\mathbb N}_0^d, \, |k|_\infty\ge N}
\frac{\alpha_k^2 \kappa_k^2}{(\kappa_k^2)^2}
\; = \;
\sum_{k \in {\mathbb N}_0^d, \, |k|_\infty\ge N}
\kappa_k^{-2} \alpha_k^2 \\[2ex]
& = &
\left\| \sum_{k \in {\mathbb N}_0^d, \, |k|_\infty\ge N}
\alpha_k \phi_k \right\|_{\overline{{\mathcal H}}^{-2}}^2
\; = \;
\left\| \sum_{k \in {\mathbb N}_0^d, \, |k|_\infty\ge N}
\alpha_k \phi_k \right\|_Y^2.
\end{eqnarray*}
This estimate in turn implies for all $v = v_N + v_\infty
\in Y_N \oplus Y_\infty$ the estimate
\begin{eqnarray*}
\| S v \|_X^2 & = &
\| L_N^{-1} v_N \|_X^2 + \| T v_\infty \|_X^2 \\[2ex]
& \le &
\underbrace{\| L_N^{-1} \|_{{\mathcal L}(Y_N,X_N)}^2}_{\le K_N^2}
\| v_N \|_Y^2 + \| v_\infty \|_Y^2
\; \le \; \max(K_N,1)^2 \| v \|_Y^2 \; ,
\end{eqnarray*}
where we used the definition of~$K_N$ from~(\ref{eqn:defkn}).
Altogether, we have shown that
\begin{equation} \label{eqn:sbound}
\|S\|_{{\mathcal L}(Y,X)} \le \max(K_N,1) \; .
\end{equation}
In other words, the operator norm of the approximate inverse~$S$
given in~(\ref{eqn:approxinv}) can be bounded in terms of the
inverse bound for the finite-dimensional projection given
in~(\ref{eqn:defkn}). Furthermore, it follows directly from
the definition of~$S$ that this operator is one-to-one.
\subsection{Assembling the final inverse estimate}
\label{sec:comprof}
In the last section we addressed two crucial aspects of
Proposition~\ref{prop:neumann}. On the one hand, we provided an
explicit construction for the approximative inverse~$S \in {\mathcal L}(Y,X)$
of the Fr\'echet derivative~$L$ defined in~(\ref{eqn:ldef}). On the
other hand, we derived an upper bound on the operator norm of~$S$,
which can be computed using the finite-dimensional projection~$L_N$
of~$L$. This in turn provides the constant~$\rho_2$ in
Proposition~\ref{prop:neumann}. In this final subsection, we focus
on the constant~$\rho_1$, i.e., we derive an upper bound on the
norm~$\|I - S L \|_{{\mathcal L}(X,X)}$, and show how this bound can be made
smaller than one. Altogether, this will complete the proof of the
estimate for the constant~$K$ in the constructive implicit function
theorem, which was given in Theorem~\ref{thm:k}.
Before we begin, recall the abbreviation $q(x) = \lambda f'(u(x) + \mu)$.
From our definitions of the operators~$L \in {\mathcal L}(X,Y)$, $S \in {\mathcal L}(Y,X)$,
$L_N \in {\mathcal L}(X_N,Y_N)$, and~$T \in {\mathcal L}(Y_\infty,X_\infty)$, as well
as the projection~$P_N$, and using the additive representation $v = v_N + v_\infty
\in Y_N \oplus Y_\infty$, we have the identity
\begin{equation}\label{eqn:lv}
Lv = \left( L_N v_N - P_N \Delta(q v_\infty) \right) +
\left( T^{-1} v_\infty - \left( I-P_N \right)
\Delta(q v) \right) \; ,
\end{equation}
which will be derived in detail in the following calculation. Notice
that the first parentheses contain only terms in the finite-dimensional
space~$Y_N$, while the second parentheses contain terms in~$Y_\infty$.
With this in mind, we have
\begin{eqnarray*}
Lv & = &
-\Delta\left( \Delta v + q v \right) - \lambda \sigma v \\[1ex]
& = & -\Delta^2 v_N - \Delta^2 v_\infty - P_N \Delta (q v_N) -
(I - P_N) \Delta (q v_N) \\[0.5ex]
& & \qquad - \Delta (q v_\infty) -
\lambda \sigma v_N - \lambda \sigma v_\infty \\[1ex]
& = & \left( -\Delta^2 v_N - P_N \Delta (q v_N) -
\lambda \sigma v_N \right) - \left( \Delta^2 v_\infty +
\lambda \sigma v_\infty \right) \\[0.5ex]
& & \qquad - (I - P_N) \Delta (q v_N) -
\Delta (q v_\infty) \\[1ex]
& = & L_N v_N + T^{-1} v_\infty - (I - P_N) \Delta (q v_N) -
P_N \Delta (q v_\infty) - (I - P_N) \Delta (q v_\infty) \\[1ex]
& = & L_N v_N + T^{-1} v_\infty - P_N \Delta (q v_\infty) -
(I - P_N) \Delta (q v) \; .
\end{eqnarray*}
The first three equalities follow just from the definitions,
projections, and rearrangements of terms. The fourth equality is a
consequence of~(\ref{eqn:defln}) and~(\ref{eqn:tinv}). Finally, the
last equality involves only a rearrangement using the projection
operator.
Using the above representation~(\ref{eqn:lv}) of the operator~$L$ which
is split along the subspaces~$Y_N$ and~$Y_\infty$, we can now derive an
expression for~$I - SL \in {\mathcal L}(X,X)$. More precisely, we
have
\begin{equation}\label{eqn:imsl}
(I - SL)v = L_N^{-1} P_N \Delta(q v_\infty) +
T (I - P_N) \Delta (q v) \; ,
\end{equation}
and this will be verified in detail below. Notice that in this
representation, the first term of the right-hand side lies in the
finite-dimensional space~$X_N$, while the second term is contained
in the complement~$X_\infty$. The identity in~(\ref{eqn:imsl}) now
follows from~(\ref{eqn:approxinv}) and
\begin{eqnarray*}
SL v & = & L_N^{-1} \left( L_N v_N - P_N \Delta(q v_\infty) \right) +
T \left( T^{-1} v_\infty - (I - P_N) \Delta(qv) \right) \\[1ex]
& = & v_N - L_N^{-1} P_N \Delta(q v_\infty) +
v_\infty - T (I - P_N) \Delta(qv) \\[1ex]
& = & I v - L_N^{-1} P_N \Delta(q v_\infty) -
T (I - P_N) \Delta(qv) \; .
\end{eqnarray*}
After these preparations, we can now show that the operator norm
of~$I - SL$ can be expected to be small for sufficiently large $N$. This will provide an estimate
for the constant~$\rho_1$ in Proposition~\ref{prop:neumann}, and
conclude the proof of Theorem~\ref{thm:k}. In order to show that
$\|I - SL \|_{{\mathcal L}(X,X)}$ is indeed small, we separately bound the
two terms in~(\ref{eqn:imsl}) as
\begin{displaymath}
\begin{array}{rclcrcl}
\displaystyle \left\| L_N^{-1} P_N \Delta(q v_\infty) \right\|_X
& \le & \displaystyle A \|v\|_X & \quad\mbox{ with }\quad &
\displaystyle A & := & \displaystyle \frac{K_N \|q\|_\infty}{\pi^2 N^2}
\; , \\[3ex]
\displaystyle \left\| T (I-P_N) \Delta (q v) \right\|_X & \le &
\displaystyle B \|v\|_X & \quad\mbox{ with }\quad &
\displaystyle B & := & \displaystyle \frac{C_b \sqrt{1+\pi^4} \,
\|q\|_{{\mathcal H}^2}}{\pi^4 N^2} \; .
\end{array}
\end{displaymath}
The first of these inequalities is established in the following
calculation, which makes repeated use of the norm estimates derived
in the previous sections:
\begin{eqnarray*}
\left\| L_N^{-1} P_N \Delta(q v_\infty) \right\|_X & \le &
\left\| L_N^{-1} \right\|_{{\mathcal L}(Y_N,X_N)} \|P_N
\Delta(q v_\infty) \|_Y \\[1.5ex]
& \le & K_N \left\| P_N \Delta(q v_\infty)
\right\|_{\overline{{\mathcal H}}^{-2}}
\; \le \; K_N \left\| \Delta(q v_\infty)
\right\|_{\overline{{\mathcal H}}^{-2}} \\[1.5ex]
& \le &
K_N \| q v_\infty \|_{{{\mathcal H}}^{0}}
\; \le \;
K_N \|q\|_{\infty} \left\| (I-P_N) v
\right\|_{\overline{{\mathcal H}}^0} \\[1.5ex]
& \le & K_N \|q\|_{\infty} \,
\frac{\| v \|_{\overline{{\mathcal H}}^{2}}}{\pi^2 N^2}
\; = \;
\frac{K_N \|q\|_\infty}{\pi^2 N^2} \, \|v\|_X
\; = \;
A \|v\|_X \; ,
\end{eqnarray*}
where for the last inequality we used Lemma~\ref{lemma:projtailest}.
The second estimate, the one involving the constant~$B$, is verified
as follows, again with help from our previously derived inequalities,
in particular the fact that~$\| T \|_{{\mathcal L}(Y_\infty,X_\infty)} \le 1$
and Lemmas~\ref{lemma:sobnobar2bar} and~\ref{lemma:projtailest}:
\begin{eqnarray*}
\left\| T (I-P_N) \Delta (q v) \right\|_X & \le &
\left\| (I-P_N) \Delta (q v) \right\|_{\overline{{\mathcal H}}^{-2}}
\; \le \;
\frac{\| \Delta (q v)\|_{\overline{{\mathcal H}}^{0}} }{\pi^2 N^2} \\[1.5ex]
& = &
\frac{\left\| \overline{P} (q v)
\right\|_{\overline{{\mathcal H}}^{2}}}{\pi^2 N^2}
\; \le \;
\frac{\| q v \|_{{\mathcal H}^{2}} }{\pi^2 N^2}
\; \le \;
\frac{C_b \| q \|_{{\mathcal H}^{2}} \| v \|_{{\mathcal H}^{2}} }{\pi^2 N^2} \\[1.5ex]
& \le &
\frac{C_b \| q\|_{{\mathcal H}^{2}}}{\pi^2 N^2} \cdot
\frac{\sqrt{1+\pi^4}}{\pi^2} \cdot \|v \|_{\overline{{\mathcal H}}^{2}}
\; = \; B \|v\|_X \; .
\end{eqnarray*}
Now that we have established these two inequalities, the proof of
Theorem~\ref{thm:k} can easily be completed using an application of
Proposition~\ref{prop:neumann}. Specifically, the inequalities which
involve the constants~$A$ and~$B$ combined with~(\ref{eqn:imsl})
imply that
\begin{displaymath}
\| I - S L \|_{{\mathcal L}(X,X)} \; \le \;
\sqrt{A^2 + B^2} \; = \;
\frac{1}{\pi^2 N^2} \, \sqrt{K_N^2 \|q\|_\infty^2 +
C_b^2 \, \frac{1+\pi^4}{\pi^4} \, \|q\|_{{\mathcal H}^2}^2} \; .
\end{displaymath}
We also know from~(\ref{eqn:sbound}) that $\| S \|_{{\mathcal L}(Y,X)} \le \max(K_N,1)$.
Therefore, we can directly apply Proposition~\ref{prop:neumann} with
the constants $\rho_1 = \sqrt{A^2 + B^2} \le \tau < 1$ and
$\rho_2 = \max(K_N,1)$, and this immediately implies that the
operator~$L \in {\mathcal L}(X,Y)$ is one-to-one, onto, and the norm of its
inverse operator is bounded via
\begin{displaymath}
\left\| L^{-1} \right\|_{{\mathcal L}(Y,X)} \; \le \;
\frac{\rho_2}{1-\rho_1} \; = \;
\frac{\max(K_N,1)}{1-\tau} \; .
\end{displaymath}
This completes the proof of Theorem~\ref{thm:k}.
\section{Lipschitz estimates}
\label{sec:cift}
In this section, our goal is to establish the Lipschitz constants needed in
hypotheses~(H3) and~(H4) required for Theorem~\ref{nift:thm}, the constructive
implicit function theorem. Namely, we need to establish Lipschitz bounds
for the derivatives of~$F$
with respect to both~$u$ and with respect to the continuation
parameter. We are considering single-parameter continuation, meaning that
we have three separate situations to discuss, corresponding to the three
different parameters~$\lambda$, $\sigma$, and~$\mu$. Specifically, for~$p$
being one of these three parameters, for a fixed parameter-function
pair~$(p^*,u^*) \in {\mathbb R} \times X$, and for fixed values of~$d_p$ and~$d_u$,
we assume that $|p - p^*| \le d_p$, and $\| u - u^* \|_X \le d_u$. Furthermore,
by a slight abuse of notation we drop the parameters different from~$p$
from the argument list of~$F$ in~(\ref{eqn:deffoperator}). Our goal in
the current section is to obtain tight and easily computable bounds on
the constants~$M_1$ through~$M_4$ in the following two formulas:
\begin{equation} \label{eqn:lipschitz}
\begin{array}{rcl}
\displaystyle \| D_u F ( p, u) - D_u F ( p^*, u^*) \| _{{\mathcal L}(X,Y)} & \le &
\displaystyle M_1 \; \| u - u^* \|_X + M_2 \; |p - p^*| \; , \\[1ex]
\displaystyle \| D_p F ( p, u) - D_p F ( p^*, u^*) \| _{{\mathcal L}({\mathbb R},Y)} & \le &
\displaystyle M_3 \; \| u - u^* \|_X + M_4 \; |p - p^*| \; .
\end{array}
\end{equation}
These bounds will be determined using standard Sobolev embedding theorems
and the constants from the previous section, for each of the three
parameters~$\lambda$, $\sigma$, and~$\mu$. Notice that throughout this
section, we always assume $\lambda > 0$ and $\sigma \ge 0$, while the
mass~$\mu$ could be a real number of either sign.
\subsection{Variation of the short-range repulsion}
\label{subsec:lips}
We now state the Lipschitz estimates for the constructive implicit
function theorem in the case where~$\lambda$, the short-range repulsion
term, varies and the remaining parameters~$\mu$ and~$\sigma$ are held fixed.
\begin{lemma}[Lipschitz constants for variation of~$\lambda$]
Let $\lambda^* \in {\mathbb R}$ and $u^* \in \overline{{\mathcal H}}^2$ be arbitrary, and
consider fixed positive constants~$d_{\lambda}$ and~$d_u$. Finally let~$\lambda$
and~$u$ be such that
\begin{displaymath}
|\lambda-\lambda^*| \le d_{\lambda}
\quad\mbox{ and }\quad
\|u-u^*\|_{\overline{{\mathcal H}}^2 } \le d_u \;.
\end{displaymath}
Then the Lipschitz constants in~(\ref{eqn:lipschitz}) can be chosen as
\begin{displaymath}
\begin{array}{rclcrcl}
\displaystyle M_1 &=& \displaystyle \frac{\overline{C}_m f_{\max}^{(2)} (\lambda^* +
d_\lambda)}{\pi^2} \; , & \qquad &
\displaystyle M_2 &=& \displaystyle \frac{\| f'(u^*+ \mu) \|_\infty}{\pi^2} +
\frac{\sigma}{\pi^4} \; , \\[2ex]
\displaystyle M_3 &=& \displaystyle \frac{f^{(1)}_{\max}}{\pi^2} +
\frac{\sigma}{\pi^4} \; , & \qquad &
\displaystyle M_4 &=& 0 \; ,
\end{array}
\end{displaymath}
where~$f_{\max}^{(1)}$ and~$f_{\max}^{(2)}$ are defined as
\begin{equation} \label{eqn:fpmax}
f^{(p)}_{\max} =
\max_{|\rho| \le \|u^*\|_\infty + \overline{C}_m d_u}
|f^{(p)} (\rho + \mu)| \;.
\end{equation}
These are well-defined since $f$ is a $C^2$-function.
\end{lemma}
\begin{proof}
For our choice of constants~$d_\lambda$, $d_u$, reference parameter~$\lambda^* \in {\mathbb R}$
and function~$u^* \in \overline{{\mathcal H}}^2$, and for arbitrary $v \in \overline{{\mathcal H}}^2$,
assume that $|\lambda - \lambda^*| \le d_\lambda$ and $\| u - u^*\|_{\overline{{\mathcal H}}^2}
\le d_u$. We start by deriving expressions for both~$M_1$ and~$M_2$. Notice that we have
\begin{eqnarray*}
& & \hspace*{-2cm} \| D_uF(\lambda,u)[v] -
D_uF(\lambda^*,u^*)[v] \|_{\overline{{\mathcal H}}^{-2}} \\[1ex]
&\le& \| \Delta ( \lambda f'(u+\mu) v - \lambda^* f'(u^*+\mu)v )
\|_{\overline{{\mathcal H}}^{-2}} + \sigma \, |\lambda - \lambda^*|
\|v\|_{\overline{{\mathcal H}}^{-2}} \\[1ex]
&\le& \| \overline{P} ( \lambda f'(u+\mu) v - \lambda^* f'(u^*+\mu)v )
\|_{\overline{{\mathcal H}}^0} + \sigma \, |\lambda - \lambda^*| \, \frac{1}{\pi^4} \,
\|v\|_{\overline{{\mathcal H}}^2} \\[1ex]
&\le& \| \lambda f'(u+\mu) v - \lambda^* f'(u^*+\mu)v \|_{L^2} +
\frac{\sigma}{\pi^4}\,|\lambda - \lambda^*| \, \|v\|_{\overline{{\mathcal H}}^2} \\[1ex]
&\le& \| \lambda f'(u+\mu) - \lambda^* f'(u^*+\mu) \|_\infty \, \|v\|_{L^2} +
\frac{\sigma}{\pi^4}\,|\lambda - \lambda^*| \, \|v\|_{\overline{{\mathcal H}}^2} \\[1ex]
&\le& \left( \frac{1}{\pi^2} \| \lambda f'(u+\mu) - \lambda^* f'(u^*+\mu) \|_\infty
+ |\lambda - \lambda^*| \, \frac{\sigma}{\pi^4}\right) \|v\|_{\overline{{\mathcal H}}^2} \; .
\end{eqnarray*}
The first estimate follows straightforwardly from the definition of the Fr\'echet
derivative~(\ref{eqn:deffrechetderivative}), while the second one uses the fact that
the Laplacian is an isometry (cf.\ Lemma~\ref{lemma:lapiso}) and the Banach scale
estimate between~$\overline{{\mathcal H}}^{-2}$ and~$\overline{{\mathcal H}}^{2}$ (cf.\ Lemma~\ref{lemma:bscaleest}).
The third estimate follows from~$\|\overline{P}\| = 1$, as well as the fact
that~$\overline{{\mathcal H}}^0$ and~$L^2(\Omega)$ are equipped with the same norm. Finally,
the fourth estimate is straightforward, and the factor~$1/\pi^2$ in the fifth estimate
follows from $v \in \overline{{\mathcal H}}^{2} \subset \overline{{\mathcal H}}^0$ and the estimate in
Lemma~\ref{lemma:bscaleest}.
The above estimate shows that the operator norm of the difference of the two
Fr\'echet derivatives is bounded by the expression in parentheses. The first of
these two terms will now be estimated further. For this, note first that
\begin{eqnarray*}
& & \| \lambda f'(u+\mu) - \lambda^* f'(u^*+\mu) \|_\infty \\[1ex]
& & \qquad\qquad \le \;
|\lambda| \, \| f'(u+\mu) - f'(u^*+\mu) \|_\infty +
| \lambda-\lambda^*| \, \| f'(u^*+\mu) \|_\infty \; .
\end{eqnarray*}
For fixed $x \in \Omega$, we know from the mean value theorem that there exists
a number~$\xi(x)$ between~$u(x)$ and~$u^*(x)$ such that
\begin{displaymath}
| f'(u(x)+\mu) - f'(u^*(x)+\mu) | \le
|f''(\xi(x)+\mu)| \; |u(x)-u^*(x)| \;.
\end{displaymath}
Since~$\xi(x)$ is contained between~$u(x)$ and~$u^*(x)$ for all
$x \in \Omega$, the function~$\xi$ is bounded. Combining this fact with
the definition of~$\overline{C}_m$ in~(\ref{eqn:cmcb}) we get
\begin{displaymath}
\| \xi \|_\infty \le \| u^*\|_\infty + \| u - u^* \|_\infty \le
\|u^*\|_\infty + \overline{C}_m \|u - u^*\|_{\overline{{\mathcal H}}^2} \le
\|u^*\|_\infty + \overline{C}_m d_u \;,
\end{displaymath}
and therefore
\begin{eqnarray*}
& & \hspace*{-2cm}
\| \lambda f'(u+\mu) - \lambda^* f'(u^*+\mu) \|_\infty \\[1ex]
& \le &
|\lambda| \, f^{(2)}_{\max}\, \| u - u^*\|_\infty +
|\lambda-\lambda^*|\, \| f'(u^*+\mu) \|_\infty \\[1ex]
& \le & |\lambda| \, f^{(2)}_{\max}\, \overline{C}_m \,
\| u - u^*\|_{\overline{{\mathcal H}}^2} +
|\lambda-\lambda^*|\, \| f'(u^*+\mu) \|_\infty \;,
\end{eqnarray*}
where~$f^{(2)}_{\max}$ is defined in~(\ref{eqn:fpmax}). Incorporating this
into the previous estimate, we see that
\begin{eqnarray*}
& & \hspace*{-1.2cm}
\| D_uF(\lambda,u) - D_uF(\lambda^*,u^*) \|_{{\mathcal L}(\overline{{\mathcal H}}^2,
\overline{{\mathcal H}}^{-2})} \\[1ex]
&\le& \left( \frac{\overline{C}_m \, f^{(2)}_{\max} \,
(\lambda^* + d_{\lambda})}{\pi^2} \right) \|u - u^*\|_{\overline{{\mathcal H}}^2} +
\left( \frac{\|f'(u^* + \mu)\|_\infty}{\pi^2} +
\frac{\sigma}{\pi^4}\right) |\lambda - \lambda^*| \; .
\end{eqnarray*}
This equation directly gives the values of the Lipschitz constants~$M_1$ and~$M_2$
given in the statement of the lemma.
We now turn our attention to the remaining constants~$M_3$ and~$M_4$. The Fr\'echet
derivative of $F$~with respect to~$\lambda$ is given by
\begin{displaymath}
D_\lambda F(\lambda,u) = - \Delta f(u+ \mu) - \sigma u \;.
\end{displaymath}
Using almost identical steps as the calculation of~$M_1$ and~$M_2$, we get
\begin{eqnarray*}
& & \hspace*{-2cm}
\|D_\lambda F(\lambda,u) - D_\lambda
F(\lambda^*,u^*)\|_{\overline{{\mathcal H}}^{-2}} \\[1ex]
& \le & \| \Delta (f(u+\mu)-f(u^*+\mu))\|_{\overline{{\mathcal H}}^{-2}} +
|\sigma| \, \| u - u^*\|_{\overline{{\mathcal H}}^{-2}} \\[1ex]
&\le& \| f(u+\mu)-f(u^*+\mu)\|_{L^2} + \frac{\sigma}{\pi^4} \,
\| u - u^*\|_{\overline{{\mathcal H}}^{2}} \\[1ex]
&\le& f^{(1)}_{\max} \, \| u - u^*\|_{L^2} + \frac{\sigma}{\pi^4} \,
\| u - u^*\|_{\overline{{\mathcal H}}^{2}} \\[1ex]
&\le& \left( \frac{f^{(1)}_{\max}}{\pi^2} + \frac{\sigma}{\pi^4} \right)
\| u - u^*\|_{\overline{{\mathcal H}}^{2}} \;.
\end{eqnarray*}
Notice that in estimating the norm of this difference of Fr\'echet derivatives
we use the standard identification of~${\mathcal L}({\mathbb R},\overline{{\mathcal H}}^{-2})$
with~$\overline{{\mathcal H}}^{-2}$. Furthermore, in the above inequalities, we
have made liberal use of the constructive Sobolev embedding results from
the previous section. This gives the constants~$M_3$ and~$M_4$ given in
the statement of the lemma.
\end{proof}
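Once $f$, $\mu$, $\sigma$, and the embedding constant~$\overline{C}_m$ are fixed, the constants of the lemma are completely explicit. The following Python sketch evaluates them for the illustrative cubic nonlinearity $f(u) = u^3 - u$ (an assumption made here only for concreteness; the lemma keeps $f$ general), with $f^{(p)}_{\max}$ estimated by sampling rather than by rigorous interval arithmetic, and with a placeholder value for~$\overline{C}_m$.

```python
import numpy as np

def lipschitz_constants_lambda(u_star_sup, mu, sigma, lam_star, d_lam, d_u,
                               C_m=1.0):
    """M_1..M_4 of the lemma for the illustrative choice f(u) = u^3 - u.

    u_star_sup : sup-norm bound on u^*;  C_m : embedding constant
    (placeholder value -- the paper computes it rigorously).
    """
    f1 = lambda r: 3.0 * r**2 - 1.0       # f'
    f2 = lambda r: 6.0 * r                # f''
    R = u_star_sup + C_m * d_u            # radius in the definition of f^{(p)}_max
    rho = np.linspace(-R, R, 20001)       # sampling, not interval arithmetic
    f1_max = np.max(np.abs(f1(rho + mu)))
    f2_max = np.max(np.abs(f2(rho + mu)))
    pi2, pi4 = np.pi**2, np.pi**4
    M1 = C_m * f2_max * (lam_star + d_lam) / pi2
    # The lemma uses ||f'(u^* + mu)||_inf in M_2; here we replace it by the
    # upper bound f1_max, which is all that is available without u^* itself.
    M2 = f1_max / pi2 + sigma / pi4
    M3 = f1_max / pi2 + sigma / pi4
    M4 = 0.0
    return M1, M2, M3, M4

print(lipschitz_constants_lambda(u_star_sup=1.2, mu=0.0, sigma=6.0,
                                 lam_star=150.0, d_lam=0.01, d_u=0.01))
```

A validated computation would evaluate $f^{(p)}_{\max}$ over the interval $[-R, R]$ with interval arithmetic instead of the sampling shortcut used above.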
\subsection{Variation of the long-range elasticity}
\label{subsec:sigma}
We now establish Lipschitz constants for the case when the parameter~$\sigma$
varies and both~$\lambda$ and~$\mu$ are held fixed.
\begin{lemma}[Lipschitz constants for variation of~$\sigma$]
Let $\sigma^* \in {\mathbb R}$ and $u^* \in \overline{{\mathcal H}}^2$ be arbitrary, and
consider fixed positive constants~$d_{\sigma}$ and~$d_u$. Finally let~$\sigma$
and~$u$ be such that
\begin{displaymath}
|\sigma-\sigma^*| \le d_{\sigma}
\quad\mbox{ and }\quad
\|u-u^*\|_{\overline{{\mathcal H}}^2 } \le d_u \;.
\end{displaymath}
Then the Lipschitz constants in~(\ref{eqn:lipschitz}) can be chosen as
\begin{displaymath}
M_1 \; = \; \frac{\lambda \, f^{(2)}_{\max} \, \overline{C}_m}{\pi^2} \; ,
\qquad
M_2 \; = \; M_3 \; = \; \frac{\lambda}{\pi^4} \; ,
\qquad
M_4 \; = \; 0\; ,
\end{displaymath}
where the value of~$f^{(2)}_{\max}$ is defined in~(\ref{eqn:fpmax}).
\end{lemma}
\begin{proof}
We start by computing the constants~$M_1$ and~$M_2$. Holding~$\mu$ and~$\lambda > 0$
fixed in the equation for~$D_uF$, we are able to follow very similar arguments as in
the $\lambda$-varying case, including the use of the Sobolev embedding formulas and
the mean value theorem. The resulting estimate is given by
\begin{eqnarray*}
& & \hspace*{-2cm} \| D_uF(\sigma,u)[v] - D_uF(\sigma^*,u^*)[v]
\|_{\overline{{\mathcal H}}^{-2}} \\[1ex]
& \le & \| \Delta (\lambda (f'(u + \mu) - f'(u^*+\mu)) v) \|_{\overline{{\mathcal H}}^{-2}} +
\lambda \, |\sigma-\sigma^*| \, \|v\|_{\overline{{\mathcal H}}^{-2}} \\[1ex]
& \le & \lambda \, \|f'(u+\mu) - f'(u^*+\mu)\|_\infty \, \| v \|_{L^2} +
\lambda \, |\sigma-\sigma^*| \, \|v\|_{\overline{{\mathcal H}}^{-2}}\\[1ex]
& \le &\left( \frac{ \lambda \, f^{(2)}_{\max} \, \overline{C}_m}{\pi^2} \right)
\|u-u^* \|_{\overline{{\mathcal H}}^2} \, \| v \|_{\overline{{\mathcal H}}^2} +
\left(\frac{\lambda}{\pi^4} \right) \, |\sigma-\sigma^*| \,
\|v\|_{\overline{{\mathcal H}}^{2}}\;.
\end{eqnarray*}
This establishes constants~$M_1$ and~$M_2$ given in the lemma.
We now turn our attention to the constants~$M_3$ and~$M_4$.
The derivative of~$F$ with respect to~$\sigma$ is given by
\begin{displaymath}
D_\sigma F(\sigma,u) = - \lambda u \;.
\end{displaymath}
Therefore, once again using Lemma~\ref{lemma:bscaleest},
we get
\begin{displaymath}
\|D_\sigma F(\sigma,u) - D_\sigma F(\sigma^*,u^*)\|_{\overline{{\mathcal H}}^{-2}}
\; \le \;
\lambda \, \|u-u^*\|_{\overline{{\mathcal H}}^{-2}}
\; \le \;
\frac{\lambda}{\pi^4} \, \|u-u^*\|_{\overline{{\mathcal H}}^2 }\; ,
\end{displaymath}
which gives the constants~$M_3$ and~$M_4$ stated in the lemma.
\end{proof}
\subsection{Varying the relative proportion of the two polymers}
\label{subsec:mu}
In this final subsection we now consider the third parameter variation,
namely that of~$\mu$.
\begin{lemma}[Lipschitz constants for variation of~$\mu$]
Let $\mu^* \in {\mathbb R}$ and $u^* \in \overline{{\mathcal H}}^2$ be arbitrary, and
consider fixed positive constants~$d_{\mu}$ and~$d_u$. Finally let~$\mu$
and~$u$ be such that
\begin{displaymath}
|\mu-\mu^*| \le d_{\mu}
\quad\mbox{ and }\quad
\|u-u^*\|_{\overline{{\mathcal H}}^2 } \le d_u \;.
\end{displaymath}
Then the Lipschitz constants in~(\ref{eqn:lipschitz}) can be chosen as
\begin{displaymath}
M_1 \; = \; \frac{\lambda \, f^{(2)}_{\max,\mu} \, \overline{C}_m}{\pi^2} \; ,
\qquad
M_2 \; = \; M_3 \; = \; \frac{\lambda \, f^{(2)}_{\max,\mu}}{\pi^2} \; ,
\qquad
M_4 \; = \; \lambda \; f^{(2)}_{\max,\mu} \; ,
\end{displaymath}
where the constant~$f^{(2)}_{\max, \mu}$ is defined as
\begin{equation}\label{eqn:fpmumax}
f^{(2)}_{\max, \mu} = \max_{ |\rho| \le \| u^* + \mu^*\|_{\infty} +
\overline{C}_m d_u + d_\mu } |f''(\rho)| \; .
\end{equation}
\end{lemma}
\begin{proof}
Using a similar format to the last two proofs, we consider~$\lambda > 0$
and $\sigma \ge 0$ to be fixed constants and only allow~$\mu$ to vary.
Then we have
\begin{eqnarray*}
& & \hspace*{-2cm} \| D_uF(\mu,u)[v] - D_uF(\mu^*,u^*)[v]
\|_{\overline{{\mathcal H}}^{-2}} \\[1ex]
&\le& \| \Delta ( \lambda ( f'(u + \mu) - f'(u^* + \mu^*))v)
\|_{\overline{{\mathcal H}}^{-2}} \\[1ex]
&\le& \lambda \| f'(u + \mu) - f'(u^* + \mu^*)\|_{\infty} \,
\| v \|_{L^2} \\[1ex]
&\le& \frac{\lambda}{\pi^2} \| f'(u + \mu) - f'(u^* + \mu^*)\|_{\infty} \,
\| v \|_{\overline{{\mathcal H}}^{2}} \; .
\end{eqnarray*}
As in the previous calculations, we use the mean value theorem to bound
the value of the maximum norm $\| f'(u + \mu) - f'(u^* + \mu^*)\|_{\infty}$.
To do so, note that if a real value~$\rho$ is between the two
numbers~$u(x)+\mu$ and~$u^*(x) + \mu^*$ for some $x \in \Omega$,
then one has
\begin{eqnarray*}
|\rho| &\le& \| u + \mu^*\|_{\infty} + |\mu - \mu^*| \\[1ex]
&\le& \|u^*+\mu^*\|_\infty + \| u - u^*\|_\infty + |\mu - \mu^*| \\[1ex]
&\le& \|u^*+\mu^*\|_\infty + \overline{C}_m \| u - u^*\|_{\overline{{\mathcal H}}^2}
+ |\mu - \mu^*| \le \|u^*+\mu^*\|_\infty + \overline{C}_m d_u + d_\mu \; .
\end{eqnarray*}
Thus, by the mean value theorem, followed by the use of our Sobolev embedding
results, one further obtains
\begin{eqnarray*}
\| f'(u + \mu) - f'(u^* + \mu^*)\|_\infty &\le&
f^{(2)}_{\max, \mu} \, \| (u + \mu) - (u^* + \mu^*) \|_\infty \\[1ex]
&\le& f^{(2)}_{\max, \mu} \, \left( \overline{C}_m
\|u-u^*\|_{\overline{{\mathcal H}}^2} + |\mu-\mu^*| \right) \; ,
\end{eqnarray*}
and combining this with our previous estimate we finally deduce
\begin{displaymath}
\| D_uF(\mu,u) - D_uF(\mu^*,u^*) \|_{{\mathcal L}(\overline{{\mathcal H}}^{2},
\overline{{\mathcal H}}^{-2})}
\; \le \;
\frac{\lambda \; f^{(2)}_{\max,\mu} }{\pi^2}
\left( \overline{C}_m \|u-u^*\|_{\overline{{\mathcal H}}^2} +
|\mu-\mu^*| \right) \; .
\end{displaymath}
This gives the constants~$M_1$ and~$M_2$. We now look at the bounds
for~$M_3$ and~$M_4$. The derivative of~$F$ with respect to $\mu$ is
given by
\begin{displaymath}
D_\mu F(\mu,u) = -\Delta (\lambda f'(u + \mu)) \;.
\end{displaymath}
By similar reasoning as before, we then get
\begin{eqnarray*}
\|D_\mu F(\mu,u) - D_\mu F(\mu^*, u^*)\|_{\overline{{\mathcal H}}^{-2}}
&=& \lambda\, \| \Delta(f'(u + \mu) -
f'(u^*+\mu^*))\|_{\overline{{\mathcal H}}^{-2}} \\[1ex]
&\le& \lambda\, \| f'(u + \mu) - f'(u^* + \mu^*)\|_{L^2} \\[1ex]
&\le& \lambda\, f_{\max,\mu}^{(2)} \; \|(u + \mu) -
(u^* + \mu^*)\|_{L^2} \\[1ex]
&\le& \lambda \, f_{\max,\mu}^{(2)} \, \left( \frac{1}{\pi^2}
\|u-u^*\|_{\overline{{\mathcal H}}^2} + |\mu-\mu^*| \right) \; .
\end{eqnarray*}
This gives the constants~$M_3$ and~$M_4$ and completes the proof of
the lemma.
\end{proof}
With the above lemma we have completed the discussion of all of the
Lipschitz constant bounds for all three equation parameters.
\section{Illustrative examples}
\label{sec:eg}
In this section, we present some examples of validated equilibrium solutions
in order to illustrate the power of our theoretical validation method. In
particular, the theoretical methods developed above can be used to produce
a {\em validated region\/} in the product of parameter and phase space. We
emphasize that this section is only intended to present a proof of concept.
We have not made
any attempt to optimize our results or to add computational methods to speed
up the code. For example, the interval arithmetic package INTLAB~\cite{rump:99a}
that we have used is not written in parallel, and we have not attempted to parallelize
any of our algorithms. As another example, in the past we have found that careful
preconditioning can speed up the computation time significantly. Rather than
add any of these techniques at this stage, we have chosen to reserve numerical
considerations for a future paper, in which we will also address additional
questions such as how to use these methods iteratively to validate branches
of solutions.
\begin{figure} \centering
\includegraphics[width=0.7\textwidth]{solutions1d.eps}
\caption{\label{fig:1d}
Ten sample validated one-dimensional equilibrium solutions.
For all solutions we choose $\lambda = 150$ and $\sigma = 6$.
Three of the solutions have total mass $\mu=0$, three are for
mass $\mu = 0.1$, three for $\mu = 0.3$, and finally one
for $\mu = 0.5$.}
\end{figure}
\begin{figure} \centering
\includegraphics[width=0.45\textwidth]{nversusk.eps}
\hspace*{0.5cm}
\includegraphics[width=0.45\textwidth]{nversusdu.eps} \\[2ex]
\includegraphics[width=0.45\textwidth]{nversusdl.eps}
\caption{\label{fig:tradeoff}
There is a tradeoff between high-dimensional calculations and
optimal results. The top left figure shows how the bound on~$K$
varies with the dimension of the truncated approximation matrix
used to calculate~$K_N$. These calculations are for dimension one,
but a similar effect occurs in higher dimensions as well. The top
right figure shows the corresponding estimate for~$\delta_x$, and
the bottom panel shows the estimate for~$\delta_\alpha$,
where~$\alpha$ is each of the three parameters. The size of
the validated interval grows larger as the truncation dimension
grows, but with diminishing returns on the computational
investment.}
\end{figure}
\begin{table}
\begin{tabular}{|c|c|c||c|c|c|}
\hline
$\mu$ & $K$ & $N$ & $P$ & $\delta_\alpha$ & $\delta_x$ \\
\hline\hline
$0$ & 6.2575 & 89 & $\lambda$ & 0.0016 & 0.0056 \\
& && $\sigma$ & 2.9259e-04 & 0.0056 \\
& && $\mu$ & 2.8705e-06 & 0.0044 \\ \hline
$0.1$ & 6.4590 & 104 & $\lambda$ & 0.0011 & 0.0050 \\
& && $\sigma$ & 2.5369e-04 & 0.0050 \\
& && $\mu$ & 2.5579e-06 & 0.0041 \\ \hline
$0.5$ & 3.1030 & 74 & $\lambda$ & 0.0052 & 0.0107 \\
& && $\sigma$ & 0.0011 & 0.0106 \\
& && $\mu$ & 1.2871e-05 & 0.0092 \\ \hline
\end{tabular}
\vspace*{0.3cm}
\caption{\label{table:1d}
A sample of the one-dimensional solution validation
parameters for three typical solutions. In each case, we
use $\sigma = 6$ and~$\lambda = 150$. If we had chosen a
larger value of~$N$, we could significantly improve the results.}
\end{table}
Under the hypotheses of Theorem~\ref{nift:thm}, the constructive implicit
function theorem, for each~$\delta_\alpha$ and~$\delta_x$ satisfying both
parts of~(\ref{nift:thm2}), we are guaranteed that the solution is uniquely
contained in the corresponding $(\delta_\alpha,\delta_x)$-box, where~$\alpha$
denotes whichever of the three parameters has been chosen. In fact, if we
fix~$\delta_\alpha$ small enough, then there is a range of values of~$\delta_x$
bounded below by the quadratic second equation and above by the linear first equation.
We can view the region bounded by the lower limit of~$\delta_x$ as an
{\em accuracy region\/}, within which the equilibrium is guaranteed to lie;
and the region bounded by the upper limit of~$\delta_x$ is a {\em uniqueness
region\/}, which contains the accuracy region, within which the solution is
guaranteed to be unique. If~$\delta_\alpha$ is chosen as the value at which
the line and the curve in~(\ref{nift:thm2}) intersect, then this is the
largest possible value of~$\delta_\alpha$ for which the theorem holds, and
the accuracy and uniqueness regions coincide. In our calculations we have
validated using this maximal interval in parameter space, and we have done
the calculation of the interval size for each of the three parameters.
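The selection of the maximal~$\delta_\alpha$ can be sketched as a one-dimensional bisection. The two bound functions below are hypothetical stand-ins for the conditions in~(\ref{nift:thm2}), whose actual coefficients are assembled from~$K$ and~$M_1$ through~$M_4$; the code locates the largest~$\delta_\alpha$ for which the quadratic lower bound on~$\delta_x$ still lies below the linear upper bound.

```python
# Hypothetical bound functions standing in for the two conditions in the
# constructive implicit function theorem; the real coefficients come from
# the computed constants K and M_1..M_4.
def upper(da):                    # linear upper bound on delta_x
    return 0.01 - 0.5 * da

def lower(da):                    # quadratic lower bound on delta_x
    return 2.0 * da + 300.0 * da**2

# Largest delta_alpha with lower(da) <= upper(da): bisect on the gap.
lo, hi = 0.0, 1.0
assert upper(lo) > lower(lo) and upper(hi) < lower(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if upper(mid) >= lower(mid):
        lo = mid
    else:
        hi = mid
da_max = lo
# At da_max the accuracy and uniqueness regions coincide; for any smaller
# da, each delta_x in [lower(da), upper(da)] satisfies both conditions.
print(da_max, lower(da_max), upper(da_max))
```

For $\delta_\alpha < \delta_{\alpha,\max}$ the interval $[\mathrm{lower},\mathrm{upper}]$ is nonempty, which is precisely the gap between the accuracy and uniqueness regions described above.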
\begin{figure} \centering
\includegraphics[width=0.3\textwidth]{lambda075sigma6mu000c.eps}
\includegraphics[width=0.3\textwidth]{lambda150sigma6mu000c.eps}
\includegraphics[width=0.3\textwidth]{lambda150sigma6mu000a.eps}\\[2ex]
\includegraphics[width=0.3\textwidth]{lambda150sigma6mu010b.eps}
\includegraphics[width=0.3\textwidth]{lambda150sigma6mu030c.eps}
\includegraphics[width=0.3\textwidth]{lambda150sigma6mu050c.eps}
\caption{\label{fig:2d}
Six of the seventeen validated two-dimensional equilibrium solutions.
For all seventeen solutions we use $\sigma = 6$. Five of these solutions
are for $\lambda = 75$ and $\mu = 0$ (top left). The rest of them use
$\lambda = 150$ and $\mu = 0$ (top middle and top right), $\mu =0.1$
(bottom left), $\mu =0.3$ (bottom middle), and $\mu = 0.5$ (bottom
right).}
\end{figure}
\begin{table}
\begin{tabular}{|c|c|c||c|c|c|}
\hline
$(\lambda,\mu)$ & $K$ & $N$ & $P$ & $\delta_\alpha$ & $\delta_x$ \\
\hline\hline
$(75,0)$ & 21.1303 & 28 & $\lambda$ & 1.6124e-04 & 0.0020 \\
& && $\sigma$ &6.1338e-05 &0.0020 \\
& && $\mu$ & 5.9914e-07 & 0.0016 \\ \hline
$(150,0.1)$ & 30.1656 & 72 & $\lambda$ & 1.1833e-05 & 4.7710e-04\\
& && $\sigma$ &5.1514e-06 & 4.7858e-04 \\
& && $\mu$ & 4.4558e-08 &4.2316e-04 \\ \hline
\end{tabular}
\vspace*{0.3cm}
\caption{\label{table:2d}
A sample of the two-dimensional validation parameters
for a couple of typical solutions. In all cases, we use
$\sigma = 6$. Again as in the previous table, we could
improve results by choosing a larger value of~$N$, but
in this case since~$N$ is only the linear dimension, the
dimension of the calculation varies with~$N^2$.}
\end{table}
We have validated ten different equilibrium solutions in one dimension,
shown in Figure~\ref{fig:1d}. Some examples of the associated validation
parameters are presented in Table~\ref{table:1d}. Ideally, we are able to
validate the largest possible $(\delta_\alpha,\delta_x)$-box in which we
can guarantee that the solution exists. However, there is a tradeoff between
computational cost and optimal bounds. The most computationally costly part
of our estimates is the calculation of~$K_N$, the bound on the inverse of the
linearization of the truncated system. As depicted in Figure~\ref{fig:tradeoff},
the bounds on~$K$, and correspondingly on~$\delta_x$ and~$\delta_\alpha$, depend
significantly on the value of~$N$ that is chosen for the truncation dimension.
Since our goal is to use these validations iteratively for path following, we
will not be able to refine our calculations each time. Therefore as a rule of
thumb for a starting point, we used the equation in Theorem~\ref{thm:k} to
guess that we would have a successful validation for $N \approx C
\|q\|^{1/2}_{H^2}$, where~$C$ is a fixed order one constant. In our
calculations for the ten solutions, this results in a dimension that
varies. For these calculations we chose~$N$ values ranging between~50
and~200. The values of~$M_i$ become progressively larger as one passes
from~$\lambda$ to~$\sigma$ to~$\mu$. This means that the corresponding
values of~$\delta_\alpha$ become correspondingly worse (i.e., smaller),
often by one or two orders of magnitude. However, the values of~$\delta_x$
for the three cases are of the same order. While we could increase~$N$ to
improve the estimates, Figure~\ref{fig:tradeoff} shows that there are
diminishing returns on the computational investment: beyond a certain
truncation dimension, even a significantly larger value of~$N$ would not
improve the bounds by much.
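The rule of thumb above can be sketched in a few lines. In the snippet below the $H^2$-type Fourier weights, the value of~$C$, and the clipping range $[50,200]$ are illustrative assumptions rather than the paper's rigorously computed quantities.

```python
import numpy as np

def suggest_truncation(q_hat, C=10.0, N_min=50, N_max=200):
    """Rule-of-thumb truncation dimension N ~ C * ||q||_{H^2}^{1/2}.

    q_hat : 1-d array of (cosine) Fourier coefficients of the numerical
    equilibrium; C is a placeholder constant, and the weights below are
    one plausible choice of H^2-type norm for a zero-mean cosine series.
    """
    k = np.arange(1, q_hat.size + 1)
    h2_norm = np.sqrt(np.sum((1.0 + (np.pi * k)**4) * q_hat**2))
    N = int(np.clip(C * np.sqrt(h2_norm), N_min, N_max))
    return N

q_hat = 0.5 / np.arange(1, 40)**3        # a made-up, rapidly decaying profile
print(suggest_truncation(q_hat))
```

The clipping to $[50,200]$ mirrors the range of truncation dimensions actually used for the ten one-dimensional solutions.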
\begin{figure} \centering
\includegraphics[width=0.7\textwidth]{3dlambda75sigma6mu0.eps}
\caption{\label{fig:3d}
A three-dimensional validated solution for the parameter
values $\lambda = 75$, $\sigma = 6$, and~$\mu = 0$.}
\end{figure}
\begin{table}
\begin{tabular}{|c|c|c||c|c|c|}
\hline
$(\lambda,\sigma,\mu)$ & $K$ & $N$ & $P$ & $\delta_\alpha$ & $\delta_x$ \\
\hline\hline
$(75,6,0)$ & 22.6527 & 22 & $\lambda$ & 0.1143e-04 & 0.5917e-03 \\
& && $\sigma$ & 0.1707e-04 & 0.5955e-03 \\
& && $\mu$ & 0.0010e-04 & 0.4901e-03 \\ \hline
\end{tabular}
\vspace*{0.3cm}
\caption{\label{table:3d}
Validation parameters for a three-dimensional sample solution.}
\end{table}
In two dimensions, we have validated seventeen different solutions for
varying parameter values. A representative sample are given in
Figure~\ref{fig:2d}, with some sample validation parameters presented
in Table~\ref{table:2d}. Again here, there is a tradeoff between
computational speed and optimal results, but with all of the computations
being significantly longer due to the increased dimension; if the
function~$u$ is encoded by a Fourier coefficient array of size $N \times N$,
then the derivative matrix is of size~$(N^2 - 1)^2$, where the~$-1$ is due
to the fact that we have removed the constant term. As in one dimension,
the resulting~$\delta_\alpha$ values vary significantly, but the~$\delta_x$
values do not. Figure~\ref{fig:3d} and Table~\ref{table:3d} show the details
of a solution which is validated in three dimensions, with much the same
observed behavior. Validation in three dimensions requires a much larger
computational effort, since if the function~$u$ is given by a Fourier
coefficient array of size $N \times N \times N$, then the derivative
matrices whose inverses have to be approximated are of size~$(N^3-1)^2$.
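This dimension counting translates directly into storage requirements. A small sketch, with values of~$N$ comparable to those in the tables; the gigabyte figures assume dense double-precision storage.

```python
def derivative_matrix_dim(N, d):
    """Linear dimension of the truncated derivative matrix: the Fourier
    coefficient array has N**d entries and the constant mode is removed."""
    return N**d - 1

for d, N in [(1, 100), (2, 72), (3, 22)]:
    n = derivative_matrix_dim(N, d)
    gb = 8.0 * n * n / 1e9          # dense double-precision storage in GB
    print(d, n, f"{gb:.3f} GB")
```

Already at the moderate truncation dimensions of the tables, the three-dimensional derivative matrix approaches a gigabyte of dense storage, which is why exploiting sparseness is flagged as future work in the conclusions.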
\section{Conclusions}
\label{sec:theend}
As outlined in more detail in the introduction, in this paper we presented
the theoretical foundations for validating branch segments of equilibrium
solutions for the diblock copolymer model. Our approach is based on using the
natural Sobolev norms which are used in the study of the underlying evolution
equation, and they have been derived in all three relevant physical dimensions. As
a side result, we obtained a method based on Neumann series to determine
rigorous upper bounds on the inverse Fr\'echet derivative of the diblock copolymer
operator which are of interest in their own right, as they are connected to
the pseudo-spectrum of this non-self-adjoint operator, see~\cite{trefethen:embree:05a}.
Moreover, we have demonstrated briefly in the last section how these results can be
used to obtain computer-assisted proofs for selected diblock copolymer equilibrium
solutions.
While the present paper is a first step towards a complete path-following
framework for the diblock copolymer model in dimensions up to three, there
are still a number of issues that have to be addressed. On the theoretical side,
one has to develop a pseudo-arclength continuation method with associated linking
conditions which operates in an automatic fashion. This can be done by using
the constructive implicit function theorem as a tool, similar to the applications
to slanted box continuation and limit point resolution which were presented
in~\cite[Sections~2.2 and~2.3]{sander:wanner:16a}. In addition, the bottleneck
in the current validation step is the estimation of the norm bound for the inverse.
Especially in two, and even more so in three dimensions, one has to implement
path-following in such a way that the estimate does not have to be validated at
every step. This can be accomplished via perturbation arguments, and further
speedups are possible by using the sparseness of the involved matrices. However,
all of these issues are nontrivial and lie beyond the scope of the current paper
--- they will therefore be presented elsewhere.
\section*{Acknowledgments}
We thank the referee for helpful comments, which improved the quality of this paper.
E.S. and T.W.~were partially supported by NSF grant DMS-1407087.
E.S. was partially supported by NSF grant DMS-1440140 while in
residence at the Mathematical Sciences Research Institute in Berkeley,
California, during the Fall 2018 semester. In addition, T.W.\ and E.S.\
were partially supported by the Simons Foundation under Awards~581334
and~636383, respectively.
\def\arabic{section}{\arabic{section}}
\def\mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}}{\mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}}}
\def\mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}}{\mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}}}
\newcommand{\mbox{$\sigma$}}{\mbox{$\sigma$}}
\newcommand{\bi}[1]{\bibitem{#1}}
\newcommand{\fr}[2]{\frac{#1}{#2}}
\newcommand{\mbox{$\gamma_{\mu}$}}{\mbox{$\gamma_{\mu}$}}
\newcommand{\mbox{$\gamma_{\nu}$}}{\mbox{$\gamma_{\nu}$}}
\newcommand{\mbox{$\fr{1+\gamma_5}{2}$}}{\mbox{$\fr{1+\gamma_5}{2}$}}
\newcommand{\mbox{$\fr{1-\gamma_5}{2}$}}{\mbox{$\fr{1-\gamma_5}{2}$}}
\newcommand{\mbox{$\tilde{G}$}}{\mbox{$\tilde{G}$}}
\newcommand{\mbox{$\gamma_{5}$}}{\mbox{$\gamma_{5}$}}
\newcommand{\tan\beta}{\tan\beta}
\newcommand{\mbox{Im}}{\mbox{Im}}
\newcommand{\mbox{Re}}{\mbox{Re}}
\newcommand{\rm Tr}{\mbox{Tr}}
\newcommand{\slash{\!\!\!p}}{\slash{\!\!\!p}}
\newcommand{C\!P}{\;\;\slash{\!\!\!\!\!\!\rm CP}}
\newcommand{\langle \ov{q}q\rangle}{\langle \ov{q}q\rangle}
\newcommand{\bar{u}g_s(G\si) u}{\bar{u}g_s(G\si) u}
\newcommand{\bar{d}g_s(G\si) d}{\bar{d}g_s(G\si) d}
\newcommand{\newcommand}{\newcommand}
\newcommand{\bar{u}u}{\bar{u}u}
\newcommand{\bar{d}d}{\bar{d}d}
\newcommand{\gone}{\bar g_{\pi NN}^{(1)}}
\newcommand{\gzero}{\bar g_{\pi NN}^{(0)}}
\newcommand{\al}{\alpha}
\newcommand{\mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}}}{\gamma}
\newcommand{\de}{\delta}
\newcommand{\ep}{\epsilon}
\newcommand{\ze}{\zeta}
\newcommand{\et}{\eta}
\renewcommand{\th}{\theta}
\newcommand{\Th}{\Theta}
\newcommand{\ka}{\kappa}
\newcommand{\rh}{\rho}
\newcommand{\si}{\sigma}
\newcommand{\ta}{\tau}
\newcommand{\up}{\upsilon}
\newcommand{\ph}{\phi}
\newcommand{\ch}{\chi}
\newcommand{\ps}{\psi}
\newcommand{\om}{\omega}
\newcommand{\Ga}{\Gamma}
\newcommand{\De}{\Delta}
\newcommand{\La}{\Lambda}
\newcommand{\Si}{\Sigma}
\newcommand{\Up}{\Upsilon}
\newcommand{\Ph}{\Phi}
\newcommand{\Ps}{\Psi}
\newcommand{\Om}{\Omega}
\newcommand{\ptl}{\partial}
\newcommand{\del}{\nabla}
\newcommand{\ov}{\overline}
\newcommand{\newcaption}[1]{\centerline{\parbox{15cm}{\caption{#1}}}}
\def\begin{equation}{\begin{equation}}
\def\end{equation}{\end{equation}}
\def\begin{displaymath}{\begin{displaymath}}
\def\end{displaymath}{\end{displaymath}}
\def\begin{eqnarray}{\begin{eqnarray}}
\def\end{eqnarray}{\end{eqnarray}}
\def\begin{array}{\begin{array}}
\def\end{array}{\end{array}}
\def\begin{itemize}{\begin{itemize}}
\def\end{itemize}{\end{itemize}}
\def\begin{enumerate}{\begin{enumerate}}
\def\end{enumerate}{\end{enumerate}}
\def\begin{tabular}{\begin{tabular}}
\def\end{tabular}{\end{tabular}}
\def\begin{table}{\begin{table}}
\def\end{table}{\end{table}}
\def\begin{figure}[htb]{\begin{figure}[htb]}
\def\end{figure}{\end{figure}}
\def\begin{picture}{\begin{picture}}
\def\end{picture}{\end{picture}}
\def\scriptstyle{\scriptstyle}
\def\scriptscriptstyle{\scriptscriptstyle}
\def\hspace{0.06in}{\hspace{0.06in}}
\def\hspace{0.08in}{\hspace{0.08in}}
\def\hspace{0.12in}{\hspace{0.12in}}
\def\nonumber \\{\nonumber \\}
\def\nonumber \\ &&{\nonumber \\ &&}
\def\hspace{1cm}{\hspace{1cm}}
\def\hspace{2cm}{\hspace{2cm}}
\def\mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}}{\mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}}}
\def\mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}}{\mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}}}
\def\mathrel{\rlap {\raise.5ex\hbox{$>$}{\mathrel{\rlap {\raise.5ex\hbox{$>$}}
{\lower.5ex\hbox{$\sim$}}}}
\def\mathrel{\rlap{\raise.5ex\hbox{$<$}{\mathrel{\rlap{\raise.5ex\hbox{$<$}}
{\lower.5ex\hbox{$\sim$}}}}
\def\Omega_{\widetilde\chi}\, h^2{\Omega_{\widetilde\chi}\, h^2}
\def{\rm \, G\kern-0.125em yr}{{\rm \, G\kern-0.125em yr}}
\def{\rm \, Me\kern-0.125em V}{{\rm \, Me\kern-0.125em V}}
\def{\rm \, Ge\kern-0.125em V}{{\rm \, Ge\kern-0.125em V}}
\def{\rm \, Te\kern-0.125em V}{{\rm \, Te\kern-0.125em V}}
\defC\!P{C\!P}
\def|{\cal T}|^2{|{\cal T}|^2}
\def{\textstyle{1\over2}}{{\textstyle{1\over2}}}
\def\slash#1{\rlap{\hbox{$\mskip 1 mu /$}}#1}%
\def\tan \beta{\tan \beta}
\def\tan^2 \beta{\tan^2 \beta}
\def\theta_{\ss W}{\theta_{\scriptscriptstyle W}}
\def{\rm h.c.}{{\rm h.c.}}
\def\eta^{\hspace{0.01in} \mu \hspace{0.01in} \nu}{\eta^{\hspace{0.01in} \mu \hspace{0.01in} \nu}}
\def{\bf p}{{\bf p}}
\def{\bf \hat{n}}{{\bf \hat{n}}}
\def\frac{1}{2}{\frac{1}{2}}
\def\frac{1}{3}{\frac{1}{3}}
\def\frac{1}{4}{\frac{1}{4}}
\def\rm Tr{\rm Tr}
\def\rm Ker{\rm Ker}
\def\rm index{\rm index}
\def\mbox{\boldmath $\theta$}{\mbox{\boldmath $\theta$}}
\def\mbox{\boldmath $\phi$}{\mbox{\boldmath $\phi$}}
\def\mbox{\boldmath $\alpha$}{\mbox{\boldmath $\alpha$}}
\def\mbox{\boldmath $\sigma$}{\mbox{\boldmath $\sigma$}}
\def\mbox{\boldmath $\gamma$}{\mbox{\boldmath $\gamma$}}
\def\mbox{\boldmath $\omega$}{\mbox{\boldmath $\omega$}}
\newcommand{\FRAME}[1]{\fbox{\mbox{$#1$}}}
\begin{document}
\preprint{\hspace*{1.5in}hep-ph/0510254$\;\;\;\;\;\;\;$CERN-PH-TH-2005-203\hspace*{2.23in}}
\setcounter{page}{1}
\title{Flavor and \boldmath{$CP$} violating physics from new supersymmetric thresholds}
\author{Maxim Pospelov$^{\,(a,b)}$,
Adam Ritz$^{\,(b,c)}$ and Yudi Santoso$^{\,(a,b)}$}
\affiliation{$^{\,(a)}${\it Perimeter Institute for Theoretical Physics, Waterloo,
Ontario N2J 2W9, Canada}\\
$^{\,(b)}${\it Department of Physics and Astronomy, University of Victoria,
Victoria, BC, V8P 1A1 Canada}\\
$^{\, (c)}${\it Theoretical Division, Department of Physics, CERN,
Geneva 23, CH-1211 Switzerland}}
\begin{abstract}
Treating the MSSM as an effective theory, we study the implications of having dimension
five operators in the superpotential for flavor and $CP$-violating processes, exploiting
the linear decoupling of observable effects with respect to the new threshold scale $\Lambda$.
We show that the assumption of weak scale supersymmetry, when combined
with the stringent limits on electric dipole moments and lepton flavor-violating processes, provides
sensitivity to $\Lambda$ as high as $10^7-10^9$ GeV, while the next generation of experiments
could directly probe the high-energy scales suggested by neutrino physics.
\end{abstract}
\maketitle
\newpage
Weak-scale supersymmetry (SUSY) is a theoretical framework that helps to soften the
so-called gauge hierarchy problem by removing the power-like ultraviolet sensitivity of
the dimensionful parameters in the Higgs potential. It also has other advantages, notably
an improvement in gauge coupling unification and a natural dark matter candidate, which have
made it the standard paradigm for physics beyond the Standard Model (SM). However, the simplest
scenario -- the minimal supersymmetric standard model (MSSM) -- suffers from a number of well-known
tuning problems, due in part to the large array of possible parameters responsible for soft SUSY breaking \cite{susy}, and
consequently the possibility of catastrophically large flavor and $CP$ violating amplitudes.
The absence of new flavor structures and order-one sources of $CP$-violation in the soft breaking sector, as evidenced respectively
by the perfect accord of the observed $K$ and $B$ meson mixing and decay with the predictions of the SM \cite{ut}
and the null results of electric dipole moment (EDM) searches \cite{Tl,Hg,n},
motivates continuing work on the specifics of SUSY breaking.
In the present Letter we will instead ask, given a solution to the flavor and $CP$ problems in the soft-breaking sector,
what sensitivity do we have to new high-scale sources of flavor and $CP$-violation? Such effects would arise
through SUSY-preserving higher-dimensional operators generated at a new threshold $\La \gg M_W$.
Such thresholds
are indeed expected in various completions of the MSSM, e.g. via mechanisms for SUSY breaking and mediation,
the breaking of flavor symmetries, and moreover via the physics generating neutrino masses and mixings.
Intermediate scales are also suggested by the axion solution to the strong $CP$ problem, SUSY leptogenesis
scenarios, and more entertainingly as a lowered GUT/string scale arising from large compactification radii of
extra dimensions.
In contrast to nonuniversal or complex soft-breaking terms,
the flavor and $CP$-violating observables induced by such operators will scale
as $(\Lambda m_{\rm susy})^{-1}$, and thus the constraints on nonminimal flavor or $CP$ violation translate directly
into sensitivity to $\La$ far above the scale of the superpartner masses, $m_{\rm susy}$.
At dimension five there are several well-known $R$-parity conserving operators associated with neutrino masses,
$H_u L H_u L$, and baryon number violation, $U U D E$, $QQQL$ \cite{WeinbergSY}.
The constraints on proton decay put severe restrictions on the size of
baryon-number violating operators, $\Lambda_{b} > 10^{24}$~{\rm GeV}, where $1/\Lambda_b$
is the overall normalization scale for these operators. The ``super-seesaw''
operator $H_u L H_u L$
is a welcome addition to the MSSM superpotential, as it generates Majorana masses
and mixing for neutrinos,
which imply $\Lambda_{\nu} \sim (10^{14} - 10^{16})~{\rm GeV}$. Note that in the seesaw scenario, the actual scale
of right-handed neutrinos, $M_R$, is lower than $\La_\nu$, since $\Lambda_\nu^{-1} = Y_\nu^2M_R^{-1}$ with a small $Y_\nu$,
as is also favored by SUSY leptogenesis.
In what follows, we analyze in detail the remaining
operators allowed in the $R$-parity conserving MSSM at dimension five level \cite{WeinbergSY}.
We write the superpotential as
\begin{eqnarray}
{\cal W} &=& {\cal W}_{\rm MSSM} + \fr{y_h}{\Lambda_{h}}H_dH_uH_dH_u +
\fr{Y^{qe}_{ijkl}}{\Lambda_{qe}}(U_i Q_j )E_k L_l \nonumber\\
&& \!\!\!\!\!\!\!\!\!\!\!\!\! +\fr{Y^{qq}_{ijkl}}{\Lambda_{qq}}(U_iQ_{j}) (D_k Q_{l} )+
\fr{\tilde Y^{qq}_{ijkl}}{\Lambda_{qq}}(U_it^AQ_{j}) (D_kt^AQ_{l}),\label{qule}
\end{eqnarray}
where $y_h$, $Y_{qe}$, $Y_{qq}$ and $\tilde Y_{qq}$ are dimensionless coefficients,
the latter three being tensors in flavor space.
The parentheses in (\ref{qule}) denote a contraction of colour indices.
Note that since we will only consider supersymmetric thresholds, the superfield equations of
motion can be used to eliminate all dimension five corrections to the K\"ahler potential, e.g.
$K^{(5)} = c_{u}QUH_d^\dagger$, absorbing them in ${\cal W}^{(5)}$ and the Yukawa terms, and
slightly modifying the soft-breaking sector. A renormalizable realization of
(\ref{qule}) can easily be obtained, {\em e.g.} the MSSM extended by a singlet $N$ (the NMSSM) or an
extra pair of heavy Higgses.
The full Lagrangian descending from (\ref{qule}) is rather cumbersome, and
we will focus our attention here on those dimension five operators
which are of potential phenomenological interest, specifically those that involve two SM fermions and
two sfermions. We then proceed to integrate out the sfermions to obtain operators composed from the SM
fields (or more precisely those of a type II two-Higgs doublet model). We will impose the requirements of
flavor triviality and $CP$ conservation in the soft-breaking sector. Thus all dimension $\leq 4$
coefficients in the Higgs potential, trilinear terms $A_i$, gaugino masses $M_i$, and the $\mu$-parameter,
will be taken real. We will also make the simplifying assumption of universal sfermion masses,
denoted $m_{\rm sq}$, $m_{\rm sl}$, which we will take, along
with $\mu$, $M_i$, to be somewhat larger than $M_W$. Deferring the full details \cite{prs2},
we quote the relevant results below:
{\em Correction to the SM fermion masses:}
The SM operators of lowest dimension that are of phenomenological interest are the
fermion mass operators. From the diagrams of Fig.~1a, we obtain the following corrections:
\begin{eqnarray}
\label{delta_m}
\de (M_e)_{ij} &=& Y^{qe}_{klij}(M_u^{(0)})^*_{kl}
~\fr{3\ln(\Lambda_{qe}/m_{\rm sq})}{8\pi^2 \Lambda_{qe}}
(A_u^* + \mu \cot\beta) \nonumber\\
\de (M_d)_{ij} &=& K^{qq}_{klij}(M_u^{(0)})^*_{kl}
~\fr{\ln(\Lambda_{qq}/m_{\rm sq})}{4\pi^2 \Lambda_{qq}}
(A_u^* + \mu \cot\beta),\;\;\;\;\;
\end{eqnarray}
with a similar correction to $M_u$. The notation implies summation over the repeated flavor indices, and
we have defined the combination $K^{qq}\equiv (Y^{qq}-2\tilde{Y}^{qq}/3)$. $M^{(0)}_{e,d,u}$ denote unperturbed
mass matrices arising from dimension four terms in the superpotential.
Note that the corrections proportional to $A_{u}$ directly break SUSY, while those proportional to $\mu$ arise
from corrections to the K\"ahler potential.
\begin{figure}
\centerline{\includegraphics[width=8.2cm]{threshd5.eps}}
\caption{\footnotesize Several representative loop corrections to: (a) SM fermion masses;
(b) dipole amplitudes contributing to EDMs (cf. the supersymmetric Barr-Zee diagrams \cite{CKP}),
$\mu\to e\gamma$, $b\to s\gamma$, $(g-2)_\mu$; and (c,d) dimension six four-fermion operators.
The crossed vertex descends from dimension five terms in the superpotential (\ref{qule}).}
\label{f2}
\end{figure}
{\em Dipole operators:}
At dimension five, dipole operators first arise at two-loop order, as in Fig.~1b.
In the charged lepton sector they result in
\begin{equation}
\label{dipole}
{\cal L}_{e} =
\fr{A_u +\mu\cot\beta}{\Lambda^{qe}m_{\rm sq}^2} \fr{e\alpha}{12\pi^3}
(M_u)^*_{kl}Y^{qe}_{klij}\bar E_i (F\sigma) P_L E_j +(h.c.),
\end{equation}
where we treated $LR$ squark mixing as a mass insertion, and used $P_L = \fr{1-\gamma_5}{2}$ and
$(F\sigma) = F_{\mu\nu}\sigma^{\mu\nu}$.
In the quark sector the corresponding results are more cumbersome due to
a large number of possible diagrams.
Jumping an additional dimension, we now consider dimension six four-fermion
operators generated by various terms in (\ref{qule}). Two representative
diagrams are shown in Fig.~1c,d.
{\em Semileptonic operators:}
Integrating out gauginos and sfermions as in Fig.~1c, we find the
following semileptonic operators, sourced by $QULE$,
\begin{equation}
{\cal L}_{qe}=
\fr{1}{\Lambda_{qe}m_{\rm susy}}\fr{\alpha_s}{3\pi}
Y^{qe}_{ijkl}\bar U_i Q_j \bar E_k L_l + (h.c.).
\label{qqll}
\end{equation}
Here $m_{\rm susy}^{-1}$ denotes a combination of
superpartner masses folded with a loop function $F$:
$m_{\rm susy}^{-1} = M_3m_{\rm sq}^{-2}F(M_3^2/ m_{\rm sl}^2)$, and
$F(a) = 2~\fr{1-a +a\ln(a)}{(1-a)^2}$ with $F(1)=1$ (see \cite{masscorr} for the unequal mass
case).
In (\ref{qqll}) we have retained only the gluino-squark contribution,
which is expected to dominate unless there are additional hierarchies between
the masses of sleptons and squarks.
{\em Four-quark operators:}
Integrating out gluinos and squarks as in Fig.~1c, we arrive at the following four-quark
effective operators:
\begin{eqnarray}
{\cal L}_{qq} & = & \fr{1}{\Lambda_{qq}m_{\rm susy}}\fr{\alpha_s}{12\pi}
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \label{qqqq}\\
&& \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \times
K^{qq} \left[\frac{8}{3}(\bar U Q) (\bar D Q)+ (\bar U t^A Q) (\bar D t^A Q)\right]
+ (h.c.),\nonumber
\end{eqnarray}
where the summation over flavor is carried out exactly as in (\ref{qule}). The largest
down-type $\De F=2$ operator arises instead from Fig.~1d,
\begin{eqnarray}\label{dddd}
{\cal L}_{dd} &=&
\fr{1}{\Lambda_{qq}m_{\rm susy}}\fr{1}{16\pi^2}
(Y^*_u)_{im}(Y^*_d)_{nj} K^{qq}_{ijkl} \\
&& \nonumber \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\times\left[\fr{1}{3} (\bar Q_m D_n) (\bar D_k Q_l)-(\bar Q_m t^A D_n) (\bar D_k t^A Q_l)\right]
+ (h.c.),
\end{eqnarray}
which inevitably contains additional Yukawa suppression originating from the
Higgsino-fermion-sfermion vertices. Here $m_{\rm susy}$ is a combination of SUSY masses as in
(\ref{qqll}) and (\ref{qqqq}) with $M_3$ replaced by $\mu$.
We will now turn to the phenomenological consequences and the sensitivity to $\Lambda^{qe}$
and $\Lambda^{qq}$ in various experimental channels. Of course, one of the most important
issues is the flavor structure of the new coupling constants, $Y^{qe}$, $Y^{qq}$ and $\tilde Y^{qq}$.
We will assume that these coefficients are of order one, and {\em do not factorize}: $Y^{qe} \neq Y_u Y_e$.
With this assumption, we should first determine the natural scale for $\La$ such that the corrections to
SM fermion masses do not exceed their measured values.
{\em Particle masses and $\theta$-term:}
Taking $(M_u A_u)_{kl} = (M_u A_u)_{33} \sim m_tA_t \sim 175 {\rm GeV} \times 300 {\rm GeV}$
in (\ref{delta_m}), and assuming a maximal $Y^{qe}_{3311}\sim O(1)$, we arrive at the
estimate,
\begin{equation}
\Delta m_e \sim \fr{3 m_t A_t Y^{qe}_{3311}\ln(\Lambda^{qe}/m_{\rm sq})}{8\pi^2\Lambda^{qe}}
\sim 1 {\rm MeV}
\fr{10^7{\rm GeV}}{\Lambda^{qe}}.
\label{dme}
\end{equation}
Eq.~(\ref{dme}) clearly implies that the natural scale for new physics
encoded in the semileptonic
operators in the superpotential
is $\Lambda^{qe}\sim 10^7$ GeV, while the corresponding scale in the
quark sector is slightly lower.
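The arithmetic behind Eq.~(\ref{dme}) can be checked directly. The sketch below evaluates the estimate with the reference values quoted in the text ($m_t \sim 175$ GeV, $A_t \sim 300$ GeV, $Y^{qe}_{3311} \sim 1$); the squark mass $m_{\rm sq} = 300$ GeV inside the logarithm is an assumption for illustration, not a value fixed by the text.

```python
import math

# Numerical check of the mass-correction estimate in Eq. (dme).
# m_sq = 300 GeV is an assumed value inside the (mild) logarithm.
m_t, A_t, Y, m_sq = 175.0, 300.0, 1.0, 300.0   # GeV
Lam = 1.0e7                                     # GeV, trial threshold scale

dm_e = 3.0 * m_t * A_t * Y * math.log(Lam / m_sq) / (8.0 * math.pi**2 * Lam)
print(f"delta m_e ~ {dm_e*1e3:.2f} MeV at Lambda = 1e7 GeV")
```

The result is of order an MeV at $\Lambda^{qe} = 10^7$ GeV, consistent with the scaling quoted in Eq.~(\ref{dme}).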
A strikingly high naturalness scale emerges from
consideration of the effective shift of $\bar \th$ due to the mass corrections (\ref{delta_m}).
Assuming uncorrelated phases between $Y^{qq}$ and the eigenvalues of $Y_u$ and $Y_d$,
we find,
\begin{equation}
\Delta \bar\theta \sim \fr{{\rm Im~}m_d}{m_d} \sim
\fr{ {\rm Im~}K^{qq}_{3311}m_tA_t\ln(\Lambda^{qq}/m_{\rm sq})}{4\pi^2 m_d\Lambda^{qq}}
\sim
\fr{10^{7}~{\rm GeV}}{\Lambda^{qq}}.
\label{delta_th}
\end{equation}
Eq.~(\ref{delta_th}) translates directly to an extremely strong bound on $\Lambda^{qq}$ in scenarios
where $\bar \theta\simeq 0$ is engineered by hand, either by using discrete symmetries at high energies
\cite{discrete} or by imposing an approximate global $U(1)$ symmetry at tree level to ensure $m_u^{(0)}=0$.
In these cases, the experimental bound on the neutron EDM, $|d_n| < 6\times 10^{-26} e\, {\rm cm}$ \cite{n}
(soon to be updated \cite{n2}), combined with standard estimates for $d_n(\bar\theta)$ \cite{PR2005}
implies remarkable sensitivity to scales $\Lambda^{qq} \sim 10^{17}$ GeV. Future progress in EDM searches
(both for neutrons and heavy atoms) can bring this up to the Planck scale and beyond. In contrast, no constraints
from (\ref{delta_th}) ensue within the axion scenario.
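The $10^{17}$ GeV figure follows from Eq.~(\ref{delta_th}) combined with the rough requirement $\bar\theta \lesssim 10^{-10}$ from the neutron EDM. The following sketch reproduces it, assuming $m_d \sim 5$ MeV, $m_{\rm sq} = 300$ GeV and ${\rm Im}\,K^{qq}_{3311} \sim 1$ for illustration.

```python
import math

# Order-of-magnitude check of Eq. (delta_th): the theta-bar shift induced
# by the dimension-five quark operators. Inputs are illustrative assumptions.
m_t, A_t, m_d, m_sq = 175.0, 300.0, 5.0e-3, 300.0   # GeV
ImK = 1.0

def delta_theta(Lam):
    return ImK * m_t * A_t * math.log(Lam / m_sq) / (4.0 * math.pi**2 * m_d * Lam)

# At Lambda ~ 1e17 GeV the induced theta-bar drops to the ~1e-10 level
# probed by the neutron EDM bound, reproducing the quoted sensitivity.
dth = delta_theta(1.0e17)
print(f"delta theta-bar ~ {dth:.1e} at Lambda = 1e17 GeV")
```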
{\em Electric dipole moments from four-fermion operators:}
Electric dipole moments (EDMs) of neutrons and heavy atoms and molecules are the
primary probes for sources of flavor-neutral $CP$ violation \cite{PR2005}.
In addition to $d_n$, the strongest constraints on $CP$-violating parameters arise
from the atomic EDMs of thallium, $|d_{\rm Tl}| < 9 \times 10^{-25} e\, {\rm cm}$ \cite{Tl},
and mercury, $|d_{\rm Hg}| < 2 \times 10^{-28} e\, {\rm cm}$ \cite{Hg}.
Assuming that $\bar\theta$ is removed by an appropriate symmetry, EDMs are mediated by higher-dimensional operators and
both (\ref{qqll}) and (\ref{qqqq}) are capable of inducing atomic/nuclear EDMs if the overall coefficients contain an extra
phase relative to the quark masses. Restricting Eq.~(\ref{qqll}) to the first generation, we find the
following $CP$-odd operators (with real $m_e$, $m_u$):
\begin{equation}
{\cal L}_{CP} = -
\fr{\alpha_s{\rm Im}Y^{qe}_{1111}}{6\pi\Lambda_{qe}m_{\rm susy}}
\left[(\bar uu )\bar ei\gamma_5 e + (\bar ui\gamma_5u) \bar e e \right].
\end{equation}
Accounting for QCD running from the SUSY scale to $1~{\rm GeV}$,
and using the hadronic matrix elements over nucleon states,
$\langle N| (\bar uu + \bar dd)/2|N\rangle \simeq 4 \bar{N}N$ and
$\langle n|\bar ui \gamma_5u |n\rangle \simeq -0.4(m_N/m_u)\bar n i \gamma_5 n$,
we determine the induced corrections to the $CP$-odd electron-nucleon Lagrangian,
${\cal L} = C_S \bar NN \bar e i \gamma_5 e + C_P \bar Ni \gamma_5N \bar e e $,
\begin{equation}
C_S \sim \frac{2\times 10^{-4}}{1{\rm GeV}\times\Lambda^{qe}}, \;\;\;
C_P \sim \frac{4\times 10^{-3}}{1{\rm GeV}\times\Lambda^{qe}}, \label{CSCP}
\end{equation}
using maximal Im$Y^{qe}$ and taking $m_{\rm susy} = 300~{\rm GeV}$.
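A tree-level version of this estimate is easy to reproduce. The sketch below neglects the QCD running factor (which the paper includes), takes an assumed $\alpha_{\text{s}} \simeq 0.12$ at the SUSY scale, and uses the quoted matrix elements with $m_N = 0.94$ GeV and $m_u \simeq 2.2$ MeV; it lands within a factor of a few of Eq.~(\ref{CSCP}), as expected for such an order-of-magnitude exercise.

```python
import math

# Tree-level estimate of C_S and C_P in Eq. (CSCP), omitting QCD running.
# alpha_s ~ 0.12 at the SUSY scale and <N|uu|N> ~ 4 are assumptions.
alpha_s, m_susy = 0.12, 300.0
m_N, m_u = 0.94, 2.2e-3
pref = alpha_s / (6.0 * math.pi * m_susy)   # operator coefficient times Lambda
C_S = 4.0 * pref                            # units: 1/(GeV * Lambda)
C_P = 0.4 * (m_N / m_u) * pref
print(f"C_S ~ {C_S:.1e}/(GeV*Lambda), C_P ~ {C_P:.1e}/(GeV*Lambda)")
```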
Comparing (\ref{CSCP}) to the limits on $C_S$ and $C_P$ deduced from
the Tl and Hg EDM bounds \cite{PR2005}, we obtain the following sensitivity,
\begin{eqnarray}
\label{cslimit}
\Lambda^{qe} &\mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}}& 3 \times 10^8 ~{\rm GeV} ~~~~~~~~~~{\rm from~ Tl~ EDM} \\
\Lambda^{qe} &\mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}}& 1.5 \times 10^8 ~{\rm GeV} ~~~~~~~~\!{\rm from~ Hg~ EDM}\\
\Lambda^{qq} &\mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}}& 3\times 10^7 ~{\rm GeV} ~~~~~~~~~~{\rm from~ Hg~ EDM}.
\end{eqnarray}
The last relation results from sensitivity to the $CP$ violating operators $(\bar d i \gamma_5 d)(\bar u u)$
from (\ref{qqqq}), leading to the Schiff nuclear moment and the Hg EDM.
These are remarkably large scales, and indeed not far below the scales suggested by neutrino physics.
In fact, the next generation of atomic/molecular EDM experiments \cite{nextedm} may reach sensitivities
sufficient to push $\La^{qe}$ into regions close to the suggested scale of right-handed neutrinos.
Semileptonic operators involving heavy quark superfields are in turn strongly constrained
via two-loop corrections (\ref{dipole}) to the dipole amplitudes. The bound on $d_{\rm Tl}$
implies $|d_e|\mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 1.6\times 10^{-27} e\, {\rm cm}$, which for maximal ${\rm Im}Y^{qe}_{1133}$ implies:
\begin{equation}
\Lambda^{qe}\mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 1.3 \times 10^8{\rm GeV}.
\end{equation}
Results analogous to (\ref{dipole}) apply for the quark EDMs and color EDMs,
furnishing a similar sensitivity to $\Lambda^{qq}$.
{\em Lepton flavor violation:}
Searches for lepton-flavor violation (LFV), such as $\mu\to e\gamma$ decay,
and $\mu\to e$ conversion in nuclei, have resulted in stringent upper bounds on the
corresponding branching ratio, ${\rm Br}({\mu\to e\gamma}) < 1.2\times 10^{-11}$ \cite{muegamma}, and the rate
of conversion normalized to the capture rate, $R({\mu \to e^-~ {\rm on ~Ti}}) < 4.3\times 10^{-12}$ \cite{sindrum},
with further improvement anticipated.
The latter bound implies a particularly high sensitivity to the semileptonic operators in (\ref{qule}).
The conversion is mediated by $(\bar uu)\bar e i \gamma_5 \mu$
and $(\bar uu) \bar e \mu$, and involves the same matrix elements
as $C_S$. Using bounds on such scalar operators
derived elsewhere (see {\em e.g.} \cite{faessler}),
we conclude that $\mu\to e$ conversion probes energy scales as high as
\begin{equation}
\Lambda^{qe} \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 1\times 10^8 {\rm GeV}~~~~~{\rm from}~{\mu^- \to e^-~ {\rm on ~Ti}}.
\label{muelimit}
\end{equation}
The constraint on $\mu\to e\gamma$ probes similar, but slightly lower, scales as it requires
a two-loop diagram as in Fig.~1b.
Disregarding an ${\cal O}(1)$ factor between (\ref{cslimit}) and (\ref{muelimit}), we conclude that
searches for EDMs and LFV probe these extensions of the MSSM up to comparable energy scales
of $\sim10^8$ GeV.
{\em Hadronic flavor constraints:}
Often, the most constraining piece of experimental information comes from the
contribution of new physics to the mixing of neutral mesons, $K$ and $B$. However, in the
present case, there is necessarily a significant loop and Yukawa suppression arising from (\ref{dddd}),
and the sensitivity is correspondingly weakened.
Taking $(\De m_K)_{\rm exp} \simeq 3.5\times 10^{-6} {\rm eV}$ \cite{pdg}, we
find $\La^{qq} \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} (\tan\beta/50)\times 200\, {\rm GeV}$
\cite{prs2}. $\Delta m_B$ exhibits a similar sensitivity, while $\ep_K$ is about three orders of magnitude more
sensitive, but still well below the scales probed by EDMs and LFV. In contrast, it is clear that
these observables provide much better sensitivity to SUSY dimension-six operators, which carry no
additional suppression factors. Denoting the corresponding scale as $\La'$, we find
$\La' \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 8\times 10^6~{\rm GeV}$, while $\epsilon_K$ is sensitive to scales
$\sim 10^8$ GeV.
Two-loop contributions to $b\rightarrow s\gamma$ (as in Fig.~1b) are not Yukawa suppressed and, with the current precision
$\De Br(B\rightarrow X_s\gamma) \sim 10^{-4}$ \cite{pdg}, are somewhat more sensitive. We find
$\La^{qq} \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 10^3-10^4 {\rm GeV}$ (for $Y^{qq}_{3233}\sim 1$), still well below the
sensitivity in other channels.
{\em Constraints on the Higgs operator:}
The high sensitivity to $QULE$ and $QUQD$ arises primarily because they can flip the light fermion chirality
without Yukawa suppression. It would then come as no surprise if $H_uH_dH_uH_d$ were to have little implication
for $CP$ and flavor-violating observables; the operator will of course provide corrections to the
sfermion and neutralino mass matrices, and can induce $CP$-odd mixing between $A$ and $h$, $H$, but
these effects do not lead to high sensitivity to $\La_h$.
\begin{table}
\begin{center}
\begin{tabular}{c|c|c}
\hline
operator & sensitivity to $\La$ (GeV) & source \\ \hline\hline
$Y^{qe}_{3311}$ & $\sim 10^7$ & naturalness of $m_e$\\
Im($Y^{qq}_{3311})$ & $\sim 10^{17}$ & naturalness of $\bar \theta$, $d_n$\\
Im($Y^{qe}_{ii11}$) & $10^7-10^9$ & Tl, Hg EDMs \\
$Y^{qe}_{1112}$, $Y^{qe}_{1121}$ & $10^7-10^8$& $\mu\rightarrow e$ conversion \\
Im($Y^{qq}$) & $10^7-10^8$ & Hg EDM \\
Im($y_h$) & $10^3-10^8$ & $d_e$ from Tl EDM \\ \hline
\end{tabular}
\end{center}
\caption{Sensitivity to the threshold scale. The naturalness bound on Im$(Y^{qq})$
doesn't apply to the axionic solution of the strong $CP$ problem, the best sensitivity to Im$(y_h)$
is achieved at maximal $\tan\beta$, and the Hg EDM constraint on Im$(Y^{qq})$ applies when at least one pair
of quarks belongs to the 1$^{\rm st}$ generation.
}
\label{table1}
\end{table}
Remarkably enough, it turns out that EDMs do exhibit a high sensitivity to $H_uH_dH_uH_d$
at large $\tan\beta$ through corrections to the Higgs potential, and in particular the
effective shift of the $m_{12}^2$ parameter,
\begin{equation}
m_{12}^2 H_u H_d \to (m_{12}^2)_{\rm eff}H_u H_d \equiv \left(m_{12}^2 + \fr{\mu y_hv_{SM}^2}{\Lambda_h}\right)H_u H_d.
\label{meff}
\end{equation}
Crucially, a complex phase in $(m_{12}^2)_{\rm eff}$, due to Im$(y_h)$, is enhanced at large
$\tan\beta$ because $m_{12}^2 \simeq m_A^2/\tan\beta$. The resulting
phase affects the one-loop SUSY EDM diagrams (see e.g. \cite{ourlateststuff}):
\begin{eqnarray}
d_e&=&\fr{em_e\tan\beta }{16\pi^2m_{\rm susy}^2}\left (\fr{5 g_2^2}{24}+\fr{g_1^2}{24}\right)
\sin \left[{\rm Arg}\frac{\mu M_2}{(m_{12}^2)_{\rm eff}}\right].\;\;\;\;\;\;
\end{eqnarray}
Expanding to leading order in $1/\La_h$, using (\ref{meff}), and imposing the present limit on
$d_e$ discussed earlier, one finds impressive sensitivity for large $\tan\beta$,
\begin{equation}
\Lambda_h \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 2\times 10^7 ~{\rm GeV} \left(\fr{\tan\beta}{50}\right)^2
\left(\fr{300{\rm GeV}}{m_{\rm susy}}\right)\left(\fr{300{\rm GeV}}{m_A}\right)^2.
\end{equation}
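The chain of numbers can be traced explicitly. The sketch below evaluates the one-loop $d_e$ for a maximal phase, imposes $|d_e| < 1.6\times 10^{-27}\,e\,{\rm cm}$, and converts the allowed phase into a bound on $\Lambda_h$ via Eq.~(\ref{meff}) with ${\rm Im}(y_h)=1$; the inputs $\alpha_2 \simeq 0.034$, $\alpha_1 \simeq 0.010$, $\mu = m_{\rm susy} = m_A = 300$ GeV, $v = 246$ GeV are illustrative assumptions.

```python
import math

# One-loop electron EDM for maximal phase, then the implied Lambda_h bound.
hbar_c = 1.9733e-14                      # conversion: GeV^-1 -> cm
m_e, tanb, m_susy, m_A, mu, v = 5.11e-4, 50.0, 300.0, 300.0, 300.0, 246.0
g2sq, g1sq = 4*math.pi*0.034, 4*math.pi*0.010

d_e_max = m_e*tanb/(16*math.pi**2*m_susy**2) * (5*g2sq/24 + g1sq/24) * hbar_c
phase_max = 1.6e-27 / d_e_max            # largest phase allowed by the d_e bound
# phase ~ tan(beta) * mu * v^2 * Im(y_h) / (Lambda_h * m_A^2), so:
Lam_h_bound = tanb * mu * v**2 / (phase_max * m_A**2)   # for Im(y_h) = 1
print(f"d_e(max phase) ~ {d_e_max:.1e} e cm, Lambda_h > {Lam_h_bound:.1e} GeV")
```

With these inputs the bound comes out at the $2\times 10^7$ GeV level, matching the quoted sensitivity.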
In conclusion, we have examined new flavor and $CP$ violating effects
mediated by dimension five superpotential operators, and shown that the sensitivity to
these operators extends far beyond the weak scale (as summarized in Table~1). The semileptonic operators that
mediate flavor violation in the leptonic sector and/or break $CP$ could be
detectable even if the scale of new physics is as high as $10^{9}$ GeV, and well above the naturalness
scale. Our results can be translated into constraints on $CP$ and flavor violation
in specific models leading to (\ref{qule}), {\em e.g.} the
NMSSM or the MSSM with an extra pair of Higgses. Moreover, the sensitivity
quoted in (\ref{cslimit}) and (\ref{muelimit}) is robust, having a mild dependence
on the SUSY threshold.
Finally, since these effects decouple linearly, an increase in sensitivity by
just two orders of magnitude would already start probing scales relevant for neutrino physics.
Our results motivate further searches for EDMs and LFV in the SUSY framework
even if the soft-breaking sector provides no new sources, as happens {\em e.g.} in models with low scale SUSY breaking.
\section{Introduction}
At zero temperature a heavy quark and antiquark pair forms a bound state
(quarkonium state) or a pair of heavy-light mesons. At sufficiently high
temperatures ($T > T_{c}$), (heavy) quarks are not confined anymore due to
color screening and collisions with the medium. The Polyakov loop and the
Polyakov loop correlator are the order parameters of this
confinement-deconfinement transition from hadrons to a quark gluon plasma (QGP)
in pure Yang-Mills theory and still provide sensitive probes in full QCD. The
continuum limit of the corresponding quark-antiquark free energies have been
studied previously (see Ref.~\cite{Bazavov:2018wmo}) and it has been found that
their short distance behavior can be understood in terms of weak-coupling
effective field theory calculations in the framework of pNRQCD and EQCD (see
Ref.~\cite{Berwein:2017thy}). These studies have been limited by the
exponential drop of the signal to noise ratio, which we can now alleviate
through the use of smeared gauge fields. We demonstrate that through
subtraction of the squared expectation value of the appropriately smeared
Polyakov loop we can reconstruct the subtracted free energies at sufficiently
large distances and smoothly connect results with different amounts of
HYP-smearing (see Ref.~\cite{Hasenfratz:2001hp}) applied. We use
four-dimensional HYP-smearing throughout this work. This allows a quantitatively
predictive study even in the asymptotic screening regime.\\
We perform lattice calculations in a wide temperature range, $116~\text{MeV}
\leq T \leq 5814~\text{MeV}$, using the highly improved staggered quark (HISQ)
action (see Ref.~\cite{Follana:2006rc}) and the MILC code (see
Ref.~\cite{Bazavov:2010ru}), at several lattice spacings in 2+1 flavor QCD.\\
The $Q\bar{Q}$ color average and color singlet (subtracted) free energies are
given in terms of the respective (subtracted) Polyakov loop correlators by
\begin{equation}
\label{eq:FreeEnergies}
F_{Q\bar{Q}}^{(\text{sub.})} = -T \ln(C_{P}^{(\text{sub.})}) \,, \quad\quad
F_{\text{s}}^{(\text{sub.})} = -T \ln(C_{\text{s}}^{(\text{sub.})}) \,.
\end{equation}
The bare (unsubtracted) Polyakov loop correlators are given by
\begin{equation}
C_{P}(r) = \langle P(0) P^{\dagger}(r) \rangle \,, \quad\quad C_{\text{s}}(r) =
\frac{1}{3} \langle \text{tr} W(0) W^{\dagger}(r) \rangle \,.
\end{equation}
The Polyakov loop $P$ is the trace of a temporal Wilson line $W$; since the
latter is not gauge invariant, we define $C_{\text{s}}$ in Coulomb gauge:
\begin{equation}
P(r) = \frac{1}{3} \text{tr} W(r) \,, \quad\quad W(r) = \prod\limits_{\tau/a =
1}^{N_{\tau}} U_{0}(\tau,r) \,,
\end{equation}
with $W$ describing the propagation of a static quark via the link variables
$U_{0}$. The expectation value of the Polyakov loop averaged over the spatial
volume of the lattice, $L = \langle P(r) \rangle$, is used to normalize
Polyakov loop correlators
\begin{equation}
\label{eq:Normalization}
C_{P}^{\text{sub.}}(r) = \frac{C_{P}(r)}{L^{2}} \,, \quad\quad
C_{\text{s}}^{\text{sub.}}(r) = \frac{C_{\text{s}}(r)}{L^{2}} \,.
\end{equation}
This normalizes the correlator $C^{\text{sub.}}$ such that the corresponding
free energy $F^{\text{sub.}}$ vanishes at infinite separation in infinite
volume. It is important to note that the correlator and $L$ need to be computed
for the same regularization, i.e., with the same HYP-smearing level. The
subtracted free energies defined in terms of the subtracted correlators are
divergence free and have a well-defined continuum limit. At distances $r
\gtrsim 1/(gT)$, where $m_{\text{D}} \sim gT$ is the Debye mass, the dimensionally
reduced EFT called electrostatic QCD (EQCD) is suitable to describe these
correlators (see Ref.~\cite{Braaten:1995jr}). For distances $r \gg 1/(gT)$, the
correlators are sensitive to the inherent non-perturbative magnetic sector and
receive contributions from the magnetic mass $\propto g^{2}T$. Hence, a
non-perturbative approach is required for studying the large distance
behavior.
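As a minimal illustration of Eqs.~(\ref{eq:FreeEnergies}) and (\ref{eq:Normalization}), the following sketch builds the subtracted correlator and the color-average free energy from synthetic data; all numbers (the loop expectation value, the screened-correlator amplitude and mass) are invented for illustration, whereas in practice they come from the HISQ ensembles.

```python
import numpy as np

# Sketch: subtracted Polyakov loop correlator C_P/L^2 and the free energy
# F = -T ln(C_P/L^2), with a mock screened correlator C_P = L^2 + A exp(-m r).
rng = np.random.default_rng(0)
T = 0.2                                  # temperature in lattice units
L_true, A, m = 0.05, 1.0e-3, 0.8         # invented ensemble parameters
r = np.arange(1, 9)

C_P = L_true**2 + A*np.exp(-m*r) + rng.normal(0, 1e-8, r.size)
L = L_true

C_sub = C_P / L**2                       # Eq. (Normalization)
F_sub = -T*np.log(C_sub)                 # Eq. (FreeEnergies)
print(F_sub)
```

By construction $F^{\text{sub.}}$ is negative and approaches zero at large separation, the normalization property stated above.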
\section{Screening functions for color singlet and color average free energies}
\begin{figure}[h]
\centering
\includegraphics[width=0.49\textwidth]{images/b6608sqqahypnt12.eps}
\hfill
\includegraphics[width=0.49\textwidth]{images/b7030sqqahypnt12.eps}
\caption{Screening functions for color average free energies. We show results
for $N_{\tau}=12$ below $T_{c}$ (left panel) and above $T_{c}$ (right panel) for
different HYP-smearing levels. These blend nicely into each other and, as
discussed below, this allows us to produce a screening function spanning a
large $rT$ range.}
\label{fig:ScreeningFunctionsI}
\end{figure}
In the previous study~\cite{Bazavov:2018wmo} it was shown that data with
$N_{\tau}=12$ differ only marginally from the continuum limit; we therefore use
them as a proxy for the full continuum limit in the following. Multiplying the
subtracted free energies for the color singlet and color average in units of
the temperature $T$ by $-r$, we obtain the respective dimensionless screening
functions assuming one-particle exchange. In Fig.~\ref{fig:ScreeningFunctionsI}
we show results for different HYP-smearing levels for two temperatures below
and above $T_{c}$, respectively, for $N_{\tau}=12$. It is remarkable how well data
at different smearing levels blend into one another, allowing us to reach $rT$
regions well above $rT \sim 0.4$, where color screening is expected to set in.
For $T < T_{c}$ the noise exceeds the unsmeared signal even before the
screening becomes unambiguously visible. Smearing allows us to piece together a
combined screening function covering a substantially larger $rT$ range than in
the earlier analysis~\cite{Bazavov:2018wmo}, especially for $T \lesssim 2T_{c}$.
\begin{figure}[h]
\centering
\includegraphics[width=0.49\textwidth]{images/sssnt12.eps}
\hfill
\includegraphics[width=0.49\textwidth]{images/sqqant12.eps}
\caption{Screening functions for the color singlet (left panels) and the color
average free energies (right panels). We show results for $N_{\tau}=12$ for a
large temperature range.}
\label{fig:ScreeningFunctionsII}
\end{figure}
These aggregate screening functions are shown in
Fig.~\ref{fig:ScreeningFunctionsII}, where we show
$-rF_{\text{s}}^{\text{sub.}}$ and $-rF_{Q\bar{Q}}^{\text{sub.}}$ with $N_{\tau}=12$
over a large temperature range. Both should appear as straight lines in the
log-plot for naive exponential screening $\propto \exp(-mr)$. This is indeed
the case, indicating that at sufficiently large distances both
$F_{Q\bar{Q}}^{\text{sub.}}$ and $F_{\text{s}}^{\text{sub.}}$ decay
exponentially. In contrast to the former analysis (see
Ref.~\cite{Bazavov:2018wmo}), we can see this now even for small $T$, where the
noise outgrows the signal without smearing before the exponential decay even
sets in. HYP-smearing allows us to extend the signal up to $rT \sim 0.8$ in
order to see and also fit the exponential decay.\\
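The fitting step can be sketched simply: for pure exponential screening, $\ln(-rF^{\text{sub.}})$ is linear in $r$, so the screening mass follows from a straight-line fit in the log-plot. The data below are synthetic; in practice the fit window is restricted to the range where the smeared signal survives.

```python
import numpy as np

# Extract a screening mass from a mock screening function -r F^{sub.} that
# decays as exp(-m r): fit a straight line to its logarithm.
rT = np.linspace(0.4, 0.8, 9)
T = 1.0                                  # work in units of T
m_true = 5.0                             # screening mass in units of T
S = 0.3*np.exp(-m_true*rT/T)             # mock -r F^{sub.}

slope, intercept = np.polyfit(rT, np.log(S), 1)
m_fit = -slope*T
print(f"fitted screening mass: m/T = {m_fit:.2f}")
```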
As a general feature we see that the color singlet screening function gets
steeper as $rT$ increases but flatter with increasing $T$. On the other hand
the color average screening function changes its short distance behavior before
entering the screening regime. This is due to the change from a Coulombic
behavior in the regime $T \ll \alpha_{\text{s}}/r$ to a $1/r^{2}$ behavior in the regime $T
\gg \alpha_{\text{s}}/r$, which can be understood as due to cancellation between color
singlet and color octet contributions as seen in weak coupling calculations.
For the screening regime, we find that the color average screening function
does not exhibit distinct features, i.e., the slope does not change
significantly with increasing $T$ or $rT$. The leading order result for the
color average free energy in the color electric screening regime is given by
the exchange of two electro-static gluons (see
Refs.~\cite{Gross:1980br,McLerran:1981pb,Nadkarni:1986cz}):
\begin{equation}
F_{Q\bar{Q}}^{\text{sub.}} \simeq -\frac{\alpha_{\text{s}}^{2}}{r^{2}T} \exp(-2m_{\text{D}} r) \,.
\end{equation}
The perturbative NLO Debye mass (see Ref.~\cite{Braaten:1995jr}) in temperature
units is given by:
\begin{align}
\begin{aligned}
\label{eq:DebyeMass}
& m_{\text{D}}|_{\text{LO}}(\mu) = g(\mu) T \sqrt{\frac{2N_{\text{c}} + N_{\text{f}}}{6}} \,, \\
& m_{\text{D}}^{2}|_{\text{NLO}}(\mu) = m_{\text{D}}^{2}|_{\text{LO}}(\mu) \Big( 1 +
\frac{\alpha_{\text{s}}(\mu)}{4\pi} \Big[ 2 \beta_{0} \left( \gamma_{\text{E}} + \ln \frac{\mu}{4 \pi
T} \right) \\
& \quad\quad\quad\quad\quad + \frac{5N_{\text{c}}}{3} + \frac{2N_{\text{f}}}{3} (1 - 4 \ln 2)
\Big] \Big) - C_{\text{F}} N_{\text{f}} \alpha_{\text{s}}^{2}(\mu) T^{2} \,.
\end{aligned}
\end{align}
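Eq.~(\ref{eq:DebyeMass}) can be evaluated directly for $N_{\text{c}} = 3$, $N_{\text{f}} = 3$; the sketch below assumes $\alpha_{\text{s}}(\mu) = 0.3$ at $\mu = 2\pi T$ purely for illustration and shows that, at this coupling, the NLO terms reduce $m_{\text{D}}$ by a few percent relative to LO.

```python
import math

# Evaluate the LO and NLO Debye masses of Eq. (DebyeMass), N_c = N_f = 3.
Nc, Nf, CF = 3, 3, 4.0/3.0
beta0 = (11*Nc - 2*Nf)/3.0
gammaE = 0.5772156649
alpha, T = 0.3, 1.0                      # alpha_s(mu) = 0.3 is an assumption
mu = 2*math.pi*T
g2 = 4*math.pi*alpha

mD2_LO = g2*T**2*(2*Nc + Nf)/6.0
bracket = (2*beta0*(gammaE + math.log(mu/(4*math.pi*T)))
           + 5*Nc/3.0 + (2*Nf/3.0)*(1 - 4*math.log(2)))
mD2_NLO = mD2_LO*(1 + alpha/(4*math.pi)*bracket) - CF*Nf*alpha**2*T**2
print(f"m_D/T: LO = {math.sqrt(mD2_LO):.3f}, NLO = {math.sqrt(mD2_NLO):.3f}")
```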
\begin{figure}[h]
\centering
\includegraphics[width=0.49\textwidth]{images/sssweak12.eps}
\hfill
\includegraphics[width=0.49\textwidth]{images/sqqsweak12.eps}
\caption{Comparison of the screening functions for the color singlet (left
panel) and the color average free energies (right panel) to EQCD predictions
(bands) at NLO. The widths of the bands correspond to the variation of the
scale $\mu$ between $\pi T$ and $4\pi T$. We show results for $N_{\tau}=12$. The
results for different $T$ have been vertically displaced for better visibility.}
\label{fig:ScreeningFunctionsIV}
\end{figure}
In the same regime of color electric screening, $r \sim 1/m_{\text{D}}$, the color
singlet free energy can also be described by EQCD (see Eq.~(21) in
Ref.~\cite{Bazavov:2018wmo} for the full formulae). A comparison between both
screening functions and EQCD predictions using the NLO Debye mass given in
Eq.~\eqref{eq:DebyeMass} is shown in Fig.~\ref{fig:ScreeningFunctionsIV}. The
band reflects the variation of the scale $\mu$ between $\pi T$ and $4\pi T$.\\
At larger distances $r \gg 1/m_{\text{D}}$, contributions from the magnetic scale
$\propto g^{2}T$ may be relevant and, thus, a non-perturbative calculation is
required. We expect an asymptotic screening behavior as $F^{\text{sub.}} \sim
\exp(-mr)/r$. For $F_{Q\bar{Q}}$, this is due to the exchange of a single bound
state of electric gluons, which may mix with contributions from the magnetic
sector. We expect a similar behavior for the color singlet free energy.
Comparing the $N_{\tau}=12$ results for $F_{Q\bar{Q}}$ that are shown in the
Figs.~\ref{fig:ScreeningFunctionsII} and \ref{fig:ScreeningFunctionsIV}, we see
that mainly due to the insensitivity of the screening function to changes in
$rT$, a distinction between the color electric screening regime and the
asymptotic screening regime is difficult. This should manifest itself in a
nearly $rT$ independent screening mass. The EQCD prediction contains higher
order corrections due to the running coupling that are numerically important
and can be seen in the data. The picture for the color singlet is different, as
we see the increase in slope for increasing $rT$. This signals the onset of the
asymptotic screening regime. We expect a significant $rT$ dependence of the
screening mass.
\section{Temperature dependence of the color singlet and color average screening mass}
\begin{SCfigure}[\sidecaptionrelwidth][h]
\centering
\includegraphics[width=0.49\textwidth]{images/mqqtemp.eps}
\caption{Color average asymptotic screening masses obtained from the
exponential fits to the subtracted screening functions. The lines correspond to
two times the NLO Debye mass in temperature units which is calculated for
$\mu=\pi T$, $2\pi T$, and $4\pi T$ (solid, dotted and dashed lines). The red
open squares correspond to the screening masses using $N_{\tau}=4$ lattices with
aspect ratio 6. The horizontal band in the right panel corresponds to the EQCD
result for the screening mass obtained in 3d lattice calculations from
Ref.~\cite{Hart:2000ha}.}
\label{fig:ScreeningMass}
\end{SCfigure}
We perform several different fits of the screening functions in order to obtain
screening masses. The generic fit function for both the color singlet and color
average screening function is of the form
\begin{equation}
\label{eq:FitRF}
-RF = (A \exp(-MR) + cR) \Theta(R - i/N_{\tau}) \,,
\end{equation}
where $M=m/T$, $R=rT$, and $F$ is given in units of $T$. The fit parameters are
$A$, $M$, and $c$; for the subtracted free energies $c$ should vanish in the
infinite volume limit, but may in practice be non-negligible due to finite
volume effects or incomplete cancellation even at moderate statistics. In
order to take into account that HYP-smearing distorts UV physics, we demand the
minimal value of $R$ to be at least the number of smearing iterations divided
by $N_{\tau}$. This is ensured by the Heaviside step function. For $N_{\tau}=4$ with
aspect ratio 4 we consider 0, 1, 2, and 3 HYP-smearing iterations, i.e., $i \in
\lbrace 0,1,2,3 \rbrace$, and for $N_{\tau} \in \lbrace 6,8,10,12,16 \rbrace$ and
for $N_{\tau}=4$ with aspect ratio 6 we additionally consider 5 HYP-smearing
iterations, i.e., $i \in \lbrace 0,1,2,3,5 \rbrace$. We determine for each
ensemble and HYP-smearing iteration a sensible maximal value for $R$, beyond
which we no longer see exponential decay or the uncertainties of the data
become too large. Within these bounds of $R$ we vary $R_{\text{min}}$.\\
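For illustration, the single-exponential core of the fit (Eq.~\eqref{eq:FitRF} with $c = 0$) can be sketched with a log-linear least-squares fit on synthetic data; the actual analysis fits the full form on jackknife bins and varies $R_{\text{min}}$:

```python
import numpy as np

def fit_screening_mass(R, F, i, Ntau, Rmax):
    """Log-linear least-squares fit of -R*F = A*exp(-M*R) (the c = 0 case).

    The Heaviside cut Theta(R - i/Ntau) drops points distorted by i
    HYP-smearing iterations; Rmax bounds the window where the decay is
    still exponential and the data are not too noisy.
    """
    mask = (R >= i / Ntau) & (R <= Rmax) & (-R * F > 0)
    slope, intercept = np.polyfit(R[mask], np.log(-R[mask] * F[mask]), 1)
    return np.exp(intercept), -slope          # A, M = m/T

# synthetic screening function built with A = 2.0, M = 4.5
R = np.linspace(0.1, 2.0, 40)
F = -2.0 * np.exp(-4.5 * R) / R
A, M = fit_screening_mass(R, F, i=3, Ntau=12, Rmax=2.0)
```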
In a first approach we treat all the different ensembles and all smearing
iterations on each ensemble as independent and perform the fits on the
Jackknife averages of our data. This approach, however, lacks a handle on
systematic uncertainties. In order to overcome this we perform individual fits
according to Eq.~\eqref{eq:FitRF} on the Jackknife bins of our data.\\
Figure~\ref{fig:ScreeningMass} shows the asymptotic color average screening
mass $m_{Q\bar{Q}}/T$ in temperature units obtained at $rT=0.5$ from the
exponential fits of the screening function performed on the Jackknife bins. The
results have been shifted by $-0.25$ with a systematic error estimate of $\pm
0.1$. The red open squares are at $rT=1.3$ and correspond to the screening
masses using $N_{\tau}=4$ lattices with aspect ratio 6. The lines correspond to
twice the EQCD prediction, Eq.~\eqref{eq:DebyeMass}, for $\mu = \pi T$, $2\pi
T$ and $4\pi T$. The similarity of the asymptotic mass $m_{Q\bar{Q}}$ to the
electric screening mass in EQCD $2m_{\text{D}}$ makes a distinction between the two
regimes for $F_{Q\bar{Q}}$ particularly difficult. Our result is in good
agreement with a lattice determination of the EQCD screening mass (see
Ref.~\cite{Hart:2000ha}), shown as a horizontal band.\\
For the color singlet (not shown) we are able to obtain results at $rT=1$.
Using a $N_{\tau}=4$ aspect ratio 6 determination at $rT=1.3$, we need to shift the
$rT=1$ results by $+0.15$ with a systematic error estimate of $\pm 0.05$ in
order to be consistent with the previous results. The corresponding EQCD
prediction, Eq.~\eqref{eq:DebyeMass}, for $\mu = \pi T$, $2\pi T$ and $4\pi T$,
requires a rescaling by a constant $A=1.6-2.0$. We then obtain that for $T
\gtrsim 400~\text{MeV}$ the screening mass and the rescaled Debye mass are very
similar.
\section{Screening masses from the real and imaginary parts of the Polyakov loop}
\begin{figure}[h]
\centering
\includegraphics[width=0.49\textwidth]{images/ar6nt4b6664CI.pdf}
\hfill
\includegraphics[width=0.49\textwidth]{images/ar6nt4b6664CR.pdf}
\caption{Imaginary (left panel) and subtracted real part (right panel) of the
Polyakov loop correlator as a function of $rT$. We show a high statistics
result for $N_{\tau}=4$ with aspect ratio 6 for different HYP-smearing levels.}
\label{fig:CRCI-I}
\end{figure}
EQCD predicts that the lowest states contributing to the correlation function
of the real and imaginary part of the Polyakov loop correlator are bound states
of two and three gluons, respectively. The corresponding screening masses $m_{R}/T$
and $m_{I}/T$ in the asymptotic regime should then scale as
\begin{equation}
\frac{m_{R}}{m_{I}} \sim \frac{2}{3} \,.
\end{equation}
This scaling behavior has been seen on the lattice in
Refs.~\cite{Datta:2002je,Borsanyi:2015yka}. The corresponding correlators of
the real and imaginary parts of the Polyakov loop, respectively, read
\begin{equation}
C_{R}(r) = \langle \Re P(0) \Re P(r) \rangle \,, \quad\quad C_{I}(r) = \langle
\Im P(0) \Im P(r) \rangle \,.
\end{equation}
We use the correlation functions with smeared fields and fit $C_{R,I} \sim
\exp(-m_{R,I}r)/(rT) + \text{const.}$ to probe the mass of the lowest energy
state that is exchanged and compare these masses to the weak-coupling
prediction of EQCD. We show an example of the correlators in
Fig.~\ref{fig:CRCI-I} as a function of $rT$ for $N_{\tau}=4$ with aspect ratio 6 for
different HYP-smearing levels. Since the Polyakov loop expectation value is
real in QCD, $C_{R}$ asymptotes to $L^{2}$, which we can subtract, while the
imaginary part approaches zero asymptotically. In contrast to the color average
and color singlet screening functions it is not possible to piece together one
correlator for the real and imaginary parts of the Polyakov loop correlator
without changing the normalization by hand. For aspect ratio 4 ensembles the
signal-to-noise (S/N) ratio is usually significantly worse for both correlators:
the imaginary-part correlator has the poorer S/N overall, while the real-part
correlator suffers from thermal IR noise due to the subtraction. This can be seen when, e.g., comparing
the unsmeared results (black symbols).\\
This in general makes it quite difficult to extract screening masses and even
more complicated to determine their ratio. Our preliminary results on a few
ensembles with high statistics are reasonably consistent with the EQCD
prediction and with Ref.~\cite{Borsanyi:2015yka}, i.e., we obtain as our most
precise result for the ensemble of Fig.~\ref{fig:CRCI-I} $m_{I}/m_{R} \approx
1.72(4)$, $m_{R}/T \approx 4.3(1)$, and $m_{I}/T \approx 7.4(2)$. Both masses
tend to decrease at higher temperatures; $m_{I}/T$ appears to decrease more
strongly.\\
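The mass extraction can be illustrated on toy data built with the assumed functional form $C \sim A\exp(-mr)/(rT) + \text{const.}$ and the central values quoted above (synthetic data, for illustration only):

```python
import numpy as np

def fit_exchange_mass(rT, C, const):
    """Extract m/T from C(r) ~ A*exp(-m*r)/(rT) + const by subtracting the
    asymptotic constant (L^2 for C_R, 0 for C_I) and log-linearizing."""
    slope, _ = np.polyfit(rT, np.log((C - const) * rT), 1)
    return -slope

rT = np.linspace(0.6, 2.0, 30)
CR = 0.5 * np.exp(-4.3 * rT) / rT + 0.04**2   # toy C_R with m_R/T = 4.3, L = 0.04
CI = 0.2 * np.exp(-7.4 * rT) / rT             # toy C_I with m_I/T = 7.4
mR = fit_exchange_mass(rT, CR, 0.04**2)
mI = fit_exchange_mass(rT, CI, 0.0)
ratio = mI / mR                               # ~1.72, cf. the EQCD ratio 3/2
```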
From our results with aspect ratio 6 we conclude that $m_{I}/m_{R}$ can be
determined with about $10\%$ accuracy at $rT \gtrsim 0.6$, while smaller distances are contaminated by physics of the electric screening regime.
\section{Summary}
We have performed studies of quark-antiquark correlation functions covering a
wide range of temperatures from below to far above $T_{c}$. Making use of
HYP-smearing we have been able to obtain a signal for color singlet and color
average free energies in the screening regime at $T < T_{c}$ and extract
corresponding screening masses. In comparison with EQCD predictions for the
screening masses, we obtain reasonable agreement for $T \gtrsim
400~\text{MeV}$.\\
A second prediction of EQCD is the ratio of the screening masses of the real
and imaginary part of the Polyakov loop correlator, which we see reasonably
well confirmed in our measurement for high $T$, although this analysis is still
ongoing.\\
At present all of our results are preliminary, as they lack continuum
extrapolation. The full results will be discussed in a future
publication~\cite{paper2}.
\clearpage
\acknowledgments
This research was supported by the DFG cluster of excellence ``Origin and
Structure of the Universe''
(\href{www.universe-cluster.de}{www.universe-cluster.de}). The simulations have
been carried out on the computing facilities of the Computational Center for
Particle and Astrophysics (C2PAP) and SuperMUC using the publicly available
MILC code. This work has been supported in part by the U.S. Department of Energy
through grant contract No. DE-SC0012704.\\
The authors would like to thank P. Petreczky for numerous discussions and
support.\\
S.S. would like to thank N. Brambilla and A. Vairo for support, TUMGlobal
for financial support, and MSU and BNL for their hospitality during a visit.
\bibliographystyle{JHEP}
\section{Introduction}
Atrial fibrillation (AF) is the most common cardiac arrhythmia which increases the risk of stroke, heart failure and death~\cite{journal/cir/chugh2014}.
Radiofrequency ablation is a promising procedure for treating AF, where patient selection and outcome prediction of such therapy can be improved through left atrial (LA) scar localization and quantification.
Atrial scars are located on the LA wall; localizing them therefore normally requires LA or LA wall segmentation to exclude confounding enhanced tissues from other substructures of the heart.
Late gadolinium enhanced magnetic resonance imaging (LGE MRI) has been an important tool for scar visualization and quantification.
Manual delineation of LGE MRI can be subjective and labor-intensive, which motivates automatic approaches.
However, automating this segmentation remains challenging, mainly due to the various LA shapes, thin LA wall, poor image quality and enhanced noise from surrounding tissues.
Limited studies have been reported in the literature to develop automatic LA segmentation and scar quantification algorithms.
For LA segmentation, Xiong et al. proposed a dual fully convolutional neural network (CNN) \cite{journal/TMI/xiong2018}.
In an LA segmentation challenge~\cite{link/LAseg2018},
Chen et al. presented a two-task network for atrial segmentation and post-/pre-ablation classification to incorporate the prior information of the patient category \cite{conf/STACOM/chen2018}.
Nunez et al. achieved LA segmentation by combining multi-atlas segmentation and shape modeling of LA \cite{conf/STACOM/nunez2018}.
Recently, Yu et al. designed an uncertainty-aware semi-supervised framework for LA segmentation \cite{conf/MICCAI/yu2019}.
For scar quantification, most of the current works adopted threshold-based methods that relied on manual LA wall segmentation~\cite{journal/jcmr/Karim2013}.
Some other conventional algorithms, such as Gaussian mixture model (GMM)~\cite{journal/TEHM/karim2014}, also required an accurate initialization of LA or LA wall.
However, automatic LA wall segmentation is complex and challenging due to its inherent thin thickness ($1\sim2$ mm)~\cite{journal/MedAI/karim2018}.
Recent studies show that the thickness can be ignored, as clinical studies mainly focus on the location and extent of scars~\cite{journal/tmi/ravanelli2013,journal/MedAI/li2020}.
For example, Li et al. proposed a graph-cuts framework for scar quantification on the LA surface mesh, where the weights of the graph were learned via a multi-scale CNN~\cite{journal/MedAI/li2020}.
However, they did not achieve an end-to-end training, i.e., the multi-scale CNN and graph-cuts were separated into two sub-tasks.
Recently, deep learning (DL)-based methods have achieved promising performance for cardiac image segmentation.
However, most DL-based segmentation methods are trained with a loss only considering a label mask in a discrete space.
Due to the lack of spatial information, predictions commonly tend to be blurry at the boundary, which leads to noisy segmentations with large outliers.
To solve this problem, several strategies have been employed, such as graph-cuts/ CRF regularization~\cite{journal/MedAI/li2020,journal/MedAI/kamnitsas2017}, and deformation combining shape priors~\cite{conf/MICCAI/zeng2019}.
\begin{figure*}[t]\center
\includegraphics[width=0.98\textwidth]{fig_network_v02}\\[-2ex]
\caption{The proposed MTL-SESA network for joint LA segmentation and scar quantification. Note that the skip connections between the encoder and two decoders are omitted here.}
\label{fig:method:network}\end{figure*}
In this work, we present an end-to-end multi-task learning network for joint LA segmentation and scar quantification.
The proposed method incorporates spatial information in the pipeline to eliminate outliers for LA segmentation, with additional benefits for scar quantification.
This is achieved by introducing a spatially encoded loss based on the distance transform map, without any modifications of the network.
To utilize the spatial relationship between LA and scars, we adopt the LA boundary as an attention mask on the scar map, namely surface projection, to achieve shape attention.
Therefore, an end-to-end learning framework is created for simultaneous LA segmentation, scar projection and quantification via the multi-task learning (MTL) network embedding the spatial encoding (SE) and boundary shape attention (SA), namely MTL-SESA network.
\section{Method} \label{method}
\zxhreffig{fig:method:network} provides an overview of the proposed framework.
The proposed network is a modified U-Net consisting of two decoders for LA segmentation and scar quantification, respectively.
In Section \ref{method:LA}, a SE loss based on the distance transform map is introduced as a regularization term for LA segmentation.
For scar segmentation, a SE loss based on the distance probability map is employed, followed by a spatial projection (see Section \ref{method:scar}).
Section \ref{method:multi_task} presents the specific SA scheme embedded in the MTL network for the predictions of LA and LA scars in an end-to-end style.
\subsection{Spatially Encoded Constraint for LA Segmentation}\label{method:LA}
A SE loss based on the signed distance transform map (DTM) is employed as a regularization term to represent a spatial vicinity to the target label.
Given a target label, the signed DTM for each pixel $x_i$ can be defined as:
\begin{equation}
\phi(x_i)=\begin{cases} -d^\beta & x_i \in \Omega_{in} \\ 0 & x_i \in S \\ d^\beta & x_i \in \Omega_{out} \end{cases}
\end{equation}
where $\Omega_{in}$ and $\Omega_{out}$ respectively indicate the region inside and outside the target label, $S$ denotes the surface boundary, $d$ represents the distance from pixel $x_i$ to the nearest point on $S$, and $\beta$ is a hyperparameter.
The binary cross-entropy (BCE) loss and the additional SE loss for LA segmentation can be defined as:
\begin{equation}
\mathcal L_{LA}^{BCE} = -\sum_{i=1}^N \left[ y_i \log(\hat{y}(x_i; \theta)) + (1-y_i) \log(1-\hat{y}(x_i; \theta)) \right]
\end{equation}
\begin{equation}
\mathcal L_{LA}^{SE} = \sum_{i=1}^N (\hat{y}(x_i; \theta)-0.5) \cdot \phi(x_i)
\end{equation}
where $\hat{y}$ and $y$ ($y\in\{0,1\}$) are the prediction of LA and its ground truth, respectively, and $\cdot$ denotes element-wise product.
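A minimal numerical sketch of Eqs.~(1) and (3) is given below; it uses a brute-force nearest-boundary search on a toy 2D mask (a real implementation would instead use, e.g., scipy.ndimage.distance_transform_edt on 3D volumes):

```python
import numpy as np

def signed_dtm(label, beta=1.0):
    """Signed distance transform of Eq. (1): -d^beta inside the target,
    0 on the boundary S, +d^beta outside.  Brute-force nearest-boundary
    search, for illustration on small 2D masks only."""
    label = label.astype(bool)
    pad = np.pad(label, 1, constant_values=False)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:]) & label
    boundary = label & ~interior
    by, bx = np.nonzero(boundary)
    yy, xx = np.indices(label.shape)
    d = np.sqrt((yy[..., None] - by) ** 2 + (xx[..., None] - bx) ** 2).min(-1)
    return np.where(label, -(d ** beta), d ** beta)

def se_loss_la(y_hat, phi):
    """L_LA^SE of Eq. (3): penalizes confident predictions on the wrong
    side of the boundary, in proportion to their distance from it."""
    return np.sum((y_hat - 0.5) * phi)

lab = np.zeros((5, 5), int)
lab[1:4, 1:4] = 1                 # toy 3x3 target
phi = signed_dtm(lab)
```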
\subsection{Spatially Encoded Constraint with an Explicit Projection for Scar Quantification} \label{method:scar}
For scar quantification, we encode the spatial information by adopting the distance probability map of normal wall and scar region as the ground truth instead of binary scar label.
This is because the scar region can be very small and discrete, thus its detection presents significant challenges to current DL-based methods due to the class-imbalance problem.
In contrast to traditional DL-based algorithms optimizing in a discrete space, the distance probability map considers the continuous spatial information of scars.
Specifically, we separately obtain the DTM of the scar and normal wall from a manual scar label, and convert both into probability maps $p(x_i)= [p_{normal}, p_{scar}]$.
Here $p=e^{-d'}$ and $d'$ is the nearest distance to the boundary of normal wall or scar for pixel $x_i$.
Then, the SE loss for scar quantification can be defined as:
\begin{equation}
\mathcal L_{scar}^{SE} = \sum_{i=1}^N (\hat{p}(x_i; \theta) - p(x_i))^2
\end{equation}
where $\hat{p}$ ($\hat{p} = [\hat{p}_{normal}, \hat{p}_{scar}]$) is the predicted distance probability map of both normal wall and scar region.
Note that $\hat{p}_{normal} + \hat{p}_{scar} > 1$ can occur, as the two maps are not constrained to sum to one.
One can therefore compare the two probabilities to extract scars instead of employing a fixed threshold.
To ignore the wall thickness, which varies across positions and patients~\cite{journal/MedAI/karim2018}, the extracted scars are explicitly projected onto the LA surface.
Therefore, the volume-based scar segmentation is converted into a surface-based scar quantification through the spatially explicit projection.
However, the pixel-based classification in the surface-based quantification task only includes very limited information, i.e., the intensity value of one pixel.
In contrast to extracting multi-scale patches along the LA surface~\cite{journal/MedAI/li2020}, we employ the SE loss to learn the spatial features near the LA surface.
Similar to~\cite{journal/MedAI/li2020}, the SE loss can also be beneficial to improving the robustness of the framework against the LA segmentation errors.
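The construction of the distance probability targets and the threshold-free scar extraction can be sketched as follows (toy 2D example; the distance to the nearest region pixel is used here as a stand-in for the boundary distance described above):

```python
import numpy as np

def dist_probability(mask):
    """p = exp(-d'), with d' the distance to the nearest pixel of the region
    (brute-force version, for illustration on small 2D masks)."""
    py, px = np.nonzero(mask)
    yy, xx = np.indices(mask.shape)
    d = np.sqrt((yy[..., None] - py) ** 2 + (xx[..., None] - px) ** 2).min(-1)
    return np.exp(-d)

wall = np.zeros((5, 5), bool); wall[2, :] = True      # toy LA wall
scar = np.zeros((5, 5), bool); scar[2, 3:] = True     # toy scar on the wall
p_normal = dist_probability(wall & ~scar)
p_scar = dist_probability(scar)
# scars are extracted by comparing probabilities, not by a fixed threshold
is_scar = p_scar > p_normal
```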
\subsection{Multi-task Learning with an End-to-end Trainable Shape Attention} \label{method:multi_task}
To employ the spatial relationship between LA and atrial scars, we design an MTL network including two decoders, i.e., one for LA and the other for scar segmentation.
As \zxhreffig{fig:method:network} shows, the Decoder$_\mathrm{LA}$ is supervised by $\mathcal L_{LA}^{BCE}$ and $\mathcal L_{LA}^{SE}$, and the Decoder$_\mathrm{scar}$ is supervised by $\mathcal L_{scar}^{SE}$.
To explicitly learn the relationship between the two tasks, we extract the LA boundary from the predicted LA as an attention mask for the training of Decoder$_\mathrm{scar}$, namely the explicit projection mentioned in Section~\ref{method:scar}.
An SA loss is introduced to enforce the attention of Decoder$_\mathrm{scar}$ on the LA boundary:
\begin{equation}
\mathcal L^{SA} = \sum_{i=1}^N (M \cdot (\Delta \hat{p}(x_i; \theta) - \Delta p(x_i)))^2
\end{equation}
where $\Delta \hat{p} = \hat{p}_{normal} - \hat{p}_{scar}$,
$\Delta p = p_{normal}-p_{scar}$, and $M$ is the boundary attention mask, which can be generated from the gold standard segmentation of LA ($M_{1}$) as well as the predicted LA ($M_{2}$).
Hence, the total loss of the framework is defined by combining all the losses mentioned above:
\begin{equation}
\mathcal L = \mathcal L_{LA}^{BCE} + \lambda_{LA}\mathcal L_{LA}^{SE} + \lambda_{scar}\mathcal L_{scar}^{SE}
+ \lambda_{M_{1}}\mathcal L^{SA}_{scarM_1} + \lambda_{M_{2}}\mathcal L^{SA}_{scarM_2}
\end{equation}
where $\lambda_{LA}$, $\lambda_{scar}$, $\lambda_{M_{1}}$ and $\lambda_{M_{2}}$ are balancing parameters.
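A numpy sketch of the combined objective (forward pass only; the actual model is a PyTorch network trained with autograd) might look like:

```python
import numpy as np

def total_loss(y_hat, y, phi, p_hat, p, M1, M2,
               lam_la=0.01, lam_scar=10.0, lam_m1=0.01, lam_m2=0.001):
    """Forward-pass sketch of Eq. (6); p_hat and p are stacked as
    [p_normal, p_scar], phi is the signed DTM of the LA label, and
    M1/M2 are the boundary attention masks."""
    eps = 1e-7
    bce = -np.sum(y * np.log(y_hat + eps) + (1 - y) * np.log(1 - y_hat + eps))
    se_la = np.sum((y_hat - 0.5) * phi)
    se_scar = np.sum((p_hat - p) ** 2)
    d_hat, d = p_hat[0] - p_hat[1], p[0] - p[1]
    sa = lambda M: np.sum((M * (d_hat - d)) ** 2)
    return (bce + lam_la * se_la + lam_scar * se_scar
            + lam_m1 * sa(M1) + lam_m2 * sa(M2))

# sanity check: a perfect prediction yields a (nearly) zero loss
y = np.array([[0., 1.], [1., 0.]])
p = np.stack([y, 1.0 - y])                    # toy [p_normal, p_scar] targets
zero_phi, M = np.zeros((2, 2)), np.ones((2, 2))
perfect = total_loss(y, y, zero_phi, p, p, M, M)
```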
\section{Experiments}
\subsection{Materials}
\subsubsection{Data Acquisition and Pre-processing.}
The data is from the MICCAI2018 LA challenge~\cite{link/LAseg2018}.
The 100 LGE MRI training data, with manual segmentation of LA, consists of 60 post-ablation and 40 pre-ablation data.
In this work, we chose the 60 post-ablation data for manual segmentation of the LA scars and employed them for experiments.
The LGE MRIs were acquired with a resolution of $0.625\!\times\!0.625\!\times\!0.625$ mm and reconstructed to $1\!\times\!1\!\times\!1$ mm.
All images were cropped into a unified size of $208\!\times\!208\!\times\!80$ centering at the heart region and were normalized using Z-score.
We split the images into two sets, i.e., one with 40 images for training and the other with 20 for the test.
\subsubsection{Gold Standard and Evaluation.}
The challenge provides LA manual segmentation for the training data, and scars of the 60 post-ablation data were manually delineated by a well-trained expert.
These manual segmentations were considered as the gold standard.
For LA segmentation evaluation, Dice volume overlap, average surface distance (ASD) and Hausdorff distance (HD) were applied.
For scar quantification evaluation, the manual and (semi-) automatic segmentation results were first projected onto the manually segmented LA surface.
Then, the \emph{Accuracy} measurement of the two areas in the projected surface, Dice of scars (Dice$_\mathrm{scar}$) and generalized Dice score ($G$Dice) were used as indicators of the accuracy of scar quantification.
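For reference, the Dice overlap used above is straightforward to compute from binary masks (sketch; ASD and HD additionally require surface-distance computations):

```python
import numpy as np

def dice(a, b):
    """Dice volume overlap, 2|A ∩ B| / (|A| + |B|), for two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.sum(a & b) / (np.sum(a) + np.sum(b))

a = np.zeros((4, 4), int); a[:2] = 1     # 8 voxels
b = np.zeros((4, 4), int); b[1:3] = 1    # 8 voxels, 4 shared with a
```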
\subsubsection{Implementation.}
The framework was implemented in PyTorch, running on a computer with 1.90 GHz Intel(R) Xeon(R) E5-2620 CPU and an NVIDIA TITAN X GPU.
We used the SGD optimizer to update the network parameters (weight decay=0.0001, momentum=0.9).
The initial learning rate was set to 0.001 and divided by 10 every 4000 iterations.
The balancing parameters in Section \ref{method:multi_task} were set as follows: $\lambda_{LA}=0.01$, $\lambda_{scar}=10$, $\lambda_{M_{1}}=0.01$ and $\lambda_{M_{2}}=0.001$, where $\lambda_{LA}$ and $\lambda_{M_{2}}$ were multiplied by 1.1 every 200 iterations.
The inference of the networks required about 8 seconds to process one test image.
\subsection{Result}
\begin{figure*}[!t]\center
\subfigure[] {\includegraphics[width=0.47\textwidth]{fig_exp_sdm}}
\subfigure[] {\includegraphics[width=0.51\textwidth]{fig_3d_sdm_v03}}
\caption{
Quantitative and qualitative evaluation results of the proposed SE loss for LA segmentation: (a) Dice and HD of the LA segmentation results after combining the SE loss, i.e., U-Net$_\mathrm{LA}$-SE with different $\beta$ for DTM;
(b) 3D visualization of the LA segmentation results of three typical cases by U-Net$_\mathrm{LA}$-BCE and U-Net$_\mathrm{LA}$-SE.}
\label{fig:result:LA_SE}\end{figure*}
\begin{table*} [t] \center
\caption{
Summary of the quantitative evaluation results of LA segmentation. Here, U-Net$_\mathrm{LA}$ uses the original U-Net architecture for LA segmentation;
MTL means that the methods are based on the architecture in \zxhreffig{fig:method:network} with two decoders; BCE, SE, SA and SESA refer to the different loss functions.
The proposed method is denoted as MTL-SESA.
}
\label{tb:result:LA}
{\small
\begin{tabular}{ l| l *{5}{@{\ \,} l }}\hline
Method & \quad Dice & \quad ASD (mm) & \quad HD (mm) \\
\hline
U-Net$_\mathrm{LA}$-BCE &$ 0.889 \pm 0.035 $& \quad $ 2.12 \pm 0.80 $& \quad $ 36.4 \pm 23.6 $\\
U-Net$_\mathrm{LA}$-SE &$ 0.880 \pm 0.058 $& \quad $ 2.36 \pm 1.49 $& \quad $ 25.1 \pm 11.9 $\\
\hline\hline
MTL-BCE &$ 0.890 \pm 0.042 $& \quad $ 2.11 \pm 1.01 $& \quad $ 28.5 \pm 14.0 $\\
MTL-SE &$ 0.909 \pm 0.033 $& \quad $ 1.69 \pm 0.69 $& \quad $ 22.4 \pm 9.80 $\\
MTL-SESA &\bm{$ 0.913 \pm 0.032 $}& \quad \bm{$ 1.60 \pm 0.72 $}& \quad \bm{$ 20.0 \pm 9.59 $}\\
\hline
\end{tabular} }\\
\end{table*}
\begin{table*} [t] \center
\caption{
Summary of the quantitative evaluation results of scar quantification.
Here, LA$_\mathrm{M}$ denotes that scar quantification is based on the manually segmented LA, while LA$_{\mathrm{U\mbox{-}Net}}$ indicates that it is based on the U-Net$_\mathrm{LA}$-BCE segmentation;
U-Net$_\mathrm{scar}$ is the scar segmentation directly based on the U-Net architecture with different loss functions;
The inter-observer variation (Inter-Ob) is calculated from twelve randomly selected subjects.
}
\label{tb:result:scar}
{\small
\begin{tabular}{ l| l *{4}{@{\ \,} l }}\hline
Method & \quad Accuracy & \qquad Dice$_\mathrm{scar}$ & \qquad $G$Dice\\
\hline
LA$_\mathrm{M}$+Otsu~\cite{journal/tmi/ravanelli2013} &$ 0.750 \pm 0.219 $& \quad $ 0.420 \pm 0.106 $ & \quad $ 0.750 \pm 0.188 $\\
LA$_\mathrm{M}$+MGMM~\cite{journal/TBME/liu2017} &$ 0.717 \pm 0.250 $& \quad $ 0.499 \pm 0.148 $ & \quad $ 0.725 \pm 0.239 $\\
LA$_\mathrm{M}$+LearnGC~\cite{journal/MedAI/li2020} &$ 0.868 \pm 0.024 $& \quad $ 0.481 \pm 0.151 $ & \quad $ 0.856 \pm 0.029 $\\
\hline
LA$_{\mathrm{U\mbox{-}Net}}$+Otsu &$ 0.604 \pm 0.339 $& \quad $ 0.359 \pm 0.106 $ & \quad $ 0.567 \pm 0.359 $ \\
LA$_{\mathrm{U\mbox{-}Net}}$+MGMM &$ 0.579 \pm 0.334 $& \quad $ 0.430 \pm 0.174 $ & \quad $ 0.556 \pm 0.370 $ \\
\hline
U-Net$_\mathrm{scar}$-BCE &$ 0.866 \pm 0.032 $& \quad $ 0.357 \pm 0.199 $ & \quad $ 0.843 \pm 0.043 $\\
U-Net$_\mathrm{scar}$-Dice &$ 0.881 \pm 0.030 $& \quad $ 0.374 \pm 0.156 $ & \quad $ 0.854 \pm 0.041 $\\
U-Net$_\mathrm{scar}$-SE &$ 0.868 \pm 0.026 $& \quad $ 0.485 \pm 0.129 $ & \quad $ 0.863 \pm 0.026 $\\
\hline\hline
MTL-BCE &\bm{$ 0.887 \pm 0.023 $}& \quad $ 0.484 \pm 0.099 $ & \quad \bm{$ 0.872 \pm 0.024 $}\\
MTL-SE &$ 0.882 \pm 0.026 $& \quad $ 0.518 \pm 0.110 $ & \quad $ 0.871 \pm 0.024 $\\
MTL-SESA &$ 0.867 \pm 0.032 $& \quad \bm{$ 0.543 \pm 0.097 $} & \quad $ 0.868 \pm 0.028 $\\
\hline\hline
Inter-Ob &$ 0.891 \pm 0.017 $& \quad $ 0.580 \pm 0.110 $ & \quad $ 0.888 \pm 0.022 $\\
\hline
\end{tabular} }\\
\end{table*}
\begin{figure*}[t]\center
\includegraphics[width=1.0\textwidth]{fig_3d_results}\\[-2ex]
\caption{3D visualization of the LA scar localization by the eleven methods. The scarring areas are labeled in orange on the LA surface, which is constructed from LA$_\mathrm{M}$ labeled in blue.
}
\label{fig:result:3d_results}\end{figure*}
\subsubsection{Parameter Study.}
To explore the effectiveness of the SE loss, we compared the results of the proposed scheme for LA segmentation using different values of $\beta$ for DTM in Eq. (1).
\Zxhreffig{fig:result:LA_SE} (a) provides the results in terms of Dice and HD,
and \Zxhreffig{fig:result:LA_SE} (b) visualizes three examples for illustrating the difference of the results using or without using the SE loss.
One can see that with the SE loss, U-Net$_\mathrm{LA}$-SE evidently reduced clutter and disconnected parts in the segmentation compared to U-Net$_\mathrm{LA}$-BCE, and significantly improved the HD of the resulting segmentation ($p<0.001$), though the Dice score may not be very different.
Also, U-Net$_\mathrm{LA}$-SE showed stable performance with different values of $\beta$ except for too extreme values.
In the following experiments, $\beta$ was set to 1.
\subsubsection{Ablation Study.}
\Zxhreftb{tb:result:LA} and \Zxhreftb{tb:result:scar} present the quantitative results of different methods for LA segmentation and scar quantification, respectively.
For LA segmentation, combining the proposed SE loss performed better than only using the BCE loss.
For scar quantification, the SE loss also showed promising performance compared to the conventional losses in terms of Dice$_\mathrm{scar}$.
LA segmentation and scar quantification both benefited from the proposed MTL scheme compared to performing the two tasks separately.
The results were further improved after introducing the newly-designed SE and SA loss in terms of Dice$_\mathrm{scar}$ ($p\leq0.001$), but with a slightly worse \emph{Accuracy} ($p\leq0.001$) and $G$Dice ($p>0.1$).
\Zxhreffig{fig:result:3d_results} visualizes an example for illustrating the segmentation and quantification results of scars from the mentioned methods in \Zxhreftb{tb:result:scar}.
Compared to U-Net$_\mathrm{scar}$-BCE and U-Net$_\mathrm{scar}$-Dice, MTL-BCE improved the performance, thanks to the MTL network architecture.
When the proposed SE and SA loss were included, some small and discrete scars were also detected, and an end-to-end scar quantification and projection was achieved.
\subsubsection{Comparisons with Literature.}
\Zxhreftb{tb:result:scar} and \Zxhreffig{fig:result:3d_results} also present the scar quantification results from some state-of-the-art algorithms, i.e., Otsu~\cite{journal/tmi/ravanelli2013}, multi-component GMM (MGMM)~\cite{journal/TBME/liu2017}, LearnGC~\cite{journal/MedAI/li2020} and U-Net$_\mathrm{scar}$ with different loss functions.
The three (semi-) automatic methods generally obtained acceptable results, but relied on an accurate initialization of LA.
LearnGC had a similar result compared to MGMM in Dice$_\mathrm{scar}$ based on LA$_\mathrm{M}$, but its \emph{Accuracy} and $G$Dice were higher.
The proposed method performed much better than all the automatic methods in terms of Dice$_\mathrm{scar}$ with statistical significance ($p\leq0.001$).
In \Zxhreffig{fig:result:3d_results}, one can see that Otsu and U-Net$_\mathrm{scar}$ tended to under-segment the scars.
Though including Dice loss could alleviate the class-imbalance problem, it is evident that the SE loss could be more effective, which is consistent with the quantitative results in \Zxhreftb{tb:result:scar}.
MGMM and LearnGC both detected most of the scars, but LearnGC had the potential advantage of detecting small scars.
The proposed method could also detect small scars and obtained a smoother segmentation result.
\section{Conclusion}
In this work, we have proposed an end-to-end learning framework for simultaneous LA segmentation and scar quantification by combining the SE and SA loss.
The proposed algorithm has been applied to 60 image volumes acquired from AF patients and obtained comparable results to inter-observer variations.
The results have demonstrated the effectiveness of the proposed SE and SA loss, and showed the superiority of segmentation performance over the conventional schemes.
In particular, the proposed SE loss substantially reduced outliers, which frequently occur in the predictions of DL-based methods.
Our technique can be easily extended to other segmentation tasks, especially for discrete and small targets such as lesions.
A limitation of this work is that the gold standard was constructed from the manual delineation of only one expert.
Besides, the target included in this study is only post-ablation AF patients.
In future work, we will combine multiple experts to construct the gold standard, and consider both pre- and post-ablation data.
\subsubsection{Acknowledgement.}
This work was supported by the National Natural Science Foundation of China (61971142), and L. Li was partially supported by the CSC Scholarship.
\bibliographystyle{splncs04}
\section{\label{sec:intro}Introduction}
Underwater vehicles are used for a wide range of applications, such as marine exploration \citep{bayatEnvironmentalMonitoringUsing2017}, ecosystem monitoring \citep{whitcombAdvancesUnderwaterRobot2000}, and ocean cleaning \citep{zahugiOilSpillCleaning2013}.
One class of underwater vehicles relies on propellor-based technology and turbomachinery for long-range propulsion.
These vehicles are not suitable for application in sensitive or fragile environments like coral reefs, or in constrained environments with limited space for manoeuvring \citep{mohseniPulsatileVortexGenerators2006a}.
Another class of underwater vehicles consists of bio-inspired swimmers, which mimic the kinematics of biological organisms like fish, jellyfish, and cephalopods for locomotion \citep{zhuTunaRoboticsHighfrequency2019, weymouthUltrafastEscapeManeuver2015a, robertsonRoboScallopBivalveInspired2019a}.
Bio-inspired vehicles are highly controllable and can explore a wide range of environments.
One of the mechanisms commonly exploited by bio-inspired vehicles for transport is pulsatile jet propulsion, which relies on the periodic ejection of vortex rings to create thrust \citep{kruegerSignificanceVortexRing2003, whittleseyOptimalVortexFormation2013a}.
The canonical device used to generate and study vortex rings is the piston cylinder apparatus.
A translating piston pushes fluid from within a cylinder through an opening or nozzle at the end of the cylinder, where a shear layer forms and rolls up into a coherent ring vortex \citep{dabiriOptimalVortexFormation2009}.
The vortex ring grows larger as more fluid is ejected by the piston.
However, the growth of the vortex does not continue indefinitely.
Beyond a limiting non-dimensional vortex formation number, additional fluid supplied by the vortex generator is no longer directly entrained by the vortex ring and instead forms a trailing shear layer \citep{gharibUniversalTimeScale1998}.
A vortex ring produced by a classical piston cylinder apparatus with a constant piston velocity simultaneously attains its maximum circulation, reaches its minimum non-dimensional energy, and outpaces its feeding shear layer after approximately four convective time scales or stroke ratios \citep{gharibUniversalTimeScale1998}.
This limiting value of the vortex formation number is not universal and depends on the geometry of the outlet nozzle or orifice \citep{limbourgExtensionUniversalTime2021, dabiriStartingFlowNozzles2005, kriegNewKinematicCriterion2021,ofarrellPinchoffNonaxisymmetricVortex2014a}, the presence of a uniform background co- or counterflow \citep{dabiriDelayVortexRing2004a, kruegerFormationNumberVortex2006a}, and the temporal evolution of the piston velocity \citep{rosenfeldCirculationFormationNumber1998, zhaoEffectsTrailingJet2000, shusserEffectTimedependentPiston2006, olcayMomentumEvolutionEjected2010a}.
Different time-dependent profiles of the exit velocity can noticeably alter the formation number of the ring vortex, without significantly affecting its circulation.
Aquatic animals that utilise jet propulsion intuitively manipulate the exit velocity profile and nozzle diameter to adapt their swimming performance \citep{dabiriFastswimmingHydromedusaeExploit2006,lipinskiFlowStructuresFluid2009,gemmellCoolYourJets2021}.
The practical desire to integrate this capability in human-engineered underwater vehicles reopens fundamental questions on vortex ring formation.
Detailed knowledge of the influence of arbitrary time-varying velocity profiles on the formation number and the pinch-off of vortex rings are crucial to improve the controllability and energy-efficiency of underwater vehicles that operate using jet propulsion.
An objective and frame-independent method to identify vortex pinch-off and characterise the formation process is based on Lagrangian coherent structures \citep{hallerLagrangianStructuresRate2001}.
Lagrangian coherent structures are maxima, or ridges, in the positive and negative finite-time Lyapunov exponent (FTLE) fields.
They demarcate the vortex ring and separate regions in the flow that are dynamically different \citep{ofarrellLagrangianApproachIdentifying2010, shaddenLagrangianAnalysisFluid2006a}.
The FTLE ridges emerge between the vortex ring and the trailing shear layer when the vortex no longer accepts additional vorticity and pinches off \citep{shaddenLagrangianAnalysisFluid2006a}.
Alternatively, prominent features such as local extrema in the pressure field can serve as instantaneous indicators of vortex pinch-off.
Regions of elevated pressure in front of and behind the vortex ring are called leading and trailing pressure maxima, respectively.
These maxima indicate regions where vorticity no longer passes into the vortex ring, similar to the positive and negative FTLE ridges \citep{lawsonFormationTurbulentVortex2013}.
The emergence of the trailing pressure maximum coincides with the formation number of the vortex ring and is a necessary condition for pinch-off \citep{schlueter-kuckPressureEvolutionShear2016}.
Here, we present a bio-inspired vortex generator as a potential propulsion mechanism for underwater vehicles.
The generator consists of a soft elastic bulb that is compressed by two rigid arms.
Similar to biological jetters, our vortex generator produces a non-linear time-varying exit velocity profile and has a finite volume capacity, which limits the maximum attainable stroke length more strictly than in classical piston cylinder arrangements.
It bears similarities with synthetic jet actuators \citep{Shuster.2007, lawsonFormationTurbulentVortex2013, VanBuren.2014, Straccia.2020}.
We experimentally study the vortex ring formation using time-resolved velocity field measurements.
The transient development of the vortex characteristics are analysed based on the evolution of ridges in the finite-time Lyapunov exponent field and on local extrema in the pressure field derived from the velocity data.
Special attention is directed toward the vortex merging event observed in the trailing shear layer.
The robustness of the emergence of pressure maxima as observables to identify the end of the vortex formation process is also evaluated.
The findings will aid the further design and control of underwater vehicles that operate using pulsatile jet propulsion.
\section{\label{sec:materials}Materials and methods}
\subsection{Vortex generator}
The bio-inspired vortex generator designed for this study combines the flexible bell of a jellyfish \citep{weymouthUltrafastEscapeManeuver2015a, xuSquidinspiredRobotsPerform2021} with the kinematics of bivalve molluscs such as scallops \citep{robertsonRoboScallopBivalveInspired2019a}.
The generator presented in \cref{fig:scallop} produces vortex rings by compressing an elastic bulb with two rigid arms and ejecting fluid through a circular nozzle with diameter $D=\SI{14}{\milli\meter}$.
The silicone bulb is prepared by mixing a silicone moulding compound (Zhermack Elite Double 32 shore A base) and a catalyst in a 1:1 ratio and centrifuging the mixture.
The mixture is then poured into a plastic mould prescribing the shape of the bulb and rotated slowly for \SI{30}{minutes} to ensure a homogeneous thickness of \SI{1.5}{\milli\meter} across the bulb.
The bulb has a volume of \SI{150}{\milli\liter} when uncompressed (\cref{fig:scallop}b).
The body and arms of the vortex generator are 3D printed with standard clear resin using a stereolithographic printer (Formlabs Form 2).
The motion of both arms is controlled by a brushless servo motor (Maxon EC-max) along a single motor shaft via the use of a gear box, which ensures symmetrical compression of the bulb (\cref{fig:scallop}a).
Commands to the motor are sent via a motor controller (DMC-4040, Galil Motion Control, USA).
\begin{figure}[b!]
\includegraphics{fig1.eps}
\caption{(\textbf{a.}) Schematic of the vortex generator including the gear system to translate the rotation on the main axis into a symmetric compression of the bulb by the two arms.
(\textbf{b.}) Relaxed and compressed states of the bulb.}\label{fig:scallop}
\end{figure}
\subsection{Experimental setup}
The vortex generator is placed in a rectangular glass tank filled with water.
Time-resolved particle image velocimetry (PIV) is used to measure the velocity field in a stream-wise symmetry plane of the vortex.
Polystyrene particles with a diameter of \SI{56}{\micro\meter} are used as seeding particles.
The starting position of the arms is the angle at which they touch the bulb without compressing it (\cref{fig:scallop}b).
The angle of the arms is varied with a sinusoidal profile, to smoothly compress and relax the bulb with a prescribed amplitude and time period.
The bulb walls are thick enough such that the material does not stretch upon mechanical compression by the arms.
The arms mechanically limit the volume the bulb can contain.
When the arms are opened, the bulb returns to its original form.
Two high-power light-emitting diodes (LEDs) (LED Pulsed System, ILA\_5150 GmbH, Germany) create a light sheet in the horizontal stream-wise plane cutting through the centre of the exit nozzle (\cref{fig:scallop}a).
The applicability of these high-power LEDs for PIV has been demonstrated previously by \citet{buchmannPulsedHighpowerLED2012, krishnaFlowfieldForceEvolution2018}.
The LEDs are operated in continuous mode during image acquisition.
A mirror angled at \SI{45}{\degree} is placed underneath the tank and a high-speed camera (Photron Fastcam SA-X2) records \SI{1024x512}{px} images with an acquisition rate of \SI{2000}{\hertz}.
The start of the PIV image acquisition coincides with the start of the compression of the vortex generator.
Consecutive particle images are correlated using a multigrid evaluation method with an initial window size of \SI{32x32}{px} and step size of \SI{3}{px}, or \SI{90}{\percent} overlap.
This overlap was optimal for not artificially smoothing the velocity gradient fields \citep{inproceedings, kindlerAperiodicityFieldFullscale2011}.
This corresponds to a physical grid spacing of \SI{0.21}{\milli\meter} or \SI{0.015}{D}.
The magnification and extent of the spatial domain were selected to ensure that the full formation process of the vortex ring could be observed by a single camera.
The high temporal resolution comes at the expense of a lower spatial resolution of the vortex rings.
Lagrangian vortex analysis methods are used in combination with more classical Eulerian methods to compensate for the lower spatial resolution and exploit the information available through the high temporal resolution.
\subsection{Finite-time Lyapunov exponent field and ridge computation}
The candidate material boundaries of the generated ring vortex are identified from ridges in the finite-time Lyapunov exponent field (FTLE).
The FTLE fields are calculated directly from the measured time-resolved velocity fields by artificially seeding and convecting fluid particles forward or backward in time to obtain the forward or positive pFTLE and backward or negative nFTLE fields.
The initial positions of the fluid particles are indicated by $\vb{x}$ and their positions after a given integration time $\kindex{T}{f}$ are found by advecting the particles with the flow using a fourth-order Adams-Bashforth-Moulton integration scheme.
The flow map $\kindex{\phi}{t}^{t+\kindex{T}{f}}$ describes the displacement of the particles between time $t$ and $t+\kindex{T}{f}$.
The spatial gradient of the flow map gives us the Cauchy-Green strain tensor whose largest eigenvalue (\kindex{\lambda}{max}) is referred to as the coefficient of expansion $\sigma^{\kindex{T}{f}}(\vb{x},t)$ \citep{greenUsingHyperbolicLagrangian2010a}.
The scalar FTLE field is then defined as \citep{hallerLagrangianCoherentStructures2002}:
\begin{equation}
\textrm{FTLE}^{\kindex{T}{f}}(\vb{x},t) = \frac{1}{2\kindex{T}{f}} \ln\left(\kindex{\lambda}{max}\left( \left[\grad \kindex{\phi}{t}^{t+\kindex{T}{f}} \right]^* \left[\grad\kindex{\phi}{t}^{t+\kindex{T}{f}}\right] \right) \right) = \frac{1}{2\kindex{T}{f}} \ln\left(\sigma^{\kindex{T}{f}}(\vb{x},t)\right)
\end{equation}
where $^*$ is the matrix transpose operator.
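As a concrete illustration, the FTLE field defined above can be evaluated by advecting a grid of tracer particles and finite-differencing the resulting flow map. The sketch below (Python with NumPy, not the processing code used in this work) applies this to an analytic saddle flow as a hypothetical stand-in for the measured PIV fields; a classical fourth-order Runge-Kutta integrator replaces the Adams-Bashforth-Moulton scheme for brevity. For the linear saddle $\vb{u}=(ax,-ay)$ the exact FTLE equals the strain rate $a$.

```python
import numpy as np

def velocity(p, t):
    # Steady saddle flow u = (a*x, -a*y); an analytic stand-in for the
    # measured PIV fields. The strain rate a is set to 1 /s.
    a = 1.0
    return np.array([a * p[0], -a * p[1]])

def advect(p0, t0, T, dt):
    # Classical RK4 time stepping (the paper uses a fourth-order
    # Adams-Bashforth-Moulton scheme; RK4 is used here for brevity).
    p, t = np.array(p0, dtype=float), t0
    for _ in range(int(round(T / dt))):
        k1 = velocity(p, t)
        k2 = velocity(p + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = velocity(p + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = velocity(p + dt * k3, t + dt)
        p = p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return p

def ftle_field(xs, ys, t0=0.0, Tf=1.0, dt=0.01):
    # Flow map: final position of a particle released at every grid node.
    phi = np.array([[advect((x, y), t0, Tf, dt) for x in xs] for y in ys])
    # Spatial gradient of the flow map by finite differences.
    dpx_dy, dpx_dx = np.gradient(phi[..., 0], ys, xs)
    dpy_dy, dpy_dx = np.gradient(phi[..., 1], ys, xs)
    ftle = np.zeros(phi.shape[:2])
    for i in range(phi.shape[0]):
        for j in range(phi.shape[1]):
            F = np.array([[dpx_dx[i, j], dpx_dy[i, j]],
                          [dpy_dx[i, j], dpy_dy[i, j]]])
            C = F.T @ F                        # Cauchy-Green strain tensor
            lam_max = np.linalg.eigvalsh(C)[-1]
            ftle[i, j] = np.log(lam_max) / (2.0 * Tf)
    return ftle

xs = np.linspace(-1.0, 1.0, 21)
ys = np.linspace(-1.0, 1.0, 21)
ftle = ftle_field(xs, ys)
print(round(float(ftle.mean()), 3))  # close to the strain rate a = 1
```

For measured velocity fields, the analytic velocity function would be replaced by temporal and spatial interpolation of the PIV data.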
Candidate vortex boundaries manifest as ridges or maxima of the scalar FTLE field \citep{hallerLagrangianCoherentStructures2002, shaddenLagrangianAnalysisFluid2006a, greenDetectionLagrangianCoherent2007a, greenUsingHyperbolicLagrangian2010a}.
Ridges in the positive FTLE field are candidate repelling material lines and correspond to regions where there is a maximum divergence of fluid particle trajectories over time.
Ridges in the negative FTLE field are candidate attracting material lines and correspond to regions where there is maximum attraction of fluid particle trajectories over time.
The ridges are computed here using a ridge tracking algorithm, similar to the one described by \cite{lipinskiRidgeTrackingAlgorithm2010a}.
The algorithm locates grid points with maximum intensity and performs a search within the adjacent grid points to determine the next point on the ridge.
An adjacent grid point is selected as the next ridge point if it has a similar or larger magnitude of the finite-time Lyapunov exponent than the current grid point.
The integration time \kindex{T}{f} used for the data presented here is \SI{0.4}{\second} or \SI{80}{\percent} of the full compression-relaxation cycle.
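A minimal greedy variant of the ridge-tracking procedure described above can be sketched as follows (Python; the synthetic field and the tolerance value are illustrative choices, not those of the cited algorithm): start at the grid point of maximum FTLE magnitude and repeatedly step to the strongest admissible neighbour.

```python
import numpy as np

def track_ridge(field, tol=0.9, max_steps=200):
    # Greedy ridge tracker: start at the global maximum and repeatedly step
    # to the strongest unvisited neighbour whose value is at least `tol`
    # times the current one; stop when the ridge intensity drops off.
    ny, nx = field.shape
    i, j = np.unravel_index(np.argmax(field), field.shape)
    ridge, visited = [(i, j)], {(i, j)}
    for _ in range(max_steps):
        best, best_val = None, -np.inf
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di, dj) == (0, 0) or not (0 <= ni < ny and 0 <= nj < nx):
                    continue
                if (ni, nj) not in visited and field[ni, nj] > best_val:
                    best, best_val = (ni, nj), field[ni, nj]
        if best is None or best_val < tol * field[i, j]:
            break                              # ridge has ended
        i, j = best
        ridge.append(best)
        visited.add(best)
    return ridge

# Synthetic test field with a ridge of elevated values along the row i = 5.
y, x = np.mgrid[0:11, 0:11]
field = np.exp(-((y - 5.0) ** 2))
ridge = track_ridge(field)
print(len(ridge))  # the tracker follows the full 11-point ridge
```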
\section{\label{sec:results}Results and discussion}
\subsection{Fluid ejection and entrainment}
Our vortex generator operates by compressing and relaxing its deformable bulb.
Compression ejects fluid into the wake and relaxation entrains fluid by allowing the bulb to return to its original form.
The temporal evolution of the average velocity of the fluid that is ejected or entrained by the propulsor is obtained by integrating the vertical velocity profile directly behind the nozzle exit of the propulsor.
This average exit velocity across the nozzle diameter $D$ is defined in cylindrical coordinates as
\begin{equation}
\label{equation:uf}
\kindex{U}{exit}(t) = \frac{4}{\pi D^2} \int\limits_{0}^{2\pi}\int\limits_{0}^{D/2} u(r,x=0,t)\, r \, \dd r \dd\theta
\end{equation}
where $u(r,x=0,t)$ is the stream-wise velocity component directly behind the nozzle exit at $x/D = 0$.
Schematic representations of the velocity profiles at the nozzle exit and the temporal evolution of the averaged exit velocity are presented in \cref{fig:uf} over the duration of one compression-relaxation cycle $T$.
Positive and negative values of the exit velocity $\kindex{U}{exit}$ correspond respectively to fluid ejection and fluid entrainment.
The exit velocity $\kindex{U}{exit}$ in \cref{fig:uf} is normalised by $\kindex{U}{0}$, the characteristic velocity of the system, which is defined as
\begin{equation}
\kindex{U}{0} = \frac{8\kindex{V}{0}}{\pi D^2 T}\quad.
\label{equation:estar}
\end{equation}
Here, $\kindex{V}{0}$ is the total volume of fluid ejected from the propulsor during the duration of bulb compression, $T/2$.
The velocity \kindex{U}{0} is the velocity of the equivalent constant uniform flow that yields the same ejected volume of fluid over the time of bulb compression.
The velocity \kindex{U}{0} is \SI{415}{\milli\meter\per\second} for the data presented here, and the corresponding Reynolds number \kindex{\Rey}{D} based on the nozzle exit diameter and \kindex{U}{0} is \num{5820}.
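\Cref{equation:estar} can be checked numerically by inverting it for the ejected volume. In the sketch below (Python), the cycle duration $T=\SI{0.5}{\second}$ is inferred from the FTLE integration time reported earlier (\SI{0.4}{\second} corresponding to \SI{80}{\percent} of a cycle), and the kinematic viscosity of water is an assumed value; the implied ejected volume of roughly \SI{16}{\milli\liter} is about a tenth of the \SI{150}{\milli\liter} bulb volume.

```python
import math

# Values from the text: nozzle diameter and the reported U0.
D = 14.0e-3    # nozzle exit diameter [m]
U0 = 415.0e-3  # characteristic velocity [m/s]
# Assumptions for this check: cycle duration inferred from the FTLE
# integration time (0.4 s = 80 % of a cycle), and the kinematic viscosity
# of water at room temperature.
T = 0.5        # compression-relaxation cycle duration [s]
nu = 1.0e-6    # kinematic viscosity [m^2/s]

# Invert U0 = 8 V0 / (pi D^2 T) for the ejected volume V0.
V0 = U0 * math.pi * D**2 * T / 8.0
Re = U0 * D / nu
print(f"V0 = {V0 * 1e6:.1f} ml, Re = {Re:.0f}")
```

The resulting Reynolds number of about \num{5.8e3} is consistent with the value of \num{5820} reported in the text.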
\begin{figure}[tb!]
\includegraphics{fig2.eps}
\caption{(\textbf{a})-(\textbf{d}) Reference cylindrical coordinate system and schematics of the velocity profiles at the nozzle exit at different times during the compression-relaxation cycle.
Fluid leaving the nozzle moves in the positive $x$-direction which is referred to as the stream-wise direction.
(\textbf{e}) Temporal evolution of the measured diameter averaged exit velocity \kindex{U}{exit} normalised by the characteristic velocity \kindex{U}{0} over the duration of one compression-relaxation cycle $T$.
(\textbf{f}) Non-dimensional stroke ratio obtained from integrating \kindex{U}{exit} in time.}\label{fig:uf}
\end{figure}
Due to the particular design of our vortex generator, the exit velocity increases slowly when the compression starts, but rapidly catches up and reaches a maximum value of $\kindex{U}{exit}/\kindex{U}{0}\approx 1.9$ at $t/T=0.13$.
At $t/T = 0.5$, the kinematics of the propulsor cause a transition from bulb compression to bulb relaxation.
The fluid does not respond immediately to the transition of the kinematics and continues to flow out of the bulb for $\Delta t/T\approx \num{0.07}$ after bulb relaxation has begun.
A shorter lag in the response of the flow with respect to the forcing kinematics is present at the end of the cycle.
The flow continues to be entrained for $\Delta t/T\approx \num{0.04}$ after the driving kinematics have stopped.
The slight asymmetry in the fluid ejection-entrainment response to the symmetric compression-relaxation motion is characteristic of this particular driving kinematics and could be compensated for or enhanced by using more complex kinematics.
The length of an equivalent cylindrical column of fluid ejected by the vortex propulsor during compression and relaxation is called the stroke length.
It is defined here as:
\begin{equation}
\label{equation:ld}
L(t) = \int\limits_{0}^{t} \kindex{U}{exit} (\tau)\, \dd\tau\quad.
\end{equation}
The stroke length is analogous to the distance travelled by the piston in a piston cylinder apparatus to eject the same volume of fluid.
The temporal evolution of the stroke to diameter ratio $L/D$ is presented in \cref{fig:uf}f.
After the initial slow response of the flow to the compression kinematics, the stroke ratio increases non-linearly during the compression phase for $\num{0}<t/T<\num{0.57}$.
The stroke ratio attains a maximum value of \num{7} around $t/T=0.57$ and decreases approximately linearly with a rate $\dd L/\dd t= \num{-5.7} D/T $ during the relaxation phase for $\num{0.57}<t/T<\num{1.04}$.
The temporal velocity profile of the fluid ejected by the vortex generator is not constant and leads to a non-linear variation of the stroke ratio.
This allows us to characterise the formation process of a vortex ring generated by a non-linear evolution of the stroke ratio, which has not yet received much attention.
In addition to the nonlinear fluid ejection profile, the fluid entrainment process is unique to the design of our propulsor, and enables the periodic ejection of multiple vortex rings.
The effect of the time-varying exit velocity profile and the fluid entrainment process on the temporal evolution of the propulsive force will be the subject of future investigations.
Here, we focus solely on the flow field created by our propulsor.
Our goal is to characterise the formation process of a vortex ring generated by the arbitrary fluid ejection profile and identify observable quantities that can aid a future optimisation of similar robotic devices that utilise pulsatile jet propulsion.
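The stroke ratio in \cref{equation:ld} is obtained by cumulative integration of the measured exit velocity. As a hypothetical illustration (Python), the sketch below integrates a smooth sinusoidal exit-velocity profile that ejects the same volume as a constant velocity \kindex{U}{0} over the compression half-cycle; the measured profile in \cref{fig:uf}e is more strongly skewed, but the resulting maximum stroke ratio is already of the same order as the observed value of about \num{7}.

```python
import numpy as np

D, T, U0 = 14.0e-3, 0.5, 0.415   # nozzle diameter [m], cycle [s], U0 [m/s]

# Hypothetical exit-velocity profile: sinusoidal ejection and entrainment
# with the same ejected volume as a constant U0 over the compression
# half-cycle (the measured profile is skewed and peaks near t/T = 0.13).
t = np.linspace(0.0, T, 1001)
U_exit = U0 * (np.pi / 2.0) * np.sin(2.0 * np.pi * t / T)

# Cumulative trapezoidal integration of L(t) = int_0^t U_exit dtau.
L = np.concatenate(([0.0],
                    np.cumsum(0.5 * (U_exit[1:] + U_exit[:-1]) * np.diff(t))))
LD = L / D
print(f"max L/D = {LD.max():.2f}")   # equals U0*(T/2)/D = 7.41 here
```

By construction, the stroke ratio of this profile returns to zero at the end of the cycle, since all ejected fluid is re-entrained.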
\begin{figure}[tb!]
\includegraphics{fig3.eps}
\caption{Snapshots of experimentally observed vortex ring development during the compression phase of the vortex generator at
(\textbf{a}) $L/D$ = \num{3.4}, (\textbf{b},\textbf{c}) $L/D$ = \num{5.0}, (\textbf{d}) $L/D$ = \num{5.5}, and (\textbf{e}) $L/D$ = \num{5.9}.
The colours in the top half of the snapshots indicate values of the experimentally measured out-of-plane vorticity component $\omega$.
The colours in the bottom half of the snapshots indicate the experimentally obtained swirling strength $\kindex{\lambda}{ci}$.
(\textbf{c}) Zoomed-in view on a secondary vortex in the trailing shear layer with velocity vectors indicating the relative velocity with respect to the velocity measured in the centre of the secondary vortex.
The vorticity associated with the secondary vortex in (\textbf{d}) is indicated by the small box.
}\label{fig:vort}
\end{figure}
\subsection{Vortex formation process}
When fluid is ejected by our vortex propulsor, a coherent vortex ring is formed.
The growth of the vortex ring is presented by instantaneous snapshots of the flow field in \cref{fig:vort}.
The arrows represent the two dimensional velocity field $\vb{u}=(u,v)$ in the measurement plane.
The out-of-plane vorticity component, $\omega$, is computed from the in-plane velocity field and is shown in the top half of the snapshots in \cref{fig:vort}.
The colours in the bottom half of the snapshots indicate the swirling strength, $\kindex{\lambda}{ci}$, which is the imaginary part of the complex eigenvalues of the velocity gradient tensor \citep{zhouMechanismsGeneratingCoherent1999}.
The swirling strength is a robust indicator of vortices in shear layers where high concentrations of vorticity exist and obfuscate the presence or absence of vortices.
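The distinction between shear and swirl can be made concrete with two canonical velocity gradient tensors (a hypothetical Python sketch, independent of the PIV processing chain): a pure shear layer carries strong vorticity but zero swirling strength, whereas solid-body rotation yields $\kindex{\lambda}{ci}$ equal to the rotation rate.

```python
import numpy as np

def swirling_strength(J):
    # lambda_ci: imaginary part of the complex-conjugate eigenvalue pair
    # of the (in-plane, 2x2) velocity gradient tensor J.
    return float(np.abs(np.linalg.eigvals(J).imag).max())

Omega, s = 3.0, 3.0
# Solid-body rotation u = (-Omega*y, Omega*x): eigenvalues +/- i*Omega.
J_rot = np.array([[0.0, -Omega], [Omega, 0.0]])
# Pure shear u = (s*y, 0): vorticity omega = -s, but only real eigenvalues.
J_shear = np.array([[0.0, s], [0.0, 0.0]])
print(swirling_strength(J_rot), swirling_strength(J_shear))
```

Both tensors carry the same vorticity magnitude, yet only the rotational one registers a non-zero swirling strength, which is why $\kindex{\lambda}{ci}$ discriminates vortices inside the trailing shear layer.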
The shear layer that emerges at the boundary between the ejected fluid and the surrounding quiescent flow immediately rolls-up into a vortex ring.
The coherent core is indicated by a localised region of non-zero values of the swirling strength (\cref{fig:vort}a).
The vortex ring rapidly convects away from the nozzle exit while flow is still being ejected and a trailing shear layer appears between the vortex ring and the nozzle exit (\cref{fig:vort}b).
At $L/D=\num{5}$, a secondary or trailing vortex is present in the trailing shear layer.
Based on the vorticity concentration alone, it is difficult to distinguish between a strong shear flow and rotation, but the swirling strength distribution is conclusive (\cref{fig:vort}b,c).
The velocity vectors in the zoomed-in view of the secondary vortex in \cref{fig:vort}c represent the velocity relative to the velocity vector measured in the centre of the secondary vortex ($\vb{u}-\kindex{\vb{u}}{vortex}$).
The centre of the vortex is identified as the location of maximum swirling strength.
This secondary vortex core is not persistent and is no longer visible in the swirling strength field for $L/D>\num{5.5}$ when the primary vortex travels further downstream and moves away from the trailing shear layer (\cref{fig:vort}d-e).
The vorticity associated with the secondary vortex is still present and has moved closer to the primary vortex (indicated by the box in \cref{fig:vort}d) suggesting that the primary and secondary vortices begin merging.
To confirm the merging of the primary and secondary vortices, we analyse the temporal evolution of the Lagrangian coherent structures in the positive and negative finite-time Lyapunov exponent (FTLE) fields, which mark the boundaries of the vortex.
\Cref{fig:ftle} shows the positive and negative FTLE ridges and fields atop the vorticity and swirling strength fields.
The positive ridge is the upstream boundary of the vortex ring along which particle trajectories are attracted.
The negative ridge is the downstream boundary of the vortex ring from which particle trajectories are repelled.
The two ridges can be identified once the core of the primary vortex ring has moved one nozzle diameter away from the exit.
The evolution of the stream-wise location of the vortex core and of the intersections of the positive and negative FTLE ridges with the centreline are presented as a function of the stroke ratio $L/D$ in \cref{fig:ftle}e.
As the vortex convects away from the nozzle exit, the positive FTLE ridge lags behind the vortex core (\cref{fig:ftle}b,e).
The positive FTLE ridge encloses the secondary vortex core indicated by the box in \cref{fig:ftle}b.
For $L/D>5$ the positive FTLE ridge accelerates and catches up with the vortex core, pushing the vorticity associated with the secondary vortex core to merge with the vorticity of the primary core.
The distance between the positive FTLE ridge and the core reaches a steady state at $L/D \approx 6$ such that the Lagrangian boundaries symmetrically enclose the vortex core (\cref{fig:ftle}d).
\begin{figure}[tb!]
\includegraphics{fig4.eps}
\caption{Positive and negative finite-time Lyapunov exponent (FTLE) ridges, computed from experimental data, indicating the boundaries of the vortex ring presented atop the vorticity and $\kindex{\lambda}{ci}$ fields for (\textbf{a}) $L/D$ = \num{3.4}, (\textbf{b}) $L/D$ = \num{5.0}, (\textbf{c}) $L/D$ = \num{5.5}, and (\textbf{d}) $L/D$ = \num{5.9}.
(\textbf{e}) Evolution of the stream-wise location of the vortex core and of the crossing of the positive and negative FTLE ridges with the centreline as a function of the stroke ratio.
} \label{fig:ftle}
\end{figure}
The evolution of the positive ridge during vortex merging enables the mixing of two dynamically different regions of fluid: fluid inside the primary vortex ring and fluid that is in the trailing shear layer including the secondary vortex ring \citep{ofarrellLagrangianApproachIdentifying2010}.
At the end of merging, the positive and negative FTLE ridges are symmetric with respect to the vortex core and the vortex ring has separated from the trailing shear layer.
A symmetric definition of the vortex ring by the FTLE ridges matches the intuitive idea of a pinched-off vortex ring, with the ridges acting as physical barriers that prevent the entrance of additional vorticity into the ring.
\subsection{Evolution of vortex topology during vortex merging}
The distance between the intersections of the positive and negative FTLE ridges with the centreline is used to define the stream-wise length, \kindex{L}{o}, of the vortex.
The largest distance in the radial direction between the topmost and bottommost points on the positive FTLE ridge is used here to define the outer diameter, \kindex{D}{o}, or height of the vortex.
The definition of these vortex shape characteristics are indicated in \cref{fig:viscues}a and their temporal evolution for $L/D>3$ is presented in \cref{fig:viscues}b.
For lower stroke ratios, we cannot yet identify the downstream FTLE ridge to determine the vortex length.
Around $L/D=3$, the vortex outer diameter is about twice the nozzle diameter and larger than the stream-wise length.
The outer diameter increases approximately linearly during the rest of the compression phase of the bulb to $\kindex{D}{o} = \SI{2.4}{D}$ at $L/D=6.5$.
The stream-wise length increases faster than the outer diameter as the positive FTLE ridge lags behind and reaches a maximum value of $\kindex{L}{o}=\SI{2.65}{D}$ at $L/D \approx 4.6$, yielding a minimum aspect ratio of $\kindex{L}{o}/\kindex{D}{o}=0.8$.
For $L/D>4.6$, the positive FTLE ridge starts catching up with the vortex core and negative ridge (\cref{fig:ftle}c) and the stream-wise length rapidly decreases and converges to a value of $\kindex{L}{o}=\SI{1.76}{D}$ for $L/D>6$.
This corresponds to an aspect ratio of $\kindex{L}{o}/\kindex{D}{o}=1.4$.
Based on the velocity field (\cref{fig:vort}) and FTLE snapshots (\cref{fig:ftle}), the merging of the secondary vortex with the primary vortex ring occurs for $5<L/D<6$.
During this time interval, indicated by the shaded region in \cref{fig:viscues}b, the FTLE boundaries contract and push vorticity from the tail of the FTLE bound region towards the primary core line to merge.
To quantify the asymmetry of the FTLE bound area with respect to the vortex core location, we introduce the following asymmetry parameter:
\begin{equation}\label{eq:asymm}
a=\frac{\kindex{L}{p}-\kindex{L}{n}}{\kindex{D}{o}}\quad,
\end{equation}
with \kindex{L}{n} the distance from the stream-wise location of the vortex core to the leading nFTLE intersection, and \kindex{L}{p} the distance from the stream-wise location of the vortex core to the lagging pFTLE intersection with the centreline (\cref{fig:viscues}a).
Values of $a$ close to zero indicate symmetric FTLE boundaries with respect to the vortex core; higher positive values indicate an asymmetric, tail-heavy FTLE-enclosed area.
For $3<L/D<4.5$, the asymmetry increases similarly to the vortex length due to the lagging of the positive FTLE ridge.
For $L/D>4.5$, the positive FTLE ridge catches up with the core line and the asymmetry parameter decreases and reaches zero at $L/D=6.2$.
The FTLE boundaries become fully symmetric with respect to the vortex core after vortex merging.
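In code form, \cref{eq:asymm} reads as follows (Python, with hypothetical values chosen to mimic the trend in \cref{fig:viscues}c):

```python
def asymmetry(L_p, L_n, D_o):
    # Asymmetry parameter of the FTLE-bound area: a > 0 for a tail-heavy
    # vortex, a = 0 for boundaries symmetric about the vortex core.
    return (L_p - L_n) / D_o

# Hypothetical distances (in nozzle diameters) before and after merging.
print(round(asymmetry(L_p=1.8, L_n=0.85, D_o=2.3), 2))   # tail-heavy
print(round(asymmetry(L_p=0.88, L_n=0.88, D_o=2.4), 2))  # symmetric: 0.0
```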
\begin{figure}[tb!]
\includegraphics{fig5.eps}
\caption{Evolution of the shape and asymmetry of the experimentally observed vortex ring.
(\textbf{a}) Vortex boundaries indicated by the FTLE ridges including the definition for the vortex stream-wise length and outer diameter.
Temporal evolution of the (\textbf{b}) vortex length and outer-diameter and (\textbf{c}) the asymmetry parameter $a$ defined by \cref{eq:asymm} as a measure for the degree of asymmetry of the FTLE boundaries with respect to the vortex core.
The shaded area indicates when vortex merging occurs.
}\label{fig:viscues}
\end{figure}
\subsection{Evolution of integral quantities during vortex merging}
The variation in the vortex ring topology during merging affects the amount of vorticity and its distribution inside the vortex core, which influences the vortex circulation, hydrodynamic impulse, and energy \citep{gharibUniversalTimeScale1998, deguyonScalingTranslationalVelocity2021}.
By analysing the FTLE ridges and the local extrema in the pressure field of vortex rings created by synthetic jets, \cite{lawsonFormationTurbulentVortex2013} demonstrated that vortex rings can continue to grow after pinch-off due to interaction with their environment.
To quantify the evolution of the vortex ring development during the entire formation process, we analyse here the temporal evolution of the integral quantities of the vortex, including its circulation, energy, and resulting translational velocity.
The circulation of the vortex ring is computed as the surface integral of the average between the positive and negative out-of-plane vorticity:
\begin{equation}
\label{equation:gamma}
\Gamma = \iint\limits_{A} \frac{|\omega|}{2}\, \dd A \quad.
\end{equation}
We have calculated the circulation within the area bound by the FTLE ridges and within a rectangular area centred around the vortex centre such that it only contains the primary vortex ring (\cref{fig:iqs}a).
The extent of the rectangular box is defined based on the trailing pressure maximum following the procedure presented by \cite{lawsonFormationTurbulentVortex2013}.
The temporal evolution of the non-dimensional circulation in both integration areas is presented in \cref{fig:iqs}b,c.
The circulation is non-dimensionalised by the characteristic velocity $\kindex{U}{0}$ and the nozzle outlet diameter $D$ in \cref{fig:iqs}b and by $\kindex{U}{0}$ and the vortex diameter \kindex{D}{v} in \cref{fig:iqs}c.
The vortex diameter normalised by the nozzle diameter is presented in \cref{fig:iqs}d.
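On gridded vorticity data, \cref{equation:gamma} reduces to a weighted sum over the measurement plane. The sketch below (Python) evaluates it for a synthetic cross-section of a vortex ring, modelled as two counter-rotating Lamb-Oseen cores of circulation $\pm\Gamma_0$; this idealised field is a stand-in for the measured vorticity and recovers $\Gamma=\Gamma_0$.

```python
import numpy as np

def lamb_oseen_vorticity(x, y, x0, y0, Gamma0, rc):
    # Out-of-plane vorticity of a Lamb-Oseen vortex with circulation Gamma0
    # and core radius rc, centred at (x0, y0).
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return Gamma0 / (np.pi * rc**2) * np.exp(-r2 / rc**2)

# Synthetic cross-section of a vortex ring: two counter-rotating cores.
N, Gamma0, rc = 401, 2.5, 0.3
x, y = np.meshgrid(np.linspace(-3, 3, N), np.linspace(-3, 3, N))
w = (lamb_oseen_vorticity(x, y, 0.0, 1.0, +Gamma0, rc)
     + lamb_oseen_vorticity(x, y, 0.0, -1.0, -Gamma0, rc))

# Gamma = surface integral of |omega|/2, discretised as a Riemann sum.
dA = (x[0, 1] - x[0, 0]) * (y[1, 0] - y[0, 0])
Gamma = 0.5 * np.sum(np.abs(w)) * dA
print(round(float(Gamma), 3))   # recovers Gamma0 = 2.5
```

Averaging the positive and negative vorticity magnitudes in this way avoids double-counting the two cores of the ring cross-section.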
\begin{figure}[t!]
\includegraphics{fig6.eps}
\caption{Temporal evolution of the vortex circulation and vortex diameter.
Circulation values obtained by integration of the experimentally obtained out-of-plane vorticity within the area bound by the positive and negative FTLE ridges are presented in violet, values obtained by integration within a rectangular box centred around the primary vortex centres are presented in turquoise.
The light blue shaded region represents the uncertainty based on variations in the box height.
(\textbf{a}) Sketch of the two integration areas.
(\textbf{b}) Circulation normalised by \kindex{U}{0} and the nozzle diameter $D$.
(\textbf{c}) Circulation normalised by \kindex{U}{0} and the vortex diameter \kindex{D}{v}.
(\textbf{d}) Evolution of the vortex diameter normalised by the nozzle diameter.
}\label{fig:iqs}
\end{figure}
Due to the initial tail-heavy asymmetry of the FTLE boundaries with respect to the vortex core, the circulation inside the FTLE contour is higher than the circulation in the box until $L/D\approx 5.5$.
The width of the FTLE boundary converges to the width of the rectangular contour post vortex merging and the two circulation curves in \cref{fig:iqs}b,c converge to $\Gamma/(\kindex{U}{0}D)\approx\num{4.3}$ and $\Gamma/(\kindex{U}{0}\kindex{D}{v})\approx\num{2.5}$.
The final value of the vortex circulation when normalised by the vortex diameter instead of the nozzle diameter is close to the non-dimensional values reported for propulsive vortex rings generated with a piston cylinder \citep{gharibUniversalTimeScale1998} and drag vortices behind a translating cone \citep{deguyonScalingTranslationalVelocity2021}.
The difference between the circulation in the FTLE contour and in the rectangular contour for $L/D>5.5$ is attributed to vorticity outside the FTLE contour in the radial direction.
For $L/D<5.5$, the difference is attributed to circulation in the trailing shear layer that will eventually be fed into the primary vortex ring through merging.
The maximum circulation in the FTLE boundary is measured when the stream-wise length of the vortex is also maximal (\cref{fig:viscues}b).
The circulation in the rectangular contour gradually increases and does not attain a local maximum prior to converging at $L/D>5.5$.
When the circulation inside the vortex ring is normalised by the vortex diameter instead of the nozzle diameter, the non-dimensional circulation already reaches \SI{90}{\percent} of its final value after $L/D=3.6$, which is well before vortex merging takes place.
The additional volume and associated vorticity that is added to the main vortex ring due to merging does not significantly alter the non-dimensional circulation based on the vortex size, but primarily leads to an increase in the vortex core diameter (\cref{fig:iqs}d).
The increase in the vortex diameter during the merging process is confirmed for vortex rings generated in different configurations, such as orifice-generated vortex rings \citep{limbourgFormationOrificegeneratedVortex2021a}.
After merging, the vortex core diameter converges to $\kindex{D}{v}/D=1.73$ at the same time as the non-dimensional circulation converges.
Based on the evolution of the vortex circulation, we can calculate a vortex formation number.
The formation number of a vortex ring is typically obtained as the stroke ratio at which the circulation in the entire domain equals the steady-state circulation value inside the vortex ring.
Following this convention, we obtain a vortex formation number of \num{3.3} which is within the range of formation numbers observed for propulsive vortex rings generated for different stroke ratios \citep{dabiriOptimalVortexFormation2009}.
Note that the evolution of the total circulation in the domain is not included in the figures as it quickly exceeds the axis range selected for display.
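For concreteness, the convention stated above reduces to a simple interpolation between the two circulation curves. The following minimal sketch (array names and sample data are illustrative, not from the authors' processing pipeline) finds the stroke ratio at which the total circulation first reaches the final ring circulation:

```python
import numpy as np

def formation_number(stroke_ratio, total_circulation, ring_circulation_final):
    # Stroke ratio at which the total circulation in the domain first
    # reaches the steady-state circulation of the vortex ring.
    idx = int(np.argmax(total_circulation >= ring_circulation_final))
    return float(np.interp(ring_circulation_final,
                           total_circulation[idx - 1:idx + 1],
                           stroke_ratio[idx - 1:idx + 1]))
```

This assumes the total circulation grows monotonically through the crossing, which holds for the jet profile considered here.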
The non-dimensional energy of the vortex ring serves as a measure for the distribution of the vorticity inside the vortex ring \citep{deguyonScalingTranslationalVelocity2021}.
According to \citet{gharibUniversalTimeScale1998}, the non-dimensional energy $E^*$ is defined as:
\begin{equation}
\label{equation:estar}
E^* = \frac{E}{\sqrt{\rho I \Gamma^3}} \quad ,
\end{equation}
with $E$ the kinetic energy, $\rho$ the fluid density, $I$ the impulse, and $\Gamma$ the circulation of the vortex ring.
The kinetic energy is computed as:
\begin{equation}
\label{equation:energy}
E=\pi \rho \iint\limits_{A} \omega \psi \dd A \quad,
\end{equation}
where $\psi$ is the stream function, obtained from integrating the Cauchy-Riemann equations for the axisymmetric vortex ring.
The stream function is computed here in the entire domain and integrated within the integration area $A$.
Similar to the procedure applied to compute the circulation (\cref{fig:iqs}a), we consider again two integration areas.
One area is bound by the FTLE ridges, the other one is a rectangular area centred around the vortex centres.
The impulse of the vortex ring is computed as:
\begin{equation}
I = \pi \rho \iint\limits_{A} \omega |r|^2 \dd A \quad,
\end{equation}
where $|r|$ is the distance away from the vortex centreline.
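The three integrals above, together with the non-dimensional energy $E^*$, can be evaluated by direct quadrature once $\omega$, $\psi$, and $|r|$ are sampled on a uniform grid restricted to the integration area $A$. The sketch below is a minimal illustration (function and variable names are ours, not the authors' implementation):

```python
import numpy as np

def vortex_integrals(omega, psi, r, dA, rho):
    # Direct quadrature of the ring quantities over the integration area A,
    # with omega, psi, r sampled on a uniform grid of cell area dA.
    gamma = np.sum(omega) * dA                         # circulation
    energy = np.pi * rho * np.sum(omega * psi) * dA    # kinetic energy
    impulse = np.pi * rho * np.sum(omega * r**2) * dA  # hydrodynamic impulse
    e_star = energy / np.sqrt(rho * impulse * gamma**3)
    return gamma, energy, impulse, e_star
```

Either integration area (FTLE-bounded or rectangular) simply changes which grid cells enter the sums.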
The non-dimensional energy for the two integration areas is presented in \cref{fig:estar}a.
The evolution of $E^*$ within the FTLE boundary is only shown as a reference.
The asymmetric shape of the FTLE ridges for $L/D<6$ indicates that the primary vortex ring is not isolated and we cannot directly interpret the value of the non-dimensional energy bound by the FTLE ridges as a measure of the vorticity distribution within the vortex ring.
The non-dimensional energy inside the rectangular contour that contains only the primary vortex ring drops to a steady state value of \num{0.26} after only three stroke ratios.
This would point again towards a vortex formation number around \num{3}.
The formation number can be interpreted here as the non-dimensional time required for the vorticity to accumulate into a stable distribution but it does not mean that the vortex will not accept additional vorticity or impulse.
After the initial decrease, the non-dimensional energy of \num{0.26} is maintained for the remainder of the vortex merging process while other quantities, such as the circulation and diameter of the vortex ring (\cref{fig:iqs}b-d), continue to evolve.
The continued evolution of the circulation and size of vortex rings after their pinch-off or for formation times beyond the formation number has been observed by \cite{lawsonFormationTurbulentVortex2013, limbourgFormationOrificegeneratedVortex2021a}.
The limit value of the non-dimensional energy observed in this study, \num{0.26}, is lower than the value of \num{0.33} which is typically observed in literature for vortex rings generated with a constant piston or jet velocity \citep{shusserNewModelInviscid1999, nitscheSelfsimilarSheddingVortex2001}.
This lower value is attributed to the kinematics of the vortex propulsor, which influences the velocity and acceleration profile of the jet that feeds the vortex ring \citep{limbourgExtensionUniversalTime2021, kriegApproximatingTranslationalVelocity2013,deguyonEstimatingNondimensionalEnergy2021}.
The nozzle exit velocity decelerates for $t/T>0.2$ up to the end of the compression phase of the vortex generator (\cref{fig:uf}e), which resembles the negative sloping velocity programs implemented by \cite{danailaFormationNumberConfined2018, kruegerSignificanceVortexRing2003}.
The non-dimensional energy of the vortex ring can also be manipulated to values below \num{0.33} by varying the nozzle geometry, such as a converging nozzle or a nozzle with a temporally-varying exit diameter \citep{kriegNewKinematicCriterion2021}.
The additional volume and circulation that will merge with the primary vortex ring for $L/D>3$ does not alter its non-dimensional energy.
This is a great example of the Kelvin-Benjamin variational principle, which states that a vortex will only accept additional vorticity if this does not disturb the vorticity distribution, and thus the non-dimensional energy of the new configuration \citep{benjaminAlliancePracticalAnalytical1976}.
The vorticity level in the trailing jet is high enough to allow it to penetrate into the primary vortex ring.
The primary vortex is able to accept the additional vorticity from the secondary vortex by increasing its radius (\cref{fig:iqs}d) and redistributing the additional vortical fluid such that its non-dimensional energy remains constant (\cref{fig:iqs}a).
\begin{figure}[tb!]
\includegraphics{fig7.eps}
\caption{Temporal evolution of (\textbf{a}) the non-dimensional energy of the vortex and (\textbf{b}) the translational velocity of the experimentally observed vortex according to \cref{equation:utrans} and based on the tracking of the vortex centre locations.
Values obtained by integration in the area bound by the FTLE ridges are presented in violet, values obtained by integration within a rectangular box centred around the vortex centres are presented in turquoise.
}\label{fig:estar}
\end{figure}
The reason the secondary vortex can merge with the primary vortex in the first place is due to their relative translation velocities \citep{maxworthyStructureStabilityVortex1972}.
The vortex translation velocity is calculated from the non-dimensional energy, circulation, and diameter of the vortex ring \citep{saffmanVelocityViscousVortex1970, deguyonScalingTranslationalVelocity2021}:
\begin{equation}
\label{equation:utrans}
\kindex{U}{v} = \frac{\Gamma}{\pi \kindex{D}{v}} \bigg(E^* \sqrt{\pi}+\frac{3}{4} \bigg)\quad.
\end{equation}
\Cref{equation:utrans} is valid for steady vortex rings with a thin core \citep{saffmanVelocityViscousVortex1970}.
An error of $\approx\SI{3}{\percent}$ is obtained for the vortex translational velocity by taking into account the effect of the core thickness, based on the second-order correction proposed by \cite{fraenkelExamplesSteadyVortex1972} and \cite{shusserNewModelInviscid1999} for Norbury vortices \citep{norburySteadyVortexRing1972}.
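As a sketch, \cref{equation:utrans} translates directly into code (illustrative names, not from the authors' implementation):

```python
import numpy as np

def translational_velocity(gamma, d_v, e_star):
    # Thin-core estimate of the ring velocity:
    #   U_v = Gamma / (pi * D_v) * (E* * sqrt(pi) + 3/4)
    return gamma / (np.pi * d_v) * (e_star * np.sqrt(np.pi) + 0.75)
```

With the converged values reported here, $\Gamma/(\kindex{U}{0}\kindex{D}{v})\approx 2.5$ and $E^*\approx 0.26$, the formula yields $\kindex{U}{v}/\kindex{U}{0}\approx 0.96$, consistent with the tracked core velocity of about $1$.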
The temporal evolution of the translational velocity was computed for the FTLE boundary and the rectangular contour enclosing solely the primary vortex and is presented in \cref{fig:estar}b.
The translational velocity based on the tracking of the vortex centres is included in \cref{fig:estar}b for comparison.
The translational velocity of just the primary vortex ring based on the integral quantities rapidly increases during the first three stroke ratios to a value of $\kindex{U}{v}/\kindex{U}{0}=0.8$.
The velocity based on tracking the vortex core converges to $\kindex{U}{v}/\kindex{U}{0}=1$ after four stroke ratios.
This rapid increase in the translational velocity causes the core to move away from the nozzle exit.
At $L/D=3$, the core has already moved one diameter away from the outlet (\cref{fig:ftle}a,e), which makes it harder to directly feed additional vorticity into the primary vortex, but it does not mean that the vortex is not able to accept additional vorticity at a later stage.
The early physical distancing of the primary vortex ring is a direct consequence of the specific time-varying outlet velocity profile of our vortex generator.
The translational velocity based on the integral values computed for the FTLE boundary increases beyond the velocity of the primary vortex for $4<L/D<5.5$.
This difference is attributed to the higher translational velocity of the secondary vortex in the tail of the FTLE-bounded area.
The secondary vortex catches up with the primary vortex and they merge.
The vortex diameter, circulation, and translational velocity all converge to new post-merging values for $L/D>5.5$ while the non-dimensional energy remains at its limiting pre-merging value in agreement with the Kelvin-Benjamin variational principle.
\subsection{Fluid entrainment and detrainment during vortex merging}
The entrainment and detrainment of fluid during the merging of the primary and secondary vortex are visualised using a Lagrangian approach.
Artificial seed particles are placed inside and outside the vortex boundaries marked by the FTLE ridges at $L/D = 3.7$ and are convected with the flow.
The initial time of particle seeding is selected such that the FTLE ridges demarcating the vortex have fully formed and form a closed contour.
The locations of the seed particles at different time instants during the compression stage of the bulb are presented in \cref{fig:ftraj}.
The top half of the snapshots in \cref{fig:ftraj} present the results for particles that were initially inside the FTLE boundaries and show fluid detrainment.
The bottom half of the snapshots in \cref{fig:ftraj} present the results for particles that were initially outside the FTLE boundaries and show fluid entrainment.
By definition, fluid particles move around the positive ridge and are attracted by the negative FTLE ridge \citep{hallerLagrangianCoherentStructures2002, shaddenTransportStirringInduced2007}.
Occasionally, particles surpass the positive FTLE ridge to enter the vortex boundary (\cref{fig:ftraj}d-f).
The particles leave through the formation of lobes in the negative FTLE ridge, in the process known as tail shedding \citep{shaddenLagrangianAnalysisFluid2006a, deguyonModellingVortexRing2021}.
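The particle transport described above amounts to integrating seed positions through the measured velocity field. A standard fourth-order Runge-Kutta advection suffices; the sketch below assumes a callable `velocity(points, t)` built, for example, by interpolating the PIV snapshots (names and structure are illustrative):

```python
import numpy as np

def advect_particles(p, velocity, t0, dt, n_steps):
    # RK4 integration of N particle positions p (N x 2 array) through a
    # time-dependent velocity field velocity(points, t) -> N x 2 array.
    t = t0
    for _ in range(n_steps):
        k1 = velocity(p, t)
        k2 = velocity(p + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = velocity(p + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = velocity(p + dt * k3, t + dt)
        p = p + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += dt
    return p
```

The time step must resolve the temporal spacing of the velocity snapshots, which is why the Lagrangian analysis requires time-resolved data.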
\begin{figure}[tb!]
\includegraphics{fig8.eps}
\caption{
Snapshots of artificial seed particles convected by the experimentally measured flow.
Top half show particles initially inside the FTLE boundaries, bottom half show particles initially outside the FTLE boundaries.
White symbols correspond to points along the FTLE ridges that are non-hyperbolic, filled coloured symbols correspond to points along the FTLE ridges that are hyperbolic.
Initial position of the particles at (\textbf{a}) $L/D$ = \num{3.7}, and their location after convection by the flow at (\textbf{b}) $L/D$ = \num{4.4}, (\textbf{c}) $L/D$ = \num{5.0}, (\textbf{d}) $L/D$ = \num{5.5}, (\textbf{e}) $L/D$ = \num{5.9}, and (\textbf{f}) $L/D$ = \num{6.4}.
} \label{fig:ftraj}
\end{figure}
To understand how particles can cross FTLE ridges we compute the strain rates normal to the ridges.
The strain rates allow us to verify whether the ridges are indeed hyperbolic Lagrangian coherent structures \citep{hallerLagrangianCoherentStructures2002, greenUsingHyperbolicLagrangian2010a}.
The positive FTLE ridge, which repels particles, is a hyperbolic repelling material line if the strain rate normal to the ridge is positive.
A negative FTLE ridge is an attracting material line if the strain rate normal to the ridge is negative \citep{hallerLagrangianStructuresRate2001}.
The sign of the strain rates on the positive and negative FTLE ridges are indicated by the markers in \cref{fig:ftraj}.
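The hyperbolicity test reduces to evaluating the rate-of-strain tensor in the direction normal to the ridge. A minimal two-dimensional sketch (our notation, not the authors' code):

```python
import numpy as np

def normal_strain_rate(grad_u, n):
    # s_n = n^T S n, with S = (grad_u + grad_u^T) / 2 the rate-of-strain
    # tensor and n the unit normal to the FTLE ridge.
    S = 0.5 * (grad_u + grad_u.T)
    return float(n @ S @ n)
```

A point on the positive ridge is hyperbolic (repelling) if $s_n > 0$; a point on the negative ridge is hyperbolic (attracting) if $s_n < 0$.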
The negative FTLE ridge continuously maintains negative strain rates on the ridge throughout the process, confirming that it is a hyperbolic attracting material line \citep{greenDetectionLagrangianCoherent2007a}.
The entrainment of fluid particles into the vortex ring across the positive FTLE ridges occurs where the ridges are locally non-hyperbolic (\cref{fig:ftraj}b-e).
The positive FTLE ridge evolves into an entirely hyperbolic repelling ridge post vortex merging when the ridges symmetrically enclose the primary vortex core (\cref{fig:ftraj}f).
\subsection{Evolution of pressure field during vortex merging}
The Lagrangian analysis provides detailed and accurate insight into the formation and development of the vortex ring generated by our propulsor.
But, the Lagrangian analysis is computationally expensive, requires time resolved flow field data, and is not suitable for in-situ optimisation and control of the time-dependent exit velocity profile.
Local instantaneous pressure measurements would be preferable for this purpose.
To evaluate the potential of pressure-based indicators of vortex formation we need to explore how pressure features in the flow field relate to the previously extracted Lagrangian boundaries.
The pressure field is computed from a direct integration of the velocity field \citep{dabiriAlgorithmEstimateUnsteady2014} and presented in \cref{fig:press} for selected snapshots.
The FTLE ridges are added atop the pressure fields for comparison.
The pressure minima reliably indicate the location of the vortex core in all the snapshots.
The pressure minimum becomes stronger with increasing stroke ratio.
A high pressure region called the leading pressure maximum forms ahead of the vortex core and has a local maximum where the negative FTLE ridge intersects with the centreline (\cref{fig:press}a).
Local high pressure regions, or trailing pressure maxima, emerge aft of the primary vortex core for $L/D > 3.5$.
These regions are scattered pre vortex merging (\cref{fig:press}b,c) and combine into a single, more coherent region centred around the intersection of the positive FTLE ridge with the centreline post merging (\cref{fig:press}d).
\begin{figure}[tb!]
\includegraphics{fig9.eps}
\caption{Snapshots of the pressure field derived from the experimentally measured velocity field at (\textbf{a}) $L/D$ = \num{3.4}, (\textbf{b}) $L/D$ = \num{5.0}, (\textbf{c}) $L/D$ = \num{5.5}, and (\textbf{d}) $L/D$ = \num{5.9}.
(\textbf{e}) Evolution of the stream-wise location of the vortex core, of the crossing of the positive and negative FTLE ridges with the centreline, and of the location of the leading and trailing pressure maxima (lpm, tpm) along the centreline as a function of the stroke ratio.
}\label{fig:press}
\end{figure}
The evolution of the stream-wise locations of the vortex core, of the intersection of the positive and negative FTLE ridges with the centreline, and of the location of the leading and trailing pressure maxima along the centreline are summarised in \cref{fig:press}e.
The trajectory of the leading pressure maximum and the negative FTLE ridge coincide perfectly.
The trailing pressure maximum is initially ahead of the positive FTLE ridge which lags behind the primary vortex core.
For $4.4<L/D<6$, the trailing pressure maximum cannot be reliably identified in all snapshots.
We use a linear interpolation to fill the gaps.
The locations of the trailing pressure maximum and the positive FTLE ridge match closely post-merging.
The leading and trailing pressure maxima, like the positive and negative FTLE ridges, reliably indicate the upstream and downstream bounds of the vortex ring \citep{lawsonFormationTurbulentVortex2013}.
The ridges and the pressure maxima symmetrically enclose the primary vortex core and act as physical barriers that prevent additional fluid from entering the vortex ring.
The pressure maxima can serve as reliable observables for vortex ring formation and shedding that can be used for future optimisation and control of the driving jet profile of bio-inspired vehicles and other vortex generating systems.
A pressure-based methodology does not require time-resolved data and is computationally less expensive than calculating the FTLE field.
Our results open up new possibilities for incorporating pressure sensors and probes in the flow to measure vortex ring properties in-situ.
\section{\label{sec:conclusion}Conclusion}
We have presented here a bio-inspired jet propulsor that combines the body morphologies of two marine organisms, the bell muscle of the jellyfish and the compression kinematics of a bivalve.
Our propulsor generates a non-linear time-varying exit velocity profile by compressing and relaxing a flexible bulb with two rigid arms and has a finite volume capacity.
The formation process of the vortices generated by this jet profile is analysed using time-resolved velocity field measurements.
The temporal evolution of the vortex topology and its integral quantities are analysed based on the finite-time Lyapunov exponent field and the pressure field, both derived from the velocity data.
When fluid is ejected by our vortex propulsor, a coherent vortex ring is formed.
This primary vortex ring rapidly moves away from the nozzle exit during the compression phase and a trailing shear layer with a secondary vortex emerges.
The secondary vortex has a higher translation velocity than the initial primary vortex and both merge before the end of the bulb compression.
Analysis of the temporal evolution of the ridges in the FTLE field during vortex merging reveals that the vortex length increases beyond its diameter pre-merging due to the lagging of the positive FTLE ridge.
During vortex merging, the vortex length contracts and its diameter increases such that additional vorticity is accepted by the primary vortex ring without changing its non-dimensional energy, in agreement with the Kelvin-Benjamin variational principle.
The vortex diameter, circulation, and translational velocity all converge to new values post-merging.
Our Lagrangian analysis provides detailed and accurate insight into the formation and development of the vortex ring generated by the non-linear stroke ratio evolution of our propulsor.
However, this type of analysis is computationally expensive, requires time-resolved flow field data, and cannot be conducted in-situ to provide input for optimisation and control of the time-dependent exit velocity profile.
An alternative pressure-based methodology does not require time-resolved data and can be applied on a single snapshot.
We reveal that the trajectories of the pressure maxima that lead and trail the vortex core coincide with the trajectories of the negative and positive FTLE ridges pre and post vortex merging.
During vortex merging, the trailing pressure maximum is less pronounced.
The local pressure maxima can serve as reliable observables for vortex ring formation and shedding that could be picked up by pressure sensors integrated in the surface around the nozzle exit.
Our results provide novel insights into the evolution of integral quantities of vortex rings during merging and can aid and inspire the further design and control of underwater vehicles that use pulsatile jet propulsion.
Even though our design derives inspiration from biological organisms to create propulsion by periodically producing vortex rings, it does not have the full range of adaptivity displayed by the biological examples.
The vortex rings we generated here are fully axisymmetric, and only provide propulsion in one direction.
Jellyfish and bivalves exploit a variety of both axisymmetric and asymmetric vortex rings to manoeuvre and interact with obstacles in their environments.
The formation and characterisation of asymmetric vortex rings with the vortex propulsor, where three-dimensional effects are involved, will be the subject of future investigations.
\begin{Backmatter}
\paragraph{Funding Statement}
This work was supported by the Swiss National Science Foundation (grant nr. 200021175792).
\paragraph{Declaration of Interests}
The authors declare no conflict of interest.
\paragraph{Data Availability Statement}
The data that support the findings of this study can be made available upon request.
\section{Introduction}
\label{section1}
With the continuous development of deep learning~\cite{Lecun2015Deep}, deep neural networks have made significant progress in various fields, such as computer vision, natural language processing and speech recognition. {Convolutional neural networks (CNNs) have been proved to be reliable in the fields of image classification~\cite{7298594,NIPS2015_5638,DBLP:journals/pr/NogueiraPS17,DBLP:journals/pr/LiuFGWP17,DBLP:journals/pr/DengLLT18}, object detection~\cite{DBLP:journals/corr/Girshick15,DBLP:journals/corr/abs-1904-02701,ge2017DetectingMasked,DBLP:journals/pr/LopesASO17} and object recognition~\cite{krizhevsky2012imagenet,simonyan2015very,7298594,DBLP:journals/pr/WuWGL18,DBLP:journals/pr/LiLZW20}, and thus have been widely used in practice.}
Owing to the deep structure with a number of layers and millions of parameters, the deep CNNs enjoy strong learning capacity, and thus usually achieve satisfactory performance. For example, the VGG-16~\cite{simonyan2015very} network contains about 140 million 32-bit floating-point parameters, and can achieve 92.7\% top-5 test accuracy for image classification task on ImageNet dataset. The entire network needs to occupy more than 500 megabytes of storage space and perform $1.6\times{10}^{10}$ floating-point arithmetic operations. This fact makes the deep CNNs heavily rely on the high-performance hardware such as GPU, while in the real-world applications, usually only the devices (\emph{e.g. }, the mobile phones and embedded devices) with limited computational resources are available~\cite{cheng2017survey}. For example, embedded devices based on FPGAs usually have only a few thousands of computing units, far from dealing with millions of floating-point operations in the common deep models. There exists a severe contradiction between the complex model and the limited computational resources. Although at present, a large amount of dedicated hardware emerges for deep learning~\cite{chen2018eyeriss,Sze2017Hardware,Chen2016Eyeriss,Chen201614,EyerissAnEnergy-Efficient}, providing efficient vector operations to enable fast convolution in forward inference, the heavy computation and storage still inevitably limit the applications of the deep CNNs in practice. {Besides, due to the huge model parameter space, the prediction of the neural networks is usually viewed as a black-box, which brings great challenges to the interpretability of CNNs. Some works like~\cite{Zhang2019InterpretingAI,Yu2019TowardsNN,Liu2019TrainingRD} empirically explore the function of each layer in the network. They visualize the feature maps extracted by different filters and view each filter as a visual unit focusing on different visual components.}
{From the aspect of explainable machine learning, we can summarize that some filters are playing a similar role in the model, especially when the model size is large. So it is reasonable to prune some useless filters or reduce their precision to lower bits. On the one hand, we can enjoy more efficient inference with such compression technique. On the other hand, we can utilize it to further study the interpretability of CNNs, \emph{i.e. }, finding out which layer is important, which layer is useless and can be removed from the black-box, what structure is beneficial for accurate prediction.}
Many prior studies have proven that there usually exists large redundancy in the deep structure~\cite{Izui1990Analysis,cheng2015an,han2015learning,Srinivas2015Data}. For example, by simply discarding the redundant weights, one can keep the performance of the ResNet-50~\cite{he2016deep}, and meanwhile save more than 75\% of parameters and 50\% computational time. {In the literature, approaches for compressing the deep networks can be classified into five categories: parameter pruning~\cite{han2015learning, han2016deep, he2017channel,ge2017Compressing}, parameter quantizing~\cite{gong2014compressing,wu2016quantized,Vanhoucke2011ImprovingTS,gupta2015deep,ge2018EfficientDeepLearning,DBLP:conf/mm/ZhaoH0H18,DBLP:conf/eccv/HuLWZC18,DBLP:conf/nips/ChenWP19,Wu2020Rotation,zhu2019unified}, low-rank parameter factorization~\cite{denton2014exploiting,lebedev2015speeding,jaderberg2014speeding,lebedev2016fast,DBLP:journals/pr/WenZXYH18}, transferred/compact convolutional filters~\cite{mobilenet,mobilenet_v2,shufflenet,shufflenet_v2}, and knowledge distillation~\cite{hinton2015distilling,xu2018training,chen2018darkrank,Yim2017A,zagoruyko2017paying,DBLP:journals/pr/DingCH19}.} The parameter pruning and quantizing mainly focus on eliminating the redundancy in the model parameters respectively by removing the redundant/uncritical ones or compressing the parameter space (\emph{e.g. }, from the floating-point weights to the integer ones). Low-rank factorization applies the matrix/tensor decomposition techniques to estimate the informative parameters using the proxy ones of small size. The compact convolutional filter based approaches rely on the carefully-designed structural convolutional filters to reduce the storage and computation complexity. The knowledge distillation methods try to distill a more compact model to reproduce the output of a larger network.
Among the existing network compression techniques, quantization based one serves as a promising and fast solution that yields highly compact models compared to their floating-point counterparts, by representing the network weights with very low precision. Along this direction, the most extreme quantization is binarization, the interest in this survey. Binarization is a 1-bit quantization where data can only have two possible values, namely -1(0) or +1. For network compression, both the weight and activation can be represented by 1-bit without taking too much memory. Besides, with the binarization, the heavy matrix multiplication operations can be replaced with light-weighted bitwise XNOR operations and Bitcount operations. Therefore, compared with other compression methods, binary neural networks enjoy a number of hardware-friendly properties including memory saving, power efficiency and significant acceleration. The pioneering work like BNN~\cite{BNN} and XNOR-Net~\cite{XNOR-Net} has proven the effectiveness of the binarization, namely, up to $32\times$ memory saving and $58\times$ speedup on CPUs, which has been achieved by XNOR-Net for a 1-bit convolution layer. Following the paradigm of binary neural network, in the past years a large amount of research has been attracted on this topic from the fields of computer vision and machine learning~\cite{Lecun2015Deep,7298594,simonyan2015very,he2016deep}, and has been applied to various popular tasks such as image classification\cite{BinaryConnect,DoReFa-Net,LQ-Net,Bi-Real,Gong:iccv19}, detection~\cite{BWBDN,Li_2019_CVPR}, and so on. {With the binarization technique, the importance of a layer can be easily validated by switching it to full-precision or 1-bit. If the performance greatly decreases after binarizing certain layer, we can conclude that this layer is on the critical path of the network. 
Furthermore, it is also significant to find out whether the full-precision model and the binarized model work in the same way from the explainable machine learning view.}
{Besides focusing on the strategies of model binarization, many studies have attempted to reveal the behaviors of model binarization, and further explain the connections between the model robustness and the structure of deep neural networks. This possibly helps to approach the answers to the essential questions: how does the deep network work indeed and what network structure is better? It is very interesting and important to thoroughly investigate the studies of binary neural networks, which will be very beneficial for understanding the behaviors and structures of the efficient and robust deep learning models. Some studies in the literature have shown that binary neural networks can filter the input noise, and pointed out that specially designed BNNs are more robust compared with the full-precision neural networks. \cite{lin2018defensive} shows that noise is continuously amplified during the forward propagation of neural networks, and binarization improves robustness by keeping the magnitude of the noise small.}
{The studies based on BNNs can also help us to analyze how structures in deep neural networks work. Liu \etal creatively proposed Bi-Real Net, which added additional shortcuts (Bi-Real) to reduce the information loss caused by binarization~\cite{Bi-Real}. This structure works like the shortcut in ResNet and it helps to explain why the widely used shortcuts can improve performance of deep neural networks to some extent. On the one hand, by visualizing the activations, it can be seen that more detailed information in the shallow layer can be passed to the deeper layer during forward propagation. On the other hand, gradients can be directly backward propagated through the shortcut to avoid gradient vanish problem.
Zhu \etal leveraged ensemble methods to improve the performance of BNNs by building several groups of weak classifiers; the ensemble methods improve the performance of BNNs although they sometimes face the over-fitting problem~\cite{BENN}. Based on analysis and experimentation of BNNs, they showed that the number of neurons is more important than the bit-width and it may not be necessary to use real-valued neurons in deep neural networks, which is similar to the principle of biological neural networks.
Besides, reducing the bit-width of a certain layer to explore its effect on accuracy is one effective approach to study the interpretability of deep neural networks. There are many works exploring the sensitivity of different layers to binarization. It is commonly accepted that the first layer and the last layer should be kept in higher precision, which means that these layers play a more important role in the prediction of neural networks.}
This survey tries to exploit the nature of binary neural networks and categorizes them into the naive binarization without optimizing the quantization function and the optimized binarization including minimizing quantization error, improving the loss function, and reducing the gradient error. It also discusses the hardware-friendly methods and the useful tricks of training binary neural networks. In addition, we present the common datasets and network structures of evaluation, and compare the performance of current methods on different tasks.
The organization of the remaining part is given as the following. Section \ref{section2} introduces the preliminaries for binary neural network. Section \ref{section3} presents the existing methods falling in different categories and lists the training tricks in practice. Section \ref{section4} gives the evaluation protocols and performance analysis. Finally, we conclude and point out the future research trends in Section \ref{section5}.
\section{Preliminary}
\label{section2}
In full-precision convolutional neural networks, the basic operation can be expressed as
\begin{equation}
{\mathbf z=\sigma(\mathbf w\otimes\mathbf a)}
\end{equation}
{where $\mathbf w$ and $\mathbf a$ represent the weight tensor and the activation tensor generated by the previous network layer, respectively; $\sigma(\cdot)$ is the non-linear function, $\mathbf z$ is the output tensor, and $\otimes$ represents the convolution operation. In the forward inference of neural networks, the convolution operation contains a large number of floating-point operations, including floating-point multiplications and additions, which constitute the vast majority of the computation in neural network inference.}
\subsection{Forward Propagation}
The goal of network binarization is to represent the floating-point weights $\mathbf w$ and/or activations $\mathbf a$ using 1-bit. The popular definition of the binarization function is given as follows:
\begin{equation}
Q_w(\mathbf w)=\alpha\mathbf{b_w},\quad Q_a(\mathbf a)=\beta\mathbf{b_a}
\end{equation}
{where $\mathbf b_{\mathbf w}$ and $\mathbf b_{\mathbf a}$ are the tensor of binary weights (kernel) and binary activations, with the corresponding scalars $\alpha$ and $\beta$. In the literature, the $\mathtt{sign}$ function is widely used for $Q_w$ and $Q_a$:}
\begin{equation}
\mathtt{sign}(x)=
\left\{\begin{array}{ll}{+1,} & {\text {\rm if }\ x \ge 0} \\
{-1,} & {\text {\rm otherwise }}\end{array}\right.
\end{equation}
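As a toy illustration of the binarization $Q_w(\mathbf w)=\alpha\mathbf{b_w}$ with the $\mathtt{sign}$ function (a minimal sketch; the function and variable names are illustrative, not from any cited implementation):

```python
def sign(x):
    # sign as defined above: +1 for x >= 0, -1 otherwise
    return 1.0 if x >= 0 else -1.0

def binarize(w, alpha):
    # Q_w(w) = alpha * b_w, applied elementwise to a weight vector
    return [alpha * sign(v) for v in w]

w = [0.7, -0.3, 0.0, -1.2]
b = binarize(w, alpha=0.5)   # each entry becomes +/-alpha
```

Note that every output value is one of only two levels $\{-\alpha, +\alpha\}$, which is what enables 1-bit storage.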
With the binarized weights and activations, the vector multiplication in forward propagation can be reformulated as
\begin{equation}
{\mathbf{z} = \sigma(Q_w(\mathbf w) \otimes Q_a(\mathbf a))= \sigma({{\alpha}}{{\beta}}({\mathbf{b_w}}\odot {\mathbf{b_a}})),}
\end{equation}
where $\odot$ denotes the inner product for vectors with bitwise operation XNOR-Bitcount. Figure \ref{fig:bi-conv} shows the convolution process in the binary neural networks.
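To make the XNOR-Bitcount equivalence concrete, the following sketch (illustrative names, not from any cited implementation) checks that the inner product of two $\pm 1$ vectors equals $2\cdot\mathtt{popcount}(\mathtt{xnor})-n$ when $-1$ is encoded as bit 0 and $+1$ as bit 1:

```python
def dot_pm1(bw, ba):
    # reference inner product of two +/-1 vectors
    return sum(x * y for x, y in zip(bw, ba))

def to_bits(v):
    # encode -1 as bit 0 and +1 as bit 1
    bits = 0
    for i, x in enumerate(v):
        if x == 1:
            bits |= 1 << i
    return bits

def xnor_popcount(bw_bits, ba_bits, n):
    # XNOR marks matching positions; the dot product is 2 * matches - n
    matches = bin(~(bw_bits ^ ba_bits) & ((1 << n) - 1)).count("1")
    return 2 * matches - n
```

On real hardware the XNOR and popcount are single instructions over machine words, which is the source of the speedup.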
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1\linewidth]{Figs1_final.pdf}
\end{center}
\vspace{-0.2in}
\caption{Convolution Process of Binary Neural Networks}
\label{fig:bi-conv}
\vspace{-0.1in}
\end{figure}
\begin{comment}
\begin{table}[htb]
\caption{Multiply using XNOR}
\label{XNOR-mapping}
\centering
\begin{tabular}{c|c|c}
\hline
$b_w$ & $b_a$ & XNOR($\odot$) \\
\hline
0 ($-1$) & 0 ($-1$) & 1 ($+1$) \\
0 ($-1$) & 1 ($+1$) & 0 ($-1$) \\
1 ($+1$) & 0 ($-1$) & 0 ($-1$) \\
1 ($+1$) & 1 ($+1$) & 1 ($+1$) \\
\hline
\end{tabular}
\end{table}
\end{comment}
\subsection{Backward Propagation}
Similar to training a full-precision neural network model, when training a binary neural network, it is still straightforward to adopt the powerful backward propagation (BP) algorithm based on the gradient descent to update the parameters.
However, usually the binarization function (\emph{e.g. }, $\mathtt{sign}$) is not differentiable, and even worse, the derivative value in part of the function vanishes (\emph{e.g. }, 0 almost everywhere for $\mathtt{sign}$). Therefore, the common gradient descent based BP algorithm cannot be directly applied to update the binary weights.
{Fortunately, the straight-through estimator (STE) proposed by Hinton \etal addresses the gradient problem that occurs when training deep networks binarized by the $\mathtt{sign}$ function~\cite{STE}.
STE approximates the derivative of $\mathtt{sign}$ by that of the following $\mathtt{clip}$ function}
\begin{equation}
\mathtt{clip}(x,-1,1)=\max (-1, \min (1, x)).
\end{equation}
Through STE, the binary neural network can be directly trained using the same gradient descent method as the ordinary full-precision neural network.
{However, when the $\mathtt{clip}$ function is used in backward propagation, full-precision values whose absolute value is greater than 1 receive zero gradient and thus cannot be updated. Therefore, in practical scenarios, the $\mathtt{Identity}$ function is also chosen to approximate the derivative of the $\mathtt{sign}$ function.}
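A minimal scalar sketch of the two backward rules just described (illustrative only): the $\mathtt{clip}$-based STE passes the upstream gradient only where $|x|\le 1$, while the $\mathtt{Identity}$ variant always passes it through:

```python
def ste_clip_grad(x, upstream):
    # derivative of clip(x, -1, 1) is 1 inside [-1, 1] and 0 outside,
    # so the upstream gradient is passed through or blocked accordingly
    return upstream if -1.0 <= x <= 1.0 else 0.0

def ste_identity_grad(x, upstream):
    # Identity approximation: always pass the gradient through
    return upstream
```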
\section{Binary Neural Networks}
\label{section3}
Compared with full-precision neural networks, binary neural networks based on 1-bit representation replace the floating-point multiplication and addition operations with efficient XNOR-Bitcount operations, and thus largely reduce the storage space and the inference time. However, the binarization of weights and activations causes a severe deviation from the full-precision ones. {Also, as aforementioned, the discrete binarization makes the popular gradient descent based BP algorithm usually fail to reach a satisfactory solution, even with the STE technique.} Therefore, binary neural networks inevitably suffer from performance degradation, and how to optimize them remains an open research problem.
{In recent years, a variety of binary neural networks have been proposed, from the naive solutions that directly binarize the weights and inputs with a pre-defined binarization function, to the optimization based ones that treat the problem from different perspectives: approximating the full-precision values by minimizing the quantization error, constraining the weights by modifying the network loss function, and learning the discrete parameters by reducing the gradient error. Table \ref{tab:over} summarizes the surveyed binarization methods in these categories.}
\begin{table}[]
\setlength{\abovecaptionskip}{0.cm}
\caption{{Overview of Binary Neural Networks}}\label{tab:over}
\scriptsize
\centering
\begin{threeparttable}
\setlength{\tabcolsep}{0.5mm}{
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{\multirow{2}{*}{\textbf{Type}}} & \multirow{2}{*}{\textbf{Method}} & \multirow{2}{*}{\textbf{Key Tech.}}
& \multicolumn{4}{|c|}{\textbf{Tricks}} \\ \cline{5-8}
\multicolumn{2}{|c|}{} & & & ST & OP & AQ & GA\\ \cline{1-8}
\multicolumn{2}{|c|}{\multirow{3}{*}{\tabincell{c}{Naive Binary\\Neural Networks}}} & BinaryConnect~\cite{BinaryConnect} &\multirow{3}{*}{\tabincell{c}{FP:\quad $\mathtt{sign}(x)$\\BP:\quad STE}} & - & \tiny{A} & - & - \\ \cline{3-3} \cline{5-8}
\multicolumn{2}{|c|}{} & Bitwise Neural Networks~\cite{BitwiseNN} & & - & - & - & - \\ \cline{3-3} \cline{5-8}
\multicolumn{2}{|c|}{} & Binarized Neural Networks~\cite{BNN} & & - & \tiny{AM} & - & - \\ \cline{1-8}
\multirow{32}{*}{\tabincell{c}{{Optimization}\\\\{Based}\\\\{Binary}\\\\{Neural}\\\\{Networks}}} & \multirow{14}{*}{\tabincell{c}{Minimize\\the\\Quantization\\Error}} & Binary Weight Networks~\cite{BNN} &
\multirow{14}{*}{\tabincell{c}{$J(\mathbf{b}, \alpha)=$\\ $\Vert\mathbf{x}-\alpha \mathbf{b}\Vert^{2}$ \\\\ $\alpha^{*},
\mathbf{b}^{*}=$\\ $\mathop{\arg\min}\limits_{\alpha, \mathbf{b}}{J(\mathbf{b},
\alpha)}$}}
& - & \tiny{S} & - & -\\ \cline{3-3} \cline{5-8}
& & XNOR-Net~\cite{XNOR-Net} & & \tiny{RB+RP} & \tiny{A} & - & - \\ \cline{3-3} \cline{5-8}
& & DoReFa-Net~\cite{DoReFa-Net} & & - & \tiny{A} & - & - \\ \cline{3-3} \cline{5-8}
& & High-Order Residual Quantization~\cite{HORQ} & & - & \tiny{A} & - & - \\ \cline{3-3} \cline{5-8}
& & ABC-Net~\cite{ABC-Net} & & - & \tiny{S} & - & - \\ \cline{3-3} \cline{5-8}
& & Two-Step Quantization~\cite{TSQ} & & \tiny{RB} & - & - & - \\ \cline{3-3} \cline{5-8}
& & Binary Weight Networks via Hashing\cite{BWN-Hashing}& & - & \tiny{S} & - & - \\ \cline{3-3} \cline{5-8}
& & PArameterized Clipping acTivation~\cite{PACT} & & - & \tiny{A} & - & - \\ \cline{3-3} \cline{5-8}
& & LQ-Nets~\cite{LQ-Net} & & \tiny{RB} & - & - & - \\ \cline{3-3} \cline{5-8}
& & Wide Reduced-Precision Networks~\cite{WRPN} & & \tiny{WD} & \tiny{A} & - & - \\ \cline{3-3} \cline{5-8}
& & {XNOR-Net++}~\cite{Bulat2019XNORNetIB} & & {-} & {\tiny{A}} & {-} & {-} \\ \cline{3-3} \cline{5-8}
& & Learning Symmetric Quantization~\cite{SYQ} & & - & - &\checkmark& - \\ \cline{3-3} \cline{5-8}
& & {BBG~\cite{DBLP:journals/corr/abs-1909-12117} } & & {SC} & {-} &{-}& {-} \\ \cline{3-3} \cline{5-8}
& & {Real-to-Bin~\cite{martinez2020training} } & & {SC} & {A} &{-}& {\checkmark} \\ \cline{2-8}
& \multirow{7}{*}{\tabincell{c}{Improve\\Network\\Loss\\Function}} & Distilled Binary Neural Network~\cite{DistilledBNN} &\multirow{7}{*}{\tabincell{c}{$\mathcal{L}_{\text {\rm total}}^{b}=$\\$\mathcal{L}_{\rm original}^{b}+$\\ $\lambda \mathcal{L}_{\rm Customized}^{b}$}} & - & \tiny{S} & - & - \\ \cline{3-3} \cline{5-8}
& & Distillation and Quantization~\cite{Distillation-Quant} & & - & \tiny{S} & - & - \\ \cline{3-3} \cline{5-8}
& & Apprentice~\cite{Apprentice} & & - & - & - & - \\ \cline{3-3} \cline{5-8}
& & Loss-Aware Binarization~\cite{Loss-Aware-BNN} & & - & \tiny{A} & - & - \\ \cline{3-3} \cline{5-8}
& & Incremental Network Quantization~\cite{INQ} & & - & \tiny{S} &\checkmark& - \\ \cline{3-3} \cline{5-8}
& & BNN-DL~\cite{Regularize-act-distribution} & & - & \tiny{R} & - &\checkmark \\ \cline{3-3} \cline{5-8}
& & {CI-BCNN~\cite{LearningChannel-Wise}} & & {-} & {\tiny{R}} & {-} &{\checkmark} \\ \cline{3-3} \cline{5-8}
& & Main/Subsidiary Network~\cite{Subsidiary} & & \tiny{RB} & - & - & - \\ \cline{3-3} \cline{2-8}
&\multirow{11}{*}{\tabincell{c}{Reduce\\the\\Gradient\\Error}} & Bi-Real Net~\cite{Bi-Real} &\multirow{11}{*}{\tabincell{c}{Customized\\$\mathtt{ApproxFunc}$ (FP)\\or\\$\mathtt{QuantFunc}$ (BP)\\or\\$\mathtt{UpdateFunc}$ (BP)}} &\tiny{SC} & \tiny{S} & - &\checkmark \\ \cline{3-3} \cline{5-8}
& & Circulant Binary Convolutional Networks\cite{CirculantBNN} & &\tiny{SC} & \tiny{S} & - &\checkmark \\ \cline{3-3} \cline{5-8}
& & Half-wave Gaussian Quantization~\cite{HWGQ} & & \tiny{RB} & \tiny{S} & - &\checkmark \\ \cline{3-3} \cline{5-8}
& & BNN+~\cite{BNN+} & & \tiny{RB} & \tiny{A} & - &\checkmark \\ \cline{3-3} \cline{5-8}
& & Differentiable Soft Quantization~\cite{Gong:iccv19} & & - & \tiny{A} & - &\checkmark \\ \cline{3-3} \cline{5-8}
& & BCGD~\cite{BCGD} & & - & - & - &\checkmark \\ \cline{3-3} \cline{5-8}
& & ProxQuant~\cite{proxquant} & & - & \tiny{A} & - &\checkmark \\ \cline{3-3} \cline{5-8}
& & Quantization Networks~\cite{quantization_networks} & & - & \tiny{S} & - &\checkmark \\ \cline{3-3} \cline{5-8}
& & {Self-Binarizing Networks~\cite{selfBN}} & & {-} & {\tiny{A}} & {-} &{\checkmark} \\ \cline{3-3} \cline{5-8}
& & {Improved Training BNN~\cite{ImprovedTraining}} & & {-} & {\tiny{A}} & {-} & {\checkmark} \\ \cline{3-3} \cline{5-8}
& & {IR-Net~\cite{IRNet}} & & {-} & {\tiny{S}} & {\checkmark} & {\checkmark} \\
\hline
\end{tabular}}
\begin{tablenotes}
\footnotesize
\item[*] Tech. = Technology. Tricks: ST = Structure Transformation, OP = Optimizer, AQ = Asymptotic Quantization, GA = Gradient Approximation. Optimizer: S = SGD, A = Adam, AM = AdaMax, R = RMSprop. Structure Transformation: RB = Reorder BN layer, RP = Reorder Pooling layer, WD = Widen, SC = Shortcut. FP = Forward Propagation, BP = Backward Propagation.
\end{tablenotes}
\end{threeparttable}
\end{table}
\subsection{Naive Binary Neural Networks}
The naive binary neural networks directly quantize the weights and activations in the neural network to 1-bit by the fixed binarization function. {Then the basic backward propagation strategy equipped with STE is applied to optimize the deep models in the standard training way.}
In 2015, Courbariaux \etal proposed BinaryConnect~\cite{BinaryConnect}, which pioneered the study of binary neural networks. BinaryConnect converts the full-precision weights inside the neural network into 1-bit binary weights. In the forward propagation of training, a stochastic binarization method is adopted to quantize the weights, simulating the effect of binary weights during inference. During backward propagation, a $\mathtt{clip}$ function is introduced to cut off the update range of the full-precision weights, preventing the real-valued weights from growing too large without any impact on the binary weights. Although binarization greatly compresses the parameters of the model (even with large quantization error), the binary model can closely approach the state-of-the-art performance on some datasets in image classification tasks. The stochastic binarization method in BinaryConnect is defined as:
\begin{equation}
w_b =
\left\{\begin{array}{ll}{+1,} & {{\rm with\ probability }\ p=\hat{\sigma}(w)} \\
{-1,} & {{\rm with\ probability }\ 1-p}\end{array}\right.
\end{equation}
where $\hat{\sigma}$ is the ``hard sigmoid'' function:
\begin{equation}
\hat{\sigma}(x) =\mathtt{clip}(\frac{x+1}{2}, 0, 1) = \max(0, \min(1, \frac{x+1}{2}))
\end{equation}
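The stochastic rule above can be sketched directly (illustrative code; note that for $|w|\ge 1$ the outcome is deterministic because $\hat{\sigma}(w)$ saturates at 0 or 1):

```python
import random

def hard_sigmoid(x):
    # clip((x + 1) / 2, 0, 1)
    return max(0.0, min(1.0, (x + 1.0) / 2.0))

def stochastic_binarize(w, rng=random):
    # +1 with probability hard_sigmoid(w), -1 otherwise
    return 1.0 if rng.random() < hard_sigmoid(w) else -1.0
```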
Following the paradigm of binarizing the network, Courbariaux \etal further introduced the Binarized Neural Network (BNN)~\cite{BNN}, presenting the training and acceleration techniques in detail and proving the practicability and acceleration capability of binary neural networks from both theoretical and practical aspects. For the inference acceleration of networks with batch normalization, this method also devised techniques such as Shift-based Batch Normalization and XNOR-Bitcount. The experiments on image classification show that BNN takes $32\times$ less storage space and 60\% less run time. Smaragdis \etal also studied network binarization and developed the Bitwise Neural Network, which is especially suitable for resource-constrained environments~\cite{BitwiseNN}.
\subsection{{Optimization Based Binary Neural Networks}}
The naive binarization methods save computational resources by quantizing the network in a very simple way. However, without considering the effect of binarization in the forward and backward processes, these methods inevitably suffer accuracy loss on a wide range of tasks. Therefore, in order to mitigate the accuracy loss of binary neural networks, a great number of optimization-based solutions have been proposed in the past years, showing consistent improvement over the naive ones.
\subsubsection{Minimize the Quantization Error}
For the optimization of binary neural networks, a common practice is to reduce the quantization error of weight and activation. This is a straightforward solution similar to the standard quantization mechanism that the quantized parameter should approximate the full-precision parameter as closely as possible, expecting that the performance of the binary neural network model will be close to the full-precision one.
As early research considering the quantization error, Rastegari \etal proposed Binary Weight Networks (BWN) and XNOR-Net~\cite{XNOR-Net}. BWN adopts the setting of binary weights and full-precision activations, while XNOR-Net binarizes both weights and activations. Different from prior studies,~\cite{XNOR-Net} closely approximates the floating-point parameters by introducing a scaling factor for the binary parameters. Specifically, the weight quantization process in BWN and XNOR-Net can be formulated as $\mathbf w\approx\alpha\mathbf{b_w}$, where $\alpha$ is the floating-point scaling factor for the binarized weight $\mathbf{b_w}$. This means that the weights in BWN are binarized to $\{-\alpha, +\alpha\}$, yet still bring the benefits of fast computation. {Then minimizing the quantization error can help to find the optimal scaling factor and binary parameters:}
\begin{equation}
{\mathop{\min}\limits_{\alpha, \mathbf{b_{\mathbf w}}}\|\mathbf{w}-\alpha \mathbf{b_{\mathbf w}}\|^{2}}
\end{equation}
This solution enjoys much less quantization error than directly using 1-bit values ($-1/+1$), thereby improving the inference accuracy of the network. Figure \ref{fig:xnor} shows the binarization and the corresponding convolution process in XNOR-Net. A similar idea was proposed in Binary Weight Networks via Hashing (BWNH)~\cite{BWN-Hashing}, which treats the quantization process as a hash map with scaling factors. DoReFa-Net~\cite{DoReFa-Net} further extends XNOR-Net so that network training can be accelerated using quantized gradients. Mishra \etal devised Wide Reduced-Precision Networks (WRPN)~\cite{WRPN}, which also minimize the quantization error in a way similar to XNOR-Net, but increase the number of filters in each layer. Compared with directly binarizing the network, widening and binarizing together achieve a good balance between precision and acceleration. The work of Faraone \etal groups parameters during training and gradually quantizes each group with an optimized scaling factor to minimize the quantization error~\cite{SYQ}.
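For the $\mathtt{sign}$-based case, this objective has the well-known closed-form solution $\mathbf{b_w}^*=\mathtt{sign}(\mathbf w)$ and $\alpha^*=\frac{1}{n}\|\mathbf w\|_1$ derived in~\cite{XNOR-Net}; a minimal sketch:

```python
def optimal_binarization(w):
    # argmin over (alpha, b) of ||w - alpha * b||^2 with b in {-1, +1}^n:
    # b* = sign(w), alpha* = mean of |w|
    b = [1.0 if v >= 0 else -1.0 for v in w]
    alpha = sum(abs(v) for v in w) / len(w)
    return alpha, b
```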
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1\linewidth]{Figs2_final.pdf}
\end{center}
\vspace{-0.2in}
\caption{Binarization and Convolution Process of XNOR-Net}
\label{fig:xnor}
\vspace{-0.1in}
\end{figure}
To further reduce the quantization error, High-Order Residual Quantization (HORQ)~\cite{HORQ} adopts a recursive approximation to the full-precision activation based on the quantized residual, instead of the one-step approximation used in XNOR-Net. It generates the final quantized activation by a linear combination of the approximations in each recursive step. In a very similar way, Lin \etal designed ABC-Net~\cite{ABC-Net}, which linearly combines multiple binary weight matrices and scaling factors to fit the full-precision weights and activations, largely reducing the information loss caused by binarization. Wang \etal pointed out a shortcoming of the previous methods: separately minimizing the quantization error of weights and activations can hardly promise that the outputs are similar to the full-precision ones~\cite{TSQ}. To address this problem, a two-step quantization (TSQ) method was designed. During the first step, all weights are full-precision values and all activations are quantized into a low-bit format with a learnable quantization function $Q_a$. During the second step, $Q_a$ is fixed and the low-bit weight vector $\mathbf{b_w}$ and scaling factor $\alpha$ are learned as follows:
\begin{equation}
{\min\limits_{\alpha, \mathbf{\mathbf{b_w}}} \quad\left\|\mathbf z-Q_{a}\left(\alpha(\mathbf{a}\odot \mathbf{b_w})\right)\right\|_{2}^{2},}
\end{equation}
which can be solved efficiently in an iterative manner.
The aforementioned methods usually choose a fixed binarization function (\emph{e.g. }, the $\mathtt{sign}$ function). One can also adopt a more flexible binarization function and learn its parameters while minimizing the quantization error. To achieve this goal, Choi \etal proposed PArameterized Clipping acTivation (PACT)~\cite{PACT} with a learnable upper bound for the activation function. The optimized upper bound of each layer ensures that the quantization range of each layer is aligned with the original distribution. In practice, PACT performs better on binary networks, and can achieve accuracy close to the full-precision network on larger networks. In~\cite{LQ-Net}, the Learned Quantization (LQ-Nets) method attempts to minimize the quantization error by jointly training the neural network and the quantizers in the network. Different from previous work, LQ-Nets learn the quantization thresholds and cutoff values by minimizing the quantization error during network training, and can support arbitrary-bit quantization. In~\cite{Xu2019Accurate}, trainable scaling factors for both weights and activations are introduced to increase the value range. {Based on XNOR-Net, Bulat \etal proposed XNOR-Net++, which fuses the activation and weight scaling factors into a single one that is learned discriminatively via backward propagation~\cite{Bulat2019XNORNetIB}.}
\subsubsection{Improve the Network Loss Function}
Minimizing the quantization error tries to retain the values of the full-precision weights and activations, and thus reduces the information loss in each layer. However, only focusing on the local layers can hardly promise an exact final output after passing through a series of layers. Therefore, it is highly desirable that the network training globally takes the binarization as well as the task-specific objective into account. Recently, a number of research efforts have aimed at finding a network loss function that can guide the learning of the network parameters under the restrictions brought by binarization.
Usually the general binarization scheme only focuses on accurate local approximation of the floating-point values and ignores the effect of binary parameters on the global loss. In~\cite{Loss-Aware-BNN}, Hou \etal proposed Loss-Aware Binarization (LAB), which directly minimizes the overall loss associated with binary weights using a quasi-Newton algorithm. The method utilizes the second-order moving average already computed by the Adam optimizer to find the optimal weights with consideration of the characteristics of binarization. Apart from considering the task-relevant loss from a quantization view, devising additional quantization-aware loss terms has proved to be practical. In~\cite{Regularize-act-distribution}, Ding \etal summarized the problems caused by forward binarization and backward propagation in binary neural networks, including ``degeneration'', ``saturation'' and ``gradient mismatch''. To address these issues, a distribution loss was introduced to explicitly regularize the activation distribution as follows:
\begin{equation}
\mathcal{L}_{total}=\mathcal{L}_{CE}+\lambda \mathcal{L}_{DL}
\end{equation}
where $\mathcal{L}_{CE}$ is the common cross-entropy loss for training deep neural networks, $\mathcal{L}_{DL}$ is the distribution loss for learning the proper binarization, and $\lambda$ balances the effect of the two losses. With the guidance of the additional loss, the learned neural network can effectively avoid the aforementioned obstacles and is friendly to binarization. The Incremental Network Quantization (INQ) method~\cite{INQ} proposed by Zhou \etal, which adds a regularization term to the loss function, also supports this point.
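The actual $\mathcal{L}_{DL}$ in~\cite{Regularize-act-distribution} is built from activation statistics; as a purely illustrative stand-in, the sketch below combines a task loss with a hypothetical regularizer that penalizes values far from the binary levels $\{-1,+1\}$:

```python
def total_loss(ce_loss, values, lam=1e-4):
    # L_total = L_CE + lambda * L_reg; the regularizer here is a
    # hypothetical stand-in for the distribution loss: it penalizes
    # values that sit far from the binary levels -1 and +1
    reg = sum((1.0 - abs(v)) ** 2 for v in values)
    return ce_loss + lam * reg
```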
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1\linewidth]{Apprentice.pdf}
\end{center}
\vspace{-0.2in}
\caption{Schematic of Apprentice Network}
\label{fig:apprentice}
\vspace{-0.1in}
\end{figure}
The guiding information for training accurate binary neural networks can also derive from the knowledge of a large full-precision model. The Apprentice method~\cite{Apprentice} trains a low-precision student network under the guidance of a well-trained, full-precision, large-scale teacher network with the following loss function:
\begin{equation}
\mathcal{L}\left(x ; \mathbf {w}^{T}, \mathbf{b}_\mathbf{w}^{S}\right)=\alpha \mathcal{H}\left(y, p^{T}\right)+\beta \mathcal{H}\left(y, p^{S}\right)+\gamma \mathcal{H}\left(z^{T}, p^{S}\right)
\end{equation}
where $\mathbf {w}^T$ and $\mathbf {b}_\mathbf{w}^S$ are the full-precision weights of the teacher model and the binary weights of the student (apprentice) model respectively, $y$ is the label for sample $x$, $\mathcal{H}(\cdot)$ is the loss function defined on the soft and hard labels of the teacher and apprentice models, $\alpha,\ \beta,\ \gamma$ are the weighting factors, and $p^T$ and $p^S$ are the predictions of the teacher and student models, respectively. Under the supervision of the teacher network, the binary network can preserve the learning capability and thus obtain performance close to that of the teacher network. The process of knowledge distillation is shown in Figure \ref{fig:apprentice}. Similar mimic solutions such as Distillation and Quantization (DQ)~\cite{Distillation-Quant}, Distilled Binary Neural Network (DBNN)~\cite{DistilledBNN} and Main/Subsidiary Network~\cite{Subsidiary} have been studied, and their experiments demonstrate that loss functions related to the full-precision teacher model help to stabilize the training of an accurate binary student model. CI-BCNN proposed in~\cite{LearningChannel-Wise} mines channel-wise interactions, through which prior knowledge is provided to alleviate the inconsistency of signs in binary feature maps and to preserve the information of input samples during inference. {\cite{martinez2020training} built strong BNNs with a training loss function that matches the spatial attention maps computed at the outputs of the binary and real-valued convolutions.}
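Assuming $\mathcal{H}$ is the cross-entropy between a target distribution and a prediction (an assumption made here for illustration), the three-term loss can be sketched as:

```python
import math

def cross_entropy(target, pred):
    # H(t, p) = -sum_i t_i * log(p_i)
    return -sum(t * math.log(p) for t, p in zip(target, pred))

def apprentice_loss(y, p_teacher, p_student, soft_teacher,
                    alpha=1.0, beta=1.0, gamma=1.0):
    # alpha*H(y, p^T) + beta*H(y, p^S) + gamma*H(z^T, p^S),
    # where soft_teacher plays the role of the teacher's soft labels z^T
    return (alpha * cross_entropy(y, p_teacher)
            + beta * cross_entropy(y, p_student)
            + gamma * cross_entropy(soft_teacher, p_student))
```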
\subsubsection{Reduce the Gradient Error}
Training of binary neural networks still relies on the popular BP algorithm. To deal with the gradients of the non-differentiable binarization function, the straight-through estimator (STE) technique is often adopted to estimate the gradients in backward propagation~\cite{STE}. However, there exists an obvious gradient mismatch between the gradient of the binarization function (\emph{e.g. }, $\mathtt{sign}$) and the STE (\emph{e.g. }, $\mathtt{clip}$). Besides, STE also suffers from the problem that parameters outside the range $[-1, +1]$ will not be updated. These problems easily lead to under-optimized binary networks with severe performance degradation.
Intuitively, an elaborately designed approximate binarization function can help to relieve the gradient mismatch in backward propagation. Bi-Real Net~\cite{Bi-Real} presents a customized $\mathtt{ApproxSign}$ function to replace $\mathtt{sign}$ in the back-propagation gradient calculation as follows:
\begin{equation}
\mathtt { ApproxSign }(x)=\left\{\begin{array}{ll}{-1,} & {\text {\rm if }\ x<-1} \\ {2 x+x^{2},} & {\text {\rm if }\ -1 \leq x<0} \\ {2 x-x^{2},} & {\text {\rm if }\ 0 \leq x<1} \\ {1,} & {\text {\rm otherwise }}\end{array}\right.
\end{equation}
\begin{equation}
{\frac{\partial \mathtt { ApproxSign }(x)}{\partial x}=\left\{\begin{array}{ll}{2+2 x,} & {\text {\rm if }\ -1 \leq x<0} \\ {2-2 x,} & {\text {\rm if }\ 0 \leq x<1} \\ {0,} & {\text {\rm otherwise }}\end{array}\right.}
\end{equation}
Compared to the traditional STE, $\mathtt{ApproxSign}$ has a close shape to that of the original binarization function $\mathtt{sign}$, and thus the gradient error can be controlled to some extent. Circulant Binary Convolutional Networks (CBCN)~\cite{CirculantBNN} also applied an approximate function to address the gradient mismatch from $\mathtt{sign}$ function. Binary Neural Networks+ (BNN+)~\cite{BNN+} directly proposed an improved approximation to the derivative of the $\mathtt{sign}$ function, and introduced a regularization function that encourages the learned weights around the binary values.
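The piecewise definitions above translate directly into code (a minimal sketch):

```python
def approx_sign(x):
    # piecewise-quadratic surrogate for sign, as defined above
    if x < -1.0:
        return -1.0
    if x < 0.0:
        return 2.0 * x + x * x
    if x < 1.0:
        return 2.0 * x - x * x
    return 1.0

def approx_sign_grad(x):
    # its derivative, used in place of the clip-based STE in backward propagation
    if -1.0 <= x < 0.0:
        return 2.0 + 2.0 * x
    if 0.0 <= x < 1.0:
        return 2.0 - 2.0 * x
    return 0.0
```

Unlike the constant-1 derivative of $\mathtt{clip}$ inside $[-1,1]$, this gradient peaks at 0 and decays linearly toward $\pm 1$, tracking the shape of $\mathtt{sign}$ more closely.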
{Besides focusing on the backward propagation, some recent methods attempted to pursue good quantization functions in the forward propagation, which can also reduce the gradient error.} In~\cite{HWGQ}, the proposed Half-wave Gaussian Quantization (HWGQ) method gave a low-precision estimation of the more commonly used $\mathtt{ReLU}$ function in the forward propagation during training, which works surprisingly well to alleviate the gradient mismatch problem. Following the same intuition, Gong \etal presented a Differentiable Soft Quantization (DSQ) method~\cite{Gong:iccv19}, replacing the traditional quantization function with a soft quantization function:
\begin{equation}
\varphi(x)=s \tanh \left(k\left(x-m_{i}\right)\right), \quad \text {\rm if }\ x \in \mathcal{P}_{i}
\end{equation}
where $k$ determines the shape of the asymptotic function, $s$ is a scaling factor that makes the soft quantization function smooth, and $m_i$ is the center of the interval $\mathcal{P}_{i}$. DSQ can adjust the cutoff value and the shape of the soft quantization function to gradually approach the standard $\mathtt{sign}$ function. {In fact, the DSQ function rectifies the data distribution in a steerable way, and thus helps to alleviate the gradient mismatch.} The overview of DSQ is shown in Figure \ref{fig:dsq}. A similar method~\cite{quantization_networks} also provides a simple and uniform way for weight and activation quantization by formulating it as a differentiable non-linear function. Besides, ProxQuant~\cite{proxquant} formulates quantized network training as a regularized learning problem and optimizes it via the prox-gradient method: it back-propagates on the underlying full-precision vector and applies an efficient prox-operator between stochastic gradient steps. \cite{selfBN} and~\cite{ImprovedTraining} also explored smooth transitions for the derivative of the $\mathtt{sign}$ function, using a parameterized $\mathtt{Tanh}$ function and a $\mathtt{SoftSign}$ function to reduce the gradient error in training. {The IR-Net proposed in~\cite{IRNet} includes a self-adaptive Error Decay Estimator (EDE) to reduce the gradient error in training, which considers the different requirements at different stages of the training process and balances the update ability of the parameters against the reduction of the gradient error. The IR-Net provides a new perspective for improving BNNs, namely that retaining both forward and backward information is crucial for accurate BNNs, and it is the first to design BNNs considering both forward and backward information retention.}
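For the binary case with a single interval centered at $m=0$, the soft function reduces to $s\tanh(kx)$; the sketch below chooses $s$ so that $\varphi(\pm 1)=\pm 1$ (an illustrative parameterization, not necessarily the paper's exact one):

```python
import math

def dsq_binary(x, k=5.0):
    # phi(x) = s * tanh(k * x) on [-1, 1]; larger k pushes phi
    # toward the hard sign function
    s = 1.0 / math.tanh(k)
    x = max(-1.0, min(1.0, x))
    return s * math.tanh(k * x)
```

Because $\varphi$ is differentiable everywhere, the true gradient of the surrogate can be used in training instead of an STE.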
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1\linewidth]{DSQ.pdf}
\end{center}
\vspace{-0.2in}
\caption{Overview of Differentiable Soft Quantization}
\label{fig:dsq}
\vspace{-0.1in}
\end{figure}
Besides modifying binarization function in backward or forward propagation,~\cite{BCGD} directly calibrates the gradients by the blended coarse gradient descent (BCGD) algorithm.
The weight update of BCGD goes by a weighted average of the full-precision weights and their quantized counterparts:
\begin{equation}
\mathbf{w}^{t+1}=(1-\rho) \mathbf{w}^{t}+\rho \mathbf{b}_\mathbf{w}^{t}-\eta \nabla f\left(\mathbf{b}_\mathbf{w}^{t}\right)
\end{equation}
where $\mathbf{w}^t$ denotes the full-precision weights at the $t$-th step and $\nabla f\left(\mathbf{b}_\mathbf{w}^{t}\right)$ denotes the gradient at $\mathbf{b}_\mathbf{w}^t$. The blended update yields sufficient descent in the objective value and thus accelerates training.~\cite{DeeperUnderstanding} further investigated training methods for quantized neural networks from a theoretical viewpoint, and showed that training algorithms exploiting high-precision representations have an important greedy search phase that purely quantized training methods lack, which explains the difficulty of training using low-precision arithmetic.
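A sketch of one BCGD weight update, taking $\mathbf{b}_\mathbf{w}^t=\mathtt{sign}(\mathbf{w}^t)$ and letting the caller supply the gradient evaluated at the quantized weights (illustrative names):

```python
def bcgd_step(w, grad_at_bw, rho=0.1, eta=0.01):
    # w_{t+1} = (1 - rho) * w_t + rho * b_w - eta * grad f(b_w)
    out = []
    for wi, gi in zip(w, grad_at_bw):
        bi = 1.0 if wi >= 0 else -1.0   # b_w = sign(w)
        out.append((1.0 - rho) * wi + rho * bi - eta * gi)
    return out
```

Setting $\rho=0$ recovers the plain STE-style update on the full-precision weights, while larger $\rho$ blends the weights toward their quantized counterparts.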
\subsection{{Efficient Computing Architectures for Binary Neural Networks}}
The most attractive point of binary neural networks is that they enjoy the advantages of fast computation, low power consumption and low memory footprint, which faithfully support general hardware (including FPGA, ASIC, CPU, \emph{etc}.) with limited computational resources. FPGAs are the most widely used platforms because they allow for customizing data paths and adjusting designs; in particular, FPGAs allow optimization around XNOR-Bitcount operations. ASICs have the potential to provide the ultimate power and computational efficiency for binary neural networks, because the hardware layout in ASICs can be designed according to the network structure. To make the binarization algorithms more practical in wide scenarios with different hardware environments, researchers have also devoted great efforts to developing hardware-friendly binary networks.
The XNOR.AI team, who proposed XNOR-Net~\cite{XNOR-Net}, successfully launched XNOR-Net on the cheap Raspberry Pi device. In order to reduce the amount of computation, they conducted optimizations for different target hardware. They also combined XNOR-Net with real-time detection algorithms such as YOLO~\cite{yolov3}, and deployed them in edge computing scenarios like smart home and autonomous driving. FP-BNN~\cite{FP-BNN} implemented a 64-channel acceleration on the Stratix-V FPGA system and analyzed the performance through the Resource-Aware Model Analysis (RAMA) method. Both~\cite{Accelerating-Binarized} and~\cite{FINN} from Xilinx also studied FPGA-based binary network accelerators using different strategies.~\cite{Accelerating-Binarized} relied on variable-length buffers and achieved up to twice the number of operations per second of existing FPGA accelerators.~\cite{TowardsFastandEnergy} proposed two types of fast and energy-efficient architectures for binary neural network inference; by reusing the results from previous computations, many cycles for data buffer access and computation can be skipped. In order to achieve the most possible memory latency hiding,~\cite{FINN} designed a multi-stream architecture, and applied the Bitcount, Threshold and OR operations to map the binary network to the FPGA operators. The researchers of the Hasso Plattner Institute in Germany implemented an accelerated version of BMXNet~\cite{bmxnet,bmxnetv2} on GPU for both binary neural networks and linear quantization networks based on MXNet, supporting XNOR-Net and DoReFa-Net. {For the ARM platform, engineers from the JD company developed the binarization inference library daBNN~\cite{dabnn} for mobile phone platforms. The library uses ARM assembly instructions and is 8-24$\times$ more efficient than BMXNet.}
\begin{table}[htb]
\setlength{\abovecaptionskip}{0.cm}
\centering
\tiny
\caption{Deployment Performance of Binary Neural Networks}
\label{BNN-in-different-platforms}
\begin{threeparttable}
\setlength{\tabcolsep}{0.5mm}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
\textbf{Dataset} & \textbf{Method} & \multicolumn{1}{c|}{\textbf{\tabincell{c}{Acc.\\(\%)}}} & \textbf{Topology} & \textbf{Platform} & \multicolumn{1}{c|}{\textbf{LUTs}} & \multicolumn{1}{c|}{\textbf{BRAMs}} & \multicolumn{1}{c|}{\textbf{\tabincell{c}{Clk\\(MHz)}}} & \multicolumn{1}{c|}{\textbf{FPS}} & \multicolumn{1}{c|}{\textbf{\tabincell{c}{Power\\(W)}}} \\ \cline{1-10}
\multirow{6}{*}{\tabincell{c}{MNIST\\\cite{MNIST}}} & FINN-R~\cite{FINN-R} & 97.7 & MLP-4 & Zynq-8020 & 25,358 & 220 & 100 & - & 2.5 \\ \cline{2-10}
& FINN-R~\cite{FINN-R} & 97.7 & MLP-4 & ZynqUltra 3EG & 38,250 & 417 & 300 & - & 11.8 \\ \cline{2-10}
& ReBNet\cite{ReBNet} & 98.3 & MLP-4* & Spartan 750 & 32,600 & 120 & 200 & - & - \\ \cline{2-10}
& FINN~\cite{FINN} & 98.4 & MLP-4 & Zynq-7045 & 82,988 & 396 & 200 & 1,561,000 & 22.6 \\ \cline{2-10}
& BinaryEye~\cite{BinaryEye} & 98.4 & MLP-4 & Kintex 7325T & 40,000 & 110 & 100 & 10,000 & 12.2 \\ \cline{1-10}
\multirow{3}{*}{\tabincell{c}{SVHN\\\cite{SVHN}}} & FINN~\cite{FINN} & 94.9 & CNV-6 & Zynq-7045 & 46,253 & 186 & - & 21,900 & 11.7 \\ \cline{2-10}
& FBNA~\cite{FBNA} & 96.9 & CNV-6 & Zynq-7020 & 29,600 & 103 & - & 6,451 & 3.2 \\ \cline{2-10}
& ReBNet~\cite{ReBNet} & 97.0 & CNV-6* & Zynq-7020 & 53,200 & 280 & 100 & - & - \\ \cline{1-10}
\multirow{15}{*}{\tabincell{c}{CIFAR-10\\\cite{LearningMultipleLayers}}} & Zhou \etal~\cite{Zhou2017DeepLB} & 66.6 & CNV-2* & Zynq-7045 & 20,264 & - & - & - & - \\ \cline{2-10}
& Nakahara \etal~\cite{A-memory-based-realization} & - & CNV* & Vertex-7 690T & 20,352 & 372 & 450 & - & 15.4 \\ \cline{2-10}
& Fraser \etal~\cite{Fraser2017ScalingBN} & 79.1 & 1/4 cnn* & KintexUltra 115 & 35,818 & 144 & 125 & 12,000 & - \\ \cline{2-10}
& FINN-R~\cite{FINN-R} & 80.1 & CNV-6 & ZynqUltra 3EG & 41,733 & 283 & 300 & - & 10.7 \\ \cline{2-10}
& FINN-R~\cite{FINN-R} & 80.1 & CNV-6 & Zynq-7020 & 25,700 & 242 & 100 & - & 2.3 \\ \cline{2-10}
& FINN~\cite{FINN} & 80.1 & CNV-6 & Zynq-7045 & 46,253 & 186 & 200 & 21,900 & 11.7 \\ \cline{2-10}
& FINN~\cite{FINN-R} & 80.1 & CNV-6 & ADM-PCIE-8K5 & 365,963 & 1,659 & 125 & - & 41.0 \\ \cline{2-10}
& FINN~\cite{A-fully-connected-layer} & 80.1 & CNV-6 & Zynq-7020 & 42,823 & 270 & 166 & 445 & 2.5 \\ \cline{2-10}
& Nakahara \etal~\cite{A-fully-connected-layer} & 81.8 & CNV-6 & Zynq-7020 & 14,509 & 32 & 143 & 420 & 2.3 \\ \cline{2-10}
& Fraser \etal~\cite{Fraser2017ScalingBN} & 85.2 & 1/2 cnn* & KintexUltra 115 & 93,755 & 386 & 125 & 12,000 & - \\ \cline{2-10}
& Zhou \etal~\cite{Zhou2017DeepLB} & 86.1 & CNV-5* & Vertex-7 980T & 556,920 & - & 340 & 332,158 & - \\ \cline{2-10}
& ReBNet~\cite{ReBNet} & 87.0 & CNV-6* & Zynq-7020 & 53,200 & 280 & 200 & - & - \\ \cline{2-10}
& Zhao \etal~\cite{Accelerating-Binarized} & 87.7 & CNV-6 & Zynq-7020 & 46,900 & 140 & 143 & 168 & 4.7 \\ \cline{2-10}
& Fraser \etal~\cite{Fraser2017ScalingBN} & 88.3 & BNN* & KintexUltra 115 & 392,947 & 1814 & 125 & 12,000 & - \\ \cline{2-10}
& FBNA~\cite{FBNA} & 88.6 & CNV-6 & Zynq-7020 & 29,600 & 103 & - & 520 & 3.3 \\ \cline{1-10}
\multirow{2}{*}{\tabincell{c}{ImageNet\\\cite{Deng2009ImageNet}}} & ReBNet~\cite{ReBNet} & 41.0 & CNV-5* & VertexUltra 095 & 1,075,200 & 3456 & 200 & - & - \\ \cline{2-10}
& Yonekawa \etal~\cite{On-Chip-Memory-Based} & - & VGG-16 & ZynqUltra 9EG & 191,784 & 32,870 & 150 & 31.48 & 22.0 \\ \hline
\end{tabular}}
\begin{tablenotes}
\footnotesize
\item[1] The * indicates a customized network structure; Acc. refers to the Top-1 classification accuracy on each dataset.
\end{tablenotes}
\end{threeparttable}
\end{table}
We compare different binary neural network implementations~\cite{FINN-R,FINN,ReBNet,BinaryEye,A-memory-based-realization,Fraser2017ScalingBN,A-fully-connected-layer,Accelerating-Binarized,On-Chip-Memory-Based,Zhou2017DeepLB} on different FPGA platforms in Table \ref{BNN-in-different-platforms}. The method proposed by~\cite{On-Chip-Memory-Based} achieves accuracy comparable to full-precision models, although it is not efficient enough. The implementation from Xilinx~\cite{FINN} offers the most promising speed at low power consumption, and a series of experiments proves that it achieves a good balance among accuracy, speed and power consumption.~\cite{ReBNet} obtains high accuracy on small datasets such as MNIST and CIFAR-10, but poor results on ImageNet. We must point out that, despite the progress in hardware-friendly algorithms, very few binary models so far perform well on large datasets such as ImageNet in terms of both speed and accuracy.
\subsection{Applications of Binary Neural Networks}
Image classification is a fundamental task in computer vision and machine learning, so most existing studies evaluate binary neural networks on image classification. BNNs can greatly accelerate and compress neural network models, which makes them highly attractive to deep learning researchers: with both weights and activations binarized, convolutions become theoretically $58\times$ faster and memory consumption drops by $32\times$. Binary neural networks have therefore also been applied to other common tasks such as object detection and semantic segmentation.
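The $32\times$ memory saving follows from storing each binary weight in a single bit rather than a 32-bit float. A small illustrative sketch (ours, not taken from any cited work) packs $\pm 1$ weights eight per byte and recovers them losslessly:

```python
def pack_weights(w):
    """Pack +/-1 weights into a bytearray, 8 weights per byte (+1 -> bit set)."""
    out = bytearray((len(w) + 7) // 8)
    for i, x in enumerate(w):
        if x == 1:
            out[i // 8] |= 1 << (i % 8)
    return out

def unpack_weights(packed, n):
    """Recover the original +/-1 weight list from the packed bytes."""
    return [1 if (packed[i // 8] >> (i % 8)) & 1 else -1 for i in range(n)]

w = [1, -1, -1, 1] * 16   # 64 binary weights
packed = pack_weights(w)
# float32 storage would need 64 * 4 = 256 bytes; the packed form needs 64 / 8 = 8 bytes,
# a 32x reduction, matching the theoretical saving quoted above
```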
In the literature, Kung \etal utilized binary neural networks for both object recognition and image classification on infrared images~\cite{Kung2018}. Their binary networks achieved performance comparable to full-precision networks on MNIST and IR datasets, with at least a 4$\times$ acceleration and an energy saving of three orders of magnitude over a GPU.~\cite{BWBDN} also addressed fast object detection by unifying the prediction and detection processes; using a binary VGG-16 network in which every convolution layer except the last is binarized, it obtained a 62$\times$ acceleration and a 32$\times$ storage saving. Li \etal built quantized object detectors based on RetinaNet and Faster R-CNN and showed that these detectors achieve very encouraging performance~\cite{Li_2019_CVPR}. Leng \etal applied BNNs to different tasks, evaluating their method on convolutional neural networks for image classification and object detection and on recurrent neural networks for language modeling~\cite{Leng2017ExtremelyLB}. Zhuang \etal proposed a ``network decomposition'' strategy called Group-Net, which generalizes well to tasks including classification and semantic segmentation, outperforming the previous best binary neural networks in both accuracy and computational savings~\cite{Zhuang_2019_CVPR}. SeerNet~\cite{SeerNet} exploits feature-map sparsity through low-bit quantization and is applicable to general convolutional neural networks and tasks.
Researchers have also studied enhancing model robustness through binarization. Binary models are generally considered more robust than full-precision ones because binarization is believed to filter input noise. Lin \etal explored the impact of quantization on robustness and showed that quantizing parameters helps BNNs remove perturbations when the noise magnitude is small; for vanilla BNNs, however, the distance between clean and perturbed representations grows when the noise magnitude is large. This inferior robustness stems from an error amplification effect in the forward propagation of BNNs, where the quantization operation further enlarges the distance caused by amplified noise. They proposed Defensive Quantization (DQ)~\cite{lin2018defensive} to defend quantized models against adversarial examples by suppressing the noise amplification effect and keeping the noise magnitude small in each layer. In DQ models, quantization improves robustness instead of degrading it, making them even more robust than full-precision networks.
\subsection{Tricks for Training Binary Neural Networks}
Due to the highly discrete nature of binarization, training binary neural networks often requires special techniques to make training more stable and to reach higher convergence accuracy. In this section we summarize general, effective training techniques that have been widely adopted in the literature, covering network structure transformation, optimizer and hyper-parameter selection, gradient approximation, and asymptotic quantization.
\subsubsection{Network Structure Transformation}
Binarization converts activations and weights to $\{-1, +1\}$. This effectively regularizes the data and changes its distribution in ways that are hard to anticipate. Adjusting the network structure is a promising way to adapt to these distribution changes.
Simply reordering the layers of a network can improve the performance of a binary neural network. In~\cite{alizadeh2018a}, researchers from the University of Oxford pointed out that almost all binarization studies reposition the pooling layer: it is placed immediately after the convolutional layer, because applying max pooling after binarization causes information loss. Experiments show that this reordering yields a large accuracy improvement. Besides the pooling layer, the location of the batch normalization layer also strongly affects the stability of binary network training.~\cite{TSQ} and~\cite{HWGQ} insert a batch normalization layer before every quantization operation to rectify the data; after this transformation, the quantized input follows a stable distribution (often close to Gaussian), so the mean and variance stay within a reasonable range and training becomes much smoother.
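The effect of normalizing before the quantizer can be seen in a toy experiment (our sketch, not from the cited papers): without normalization, a shifted pre-activation distribution collapses almost entirely to $+1$ under $\mathrm{sign}$, whereas standardization lets both binary states carry information:

```python
import random
import statistics

random.seed(0)
# Pre-activations with a large positive shift: sign() maps nearly all of them to +1,
# so the binarized feature carries almost no information
x = [random.gauss(3.0, 1.0) for _ in range(10000)]
frac_pos_raw = sum(v > 0 for v in x) / len(x)

# Batch-norm-style standardization (zero mean, unit variance) recentres the data,
# so the sign() threshold at 0 splits it roughly in half
mu, sigma = statistics.mean(x), statistics.pstdev(x)
xn = [(v - mu) / sigma for v in x]
frac_pos_bn = sum(v > 0 for v in xn) / len(xn)
```

Here `frac_pos_raw` is close to 1 while `frac_pos_bn` is close to 0.5, which is the balanced regime that makes subsequent binarization far less destructive.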
Based on a similar idea, several recent works directly modify the network structure instead of adding new layers. For example, Bi-Real~\cite{Bi-Real} connects the full-precision feature maps across layers to the subsequent network, essentially adjusting the data distribution through a structural transformation. Mishra \etal devised Wide Reduced-Precision Networks (WRPN)~\cite{WRPN}, which increase the number of filters in each layer and thereby reshape the data distributions. Binary Ensemble Neural Network (BENN)~\cite{BENN} leverages ensembling to fit the underlying data distributions. Liu \etal proposed circulant filters (CiFs) and a circulant binary convolution (CBConv) to enhance the capacity of binarized convolutional features, together with circulant back propagation (CBP) to train such structures~\cite{CirculantBNN}. BBG~\cite{DBLP:journals/corr/abs-1909-12117} even appends a gated residual to compensate for the information loss in the forward pass.
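The shortcut idea behind Bi-Real can be sketched in a few lines (our toy illustration using an element-wise product as a stand-in for a real binary convolution, not the authors' implementation): the full-precision input is added back onto the binary convolution output, so real-valued information survives binarization:

```python
def sign_vec(v):
    """Binarize a vector to {-1.0, +1.0}."""
    return [1.0 if x >= 0 else -1.0 for x in v]

def binary_conv(x, w):
    # Toy stand-in for a binary convolution: element-wise product of signs
    xb, wb = sign_vec(x), sign_vec(w)
    return [a * b for a, b in zip(xb, wb)]

def plain_block(x, w):
    # Without a shortcut, only {-1, +1} values leave the block
    return binary_conv(x, w)

def bi_real_block(x, w):
    # Identity shortcut carries the full-precision activations across the layer
    y = binary_conv(x, w)
    return [a + b for a, b in zip(y, x)]

x = [0.5, -0.2, 1.3]
w = [0.3, -0.7, -0.1]
```

Note how `bi_real_block` outputs a continuum of values rather than two levels, which is exactly the richer representation the shortcut is meant to preserve.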
\subsubsection{Optimizer and Hyper-parameter Selection}
Choosing proper hyper-parameters and a suitable optimizer also improves BNN performance. Most existing binary neural network models use an adaptive learning-rate optimizer such as Adam, which makes training smoother and faster; the smoothing coefficient of the second-moment estimate is especially critical. The analysis in~\cite{alizadeh2018a} shows that with a fixed learning-rate optimizer that ignores historical gradient information, such as stochastic gradient descent (SGD), a large batch size is needed to reach comparable performance.
The momentum coefficient of batch normalization is also critical. By comparing accuracy under different momentum coefficients,~\cite{alizadeh2018a} found that the batch normalization parameters must be set appropriately to accommodate the jitter introduced by the binarization operation.
\subsubsection{Asymptotic Quantization}
Since quantization hurts training, many methods adopt an asymptotic quantization strategy that gradually increases the degree of quantization to reduce the loss caused by binarizing parameters. Practice shows that such step-by-step quantization helps to find a good solution. For instance, INQ~\cite{INQ} groups the parameters and gradually increases the number of groups participating in quantization.~\cite{Zhuang_2018_CVPR} instead steps the bit-width, first quantizing to a higher bit-width and then to a lower one. These strategies avoid the large perturbations caused by extremely low-bit quantization and compensate for the gradient error of quantized parameters during training.
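A schematic sketch of group-wise asymptotic quantization in the spirit of INQ (our simplification: groups are formed by weight magnitude, and the retraining between stages is omitted):

```python
def incremental_binarize(weights, schedule):
    """At each stage, binarize the largest-magnitude fraction of weights given by
    `schedule` and leave the rest full precision. A real INQ run would retrain the
    remaining full-precision weights between stages."""
    w = list(weights)
    order = sorted(range(len(w)), key=lambda i: -abs(w[i]))  # largest magnitude first
    quantized = [False] * len(w)
    stages = []
    for frac in schedule:                                    # e.g. 50% -> 100%
        k = int(round(frac * len(w)))
        for i in order[:k]:
            if not quantized[i]:
                w[i] = 1.0 if w[i] >= 0 else -1.0
                quantized[i] = True
        stages.append(list(w))
    return stages

stages = incremental_binarize([0.9, -0.1, 0.4, -0.8], schedule=[0.5, 1.0])
```

After the first stage only the two largest-magnitude weights are binary; after the last stage all weights are, which is the gradual transition the strategy relies on.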
\subsubsection{Gradient Approximation}
Using a smoother gradient estimator has become common practice when training binary neural networks. The straight-through estimator introduces gradient error in backward propagation, and a simple, practical remedy is to find an approximate function closer to the binarization function. This technique is widely used in recent studies~\cite{Bi-Real,HWGQ,CirculantBNN,BNN+,selfBN,ImprovedTraining,IRNet}, where approximate functions tailored to different motivations replace the standard $\mathtt{clip}$ function that causes the gradient error. An inspiring guideline for designing such a function is to align its shape with that of the binarization function~\cite{Gong:iccv19}.
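The contrast between the standard clipped STE and a smoother tanh-shaped surrogate gradient can be sketched as follows (a generic illustration; the exact surrogate differs across the cited papers):

```python
import math

def binarize(x):
    # Forward pass: the non-differentiable sign function
    return 1.0 if x >= 0 else -1.0

def grad_ste_clip(x):
    # Standard STE backward: derivative of clip(x, -1, 1),
    # i.e. 1 inside [-1, 1] and 0 outside
    return 1.0 if -1.0 <= x <= 1.0 else 0.0

def grad_tanh_surrogate(x, t=2.0):
    # Smoother surrogate: d/dx tanh(t*x) = t * (1 - tanh(t*x)**2).
    # As t grows, its shape approaches the (infinitely sharp) derivative of sign,
    # which is the shape-alignment idea mentioned above.
    return t * (1.0 - math.tanh(t * x) ** 2)
```

The surrogate concentrates gradient mass near the binarization threshold instead of spreading it uniformly over $[-1, 1]$, reducing the mismatch between the forward sign and the backward estimate.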
\section{Evaluation and Discussions}
\label{section4}
\subsection{Datasets and Network Structures}
To evaluate binary neural network algorithms, the image classification task is most widely chosen, usually on two common datasets: CIFAR-10~\cite{LearningMultipleLayers} and ImageNet~\cite{Deng2009ImageNet}. CIFAR-10 is a relatively small dataset containing 60,000 images in 10 categories, while ImageNet is currently the most popular image classification dataset. For tasks such as object detection and semantic segmentation, PASCAL VOC~\cite{Everingham:2010:PVO:1747084.1747104} and COCO (Common Objects in Context)~\cite{DBLP:journals/corr/LinMBHPRDZ14} are also employed to evaluate the performance of binary neural networks. The PASCAL VOC dataset, derived from the PASCAL Visual Object Classes challenge, is used to evaluate models for a variety of computer vision tasks; many influential models (for classification, localization, detection, segmentation, action recognition, \emph{etc.}) are benchmarked on it, especially object detection models. COCO is a dataset provided by the Microsoft team for image recognition and object detection; it contains images of 80 object categories in diverse scene types, collected from sources such as Flickr.
To investigate the generalization capability of binarization algorithms over different network structures, various deep models, including VGG~\cite{simonyan2015very}, AlexNet~\cite{krizhevsky2012imagenet}, ResNet-18~\cite{he2016deep}, ResNet-20, ResNet-34 and ResNet-50, are binarized and tested. These models have made significant breakthroughs on the ImageNet classification task and marked important milestones in the progress of deep learning. Among them, VGG contains a large number of parameters and convolution operations, so binarizing VGG clearly exposes the inference acceleration achieved by different algorithms; ResNet, with a sufficient number of layers, is currently the most popular deep model for many tasks.
\subsection{Image Classification Tasks}
\begin{table}[!h]
\setlength{\abovecaptionskip}{0.cm}
\caption{{Image Classification Performance of Binary Neural Networks on CIFAR-10 Dataset}}
\label{BNN-On-CIFAR-10}
\centering
\scriptsize
\setlength{\tabcolsep}{1.8mm}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{\textbf{Type}} & \multicolumn{1}{c|}{\textbf{Method}} & \multicolumn{1}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Bit-Width\\(W/A)\end{tabular}}} & \multicolumn{1}{c|}{\textbf{Topology}} & \multicolumn{1}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Acc.\\ (\%)\end{tabular}}} \\ \hline
\multicolumn{2}{|c|}{\multirow{6}{*}{\tabincell{c}{Full-Precision\\Neural Networks}}} & \multirow{6}{*}{-} & 32/32 & VGG-Small~\cite{LQ-Net} & 93.8 \\ \cline{4-6}
\multicolumn{2}{|c|}{} & & 32/32 & ResNet-20~\cite{LQ-Net} & 92.1 \\ \cline{4-6}
\multicolumn{2}{|c|}{} & & 32/32 & ResNet-32~\cite{proxquant} & 92.8 \\ \cline{4-6}
\multicolumn{2}{|c|}{} & & 32/32 & ResNet-44~\cite{proxquant} & 93.0 \\ \cline{4-6}
\multicolumn{2}{|c|}{} & & 32/32 & VGG-11~\cite{Subsidiary} & 83.8 \\ \cline{4-6}
\multicolumn{2}{|c|}{} & & 32/32 & NIN~\cite{Subsidiary} & 84.2 \\ \cline{1-6}
\multicolumn{2}{|c|}{\multirow{2}{*}{\tabincell{c}{Naive Binary\\ Neural Networks}}} & BinaryConnect~\cite{BinaryConnect} & 1/32 & VGG-Small & 91.7 \\ \cline{3-6}
\multicolumn{2}{|c|}{} & BNN~\cite{BNN} & 1/1 & VGG-Small & 89.9 \\ \hline
\multirow{31}{*}{\tabincell{c}{{Optimization}\\\\{Based}\\\\{Binary}\\\\{Neural}\\\\{Networks}}}& \multirow{12}{*}{\tabincell{c}{Minimize\\the\\Quantization\\Error}} & BWN~\cite{XNOR-Net} & 1/32 & VGG-Small & 90.1 \\ \cline{3-6}
& & \multirow{2}{*}{XNOR-Net~\cite{XNOR-Net}} & 1/1 & VGG-Small & 89.8 \\ \cline{4-6}
& & & 1/1 & Customized~\cite{HORQ} & 77.0 \\ \cline{3-6}
& & \multirow{2}{*}{DoReFa-Net~\cite{DoReFa-Net}} & 1/32 & ResNet-20 & 90.0 \\ \cline{4-6}
& & & 1/1 & ResNet-20 & 79.3 \\ \cline{3-6}
& & HORQ~\cite{HORQ} & 2/1 & Customized~\cite{HORQ} & 82.0 \\ \cline{3-6}
& & TSQ~\cite{TSQ} & 3/2 & VGG-Small & 93.5 \\ \cline{3-6}
& & \multirow{3}{*}{{BBG~\cite{DBLP:journals/corr/abs-1909-12117}}} & {1/1} & {ResNet-20} & {85.3} \\ \cline{4-6}
& & & {1/1} & {ResNet-20 (2x)} & {90.7} \\ \cline{4-6}
& & & {1/1} & {ResNet-20 (4x)} & {92.5} \\ \cline{3-6}
& & \multirow{2}{*}{LQ-Nets~\cite{LQ-Net}} & 1/32 & ResNet-20 & 90.1 \\ \cline{4-6}
& & & 1/2 & VGG-Small & 93.4 \\ \cline{2-6}
&\multirow{13}{*}{\tabincell{c}{Improve\\Network\\Loss\\Function}} & \multirow{2}{*}{LAB~\cite{Loss-Aware-BNN}} & 1/32 & VGG-Small & 89.5 \\ \cline{4-6}
& & & 1/1 & VGG-Small & 87.7 \\ \cline{3-6}
& & \multirow{3}{*}{\tabincell{c}{Main/Subsidiary\\Network~\cite{Subsidiary}}} & 1/1 & NIN & 83.1 \\ \cline{4-6}
& & & 1/1 & VGG-11 & 82.0 \\ \cline{4-6}
& & & 1/1 & ResNet-18 & 86.4 \\ \cline{3-6}
& & \multirow{2}{*}{BCGD~\cite{Subsidiary}} & 1/4 & VGG-11 & 89.6 \\ \cline{4-6}
& & & 1/4 & ResNet-20 & 90.1 \\ \cline{3-6}
& & \multirow{3}{*}{ProxQuant~\cite{proxquant}} & 1/32 & ResNet-20 & 90.7 \\ \cline{4-6}
& & & 1/32 & ResNet-32 & 91.5 \\ \cline{4-6}
& & & 1/32 & ResNet-44 & 92.2 \\ \cline{3-6}
& & BNN-DL~\cite{Regularize-act-distribution} & 1/1 & VGG-Small & 90.0 \\ \cline{3-6}
& & \multirow{2}{*}{{CI-BCNN~\cite{LearningChannel-Wise}}} & {1/1} & {VGG-Small} & {92.5} \\ \cline{4-6}
& & & {1/1} & {ResNet-20} & {91.1} \\ \cline{2-6}
& \multirow{7}{*}{\tabincell{c}{Reduce the\\Gradient Error}} & \multirow{3}{*}{\tabincell{c}{DSQ~\cite{Gong:iccv19}}} & \tabincell{c}{1/1} & VGG-Small & 91.7 \\ \cline{4-6}
& & & \tabincell{c}{1/32} & ResNet-20 & 90.2 \\ \cline{4-6}
& & & \tabincell{c}{1/1} & ResNet-20 & 84.1 \\ \cline{3-6}
& & \multirow{4}{*}{\tabincell{c}{{IR-Net~\cite{IRNet}}}} & \tabincell{c}{{1/32}} & {ResNet-20} & {90.2} \\ \cline{4-6}
& & & \tabincell{c}{{1/1}} & {VGG-Small} & {90.4} \\ \cline{4-6}
& & & \tabincell{c}{{1/1}} & {ResNet-18} & {91.5} \\ \cline{4-6}
& & & \tabincell{c}{{1/1}} & {ResNet-20} & {86.5} \\ \hline
\end{tabular}}
\end{table}
\begin{table}[]
\setlength{\abovecaptionskip}{0.cm}
\caption{{Image Classification Performance of Binary Neural Networks on ImageNet Dataset}}
\label{BNN-On-ImageNet}
\centering
\scriptsize
\setlength{\tabcolsep}{0.6mm}{
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{\textbf{Type}} & \multicolumn{1}{c|}{\textbf{Method}} & \multicolumn{1}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Bit-Width\\(W/A)\end{tabular}}} & \textbf{Topology} & \multicolumn{1}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Top-1\\(\%)\end{tabular}}} & \multicolumn{1}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Top-5\\(\%)\end{tabular}}} \\ \hline
\multicolumn{2}{|c|}{\multirow{5}{*}{\tabincell{c}{Full-Precision\\Neural Networks}}} & \multirow{5}{*}{\tabincell{c}{-}} & 32/32 & AlexNet~\cite{LQ-Net} & 57.1 & 80.2 \\ \cline{4-7}
\multicolumn{2}{|c|}{} & & 32/32 & ResNet-18~\cite{LQ-Net} & 69.6 & 89.2 \\ \cline{4-7}
\multicolumn{2}{|c|}{} & & 32/32 & ResNet-34~\cite{LQ-Net} & 73.3 & 91.3 \\ \cline{4-7}
\multicolumn{2}{|c|}{} & & 32/32 & ResNet-50~\cite{LQ-Net} & 76.0 & 93.0 \\ \cline{4-7}
\multicolumn{2}{|c|}{} & & 32/32 & VGG-Variant~\cite{LQ-Net} & 72.0 & 90.5 \\ \cline{1-7}
\multicolumn{2}{|c|}{\multirow{2}{*}{\tabincell{c}{Naive Binary\\Neural Networks}}} & \tabincell{c}{\tabincell{c}{BinaryConnect~\cite{BinaryConnect}}} & 1/32 & AlexNet & 35.4 & 61.0 \\
\cline{3-7}
\multicolumn{2}{|c|}{} & \tabincell{c}{BNN~\cite{BNN}} & 1/1 & AlexNet & 27.9 & 50.4 \\ \hline
\multirow{34}{*}{\tabincell{c}{{Optimization}\\\\{Based}\\\\{Binary}\\\\{Neural}\\\\{Networks}}}& \multirow{26}{*}{\tabincell{c}{Minimize\\the\\Quantization\\Error}} & \multirow{2}{*}{BWN~\cite{XNOR-Net}} & 1/32 & AlexNet & 56.8 & 79.4 \\ \cline{4-7}
& & & 1/32 & ResNet-18 & 60.8 & 83.0 \\ \cline{3-7}
& & XNOR-Net~\cite{XNOR-Net} & 1/1 & AlexNet & 44.2 & 69.2 \\ \cline{3-7}
& & \multirow{2}{*}{DoReFa-Net~\cite{DoReFa-Net}} & 1/1 & AlexNet & 43.6 & - \\ \cline{4-7}
& & & 1/1 & AlexNet & 49.8 & - \\ \cline{3-7}
& & \multirow{4}{*}{ABC-Net~\cite{ABC-Net}} & 1/32 & ResNet-18 & 62.8 & 84.4 \\ \cline{4-7}
& & & 2/32 & ResNet-18 & 63.7 & 85.2 \\ \cline{4-7}
& & & 1/1 & ResNet-18 & 42.7 & 67.6 \\ \cline{4-7}
& & & 1/1 & ResNet-34 & 52.4 & 76.5 \\ \cline{3-7}
& & TSQ~\cite{TSQ} & 1/1 & AlexNet & 58.0 & 80.5 \\ \cline{3-7}
& & \multirow{2}{*}{BWNH~\cite{BWN-Hashing}} & 1/32 & AlexNet & 58.5 & 80.9 \\ \cline{4-7}
& & & 1/32 & ResNet-18 & 64.3 & 85.9 \\ \cline{3-7}
& & \multirow{3}{*}{PACT~\cite{PACT}} & 1/32 & ResNet-18 & 65.8 & 86.7 \\ \cline{4-7}
& & & 1/2 & ResNet-18 & 62.9 & 84.7 \\ \cline{4-7}
& & & 1/2 & ResNet-50 & 67.8 & 87.9 \\ \cline{3-7}
& & \multirow{5}{*}{LQ-Nets~\cite{LQ-Net}} & 1/2 & ResNet-18 & 62.6 & 84.3 \\ \cline{4-7}
& & & 1/2 & ResNet-34 & 66.6 & 86.9 \\ \cline{4-7}
& & & 1/2 & ResNet-50 & 68.7 & 88.4 \\ \cline{4-7}
& & & 1/2 & AlexNet & 55.7 & 78.8 \\ \cline{4-7}
& & & 1/2 & VGG-Variant & 67.1 & 87.6 \\ \cline{3-7}
& & \multirow{3}{*}{SYQ~\cite{SYQ}} & 1/2 & AlexNet & 55.4 & 78.6 \\ \cline{4-7}
& & & 1/8 & ResNet-18 & 62.9 & 84.6 \\ \cline{4-7}
& & & 1/8 & ResNet-50 & 70.6 & 89.6 \\ \cline{3-7}
& & \multirow{3}{*}{WRPN~\cite{WRPN}} & \begin{tabular}[c]{@{}r@{}}1/1 (1$\times$)\end{tabular} & ResNet-34 & 60.5 & - \\ \cline{4-7}
& & & \begin{tabular}[c]{@{}r@{}}1/1 (2$\times$)\end{tabular} & ResNet-34 & 69.9 & - \\ \cline{4-7}
& & & \begin{tabular}[c]{@{}r@{}}1/1 (3$\times$)\end{tabular} & ResNet-34 & 72.4 & - \\ \cline{3-7}
& & \multirow{2}{*}{{XNOR-Net++~\cite{Bulat2019XNORNetIB}}} & \begin{tabular}[c]{@{}r@{}}{1/1 (1$\times$)}\end{tabular} & {ResNet-18} & {57.1} & {79.9} \\ \cline{4-7}
& & & \begin{tabular}[c]{@{}r@{}}{1/1 (1$\times$)}\end{tabular} & {AlexNet} & {46.9} & {71.0} \\ \cline{2-7}
& \multirow{9}{*}{\tabincell{c}{Improve\\Network\\Loss\\Function}} & INQ~\cite{INQ} & 2/32 & ResNet-18 & 66.0 & 87.1 \\ \cline{3-7}
& & \tabincell{c}{BNN-DL~\cite{Regularize-act-distribution}} & 1/1 & AlexNet & 41.3 & 65.8 \\ \cline{3-7}
& & \tabincell{c}{XNOR-Net-DL~\cite{Regularize-act-distribution}} & 1/1 & AlexNet & 47.8 & 71.5 \\ \cline{3-7}
& & \tabincell{c}{DoReFa-Net-DL~\cite{Regularize-act-distribution}} & 1/1 & AlexNet & 47.8 & 71.5 \\ \cline{3-7}
& & \tabincell{c}{CompactNet-DL~\cite{Regularize-act-distribution}} & 1/2 & AlexNet & 47.6 & 71.9 \\ \cline{3-7}
& & \tabincell{c}{WRPN-DL~\cite{Regularize-act-distribution}} & 1/1 & AlexNet & 53.8 & 77.0 \\ \cline{3-7}
& &\multirow{2}{*}{\tabincell{c}{Main/Subsidiary\\Network~\cite{Subsidiary}}}& \multirow{2}{*}{1/1} & \multirow{2}{*}{ResNet-18} & \multirow{2}{*}{50.1} & \multirow{2}{*}{-} \\
& & & & & & \\ \cline{1-7}
\end{tabular}}
\end{table}
\begin{table}[]\ContinuedFloat
\vspace{-2cm}
\setlength{\abovecaptionskip}{0.cm}
\caption{(\emph{Cont.}) {Image Classification Performance of Binary Neural Networks on ImageNet Dataset}}
\centering
\scriptsize
\setlength{\tabcolsep}{2mm}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{\textbf{Type}} & \multicolumn{1}{c|}{\textbf{Method}} & \multicolumn{1}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Bit-Width\\(W/A)\end{tabular}}} & \textbf{Topology} & \multicolumn{1}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Top-1\\(\%)\end{tabular}}} & \multicolumn{1}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Top-5\\(\%)\end{tabular}}} \\ \hline
\multirow{18}{*}{\tabincell{c}{{Optimization}\\\\{Based}\\\\{Binary}\\\\{Neural}\\\\{Networks}}}& & \multirow{2}{*}{\tabincell{c}{{{CI-BCNN~\cite{LearningChannel-Wise}}}}} & {1/1} & {ResNet-18} & {59.9} & {84.2} \\ \cline{4-7}
& & & {1/1} & {ResNet-34} & {54.9} & {86.6} \\ \cline{2-7}
&\multirow{16}{*}{\tabincell{c}{ Reduce\\the\\Gradient\\Error}} &\multirow{2}{*}{Bi-Real~\cite{Bi-Real}} & 1/1 & ResNet-18 & 56.4 & 79.5 \\ \cline{4-7}
& & & 1/1 & ResNet-34 & 62.2 & 83.9 \\ \cline{3-7}
& & HWGQ~\cite{HWGQ} & 1/1 & AlexNet & 52.7 & 76.3 \\ \cline{3-7}
& & CBCN~\cite{CirculantBNN} & 1/1 & ResNet-18 & 61.4 & 82.8 \\ \cline{3-7}
& &\multirow{2}{*}{\tabincell{c}{{Quantization}}} & 1/32 & AlexNet & 58.8 & 81.7 \\ \cline{4-7}
& &\multirow{2}{*}{Networks~\cite{quantization_networks}} & 1/32 & ResNet-18 & 66.5 & 87.3 \\ \cline{4-7}
& & & 1/32 & ResNet-50 & 72.8 & 91.3 \\ \cline{3-7}
& & \multirow{2}{*}{BCGD~\cite{BCGD}} & 1/4 & ResNet-18 & 65.5 & 86.4 \\ \cline{4-7}
& & & 1/4 & ResNet-34 & 68.4 & 88.3 \\ \cline{3-7}
& & DSQ~\cite{Gong:iccv19} & 1/32 & ResNet-18 & 63.7 & - \\ \cline{3-7}
& & \multirow{4}{*}{{IR-Net~\cite{IRNet}}} & {1/32} & {ResNet-18} & {62.9} & {84.1} \\ \cline{4-7}
& & & {1/32} & {ResNet-34} & {70.4} & {89.5} \\ \cline{4-7}
& & & {1/1} & {ResNet-18} & {58.1} & {80.0} \\ \cline{4-7}
& & & {1/1} & {ResNet-34} & {66.5} & {86.8} \\ \cline{3-7}
& & \multirow{2}{*}{{IT-BNN~\cite{ImprovedTraining}}} & {1/1} & {ResNet-18} & {53.7} & {76.8} \\ \cline{4-7}
& & & {1/1} & {AlexNet} & {48.6} & {72.8} \\ \cline{4-7}
\hline
\end{tabular}}
\end{table}
Most binary neural networks adopt the inference accuracy of image classification as the evaluation metric, as classical classification models do. Tables \ref{BNN-On-CIFAR-10} and \ref{BNN-On-ImageNet} report the performance of typical binary neural network methods on CIFAR-10 and ImageNet respectively, comparing inference accuracy across different bit-widths and network structures.
Comparing performance across datasets, we first observe that binary neural networks can approach the performance of full-precision networks on small datasets (\emph{e.g.,} MNIST, CIFAR-10), but still suffer a severe performance drop on large datasets (\emph{e.g.,} ImageNet). This is mainly because, on a large dataset, the binarized network lacks sufficient capacity to capture the large variations among data. Great effort is therefore still needed to find the binarization and optimization solutions that yield a satisfactory binary neural network.
From Tables \ref{BNN-On-CIFAR-10} and \ref{BNN-On-ImageNet}, it can be concluded that neural networks are more sensitive to the binarization of activations. Quantizing only the weights to 1-bit while keeping full-precision activations causes a smaller performance degradation: taking ResNet-18 in ABC-Net~\cite{ABC-Net} on ImageNet as an example, binarizing the weights costs only about 7\% accuracy, whereas additionally binarizing the activations costs roughly another 20\%. Mitigating the influence of activation binarization is therefore usually the more important concern when designing a binary network, and it is the main motivation of studies such as~\cite{Regularize-act-distribution} and~\cite{PACT}. With reasonable regularization of the activation distribution, the harmful effect of binarizing activations is reduced, and accuracy naturally improves.
Moreover, the robustness of binary neural networks is highly dependent on their structure. Certain structural patterns are binarization-friendly, such as the skip connections proposed in~\cite{Bi-Real} and the wider blocks proposed in~\cite{WRPN}. With a shortcut that passes full-precision values directly to the following layers, Bi-Real~\cite{Bi-Real} approaches the performance of full-precision models; with a $3\times$ wider structure, the accuracy loss of ResNet-34 in~\cite{WRPN} is below 1\%. What these designs essentially do is let as much information as possible flow through the network. Although such structural modifications may increase the amount of computation, the networks still achieve significant acceleration thanks to the XNOR-Bitcount operation.
Different optimization-based methods reflect different understandings of BNNs. Among the papers aiming to minimize the quantization error, many directly reduce this error so that the binary network approximates its full-precision counterpart, on the belief that the closer the binary parameters are to the full-precision ones, the better the BNN performs. Another line of work improves the loss function, shaping the parameter distribution to be friendly to the binarization operation. Moreover, the STE proposed in BinaryConnect is coarse and causes problems such as gradient mismatch, so many recent works instead use a smooth transition such as a $\mathtt{tanh}$ function to reduce the gradient error, which has become common practice.
We believe binary neural networks should not be regarded simply as approximations of full-precision networks; designs tailored to the special characteristics of BNNs are necessary. Some recent works, such as XNOR-Net++~\cite{Bulat2019XNORNetIB}, CBCN~\cite{CirculantBNN}, Self-Binarizing Networks~\cite{selfBN} and BENN~\cite{BENN}, essentially take this direction, and their results show that methods specifically designed around the characteristics of BNNs achieve better performance. This supports the view that BNNs require different optimization from full-precision models even when they share the same network architecture.
It is also worth mentioning that accuracy is not the only criterion for BNNs; versatility is another key measure of whether a method can be used in practice. Some proposed techniques, such as the scale factors of XNOR-Net~\cite{XNOR-Net}, smooth transitions~\cite{BNN+} and additional shortcuts~\cite{Bi-Real}, are highly versatile because they are simple to implement and loosely coupled, and they have become common practice for improving BNN performance. Other methods improve performance by designing or learning delicate quantizers with a stronger ability to preserve information; however, some of these suffer from complicated computation and even multi-stage training pipelines, which can be unfriendly to hardware implementation and reproducibility, making it hard to obtain an effective speed-up in real-world deployment. Purely pursuing high accuracy without considering the acceleration implementation therefore makes little sense in practice; the balance between accuracy and speed is an essential criterion that binarization research should always keep in mind.
\subsection{Other Tasks}
It is worth noting that most current binary neural networks, which focus on image classification, cannot be directly generalized to other tasks; specific binary neural networks still need to be designed to obtain the desired performance on each task. Beyond image classification, a few studies have designed and evaluated binary neural network models for other tasks, such as object detection and semantic segmentation. For object detection, Table \ref{BNN-On-COCO} and Table \ref{BNN-On-PASCAL2007} list the performance of different binary neural networks on the COCO 2017 and PASCAL VOC 2007 datasets, respectively. For semantic segmentation, Table \ref{BNN-On-PASCAL2012} compares different binary neural networks on the PASCAL VOC 2012 dataset. The experiments cover different bit-widths and network structures.
\begin{table}[t]
\vspace{-2cm}
\setlength{\abovecaptionskip}{0.cm}
\centering
\caption{{Object Detection Performance of Binary Neural Networks on COCO 2017 Dataset}}
\label{BNN-On-COCO}
\scriptsize
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{\textbf{Topology}} & \multicolumn{1}{c|}{\textbf{Method}} & \multicolumn{1}{c|}{\textbf{Bit-Width (W/A)}} & \multicolumn{1}{c|}{\textbf{mAP (\%)}} \\ \hline
\multirow{2}{*}{\tabincell{c}{{Faster-RCNN~\cite{Faster-RCNN}}\\{ResNet-18}}} & \multirow{2}{*}{{FQN~\cite{Li_2019_CVPR}}} & \multirow{2}{*}{{1/1}} & \multirow{2}{*}{{28.1}} \\
&&& \\ \cline{1-4}
\multirow{2}{*}{\tabincell{c}{{Faster-RCNN~\cite{Faster-RCNN}}\\{ResNet-34}}} & \multirow{2}{*}{{FQN~\cite{Li_2019_CVPR}}} & \multirow{2}{*}{{1/1}} & \multirow{2}{*}{{31.8}} \\
&&& \\ \cline{1-4}
\multirow{2}{*}{\tabincell{c}{{Faster-RCNN~\cite{Faster-RCNN}}\\{ResNet-50}}} & \multirow{2}{*}{{FQN~\cite{Li_2019_CVPR}}} & \multirow{2}{*}{{1/1}} & \multirow{2}{*}{{33.1}} \\
&&& \\ \cline{1-4}
\multirow{7}{*}{\tabincell{c}{{RetinaNet~\cite{Lin2017Focal}}\\{ResNet-18}}} & Quant whitepaper~\cite{DBLP:journals/corr/abs-1806-08342} & 8/8 & 22.6 \\ \cline{2-4}
& Integer-only~\cite{DBLP:journals/corr/abs-1712-05877} & 8/8 & 19.7 \\ \cline{2-4}
& DoReFa-Net~\cite{DoReFa-Net} & 1/1 & 3.9 \\ \cline{2-4}
& {XNOR-Net~\cite{XNOR-Net}} & {4/4} & {24.4} \\ \cline{2-4}
& {XNOR-Net (Percentile)~\cite{Li_2019_CVPR}} & {4/4} & {26.7} \\ \cline{2-4}
& FQN~\cite{Li_2019_CVPR} & 4/4 & 28.6 \\ \hline
\end{tabular}
\vspace{-0.1cm}
\end{table}
\begin{table}[t]
\setlength{\abovecaptionskip}{0.cm}
\centering
\caption{{Object Detection Performance of Binary Neural Networks on PASCAL VOC 2007 Dataset}}
\label{BNN-On-PASCAL2007}
\scriptsize
\setlength{\tabcolsep}{4mm}{
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{\textbf{Topology}} & \multicolumn{1}{c|}{\textbf{Method}} & \multicolumn{1}{c|}{\textbf{\tabincell{c}{Bit-Width (W/A)}}} & \multicolumn{1}{c|}{\textbf{\tabincell{c}{mAP (\%)}}} \\ \hline
\multirow{3}{*}{\tabincell{c}{{Faster-RCNN~\cite{Faster-RCNN}}\\{VGG}}} & Full-Precision & 32/32 & 68.9 \\ \cline{2-4}
& BWN~\cite{BWBDN} & 1/32 & 62.5 \\ \cline{2-4}
& BNN~\cite{BWBDN} & 1/1 & 47.3 \\ \hline
\multirow{3}{*}{\tabincell{c}{{Faster-RCNN~\cite{Faster-RCNN}}\\{AlexNet}}} & Full-Precision & 32/32 & 66.0 \\ \cline{2-4}
& BWN~\cite{BWBDN} & 1/32 & 62.1 \\ \cline{2-4}
& BNN~\cite{BWBDN} & 1/1 & 46.4 \\ \hline
\multirow{2}{*}{\tabincell{c}{{SSD~\cite{Liu2015SSD}}\\{DarkNet}}} & {Full-Precision} & {32/32} & {62.1} \\ \cline{2-4}
& {ELBNN~\cite{Leng2017ExtremelyLB}} & {2/2} & {62.4} \\ \hline
\multirow{2}{*}{\tabincell{c}{{SSD~\cite{Liu2015SSD}}\\{VGG-16}}} & {Full-Precision} & {32/32} & {62.1} \\ \cline{2-4}
& {ELBNN~\cite{Leng2017ExtremelyLB}} & {2/2} & {46.4} \\ \hline
\end{tabular}}
\end{table}
From Tables \ref{BNN-On-COCO} and \ref{BNN-On-PASCAL2007} we can see that existing binarization algorithms have achieved encouraging progress on the object detection task, while bringing significant acceleration when deployed in real-world systems. It should also be noted, however, that binary models still face a great challenge, especially when the activations are quantized to 1-bit. For the semantic segmentation task, as shown in Table \ref{BNN-On-PASCAL2012}, the recent method of~\cite{Zhuang_2019_CVPR} achieved accuracy using only 1-bit that is almost the same as the full-precision model; but why it works so well is not yet fully understood, and the actual speed-up of the method in deployment still needs to be verified.
{From these results, we find that although binary neural networks perform well on the classification task, they still suffer unacceptable losses on other tasks. This makes binary neural networks designed for classification hard to apply directly to tasks such as object detection and semantic segmentation. In classification, the network attends mainly to global features and can tolerate the loss of local features caused by binarization; in other tasks, however, local features matter more. When designing binary neural networks for such tasks, the local features of the feature map therefore deserve more attention.}
\begin{table}[t]
\vspace{-2cm}
\setlength{\abovecaptionskip}{-0.cm}
\caption{{Semantic Segmentation Performance of Binary Neural Networks on PASCAL VOC 2012 Dataset}}
\label{BNN-On-PASCAL2012}
\centering
\scriptsize
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Topology} & \multicolumn{1}{c|}{\textbf{Method}} & \textbf{\tabincell{c}{Bit-Width (W/A)}} & \textbf{\tabincell{c}{mIoU (\%)}} \\ \hline
\multirow{5}{*}{\tabincell{c}{{ResNet-18}\\{FCN-32s~\cite{Zhuang_2019_CVPR}}}} & Full-precision~\cite{Zhuang_2019_CVPR} & 32/32 & 64.9 \\ \cline{2-4}
& LQ-Nets~\cite{LQ-Net} & 3/3 & 62.5 \\ \cline{2-4}
& Group-Net~\cite{Zhuang_2019_CVPR} & 1/1 & 60.5 \\ \cline{2-4}
& Group-Net + BPAC~\cite{Zhuang_2019_CVPR} & 1/1 & 63.8 \\ \cline{2-4}
& Group-Net**+BPAC~\cite{Zhuang_2019_CVPR} & 1/1 & 65.1 \\ \hline
\multirow{5}{*}{\tabincell{c}{{ResNet-18}\\{FCN-16s}}} & Full-precision~\cite{Zhuang_2019_CVPR} & 32/32 & 67.3 \\ \cline{2-4}
& LQ-Nets~\cite{LQ-Net} & 3/3 & 65.1 \\ \cline{2-4}
& Group-Net~\cite{Zhuang_2019_CVPR} & 1/1 & 62.7 \\ \cline{2-4}
& Group-Net+BPAC~\cite{Zhuang_2019_CVPR} & 1/1 & 66.3 \\ \cline{2-4}
& Group-Net**+BPAC~\cite{Zhuang_2019_CVPR} & 1/1 & 67.7 \\ \hline
\multirow{5}{*}{\tabincell{c}{{ResNet-34}\\{FCN-32s}}} & Full-precision~\cite{Zhuang_2019_CVPR} & 32/32 & 72.7 \\ \cline{2-4}
& LQ-Nets~\cite{LQ-Net} & 3/3 & 70.4 \\ \cline{2-4}
& Group-Net~\cite{Zhuang_2019_CVPR} & 1/1 & 68.2 \\ \cline{2-4}
& Group-Net+BPAC~\cite{Zhuang_2019_CVPR} & 1/1 & 71.2 \\ \cline{2-4}
& Group-Net**+BPAC~\cite{Zhuang_2019_CVPR} & 1/1 & 72.8 \\ \hline
\multirow{5}{*}{\tabincell{c}{{ResNet-50}\\{FCN-32s}}} & Full-precision~\cite{Zhuang_2019_CVPR} & 32/32 & 73.1 \\ \cline{2-4}
& LQ-Nets~\cite{LQ-Net} & 3/3 & 70.7 \\ \cline{2-4}
& Group-Net~\cite{Zhuang_2019_CVPR} & 1/1 & 67.2 \\ \cline{2-4}
& Group-Net+BPAC~\cite{Zhuang_2019_CVPR} & 1/1 & 70.4 \\ \cline{2-4}
& Group-Net**+BPAC~\cite{Zhuang_2019_CVPR} & 1/1 & 71.0 \\ \hline
\end{tabular}
\end{table}
\section{Future Trend and Conclusions}
\label{section5}
Binary neural networks based on 1-bit representation enjoy compressed storage and fast inference, but meanwhile suffer from performance degradation. To bridge the gap between binary and full-precision models, as summarized in this survey, various solutions have been proposed in recent years, which can be roughly categorized into naive and optimization-based binarization. Our analysis shows that optimizing the binary network with different techniques promises better performance. {These techniques, derived from different motivations, mainly focus on how to preserve information in the forward propagation and how to optimize the network in the backward propagation. This indicates that retaining information in both forward and backward propagation is one of the key factors in training high-performance BNNs.}
Although much progress has been made, existing techniques for binary neural networks still face performance loss, especially on large networks and datasets. The main reasons include: (1) it is still unclear what kind of network structure is suitable for binarization, such that the information passing through the network can be preserved even after binarization; (2) optimizing a binary network in a discrete space is difficult, even when a gradient estimator or approximate function for binarization is available.
We believe more practical and theoretical studies will emerge to answer these two questions in the future.
Besides, as mobile devices become widely used in the real world, more research effort will be devoted to applications in different tasks and deployment on different hardware. {For example, \cite{Wu2020Rotation} proposed a novel rotation-consistent loss that exploits the open-set characteristics of face recognition and achieved performance competitive with the full-precision model using 4-bit quantization.} Interesting topics will therefore arise, such as customizing or transferring binary networks for different tasks, designing hardware-friendly or energy-efficient binarization algorithms, \emph{etc}.
{In addition to weights and activations, quantizing the backward propagation, including gradients, to accelerate the whole training process has recently arisen as a new topic. The unified framework proposed in \cite{zhu2019unified} demonstrates the possibility of 8-bit training of neural networks in terms of both accuracy and speed. It is worthwhile to further explore the feasibility of binarized backward computation for faster training.}
{Last but not least, research on explainable machine learning indicates that there are critical paths in the predictions of neural networks and that different network structures follow different patterns. It is therefore also meaningful to design mixed-precision strategies according to the importance of each layer, and to devise new architectures that are friendly to the information flow of binary neural networks.}
\section*{Acknowledgment}
This work was supported by National Natural Science Foundation of China (61872021, 61772057 and 61690202), Beijing Nova Program of Science and Technology (Z191100001119050), BSFC No. 4202039, and the support funding from State Key Lab. of Software Development Environment and Jiangxi Research Institute of Beihang University.
\clearpage
\tiny
\section{Introduction}
The possibility of achieving greater secrecy by introducing
additional randomness into the plaintext of a cipher before
encryption was known, according to \cite{massey88}, already to
Gauss, in the form of the so-called `homophonic substitution'.
Such a procedure is an example of a \emph{random cipher}
\cite{massey88,yuen05qph}. The advantage of a random cipher not
present in standard nonrandom ciphers is that it can provide
information-theoretic security of the key against statistical
attacks, and possibly known-plaintext attacks (See Appendix A and
also \cite{yuen05qph}). A somewhat detailed description of these
possibilities is one of the goals of this paper. In spite of the
potential advantages of random ciphers, a large obstacle in their
deployment is the bandwidth expansion, or more accurately data
rate reduction, that is needed to operate all previous random
ciphers. Also, it is not currently possible to generate true
random numbers at speeds high enough for random ciphers to operate
at sufficiently high data rates ($\sim$ Mbps is the current upper
limit for random number generation). The quantum noise in optical
coherent-state signals may be utilized for this purpose, and
quantum optical effects seem to be the only technologically
feasible way to generate $>$ Gbps true random numbers. A
particular quantum noise-based random cipher, called $\alpha\eta$,
that also does not entail data rate reduction, has already been
proposed and implemented \cite{barbosa03,yuen03} at Northwestern
University. In a previous preprint \cite{yuen05qph}, $\alpha\eta$
was discussed concomitantly with that of the closely related key
generation system called $\alpha\eta$-KG. Since the features of
$\alpha\eta$ direct encryption are subtle and complex enough, we
take the approach in this paper of discussing just the
$\alpha\eta$ encryption system in its own right, and analyze
quantitatively its random cipher feature. Doing so will hopefully
also avert many possible confusions with $\alpha\eta$-Key
Generation, such as those in \cite{nishioka04,nishioka05}. In
particular, we will set up in detail the proper framework to
understand and analyze the security issues involved. Note that the
present paper can be understood independently of ref.
\cite{yuen05qph}, the relevant terminology and results from which
are summarized in Section 2.1 and Appendix A of this paper.
Following our discussion of random ciphers in general and the
$\alpha\eta$ cryptosystem, we show that $\alpha\eta$ security is
equivalent to that of a corresponding classical random cipher. We
show how quantum noise allows some degree of randomization in
$\alpha\eta$ without sacrificing data rate, and quantify the
randomization by two different parameters corresponding to
ciphertext-only and known-plaintext attacks. We also show how
$\alpha\eta$ can be operated on top of a standard cipher like AES
to provide additional, qualitatively different, security based on
quantum noise against known-plaintext attacks on the key. However,
information-theoretically, ciphertext-only attack on the key is
possible with the original $\alpha\eta$. We will indicate what
additional techniques can alleviate this problem, without going
into any detailed analysis to be presented at a later time.
Generally, only search-complexity based security will be
quantitatively described in this paper. Finally, we rebut the
claims in \cite{nishioka04,nishioka05} that $\alpha\eta$ security
is equivalent to that of a standard stream cipher and that
$\alpha\eta$ is nonrandom.
The plan of this paper is as follows: In Section 2, we provide the
necessary review of standard cryptography. In addition, we define
the random cipher concept quantitatively and point out the
available results on random cipher security. This sets the stage
for our definitions in Section 3 that characterize a \emph{quantum
cipher} and a \emph{quantum random cipher}, which are both ciphers
in which the ciphertext is in the form of a quantum state. In
Section 4, we describe the $\alpha\eta$ system in detail, show its
quantum random cipher characteristics, and highlight its
advantages. In Section 5, we respond to the criticisms of
$\alpha\eta$ made by Nishioka et al \cite{nishioka04,nishioka05} in a
further elaboration of the quantitative random cipher character of $\alpha\eta$.
\section{Standard Cryptography and Random Ciphers}
\subsection{Standard Symmetric-Key Cryptography}
We review the basics of symmetric-key data encryption. Further
details can be found in, e.g., \cite{massey88,stinson}. Throughout
the paper, random variables will be denoted by \emph{upper-case}
letters such as $K,X_1$ etc. It is sometimes necessary to
consider explicitly sequences of random variables $(X_1,X_2,
\ldots, X_n)$. We will denote such \emph{vector} random variables
by a \emph{boldface} upper-case letter $\mathbf{X}_n$ and,
whenever necessary, indicate the length of the vector ($n$ in this
case) as a subscript. Confusion with the $n$-th component $X_n$ of
$\mathbf{X}_n$ should not arise as the latter is a boldface
vector. Particular values taken by these random variables will be
denoted by the corresponding \emph{lower-case} letters. Thus, particular
values taken by the key random variable $K$ are denoted by $k, k'$
etc. Similarly, a particular value of $\mathbf{X}_n$ can be
denoted $\mathbf{x}_n$. The plaintext alphabet will be denoted
$\mathcal{X}$, the set of possible key values $\mathcal{K}$ and
the ciphertext alphabet $\mathcal{Y}$. Thus, for example, the
sequences $\mathbf{x}_n \in \mathcal{X}^n$. In most nonrandom
ciphers, $\mathcal{X}$ is simply the set $\{0,1\}$ and
$\mathcal{Y}=\mathcal{X}$.
With the above notations, the $n$-symbol long \emph{plaintext}
(i.e., the message sequence that needs to be encrypted) is denoted
by the random vector $\mathbf{X}_n$, the \emph{ciphertext} (i.e.,
the output of the encryption mechanism) is denoted by
$\mathbf{Y}_n$ and the secret key used for encryption is denoted
by $K$. In this paper, we will often call the legitimate sender of
the message `Alice', the legitimate receiver `Bob', and the
attacker (or eavesdropper) `Eve'. Note that although the secret
key is typically a sequence of bits, we do not use vector notation
for it since the bits constituting the key will not need to be
singled out separately in our considerations in this paper. In
standard cryptography, one usually deals with \textit{nonrandom
ciphers}. These are ciphers for which the ciphertext is a function
of only the plaintext and key. In other words, there is an
encryption function $E_{k}(\cdot)$ such that:
\begin{equation} \label{encryption}
\mathbf{y}_n=E_{k}(\mathbf{x}_n). \end{equation} There is a
corresponding decryption function $D_{k}(\cdot)$ such that:
\begin{equation} \label{decryption}
\mathbf{x}_n=D_{k}(\mathbf{y}_n). \end{equation} In such a case,
the $X_i$ and $Y_i, i=1,\ldots, n$ are usually taken to be
from the same alphabet.
In contrast, a \emph{random cipher} makes use of an additional
random variable $R$ called the \emph{private randomizer}
\cite{massey88}, generated by Alice while encrypting the
plaintext and known only to her, if at all. Thus the ciphertext is
determined as follows:
\begin{equation} \label{randomcipher}
\mathbf{y}_n=E_{k}(\mathbf{x}_n, r).
\end{equation}
Because of the additional randomness in the ciphertext, it
typically happens that the ciphertext alphabet $\mathcal{Y}$ needs
to be larger than the plaintext alphabet $\mathcal{X}$ (or else,
$\mathbf{Y}$ is a longer sequence than $\mathbf{X}$, as in
homophonic substitution). It may even be a continuous infinite
alphabet, e.g. an analog voltage value. However, we still require,
as in \cite{massey88}, that Bob be able to decrypt with just the
ciphertext and key (i.e., without knowing $R$), so that there
exists a function $D_{k}(\cdot)$ such that Eq.(\ref{decryption})
holds. We note that random ciphers are called `privately
randomized ciphers' in Ref. \cite{massey88} -- we will however use
the shorter term `random cipher' (Note that `random cipher' is
used in a completely different sense by Shannon \cite{shannon49}).
We note that the presence or absence of the private randomizer
$R$ may be indicated using the conditional Shannon entropy (We
assume a basic familiarity with Shannon entropy and conditional
entropy. See any information theory textbook, e.g.,
\cite{cover91}.). For nonrandom ciphers, we have from
Eq.(\ref{encryption}) that
\begin{equation} \label{nonrandom}
H(\mathbf{Y}_n|K \mathbf{X}_n)=0. \end{equation} On the other
hand, a \textit{random cipher} satisfies
\begin{equation} \label{random}
H(\mathbf{Y}_n|K\mathbf{X}_n) \neq 0, \end{equation} due to the
randomness supplied by the private randomizer $R$. The decryption
condition Eqs.(\ref{decryption}) for both random and nonrandom
ciphers has the entropic characterization:
\begin{equation} \label{decrypt} H(\mathbf{X}_n|K\mathbf{Y}_n)=0.
\end{equation}
Note that this characterization of a random cipher is problematic
when the ciphertext alphabet is continuous, as could be the case
with $\alpha\eta$, because then the Shannon entropy is not
defined. It may be argued that the finite precision of measurement
forces the ciphertext alphabet to be discrete. Indeed, in
Sec.~2.2, we define a parameter $\Lambda$ that characterizes the
``degree of randomness'' of a random cipher. In any case, the
definition makes sense, similar to Eq.~(\ref{random}), only when
the ciphertext alphabet is finite, or at most discrete.
In the cryptography literature, the characterization of a general
random cipher is limited to that given by Eqs.
(\ref{randomcipher}) and (\ref{random}). See, e.g.,
\cite{massey88}. In the next section, we will see that the
purposes of cryptographic security suggest a sharper quantitative
definition of a random cipher involving a pertinent security
parameter $\Gamma$. This new definition, unlike (\ref{random}),
will be meaningful irrespective of whether the ciphertext alphabet
is discrete or continuous. Before we discuss the above new
definition of random ciphers, we conclude this section with some
important cryptographic terminology.
By \textit{standard cryptography}, we shall mean that Eve and Bob
both observe the same ciphertext random variable, i.e.,
$\mathbf{Y}^{\rm E}_{n}=\mathbf{Y}_{n}^{\rm B}=\mathbf{Y}_{n}$.
Thus, standard cryptography includes usual mathematical
private-key (and also public-key) cryptography but excludes
quantum cryptography and classical-noise cryptography
\cite{maurer93}. For a standard cipher, random or nonrandom, one
can readily prove from the above definitions the following result
known as the \emph{Shannon limit} \cite{massey88,shannon49}:
\begin{equation} \label{shannonlimit} H(\mathbf{X}_n|\mathbf{Y}_n) \leq H(K).
\end{equation}
This result may be thought of as saying that no matter how long
the plaintext sequence is, the attacker's uncertainty on it
\emph{given the ciphertext} cannot be greater than that of the
key. This condition is of crucial importance in both direct
encryption and key generation, as brought out in refs.
\cite{yuen03,yuen05qph,yuen05pla,pra05,yuan05}, but was missed in
previous criticisms of $\alpha\eta$
\cite{nishioka04,nishioka05,loko05}.
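The Shannon limit follows in a few lines from the decryption condition Eq.~(\ref{decrypt}); a standard derivation runs:

```latex
\begin{align*}
H(\mathbf{X}_n|\mathbf{Y}_n)
  &\leq H(\mathbf{X}_n K|\mathbf{Y}_n)
   = H(K|\mathbf{Y}_n) + H(\mathbf{X}_n|K\mathbf{Y}_n)\\
  &= H(K|\mathbf{Y}_n) \leq H(K),
\end{align*}
```

where the second equality uses $H(\mathbf{X}_n|K\mathbf{Y}_n)=0$ from Eq.~(\ref{decrypt}), and the last inequality holds because conditioning cannot increase entropy.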
By \textit{information-theoretic security} (or \emph{IT security})
on the data, we mean that Eve cannot, even with unlimited
computational power, pin down uniquely the plaintext from the
ciphertext, i.e.,
\begin{equation}\label{ITsecurity}
H(\mathbf{X}_n|\mathbf{Y}_n)\neq 0.
\end{equation}
The level of such security may be quantified by
$H(\mathbf{X}_n|\mathbf{Y}_n)$. Shannon has defined
\textit{perfect security} \cite{shannon49} to mean that the
plaintext is statistically independent of the ciphertext, i.e.,
\begin{equation} H(\mathbf{X}_n|\mathbf{Y}_n)=H(\mathbf{X}_n).
\end{equation}
With the advent of quantum cryptography, the term `unconditional
security' has come to be used, unfortunately in many possible
senses. By \emph{unconditional security}, we shall mean
near-perfect information-theoretic security against all attacks
consistent with the known laws of quantum physics.
Incidentally, note that the Shannon limit Eq.~(\ref{shannonlimit})
immediately shows that perfect security can be attained only if
$H(\mathbf{X}_n)\leq H(K)$, so that, in general, the key needs to
be as long as the plaintext.
\subsection{Random Ciphers -- Quantitative Definition}
As mentioned in the previous section, the characterization of a
general random cipher merely using Eq.~(\ref{randomcipher}) or
(\ref{random}) is perhaps not well-motivated. The reason for
studying random ciphers is in fact the belief that they enhance
the security of the cipher against various attacks. By bringing
into focus the intuitive mechanism by which a random cipher may
provide greater security than a nonrandom counterpart against
known-plaintext attacks, we will propose one possible quantitative
characterization of a general random cipher (or more exactly, a
general random \emph{stream} cipher. See below.). For a
description of known-plaintext and other attacks on ciphers,
together with the known results on their security, we refer the
reader to Appendix A.
We now discuss the intuitive mechanism of security enhancement in
a random cipher. To this end, a schematic depiction of encryption
and decryption with a random cipher is given in Fig.~1. For a
binary alphabet $\mathcal{X}=\{0,1\}$, let
$\mathcal{X}^n=\{a_1,\ldots, a_N\}$ be the set of $N=2^n$ possible
plaintext $n$-sequences. Let $k$ be a particular key value. One
can view the key $k$ as dividing the ciphertext space
$\mathcal{Y}^n$ into $N$ parts, denoted by the
$\mathcal{A}_{a_j}^k, j \in \{1, \ldots, N\},$ in the figure.
Encryption of plaintext $a_j$ proceeds by first determining the
relevant region $\mathcal{A}_{a_j}^k$ and randomly selecting (this
is the function of the private randomizer) as ciphertext some
$y\in\mathcal{A}_{a_j}^k$. The decryption condition
Eq.(\ref{decryption}) is satisfied by virtue of the regions
$\mathcal{A}_{a_j}^k$ being disjoint for a given $k$. Also shown
in Fig. 1 is the situation where a different key value $k'$ is
used in the system. The associated partition of $\mathcal{Y}^n$
consists of the sets $\mathcal{A'}_{a_j}^{k}$ that are shown with
shaded boundaries in Fig. 1. The \emph{important point} here is
that the respective partitions of the ciphertext space for the key
values $k$ and $k'$ should be sufficiently `intermixed'. More
precisely, for any given plaintext $a_j$, and any observed
ciphertext $\mathbf{y}_n$, we require that there exist
sufficiently many key values $k$ (and hence a sufficiently large
probability of the set of possible keys corresponding to a given
plaintext and observed ciphertext) for which $\mathbf{y}_n \in
\mathcal{A}_{a_j}^k$. In other words, a given plaintext-ciphertext
pair can be connected by many possible keys. This is the intuitive
basis why random ciphers offer better quantitative security (as
measured either by Eve's information on the key or her complexity
in finding it; see Sec. 4.2-4.4 for a discussion of $\alpha\eta$
security) than nonrandom ciphers against known-plaintext attacks.
\begin{figure} [htbp]
\begin{center}
\rotatebox{-90} {
\includegraphics[scale=0.5]{randomeps.eps}}
\caption{Schematic of a random cipher: The plaintexts $a_i$ are
carried, under the key $k$, into the corresponding regions
$A_{a_j}^k$ of ciphertext space $Y^n$. The subsets of $Y^n$
associated with a different key value $k'$ are shown with curved
boundaries.}
\end{center}
\end{figure}
While the above arguments hold for any type of random cipher
whatsoever, we will restrict our scope to the so-called
\emph{stream ciphers}. Most ciphers in current use (which are all
nonrandom), such as AES, are stream ciphers \cite{stinson}. In a
nonrandom stream cipher, the key $K$ is first expanded using a
deterministic function into a much longer sequence $(Z_1,\ldots,
Z_n)$ called the \emph{keystream} or \emph{running key}. The
defining property of a \emph{stream cipher} is that the $i$-th
ciphertext symbol $y_i$ be a function of just the $i$-th keystream
symbol $z_i$ and the earlier and current plaintext symbols
$x_1,\ldots,x_i$:
\begin{equation} \label{streamcipher}
y_i=E^i(x_1, \ldots, x_i;z_i). \end{equation} It follows that
decryption of the first $i$ symbols of plaintext is possible from
the first $i$ symbols of ciphertext and the running key. A
\emph{synchronous} stream cipher is one for which
\begin{equation} \label{syncstreamcipher} y_i=E^i(x_i;z_i).
\end{equation} Thus, the $i$-th ciphertext symbol depends only on
the $i$-th plaintext symbol and the $i$-th keystream symbol, i.e.,
the cipher is memoryless. For our discussion of random ciphers, we
will restrict ourselves for concreteness to the case of
\emph{random stream ciphers}, that are defined by:
\begin{equation} \label{randomstreamcipher}
y_i=E^i(x_1, \ldots, x_i; z_i; r_i). \end{equation} Here, the
$\{R_i\}$ are randomizers that may be assumed to be independent
random variables (this is the case in $\alpha\eta$), but this is
not necessary. In the rest of the paper, a \emph{random cipher}
will always mean a \emph{random stream cipher}.
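As a toy illustration (our own construction, not $\alpha\eta$ itself), consider a binary synchronous random stream cipher in which the private randomizer doubles the ciphertext alphabet without affecting decryptability:

```python
import secrets

def encrypt_symbol(x, z):
    # x: plaintext bit, z: keystream bit. The private randomizer r selects
    # one of the two ciphertext symbols in {0, 1, 2, 3} carrying the same bit.
    r = secrets.randbelow(2)      # generated by Alice, then discarded
    return 2 * r + (x ^ z)

def decrypt_symbol(y, z):
    # Bob needs only y and the keystream symbol z, never r.
    return (y & 1) ^ z
```

This cipher is random in the sense of Eq.~(\ref{random}) and has $\Lambda = 1$ (two possible ciphertexts per plaintext-keystream pair), but $\Gamma = 0$, since $z = (y \bmod 2) \oplus x$ is still uniquely determined by a plaintext-ciphertext pair; it thus exemplifies the $\Gamma = 0$ random-cipher case discussed below.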
For a nonrandom stream
cipher given by Eq.~(\ref{streamcipher}), it is usually the case
that given the plaintext vector $\mathbf{x}_i$ of length $i$ and
ciphertext symbol $y_i$, the value of the keystream $z_i$ is
uniquely determined. This is typically the case also in a random
stream cipher \emph{when the value $r$ taken by the randomizer
$R_i$ is known}. In the absence of such knowledge, however, the
different possible values taken by $R_i$ will in general allow
many different values of the keystream for the given plaintext
vector and ciphertext symbol. The more such possibilities exist,
the less information is obtained about the keystream and the more
`secure' the cipher is. Our quantitative definition of random
cipher given below introduces a parameter $\Gamma$ that provides
one way of quantifying the different knowledge of the keystream
obtained in the above two scenarios by the number of additional
possible keystreams for a given pair of input data and
corresponding ciphertext symbols.
\\ \\
\textbf{Definition} (\emph{$\Gamma$- Random Cipher})
\textbf{:}\\
A $\Gamma$-Random Cipher is a random stream cipher of the form of
Eq.~(\ref{randomstreamcipher}) for which the following condition holds: \\For every
plaintext sequence, $\mathbf{x}_i$, for every $i$, for every
ciphertext symbol $y_i$ obtainable by encryption of
$\mathbf{x}_i$, and for every value $r$ of $R_i$,
\begin{equation} \label{Gammarandomcipherdef}
|\{z_i | y_i=E^i(x_1, \ldots, x_i; z_i; r') \; \textup{ for some}
\; r'\}| - |\{z_i | y_i=E^i(x_1, \ldots, x_i; z_i; r)\}| \geq
\Gamma.
\end{equation}
\\
The bars $|\cdot |$ indicate size of the enclosed set. For a
nonrandom stream cipher, the keystream $z_i$ is uniquely fixed by
the plaintext vector $\mathbf{x}_i$ and the ciphertext symbol
$y_i$. Therefore, if the randomizer in
(\ref{Gammarandomcipherdef}) is ignored so that it applies to a
nonrandom cipher, a nonrandom cipher would have $\Gamma = 0$. Note
that the sets whose sizes appear in the above equation, both for
random ciphers and their nonrandom reductions, are constructed
only on the basis of the $i$-th ciphertext symbol $y_i$, and not
on the basis of the entire ciphertext sequence. Thus, the
definition of $\Gamma$ only gives the \emph{number of possible
keys per symbol of ciphertext} under known-plaintext attack, while
the number of possible keys based on the entire ciphertext
sequence (that is illustrated schematically by the overlap sets in
Fig.~1) may be significantly less. In this sense, our definition
has a restricted symbol by symbol scope but is easy to calculate
with, similar to the independent particle approximation in
many-body physics. It does not by itself determine the precise
security of the cipher, but rather is the starting point of
precise analysis, which is a difficult task just as correlations
in interacting many-body systems are always difficult to deal with
in a rigorous quantitative manner.
It is possible to satisfy the random cipher condition
(\ref{random}) with $\Gamma =0$. This happens, e.g., when
(\ref{Gammarandomcipherdef}) holds for some ciphertext symbols
with $\Gamma >0$ but some others with $\Gamma=0$, so the overall
condition (\ref{Gammarandomcipherdef}) is only satisfied for
$\Gamma=0$. A different measure of randomization $\Lambda$,
bearing directly on (\ref{random}), may be introduced which has
the property that $\Lambda=0$ is equivalent to a nonrandom cipher.
For the case where the ciphertext alphabet is finite and for given
$\mathbf{x}_i,z_i$ and $r$, let
\begin{equation} \label{lambdarandomcipherdef}
\Lambda=|\{ y_i | y_i = E^i(x_1, \cdots , x_i;z_i;r') \; \textup{ for some}
\; r' \}| - |\{ y_i | y_i = E^i (x_1, \cdots , x_i;z_i;r)
\}|.
\end{equation}
Thus, condition (\ref{random}) is equivalent to $\Lambda >0$ for
some $\mathbf{x}_i,z_i$ and $r$. It follows that $\Lambda=0$ for
all $(\mathbf{x}_i,z_i)$ is equivalent to the cipher being
nonrandom. $\Lambda+1$ is the number of possible output signal
symbols corresponding to a given input symbol and running key
value. Thus, the parameter $\Lambda$ measures directly the degree
of per symbol ciphertext randomization, while $\Gamma$ measures
the per symbol key redundancy. It is possible that a $\Gamma=0$
random cipher is still useful due to the additional loads on Eve
to record and store more information from her observation. On the
other hand, for the \emph{typical} case where $z_i$ is in
one-to-one correspondence with $y_i$ for given $\mathbf{x}_i$ and
$r$, $\Gamma >0$ implies $\Lambda>0$ for every $\mathbf{x}_i$ and
$z_i$, which in turn implies that a cipher with $\Gamma
> 0$ is random in the sense of (\ref{random}). A simple
application of the $\Gamma$ and $\Lambda$ characterizations to
$\alpha\eta$ leads to information-theoretic lower bounds on the
unicity distances $n_0$ and $n_1$ for CTA and KPA, as discussed in
Sec. 4.3. The following simple example also serves to illustrate
the above definitions:
\newline\newline
\textbf{Example} (Random cipher) \newline Let
$\mathcal{X}=\{0,1\}$, $\mathcal{K}=\{k_0,k_1,k_2,k_3,k_4\}$ and
$\mathcal{Y}=\{a,b,c,d,e\}$. Fig.~2 lists the possible ciphertexts
for each plaintext and key pair.
\begin{figure*}[htbp]
\begin{tabular} {|c|c|c|} \hline $x$ & $k$ & $y$\\
\hline \hline$0$ & $k_0$ & $a,b$\\
\hline $1$&$k_0$ & $c,d,e$\\
\hline$0$ & $k_1$ & $c,d$\\
\hline$1$ & $k_1$ & $e,a,b$\\
\hline$0$ & $k_2$ & $e,a$\\
\hline$1$ & $k_2$ & $b,c,d$\\
\hline$0$ & $k_3$ & $b,c$\\
\hline$1$ & $k_3$ & $d,e,a$\\
\hline$0$ & $k_4$ & $d,e$\\
\hline$1$ & $k_4$ & $a,b,c$\\ \hline
\end{tabular}
\caption{Encryption table for a simple random cipher.}
\end{figure*}
For this cipher, one can easily verify that at
least 2 key values connect every possible plaintext-ciphertext
pair. In addition, every plaintext-key pair can lead to at least
two different ciphertexts. In terms of the definitions given
above, this cipher
has $\Gamma=1$ and $\Lambda=1$.
\newline
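As an illustration of the above definitions, the values $\Gamma=1$ and $\Lambda=1$ of this example can be verified mechanically. The following Python sketch transcribes the encryption table of Fig.~2 and computes both parameters by exhaustive counting (the table data and symbol names are taken directly from the figure):

```python
# Exhaustive check of Gamma and Lambda for the example random cipher.
# The table transcribes Fig. 2: (plaintext bit, key) -> possible ciphertexts.
TABLE = {
    (0, 'k0'): 'ab', (1, 'k0'): 'cde',
    (0, 'k1'): 'cd', (1, 'k1'): 'eab',
    (0, 'k2'): 'ea', (1, 'k2'): 'bcd',
    (0, 'k3'): 'bc', (1, 'k3'): 'dea',
    (0, 'k4'): 'de', (1, 'k4'): 'abc',
}
KEYS = sorted({k for (_, k) in TABLE})

# Gamma + 1: minimum, over plaintext-ciphertext pairs that occur, of the
# number of keys connecting the pair.
gamma = min(sum(1 for k in KEYS if y in TABLE[(x, k)])
            for x in (0, 1) for y in 'abcde'
            if any(y in TABLE[(x, k)] for k in KEYS)) - 1

# Lambda + 1: minimum, over plaintext-key pairs, of the number of distinct
# ciphertexts the private randomizer can produce.
lam = min(len(set(ys)) for ys in TABLE.values()) - 1

print(gamma, lam)  # 1 1
```

For the bit value $0$ every ciphertext letter is reachable under exactly two keys, and for the bit value $1$ under exactly three, so the minimum count gives $\Gamma=1$; every plaintext-key row lists at least two ciphertexts, giving $\Lambda=1$.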
\section{Quantum Random Ciphers}
The known and possible advantages of a random classical cipher
over a nonrandom one were discussed in the previous section. While
it is possible to implement a random cipher classically using
random numbers generated on Alice's side, this is not currently
practical at high ($\sim$ Gbps) rates. As will become clear in the
sequel, the quantum encryption protocol $\alpha\eta$ (various
implementations are described in
\cite{barbosa03,corndorf03,pra05,ptl05,hirota05}; the protocol in
\cite{hirota05} is a variation on the original $\alpha\eta$ of
\cite{barbosa03}) effectively implements a random cipher from
Eve's point of view for a given choice of her measurement, the
difference from a classically random cipher being that it uses
coherent-state quantum noise to perform the needed randomization.
Before we describe $\alpha\eta$, we define some concepts that
capture the relevant features of a quantum random cipher. As
emphasized in Section 2.2, we will confine our attention to
\emph{stream} ciphers. First, we straightforwardly extend the
usual stream cipher to one where the ciphertext is a quantum
state. Our motivation for this definition is that, from the point
of view of the legitimate users Alice and Bob, $\alpha\eta$ is a
quantum stream cipher with negligible $\lambda$ in the sense given
below:
\\ \\
\textbf{Definition}\ \emph{($\lambda$-Quantum Stream Cipher (QSC))}\textbf{:}\\
A quantum stream cipher is a cipher for which the following two
conditions are satisfied:
\begin{enumerate} [A.]
\item{ The encryption map $e_k(\cdot)$ takes the $n$-symbol
plaintext sequence $\mathbf{x}_n$ to a quantum state $n$-sequence
$\mathbf{\rho}$ in the $n$-fold tensor product form:
\begin{equation} \label{quantcipher}
\mathbf{\rho}= e_k(\mathbf{x}_n) = \rho_{1}(x_1;z_1) \otimes
\ldots \otimes \rho_{n}(x_1, \ldots ,x_n;z_n),
\end{equation}
and}
\item{ Given the key $k$, there exists a measurement on the
encrypted state sequence, that recovers each plaintext symbol
$x_i$ with probability $P_{dec} > 1 - \lambda$.}
\end{enumerate}
Here, as in Section 2.2, $(Z_1, \ldots, Z_n)$ is the keystream
generated from the seed key $K$. A few comments will help clarify
the definition. First, note that the tensor product form of the
state in condition A retains for a quantum cipher the property of
a classical cipher that one can generate the components in the
$n$-sequence of states that constitute the output of a cipher one
after the other in a time sequence. Note also that, analogous to a
classical stream cipher, the $i$-th tensor component of $\rho$
depends on just $z_i$ and $(x_1,\ldots,x_i)$. Condition B is the
generalized counterpart of the decryption condition
Eq.(\ref{decryption}) for a classical cipher -- we now allow a
small enough decryption error probability. Thus, the per-symbol
error probability is bounded above by $\lambda < 1$.
We now want to bring the concept of classical \emph{random} cipher
defined in the previous section into the quantum setting. Our
motivation in doing so is to show that, for an attacker making the
same measurement on a mode-by-mode basis without knowledge of the
key, $\alpha\eta$ reduces to an equivalent $\Gamma$-Random Cipher
with significantly large $\Gamma$. Since the output of a quantum
cipher is a quantum state and not a random variable, we will need
to specify a POVM $\{\Pi_{\mathbf{y}_n}\}$ whose measurement
result $\mathbf{Y}_n$ supplies the classical ciphertext. Note
that in this quantum situation different choices of measurement
may result in radically different kinds of ciphertext. Note also
that the user's and the attacker's measurements may be different.
Our definition of a quantum random stream cipher below will apply
relative to a chosen ciphertext $\mathbf{Y}_n$ defined by its
associated POVM. We will also assume that, from the eavesdropper's
viewpoint, the same measurement is made on each of the $n$
components of the cipher output. In other words, the POVM defining the
ciphertext $\mathbf{Y}_n$ is a tensor product of identical POVMs $\{\pi_y\}$.
\\ \\
\textbf{Definition} (\emph{$(\Gamma,\lambda,\lambda',\{\pi_y\})$-Quantum Random Stream Cipher (QRC)})\textbf{:}\\
A $(\Gamma,\lambda,\lambda',\{\pi_y\})$-quantum random stream
cipher is a $\lambda$-quantum stream cipher such that for the
ciphertext given by the result of the product POVM
$\{\Pi_{\mathbf{y}_n}= \bigotimes_{i=1}^{n} \pi_{y_i}\}$,
\begin{enumerate}[A.]
\item
one has a $\Gamma$-random stream cipher satisfying
Eq.(\ref{Gammarandomcipherdef}), and
\item
the probability of error per symbol $P_{dec}'$ using the key
\emph{after} measurement is $P_{dec}' > 1 -\lambda'$.
\end{enumerate}
Several comments are given to explain this definition:
\begin{enumerate}[1.]
\item{ While condition QRC-B above appears similar to the condition QSC-B for a
quantum stream cipher, there is a crucial difference. In the
latter, the decryption probability $P_{dec}$ takes into account
the possibility that the \emph{quantum measurement} (as well as
classical post-processing) made on the cipher state can depend on
the key, i.e. it refers to Bob's rather than Eve's error
probability. In QRC-B, we are considering the probability of error
involved for Eve when she decrypts using a quantum measurement
independent of the key, followed by classical post-processing that is, in general, ``collective'' and depends on the key. Thus,
the parameter $\lambda'$ is related to the symbol error
probability under this latter restriction while the parameter
$\lambda$ in QSC-B is tied to the symbol error probability for a
quantum measurement allowed to depend on the key. We see that
there are two measurements implicit in our definition of a QRC -
one made by the user with the help of the key, and the other given
by $\{\pi_y\}$ made by the attacker without the key. See also Item
3 below. As we shall see, $\alpha\eta$ satisfies QRC-B with
negligible $\lambda'$ under a heterodyne or phase measurement attack by
Eve.}
\item{ $\Gamma$ in QRC-A, as in Eq.(\ref{Gammarandomcipherdef}), is a measure of the
`degree of intermixing' of the regions of ciphertext space
corresponding to different key values on a symbol-by-symbol basis.
If $\{\pi_y\}$ describes a discrete measurement, a $\Lambda$
corresponding to Eq.(\ref{lambdarandomcipherdef}) can also be
introduced.}
\item{ Our stipulation that the same POVM be measured on each of
the components of the cipher output is tantamount to restricting
the attacker to identical measurements on each tensor component
followed by collective processing. We will call such an attack a
\emph{collective attack} in this paper (also in \cite{yuen05qph}).
This definition is different from the usual collective attack in
quantum cryptography \cite{gisin02}: in the latter, following the
application of identical probes to each qubit/qumode, a joint
quantum measurement on all the probes is allowed.
In our case, there is no probe for Eve to set as we conceptually
allow her a full copy of the quantum state. Doing so, we can upper
bound her performance. (This is an important feature of our
so-called KCQ approach to encryption and key generation. See
\cite{yuen03} for discussion.) Thus, allowing a joint
measurement, as also nonidentical measurements on each output
component, will be called a joint attack.}
\item{In analogy with the classical random cipher definition Eq.~(\ref{Gammarandomcipherdef}), one
may wonder why the private randomizers $R_i$ used in that
definition are missing from that of the quantum random cipher.
Indeed, one may randomize the quantum state $\rho_i(x_1, \ldots
,x_i;z_i)$ to $\rho_i(x_1, \ldots ,x_i;z_i;r_i)$ using a private
random variable with probability distribution $p_{r_i}$. However,
since the value of $R_i$ remains unknown to both user and attacker
(Indeed, the user should not need to know $R_i$ in order to
decrypt or even to encrypt in the case of $\alpha\eta$), one sees
that all probability distributions of Bob's or Eve's measurements
in this situation are given by the state $\rho'_i(x_1, \ldots
,x_i;z_i)=\sum_{r_i}{p_{r_i}\rho_i(x_1, \ldots ,x_i;z_i;r_i)}$, in
which there is no explicit dependence on $r_i$. In particular, we
mention here that exactly such quantum state randomization, called
Deliberate Signal Randomization (DSR), has been proposed in the
context of $\alpha\eta$ in \cite{yuen03} for the purposes of
enhancing the information-theoretic security of $\alpha\eta$.}
\item{It is important to observe that the definitions given above
both for classical and quantum random ciphers are not arbitrary
ones, but rather the mathematical characterizations of very
typical situations involving randomization in classical and
quantum cryptosystems.}
\end{enumerate}
We present an example of a QRC in the next section: the
$\alpha\eta$ cryptosystem.
\section{The $\alpha\eta$ cryptosystem}
\subsection{Operation}
We now describe the $\alpha\eta$ system and its operation as a
quantum cipher:
\begin{enumerate}[(1)]
\item
Alice and Bob share a secret key $\mathbf{K}_s$.
\item
Using a \emph{key expansion function} $ENC(\centerdot)$, e.g., a
linear feedback shift register or AES in stream cipher mode, the
seed key $\mathbf{K}_s$ is expanded into a running key sequence
that is chopped into $n$ blocks:
$\mathbf{K}_{mn}=ENC(\mathbf{K}_s)=(K_1, \ldots , K_{mn})$. Here,
$m=\log_2(M)$, so that $Z_i \equiv (K_{(i-1)m +1}, \ldots,
K_{im})$ can take $M$ values. The $Z_i$ constitute the
\emph{keystream}.
\item
The encrypted state $e_{\mathbf{K}_s}(\mathbf{X}_n)$ of
Eq.~(\ref{quantcipher}) is defined as follows. For each bit $X_i$ of
the plaintext sequence $\mathbf{X}_n = (X_1, \ldots, X_n)$, Alice
transmits the \emph{coherent state}
\begin{equation} \label{state}
|\psi(X_i,Z_i)\rangle=|\alpha e^{i\theta(X_i,Z_i)}\rangle.
\end{equation}
Here, $\alpha \in \mathbb{R}$ and $\theta(X_i,Z_i)$ takes values
in the set $\{0,\pi/M,\ldots,(2M-1)\pi/M\}$. The function $\theta$
taking the data bit and keystream symbol to the actual angle on
the coherent state circle is called the \emph{mapper}. In this
paper, we choose $\theta(X_i,Z_i)=[Z_i/M+(X_i\oplus
Pol(Z_i))]\pi$, where $Pol(Z_i)= 0$ or $1$ according to whether $Z_i$ is
even or odd. This distribution of possible states is shown in
Fig.~3. Thus $Z_i$ can be thought of as choosing a `basis' with
the states representing bits $0$ and $1$ as its end points. In
general, one has the freedom to vary the mapper in various ways
for practical reasons. See, e.g, \cite{pra05}.
\item
In order to decrypt, Bob runs an identical ENC function on his
copy of the seed key. For each $i$, knowing $Z_i$, he makes a
quantum measurement to discriminate just the two states
$|\psi(0,Z_i)\rangle$ and $|\psi(1,Z_i)\rangle$.
\end{enumerate}
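The mapper of step (3) can be sketched in code. The following Python fragment is a toy illustration with a small, assumed $M$ (real systems use $M \sim 2\times 10^3$, and the keystream would come from an LFSR or AES rather than being enumerated):

```python
import math

M = 8  # number of bases; a toy value chosen here for illustration

def pol(z):
    # Pol(z) = 0 for an even running-key value z, 1 for an odd one
    return z % 2

def theta(x, z):
    # Mapper of step (3): phase of the transmitted coherent state for
    # data bit x and keystream symbol z in {0, ..., M-1}
    return (z / M + (x ^ pol(z))) * math.pi

# Every one of the 2M phases {0, pi/M, ..., (2M-1)pi/M} is realized, and
# for fixed z the two bit values sit at antipodal points (a 'basis'):
phases = sorted(theta(x, z) for x in (0, 1) for z in range(M))
assert all(abs(p - j * math.pi / M) < 1e-12 for j, p in enumerate(phases))
assert all(abs(abs(theta(1, z) - theta(0, z)) - math.pi) < 1e-12
           for z in range(M))
```

The two assertions check the interleaving property used throughout the paper: the $2M$ states fill the circle uniformly, while each keystream value pairs the bits $0$ and $1$ on antipodal points.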
\begin{figure*} [htbp]
\begin{center}
\rotatebox{-90} {
\includegraphics[scale=0.7]{setup2.eps}}
\caption{Left -- Overall schematic of the $\alpha\eta$ encryption
system.
Right -- Depiction of two of $M$ bases with interleaved logical
bit mappings.}
\end{center}
\end{figure*}
To decrypt in step (4) above, Bob, in general would need a phase
reference. This is effectively provided by the use of Differential
Phase Shift Keyed (DPSK) signals in the implementations of
$\alpha\eta$. See \cite{corndorf03,pra05,ptl05} for details. Doing
so does not compromise security as we still assume that Eve has a
perfect copy of the transmitted state.
If the line transmittance between Alice and Bob is $\eta$, Bob
receives a coherent state with energy $\eta S$ instead of $S
\equiv |\alpha|^2$. The optimal quantum measurement
\cite{helstrom76} for Bob has error probability
\begin{equation} \label{pB}
P^B_e \sim \frac{1}{4} \exp(-4\eta S).
\end{equation}
It is thus apparent that $\alpha\eta$ is a $\lambda$-quantum
cipher in the sense of Section 3 with $\lambda \sim \frac{1}{4}
\exp(-4\eta S)$. For the $S \sim 4 \times 10^4$ of \cite{pra05},
over a distance of 80 km at a loss of 0.2 dB/km, we have $\eta S
\sim 10^3$ photons. For this mesoscopic level, $\lambda$ is $\sim
\exp(-4000)$, which is completely negligible compared, say, to the
standard acceptable BER limit of $10^{-9}$, which arises from
device imperfections, for an uncoded optical on-off keyed line.
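The numbers just quoted can be reproduced as follows (a rough sketch; the loss figure, distance, and $S$ are the ones cited above, and since $P^B_e$ underflows double precision we work with its logarithm):

```python
import math

S = 4e4                 # photons at the transmitter, as in the cited experiment
loss_db = 0.2 * 80      # 0.2 dB/km over 80 km
eta = 10 ** (-loss_db / 10.0)

eta_S = eta * S         # photons reaching Bob
# P_e^B ~ (1/4) exp(-4 eta S) is far below double-precision range,
# so report log10 of the bound instead of the bound itself.
log10_Pe = -4 * eta_S / math.log(10) - math.log10(4)

print(round(eta_S))     # ~ 1005 photons, the mesoscopic level quoted
print(round(log10_Pe))  # ~ -1746: utterly negligible next to a 1e-9 BER floor
```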
Let us briefly indicate how this system may provide data security
by considering an \emph{individual attack} on each data bit $X_i$
by Eve. Under such an attack, one only looks at the per-bit error
probability ignoring correlations between the bits. Under this
assumption, Eve, not knowing $Z_i$, is faced with the problem of
distinguishing the density operators $\rho^0$ and $\rho^1$ where
\begin{equation} \label{rho}
\rho^b=\sum_{Z_i}\frac{1}{M}|\psi(b,Z_i)\rangle\langle\psi(b,Z_i)|.
\end{equation}
For a fixed signal energy $S$, Eve's optimal error probability is
numerically seen to go asymptotically to $1/2$ as the number of
bases $M \rightarrow \infty$ (See Fig. 1 of \cite{barbosa03}). The
intuitive reason for this is that increasing $M$ more closely
interleaves the states on the circle representing bit 0 and bit 1,
making them less distinguishable. Therefore, at least under such
individual attacks on each component qumode \footnote{When
referring to an optical field mode, we use the term \emph{qumode}
(for 'quantum mode', in analogy to 'qubit').} of the cipher
output, $\alpha\eta$ offers any desired level of security
determined by the relative values of $S$ and $M$. While we are not
concerned in this paper with key generation, it may be observed
that unambiguous state determination (USD) attacks on $\alpha\eta$
are totally ineffective due to the large number, $2M$, of states
involved.
In our security analysis, Eve is always assumed to be at the
transmitter so that $\eta=1$ for her. Without knowing the key,
however, her performance on the data is still poor as described in
the above paragraph. Her attacks on the key are described in the
following. We have assumed that the users can utilize the signal
energy $\eta S$ to maintain a proper bit error rate without
channel coding, despite possible interference from Eve. This does
not place a stringent requirement on $\eta$ itself as one can
typically go around 80 km in fiber before the signal needs to be
amplified. In case Eve's interference is too strong and causes
error, it would be detected in a message authentication code which
always goes with encryption. There is clearly no need to do
separate intrusion detection in this direct encryption case, but
it turns out there is also no need in the key generation regime
\cite{yuen05qph,yuen03} which we do not discuss in this paper.
\subsection{$\alpha\eta$ as a Random Cipher}
We showed in the previous subsection that $\alpha\eta$ may be
operated in a regime of $S$, $\eta$ and $M$ where it is a
$\lambda$-quantum cipher for $\lambda \sim 0$. We now show that,
from Eve's point of view, under both a heterodyne and phase
measurement attack, $\alpha\eta$ appears effectively as a quantum
\emph{random} cipher according to the characterization of Section
3. Note that the randomization in $\alpha\eta$ can also be
effected in principle by using an additional classical random
number generator. This is not required in $\alpha\eta$ as
high-speed randomization is automatically provided by the
coherent-state quantum noise.
To see the quantum random cipher characteristic of $\alpha\eta$,
consider employing the following two measurements for obtaining
$\{\pi_y\}$ in the quantum random cipher definition:
\begin{enumerate}[1)]
\item{ (Heterodyne measurement) $\pi_y = \frac{1}{\pi} |y\rangle\langle y|, y \in
\mathbb{C}.$}
\item{ (Canonical Phase measurement) $\pi_{\theta} = \frac {1} {2\pi} \sum_{n,n'=0}^{\infty} e^{i(n-n')\theta} |n\rangle \langle n'|, \theta \in [0,2\pi).$
}
\end{enumerate}
To show that the conditions for a QRC are satisfied, let us first
consider QRC-B. It may be shown \cite{yuen03} that the error
probabilities $\lambda'$ involved are respectively $\sim
\frac{1}{2}e^{-S}$ and $\sim \frac{1}{2}e^{-2S}$ for the
heterodyne and phase measurements.
Turning to QRC-A, let us estimate the value of $\Gamma$ under
heterodyne and phase measurement. For a signal energy $S$, the
heterodyne measurement is Gaussian distributed around the
transmitted amplitude with a standard deviation of $1/2$ for each
quadrature while the phase measurement has an approximately
Lorentzian distribution around the transmitted phase with standard
deviation $\sim 1/{\sqrt{S}}$. If we assume that, given a certain
transmitted amplitude/phase, the possible ciphertext values are
uniformly distributed within a standard deviation on either side
and ciphertext values outside this range are not reached (this
will be called the \emph{wedge approximation}), we get the
following estimates $N_{het}$ and $N_{phase}$ for the number of
keystream values $z_i$ covered by the quantum noise under
heterodyne and phase measurements:
\begin{equation} \label{numberestimate}
N_{het}=2N_{phase} = M/(\pi \sqrt{S}).
\end{equation}
If the value of the randomizer $R$ is fixed (corresponding to
rotation by a given angle within the wedge), $Z_i$ is fixed by the
plaintext and ciphertext. Thus we have according to
Eq.~(\ref{Gammarandomcipherdef}) that
\begin{equation}\label{Gammahet}
\Gamma_{het}=N_{het} -1 \cong M/(\pi \sqrt{S}),
\end{equation}
and that
\begin{equation} \label{Gammaphase}
\Gamma_{phase} \cong \Gamma_{het}/2 \cong M/(2 \pi \sqrt{S}).
\end{equation} As expected, the $\Gamma$'s of both measurements
increase as the number of bases $M$ increases, and decrease with
increasing signal energy $S$ that corresponds to decreasing
quantum noise. For example, using the experimental parameters in
\cite{pra05} of $S \sim 4 \times 10^4$ photons and $M \sim 2
\times 10^3$ gives $\Gamma_{het} \sim 3$. The $\Lambda$ (cf.
Eq.~(\ref{lambdarandomcipherdef})) characteristics of $\alpha\eta$
will be considered in Sec.~5.2 in connection with the Nishioka
group attack. The relevance of these parameters for security is
considered in detail in the next subsection and in Sec.~5.2.
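Under the wedge approximation these estimates reduce to one-line computations. The sketch below evaluates Eqs.~(\ref{numberestimate})--(\ref{Gammaphase}) at the experimental parameters just quoted; note that $\Gamma_{het}=N_{het}-1\approx 2.2$ before the large-$N_{het}$ approximation that yields $M/(\pi\sqrt{S})\approx 3.2$:

```python
import math

S = 4e4   # signal energy in photons
M = 2e3   # number of bases

N_het = M / (math.pi * math.sqrt(S))  # keystream values covered, heterodyne
N_phase = N_het / 2                   # phase measurement covers half as many
Gamma_het = N_het - 1                 # Eq. (Gammahet) before dropping the -1

print(round(N_het, 2), round(N_phase, 2), round(Gamma_het, 2))
```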
\subsection{$\alpha\eta$: Information-theoretic and Complexity-Theoretic Security}
Before discussing $\alpha\eta$ security, we comment that
$\alpha\eta$ direct encryption is often compared to BB84 key
generation followed by the use of the generated key in either
one-time pad or a standard cipher like AES. This is not an
appropriate comparison because $\alpha\eta$ already assumes that
the users share a key. Perhaps the source of the confusion is that
both $\alpha\eta$ and BB84 involve the use of quantum states. In
any case, the appropriate comparison would be between $\alpha\eta$
and a standard cipher like one-time pad or AES - we do make such a
comparison in the following.
We will consider in turn the information-theoretic (IT) and
\emph{complexity-theoretic (CT)} security of $\alpha\eta$. In
standard cryptography, no rigorous result is known about the
quantitative security level of any cipher, save the one-time pad.
Since $\alpha\eta$ includes a classical stream cipher ENC (See
Fig.~3), we may in general expect a similarly murky state of
affairs regarding its quantitative security. However, it will turn
out that, under known-plaintext attacks, one can claim
\emph{additional} security from the physical coherent-state noise
for a suitably modified $\alpha\eta$ with any cipher ENC, as
compared to ENC alone.
\subsubsection{Information-theoretic (IT) Security: Qualitative discussion}
Considering first IT security, we discuss in turn qualitatively
the cases of ciphertext-only, known-plaintext, and statistical
attacks on the data as well as the key. Subsequently, for the
former two cases, we give lower bounds for the unicity distances
$n_0$ and $n_1$ (See Appendix A for definitions).
As mentioned in Appendix A, for a nondegenerate ENC box cipher,
one can protect the key completely and attain data security up to
the Shannon limit under CTA. If the same ENC box is used in
$\alpha\eta$ one may consider, as in Sec. 4.1, an attack in which
Eve attacks each data bit using only the measurement result from
the corresponding qumode. Although under such an assumption IT
security obtains as $M/\sqrt{S} \rightarrow \infty$, this attack
is too restrictive since Eve does gain information on the key from
each qumode measurement that could be useful in learning about
other data bits as well. Such attacks utilizing key correlations
across data bits may be launched against standard stream ciphers.
Under the wedge approximation of Sec.~4.2, Eve is able to narrow
her choice of basis down to $\Gamma$ possible values. Even if
$\Gamma$ is large, the key security (and hence data security) is
not as good as that of the ENC box alone, for which the
keystream bits are \emph{completely} random to Eve. However, one
can still derive a unicity distance lower bound (See below). This
defect of $\alpha\eta$ may be removed by the use of Deliberate
Signal Randomization (DSR) introduced in \cite{yuen03}. However,
the concrete analysis of systems using various forms of DSR is
still in progress; see, however, \cite{yuen06pla}.
Let us now consider the case of known-plaintext attacks on the
key. As discussed in Appendix A, most nonrandom ciphers have a
nondegeneracy distance $n_d$ at which the key is fixed under a
known-plaintext attack. We also mentioned that for random ciphers,
such a distance may not exist, so that it is unknown whether or
not they possess IT security against KPAs. Since $\alpha\eta$ is
random, the same remark applies to it. However, a finite unicity
distance $n_1$ may exist for $\alpha\eta$ and other random ciphers
beyond which the key is fixed in a KPA. While rigorous analysis is
difficult and is so far limited to the unicity distance bound
given below, we believe that such is the case for the original
$\alpha\eta$ with no modification, so that it has no IT security
for large enough $n$.
The statistical attacks fall between the above two extremes. Thus,
there may exist a crossover point where $\alpha\eta$ security
becomes better than that of the ENC box alone as one moves from
CTA towards KPA. However, no quantitative results, e.g., the
unicity distance under STA, are known. To summarize, we believe
that under all cryptographic attacks, $\alpha\eta$ has no IT
security for large enough $n$, i.e., $\lim _{n \rightarrow \infty}
H(K|\mathbf{Y}_n^E) = 0$. However, the use of $\alpha\eta$ should
extend the unicity distance beyond that of the cipher ENC used in
it for some statistical attacks and for known-plaintext attacks.
\subsubsection{Information-Theoretic (IT) Security: Unicity Distance Lower Bounds}
Nonrigorous estimates of the unicity distance $n_1$ against KPA
for standard stream ciphers are often made via a capacity argument
in the so-called ``correlation attacks'' (See, e.g.,
\cite{chepyzhov00}). The bound
\begin{equation}
\label{capacitybound} n \geq |K|/C,
\end{equation}
where $C$ is the capacity of Eve's effective channel, follows from
the converse to the coding theorem \cite{cover91}. The application
of (\ref{capacitybound}) to correlation attacks is nonrigorous
because the assumption of independent noise in each bit is not
valid. In the case of $\alpha\eta$, \emph{rigorous} lower bounds
on $n_0$ and $n_1$ can be obtained from (\ref{capacitybound})
because of the independent qumode to qumode coherent-state noise.
Under the wedge approximation to the noise distribution for
evaluating Eve's capacity in (\ref{capacitybound}), it may be
shown \cite{eguchi06} that for uniform data, the CTA unicity
distance
\begin{equation} \label{n0bound}
n_0 \geq \frac{|K|}{\log_2(\frac{M}{\Lambda +1})},
\end{equation}
and for KPA,
\begin{equation} \label{n1bound}
n_1 \geq \frac{|K|}{\log_2(\frac{M}{\Gamma +1})}.
\end{equation}\newline
In terms of the experimental parameters of \cite{pra05}, this
gives $n_0 \geq 550, n_1 \geq 490$. While these are much bigger
than $n_0 \sim 120$ bits for English, no precise practical
conclusion can be drawn, both because they are just lower bounds
and because the actual complexity of key determination as a
function of $n$ is not yet known. For the numbers above, the
cryptosystem would be secure if the optimal complexity is
exponential in $n$.
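The KPA bound (\ref{n1bound}) is easily evaluated. In the sketch below, $|K|=4400$ and $M \sim 2000$ are the parameters of \cite{pra05} and $\Gamma=3$ is the wedge-approximation estimate of Sec.~4.2; the CTA bound (\ref{n0bound}) additionally requires the heterodyne $\Lambda$, which is taken up in Sec.~5.2, so only $n_1$ is computed here:

```python
import math

K = 4400        # seed key length in bits, from the cited experiment
M = 2000        # number of bases
Gamma = 3       # wedge-approximation estimate under heterodyne measurement

n1_bound = K / math.log2(M / (Gamma + 1))   # Eq. (n1bound)
print(round(n1_bound))  # ~ 491, consistent with the quoted n1 >= 490
```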
\subsubsection{Complexity-theoretic (CT) Security}
Apart from IT security, the issue of complexity-theoretic (CT)
security is of great practical importance. Indeed, in
\cite{yuen05qph}, we have argued that large enough search
complexity security is as good as information-theoretic security
in reality. For standard ciphers, we have seen that there is no IT
security beyond the nondegeneracy distance. Thus, standard ciphers
rely for their security under KPA basically on the complexity of
algorithms to find the key. We now compare the situation with that
of $\alpha\eta$. For any attack, the mere fact that
$H(K|\mathbf{Y}_n^E)=0$ (for CTA and STA) or $H(K|\mathbf{Y}_n^E
\mathbf{X}_n)=0$ (for KPA) does not mean that the unique key can
be readily obtained from $\mathbf{Y}_n^E$ (and $\mathbf{X}_n$ in
the case of KPA). For most ciphers, one needs to run an algorithm
to obtain it. At worst, this algorithm can be a \emph{brute force
search} - one decrypts $\mathbf{Y}_n^E$ with all the $2^{|K|}$
possible keys until a valid plaintext is obtained. This search can
easily be made prohibitive by choosing $|K|$ large enough --
$|K|\sim 4000$ used in experimental $\alpha\eta$ \cite{pra05} is
already way beyond conceivable search capability. A better
procedure that we call an \emph{assisted brute force search} can
exploit partial knowledge of the possible running key values for
each bit as follows. Since each basis is specified by
$m=\log_2(M)$ bits of the running key, and the seed key is
revealed by a $|K|$-bit sequence of the running key for an ENC box
of Fig.~3 that is an LFSR with known connection polynomial, we
obtain an \emph{assisted brute-force search complexity} of
\begin{equation}\label{searchcomplexity}
\mathcal{C}=\Gamma^{|K|/m}.
\end{equation}
For $|K|=4400$ used in \cite{pra05}, $\mathcal{C} \sim 2^{630}$
which is far beyond any conceivable search capability. While it
is not known what Eve's \emph{optimal} search complexity is, the
advantage here is that this degree of randomization is achieved
automatically by the coherent-state quantum noise at the $\sim$
Gbps rate of operation of the system. Note also that it is not
hard to increase $M$ while maintaining the same data rate because
the number of bits needed to select a basis on the circle scales
logarithmically with $M$.
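The assisted brute-force complexity (\ref{searchcomplexity}) at the quoted parameters can be checked as follows. We take $M=2^{11}$ so that $m$ is an integer; since $M \sim 2\times 10^3$ and $\Gamma \sim 3$ are themselves only wedge-approximation estimates, the small gap from the quoted $2^{630}$ is immaterial:

```python
import math

K = 4400                 # seed key length in bits
M = 2048                 # assumed power-of-two basis count, M ~ 2e3
m = int(math.log2(M))    # running-key bits consumed per data bit
Gamma = 3                # running-key values left plausible per qumode

# C = Gamma**(K/m) is astronomically large, so express it via its base-2 log.
log2_C = (K / m) * math.log2(Gamma)
print(m, round(log2_C))  # 11 634 -- i.e. C ~ 2^634, of the order of 2^630
```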
In practice, heuristic algorithms based on the structure of the
ENC cipher are used to speed up the search. The rigorous
quantitative performance of these algorithms is unknown for
standard ciphers. However, one may view $\alpha\eta$ as an
``enhancer'' of security by providing an additional `physical
encryption' on top of the standard `mathematical encryption'
provided by the ENC box as follows.
For the ENC of Fig.~3 used as a standard cipher, so that
\begin{equation} \label{ENC}
Y_i=X_i\oplus K_i, \ K_i=ENC(\mathbf{K}_s),
\end{equation}
let the unicity distance for KPA be $n_1$. Let us assume that
there exists an algorithm ALG$(\mathbf{Y}_{n_1},\mathbf{X}_{n_1})$ whose output is
the seed key $\mathbf{K}_s$ and that ALG has complexity $C$ when used with
inputs of length $n_1$. In order to compare this complexity with
that of $\alpha\eta$, we assume that the same ENC is used in an
$\alpha\eta$ system. However, since $m$ bits of the keystream
output of ENC are used to choose the basis for one data bit in
$\alpha\eta$, we first `match' the data stream and keystream in
$\alpha\eta$ as follows.
We \emph{expand} the ENC output keystream by applying $m$
deterministic $m$-bit to $m$-bit functions $\{f_j\}_{j=1}^m$ to
each keystream symbol $Z_i$ to get a new keystream $\mathbf{Z'}$
as follows:
\begin{equation} \label{kprimeprime} Z' = (f_1(Z_1), \cdots,
f_m(Z_1), f_1(Z_2), \cdots, f_m(Z_2), \cdots).
\end{equation}
We then use $Z'$ instead of $Z$ to choose the basis for each data
bit.
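A minimal sketch of this expansion follows. The particular choice of the $f_j$ below (fixed bit rotations) is an illustrative assumption; the construction only requires $m$ deterministic $m$-bit to $m$-bit functions:

```python
m = 4  # toy basis size, M = 2**m = 16
MASK = (1 << m) - 1

def f(j, z):
    # f_j: an illustrative deterministic m-bit map (rotate z left by j bits);
    # any fixed m-bit to m-bit functions would serve equally well
    j %= m
    return ((z << j) | (z >> (m - j))) & MASK if j else z

def expand(keystream):
    # Eq. (kprimeprime): Z' = (f_1(Z_1), ..., f_m(Z_1), f_1(Z_2), ...)
    return [f(j, z) for z in keystream for j in range(1, m + 1)]

Z = [0b1010, 0b0011]
Z_prime = expand(Z)
assert len(Z_prime) == m * len(Z)  # one basis-selection symbol per data bit
print(Z_prime)
```

Each keystream symbol $Z_i$ thus supplies the bases for an entire $m$-block of data bits, which is what confines the dependence of each ciphertext block to a single block of running-key bits.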
The above modification results in the $i$-th $m$-block of
ciphertext $Y_{(i-1)m+1} \cdots Y_{im}$ being dependent only on
$K_{(i-1)m+1} \cdots K_{im}$ and $X_{(i-1)m+1} \cdots X_{im}$ for
\emph{both} ENC and $\alpha\eta$ with ENC. Under a KPA on ENC
alone, using a known plaintext of length $n_1$, $K_1\ldots
K_{n_1}$ is known \emph{exactly}. For ENC augmented with
$\alpha\eta$ in the described manner, it \emph{may} happen that
because of the randomization of $Z'_1 \cdots Z'_{n_1}$, $K_1\ldots
K_{n_1}$ is not fixed by $\mathbf{Y}_{n_1}$ and
$\mathbf{X}_{n_1}$. In the latter case, we have IT security above
that of ENC alone, even though such security may be lost for large
enough $n$, as mentioned in the previous subsection.
Let us assume that, at the unicity distance $n_1$ of ENC,
$\alpha\eta$ with ENC does \emph{not} have IT security, so that
$H(K|\mathbf{X}_{n_1}\mathbf{Y}_{n_1})=0$. Assume also that
$n_1=mk$. Even in such a case, it appears harder to implement the
algorithm ALG that finds the key. As discussed in Section 2.2, the
reason is that the randomization of the ciphertext $Y_i$, for each
$i$, leaves each $Z_i$ undetermined immediately after the
measurement, even though, by our present assumption, only one
possible seed key $K$ can lead to the observed measurement
results. If the number of possibilities for each $Z_i$ is $l$, Eve
may need to run the algorithm ALG $l^{k}$ times resulting in a
complexity of $l^{n_1/m}C$ versus $C$ for ENC alone. Of course,
there may exist a clever algorithm that enables her to do much better. All
we claim here is that $\alpha\eta$ provides an additional but
unquantified layer of security over that of the ENC box against
KPA, both in the IT and CT senses. Thus, $\alpha\eta$ can be run
on top of any standard cipher in use at present, e.g. AES
(Advanced Encryption Standard), and provides an additional,
qualitatively different layer of physical encryption security over
AES under a known-plaintext attack.
An interesting point is that, if the above level of CT security
against known-plaintext attack is sufficiently high for some data
length $n$, there is at least as much security against CTA for the
same $n$. However, this comparison may not be practically
meaningful as a CTA can typically be launched for the entire
sequence of data while usually only a much smaller segment of
known-plaintext is available to the attacker. Typically, this
would imply the attacks can be parallelized. On the other hand,
the situation is practically favorable with AES used in the ENC
box; see ref.~\cite{yuen06pla}, where the immunity of $\alpha\eta$
against fast correlation attacks with and without DSR is also
treated.
\subsection{Overview of $\alpha\eta$ Features}
We summarize the main known advantages and rigorous security
claims regarding $\alpha\eta$ compared to previous ciphers:
\begin{enumerate} [(1)]
\item For known-plaintext attacks on the
key, $\alpha\eta$ using an LFSR has an additional brute force
search complexity given by $\Gamma^{|K|/m}$. When reconfigured as
in Sec. 4.3.3, it also has at least as much IT security as the ENC
box alone for the same length $n$ of data.
\item It may, when supplemented with further techniques \cite{yuen03}, have
information-theoretic security against known-plaintext attacks
that is not possible with nonrandom ciphers, and would also have
maximal information-theoretic security against ciphertext-only
attacks.
\item With added Deliberate Signal Randomization (DSR) \cite{yuen03}, it is expected to have improved information-theoretic
security on the data far exceeding the Shannon limit.
\item It has high-speed private true randomization (from quantum noise that even Alice does not know), which is not
possible otherwise with current or foreseeable technology.
\item It suffers no reduction in data rate compared to other known
random ciphers, because Bob needs to resolve only two and not $M$
possibilities (i.e., one data bit is transmitted per qumode).
\item It provides physical encryption, different from usual
mathematical encryption, that forces the attacker to attack the
optical line rather than simply the electronic bit output.
\end{enumerate}
\section{Nishioka et al's criticisms of $\alpha\eta$}
In this section, we discuss the criticisms made by Nishioka et al
\cite{nishioka04,nishioka05} and respond to them. This section has
some overlap with \cite{nair05} (which was not published), but
contains new material.
\subsection{Claims in Nishioka \emph{et al} \cite{nishioka05}}
Nishioka \emph{et al} claim that $\alpha\eta$ can be reduced to a
classical non-random stream cipher under the attack that we now
review. For each transmission $i$, Eve makes a heterodyne
measurement on the state and collapses the outcomes to one of $2M$
possible values. Thus, the outcome $j \in \{0, \cdots, 2M-1\}$ is
obtained if the heterodyne result falls in the wedge for which the
phase $\theta \in [\theta_j-\pi/2M, \theta_j+\pi/2M]$, where
$\theta_j= \pi j/M$. Further, for $q \in \{0, \cdots, M-1\}$
representing the $M$ possible values of each $Z_i$, Nishioka
\emph{et al} construct a function $F_j(q)$ with the property that,
for each $i$, and the corresponding running key value $Z_i$
actually used,
\begin{equation} \label{nishiokadecryption}
F_{j^{(i)}}(Z_i)=r_i
\end{equation}
with probability very close to 1. In fact, for the parameters
$S=100$ and $M=200$, they calculate the probability that
Eq.~(\ref{nishiokadecryption}) fails to hold to be $10^{-44}$, a
value they demonstrate to be negligible for any practical purpose.
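The wedge discretization just described can be sketched as follows; this mapping from a heterodyne phase to the outcome index $j$ is our own minimal reading of the construction, not code from \cite{nishioka05}:

```python
import math

def wedge_outcome(theta, M):
    """Map a heterodyne phase theta (radians) to the outcome j in {0,...,2M-1}:
    outcome j is obtained when theta lies in [theta_j - pi/(2M), theta_j + pi/(2M)],
    with theta_j = pi*j/M, i.e. when theta is nearest to the wedge centre theta_j."""
    width = math.pi / M                      # angular width of one wedge
    return round(theta / width) % (2 * M)

M = 200
print(wedge_outcome(0.0, M))                 # phase 0 -> outcome 0
print(wedge_outcome(math.pi / M, M))         # phase pi/M -> outcome 1
```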
The authors of \cite{nishioka05} further claim that the above
function $F_{j^{(i)}}(q)$ can always be represented as the XOR of
two bit functions $G_{j^{(i)}}(q)$ and $l_{j^{(i)}}$, where
$l_{j^{(i)}}$ depends \emph{only} on the measurement result. Thus,
they make the claim that the equation
\begin{equation} \label{reduction}
l_{j^{(i)}}=r_i \oplus G_{j^{(i)}}(Z_i)
\end{equation}
holds with probability effectively equal to 1. They then observe
that a classical additive stream cipher \cite{stinson} (which is
non-random by definition) satisfies
\begin{equation}\label{streamcipher}
l_i=r_i \oplus \tilde{k_i},
\end{equation}
where $r_i$, $l_i$, and $\tilde{k_i}$ are respectively the $i$th
plaintext bit, ciphertext bit and running key bit. Here,
$\tilde{k_i}$ is obtained by using a seed key in a
pseudo-random-number generator to generate a longer running key.
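For reference, a toy sketch of the classical additive stream cipher of Eq.~(\ref{streamcipher}), with a hypothetical Fibonacci LFSR (arbitrary tap positions) as the running-key generator, is:

```python
def lfsr_bits(seed_bits, taps, n):
    """Generate n running-key bits from a toy Fibonacci LFSR:
    output the last state bit, then shift in the XOR of the tapped bits."""
    state = list(seed_bits)
    out = []
    for _ in range(n):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

def additive_stream_encrypt(plain_bits, key_bits):
    """l_i = r_i XOR k_i, as in Eq. (streamcipher)."""
    return [r ^ k for r, k in zip(plain_bits, key_bits)]

r = [1, 0, 1, 1, 0, 0, 1, 0]
k = lfsr_bits([1, 0, 0, 1], taps=[0, 3], n=len(r))
l = additive_stream_encrypt(r, k)
assert additive_stream_encrypt(l, k) == r    # decryption recovers the plaintext
```

Note that here the ciphertext bit depends only on the plaintext bit and the running-key bit, which is precisely the property at issue in the claims discussed next.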
The authors of \cite{nishioka05} then argue that since
$l_{j^{(i)}}$ in Eq.(\ref{reduction}), like the $l_i$ in
Eq.(\ref{streamcipher}), depends just on the measurement result,
the validity of Eq.(\ref{reduction}) proves that the security of
Y-00 is equivalent to that of a classical stream cipher. In
particular, they claim that by interpreting $l_{j^{(i)}}$ as the
ciphertext, Y-00 is not a random cipher, i.e., it does not satisfy
Eq.(\ref{random}) of the next section.
We analyze and respond to these claims and other statements in
\cite{nishioka05} in the following section.
\subsection{Reply to claims in \cite{nishioka05}}
To begin with, we believe that Eq.~(\ref{nishiokadecryption}) (Eq.~(14) in
\cite{nishioka05}) is correct with the probability given by them.
The content of this equation is simply that Eve is able to
decrypt the transmitted bit from her measurement data $\mathbf{J}_N$ and
the key $\mathbf{K}_s$. In other words, it merely asserts that
Eq.~(\ref{decrypt}) holds for $\mathbf{Y}_N=\mathbf{J}_N$. As such, it
does not contradict, and is in fact \emph{necessary} for, the claim
that $\alpha\eta$ is a random cipher for Eve. In fact, we already
claimed in \cite{yuen03} and \cite{yuen05pla} that such a
condition holds. In this regard, note also that the statement in
Section 4.1 of \cite{nishioka05} that ``informational secure key
generation is impossible when (Eq.~(\ref{nishiokadecryption}) of this
paper) holds'' is irrelevant, since direct encryption rather than
key generation is being considered here. Furthermore, we have
already pointed out \cite{yuen05qph,yuen03,yuen05pla} that the
Shannon limit prevents key generation with the experimental
parameters used so far, a point missed in
\cite{nishioka04,nishioka05,loko05}. See also \cite{note1}.
We also agree with the claim of Nishioka \emph{et al} that it is
possible to find functions $l_{j^{(i)}}$ and $G_{j^{(i)}}(q)$, the
former depending only on the measurement result $j^{(i)}$, such
that Eq.(\ref{reduction}) holds, again with probability
effectively equal to one. The \emph{error} in \cite{nishioka05} is
to use this equation to claim, in analogy with
Eq.~(\ref{streamcipher}), that $\alpha\eta$ is reducible to a
classical nonrandom stream cipher.
To understand the error in their argument, note that, for
Eq.~(\ref{streamcipher}) to represent an additive stream cipher,
the $l_i$ in that equation should be a function \emph{only} of the
measurement result, and $\tilde{k_i}$ should be a function
\emph{only} of the running key. While the former requirement is
true also for the $l_{j^{(i)}}$ in Eq.~(\ref{reduction}), the
latter is certainly \emph{false} for the function
$G_{j^{(i)}}(Z_i)$ in Eq.~(\ref{reduction}), since it depends
\emph{both} on the measurement result $j^{(i)}$ and the running
key $Z_i$. Indeed, it can be seen that the definition of the
function $F_{j^{(i)}}(Z_i)$, and thus, $G_{j^{(i)}}(q)$ depends on
the sets $C_{j^{(i)}}^+$ and $C_{j^{(i)}}^-$ defined in Eq.~(12)
of \cite{nishioka05}. The identity of these sets in turn depends
on the relative angle between the basis $q$ and Eve's estimated
basis $\tilde{j^{(i)}}= j^{(i)} \bmod M.$ Thus, it is clearly the
case that $G_{j^{(i)}}(Z_i)$ must depend both on $j^{(i)}$ and
$Z_i$, a fact also revealed by the inclusion of the subscript
$j^{(i)}$ by the authors of \cite{nishioka05} in the notation for
$G$.
Notwithstanding the failure of Eq.~(\ref{reduction}) to conform to
the requirements of a stream cipher representation
Eq.~(\ref{streamcipher}), Nishioka \emph{et al} reiterate that
Y-00 is nonrandom because
\begin{equation} \label{l}
H(\mathbf{L}_N|\mathbf{R}_N, \mathbf{K}_s) =0
\end{equation}
holds, where $\mathbf{L}_N=(l_{j^{(1)}}, \ldots, l_{j^{(N)}})$.
This equation follows from Eq.~(\ref{reduction}) and so by
considering $\mathbf{L}_N\equiv \mathbf{Y}_N$ to be the
ciphertext, Eq.~(\ref{random}) is not satisfied, thus
supposedly making Y-00 nonrandom. The choice of $\mathbf{L}_N$ as
the ciphertext is supported by the statement in \cite{nishioka05}
that ``It is a matter of preference what we should refer to as
``ciphertext''.'' This is indeed true, especially considering that
there are different possible quantum measurements that may be made
on the quantum state in Eve's possession, each giving rise to a
different ciphertext. This point is also highlighted by our
definition of a quantum random cipher. However, if one wants to
claim equivalence to a non-random cipher for some particular
choice of ciphertext $\mathbf{Y}_N$, one must show that Eq.~(10)
is violated \emph{and} that Eq.~(11) is satisfied using the chosen
ciphertext in \emph{both} equations. In other words, no
equivalence to any kind of cipher is shown unless one can also
decrypt \emph{with the chosen ciphertext} and key alone. However,
one may readily see that, taking $\mathbf{Y}_N=\mathbf{L}_N$,
Eq.~(\ref{decrypt}) is not satisfied, i.e.,
$H(\mathbf{R}_N|\mathbf{L}_N,\mathbf{K}_s)\neq 0$.
The reason is that, as we noted from our analysis above of the
function $G_{j^{(i)}}(q)$, decrypting $r_i$ requires knowledge of
certain ranges in which the angle between the basis chosen by the
running key and the estimated basis $\tilde{j^{(i)}}$ falls. To
convey this information \emph{for every possible} $j^{(i)}$, one
needs at least $\log_2(2M)$ bits. It follows that the single bit
$l_{j^{(i)}}$ is insufficient for the purpose of decryption, and
so Eq.~(\ref{decrypt}) cannot be satisfied for
$\mathbf{Y}_N=\mathbf{L}_N$. Therefore, we conclude that, in the
interpretation of $\mathbf{L}_N$ as the ciphertext, decryption is
not possible even if Eve has the key $\mathbf{K}_s$. Indeed, it is
$\mathbf{J}_N$ that can be regarded as a possible ciphertext,
since Eq.~(\ref{decrypt}) is satisfied for
$\mathbf{Y}_N=\mathbf{J}_N$. However, with this choice of
ciphertext, Y-00 necessarily becomes a \emph{random} cipher,
because $H(\mathbf{J}_N|\mathbf{R}_N,\mathbf{K}_s) \neq 0$, a fact
admitted by Nishioka \emph{et al} in \cite{nishioka05}.
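As a quick numerical check of this counting argument, with $M=200$ as in the parameters quoted earlier:

```python
import math

M = 200
bits_needed = math.log2(2 * M)   # information needed to convey j^(i), ~8.6 bits
print(bits_needed)
```

So roughly nine bits per transmission, not one, would be needed to carry the information required for decryption.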
We hope that the discussion above makes it clear that the
`reduction' of $\alpha\eta$ in \cite{nishioka05} to a non-random
cipher is false, and that in fact, no such reduction can be made
under the heterodyne attack considered in \cite{nishioka05}.
Indeed, as detailed in previous sections, the representation of
ciphertext by $\mathbf{Y}_N=\mathbf{J}_N$ does reduce it to a
\emph{random} cipher under the heterodyne attack. Its quantitative
random cipher characteristics, namely $\Gamma$ of
Eq.~(\ref{Gammarandomcipherdef}) and $\Lambda$ of
Eq.~(\ref{lambdarandomcipherdef}), are as follows, for various
definitions of ``ciphertext'' adopted.
If the full continuous observation on the circle is taken as the
ciphertext, then (\ref{Gammahet}) shows that $\Gamma \sim 3$ for
typical experimental parameters. If the ciphertext alphabet is
digitized and taken to be the $2M$ arc segments around the $2M$
states on the circle, then $\alpha\eta$ has, for any
$(\mathbf{x}_i,z_i,r)$, $\Lambda+1 =2(\Gamma+1)$ where $\Gamma$ is
given by (\ref{Gammahet}). If one attempts to `de-randomize' the
ciphertext by grouping together the possibilities, $\Gamma$ would
increase while $\Lambda$ would decrease. In the nonrandom limit
where a fixed half-circle observation is taken to represent each
bit value, which is the nonrandom reduction discussed in
\cite{yuen05pla}, $\Gamma$ would increase from that of
Eq.~(\ref{Gammahet}) to $M$, making attacks on the key completely
impossible. On the other hand, while $\Lambda =0$ for a binary
ciphertext alphabet, the $2M$-outcome ciphertext would lead, from
Eq.~(\ref{Gammahet}), to an error probability per ciphertext bit
for Eve \cite{yuen05pla}:
\begin{equation}\label{wedgeerror}
P_b^E \sim \frac{2}{\pi \sqrt{S}}.
\end{equation}
Eq.~(\ref{wedgeerror})
is obtained in the wedge approximation on a per qumode basis for
Eve, under the assumption that the state is uniformly distributed
on the circle which is satisfied for uniform data and an LFSR for
the ENC box of Fig.~3. It leads to $0.1-1\%$ error rate for Eve on
the ciphertext (not data \cite{note2}) for the experimental
parameters of \cite{barbosa03,pra05}. As a consequence, the data
security will far exceed the Shannon limit (\ref{shannonlimit})
because she would make many errors even when the correct key is
given to her for decryption. For any other ciphertext alphabet
division of the circle, it is clear that $\Lambda >0$ for any
$z_i$ and $\mathbf{x}_n$ from the same randomization for states
near the ciphertext alphabet boundaries on the circle.
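To get a feel for the scale of the estimate $P_b^E \sim 2/(\pi\sqrt{S})$ above, one can evaluate it for a few illustrative values of $S$; these values are chosen only for illustration and are not the actual experimental parameters of \cite{barbosa03,pra05}:

```python
import math

def eve_ciphertext_error(S):
    """Wedge-approximation error probability per qumode for Eve,
    P_b^E ~ 2 / (pi * sqrt(S))."""
    return 2.0 / (math.pi * math.sqrt(S))

for S in (1e4, 1e5, 1e6):
    print(f"S = {S:>9.0f}:  P_b^E ~ {eve_ciphertext_error(S):.4%}")
```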
In sum, there can be no nonrandom reduction of $\alpha\eta$. If
the ciphertext alphabet is chosen to make $\alpha\eta$ nonrandom,
then known-plaintext attack on the key is impossible and the
ciphertext itself would be obtained with significant noise.
We conclude this section by responding to some other statements
made in \cite{nishioka05}.
In Section 3.3, Nishioka \emph{et al} claim that ``The value of
$l_{j^{(i)}}$ does not have to be the same as that of
$l_{j^{(i')}}$ when $i \neq i'$, even if $j^{(i)}=j^{(i')}$
holds.'' This statement is in direct contradiction to their
previous statement in the same subsection that ``$l_{j^{(i)}}$
depends only on the measurement value $j^{(i)}$''.
In the same subsection, Nishioka \emph{et al} claim that ``In
(\cite{nishioka04}), we showed another concrete construction of
$l_{j^{(i)}}$ ...''. We could find no explicit construction of
$l_{j^{(i)}}$ in that paper. We were led to the choice of $l_i$
described in \cite{yuen05pla} by the attempt to make the stream
cipher representation Eq.~(\ref{streamcipher}) valid. In fact,
such a representation is claimed by Nishioka \emph{et al} in their
Case 2 of \cite{nishioka04}. It turned out, however, that
decryption using that $l_i$ suffered a $0.1-1$\% error depending
on the value of $S$ used as noted above. See \cite{yuen05pla} for
further details. While it was later claimed that they have a
different reduction in mind \cite{nishioka05}, the reduction in
\cite{yuen05pla} is the only one that makes $\alpha\eta$ nonrandom
(but in noise). In any case, as we have shown above, no
construction of a single-bit from the heterodyne or phase
measurement results can satisfy Eq.~(\ref{nishiokadecryption}) with the
extremely low failure probability given in \cite{nishioka05}.
\section{Acknowledgements}
We would like to thank Greg Kanter, Chuang Liang, and Koichi
Yamazaki for useful discussions. This work was supported by DARPA
under grant F30602-01-2-0528 and by AFOSR under grant
FA9550-06-1-0452.
\section*{Appendix A -- Security under Statistical and Known-Plaintext Attacks}
In this appendix, we summarize some relevant terminology and
results from ref. \cite{yuen05qph} on the key security of a random
cipher. We first present an overview of the various possible
cryptographic attacks possible on a cipher and some early results
on the subject. We also present our result on the security of a
nonrandom cipher under known-plaintext attacks. In the process, we
define the important term `unicity distance' coined by
Shannon and broaden it to include the notion of `unicity distance
under known-plaintext attack' for both random and nonrandom
ciphers. We also define the important concept of `nondegeneracy'
for both random and nonrandom ciphers that is needed to make the
concept of unicity distance meaningful. Finally, we discuss how
random ciphers may enhance security against known-plaintext
attacks.
The following terminology in regard to cryptographic attacks has
been used in this paper, as in \cite{yuen05qph}. This terminology
is not standard, however. In the cryptography literature, what we
call statistical attacks are sometimes referred to as
ciphertext-only attacks (See, e.g., \cite{stinson}, Ch. 2) but are
also often lumped together with known-plaintext attacks.
By a \emph{ciphertext-only attack (CTA)}, we refer to the case
where the probability distribution $p(\mathbf{X}_n)$ is completely
uniform, i.e., $p(\mathbf{X}_n) =2^{-n}$ to Eve, so that her
attack cannot exploit input frequencies or correlations and must
be based only on the ciphertext in her possession. By a
\emph{statistical attack (STA)}, we refer to the case where the
probability distribution $p(\mathbf{X}_n)$ is nonuniform, so that
Eve may in principle exploit input frequencies or correlations to
launch a better attack. Such an attack is typical when the
plaintext is in a language such as English. It is also the attack
that obtains when the $\{X_i\}$ are independent and identically
distributed (i.i.d.) but each $p(X_i)$ is nonuniform. By a
\emph{known-plaintext attack (KPA) } we mean the case where Eve
knows \emph{exactly} some length $m$ of plaintext $\mathbf{x}_m$.
Finally, by a \emph{chosen-plaintext attack (CPA)}, we mean a KPA
where the data $\mathbf{x}_m$ is chosen by Eve.
In standard cryptography, one typically does not worry about
ciphertext-only attack on nonrandom ciphers. The reason is that,
under CTA, Eq.~(\ref{shannonlimit}) is satisfied with equality for
large $n$ for the designed key length $|K|=H(K)$ under a certain
`nondegeneracy' condition \cite{jkm} that is readily satisfied.
Thus, in practice, the data security is assumed to be sufficient
if $H(K)$ is chosen large enough by adjusting the key length. In
this paper, we essentially make the same assumption and,
with few exceptions, do not discuss data security per se. However,
it follows from (\ref{shannonlimit}) that no meaningful lower
bound on $H(\mathbf{X}_n|\mathbf{Y}_n)$ exists for $n \gg |K|$. A
new fundamental treatment of data security in symmetric-key
ciphers has to be developed separately. Under CTA, it is also the
case for nonrandom nondegenerate ciphers that \cite{jkm}
\begin{equation}\label{keysecurity}
H(K|\mathbf{Y}_n)=H(K),
\end{equation}
i.e., the key is
\emph{statistically independent} of the ciphertext. Thus, no
attack better than pure guessing can be launched on the key.
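The statistical independence expressed by Eq.~(\ref{keysecurity}) can be verified numerically on a toy nondegenerate cipher. The example below (our own illustration, unrelated to the ENC box of the paper) uses bitwise XOR of a uniform plaintext with a uniform key:

```python
import math
from collections import defaultdict

def cond_entropy_key_given_ciphertext(nbits):
    """H(K|Y) in bits for the toy cipher y = x XOR k with uniform plaintext x
    and uniform key k; under CTA this equals H(K) = nbits, as in Eq. (keysecurity)."""
    N = 2 ** nbits
    p_ky = defaultdict(float)                 # joint distribution p(k, y)
    for k in range(N):
        for x in range(N):
            p_ky[(k, x ^ k)] += 1.0 / (N * N)
    p_y = defaultdict(float)
    for (k, y), p in p_ky.items():
        p_y[y] += p
    return -sum(p * math.log2(p / p_y[y]) for (k, y), p in p_ky.items())

print(cond_entropy_key_given_ciphertext(3))   # ~3.0 bits = H(K)
```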
The above two results do not hold for statistical and
known-plaintext attacks. Eve can indeed launch an attack on the
key and use her resulting information on the key to get at future
and past data. In fact, it is such attacks that are the focus of
concern for standard ciphers such as the Advanced Encryption
Standard (AES). For STAs, Shannon \cite{shannon49} characterized
the security by the so-called unicity distance. The \emph{unicity
distance} $n_0$ of a cipher is the smallest input data length for
which $H(K|\mathbf{Y}_{n_{0}}) = 0$. In other words, if a
plaintext sequence of length $n_0$ is encrypted by the cipher, the
ciphertext contains enough information to fix the key (and hence,
the plaintext) uniquely -- the cipher has no information-theoretic
security. For nonrandom ciphers defined by Eq. (\ref{nonrandom}),
Shannon, in \cite{shannon49}, derived in terms of the data entropy
an estimate on $n_0$ that is independent of the cipher. This
estimate is actually \emph{not} a rigorous bound. Indeed, it can
be shown that one of the inequalities used in the derivation goes
in the wrong direction. Even so, the estimate works well
empirically for English language plaintexts, for which $n_0 \sim
25$ characters are found to be sufficient to break many ciphers.
We now consider, in some detail, security against known-plaintext
attacks. Here, a natural quantity to consider is
$H(K|\mathbf{X}_n\mathbf{Y}_n)$, since it provides a measure of
key uncertainty when both plaintext and ciphertext are known to
the attacker. Before we state the main result, we define the
notion of nondegeneracy distance. The reader can readily convince
himself that a finite unicity distance exists only if, for some
$n$, there is no \emph{redundant key use} in the cryptosystem,
i.e., no plaintext sequence $\mathbf{x}_n$ is mapped to the same
ciphertext $\mathbf{y}_n$ by more than one key value. With
redundant key use, one cannot pin down the key but it seems that
this may not enhance the system security either, and so is merely
wasteful. The exact possibilities will be analyzed elsewhere. For
now, we call a
cipher \emph{nondegenerate}
in this paper if it has no redundant key use for some finite $n$
or for $n \rightarrow \infty$.
Under the condition
\begin{equation} \label{nondegenerate}
\lim_{n \rightarrow \infty} H(\mathbf{Y}_n|\mathbf{X}_n) =
H(K),\end{equation} which is similar but not identical to the
definition of a `nondegenerate' cipher given in \cite{jkm}, one
may show that, when Eq.~(\ref{nonrandom}) also holds, one has
\begin{equation} \label{broken} \lim_{n \rightarrow \infty}
H(K|\mathbf{X}_n,\mathbf{Y}_n) = 0,
\end{equation}
so that the system is asymptotically broken under a
known-plaintext attack. More generally, for a nonrandom cipher, we
define a
\emph{nondegeneracy distance} $n_d$ to be the smallest $n$ such that
\begin{equation} \label{nondegdist}
H(\mathbf{Y}_{n}|\mathbf{X}_{n})=H(K)
\end{equation}
holds, with $n_d =\infty$ if (\ref{nondegenerate}) holds and there is no finite $n$ satisfying
(\ref{nondegdist}). Thus, a nonrandom cipher is nondegenerate in
our sense if it has a nondegeneracy distance, finite or infinite. In general, of course, the cipher
may be \emph{degenerate}, i.e., it has no nondegeneracy distance.
We can readily show (see Appendix A of \cite{yuen05qph}) that, under known-plaintext attack, a nonrandom nondegenerate
cipher is broken at data length $n=n_d$, in the sense that
\begin{equation} \label{kpabroken}H(K|\mathbf{X}_{n_d}\mathbf{Y}_{n_d})=0.
\end{equation}
More generally, for both random and nonrandom ciphers, we define
the \emph{unicity distance under known-plaintext attacks}, denoted
by $n_1$, to be the smallest integer such that
\begin{equation} \label{unicitydistKPA}
H(K|\mathbf{X}_{n_1}\mathbf{Y}_{n_1}) = 0. \end{equation} If no
such integer exists, the unicity distance under KPA is taken to be
infinite if $\lim_{n \rightarrow \infty}
H(K|\mathbf{X}_n\mathbf{Y}_n) = 0$. Thus, $n_1$ is the minimum
length of data needed to break the cipher for \emph{any} possible
known-plaintext $\mathbf{X}_n$. For a nonrandom cipher, it is
equal to the nondegeneracy distance.
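As a concrete toy example (our own illustration, not the $\alpha\eta$ construction), the simple Fibonacci LFSR below is broken under KPA at data length $|K|$: a known-plaintext segment of that length reveals the running key and hence the seed.

```python
def lfsr_bits(seed_bits, taps, n):
    """Toy Fibonacci LFSR: output the last state bit, shift in the feedback."""
    state = list(seed_bits)
    out = []
    for _ in range(n):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

def recover_seed(keystream_prefix):
    """For this register the first |K| output bits are the initial state read
    back-to-front, so n_1 = |K| known-plaintext bits pin down the seed key."""
    return list(reversed(keystream_prefix))

seed = [1, 0, 1, 1]
ks = lfsr_bits(seed, taps=[0, 3], n=8)
assert recover_seed(ks[:len(seed)]) == seed
```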
Many ciphers including the one-time pad and LFSRs (linear feedback
shift registers \cite{stinson}) have finite $n_d$. Similar to the
case of $n_d$ for nonrandom ciphers, $n_1$ for a random cipher may
not always exist. For our definition of $n_1$ to make sense for
random ciphers, we will impose a `nondegeneracy' restriction on
random ciphers: A \emph{random cipher} is said to be
\emph{nondegenerate} if and only if \emph{each} nonrandom cipher
resulting from an assignment $\mathbf{R}=\mathbf{r}$ of the
randomizer is nondegenerate. Then we say it has
\emph{information-theoretic security against known-plaintext
attacks} if
\begin{equation} \label{ITsecurityKPA}
\inf_n H(K|\mathbf{X}_n,\mathbf{Y}_n) \neq 0,
\end{equation}
i.e., if $H(K|\mathbf{X}_n,\mathbf{Y}_n)$ cannot be made
arbitrarily small whatever $n$ is. In other words, $n_1$ does not
exist. The actual level of the information-theoretic security is
quantified by the left side of (\ref{ITsecurityKPA}). One major
motivation to study random ciphers is the \emph{possibility} that
they possess such information-theoretic security. Some discussion
on this point is also available in Appendix A of \cite{yuen05qph}.
Even in the absence of information-theoretic security,
nondegenerate random ciphers can be expected (see the discussion
in Section 2.2) to have larger unicity distance $n_1$ under KPA
compared to the case where the randomization is turned off. This
would, as assumed in cryptography practice, increase the
complexity of attacking the key significantly. If
Eq.~(\ref{kpabroken}) holds when $\mathbf{X}_{n}$ is replaced by a
specific $\mathbf{x}_{n}$, $n$ defines the unicity distance
corresponding to $\mathbf{x}_{n}$. The overall unicity distance
under KPA may be defined by
\begin{equation} \label{overallunicitydistance}
\bar{n}_1= \min \left\{\, n : H(K|\mathbf{X}_n = \mathbf{x}_n, \mathbf{Y}_{n})=0 \textup{ for some } \mathbf{x}_n \,\right\}.
\end{equation}
The above result has not been given in the literature, perhaps
because $H(K|\mathbf{X}_n\mathbf{Y}_n)$ has not been used
previously to characterize known-plaintext attacks. Nevertheless,
it is assumed to be true in cryptography practice that $K$ would
be pinned down for sufficiently long $n$ in a nonrandom
`nondegenerate' cipher.
We now discuss the advantages that a random cipher provides as
compared to nonrandom ciphers. For the case of STA on the key when
the plaintext $\mathbf{X}_n$ has nonuniform but i.i.d.
statistics, the so-called \emph{homophonic substitution} method
provides complete information-theoretic security, i.e.
$H(K|\mathbf{Y}_n)=H(K)$ \cite{jkm}. The original form of
homophonic substitution involves assigning to each plaintext
symbol a number of possible \emph{sequences} of length $l$
proportional to its a priori probability in such a way that all
possible $l$-sequences are covered. Then, for every input symbol,
if one of its assigned $l$-sequences is generated at random, the
net effect is to generate $l$-sequences of plaintext with i.i.d.
uniform statistics. These sequences may be passed through a
non-degenerate cipher without revealing information on the key as
per Eq.~(\ref{keysecurity}). To put it another way, a statistical
attack has been converted to a ciphertext-only attack. A
generalized homophonic substitution that allows each symbol to be
coded into sequences of variable length is discussed in
\cite{jkm}, for which it is shown that sometimes data compression
instead of data expansion results.
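A minimal sketch of the original, fixed-length form of homophonic substitution follows; the bias $p_1=0.75$ and block length $l=2$ are arbitrary illustrative choices, assuming the symbol probability is a multiple of $2^{-l}$:

```python
import random

def homophonic_tables(p1, l):
    """Assign each bit value a number of l-bit 'homophones' proportional to
    its a priori probability, covering all 2^l sequences."""
    N = 2 ** l
    n1 = round(p1 * N)
    seqs = list(range(N))
    return {1: seqs[:n1], 0: seqs[n1:]}

def encode(bits, tables, rng):
    """Replace each input bit by one of its homophones chosen at random."""
    return [rng.choice(tables[b]) for b in bits]

rng = random.Random(0)
tables = homophonic_tables(p1=0.75, l=2)      # '1' gets 3 of the 4 sequences
data = [1 if rng.random() < 0.75 else 0 for _ in range(20000)]
coded = encode(data, tables, rng)
# each of the four 2-bit sequences now appears with probability ~1/4:
# the biased source has been turned into (approximately) uniform symbols
```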
Unfortunately, this reduction of a STA to a CTA does not work for
known-plaintext attacks. However, we emphasize that there is
\emph{no result} on random ciphers analogous to
Eq.~(\ref{kpabroken}) with $n_d$ replaced by any definite $n$
depending on the cipher, since under randomization,
Eq.~(\ref{nonrandom}), and usually (\ref{nondegdist}) also, do
not hold for any $n$. Indeed, an inspection of the defining
equation Eq.~(\ref{Gammarandomcipherdef}) for a random cipher (or
Fig.~1) suggests how a random cipher may provide greater security
against KPAs. For a given plaintext-ciphertext sequence pair,
Eq.(\ref{Gammarandomcipherdef}) suggests that one has some
residual uncertainty on the value of the keystream $(Z_1,\ldots,
Z_n)$, which does not exist for a corresponding nonrandom cipher.
On the other hand, Eq.(\ref{Gammarandomcipherdef}) refers only to
the per-symbol uncertainty of the key stream calculated without
regard to the ciphertext observed for the other symbols in the
sequence. When such correlations are taken into account, the
uncertainty on the keystream may be drastically reduced and we can
give no general quantitative assertions of information-theoretic
security. Note, however, that due to the randomization, the
unicity distance $n_1$ of a random cipher under known-plaintext
attacks can be expected to be bigger than that of any of its
nonrandom reductions. Thus, the complexity-based security would be
greater.
In fact, the general problem of attacking a random cipher has
received limited attention because they are \emph{not used in
practice} due to the associated reduction in effective bandwidth
or data rate as is evident in homophonic substitution, due to the
need for high speed random number generation, and also due to the
uncertainty on the actual input statistics needed for, e.g.,
homophonic substitution randomization. Thus, the rigorous
quantitative security of symmetric-key random ciphers against
known-plaintext attacks is not known theoretically or empirically,
although in principle random ciphers have actual and potential
advantages just discussed.
\section{Introduction}
Deep learning~(\cite{lecun2015deep}) is one of the most successful technologies in the last decade.
Most of its success comes from the use of stochastic gradient descent (SGD) algorithms~(\cite{robbins1951stochastic}) to optimize deep neural networks using first-order gradients over the network parameters obtained by a back-propagation technique for a given optimization problem.
Therefore, in parallel with the development of various network structures like residual networks~(\cite{he2016deep}), various SGD-based optimizers that can optimize the networks in a more stable and efficient manner have been pursued.
The most representative optimizer would be Adam~(\cite{kingma2014adam}), and a lot of its variants with respective features have been proposed (see the survey for details in~\cite{sun2019survey,schmidt2021descending}).
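For concreteness, a minimal single-parameter sketch of the Adam update (our own condensed reading of \cite{kingma2014adam}, not the implementation used in any experiment here) is:

```python
import math

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter: EMA estimates of the first and
    second moments of the gradient, with bias correction."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# minimize f(x) = x^2 starting from x = 1.0
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.01)
# x is now close to the minimizer 0
```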
Among these, to our knowledge, RAdam~(\cite{liu2019radam}) and AdaBelief~(\cite{zhuang2020adabelief}) have illustrated state-of-the-art (SOTA) learning performance.
One of the features of the SGD optimizers is robustness to noise in gradients, which can result for example from the use of noisy datasets with sensory errors, mislabeling~(\cite{mirylenka2017classifier,suchi2019easylabel})
and from optimization problems that require the use of estimated inputs and/or outputs like long-term dynamics learning~(\cite{chen2018neural,kishida2020deep}), reinforcement learning (RL)~(\cite{sutton2018reinforcement}), and distillation from the trained teacher(s)~(\cite{rusu2015policy,gou2021knowledge}).
This feature is essential, for example, in robot learning problems, where the available datasets can be small, making the adverse effects of noise readily apparent.
Not only that, but it has been shown empirically in~(\cite{simsekli2019tail}) and~(\cite{zhou2020towards}) that the norm of the gradient noise in both Adam and SGD has heavy tails, even in the absence of input/output noise.
In order to deal with such noise and robustly carry out efficient gradient updates, previous studies have proposed to detect and exclude the aberrant gradients affected by noise.
In particular, the work by~\cite{ilboudo2020robust} has focused on the fact that the first-order momentum used in recent Adam-like optimizers is computed with exponential moving average (EMA), which can be regarded as the mean of the normal distribution, known to be sensitive to noise.
By converting such a noise-sensitive first-order momentum into the t-momentum --- derived from the Student's t-distribution --- which is robust to noise, most of the Adam-like optimizers can acquire robustness.
Furthermore, in a follow-up work~(\cite{ilboudo2021adaptive}), the authors showed how the degrees of freedom of the t-momentum which determines the level of robustness, could be tuned automatically by combining the t-momentum algorithm with a heuristic estimation method.
This latest version of the t-momentum algorithm, called At-momentum, was then applied to a practical behavioral cloning dataset and shown to adapt to different noise ratios in the optimization problem while suppressing excessive robustness.
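The contrast between the noise-sensitive EMA and a Student's-t based momentum can be sketched as follows; this is a deliberately simplified caricature of the t-momentum idea, not the exact update rule of \cite{ilboudo2020robust} or of the proposed AdaTerm:

```python
def ema(m, g, beta=0.9):
    """Standard EMA first-order momentum, as used in Adam."""
    return beta * m + (1 - beta) * g

def robust_momentum(m, g, v, nu=1.0, beta=0.9):
    """Student's-t inspired momentum: the interpolation gain shrinks as the
    squared deviation (g - m)^2 / v of the new gradient grows, so aberrant
    gradients are automatically down-weighted."""
    w = (nu + 1.0) / (nu + (g - m) ** 2 / v)
    k = min((1 - beta) * w, 1.0)
    return m + k * (g - m)

m_e = m_r = 0.0
for g in [1.0] * 10 + [50.0]:        # ten clean gradients, then one outlier
    m_e = ema(m_e, g)
    m_r = robust_momentum(m_r, g, v=1.0)
# the EMA jumps towards the outlier, the robust momentum barely moves
```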
However, these two previous works modify only the first-order momentum, and the second-order momentum, which also usually appears in the Adam-like optimizers, is updated by the basic EMA.
\cite{ilboudo2020robust} motivated this choice by relying on the fact that the corresponding smoothness parameter $\beta_2$ is usually set to be large and therefore has the potential to reduce the effects of noise.
Unfortunately, such design induces a modeling discrepancy between the Student's t-distribution model applied for the first-order moment and the Gaussian distribution applied to the second moment.
Similarly, although the latest version makes the degrees of freedom adjustable, it is unclear whether the heuristic adjustment violates the other assumptions and/or interferes with the other parts of the algorithm.
In the end, the lack of a unified derivation of the algorithm as a noise-robust optimizer could allow potential problems to be included.
This paper therefore proposes the \textit{Adaptive T-distribution estimated robust moments} algorithm, called AdaTerm, with a unified derivation of all the distribution parameters based on their gradients for online maximum likelihood estimation of the Student's t-distribution.
AdaTerm introduces adaptive step sizes specifically chosen to turn the gradient-based update of the statistics into an interpolation between the past statistics values and the update amounts.
Such adaptive step sizes also allow the smoothness parameters $\beta$ to be common for all the involved statistics.
In addition, since the gradient of the degrees of freedom in the multi-dimensional case has been reported to not be consistent with our expectation (briefly stated, it is too small; see details in~\cite{ley2012value}), it is appropriately approximated as in the one-dimensional case where our expectation holds.
This approximated value can also be used as the upper bound.
As expected, AdaTerm can obtain the following qualitative behaviors from the above implementations: if the given gradients are considered aberrant, it reinforces the robustness while excluding the gradients; otherwise, it relaxes the robustness to facilitate updates.
Like any other optimizer, the proposed AdaTerm requires a regret bound to theoretically analyze its convergence.
However, since the appearance of the AMSGrad paper~(\cite{reddi2018convergence}), the subsequent literature has assumed the usage of AMSGrad in order to derive a regret bound, even when the target optimizer does not usually use it.
To avoid this theoretical and practical contradiction, we devise a new trick for deriving a theoretical regret bound without AMSGrad, which can be employed for other optimizers.
Our contributions in this paper are fourfold:
\begin{enumerate}
\item Unified derivation of a novel SGD algorithm that is adaptively robust to noise;
\item Easing the difficulty of tuning hyper-parameters by using a common smoothness parameter for all the distribution parameters;
\item Theoretical proof of the regret bound without the necessity of AMSGrad;
\item Numerical verification of usefulness in major test functions and typical problems (i.e. classification problems with mislabeling, long-term prediction problems, reinforcement learning, and policy distillation).
\end{enumerate}
In the last verification, we compare AdaTerm not only with the state-of-the-art algorithms AdaBelief and RAdam but also with the t-Adam variants developed in the related works.
\section{Problem statement and related works}
\label{sec:problem}
\subsection{Optimization problem solved by SGD optimizer}
Let us briefly define the optimization (minimization without loss of generality) problem that we will solve using either of the SGD optimizers.
Suppose that some input data $x$ and output data $y$ are generated according to the problem-specific (stochastic) rule, $p(x, y)$.
Then, the problem-specific minimization target, $\mathcal{L}$, is given as follows:
\begin{align}
\mathcal{L} = \mathbb{E}_{x,y \sim p(x,y)}[\ell(f(x; \theta), y)]
\end{align}
where $\ell$ denotes the loss function for each data, and $f(x; \theta)$ denotes the mapping function (e.g. from $x$ to $y$) with the parameter set $\theta$ which is optimized through this minimization (e.g. network weights and biases).
The above expectation operation can be approximated by the Monte Carlo method.
In other words, let a dataset containing $N$ pairs of $(x, y)$, $\mathcal{D} = \{(x_n, y_n)\}_{n=1}^N$, be constructed according to the problem-specific rule.
With $\mathcal{D}$, the above minimization target is replaced as follows:
\begin{align}
\mathcal{L}_{\mathcal{D}} = \cfrac{1}{|\mathcal{D}|} \sum_{x_n, y_n \in \mathcal{D}} \ell(f(x_n; \theta), y_n)
\end{align}
where $|\mathcal{D}|$ denotes the size of dataset ($N$ in this case).
We can then compute the gradient w.r.t. $\theta$, $g = \nabla_\theta \mathcal{L}_{\mathcal{D}}$, and use gradient descent to obtain the (sub)optimal $\theta$ that (locally) minimizes $\mathcal{L}_{\mathcal{D}}$.
However, if $\mathcal{D}$ is large, the above gradient computation would be infeasible due to the limitation of computational resources.
The SGD optimizer~(\cite{robbins1951stochastic}) therefore extracts a subset (a.k.a. mini-batch) at each update step $t$, $\mathcal{B}_t \subset \mathcal{D}$, and updates $\theta$ from $\theta_{t-1}$ to $\theta_t$ as follows:
\begin{align}
g_t &= \nabla_{\theta_{t-1}} \mathcal{L}_{\mathcal{B}_t}
\\
\theta_t &= \theta_{t-1} - \alpha \eta(g_t)
\end{align}
where $\alpha > 0$ denotes the learning rate, and $\eta$ represents a function used to modify $g_t$ in order to improve the learning performance.
Namely, various SGD optimizers have their own $\eta$.
For example, in the case of Adam~(\cite{kingma2014adam}), which is the most popular optimizer in recent years, $\eta$ is given with three hyper-parameters, $\beta_1 \in (0, 1)$, $\beta_2 \in (0, 1)$, and $\epsilon \ll 1$.
\begin{align}
\eta^\mathrm{Adam}(g_t) = \cfrac{m_t (1 - \beta_1^t)^{-1}}{\sqrt{v_t (1 - \beta_2^t)^{-1}} + \epsilon}
\end{align}
where
\begin{align}
m_t &= \beta_1 m_{t-1} + (1 - \beta_1) g_t
\\
v_t &= \beta_2 v_{t-1} + (1 - \beta_2) g_t^2
\end{align}
A simple interpretation of Adam is that, since $g_t$ fluctuates depending on how $\mathcal{B}_t$ is sampled, the first momentum $m_t$ smoothes $g_t$ with the past gradients to stabilize the update, and the second momentum $v_t$ scales $g_t$, which is different depending on the problem, to increase the generality.
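For concreteness, this update can be sketched as follows (a minimal NumPy sketch with a toy noisy quadratic loss of our own choosing, not a library implementation):

```python
import numpy as np

def adam_eta(g, state, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step on gradient g; state holds (m, v, t)."""
    m, v, t = state
    t += 1
    m = beta1 * m + (1 - beta1) * g          # first momentum: EMA of g
    v = beta2 * v + (1 - beta2) * g**2       # second momentum: EMA of g^2
    m_hat = m / (1 - beta1**t)               # bias corrections
    v_hat = v / (1 - beta2**t)
    return m_hat / (np.sqrt(v_hat) + eps), (m, v, t)

# minimize f(theta) = theta^2 from a noisy gradient oracle
rng = np.random.default_rng(0)
theta = np.array([5.0])
state = (np.zeros(1), np.zeros(1), 0)
for _ in range(2000):
    g = 2 * theta + rng.normal(0.0, 0.1, size=1)
    step, state = adam_eta(g, state)
    theta = theta - 1e-2 * step
```

Note how the scaling by $\sqrt{v_t}$ makes the effective step size roughly the learning rate, independently of the gradient magnitude of the problem.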
\subsection{Inaccurate datasets in practical cases}
If the above problem setting is satisfied, we can stably acquire one of the local solutions through one of the SGD optimizers, although their regret bounds may be different~(\cite{reddi2018convergence,alacaoglu2020new}).
However, in real problems, it is difficult to make datasets that follow the problem-specific rules exactly.
For example, in RL~(\cite{sutton2018reinforcement}), the optimal actions for the target tasks cannot be explicitly defined, so an agent learns the policy (i.e. mapping from state inputs to action outputs) by estimating the optimal actions based on reward values given from the tasks.
Likewise, when distilling knowledge and skills from trained large-scale models to a smaller one~(\cite{gou2021knowledge}), biases resulting from training will prevent the generation of accurate supervised signals.
As a third example, in the problem of learning dynamics from time-series data, the long-term prediction accuracy may be involved in the loss function with predicted values as inputs~(\cite{chen2018neural}).
Even in classification problems, it is not realistic to expect all the data to be correctly labelled, especially when employing (semi-)automatic annotation techniques~(\cite{suchi2019easylabel}).
All of the above examples can be interpreted as the problem of learning with an inaccurate dataset $\mathcal{\tilde{D}} = \mathcal{D} \cup \mathcal{E}$ with a noisy subset, $\mathcal{E} \nsubseteq \mathcal{D}$, caused by the estimated values, mislabeling, and aforementioned difficulties.
Please note that the optimization for $\mathcal{D}$ would be impossible if $|\mathcal{D}| < |\mathcal{E}|$ since the majority is switched; hence, we assume $|\mathcal{D}| > |\mathcal{E}|$.
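Such an inaccurate dataset can be mimicked, for instance, by flipping a minority of labels (a toy illustration; the 2-D rule and the 20\% noise ratio are our own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N, noise_ratio = 1000, 0.2                  # |E| kept below |D| (ratio < 0.5)
x = rng.normal(size=(N, 2))
y = (x[:, 0] + x[:, 1] > 0).astype(int)     # clean problem-specific rule p(x, y)
flip = rng.random(N) < noise_ratio          # noisy subset E: mislabeled pairs
y_noisy = np.where(flip, 1 - y, y)          # inaccurate dataset D~ = D u E
```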
\subsection{Related works}
The inherent noisiness of the stochastic gradient combined with the ubiquity of imperfect data in practical settings has encouraged the propositions of more robust and efficient machine learning algorithms against noisy or heavy-tailed datasets.
All these methods can be divided into two main approaches, going from methods that produce robust estimates of the loss function, to methods based on the detection and attenuation of wrong gradient updates~(\cite{gulcehre2017robust, holland2019efficient, prasad2020robust, kim2021hyadamc}).
Each approach has its own pros/cons as summarized in Table~\ref{tab:road_to_robustness}.
\begin{table*}[tb]
\caption{Pros/Cons of the two main approaches to robustness
}
\label{tab:road_to_robustness}
\centering
\begin{tabular}{l cc}
\hline\hline
& \multicolumn{2}{c}{Approach}
\\
& Robust Loss Estimation & Robust Gradient Descent
\\
\hline
Pros
& Robustness independent of batch size
& Widely applicable
\\
\hline
Cons
& Usually problem-specific
& Robustness dependent on batch size and outlier distribution
\\
& Usually requires the use of all the available data
& Relies only on estimates of the true gradient
\\
& Can be both unstable and costly in high dimensions
&
\\
\hline\hline
\end{tabular}
\end{table*}
As mentioned in the introductory section, the latter approach is the one we take in this paper.
In particular, we draw inspiration from the work by~\cite{ilboudo2020robust} which first proposed the use of the Student's t-distribution as a statistical model for the gradients of the optimization process.
Indeed, in that work, the popular EMA-based momentum at the heart of the most recent SOTA optimization algorithms and defined by $m_{t} = \beta_{1} m_{t-1} + (1 - \beta_{1}) g_{t}$ was identified with the iterative arithmetic mean $m_{t} = \frac{n - 1}{n} m_{t-1} + \frac{1}{n} g_{t}$ such that $\beta_{1} = \frac{n - 1}{n}$ and $n$ is a fixed number of recent samples independent of $t$.
This classical EMA, derived from the arithmetic mean estimator which is of Gaussian origin, was then replaced by a Student's t-based mean estimator $m_{t} = \frac{W_{t-1}}{W_{t-1} + w_t} m_{t-1} + \frac{w_t}{W_{t-1} + w_t} g_{t}$ where $w_i = (\nu + d)/(\nu + \sum_{j=1}^d \frac{(g_{i,j} - m_{i-1,j})^2}{v_{i-1,j}+\epsilon})$, and to match the fixed number of recent samples, the sum $W_t = \sum_{i=1}^t w_i$ used in the usual estimator was replaced by a decaying sum $W_t = \frac{2\beta_{1} - 1}{\beta_{1}} W_{t-1} + w_{t}$.
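The effect of this weighting can be checked numerically (a minimal sketch of the t-momentum first-moment update only, with toy values of ours):

```python
import numpy as np

def t_momentum_step(g, m, v, W, nu, beta1=0.9, eps=1e-8):
    """One t-momentum update of the first moment (sketch of Ilboudo et al. 2020)."""
    d = g.size
    w = (nu + d) / (nu + np.sum((g - m)**2 / (v + eps)))  # down-weights outliers
    m_new = (W * m + w * g) / (W + w)                     # t-based weighted mean
    W_new = (2 * beta1 - 1) / beta1 * W + w               # decaying sum of weights
    return m_new, W_new

# an aberrant gradient moves m far less than under the plain EMA
m, v, W, nu = np.zeros(3), np.ones(3), 1.0, 3.0
g_out = np.full(3, 50.0)
m_t, _ = t_momentum_step(g_out, m, v, W, nu)
m_ema = 0.9 * m + 0.1 * g_out
```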
Unfortunately, the second-order momentum $v_{t}$ is still based on the regular EMA, i.e. $v_{t} = \beta_{2} v_{t-1} + (1 - \beta_{2}) s_{t}$ where $s_{t}$ is a function of the squared gradient, e.g. $s_{t} = g_{t}^2$ for Adam~(\cite{kingma2014adam}) and $s_{t} = (g_{t}-m_{t})^2$ for AdaBelief~(\cite{zhuang2020adabelief}).
This stands in contrast to its usage in the computation of $w_{i}$ which makes the underlying assumption that $v_{t}$ is also derived from the maximum likelihood (ML) Student's t-distribution scale estimator, resulting in the unnatural blending of two statistical models.
Furthermore, the degrees of freedom $\nu$ is treated as a hyper-parameter and kept constant throughout the optimization process.
To alleviate this latter problem, the follow-up work~(\cite{ilboudo2021adaptive}) proposed to adapt and apply the direct incremental degrees of freedom estimation algorithm developed by~\cite{aeschliman2010novel} in order to automatically update the degrees of freedom.
However, this algorithm is based on an approximation of $\mathbb{E}[\log{\norm{g}^2}]$ and $\mathbb{V}ar[\log{\norm{g}^2}]$ and therefore uses a different approach than the ML approach that gave rise to the t-momentum.
In the experiment section below, we show that this difference in approach results in degrees of freedom with insufficient adaptability.
This is most likely because the direct incremental degrees-of-freedom estimation algorithm was originally developed to work with a specific estimation method for the location and scale parameters, which the ML-based t-momentum does not match.
In this paper, we tackle the lack of unified approach by requiring that all of the parameters be estimated under the ML estimation of the t-distribution.
In particular, the previous work was able to derive Student's t-based EMA by relying on the analytic solution of the maximum log-likelihood (i.e. by setting the gradient of the log-likelihood to zero and solving explicitly).
However, although such an approach can also be used to estimate a scale parameter $v_{t}$ by also requiring a fixed number of samples in the given solution, the extension to the degrees of freedom is much more difficult due to its non-linearity and the absence of a closed-form solution to the ML problem.
To avoid such problem, we therefore propose to use a gradient ascent algorithm for all of the parameters, combined with adaptive step sizes.
We then apply some further tricks to ensure that the algorithm still recovers the EMA in the Gaussian limit and does not violate the positiveness of both the scale and the degrees of freedom parameter.
Namely, when necessary, we surrogate the gradients by their upper bounds.
The details of the derivation are given in the next section.
\section{Derivation of AdaTerm}
\label{sec:derivation}
\subsection{The Student's t distribution statistical model}
Since the gradient for $\mathcal{E}$ is disturbed, this paper develops a noise-robust optimizer to achieve learning along $\mathcal{D}$ while properly detecting and excluding such disturbed gradient as noise.
Although it is possible to make the loss function $\ell$ noise-robust, e.g. as done by~\cite{ma2020normalized}, the noise-robust optimizer proposed in this paper is highly useful in that it can be employed very widely as a safety net.
In order to exclude the aforementioned disturbed gradients, we assume that the gradients $g$ can be modeled by the Student's t-distribution, which has a heavy tail and is robust to outliers, referring to the related work~(\cite{ilboudo2020robust}).
Note that this statistical model is further supported by the fact that it has been shown empirically~(\cite{simsekli2019tail}) that the norm of the gradient noise in SGD had heavy tails and furthermore that in continuous-time, the gradient's stationary distribution was found~(\cite{ziyin2021strength}) to obey a Student's t-like distribution.
Specifically, $g$ is assumed to be generated from a $d$-dimensional diagonal Student's t-distribution which is characterized by three kinds of parameters:
a location parameter $m \in \mathbb{R}^d$;
a scale parameter $v \in \mathbb{R}_{> 0}^d$;
and a degrees of freedom parameter $\nu \in \mathbb{R}_{> 0}$.
That is,
\begin{align}
g &\sim \cfrac{\Gamma(\frac{\nu + d}{2})}{\Gamma(\frac{\nu}{2})(\nu \pi)^{\frac{d}{2}} \prod_d \sqrt{v_d}}
\left (1 + \cfrac{1}{\nu}\sum_d (g_d - m_d)^2 v_d^{-1} \right )^{-\frac{\nu + d}{2}}
\nonumber \\
&=: \mathcal{T}(g \mid m, v, \nu)
\end{align}
where $\Gamma$ denotes the gamma function.
Here, $d$ corresponds to the dimension size of each subset of parameters, meaning that, following the related work~(\cite{ilboudo2020robust}) and PyTorch implementation~(\cite{paszke2017automatic}), AdaTerm is applied to a weight matrix and a bias in each layer separately.
In the paper by~\cite{ilboudo2020robust}, only $m$ has been estimated based on its maximum likelihood solution, and in a later paper~(\cite{ilboudo2021adaptive}), $\nu$ has been adjusted by a heuristic approximation of the maximum likelihood~(\cite{aeschliman2010novel}).
The present paper simultaneously estimates $m$, $v$, and $\nu$ based on their surrogated gradients to maximize the log-likelihood.
The unified derivation yields a more adaptive parameter $\nu$ (see~\ref{sec:experiments}), with the same computational complexity (i.e. $\mathcal{O}(d)$) as the other major optimizers.
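This statistical model can be written down directly; the following sketch (using only the standard library and NumPy; the probed values are ours) evaluates the log-density and illustrates both the Gaussian limit and the heavy tail:

```python
import numpy as np
from math import lgamma, pi

def log_t_pdf(g, m, v, nu):
    """Log-density of the d-dimensional diagonal Student's t-distribution T(g | m, v, nu)."""
    d = g.size
    quad = np.sum((g - m)**2 / v)  # sum_d (g_d - m_d)^2 / v_d
    return (lgamma((nu + d) / 2) - lgamma(nu / 2)
            - 0.5 * d * np.log(nu * pi) - 0.5 * np.sum(np.log(v))
            - 0.5 * (nu + d) * np.log1p(quad / nu))

zero, one = np.zeros(1), np.ones(1)
lp_limit = log_t_pdf(np.array([0.5]), zero, one, nu=1e6)        # nu -> inf: Gaussian
lp_gauss = -0.5 * np.log(2 * pi) - 0.5 * 0.25                   # N(0, 1) log-density at 0.5
lp_tail = log_t_pdf(np.array([10.0]), zero, one, nu=3.0)        # heavy tail at an outlier
lp_gauss_tail = log_t_pdf(np.array([10.0]), zero, one, nu=1e6)
```

The heavy tail is what keeps the log-likelihood (and hence its gradients) from being dominated by aberrant samples.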
\subsection{Gradients for maximum log-likelihood}
To derive AdaTerm, let us consider the problem of maximizing the log-likelihood, $\ln \mathcal{T}(g \mid m, v, \nu)$, in order to estimate the parameters $m$, $v$, and $\nu$ that can adequately model the recent $g$.
Since this model is expected to be time-varying, we simply employ a gradient ascent at each step instead of optimization by the EM algorithm~(\cite{lange1989robust,dougru2018doubly}), which comes with a high computational cost.
In addition, as will be explained later, the pure gradients of $v$ and $\nu$ do not provide the expected behavior, hence surrogated versions are introduced.
Nevertheless, we first show the pure gradient for each statistic.
To simplify the notation, the following variables are defined.
\begin{align}
s = (g - m)^2
, \
D = \cfrac{1}{d} s^\top v^{-1}
, \
\tilde{\nu} = \nu d^{-1}
, \
w_{mv} = \cfrac{\tilde{\nu} + 1}{\tilde{\nu} + D}
\nonumber
\end{align}
When $\nu$ is defined to be proportional to $d$, as in the literature~(\cite{ilboudo2020robust}), $\tilde{\nu}$ corresponds to the proportionality coefficient.
With these, the gradients w.r.t. $m$, $v$, and $\nu$ can be derived, respectively.
\begin{align}
\nabla_{m}\ln \mathcal{T}
&= - \cfrac{\nu + d}{2} \cfrac{-\nu^{-1} (g - m) v^{-1}}{1 + \nu^{-1} d D}
\nonumber \\
&= \cfrac{\nu d^{-1} + 1}{\nu d^{-1} + D} \cfrac{g - m}{2v}
\nonumber \\
&= w_{mv} \cfrac{g - m}{2v}
=: g_m
\label{eq:grad_m}\\
\nabla_{v}\ln \mathcal{T}
&= - \cfrac{1}{2v} - \cfrac{\nu + d}{2} \cfrac{-\nu^{-1} (g - m)^2 v^{-2}}{1 + \nu^{-1} d D}
\nonumber \\
&= \cfrac{1}{2v^2}\left ( \cfrac{\tilde{\nu} + 1}{\tilde{\nu} + D} s - v \right )
\nonumber \\
&= w_{mv} \cfrac{\tilde{\nu}}{2v^2 (\tilde{\nu} + 1)} \{ (s - v) + (s - Dv) \tilde{\nu}^{-1} \}
\label{eq:grad_v}\\
\nabla_{\nu}\ln \mathcal{T}
&= \cfrac{1}{2}\psi\left( \cfrac{\nu + d}{2} \right) - \cfrac{1}{2}\psi\left( \cfrac{\nu}{2} \right) - \cfrac{d}{2\nu}
\nonumber \\
&- \cfrac{1}{2} \ln (1 + \nu^{-1} dD) - \cfrac{\nu + d}{2} \{(\nu + dD)^{-1} - \nu^{-1} \}
\label{eq:grad_nu}
\end{align}
where the gradients w.r.t. $m$ and $v$ are transformed so that the response to outliers can be intuitively analyzed by $w_{mv}$.
$\psi$ denotes the digamma function.
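The role of $w_{mv}$ as an outlier detector can be checked numerically (a small sketch; the inlier/outlier gradients are toy values of ours):

```python
import numpy as np

def outlier_weight(g, m, v, nu_tilde, eps=1e-8):
    """w_mv from the gradients above: close to its maximum for inliers, near 0 for outliers."""
    D = np.mean((g - m)**2 / (v + eps))      # D = s^T v^{-1} / d
    return (nu_tilde + 1) / (nu_tilde + D)

m, v, nu_tilde = np.zeros(4), np.ones(4), 1.0
w_in = outlier_weight(np.full(4, 0.1), m, v, nu_tilde)    # typical gradient
w_out = outlier_weight(np.full(4, 10.0), m, v, nu_tilde)  # aberrant gradient
w_max = (nu_tilde + 1) / nu_tilde                         # upper bound at D = 0
```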
\subsection{Online maximum likelihood updates with adaptive step sizes}
Following the success of the EMA scheme in outlier-free optimization, we specifically choose the adaptive step sizes so that in the Gaussian limit (i.e. when $\nu \to \infty$), the gradient ascent updates of the mean $m$ and scale $v$ parameters revert to simple EMA updates. Under this requirement, we now derive the update rules and the corresponding step sizes for each parameters.
\subsubsection{The first moment}
Since $m$ is defined in the whole real space, gradient-based updates can be applied with no restriction.
The gradient ascent equation is therefore given by:
\begin{align}
m_t &= m_{t-1} + \kappa_m g_m
\nonumber \\
&= m_{t-1} + \kappa_m w_{mv} \frac{(g_t - m_{t-1})}{2v_{t-1}}
\end{align}
where $\kappa_m$ is the update step size. Since we restrict ourselves to an EMA-like update rule, we also have:
\begin{align}
m_t &= m_{t-1} + \tau_m (g_{t} - m_{t-1})
\nonumber
\end{align}
where $\tau_m$ is the interpolation factor satisfying $\tau_m \in (0, 1)$. Therefore,
\begin{align}
\tau_m &= \frac{\kappa_m w_{mv}}{2v_{t-1}} \implies 0 < \kappa_m < \frac{2v_{t-1}}{w_{mv}}
\nonumber \\
&\implies \kappa_m = \frac{2kv_{t-1}}{w_{mv}},\; k \in (0, 1) \implies \tau_m = k
\end{align}
Furthermore, for $\tilde{\nu} \to \infty$, we require $\tau_m \to (1 - \beta)$ with $\beta \in (0, 1)$ the common smoothness parameter (i.e. we recover a Gaussian model in the limit), meaning that $k \to (1 - \beta)$.
Since the interpolation ratio $\tau_m$ should be adaptive to outliers, namely, it should ultimately involve $w_{mv}$ and since $k \in (0, 1)$, in this paper, we set $k = (1 - \beta) w_{mv}\overline{w}_{mv}^{-1}$ where $w_{mv} \leq \overline{w}_{mv} = (\tilde{\nu} + 1)\tilde{\nu}^{-1}$.
With this, the adaptive step size $\kappa_m$ and the update rule for the first moment $m$ is given by:
\begin{align}
\kappa_m &= 2v_{t-1} \frac{1 - \beta}{\overline{w}_{mv}}
\end{align}
and the update rule becomes:
\begin{align}
m_t &= m_{t-1} + \kappa_m g_m = m_{t-1} + (1 - \beta) \frac{w_{mv}}{\overline{w}_{mv}} (g_t - m_{t-1})
\nonumber \\
&= (1 - \tau_m) m_{t-1} + \tau_m g_t
\end{align}
where $\tau_m = (1 - \beta) \frac{w_{mv}}{\overline{w}_{mv}}$.
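The resulting interpolation factor can be verified to recover the plain EMA in the Gaussian limit (a minimal numerical check; the deviation $D = 2$ is an arbitrary toy value):

```python
import numpy as np

def tau_m(D, nu_tilde, beta=0.9):
    """Adaptive interpolation factor tau_m = (1 - beta) * w_mv / w_bar_mv."""
    w = (nu_tilde + 1) / (nu_tilde + D)
    w_bar = (nu_tilde + 1) / nu_tilde
    return (1 - beta) * w / w_bar        # simplifies to (1 - beta) * nu_tilde / (nu_tilde + D)

# Gaussian limit: as nu_tilde -> inf, tau_m -> 1 - beta (plain EMA)
taus = [tau_m(D=2.0, nu_tilde=nt) for nt in (1.0, 10.0, 1e8)]
```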
\subsubsection{The central second moment}
Similarly, the gradient ascent update rule for the scale parameter $v$ is given by:
\begin{align}
v_t &= v_{t-1} + \kappa_v g_v
\nonumber \\
&= v_{t-1} + \kappa_v \frac{w_{mv} \tilde{\nu}}{2v_{t-1}^{2} (\tilde{\nu} + 1)} \left[ (s_{t} + \Delta s) - v_{t-1} \right]
\end{align}
where $\kappa_v$ is the update step size and $\Delta s = (s - Dv) \tilde{\nu}^{-1}$.
Again, we restrict ourselves to an EMA-like update rule:
\begin{align}
v_t &= v_{t-1} + \tau_v \left[ (s_t + \Delta s) - v_{t-1} \right]
\nonumber
\end{align}
where $\tau_v \in (0, 1)$, like $\tau_m$.
In order to preserve a limiting Gaussian model, we simply require that for $\tilde{\nu} \to \infty$, $\tau_v \to (1 - \beta)$, since we already have $\Delta s \to 0$.
Therefore, we can write:
\begin{align}
\tau_v &= \frac{\kappa_v w_{mv} \tilde{\nu}}{2v_{t-1}^{2} (\tilde{\nu} + 1)} \implies 0 < \kappa_v < \frac{2v_{t-1}^{2} (\tilde{\nu} + 1)}{w_{mv} \tilde{\nu}}
\nonumber \\
&\implies \kappa_v = \frac{2kv_{t-1}^{2} (\tilde{\nu} + 1)}{w_{mv} \tilde{\nu}},\; k \in (0, 1) \implies \tau_v = k
\end{align}
Here again, $k$ must satisfy the same conditions as in the first-moment derivation. This means that we can again set $k = (1 - \beta) w_{mv}\overline{w}_{mv}^{-1}$ which results in the following step size:
\begin{align}
\kappa_v &= 2v_{t-1}^2 (1 - \beta)
\end{align}
and the corresponding update rule:
\begin{align}
v_t &= v_{t-1} + \kappa_v g_v
\nonumber \\
&= v_{t-1} + (1 - \beta) \frac{w_{mv}}{\overline{w}_{mv}} \left[ (s_t + \Delta s) - v_{t-1} \right]
\nonumber \\
&= (1 - \tau_v) v_{t-1} + \tau_v (s_t + \Delta s)
\end{align}
where $\tau_v = \tau_m = (1 - \beta) \frac{w_{mv}}{\overline{w}_{mv}}$.
Although, as stated before, $\Delta s = (s - Dv) \tilde{\nu}^{-1} \to 0$ for $\tilde{\nu} \to \infty$, we notice that its value can be negative for finite $\tilde{\nu}$. This means that the update rule above can potentially take the scale parameter $v$ into a negative region during the learning process, which is undesired.
There are many ways to satisfy the positive constraint such as reparametrization or the mirror descent (MD) strategies.
However, in order to preserve the EMA-like interpolation directly on the parameter $v$, we employ the projected gradient method.
In particular, we use a simple gradient clipping method where we put a lower bound on $\Delta s$
\begin{align}
\Delta s := \max(\epsilon^2, (s - Dv) \tilde{\nu}^{-1})
\end{align}
where $\epsilon \ll 1$.
Such a gradient upper bound has the disadvantage that it can cause the algorithm to overestimate the scale parameter $v$.
In that case, when $v$ is larger than the exact value, $D$ and $w_{mv}$ would become smaller and larger respectively, namely, the robustness to noise may be impaired.
However, this drawback is attenuated by the fact that Adam-like optimizers scale the descent update amounts by $v$, which means that the effect of noise is expected to still be insignificant.
We also argue that the slow decrease of $v$ permitted by the gradient clipping strategy is advantageous in preventing too much robustness.
Indeed, it is possible to derive an update rule over $v$ such that the positiveness is directly built in the equation from the start as detailed in~\ref{apdx:alt_v}.
However, we found that such strategies (which include the fixed-number-of-samples-based ML analytic solution $\beta v_{t-1} + (1-\beta) w_{mv} s_t$), although having similar performance on the simple regression task described in the experiments, prove to be less effective (and, for some, unstable) on the prediction task.
As a remark, since $\tau_m = \tau_v = (1 - \beta) w_{mv}\overline{w}_{mv}^{-1}$, we adopt one common notation for both of them as $\tau_{mv} = (1 - \beta) w_{mv}\overline{w}_{mv}^{-1}$.
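Putting the two rules together, one joint update of $m$ and $v$ can be sketched as follows (a minimal sketch with toy gradients of ours; the clipped $\Delta s$ preserves $v > 0$ even for an aberrant gradient):

```python
import numpy as np

def update_m_v(g, m, v, nu_tilde, beta=0.9, eps=1e-5):
    """One AdaTerm-style interpolation update of m and v, following the rules above."""
    s = (g - m)**2
    D = np.mean(s / v)
    w = (nu_tilde + 1) / (nu_tilde + D)                   # w_mv
    w_bar = (nu_tilde + 1) / nu_tilde                     # its maximum
    tau = (1 - beta) * w / w_bar                          # tau_mv, shared by m and v
    delta_s = np.maximum(eps**2, (s - D * v) / nu_tilde)  # clipped to keep v positive
    m_new = (1 - tau) * m + tau * g
    v_new = (1 - tau) * v + tau * (s + delta_s)
    return m_new, v_new

m, v = np.zeros(3), np.full(3, 0.5)
for g in (np.array([1.0, -1.0, 0.5]), np.array([100.0, 0.0, 0.0])):  # 2nd has an outlier
    m, v = update_m_v(g, m, v, nu_tilde=1.0)
```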
\subsubsection{The degrees of freedom}
\paragraph{Simplifying the gradient}
Compared to $m$ and $v$, the update rule of $\nu$ is much more delicate to handle.
In particular, it cannot easily be identified when a simple gradient ascent update would violate the positive constraint on $\nu$.
This is mainly due to the digamma function $\psi$, hence the first step is to get rid of it.
In this paper, we use the upper and lower bounds of $\psi$ to find the upper bound of the gradient w.r.t. $\nu$.
Specifically, the upper and lower bounds of $\psi$ are given as follows:
\begin{align*}
\ln x - \cfrac{1}{x} \leq \psi(x) \leq \ln x - \cfrac{1}{2x}
\end{align*}
This leads to the following upper bound for eq.~\eqref{eq:grad_nu}:
\begin{align}
\nabla_{\nu}\ln \mathcal{T}
&\leq \cfrac{1}{2}\Biggl \{ \ln(\nu + d) - \ln 2 - \cfrac{1}{\nu + d} - \ln \nu + \ln 2 + \cfrac{2}{\nu}
\nonumber \\
&- \cfrac{d}{\nu} - \ln(\nu + dD) + \ln \nu - \cfrac{\nu + d}{\nu + dD} + 1 + \cfrac{d}{\nu} \Biggr \}
\nonumber \\
&= \cfrac{1}{2}\Biggl \{ - w_{mv} + \ln w_{mv} + 1 + \cfrac{\tilde{\nu} + 2}{\tilde{\nu} + 1}\cfrac{1}{\nu} \Biggl \}
\nonumber \\
&= \cfrac{1}{2}\Biggl \{ - w_{\tilde{\nu}} + 1 + \cfrac{\tilde{\nu} + 2}{\tilde{\nu} + 1}\cfrac{1}{\nu} \Biggl \}
\label{eq:grad_nu_m1}
\end{align}
where $w_{\tilde{\nu}} = w_{mv} - \ln w_{mv}$.
Although the use of an upper bound can impair the noise robustness in theory (as in the case of $v$), in practice, as the maximum likelihood solution for $\nu$ has been reported to be overly robust, this modification is not expected to affect the learning performance too much (see below).
\paragraph{Tackling the curse of dimensionality}
The gradient upper bound above is very simple and its behavior is easy to understand.
Notably, we found that its behavior is greatly affected by $d$, as shown in Fig.~\ref{fig:grad_nu} with $\nu = d$ (that is $\tilde{\nu} = 1$).
Note that it is not necessary to follow the literature~(\cite{ilboudo2020robust}) for setting the degrees of freedom as $\nu = \tilde{\nu}d$, but this procedure is important to provide the same level of robustness to the respective subsets of parameters regardless of their respective size.
As can be seen, when $d=1$, the gradient w.r.t. $\nu$ is negative only when $w_{mv}$ is very small corresponding to a very large deviation $D$ and the negative gradient effectively decreases $\nu$ to exclude the noisy gradients of the network's parameters $g$; otherwise, the gradient in~\eqref{eq:grad_nu_m1} becomes positive and $\nu$ is increased to suppress the robustness and make it easier to update the parameters.
Unfortunately, with $d \simeq 10,000$, which is common for a neural network's weight matrix, it is nearly zero even when $w_{mv}=1$ and negative otherwise, which means that $\nu$ can only decrease.
Such a pessimistic behavior is consistent with the non-intuitive behavior of the multivariate Student's t-distribution reported in the literature~(\cite{ley2012value}).
This problem has also been empirically confirmed in~(\cite{ilboudo2021adaptive}), where the excessive robustness was forcibly suppressed by correcting the obtained (approximated) estimate by multiplying it with $d$.
To alleviate this problem, we further consider an upper bound where $\nu$ is replaced by $\tilde{\nu}$.
In addition, since $\nu$ does not appear directly in any of the other maximum likelihood gradients, it is more natural to deal with the gradient of $\tilde{\nu}$.
The above modifications are finally summarized below.
\begin{align}
\nabla_{\tilde{\nu}}\ln \mathcal{T} &= d \nabla_{\nu}\ln \mathcal{T}
\nonumber \\
&\leq \cfrac{d}{2}\Biggl \{ - w_{\tilde{\nu}} + 1 + \cfrac{\tilde{\nu} + 2}{\tilde{\nu} + 1}\cfrac{1}{\tilde{\nu}} \Biggl \}
\nonumber \\
&= w_{\tilde{\nu}} \cfrac{d}{2}\Biggl \{ -1 + \Biggl (\cfrac{\tilde{\nu} + 2}{\tilde{\nu} + 1} + \tilde{\nu} \Biggr )\cfrac{1}{\tilde{\nu} w_{\tilde{\nu}}} \Biggl \}
=: g_{\tilde{\nu}}
\label{eq:grad_nu_m2}
\end{align}
where the second line is obtained by using $\frac{1}{\nu} \leq \frac{1}{\tilde{\nu}}$.
This new upper bound can be interpreted as an approximation of the multivariate distribution by a univariate distribution.
Although it is possible to model the distribution as a univariate distribution from the beginning of the derivation, there remains a concern that the robustness may be degraded since it would capture the average without focusing on the amount of deviation in each axis.
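The dimension dependence described above can be reproduced directly from eq.~\eqref{eq:grad_nu_m1} (a small numerical check; the probed values of $w_{mv}$ are ours):

```python
import numpy as np

def grad_nu_bound(w_mv, d, nu_tilde=1.0):
    """Upper bound of the nu-gradient, eq. (grad_nu_m1), with nu = nu_tilde * d."""
    nu = nu_tilde * d
    w_nu = w_mv - np.log(w_mv)
    return 0.5 * (-w_nu + 1 + (nu_tilde + 2) / (nu_tilde + 1) / nu)

# d = 1: positive for a moderate deviation, so nu can grow and relax robustness
g_small = grad_nu_bound(0.5, d=1)
# d = 10000: negative for the same deviation, and nearly zero even for an inlier
g_large = grad_nu_bound(0.5, d=10000)
g_large_inlier = grad_nu_bound(1.0, d=10000)
```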
\begin{figure}[tb]
\centering
\includegraphics[keepaspectratio=true,width=0.95\linewidth]{figure/grad_nu}
\caption{$w_{mv}$ vs. the surrogated gradient in eq.~\eqref{eq:grad_nu_m1} with $\nu = d$ according to several dimension sizes}
\label{fig:grad_nu}
\end{figure}
\paragraph{Update rule}
With this upper bound, we now proceed to the derivation of a proper update rule for the robustness parameter $\tilde{\nu}$.
To do so, we draw similarities with the update rules obtained previously for $m$ and $v$ and set our goal to an EMA-like equation with an adaptive smoothness parameter $\tau_{\tilde{\nu}} = (1 - \beta) w_{\tilde{\nu}}\overline{w}_{\tilde{\nu}}^{-1}$, where $w_{\tilde{\nu}} \leq \overline{w}_{\tilde{\nu}}$.
That is, we want the following update rule:
\begin{align}
\tilde{\nu}_{t} &= (1 - \tau_{\tilde{\nu}}) \tilde{\nu}_{t-1} + \tau_{\tilde{\nu}} \lambda_{t}
\end{align}
where $\lambda_{t}$ is some function of $w_{mv}$.
To derive such an equation, we focus first on $\tau_{\tilde{\nu}}$ and in particular on finding the maximum value of $w_{\tilde{\nu}}$, $\overline{w}_{\tilde{\nu}}$.
We start by noticing that $w_{\tilde{\nu}}$ is a convex function over $w_{mv}$ with a minimum value at $w_{mv}=1$.
The maximum value is therefore determined when $w_{mv}$ is the largest ($w_{mv} \gg 1$) or smallest ($w_{mv} \ll 1$).
The largest $w_{mv}$ value has already been derived as $\overline{w}_{mv}$, but the smallest value cannot exactly be given since $w_{mv} \to 0$ when $D \to \infty$.
Therefore, instead of the exact minimum value, we employ the tiny value of float $\epsilon_{\mathrm{float}}$, which is the closest to zero numerically (in the case of float32, $w_{\tilde{\nu}}(w_{mv} = \epsilon_{\mathrm{float}}) \simeq 87.3365$).
In summary, the maximum $w_{\tilde{\nu}}$, $\overline{w}_{\tilde{\nu}}$, can be defined as follows:
\begin{align}
\overline{w}_{\tilde{\nu}} = \max(\overline{w}_{mv} - \ln(\overline{w}_{mv}), \epsilon_{\mathrm{float}} - \ln(\epsilon_{\mathrm{float}}))
\end{align}
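For instance, with float32 and $\tilde{\nu} = 1$ (so that $\overline{w}_{mv} = 2$), the second term dominates (a quick check of the quoted constant):

```python
import numpy as np

w_bar_mv = 2.0                            # (nu_tilde + 1)/nu_tilde with nu_tilde = 1
tiny = float(np.finfo(np.float32).tiny)   # smallest positive normal float32
w_bar_nu = max(w_bar_mv - np.log(w_bar_mv), tiny - np.log(tiny))
```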
If we design the step size with $\overline{w}_{\tilde{\nu}}$, we can obtain an interpolation update rule for $\tilde{\nu}$ similar to $m$ and $v$.
However, although the minimum value of $\tilde{\nu}$ is expected to be positive, an excessive robustness could inhibit the learning process.
On that account, it is desirable that the user retain some level of control on how much robustness the algorithm is allowed to achieve.
Therefore, our final trick is to transform $\tilde{\nu} = \underline{\tilde{\nu}} + \Delta \tilde{\nu}$ with a user-specified minimum value, $\underline{\tilde{\nu}} > 0$, and the deviation, $\Delta \tilde{\nu} > 0$, automatically controlled by the algorithm.
With this transformation and the appropriate step size $\kappa_{\Delta \tilde{\nu}}$ in a gradient ascent applied on $\Delta \tilde{\nu}$, whose surrogate gradient is the same as eq.~\eqref{eq:grad_nu_m2}, the update rule can be given as follows:
\begin{align}
\kappa_{\Delta \tilde{\nu}} &= 2 \Delta \tilde{\nu}_{t-1} \cfrac{1 - \beta}{d \overline{w}_{\tilde{\nu}}}
\\
\Delta \tilde{\nu}_t &= \Delta \tilde{\nu}_{t-1} + \kappa_{\Delta \tilde{\nu}} g_{\tilde{\nu}}
\nonumber \\
&= (1 - \tau_{\tilde{\nu}}) \Delta \tilde{\nu}_{t-1}
+ \tau_{\tilde{\nu}} \Biggl ( \cfrac{\tilde{\nu}_{t-1} + 2}{\tilde{\nu}_{t-1} + 1} + \tilde{\nu}_{t-1} \Biggr )
\cfrac{\Delta \tilde{\nu}_{t-1}}{\tilde{\nu}_{t-1} w_{\tilde{\nu}}}
\nonumber
\end{align}
Then, by adding $\underline{\tilde{\nu}} + \epsilon$ to both sides of the update rule for $\Delta \tilde{\nu}$, we get the update rule for $\tilde{\nu}$ directly.
\begin{align}
\tilde{\nu}_{t} &= (1 - \tau_{\tilde{\nu}}) \tilde{\nu}_{t-1} + \tau_{\tilde{\nu}} \lambda_{t} \\
\tau_{\tilde{\nu}} &= (1 - \beta) w_{\tilde{\nu}}\overline{w}_{\tilde{\nu}}^{-1}
\\
\lambda_{t} &= \Biggl ( \cfrac{\tilde{\nu}_{t-1} + 2}{\tilde{\nu}_{t-1} + 1} + \tilde{\nu}_{t-1} \Biggr )
\cfrac{\tilde{\nu}_{t-1} - \underline{\tilde{\nu}}}{\tilde{\nu}_{t-1} w_{\tilde{\nu}}}
+ \underline{\tilde{\nu}} + \epsilon
\end{align}
Note that the minimum value of $\Delta \tilde{\nu} = \tilde{\nu} - \underline{\tilde{\nu}}$ is given by $\epsilon$ so that $\Delta \tilde{\nu} > 0$ is satisfied.
This process also prevents the first term of $\lambda$ from becoming $0$ and stopping the update of $\tilde{\nu}$.
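The qualitative behavior of this update, relaxing robustness for typical gradients and reinforcing it for aberrant ones, can be sketched as follows (the probed values of $w_{mv}$ and $\tilde{\nu}$ are toy choices of ours):

```python
import numpy as np

def update_nu_tilde(nu_tilde, w_mv, nu_min=1.0, beta=0.9, eps=1e-5):
    """One interpolation update of nu_tilde following the rules above."""
    w_nu = w_mv - np.log(w_mv)
    w_bar_mv = (nu_tilde + 1) / nu_tilde
    tiny = float(np.finfo(np.float32).tiny)
    w_bar_nu = max(w_bar_mv - np.log(w_bar_mv), tiny - np.log(tiny))
    tau = (1 - beta) * w_nu / w_bar_nu
    lam = (((nu_tilde + 2) / (nu_tilde + 1) + nu_tilde)
           * (nu_tilde - nu_min) / (nu_tilde * w_nu) + nu_min + eps)
    return (1 - tau) * nu_tilde + tau * lam

nu_inlier = update_nu_tilde(2.0, w_mv=1.0)    # typical gradient: robustness relaxed
nu_outlier = update_nu_tilde(2.0, w_mv=0.01)  # aberrant gradient: robustness reinforced
```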
\subsection{Algorithm}
Finally, the update amount of the optimization parameters $\theta$ for AdaTerm is given as follows:
\begin{align}
\eta^\mathrm{AdaTerm}(g_t) = \cfrac{m_t (1 - \beta^t)^{-1}}{\sqrt{v_t (1 - \beta^t)^{-1}}}
\label{eq:adaterm_update_rule}
\end{align}
Unlike in Adam~(\cite{kingma2014adam}), the small constant $\epsilon$ usually added to the denominator is removed, since $\sqrt{v} \geq \epsilon$ holds in our implementation.
Note that AdaBelief~(\cite{zhuang2020adabelief}) could also remove it, but it is still added in the AdaBelief algorithm; in any case, this small constant has little effect in AdaBelief, since the minimum of its scaling factor is, in comparison, much larger.
The pseudo-code of AdaTerm is summarized in Alg.~\ref{alg:adaterm}.
The regret bound is also analyzed in~\ref{apdx:proofs} with an approach different from the literature~(\cite{reddi2018convergence}): we combine the trick proposed by \cite{alacaoglu2020new} with a new trick based on Lemma~\ref{lem:pp_ineq}, which removes the need for the AMSGrad assumption in the regret analysis.
\begin{algorithm}[tb]
\caption{AdaTerm optimizer}
\label{alg:adaterm}
\begin{algorithmic}[1]
\STATE{Set $\alpha > 0$ ($10^{-3}$ is the default value)}
\STATE{Set $\beta \in (0, 1)$ ($0.9$ is the default value)}
\STATE{Set $\epsilon \ll 1$ ($10^{-5}$ is the default value)}
\STATE{Set $\underline{\tilde{\nu}} > 0$ ($1$ is the default value)}
\STATE{Set $d$ as the dimension size of each subset of parameters}
\STATE{Initialize $\mathcal{\tilde{D}}$, $\theta_1$, $m_0 \gets 0$, $v_0 \gets \epsilon^2$, $\tilde{\nu}_0 \gets \underline{\tilde{\nu}} + \epsilon$, $t \gets 0$}
\WHILE{$\theta_t$ not converged}
\STATE{// Compute gradient}
\STATE{$t \gets t + 1$}
\STATE{$g_t = \nabla_{\theta_{t}} \mathcal{L}_{\mathcal{B}_t}$, $\mathcal{B}_t \sim \mathcal{\tilde{D}}$}
\STATE{// Compute index of outlier}
\STATE{$s = (g_t - m_{t-1})^2$}
\STATE{$D = d^{-1} s^\top v_{t-1}^{-1}$}
\STATE{// Compute adaptive step sizes}
\STATE{$w_{mv} = (\tilde{\nu}_{t-1} + 1)(\tilde{\nu}_{t-1} + D)^{-1}$}
\STATE{$\overline{w}_{mv} = (\tilde{\nu}_{t-1} + 1)\tilde{\nu}_{t-1}^{-1}$}
\STATE{$w_{\tilde{\nu}} = w_{mv} - \ln(w_{mv})$}
\STATE{$\overline{w}_{\tilde{\nu}} = \max(\overline{w}_{mv} - \ln(\overline{w}_{mv}), \epsilon_{\mathrm{float}} - \ln(\epsilon_{\mathrm{float}}))$}
\STATE{$\tau_{mv} = (1 - \beta) \cfrac{w_{mv}}{\overline{w}_{mv}}$, $\tau_{\tilde{\nu}} = (1 - \beta) \cfrac{w_{\tilde{\nu}}}{\overline{w}_{\tilde{\nu}}}$}
\STATE{// Compute update amounts}
\STATE{$\Delta s = \max(\epsilon^2, (s - D v_{t-1})\tilde{\nu}_{t-1}^{-1})$}
\STATE{$\lambda = \left ( \cfrac{\tilde{\nu}_{t-1} + 2}{\tilde{\nu}_{t-1} + 1} + \tilde{\nu}_{t-1} \right ) \cfrac{\tilde{\nu}_{t-1} - \underline{\tilde{\nu}}}{\tilde{\nu}_{t-1} w_{\tilde{\nu}}} + \underline{\tilde{\nu}} + \epsilon$}
\STATE{// Update parameters}
\STATE{$m_t = (1 - \tau_{mv}) m_{t-1} + \tau_{mv} g_t$}
\STATE{$v_t = (1 - \tau_{mv}) v_{t-1} + \tau_{mv} (s + \Delta s)$}
\STATE{$\tilde{\nu}_t = (1 - \tau_{\tilde{\nu}}) \tilde{\nu}_{t-1} + \tau_{\tilde{\nu}} \lambda$}
\STATE{$\theta_{t+1} = \theta_{t} - \alpha \cfrac{m_t (1 - \beta^t)^{-1}}{\sqrt{v_t (1 - \beta^t)^{-1}}}$}
\ENDWHILE
\end{algorithmic}
\end{algorithm}
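For concreteness, one step of Alg.~\ref{alg:adaterm} for a single subset of parameters can be sketched in plain NumPy as follows. This is an illustrative sketch mirroring the pseudo-code above, not the reference implementation; the function name and its return convention are ours.

```python
import numpy as np

def adaterm_step(g, m, v, nu, t, alpha=1e-3, beta=0.9, eps=1e-5, nu_min=1.0):
    """One AdaTerm update for a d-dimensional parameter subset (cf. Alg. 1).

    g: gradient; m, v: first moment and scale estimates; nu: robustness
    indicator (nu-tilde); t: step count. Returns the step alpha * eta and
    the updated state (m, v, nu).
    """
    d = g.size
    eps_float = np.finfo(float).eps
    # Index of outlier-ness
    s = (g - m) ** 2
    D = np.sum(s / v) / d
    # Adaptive step sizes
    w_mv = (nu + 1.0) / (nu + D)
    wbar_mv = (nu + 1.0) / nu
    w_nu = w_mv - np.log(w_mv)
    wbar_nu = max(wbar_mv - np.log(wbar_mv), eps_float - np.log(eps_float))
    tau_mv = (1.0 - beta) * w_mv / wbar_mv
    tau_nu = (1.0 - beta) * w_nu / wbar_nu
    # Update amounts
    ds = np.maximum(eps ** 2, (s - D * v) / nu)
    lam = ((nu + 2.0) / (nu + 1.0) + nu) * (nu - nu_min) / (nu * w_nu) \
        + nu_min + eps
    # Parameter updates with bias correction
    m = (1.0 - tau_mv) * m + tau_mv * g
    v = (1.0 - tau_mv) * v + tau_mv * (s + ds)
    nu = (1.0 - tau_nu) * nu + tau_nu * lam
    bc = 1.0 - beta ** t
    delta = alpha * (m / bc) / np.sqrt(v / bc)
    return delta, m, v, nu
```

Applying `theta -= delta` then reproduces the last line of the while loop in Alg.~\ref{alg:adaterm}.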
\subsection{Behavior analysis}
\subsubsection{Convergence analysis}
Our convergence proof adopts the approach highlighted in~(\cite{alacaoglu2020new}).
As such, we start by enunciating the same assumptions:
\begin{assumption} \label{asmp:necessary_assumptions}
Necessary assumptions:
\begin{enumerate}
\item $\mathcal{F} \subset \mathbb{R}^d$ is a compact convex set
\item $\mathcal{L}_{\mathcal{B}_t}: \mathcal{F} \to \mathbb{R}$ is a convex lower semicontinuous (lsc) function
\item $\mathcal{F}$ has a bounded diameter, i.e. $D = \underset{x, y \in \mathcal{F}}{\max} \norm{x - y}_{\infty}$, and $G = \underset{t \in [T]}{\max} \norm{g_{t}}_{\infty}$
\end{enumerate}
\end{assumption}
Then, the following convergence result, whose proof can be found in~\ref{apdx:proofs}, can be stated
\begin{theorem} \label{th:convergence_bound}
Let $\tau_{t}$ be the value of $\tau_{mv}$ at time step $t$ and let $\underline{\tau} \leq \tau_{t}$, $\forall t$.
Under Assumption~\ref{asmp:necessary_assumptions}, and with $\beta < 1$, $\alpha_{t} = \alpha/\sqrt{t}$, AdaTerm achieves a regret $R_{T} = \sum\limits_{t=1}^{T} \mathcal{L}_{\mathcal{B}_t}(\theta_t) - \mathcal{L}_{\mathcal{B}_t}(\theta^*)$ such that:
\begin{align}
\begin{split}
R_{T} &\leq \left. \frac{D^2\sqrt{T}}{4\tau_{T}\alpha} \sum\limits_{i=1}^{d} v_{T,i}^{1/2} + \left[ \frac{\underline{\tau}^2 + 1 - (\beta + \underline{\tau})}{2\underline{\tau}^2} \right] \sum\limits_{t=1}^{T-1} \frac{D^2}{\alpha_{t}} \sum\limits_{i=1}^{d} v_{t,i}^{1/2} \right. \\
&\left.\; + \left[ \frac{(1-\beta)^2 \alpha}{\epsilon\underline{\tau}^2\sqrt{T}} \right] \sum\limits_{k=1}^{T-1} \sum\limits_{i=1}^{d} (1 - \underline{\tau})^{T-k} g^2_{k,i} \right. \\
&\;+ \left[ \frac{\underline{\tau} (1 - \underline{\tau}) + (1-\beta)}{2\underline{\tau}^2} \right]\left[ \frac{(1-\beta)^2 \alpha}{\epsilon\underline{\tau}^2} \right] \sqrt{1 + \log (T-1)} \sum\limits_{i=1}^d \norm{g_{1:T-1,i}^2}_2
\end{split}
\end{align}
\end{theorem}
\begin{corollary}[Non robust case regret bound] \label{corol:non_robust_convergence_bound}
If $\underline{\tilde{\nu}} \to \infty$, then $\underline{\tau} \to \mathrm{constant} = (1-\beta)$. Then, the regret becomes:
\begin{align}
\begin{split}
R_{T} &\leq \left. \frac{D^2\sqrt{T}}{4(1-\beta)\alpha} \sum\limits_{i=1}^{d} v_{T,i}^{1/2} + \frac{1}{2} \sum\limits_{t=1}^{T-1} \frac{D^2}{\alpha_{t}} \sum\limits_{i=1}^{d} v_{t,i}^{1/2} \right. \\
&\left.\; + \frac{\alpha}{\epsilon\sqrt{T}} \sum\limits_{k=1}^{T-1} \sum\limits_{i=1}^{d} \beta^{T-k} g^2_{k,i} \right. \\
&\;+ \frac{(1+\beta)\alpha\sqrt{1 + \log (T-1)}}{2(1-\beta)\epsilon} \sum\limits_{i=1}^d \norm{g_{1:T-1,i}^2}_2
\end{split}
\end{align}
\end{corollary}
Note the similarity between this regret bound and the ones derived by~\cite{reddi2018convergence} and by~\cite{zhuang2020adabelief} using AMSGrad. In particular, this regret can be bounded by $\mathcal{O}(G\sqrt{T})$, so the worst-case dependence of the regret on $T$ is still $\mathcal{O}(\sqrt{T})$ despite not using AMSGrad.
We emphasize that this approach to the regret bound is not specific to AdaTerm, but can be used to bound the regret of other momentum based optimizers, including Adam and AdaBelief.
\subsubsection{Qualitative comparison with related work and robustness analysis}
The difference between AdaTerm and the related work~(\cite{ilboudo2020robust}) is threefold.
First, for $m$, the difference lies in how the interpolation between the value before the update and the update amount is performed.
In the past work, the discounted sum of $w_{mv}$ was used, but in AdaTerm, $\overline{w}_{mv}$ is used instead, which eliminates the need to store the discounted sum in memory.
Secondly, $v$ is now updated robustly depending on $w_{mv}$, ensuring that the observation of aberrant gradients $g$ would not cause a sudden increase of $v$ and inadvertently loosen the threshold for other aberrant update amounts in the next and subsequent steps.
In addition, this update is expected to coordinate the robustness among the axes.
That is, if an anomaly is detected only on a particular axis, $\Delta s$ on that axis becomes larger, making $v$ on that axis larger and mitigating the anomaly detection.
Conversely, if anomalies are detected on most axes, $\Delta s \simeq \epsilon^2$ continues to exclude the anomalies as described above.
Such adaptive behaviors would yield stable updates even if $\beta$ is smaller than the conventional $\beta_2$ ($= 0.999$ in most optimizers).
Thirdly, the robustness indicator $\tilde{\nu}$ is increased when the deviation $D$ is small (i.e. there is no aberrant value), in which case the first term of $\lambda_{t}$ becomes larger and $\kappa_{\Delta \tilde{\nu}} g_{\tilde{\nu}}$ becomes positive.
However, the speed of this increase is limited by $\overline{w}_{\tilde{\nu}}$, thanks to its inclusion in the step size $\kappa_{\Delta \tilde{\nu}}$.
On the other hand, if an aberrant value is observed, $w_{\tilde{\nu}} \gg 1$ and $\lambda$ gets smaller, such that $\kappa_{\Delta \tilde{\nu}} g_{\tilde{\nu}}$ becomes negative.
The decay of $\lambda$ towards $\underline{\tilde{\nu}}$ then happens more quickly than the corresponding increase.
An intuitive visualization of the described behavior obtained by the AdaTerm equations can be found in~\ref{apdx:adaterm_algo_vis}.
Thus, although the robustness tuning mechanism substitutes an upper bound for its gradient, it still behaves conservatively and can be expected to retain excellent robustness to noise.
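To make this asymmetry concrete, the target value $\lambda$ can be evaluated for a clean and an aberrant deviation index $D$ using the formulas above. This is an illustrative numerical sketch; the particular values of $\tilde{\nu}$ and $D$ are arbitrary.

```python
import numpy as np

def nu_target(nu, D, nu_min=1.0, eps=1e-5):
    """Target value lambda for nu-tilde given the deviation index D (cf. Alg. 1)."""
    w_mv = (nu + 1.0) / (nu + D)
    w_nu = w_mv - np.log(w_mv)
    return ((nu + 2.0) / (nu + 1.0) + nu) * (nu - nu_min) / (nu * w_nu) \
        + nu_min + eps

nu = 2.0
lam_clean = nu_target(nu, D=1.0)      # gradient consistent with current stats
lam_outlier = nu_target(nu, D=100.0)  # aberrant gradient detected
```

With these values, `lam_clean > nu`, so $\tilde{\nu}$ is pulled upward, while `lam_outlier < nu`, so $\tilde{\nu}$ decays toward $\underline{\tilde{\nu}}$, matching the behavior described above.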
Finally, if $\underline{\tilde{\nu}} \to \infty$, the robustness is lost by design and AdaTerm essentially matches a slightly different version of AdaBelief, with performance difference due to the simplification of the hyper-parameters and the variance computation mechanism (AdaBelief estimates $\mathbb{E}[(g_t - m_t)^2]$, while AdaTerm estimates $\mathbb{E}[(g_t - m_{t-1})^2]$ which is the usual variance estimator).
Therefore, in problems where AdaBelief would be effective, AdaTerm would perform effectively as well.
\section{Experiments}
\label{sec:experiments}
\subsection{Analysis with test functions}
\begin{figure}[tb]
\centering
\includegraphics[keepaspectratio=true,width=0.95\linewidth]{figure/comp_norm}
\caption{Noise ratio vs. L2 norm between the converged point and the analytically-optimal point}
\label{fig:comp_norm}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[keepaspectratio=true,width=0.95\linewidth]{figure/comp_dof}
\caption{Noise ratio vs. degrees of freedom $\nu$}
\label{fig:comp_dof}
\end{figure}
Before solving more practical benchmark problems, we analyze the behavior of AdaTerm through minimization of typical test functions.
In other words, we aim to find the two-dimensional point where each potential field is minimized, relying only on the gradients of the potential fields.
To analyze the robustness to noise, uniformly distributed noise ($\in (-0.1, 0.1)$) at a specified ratio is added to the point coordinates to be optimized.
The results are summarized in Figs.~\ref{fig:comp_norm} and~\ref{fig:comp_dof} (details are in~\ref{apdx:test}).
Fig.~\ref{fig:comp_norm}, which shows the error norm from the analytically-optimal point, shows that the convergence accuracy of AdaTerm on the McCormick function was not good at low noise ratios, probably due to a too-large learning rate.
In the other cases, however, AdaTerm was able to maintain its normal update performance while mitigating the effect of noise.
The performance of Adam~(\cite{kingma2014adam}) was significantly degraded as soon as some noise was added, indicating its sensitivity to noise.
On the Michalewicz function, At-Adam~(\cite{ilboudo2021adaptive}) judges the steep gradients near the optimal value to be due to noise and tends to exclude them, preventing the optimal solution from being obtained.
This result implies that the automatic mechanism for tuning $\nu$ employed by At-Adam has insufficient robustness adaptability to noise.
Indeed, Fig.~\ref{fig:comp_dof}, which plots the final $\nu$, shows that $\nu$ with At-Adam converged to a nearly constant value independent of the noise ratio.
In contrast, in AdaTerm, $\nu$ was inversely proportional to the noise ratio, and succeeded in achieving an intuitively natural behavior of increasing $\nu$ when the noise is rare, and decreasing $\nu$ when the noise became more frequent (high noise ratio).
\subsection{Robustness display on simple regression task}
\paragraph{Problem settings}
Following the same process as in~(\cite{ilboudo2020robust}), we consider the problem of fitting a ground truth function $f(x) = x^2 + \ln(x+1) + \sin(2\pi x) \cos(2\pi x)$ given some noisy observations $y = f(x) + \zeta$ with $\zeta$ given by:
\begin{align}
\zeta &\sim \mathcal{T}(1, 0, 0.05) \mathrm{Bern} \left ( \frac{p}{100} \right ), \ p = 0, 10, 20, \ldots, 100
\end{align}
where $\mathcal{T}(1, 0, 0.05)$ designates a Student's t-distribution with degrees of freedom $\nu_{\zeta} = 1$, location $\mu_{\zeta} = 0$ and scale $v_{\zeta} = 0.05$, and $\mathrm{Bern}(p/100)$ is a Bernoulli distribution with $p/100$ as its parameter.
$50$ trials with different random seeds are conducted for each noise ratio $p$ and each optimization method, using $40000$ $(x, y)$ pairs sampled as observations and split into batches of size $10$.
The model used is a fully-connected neural network with $5$ linear layers, each composed of $50$ neurons, equipped with the ReLU activation function~(\cite{relu}) for all the hidden layers.
The training and the test loss functions are the Mean Squared Error (MSE) applied on $(\hat{y}, y)$ and $(\hat{y}, f(x))$ respectively, where $\hat{y}$ is the network's estimate given $x$.
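The data generation described above can be sketched as follows. This is a sketch of the observation model only; the function and parameter names (and the seeded generator) are ours, not taken from the original experiment code.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Ground truth function from the problem settings."""
    return x ** 2 + np.log(x + 1.0) + np.sin(2 * np.pi * x) * np.cos(2 * np.pi * x)

def noisy_observation(x, p, nu=1.0, loc=0.0, scale=0.05):
    """y = f(x) + zeta, with zeta ~ T(nu, loc, scale) * Bern(p / 100)."""
    zeta = loc + scale * rng.standard_t(nu, size=np.shape(x))
    mask = rng.random(np.shape(x)) < p / 100.0
    return f(x) + zeta * mask
```

At $p = 0$ the observations are noise-free, while at $p = 100$ every observation is corrupted by heavy-tailed (Cauchy-like, since $\nu_{\zeta}=1$) noise.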
\paragraph{Result}
The test loss against the noise ratio $p$ is plotted in Fig.~\ref{fig:result_regr}.
As can be seen, AdaTerm performed even more robustly than t-Adam and At-Adam and was able to keep its prediction loss almost constant across all noise ratios.
The effect of the batch size on the learning performance is also studied and can be found in~\ref{apdx:batch_size_effect}.
\begin{figure}[tb]
\centering
\includegraphics[keepaspectratio=true,width=0.95\linewidth]{figure/loss_nbatch10}
\caption{Test loss on the Regression task against the noise ratio $p$}
\label{fig:result_regr}
\end{figure}
\subsection{Configurations of benchmark problems}
After verifying the optimization efficiency and robustness on some test functions and a simple regression task, we now perform four practical benchmark problems to compare the performance of typical optimizers, including the proposed AdaTerm.
Detailed setups (e.g. network architectures) can be found in~\ref{apdx:learning}.
\subsubsection{Classification of mislabeled CIFAR-100}
The first problem is an image classification problem with CIFAR-100 dataset.
As artificial noise, a proportion of the labels (0~\% or 10~\%) in the training data is randomized to anything other than the true labels.
As a simple data augmentation, random horizontal flips with 4-pixel padding are applied only during training.
The loss function is the cross-entropy.
In order to stabilize the learning performance, we introduce a label smoothing technique~(\cite{szegedy2016rethinking}).
The degree of smoothing is set to 20\%, following the literature~(\cite{lukasik2020does}).
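For reference, label smoothing with this setting can be sketched as follows. This is a minimal sketch of the standard technique; the function name and its defaults are illustrative, not the exact training code.

```python
import numpy as np

def smoothed_targets(labels, num_classes=100, smoothing=0.2):
    """Label smoothing: mix the one-hot targets with the uniform distribution."""
    onehot = np.eye(num_classes)[labels]
    return (1.0 - smoothing) * onehot + smoothing / num_classes
```

With 20\% smoothing on CIFAR-100, the true class receives probability $0.8 + 0.2/100 = 0.802$ and every other class receives $0.002$.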
\subsubsection{Long-term prediction of robot locomotion}
The second problem is a motion prediction problem.
The dataset used was borrowed from the literature~(\cite{kobayashi2020q}) and contains 150 trajectories as training data, 25 trajectories as validation data, and 25 trajectories as test data, all gathered from a hexapod locomotion task with 18 observed joint angles and 18 reference joint angles.
An agent continually predicts the states within the given time interval (1 and 30 steps) from the initial true state and subsequent predicted states.
Therefore, the predicted states used as inputs would deviate from the true ones and become noise for learning.
For the loss function, instead of the mean squared error (MSE), which is commonly used, we employ the negative log likelihood (NLL) of the predictive distribution, which allows the scale difference of each state to be considered internally.
A NLL is computed at each prediction step, then their sum is used as the loss function.
Because of the high cost of back-propagation through time (BPTT) over the entire trajectory, truncated BPTT (30 steps on average)~(\cite{puskorius1994truncated,tallec2017unbiasing}) is employed.
\subsubsection{Reinforcement learning on Pybullet simulator}
The third problem is RL simulated by Pybullet engine with OpenAI Gym~(\cite{brockman2016openai,coumans2016pybullet}).
The tasks to be solved are Hopper and Ant, both of which require an agent to move as straight as possible on flat terrain.
As mentioned before, RL relies on estimated values in the absence of true signals, which can easily introduce noise into the training.
The RL algorithm implemented is an actor-critic algorithm based on the literature~(\cite{kobayashi2021proximal}).
The agent only learns after each episode using experience replay.
This experience replay samples 128 batches after each episode, and its buffer size is set to be small enough (i.e. 10,000) to reveal the influence of noise.
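The replay setup above can be sketched as follows. This is a minimal illustration of the sampling scheme, not the actual implementation; the batch size per sampled batch is not specified in the text, so `batch_size` here is an assumed placeholder.

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal replay buffer; a small capacity (10,000) reveals noise effects."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are evicted

    def add(self, transition):
        self.buffer.append(transition)

    def sample_batches(self, num_batches=128, batch_size=32):
        """Called once after each episode: draws `num_batches` random batches."""
        data = list(self.buffer)
        return [random.sample(data, min(batch_size, len(data)))
                for _ in range(num_batches)]
```

After each episode, the agent would call `sample_batches()` and perform one update per batch.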
\subsubsection{Policy distillation by behavioral cloning}
The fourth problem is policy distillation~(\cite{rusu2015policy}).
The three policies for Ant that were properly learned in the RL problem above were taken as experts, and 10 trajectories of state-action pairs were collected from each of them.
As an amateur, we also collected three trajectories from one policy that fell into a local solution.
In the dataset constructed from these trajectories, not only the amateur trajectories but also the expert trajectories are not always truly optimal, and thus they introduce noise.
The loss function is the negative log likelihood of the policy according to behavioral cloning~(\cite{bain1995framework}), which is the simplest imitation learning method.
In addition, a weight decay is added to prevent the small networks (i.e. the distillation target) from over-fitting particular behaviors.
\subsection{Results of benchmark problems}
\begin{table*}[tb]
\caption{Experimental results:
the best in all the results for each benchmark is written in bold;
the numbers in parentheses denote standard deviations.
}
\label{tab:exp_result}
\centering
\begin{tabular}{l cc cc cc cc}
\hline\hline
& \multicolumn{2}{c}{Classification}
& \multicolumn{2}{c}{Prediction}
& \multicolumn{2}{c}{RL}
& \multicolumn{2}{c}{Distillation}
\\
& \multicolumn{2}{c}{Accuracy}
& \multicolumn{2}{c}{MSE at final prediction}
& \multicolumn{2}{c}{The sum of rewards}
& \multicolumn{2}{c}{The sum of rewards}
\\
Method & 0~\% & 10~\% & 1 step & 30 steps & Hopper & Ant & w/o amateur & w/ amateur
\\
\hline
Adam
& 0.7205
& 0.6811
& 0.0321
& 1.1591
& 1539.06
& 863.31
& 1686.94
& 1401.01
\\
& (4.33e-3)
& (4.93e-3)
& (4.48e-4)
& (1.04e-1)
& (5.64e+2)
& (4.01e+2)
& (2.34e+2)
& (2.39e+2)
\\
\hline
AdaBelief
& 0.7227
& 0.6799
& 0.0320
& 1.2410
& 1152.10
& 692.68
& 1731.51
& 1344.23
\\
& (5.56e-3)
& (3.94e-3)
& (5.67e-4)
& (1.34e-1)
& (5.48e+2)
& (1.17e+2)
& (1.88e+2)
& (2.75e+2)
\\
\hline
RAdam
& 0.7192
& 0.6797
& 0.0328
& 1.1218
& 1373.85
& 871.23
& 1574.13
& 1258.10
\\
& (4.28e-3)
& (3.41e-3)
& (3.55e-4)
& (1.36e-1)
& (7.55e+2)
& (3.57e+2)
& (2.86e+2)
& (2.70e+2)
\\
\hline
t-Adam
& 0.7174
& 0.6803
& 0.0320
& 1.1048
& \textcolor{orange}{\textbf{1634.20}}
& \textcolor{orange}{\textbf{2272.58}}
& 1586.99
& 1347.29
\\
& (5.67e-3)
& (3.10e-3)
& (4.09e-4)
& (2.66e-1)
& (4.52e+2)
& (3.20e+2)
& (2.06e+2)
& (2.11e+2)
\\
\hline
At-Adam
& 0.7178
& 0.6811
& \textcolor{orange}{\textbf{0.0319}}
& 1.1075
& 1523.37
& 2213.75
& 1611.23
& 1347.51
\\
& (4.21e-3)
& (4.03e-3)
& (3.14e-4)
& (1.97e-1)
& (5.48e+2)
& (2.31e+2)
& (2.25e+2)
& (2.67e+2)
\\
\hline
AdaTerm
& \textcolor{orange}{\textbf{0.7315}}
& \textcolor{orange}{\textbf{0.6815}}
& 0.0335
& \textcolor{orange}{\textbf{1.0016}}
& 1550.25
& 2021.37
& \textcolor{orange}{\textbf{1770.17}}
& \textcolor{orange}{\textbf{1411.02}}
\\
(Ours)
& (3.66e-3)
& (4.46e-3)
& (3.09e-4)
& (2.31e-1)
& (5.88e+2)
& (3.87e+2)
& (2.17e+2)
& (1.92e+2)
\\
\hline\hline
\end{tabular}
\end{table*}
In all the experiments, we compare the following six optimizers:
Adam~(\cite{kingma2014adam}), AdaBelief~(\cite{zhuang2020adabelief}), and RAdam~(\cite{gulcehre2017robust}) as the state-of-the-art optimizers in the cases without noise;
t-Adam~(\cite{ilboudo2020robust}) and At-Adam~(\cite{ilboudo2021adaptive}) as the noise-robust optimizers;
and AdaTerm as our proposal.
Note that the initial $\nu$ in At-Adam is set to be the same value used in t-Adam, in order to evaluate its adaptiveness.
In each condition, 24 trials are conducted with different random seeds, and the mean and standard deviation of the scores for the respective benchmarks are evaluated.
All the test results after learning can be found in Table~\ref{tab:exp_result}.
An ablation test was also conducted for AdaTerm, as summarized in~\ref{apdx:ablation}.
Since we prepared benchmarks that favor noise-robust optimizers, AdaTerm achieved the best results in most of the problems (except for 1-step prediction and the RL tasks).
Since 1-step prediction is relatively simple supervised learning, it does not need robustness to noise.
Therefore, as with the error norm of the McCormick function in Fig.~\ref{fig:comp_norm}, the too-high learning rate caused by $\beta < \beta_2$ contributed to the performance degradation.
Although AdaTerm was certainly not the best on the RL tasks, we can confirm its usefulness from two facts: i) it achieved the second-best result in the Hopper task; and ii) the Ant task was only solved by the noise-robust optimizers.
In summary, we can say that AdaTerm can maintain its learning performance better than other optimizers in practical problems with noise.
We analyze the remarkable results of the benchmark problems, respectively.
In the classification problem, AdaTerm achieved the highest classification performance among the compared optimizers even for the 0\% label error.
This is because CIFAR-100 contains general images, which are prone to noise, and the training dataset has only 500 images per class, revealing the adverse effects of noise.
In the 30-step prediction, AdaTerm was the only one that succeeded in making MSE approximately one.
We can easily expect this problem to contain noise due to the inaccurate estimated inputs.
However, as learning progresses, the accuracy of the estimated inputs should be improved, and the noise robustness would gradually become unnecessary.
In AdaTerm, the adjustment of the noise robustness worked properly (like Fig.~\ref{fig:comp_dof}), and was successful in accelerating learning.
In the RL problem, as mentioned before, it is clear that Ant task required the high robustness to noise.
In addition, as one of the implementation tips in RL, it is often pointed out that setting a relatively large $\epsilon$ contributes to the stability of learning.
AdaTerm uses a larger $\epsilon$ (i.e. $10^{-5}$) as the default value to ensure a sufficient minimal adjustment speed of $\tilde{\nu}$, which may also contribute to the performance improvement.
Finally, in the policy distillation, the performance improvement by AdaTerm was confirmed even when amateur data, which is a source of noise, was not included.
This is due to the difficulty of expressing the same behaviors with limited resources: some redundant or unnecessary behaviors need to be eliminated.
Such behaviors would be regarded as noise, since their ratio is smaller than that of the normal behaviors; hence AdaTerm succeeded in excluding them and distilling the normal behaviors.
\section{Discussion}
\label{sec:discussion}
The above experiments and simulations showed the robustness improvement of AdaTerm over the related works.
However, we must also discuss its limitations, as follows.
\subsection{Performance and drawbacks of AdaTerm on noise-free problems}
As can be seen in Fig.~\ref{fig:comp_norm} and in the $1$ step prediction error of Table~\ref{tab:exp_result}, there is a drawback of using AdaTerm on a noise-free optimization problem when compared to a non-robust optimizer such as Adam.
Indeed these results show that in the absence of noise, employing an optimization method which assumes from the start an absence of aberrant data points can give a better result.
This is not surprising and should be expected, since AdaTerm (with its default initial value $\tilde{\nu}_{0} = 1$) has a non-zero adaptation time to converge to a non-robust behavior.
Despite that, the overall results also show that the drawback or penalty incurred from using AdaTerm instead of Adam or AdaBelief in noise-free applications does not constitute much of a problem when taking into consideration its ability to deal with the possible presence of unknown noise ratios, in particular given how hard it is to gather perfect datasets in practice.
In addition, when suspecting a noise-free problem, AdaTerm endows the practitioner with the ability to set a large initial value for the degrees of freedom and then let the algorithm decide if the dataset is indeed noise-free.
Such freedom is only possible thanks to the adaptive capability of AdaTerm as clearly displayed in Fig.~\ref{fig:comp_dof}.
\subsection{Gap between theoretical and experimental convergence analysis}
Although Theorem~\ref{th:convergence_bound} gives a theoretical upper bound on the regret achieved by AdaTerm, as is typical in convergence analyses, it completely ignores the robustness factor brought in by AdaTerm.
In particular, Corollary~\ref{corol:non_robust_convergence_bound} appears to have a better bound compared to Theorem~\ref{th:convergence_bound} which, as displayed in the ablation study of~\ref{apdx:ablation}, contrasts with the practical application.
This implies a gap between the theoretical analysis and the experimental analysis, which stems from the fact that the theoretical bound relies on $\underline{\tau}$.
As a remedy to this shortcoming of the regret analysis, we therefore relied on an intuitive analysis of the robustness factor based on the behavior of the different components of our algorithm.
Although this qualitative analysis has weak theoretical value as a convergence guarantee, the experimental analysis against different noise settings shows that AdaTerm is not only robust but also efficient as an optimization algorithm.
Nevertheless, we acknowledge the need for a stronger theoretical analysis that takes into consideration the noisiness of the gradients, while allowing for a theoretical comparison between the robustness and efficiency of different optimizers.
\subsection{Normalization of gradient}
By considering the gradients to be generated from a Student's t-distribution, AdaTerm normalizes the gradients (more precisely, their first moment) using the estimated scale, instead of the second moment.
This is similar to the normalization of AdaBelief, as mentioned before.
However, as shown in~\ref{apdx:variants}, the normalization with the second moment can sometimes perform better.
Basically, since the second moment is larger than the scale, the normalization by the second moment makes the update conservative, while the one by the scale is expected to break through the stagnation of updates.
While both characteristics are certainly important, the answer to the question ``which one is desirable?'' remains situation-dependent.
Therefore, we need to pursue the theory concerning this point and introduce a mechanism to adaptively switch between the two according to the required characteristics.
\section{Conclusion}
\label{sec:conclusion}
We presented the AdaTerm optimizer, which is adaptively robust to noise and outliers, for deep learning.
AdaTerm was derived from the assumption that the gradients are generated from a multivariate diagonal Student's t-distribution, whose parameters are then optimized through surrogate maximum log-likelihood estimation.
Optimization of test functions revealed that AdaTerm can adjust its robustness to noise in accordance with the impact of the noise.
Through the four typical benchmarks, we confirmed that the robustness to noise and the learning performance of AdaTerm are as good as or better than those of conventional optimizers.
In addition, we derived a new regret bound for Adam-like optimizers without assuming the use of AMSGrad.
This paper focused on computing the moments' values, but in recent years, the importance of integration with raw SGD (i.e. decay of scaling in Adam-like optimizers) has been confirmed~(\cite{luo2019adaptive,zhou2020towards}).
We will therefore investigate a natural integration of AdaTerm and the raw SGD by reviewing $\Delta s$, which may enable the normalization to be constant.
In addition, as mentioned in the discussion, we argue that a new framework for analyzing optimization algorithms both in terms of robustness and efficiency and such that they can be compared, is required.
One such analysis was done by~\cite{scaman2020robustness} on SGD, but its extension to momentum-based optimizers and its ability to allow theoretical comparison across different algorithms remain limited.
We will therefore seek in the future, a similar but better approach and apply it to analyze the robustness and efficiency of different optimization algorithms, including AdaTerm.
\bibliographystyle{icml2022}
Since Dubrovin's conjecture \cite{dubrovin}, the question whether the
big quantum cohomology ring of a variety is semisimple is important
and has been discussed in many articles
\cite{manin,bayer-m,HMT,irr,ciolli,semisimple}. In particular
necessary conditions for semisimplicity are given by Hertling, Manin
and Teleman in \cite{HMT}. For a few varieties $X$, for example some
Fano threefolds \cite{ciolli}, toric varieties \cite{irr} or some
homogeneous spaces \cite{semisimple}, it was proved that the small
quantum cohomology ring $\QH(X)$ (see Subsection
\ref{subsection-small}) is semisimple. However for some homogeneous
spaces, it was proved in \cite{semisimple} and \cite{adjoint} that the
small quantum cohomology does not need to be semisimple.
In this paper we give sufficient conditions for the small quantum cohomology ring $\QH(X)$ of a smooth complex Fano variety $X$ to be semisimple. For $X$ such a Fano variety and ${\!\ \star_{0}\!\ }$ the product in $\QH(X)$ (see Subsection \ref{subsection-small}), we define
$$Q_X(a,b) = \textrm{ sum of the coefficients of $q^k$ for some $k$ in the product $a {\!\ \star_{0}\!\ } b$}.$$
This is a quadratic form defined on $\QH(X)$ and if $R(X)$ is the radical of $\QH(X)$ we prove (see Theorem \ref{thm-alg-qh})
\begin{thm1}
\label{main1}
Let $X$ be Fano with Picard number $1$ and $Q_X$ positive definite. Let $h$ be the class of a generator of the Picard group.
1. Then $R(X) \subset \{a \in \QH(X) \ | \ h^k {\!\ \star_{0}\!\ } a = 0 \textrm{ for some $k$} \}$.
2. If $h$ is invertible in $\QH(X)$, then $\QH(X)$ is semisimple.
\end{thm1}
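As a standard illustration of part 2 (a classical computation, not one of the new cases treated below; written over $\mathbb{C}$ for simplicity), one may keep in mind the case of projective space:

```latex
% Small quantum cohomology of P^n: h^{n+1} = q in QH(P^n), so
% h \star_0 h^n = q and h is invertible as soon as q is.
\QH(\mathbb{P}^n) \simeq \mathbb{C}[h,q]\big/(h^{n+1} - q),
\qquad
\QH(\mathbb{P}^n)\big|_{q=1} \simeq \mathbb{C}[h]\big/(h^{n+1} - 1)
\simeq \mathbb{C}^{n+1}.
```

At $q = 1$ the eigenvalues of $h {\!\ \star_{0}\!\ } -$ are the $(n+1)$-st roots of unity, which are pairwise distinct, so the algebra splits as a product of fields and is semisimple.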
In the second part of this paper we give examples of varieties whose quadratic form $Q_X$ is positive definite. In particular we obtain (See Theorem \ref{thm-def-pos} and Proposition \ref{prop-def-pos})
\begin{thm1}
\label{main2}
Let $X$ be a variety in the following list (See Subsection \ref{subsect-qh-comin})
\medskip
\centerline{\begin{tabular}{lllllll}
$\mathbb{P}^n$ & & $Q_n$ & & $\OG(5,10)$ & & $E_6/P_6$ \\
$\Gr(2,n)$ & & $\LG(3,6)$ & & $\OG(6,12)$ & & $E_7/P_7$ \\
\end{tabular}}
\medskip
\noindent
or an adjoint variety (See Subsection \ref{subsection-adjoint}) and let $Y$ be a general linear section of codimension $k$ of $X$ with $2 c_1(Y) > \dim Y$. Then $Q_Y$ is positive definite.
\end{thm1}
In particular almost all Fano varieties of coindex 3 occur in the
above list (see Example \ref{sub-exqm}). In the third part we take a closer look at the product $h {\!\ \star_{0}\!\ } -$ with the generator of the Picard group and obtain the following semisimplicity result (See Theorem \ref{thm-semisimple-Y}).
\begin{thm1}
\label{main3}
Let $Y$ be a general hyperplane section with $2 c_1(Y) > \dim Y$ of a homogeneous space in the following list
$$\begin{array}{lllllll}
\mathbb{P}^n & & Q_n & & \LG(3,6) & & F_4/P_1 \\
\Gr(2,2n+1) & & \OG(5,10) & & \OG(2,2n+1) & & G_2/P_1.\\
\end{array}$$
Then $\QH(Y)$ is semisimple.
\end{thm1}
In particular, this Theorem recovers in a uniform way the semisimplicity results proved in \cite{adjoint} and \cite{pech}, and provides new semisimplicity results.
In the last section we consider two cases where the small quantum cohomology is not semisimple and prove using Theorem \ref{main1} that the big quantum cohomology ring (denoted $\BQH(X)$, see Section \ref{sec-bqh}) is semisimple (see Theorem \ref{thm-bqh1} and Theorem \ref{thm-bqh2}).
\begin{thm1}
\label{main4}
Let $X = \IG(2,2n)$ or $X = F_4/P_4$. Then $\QH(X)$ is not semisimple but $\BQH(X)$ is semisimple.
\end{thm1}
This result was the starting point of this work which came from
discussions with A.~Mellit and M.~Smirnov. They obtain in \cite{SM}
together with S.~Galkin an independent proof of the semisimplicity of
$\BQH(Y)$ for $Y = \IG(2,6)$.
\medskip
Let us say a few words on Dubrovin's conjecture. Recall that the first part of this conjecture states that for $X$ smooth projective, the semisimplicity of $\BQH(X)$ is equivalent to the existence of a full exceptional collection in $D^b(X)$, where $D^b(X)$ denotes the bounded derived category of coherent sheaves on $X$.
For all homogeneous spaces $X$ appearing in Theorem \ref{main3} and
Theorem \ref{main4} except $F_4/P_1$ and $F_4/P_4$, it is known (see
\cite[Section 6]{kuz1} and \cite{kuz2}) that their derived category
admits a full exceptional collection proving Dubrovin's conjecture in
these cases.
The same is true for a hyperplane section $Y = \IG(2,2n+1)$ of $X = \Gr(2,2n+1)$, recovering results of \cite{pech}.
Furthermore, for $X = \Gr(2,5)$, $X = \OG(5,10)$ or $X = \LG(3,6)$ the hyperplane sections $Y$ of $X$ with $2 c_1(Y) > \dim Y$ also have an exceptional collection (see \cite[Section 6]{kuz1}) proving Dubrovin's conjecture in these cases.
\medskip
\textbf{Acknowledgement.} I thank A.~Mellit and M.~Smirnov for enlightening discussions and email exchanges.
I also thank the organisers of the conference {\em Quantum cohomology and quantum K-theory} held in Paris in January 2014 where this work started. Finally I thank P.-E. Chaput for the program \cite{programme} which was of great use in many computations.
\setcounter{tocdepth}{1}
\tableofcontents
\section{Big quantum cohomology}
\label{sec-bqh}
In this section we recall a few facts and fix notation for the quantum cohomology of a complex smooth projective variety $X$. We write $\HH(X)$ for $H^*(X,{\mathbb R})$.
\subsection{Reminders} Let $X$ be a smooth projective variety, let ${{\mathcal E}}(X)$ be the cone of effective curves and let ${\beta} \in {{\mathcal E}}(X)$. Denote by $\M_{0,n}(X,{\beta})$ the Kontsevich moduli space of genus $0$ stable maps to $X$ of degree ${\beta}$ with $n$ marked points. This is a proper scheme and there are evaluation morphisms $\operatorname{ev}_i : \M_{0,n}(X,{\beta}) \to X$ defined by evaluating the map at the $i$-th marked point. For $(\gamma_i)_{i \in [1,n]}$ cohomology classes on $X$, one defines the Gromov-Witten invariants as follows:
$$I_{0,n,\beta}({\gamma_1,\cdots,\gamma_n}) = \int_{[\M_{0,n}(X,{\beta})]^{\textrm{vir}}}
\operatorname{ev}_1^*\gamma_1 \cup \cdots \cup \operatorname{ev}_n^* \gamma_n$$
where $[\M_{0,n}(X,{\beta})]^{\textrm{vir}}$ is the virtual fundamental class as defined in \cite{beh-fant}.
When $\gamma_i = \gamma$ for all $i \in [1,n]$, we write $I_{0,n,{\beta}}({\gamma_1,\cdots,\gamma_n}) = I_{0,n,{\beta}}({\gamma^n})$.
Let $N + 1 = {\rm rk}(\HH(X))$ and $r = {\rm rk}({\rm Pic}(X))$. Let $(e_i)_{i \in [0,N]}$ be a basis of $\HH(X)$ such that $e_0 = 1$ is the fundamental class and $(e_1,\cdots,e_r)$ is a basis of $H^2(X,{\mathbb R})$. For $\gamma \in \HH(X)$, write $\gamma = \sum_i x_i e_i$ and define the \textbf{Gromov-Witten potential} by
$$\Phi(\gamma) = \sum_{n \geq 0} \sum_{{\beta} \in {{\mathcal E}}(X)} \frac{1}{n!} I_{0,n,{\beta}}({\gamma^n}).$$
This is an element in $R = {\mathbb R}[[(x_i)_{i \in [0 , N]}]]$. If $(\ ,\ )$ denotes the Poincar\'e pairing, then the \textbf{quantum product $\star$} is defined as follows
$$(e_i \star e_j , e_k) = \frac{\partial^3 \Phi}{\partial x_i \partial x_j \partial x_k}(\gamma)$$
and extended by bilinearity to any other classes. This actually defines a family of products parametrised by $\HH(X)$. The main result of the theory states that these products are associative (see \cite{behrend,beh-man,man-kon}).
We write $\BQH(X)$ for the algebra $(\HH(X) \otimes_{\mathbb R} R , \star)$.
\subsection{Virtual fundamental class}
For our computations of Gromov-Witten invariants we shall use the following general result on the virtual fundamental class for smooth projective varieties (this was proved in \cite{ruan-tian} according to \cite[Point (1.4)]{beauville}, we refer to \cite[Proposition 2]{lee-k-theo} for an algebraic proof).
\begin{prop}
\label{prop-class-virt}
Let $X$ be a smooth projective complex algebraic variety such that $\M_{0,n}(X,\beta)$ has the expected dimension; then the virtual class is the fundamental class.
\end{prop}
\subsection{Divisor axiom}
One very useful property of Gromov-Witten invariants is that for degree $2$ cohomology classes, they can be easily computed. Indeed we have the following \textbf{divisor axiom}. For $\gamma_1 \in H^2(X,{\mathbb R})$, we have
$$I_{0,n,{\beta}}({\gamma_1,\cdots,\gamma_n}) = \left( \int_{\beta} \gamma_1 \right) I_{0,n-1,{\beta}}({\gamma_2,\cdots,\gamma_n}).$$
This gives a simplification of the potential (modulo terms of degree at most $2$, which do not contribute to third derivatives):
$$\Phi(\gamma) = \frac{1}{6}\int_X \gamma \cup \gamma \cup \gamma + \sum_{n \geq 0} \sum_{{\beta} \in {{\mathcal E}}(X) \setminus \{0\}} \frac{1}{n!} I_{0,n,{\beta}}({\bar \gamma^n}) q^{\beta}$$
where $\bar \gamma$ is the projection of $\gamma$ on the span of $(e_i)_{i \in [r+1,N]}$ and $q^{\beta} = q_1^{d_1} \cdots q_r^{d_r}$ with $d_i = \int_{\beta} e_i$ and $q_i = e^{x_i}$ for $i \in [1,r]$. Writing $\bar \gamma = \sum_{i = r + 1}^N x_i e_i$ we get
$$\Phi(\gamma) = \frac{1}{6}\int_X \gamma \cup \gamma \cup \gamma + \sum_{n \geq 0} \sum_{{\beta} \in {{\mathcal E}}(X) \setminus \{0\}} \sum_{n_{r + 1} + \cdots + n_N = n}\frac{x_{r + 1}^{n_{ r + 1}} \cdots x_N^{n_N}}{n_{r + 1}! \cdots n_N!} I_{0,n,{\beta}}({e_{r + 1}^{n_{r + 1}} , \cdots , e_N^{n_N}}) q^{\beta}.$$
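As a direct illustration (a standard consequence of the divisor axiom), repeated application removes several divisor insertions at once: for $\gamma_1,\gamma_2 \in H^2(X,{\mathbb R})$, classes $\gamma_3,\cdots,\gamma_n$ and ${\beta} \neq 0$ we get
$$I_{0,n,{\beta}}({\gamma_1,\gamma_2,\gamma_3,\cdots,\gamma_n}) = \left( \int_{\beta} \gamma_1 \right) \left( \int_{\beta} \gamma_2 \right) I_{0,n-2,{\beta}}({\gamma_3,\cdots,\gamma_n}).$$
This explains why the variables $(x_i)_{i \in [1,r]}$ enter the potential only through the exponentials $q_i = e^{x_i}$.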
\subsection{Small quantum product}
\label{subsection-small}
A very classical special product called the small quantum product and denoted by $\star_0$ in this paper is obtained as follows
$$(e_i \star_0 e_j , e_k) = \frac{\partial^3 \Phi}{\partial x_i \partial x_j \partial x_k}(\gamma)\vert_{\bar \gamma = 0}.$$
This is also a family of associative products parametrised by $H^2(X,{\mathbb R})$. This product is easier to compute and only involves $3$-point Gromov-Witten invariants, \emph{i.e.} Gromov-Witten invariants with $n = 3$. Set $R_0 = {\mathbb R}[[(q_i)_{i \in [1 , r]}]]$. We write $\QH(X)$ for the algebra $(\HH(X) \otimes_{\mathbb R} R_0 , \star_0)$.
Note that $R_0 = R/((x_i)_{i \in [r + 1,N]})$.
\subsection{Deformation in the $\tau$-direction}
\label{sub-def}
Let $\tau \in \HH(X)$ and choose the basis $(e_i)_{i \in [0,N]}$ so that $e_{r + 1} = \tau$. For $\gamma \in \HH(X)$, write $\hat \gamma$ for its projection in the span of $(e_i)_{i \in [r + 2,N]}$. We define the product $\star_\tau$ as follows:
$$(e_i \star_\tau e_j , e_k) = \frac{\partial^3 \Phi}{\partial x_i \partial x_j \partial x_k}(\gamma)\vert_{\hat \gamma = 0}.$$
This is also a family of associative products parametrised by $H^2(X,{\mathbb R}) \oplus {\mathbb R} \tau$. Set $R_\tau = {\mathbb R}[[(q_i)_{i \in [1 , r]},x_{r + 1}]]$. We write $\BQH_\tau(X)$ for the algebra $(\HH(X) \otimes_{\mathbb R} R_\tau , \star_\tau)$. Note that $R_\tau = R/((x_i)_{i \in [r + 2,N]})$.
Let us describe a general product in this algebra (we set $t = x_{r + 1}$):
$$(e_i \star_\tau e_j , e_k) = (e_i \star_0 e_j , e_k) + t \sum_{{\beta} \in {{\mathcal E}}(X)} I_{0,4,{\beta}}({e_i,e_j,e_k,e_{r + 1}}) q^{\beta} + O(t^2).$$
In particular when $e_i$ is the class of a divisor this product takes a simple form. Denote by $\Psi_i$ the endomorphism of ${\mathbb R}[[(q_i)_{i \in [1,r]}]]$ defined by
$$\Psi_i \left( \sum_{{\beta} \in {{\mathcal E}}(X)} z_{\beta} q^{\beta} \right) = \sum_{{\beta}
\in {{\mathcal E}}(X)} d_i z_{\beta} q^{\beta}$$
with $d_i = \int_{\beta} e_i$ and extend $\Psi_i$ by linearity on
$\QH(X)$ via its action on the scalars. We get the formula
$e_i \star_\tau e_j = e_i \star_0 e_j + t \Psi_i(e_{r+1}
\star_0 e_j) + O(t^2).$
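\begin{remark}
When $r = 1$, so that there is a single quantum parameter $q$, the operator $\Psi_1$ is simply the Euler operator $q \frac{\partial}{\partial q}$ on ${\mathbb R}[[q]]$: indeed for $\beta$ of degree $d$ we have $\Psi_1(z_\beta q^d) = d\, z_\beta q^d = q \frac{\partial}{\partial q}\left(z_\beta q^d\right)$.
\end{remark}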
\section{Localisation of the radical}
\label{section-rad}
In this section, we prove that the existence of a positive definite hermitian or real form implies semisimplicity or regularity results for finite dimensional commutative algebras. Let $A$ be a finite dimensional commutative ${\mathbb C}$-algebra with $1$. We write $R(A)$ for the radical of $A$. For $a \in A$, we write $E_a \in {\rm End}_A(A)$ for the endomorphism given by multiplication by $a$.
\subsection{Semisimplicity and inner product}
We first relate the semisimplicity of $A$ with the existence of an inner product.
\begin{prop}
\label{prop-invol}
The algebra $A$ is semisimple if and only if there exists an algebra involution $a \mapsto \bar a$ and a linear form $\varphi : A \to {\mathbb C}$ with $\varphi(1) = 1$ such that the bilinear form defined by $Q(a,b) = \varphi(a \bar b)$ is an inner product.
\end{prop}
\begin{proof}
Let $n = \dim A$. Assume that $A$ is semisimple, then for $a \in A$, the endomorphism $E_a \in {\rm End}_A(A)$ is semisimple. Since $A$ is commutative, the endomorphisms $(E_a)_{a \in A}$ are simultaneously diagonalisable in a basis $(e_i)_{i \in [1,n]}$. These elements are orthogonal idempotents whose sum is 1. Define the linear form $\varphi$ by $\varphi(a) = n^{-1} {\rm Tr}(E_a)$ and the involution $a \mapsto \bar a$ as the unique antilinear map with $\bar e_i = e_i$. This defines the desired inner product.
Conversely, assume that such an algebra involution and linear form exist. Then the endomorphisms $E_a$ are normal for $Q$: we have $Q(E_a(b),c) = Q(ab,c) = \varphi(ab \bar c) = Q(b,\bar a c) = Q(b,E_{\bar a}(c))$. The adjoint of $E_a$ is $E_{\bar a}$ and they commute. In particular the endomorphisms $(E_a)_{a \in A}$ are simultaneously diagonalisable in a basis $(e_i)_{i \in [1,n]}$ and these elements are orthogonal idempotents whose sum is 1.
\end{proof}
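\begin{example}
Let $A = {\mathbb C}[\varepsilon]/(\varepsilon^2)$, the simplest non-semisimple algebra. Any algebra involution maps the nilpotent element $\varepsilon$ to a nilpotent element, so that $\bar \varepsilon \in (\varepsilon)$ and $\varepsilon \bar \varepsilon = 0$. Hence $Q(\varepsilon,\varepsilon) = \varphi(\varepsilon \bar \varepsilon) = 0$ for any linear form $\varphi$, so $Q$ is never an inner product, in accordance with Proposition \ref{prop-invol}.
\end{example}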
This result was first motivated by the following example.
\begin{example}
\label{exam-comin}
We refer to Subsection \ref{subsect-qh-comin} for results on quantum cohomology of cominuscule homogeneous spaces. Let $X$ be a cominuscule homogeneous space and let $A(X) = \QH(X)_{q=1}$ be its small quantum cohomology with product ${\!\ \star_{0}\!\ }$ and with quantum parameter equal to 1. Let $\{{\rm pt}\}$ be the cohomology class of a point and let $\sigma_u$ be a Schubert class. Consider $\textrm{PD}(\{{\rm pt}\} {\!\ \star_{0}\!\ } \sigma_u)$ where $\textrm{PD}$ stands for Poincar\'e duality. It was proved in \cite{strange} and \cite{semisimple} that this class is a Schubert class $\sigma_{\bar u}$ and that $\sigma_u \mapsto \sigma_{\bar u}$ defines an algebra involution. Define furthermore a linear form $\varphi$ by $\varphi(\sigma_u) = \delta_{u,1}$ on the Schubert basis (recall that $1 = \sigma_1$). One easily checks that $Q(\sigma_u, \sigma_{\bar v}) = \delta_{u,v}$, proving that $Q$ is an inner product. We recover in this way a result of \cite{semisimple} relating semisimplicity with the existence of an algebra involution.
\end{example}
\subsection{Radical and positive definite forms}
One of the major problems for applying the above result is that the algebra involution is not \emph{a priori} given and is usually hard to produce (for an example see \cite[Remark 6.6]{adjoint}). In this section we furthermore assume that $A$ is an ${\mathbb R}$-algebra which is ${\mathbb Z}/c_1{\mathbb Z}$-graded and denote by $A_k$ the graded piece of degree $k$.
\begin{defn}
Let $E \in {\rm End}(A)$. We set $A_\lambda(E) = \{ a \in A \ | \ (E - \lambda {\rm Id}_A)^n(a) = 0 \textrm{ for $n$ large} \}$ and $m_E(\lambda) = \dim A_\lambda(E)$.
\end{defn}
\begin{lemma}
\label{prop-alg}
Let $E \in {\rm End}_A(A)$ with real eigenvalues. We have
$$R(A) \subset \bigoplus_{m_E(\lambda) > 1} A_\lambda(E).$$
In particular if $E$ is semisimple regular, we have $R(A) = 0$.
\end{lemma}
\begin{remark}
Note that we assume here that $E$ is $A$-linear. This is for example the case for $E = E_a$ with $a \in A$.
\end{remark}
\begin{proof}
We have a decomposition
$A = \bigoplus_\lambda A_\lambda(E).$
Furthermore by $A$-linearity, we have $A_\lambda(E) \cdot A_\mu(E) \subset A_\lambda(E) \cap A_\mu(E) = 0$ for $\lambda \neq \mu$. Let $a \in R(A)$ and write $a = \sum_\lambda a_\lambda$ with $a_\lambda \in A_\lambda(E)$. Then $a_\lambda$ is nilpotent. Write $1 = \sum_\lambda 1_\lambda$ with $1_\lambda \in A_\lambda(E)$. Then $1_\lambda$ is an idempotent and for $m_E(\lambda) = 1$ we have $a_\lambda = x_\lambda 1_\lambda$ with $x_\lambda \in {\mathbb C}$. Since $a_\lambda$ is nilpotent we have $x_\lambda = 0$ and the result follows.
\end{proof}
\begin{prop}
\label{main-prop}
Assume that there exists a linear form $\varphi_0 : A_0 \to {\mathbb C}$ with $\varphi_0(1) = 1$ such that the bilinear form defined by $Q_0(a,b) = \varphi_0(a b)$ for $a,b \in A_0$ is positive definite.
1. Then $R(A) \subset A_0(E_a)$ for all $a \in A_1$.
2. If there exists $a \in A_1$ with $\Ker E_a = 0$, then $A$ is semisimple.
3. If there exists $a \in A_1$ with $A_0 = A_0 \cap \Ker E_a \oplus A_0 \cap {\rm Im} E_a$, then $R(A) \subset \Ker E_a$, $A = {\rm Im} E_a \oplus \Ker E_a$ and the subalgebra generated by $a$ is semisimple.
\end{prop}
\begin{proof}
1. Let $n = \dim A_0$. As in the proof of Proposition \ref{prop-invol}, the endomorphisms $E_a\vert_{A_0}$ are self-adjoint for $Q_0$. In particular they have real eigenvalues and there exists a basis $(e_i)_{i \in [1,n]}$ of orthogonal idempotents whose sum is 1 in $A_0$. Let $a \in A_1$. Then $a^{c_1} \in A_0$ and $E_{a^{c_1}}\vert_{A_0}$ is semisimple. Furthermore there exists $b \in A_0^\times$ such that $E_{(ab)^{c_1}}\vert_{A_0}$ is semisimple and has eigenvalues with multiplicity 1 except maybe for $0$. The minimal polynomial $\mu$ of $E_{(ab)^{c_1}}\vert_{A_0}$ therefore has simple roots. Since $1 \in A_0$ this implies that $\mu((ab)^{c_1}) = 0$ and the minimal polynomial of $E_{(ab)^{c_1}}$ divides $\mu$ and thus has simple roots. In particular $E_{(ab)^{c_1}}$ is semisimple and has the same eigenvalues as $E_{(ab)^{c_1}}\vert_{A_0}$.
Let $v$ be an eigenvector of $E_{ab}$ with eigenvalue $\lambda \neq 0$. Write $v = \sum_kv_k$ with $v_k \in A_k$. Then $v_k$ is an eigenvector of $E_{(ab)^{c_1}}$ for the eigenvalue $\lambda^{c_1}$. In particular $v_0$ is the unique (up to scalar) eigenvector of $E_{(ab)^{c_1}}$ with eigenvalue $\lambda^{c_1}$. From $E_{ab}(v) = \lambda v$ we deduce $E_{(ab)^{c_1-k}}(v_k) = \lambda^{c_1-k} v_0$. Applying $E_{(ab)^k}$ we get $\lambda^{c_1} v_k = E_{(ab)^{c_1}}(v_k) = \lambda^{c_1-k} E_{(ab)^k}(v_0)$.
Finally we have
$$v = \sum_{k = 0}^{c_1 -1} \lambda^{-k} E_{(ab)^{k}}(v_0).$$
Note furthermore that for $\lambda \neq 0$ we have $A_\lambda(E_{ab}) = \Ker(E_{ab} - \lambda {\rm Id})$ since $E_{(ab)^{c_1}}$ is semisimple. Therefore $m_{E_{ab}}(\lambda) = 1$ for $\lambda \neq 0$. By the previous Lemma, this implies $R(A) \subset A_{0}(E_{ab}) = A_0(E_{a})$ (the last equality holds since $b$ was chosen invertible).
2. If $a$ is invertible, then $A_0(E_a) = 0$ and the result follows from 1.
3. Let $a \in A_1$. Let $\mu$ be the minimal polynomial of $E_{a^{c_1}}$. Since $E_{a^{c_1}}$ is semisimple, we have $\mu(X) = X P(X)$ with $P$ having only simple roots and $P(0) \neq 0$. The minimal polynomial of $E_a$ divides $X^{c_1} P(X^{c_1})$. Let $b = P(a^{c_1}) \in A_0$ and write $b = ac + d$ with $c \in A$ and $d \in A_0 \cap \Ker E_a$ (this is possible by the assumption $A_0 = A_0 \cap \Ker E_a \oplus A_0 \cap {\rm Im}\, E_a$).
\begin{lemma}
We have $a^i b = 0$ for $i \geq 1$.
\end{lemma}
\begin{proof}
We proceed by descending induction on $i$. For $i \geq {c_1}$ the result follows from $0 = \mu(a^{c_1}) = a^{c_1} b$. Assume $a^i b = 0$ for some $i > 1$. Then $a^i P(a^{c_1}) = 0$ and the minimal polynomial of $E_a$ divides $X^iP(X^{c_1})$. In particular $\Ker E_{a^{i+1}} = \Ker E_{a^i}$. We have $0 = a^{i}b = a^{i+1} c + a^i d = a^{i+1} c$. Thus $c \in \Ker E_{a^{i+1}} = \Ker E_{a^i}$. This implies $a^{i-1} b = a^i c + a^{i-1}d = a^i c = 0$.
\end{proof}
In particular the minimal polynomial of $E_a$ divides $X P(X^{c_1})$ and therefore has only simple roots (over ${\mathbb C}$). This implies that the subalgebra generated by $a$ is semisimple. It also implies that $E_a$ is semisimple and $A = {\rm Im} E_a \oplus \Ker E_a$. This finally implies $A_0(E_{a}) = \Ker E_a$ and the result follows from 1.
\end{proof}
\begin{remark}
The above proposition actually holds for any $a \in A_k$ such that $\gcd(k,c_1) = 1$.
\end{remark}
\subsection{Application to quantum cohomology of Fano varieties}
\label{sect-qh}
Let $X$ be a smooth complex projective Fano variety of Picard rank $1$ and let $H$ be a divisor such that ${\mathcal O}_X(H)$ is an ample generator of the Picard group. The index $c_1(X)$ of $X$ is defined via $-K_X = c_1(X) H$. We write $\HH(X) = H^*(X,{\mathbb R})$, $h \in H^2(X,{\mathbb Z})$ for the cohomology class of $H$ and $[\textrm{pt}] \in H^{2 \dim X}(X,{\mathbb Z})$ for the cohomology class of a point in $X$.
We denote by $\QH(X)$ the small quantum cohomology ring obtained using only $3$-point Gromov-Witten invariants. We write ${\!\ \star_{0}\!\ }$ for the product in $\QH(X)$. Recall that as ${\mathbb R}$-vector space we have $\QH(X) = \HH(X) \otimes_{\mathbb R} {\mathbb R}[q]$. The ring $\QH(X)$ is graded with $\deg(q) = 2 c_1(X)$. We write $\QH^{k}(X)$ for the degree $k$ graded piece. We let $A = \QH(X)_{q=1}$ be the algebra obtained from $\QH(X)$ by quotienting with the ideal $(q-1)$. This algebra is ${\mathbb Z}/2c_1(X){\mathbb Z}$-graded and we write $A_k$ for its degree $k$ graded piece.
In particular we have a decomposition
$$A_0 = \bigoplus_{k \geq 0} H^{2 k c_1(X)}(X,{\mathbb R}).$$
Denote by $\varphi_0 : A_0 \to {\mathbb R}$ the projection on the first factor and let $Q_X: A_0 \times A_0 \to {\mathbb R}$ be the quadratic form defined by $Q_X(\alpha,\beta) = \varphi_0(\alpha {\!\ \star_{0}\!\ } \beta)$.
\begin{thm}
\label{thm-alg-qh}
Let $X$ be Fano with Picard number $1$ and $Q_X$ positive definite.
1. Then $R(A) \subset A_0(E_{h})$.
2. If $\Ker E_h = 0$, then $A$ is semisimple.
3. If $A_0 = A_0 \cap \Ker E_h \oplus A_0 \cap {\rm Im} E_h$, then $R(A) \subset \Ker E_h$, $A = {\rm Im} E_h \oplus \Ker E_h$ and the subalgebra generated by $h$ is semisimple.
\end{thm}
\begin{proof}
Follows from Proposition \ref{main-prop}.
\end{proof}
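\begin{example}
For $X = \mathbb{P}^n$ we have $c_1(X) = n + 1 > \dim X$, so $H^{2kc_1(X)}(X,{\mathbb R}) = 0$ for $k \geq 1$ and $A_0 = H^0(X,{\mathbb R}) = {\mathbb R}$. The form $Q_X$ is therefore trivially positive definite. Since $\QH(\mathbb{P}^n) = {\mathbb R}[h,q]/(h^{n+1} - q)$ (see \cite{beauville}), the class $h$ is invertible in $A$ and Theorem \ref{thm-alg-qh} recovers the classical semisimplicity of $A = {\mathbb R}[h]/(h^{n+1} - 1)$.
\end{example}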
\section{Varieties with $Q_X$ positive definite}
\label{section-QX}
In this section we give examples of Fano varieties $X$ with $Q_X$ positive definite. These varieties are obtained as complete intersections in homogeneous spaces. The common feature of our homogeneous spaces is that the variety of conics passing through a point has a positive definite intersection form on its middle cohomology. Note that this positivity property implies strong topological properties (see for example \cite{small-codim} and \cite{small-cras}).
\subsection{Fano of large index}
We start with general remarks on $Q_X$ for Fano varieties of large index. More precisely, we assume $2 c_1(X) > \dim X$. Note that with this assumption we have
$$A_0 = H^0(X,{\mathbb R}) \oplus H^{2 c_1(X)}(X,{\mathbb R}).$$
Furthermore the fact that $1 \in H^0(X,{\mathbb R})$ is a unit for the quantum cohomology implies $Q_X(1,1) = 1$ and $Q_X(1,\sigma) = 0$ for $\sigma \in H^{2 c_1(X)}(X,{\mathbb R})$. To compute $Q_X$, we therefore only have to compute $\sigma {\!\ \star_{0}\!\ } \sigma'$ for $\sigma , \sigma' \in H^{2 c_1(X)}(X,{\mathbb R})$. By dimension arguments we have
$$\sigma {\!\ \star_{0}\!\ } \sigma' = q \sigma'' + I_{2}(\sigma,\sigma',\{{\rm pt}\}) q^2$$
where $\sigma'' \in H^{2 c_1(X)}(X,{\mathbb R})$ and $I_2(a,b,c) = I_{0,3,2}(a,b,c)$ is the $3$-point Gromov-Witten invariant of degree $2$ in genus $0$ for the classes $a,b,c$. We obtain the following result.
\begin{lemma}
\label{lemm-c>dim}
Let $X$ be a Fano variety with $2 c_1(X) > \dim X$. Then $Q_X$ is positive definite if and only if $I_2(-,-,\{{\rm pt}\})$ is positive definite on $H^{2 c_1(X)}(X,{\mathbb R})$.
\end{lemma}
For computing $I_2(-,-,\{{\rm pt}\})$ we shall prove using Proposition \ref{prop-class-virt} that the virtual fundamental class is the actual fundamental class.
\subsection{Complete intersections in projective spaces}
\label{subsection-complete-intersections}
We first consider complete intersections in $\mathbb{P}^{n+r}$ with large index. We reinterpret here results obtained by Beauville \cite{beauville}. Let $X$ be a smooth complete intersection of $r$ hypersurfaces of degree $(d_1,\cdots,d_r)$ in $\mathbb{P}^{n+r}$ with $n \geq 2$. Assume that $n \geq 2 \sum(d_i-1) -1$.
\begin{lemma}
\label{lemm-comp-int}
The form $Q_X$ is positive definite.
\end{lemma}
\begin{proof}
This easily follows from the results in \cite{beauville}. We have a basis $(1,h^{c_1(X)})$ of $A_0$ and $\varphi_0(h^{c_1(X)}) \neq 0$. Furthermore $h^{c_1(X)} {\!\ \star_{0}\!\ } h^{c_1(X)} = d_1^{d_1} \cdots d_r^{d_r} h^{c_1(X)}$ proving the result.
\end{proof}
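\begin{example}
For instance, for a smooth cubic hypersurface $X \subset \mathbb{P}^{n+1}$ (so $r = 1$ and $d_1 = 3$) with $n \geq 3$, we have $c_1(X) = n - 1$ and the relation of the proof reads $h^{n-1} {\!\ \star_{0}\!\ } h^{n-1} = 27\, h^{n-1}$ in $A$, the quantum powers being taken at $q = 1$.
\end{example}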
\begin{remark}
Let $X$ be a complete intersection as above.
1. By Theorem \ref{thm-alg-qh}, we have $R(X) \subset A_0(E_h)$. Using the results of Beauville \cite{beauville}, the primitive classes $H^{\dim X}_0(X,{\mathbb R})$ in $H^{\dim X}(X,{\mathbb R})$ (defined as the kernel of $E_h$) are nilpotent, as is the class $h^{c_1(X) + 1} - d_1^{d_1} \cdots d_r^{d_r} q h$. One can furthermore easily check that these classes generate the radical $R(X)$ of $\QH(X)$. So $\QH(X)$ is not semisimple in general.
2. Tian and Xu \cite{tian-xu} proved that the subalgebra generated by the hyperplane class in $\BQH(X)$ -- the big quantum cohomology -- is semisimple for any complete intersection as above.
3. We do not know in general whether $\BQH(X)$ is semisimple.
\begin{itemize}
\item By results of Hertling, Manin and Teleman \cite{HMT}, a variety has semisimple quantum cohomology only if its cohomology is even and pure of type $(p,p)$. By results of Deligne \cite{deligne} the only possible complete intersections are quadrics, the cubic surface and even-dimensional complete intersection of two quadrics.
\item For the first three it is known that the (small) quantum cohomology is semisimple (see \cite{semisimple}, \cite{ciolli}).
\item For the last one, we do not know if $\BQH(X)$ is semisimple.
\end{itemize}
\end{remark}
\subsection{Cominuscule homogeneous spaces}
\label{subsect-qh-comin}
Let $G$ be a semisimple algebraic group. A parabolic subgroup $P$ is called \emph{cominuscule} if its unipotent radical $U_P$ is abelian. This group theoretic condition has many nice implications on the geometry of $X = G/P$ (\cite{seshadri}, \cite{small}, \cite{adv}, \cite{thomas-yong}). Table 1 gives a list of all cominuscule homogeneous spaces.
Recall that the vertices of the Dynkin diagram of $G$ are the simple roots. The marked vertex is the simple root of $G$ which is not a simple root of $P$. In the above table we denoted $\Gr(k,n)$ (resp. $\LG(n,2n),\OG(n,2n)$) the Gra\ss mann variety of $k$-subspaces in ${\mathbb C}^n$ (resp. isotropic $n$-subspaces for a symplectic or non degenerate quadratic form in ${\mathbb C}^{2n}$, for $\OG(n,2n)$ we only consider a connected component of the Gra\ss mann variety). We wrote $Q_n$ for a smooth $n$-dimensional quadric hypersurface and $\OO \PP^2 = E_6/P_6$ and $E_7/P_7$ are the Cayley plane and the Freudenthal variety.
$$\begin{array}{ccccccc}
Type & X & Diagram & Dimension &
c_1(X) & \dim(\Gamma_2) \\
\hline
A_{n-1} & \Gr(k,n) &\setlength{\unitlength}{2.5mm}
\begin{picture}(10,3)(0,0)
\put(0,0){$\circ$}
\multiput(2,0)(2,0){5}{$\circ$}
\multiput(.73,.4)(2,0){5}{\line(1,0){1.34}}
\put(4,0){$\bullet$}
\end{picture} & k(n-k) & n & 4 \\
B_n & Q_{2n-1} &
\setlength{\unitlength}{2.5mm}
\begin{picture}(10,3)(0,0)
\put(0,0){$\circ$}
\multiput(2,0)(2,0){5}{$\circ$}
\multiput(.73,.4)(2,0){4}{\line(1,0){1.34}}
\multiput(8.73,.2)(0,.4){2}{\line(1,0){1.34}}
\put(0,0){$\bullet$}
\end{picture} & 2n-1 & 2n - 1 & 2n - 1 \\
C_n & \LG(n,2n) &
\setlength{\unitlength}{2.5mm}
\begin{picture}(10,3)(0,0)
\put(0,0){$\circ$}
\multiput(2,0)(2,0){5}{$\circ$}
\multiput(.73,.4)(2,0){4}{\line(1,0){1.34}}
\multiput(8.73,.2)(0,.4){2}{\line(1,0){1.34}}
\put(10,0){$\bullet$}
\end{picture} & \frac{n(n+1)}{2} & n+1 & 3 \\
D_n & Q_{2n-2} &
\setlength{\unitlength}{2.5mm}
\begin{picture}(10,3)(0,0)
\put(2,0){$\circ$}
\multiput(4,0)(2,0){4}{$\circ$}
\multiput(2.73,.4)(2,0){4}{\line(1,0){1.34}}
\put(10,0){$\bullet$}
\put(0,-1.1){$\circ$}
\put(0,1.2){$\circ$}
\put(.6,1.5){\line(5,-3){1.5}}
\put(.6,-.64){\line(5,3){1.5}}
\end{picture}
\vspace{.2cm}
& 2n-2 & 2n-2 & 2n - 2 \\
D_n & \OG(n,2n) &
\setlength{\unitlength}{2.5mm}
\begin{picture}(10,3)(0,0)
\put(2,0){$\circ$}
\multiput(4,0)(2,0){4}{$\circ$}
\multiput(2.73,.4)(2,0){4}{\line(1,0){1.34}}
\put(0,1.2){$\bullet$}
\put(0,-1.1){$\circ$}
\put(.6,1.5){\line(5,-3){1.5}}
\put(.6,-.64){\line(5,3){1.5}}
\end{picture}
\vspace{.2cm}
& \frac{n(n-1)}{2} & 2n-2 & 6 \\
E_6 & \OO\PP^2 &
\setlength{\unitlength}{2.5mm}
\begin{picture}(10,3)(-1,-.5)
\put(0,0){$\circ$}
\multiput(2,0)(2,0){4}{$\circ$}
\multiput(.73,.4)(2,0){4}{\line(1,0){1.34}}
\put(0,0){$\bullet$}
\put(4,-2){$\circ$}
\put(4.42,-1.28){\line(0,1){1.36}}
\end{picture}
& 16 & 12 & 8 \\
E_7 & E_7/P_7 &
\setlength{\unitlength}{2.5mm}
\begin{picture}(10,3)(0,0)
\put(0,0){$\circ$}
\multiput(2,0)(2,0){5}{$\circ$}
\multiput(.73,.4)(2,0){5}{\line(1,0){1.34}}
\put(10,0){$\bullet$}
\put(4,-2){$\circ$}
\put(4.42,-1.28){\line(0,1){1.36}}
\end{picture}
& 27 & 18 & 10
\end{array}
$$
\medskip
\medskip
\medskip
\centerline{Table 1. List of cominuscule homogeneous spaces.}
\subsubsection{Cominuscule varieties with $Q_X$ positive definite}
Let $X$ be cominuscule. The following results on $\QH(X)$ were proved in \cite{semisimple} and \cite{strange}. Let $\{{\rm pt}\}$ be the cohomology class of a point and let $\sigma_u$ be a Schubert class. Then
$$\{{\rm pt}\} {\!\ \star_{0}\!\ } \sigma_u = q^{d(u)} \textrm{PD}(\sigma_{\bar u})$$
where $d(u)$ is a non-negative integer and $u \mapsto \bar u$ is an involution on Schubert classes. It was also proved that $\sigma_u \mapsto q^{-d(u)} \sigma_{\bar u}$ defines an algebra involution. As explained in Example \ref{exam-comin} this defines an inner product on $\QH(X)_{q = 1}$. Note that proving that $Q_X$ is positive definite is equivalent to proving that the classes of degree a multiple of $c_1(X)$ are fixed by the above involution. Since the involution is explicit, an easy check gives the following result.
\begin{prop}
Let $X$ be cominuscule. The form $Q_X$ is positive definite if and only if $X$ is one of the following varieties:
\medskip
\centerline{\begin{tabular}{lll}
$\mathbb{P}^n$ & & $\LG(n,2n)$ for $n \in [3,4]$ \\
$\Gr(2,n)$ for $n \geq 2$ & & $\OG(n,2n)$ for $n \in [1,6]$ \\
$Q_n$ for $n \geq 2$ & & $\OO\PP^2$ or $E_7/P_7$. \\
\end{tabular}}
\medskip
\end{prop}
\subsubsection{Geometric proof} We now give a geometric proof of the above result when $2 c_1(X) > \dim X$ (this only excludes $\LG(4,8)$ from the list). According to Lemma \ref{lemm-c>dim}, we only have to understand $I_2(-,-,\{{\rm pt}\})$ on $H^{2 c_1(X)}(X,{\mathbb R})$. We recall a geometric construction for cominuscule homogeneous spaces.
For $x,y \in X$, let $d(x,y)$ be the minimal degree of a rational curve passing through $x$ and $y$ and $d_X(2) = \max\{d(x,y) \ | \ x,y \in X\}$. We denote by $\Gamma_d(x,y)$ the union of all degree $d(x,y)$ rational curves passing through $x$ and $y$. Note that $d_X(2) = 1 $ if and only if $X$ is a projective space so we may assume $d_X(2) \geq 2$. The following result was proved in \cite{cmp1}.
\begin{prop}
\label{prop-cmp}
Let $d \in [0,d_X(2)]$ and let $x,y \in X$ with $d(x,y) = d$.
1. The variety $\Gamma_d(x,y)$ is a homogeneous Schubert variety in $X$.
2. Any degree $d$ curve is contained in a $G$-translate of $\Gamma_d(x,y)$.
3. A generic degree $d$ curve is contained in a unique $G$-translate of $\Gamma_d(x,y)$.
4. There passes a unique degree $d$ curve through three general points in $\Gamma_d(x,y)$.
\end{prop}
For $d$ and $x,y \in X$ as in the former proposition, denote by $Y_d(X)$ the variety of all $G$-translates of $\Gamma_d(x,y)$. Since $\Gamma_d(x,y)$ is a Schubert variety, its stabiliser is a parabolic subgroup $Q$ and $Y_d(X) = G/Q$. Without loss of generality, we may assume that $P \cap Q$ contains a Borel subgroup. Let $Z_d(X) = G/(P \cap Q)$ be the incidence variety. Write $M_d(X)$ for the moduli space of stable maps of genus $0$ and degree $d$ with $3$ marked points to $X$. Let $\text{B$\ell$}_d(X) = \{ (\Gamma_d,f) \in Y_d(X) \times M_d(X) \ | \ f \textrm{ factors through } \Gamma_d \}$ and $Z_d^{(3)}(X) = \{ (\Gamma_d,x_1,x_2,x_3) \in Y_d(X) \times X^3 \ | \ x_i \in \Gamma_d \textrm{ for all $i \in [1,3]$} \}$. We have a diagram
$$\xymatrix{\text{B$\ell$}_d(X) \ar[rr]^-\pi \ar[d]_-\phi & & M_d(X) \ar[d]^-{\operatorname{ev}_i} \\
Z_d^{(3)}(X) \ar[r]^-{e_i} & Z_d(X) \ar[r]^-p \ar[d]_-q & X \\
& Y_d(X), & \\}$$
where $\pi$ is the natural projection, $\phi$ maps $(\Gamma_d,f)$ to $(\Gamma_d,\operatorname{ev}_1(f),\operatorname{ev}_2(f),\operatorname{ev}_3(f))$, $e_i$ maps $(\Gamma_d,x_1,x_2,x_3)$ to $(\Gamma_d,x_i)$ and $p$ and $q$ are the natural projections. The above proposition implies that $\pi$ and $\phi$ are both birational. Note also that the third point of Proposition \ref{prop-cmp} implies that considering $\Gamma_d \in Y_d(X)$ as a smooth subvariety in $X$, we have $2 \dim(\Gamma_d) = d c_1(\Gamma_d)$ (see also \cite[Formula (5) on Page 73]{cmp1}). For $d = 2$ this implies $\dim \Gamma_2 = c_1(\Gamma_2)$ so $\Gamma_2$ is a smooth quadric hypersurface (see also Table 1 for its dimension).
By Proposition \ref{prop-class-virt} and since $M_d(X)$ has expected dimension, for $\sigma,\sigma',\sigma'' \in \HH(X)$, the Gromov-Witten invariant $I_d(\sigma,\sigma',\sigma'')$ is the push-forward to the point of the class $\operatorname{ev}_1^*\sigma \cup \operatorname{ev}_2^*\sigma' \cup \operatorname{ev}_3^*\sigma''$. Since $\pi$ and $\phi$ are birational, an easy diagram chase gives the following formula (usually called the \emph{quantum to classical principle}):
\begin{equation}
\label{q-to-c}
I_d(\sigma,\sigma',\sigma'') = q_*p^* \sigma \cup q_*p^* \sigma' \cup q_*p^* \sigma''.
\end{equation}
The very first version of this result was proved in \cite{bkt} for (maximal isotropic) Gra{\ss}mann varieties and generalised in \cite{cmp1}. For the formal computation in the above setting (and even in equivariant $K$-theory), we refer to \cite[Lemma 3.5]{BM}.
\begin{prop}
\label{prop-geom-comin}
Let $X$ be one of the following varieties
\medskip
\centerline{\begin{tabular}{lll}
$\mathbb{P}^n$ & & $\LG(n,2n)$ for $n \in [1,4]$ \\
$\Gr(2,n)$ for $n \geq 2$ & & $\OG(n,2n)$ for $n \in [1,6]$ \\
$Q_n$ for $n \geq 2$ & & $\O\mathbb{P}^2$ or $E_7/P_7$. \\
\end{tabular}}
\medskip
Then $I_2(-,-,\{{\rm pt}\})$ is positive definite.
\end{prop}
\begin{proof}
We apply formula \eqref{q-to-c}. Let $\sigma'' = \{{\rm pt}\}$, then $q_*p^*\{{\rm pt}\} = j_*[F]$ where $F$ is any fiber of $p$ and $j : F \to Y_d(X)$ is the inclusion. The projection formula gives
$$I_d(\sigma,\sigma',\sigma'') = I_d(\sigma,\sigma',\{{\rm pt}\}) = q_*p^* \sigma \cup q_*p^* \sigma' \cup q_*p^* \{{\rm pt}\} = j^*q_*p^* \sigma \cup j^*q_*p^* \sigma'.$$
The following table gives the list of the varieties $F$ (see \cite[Table on Page 71]{cmp1}).
$$\begin{array}{c|cccccc}
X & \Gr(2,n) & Q_n & \LG(n,2n) & \OG(n,2n) & E_6/P_6 & E_7/P_7 \\
Y_2(X) & \Gr(4,n) & \{{\rm pt}\} & \IG(n-2,2n) & \OG(n-4,2n) & E_6/P_1 & E_7/P_1 \\
F & \Gr(2,n-2) & \{{\rm pt}\} & \Gr(2,n) & \Gr(4,n) & Q_8 & E_6/P_1. \\
\end{array}$$
\medskip
\centerline{Table 2. Varieties $Y_2(X)$ and $F$.}
\medskip
Note that since $\pi$ and $\phi$ are birational, we have $\dim X + 2 c_1(X) = \dim Z_2^{(3)}(X) = \dim Z_2(X) + 2 \dim \Gamma_2 = \dim X + \dim F+ 2 \dim \Gamma_2$. In particular we get
$$\dim F = 2(c_1(X) - \dim \Gamma_2).$$
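As a consistency check (using the standard description of $\Gamma_2$ for Gra{\ss}mann varieties: $\Gamma_2$ is the set of $2$-planes contained in a fixed $4$-dimensional subspace, hence $\Gamma_2 \cong \Gr(2,4)$ is a smooth $4$-dimensional quadric), take $X = \Gr(2,n)$. With $c_1(X) = n$ and $F = \Gr(2,n-2)$ from Table 2, we get
$$\dim F = \dim \Gr(2,n-2) = 2(n-4) = 2(c_1(X) - \dim \Gamma_2),$$
as expected.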
Thus $\deg j^*q_*p^* \sigma = \deg q_*p^*\sigma = 2 c_1(X) - 2 \dim \Gamma_2 = \dim F$ for $\sigma \in H^{2 c_1(X)}(X,{\mathbb Z})$. We get an induced map
$$j^*q_*p^* : H^{2 c_1(X)}(X,{\mathbb Z}) \to H^{\dim F}(F,{\mathbb Z}).$$
Now for the varieties in the list of the proposition one easily checks (using Schubert classes) that this map is an isomorphism. Furthermore the variety $F$ is of even dimension and has positive definite Poincar\'e pairing on $H^{\dim F}(F,{\mathbb Z})$ (see \cite[Table on Page 572]{small-codim} for the fact that the Poincar\'e pairing is positive definite on the middle cohomology of $\Gr(2,n)$, $Q_8$ and $E_6/P_1$). This finishes the proof.
\end{proof}
\subsection{Linear sections of cominuscule homogeneous spaces}
In this subsection, we extend the result on cominuscule varieties to linear sections of cominuscule varieties. More precisely, we prove
\begin{thm}
\label{thm-def-pos}
Let $X$ be a cominuscule variety with $Q_X$ positive definite and let $Y$ be a general linear section of codimension $k$ of $X$ with $2 c_1(Y) > \dim Y$. Then $Q_Y$ is positive definite.
\end{thm}
\begin{remark}
\label{rem-def-pos}
The possible values of $k$ in the above Theorem are as follows:
\medskip
\centerline{\begin{tabular}{lll}
For $\mathbb{P}^n$ we have $k \leq n$ & & For $\OG(5,10)$ we have $k \leq 5$ \\
For $\Gr(2,n)$ we have $k \leq 3$ & & For $\OG(6,12)$ we have $k \leq 4$ \\
For $Q_n$ we have $k \leq n$ & & For $\O\mathbb{P}^2$ we have $k \leq 7$ \\
For $\LG(3,6)$ we have $k \leq 1$ & & For $E_7/P_7$ we have $k \leq 8$. \\
\end{tabular}}
\end{remark}
We will prove this result in two steps. First note that since $2 c_1(Y) > \dim Y$ it is enough to check (using Lemma \ref{lemm-c>dim}) that $I_2(-,-,\{{\rm pt}\})$ is positive definite. Let $L_k$ be the linear subspace of codimension $k = \dim X - \dim Y$ cutting $Y$ out of $X$. Note that since $c_1(Y) = c_1(X) - k$ and $\dim Y = \dim X - k$, the condition $2 c_1(Y) > \dim Y$ translates into $k < 2 c_1(X) - \dim X$, and that this implies $k < \dim \Gamma_2$ (where $\Gamma_2$ is the fiber of $q: Z_2(X) \to Y_2(X)$).
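As a sanity check, the bound $k < 2 c_1(X) - \dim X$ recovers, for instance, two of the values listed in Remark \ref{rem-def-pos} (using the standard dimensions and indices of these spaces):
$$\begin{array}{lll}
X = \Gr(2,n): & \dim X = 2(n-2),\ c_1(X) = n, & k < 2n - 2(n-2) = 4, \ \textrm{\emph{i.e.}}\ k \leq 3, \\
X = \OG(5,10): & \dim X = 10,\ c_1(X) = 8, & k < 16 - 10 = 6, \ \textrm{\emph{i.e.}}\ k \leq 5. \\
\end{array}$$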
\subsubsection{Moduli space $M_2(Y)$} We first prove the following result asserting that $M_2(Y)$ has the expected dimension.
\begin{prop}
We have $\dim M_2(Y) = 2 c_1(Y) + \dim Y$.
\end{prop}
\begin{proof}
Note that we have a map of stacks $M_2(Y) \to \mathfrak{M}_{0,3}$
where $\mathfrak{M}_{0,3}$ is the stack of prestable curves of genus
$0$ with $3$ marked points. The fibers of this map are the schemes of
morphisms of degree $2$ from a fixed prestable curve to $Y$ (see for
example \cite{behrend}). In particular the irreducible components of
any of these fibers have dimension at least the corresponding expected
dimension.
Set $\text{B$\ell$}_2(Y) = \pi^{-1}(M_2(Y))$. We first prove that the map $\text{B$\ell$}_2(Y) \to Y_2(X)$ is surjective. Indeed, let $\Gamma_2 \in Y_2(X)$. Then $\Gamma_2$ is a quadric of dimension $\dim \Gamma_2 > k$. Its intersection with $L_k$ therefore contains a conic. In particular, there exists a stable map in $M_2(Y)$ factorising through $\Gamma_2$, proving the surjectivity. The fiber of the map $\text{B$\ell$}_2(Y) \to Y_2(X)$ over $\Gamma_2$ is therefore given by the genus zero stable maps of degree $2$ with three marked points to $\Gamma_2 \cap L_k$. We thus need to understand this intersection more precisely.
We shall consider $L_k$ as the intersection of $k$ hyperplanes $H_1,\cdots,H_k$. The following three cases may occur for the intersection $\Gamma_2 \cap L_k$:
\begin{itemize}
\item[1.] the intersection $\Gamma_2 \cap L_k$ is a smooth quadric of dimension $\dim \Gamma_2 -k$,
\item[2.] the intersection $\Gamma_2 \cap L_k$ is a quadric of dimension $\dim \Gamma_2 -k$ and rank $\ell < \dim \Gamma_2 - k + 1$,
\item[3.] we have $\dim( \Gamma_2 \cap L_k) > \dim \Gamma_2 -k$.
\end{itemize}
Since the dimension of the moduli space of genus zero stable maps of degree $2$ to a quadric of given rank is well known, an easy check proves that the locus in $\text{B$\ell$}_2(Y)$ lying over points $\Gamma_2 \in Y_2(X)$ for which the intersection $\Gamma_2 \cap L_k$ is degenerate (cases 2 and 3 above) has, for each prestable curve in $\mathfrak{M}_{0,3}$, dimension strictly less than the expected dimension. In particular irreducible components of $M_2(Y)$ come from irreducible components of $\text{B$\ell$}_2(Y)$ containing points mapping in $Y_2(X)$ to a quadric $\Gamma_2$ such that $\Gamma_2 \cap L_k$ is a smooth quadric of dimension $\dim \Gamma_2 - k$. Since any stable map to such an intersection $\Gamma_2 \cap L_k$ is a limit of stable maps from irreducible curves, we may consider only irreducible curves. This implies that $M_2(Y)$ is irreducible of expected dimension $\dim Y + 2 c_1(Y)$.
\end{proof}
\subsubsection{Proof of Theorem \ref{thm-def-pos}} Since $M_2(Y)$ has expected dimension, the virtual class is the fundamental class by Proposition \ref{prop-class-virt}. Let $Z_2(Y) = p^{-1}(Y)$ and denote by $r$ and $s$ the projections $r:Z_2(Y) \to Y$ and $s:Z_2(Y) \to Y_2(X)$. Since $\pi$ and $\phi$ restricted to $\text{B$\ell$}_2(Y)$ are again birational, the same computation as in the cominuscule case gives the relation
$$I_2(\tau,\tau',\tau'')_Y = s_*r^*\tau \cup s_*r^*\tau' \cup s_*r^*\tau''$$
where $I_2(-,-,-)_Y$ denotes the Gromov-Witten invariants in degree $2$ in $Y$. But the diagram
$$\xymatrix{Z_2(X) \ar[r]^-p & X \\
Z_2(Y) \ar[r]^-r \ar@{^(->}[u]^-i & Y \ar@{^(->}[u]_-j}$$
is Cartesian with $p$ flat (it is a locally trivial fibration since $X = G/P$, see for example \cite[Proposition 2.3]{BCMP}). In particular we have $i_*r^* = p^*j_*$. We thus get $s_*r^*\tau = q_*i_*r^* \tau = q_*p^*j_*\tau$. We deduce
$I_2(\tau,\tau',\tau'')_Y
= I_2(j_*\tau , j_*\tau' , j_*\tau'')_X.$
In particular the result follows since $I_2(-,-,\{{\rm pt}\})_X$ is positive definite.
\begin{remark}
\label{rem-thm-def-pos}
Note that we proved more than Theorem \ref{thm-def-pos}. Indeed, for any cohomology classes $\tau,\tau',\tau'' \in \HH(Y)$ we have the equality
$$I_2(\tau,\tau',\tau'')_Y
= I_2(j_*\tau , j_*\tau' , j_*\tau'')_X.$$
\end{remark}
\subsubsection{Examples}
\label{sub-exqm}
Several linear sections satisfying Theorem \ref{thm-def-pos} are classical.
1. Hyperplane sections of the Gra{\ss}mann variety $\Gr(2,n)$. The Pl\"ucker embedding is given by the representation $\Lambda^2{\mathbb C}^n$. A general linear section corresponds to a general symplectic form on ${\mathbb C}^n$ and the hyperplane section is the subvariety of isotropic $2$-dimensional subspaces.
For $n = 2p$ even, the variety $Y = \IG(2,2p)$ is homogeneous.
For $n = 2p + 1$ odd, the variety $Y = \IG(2,2p+1)$ is not homogeneous. This variety has two orbits under its automorphism group and is known as the odd symplectic Gra{\ss}mann variety of lines. We refer to \cite{mihai,pech,pasquier,pp} for several geometric results on this variety.
2. Hyperplane sections of $\O\mathbb{P}^2 = E_6/P_6$ are homogeneous under the group $F_4$. Actually we have $Y = F_4/P_4$ which is the coadjoint variety of type $F_4$ (see \cite{land-man,adjoint}).
3. For $X = \Gr(2,5)$ and $k = 3$, then $Y =V_5$ is the \emph{del Pezzo} threefold of index $2$ and degree $5$.
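The numerology of this example can be checked directly: since $\dim \Gr(2,5) = 6$ and $c_1(\Gr(2,5)) = 5$, a codimension $3$ linear section satisfies
$$\dim Y = 6 - 3 = 3, \qquad c_1(Y) = 5 - 3 = 2,$$
so $Y$ is indeed a threefold of index $2$; its degree $5$ is the degree of $\Gr(2,5)$ in its Pl\"ucker embedding.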
4. Note that we obtain almost all Fano varieties $X$ of coindex 3, \emph{i.e.} with $c_1(X) = \dim X -2$: we obtain all Fano varieties $X$ of coindex 3 with genus $g \in [7,10]$, missing only the extremal values $g = 6$ and $g = 12$ (see \cite[Theorem 5.2.3]{fano}).
\subsection{Adjoint varieties}
\label{subsection-adjoint}
The last family of varieties with $Q_X$ positive definite consists of the adjoint varieties (see \cite{adjoint}). These are homogeneous spaces and can be defined, for $G$ a semisimple group, as the closed $G$-orbit in $\mathbb{P}\mathfrak g$, where $\mathfrak g$ is the Lie algebra of $G$ and $G$ acts on its Lie algebra via the adjoint representation.
The list of adjoint varieties is given in Table 3. Note that the following equality holds: $\dim X = 2 c_1(X) - 1$ (except in type $C_n$).
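This relation is readily verified on the entries of Table 3, for instance:
$$\begin{array}{ll}
G_2/P_1: & \dim X = 5 = 2 \cdot 3 - 1, \\
F_4/P_1: & \dim X = 15 = 2 \cdot 8 - 1, \\
E_8/P_8: & \dim X = 57 = 2 \cdot 29 - 1, \\
\end{array}$$
while in type $C_n$ we have $\dim \mathbb{P}^{2n-1} = 2n - 1$ and $2 c_1(X) - 1 = 4n - 1$.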
The following result was proved in \cite[Proof of Proposition 6.5]{adjoint}.
\begin{prop}
\label{prop-def-pos}
Let $X$ be an adjoint variety. Then $Q_X$ is positive definite.
\end{prop}
$$\begin{array}{ccccc}
Type & variety & diagram & dimension & \hspace*{5mm}index \hspace*{5mm}\\
A_{n} &\hspace{5mm} \textrm{Fl}(1,n\ ;n+1) \hspace{5mm} & \setlength{\unitlength}{2.5mm}
\begin{picture}(14,3)(-2,0)
\put(0,0){$\circ$}
\multiput(2,0)(2,0){5}{$\circ$}
\multiput(.73,.4)(2,0){5}{\line(1,0){1.34}}
\put(0,0){$\bullet$}
\put(10,0){$\bullet$}
\end{picture} & 2 n - 1 & (n,n) \\
B_n & \hspace{5mm} \OG(2,{2n+1}) \hspace{5mm} &
\setlength{\unitlength}{2.5mm}
\begin{picture}(14,3)(-2,0)
\put(0,0){$\circ$}
\multiput(2,0)(2,0){5}{$\circ$}
\multiput(.73,.4)(2,0){4}{\line(1,0){1.34}}
\multiput(8.73,.2)(0,.4){2}{\line(1,0){1.34}}
\put(2,0){$\bullet$}
\end{picture} & 4 n - 5 & 2 n - 2 \\
C_n & \hspace{5mm} \mathbb{P}^{2n-1} \hspace{5mm} &
\setlength{\unitlength}{2.5mm}
\begin{picture}(14,3)(-2,0)
\put(0,0){$\circ$}
\multiput(2,0)(2,0){5}{$\circ$}
\multiput(.73,.4)(2,0){4}{\line(1,0){1.34}}
\multiput(8.73,.2)(0,.4){2}{\line(1,0){1.34}}
\put(0,0){$\bullet$}
\end{picture} & 2 n - 1 & 2 n \\
D_n & \hspace{5mm} \OG(2,2n)\hspace{5mm} &
\setlength{\unitlength}{2.5mm}
\begin{picture}(14,3)(-2,0)
\put(2,0){$\circ$}
\multiput(0,0)(2,0){5}{$\circ$}
\multiput(0.73,.4)(2,0){4}{\line(1,0){1.34}}
\put(10,1.2){$\circ$}
\put(2,0){$\bullet$}
\put(10,-1.1){$\circ$}
\put(8.6,0.2){\line(5,-3){1.5}}
\put(8.6,0.5){\line(5,3){1.5}}
\end{picture}
& 4 n - 7 & 2 n - 3 \\
E_6 &\hspace{5mm} E_6/P_2 \hspace{5mm}&
\setlength{\unitlength}{2.5mm}
\begin{picture}(14,3)(-2,0)
\put(0,0){$\circ$}
\multiput(2,0)(2,0){4}{$\circ$}
\multiput(.73,.4)(2,0){4}{\line(1,0){1.34}}
\put(0,0){$\circ$}
\put(4,-2){$\bullet$}
\put(4.42,-1.28){\line(0,1){1.36}}
\end{picture}
& 21 & 11 \\
E_7 & \hspace{5mm}E_7/P_1\hspace{5mm} &
\setlength{\unitlength}{2.5mm}
\begin{picture}(14,3)(-2,0)
\put(0,0){$\circ$}
\multiput(2,0)(2,0){5}{$\circ$}
\multiput(.73,.4)(2,0){5}{\line(1,0){1.34}}
\put(0,0){$\bullet$}
\put(4,-2){$\circ$}
\put(4.42,-1.28){\line(0,1){1.36}}
\end{picture}
& 33 & 17 \\
E_8 & \hspace{5mm}E_8/P_8\hspace{5mm} &
\setlength{\unitlength}{2.5mm}
\begin{picture}(14,3)(-2,0)
\put(0,0){$\circ$}
\multiput(2,0)(2,0){6}{$\circ$}
\multiput(.73,.4)(2,0){6}{\line(1,0){1.34}}
\put(12,0){$\bullet$}
\put(4,-2){$\circ$}
\put(4.42,-1.28){\line(0,1){1.36}}
\end{picture}
& 57 & 29 \\
F_4 & \hspace{5mm} F_4/P_1 \hspace{5mm} &
\setlength{\unitlength}{2.5mm}
\begin{picture}(14,3)(-2,0)
\put(0,0){$\circ$}
\multiput(2,0)(2,0){3}{$\circ$}
\multiput(.73,.4)(2,0){1}{\line(1,0){1.34}}
\multiput(4.73,.4)(2,0){1}{\line(1,0){1.34}}
\multiput(2.73,.2)(0,.4){2}{\line(1,0){1.34}}
\put(0,0){$\bullet$}
\end{picture} & 15 & 8 \\
G_2 & \hspace{5mm} G_2/P_1 \hspace{5mm} &
\setlength{\unitlength}{2.5mm}
\begin{picture}(14,3)(-2,0)
\put(0,0){$\circ$}
\multiput(2,0)(2,0){1}{$\circ$}
\multiput(.73,.4)(2,0){1}{\line(1,0){1.34}}
\multiput(0.73,.2)(0,.4){2}{\line(1,0){1.34}}
\put(0,0){$\bullet$}
\end{picture} & 5 & 3 \\
\end{array}
$$
\medskip
\medskip
\centerline{Table 3. List of adjoint varieties.}
\section{Semisimplicity of the quantum cohomology}
In this section we apply the results in Section \ref{section-rad} to the varieties of Section \ref{section-QX} and get results on the semisimplicity of their quantum cohomology.
\subsection{Linear sections of cominuscule homogeneous spaces}
We consider the varieties $Y$ obtained as linear sections of a cominuscule homogeneous space $X$ satisfying the assumptions of Theorem \ref{thm-def-pos}. These varieties are listed in Remark \ref{rem-def-pos}.
\subsubsection{Multiplication by degree $2$ classes} We want to understand the endomorphism $E_h^Y$ of $\QH(Y)$ given by multiplication by the hyperplane class $h$. Let $j : Y \to X$ be the inclusion and let $\tau \in \HH(Y)$. We denote by $h$ the hyperplane class in both $\HH(X)$ and $\HH(Y)$. The projection formula gives the following result.
\begin{lemma}
\label{lemm-class}
We have $j_*(h \cup \tau) = h \cup j_*\tau$.
\end{lemma}
As in the proof of Theorem \ref{thm-def-pos} (see also Remark \ref{rem-thm-def-pos}) we prove a result comparing Gromov-Witten invariants on $X$ and $Y$. Write $I_d(-,-,-)_X$ and $I_d(-,-,-)_Y$ for Gromov-Witten invariants of degree $d$ in $X$ and $Y$.
\begin{lemma}
\label{lemm-pos-gen}
Let $\tau,\tau' \in \HH(Y)$ be cohomology classes such that the following conditions hold.
\begin{itemize}
\item $\deg \tau + \deg \tau' = 2 \dim Y + 2 c_1(Y) -2$.
\item There exist varieties $S,S'$ in $Y$ with $j_*\tau = [S]$, $j_*\tau' = [S']$ which are in general position in $X$.
\end{itemize}
Then $I_1(\tau,\tau',h)_Y = I_1(j_*\tau,j_*\tau',h)_X$.
\end{lemma}
\begin{proof}
Note that the equality on degrees is equivalent to $\codim_X S +
\codim_X S' = \dim X + c_1(X) -1$ and $\codim_Y S + \codim_Y S' = \dim
Y + c_1(Y) -1$. This together with the second condition implies the following property: the scheme of $2$-pointed degree $1$ stable maps to $X$ passing through $S$ and $S'$ is finite and reduced. Remark that any degree $1$ stable map to $Y$ passing through $S$ and $S'$ is a degree $1$ stable map to $X$ passing through $S$ and $S'$, and conversely since $Y$ is a linear section of $X$. In particular the scheme of degree $1$ stable maps to $Y$ passing through $S$ and $S'$ is finite and reduced. This implies that the moduli space of degree $1$ stable maps to $Y$ has the expected dimension and that the above number of stable maps is equal to both $I_1(\tau,\tau',h)_Y$ and $I_1(j_*\tau,j_*\tau',h)_X$.
\end{proof}
\begin{prop}
Let $a,b$ be integers in $[0,\dim Y]$ such that
\begin{itemize}
\item $a + b = \dim Y + c_1(Y) - 1$.
\item $k \geq c_1(Y)$.
\end{itemize}
Then there exist bases of classes $\tau,\tau'$ in $j^*H^{2a}(X,{\mathbb R})$ and $j^*H^{2b}(X,{\mathbb R})$ such that the assumptions of Lemma \ref{lemm-pos-gen} are satisfied.
\end{prop}
\begin{remark}
Note that $a,b \geq c_1(Y) -1$. By the Hard Lefschetz Theorem and since $2 c_1(Y) > \dim Y$ we get $j^*H^{2a}(X,{\mathbb R}) = H^{2a}(Y,{\mathbb R})$ except if $a = c_1(Y) -1$ and $\dim Y = 2 c_1(Y) -1$.
\end{remark}
\begin{proof}
We may assume $a \geq b$. We shall construct general hyperplane sections $Y = X \cap L_k$ with $L_k$ a general linear subspace of codimension $k$ satisfying the proposition.
\textbf{Case 1.} We first deal with the case where $a = \dim Y \textrm{ or } \dim Y - 1$, \emph{i.e.} there is a unique class $\tau$ in $H^{2a}(Y,{\mathbb R})$: the class $\{{\rm pt}\}$ of a point or the class $\ell$ of a line.
We prove that $\tau$ and the pull-back via $j$ of the Schubert basis in $H^{2b}(X,{\mathbb Z})$ satisfy the proposition. Let $S$ be a Schubert variety of codimension $b$ in $X$ and let $T$ be a point or a line (depending on whether $a = \dim Y$ or $a = \dim Y -1$). Let $\mathcal{S}$ be the family of $G$-translates of $S$ and $\mathcal{T}$ the family of $G$-translates of $T$. Let $\mathcal{L}_k$ be the variety parametrising linear subspaces of codimension $k$ in the Pl\"ucker embedding of $X$. We have a rational morphism $f:G \times \mathcal{L}_k \to \mathcal{H}$ where $\mathcal{H}$ is a Hilbert scheme of subvarieties in $X$ defined by $(g,L_k) \mapsto gS \cap L_k$. Let $\mathcal{V}$ be the closure of its image.
We consider $I = \{(V,g'T,L_k) \in \mathcal{V} \times \mathcal{T} \times \mathcal{L}_k \ | \ V \subset L_k \supset g'T\}$. We have a diagram
$$\xymatrix{I \ar[r]^-p \ar[d]_-q & \mathcal{V} \times \mathcal{T} \\
\mathcal{L}_k. & \\}$$
Since $b = \dim Y + c_1(Y) -1 -a$, the Schubert variety $S$ is contained in a linear section of $X$ of codimension $\dim Y + c_1(Y) -1 -a$ and thus $V = gS \cap L_k$ is contained in a linear section of $X$ of codimension $\dim Y + c_1(Y) -1 - a + k$. In particular since the space of $T$ has dimension $\dim Y - a$, the variety $V \cup g'T$ is contained in a linear section of codimension $c_1(Y) -1 + k \geq k$. This proves that the map $p$ is surjective. The map $q$ is also surjective: for $L_k \in \mathcal{L}_k$ pick for $g'T$ a point or a line in $X \cap L_k$, pick $g \in G$ general and set $V = g S \cap L_k$.
In particular, for $L_k \in \mathcal{L}_k$ general, there is a translate $gS$ and a point or a line $T \subset X \cap L_k$ such that $V = gS \cap L_k$ and $T$ are in general position in $X$. Setting $Y = X \cap L_k$ proves the result.
\textbf{Case 2.} We now consider the other cases. We prove that the pull-backs via $j$ of the Schubert basis in $H^{2a}(X,{\mathbb R})$ and $H^{2b}(X,{\mathbb R})$ satisfy the proposition. Let $S$ and $S'$ be Schubert varieties in $X$ of respective codimension $a$ and $b$. Denote by $\mathcal{S}$ and $\mathcal{S}'$ the families of $G$-translates of $S$ and $S'$. These are projective homogeneous spaces since the stabilisers of $S$ and $S'$ contain a Borel subgroup. Let $\mathcal{L}_k$ be the projective variety of all linear subspaces of codimension $k$ in the Pl\"ucker embedding of $X$.
We have a rational morphism $f:\mathcal{S} \times \mathcal{S}' \times \mathcal{L}_k \times \mathcal{L}_k \to \mathcal{H} \times \mathcal{H}'$, where $\mathcal{H}$ and $\mathcal{H}'$ are Hilbert schemes of subvarieties in $X$, defined by $(gS,g'S',L_k,L_k') \mapsto (gS \cap L_k, g'S' \cap L_k')$. Let $\mathcal{V} \times \mathcal{V'}$ be the closure of the image of $f$.
\begin{lemma}
Let $\Delta : \mathcal{S} \times \mathcal{S}' \times \mathcal{L}_k \to \mathcal{S} \times \mathcal{S}' \times \mathcal{L}_k \times \mathcal{L}_k$ be the map induced by the diagonal embedding of $\mathcal{L}_k$. Then $\mathcal{V} \times \mathcal{V'}$ is the image of $f \circ \Delta$.
\end{lemma}
Assume that the lemma holds. Let $O$ be the open subset in $\mathcal{V} \times \mathcal{V'}$ of subvarieties in general position and let $\mathcal{L}_k^\circ$ be the non-empty open subset in $\mathcal{L}_k$ of linear subspaces having a smooth intersection with $X$. Then $(f \circ \Delta)^{-1}(O)$ is non-empty and open in $\mathcal{S} \times \mathcal{S}' \times \mathcal{L}_k$ as well as ${\rm pr}_3^{-1}(\mathcal{L}_k^\circ)$ where ${\rm pr}_3$ is the projection on the third factor in $\mathcal{S} \times \mathcal{S}' \times \mathcal{L}_k$. Since $\mathcal{S} \times \mathcal{S}' \times \mathcal{L}_k$ is irreducible, these subsets intersect. Let $(gS,g'S',L_k)$ be in the intersection. Then $Y = X \cap L_k$ and $V = Y \cap gS$, $V' = Y \cap g'S'$ satisfy the desired property since $V$ and $V'$ are in general position in $X$.
We are left with proving the lemma. Equivalently we need to prove that for $(gS,L_k) \in \mathcal{S} \times \mathcal{L}_k$ and $(g'S',L_k') \in \mathcal{S}' \times \mathcal{L}_k$ such that $\codim_{gS} gS \cap L_k = k = \codim_{g'S'} g'S' \cap L'_k$, there exists $L_k'' \in \mathcal{L}_k$ such that $gS \cap L_k \subset L_k'' \supset g'S' \cap L'_k$. Let $W$ be the vector space defining the Pl\"ucker embedding and let $\scal{gS}$ and $\scal{g'S'}$ be the spans in $W$ of the elements whose classes are in $gS$ and $g'S'$. It is enough to prove that $\dim \scal{gS} + \dim \scal{g'S'} \leq \dim W + k$. We prove this inequality by a case-by-case analysis.
For $X = \LG(3,6)$, we have $k = 1$ and $\dim Y + c_1(Y) -1 = 7$ thus $a \geq \dim Y -1$ and there is nothing to prove.
Note that in the other cases $X$ is a minuscule homogeneous space. This means that the weights of $W$ for a maximal torus form a single orbit under the action of the Weyl group of $G$. This in particular implies that there is a correspondence between Schubert varieties and weights of $W$. As a consequence we get the equality
$$\dim \scal{S} = |\{S'' \subset S \ | \ \textrm{$S''$ a Schubert variety}\}|.$$
In words: the dimension of the span $\scal{S}$ of a Schubert variety $S$ is equal to the number of Schubert varieties contained in $S$. This translates the inequality $\dim \scal{gS} + \dim \scal{g'S'} \leq \dim W + k$ into a combinatorial computation, and an easy case-by-case check gives the result.
\end{proof}
We shall now define maps between subspaces of $\QH(Y)$ and $\QH(X)$. Recall that we have morphisms $j^* : H^m(X,{\mathbb Z}) \to H^m(Y,{\mathbb Z})$ and $j_* : H_m(Y,{\mathbb Z}) \to H_m(X,{\mathbb Z})$ which become isomorphisms for $m < \dim Y$ by the Lefschetz Theorem. Let $A(Y)$ and $A(X)$ be the algebras obtained from $\QH(Y)$ and $\QH(X)$ by quotienting by the ideal $(q - 1)$. Recall that these algebras are respectively ${\mathbb Z}/2 c_1(Y){\mathbb Z}$ and ${\mathbb Z}/2 c_1(X){\mathbb Z}$ graded. We write $A_a(Y)$ and $A_a(X)$ for their degree $a$ graded pieces. We have
$$\begin{array}{rll}
A_a(X) = & H^a(X,{\mathbb R}) \oplus H^{2 c_1(X) + a}(X,{\mathbb R}) & \textrm{for $a \in [0, 2 \dim Y - 2 c_1(Y)]$} \\
A_a(Y) = & H^a(Y,{\mathbb R}) \oplus H^{2 c_1(Y) + a}(Y,{\mathbb R}) & \textrm{for $a \in [0, 2 \dim Y - 2 c_1(Y)]$} \\
A_{2 c_1(Y) -2}(X) = & H^{2 c_1(Y) - 2}(X,{\mathbb R}) &
\textrm{for $2 c_1(Y) - 1 > \dim Y$} \\
A_{2c_1(Y) - 2}(Y) = & H^{2 c_1(Y) - 2}(Y,{\mathbb R}) &
\textrm{for $2 c_1(Y) - 1 > \dim Y$} \\
A_{2 c_1(Y) -2}(X) = & H^{2 c_1(Y) - 2}(X,{\mathbb R}) \oplus H^{2 \dim (X)}(X,{\mathbb R}) &
\textrm{for $2 c_1(Y) - 1 = \dim Y$} \\
A_{2c_1(Y) - 2}(Y) = & H^{2 c_1(Y) - 2}(Y,{\mathbb R}) \oplus H^{2 \dim (Y)}(Y,{\mathbb R}) &
\textrm{for $2 c_1(Y) - 1 = \dim Y$} \\
\end{array}$$
We define a morphism $J$ between these spaces as follows:
$$\begin{array}{rll}
J = j^* \oplus j_*^{-1} : & A_a(X) \to A_a(Y) & \textrm{for $a \in [2 c_1(Y), 2 \dim Y]$} \\
J = {j^*} : & A_{2c_1(Y)-2}(X) \to A_{2c_1(Y)-2}(Y) & \textrm{for $2 c_1(Y) - 1 > \dim Y$} \\
J = {j^* \oplus j_*^{-1}} : & A_{2 c_1(Y) -2}(X) \to A_{2c_1(Y) - 2}(Y) & \textrm{for $2 c_1(Y) - 1 = \dim Y$} \\
\end{array}
$$
Note that by the Lefschetz Theorem and because $2 c_1(Y) > \dim Y$, the first map is an isomorphism for all $a \in [2 c_1(Y), 2 \dim Y]$.
\begin{cor}
\label{coro-rest}
For all $a \in [2 c_1(Y), 2 \dim Y - 2]$, we have commutative diagrams
$$\xymatrix{
A_a(X) \ar[r]^-J_-\sim \ar[d]_-{E_h^X} & A_a(Y) \ar[d]^-{E_h^Y} \\
A_{a+2}(X) \ar[r]^-J_-\sim & A_{a+2}(Y)} \ \ \textrm{ and } \ \ \xymatrix{
A_{2c_1(Y)-2}(X) \ar[r]^-J \ar[d]_-{E_{h^{k+1}}^X} & A_{2c_1(Y)-2}(Y) \ar[d]^-{E_{h}^Y} \\
A_0(X) \ar[r]^-J_-\sim & A_0(Y).}$$
\end{cor}
\begin{proof}
For a quantum product $a {\!\ \star_{0}\!\ } b$ we write $a {\!\ \star_{0}\!\ } b = \sum_d q^d (a {\!\ \star_{0}\!\ } b)_d$.
We start with the first square. Let $\sigma \in A_a(X)$. We have $\deg \sigma = a$ or $\deg \sigma = 2c_1(X) + a$. In the first case $J\sigma = j^*\sigma$ and $\deg J\sigma + 2 = \deg \sigma + 2 < 2 c_1(Y) < 2 c_1(X)$. In particular $E_h^X(\sigma) = h \cup \sigma$ and $E_h^Y(J\sigma) = h \cup J\sigma$ and we get $E_h^YJ(\sigma) = h \cup j^*\sigma = j^*(h \cup \sigma) = JE_h^X(\sigma)$. In the second case we have $\sigma = j_* \tau$ with $\tau = J\sigma$. We get $JE_h^X(\sigma) = J(h {\!\ \star_{0}\!\ } j_*\tau)$ and $E_h^YJ(\sigma) = E_h^Y(\tau) = h {\!\ \star_{0}\!\ } \tau$. By Lemma \ref{lemm-class} we have $j_*(h {\!\ \star_{0}\!\ } \tau)_0 = (h {\!\ \star_{0}\!\ } j_*\tau)_0$ so the result is true for the classical part of the quantum product. The quantum parts of $E_h^YJ(\sigma)$ and $JE_h^X(\sigma)$ are of the form
$$(h {\!\ \star_{0}\!\ } \tau)_1 = I_1(h,\tau,\ell)_Y q h \textrm{ and } J((h {\!\ \star_{0}\!\ } j_*\tau)_1) = I_1(h,j_*\tau,\ell)_X qh.$$
Since $2 c_1(Y) > \dim Y$, the Hard Lefschetz Theorem implies that there is a $\sigma''$ with $\tau = j^*\sigma''$. Applying the above proposition, we get $I_1(h,\tau,\ell)_Y = I_1(h,j_*\tau,\ell)_X$ proving the result.
We now consider the second square. The possible degrees for $\sigma \in A_{2 c_1(Y) - 2}(X)$ are $2 c_1(Y) - 2$ or $2 \dim X$ if $\dim Y = 2c_1(Y) - 1$. First assume $\deg \sigma = 2 c_1(Y) -2$ and let $\tau = J(\sigma) = j^* \sigma$. We have $JE_{h^{k+1}}^X(\sigma) = J(h^{k+1} {\!\ \star_{0}\!\ } \sigma)$ and $E_h^YJ(\sigma) = h {\!\ \star_{0}\!\ } \tau$. For degree reasons, we have $h^k = [Y]$ and $[Y] {\!\ \star_{0}\!\ } \sigma = [Y] \cup \sigma$. We get $E_{h^{k+1}}^X(\sigma) = h^{k+1} {\!\ \star_{0}\!\ } \sigma = h {\!\ \star_{0}\!\ } ([Y] \cup \sigma) = h {\!\ \star_{0}\!\ } j_*j^* \sigma$. By Lemma \ref{lemm-class}, we have $j_*(h {\!\ \star_{0}\!\ } \tau)_0 = j_*(h \cup \tau) = h \cup j_*\tau = h \cup j_*j^*\sigma$ and the result is true for the classical part of the quantum product. The quantum parts of $E_h^YJ(\sigma)$ and $JE_{h^{k+1}}^X(\sigma)$ are of the form
$$(h {\!\ \star_{0}\!\ } \tau)_1 = I_1(h,\tau,\{{\rm pt}\})_Y q \textrm{ and } J((h {\!\ \star_{0}\!\ } j_*\tau)_1) = I_1(h,j_*\tau,\{{\rm pt}\})_X q.$$
By the above proposition, we get $I_1(h,\tau,\{{\rm pt}\})_Y = I_1(h,j_*\tau,\{{\rm pt}\})_X$ proving the result for $\deg \sigma = 2 c_1(Y) -2$.
Finally assume $\dim Y = 2 c_1(Y) -1$ and $\deg \sigma = 2 \dim X$. We have $\sigma = \{{\rm pt}\}$ and $J(\sigma) = \{{\rm pt}\}$. We get $JE_{h^{k+1}}^X(\sigma) = J(h^{k+1} {\!\ \star_{0}\!\ } \{{\rm pt}\})$ and $E_h^YJ(\sigma) = h {\!\ \star_{0}\!\ } \{{\rm pt}\}$. We have $h {\!\ \star_{0}\!\ } \{{\rm pt}\} = q \sum_\gamma I_1(h,\{{\rm pt}\},\gamma)_Y {\rm PD}(\gamma) + I_2(h,\{{\rm pt}\},\{{\rm pt}\})_Y q^2$
where the sum runs over a basis of $H^{2 c_1(Y) - 2}(Y,{\mathbb R})$. By the Lefschetz Theorem the classes $\delta$ with $J \delta = j^*\delta = \gamma$ form a basis of $H^{2 c_1(Y) - 2}(X,{\mathbb R})$. By the above proposition, we have $I_1(h,\{{\rm pt}\},\gamma)_Y = I_1(h,\{{\rm pt}\},j_*\gamma)_X$. On the other hand, applying Remark \ref{rem-thm-def-pos} we have $I_2(h,\{{\rm pt}\},\{{\rm pt}\})_Y = I_2(j_*h,\{{\rm pt}\},\{{\rm pt}\})_X$. We get
$$\begin{array}{rl}
E_h^YJ(\sigma) = & q \sum_\delta I_1(h,\{{\rm pt}\},j_*j^*\delta)_X {\rm PD}(j^*\delta) + I_2(j_*h,\{{\rm pt}\},\{{\rm pt}\})_X q^2 \\
= & q \sum_\delta I_1(h,\{{\rm pt}\},[Y] \cup \delta)_X j^*{\rm PD}(\delta) + I_2([Y] \cup h,\{{\rm pt}\},\{{\rm pt}\})_X q^2 \\
= & q \sum_\delta I_1(h,\{{\rm pt}\},[Y] {\!\ \star_{0}\!\ } \delta)_X j^*{\rm PD}(\delta) + I_2([Y] {\!\ \star_{0}\!\ } h,\{{\rm pt}\},\{{\rm pt}\})_X q^2 \\
= & q \sum_\delta I_1([Y] {\!\ \star_{0}\!\ } h,\{{\rm pt}\},\delta)_X j^*{\rm PD}(\delta) + I_2(h^{k+1},\{{\rm pt}\},\{{\rm pt}\})_X q^2\\
= & q j^*\sum_\delta I_1(h^{k+1},\{{\rm pt}\},\delta)_X {\rm PD}(\delta) + I_2(h^{k+1},\{{\rm pt}\},\{{\rm pt}\})_X q^2\\
= & JE_{h^{k+1}}^X(\{{\rm pt}\}). \end{array}
$$
This completes the proof.
\end{proof}
\begin{remark}
For $\dim Y < 2 c_1(Y) -2$ using the same arguments as above we get a slightly better factorisation. We have an isomorphism $J$ defined by
$$J = j_*^{-1} : A_{-2}(X) = H^{2 c_1(X) - 2}(X,{\mathbb R}) \to A_{-2}(Y) = H^{2 c_1(Y) - 2}(Y,{\mathbb R}).$$
For $\dim Y < 2 c_1(Y) - 2$, we have a commutative diagram
$$\xymatrix{A_{-2}(X) \ar[r]^-J_-\sim \ar[d]_-{E_h^X} & A_{-2}(Y) \ar[d]^-{E_h^Y} \\
A_0(X) \ar[r]^-J_-\sim & A_0(Y).}$$
\end{remark}
\subsection{Semisimple small quantum cohomology}
In this subsection, we prove the following semisimplicity result.
\begin{thm}
\label{thm-semisimple-Y}
Let $Y$ be a general hyperplane section with $2 c_1(Y) > \dim Y$ of the following homogeneous space $X$:
$$\begin{array}{llll}
\Gr(2,2n+1) & & & F_4/P_1 \\
\OG(5,10) & & & \OG(2,2n+1) \\
\LG(3,6) & & & G_2/P_1.\\
\end{array}$$
Then $\QH(Y)$ is semisimple.
\end{thm}
\begin{proof}
Note that for the varieties of the second column, we have $2 c_1(X) -1 = \dim X$, thus we must have $Y = X$. By Theorem \ref{thm-def-pos} and Proposition \ref{prop-def-pos}, the quadratic form $Q_Y$ is positive definite. By Theorem \ref{thm-alg-qh}, it is enough to prove that $\Ker E_h^Y = 0$ \emph{i.e.} that $E_h^Y$ is bijective. It is therefore enough to prove that $h$ is invertible \emph{i.e.} that $E_h^Y$ is surjective onto $A_0(Y)$. By Corollary \ref{coro-rest} it is enough to prove that $E_h^X$ is surjective onto $A_0(X)$. This is now an easy check using \cite{cmp1} and \cite{adjoint}.
\end{proof}
\begin{remark}
Note that this result together with the result of \cite{HMT} implies that the cohomology of the varieties $Y$ above is even and of pure type $(p,p)$.
\end{remark}
\subsection{Structure of the radical}
In this subsection, we describe the radical of $\QH(Y)$ for some Fano varieties $Y$.
\begin{prop}
\label{prop-loc-rad}
Let $Y$ be a general hyperplane section with $2 c_1(Y) > \dim Y$ of the following homogeneous space $X$:
$$\begin{array}{llll}
\Gr(2,2n) & & & \OG(2,2n) \\
\OG(6,12) & & & E_6/P_2 \\
E_6/P_6 & & & E_7/P_1\\
E_7/P_7 & & & E_8/P_8.\\
\end{array}$$
Then
$$R(Y) = \Ker E_h^Y \cap \bigoplus_{2 c_1(Y) \not\ \!| \ k}\QH^{k}(Y).$$
\end{prop}
\begin{proof}
Note that for the varieties of the second column, we have $2 c_1(X) -1 = \dim X$ thus we must have $Y = X$.
We first prove $R(Y) \subset \Ker E_h^Y$. By Theorem \ref{thm-def-pos} and Proposition \ref{prop-def-pos}, the quadratic form $Q_Y$ is positive definite. By Theorem \ref{thm-alg-qh}, it is enough to prove that $A_0(Y) = \Ker E_h^Y \cap A_0(Y) \oplus {\rm Im} E_h^Y \cap A_0(Y)$. By Corollary \ref{coro-rest} this is equivalent to the same statement in $A_0(X)$. This is now an easy check using \cite{cmp1} and \cite{adjoint}.
Note that in $\QH_0(Y)$, there is a unique vector $K \in \Ker E_h^Y$ modulo scalar,
and this vector is of the form $K = \lambda + v$ with $\lambda \in
{\mathbb R}\setminus \{0\}$ and $v \in {\rm Im} E_h^Y$. We thus have $K^n =
\lambda^{n-1} K$ and $K$ is not nilpotent. We now prove that any
element $K \in \Ker E_h^Y \cap \bigoplus_{2 c_1(Y) \not\ \!|
\ k}\QH^{k}(Y)$ is nilpotent. Remark that for degree reasons some
power $K^n$ of $K$ lies in ${\rm Im} E_h^Y$ (by Corollary
\ref{coro-rest} and a case-by-case inspection of $X$; we refer to
Subsection \ref{sec-ig2n} for a detailed proof of this in the case $X
= \Gr(2,2n)$). But $K^n$ also lies in $\Ker E_h^Y$, thus $K^n = 0$.
\end{proof}
\begin{example}
Let $X = \Gr(2,6)$ and $Y$ be a general hyperplane section. Then $Y$ is
isomorphic to an isotropic Gra{\ss}mann variety $\IG(2,6)$. We have
$\dim Y = 7$ and $c_1(Y) = 5$. The dimensions of the graded parts of
$A(Y)$ are
$$\begin{array}{l|ccccc}
k & 0 & 2 & 4 & 6 & 8 \\
\hline
A_k(Y) & 3 & 2 & 3 & 2 & 2 \\
\end{array}$$
One easily checks that $\Ker E_h^Y$ has dimension 2: there is an
element $K_0$ of degree $0$ and an element $K_4$ of degree $4$ in
$\Ker E_h^Y$. The image ${\rm Im} E_h^Y$ is a complement of
$\Ker E_h^Y$. As in the above proposition, $K_0$ is not nilpotent. The
square $K_4^2$ has degree $8$, so it lies in ${\rm Im} E_h^Y$; since it
also lies in $\Ker E_h^Y$ we get $K_4^2 = 0$ and
$R(Y) = \scal{K_4}$. We recover the example in \cite[Section
7]{semisimple}.
\end{example}
\section{Big quantum cohomology}
In this section we consider the case of two Fano varieties $Y$ obtained as
hyperplane section of a cominuscule homogeneous space $X$ whose small
quantum cohomology $\QH(Y)$ is not semisimple. We however prove that
the big quantum cohomology $\BQH(Y)$ is semisimple. These are the
first examples of semisimplicity of the big quantum cohomology in the
presence of non semisimple small quantum cohomology.
The varieties we consider are $Y = \IG(2,2n)$ obtained as hyperplane section of $X = \Gr(2,2n)$ and $Y = F_4/P_4$ obtained as hyperplane section of $X = E_6/P_6$. Note that both varieties are homogeneous and actually \emph{coadjoint varieties} in the sense of \cite{adjoint}. Their small quantum cohomology is not semisimple but well understood. We refer to \cite{adjoint} for more details. In the next subsection, we recall a few facts about $\QH(Y)$ for $Y$ coadjoint.
\subsection{Quantum cohomology of coadjoint varieties}
Let $Y$ be one of the following two varieties $Y = \IG(2,2n)$ or $Y = F_4/P_4$, which are homogeneous under the action of a reductive group $G$ of type $C_n$ or $F_4$ respectively.
\subsubsection{Cohomology and short roots}
The cohomology of a coadjoint variety $Y$ homogeneous under the action of a reductive group $G$ is easily described using Schubert classes. There are several indexing sets for Schubert varieties. We will choose the indexing set described in \cite{adjoint}. Let $R_s$ be the set of short roots in $R$ the root system of $G$. For each root ${\alpha} \in R_s$, there is a cohomology class $\sigma_{\alpha}$ and the family $(\sigma_{\alpha})_{{\alpha} \in R_s}$ forms a basis of $\HH(X)$.
Let $n$ be the rank of the group (and of the root system $R$). We choose a basis $({\alpha}_1 , \cdots , {\alpha}_n)$ of the root system with notation as in \cite{bou}. For a root ${\alpha}$ we have an expression
$${\alpha} = \sum_{i = 1}^n a_i {\alpha}_i$$
with $a_i \in {\mathbb Z}$ for all $i \in [1,n]$. We define the height ${\textrm{ht}}({\alpha})$ of ${\alpha} \in R$ by
$${\textrm{ht}}({\alpha}) = \sum_{i = 1}^n a_i.$$
Let $\theta$ be the highest short root of $R$. The above indexing satisfies many nice properties. We have (see \cite[Proposition 2.9]{adjoint})
$$\deg(\sigma_{\alpha}) = \left\{ \begin{array}{ll}
2({\textrm{ht}}(\theta) - {\textrm{ht}}({\alpha})) & \textrm{ for ${\alpha}$ positive,} \\
2({\textrm{ht}}(\theta) - {\textrm{ht}}({\alpha}) -1) & \textrm{ for ${\alpha}$ negative.} \\
\end{array}\right.$$
We will write $1$ for the class $\sigma_{\theta}$ and $h$ for the hyperplane class in the Pl\"ucker embedding of $X$. The Poincar\'e duality has a very simple form on roots: the Poincar\'e dual $\sigma_{\alpha}^\vee$ of $\sigma_{\alpha}$ is simply $\sigma_{\alpha}^\vee = \sigma_{-{\alpha}}$ (see \cite[Proposition 2.9]{adjoint}).
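As an illustration of these conventions in type $C_3$ (the case $Y = \IG(2,6)$ used below), the highest short root is $\theta = {\alpha}_1 + 2{\alpha}_2 + {\alpha}_3$, of height ${\textrm{ht}}(\theta) = 4$, and the degree formula gives for instance
$$\deg(\sigma_{{\alpha}_1 + {\alpha}_2}) = 2(4 - 2) = 4 \qquad \textrm{and} \qquad \deg(\sigma_{-\theta}) = 2(4 - (-4) - 1) = 14 = 2 \dim \IG(2,6),$$
so that $\sigma_{-\theta}$ is the point class, as used repeatedly below.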
\subsubsection{Small quantum cohomology and affine short roots}
The above parametrisation of Schubert classes by short roots can be
extended to quantum monomials in $\QHl = \QH(X)[q,q^{-1}]$. A quantum monomial is an element $q^d \sigma_{\alpha}$ where $d \in {\mathbb Z}$ and ${\alpha} \in R_s$. We write $\QM(X)$ for the set of all quantum monomials and we write $\QHlh{k}$ for the degree $k$ part of $\QHl$.
Let ${{\widehat{R}}}$ be the extended affine root system of $R$ and let $\delta$
be the minimal positive imaginary root. The extended root system has basis
$({\alpha}_0, \cdots , {\alpha}_n)$ and in this basis we have $\delta = \Theta +
{\alpha}_0$ where $\Theta$ is the highest root of $R$. A short root of ${{\widehat{R}}}$ is a root
of the form ${\alpha} + d \delta$ for ${\alpha} \in R_s$ and $d \in {\mathbb Z}$. We write
${{\widehat{R}}}_s$ for the set of short roots in ${{\widehat{R}}}$. There is a bijection
$$\eta : {{\widehat{R}}}_s \to \QM(X)$$
defined by $\eta({\alpha} - d \delta) = q^d \sigma_{\alpha}$. Note that we can extend the height function to ${{\widehat{R}}}$ and that we have $\deg(q) = 2({\textrm{ht}}(\delta) - 1)$.
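As a sanity check, consider type $C_n$ (the case $Y = \IG(2,2n)$ below): the highest root is $\Theta = 2{\alpha}_1 + 2{\alpha}_2 + \cdots + 2{\alpha}_{n-1} + {\alpha}_n$ of height $2n-1$, so ${\textrm{ht}}(\delta) = 2n$ and
$$\deg(q) = 2({\textrm{ht}}(\delta) - 1) = 4n - 2 = 2 c_1(\IG(2,2n)),$$
in agreement with the fact that multiplication with $q$ shifts degrees by $2c_1(Y)$; for $n = 3$ this gives $\deg(q) = 10 = 2 c_1(Y)$, as in the example of $\IG(2,6)$ above.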
\subsubsection{Multiplication with the hyperplane}
We have the following very simple description of the small quantum product $\star_0$ with $h$ (see \cite[Theorem 3]{adjoint}):
$$h \star_0 \sigma_{\alpha} = \sum_{i \in [0,n],\ \scal{{\alpha}_i^\vee,{\alpha}} > 0}
\scal{{\alpha}_i^\vee,{\alpha}} \eta(s_{{\alpha}_i}({\alpha})).$$
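To illustrate this formula, take type $C_3$ and ${\alpha} = -\theta$, so that $\sigma_{-\theta}$ is the point class of $\IG(2,6)$: the only simple affine root with $\scal{{\alpha}_i^\vee,-\theta} > 0$ is ${\alpha}_0$, with pairing $1$, and $s_{{\alpha}_0}(-\theta) = \Theta - \theta - \delta = {\alpha}_1 - \delta$, so that
$$h \star_0 \sigma_{-\theta} = \eta({\alpha}_1 - \delta) = q \sigma_{{\alpha}_1}.$$
The degrees match: $2 + 14 = \deg(q) + \deg(\sigma_{{\alpha}_1}) = 10 + 6$.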
\subsection{Gra{\ss}mann variety of lines}
\label{sec-ig2n}
Let $X = \Gr(2,2n)$ and let $Y$ be a linear section of $X$ of
codimension $1$.
Note that $Y = \IG(2,2n)$ is an isotropic Gra{\ss}mann
variety. The small quantum cohomology of this variety is described in \cite{adjoint}. We first prove the following result which improves Proposition \ref{prop-loc-rad}.
\begin{lemma}
Let $Y$ be a linear section of $X = \Gr(2,2n)$ of codimension $1$.
1. There exists a unique element $K_{4n+2}$ modulo scalar in $\QH^{4n+2}(Y) \cap
\Ker E_h^Y$.
2. The element $K_{4n+2}^k$ is divisible by $q^k$.
3. We have $R(Y) = \scal{K_{4n+2},\cdots,q^{-(n-2)}K_{4n+2}^{n-2}}$ and $K_{4n+2}^{n-1} = 0$.
\end{lemma}
\begin{proof}
1. Note that all odd degree cohomology groups of $X$ and $Y$
vanish. Recall also that we have the inclusion $R(Y) \subset \Ker
vanish. Recall also that we have the inclusion $R(Y) \subset \Ker
E_h^Y$. Using the description of Schubert classes with partitions
having 2 parts, we
have
$$\dim \QH^{2a}(X) = \left\{ \begin{array}{ll}
n & \textrm{for $a$ even} \\
n - 1 & \textrm{for $a$ odd.} \\
\end{array}
\right.$$
We deduce results on the dimension of the graded
parts in $\QH(Y)$. Recall that multiplication with $q$ induces an
isomorphism $\QH^{2a}(Y) \to \QH^{2a + 2 c_1(Y)}(Y)$ so we only need
to describe these dimensions for $0 \leq a < c_1(Y) = 2n - 1$.
$$\dim \QH^{2a}(Y) = \left\{ \begin{array}{ll}
n & \textrm{for $a \leq 2n - 4$ even} \\
n - 1 & \textrm{for $a = 2n - 2$} \\
n - 1 & \textrm{for $a \leq 2n -3$ odd.} \\
\end{array}
\right.$$
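For instance for $n = 3$, i.e. $Y = \IG(2,6)$, these formulas give
$$\dim \QH^{0}(Y) = 3, \quad \dim \QH^{2}(Y) = 2, \quad \dim \QH^{4}(Y) = 3, \quad \dim \QH^{6}(Y) = 2, \quad \dim \QH^{8}(Y) = 2,$$
recovering the dimensions in the table of the example following Proposition \ref{prop-loc-rad}.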
We work modulo the ideal $(q-1)$. This is enough since we can recover the powers of $q$ by considering the degrees. An easy check gives that $E_h^X : A_{2a}(X)
\to A_{2a + 2}(X)$ is of (maximal) rank $n-1$. Corollary
\ref{coro-rest} implies that the same holds for $E_h^Y$. In particular
$\Ker E_h^Y = \scal{K_{4n - 2} , K_{4n+2}, K_{4n+6}, \cdots, K_{8n - 10}}$ for some $K_{a} \in
A_{a}(Y) \cap \Ker E_h^Y$. Furthermore ${\rm Im} E_h^Y$ is a complement of this space. This implies that $K_{4n-2} = \lambda + v$ with $\lambda \in {\mathbb R} \setminus \{0\}$ and $v \in {\rm Im}
E_h^Y$, and since $\Ker E_h^Y \cap {\rm Im} E_h^Y = 0$, we have $K_{4n-2}^N =
\lambda^{N-1}K_{4n-2} \neq 0$ and $K_{4n-2}$ is not nilpotent. We claim that
modulo scalar we have $K_{4n+2}^i = K_{4n-2+4i}$ for all $i \in [1,n-2]$.
Let $\sigma_{(1,1)} \in H^{4}(X,{\mathbb Z})$ be the Schubert class defined by the
partition $(1,1)$ (this Schubert class is also the top Chern class of the
tautological subbundle of $X$). It is easy to check that
$\sigma_{(1,1)}^{n-2} = \sigma_{(n-2,n-2)}$, where the latter is the Schubert class associated to
the partition $(n-2,n-2)$. Indeed, for degree reasons this product is
a classical cohomological product and the result follows from the
Littlewood-Richardson rule. We get $h \cup \sigma_{(n-2,n-2)} \neq
0$. This implies that for $j:Y \to X$ the inclusion, we have
$j^*\sigma_{(1,1)}^{n-2} = j^*\sigma_{(n-2,n-2)}$ and that $j^*\sigma_{(1,1)}^{n-2}
\cup h \neq 0$. In particular $j^*\sigma_{(1,1)}^{n-2} \not \in \Ker
E_h^Y$, and since $\Ker E_h^Y$ is an ideal, $j^*\sigma_{(1,1)} \not \in \Ker E_h^Y$. We may write
$j^*\sigma_{(1,1)}^{n-2} = \mu K_{8n - 10} + w$ and $j^*\sigma_{(1,1)} = \lambda K_{4n+2} + v$ for some $\lambda,\mu \in {\mathbb R} \setminus
\{0 \}$ and $v,w \in {\rm Im} E_h^Y$. Since products of $\Ker E_h^Y$ with ${\rm Im} E_h^Y$ vanish, this implies $\mu K_{8n - 10} + w =
j^*\sigma_{(1,1)}^{n-2} = \lambda^{n-2} K_{4n+2}^{n-2} + v^{n-2}$ and thus $\mu
K_{8n-10} = \lambda^{n-2} K_{4n+2}^{n-2}$, proving the claim. Now $K_{4n+2}^{n-1}
\in A_{4n-4}(Y) \subset {\rm Im} E_h^Y$ (recall that we have a ${\mathbb Z}/2c_1(Y){\mathbb Z} = {\mathbb Z}/(4n-2){\mathbb Z}$-grading, and $(n-1)(4n+2) = (n-1)(4n-2) + 4(n-1)$, so this power sits in degree $4n-4$), thus $K_{4n+2}^{n-1} = 0$, proving the
result.
\end{proof}
For technical reasons we have to distinguish the cases $n = 3$ and $n \geq 4$ in the proof of the next result.
\begin{thm}
\label{thm-bqh1}
Let $Y = \IG(2,2n)$. Then $\BQH(Y)$ is semisimple.
\end{thm}
\begin{proof}
We will use the notation of \cite{bou} to index simple roots of the root system of type $\textrm{C}_n$. For classes $\sigma,\sigma' \in \HH(Y)$ we will write
$$\sigma \star_\tau \sigma' = \sigma {\!\ \star_{0}\!\ } \sigma' + t (\sigma \star_1 \sigma') + O(t^2).$$
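For the degree bookkeeping used below, note that the dimension axiom for the $4$-point invariants defining $\star_1$ gives
$$\deg(\sigma \star_1 \sigma') = \deg \sigma + \deg \sigma' + \deg \tau - 2,$$
so that, giving the deformation parameter $t$ the degree $2 - \deg \tau$, the product $\star_\tau$ is homogeneous.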
\textbf{We start with the case $n = 3$}. Let $\tau = \{{\rm pt}\} = \sigma_{-\theta}$ where $\theta$ is the highest short root. We prove that $\BQH_\tau(Y)$ (with notation as in Subsection \ref{sub-def}) is semisimple.
Let $C \in \BQH_\tau(Y)$ be a nilpotent element of order $2$. If $C \neq 0$, up to dividing with the parameter $t$ (see Subsection \ref{sub-def}), we may assume that $C$ is of the form
$$C = C_0 + t C_1 + O(t^2) \textrm{ with $C_0 \neq 0$}.$$
We have $0 = C \star_\tau C = C_0 {\!\ \star_{0}\!\ } C_0 + O(t)$ thus $C_0 \in R(Y)$. Up to rescaling, we may assume that $C_0 = q \sigma_{{\alpha}_2 + {\alpha}_3} - q \sigma_{{\alpha}_1 + {\alpha}_2} + \sigma_{-\theta} \in \QH^{14}(Y)$. On the other hand, the product $h \star_\tau C$ has to be nilpotent. But we have
$$h \star_\tau C = h {\!\ \star_{0}\!\ } C_0 + t ( h {\!\ \star_{0}\!\ } C_1 + h \star_1 C_0) + O(t^2) = t ( h {\!\ \star_{0}\!\ } C_1 + h \star_1 C_0) + O(t^2).$$
In particular $h {\!\ \star_{0}\!\ } C_1 + h \star_1 C_0$ is nilpotent. Since $h \star_1 C_0$ is of degree 28 and since there is no nilpotent element of degree 28 in $\QH(Y)$, we must have $h {\!\ \star_{0}\!\ } C_1 + h \star_1 C_0 = 0$. This is possible since $E_h^Y$ is surjective on degree 28. The element $C_1$ is thus uniquely determined by $h {\!\ \star_{0}\!\ } C_1 = - h \star_1 C_0$.
Note that $C_1 \in \QH^{26}(Y) \subset {\rm Im} E_h^Y$ and $C_0 \in \Ker E_h^Y$ thus $C_0 {\!\ \star_{0}\!\ } C_1 = 0$.
We now remark the following equality $C_0 = 3 \sigma_{-\theta} + (q \sigma_{{\alpha}_2 + {\alpha}_3} - q \sigma_{{\alpha}_1 + {\alpha}_2} -2 \sigma_{-\theta})$ where the second term is in the image of $E_h^Y$. Thus there exists $D_0 \in \QH^{12}(Y)$ with $h {\!\ \star_{0}\!\ } D_0 = q \sigma_{{\alpha}_2 + {\alpha}_3} - q \sigma_{{\alpha}_1 + {\alpha}_2} -2 \sigma_{-\theta}$. An easy computation gives
$$D_0 = q \sigma_{{\alpha}_1 + {\alpha}_2 + {\alpha}_3} - 2 \sigma_{-{\alpha}_1-{\alpha}_2-{\alpha}_3}.$$
We have $h \star_\tau D_0 = (q \sigma_{{\alpha}_2 + {\alpha}_3} - q \sigma_{{\alpha}_1 + {\alpha}_2} -2 \sigma_{-\theta}) + t (h \star_1 D_0) + O(t^2)$ with $h \star_1 D_0 \in \QH^{26}(Y)$. Since $E_h^Y$ is surjective on degree 26, there exists $D_1 \in \QH^{24}(Y)$ with $h {\!\ \star_{0}\!\ } D_1 = - h \star_1 D_0$. Setting $D = D_0 + t D_1$ we get $h \star_\tau D = q \sigma_{{\alpha}_2 + {\alpha}_3} - q \sigma_{{\alpha}_1 + {\alpha}_2} -2 \sigma_{-\theta} + O(t^2)$. This altogether gives $C_0 = 3 \{{\rm pt}\} + h \star_\tau D + O(t^2)$ and
$$3 \{{\rm pt}\} = C - h \star_\tau D - t C_1 + O(t^2).$$
Computing the square gives (recall that $C \star_\tau C = 0$, $h \star_\tau C = O(t^2)$ and $C_0 {\!\ \star_{0}\!\ } C_1 = 0$)
$$\begin{array}{ll}
9 \{{\rm pt}\} \star_\tau \{{\rm pt}\}
& = h \star_\tau D \star_\tau h \star_\tau D - 2 t C \star_\tau C_1 + 2 t h \star_\tau D \star_\tau C_1 + O(t^2) \\
& = h \star_\tau D \star_\tau h \star_\tau D - 2 t C_0 {\!\ \star_{0}\!\ } C_1 + 2 t h {\!\ \star_{0}\!\ } D {\!\ \star_{0}\!\ } C_1 + O(t^2) \\
& = h \star_\tau D \star_\tau h \star_\tau D + 2 t h {\!\ \star_{0}\!\ } D {\!\ \star_{0}\!\ } C_1 + O(t^2) \\
\end{array}$$
Note that we have
$$\begin{array}{rl}
h \star_\tau D \star_\tau h \star_\tau D = & h {\!\ \star_{0}\!\ } h {\!\ \star_{0}\!\ } D_0 {\!\ \star_{0}\!\ } D_0 \\
& + 2 t h {\!\ \star_{0}\!\ } h {\!\ \star_{0}\!\ } D_0 {\!\ \star_{0}\!\ } D_1 + t h {\!\ \star_{0}\!\ } h {\!\ \star_{0}\!\ } (D_0 \star_1 D_0) \\
& + t h {\!\ \star_{0}\!\ } ( h \star_1 (D_0 {\!\ \star_{0}\!\ } D_0)) + t h \star_1 (h {\!\ \star_{0}\!\ } D_0 {\!\ \star_{0}\!\ } D_0) + O(t^2)\\
= & h {\!\ \star_{0}\!\ } h {\!\ \star_{0}\!\ } D_0 {\!\ \star_{0}\!\ } D_0 + t h \star_1 (h {\!\ \star_{0}\!\ } D_0 {\!\ \star_{0}\!\ } D_0) + t \cdot {\rm Im} E_h^Y + O(t^2).\\
\end{array}$$
Finally we obtain $9 \{{\rm pt}\} \star_\tau \{{\rm pt}\} = 9 \{{\rm pt}\} {\!\ \star_{0}\!\ } \{{\rm pt}\} + t h \star_1 (h {\!\ \star_{0}\!\ } D_0 {\!\ \star_{0}\!\ } D_0) + t \cdot {\rm Im} E_h^Y + O(t^2)$. Since $D_0$ is explicitly given and since the endomorphism $h \star_1 -$ is understood using $\{{\rm pt}\} {\!\ \star_{0}\!\ } -$ (see Subsection \ref{sub-def}) we get
$$9 \{{\rm pt}\} \star_1 \{{\rm pt}\} = 12 q^4 + 3 q^3 \sigma_{-{\alpha}_1-{\alpha}_2} - 3 q^3 \sigma_{-{\alpha}_2-{\alpha}_3} + {\rm Im} E_h^Y = 6 q^4 + {\rm Im} E_h^Y.$$
On the other hand we compute directly the product $\{{\rm pt}\} \star_\tau \{{\rm pt}\}$. We have
$$\{{\rm pt}\} \star_1 \{{\rm pt}\} = \sum_{d \geq 0} \sum_{{\alpha} \in R_s} q^d I_d(\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\},\sigma_{-{\alpha}}) \sigma_{{\alpha}}.$$
The invariant $I_d(\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\},\sigma_{-{\alpha}})$ vanishes unless $3 \deg \{{\rm pt}\} + \deg \sigma_{-{\alpha}} = 2 \dim Y + 2d c_1(Y) + 2$, thus $\deg \sigma_{-{\alpha}} = 10 d - 26$. Since $\deg \sigma_{-{\alpha}}$ is even and contained in $[0,14]$, the invariant $I_d(\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\},\sigma_{-{\alpha}})$ vanishes unless $d = 3$ and $\deg \sigma_{-{\alpha}} = 4$ or $d = 4$ and $\deg \sigma_{-{\alpha}} = 14$. But one easily checks that there is no degree 3 curve in $Y = \IG(2,6)$ passing through 3 points in general position. Thus $I_3(\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\},-) = 0$. The only non vanishing invariant is thus $I_4(\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\})$ and we have
$$9 \{{\rm pt}\} \star_1 \{{\rm pt}\} = 9 I_4(\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\}) q^4 = 6 q^4 + {\rm Im} E_h^Y.$$
Since $q^4 \not \in {\rm Im} E_h^Y$ this implies $9 I_4(\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\}) = 6$ which is impossible since $I_4(\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\}) \in {\mathbb Z}_{\geq0}$.
\medskip
\textbf{Assume $n \geq 4$.}
Let $\tau = \sigma_{\theta - {\alpha}_1 - {\alpha}_2} \in \QH^{4}(Y)$ where $\theta$ is the highest short root. We prove that $\BQH_\tau(Y)$ (with notation as in Subsection \ref{sub-def}) is semisimple.
Let $C \in \BQH_\tau(Y)$ be a nilpotent element of order $2$. If $C \neq 0$, up to dividing with the parameter $t$ (see Subsection \ref{sub-def}), we may assume that $C$ is of the form
$$C = C_0 + t C_1 + O(t^2) \textrm{ with $C_0 \neq 0$}.$$
We have $0 = C \star_\tau C = C_0 {\!\ \star_{0}\!\ } C_0 + O(t)$ thus $C_0 \in R(Y)$. Up to rescaling and multiplying with the generator of $R(Y)$, we may assume that $C_0$ is the unique element in $R(Y)$ of degree $8n-10$ such that the coefficient of $\sigma_{-\theta}$ in $C_0$ is 1. The product $h \star_\tau C$ has to be nilpotent. But we have
$$h \star_\tau C = h {\!\ \star_{0}\!\ } C_0 + t ( h {\!\ \star_{0}\!\ } C_1 + h \star_1 C_0) + O(t^2) = t ( h {\!\ \star_{0}\!\ } C_1 + h \star_1 C_0) + O(t^2).$$
In particular $h {\!\ \star_{0}\!\ } C_1 + h \star_1 C_0$ is nilpotent. Since $h \star_1 C_0$ is of degree $8n - 6$ and since there is no nilpotent element of degree $8n - 6$ in $\QH(Y)$, we must have $h {\!\ \star_{0}\!\ } C_1 + h \star_1 C_0 = 0$. This is possible since $E_h^Y$ is surjective on degree $8n - 6$. The element $C_1$ is thus uniquely determined by $h {\!\ \star_{0}\!\ } C_1 = - h \star_1 C_0$.
Note that $C_1 \in \QH^{8n - 8}(Y) \subset {\rm Im} E_h^Y$ and $C_0 \in \Ker E_h^Y$ thus $C_0 {\!\ \star_{0}\!\ } C_1 = 0$ (one can check that $h \star_1 C_0 \neq 0$ but we do not need this since if $h \star_1 C_0 = 0$, then $h {\!\ \star_{0}\!\ } C_1$ has to be nilpotent which implies $h {\!\ \star_{0}\!\ } C_1 = 0$, thus $C_1 \in R(Y)$ and $C_0 {\!\ \star_{0}\!\ } C_1 = 0$).
We now remark the following equality $C_0 = n \sigma_{-\theta} + v$ with $v \in {\rm Im} E_h^Y$. Thus there exists $D_0 \in \QH^{8n-12}(Y)$ with $h {\!\ \star_{0}\!\ } D_0 = v$.
We have $h \star_\tau D_0 = v + t (h \star_1 D_0) + O(t^2)$ with $h \star_1 D_0 \in \QH^{8n - 8}(Y)$. Since $E_h^Y$ is surjective on degree $8n - 8$, there exists $D_1 \in \QH^{8n - 10}(Y)$ with $h {\!\ \star_{0}\!\ } D_1 = - h \star_1 D_0$. Setting $D = D_0 + t D_1$ we get $h \star_\tau D = v + O(t^2)$. This altogether gives $C_0 = n \ \{{\rm pt}\} + h \star_\tau D + O(t^2)$ and
$$n \ \{{\rm pt}\} = C - h \star_\tau D - t C_1 + O(t^2).$$
Computing the square gives
$$n^2 \ \{{\rm pt}\} \star_\tau \{{\rm pt}\} = n^2 \ \{{\rm pt}\} {\!\ \star_{0}\!\ } \{{\rm pt}\} + t h \star_1 (h {\!\ \star_{0}\!\ } D_0 {\!\ \star_{0}\!\ } D_0) + t \cdot {\rm Im} E_h^Y + O(t^2).$$
Now $h {\!\ \star_{0}\!\ } D_0 {\!\ \star_{0}\!\ } D_0 = v {\!\ \star_{0}\!\ } D_0 = (C_0 - n \sigma_{-\theta}) {\!\ \star_{0}\!\ } D_0$ and since $D_0 \in \QH^{8n-12}(Y) \subset {\rm Im} E_h^Y$ we have $C_0 {\!\ \star_{0}\!\ } D_0 = 0$, thus $h {\!\ \star_{0}\!\ } D_0 {\!\ \star_{0}\!\ } D_0 = - n \ \{{\rm pt}\} {\!\ \star_{0}\!\ } D_0$. Let $A = \{{\rm pt}\} {\!\ \star_{0}\!\ } D_0$. To compute $A$ we first compute $h {\!\ \star_{0}\!\ } A = \{{\rm pt}\} {\!\ \star_{0}\!\ } h {\!\ \star_{0}\!\ } D_0 = \{{\rm pt}\} {\!\ \star_{0}\!\ } (C_0 - n \ \{{\rm pt}\} ) = - n \ \{{\rm pt}\} {\!\ \star_{0}\!\ } \{{\rm pt}\}$ (since $\{{\rm pt}\} \in \scal{C_0,{\rm Im} E_h^Y}$ and $C_0$ is orthogonal to both terms we have $C_0 {\!\ \star_{0}\!\ } \{{\rm pt}\} = 0$). The product $\{{\rm pt}\} {\!\ \star_{0}\!\ } \{{\rm pt}\} = q^2 \sigma$ with $\sigma = \sigma_{{\alpha}_1 + {\alpha}_2 + {\alpha}_3 - \theta}$ is easy to compute using for example the kernel and span technique presented in \cite{BKT}. In particular we get $h {\!\ \star_{0}\!\ } A = - n \ q^2 \sigma$. Thus for $\sigma' = \sigma_{{\alpha}_1 + 2 {\alpha}_2 + {\alpha}_3 - \theta}$ and $K$ the unique element in $\Ker E_h^Y$ of degree $16 n - 22 = \deg(A)$ with coefficient 1 on $q^2 \sigma'$, since $h {\!\ \star_{0}\!\ } \sigma' = \sigma$, we have
$$A = \lambda K - n \ q^2 \sigma'$$
for some scalar $\lambda$. Let $\ell$ be the class of a line in $Y$. Note that the coefficient of $\ell$ in $D_0$ is $1-n$. To compute $\lambda$, first note that $\{{\rm pt}\} {\!\ \star_{0}\!\ } \textrm{PD}(\sigma') = q^2 h$ (again obtained using the kernel and span technique). In particular, the only non vanishing invariant of the form $I_d(\{{\rm pt}\},\textrm{PD}(\sigma'),-)$ is the invariant $I_2(\{{\rm pt}\},\textrm{PD}(\sigma'),\ell) = 1$. This implies that the coefficient of $q^2 \sigma'$ in $A = \{{\rm pt}\} {\!\ \star_{0}\!\ } D_0$ is the coefficient of $\ell$ in $D_0$ and has value $1-n$. We deduce that $\lambda = 1$ and
$$h {\!\ \star_{0}\!\ } D_0 {\!\ \star_{0}\!\ } D_0 = n^2 \ q^2 \sigma' - n \ K.$$
We now compute $h \star_1 (h {\!\ \star_{0}\!\ } D_0 {\!\ \star_{0}\!\ } D_0)$. We actually only need to compute modulo $n^2$, so that we only need to consider $h \star_1 K$. An easy degree argument gives that there are only 2 Schubert classes appearing in $K$ with non vanishing value under $h \star_1 - $: the classes $q^2 \sigma_{{\alpha}_1 + {\alpha}_2 + {\alpha}_3 + {\alpha}_4 - \theta}$ and $q^2 \sigma_{{\alpha}_2 + {\alpha}_3 + {\alpha}_4 + {\alpha}_5 - \theta}$ (only the first one for $n = 4$). One then easily checks using the kernel and span technique the following formula
$$h \star_1 K = q^3 \sigma_{{\alpha}_2 + {\alpha}_3 + {\alpha}_4 + {\alpha}_5} \textrm{ (for $n = 4$ we have $h \star_1 K = q^3 \sigma_{{\alpha}_1 + {\alpha}_2 + {\alpha}_3 + {\alpha}_4}$).}$$
Write $q^3 \gamma = h \star_1 K$. The class $\gamma$ is a Schubert class and is not contained in ${\rm Im} E_h^Y$. Altogether, working in the Schubert basis modulo $n^2$ and modulo ${\rm Im} E_h^Y$, we get:
$$- n q^3 \gamma \equiv n^2 \{{\rm pt}\} \star_1 \{{\rm pt}\} \equiv 0 \textrm{ (mod. $n^2$ and ${\rm Im} E_h^Y$)}.$$
A contradiction.
\end{proof}
\subsection{Cayley plane}
In this subsection we consider the variety $Y = F_4/P_4$ obtained as hyperplane section of the Cayley plane $\mathbb{O}\mathbb{P}^2 = E_6/P_6$. Since the arguments are very similar to the case $Y = \IG(2,2n)$ and since the computer program \cite{programme} gives a complete description of the small quantum cohomology, we shall only state the results and give a sketch of proof.
\begin{lemma}
Let $Y$ be a linear section of $X = E_6/P_6$ of codimension $1$. We have $\Ker E_h^Y = R(Y) = \scal{K_8}$ for some element $K_8 \in \QH^8(Y)$.
\end{lemma}
\begin{proof}
Follows from the description of $E_h^Y$ using short roots and the fact that the element $K_8 \in \Ker E_h^Y \cap \QH^8(Y)$ satisfies $K_8^2 \in A_5(Y) \subset {\rm Im} E_h^Y$ and $K_8^2 \in \Ker E_h^Y$, thus $K_8^2 = 0$ since ${\rm Im} E_h^Y \cap \Ker E_h^Y = 0$.
\end{proof}
\begin{thm}
\label{thm-bqh2}
The big quantum cohomology $\BQH(Y)$ is semisimple.
\end{thm}
\begin{proof}
For classes $\sigma,\sigma' \in \HH(Y)$ we will write
$$\sigma \star_\tau \sigma' = \sigma {\!\ \star_{0}\!\ } \sigma' + t (\sigma \star_1 \sigma') + O(t^2).$$
Let $({\alpha}_1,{\alpha}_2,{\alpha}_3,{\alpha}_4)$ be a system of simple roots of the root system of $F_4$ with ${\alpha}_3$ and ${\alpha}_4$ short. We have $\theta = {\alpha}_1 + 2 {\alpha}_2 + 3 {\alpha}_3 + 2 {\alpha}_4$ (recall that $\theta$ is the highest short root).
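As a consistency check on these conventions, ${\textrm{ht}}(\theta) = 8$ and the degree formula gives
$$\deg(\sigma_{-\theta}) = 2({\textrm{ht}}(\theta) - {\textrm{ht}}(-\theta) - 1) = 2(8 + 8 - 1) = 30 = 2 \dim Y,$$
since $\dim F_4/P_4 = 15$; thus $\sigma_{-\theta}$ is again the point class, consistent with the computation of $C_0 \in \QH^{30}(Y)$ below.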
Let $\tau = \{{\rm pt}\}$; we prove that $\BQH_\tau(Y)$ is semisimple. Let $C \in \BQH_\tau(Y)$ be nilpotent with $C \neq 0$ and $C \star_\tau C = 0$. We may assume that
$$C = C_0 + t C_1 + O(t^2) \textrm{ with $C_0 \neq 0$}.$$
We have $0 = C \star_\tau C = C_0 {\!\ \star_{0}\!\ } C_0 + O(t)$ thus $C_0 \in R(Y)$ and modulo rescaling we may assume $C_0 = \sigma_{-\theta} - q \sigma_{{\alpha}} + q \sigma_{{\beta}} \in \QH^{30}(Y)$ with ${\alpha} = {\alpha}_1+{\alpha}_2+{\alpha}_3+{\alpha}_4$ and ${\beta} = {\alpha}_2 + 2 {\alpha}_3 + {\alpha}_4$. We have:
$$h \star_\tau C = h {\!\ \star_{0}\!\ } C_0 + t (h \star_1 C_0 + h {\!\ \star_{0}\!\ } C_1) + O(t^2) = t (h \star_1 C_0 + h {\!\ \star_{0}\!\ } C_1) + O(t^2).$$
In particular since $h \star_\tau C$ is nilpotent, we get that $h \star_1 C_0 + h {\!\ \star_{0}\!\ } C_1$ is nilpotent. Since $h \star_1 C_0 \in \QH^{60}(Y) = q^2 \QH^{16}(Y)$ and since there is no nilpotent element in degree 60 we get $h \star_1 C_0 + h {\!\ \star_{0}\!\ } C_1 = 0$. This is possible since $E_h^Y$ is surjective on degree 60. The element $C_1$ is thus uniquely determined by $h {\!\ \star_{0}\!\ } C_1 = - h \star_1 C_0$.
Note that $C_1 \in \QH^{58}(Y) \subset {\rm Im} E_h^Y$ and $C_0 \in \Ker E_h^Y$ thus $C_0 {\!\ \star_{0}\!\ } C_1 = 0$.
We now remark the following equality $C_0 = 3 \sigma_{-\theta} + (q \sigma_{{\beta}} - q \sigma_{{\alpha}} -2 \sigma_{-\theta})$ where the second term is in the image of $E_h^Y$. Thus there exists $D_0 \in \QH^{28}(Y)$ with $h {\!\ \star_{0}\!\ } D_0 = q \sigma_{{\beta}} - q \sigma_{{\alpha}} -2 \sigma_{-\theta}$. An easy computation gives
$$D_0 = q \sigma_{\gamma} - 2 \sigma_{-\delta}$$
with $\gamma = {\alpha}_1 + {\alpha}_2 + 2 {\alpha}_3 + {\alpha}_4$ and $\delta = {\alpha}_1 + 2 {\alpha}_2 + 3 {\alpha}_3 + {\alpha}_4$. We have $h \star_\tau D_0 = (q \sigma_{{\beta}} - q \sigma_{{\alpha}} -2 \sigma_{-\theta}) + t (h \star_1 D_0) + O(t^2)$ with $h \star_1 D_0 \in \QH^{58}(Y)$. Since $E_h^Y$ is surjective on degree 58, there exists $D_1 \in \QH^{56}(Y)$ with $h {\!\ \star_{0}\!\ } D_1 = - h \star_1 D_0$. Setting $D = D_0 + t D_1$ we get $h \star_\tau D = q \sigma_{{\beta}} - q \sigma_{{\alpha}} -2 \sigma_{-\theta} + O(t^2)$. This altogether gives $C_0 = 3 \{{\rm pt}\} + h \star_\tau D + O(t^2)$ and
$$3 \{{\rm pt}\} = C - h \star_\tau D - t C_1 + O(t^2).$$
Computing the square gives as in the proof of Theorem \ref{thm-bqh1}
$$9 \{{\rm pt}\} \star_\tau \{{\rm pt}\} = \{{\rm pt}\} {\!\ \star_{0}\!\ } \{{\rm pt}\} + t h \star_1 (h {\!\ \star_{0}\!\ } D_0 {\!\ \star_{0}\!\ } D_0) + t \cdot {\rm Im} E_h^Y + O(t^2).$$
Since $D_0$ is explicitly given and since the endomorphism $h \star_1 -$ is understood using $\{{\rm pt}\} {\!\ \star_{0}\!\ } -$ (see Subsection \ref{sub-def}) we get
$$9 \{{\rm pt}\} \star_1 \{{\rm pt}\} = 12 q^4 + 12 q^3 \sigma_{-\alpha} + 6 q^3 \sigma_{-\beta} + {\rm Im} E_h^Y = 6 q^4 + {\rm Im} E_h^Y.$$
On the other hand we compute directly the product $\{{\rm pt}\} \star_\tau \{{\rm pt}\}$. We have
$$\{{\rm pt}\} \star_1 \{{\rm pt}\} = \sum_{d \geq 0} \sum_{\zeta \in R_s} q^d I_d(\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\},\sigma_{-\zeta}) \sigma_{\zeta}.$$
The invariant $I_d(\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\},\sigma_{-\zeta})$ vanishes unless $3 \deg \{{\rm pt}\} + \deg \sigma_{-\zeta} = 2 \dim Y + 2d c_1(Y) + 2$, thus $\deg \sigma_{-\zeta} = 22 d - 58$. Since $\deg \sigma_{-\zeta}$ is even and contained in $[0,30]$, the invariant $I_d(\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\},\sigma_{-\zeta})$ vanishes unless $d = 3$ and $\deg \sigma_{-\zeta} = 8$ or $d = 4$ and $\deg \sigma_{-\zeta} = 30$. But it was proved in \cite[Proposition 2.13]{rational} that there is no degree 3 curve in $Y$ passing through 3 points in general position. Thus $I_3(\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\},-) = 0$. The only non-vanishing invariant is thus $I_4(\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\})$ and we have
$$9 \{{\rm pt}\} \star_1 \{{\rm pt}\} = 9 I_4(\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\}) q^4 = 6 q^4 + {\rm Im} E_h^Y.$$
Since $q^4 \not \in {\rm Im} E_h^Y$ this implies $9 I_4(\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\}) = 6$ which is impossible since $I_4(\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\},\{{\rm pt}\}) \in {\mathbb Z}_{\geq0}$.
\end{proof}
\section{Introduction}
\label{sec:introduction}
\PARstart{T}{o} compare two fuzzy sets (FSs) one may consider their similarity or distance. To assess their similarity, we measure the similarity of the membership values for each element in each set. The result is given within the interval $[0,1]$, where 0 indicates that there are no elements shared between both FSs and 1 indicates that the sets are identical. Alternatively, to assess the distance between FSs, given as a value in $\mathbb{R}$, we measure the distance between the elements which belong to each set; typically the distance between elements is also weighted by their membership values.
Measures of similarity and distance have been applied to a wide variety of applications. For example, similarity has often been used to compare different word models \cite{WuComparative, wagnerwordmodels2013}, or to find similar patterns in classification and clustering \cite{Wang20052063}.
Distance Measures (DMs), though less commonly researched, have been used to compare FSs, for example, in the ranking of fuzzy numbers \cite{Cheng1998307}.
Measures of similarity and distance evaluate two fundamentally different aspects of FSs, and it is the nature of \emph{what} each measure actually measures that determines its applicability to a given problem setting. For example, there are cases in which a similarity measure (SM) may not be useful, such as when the FSs are disjoint. In this case, the result of the SM is always zero. This does not tell us how far apart the FSs are placed in the universe of discourse (UoD); they may be far apart or right next to each other. Where this is of concern, a DM may be beneficial. However, a DM is likewise not always useful, for example when one FS is a subset of another. In this case the results become ambiguous, as DMs are not ideal for detecting overlap between FSs.
Current research within the literature has generally made a choice between using either measures of similarity or distance, however in many cases, it is not trivial to make this choice, in particular when FSs are dynamically created from data such as for approaches like \cite{wagnerwordmodels2013} and \cite{coupland2010intervalapproach}. This paper proposes the fusion of both measures into a single measure which can be applied in the comparison of FSs and produces meaningful results regardless of the exact nature of the FSs to be measured. The fusion is achieved by an ordered weighted average (OWA) operator, and is applied to data-driven FSs to demonstrate the benefits of the measure.
Section \ref{sec:background} introduces FSs, SMs, DMs, and OWA operators, followed by an examination of what exactly the measures measure in Section \ref{sec:comparing_measures}. In Section \ref{sec:combining_measures}, a combined measure is presented which utilises the unique properties of both similarity and distance, and demonstrations of the combined measure are shown in Section \ref{sec:demonstrations}. Finally, conclusions are given in Section \ref{sec:conclusions}.
\section{Background}
\label{sec:background}
\subsection{Fuzzy Sets}
\label{sec:fuzzy_sets}
Fuzzy sets have been applied to many applications in which uncertainty is present; some examples of which include data mining \cite{datamining} and Computing with Words \cite{CWW}. Unlike traditional logic, for which the membership of each element to the set is a Boolean value, the elements of a FS have a membership value that lies anywhere in the interval $[0,1]$. A FS $F$ may be represented as a set of ordered pairs as follows \cite{mendel2001uncertain}:
\begin{equation}
F = \{(x, \mu_F(x))\ |\ x \in X\}
\end{equation}
where $\mu_F(x)$ indicates the membership value of the element $x$ in the FS $F$. For a discrete UoD, the FS $F$ may be written as \cite{mendel2001uncertain}
\begin{equation}
F = \sum_x \mu_F(x)\ /\ x
\end{equation}
where $\sum$ denotes the collection of all points $x \in X$ with associated membership value $\mu_F(x)$.
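As an illustration of this representation, a discrete FS over a finite UoD can be held as a simple mapping from elements to membership values. This is a sketch only; the paper does not prescribe any particular data structure.

```python
# Illustrative only: a discrete fuzzy set as a mapping x -> membership in [0, 1].
A = {1: 0.2, 2: 1.0, 3: 0.6, 4: 0.1, 5: 0.0}

def mu(F, x):
    """Membership value of x in the fuzzy set F (0 for absent elements)."""
    return F.get(x, 0.0)

print(mu(A, 2))  # 1.0
print(mu(A, 7))  # 0.0
```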
\subsection{Similarity Measures}
\label{sec:similarity_intro}
SMs are a common tool used within fuzzy logic. A SM $s(A,B) \rightarrow [0,1]$ calculates how similar two FSs are to each other through a comparison of the degrees of membership within each set. Common properties of a SM $s$ for FSs $A$, $B$ and $C$ are as follows:\\
\textbf{Reflexivity:} $s(A, B) = 1 \Longleftrightarrow A = B $ \\
\textbf{Symmetry:} $s(A, B) = s(B, A)$ \\
\textbf{Transitivity:} If $A \subseteq B \subseteq C$, then $s(A, B) \geq s(A, C)$ \\
\textbf{Overlapping:} If $A \cap B \neq \emptyset$, then $s(A, B) > 0$; \\
otherwise, $s(A, B) = 0$
Note that it is not necessary for a SM to have all of these properties as the application for which the measure is used may not depend on all of them. However, it is typical that a SM always follows the property of reflexivity.
Throughout this paper, similarity is measured using the Jaccard SM, which supports all of the four properties listed above \cite{WuComparative}. The Jaccard measure $s$ for FSs $A$ and $B$ is given as:
\begin{equation}
s(A, B) = \frac{\sum^n_{i=1} \min(\mu_A(x_i),\ \mu_B(x_i))}
{\sum^n_{i=1} \max(\mu_A(x_i),\ \mu_B(x_i))}
\label{eq:jaccard}
\end{equation}
where $n$ is the total number of discretisations along the $x$-axis.
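A direct implementation of (\ref{eq:jaccard}) can be sketched as follows, assuming discrete FSs stored as element-to-membership mappings (an illustrative representation, not one fixed by the paper):

```python
def jaccard(A, B):
    """Jaccard similarity of two discrete fuzzy sets given as
    {x: membership} dicts over the same discretised universe:
    sum of element-wise minima divided by sum of element-wise maxima."""
    xs = set(A) | set(B)
    num = sum(min(A.get(x, 0.0), B.get(x, 0.0)) for x in xs)
    den = sum(max(A.get(x, 0.0), B.get(x, 0.0)) for x in xs)
    return num / den if den else 1.0  # two empty sets are identical

A = {1: 1.0, 2: 0.5, 3: 0.0}
B = {1: 0.5, 2: 1.0, 3: 0.0}
print(jaccard(A, B))  # 0.5
print(jaccard(A, A))  # 1.0
```

As expected, identical sets give a similarity of 1, and the more the membership values diverge, the smaller the ratio becomes.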
\subsection{Distance Measures}
\label{sec:distance_intro}
A DM $d(A,B) \rightarrow \mathbb{R}^+$ is used to assess the distance between FSs by calculating the distances between the elements in each set.
A DM $d$ on FSs $A$, $B$ and $C$ holds the following properties: \\
\textbf{Self-identity}: $ d(A, A) = 0 $ \\
\textbf{Separability}: $ d(A, B) \geq 0 $ \\
\textbf{Symmetry:} $ d(A, B) = d(B, A)$\\
\textbf{Transitivity:} If $A \subseteq B \subseteq C$, then $d(A, B) \leq d(A, C)$ \\
\textbf{Triangle inequality}: $ d(A, C) \leq d(A, B) + d(B, C) $
The distance between two FSs is most commonly measured by taking $\alpha$-cuts of FSs and measuring the distance between the $\alpha$-cuts. The $\alpha$-cut of the FS $A$ is a non-FS comprised of all the elements whose membership grade within $A$ is greater than or equal to $\alpha$ \cite{Zadeh1975199}; this is written formally as $A_\alpha = \{x\ |\ \mu_A(x) \geq \alpha \}$.
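The alpha-cut can be sketched for a discrete FS as follows (again assuming an element-to-membership mapping):

```python
def alpha_cut(F, alpha):
    """Crisp alpha-cut of a discrete fuzzy set F ({x: membership}):
    the set of all elements whose membership is at least alpha."""
    return {x for x, m in F.items() if m >= alpha}

A = {1: 0.2, 2: 0.7, 3: 1.0, 4: 0.4}
print(sorted(alpha_cut(A, 0.5)))  # [2, 3]
print(sorted(alpha_cut(A, 0.1)))  # [1, 2, 3, 4]
```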
Chaudhuri and Rosenfeld \cite{Chaudhur19961157} proposed the following metric to measure the distance between two convex, normal FSs $A$ and $B$:
\begin{equation}
d(A, B) = \frac{\sum^m_{i=1} y_{\alpha_i}\ h(A_{\alpha_i}, B_{\alpha_i})}{\sum^m_{i=1} y_{\alpha_i}}
\label{eq:CR_haus}
\end{equation}
where the $y$-axis is discretised into $m$ points ($y_1, y_2, ..., y_m$), $A_{\alpha_i}$ is the non-fuzzy $\alpha$-cut (given as an interval) of the FS $A$ at y-coordinate $y_{\alpha_i}$, and $h$ is the conventional Hausdorff metric for two continuous intervals $\bar{A}$ and $\bar{B}$ as follows \cite{Zwick1987221}:
\begin{equation}
h(\bar{A}, \bar{B}) = \max \{|\bar{A}_l - \bar{B}_l |, |\bar{A}_r - \bar{B}_r |\}
\label{eq:interval_haus}
\end{equation}
where $\bar{A} = [\bar{A}_l, \bar{A}_r]$ and $\bar{B} = [\bar{B}_l, \bar{B}_r]$.
In addition to the Hausdorff distance given above, a directional DM (DDM) is given as follows \cite{mcculloch2013measuring}:
\begin{equation}
h(\bar{A}, \bar{B})=
\begin{cases}
\bar{B}_l - \bar{A}_l, & \text{if $|\bar{B}_l - \bar{A}_l| > |\bar{B}_r - \bar{A}_r| $}.\\
\bar{B}_r - \bar{A}_r, & \text{otherwise}.
\end{cases}
\label{eq:interval_haus_with_sign}
\end{equation}
for which a positive distance is given where $A < B$, and a negative distance is given where $B < A$. The DDM, however, does not hold the property of symmetry and instead follows partial symmetry, defined as $|d(A,B)| = |d(B,A)|$ and $d(A,B) \neq d(B,A)$ where $A \neq B$. Throughout this paper, (\ref{eq:CR_haus}) is used in conjunction with (\ref{eq:interval_haus_with_sign}).
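For convex, normal FSs each alpha-cut is a single interval, so (\ref{eq:CR_haus}) with (\ref{eq:interval_haus_with_sign}) can be sketched directly on precomputed alpha-cut intervals. The triangular FSs below are hypothetical examples used only to exercise the code.

```python
def directional_hausdorff(a, b):
    """Directional Hausdorff distance between intervals a = (a_l, a_r)
    and b = (b_l, b_r): the endpoint difference of larger magnitude,
    positive when b lies to the right of a."""
    d_l, d_r = b[0] - a[0], b[1] - a[1]
    return d_l if abs(d_l) > abs(d_r) else d_r

def alpha_cut_distance(cuts_a, cuts_b, ys):
    """Weighted alpha-cut distance: interval distances at the membership
    levels ys, each weighted by its level, then normalised."""
    num = sum(y * directional_hausdorff(ca, cb)
              for y, ca, cb in zip(ys, cuts_a, cuts_b))
    return num / sum(ys)

# Triangular FSs A = (1, 2, 3) and B = (3, 4, 5): at level y the cuts are
# [1 + y, 3 - y] and [3 + y, 5 - y], so every level contributes distance 2.
ys = [0.25, 0.5, 0.75, 1.0]
cuts_a = [(1 + y, 3 - y) for y in ys]
cuts_b = [(3 + y, 5 - y) for y in ys]
print(alpha_cut_distance(cuts_a, cuts_b, ys))  # 2.0
print(directional_hausdorff((3, 5), (1, 3)))   # -2
```

Swapping the arguments flips the sign, illustrating the partial symmetry of the DDM.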
Having reviewed SMs and DMs, a brief overview of OWA operators is given next, which will be used to aggregate similarity and distance.
\subsection{Ordered Weighted Average}
\label{sec:owas}
OWA operators \cite{yager1988ordered} are used to aggregate sub-components of a problem. An OWA involves assigning objects to an ordered set of weights $w = \{w_1, w_2, \ldots, w_n\}$, for which $w_i \in [0, 1]$ and $\sum^n_{i=1} w_i = 1$. The objects which are to be aggregated are sorted into descending order, and each object is multiplied by the corresponding weight. Thus, for a given list of objects $a_1, a_2, \ldots, a_n$ and weights $w_1, w_2, \ldots, w_n$, the OWA is calculated as follows \cite{yager1988ordered}:
\begin{equation}
F(a_1, a_2, \ldots, a_n) = w_1 b_1 + w_2 b_2 + \cdots + w_n b_n
\label{eq:OWA}
\end{equation}
where $b_i$ is the $i$th largest element in the collection $a_1, a_2, \ldots, a_n$.
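A minimal sketch of (\ref{eq:OWA}):

```python
def owa(values, weights):
    """Ordered weighted average: sort the inputs in descending order and
    take the dot product with the weight vector (which must sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ranked = sorted(values, reverse=True)
    return sum(w * b for w, b in zip(weights, ranked))

# 0.9 receives weight 0.5, 0.5 receives 0.3, and 0.2 receives 0.2:
print(round(owa([0.2, 0.9, 0.5], [0.5, 0.3, 0.2]), 2))  # 0.64
```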
OWAs have been commonly used in the literature to solve a variety of problems. For example, \cite{Canos2008669} uses an OWA in decision making applied to the personnel selection problem. In \cite{Sadiq20104881}, an OWA is used to aggregate different performance indicators to assess the performance of small drinking water utilities, and \cite{Sadiq20104881} uses an OWA to aid in the selection of financial products.
\section{Comparison of Measures on Fuzzy Sets}
\label{sec:comparing_measures}
In this Section, SMs and DMs are compared on a series of real-data driven FSs. This is in order to clarify their respective outputs in an applied context, and to demonstrate the proposition that it can be more beneficial to use a combination of both measures.
As previously discussed, SMs and DMs have unique properties which lead to them measuring fundamentally different concepts. To demonstrate the nature of the measures, and the strengths of using both similarity and distance together to analyse FSs, consider the FSs shown in Fig. \ref{fig:demonstration_T1_sets}. These FSs have been constructed from the Movie Lens data set \cite{movielens}, in which films are rated between 1 (poor) and 5 (great). Histograms were created to represent the distribution of ratings and each histogram was normalised by dividing the membership value at each $x$-coordinate by the peak membership value of the histogram. Linear-interpolation was used to determine membership values between known points.
The SM and DDM introduced in Sections \ref{sec:similarity_intro} and \ref{sec:distance_intro} were applied to each pair of movies, respectively. Their results are shown in Table \ref{tab:type1_results}. The results of the combined measure are also shown in Table \ref{tab:type1_results} for comparison purposes, and will be introduced in the next section. For each pair, the FS $A$ was given as the first parameter for the measure, and the FS $B$ was given as the second parameter.
\begin{table}[h!]
\setlength{\tabcolsep}{4pt}
\caption{Values given by SMs and DMs on the FSs in Fig. \ref{fig:demonstration_T1_sets}}
\begin{center}
\begin{tabular}{ c c c c c c c c }
\toprule
Fig. \ref{fig:demonstration_T1_sets} - part:
& a & b & c & d & e & f \\ \midrule
Similarity (\ref{eq:jaccard}) & 0.050 & 0.067 & 0.170 & 0.242 & 0.0 & 0.892 \\
Distance (\ref{eq:CR_haus}) & -3.628 & 2.936 & 2.723 & -1.999 & 3.258 & 0.169 \\
Comparative (\ref{eq:fused}) & -0.915 & 0.864 & 0.857 & -0.806 & 0.938 & 0.072 \\ \bottomrule
\end{tabular}
\end{center}
\label{tab:type1_results}
\end{table}
The following is a discussion of the results for similarity (\ref{eq:jaccard}) and distance (\ref{eq:CR_haus}) in Table \ref{tab:type1_results} for the FSs in Fig. \ref{fig:demonstration_T1_sets}. For each case, a brief discussion highlights where both measures contribute information that is particularly helpful when considered together.
\textbf{Sets \textit{a} \& \textit{b}} For the sets in Fig. \ref{fig:demo_a} and \ref{fig:demo_b}, the SM indicates that the FSs are almost disjoint, but there is a small degree of similarity between them. However, there is no indication of where this similarity lies and how much the FSs differ. One can, however, see that the sign of the DM may be helpful to indicate the actual region of similarity. In this case, the direction of the DM tells us that the similarity is likely towards the lower end of the UoD of the FS $A$ for Fig. \ref{fig:demo_a} and the higher end of $A$ for Fig. \ref{fig:demo_b}.
\textbf{Sets \textit{c} \& \textit{d}} The SM indicates a small difference in similarity between the sets of \textit{c} and the sets of \textit{d}, but it tells us very little else; in both cases, there is a small amount of overlap but we do not know where. However, the DM reveals that this overlap is to the right of the FS $A$ for \textit{c}, and to the left of $A$ for \textit{d}.
\textbf{Sets \textit{e}} In this case, the SM indicates that there is no similarity between the FSs, i.e. they are disjoint, and the DM indicates that there is a large amount of distance between the FSs.
\textbf{Sets \textit{f}} Both the SM and DM are able to identify when two FSs are identical or, in this case, almost identical. For the results of Fig. \ref{fig:demo_f}, each measure indicates that the membership functions of both FSs are very close to each other.
Given the results above, it is clear that SMs and DMs are each unique functions with distinct properties. This results in the common necessity to choose between both types of measure or indeed to apply both individually. While the application of both measures individually as conducted here can provide some insight, it can be challenging to interpret the two distinct outputs simultaneously for given FSs.
\begin{figure}[t!]
\centering
\subfigure[]
{
\includegraphics[height=4.4cm, width=6cm]{images/movie_lens/a.png}
\label{fig:demo_a}
}
\subfigure[]
{
\includegraphics[height=4.4cm, width=6cm]{images/movie_lens/b.png}
\label{fig:demo_b}
}
\subfigure[]
{
\includegraphics[height=4.4cm, width=6cm]{images/movie_lens/c.png}
\label{fig:demo_c}
}
\end{figure}
\begin{figure}[t!]
\centering
\subfigure[]
{
\includegraphics[height=4.4cm, width=6cm]{images/movie_lens/d.png}
\label{fig:demo_d}
}
\subfigure[]
{
\includegraphics[height=4.4cm, width=6cm]{images/movie_lens/e.png}
\label{fig:demo_e}
}
\subfigure[]
{
\includegraphics[height=4.4cm, width=6cm]{images/movie_lens/f.png}
\label{fig:demo_f}
}
\caption{Fuzzy sets used to demonstrate the attributes of SMs and DMs in Table \ref{tab:type1_results}.}
\label{fig:demonstration_T1_sets}
\end{figure}
In the next section, the measures are combined into a single measure resulting in a single value, which can be used to determine the similarity, distance and direction between FSs.
\section{Combining Measures}
\label{sec:combining_measures}
The comparative measure removes the need to choose between measures of similarity or distance, and by combining both measures it creates a more detailed comparison of FSs. Both of these aspects are particularly important in cases where a potentially large number of FSs are generated from data. In such cases, an appropriate decision between the individual measures (and/or joint result interpretation) cannot be conducted by a human expert but has to be done automatically. Thus, a single measure is proposed to provide a detailed comparison of FSs.
\subsection{A Single Comparative Measure}
Note that both measures commonly yield results in different domains; SMs within $[0,1]$ and DMs within $\mathbb{R}$ (or $\mathbb{R}^+$ if it is non-directional). A decision must therefore be made as to which domain will be used for the results of the combined measure. The following presents a measure which yields results in $[0,1]$, for which the value 0 indicates minimum distance/maximum similarity, and the value 1 indicates maximum distance/minimum similarity.
To fuse the measures, it is important to consider that similarity and distance represent two fundamentally different comparisons of FSs; i.e. both measures measure ``opposite'' concepts. The SM indicates how \textit{similar} or how \textit{close} two FSs are placed, and the DM indicates how \textit{far apart} they are positioned. To fuse these measures they must both represent the same concept, either both \textit{similarity/closeness} or both \textit{dissimilarity/distance}. The following considers the latter case.
As similarity is within the domain $[0, 1]$, to achieve a measure of dissimilarity for the combined measure the complement of the SM may be used (i.e. $1 - s(A,B)$) \cite{Lipkus1999tanimoto}. This can then be used in conjunction with the DM.
Since the DM yields results in $\mathbb{R}$, its result should be normalised into $[0,1]$ to enable a meaningful fusion of both measures. To normalise the result, it is necessary to take into account the UoD in which the measure has been applied. For example, if the UoD is $\{1, 2, 3, 4, 5\}$ then the maximum distance that can be achieved is 4. The result of the DM may therefore be given as a ratio to the maximum possible distance. Taking the above into account, $\frac{d(A,B)}{\lambda}$ is used to obtain a ratio of closeness from the DM, where $\lambda$ is the largest possible distance within the UoD. For a finite UoD $X$ described as $\{x_1, x_2, \ldots, x_n\}$, $\lambda$ will be $x_n - x_1$.
Given the above, the OWA provides a reasonable approach to fusing both measures ((\ref{eq:jaccard}) and (\ref{eq:CR_haus})). (\ref{eq:fused}) presents the comparative measure as an OWA based aggregation of the measures of similarity and distance for FSs $A$ and $B$:
\begin{equation}
c(A,B) =
\begin{cases}
F\big( 1-s(A,B),\ \frac{d(A,B)}{\lambda} \big), & d(A,B) \geq 0 \\
F\big( {-(1-s(A,B))},\ \frac{d(A,B)}{\lambda} \big), & \text{otherwise}
\end{cases}
\label{eq:fused}
\end{equation}
where $F$ is an OWA as shown in (\ref{eq:OWA}) with weights $w=\{0.7, 0.3\}$ and $d$ is the DDM (\ref{eq:CR_haus}) with (\ref{eq:interval_haus_with_sign}).
The weights $w=\{0.7, 0.3\}$ are chosen such that the largest of the dissimilarity measure and normalised DM within (\ref{eq:fused}) is assigned the weight 0.7, and the smallest is assigned the weight 0.3. Note that the absolute values of the measures are used when assigning the weights, thus a measure of -0.45 is considered larger than a measure of 0.3. These weights have been determined heuristically as outlined in Section \ref{sec:owa_weights}, and in the future other ways of determining such weights may be investigated.
Note that if the result from the DDM gives a negative value then the result of (\ref{eq:fused}) will also be a negative value. Likewise, if the DDM gives a positive value then the result of the combined measure will also be positive.
In (\ref{eq:fused}), a value of 0 represents identical FSs, as proven in theorem 1 below, and a value of 1 (or -1) represents the maximum distance possible of two disjoint FSs. If one wishes to have the value 1 to represent identical FSs, the complement of (\ref{eq:fused}) may be used as
\begin{equation}
c'(A,B) =
\begin{cases}
1 - c(A,B), & \text{if } c(A,B) \geq 0 \\
-1 - c(A,B), & \text{otherwise}
\end{cases}
\label{eq:fused_comp}
\end{equation}
Note that (\ref{eq:fused_comp}) maintains the direction according to the comparative measure (\ref{eq:fused}).
Within (\ref{eq:fused}), the measure of similarity is altered such that it reflects dissimilarity or distance. Note that this is just one method of combining the measures proposed because of both its simplicity and its ability to represent both similarity and distance as demonstrated in the examples within the next section. Another method, for example, is the special case where the weights are both 0.5, resulting in the standard average of both measures. It is also possible that the result may be altered to yield results in the domain $\mathbb{R}$ by multiplying the SM by the value $\lambda$ and fusing the result with the unaltered DM.
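A sketch of the comparative measure (\ref{eq:fused}) operating on precomputed similarity and directional-distance values. Since both OWA inputs carry the sign of the distance, the ordering by absolute value can be performed on magnitudes and the sign restored afterwards. Assuming $\lambda = 1$ for the three-set weights example of Table \ref{tab:weights_example} (the paper does not state $\lambda$ there), this reproduces the $w=\{0.7, 0.3\}$ row of that table.

```python
def comparative(s, d, lam, w=(0.7, 0.3)):
    """Comparative measure: OWA of the dissimilarity 1 - s and the
    normalised distance |d| / lam, the larger magnitude taking the first
    weight; the sign of the directional distance d is preserved."""
    dissim = 1.0 - s
    dist = abs(d) / lam
    hi, lo = max(dissim, dist), min(dissim, dist)
    result = w[0] * hi + w[1] * lo
    return result if d >= 0 else -result

# Weights-selection example: s(A,B) = 0.182, s(A,C) = 0.0, and
# d = 0.331 for both pairs (lambda = 1 assumed for illustration).
print(round(comparative(0.182, 0.331, 1.0), 3))  # 0.672
print(round(comparative(0.0, 0.331, 1.0), 3))    # 0.799
```

Note that with $s = 0$ (disjoint sets) the dissimilarity term is 1 and always receives the first weight, which is the behaviour exploited when choosing the weights below.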
\subsection{Choosing the OWA Weights}
\label{sec:owa_weights}
The following discusses how the weights of the OWA operator (\ref{eq:OWA}) may be chosen for the comparative measure (\ref{eq:fused}). Referring to Fig. \ref{fig:weights_example}, the FS pairs $(A, B)$ and $(A, C)$ are compared. According to the DM (\ref{eq:CR_haus}) the distance for both pairs is 0.331. However, according to the SM (\ref{eq:jaccard}) the similarity of $(A, B)$ is 0.182, and the similarity of $(A, C)$ is 0.0. Because the DM gives the same result for $(A, B)$ and $(A, C)$, one might assume that $B$ and $C$ are the same FS. It is only by also referring to the SM (or by viewing the FSs) that it becomes clear that the FSs are different. By using the comparative measure, however, it is possible to distinguish between different pairs of FSs which give equal values of similarity or distance. It is also important to note that this can be confirmed by using a single measure; a user does not have to check the results of both the SM and DM to ensure pairs of FSs are different.
The weights of the comparative measure play an important role in distinguishing between different pairs of FSs which result in equal values from a single measure. Table \ref{tab:weights_example} shows the difference between pairs $(A,B)$ and $(A,C)$ using a variety of weights. As the first weight increases in value, the difference between the two pairs also increases; the results begin to signify that $B$ is closer to $A$ than $C$ is to $A$. This is because the dissimilarity measure gives 1 for disjoint sets (such as the pair $(A,C)$) and thus, in such cases, will always be given the first weight. As the first weight increases in value the overall measure is, in effect, placing more importance on the fact that the sets are disjoint.
However, it is unhelpful to have too large a value for the first weight. If the first weight equals 1 then the output of the combined measure will always equal 1 for disjoint sets. Considering this, the first weight should be low enough such that it is possible to distinguish between different pairs of disjoint sets. However, it must also be large enough such that it is possible to make a distinction between FSs which give equivalent values of similarity or distance, such as those in Fig. \ref{fig:weights_example}.
Ideally, when the FSs are known beforehand, the weights should be tuned such that the widest range of values are given by the measure. This decreases possible confusions over pairs of FSs which would give close or identical values from a single measure. However, if the weights cannot be tuned, the weights $\{0.7, 0.3\}$ are ideal and are used throughout this paper. This is because tests showed that these weights are useful for preventing disjoint FSs from resulting in a lower distance/dissimilarity than non-disjoint FSs.
\begin{figure}[t]
\centering
\includegraphics[height=4.7cm, width=6cm]{images/weights_demo.png}
\caption{Three FSs, $A$, $B$ and $C$.}
\label{fig:weights_example}
\end{figure}
\begin{table}
\caption{Comparative measure on the FSs within Fig. \ref{fig:weights_example} using different weights (* indicates the chosen weights within this paper).}
\begin{center}
\begin{tabular}{ c c c c }
\toprule
\ Weight 0 & Weight 1 & c(A,B) & c(A,C) \\ \midrule
\ 0.0 & 1.0 & 0.331 & 0.331 \\
\ 0.1 & 0.9 & 0.380 & 0.398 \\
\ 0.2 & 0.8 & 0.428 & 0.465 \\
\ 0.3 & 0.7 & 0.477 & 0.532 \\
\ 0.4 & 0.6 & 0.526 & 0.598 \\
\ 0.5 & 0.5 & 0.574 & 0.665 \\
\ 0.6 & 0.4 & 0.623 & 0.732 \\
*0.7 & 0.3 & 0.672 & 0.799 \\
\ 0.8 & 0.2 & 0.721 & 0.866 \\
\ 0.9 & 0.1 & 0.769 & 0.933 \\
\ 1.0 & 0.0 & 0.818 & 1.0 \\ \bottomrule
\end{tabular}
\end{center}
\label{tab:weights_example}
\end{table}
\subsection{Properties of the Combined Measure}
This Section introduces and proves the properties of the combined measure (\ref{eq:fused}).
\begin{thm}[Self-identity]
The comparative measure (\ref{eq:fused}) follows the property of self-identity. That is, $c(A,B) = 0 \Longleftrightarrow A = B$.
\end{thm}
\begin{IEEEproof}
If $A = B$ then $s(A,B)=1$ according to the property of reflexivity, and so $w(1-s(A,B))=0$. \\
Also, if $A=B$ then $d(A,B) = 0$ according to the property of self-identity, and so $w \bigg( \frac{d(A,B)}{\lambda} \bigg) = 0$ for any $w$. Thus $c(A,B) = 0$ if $A=B$.\\
Alternatively, if $A \neq B$ then $s(A,B) \neq 1$ and $d(A,B) \neq 0$, thus $c(A,B) \neq 0$.
\end{IEEEproof}
\begin{thm}[Symmetry]
The comparative measure (\ref{eq:fused}) follows the property of symmetry. That is, $c(A,B) = c(B,A)$.
\end{thm}
\begin{IEEEproof}
If the SM and DM that are aggregated are both symmetrical, then the same values will be given to the comparative measure for both $c(A,B)$ and $c(B,A)$, thus the comparative measure is also symmetrical.
\end{IEEEproof}
\begin{thm}[Partial Symmetry]
Where the DDM (\ref{eq:interval_haus_with_sign}) is used with the comparative measure (\ref{eq:fused}), the property of partial symmetry holds. That is $|c(A,B)| = |c(B,A)|$, and $c(A,B) \neq c(B,A)$ where $A \neq B$
\end{thm}
\begin{IEEEproof}
When the distance is a positive value, the values of both distance and dissimilarity given to the OWA are in the positive domain. Likewise, where the distance is negative, both inputs given to the OWA are in the negative domain. In each case, the absolute values of the positive and negative inputs are the same, thus the absolute values of the outputs are also the same, and the sign of the final value is in the same domain as the input values.
\end{IEEEproof}
\begin{thm}[Separability]
The result of the comparative measure is always greater than or equal to zero, i.e. $c(A,B) \geq 0$, when aggregating with the non-directional DM.
\end{thm}
\begin{IEEEproof}
Given that $1-s(A,B) \in [0,1]$, and $\frac{d(A,B)}{\lambda} \in [0,1]$, as $d(A,B)$ never exceeds $\lambda$, it follows that $c(A,B) \in [0,1]$ and thus $c(A,B) \geq 0$.\\
\end{IEEEproof}
Note, however, if the DDM is used to construct the comparative measure, then $c(A,B) \in [-1, 1]$. Thus, separability does not apply where the DDM is used.
\begin{thm}[Transitivity]
The comparative measure (\ref{eq:fused}) follows the property of transitivity. That is, if $A \subseteq B \subseteq C$, then $c(A, B) \leq c(A, C)$
\end{thm}
\begin{IEEEproof}
Given that both the dissimilarity measure and the DM follow transitivity, when they are aggregated the resulting comparative measure also follows transitivity.
\end{IEEEproof}
\begin{thm}[Triangle Inequality]
The comparative measure (\ref{eq:fused}) follows the property of triangle inequality. That is, $ c(A, C) \leq c(A, B) + c(B, C) $
\end{thm}
\begin{IEEEproof}
Given that both the dissimilarity measure \cite{Lipkus1999tanimoto} and the DM follow triangle inequality, when they are aggregated the resulting comparative measure also follows triangle inequality.
\end{IEEEproof}
The complement of the comparative measure (\ref{eq:fused_comp}), likewise, follows the properties of symmetry, separability, transitivity and triangle inequality. However, the complement does not satisfy self-identity and instead follows reflexivity ($c'(A,B)=1 \Leftrightarrow A=B$). This is because the complement uses 1 to indicate identical FSs, whereas the comparative measure uses 0 instead. It is trivial to see from Theorem 1 that the complement of the comparative measure satisfies reflexivity.
Note that the comparative measure does not follow the property of overlapping (i.e. if $A \cap B \neq \emptyset$, then $c(A, B) > 0$, otherwise $c(A, B) = 0$) unless the weights $w=\{1.0, 0\}$ are given, such that the maximum weight is given to the dissimilarity measure when the FSs are identified as disjoint.
\section{Demonstrations}
\label{sec:demonstrations}
Examples of the comparative measure (\ref{eq:fused}) are given in Table \ref{tab:type1_results} in which the measure is applied to the FSs in Fig. \ref{fig:demonstration_T1_sets}. A demonstration and discussion of the comparative measure in an applied context are presented next, and compared against using a single measure of similarity or distance.
\subsection{Demonstration - MovieLens}
The following is a discussion of comparisons between the FSs in Fig. \ref{fig:demonstration_T1_sets} according to the comparative measure (\ref{eq:fused}), the results of which are shown in Table \ref{tab:type1_results}.
\textbf{Sets \textit{a} \& \textit{b}} The results of \textit{a} and \textit{b} in Table \ref{tab:type1_results} have a high degree of dissimilarity/distance. Additionally, the sign of the comparative measure shows in which direction the dissimilar regions of the FSs reside. To the left of $A$ in the case of the FSs in \textit{a} and to the right of $A$ in the case of the sets in \textit{b}. The comparative measure also shows that the FSs within \textit{b} are closer than those in \textit{a}.
\textbf{Sets \textit{c} \& \textit{d}} The FSs within \textit{c} and \textit{d} both have a slightly increased degree of dissimilarity/distance than indicated by the original SM and DM. This is due to a large difference between the range of elements contained within the sets, which increases the dissimilarity according to the measure. In both cases, the FS $B$ covers the range $[1,5]$ whereas $A$ only covers $[1,2]$ in \textit{c}, and $[4,5]$ in \textit{d}. It can also now be observed, by using the comparative measure alone, that $B$ is to the right of $A$ in \textit{c}, and is to the left of $A$ in \textit{d}. The ordering and direction of the distance between \textit{c} and \textit{d} are the same using both the DM and the comparative measure.
\textbf{Sets \textit{e}} The FSs within \textit{e} are disjoint and were thus given the value 0 by the Jaccard SM. However, the comparative measure gives a non-zero value for \textit{e}. Note that this value is still the largest dissimilarity/distance compared to the other pairs within Table \ref{tab:type1_results}. This value also helps to identify the direction of the FSs indicating that the FS $B$ is to the right of $A$.
\textbf{Sets \textit{f}} The comparative measure indicates with a high degree of certainty that the FSs of \textit{f} are nearly identical.
A possible application of the comparative measure is to the problem of ranking. Comparing the comparative measure against the DM, which may also be used for ranking \cite{Cheng1998307}, the ordering of the FSs differs. By observing the absolute values of the measures, according to the DM the most distant pair is \textit{a} and the second most distant is \textit{e}; however, it is the other way round according to the comparative measure. This is because the SM indicates there is some similarity between the FSs within Fig. \ref{fig:demo_a}, which causes the comparative measure to decrease in dissimilarity/distance. However, the FSs in Fig. \ref{fig:demo_e} are disjoint, so the dissimilarity/distance remains high. This leaves the FSs with no similarity as the most distant. One could argue that this is an expected result of the comparative measure because the FSs within \textit{e} are disjoint whereas the FSs within \textit{a} are not. Thus the measure may be considered more intuitive, as it is natural to consider the sets in \textit{e} as being more distant than the sets in \textit{a}.
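The construction above can be sketched in a few lines of code. The snippet below is a minimal illustration, not the paper's implementation: the Jaccard SM is computed on discretised membership vectors, a signed centroid difference stands in for the Hausdorff-based directional DM of (\ref{eq:CR_haus}), and a fixed weight vector $w$ plays the role of the OWA weights; all function names are hypothetical.

```python
# Illustrative sketch of a fused (comparative) measure for discrete fuzzy sets.
# The signed centroid difference is a stand-in for the paper's directional DM;
# function names and weights are hypothetical.

def jaccard(mu_a, mu_b):
    """Jaccard similarity of two membership vectors on a shared domain."""
    num = sum(min(a, b) for a, b in zip(mu_a, mu_b))
    den = sum(max(a, b) for a, b in zip(mu_a, mu_b))
    return num / den if den > 0 else 0.0

def centroid(xs, mu):
    return sum(x * u for x, u in zip(xs, mu)) / sum(mu)

def comparative(xs, mu_a, mu_b, w=(0.5, 0.5)):
    """Signed fused measure: 0 for identical sets, positive when B lies to the
    right of A, negative when it lies to the left; the magnitude grows with
    dissimilarity (1 - similarity) and with the normalised distance."""
    s = jaccard(mu_a, mu_b)
    d = centroid(xs, mu_b) - centroid(xs, mu_a)   # signed distance stand-in
    span = max(xs) - min(xs)
    sign = 1.0 if d > 0 else (-1.0 if d < 0 else 1.0)
    return sign * (w[0] * (1.0 - s) + w[1] * abs(d) / span)

xs = [1, 2, 3, 4, 5]
A  = [0.0, 0.5, 1.0, 0.5, 0.0]
B  = [0.0, 0.0, 0.5, 1.0, 0.5]   # A shifted one element to the right

print(comparative(xs, A, A))  # identical sets -> 0
print(comparative(xs, A, B))  # B to the right of A -> positive
print(comparative(xs, B, A))  # A to the left of B  -> negative
```

As in the discussion of sets \textit{a}--\textit{f}, the sign encodes direction while the magnitude encodes how dissimilar and distant the pair is.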
As stated earlier, the unique properties of similarity and distance enable the measures to be applied to a wide variety of fields, and the same can be said for the comparative measure, which, as demonstrated, can be used in terms of a measure of similarity and a measure of distance. For example, with the FSs in Fig. \ref{fig:demonstration_T1_sets} the comparative measure may be used to find similarly rated films by choosing FSs with a low value of dissimilarity/distance, or it may be used to rank the film ratings by ordering the results of the measure.
\subsection{Demonstration - Classification}
\label{sec:demo_classification}
This section presents a synthetic example of the comparative measure applied to the problem of classification. In this example, three initial FSs are given which represent three different descriptions. In this case, they each represent different levels of ambience within an establishment on a scale from 1 to 10. These levels, as shown in Fig. \ref{fig:restaurant_example}, are labelled as \textit{Poor}, \textit{OK} and \textit{Great}. Given a FS representing the ambience of a restaurant, as shown in Fig. \ref{fig:restaurant_example}, the aim is to classify which description best fits the restaurant.
In Table \ref{tab:restaurant_example}, comparisons are given between the ambience of the restaurant and the descriptions. The measures of similarity (\ref{eq:jaccard}) and distance (\ref{eq:CR_haus}) are shown, as well as the complement of the comparative measure (\ref{eq:fused_comp}). For each measure, the word model is given as the first parameter, and the restaurant is given as the second parameter. The complement of the comparative measure is given to match the SM, such that both measures give the value 1 for identical FSs.
\begin{figure}[t]
\centering
\includegraphics[height=4.5cm, width=6cm]{images/classification.png}
\caption{Three FSs modelling degrees of ambience, with a FS representing the ambience of a given restaurant.}
\label{fig:restaurant_example}
\end{figure}
\begin{table}[t]
\caption{Similarity and distance between the restaurant and word models in Fig. \ref{fig:restaurant_example}}
\begin{center}
\begin{tabular}{ c c c c }
\toprule
& Poor & OK & Great \\ \midrule
Similarity (\ref{eq:jaccard}) & 0.081 & 0.493 & 0.469 \\
Distance (\ref{eq:CR_haus}) & 5.573 & 1.064 & -3.360 \\
Comparative (\ref{eq:fused_comp}) & 0.171 & 0.609 & -0.516 \\ \bottomrule
\end{tabular}
\end{center}
\label{tab:restaurant_example}
\end{table}
According to the SM (\ref{eq:jaccard}), the restaurant's ambience is similar to the descriptions \textit{Poor}, \textit{OK} and \textit{Great} to the degrees 0.081, 0.493 and 0.469, respectively. It is clear from these results that the restaurant's ambience cannot be described as \textit{Poor}; however, it is almost equally valid that it may be described as \textit{OK} or \textit{Great}.
The DM (\ref{eq:CR_haus}), however, gives a clearer view of which FS the restaurant most closely matches; the restaurant has a smaller distance to \textit{OK} than to \textit{Great}. Thus, by fusing the distance and similarity as in (\ref{eq:fused}) and (\ref{eq:fused_comp}), a more distinct match is achieved. Now, it is clear from the results of the comparative measure in Table \ref{tab:restaurant_example} that the restaurant most closely matches \textit{OK} ambience to the degree of 0.609, whereas it only matches \textit{Great} by 0.516 and \textit{Poor} by 0.171. It can now be determined with greater certainty that \textit{OK} is the correct classification. It should be noted that this can be determined by observing a single measure (\ref{eq:fused_comp}), rather than viewing the SM and DM separately.
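The classification step can be sketched as follows. The snippet is purely illustrative: the triangular word models, the restaurant FS and the fused score below are stand-ins chosen for this sketch, not the actual FSs of Fig. \ref{fig:restaurant_example} or the exact measures (\ref{eq:jaccard}), (\ref{eq:CR_haus}) and (\ref{eq:fused_comp}).

```python
# Illustrative classification by a fused similarity/distance score.
# Triangular FSs and the fused score are hypothetical stand-ins for the
# paper's word models and comparative measure.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

xs = [i / 10.0 for i in range(10, 101)]          # universe [1, 10]
models = {
    "Poor":  [tri(x, 1.0, 1.5, 4.0) for x in xs],
    "OK":    [tri(x, 3.0, 5.5, 8.0) for x in xs],
    "Great": [tri(x, 7.0, 9.5, 10.0) for x in xs],
}
restaurant = [tri(x, 4.0, 6.0, 8.0) for x in xs]

def jaccard(u, v):
    den = sum(max(a, b) for a, b in zip(u, v))
    return sum(min(a, b) for a, b in zip(u, v)) / den if den else 0.0

def centroid(mu):
    return sum(x * u for x, u in zip(xs, mu)) / sum(mu)

def fused_score(u, v, w=(0.5, 0.5)):
    """Higher is better: similarity minus a normalised distance penalty."""
    s = jaccard(u, v)
    d = abs(centroid(u) - centroid(v)) / (max(xs) - min(xs))
    return w[0] * s - w[1] * d

scores = {name: fused_score(mu, restaurant) for name, mu in models.items()}
best = max(scores, key=scores.get)
print(best)
```

With these (hypothetical) membership functions the fused score separates \textit{OK} from \textit{Great} more cleanly than similarity alone, mirroring the behaviour seen in Table \ref{tab:restaurant_example}.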
\section{Conclusions}
\label{sec:conclusions}
\enlargethispage{-0.5cm}
This paper has introduced a novel measure, referred to as a comparative measure, which analyses and compares FSs by combining a SM and DM. When these measures are viewed separately, the results may be difficult and time-consuming to interpret, as similarity and distance each measure fundamentally different concepts. By joining the measures together, the comparison of FSs is simplified by reducing any ambiguity in the results. Additionally, compared to a single measure, the combined measure provides a richer comparison as it may be swayed towards a preference in representing similarity or distance. This is especially useful for the automatic comparison of a large number of FSs which have been constructed from data. Additionally, through using an OWA operator, it is possible to refine the weights to further alleviate ambiguous values resulting from the original measures.
Demonstrations using data-driven FSs have shown that the comparative measure may be applied in terms of both similarity and distance, and as such may be applied to applications of these measures. Though the demonstrations have been applied to type-1 FSs only, as the comparative measure uses the outputs of the SM and DM, it may also be applied to type-2 FSs, where the original measures are a type-2 SM and DM.
Future work will look at measures which indicate similarity and distance as a FS, which better reflects the uncertainty inherent in FSs.
\bibliographystyle{IEEEtran}
\section{Introduction}
Recently, there have been various developments expanding our understanding of the structure of vacua from string and M-theory. First, conjectures have been proposed which restrict the possible landscape of quantum gravity. In particular, a strong version, \cite{Ooguri:2016pdq, Freivogel:2016qwc}, of the weak gravity conjecture, \cite{ArkaniHamed:2006dz}, implies the non-existence of stable non-supersymmetric $AdS$ vacua in quantum gravity. Second, machine learning techniques have enabled us to discover a larger landscape of $AdS$ vacua in string and M-theory, \cite{Comsa:2019rcz, Bobev:2019dik, Krishnan:2020sfg, Bobev:2020ttg, Bobev:2020qev}. In gauged supergravity, due to the complexity of scalar potentials, the search for critical points was a daunting task. Third, as an application of exceptional field theory, a powerful tool of Kaluza-Klein spectroscopy for mass spectra was developed in \cite{Malek:2019eaz, Malek:2020mlk, Malek:2020yue, Varela:2020wty, Cesaro:2020soq}. It is useful in checking the perturbative stability of an $AdS$ vacuum against the Breitenlohner-Freedman (BF) bound, \cite{Breitenlohner:1982bm, Breitenlohner:1982jf, Gibbons:1983aq}.
In addition, a new decay channel for $AdS$ vacua, called the brane-jet instability, was proposed in \cite{Bena:2020xxb}. Along the lines of previous studies, \cite{Maldacena:1998uz, Danielsson:2017max, Apruzzi:2019ecr}, it employs a probe brane to test the instability. When the force acting on the probe brane is repulsive, the vacuum is determined to be unstable. In close relation to the developments we mentioned, the brane-jet instabilities of numerous $AdS$ vacua from gauged supergravity in diverse dimensions have been tested, \cite{Bena:2020xxb, Suh:2020rma, Guarino:2020jwv, Apruzzi:2021nle}. In the end, among the non-supersymmetric $AdS$ vacua which have been tested in the literature, only seven $AdS_4$ vacua of massive type IIA supergravity, \cite{Guarino:2015jca, Guarino:2015qaa, Guarino:2015vca}, are proven to be both perturbatively, \cite{Guarino:2020flh}, and brane-jet stable, \cite{Guarino:2020jwv}.{\footnote{Recently, in \cite{Bomans:2021ara} so-called $dilaton$ bubble solution was explicitly constructed for the $G_2$-symmetric vacuum which is one of the seven $AdS_4$ vacua of massive type IIA supergravity known to be BF and brane-jet stable.}}
In this paper, we will show that there are more non-supersymmetric $AdS$ vacua which are both perturbatively and brane-jet stable: the non-supersymmetric Janus solutions of type IIB supergravity, and the skew-whiffed Freund-Rubin and the $AdS_4$ vacua on $Q^{1,1,1}$ and $M^{1,1,1}$ manifolds in eleven-dimensional supergravity. They are possible counter-examples to the swampland conjecture for non-supersymmetric $AdS$ vacua, \cite{Ooguri:2016pdq, Freivogel:2016qwc}.
\subsection{Janus solutions of type IIB supergravity}
So far, the brane-jet instability has only been tested for $AdS$ vacua from flat domain walls. We test the brane-jet instability of $AdS$ solutions from curved domain walls, namely the non-supersymmetric Janus solutions of type IIB supergravity, \cite{Schwarz:1983qr, Howe:1983sra}, obtained by Bak, Gutperle and Hirano in \cite{Bak:2003jk}, which is the first Janus solution constructed in the literature. This class of solutions was shown to be stable perturbatively in \cite{Bak:2003jk} and by the Nester-Witten positive energy theorem in \cite{Freedman:2003ax}.
In order to understand the brane-jet in the curved domain walls, we start by studying a simple example: we put the famous supersymmetric $AdS_5\times{S}^5$ solution in the $AdS_4$-sliced coordinates and compare the brane-jet with the usual $AdS_5\times{S}^5$ solution in the Poincar{\'e} coordinates. In the case of curved domain walls, the worldvolume of the probe D3-brane is on $AdS_4$ instead of the Mink$_4$ of the usual flat domain walls. As supersymmetric solutions, they are both brane-jet stable, but display different brane-jet behaviours. Then we study the brane-jet of the non-supersymmetric Janus solutions. The non-supersymmetric Janus solutions turn out to be brane-jet stable.
\subsection{$AdS_4$ vacua from Sasaki-Einstein manifolds}
For a Sasaki-Einstein manifold, $SE_7$, there is the Freund-Rubin solution, \cite{Freund:1980xh}, which is the $\mathcal{N}\,=\,2$ supersymmetric $AdS_4\,\times\,SE_7$ solution of eleven-dimensional supergravity, \cite{Cremmer:1978km}. There are also non-supersymmetric solutions: the skew-whiffed Freund-Rubin, the Pope-Warner, and the Englert solutions.
The skew-whiffed Freund-Rubin solutions, \cite{Freund:1980xh}, are obtained by flipping the orientation of $SE_7$ of supersymmetric Freund-Rubin solutions. This solution breaks all the supersymmetry. However, if $SE_7$ is $S^7$, it preserves the full supersymmetry. As this solution inherits its mass spectrum from the supersymmetric one, it is automatically perturbatively stable by the Breitenlohner-Freedman bound, \cite{Duff:1984sv}.
The Pope-Warner solutions, \cite{Pope:1984bd, Pope:1984jj}, break all the supersymmetry. However, they were shown to be stable within the massive truncations of \cite{Gauntlett:2009zw, Gauntlett:2009bh, Cassani:2012pj}. On the other hand, when $SE_7$ is $S^7$, unstable modes were found in the $SU(4)^-$-invariant sector of gauged $\mathcal{N}\,=\,8$ supergravity, \cite{deWit:1982bul}, which is the consistent truncation containing the solutions, \cite{Bobev:2010ib}. They were also proved to be unstable within the massive truncation on tri-Sasakian manifolds, \cite{Cassani:2011fu}. The final perturbative instability of these solutions on arbitrary Sasaki-Einstein manifolds was proved in \cite{Pilch:2013gda}.
The Englert solutions were first found on $S^7$ in \cite{Englert:1982vs}, and were generalized to $SE_7$ in \cite{Awada:1982pk}. The Englert solutions break all the supersymmetry, \cite{Englert:1983qe, Page:1984fu}. The perturbatively unstable modes of the solutions were found in \cite{Page:1984fu} and also within the massive truncations of \cite{Gauntlett:2009zw, Gauntlett:2009bh}. This solution corresponds to the $SO(7)^-$ fixed point of gauged $\mathcal{N}\,=\,8$ supergravity and the instability of the fixed point was shown in \cite{deWit:1983gs, Biran:1984jr}.
To recapitulate, among the non-supersymmetric $AdS_4$ solutions from Sasaki-Einstein manifolds, the skew-whiffed Freund-Rubin solutions are known to be perturbatively stable and the Pope-Warner and the Englert solutions are proved to be unstable. For a more detailed account on the stability analysis of the solutions, see the introduction of \cite{Pilch:2013gda} and section 3 of \cite{Gauntlett:2009bh}.
Additionally, there are non-supersymmetric $AdS_4$ solutions from eleven-dimensional supergravity on Sasaki-Einstein manifolds of $Q^{1,1,1} $ and $M^{1,1,1}$ found by Cassani, Koerber and Varela in \cite{Cassani:2012pj}. These solutions were proven to be perturbatively stable within the truncation performed there.
So far we have reviewed a number of $AdS_4\,\times\,SE_7$ solutions in eleven-dimensional supergravity. In this paper, we will examine the brane-jet instabilities of these solutions. It turns out that all solutions we consider are brane-jet stable. Thus the skew-whiffed Freund-Rubin solutions and the solutions on $Q^{1,1,1} $ and $M^{1,1,1}$ are both perturbatively and brane-jet stable. Other $AdS_4$ vacua are already known to be perturbatively unstable.
\bigskip
In section 2, we study the brane-jet of the non-supersymmetric Janus solutions. In section 3, we consider the supersymmetric Freund-Rubin, the skew-whiffed Freund-Rubin, the Pope-Warner, and the Englert solutions and calculate the M2-brane probe potentials to examine the brane-jet instability. In section 4, we consider the solutions from $Q^{1,1,1} $ and $M^{1,1,1}$ manifolds and calculate the M2-brane probe potentials to examine the brane-jet instability. We conclude in section 5. In an appendix, we present the normalization of the supersymmetric $AdS_5\times{S}^5$ solution of type IIB supergravity.
\section{Non-supersymmetric Janus solutions}
\subsection{The solutions}
We review the non-supersymmetric Janus solutions of type IIB supergravity in \cite{Bak:2003jk}. The solutions in the Einstein frame are given by the metric, the dilaton field, and the five-form flux, respectively,
\begin{align} \label{janusjanus}
ds^2\,=&\,f(r)\left(ds_{AdS_4}^2+dr^2\right)+ds_{S^5}^2\,, \notag \\
\phi\,=&\,\phi(r)\,, \notag \\
F_{(5)}\,=&\,4f(r)^{5/2}dr\wedge\,\text{vol}_{AdS_4}+4\text{vol}_{S^5}\,,
\end{align}
where the range of $r$ is $r\in[-\frac{\pi}{2},\frac{\pi}{2}]$.{\footnote{We corrected the factors in the five-form flux to be 4 instead of 2 in \cite{Bak:2003jk}. See appendix A for the normalization of the $AdS_5\times{S}^5$ solutions.}}
The Einstein equations give
\begin{align}
2f'^2-2ff''\,=&\,-4f^3+\frac{1}{2}f^2\phi'^2\,, \notag \\
12f^2+f'^2+2ff''\,=&\,16f^3\,.
\end{align}
The equation of motion for the dilaton field reduces to
\begin{equation}
\phi'(r)\,=\,\frac{c_0}{f(r)^{3/2}}\,,
\end{equation}
where $c_0$ is a constant. We obtain numerical solutions and present them in Figure 1.
\begin{figure}[t]
\begin{center}
\includegraphics[width=3.0in]{Janusf} \qquad \includegraphics[width=3.0in]{Janusphi}
\caption{{\it Representative solutions for $f(r)$ and $\phi(r)$ with $c_0\,=\,0.2,\,0.8,\,1.3$ in red, green, and blue, respectively.}}
\label{fig:janussol}
\end{center}
\end{figure}
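Adding the two Einstein equations and using the dilaton equation gives the first-order constraint $f'^2\,=\,4f^3-4f^2+\frac{c_0^2}{6f}$, so for $c_0=0$ the turning-point value is $f(0)=1$. As an illustrative cross-check of the numerics (not the code used for Figure 1), the following Python sketch integrates the second Einstein equation as a second-order ODE with a plain RK4 stepper and verifies that the $c_0=0$ solution reproduces $f(r)=1/\cos^2r$:

```python
import math

# RK4 integration of the second Einstein equation,
#   f'' = (16 f^3 - 12 f^2 - f'^2) / (2 f),
# from the turning point r = 0 with f(0) = 1, f'(0) = 0 (the c0 = 0 case).

def rhs(f, fp):
    return (16.0 * f**3 - 12.0 * f**2 - fp**2) / (2.0 * f)

def integrate(f0, r_max, h=1e-4):
    n = int(round(r_max / h))
    f, fp = f0, 0.0
    for _ in range(n):
        # classical fourth-order Runge-Kutta step for the pair (f, f')
        k1f, k1p = fp, rhs(f, fp)
        k2f, k2p = fp + 0.5 * h * k1p, rhs(f + 0.5 * h * k1f, fp + 0.5 * h * k1p)
        k3f, k3p = fp + 0.5 * h * k2p, rhs(f + 0.5 * h * k2f, fp + 0.5 * h * k2p)
        k4f, k4p = fp + h * k3p, rhs(f + h * k3f, fp + h * k3p)
        f  += h * (k1f + 2 * k2f + 2 * k3f + k4f) / 6.0
        fp += h * (k1p + 2 * k2p + 2 * k3p + k4p) / 6.0
    return f

f_num = integrate(1.0, 1.0)
f_exact = 1.0 / math.cos(1.0)**2
print(f_num, f_exact)
```

For $c_0\neq 0$ the same integrator applies once $f(0)$ is replaced by the largest root of $4f_0^4-4f_0^3+c_0^2/6\,=\,0$, which follows from the constraint at the turning point.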
When $c_0=0$ and
\begin{equation} \label{tosj}
f(r)\,=\,\frac{1}{\cos^2r}\,,
\end{equation}
the solution reduces to the supersymmetric $AdS_5\times{S}^5$ solution in the $AdS_4$-sliced coordinates.
\subsection{D3-brane probe}
\subsubsection{The $AdS_5\times{S}^5$ solution}
We consider the supersymmetric $AdS_5\times{S}^5$ solution of type IIB supergravity. The metric and the self-dual five-form flux are given, respectively, by
\begin{align}
ds^2\,=&\,ds_{AdS_5}^2+ds_{S^5}^2\,, \notag \\
F_{(5)}\,=&\,4\text{vol}_{AdS_5}+4\text{vol}_{S^5}\,.
\end{align}
We take the Poincar{\'e} coordinates for the $AdS_5$ and the corresponding volume form for the flux,
\begin{align}
ds^2\,=&\,\frac{-dt^2+dx^2+dy^2+dz^2+dr^2}{r^2}+ds^2_{S^5}\,, \notag \\
F_{(5)}\,=&\,\frac{4}{r^2}dt\wedge{d}x\wedge{d}y\wedge{d}z\wedge{d}r+4\text{vol}_{S^5}\,.
\end{align}
Then the four-form potential is given by
\begin{equation}
C_{(4)}\,=\,-\frac{1}{r^4}dt\wedge{d}x\wedge{d}y\wedge{d}z+\ldots\,.
\end{equation}
We partition the spacetime coordinates,
\begin{equation}
x^a\,=\,\{t,x,y,z\}\,, \qquad y^m\,=\,\{r,\ldots\}\,,
\end{equation}
and choose the static gauge,
\begin{equation}
x^a\,=\,\xi^a\,, \qquad y^m\,=\,y^m(t)\,,
\end{equation}
where $\xi^a$ are the worldvolume coordinates. The pull-back of the metric is
\begin{equation}
\tilde{G}_{ab}\,=\,G_{\mu\nu}\frac{\partial{x}^\mu}{\partial\xi^a}\frac{\partial{x}^\nu}{\partial\xi^b}\,.
\end{equation}
Now we study the worldvolume action of the D3-brane which is given by a sum of DBI and WZ terms. If the probe brane moves slowly, the worldvolume action in the Einstein frame is, $e.g.$, \cite{Clark:2004sb},
\begin{align}
S\,=&\,-\int{d}^4\xi\sqrt{-\text{det}(\tilde{G})}-\int\tilde{C}_{(4)} \notag \\
=&\,-\int{d}^4\xi\left(\frac{1}{r^4}-\frac{1}{2r^2}G_{mn}\dot{y}^m\dot{y}^n\right)-\int\left(-\frac{1}{r^4}\right)dt\wedge\,dx\wedge\,dy\wedge\,dz\,,
\end{align}
where $\tilde{C}_{(4)}$ is the pull-back of the four-form potential. Then the worldvolume action reduces to
\begin{equation}
S\,=\,\int{d}^4\xi\left(K-V\right)\,,
\end{equation}
where the kinetic and the potential terms are
\begin{align}
K\,=&\,\frac{1}{2r^2}G_{mn}\dot{y}^m\dot{y}^n+\ldots\,, \notag \\
V\,=&\,\frac{1}{r^4}-\frac{1}{r^4}\,=\,0\,.
\end{align}
For the $AdS_5\times{S}^5$ solution, the potential for the probe D3-brane vanishes identically. It is non-negative and, thus, the solution is brane-jet stable, as we expect for supersymmetric solutions.
\subsubsection{The $AdS_5\times{S}^5$ solution in the $AdS_4$-sliced coordinates}
We consider the $AdS_5\times{S}^5$ solution in the $AdS_4$-sliced coordinates, \eqref{tosj}, \cite{Bak:2003jk}. The metric and the self-dual five-form flux are given, respectively, by
\begin{align}
ds^2\,=&\,f(r)\left(ds_{AdS_4}^2+dr^2\right)+ds_{S^5}^2\,, \notag \\
F_{(5)}\,=&\,4f(r)^{5/2}dr\wedge\,\text{vol}_{AdS_4}+4\text{vol}_{S^5}\,,
\end{align}
where the function, $f(r)$, is given by
\begin{equation}
f(r)\,=\,\frac{1}{\cos^2r}\,.
\end{equation}
We take the Poincar{\'e} coordinates for the $AdS_4$,
\begin{align}
ds^2_{AdS_4}\,=&\,\frac{-dt^2+dx^2+dy^2+dz^2}{z^2}\,.
\end{align}
The four-form potential is given by
\begin{equation}
C_{(4)}\,=\,\left(4\int{f}(r)^{5/2}dr\right)\text{vol}_{AdS_4}+\ldots\,.
\end{equation}
We partition the spacetime coordinates,
\begin{equation}
x^a\,=\,\{t,x,y,z\}\,, \qquad y^m\,=\,\{r,\ldots\}\,,
\end{equation}
and choose the static gauge,
\begin{equation}
x^a\,=\,\xi^a\,, \qquad y^m\,=\,y^m(t)\,,
\end{equation}
where $\xi^a$ are the worldvolume coordinates. The pull-back of the metric is
\begin{equation}
\tilde{G}_{ab}\,=\,G_{\mu\nu}\frac{\partial{x}^\mu}{\partial\xi^a}\frac{\partial{x}^\nu}{\partial\xi^b}\,.
\end{equation}
\begin{figure}[t]
\begin{center}
\includegraphics[width=3in]{AdSpot}
\caption{{\it The force acting on probe D3-brane for the $AdS_5\times{S}^5$ solution in the $AdS_4$-sliced coordinates.}}
\label{fig:adspot}
\end{center}
\end{figure}
Now we study the worldvolume action of the D3-brane which is given by a sum of DBI and WZ terms. If the probe brane moves slowly, the worldvolume action in the Einstein frame is, $e.g.$, \cite{Clark:2004sb},
\begin{align}
S\,=&\,-\int{d}^4\xi\sqrt{-\text{det}(\tilde{G})}-\int\tilde{C}_{(4)} \notag \\
=&\,-\int{d}^4\xi\frac{1}{z^4}\left(f^2-\frac{f}{2}G_{mn}\dot{y}^m\dot{y}^n\right)-\int\frac{1}{z^4}\left(4\int{f}^{5/2}dr\right)dt\wedge\,dx\wedge\,dy\wedge\,dz\,,
\end{align}
where $\tilde{C}_{(4)}$ is the pull-back of the four-form potential. Then the worldvolume action reduces to
\begin{equation}
S\,=\,\int{d}^4\xi\left(K-V\right)\,,
\end{equation}
where the kinetic and the potential terms are
\begin{align}
K\,=&\,\frac{1}{z^4}\frac{f}{2}G_{mn}\dot{y}^m\dot{y}^n+\ldots\,, \notag \\
V\,=&\,\frac{1}{z^4}\left(f^2+4\int{f}^{5/2}dr\right)\,.
\end{align}
We numerically compute the force acting on the probe D3-brane, $z^4\frac{dV}{dr}$, and present it in Figure 2. It is positive, $i.e.$, the force is attractive. Thus the solution is brane-jet stable, as we expect for supersymmetric solutions.
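The attractive sign can also be verified analytically: with $f=1/\cos^2r$ the force is $z^4\frac{dV}{dr}\,=\,2ff'+4f^{5/2}\,=\,4\sec^4r\left(\tan r+\sec r\right)$, which is positive on $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$ since $\tan r+\sec r\,=\,(1+\sin r)/\cos r\,>\,0$ there. A short numerical scan (illustrative only) confirms this:

```python
import math

# Numerical check that the D3-brane force z^4 dV/dr = 2 f f' + 4 f^(5/2)
# is positive for the AdS4-sliced AdS5 x S^5 solution, f(r) = 1/cos^2(r).

def force(r):
    f = 1.0 / math.cos(r)**2
    fp = 2.0 * math.tan(r) / math.cos(r)**2   # f'(r)
    return 2.0 * f * fp + 4.0 * f**2.5

# grid strictly inside (-pi/2, pi/2)
rs = [-1.5 + 3.0 * i / 2000 for i in range(2001)]
print(min(force(r) for r in rs) > 0.0)
```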
\subsubsection{The non-supersymmetric Janus solutions}
We consider the non-supersymmetric Janus solutions presented in \eqref{janusjanus}. We partition the spacetime coordinates,
\begin{equation}
x^a\,=\,\{t,x,y,z\}\,, \qquad y^m\,=\,\{r,\ldots\}\,,
\end{equation}
and choose the static gauge,
\begin{equation}
x^a\,=\,\xi^a\,, \qquad y^m\,=\,y^m(t)\,,
\end{equation}
where $\xi^a$ are the worldvolume coordinates. The pull-back of the metric is
\begin{equation}
\tilde{G}_{ab}\,=\,G_{\mu\nu}\frac{\partial{x}^\mu}{\partial\xi^a}\frac{\partial{x}^\nu}{\partial\xi^b}\,.
\end{equation}
\begin{figure}[t]
\begin{center}
\includegraphics[width=3in]{Januspot}
\caption{{\it The force acting on probe D3-brane for the non-supersymmetric Janus solutions with $c_0\,=\,0.2,\,0.8,\,1.3$ in red, green, and blue, respectively.}}
\label{fig:januspot}
\end{center}
\end{figure}
Now we study the worldvolume action of the D3-branes which is given by a sum of DBI and WZ terms. If the probe branes move slowly, the worldvolume action in the Einstein frame is, $e.g.$, \cite{Clark:2004sb},
\begin{align}
S\,=&\,-\int{d}^4\xi\,\sqrt{-\text{det}(\tilde{G})}-\int\tilde{C}_{(4)} \notag \\
=&\,-\int{d}^4\xi\frac{1}{z^4}\left(f^2-\frac{f}{2}G_{mn}\dot{y}^m\dot{y}^n\right)-\int\frac{1}{z^4}\left(4\int{f}^{5/2}dr\right)dt\wedge\,dx\wedge\,dy\wedge\,dz\,,
\end{align}
where $\tilde{C}_{(4)}$ is the pull-back of the four-form potential. Then the worldvolume action reduces to
\begin{equation}
S\,=\,\int{d}^4\xi\left(K-V\right)\,,
\end{equation}
where the kinetic and the potential terms are
\begin{align}
K\,=&\,\frac{1}{z^4}\frac{f}{2}G_{mn}\dot{y}^m\dot{y}^n+\ldots\,, \notag \\
V\,=&\,\frac{1}{z^4}\left(f^2+4\int{f}^{5/2}dr\right)\,.
\end{align}
We numerically compute the force acting on the probe D3-brane, $z^4\frac{dV}{dr}$, and present it in Figure 3. It is positive, $i.e.$, the force is attractive. Thus we conclude that the non-supersymmetric Janus solutions are brane-jet stable.
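The sign of the force can also be checked without the full numerical solutions. Adding the two Einstein equations and using $\phi'=c_0f^{-3/2}$ gives the first-order constraint $f'^2\,=\,4f^3-4f^2+\frac{c_0^2}{6f}$, so the force $z^4\frac{dV}{dr}\,=\,2f\left(f'+2f^{3/2}\right)$ is positive provided $f'^2<4f^3$, $i.e.$ $f^3>c_0^2/24$; this holds along the whole flow since the turning-point value $f_0$ satisfies $c_0^2/24\,=\,f_0^3\left(1-f_0\right)<f_0^3$. The following illustrative Python scan confirms positivity on the potentially dangerous branch with $f'<0$:

```python
# Check that the D3-brane force stays positive along the Janus solutions,
# using the first-order constraint f'^2 = 4 f^3 - 4 f^2 + c0^2/(6 f).
# The force is z^4 dV/dr = 2 f (f' + 2 f^(3/2)); the dangerous branch is
# f' = -sqrt(4 f^3 - 4 f^2 + c0^2/(6 f)), i.e. the r < 0 side of the wall.

def turning_point(c0):
    """Largest root f0 of 4 f^4 - 4 f^3 + c0^2/6 = 0 in (3/4, 1), by bisection."""
    g = lambda f: 4.0 * f**4 - 4.0 * f**3 + c0**2 / 6.0
    lo, hi = 0.75, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) <= 0.0:
            lo = mid
        else:
            hi = mid
    return hi

def min_force(c0, f_max=100.0, n=20000):
    """Minimum of the force (up to the positive factor 1/z^4) over f in [f0, f_max]."""
    f0 = turning_point(c0)
    worst = float("inf")
    for i in range(n + 1):
        f = f0 + (f_max - f0) * i / n
        fp2 = max(4.0 * f**3 - 4.0 * f**2 + c0**2 / (6.0 * f), 0.0)
        force = 2.0 * f * (-fp2**0.5 + 2.0 * f**1.5)
        worst = min(worst, force)
    return worst

for c0 in (0.2, 0.8, 1.3):
    print(c0, min_force(c0) > 0.0)
```

The values of $c_0$ match those used for the representative solutions in Figure 1.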
\section{$AdS_4$ vacua from arbitrary Sasaki-Einstein manifolds}
\subsection{Freund-Rubin, skew-whiffed, Pope-Warner, and Englert solutions}
We consider $AdS_4$ solutions of eleven-dimensional supergravity on arbitrary seven-dimensional Sasaki-Einstein manifolds. In particular, we review the supersymmetric Freund-Rubin, the skew-whiffed Freund-Rubin, the Pope-Warner, and the Englert solutions. To present the solutions in a uniform manner, we employ the ansatz used for the consistent truncation of eleven-dimensional supergravity on arbitrary seven-dimensional Sasaki-Einstein manifolds, \cite{Gauntlett:2009zw, Gauntlett:2009bh}.
Locally, a Sasaki-Einstein manifold is a fibration over a K\"ahler-Einstein manifold,
\begin{equation}
ds^2_{SE_7}\,=\,ds^2_{KE_6}+\eta\otimes\eta\,,
\end{equation}
where $\eta$ is the one-form dual to the Reeb Killing vector from $d\eta\,=\,2J$ and $J$ is the K\"ahler form of $KE_6$. The (3,0)-form on $KE_6$ is denoted by $\Omega$ and satisfies $d\Omega\,=\,4i\eta\wedge\Omega$. Then the volume form is $\text{vol}_{SE_7}\,=\,\eta\wedge{J}^3/3!\,=\,(i/8)\eta\wedge\Omega\wedge\Omega^*$.
The metric employed for the consistent truncation on general Sasaki-Einstein manifolds is given by, \cite{Gauntlett:2009bh},
\begin{equation}
\frac{1}{(2L)^2}ds^2\,=\,e^{-6U-V}ds_4^2+e^{2U}ds^2_{KE_6}+e^{2V}\left(\eta+A_1\right)\otimes\left(\eta+A_1\right)\,,
\end{equation}
and the four-form flux is
\begin{align}
\frac{1}{(2L)^3}G_4\,=\,6&e^{-18U-3V}\left(\epsilon+h^2+|\chi|^2\right)vol_4+H_3\wedge\left(\eta+A_1\right)+H_2\wedge{J} \notag \\
&+dh\wedge{J}\wedge\left(\eta+A_1\right)+2hJ\wedge{J} \notag \\
&+\sqrt{3}\left[\chi\left(\eta+A_1\right)\wedge\Omega-\frac{i}{4}D\chi\wedge\Omega+c.c.\right]\,,
\end{align}
where $\epsilon\,=\,\pm{1}$, $D\chi\,=\,d\chi-4iA_1\chi$ and $L$ is an overall scale parameter. $U$, $V$, $h$ are real scalar fields and $\chi$ is a complex scalar field in four dimensions. In four dimensions there are also one- and two-form fields, $A_1$, $B_1$ and $B_2$, with field strengths,
\begin{align}
F_2\,=&\,dA_1\,, \notag \\
H_3\,=&\,dB_2\,, \notag \\
H_2\,=&\,dB_1+2B_2+hF_2\,.
\end{align}
There are previously known $AdS_4\,\times\,SE_7$ solutions.{\footnote{In the consistent truncation to four-dimensional gauged supergravity, \cite{Gauntlett:2009zw, Gauntlett:2009bh}, the solutions we consider are fixed points of the scalar potential,
\begin{equation}
\mathcal{P}\,=\,48e^{-8U-V}-6e^{-10U+V}-24h^2e^{-14U-V}-18\left(\epsilon+h^2+|\chi|^2\right)^2e^{-18U-3V}-24e^{-12U-3V}|\chi|^2\,.
\end{equation}}} The supersymmetric Freund-Rubin solution, \cite{Freund:1980xh}, is
\begin{equation} \label{frs}
\epsilon\,=\,+1\,, \qquad U\,=\,0\,, \qquad V\,=\,0\,, \qquad \chi\,=\,0\,, \qquad h\,=\,0\,, \qquad R^2_{AdS_4}\,=\,\frac{1}{4}\,,
\end{equation}
and is explicitly given by
\begin{align}
\frac{1}{(2L)^2}ds^2\,&=\,\frac{1}{4}ds^2_{AdS_4}+ds^2_{SE_7}\,, \notag \\
\frac{1}{(2L)^3}G_4\,&=\,\epsilon\frac{3}{8}vol_{AdS_4}\,.
\end{align}
Flipping the sign of the four-form flux by choosing $\epsilon\,=\,-1$, we obtain the skew-whiffed Freund-Rubin solution which breaks all the supersymmetry.
The Pope-Warner solution, \cite{Pope:1984bd, Pope:1984jj}, is
\begin{equation} \label{pws}
\epsilon\,=\,-1\,, \qquad e^U\,=\,2^{-1/6}\,, \qquad e^V\,=\,2^{1/3}\,, \qquad \chi^2\,=\,2/3\,, \qquad h\,=\,0\,, \qquad R^2_{AdS_4}\,=\,\frac{3}{16}\,,
\end{equation}
and is explicitly given by
\begin{align}
\frac{1}{(2L)^2}ds^2\,&=\,2^{2/3}\left[\frac{3}{16}ds^2_{AdS_4}+\frac{1}{2}ds^2_{KE_6}+\eta\otimes\eta\right]\,, \notag \\
\frac{1}{(2L)^3}G_4\,&=\,2\left[-\frac{9}{64}vol_{AdS_4}+\frac{1}{\sqrt{2}}\left(\eta\wedge\Omega+c.c.\right)\right]\,.
\end{align}
This solution breaks all the supersymmetry.
The Englert solution, \cite{Englert:1982vs}, is
\begin{equation} \label{es}
\epsilon\,=\,-1\,, \,\,\,\, e^U\,=\,(4/5)^{1/6}\,, \,\,\,\, e^V\,=\,(4/5)^{1/6}\,, \,\,\,\, \chi^2\,=\,4/15\,, \,\,\,\, h^2\,=\,1/5\,, \,\,\, R^2_{AdS_4}\,=\,\frac{12}{25\sqrt{5}}\,,
\end{equation}
and is explicitly given by
\begin{align}
\frac{1}{(2L)^2}ds^2\,&=\,\left(\frac{4}{5}\right)^{1/3}\left[\frac{3}{10}ds^2_{AdS_4}+ds^2_{KE_6}+\eta\otimes\eta\right]\,, \notag \\
\frac{1}{(2L)^3}G_4\,&=\,\left(\frac{4}{5}\right)^{1/2}\left[-\frac{9}{25}vol_{AdS_4}+J\wedge{J}+\left(\eta\wedge\Omega+c.c.\right)\right]\,.
\end{align}
This solution also breaks all the supersymmetry.
\subsection{M2-brane probe}
At the $AdS_4$ fixed points, we have
\begin{equation}
ds^2_4\,=\,e^{2A}\left(-dx_0^2+dx_1^2+dx_2^2\right)+dr^2\,,
\end{equation}
where
\begin{equation}
A\,=\,\frac{r}{l}\,,
\end{equation}
and $l$ is the radius of $AdS_4$.
We obtain that the three-form potential is
\begin{equation}
\frac{1}{(2L)^3}A_3\,=\,\frac{l}{3}e^{3A}6e^{-18U-3V}\left(\epsilon+h^2+|\chi|^2\right)dx_0\wedge{d}x_1\wedge{d}x_2+\ldots\,.
\end{equation}
We partition the spacetime coordinates,
\begin{equation}
x^a\,=\,\{x_0,x_1,x_2\}\,, \qquad y^m\,=\,\{r,\ldots\}\,,
\end{equation}
and choose the static gauge,
\begin{equation}
x_0\,=\,t\,=\,\xi^0\,,\qquad x^a\,=\,\xi^a\,, \qquad y^m\,=\,y^m(t)\,,
\end{equation}
where $\xi^a$ are the worldvolume coordinates. The pull-back of the metric is
\begin{equation}
\tilde{G}_{ab}\,=\,G_{\mu\nu}\frac{\partial{x}^\mu}{\partial\xi^a}\frac{\partial{x}^\nu}{\partial\xi^b}\,.
\end{equation}
Now we study the worldvolume action of the M2-branes which is given by a sum of DBI and WZ terms. If the probe branes move slowly, the worldvolume action is
\begin{align}
S\,=&\,-\int{d}^3\xi\sqrt{-\text{det}(\tilde{G})}+\int\tilde{A}_3 \notag \\
=&\,-\left(2L\right)^3\int{d}^3\xi\left(e^{-9U-3V/2+3A}-\frac{1}{2}e^{-3U-V/2+A}G_{mn}\dot{y}^m\dot{y}^n+\ldots\right) \notag \\
&\,+\left(2L\right)^3\int\frac{l}{3}{e}^{3A}6e^{-18U-3V}\left(\epsilon+h^2+|\chi|^2\right)\,{d}x_0\wedge{d}x_1\wedge{d}x_2\,,
\end{align}
where $\tilde{A}_3$ is the pull-back of the three-form potential. Then the worldvolume action reduces to
\begin{equation}
S\,=\,\left(2L\right)^3\int{d}^3\xi\left(K-\mathcal{V}\right)\,,
\end{equation}
where the kinetic and the potential terms are
\begin{align}
K\,=&\,\frac{1}{2}e^{-3U-V/2+A}G_{mn}\dot{y}^m\dot{y}^n+\ldots\,, \notag \\
\mathcal{V}\,=&\,e^{3A}\left(e^{-9U-3V/2}-\frac{l}{3}6e^{-18U-3V}\left(\epsilon+h^2+|\chi|^2\right)\right)\,.
\end{align}
For the $AdS_4$ solutions of the supersymmetric Freund-Rubin, the skew-whiffed Freund-Rubin, the Pope-Warner, and the Englert solutions in \eqref{frs}, \eqref{pws}, and \eqref{es}, respectively, we obtain
\begin{align}
e^{-3A}\mathcal{V}|_{\text{SUSY}}\,=&\,0\,, \notag \\
e^{-3A}\mathcal{V}|_{\text{skew-whiffed}}\,=&\,2\,, \notag \\
e^{-3A}\mathcal{V}|_{\text{Pope-Warner}}\,=&\,2+\frac{2}{\sqrt{3}}\,, \notag \\
e^{-3A}\mathcal{V}|_{\text{Englert}}\,=&\,\frac{5^{5/4}}{4\sqrt{3}}+\frac{5^{7/4}}{8\sqrt{2}}\,.
\end{align}
All M2-brane probe potentials obtained here for the non-supersymmetric solutions are positive. Thus the force acting on the probe M2-brane, $d\mathcal{V}/dr$, is positive, $i.e.$, attractive. We conclude that all solutions we consider are brane-jet stable. However, the Pope-Warner and the Englert solutions are known to be BF unstable. On the other hand, the skew-whiffed solutions are both BF and brane-jet stable.
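The quoted values follow directly from evaluating $e^{-3A}\mathcal{V}\,=\,e^{-9U-3V/2}-2l\,e^{-18U-3V}\left(\epsilon+h^2+|\chi|^2\right)$, with $l\,=\,R_{AdS_4}$, at the data of \eqref{frs}, \eqref{pws} and \eqref{es}. A short numerical check:

```python
import math

# Evaluate e^{-3A} V = e^{-9U - 3V/2} - 2 l e^{-18U - 3V} (eps + h^2 + |chi|^2)
# at the four AdS4 x SE7 solutions, with l = R_AdS4.

def probe_potential(eps, eU, eV, chi2, h2, R2):
    l = math.sqrt(R2)
    return eU**-9 * eV**-1.5 - 2.0 * l * eU**-18 * eV**-3 * (eps + h2 + chi2)

cases = {
    "Freund-Rubin": probe_potential(+1, 1.0, 1.0, 0.0, 0.0, 1.0 / 4.0),
    "skew-whiffed": probe_potential(-1, 1.0, 1.0, 0.0, 0.0, 1.0 / 4.0),
    "Pope-Warner":  probe_potential(-1, 2**(-1/6), 2**(1/3), 2/3, 0.0, 3.0 / 16.0),
    "Englert":      probe_potential(-1, (4/5)**(1/6), (4/5)**(1/6), 4/15, 1/5,
                                    12.0 / (25.0 * math.sqrt(5.0))),
}
for name, value in cases.items():
    print(name, value)
```

The output reproduces $0$, $2$, $2+2/\sqrt{3}$ and $5^{5/4}/(4\sqrt{3})+5^{7/4}/(8\sqrt{2})$, respectively.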
\section{$AdS_4$ vacua from particular Sasaki-Einstein manifolds}
\subsection{Non-supersymmetric $AdS_4$ solutions on $Q^{1,1,1}$ and $M^{1,1,1}$}
In this section, we consider the non-supersymmetric $AdS_4$ solutions found from the consistent truncation of eleven-dimensional supergravity on seven-dimensional homogeneous Sasaki-Einstein manifolds, specifically on $Q^{1,1,1}$ and $M^{1,1,1}$, \cite{Cassani:2012pj}.
We review gauged $\mathcal{N}\,=\,2$ supergravity in four dimensions from the consistent truncation of eleven-dimensional supergravity on $Q^{1,1,1}$ manifolds, \cite{Cassani:2012pj}. The truncation on $M^{1,1,1}$ manifolds is obtained from the truncation on $Q^{1,1,1}$ by identifying $t^3\,=\,t^1$. The field content consists of 1 gravity multiplet, $\{g_{\mu\nu}, A^0_\mu\}$, 3 vector multiplets, $\{A^i_\mu, t^i\}$, and 1 hypermultiplet, $\{\phi, a, \xi^0, \tilde{\xi}_0\}$, where $i\,=\,1,2,3$. There are 4 real scalar fields, $\phi, a, \xi^0, \tilde{\xi}_0$, where $\phi$ and $a$ are the dilaton and axion fields in four dimensions. There are 3 complex scalar fields, $t^i$, for which we also employ the parametrizations,
\begin{equation}
t^i\,=\,b^i+iv^i\,,
\end{equation}
and
\begin{equation}
v^i\,=\,e^{2u_i}\,.
\end{equation}
The scalar fields from the vector multiplets and the hypermultiplet parametrize the coset manifolds,
\begin{equation}
\mathcal{M}_v\times\mathcal{M}_h\,=\,\left(\frac{SU(1,1)}{U(1)}\right)^3\,\times\,\frac{SU(2,1)}{S(U(2)\,\times\,U(1))}\,,
\end{equation}
which is a product of special K\"ahler and quaternionic manifolds, respectively. The metrics of the special K\"ahler and quaternionic manifolds are, respectively, given by
\begin{equation}
ds^2\,=\,\sum_{i=1}^3\left[\left(du_i\right)^2+\frac{1}{4}e^{-2u_i}\left(db^i\right)^2\right]\,,
\end{equation}
and
\begin{equation}
h_{uv}dq^udq^v\,=\,\left(d\phi\right)^2+\frac{1}{4}e^{4\phi}\left[da+\frac{1}{2}\left(\xi^0d\tilde{\xi}_0-\tilde{\xi}_0d\xi^0\right)\right]^2+\frac{1}{4}e^{2\phi}\left(d\xi^0\right)^2+\frac{1}{4}e^{2\phi}(d\tilde{\xi}_0)^2\,.
\end{equation}
The scalar potential is given by
\begin{align}
\mathcal{P}\,=\,&-8e^{2\phi}\left(e^{-2u_1}+e^{-2u_2}+e^{-2u_3}\right)+e^{4\phi}\left(e^{-2u_1+2u_2+2u_3}+e^{2u_1-2u_2+2u_3}+e^{2u_1+2u_2-2u_3}\right) \notag \\
&+e^{4\phi-2u_1-2u_2-2u_3}\left[e^{4u_1}\left(b^2+b^3\right)^2+e^{4u_2}\left(b^1+b^3\right)^2+e^{4u_3}\left(b^1+b^2\right)^2\right] \notag \\
&+\frac{1}{4}e^{4\phi-2u_1-2u_2-2u_3}\left[e_0+2b^1b^2+2b^1b^3+2b^2b^3+2\left(\xi^0\right)^2+2(\tilde{\xi}_0)^2\right]^2 \notag \\
&+4e^{2\phi-2u_1-2u_2-2u_3}\left(\left(\xi^0\right)^2+(\tilde{\xi}_0)^2\right)\,.
\end{align}
There is a supersymmetric fixed point of the scalar potential which corresponds to the $AdS_4\,\times\,Q^{1,1,1}$ solution of eleven-dimensional supergravity, $e.g.$, in (3.16) of \cite{Halmagyi:2013sla},
\begin{align} \label{susyfp}
&v^i\,=\,\sqrt{\frac{e_0}{6}}\,, \qquad e^{-2\phi}\,=\,\frac{e_0}{6}\,,\qquad R_{AdS_4}\,=\,\frac{1}{2}\left(\frac{e_0}{6}\right)^{3/4}\,, \notag \\
&e^{3V}\,=\,\sqrt{\frac{e_0}{6}}\,, \qquad b^i\,=\,0\,, \qquad \xi^0\,=\,\tilde{\xi}_0\,=\,0\,.
\end{align}
There is also a non-supersymmetric fixed point first found in (5.10) of \cite{Cassani:2012pj},
\begin{align} \label{nonsusyfp}
&e^{2U_1}\,=\,\left(\frac{9}{5}\right)^{1/3}a\,, \qquad e^{2U_2}\,=\,\left(\frac{9}{5}\right)^{1/3}a\,, \qquad e^{2U_3}\,=\,\left(\frac{9}{5}\right)^{-2/3}a\,, \qquad e^{2V}\,=\,\frac{2}{7}15^{2/3}a\,, \notag \\
&b^1\,=\,b^2\,=\,b^3\,=\,0\,, \qquad \left(\xi^0\right)^2+(\tilde{\xi}_0)^2\,=\,\frac{172}{49}a^3\,, \qquad e_0\,=\,-\frac{540}{49}a^3\,,
\end{align}
where $a>0$ is a free parameter. This point is BF-stable within the truncation.
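As a consistency check, both \eqref{susyfp} and \eqref{nonsusyfp} can be verified numerically to extremize the scalar potential. Below is a minimal Python sketch (not part of the original analysis), using the warp-factor relations $u_i\,=\,U_i+\frac{1}{2}V$ and $\phi\,=\,-U_1-U_2-U_3$ quoted with the uplift formulas below; note that the hyperscalar term carries the $e^{2\phi}$ weight, which is what makes \eqref{nonsusyfp} a critical point at generic $a$:

```python
import math

def potential(f, e0):
    """Scalar potential of the Q^{1,1,1} truncation; f = (u1,u2,u3,phi,b1,b2,b3,xi0,txi0)."""
    u1, u2, u3, phi, b1, b2, b3, xi0, txi0 = f
    su = u1 + u2 + u3
    xi2 = xi0**2 + txi0**2
    P = -8*math.exp(2*phi)*(math.exp(-2*u1) + math.exp(-2*u2) + math.exp(-2*u3))
    P += math.exp(4*phi)*(math.exp(-2*u1+2*u2+2*u3) + math.exp(2*u1-2*u2+2*u3)
                          + math.exp(2*u1+2*u2-2*u3))
    P += math.exp(4*phi-2*su)*(math.exp(4*u1)*(b2+b3)**2 + math.exp(4*u2)*(b1+b3)**2
                               + math.exp(4*u3)*(b1+b2)**2)
    P += 0.25*math.exp(4*phi-2*su)*(e0 + 2*b1*b2 + 2*b1*b3 + 2*b2*b3 + 2*xi2)**2
    P += 4*math.exp(2*phi-2*su)*xi2          # hyperscalar term, e^{2 phi} weight
    return P

def grad_max(f, e0, h=1e-6):
    """Largest component of the central-difference gradient of the potential."""
    g = []
    for i in range(len(f)):
        fp, fm = list(f), list(f)
        fp[i] += h
        fm[i] -= h
        g.append((potential(fp, e0) - potential(fm, e0))/(2*h))
    return max(abs(x) for x in g)

def nonsusy_point(a):
    """Field values and e0 of the non-supersymmetric fixed point for a > 0."""
    U1 = 0.5*math.log((9/5)**(1/3)*a)
    U3 = 0.5*math.log((9/5)**(-2/3)*a)
    V = 0.5*math.log((2/7)*15**(2/3)*a)
    u1, u3 = U1 + V/2, U3 + V/2
    phi = -(2*U1 + U3)                       # vanishes for a = 1
    xi0 = math.sqrt(172/49*a**3)             # put all of xi^2 into xi0
    return [u1, u1, u3, phi, 0., 0., 0., xi0, 0.], -540/49*a**3

def susy_point(e0):
    """Field values of the supersymmetric fixed point, v^i = sqrt(e0/6)."""
    u = 0.25*math.log(e0/6)
    phi = -0.5*math.log(e0/6)
    return [u, u, u, phi, 0., 0., 0., 0., 0.], e0
```

The gradient vanishes to numerical precision for any $a>0$, confirming that \eqref{nonsusyfp} is a genuine extremum of the potential.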
The supersymmetric and non-supersymmetric fixed points of $AdS_4\,\times\,M^{1,1,1}$ are obtained from \eqref{susyfp} and \eqref{nonsusyfp} as particular cases with $t^3\,=\,t^1$ and $U_2\,=\,U_1$.
Now we present the uplift formula to eleven-dimensional supergravity. The metric is given by
\begin{equation}
ds^2\,=\,e^{2V}\mathcal{K}^{-1}ds_4^2+e^{-V}ds^2(B_6)+e^{2V}\left(\theta+A^0\right)^2\,,
\end{equation}
with the six-dimensional base space of
\begin{equation}
e^{-V}ds^2(B_6)\,=\,\frac{1}{8}e^{2U_1}ds^2_{\mathbb{CP}^1}+\frac{1}{8}e^{2U_2}ds^2_{\mathbb{CP}^1}+\frac{1}{8}e^{2U_3}ds^2_{S^2}\,.
\end{equation}
The warp factors are
\begin{equation}
u_1\,=\,U_1+\frac{1}{2}V\,, \qquad u_2\,=\,U_2+\frac{1}{2}V\,, \qquad u_3\,=\,U_3+\frac{1}{2}V\,, \qquad \phi\,=\,-U_1-U_2-U_3\,.
\end{equation}
The K\"ahler potential is
\begin{equation}
\mathcal{K}\,=\,\frac{1}{6}\mathcal{K}_{ijk}v^iv^jv^k\,,
\end{equation}
and for $Q^{1,1,1}$ solutions,
\begin{equation}
\mathcal{K}_{123}\,=\,1\,.
\end{equation}
There is a relation which we employ later,
\begin{equation} \label{phivk}
e^{2\phi}\,=\,e^{3V}\mathcal{K}^{-1}\,.
\end{equation}
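This identity follows immediately from the warp-factor relations above: $e^{2\phi}\,=\,e^{-2(U_1+U_2+U_3)}$, while $\mathcal{K}\,=\,v^1v^2v^3\,=\,e^{2(U_1+U_2+U_3)+3V}$. A short numerical sketch (the sample field values are arbitrary):

```python
import math

def both_sides(U1, U2, U3, V):
    """Return (e^{2 phi}, e^{3V}/K) using u_i = U_i + V/2 and phi = -(U1+U2+U3)."""
    u = [U1 + V/2, U2 + V/2, U3 + V/2]
    phi = -(U1 + U2 + U3)
    K = math.exp(2*sum(u))        # K = v1 v2 v3 since K_123 = 1
    return math.exp(2*phi), math.exp(3*V)/K
```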
The four-form flux is given by
\begin{align}
G_4\,=\,dA_3+G_4^{\text{flux}}=&\,H_4+dB\wedge\left(\theta+A^0\right)+H_2^i\wedge\omega_i+Db^i\wedge\omega_i\wedge\left(\theta+A^0\right) \notag \\
+&D\xi^A\wedge\alpha_A-D\tilde{\xi}_A\wedge\beta^A+\chi_i\tilde{\omega}^i \notag \\
+&\left[\left(b^IQ_I+\mathbb{U}\xi\right)^A\alpha_A-\left(b^IQ_I+\mathbb{U}\xi\right)_A\beta^A\right]\wedge\left(\theta+A^0\right)\,,
\end{align}
where
\begin{equation}
H_4\,=\,\mathcal{K}^{-1}e^{4\phi}\left(b^I\mathcal{E}_I+\frac{1}{2}\mathcal{K}_{ijk}m^ib^jb^k\right)*1\,,
\end{equation}
and
\begin{equation}
\mathcal{E}_I\,=\,e_I+Q_I^T\mathbb{C}\xi-\frac{1}{2}\delta^0_I\xi^T\mathbb{CU}\xi\,.
\end{equation}
On $SE_7$ there exists a set of real differential forms: a one-form, $\theta$, $n_V$ two-forms, $\omega_i$, $2n_H$ three-forms, $\alpha_A$, $\beta^A$, $n_V$ four-forms, $\tilde{\omega}^i$, and a six-form, $\tilde{\omega}^0$, where $n_V$ and $n_H$ are the numbers of vector and hypermultiplets. Four-form fluxes, ($p^A$, $q_A$), and geometric fluxes, ($m_i\,^A$, $e_{iA}$) and ($v^A\,_B$, $t^{AB}$, $s_{AB}$, $u_A\,^B$), define the matrices
\begin{equation}
Q_I\,=\,\left(
\begin{array}{ll}
p^A & m_i\,^A \\
q_A & e_{iA}
\end{array}
\right)\,, \qquad
\mathbb{U}\,=\,\left(
\begin{array}{ll}
v^A\,_B & t^{AB} \\
s_{AB} & u_A\,^B
\end{array}
\right)\,,
\end{equation}
where we have
\begin{equation}
v^A\,_B\,=\,-u_B\,^A\,.
\end{equation}
The $Sp(2n_H,\mathbb{R})$ matrix is defined to be
\begin{equation}
\mathbb{C}\,=\,\left(
\begin{array}{ll}
\,\,\,\,\,\, 0 & \delta_A\,^B \\
-\delta^A\,_B & \,\,\,\, 0
\end{array}
\right)\,,
\end{equation}
and the scalar fields from hypermultiplets parametrize
\begin{equation}
\xi\,=\,\left(
\begin{array}{ll}
\xi^A \\
\tilde{\xi}_A
\end{array}
\right)\,.
\end{equation}
Some parameters are introduced to be
\begin{equation}
e_I\,=\,\left(e_0, e_i\right)\,, \qquad m^I\,=\,\left(0, m^i\right)\,, \qquad b^I\,=\,\left(1, b^i\right)\,,
\end{equation}
with a choice of
\begin{equation}
e_i\,=\,0\,.
\end{equation}
The constant, $e_0$, arises from the dualization of the three-form potential on the four-dimensional external spacetime.
When the Betti number, $b_3\,=\,0$, which is the case we consider, we have the four-form and geometric fluxes to be
\begin{equation}
e_i\,=\,0\,, \qquad p^A\,=\,0\,, \qquad q_A\,=\,0\,.
\end{equation}
For $Q^{1,1,1}$ and $M^{1,1,1}$ solutions, we have the geometric fluxes of
\begin{equation}
e_{iA}\,=\,0\,, \qquad m_i\,^A\,=\,0\,,
\end{equation}
and
\begin{equation}
u_A\,^B\,=\,0\,, \qquad s_{00}\,=\,-4\,, \qquad t^{00}\,=\,4\,.
\end{equation}
Also for $Q^{1,1,1}$ and $M^{1,1,1}$ solutions, we have
\begin{equation}
m^1\,=\,m^2\,=\,m^3\,=\,2\,.
\end{equation}
\subsection{M2-brane probe}
At the $AdS_4$ fixed points, we have
\begin{equation}
ds^2_4\,=\,e^{2A}\left(-dx_0^2+dx_1^2+dx_2^2\right)+dr^2\,,
\end{equation}
where
\begin{equation}
A\,=\,\frac{r}{l}\,,
\end{equation}
and $l$ is the radius of $AdS_4$.
The relevant part of four-form flux is
\begin{equation}
G_4\,=\,\mathcal{K}^{-1}e^{4\phi}\left(e_0+2\left((\xi^0)^2+(\tilde{\xi}_0)^2\right)+2b^1b^2+2b^2b^3+2b^3b^1\right)vol_4\,.
\end{equation}
Thus we obtain that the three-form potential is
\begin{equation}
A_3\,=\,\frac{l}{3}e^{3A}\mathcal{K}^{-1}e^{4\phi}\left(e_0+2\left((\xi^0)^2+(\tilde{\xi}_0)^2\right)+2b^1b^2+2b^2b^3+2b^3b^1\right)dx_0\wedge{d}x_1\wedge{d}x_2\,.
\end{equation}
We partition the spacetime coordinates,
\begin{equation}
x^a\,=\,\{x_0,x_1,x_2\}\,, \qquad y^m\,=\,\{r,\ldots\}\,,
\end{equation}
and choose the static gauge,
\begin{equation}
x_0\,=\,t\,=\,\eta^0\,,\qquad x^a\,=\,\eta^a\,, \qquad y^m\,=\,y^m(t)\,,
\end{equation}
where $\eta^a$ are the worldvolume coordinates. The pull-back of the metric is
\begin{equation}
\tilde{G}_{ab}\,=\,G_{\mu\nu}\frac{\partial{x}^\mu}{\partial\eta^a}\frac{\partial{x}^\nu}{\partial\eta^b}\,.
\end{equation}
Now we study the worldvolume action of the M2-branes which is given by a sum of DBI and WZ terms. If the probe branes move slowly, the worldvolume action is
\begin{align}
S\,=&\,-\int{d}^3\eta\sqrt{-\text{det}(\tilde{G})}+\int\tilde{A}_3 \notag \\
=&\,-\int{d}^3\eta\left(e^{3V+3A}\mathcal{K}^{-3/2}-\frac{1}{2}e^{V+A}\mathcal{K}^{-1/2}G_{mn}\dot{y}^m\dot{y}^n+\ldots\right) \notag \\
&\,+\int\frac{l}{3}{e}^{3A}\mathcal{K}^{-1}e^{4\phi}\left(e_0+2\left((\xi^0)^2+(\tilde{\xi}_0)^2\right)+2b^1b^2+2b^2b^3+2b^3b^1\right)\,{d}x_0\wedge{d}x_1\wedge{d}x_2\,,
\end{align}
where $\tilde{A}_3$ is the pull-back of the three-form potential. Then the worldvolume action reduces to
\begin{equation}
S\,=\,\int{d}^3\eta\left(K-\mathcal{V}\right)\,,
\end{equation}
where the kinetic and the potential terms are
\begin{align}
K\,=&\,\frac{1}{2}e^{V+A}\mathcal{K}^{-1/2}G_{mn}\dot{y}^m\dot{y}^n+\ldots\,, \notag \\
\mathcal{V}\,=&\,e^{3A}\left(e^{3V}\mathcal{K}^{-3/2}-\frac{l}{3}e^{6V}\mathcal{K}^{-3}\left(e_0+2\left((\xi^0)^2+(\tilde{\xi}_0)^2\right)+2b^1b^2+2b^2b^3+2b^3b^1\right)\right)\,,
\end{align}
where we employed the relation in \eqref{phivk}. For the supersymmetric fixed point in \eqref{susyfp}, we obtain
\begin{equation}
e^{-3A}\mathcal{V}|_{\text{SUSY}}\,=\,0\,.
\end{equation}
For the non-supersymmetric fixed point in \eqref{nonsusyfp}, we obtain
\begin{equation}
e^{-3A}\mathcal{V}|_{\text{non-SUSY}}\,=\,\frac{1}{\sqrt{15}\,a^{21/4}}\left(\frac{7}{2}\right)^{3/4}+\frac{7\sqrt{14}\,l}{45\,a^{15/2}}\,,
\end{equation}
where $a>0$ is the free parameter and $l$ is the radius of $AdS_4$. The probe potential vanishes at the supersymmetric point and is positive and increasing with $r$ at the non-supersymmetric one, so the force acting on the probe M2-brane is never repulsive. We conclude that both the supersymmetric and non-supersymmetric vacua are brane-jet stable. They are also known to be BF stable within the truncation of \cite{Cassani:2012pj}.
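The non-supersymmetric value can be cross-checked by evaluating the two contributions to $\mathcal{V}$ directly from the fixed-point data \eqref{nonsusyfp}; a minimal Python sketch, where the split $e^{-3A}\mathcal{V}\,=\,c_1+c_2\,l$ is simply bookkeeping of the $l$-independent and $l$-linear pieces:

```python
import math

def probe_coeffs(a):
    """Return (c1, c2) with e^{-3A} V_probe = c1 + c2*l at the non-SUSY point."""
    W = (2/7)*15**(2/3)                       # e^{2V} at a = 1
    v1 = (9/5)**(1/3)*a*math.sqrt(W*a)        # v_i = e^{2U_i} e^{V}
    v3 = (9/5)**(-2/3)*a*math.sqrt(W*a)
    K = v1*v1*v3                              # K = v1 v2 v3 (K_123 = 1)
    e3V = (W*a)**1.5
    bracket = -540/49*a**3 + 2*(172/49)*a**3  # e0 + 2 xi^2, with b^i = 0
    c1 = e3V*K**(-1.5)                        # DBI contribution
    c2 = -(1/3)*e3V**2*K**(-3)*bracket        # WZ contribution per unit l
    return c1, c2
```

At $a=1$ this recovers $c_1=(7/2)^{3/4}/\sqrt{15}$ and $c_2=7\sqrt{14}/45$, with the expected scalings $c_1\propto a^{-21/4}$ and $c_2\propto a^{-15/2}$.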
So far we have considered the solutions of $AdS_4\,\times\,Q^{1,1,1}$. The analysis for the $AdS_4\,\times\,M^{1,1,1}$ solutions goes parallel by identifying $t^3\,=\,t^1$ and $U_2\,=\,U_1$. Therefore, the vacua from $M^{1,1,1}$ are also BF and brane-jet stable.
\section{Conclusions}
In this paper, we have examined the brane-jet instabilities of diverse non-supersymmetric $AdS$ solutions: the Janus solutions of type IIB supergravity and the $AdS_4$ vacua from eleven-dimensional supergravity on Sasaki-Einstein manifolds. We showed that all $AdS$ vacua we considered are brane-jet stable. Thus, among those solutions, the Janus, the skew-whiffed Freund-Rubin, and the $AdS_4$ vacua from $Q^{1,1,1}$ and $M^{1,1,1}$ manifolds are both perturbatively and brane-jet stable. They are candidates for counter-examples to the swampland conjecture on the instability of non-supersymmetric $AdS$ vacua.
In the analysis we have extended the application of brane-jets to the $AdS$ vacua from curved domain walls. Unlike the flat domain walls where the worldvolume of probe brane is on Minkowski spacetimes, the worldvolume is on the curved spacetimes for curved domain walls. Depending on the geometry of domain walls, identical solutions display different behaviours of brane-jets. For the Janus solutions we considered, the worldvolume was on $AdS_4$ instead of Mink$_4$. This suggests that we could apply the brane-jet analysis to various $AdS$ vacua from more general geometries.
Unlike the usual solutions studied for brane-jets, the solutions from Sasaki-Einstein manifolds are not warped but direct products of $AdS_4$ and internal manifolds. Thus the probe brane potentials were independent of the internal coordinates and were constant. There are similar examples of direct product solutions studied previously for brane-jets: the $SO(7)^-$ solution of eleven-dimensional supergravity was proven to be brane-jet stable in \cite{Bena:2020xxb} and the $G_2$ and $SO(7)$ solutions of massive type IIA supergravity were brane-jet stable and unstable, respectively, in \cite{Guarino:2020jwv}.
If the $AdS$ vacua we have found to be both perturbatively and brane-jet stable are truly stable, it would be most interesting to establish the precise AdS/CFT correspondence, \cite{Maldacena:1997re}, in the non-supersymmetric setting. For the skew-whiffed Freund-Rubin solutions, possible dual field theories were studied in \cite{Berkooz:1998qp, Forcella:2011pp}. However, there is always a possibility of new instabilities we have not yet discovered. Some potential instabilities from global singlet marginal operators and tunneling into bubble of nothing were discussed in \cite{Berkooz:1998qp, Murugan:2016aty}.
\bigskip
\leftline{\bf Acknowledgements}
\noindent We would like to thank Krzysztof Pilch and Oscar Varela for reading a draft of the manuscript. This research was supported by the National Research Foundation of Korea under the grant NRF-2019R1I1A1A01060811.
\section{Introduction}
Although most neutron stars are endowed with typical magnetic fields of order $10^{12}$~G, the subclass of magnetars~\cite{thompson1992} - including anomalous X-ray pulsars and soft gamma-ray repeaters (SGR) - exhibit much higher fields, up to a few times $10^{15}$~G at their surface~\cite{turolla2015,kaspi2017}. Potentially even more extreme magnetic fields could be sustained in their interior, as shown by numerical simulations~\cite{uryu2019}.
Giant flares, such as those observed in SGR 1806$-$20, are among the most spectacular astrophysical manifestations of the magnetic activity, whereby sudden changes in the magnetic field configuration are accompanied by starquakes, as suggested by the detection of quasiperiodic oscillations (see, e.g. Ref.~\cite{glampedakis2018} for a recent review). The frequencies of the various modes depend on the internal constitution of these stars. However, the identification of these modes remains challenging due to uncertainties on the stellar structure, in particular on the properties of the crust~\cite{nandi2016,tews2017,sotani2018}. Some parts of the crust may actually be ejected during such events~\cite{gelfand2007}. The subsequent decompression of this neutron-rich material provides suitable conditions for the rapid neutron-capture process (the so-called $r$-process) at the origin of stable and some long-lived radioactive neutron-rich nuclides heavier than iron~\cite{arnould2007}. The final nuclear abundances of the processed stellar material depend on the initial composition of magnetar crusts. The crustal properties are also important for the long-term evolution of the magnetic field and the cooling of the star~\cite{mereghetti2015,potekhin2015,pons2019}.
The internal structure of a neutron star can be significantly altered by the presence of a high
magnetic field, especially in the crust region (see, e.g., Ref.~\cite{blaschke2018} for a recent review). The composition of the outer crust of a magnetar has been traditionally determined following the study of Ref.~\cite{lai91b} by minimizing the Gibbs free energy per nucleon $g$ at zero temperature and for a finite set of pressure values (see, e.g. Refs.~\cite{nandi2011,chapav2012,basilico2015,mutafchieva2019}). The pressure step must be small enough to find the complete stratification, especially in the deepest region of the outer crust where even a thin layer can contain the most abundant nuclear species. Systematic calculations over a wide range of magnetic-field strengths, as required for the modelling of magnetars, can thus become computationally very expensive.
In this paper, the computationally very fast approach recently proposed to calculate
the structure of the outer crust of unmagnetized neutron stars~\cite{chamel2020,chamel2020b} is extended to take into account the presence of a strongly quantizing magnetic field. New analytical solutions for the transition pressure between adjacent crustal layers are presented in Sect.~\ref{sec:transition}. The analytical approximations for the nuclear abundances and the depth of the different layers that were previously discussed in Ref.~\cite{chamel2020} are suitably generalized to magnetars in Sect.~\ref{sec:structure}. The numerical implementation of all these formulas is discussed in Sect.~\ref{sec:implementation}, where numerical tests of their precision are also presented.
\section{Transition between adjacent crustal layers}
\label{sec:transition}
In the following, we shall consider the crustal region at densities $\rho$ above the ionization threshold and below the neutron-drip point. As in Ref.~\cite{chamel2020}, we assume that each crustal layer is made of a single nuclear species ($A$, $Z$) with mass number $A$ and atomic number $Z$ in thermodynamic equilibrium at temperature $T$ below the crystallization temperature $T_m$ (for all practical purposes, we shall set $T=0$~K).
The equilibrium composition at pressure $P$ is determined by the minimization of the Gibbs free energy per nucleon given by (see, e.g. Ref.~\cite{chapav2012})
\begin{equation}
\label{eq:gibbs}
g(A,Z,n_e) = \frac{M^\prime(A,Z)c^2}{A} + \frac{Z}{A}\bigg[\mu_e-m_e c^2+\frac{4}{3}C \alpha \hbar c n_e^{1/3} Z^{2/3} \biggr]\, ,
\end{equation}
where $M^\prime(A,Z)$ is the mass of the nucleus ($A$,$Z$) (including the rest mass of $Z$ electrons), $m_e$ is the electron mass, $\mu_e$ is the Fermi energy of a relativistic electron gas with number density $n_e$, $C$ is the crystal lattice structure constant, and $\alpha=e^2/(\hbar c)$ is the fine structure constant ($e$ being the elementary electric charge, $\hbar$ the Planck-Dirac constant and $c$ the speed of light). The pressure is expressible in terms of the electron number density $n_e$ by the relation
\begin{equation}
\label{eq:pressure}
P=P_e + \frac{1}{3}C\, \alpha \hbar c Z^{2/3} n_e^{4/3}\, ,
\end{equation}
where $P_e$ denotes the pressure of an ideal electron Fermi gas (see, e.g. Ref.~\cite{haensel2007} for general expressions). To first order in $\alpha$, the transition from a crustal layer made of nuclei ($A_1$, $Z_1$) to
a denser layer made of nuclei ($A_2$, $Z_2$) is formally determined by the same condition as in the absence of magnetic fields, and is approximately given by~\cite{chamel2017a}:
\begin{equation}\label{eq:threshold-condition}
\mu_e + C\, \alpha \hbar c n_e^{1/3}F(Z_1,A_1 ; Z_2, A_2) = \mu_e^{1\rightarrow 2}\, ,
\end{equation}
\begin{equation}\label{eq:def-F}
F(Z_1, A_1 ; Z_2, A_2)\equiv \left(\frac{4}{3}\frac{Z_1^{5/3}}{A_1} - \frac{1}{3}\frac{Z_1^{2/3} Z_2}{A_2} -\frac{Z_2^{5/3}}{A_2}\right)
\left(\frac{Z_1}{A_1}-\frac{Z_2}{A_2}\right)^{-1} \, ,
\end{equation}
\begin{equation}\label{eq:muethres}
\mu_e^{1\rightarrow 2}\equiv \biggl[\frac{M^\prime(A_2,Z_2) c^2}{A_2}-\frac{M^\prime(A_1,Z_1)c^2}{A_1}\biggr]\left(\frac{Z_1}{A_1}-\frac{Z_2}{A_2}\right)^{-1} + m_e c^2\, .
\end{equation}
The singular case $Z_1/A_1=Z_2/A_2$ need not be considered as it leads to much higher densities than any other transition (see, e.g., the discussion in Appendix A of Ref.~\cite{chamel2016}).
The baryon chemical potential $\mu_{1\rightarrow2}$ and the pressure $P_{1\rightarrow2}$ at the interface between the two layers both vary continuously and can thus be calculated from Eqs.~(\ref{eq:gibbs}) and (\ref{eq:pressure}) respectively, with $Z=Z_1$, $A=A_1$.
The transition is
generally accompanied by a discontinuous change of the mean nucleon number density:
\begin{equation}
\bar n_1^{\rm max} = \frac{A_1}{Z_1} n_e\, ,
\end{equation}
\begin{equation}
\bar n_2^{\rm min} = \frac{A_2}{Z_2} n_e \Biggl[ 1+\frac{1}{3}C \alpha \hbar c n_e^{1/3} (Z_1^{2/3}-Z_2^{2/3})\left(\frac{dP_e}{dn_e}\right)^{-1} \Biggr]\, .
\end{equation}
The bottom of the outer crust is marked by the onset of neutron emission by nuclei. This process is determined by the following equations~\cite{chamel2015b}
\begin{equation}\label{eq:n-drip-mue}
\mu_e + \frac{4}{3}C \alpha \hbar c n_e^{1/3} Z^{2/3} = \mu_e^{\rm drip}\, ,
\end{equation}
\begin{equation}\label{eq:muedrip}
\mu_e^{\rm drip}\equiv \frac{-M^\prime(A,Z)c^2+A m_n c^2}{Z} +m_e c^2 \, ,
\end{equation}
where $m_n$ is the neutron mass.
In the presence of a magnetic field, the electron motion perpendicular to the field is quantized, as first shown by Rabi~\cite{rabi1928}. The magnetic field is strongly quantizing if $B_\star\equiv B/B_{\rm rel}\gg 1$ with
\begin{equation}
\label{eq:Bcrit}
B_\textrm{rel}=\left(\frac{m_e c^2}{\alpha \lambda_e^3}\right)^{1/2}\approx 4.4\times 10^{13}\, \rm G\, ,
\end{equation}
where $\lambda_e=\hbar/(m_e c)$ is the electron Compton wavelength.
For a given magnetic field strength $B_\star$, the number of occupied levels is determined by the condition
\begin{equation}
n_e =\frac{2 B_\star}{(2 \pi)^2 \lambda_e^3} \sum_{\nu=0}^{\nu_{\rm max}} g_\nu x_e(\nu)\, ,
\end{equation}
\begin{equation}
x_e(\nu) =\sqrt{\gamma_e^2 -1-2 \nu B_\star}\, ,
\end{equation}
where $\gamma_e=\mu_e/(m_e c^2)$, $g_\nu=1$ for $\nu=0$ and $g_\nu=2$ for $\nu\geq 1$. For a given value of the Fermi energy $\mu_e$, the electron number density $n_e$ exhibits typical quantum oscillations as a function of $B_\star$.
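The level sum is straightforward to implement numerically; a minimal Python sketch (the value of $\lambda_e$ in femtometers is the standard constant, not taken from the text):

```python
import math

LAMBDA_E = 386.1592680   # electron Compton wavelength hbar/(m_e c) [fm]

def electron_density(gamma_e, B_star):
    """Electron number density [fm^-3] summed over occupied Rabi levels."""
    pref = 2*B_star/((2*math.pi)**2*LAMBDA_E**3)
    n_e, nu = 0.0, 0
    while gamma_e**2 - 1 - 2*nu*B_star > 0:
        g_nu = 1 if nu == 0 else 2           # g_0 = 1, g_nu = 2 otherwise
        n_e += pref*g_nu*math.sqrt(gamma_e**2 - 1 - 2*nu*B_star)
        nu += 1
    return n_e
```

For $B_\star\geq(\gamma_e^2-1)/2$ only the $\nu=0$ term survives, $n_e=B_\star x_e/(2\pi^2\lambda_e^3)$ with $x_e=\sqrt{\gamma_e^2-1}$, while for $B_\star\ll1$ the sum approaches the field-free result $n_e=x_e^3/(3\pi^2\lambda_e^3)$.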
For $B_\star\geq (\gamma_e^2-1)/2$, electrons are confined to the lowest Rabi level. The equilibrium condition~(\ref{eq:threshold-condition}) is amenable to analytical solutions if the electron density in the second term of the left hand side is expressed in terms of the electron Fermi energy using the ultrarelativistic approximation
\begin{equation}\label{eq:mue-strongB}
n_e \approx \frac{B_\star} {2 \pi^2 \lambda_e^3 }\gamma_e\, .
\end{equation}
Introducing
\begin{equation}
\bar F(Z_1,A_1;Z_2,A_2; B_\star)\equiv \frac{1}{3} C\alpha F(Z_1,A_1;Z_2,A_2)\left(\frac{B_\star}{2\pi^2}\right)^{1/3}\, ,
\end{equation}
Eq.~(\ref{eq:threshold-condition}) thus reduces to
\begin{equation}\label{eq:threshold-condition-strongB}
\gamma_e + 3\bar F(Z_1,A_1;Z_2,A_2; B_\star) \gamma_e^{1/3} = \gamma_e^{1\rightarrow 2}\, ,
\end{equation}
which can be expressed as a cubic polynomial equation. Introducing the dimensionless parameter
\begin{equation}
\upsilon\equiv \frac{\gamma_e^{1\rightarrow 2}}{2 |\bar F(Z_1,A_1;Z_2,A_2; B_\star)|^{3/2}}\, ,
\end{equation}
and using the known analytical expressions for the real roots of cubic equations (see, e.g., Ref.~\cite{birkhoff2010}), the solutions of Eq.~(\ref{eq:threshold-condition-strongB}) for $\gamma_e$ are given by the following formulas:
\begin{itemize}
\item $\bar F(Z_1,A_1;Z_2,A_2; B_\star)>0$
\begin{equation}
\gamma_e=8\bar F(Z_1,A_1;Z_2,A_2; B_\star)^{3/2}\, {\rm sinh}^3 \left(\frac{1}{3}{\rm arcsinh~} \upsilon\right)\, ,
\end{equation}
\item $\bar F(Z_1,A_1;Z_2,A_2; B_\star)<0$
\begin{equation}
\gamma_e=\begin{cases}
8|\bar F(Z_1,A_1;Z_2,A_2; B_\star)|^{3/2}\, {\rm cosh}^3\left(\frac{1}{3}{\rm arccosh~} \upsilon\right) & \text{if} \ \upsilon\geq 1\, ,\\
8|\bar F(Z_1,A_1;Z_2,A_2; B_\star)|^{3/2}\, \cos^3\theta_k & \text{if} \ 0\leq \upsilon< 1\, .
\end{cases}
\end{equation}
with
\begin{equation}
\theta_k\equiv \frac{1}{3}\arccos \upsilon + \frac{2\pi k}{3}\, ,
\end{equation}
and $k=0,1,2$.
\end{itemize}
The mathematical solutions $k=1$ and $k=2$ yield $\gamma_e \leq 0$, and they must therefore be discarded.
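These closed forms can be checked against the cubic condition they solve. The substitution $x=\gamma_e^{1/3}$ turns Eq.~(\ref{eq:threshold-condition-strongB}) into the depressed cubic $x^3+3\bar F x=\gamma_e^{1\rightarrow2}$, whose standard hyperbolic and trigonometric solutions are the formulas above; a minimal Python sketch covering the branch selection for $\gamma_e^{1\rightarrow2}>0$:

```python
import math

def gamma_threshold(Fbar, gamma12):
    """Positive root of gamma + 3*Fbar*gamma**(1/3) = gamma12 (gamma12 > 0)."""
    upsilon = gamma12/(2*abs(Fbar)**1.5)
    if Fbar > 0:
        x = 2*math.sqrt(Fbar)*math.sinh(math.asinh(upsilon)/3)
    elif upsilon >= 1:
        x = 2*math.sqrt(-Fbar)*math.cosh(math.acosh(upsilon)/3)
    else:                                    # 0 <= upsilon < 1: k = 0 branch
        x = 2*math.sqrt(-Fbar)*math.cos(math.acos(upsilon)/3)
    return x**3                              # gamma_e = x^3 with x = gamma_e^{1/3}
```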
The transition pressure and the densities of the layers are given by
\begin{equation}\label{eq:P-strongB}
P_{1\rightarrow 2} = \frac{B_\star m_e c^2 }{4 \pi^2 \lambda_e^3 }\biggl[x_e
\sqrt{1+x_e^2}-\log\left(x_e+\sqrt{1+x_e^2}\right)
+\left(\frac{4 B_\star Z_1^2 x_e^4}{\pi^2}\right)^{1/3}\frac{C \alpha }{3}\biggr] \, ,
\end{equation}
\begin{equation}\label{eq:n1max-strongB}
\bar n_1^{\rm max} = \frac{B_\star}{2\pi^2 \lambda_e^3} \frac{A_1}{Z_1} x_e\,,
\end{equation}
\begin{equation}\label{eq:n2min-strongB}
\bar n_2^{\rm min} = \frac{B_\star}{2\pi^2 \lambda_e^3} \frac{A_2}{Z_2} x_e\biggl[1+ \frac{1}{3}C \alpha (Z_1^{2/3}-Z_2^{2/3}) \left(\frac{B_\star}{2\pi^2}\right)^{1/3}\frac{\sqrt{1+x_e^2}}{x_e^{5/3}}\biggr]\, ,
\end{equation}
where $x_e=\sqrt{\gamma_e^2-1}$.
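As an illustration of Eqs.~(\ref{eq:P-strongB}) and (\ref{eq:n1max-strongB}), the following Python sketch reproduces the first row of Table~\ref{tab1} ($^{56}$Fe $\rightarrow$ $^{62}$Ni at $B_\star=2000$) to better than a percent; the body-centered cubic value $C\approx-1.444231$ and the physical constants are standard inputs, not taken from the text:

```python
import math

LAMBDA_E = 386.1592680     # hbar/(m_e c) [fm]
MEC2 = 0.5109989           # m_e c^2 [MeV]
ALPHA = 1/137.035999       # fine-structure constant
C_BCC = -1.444231          # bcc Madelung constant (assumed value)

def transition_pressure(x_e, Z1, B_star):
    """Transition pressure [MeV fm^-3] in the strongly quantizing regime."""
    gam = math.sqrt(1 + x_e**2)
    bracket = (x_e*gam - math.log(x_e + gam)
               + (4*B_star*Z1**2*x_e**4/math.pi**2)**(1/3)*C_BCC*ALPHA/3)
    return B_star*MEC2/(4*math.pi**2*LAMBDA_E**3)*bracket

def nbar1_max(x_e, Z1, A1, B_star):
    """Maximum mean nucleon density [fm^-3] of the shallower layer."""
    return B_star/(2*math.pi**2*LAMBDA_E**3)*(A1/Z1)*x_e
```

With $x_e=1.31$ one finds $P_{1\rightarrow2}\approx3.0\times10^{-7}$~MeV~fm$^{-3}$ and $\bar n_1^{\rm max}\approx5.0\times10^{-6}$~fm$^{-3}$, in agreement with Table~\ref{tab1}.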
For high enough magnetic fields, the second term in the left-hand side of Eq.~(\ref{eq:threshold-condition-strongB}) can exceed $\gamma_e$ in magnitude so that the equilibrium composition corresponds to $\gamma_e^{1\rightarrow 2}<0$. Although the expansion of the Gibbs free energy is not expected to be accurate in this case, analytical solutions can still be of interest as an initial guess in the numerical search for $\gamma_e$. Real solutions only exist for $\bar F(Z_1,A_1;Z_2,A_2; B_\star)<0$ and $-1<\upsilon\leq 0$:
\begin{equation}
\gamma_e=8|\bar F(Z_1,A_1;Z_2,A_2; B_\star)|^{3/2}\, \cos^3\theta_k \, ,
\end{equation}
and $k=0,1,2$.
As in the case of ``low'' magnetic fields (but still strongly quantizing), the solution $k=1$ must be ignored since $\gamma_e<0$. However, both $k=0$ and $k=2$ now lead to $\gamma_e\geq 0$. The physically admissible solution is determined by selecting the expression yielding the lowest transition pressure satisfying the conditions $\gamma_e\geq 1$ and $\bar n_2^{\rm min}\geq \bar n_1^{\rm max}$, as required by mechanical stability.
Solutions for the neutron-drip transition can be found using the above formulas after substituting $F(Z_1,A_1;Z_2,A_2)$ by $(4/3) Z^{2/3}$ and $\gamma_e^{1\rightarrow 2}$ by $\gamma_e^{\rm drip}$.
\section{Global structure and nuclear abundances}
\label{sec:structure}
In principle, the global structure of a highly-magnetized neutron star should be calculated solving simultaneously Einstein's and Maxwell's equations. However,
the influence of the magnetic field on the crust size was shown to lie below about $1-2\%$ for $B_\star \lesssim 10^4$~\cite{franzon2017,gomes2019, chatterjee2019,chatterjee2019b}. We shall thus employ the same analytical formulas as those derived for unmagnetized neutron stars in Ref.~\cite{chamel2020}.
The relative nuclear abundance of a crustal layer is thus approximately given by
\begin{equation}\label{eq:xi}
\xi= \frac{\delta P}{P_{\rm drip}}\, ,
\end{equation}
where $\delta P$ is the range of pressures of the layer under consideration and $P_{\rm drip}$
is the neutron-drip pressure. The associated baryonic mass for a star with a gravitational
mass $\mathcal{M}$ and a circumferential radius $R$ can be obtained as follows:
\begin{equation}\label{eq:baryonic-mass}
\delta\,M_B \approx \xi \frac{8\pi R^4 P_{\rm drip}}{r_g c^2} \left(1-\frac{r_g}{R}\right)^{3/2} \, ,
\end{equation}
where $r_g=2 G\mathcal{M}/c^2$ is the Schwarzschild radius.
The proper depth $z$ below the surface at the transition between two adjacent crustal layers with baryon chemical potential $\mu_{1\rightarrow 2}$ is
approximately given by
\begin{equation}\label{eq:z}
z \approx z_{\rm drip} \frac{(\mu_{1\rightarrow 2}/\mu_s)^2-1}{(m_n c^2/\mu_s)^2-1} \, ,
\end{equation}
where $\mu_s$ is the baryon chemical potential at the stellar surface (where $P=0$), and the depth at the neutron-drip transition is given by
\begin{equation}\label{eq:zdrip}
z_{\rm drip}\approx \frac{R^2}{r_g}\biggl[\left(\frac{m_n c^2}{\mu_s}\right)^2-1\biggr]\sqrt{1-\frac{r_g}{R}}\, .
\end{equation}
Contrary to the case of unmagnetized neutron stars, $\mu_s/c^2$ is not simply given by the mass $m_0$ per nucleon of $^{56}$Fe because the density at the surface is finite and is approximately given by~\cite{lai91b,chapav2012}
\begin{equation}
\label{eq:ns}
n_s \approx \frac{A_s}{\lambda_e^3}\biggl[\frac{|C| \alpha B_\star^2 }{4\pi^4 Z_s}\biggr]^{3/5} \, ,
\end{equation}
with $Z_s=26$ and $A_s=56$ the corresponding atomic and mass numbers of $^{56}$Fe. The corresponding value of $\mu_s$ can be calculated from Eq.~(\ref{eq:gibbs}) with $n_e=(Z_s/A_s)n_s$.
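For orientation, Eq.~(\ref{eq:zdrip}) can be evaluated with fiducial stellar parameters (our own choice, not from the text), $\mathcal{M}=1.4\,M_\odot$ and $R=12$~km, taking $\mu_s\approx930.4$~MeV, the mass per nucleon of $^{56}$Fe, i.e. neglecting the small magnetic correction discussed above; a Python sketch:

```python
import math

G_NEWTON = 6.674e-8    # [cm^3 g^-1 s^-2]
C_LIGHT = 2.998e10     # [cm s^-1]
M_SUN = 1.989e33       # [g]
MN_C2 = 939.565        # neutron rest energy [MeV]

def z_drip_km(M_solar, R_km, mu_s=930.4):
    """Proper depth [km] of the neutron-drip point; mu_s in MeV."""
    R = R_km*1e5
    rg = 2*G_NEWTON*M_solar*M_SUN/C_LIGHT**2      # Schwarzschild radius [cm]
    z = R**2/rg*((MN_C2/mu_s)**2 - 1)*math.sqrt(1 - rg/R)
    return z/1e5
```

One obtains $z_{\rm drip}\approx0.56$~km: the outer crust is a layer a few hundred meters thick.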
\section{Stratification of the outer crust}
\label{sec:implementation}
The stratification of the outer crust is determined as in the case of unmagnetized neutron stars~\cite{chamel2020}. Given a crustal layer made of nuclide ($A_1$, $Z_1$), the composition of the layer beneath can be found by merely determining the nuclide ($A_2$, $Z_2$) leading to the lowest transition pressure $P_{1\rightarrow 2}$. Starting with $^{56}$Fe at the stellar surface, the sequence of equilibrium nuclides can thus be determined iteratively. The iteration is stopped when the baryon chemical potential exceeds the neutron mass energy. Once the composition has been found, the detailed structure of the crust and the nuclear abundances can be readily calculated using the analytical formulas for the pressure and baryon chemical potential at the interface between adjacent layers.
To assess the efficiency of
the method in the strongly quantizing regime, we have calculated the internal constitution of the outer crust of a nonaccreted magnetar with $B_\star=2000$ using experimental data from the 2016 Atomic Mass Evaluation~\cite{ame2016} supplemented with the nuclear mass table HFB-27 from the {\footnotesize BRUSLIB} database~\cite{bruslib} and based on the Hartree-Fock-Bogoliubov method~\cite{hfb27}. We have also made use of the recent measurements of copper isotopes~\cite{welker2017}. Nuclear masses were estimated from tabulated \emph{atomic} masses after subtracting out the electron binding energy using Eq.~(A4) of Ref.~\cite{lunney2003}.
In each layer, nuclei are arranged in a body-centered cubic lattice independently of the magnetic field strength~\cite{kozhberov2016}. The structure constant is taken from Ref.~\cite{baiko2001}.
Results are summarized in Table~\ref{tab1}. The computations took about 0.07 seconds using an Intel Core i7-975 processor. In contrast, the standard approach using about 19000 different pressure values between $P=9\times 10^{-12}$ MeV~fm$^{-3}$ and $P=P_{\rm drip}$ with a pressure step $\delta P=10^{-3}P$ took about 24 minutes, i.e. $\approx 2\times 10^4$ times longer. Comparing with the results obtained in Ref.~\cite{chamel2020}, the magnetic field changes the composition of the crust: the layers made of $^{64}$Ni, $^{66}$Ni, and $^{78}$Ni have disappeared, while new layers made of nuclei $^{88}$Sr, $^{132}$Sn, $^{128}$Pd are now present. Due to the increase of the matter density induced by the magnetic field, matter is more uniformly distributed: the baryonic content of the shallow layers is now comparable to that of the deeper layers. In these calculations, the same nuclear masses as in the absence of magnetic fields were employed. However, high enough magnetic fields can also influence the structure of nuclei~\cite{arteaga2011,stein2016}, inducing additional changes in the crustal composition~\cite{basilico2015}.
We have determined the precision of the method by solving
numerically the exact equilibrium conditions:
\begin{equation}
\label{eq:equilibrium1}
g(A_1,Z_1,n_e^1)=g(A_2,Z_2,n_e^2)\equiv \mu_{1\rightarrow2}\, ,
\end{equation}
\begin{equation}
\label{eq:equilibrium2}
P(n_e^1,Z_1)=P(n_e^2,Z_2)\equiv P_{1\rightarrow2}\, .
\end{equation}
The relative deviations between these
results and the analytical formulas are indicated in Table~\ref{tab2}. In most cases, the errors on the pressures and densities do not exceed 0.24\%. The errors of a few \% found for the transition from $^{56}$Fe to $^{62}$Ni in the shallow region of the crust where electrons are only moderately relativistic (as indicated by the rather low value of the parameter $x_e$) can be traced back to the approximation~(\ref{eq:mue-strongB}). The transition from $^{132}$Sn to $^{80}$Zn also exhibits comparatively large deviations, but their origin is different. As indicated in Table~\ref{tab2}, the threshold electron chemical potential $\mu_e^{1\rightarrow 2}$ associated with this transition is negative so that the second term in Eq.~(\ref{eq:threshold-condition}) must be large and negative. However, the expansion of the Gibbs free energy per nucleon to first order in $\alpha$ requires this term to be small.
Except for the two peculiar cases discussed above, the depths are determined with an error of 0.14~\% at most. As expected, the relative abundances being obtained from pressure differences exhibit larger deviations, especially in the vicinity of the transitions from $^{56}$Fe to $^{62}$Ni and from $^{132}$Sn to $^{80}$Zn. On the other hand, the analytical formulas for the baryon chemical potentials remain very accurate in all cases, with deviations below $8\times 10^{-3}$\%, thus ensuring that the sequence of equilibrium nuclides is correctly reproduced. Having found the composition, the crustal properties could thus be refined in a second stage by solving numerically Eqs.~(\ref{eq:equilibrium1}) and (\ref{eq:equilibrium2}). The overall procedure will still remain much faster than the full minimization.
\section{Conclusions}
We have extended the iterative method proposed in Ref.~\cite{chamel2020} for determining the structure and the composition of the outer crust of a cold nonaccreted neutron star to allow for the Landau-Rabi quantization of the electron motion induced by the presence of a magnetic field. We have shown that this method can be very efficiently implemented in the limit of a strongly quantizing magnetic field by making use of new analytical solutions for the transitions between adjacent crustal layers. Computations are found to be as fast as for unmagnetized neutron stars. Computer codes have been made publicly available for both unmagnetized~\cite{chamel2020b} and strongly magnetized neutron stars~\cite{chamel2020c}. The general scheme proposed in Ref.~\cite{chamel2020} is therefore particularly well-suited for systematic calculations of the equation of state of dense magnetized matter for a large number of different magnetic-field strengths, as required for the modelling of magnetars.
\begin{table}
\centering
\caption{Stratification of the outer crust of a magnetar with $B_\star=2000$, as obtained using recent experimental data supplemented with the nuclear mass model HFB-27~\cite{hfb27}. In the table are listed: the atomic numbers $Z_1$ and $Z_2$ of adjacent layers, the corresponding mass numbers $A_1$ and $A_2$, the dimensionless parameter $x_e$, the maximum and minimum mean nucleon number densities $\bar{n}_1^{\rm max}$ and $\bar{n}_2^{\rm min}$ at which the nuclides are present, the transition pressure
$P_{1\rightarrow 2}$, the electron and baryon threshold chemical potentials $\mu_e^{1\rightarrow 2}$ and $\mu_{1\rightarrow 2}$, the relative abundance $\xi_1$ of nuclide ($A_1$, $Z_1$) and its relative depth $z_1/z_{\rm drip}$. Units are megaelectronvolts for energy and femtometers for length. See text for details.}
\label{tab1}
\vspace{.5cm}
\begin{tabular}{|cccccccccccc|}
\hline
$Z_1$ & $A_1$ & $Z_2$ & $A_2$ & $x_e$ & $\bar n_1^{\rm max}$ & $\bar n_2^{\rm min}$ & $P_{1\rightarrow 2}$ & $\mu_e^{1\rightarrow 2}$ & $\mu_{1\rightarrow 2}$ & $\xi_1$ & $z_1/z_{\rm drip}$ \\
\hline
26 & 56 & 28 &62 & 1.31 & 4.98$\times 10^{-6}$ & 5.15$\times 10^{-6}$ & 3.00$\times 10^{-7}$ & 0.966 & 930.4 & 2.62$\times 10^{-4}$ & 8.67$\times 10^{-3}$ \\
28 & 62 & 38 &88 & 5.68 & 2.21$\times 10^{-5}$ & 2.34$\times 10^{-5}$ & 1.23$\times 10^{-5}$ & 4.44 & 931.3 & 1.04$\times 10^{-2}$ & 0.101 \\
38 & 88 & 36 &86 & 8.26 & 3.37$\times 10^{-5}$ & 3.47$\times 10^{-5}$ & 2.69$\times 10^{-5}$ & 2.84 & 931.8 & 1.28$\times 10^{-2}$ & 0.156 \\
36 & 86 & 34 &84 & 13.1 & 5.49$\times 10^{-5}$ & 5.67$\times 10^{-5}$ & 7.06$\times 10^{-5}$ & 5.13 & 932.8 & 3.82$\times 10^{-2}$ & 0.261 \\
34 & 84 & 32 &82 & 18.6 & 8.08$\times 10^{-5}$ & 8.37$\times 10^{-5}$ & 1.46$\times 10^{-4}$ & 7.83 & 933.9 & 6.60$\times 10^{-2}$ & 0.380 \\
32 & 82 & 50 &132 & 23.9 & 1.08$\times 10^{-4}$ & 1.12$\times 10^{-4}$ & 2.44$\times 10^{-4}$ & 19.6 & 934.9 & 8.55$\times 10^{-2}$ & 0.491 \\
50 & 132& 30 &80 & 25.1 & 1.17$\times 10^{-4}$ & 1.17$\times 10^{-4}$ & 2.67$\times 10^{-4}$ & -17.0 & 935.1 & 1.98$\times 10^{-2}$ & 0.513 \\
30 & 80 & 46 &128 & 28.2 & 1.32$\times 10^{-4}$ & 1.39$\times 10^{-4}$ & 3.44$\times 10^{-4}$ & 19.0 & 935.7 & 6.72$\times 10^{-2}$ & 0.579 \\
46 & 128& 44 &126 & 34.5 & 1.69$\times 10^{-4}$ & 1.74$\times 10^{-4}$ & 5.11$\times 10^{-4}$ & 15.2 & 936.8 & 0.146 & 0.697 \\
44 & 126& 42 &124 & 37.8 & 1.91$\times 10^{-4}$ & 1.96$\times 10^{-4}$ & 6.18$\times 10^{-4}$ & 16.9 & 937.4 & 9.39$\times 10^{-2}$ & 0.761 \\
42 & 124& 40 &122 & 42.8 & 2.22$\times 10^{-4}$ & 2.29$\times 10^{-4}$ & 7.95$\times 10^{-4}$ & 19.4 & 938.2 & 0.154 & 0.853 \\
40 & 122& 38 &120 & 45.2 & 2.43$\times 10^{-4}$ & 2.51$\times 10^{-4}$ & 8.90$\times 10^{-4}$ & 20.7 & 938.6 & 8.36$\times 10^{-2}$ & 0.897 \\
38 & 120& 38 &122 & 50.0 & 2.78$\times 10^{-4}$ & 2.82$\times 10^{-4}$ & 1.09$\times 10^{-3}$ & 24.2 & 939.4 & 0.175 & 0.980 \\
38 & 122& 38 &124 & 51.0 & 2.88$\times 10^{-4}$ & 2.93$\times 10^{-4}$ & 1.14$\times 10^{-3}$ & 24.7 & 939.5 & 4.11$\times 10^{-2}$ & 0.998 \\
38 & 124& $-$ &$-$& 51.2 & 2.94$\times 10^{-4}$ & $-$ & 1.14$\times 10^{-3}$ & 24.8 & 939.6 & 5.47$\times 10^{-3}$ & 1.00 \\
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Precision of the calculated properties of the outer crust of a magnetar, as listed in Table~\ref{tab1}. The relative deviation $\delta q$ (in \%) of a quantity $q$ is calculated
as $\delta q=100(q-q_\textrm{exact})/q_\textrm{exact}$, where $q_\textrm{exact}$ is the exact value while $q$ denotes the value calculated using the analytical formulas. See text for details. }
\label{tab2}
\vspace{.5cm}
\begin{tabular}{|ccccccccccc|}
\hline
$Z_1$ & $A_1$ & $Z_2$ & $A_2$ & $x_e$ & $\bar n_1^{\rm max}$ & $\bar n_2^{\rm min}$ & $P_{1\rightarrow 2}$ & $\mu_{1\rightarrow 2}$ & $\xi_1$ & $z_1/z_{\rm drip}$ \\
\hline
26 & 56 & 28 & 62 & -1.4 & -1.4 & -1.6 & -4.6 & -3.1$\times 10^{-4}$ & -4.6 & -3.5 \\
28 & 62 & 38 & 88 & -1.1$\times 10^{-1}$& -1.1$\times 10^{-1}$ & -1.9$\times 10^{-1}$ & -2.4$\times 10^{-1}$ & -1.4$\times 10^{-4}$ & -1.2$\times 10^{-1}$ & -1.4$\times 10^{-1}$ \\
38 & 88 & 36 & 86 & 9.7$\times 10^{-2}$ & 9.7$\times 10^{-2}$ & 1.1$\times 10^{-1}$ & 2.1$\times 10^{-1}$ & 1.8$\times 10^{-4}$ & 5.8$\times 10^{-1}$ & 1.1$\times 10^{-1}$ \\
36 & 86 & 34 & 84 & 2.7$\times 10^{-2}$ & 2.7$\times 10^{-2}$ & 3.3$\times 10^{-2}$ & 5.7$\times 10^{-2}$ & 7.8$\times 10^{-5}$ & -3.6$\times 10^{-2}$ & 3.0$\times 10^{-2}$ \\
34 & 84 & 32 & 82 & 1.0$\times 10^{-2}$ & 1.0$\times 10^{-2}$ & 1.4$\times 10^{-2}$ & 2.1$\times 10^{-2}$ & 4.1$\times 10^{-5}$ & -1.2$\times 10^{-2}$ & 1.1$\times 10^{-2}$ \\
32 & 82 & 50 & 132 & 5.3$\times 10^{-2}$ & 5.3$\times 10^{-2}$ & 3.0$\times 10^{-2}$ & 1.1$\times 10^{-1}$ & 2.6$\times 10^{-4}$ & 2.4$\times 10^{-1}$ & 5.4$\times 10^{-2}$ \\
50 & 132 & 30 & 80 & 1.6 & 1.6 & 1.7 & 3.3 & 8.0$\times 10^{-3}$ & 5.9$\times 10^1$ & 1.6 \\
30 & 80 & 46 & 128 & 2.7$\times 10^{-2}$ & 2.7$\times 10^{-2}$ & 1.2$\times 10^{-2}$ & 5.6$\times 10^{-2}$ & 1.6$\times 10^{-4}$ & -9.9 & 2.7$\times 10^{-2}$ \\
46 & 128 & 44 & 126 & 2.6$\times 10^{-3}$ & 2.6$\times 10^{-3}$ & 4.2$\times 10^{-3}$ & 5.4$\times 10^{-3}$ & 1.8$\times 10^{-5}$ & -1.0$\times 10^{-1}$ & 2.6$\times 10^{-3}$ \\
44 & 126& 42& 124 & 2.1$\times 10^{-3}$ & 2.1$\times 10^{-3}$ & 3.4$\times 10^{-3}$ & 4.2$\times 10^{-3}$ & 1.5$\times 10^{-5}$ & -2.5$\times 10^{-3}$ & 1.9$\times 10^{-3}$ \\
42 & 124& 40& 122 & 1.5$\times 10^{-3}$ & 1.5$\times 10^{-3}$ & 2.6$\times 10^{-3}$ & 3.0$\times 10^{-3}$ & 1.1$\times 10^{-5}$ & -1.8$\times 10^{-3}$ & 1.4$\times 10^{-3}$ \\
40 & 122& 38& 120 & 1.3$\times 10^{-3}$ & 1.3$\times 10^{-3}$ & 2.3$\times 10^{-3}$ & 2.5$\times 10^{-3}$ & 9.9$\times 10^{-6}$ & -2.0$\times 10^{-3}$ & 1.1$\times 10^{-3}$ \\
38 & 120& 38& 122 & 3.7$\times 10^{-4}$& 3.7$\times 10^{-4}$& 3.7$\times 10^{-4}$& 7.5$\times 10^{-4}$ & 3.1$\times 10^{-6}$ & -7.9$\times 10^{-3}$ & 3.3$\times 10^{-4}$ \\
38 & 122& 38& 124 & 3.5$\times 10^{-4}$& 3.5$\times 10^{-4}$& 3.5$\times 10^{-4}$& 7.1$\times 10^{-4}$ & 3.0$\times 10^{-6}$ & -9.4$\times 10^{-4}$ & 3.1$\times 10^{-4}$ \\
38 & 124&$-$ & $-$ & 3.5$\times 10^{-4}$ & 3.5$\times 10^{-4}$ & $-$ & 7.0$\times 10^{-4}$ & $-$ & -9.4$\times 10^{-4}$ & $-$ \\
\hline
\end{tabular}
\end{table}
\section*{Acknowledgments}
The work of N.C. was financially supported by the Fonds de la Recherche Scientifique (Belgium) under grant no. IISN 4.4502.19. The work of Z. S. was financially supported by the National programme ``Young scientists'' funded by the Bulgarian Ministry of Education and Science and a Short Term Scientific Mission grant from the European Cooperation in Science and Technology (COST) Action CA16117. This work was also partially supported by the COST Action CA16214.
\section{Introduction}
Over the past decade, science has been developing very rapidly, with connections between one branch and another forming in various fields. Physicists specializing in nonlinear phenomena and statistical physics, for example, have attempted to apply the relevant concepts to understand social and political phenomena~\cite{castellano2009statistical,serge2016sociophysics, sen2014sociophysics, javarone2014network}. This form of interdisciplinary science is well known as sociophysics~\cite{serge2016sociophysics}, in which there emerge some statistical physics features such as phase transition, scaling, and universality. One of the most discussed topics in sociophysics is opinion dynamics~\cite{castellano2009statistical, serge2016sociophysics, sen2014sociophysics, stauffer2009encyclopedia}. To understand and predict opinion dynamics, physicists have tried to correlate the microscopic-macroscopic phenomena in the physical system to the social system, e.g., collective phenomena in the macroscale with the individual behavior in the microscale~\cite{myer2013}. Such correlation is similar to what physicists study in thermodynamics and statistical physics so that various models of opinion dynamics from physics' point of view have emerged, such as the celebrated Sznajd model~\cite{sznajd2000opinion}, the Galam model~\cite{galam2008sociophysics}, the voter model~\cite{liggett1985interacting}, the majority rule model~\cite{mobilia2003majority,galam2002minority,krapivsky2003dynamics}, and the Biswas-Sen model~\cite{biswas2009model}. Most of the models have a ferromagnetic-like character which makes the system always homogeneous, i.e., all members in the system have the same opinion. The ferromagnetic-like character in those models represents conformity behavior in social literature~\cite{nail2000proposal}; however, the models are inadequate when faced with social reality. 
Therefore, physicists have further been proposing several social parameters, such as contrarian~\cite{galam2004contrarian}, inflexibility~\cite{galam2007role}, nonconformity~\cite{nyczka2013anticonformity}, and fanaticism~\cite{mobilia2003does}, to make the models more realistic.
Inspired by social psychology, one can classify several social responses or behaviors such as conformity and nonconformity~\cite{nail2000proposal, willis1963two, willis1965conformity, macdonald2004expanding, nail2011development}. Conformity is a behavior that obeys group norms, while nonconformity does not follow the group norms. Nonconformity further splits into two types, namely anticonformity and independence. Anticonformity is a behavior contrary to that of the group majority. Galam defines anticonformity as ``contrarian'' behavior, in which an agent or individual takes a particular choice opposite to the group choices with a certain probability~\cite{galam2004contrarian}. Nonconformity agents play an important role in the dynamics: conformity agents stabilize the system on a complete graph, whereas the opposite occurs on an incomplete graph~\cite{javarone2014social}. The density of conformity agents also plays an important role in opinion formation within the system. With a certain density of conformity agents, the system reaches a full consensus with all agents having the same opinion~\cite{javarone2015conformism}.
Implementation of the contrarian behavior in the outflow dynamics within the Sznajd model (``united we stand, divided we fall'') exhibits an order-disorder nonequilibrium phase transition. The contrarian term in opinion dynamics is analogous to thermal fluctuation in a thermodynamic system, which acts to drive the system toward disorder. Above a certain threshold contrarian probability $p_c$, the system reaches a stalemate situation with no majority-minority opinion~\cite{de2005spontaneous}. The same effect of anticonformity behavior occurs within the Sznajd model; at a certain probability $p_c$, the system undergoes a second-order phase transition. In the social context, below the critical point $p_c$ there is consensus, with all agents having the same opinion, or the emergence of a majority opinion~\cite{calvelli2019phase, muslim2020phase}.
The other subset of nonconformity, i.e., independence, describes an individual who does not follow the group norms but acts independently of them. In the physics context, independence behaves similarly to the temperature in the Ising system; it acts as a source of noise that can induce an order-disorder phase transition~\cite{nyczka2013anticonformity, calvelli2019phase, sznajd2011phase, crokidakis2015inflexibility}. In addition, the presence of independence behavior adopted by agents in various scenarios causes the system to undergo a discontinuous phase transition, especially in the $q$-voter model~\cite{sznajd2011phase, chmiel2015phase, abramiuk2019independence, nowak2021discontinuous, civitarese2021external}.
The level of conformity or nonconformity is not the same in every community group; it depends on several factors such as culture, education, and age. The level of independence also varies in each country. One way to quantify the level of independence is using the individualism index value or the individualism distance index (IDV) proposed by Hofstede~\cite{hofstede2001, hofstede2010}. The IDV indicates a degree of thinking of yourself first as an individual and then considering the group after. The US (IDV = 92) and UK (IDV = 89) have higher IDV levels than Asian countries or ``conservative'' countries, for example Indonesia (IDV = 14) and South Korea (IDV = 18). Although individualism is not the same as independence, we may observe both individualism and independence more commonly in countries with high IDV~\cite{solomon2010consumer}. The IDV data for other countries are available on the internet~\cite{hofstedeIDV}.
The implementation of the independence behavior in the Sznajd model can be found in Ref.~\cite{sznajd2011phase}. In that paper, Sznajd-Weron et al. introduced a flexibility factor $f$ and analyzed the model on the lattice and a complete graph with three-spin (representing three-agent) interactions. On the other hand, the implementation of the independence behavior in other opinion dynamics models can be found in Refs.~\cite{nyczka2013anticonformity, crokidakis2015inflexibility, abramiuk2019independence, nyczka2012phase}. However, there have been no studies that combine and compare the contrarian behavior with independence within the Sznajd model, especially for four-agent interactions in the two-two and three-one configurations.
In this paper, we investigate the opinion dynamics within the Sznajd model involving both contrarian and independence behaviors. We consider four-spin (or four-agent) local interactions defined on a complete graph, as described in Section~\ref{sec.2}. We then divide the model into the so-called two-two scenario and the three-one scenario for developing the microscopic rules of the outflow dynamics in the presence of contrarian and independent characters. The two-two (three-one) scenario occurs when two (three) agents influence the other two (one), wherever they are. For the independence behavior, we use the flexibility parameter $f$ that describes the level of independence, as previously defined by Sznajd-Weron et al.~\cite{sznajd2011phase}. For the contrarian behavior, we introduce a contrarian factor $\lambda$ defining the level of contrarian character, i.e., how often an individual takes a particular choice opposite to that of the group majority. Based on the proposed model, we present the analytical and numerical results in Section~\ref{sec.3}. In particular, we focus on discussing the phase transition and universality of the model. Finally, in Section~\ref{sec.4}, we give our conclusions and outlook.
\section{Model and methods}
\label{sec.2}
We consider the outflow dynamics under the Sznajd model with contrarian and independence behaviors for four-agent local interactions. The model is defined on a complete graph (Fig.~\ref{cg}) with total population $N = N_+ + N_-$, where $N_+$ and $N_-$ are the total numbers of spin-up and spin-down states, respectively. The opinions of agents are represented by Ising spins $S_n = \pm 1 \,(n = 1,2,\cdots, N)$. The initial condition of the system is $50\%$ up and $50\%$ down, i.e., the initial magnetization is zero (disordered state). We categorize the system of four-agent local interactions into (1) scenario two-two, where two agents persuade two other agents, wherever they are in the population, and (2) scenario three-one, where three agents persuade the fourth agent, wherever it is in the population.
\begin{figure}[ht!]
\centering
\includegraphics[width=5 cm]{Fig01}
\caption{Illustration of a complete graph with 8 nodes (agents) in two different states (opinions) and 7 connections/edges (social relations in the social context) per node. This topology can be interpreted as a group interaction in which every member of the group can interact with every other member; equivalently, all members of the group are neighbors of one another.
\label{cg}}
\end{figure}
The macroscopic behavior of the system can be analyzed through the magnetization or order parameter $m$, which, from the sociophysics point of view, is equivalent to the average opinion. In the analytical treatment, the order parameter is defined as $m = (1/N) \sum_{i=1}^{N} S_i$. In the numerical simulation, the order parameter is averaged over samples, $\langle m \rangle = (1/K) \sum_{i =1}^{K} m_i$, where $\langle \cdots \rangle$ denotes the sample average, $K$ is the total number of samples, and $m_i$ is the order parameter of the $i$th sample. To estimate the critical exponents of the system, we consider the ``magnetic'' susceptibility $\chi$ and the Binder cumulant $U$, respectively defined as~\cite{binder1981finite}:
\begin{align}
\chi =& N \left( \langle m^2 \rangle - \langle m \rangle^2\right), \label{eq2}\\
U =& 1 - \dfrac{\langle m^4 \rangle}{3\, \langle m^2 \rangle^2}. \label{eq3}
\end{align}
The universality class of the model can be defined by estimating the critical exponents of the model using finite-size scaling relations, defined as follows~\cite{cardy1996scaling}:
\begin{align}
m ( N) & \sim N^{-\beta/\nu}, \label{eq4} \\
\chi(N) & \sim N^{\gamma/\nu}, \label{eq5}\\
U(N) &\sim \text{constant}, \label{eq6}\\
p_{c}(N)-p_{c} & \sim N^{-1/\nu}, \label{eq7}
\end{align}
all of which hold near the critical point. We perform analytical calculations and numerical simulations and compare the results. The scenarios of the model are described in the following subsections.
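As a concrete illustration of Eqs.~\eqref{eq2} and \eqref{eq3}, the sketch below estimates $\langle m \rangle$, $\chi$, and $U$ from a list of sampled order parameters $m_i$. It is a minimal illustrative version with our own naming; we use $\langle |m|\rangle$, a common choice in finite-size simulations near criticality:

```python
def observables(m_samples, N):
    """Estimate <m>, the susceptibility chi, and the Binder cumulant U
    from per-sample order parameters m_i, following Eqs. (2)-(3)."""
    K = len(m_samples)
    m1 = sum(abs(m) for m in m_samples) / K   # <|m|>, customary near criticality
    m2 = sum(m * m for m in m_samples) / K    # <m^2>
    m4 = sum(m ** 4 for m in m_samples) / K   # <m^4>
    chi = N * (m2 - m1 * m1)
    U = 1.0 - m4 / (3.0 * m2 * m2)
    return m1, chi, U
```

In practice, the Binder cumulant $U(N)$ is computed for several system sizes; the curves cross near the critical point, which is how $p_c$ is located numerically.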
\subsection{Scenario two-two}
\label{subsec:2.1}
In this scenario, four agents represented by four spins are chosen randomly, namely, $S_i, S_j, S_k,$ and $S_l$, where two agents persuade the other two, wherever they are in the population, according to the following microscopic rules:
\begin{itemize}
\item For the scenario with contrarian
\begin{itemize}
\item With probability $\lambda p$, agents $S_k$ and $S_l$ take the opposite state (opinion) to that of agents $S_i$ and $S_j$, but only if all four agents initially share the same opinion. This is the contrarian rule of the model; symbolically, two configurations of the agents' opinions satisfy the rule, as illustrated below:
\begin{equation}\label{contrarian-lambdap}
\begin{aligned}
& (1) \, \uparrow\uparrow \Uparrow\Uparrow \quad \rightarrow \quad \uparrow \uparrow \Downarrow \Downarrow \\
& (2) \,\downarrow \downarrow \Downarrow \Downarrow \quad \rightarrow \quad \downarrow \downarrow \Uparrow \Uparrow.
\end{aligned}
\end{equation}
\item With probability $(1-p)$, agents $S_k$ and $S_l$ follow the state (opinion) of agents $S_i$ and $S_j$ if $S_i = S_j$. Based on this rule, there are six configurations satisfying the rule, as follows:
\begin{equation}\label{contrarian_1-p}
\begin{aligned}
& (1) \,\uparrow \uparrow \Downarrow \Uparrow \quad \rightarrow \quad \uparrow \uparrow \Uparrow \Uparrow \\
& (2) \,\uparrow \uparrow \Uparrow \Downarrow \quad \rightarrow \quad \uparrow \uparrow \Uparrow \Uparrow \\
&\qquad \vdots\\
& (6)\, \downarrow \downarrow \Uparrow \Downarrow \quad \rightarrow \quad \downarrow \downarrow \Downarrow \Downarrow.
\end{aligned}
\end{equation}
This behavior is similar to the original Sznajd model (social validation)~\cite{sznajd2000opinion}, where for $p=0$, this model reduces to the original Sznajd model.
\end{itemize}
\item For the scenario with independence
\begin{itemize}
\item With probability $p$, agents $S_k$ and $S_l$ act independently. In other words, they do not follow the group norm: with probability $f$, the agents flip their opinions, i.e., $S_{k,l}(t+dt) = -S_{k,l}(t)$, and with probability $(1-f)$, nothing changes, i.e., $S_{k,l}(t+dt) = S_{k,l}(t)$. The scheme of the agent configurations is illustrated as follows:
\begin{equation}\label{independent}
\begin{aligned}
& (1)\,\cdot \cdot \Uparrow \Uparrow \quad \rightarrow \quad \cdot \cdot \Downarrow \Downarrow \\
& (2) \,\cdot \cdot \Downarrow \Downarrow \quad \rightarrow \quad \cdot \cdot \Uparrow \Uparrow.
\end{aligned}
\end{equation}
The conformity rule for the independence scenario is the same as in Eq.~\eqref{contrarian_1-p}.
\end{itemize}
\end{itemize}
The symbols $\uparrow$ and $\downarrow$ stand for agents with states $S_i$ and $S_j$, respectively, while the symbols $\Uparrow$ and $\Downarrow$ stand for agents $S_k$ and $S_l$, respectively. The parameters $\lambda \in [0,1]$ and $f \in [0,1]$ are the contrarian factor and flexibility, respectively, which describe how often agents change their opinions in the contrarian and independence cases. These parameters are also similar to the stochastic driving parameter that causes the system to undergo an order-disorder phase transition \cite{nyczka2012phase, crokidakis2014phase}.
Based on this model, there are sixteen agent combinations. Eight combinations are active: six of them follow the conformity rule [illustrated in Eq.~\eqref{contrarian_1-p}], and two follow the contrarian [Eq.~\eqref{contrarian-lambdap}] or independence [Eq.~\eqref{independent}] rule. The eight other combinations are inactive because the opinions of the persuading agents ($S_i$ and $S_j$) are tied and exert no net influence. From the social point of view, the inactive combinations can be interpreted as an opinion too weak to persuade others; therefore, they affect neither the public opinion $m$ nor the critical point of the system.
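The two-two rules above can be condensed into a single Monte Carlo update on the complete graph. The following is a minimal sketch of the contrarian variant; the function and variable names are our own, and this is an illustrative toy version rather than the production code used for the results:

```python
import random

def mc_step_two_two(spins, p, lam):
    """One update of the two-two scenario with contrarian agents:
    S_i and S_j persuade S_k and S_l only when S_i == S_j (ties are inactive)."""
    i, j, k, l = random.sample(range(len(spins)), 4)
    if spins[i] != spins[j]:
        return                               # tied persuaders: inactive combination
    if spins[k] == spins[l] == spins[i]:
        if random.random() < lam * p:        # contrarian rule, probability lambda*p
            spins[k] = spins[l] = -spins[i]
    else:
        if random.random() < 1.0 - p:        # social validation, probability 1-p
            spins[k] = spins[l] = spins[i]
```

For $p = 0$ only the social-validation branch acts, recovering the original Sznajd behavior on the complete graph.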
\subsection{Scenario three-one}
\label{subsec:2.2}
In this scenario, four agents are chosen randomly, namely $S_i, S_j, S_k$, and $S_l$, where three agents persuade the fourth agent, $S_l$, wherever it is in the population, according to the following microscopic rules:
\begin{itemize}
\item For the scenario with contrarian
\begin{itemize}
\item With probability $\lambda p$, agent $S_l$ takes the opposite state (opinion) to the majority state of agents $S_i, S_j$ and $S_k$. Based on this rule, there are eight agent combinations satisfying the rule, as illustrated below:
\begin{equation}\label{eq.contra31}
\begin{aligned}
&(1) \, \uparrow \uparrow \uparrow \Uparrow \quad \rightarrow \quad \uparrow \uparrow \uparrow \Downarrow \\
&(2)\, \uparrow \uparrow \downarrow \Uparrow \quad \rightarrow \quad \uparrow \uparrow \downarrow \Downarrow \\
& \qquad \vdots \\
& (8) \, \downarrow \downarrow \uparrow \Downarrow \quad \rightarrow \quad \downarrow \downarrow \uparrow \Uparrow
\end{aligned}
\end{equation}
\item With probability $(1-p)$, agent $S_l$ follows the majority state of agents $S_i, S_j$ and $S_k$. Based on this rule, there are eight agent combinations satisfying the rule, as illustrated below. In this case, for $p = 0$, the model reduces to the original Sznajd model~\cite{sznajd2000opinion}.
\begin{equation}\label{eq.confor31}
\begin{aligned}
& (1) \, \uparrow \uparrow \uparrow \Downarrow \quad \rightarrow \quad \uparrow \uparrow \uparrow \Uparrow \\
& (2) \, \downarrow \uparrow \uparrow \Downarrow \quad \rightarrow \quad \downarrow \uparrow \uparrow \Uparrow \\
& \qquad \vdots \\
& (8) \, \uparrow \downarrow \uparrow \Downarrow \quad \rightarrow \quad \uparrow \downarrow \uparrow \Uparrow.
\end{aligned}
\end{equation}
\end{itemize}
\item For the scenario with independence
\begin{itemize}
\item With probability $p$, agent $S_l$ acts independently of agents $S_i, S_j$ and $S_k$. In this case, with probability $f$, agent $S_l$ flips, $S_l(t+dt)= - S_l(t)$, and with probability $(1-f)$ nothing changes, $S_l(t+dt)= S_l(t)$, as illustrated below:
\begin{equation}\label{eq.indep31}
\begin{aligned}
& (1) \cdots \Uparrow \quad \rightarrow \quad \cdots \Downarrow \\
& \qquad \vdots \\
& (8) \cdots \Downarrow \quad \rightarrow \quad \cdots \Uparrow.
\end{aligned}
\end{equation}
Eight other spin combinations follow the conformity rule as illustrated in Eq.~\eqref{eq.confor31}.
\end{itemize}
\end{itemize}
The symbols $\uparrow,\downarrow$ stand for agents $S_i, S_j,$ and $S_k$, while $\Uparrow,\Downarrow$ stand for agent $S_l$. If the three persuaders are required to share the same state (opinion), the model reduces to the $q$-voter model with $q = 3$ and exhibits a second-order phase transition~\cite{abramiuk2019independence}.
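Analogously, the three-one rules with contrarian agents can be sketched as one Monte Carlo update. This is again an illustrative toy version with our own naming; note that with the remaining probability $p(1-\lambda)$ nothing happens:

```python
import random

def mc_step_three_one(spins, p, lam):
    """One update of the three-one scenario with contrarian agents:
    the majority of S_i, S_j, S_k acts on S_l (three spins always have a majority)."""
    i, j, k, l = random.sample(range(len(spins)), 4)
    majority = 1 if spins[i] + spins[j] + spins[k] > 0 else -1
    r = random.random()
    if r < lam * p:
        spins[l] = -majority          # contrarian: oppose the local majority
    elif r < lam * p + (1.0 - p):
        spins[l] = majority           # conformity: follow the local majority
    # with probability p*(1 - lam), nothing changes
```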
\section{Results and discussion}
\label{sec.3}
\subsection{Phase Transition}
In this section, we find the critical point that makes the system undergo an order-disorder phase transition. If we define the order parameter of the system as:
\begin{equation} \label{order-param}
m = \dfrac{1}{N} \sum_{i =1}^{N} S_i = \dfrac{1}{N} \left(N_{\uparrow} -N_{\downarrow}\right),
\end{equation}
and the concentration of up spins is $c = N_{\uparrow}/N$, then from the two relations above we have $c = \left( m+1 \right)/2$, where $N_{\uparrow}$ and $N_{\downarrow}$ are the total numbers of up and down spins, respectively. During the dynamical process, the total number of up spins $N_{\uparrow}$ increases by $1$, decreases by $1$, or remains constant in a time step $t$. Correspondingly, the value of $c(t)$ increases or decreases by $1/N$ or remains constant, with the probabilities:
\begin{equation}\label{gamma-probs}
\begin{aligned}
\gamma_{+} & = \text{prob}\left(c \rightarrow c+ 1/N\right), \\
\gamma_{-} & = \text{prob}\left(c \rightarrow c- 1/N\right), \\
\end{aligned}
\end{equation}
where the explicit formulations of $\gamma_{+}$ and $\gamma_{-}$ are given in each following system.
\subsubsection{Scenario two-two with contrarian agents}
\label{subsub.3.1.1}
For scenario two-two involving contrarian agents, there are eight active agent combinations: in four combinations the spins (agents) flip from the up state to the down state, and in the four others from the down state to the up state. Therefore, the transition probabilities that the up-spin concentration increases ($\gamma_{+}$) or decreases ($\gamma_{-}$) are
\begin{equation}\label{gamma_prob2_contra2nd}
\begin{aligned}
{\it \gamma_{+}(c)} = & \,2\, \left( 1-p \right) {c}^{3} \left( 1-c \right) +2\, \left( 1-p \right) {c}^{2} \left( 1-c \right) ^{2} \\
& +2\,\lambda\,p \left( 1-c \right) ^{4}, \\
{\it \gamma_{-}(c)} = & \, 2\, \left( 1-p \right) c \left( 1-c \right) ^{3}+2\, \left( 1
-p \right) {c}^{2} \left( 1-c \right) ^{2} \\
&+2\,\lambda\,p\,{c}^{4},
\end{aligned}
\end{equation}
and by using the master equation that describes the ``gain-loss'' of the concentration $c$, we can write~\cite{krapivsky2010kinetic}:
\begin{equation}\label{gain-lose}
\dfrac{dc}{dt} = \gamma_{+}(c) -\gamma_{-}(c).
\end{equation}
From Eqs.~\eqref{gamma_prob2_contra2nd} and~\eqref{gain-lose}, for the stationary condition $\gamma_{+}(c) =\gamma_{-}(c)$, we find for the nontrivial solutions ($c \neq 1/2$):
\begin{equation}\label{concentration_non-zero}
c = \dfrac{1}{2}\left(1\pm \left(\dfrac{1-\left(2\lambda +1 \right)p}{1+\left(2\lambda-1\right)p}\right)^{1/2}\right),
\end{equation}
and from the relation $m = 2\,c-1$, we obtain the order parameter $m$ as a function of $p$ and $\lambda$:
\begin{equation}\label{order-parameter2nd}
m = \pm \left(\dfrac{1-\left(2\lambda +1 \right)p}{1+\left(2\lambda-1\right)p}\right)^{1/2}.
\end{equation}
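For the reader's convenience, the algebra between the stationarity condition and Eq.~\eqref{order-parameter2nd} can be made explicit:

```latex
% gamma_+ = gamma_-; the 2(1-p)c^2(1-c)^2 terms cancel on both sides, leaving
(1-p)\,c(1-c)\left[c^2-(1-c)^2\right]
   = \lambda p\left[c^2-(1-c)^2\right]\left[c^2+(1-c)^2\right].
% Dividing by c^2-(1-c)^2 = m (nontrivial solutions) and substituting
% c(1-c) = (1-m^2)/4 and c^2+(1-c)^2 = (1+m^2)/2, with m = 2c-1:
(1-p)\,\frac{1-m^2}{4} = \lambda p\,\frac{1+m^2}{2}
\quad\Longrightarrow\quad
m^2 = \frac{1-(2\lambda+1)p}{1+(2\lambda-1)p}.
```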
In this case, by setting $m = 0$, the critical point at which the model undergoes an order-disorder phase transition depends on the contrarian level $\lambda$, that is
\begin{equation}\label{critical-point22contra}
p_c = \dfrac{1}{1+2\, \lambda},
\end{equation}
which shows that a second-order phase transition occurs for every value of the contrarian level $\lambda >0$. We plot Eq.~\eqref{order-parameter2nd} for typical values of $\lambda$ in Fig.~\ref{fig:22contra}(a). From a sociophysics point of view, there is a majority opinion for $p < p_c(\lambda)$. In other words, when the contrarian probability is $p = 0$ (no noise), there is a complete consensus, with all members holding the same opinion (ferromagnetic-like, completely ordered). The consensus level decreases as $p$ increases, until it vanishes at the critical point $p_c(\lambda)$. Above the critical point $p_c(\lambda)$ there is no majority opinion, i.e., a stalemate situation (paramagnetic-like, completely disordered). Based on this model, one can also say that if the contrarian level is small (high conformity level), as commonly found in conservative societies, the probability of the society reaching a stalemate is small, and vice versa. Equation~\eqref{critical-point22contra} also shows that the critical point $p_c$ decreases monotonically as $\lambda$ increases, as exhibited in Fig.~\ref{fig:22contra}(b), and separates the order and disorder phases.
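Equation~\eqref{order-parameter2nd} can also be cross-checked by integrating the rate equation~\eqref{gain-lose} with the rates of Eq.~\eqref{gamma_prob2_contra2nd} to the stationary state. A minimal sketch (forward-Euler integration; the initial condition, step size, and iteration count are arbitrary choices of ours):

```python
def stationary_m(p, lam, c0=0.9, dt=0.02, steps=100000):
    """Integrate dc/dt = gamma_+(c) - gamma_-(c) for the two-two contrarian
    scenario (forward Euler) and return the stationary m = 2c - 1."""
    c = c0
    for _ in range(steps):
        gp = 2*(1-p)*c**3*(1-c) + 2*(1-p)*c**2*(1-c)**2 + 2*lam*p*(1-c)**4
        gm = 2*(1-p)*c*(1-c)**3 + 2*(1-p)*c**2*(1-c)**2 + 2*lam*p*c**4
        c += dt * (gp - gm)
    return 2.0*c - 1.0

def analytic_m(p, lam):
    """Positive branch of the analytical order parameter; 0 above p_c."""
    num = 1.0 - (2.0*lam + 1.0)*p
    den = 1.0 + (2.0*lam - 1.0)*p
    return (num/den)**0.5 if num > 0.0 else 0.0
```

Below $p_c$ the flow converges to the ordered fixed point, and above $p_c$ it relaxes to $m = 0$, in agreement with the analytical branches.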
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{Fig02v1}
\caption{(a) Order parameter $m$ versus contrarian probability $p$ showing second-order phase transitions for the scenario two-two with contrarian agents [Eq.~\eqref{order-parameter2nd}] using typical values of contrarian level $\lambda$ (solid line: $\lambda = 1$, dashed line: $\lambda= 1/2$, dash-dotted line: $\lambda= 1/4$, dotted line: $\lambda= 1/8$). (b) Critical point $p_c$ versus contrarian level $\lambda$ according to Eq.~\eqref{critical-point22contra}. As the contrarian level $\lambda$ increases, the critical point $p_c$ decreases. This means that in a society with a high level of contrarian behavior (a non-conservative society), the probability of reaching a stalemate situation is high. Equation~\eqref{critical-point22contra} also separates the order and disorder phases of the system.}
\label{fig:22contra}
\end{figure}
\subsubsection{Scenario two-two with independence agents}
For scenario two-two involving independence agents, six agent (spin) combinations follow the conformity rule [illustration~\eqref{contrarian_1-p}], and two follow the independence rule [illustration~\eqref{independent}]. Four combinations make the agents change their opinion (state) from `up' to `down', and four others from `down' to `up'. Therefore, the probabilities that the up-spin concentration increases ($\gamma_{+}$) or decreases ($\gamma_{-}$) can be written as follows:
\begin{equation}\label{gamma2_independent}
\begin{aligned}
\gamma_{+}(c) =& \, 2\, \left( 1-p \right) {c}^{3} \left( 1-c \right) +2\, \left( 1-p
\right) {c}^{2} \left( 1-c \right) ^{2}\\
&+2\,f \, p \,\left( 1-c \right), \\
\gamma_{-}(c) =& \, 2\, \left( 1-p \right) c \left( 1-c \right) ^{3}+2\, \left( 1-p
\right) {c}^{2} \left( 1-c \right) ^{2}\\
&+2\,f\,p\,c,
\end{aligned}
\end{equation}
and for a stationary condition $\gamma_{+} = \gamma_{-}$. By using the relation $m = 2\,c-1$, the order parameter $m$ is given by
\begin{equation}\label{order-param2_indep}
m = \pm \left(\dfrac{1-\left(1+4\,f\right)p}{1-p}\right)^{1/2},
\end{equation}
which shows that the order parameter $m$ depends on the flexibility $f$. A plot of Eq.~\eqref{order-param2_indep} for typical values of $f$ is depicted in Fig.~\ref{fig.22indep}(a). Based on this scenario, we also find that the critical point decreases monotonically as $f$ increases:
\begin{equation}\label{critical-point_indep22}
p_c = \dfrac{1}{1+4\,f},
\end{equation}
which indicates that an order-disorder phase transition occurs for every value of $f > 0$. From a sociophysics point of view, a majority and a minority opinion coexist for $p < p_c(f)$, and a stalemate situation occurs for $p>p_c(f)$, for all values of $f > 0$. Furthermore, if $f$ is large, the probability of reaching a stalemate situation is high, and vice versa. The critical point $p_c$ in Eq.~\eqref{critical-point_indep22} separates the order and disorder phases, as shown in Fig.~\ref{fig.22indep}(b).
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{Fig03v1}
\caption{(a) Order parameter $m$ versus independence probability $p$ for the scenario two-two with independence agents [Eq. \eqref{order-param2_indep}] using typical values of flexibility $f$ (solid line: $f = 1$, dashed line: $f= 1/2$, dash-dotted line: $f= 1/4$, and dotted line: $f= 1/8$). (b) Critical point $p_c$ versus flexibility $f$ according to Eq.~\eqref{critical-point_indep22}. As flexibility $f$ increases, the critical point $p_c$ decreases. This result indicates that, in a society with high IDV, the probability to reach a stalemate situation is high.}
\label{fig.22indep}
\end{figure}
Based on Eqs.~\eqref{critical-point22contra} and \eqref{critical-point_indep22}, the critical point of scenario two-two with independence agents is smaller than that of scenario two-two with contrarian agents for the same values of $\lambda$ and $f$ (the same levels of contrarian behavior and flexibility). This means that the probability of undergoing an order-disorder phase transition is higher for the scenario with independent behavior than for the scenario with contrarian behavior. The reason is that independence makes the system more 'chaotic' than contrarian behavior does, and thus more disordered. From a social point of view, reaching a status quo or a stalemate situation is more common in a society with independent behavior than in one with contrarian behavior.
\subsubsection{Scenario three-one with contrarian agents}
For scenario three-one, three agents persuade the fourth agent. There are sixteen active agent combinations, of which eight follow the conformity rule [Eq.~\eqref{eq.confor31}] and eight follow the contrarian rule [Eq.~\eqref{eq.contra31}]. Eight of these combinations change the agents' opinion/state from 'up' to 'down', and vice versa. Therefore, the probabilities that the density of agents in the up state increases ($\gamma_{+}$) and decreases ($\gamma_{-}$) can be written explicitly as:
\begin{equation}\label{eq.gamma31_2nd}
\begin{aligned}
\gamma_{+}(c) = & \, {c}^{3} \left( 1-c \right) \left( 1-p \right) +3\,{c}^{2} \left( 1-c \right) ^{2} \left( 1-p \right) \\
& +3\,c \left( 1-c \right) ^{3}\lambda \,p+ \left( 1-c \right)^{4}\lambda\,p, \\
\gamma_{-}(c) = & \, c\,\left( 1-c \right) ^{3} \left( 1-p \right) +3\,{c}^{3} \left( 1-c \right) \lambda\,p \\
& +3\,{c}^{2} \left( 1-c\right) ^{2} \left( 1-p \right) +{c}^{4}\lambda\,p,
\end{aligned}
\end{equation}
where $p_1 = 1-p $ and $p_2 = \lambda\, p$ are the conformity and contrarian probabilities, respectively. For a stationary state, using the relation $m = 2\,c - 1$, the order parameter $m$ of the system is found to be:
\begin{equation}\label{order-parameter3_contra}
m = \pm \left(\dfrac{1-\left(1+5\,\lambda\right)p}{1-\left(1+\lambda\right)p}\right)^{1/2},
\end{equation}
which depends on the contrarian level $\lambda$, as shown in Fig.~\ref{fig.31contra}(a). The situation is similar to scenario two-two with contrarian agents, i.e., for large $\lambda$ the probability to reach a stalemate situation is high, and vice versa. Comparing scenario three-one with scenario two-two for contrarian agents, the critical point $p_c$ of scenario three-one is smaller than that of scenario two-two for the same value of $\lambda$.
The critical point $p_c$ of this scenario is given by
\begin{equation}\label{eq.critical31contra}
p_c = \dfrac{1}{1+5\, \lambda},
\end{equation}
where the critical point $p_c$ decreases monotonically as the contrarian level $\lambda$ increases. Eq.~\eqref{eq.critical31contra} indicates an order-disorder phase transition for all values of $\lambda > 0$. The critical point $p_c(\lambda)$ also separates the order and disorder phases, as exhibited in Fig.~\ref{fig.31contra}(b).
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{Fig04v1}
\caption{(a) Order parameter $m$ versus contrarian probability $p$ showing second-order phase transitions for the scenario three-one with contrarian agents [Eq.~\eqref{order-parameter3_contra}] for typical values of the contrarian level $\lambda > 0$ (solid line: $\lambda = 1$, dashed line: $\lambda= 1/2$, dash-dotted line: $\lambda= 1/4$, dotted line: $\lambda= 1/8$). (b) Critical point $p_c$ versus contrarian level $\lambda$ according to Eq.~\eqref{eq.critical31contra}. As the contrarian level $\lambda$ increases, the critical point decreases monotonically, and it separates the order and disorder phases of the system.}
\label{fig.31contra}
\end{figure}
\subsubsection{Scenario three-one with independence agents}
For scenario three-one with independence agents, eight agent combinations follow the conformity rule [Eq.~\eqref{eq.confor31}] and eight other agent combinations follow the independence rule [Eq.~\eqref{eq.indep31}]. Eight agent combinations make the agents change their opinion/state from 'up' to 'down', and vice versa. Therefore, the probabilities that the density of agents in the up state increases, $\gamma_{+}$, and decreases, $\gamma_{-}$, are given by
\begin{equation}\label{gamma3_indep}
\begin{aligned}
\gamma_{+}(c) = & \, 4\,f\,p \left( 1-c \right) +3\,{c}^{2} \left( 1-c \right) ^{2} \left( 1-
p \right) \\
&+{c}^{3} \left( 1-c \right) \left( 1-p \right), \\
\gamma_{-}(c) = & \, 4\,f\,p\,c+3\,{c}^{2} \left( 1-c \right)^{2} \left( 1-p \right) \\
&+ c
\left( 1-c \right) ^{3} \left( 1-p \right).
\end{aligned}
\end{equation}
Again, for the stationary condition $\gamma_{+} = \gamma_{-}$ and using the relation $m=2\,c-1$, the order parameter $m$ of the system is given by:
\begin{equation}\label{eq.orderindep31}
m = \pm \left(\dfrac{1-\left(1+16\,f\right)\,p}{1-p}\right)^{1/2}.
\end{equation}
Based on Eq.~\eqref{eq.orderindep31}, second-order phase transitions occur for typical values of the flexibility $f$, as shown in Fig.~\ref{fig.31indep}(a). The critical point $p_c$ also decreases monotonically as $f$ increases:
\begin{equation}\label{critical-point3_indep}
p_c = \dfrac{1}{1+16\,f},
\end{equation}
which indicates that majority and minority opinions coexist for $p < p_c(f)$ while a stalemate situation occurs for $p > p_c(f)$, for all values of $f > 0$. The situation is similar to scenario two-two with independence agents, i.e., for a large flexibility factor $f$, the probability to reach a stalemate situation is high, and vice versa. When $\lambda = f$, we find that $p_c$ of scenario three-one with independence agents is smaller than $p_c$ of scenario two-two with contrarian agents.
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{Fig05v1}
\caption{(a) Order parameter $m$ versus independence probability $p$ showing second-order phase transitions of the scenario three-one with independence agents [Eq.~\eqref{eq.orderindep31}] for typical values of flexibility $f$ (solid line: $f = 1$, dashed line: $f= 1/2$, dash-dotted line: $f= 1/4$, dotted line: $f= 1/8$). (b) Critical point $p_c$ versus flexibility $f$ according to Eq.~\eqref{critical-point3_indep}. The critical point $p_c$ decreases monotonically as the flexibility $f$ increases, and it separates the ordered and disordered phases of the system.}
\label{fig.31indep}
\end{figure}
\subsection{Probability density function}
In the previous part, we showed that the model with independence agents undergoes a second-order phase transition, with a critical point that depends on the flexibility factor $f$. In this part, we analyze the phase transition of the model from the probability density function $P(m,t)$ of the order parameter $m$ at time $t$. For simplicity and without loss of generality, we only consider the model with independence agents.
\subsubsection{Scenario two-two with independence agents}
The probability density function $P(m,t)$ of the order parameter $m$ at time $t$, for a large but finite system ($N \gg 1$), can be described approximately by the one-dimensional Fokker-Planck equation as follows~\cite{frank2005nonlinear}:
\begin{equation}\label{eq.fokker-planck}
\dfrac{N}{4}\dfrac{\partial}{\partial t} P(m,t) = \dfrac{\partial^2}{\partial m^2} (\xi_1 P(m,t)) -\dfrac{N}{2} \dfrac{\partial }{\partial m} (\xi_2 P(m,t)),
\end{equation}
where $\xi_1$ and $\xi_2$ are the diffusion and drift coefficients, respectively, defined as:
\begin{equation}\label{diffus-drift_coef}
\begin{aligned}
\xi_1 &= \left(\gamma_{+}(m,N)+\gamma_{-}(m,N)\right)/2, \\
\xi_2 & = \gamma_{+}(m,N)-\gamma_{-}(m,N).
\end{aligned}
\end{equation}
For a stationary condition, Eq.~\eqref{eq.fokker-planck} has a general solution as follows:
\begin{equation}\label{fokker-planck_solution}
P(m) = \dfrac{K}{\xi_1}\exp\int \dfrac{N\xi_2}{2\, \xi_1} dm,
\end{equation}
where $K$ is a normalization constant that satisfies $\int_{-1}^{+1} P(m,t) \,dm = 1$.
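Equation~\eqref{fokker-planck_solution} follows from a zero-flux argument (assuming reflecting boundaries at $m = \pm 1$): in the stationary state, Eq.~\eqref{eq.fokker-planck} implies

```latex
\frac{d}{dm}\left[\frac{d}{dm}\bigl(\xi_{1}P\bigr)-\frac{N}{2}\,\xi_{2}P\right]=0
\;\;\Longrightarrow\;\;
\frac{d}{dm}\bigl(\xi_{1}P\bigr)=\frac{N}{2}\,\xi_{2}P,
```

and with $Q \equiv \xi_{1}P$ this reads $Q'/Q = N\xi_{2}/\left(2\,\xi_{1}\right)$, whose solution $Q = K\exp\int N\xi_{2}/(2\,\xi_{1})\,dm$ is exactly Eq.~\eqref{fokker-planck_solution}.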
We obtain the diffusion $\xi_1 $ and drift $\xi_2$ coefficients of the scenario two-two with independence agents from Eqs.~\eqref{gamma2_independent} and \eqref{diffus-drift_coef}:
\begin{equation}\label{diffus-drift_2nd}
\begin{aligned}
\xi_1& =\left(\left( {m}^{2}+4\,f-1 \right) p-{m}^{2}+1\right)/4, \\
\xi_2&= \left(\left(m^3-\left(1+4\,f\right)m\right)p-m^3+m\right)/2,
\end{aligned}
\end{equation}
and from Eqs.~\eqref{fokker-planck_solution} and \eqref{diffus-drift_2nd}, the probability density function $P(m)$ of the system is given by:
\begin{equation}\label{eq.probdens22indep}
\begin{aligned}
&P(m) \sim \left(\left(m^2+4\,f-1\right)p-m^2+1\right)^{-1} \\
&\times \exp\left(\frac{4\,N\,p\,f \ln \left(\left(m^2+4\,f-1\right)p-m^2+1\right)}{1-p}+\frac{N\,m^2}{2} \right).
\end{aligned}
\end{equation}
A plot of Eq.~\eqref{eq.probdens22indep} for typical values of $p$, with $f = 1$ and $N = 200$, is shown in Fig.~\ref{fig.probdens22indep}. For small independence probability $p$, the distribution is polarized, with maxima of $P(m)$ at $\pm m(N)$, indicating that there is a majority opinion in the system. When $p$ increases, the two maxima approach each other and merge into a single maximum at $m = 0$. In other words, the system goes toward the nonpolarized state, which means that there is no majority opinion. This phenomenon indicates a typical second-order phase transition. From a sociophysics point of view, for $p = 0$, all members have the same opinion, indicated by the order parameter $m = \pm 1$; in other words, the system is in complete consensus. The degree of consensus decreases until the independence probability reaches $p = p_c$, and for $p>p_c$ the system is in a stalemate situation.
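The bimodal-to-unimodal change of $P(m)$ can also be checked numerically without the closed form: one integrates $N\xi_2/(2\,\xi_1)$ on a grid (trapezoid rule) and works in log space to avoid overflow at $N = 200$. The script below is an illustrative sketch with our own grid and tolerance choices, using the coefficients of Eq.~\eqref{diffus-drift_2nd}.

```python
import math

def density_peak(p, f=1.0, N=200, n=4001):
    """Return |m| at the maximum of the stationary density P(m) for
    scenario two-two with independence agents (numerical sketch)."""
    ms = [-0.999 + 1.998 * k / (n - 1) for k in range(n)]
    xi1 = lambda m: ((m * m + 4 * f - 1) * p - m * m + 1) / 4          # diffusion
    xi2 = lambda m: ((m ** 3 - (1 + 4 * f) * m) * p - m ** 3 + m) / 2  # drift
    # cumulative trapezoid of N*xi2/(2*xi1) gives log(xi1 * P) up to a constant
    logq = [0.0]
    for k in range(1, n):
        a, b = ms[k - 1], ms[k]
        logq.append(logq[-1] + 0.5 * (b - a)
                    * (N * xi2(a) / (2 * xi1(a)) + N * xi2(b) / (2 * xi1(b))))
    logp = [q - math.log(xi1(m)) for q, m in zip(logq, ms)]
    k_max = max(range(n), key=logp.__getitem__)
    return abs(ms[k_max])
```

For $f = 1$ the critical point is $p_c = 1/(1+4f) = 0.2$: below it, e.g., at $p = 0.05$, the peak sits near $|m| \approx 0.89$, close to the deterministic value $\sqrt{(1-(1+4f)p)/(1-p)} \approx 0.889$, while above it, e.g., at $p = 0.5$, the single peak is at $m = 0$.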
\begin{figure}[t!]
\centering
\includegraphics[width=8.5cm]{Fig06v1}
\caption{Probability density function $P (m)$ plotted versus order parameter $m$ for scenario two-two involving independence agents. In this plot, we set the population number $N = 200$ and the flexibility factor $f = 1$. The opinion polarization appears for small values of independence probability $p$, indicating the existence of a majority opinion. By increasing $p$, the system moves towards the non-polarized states, which form a single maximum at $m=0$, indicating no majority opinion in the system. This phenomenon is a typical second-order phase transition.}
\label{fig.probdens22indep}
\end{figure}
\subsubsection{Scenario three-one with independence agents}
The diffusion $\xi_1$ and drift $\xi_2$ coefficients of this system can be obtained from Eqs.~\eqref{gamma3_indep} and \eqref{diffus-drift_coef}:
\begin{equation}\label{diffus-drift3}
\begin{aligned}
\xi_1 =& \left(\left(1-p\right)\left(m^4-3\,m^2+2\right)+16\,p\,f\right)/8, \\
\xi_2 =& \left(\left(m^3-\left(1+16\,f\right)m\right)p-m^3+m\right)/4.
\end{aligned}
\end{equation}
By inserting Eq.~\eqref{diffus-drift3} into Eq.~\eqref{fokker-planck_solution} and integrating it, the probability density function $P(m)$ of the system can be obtained as follows:
\begin{equation}\label{prob-density}
\begin{aligned}
P(m) \sim & \left(\left(1-p\right)\left(m^4-3\,m^2+2\right)+16\,p\,f\right)^{-1}\\
&\times \exp \Bigg[ -N\tan^{-1} \left( \sqrt{\dfrac{p-1}{64pf+p-1}}\left(2m^2-3\right)\right) \\
& \times \dfrac{\left(32\,p\,f-p+1\right)}{2\sqrt{\left(64\,p\,f+p-1 \right)\left(p-1\right)}} \Bigg] \\
& \times \exp\left(-\frac{N}{4} \ln \left(\left(1-p\right)\left(m^4-3\,m^2+2\right)+16\,p\,f\right)\right).
\end{aligned}
\end{equation}
A plot of Eq.~\eqref{prob-density} for typical values of $p$, with $f = 1$ and $N = 200$, is shown in Fig.~\ref{fig.prob31}. The result is similar to that of scenario two-two with independence agents: the system undergoes a second-order phase transition. For small values of $p$, the system is in the polarized state, which corresponds to the existence of a majority opinion. As $p$ increases, the system goes toward the unpolarized state with a single maximum at $m=0$.
\begin{figure}[t]
\centering
\includegraphics[width=7.8 cm]{Fig07v1}
\caption{Probability density function $P (m)$ plotted versus order parameter $m$ for scenario three-one involving independence agents with $N = 200$ and $f = 1$ (same parameters as in Fig.~\ref{fig.probdens22indep}). Similar to Fig.~\ref{fig.probdens22indep}, the result for this scenario also exhibits a second-order phase transition.}
\label{fig.prob31}
\end{figure}
\subsection{Landau approach for phase transition}
In his theory~\cite{landau1937theory}, Landau assumed that the Gibbs free energy depends not only on thermodynamic parameters such as temperature and pressure but also on an order parameter such as the one in Eq.~\eqref{order-param}. Generally, one defines the Landau potential as a function of the order parameter $m$ and of any parameter describing the state of the system, e.g., in this case, the independence probability $p$. Therefore, to analyze the phase transition in this system, the first three terms of the simplified Landau potential can be written as follows:
\begin{equation}\label{landau-potential}
V(m,p) = V_0 + V_1 \, m^2 + V_2 \, m^4,
\end{equation}
where the parameters $V_1$ and $V_2$ can, in general, be functions of the contrarian or independence probability $p$. Based on Eq.~\eqref{landau-potential}, the phase transition occurs when $V_1=0$ and $V_2>0$.
To obtain the 'effective potential' of this system, we first define the 'effective force' as the difference between the probabilities that the up-spin density increases and decreases, $F = \gamma_{+}- \gamma_{-}$ \cite{nyczka2012opinion} (written as $F$ to avoid confusion with the flexibility $f$). The effective potential of the system is then obtained from $V(m) = -\int F(m) \, dm$. For the scenario two-two with contrarian agents, the effective potential is given by
\begin{align}\label{eff-potential}
V(m) = \left(p\,(1+2\lambda)-1 \right)m^2/4+\left(1-p\,(1-2\lambda)\right)m^4/8.
\end{align}
Therefore, from Eqs.~\eqref{landau-potential} and \eqref{eff-potential}, the parameters $V_1$ and $V_2$ are
\begin{align}\label{parameter-AB}
V_1= & \left(p\,(1+2\lambda)-1 \right)/4,\\
V_2 = &\left(1-p\,(1-2\lambda)\right)/8,
\end{align}
where the critical point corresponds to $V_1 = 0$, i.e.,
\begin{equation}\label{eq.30}
p_c = \dfrac{1}{1+2\lambda},
\end{equation}
is the same as Eq.~\eqref{critical-point22contra}. At the critical point $p = p_c$, we can write $V_2(p_c) = \lambda/\left(2+4\lambda\right)>0$ for all values of $\lambda > 0$. A plot of Eq.~\eqref{eff-potential} for several values of the contrarian probability $p$ is shown in Fig.~\ref{bis-potential}(a).
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{Fig08v1}
\caption{The effective potential $V(m)$ versus order parameter $m$ based on the Sznajd model on a complete graph for (a)~scenario two-two with contrarian agents and (b) scenario two-two with independence agents. Below the critical point $p_c = 1/3$ ($\lambda = 1$), the system with contrarian agents shown in panel (a) is in bi-stable states, which correspond to two meta-stable states. The bi-stable states also appear in case (b) with independence agents below the critical point $p_c = 1/5$ ($f = 1$). For both scenarios, the stability decreases as $p$ increases and moves towards the mono-stable state at $m = 0$. This situation also indicates that both scenarios undergo a second-order phase transition.}
\label{bis-potential}
\end{figure}
For scenario two-two with independence agents, the effective potential is given by
\begin{equation}\label{eff-potential_2indep}
\begin{aligned}
V(m) =& \, \Big( \left(1-\left(1+8\,f\right)p\right)-\left(2-\left(2+8\,f\right)p\right)m^2 \\
&+\left(1-p\right)m^4 \Big)/16,
\end{aligned}
\end{equation}
where the parameters $V_1$ and $V_2$ are given by:
\begin{align}
V_1 =& -\left(1-\left(1+4\,f\right)p\right)/8, \\
V_2 =& \left(1-p\right)/16, \quad V_2(p_c) = f/ (4+16\,f) >0.
\end{align}
Note that $V_2$ is positive for all $p < 1$, and in particular $V_2(p_c) > 0$ for all values of $f$. The critical point corresponding to $V_1 = 0$ is $p_c = 1/(1+4\,f)$, which is consistent with Eq.~\eqref{critical-point_indep22}. A plot of Eq.~\eqref{eff-potential_2indep} for several values of the independence probability $p$ is given in Fig.~\ref{bis-potential}(b).
For scenario three-one with contrarian agents, its effective potential (not plotted) is given by
\begin{equation}\label{eff-potential3_contra}
\begin{aligned}
V(m) =& \, \Big(\left(1-\left(1+\lambda\right)p\right)5\,m^4-\left(1-\left(1+5\lambda\right)p\right)10\,m^2 \\
&+\left(5-\left(5+13\lambda\right)p\right)\Big)/160,
\end{aligned}
\end{equation}
where the parameters $V_1$ and $V_2$ are given by
\begin{align}\label{eq.param-V_contra}
V_1 & = -\left(1-\left(1+5\lambda\right)p\right)/16, \\
V_2 & = \left(1-\left(1+\lambda\right)p \right)/32, \quad V_2(p_c) = \lambda/(8+40\,\lambda) > 0.
\end{align}
The critical point (when $V_1 = 0$) is $p_c =1/(1+5\,\lambda)$, which confirms Eq.~\eqref{eq.critical31contra}. The effective potential for the scenario three-one with independence agents (not plotted) is given by:
\begin{align}\label{eff-potential3_indep}
V(m) =~&\Big(\left(1-\left(1+32\,f\right)p\right)
-\left(2-\left(2+32\,f\right)p\right)m^2\nonumber\\
&+\left(1-p\right)m^4\Big)/32,
\end{align}
where the parameters $V_1$ and $V_2$ are given by
\begin{align}\label{param-AB3_indep}
V_1 =& -\left(1-\left(1+16f\right)p\right)/16, \\
V_2 =& (1-p)/32, \quad V_2(p_c) = f/(2+32\,f)>0.
\end{align}
The critical point in this case is $p_c = 1/(1+16\,f)$.
For all scenarios, based on the parameters $V_1$ and $V_2$, the system undergoes a second-order phase transition for all values of $\lambda$ and $f$. As exhibited in Figs.~\ref{bis-potential}(a) and~\ref{bis-potential}(b) for scenario two-two, for $p$ below the critical point the potential is in a bi-stable state, indicating an ordered phase. As $p$ increases, the two minima approach each other and merge into a monostable minimum at $m=0$, indicating a disordered phase. This phenomenon is a typical second-order phase transition. The same behavior is obtained for scenario three-one.
\subsection{Numerical simulation}
We perform numerical simulations for $p \in [0,1]$ for all scenarios to estimate the critical point of the model. The relevant control parameter is the ratio between the contrarian or independence probability and the conformity probability, $r = p_2/p_1$. We vary the total population $N$ (e.g., $N = 256, 512, 1024, 2048, 8192$) and use the finite-size scaling relations [Eqs.~\eqref{eq4}--\eqref{eq7}] to estimate the critical point and the critical exponents of the system. The initial up-fraction is $c = 0.5$ (disordered state).
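As an illustrative sketch (not the authors' production code; all parameter choices here are ours), the outflow update for scenario two-two with contrarian agents on a complete graph can be implemented as follows. For $p = 0$ only conformity acts, so the dynamics must end in full consensus, $|m| = 1$, while deep in the disordered phase ($p \gg p_c = 1/3$ for $\lambda = 1$) the order parameter fluctuates around zero.

```python
import random

def sznajd_two_two(N=100, p=0.0, lam=1.0, m0=0.2, steps=200_000, seed=1):
    """Outflow (Sznajd) dynamics for scenario two-two on a complete graph.

    At each step four distinct agents are drawn at random. If the first
    pair shares an opinion, the second pair adopts that opinion with
    probability 1 - p (conformity) or the opposite opinion with
    probability lam * p (contrarian); otherwise nothing happens.
    Returns the final order parameter m = (1/N) * sum of spins.
    """
    rng = random.Random(seed)
    n_up = int(round(N * (1 + m0) / 2))          # initial up-fraction c0 = (1 + m0)/2
    spins = [1] * n_up + [-1] * (N - n_up)
    for _ in range(steps):
        i, j, k, l = rng.sample(range(N), 4)
        if spins[i] == spins[j]:                 # the persuading pair agrees
            r = rng.random()
            if r < 1 - p:                        # conformity rule
                spins[k] = spins[l] = spins[i]
            elif r < 1 - p + lam * p:            # contrarian rule
                spins[k] = spins[l] = -spins[i]
    return sum(spins) / N
```

With these defaults, `sznajd_two_two(p=0.0)` ends in consensus ($m = \pm 1$), whereas `sznajd_two_two(p=0.9)` stays close to $m = 0$.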
\begin{figure*}[t]
\centering
\includegraphics[width=15cm]{Fig09}
\caption{(a)-(c) Numerical results of the order parameter, Binder cumulant, and susceptibility for the scenario two-two with contrarian agents based on the outflow dynamics with $\lambda = 1$ on a complete graph. The critical point can be obtained from the crossing of the Binder cumulant $U$ versus contrarian probability $p$ curves in (b). The best data collapses are obtained for $p_c \approx 0.33$, $\beta \approx 0.5$, $\gamma \approx 1$, and $\nu \approx 2$, indicating that this model is in the same universality class as the mean-field Ising model. Panel (d) shows that the numerical results with $N = 10000$ and several values of $\lambda$ are in good agreement with the analytical results [Eq.~\eqref{critical-point22contra}].}
\label{fig.num22contra}
\end{figure*}
\subsubsection{Scenario two-two with contrarian agents}
The control parameter for this scenario is $r = \lambda\,p / (1-p)$; therefore, it depends only on the contrarian level $\lambda$ and the contrarian probability $p$. The order parameter $m$ versus the contrarian probability $p$ is shown in Fig.~\ref{fig.num22contra}(a), where the inset graph shows the best collapse for all values of $N$. Following Ref.~\cite{binder1981finite}, the critical point can be obtained from the crossing of the Binder cumulant $U$ versus $p$ curves, which occurs at $p = p_c \approx 0.33$ for $\lambda = 1$, as shown in Fig.~\ref{fig.num22contra}(b). As shown in Fig.~\ref{fig.num22contra}(c), the ``peak'' of the susceptibility $\chi$ shifts towards the critical point $p_c \approx 0.33$ as $N$ increases. This result confirms the analytical result in Eq.~\eqref{order-parameter2nd}, in which for $m = 0$ and $\lambda = 1$, we find $p = p_c = 1/3$. We also estimate the critical exponents $\beta, \gamma,$ and $\nu$ using the finite-size scaling relations in Eqs.~\eqref{eq4}--\eqref{eq7} and find that the values that collapse the data for all $N$ are $\beta \approx 0.5$, $\gamma \approx 1$, and $\nu \approx 2$. These exponents are universal, i.e., we obtain the same values of $\beta, \nu,$ and $\gamma$ for all values of $N$. Based on the values of the critical exponents, our results indicate that this model is in the same universality class as the kinetic exchange opinion model with two-one agent interactions~\cite{crokidakis2014phase, crokidakis2012role, biswas2009model}, as well as the mean-field Ising model. The numerical estimate $\beta = 1/2$ also agrees with Eq.~\eqref{order-parameter2nd}, which can be written in the form $m \sim (p_c-p)^{\beta}$ for all values of $\lambda$.
Equation~\eqref{critical-point22contra} is also confirmed numerically, as shown in Fig.~\ref{fig.num22contra}(d), indicating that there are phase transitions for all $\lambda > 0$. One can see the good agreement between the analytical and simulation results. The critical contrarian probability $p_c$ decreases as $\lambda$ increases. For high $\lambda$, spin-flips occur more frequently due to the contrarian behavior, making the system more disordered. From a sociophysics point of view, at $p = 0$, a complete consensus is reached with all members having the same opinion. For $p < p_c$ a majority opinion exists, while for $p \geq p_c$ there is no majority opinion and the system is in a stalemate situation. Furthermore, in a society with strong contrarian behavior, the possibility of achieving a status quo or a stalemate situation is relatively high.
\subsubsection{Scenario two-two with independence agents}
The control parameter for this scenario is $r = fp/(1-p)$, and we only verify numerically the order parameter $m$ in Eq.~\eqref{order-param2_indep}, i.e., that a phase transition occurs for every value of the flexibility ($f > 0$). We use $N = 10000$ and average over $1000$ simulations for each point, as exhibited in Fig.~\ref{fig.orderindep22}. It can be seen that the numerical results for several values of $f$ agree with the analytical result. From a sociophysics point of view, majority and minority opinions still exist below the critical point $p_c(f)$, while for $p > p_c(f)$ there is no majority opinion, i.e., the system is in a stalemate situation.
One can see that the critical independence probability decreases as $f$ increases. For high flexibility $f$, spin-flips occur more frequently due to the independence behavior, making the system more chaotic. Therefore, the probability of the system reaching the stalemate situation is higher. In other words, for high $f$, the possibility of consensus is small.
\begin{figure}[t]
\centering
\includegraphics[width = 7 cm]{Fig10}
\caption{Order parameter $m$ versus probability of independence $p$ for typical values of $f$. Symbols represent the numerical results for the population $N = 10000$, while solid lines represent the analytical result from Eq.~\eqref{order-param2_indep}.}
\label{fig.orderindep22}
\end{figure}
\subsubsection{Scenario three-one with contrarian agents}
\begin{figure*}[t]
\centering
\includegraphics[width = 14cm]{Fig11}
\caption{(a)--(c) Numerical results of the order parameter, Binder cumulant, and susceptibility for the scenario three-one with contrarian agents based on the outflow dynamics with $\lambda = 1$ on a complete graph. The critical point can be obtained from the crossing of the Binder cumulant $U$ versus contrarian probability $p$ curves in (b). The best data collapses are obtained for $p=p_c \approx 0.166$, $\beta \approx 0.5$, $\gamma \approx 1$, and $\nu \approx 2$, indicating that this model is in the same universality class as the mean-field Ising model. Panel (d) shows that the numerical results with $N = 10000$ and several values of $\lambda$ are in good agreement with the analytical results [Eq.~\eqref{order-parameter3_contra}].}
\label{fig.numcontra31}
\end{figure*}
The numerical results for the scenario three-one with contrarian agents are given in Fig.~\ref{fig.numcontra31}. The order parameter $m$ versus the contrarian probability $p$ is shown in Fig.~\ref{fig.numcontra31}(a). We also find the critical point $p_c$ from the crossing of the Binder cumulant $U$ versus probability $p$ curves, which occurs at $p=p_c \approx 0.166$ for $\lambda = 1$, as shown in Fig.~\ref{fig.numcontra31}(b). The ``peak'' of the susceptibility $\chi$ shifts towards the critical point $p_c \approx 0.166$ as $N$ increases, as shown in Fig.~\ref{fig.numcontra31}(c). We also find the critical exponents $\beta \approx 0.5$, $\gamma \approx 1$, and $\nu \approx 2$ using the finite-size scaling relations [Eqs.~\eqref{eq4}--\eqref{eq7}] (inset graphs). From these results, the model is also found to be in the same universality class as the mean-field Ising model. The numerical estimate $\beta \approx 0.5$ agrees with Eq.~\eqref{order-parameter3_contra}, which can be written in the form $m \sim (p_c-p)^{\beta} = (p_c-p)^{1/2}$ for all values of $\lambda$. The numerical result for the order parameter $m$ for typical values of $\lambda$ also agrees with the analytical result [Eq.~\eqref{order-parameter3_contra}], as exhibited in Fig.~\ref{fig.numcontra31}(d). One can see that there are second-order phase transitions for typical values of $\lambda > 0$.
\begin{figure}[t]
\centering
\includegraphics[width = 7 cm]{Fig12}
\caption{Order parameter $m$ versus probability of independence $p$ for typical values of $f$. Symbols represent the numerical results for population $N = 10000$, while solid lines represent the analytical results from Eq.~\eqref{eq.orderindep31}.}
\label{fig.numorder31indep}
\end{figure}
\subsubsection{Scenario three-one with independence agents}
In this scenario, we also performed numerical simulations only for the order parameter $m$ with typical values of the flexibility $f$. We use the population $N = 10000$ and average over $1000$ simulations for each point, as exhibited in Fig.~\ref{fig.numorder31indep}. The numerical results agree with the analytical result [Eq.~\eqref{eq.orderindep31}] for typical values of $f$. We also compare scenario three-one with scenario two-two. The critical point $p_c(f)$ is smaller in scenario three-one than in scenario two-two for the same value of $f$. The same trend is obtained for both the scenario with independence agents and the scenario with contrarian agents, because a larger number of persuading agents makes the system more likely to end up in a stalemate situation.
From the analytical and simulation results, for the same scenario and the same value of $\lambda = f$, the critical point is larger for the scenario with contrarian agents than for the scenario with independence agents, because independence behavior makes the system more chaotic than contrarian behavior does. This situation corresponds to typical independence behavior, which involves actions independent of the group norm. In contrast, a spin with contrarian behavior flips in a more organized manner because it still reacts to the group opinion, albeit by opposing it.
\subsubsection{Remarks on the novelty of this study}
Based on the analytical and simulation results, all four scenarios discussed above follow the same dynamical equation:
\begin{equation}
\dfrac{dm}{dt} = k_1 m - k_2 m^3.
\end{equation}
This indicates that all four scenarios undergo a second-order phase transition, in which $m$ is the order parameter and the parameters $k_1$ and $k_2$ depend on the contrarian or independence probability $p$, the flexibility $f$, and the contrarian level $\lambda$. For example, the order parameter $m$ is explicitly described by Eqs.~\eqref{order-parameter2nd} and \eqref{order-param2_indep} for scenario two-two with contrarian agents and independence agents, respectively. Meanwhile, for scenario three-one with contrarian agents and independence agents, the order parameter $m$ is explicitly described by Eqs.~\eqref{order-parameter3_contra} and \eqref{eq.orderindep31}, respectively.
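As a worked example (a consistency check, not a new result), inserting the effective potential of Eq.~\eqref{eff-potential} for scenario two-two with contrarian agents into $dm/dt = -dV/dm$ gives, up to an overall positive time scale,

```latex
\frac{dm}{dt}
= \underbrace{\frac{1-\left(1+2\,\lambda\right)p}{2}}_{k_{1}}\,m
 -\underbrace{\frac{1-\left(1-2\,\lambda\right)p}{2}}_{k_{2}}\,m^{3},
```

so the nontrivial stationary states satisfy $m^{2} = k_{1}/k_{2} = \left(1-\left(1+2\,\lambda\right)p\right)/\left(1-\left(1-2\,\lambda\right)p\right)$, and $k_{1}$ changes sign exactly at $p_{c} = 1/\left(1+2\,\lambda\right)$, consistent with Eqs.~\eqref{order-parameter2nd} and \eqref{critical-point22contra}.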
The critical points $p_c$ of the scenarios with contrarian and independence agents depend on the ``noise parameters'' $\lambda$ and $f$, respectively [cf. Eqs.~\eqref{critical-point22contra} and \eqref{critical-point_indep22} for scenario two-two, and Eqs.~\eqref{eq.critical31contra} and \eqref{critical-point3_indep} for scenario three-one]. Interestingly, although the four scenarios have different microscopic interactions and even different types of ``noise'' ($\lambda$ or $f$), our numerical simulations show that they all have the same critical exponents, $\beta \approx 0.5$, $\nu \approx 2.0$, and $\gamma \approx 1.0$, indicating that they belong to the same universality class as the mean-field Ising model. This result can be considered the main novelty of this work.
\section{Summary and outlook}
\label{sec.4}
We have investigated the outflow dynamics (the Sznajd model) with four-agent (four-spin) local interactions in two different scenarios on a complete graph. In the first scenario, two agents persuade two other agents anywhere in the population; in the second scenario, three agents persuade a fourth agent anywhere in the population. For each scenario, we considered two types of social behavior, namely contrarian and independence. These social behaviors act like noises that cause the system to undergo an order-disorder phase transition. We analyzed the effect of both social behaviors on the phase transition of the system and compared the results.
Based on the calculations of the phase transition and the universality of the model, we found that all systems undergo a second-order phase transition for all values of the contrarian factor $\lambda$ and the flexibility $f$. The critical point $p_c$ depends on the contrarian factor $\lambda$ or on the flexibility $f$, and $p_c$ decreases monotonically as $\lambda$ or $f$ increases. For high contrarian or independence levels (a nonconservative society), the possibility of reaching a consensus is small, while the possibility of reaching a status quo or a stalemate situation is high. We also found that the critical point for scenario three-one is smaller than that for scenario two-two in both the contrarian and independence cases for the same value of $\lambda$ or $f$. In addition, the critical probability $p_c$ is smaller in the scenario with independence agents than in the scenario with contrarian agents for $\lambda = f$. Using finite-size scaling relations, the critical exponents for both scenarios with contrarian agents are $\beta \approx 0.5$, $\gamma \approx 1$, and $\nu \approx 2$. Therefore, our results suggest that the model is in the same universality class as the mean-field Ising model.
From this study, we suggest that the existence of a group with independent behavior may disrupt the social cohesion of a society more than a group with a tendency toward contrarian behavior. In either the two-two or the three-one scenario, the existence of a significant group with independent opinions in society makes reaching a consensus harder than contrarian behavior does. The difficulty of reaching a consensus amid socio-political disturbance is a formula for creating an unstable society. Thus, this model corroborates the view that a system with two strong political-social entities that oppose each other on many issues can be more stable than a single omnipotent political-social entity with many unorganized independent movements, especially during strong socio-political turbulence.
\section*{Data Availability}
The raw/processed data required to reproduce these findings are available to download from \verb|https://github.com/muslimroni/Sznajd2231|.
\section*{CRediT authorship contribution statement}
\textbf{R. Muslim:} Conceptualization, Methodology, Software, Formal analysis, Investigation, Data Curation, Writing - original draft, Visualization. \textbf{M.J. Kholili:} Validation, Formal analysis, Writing - review \& editing. \textbf{A.R.T. Nugraha:} Writing - review \& editing, Supervision, Funding acquisition.
\section*{Declaration of Interests}
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
\section*{Acknowledgments}
R.M. is supported by a postdoctoral fellowship under the LIPI/BRIN talent management program. We thank Dr. Rinto Anugraha NQZ from Gadjah Mada University for his guidance during R.M.'s graduate study.
\section{Introduction}
The theorem of Rabinowitz \cite{rabinowitz}, extending the work of Krasnosel'skii \cite{krasnoselskii}, provides a classical topological tool for analysis of global bifurcations.
It establishes that a continuous branch of fixed points bifurcating from a trivial solution either extends to infinity or connects back to the trivial solution at another bifurcation point.
The global bifurcation theorem of Alexander and Yorke \cite{alexanderyorke} establishes a counterpart of this result for branches of periodic orbits bifurcating from an equilibrium via a Hopf bifurcation (see also \cite{malletparetchow, FiedlerBook2,IV-B,AED,Wu1}). The theorem states that these branches
are either unbounded or connect to an equilibrium at another Hopf bifurcation point.
The latter of the two alternatives can sometimes be excluded by local analysis at the equilibrium point, in which case the theorem guarantees the existence of an unbounded branch of periodic orbits.
It is important to note that in the above theorems branches of periodic orbits are considered in Fuller space, i.e.~in the product of the
space of periodic functions (solutions) and the space of parameters which include the bifurcation parameter $\alpha$ and the unknown period $p$ of periodic orbits \cite{qingwen}.
Thus, a branch is unbounded if it contains elements for $\alpha$ arbitrarily close to the boundary of the parameter interval, or contains periodic orbits of arbitrarily large amplitude (norm $\|x\|$), or orbits of arbitrarily large period $p$, or several of these possibilities are combined.
In particular, it is a non-trivial problem to determine whether a branch is unbounded because it contains periodic orbits of arbitrarily large amplitude or because the periods are unbounded. An example of the latter possibility is a branch connecting a Hopf bifurcation point with a homoclinic bifurcation point. In this context, {\em a priori} estimates of the period play an important role. However, they are typically hard to establish, and
the question is further complicated by the fact that periods considered in the theorem are not necessarily the minimal period.
If the norm of periodic orbits along the branch is uniformly bounded, the above topological results do not provide any estimate for this norm.
The purpose of this paper is to prove the existence of continuous branches that contain periodic orbits of all amplitudes ranging from zero to $\|x\|=R$, where we can explicitly control $R$. In particular, under certain conditions, this branch is unbounded and $\|x\|$ ranges from zero to infinity, while the minimal period of the orbits is uniformly bounded. We prove the existence of a non-stationary periodic solution of any given norm $\|x\|=s$ satisfying $0< s<R$ by showing that the equivariant $S^1$-degree of a vector field $\mathcal F$ associated with the problem is non-trivial on the boundary of some domain $\Omega$ containing the periodic orbit. The proof is completed by Kuratowski's lemma, which ensures in a standard way that all these solutions are embedded in a connected branch of non-stationary periodic orbits stemming from a Hopf bifurcation point.
Because the periodic orbits of interest are neither small nor close to the bifurcation point, the analysis is non-local, and the domain $\Omega$
is designed in a special way. In order to compute the $S^1$-degree, we require
the vector field $\mathcal F$ to have a principal linear (with respect to $x$) component $a(\alpha,p)x$ on a part of the boundary of $\Omega$, in the sense that a certain projection $\mathcal Q \mathcal F$ of the vector field satisfies $\|\mathcal Q\mathcal F(\alpha,p,x) - a(\alpha,p)x\|\le \|a(\alpha,p)x\|$. This condition limits the size of the domain $\Omega$ and the maximal amplitude $\|x\|=R$ of the orbits that we can capture, unless the above estimate is global.
In addition, we prove the theorem in the equivariant setting where the system respects a group of spatial symmetries $\Gamma$. In other words, we consider a $\Gamma$-equivariant Hopf bifurcation. As a typical scenario, this bifurcation can give rise to multiple branches of periodic orbits characterized by different groups of spatio-temporal symmetries. In order to ensure the existence of a branch with a specific symmetry, and to estimate the range of norms of its elements, we restrict the operator of the problem to the fixed point space of the corresponding symmetry group and apply the non-local topological construction described above to the restricted operator. As we illustrate by examples, this approach allows us to handle not only generic $\Gamma$-equivariant systems but also a number of degenerate cases. One of them is the simultaneous occurrence of Hopf and steady-state bifurcations. In another degenerate (resonance) situation, the crossing number is undefined because the linearization has a pair of purely imaginary roots for all parameter values, and other roots cross the imaginary axis through this pair at the Hopf bifurcation point.
The paper is organized as follows. Section \ref{prelim} contains a brief account of
{the $S^1$-degree \cite{Dylawer}, which is the main equivariant topological tool used in the proofs
(see also \cite{AED} for the axiomatic approach to the $S^1$-degree and \cite{AED,IV-B} for a systematic exposition of the equivariant degree theory and its applications to symmetric Hopf bifurcation).}
The main result and its proof are presented in Sections \ref{main} and \ref{proof}. Section \ref{examples} contains three examples. Some notation of the latter section is explained in Appendix.
\section{Preliminaries}\label{prelim}
In this section, we provide some equivariant degree background.
\subsection{$S^1$-degree}\label{subsec:S1-degree}
Let $G$ be a compact Lie group acting on a metric space $X$ (see, for example, \cite{AED,Brocker-tomDieck}). For any $x \in X$, put $G(x)=\{gx \in X \; : \; g \in G\}$ and call it the orbit of $x$.
A set $Z \subset X$ is called $G$-invariant (in short, invariant) if it contains all its orbits. Given a (closed) subgroup $H \leq G$, denote by $X^H:= \{x \in X \, : \, hx = x \; \forall h \in H\}$ the set of all $H$-fixed points of $X$.
Assume $G$ acts on two metric spaces $X$ and $Y$. A continuous map $f : X \to Y$ is called $G$-equivariant
if $f(gx) = gf(x)$ for all $x \in X$ and $g \in G$. In particular, if the action of $G$ on $Y$ is trivial, then the equivariant map is called $G$-invariant. We refer to \cite{Brocker-tomDieck,GolSteShef,GS,AED} for the equivariant topology and representation theory background frequently used in the present paper.
Let $V$ be an orthogonal $S^1$-representation. Suppose that an open bounded set $\Omega \subset \mathbb R \oplus V$ is invariant with respect to the $S^1$-action, where we assume that $S^1$ acts trivially on $\mathbb R$. As is well known, for any $x \in \Omega$, the isotropy group satisfies $G_x = S^1$ or $G_x = \mathbb Z_k$, where $\mathbb Z_k$ stands for a cyclic group of order $k$.
We say that an equivariant map $f: \overline{\Omega} \to V$ is admissible if $f^{-1}\{0\} \cap \partial \Omega = \emptyset$. In this case, $(f,\Omega)$ is called an admissible pair. Similarly, a continuous map $h : [0,1] \times \overline{\Omega} \to V$ is called an admissible (equivariant) homotopy if $h(t,\cdot)$ is admissible for any $t \in [0,1]$. It is possible to axiomatically define a unique function $S^1$-deg which assigns to each admissible pair a finite linear combination $\sum_{k=1}^m n_{l_k}(\mathbb Z_{l_k})$, where $n_{l_k} \in \mathbb Z$
(cf.~\cite{AED}, pp. 109, 113). The following is a partial list of the axioms:
\medskip
\noindent\textbf{(A1) (Existence)} If $S^1\text{\rm -deg}\,(f,\Omega) = \sum_{k=1}^m n_{l_k}(\mathbb Z_{l_k})$ and $n_{l_k} \neq 0$ for some $k$, then there exists an $x\in \Omega$ such that $f(x) = 0$ and
$\mathbb Z_{l_k} \subset G_x$.
\medskip
\noindent\textbf{(A2) (Homotopy)} If $h:[0,1]\times \overline{\Omega} \to V$ is an admissible equivariant homotopy, then the value of
$S^1\text{\rm -deg}\,(h(\mu,\cdot),\Omega)$ is the same for each $\mu$.
\medskip
\noindent\textbf{(A3) (Additivity)} For two invariant open disjoint subsets $\Omega_1,\Omega_2 \subset \Omega$ with $f^{-1}(0)\cap \Omega\subset \Omega_1\cup\Omega_2$, one has
$$S^1\text{\rm -deg}\,(f,\Omega)= S^1\text{\rm -deg}\,(f,\Omega_1 )+ S^1\text{\rm -deg}\,(f,\Omega_2).$$
\medskip
\noindent
\textbf{(A4) (Normalization)} Denote by $\mathcal V_1$ the complex plane equipped with the $S^1$-action induced by complex multiplication:
$\gamma z := \gamma \cdot z$, $\gamma= e^{i\theta} \in S^1$, $z \in \mathbb C$.
Define a set $\Omega_0$ and a map $b : \mathbb R \oplus \mathcal V_1 \to \mathcal V_1$ by
$$
\Omega_0 = \Big\{(t,z) \in \mathbb R \oplus \mathcal V_1 \; : \; |t| < 1, \;\; {1 / 2} < \|z\| < 2\Big\}, \quad b(t,z)= (1 - \|z\| + it)\cdot z.
$$
Then, $S^1\text{\rm -deg}\,(b,\Omega_0)= 1 \cdot (\mathbb Z_1)$.
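As a quick numerical illustration of the normalization axiom (our own sanity check, not part of the theory), one can verify that the zeros of $b$ in $\Omega_0$ form exactly the $S^1$-orbit $\{(0,z)\,:\,|z|=1\}$ and that $b$ does not vanish on the sampled boundary of $\Omega_0$, so that $(b,\Omega_0)$ is an admissible pair:

```python
import numpy as np

def b(t, z):
    # the normalization map b(t, z) = (1 - |z| + i t) z of axiom (A4)
    return (1.0 - abs(z) + 1j * t) * z

# zeros inside Omega_0: t = 0 and |z| = 1 (the S^1-orbit of (0, 1))
for theta in np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False):
    assert abs(b(0.0, np.exp(1j * theta))) < 1e-12

# admissibility: b has no zeros on the boundary pieces |t| = 1,
# |z| = 1/2 and |z| = 2 of Omega_0
for theta in np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False):
    assert abs(b(1.0, np.exp(1j * theta))) > 0.9   # |b| = 1 when |z| = 1, |t| = 1
    for radius in (0.5, 2.0):
        # |b| = |1 - radius| * radius = 0.25 or 2.0 on these pieces
        assert abs(b(0.0, radius * np.exp(1j * theta))) >= 0.24
print("(b, Omega_0) is admissible on the sampled boundary")
```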
\medskip
\noindent
{\textbf{(A5) (Multiplicativity)} Suppose that $\mathcal U$ is a finite-dimensional space with the trivial $S^1$-representation, $U$ is an open bounded neighborhood of zero in $\mathcal U$ and $g : U \to \mathcal U$ is a continuous map with no zeros on $\partial U$. Then,
$$S^1\text{\rm -deg}\,(f \times g, \Omega \times U) = S^1\text{\rm -deg}\,(f,\Omega) \cdot \deg(g,U),$$
where ``$\deg$" stands for the Brouwer degree.}
\medskip
\noindent
\textbf{(A6) (Suspension)} Suppose that $\mathcal U$ is an orthogonal $S^1$-representation and $U$ is an open bounded invariant neighborhood of zero in $\mathcal U$. Then, $$S^1\text{\rm -deg}\,(f \times {\rm Id}, \Omega \times U) = S^1\text{\rm -deg}\,(f,\Omega).$$
{\begin{remark}\label{rem:excision}{\rm
In a standard way, using property {\bf (A3)}, one can deduce the {\it excision} property of the $S^1$-degree. Namely, if $f^{-1}(0) \cap \Omega \subset \Omega_0$, where
$\Omega_0 $ is an invariant open subset of $\Omega$, then $S^1\text{\rm -deg}\,(f ,\Omega) = S^1\text{\rm -deg}\,(f ,\Omega_0)$.}
\end{remark}}
{Combining the equivariant version of the standard Leray-Schauder projection with property {\bf (A6)},} one can define the $S^1$-degree for $S^1$-equivariant compact vector fields (see \cite{AED, IV-B} for details). Also, combining the axioms of the $S^1$-degree with some standard homotopy theory techniques, one can reduce the computation of the $S^1$-degree of the maps naturally associated with a system undergoing the Hopf bifurcation to the computation of the Brouwer degree. To be more precise, let $V$ be an orthogonal
$S^1$-representation with $V^{S^1}=\{v\in V:\ \gamma v= v \ \ \forall \gamma \in S^1\}=\{0\}$. Take the isotypical decomposition
$$
V = V_{k_1} \oplus V_{k_2} \oplus \cdots \oplus V_{k_s},$$
where each $V_{k_j}$ is modeled by the $k_j$-th irreducible representation. Define
\begin{equation}\label{eq:mathcal-O}
\mathcal O=\{(\lambda,v) \in \mathbb C \oplus V \, : \, \|v\|<2, \ {1}/{2}<|\lambda|<4\}.
\end{equation}
Now, consider a map $a:S^1 \to GL^{S^1}(V)$ and define $a_j:S^1 \to GL^{S^1}(V_{k_j})$ by
the formula $a_j(\lambda)= a(\lambda)_{|V_{k_j}}$ (see \cite[p.~284]{AED}). Let $f_a:\overline{\mathcal O} \to \mathbb R\oplus V$ be an
$S^1$-equivariant map defined by
\begin{equation}\label{eq:f-a}
f_a(\lambda,v)= \bigl(|\lambda|(\|v\|-1)+\|v\|+1, a\left({\lambda}/{|\lambda|}\right)v \bigr).
\end{equation}
The following formula {(combined with Property {\bf (A5)}}) plays an important role in our proofs:
\begin{equation}\label{eq:main-formula}
S^1\text{\rm -deg}\,(f_a,\mathcal O)= \sum_{j=1}^s \left( \deg(\text{det}_{\mathbb C}\circ a_j,B)\right)(\mathbb Z_{k_j}),
\end{equation}
where $B$ stands for the unit ball in $\mathbb C$ (cf. \cite{AED}, Theorem 4.23).
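Each Brouwer degree $\deg(\text{det}_{\mathbb C}\circ a_j,B)$ appearing in \eqref{eq:main-formula} is the winding number of the circle map $\lambda \mapsto \text{det}_{\mathbb C}(a_j(\lambda))$, $|\lambda|=1$, around the origin. The following numerical sketch (with assumed scalar test maps of our own choosing, not taken from the text) computes such winding numbers by accumulating argument increments:

```python
import numpy as np

def winding_number(f, samples=2048):
    """Winding number of f restricted to the unit circle, computed by
    accumulating increments of the argument of f(e^{i theta})."""
    theta = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    values = f(np.exp(1j * theta))
    args = np.angle(values)
    increments = np.diff(np.concatenate([args, args[:1]]))
    # fold increments into (-pi, pi] to undo the branch cut of angle
    increments = (increments + np.pi) % (2.0 * np.pi) - np.pi
    return int(round(increments.sum() / (2.0 * np.pi)))

# for a scalar block a_j(lambda) = lambda^k the expected degree is k
assert winding_number(lambda lam: lam**2) == 2
# all three zeros of lambda^3 - 0.5 lambda lie inside the unit disk
assert winding_number(lambda lam: lam**3 - 0.5 * lam) == 3
# a nonvanishing constant map has degree 0
assert winding_number(lambda lam: np.full_like(lam, 1.0 + 0.0j)) == 0
print("winding numbers agree with the expected Brouwer degrees")
```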
\subsection{
Spatio-temporal symmetries of periodic functions}
\label{subsec:twisted-subgr}
{If $\Gamma$ is a finite group and $W$ is a $\Gamma$-representation, then for any periodic function $x :\mathbb R \to W$, its
spatio-temporal symmetries are described by a subgroup $ H < \Gamma$ and a homomorphism $\varphi : H \to S^1 = \mathbb R / \mathbb Z$. This information is encoded in the graph of the homomorphism $\varphi$ which we will denote by $H^{\varphi}$. To be more specific, if $x$ is a $p$-periodic function with symmetry group $H^{\varphi}$, then for each $h \in H$, one has $h x (t - \varphi(h)p ) = x(t)$ for any $t$. Clearly, if $x$ is a non-constant function, then
$H^\varphi $ is a finite group. Several twisted subgroups important for the present paper are explicitly described in the Appendix.}
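For illustration (an assumed toy example of our own, not one of the twisted subgroups from the Appendix): take $\Gamma = \mathbb Z_2 = \{1,h\}$ acting on $\mathbb R$ by $hx = -x$, and $\varphi(h) = 1/2$. Then $x(t) = \sin t$ with $p = 2\pi$ has symmetry group $H^\varphi$, since $hx(t - \varphi(h)p) = -\sin(t-\pi) = \sin t = x(t)$:

```python
import numpy as np

p = 2.0 * np.pi          # period of x
phi_h = 0.5              # varphi(h) in S^1 = R/Z for the nontrivial element h
x = np.sin               # candidate periodic solution

# check h x(t - varphi(h) p) = x(t) on a sample of times,
# where h acts on R by sign reversal
t = np.linspace(0.0, p, 100)
assert np.allclose(-x(t - phi_h * p), x(t))
print("sin(t) has the expected Z_2 spatio-temporal symmetry")
```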
\section{Main result}\label{subsec:main-results}\label{main}
Let $\Gamma$ be a finite group and $V=\mathbb{R}^N$ an orthogonal $\Gamma$-representation.
Suppose $A: [\alpha_-,\alpha_+] \to L^{\Gamma}(V;V)$ is a continuous curve in the space of $\Gamma$-equivariant linear maps from $V$ to $V$ and $f: [\alpha_-,\alpha_+]\times V\to V$ is a continuous $\Gamma$-equivariant map (we assume that $\Gamma$ acts trivially on $[\alpha_-,\alpha_+]$). We are interested in the existence of branches of periodic solutions with a prescribed spatio-temporal symmetry for the equation
\begin{equation}\label{PNH}
\dot{x}= A(\alpha)x + f(\alpha, x),\qquad x\in V.
\end{equation}
Further, we are interested in effective estimates of the length of these branches.
To be more precise, following the standard scheme based on the normalization of the period (see, for example, \cite{AED}), instead of looking for $p$-periodic solutions to \eqref{PNH} with unknown
period $p$, one can introduce $p$ as an additional parameter and reduce the original problem to looking for $2\pi$-periodic solutions.
To this end, put $\beta:= {2\pi / p}$ and apply the change of variables
$$
u(t) = x\Big(\frac{pt}{2\pi} \Big)
$$
to obtain the problem
\begin{equation}\label{eq:per-normalization}
\begin{cases}
\beta \dot{u} = A(\alpha)u + f(\alpha,u),\\
u(0) = u(2\pi).\\
\end{cases}
\end{equation}
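As a sanity check of the period normalization (a sketch with an assumed linear example, not from the text): for the rotation $\dot x = Ax$ with $A = \begin{pmatrix} 0 & -\omega \\ \omega & 0\end{pmatrix}$, the $p$-periodic solution $x(t) = (\cos\omega t, \sin\omega t)$, $p = 2\pi/\omega$, rescales to $u(t) = x(pt/2\pi) = (\cos t, \sin t)$, which satisfies $\beta\dot u = Au$ with $\beta = 2\pi/p = \omega$:

```python
import numpy as np

omega = 3.0
A = np.array([[0.0, -omega], [omega, 0.0]])
p = 2.0 * np.pi / omega          # minimal period of x
beta = 2.0 * np.pi / p           # normalized frequency parameter, beta = omega

def u(t):
    # u(t) = x(p t / (2 pi)) = (cos t, sin t) is 2*pi-periodic
    return np.array([np.cos(t), np.sin(t)])

def u_dot(t):
    return np.array([-np.sin(t), np.cos(t)])

# beta u'(t) = A u(t) for all t, and u(0) = u(2 pi)
for t in np.linspace(0.0, 2.0 * np.pi, 50):
    assert np.allclose(beta * u_dot(t), A @ u(t))
assert np.allclose(u(0.0), u(2.0 * np.pi))
print("rescaled solution satisfies the normalized periodic problem")
```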
Denote
by $C:= C(S^1;V)$ the space of continuous $V$-valued maps on $S^1$ equipped with the sup-norm.
We naturally identify $2\pi$-periodic functions with the elements of $C$.
\begin{definition}\label{YTM}
{Let $\mathcal{B} \subset [\alpha_-,\alpha_+]\times[\beta_-,\beta_+]\times C$ be a set of non-constant solutions $(\alpha, \beta, x)$ of
problem \eqref{eq:per-normalization} such that $[\beta_-, \beta_+] \subset (0, \infty)$. The set $\mathcal{B}$ is called a branch joining the spheres of radius $r$ and $R$ centered at the origin of $C$ if $\mathcal{B}$ is a compact connected set in the space $[\alpha_-,\alpha_+]\times[\beta_-,\beta_+]\times C$
equipped with the product norm and $\mathcal{B}$
has a non-empty intersection with each of the sets $[\alpha_-,\alpha_+]\times[\beta_-,\beta_+]\times \{\|x\|_C=r\}$ and $[\alpha_-,\alpha_+]\times[\beta_-,\beta_+]\times \{\|x\|_C=R\}$.}
\end{definition}
{Denote $G:=\Gamma \times S^1$. To connect symmetric properties of the branches with equivariant spectral properties of $A(\alpha)$,
denote by $\widetilde{V}=\mathbb{C}^N$ the complexification of the representation $V$ and extend
the complex $\Gamma$-representation $\widetilde V$} to a real $G $-representation $^l\widetilde V$ by defining the $l$-folded action of $S^1$ by $e^{i{\theta}}\cdot v := e^{il{\theta}}v$.
The following family of finite-dimensional maps $\Delta_l(\alpha, \tau, \beta) \in L^G(^l\widetilde V; ^l\widetilde V)$ will play an important role in our considerations
(here $L^G(^l\widetilde V; ^l\widetilde V)$ stands for the space of $G$-equivariant linear operators in $^l\widetilde V$):
\begin{equation}\label{eq:without}
\Delta_l(\alpha, \tau, \beta) := l(\tau +i\beta)\text{Id} - A(\alpha).
\end{equation}
Further, take a twisted subgroup $H^\varphi < G$ (cf. Subsection \ref{subsec:twisted-subgr})
and denote by $^l\widetilde V^{H^\varphi}$ the fixed point space of $H^\varphi$ and by $\Delta^{H^\varphi}_l(\alpha, \tau, \beta)$ the restriction of $\Delta_l(\alpha,\tau, \beta)$ to $^l\widetilde V^{H^\varphi}$.
With this restriction, we associate
the map $\Lambda^{H^\varphi}_l:[\alpha_-,\alpha_+] \times \mathbb R \times \mathbb R \to \mathbb C$ defined by
\begin{equation}\label{eq:det-Lambda}
\Lambda^{H^\varphi}_l (\alpha, \tau, \beta) := \begin{cases} \text{det}_\mathbb R (A(\alpha)|_{V^H}) \;\;\,\quad\qquad \text{if} \quad l=0, \\
\text{det}_{\mathbb C}(\Delta^{H^\varphi}_l(\alpha,\tau, \beta)) \qquad \text{otherwise},
\end{cases}
\end{equation}
which characterizes symmetric properties of branches of periodic solutions. For a fixed $\alpha$, the map $\Lambda^{H^\varphi}_l$
can be identified with a polynomial of the complex variable $\tau+i\beta$.
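To illustrate \eqref{eq:without} and \eqref{eq:det-Lambda} in the simplest setting (an assumed $2\times 2$ example of our own, with the symmetry restriction trivial so that the restriction is all of $\widetilde V$): for $l=1$, the map $\lambda = \tau + i\beta \mapsto \Lambda_1(\alpha,\tau,\beta)$ is the characteristic polynomial of $A(\alpha)$, so its roots are exactly the eigenvalues of $A(\alpha)$:

```python
import numpy as np

A = np.array([[0.1, -2.0], [2.0, 0.1]])   # assumed matrix, eigenvalues 0.1 +/- 2i

def Lambda_1(lam):
    # det_C(Delta_1) with Delta_1 = (tau + i beta) Id - A(alpha), lam = tau + i beta
    return np.linalg.det(lam * np.eye(2, dtype=complex) - A)

# Lambda_1 vanishes exactly at the eigenvalues of A ...
for lam in np.linalg.eigvals(A):
    assert abs(Lambda_1(lam)) < 1e-10
# ... and is nonzero away from them, e.g. at the purely imaginary point i
assert abs(Lambda_1(1j)) > 0.1
print("roots of Lambda_1 coincide with the spectrum of A")
```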
{To estimate the length of a branch,}
the following quantity (which dominates the norm of the solution operator, acting from $L_2$ to $C$, associated with the linear periodic problem) will be used:
\begin{equation}\label{linear_bound}
M^{H^\varphi}(\alpha, \beta):= \Big(\sum_{l=0}^\infty \left|(\Delta^{H^\varphi}_l(\alpha, 0, \beta))^{-1}\right|^2\Big)^{1/2},
\end{equation}
where $|\cdot|$ is the matrix norm induced by the norm in $V$.
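In the simplest scalar case (assumed here purely for illustration: $V=\mathbb R$, trivial symmetry, $A(\alpha)\equiv a<0$), one has $\Delta_l(\alpha,0,\beta)=il\beta - a$, so $M(\alpha,\beta)^2 = \sum_{l\ge 0}(a^2+l^2\beta^2)^{-1}$, which can be checked against the classical identity $\sum_{l\ge 1}(l^2+c^2)^{-1} = \frac{\pi\coth(\pi c)}{2c} - \frac{1}{2c^2}$:

```python
import numpy as np

a, beta = -1.5, 2.0      # assumed scalar data: A(alpha) = a, frequency beta

# truncated series for M^2 = sum_{l >= 0} |(i l beta - a)^{-1}|^2
l = np.arange(0, 200001)
M_sq_truncated = np.sum(1.0 / (a**2 + (l * beta)**2))

# closed form via sum_{l>=1} 1/(l^2 + c^2) = pi coth(pi c)/(2c) - 1/(2c^2),
# with c = |a| / beta
c = abs(a) / beta
tail_sum = (np.pi / np.tanh(np.pi * c) / (2.0 * c) - 1.0 / (2.0 * c**2)) / beta**2
M_sq_exact = 1.0 / a**2 + tail_sum

assert abs(M_sq_truncated - M_sq_exact) < 1e-4
print("M =", np.sqrt(M_sq_exact))
```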
Given a set $\mathcal P \subset [\alpha_-,\alpha_+] \times \mathbb R_+ \times \mathbb R_+$,
define
\begin{equation}\label{eq:P-0-set}
\mathcal{P}_{\pm} := \mathcal{P}\bigcap \left(\{\alpha_\pm\} \times \mathbb R_+ \times \mathbb R_+\right) \quad {\rm and} \quad
\mathcal P_0 := \mathcal{P}\bigcap \left([\alpha_-,\alpha_+] \times \{0\} \times \mathbb R_+\right),
\end{equation}
{where $\mathbb R_+$ denotes the non-negative semi-axis.}
We denote by $\partial \Omega$ the boundary of a domain ${\Omega}$
and by $\overline{\Omega}$ the closure of ${\Omega}$.
\medskip
{We make the following assumptions:}
\smallskip
\noindent\textbf{(P0)} $A$ and $f$ in {\eqref{eq:per-normalization}} depend continuously on their arguments.
\smallskip
\noindent\textbf{(P1)} ${\Lambda^{H^\varphi}_0 (\alpha,0,0)}\neq 0 $ for all $\alpha\in [\alpha_-,\alpha_+]$.
\smallskip
\noindent\textbf{(P2)} There exists a bounded
domain $\mathcal{P} \subset [\alpha_-, \alpha_+] \times \mathbb R_+\times\mathbb R_+$ such that:
(i) $\mathcal{P}$ is homeomorphic to a closed ball;
\smallskip
(ii) $\Lambda^{H^\varphi}_1 (\alpha, \tau, \beta)\ne 0$ for all $(\alpha, \tau, \beta) \in \partial\mathcal{P}\setminus (\mathcal{P}_+ \bigcup \mathcal{P}_- \bigcup \mathcal P_0)$;
\smallskip
(iii) $\mathcal{P}_+$ and $\mathcal{P}_-$ contain a different number of roots of $\Lambda^{H^\varphi}_1 (\alpha, \tau, \beta)$ (counted according to their multiplicities).
\smallskip
\noindent\textbf{(P3)}
There exists an {open} set $\mathcal D \subset [\alpha_-,\alpha_+] \times {\{0\} \times \mathbb R_+}$ such that
\smallskip
(i) $\overline{\mathcal D}$ is homeomorphic to a closed disk;
\smallskip
(ii) $(\Lambda^{H^\varphi}_1)^{-1}(0)\bigcap \overline{\mathcal D} = (\Lambda^{H^\varphi}_1)^{-1}(0)\cap \mathcal P_0$;
\smallskip
(iii) $\Lambda_l^{H^\varphi} (\alpha, 0, \beta) \neq 0$ for any $l\in \mathbb N$ and any $(\alpha,{0, \beta})\in \partial\mathcal D$.
\smallskip
\noindent\textbf{(P4)} There exist $N(\alpha)$ and $0\le r<R$ such that for each $\alpha \in [\alpha_-,\alpha_+]$,
\begin{equation}
\label{estimate}
|f(\alpha, x)| \le N(\alpha) \max \{r,|x|\} \quad \text{for} \quad |x|\le R.
\end{equation}
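For instance (an assumed illustrative nonlinearity of our own, not from the text), $f(\alpha,x)=x^3$ on $|x|\le R$ satisfies \eqref{estimate} with $r=0$ and $N(\alpha)=R^2$, since $|x^3| = |x|^3 \le R^2|x|$; a numerical spot check:

```python
import numpy as np

R = 0.5
N = R**2                 # sector bound for f(x) = x^3 on the ball |x| <= R

x = np.linspace(-R, R, 1001)
f = x**3
# condition (P4) with r = 0: |f(x)| <= N |x| on the ball of radius R
assert np.all(np.abs(f) <= N * np.abs(x) + 1e-15)
print("the cubic nonlinearity satisfies (P4) with N =", N)
```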
\smallskip
\noindent\textbf{(P5)}
The following estimate holds:
$$N(\alpha)< \frac{1}{\sqrt{2\pi}M^{H^\varphi}(\alpha,\beta)}
\quad \text{for} \quad(\alpha,0,\beta)\in \partial \mathcal D.
$$
\smallskip
\begin{remark}\label{rem:hypotheses} {\rm Condition \textbf{(P0)} is a mild regularity requirement. Condition \textbf{(P1)} guarantees the absence of a steady-state bifurcation. Assumption \textbf{(P2)}(iii) provides the non-triviality of the (isotypical) crossing number, while \textbf{(P3)}(iii) is a weak version of the non-resonance condition. Assumptions \textbf{(P4)} and \textbf{(P5)} ensure that the vector field associated with the problem has a principal linear part on the annulus $r \le |x| \le R$.}
{\rm
It was observed by J. Ize \cite{Ize-Topolo-bifurcation} that the occurrence of the
Hopf bifurcation with prescribed symmetry is related to the non-triviality of the equivariant $J$-homomorphism associated with the equivariant operator equation. This observation gives rise to the following two questions: (a) Under which conditions on the right-hand side of \eqref{PNH} is the $J$-homomorphism correctly defined? (b)
Under which conditions is this homomorphism non-trivial? From this viewpoint, conditions \textbf{(P0)}, \textbf{(P1)} and \textbf{(P3)} are related to (a), while condition \textbf{(P2)} is related to (b).}
\end{remark}
{We are now in a position to formulate the main result of the present paper.}
\begin{theorem}
\label{thm:main-theorem}\label{t1}
Suppose conditions \textbf{(P0)--(P5)} are satisfied.
Then, there exists a branch of non-constant periodic solutions to system \eqref{PNH} joining the sphere of radius $r$ to the sphere of radius $R$ (cf.~Definition \ref{YTM}). Solutions of this branch have {spatio-temporal} symmetry at least $H^\varphi$.
\end{theorem}
\begin{remark}{\rm
It will be shown in the proof that the minimal period of the periodic solutions of the branch is uniformly bounded.
If $r=0$ in {\bf (P4)}, then the branch connects to a Hopf bifurcation point. If $R=\infty$ (the estimate in {\bf (P4)} is global), then the branch extends to infinity. A non-equivariant variant of the theorem was proved in \cite{KR} for a class of scalar equations in which the nonlinearity satisfies a global sector estimate.
}
\end{remark}
In \cite{AED}, the $\Gamma$-equivariant Hopf bifurcation was {studied} using an invariant known as the $\Gamma \times S^1$-equivariant twisted degree. Its values are finite linear combinations of the form
\begin{equation}\label{eq:-equiv-degree}
\sum_i n_{\varphi_i} (H^{\varphi_i}),
\end{equation}
where $n_{\varphi_i} \in \mathbb Z$ and $(H^{\varphi_i})$ is a twisted orbit type. Generically, the coefficients $n_{\varphi_i} $ give an algebraic count of
orbits of type $(H^{\varphi_i})$ and as such, completely describe the $\Gamma \times S^1$-equivariant $J$-homomorphism of an operator involved.
In this paper, we do not compute this total invariant. Instead, we just compute {the $S^1$-equivariant twisted degree
(in short, $S^1$-degree)} of the {associated} operator restricted to the $H^\varphi$-fixed point space {(essentially, we show the non-triviality of the corresponding $S^1$-equivariant $J$-homomorphism).} The advantage of this approach is that in a number of circumstances, which we illustrate by examples, the total $\Gamma \times S^1$-twisted degree is not defined; however, we still succeed in detecting branches with various symmetric properties.
{The use of the total $\Gamma \times S^1$-equivariant twisted degree is effective for studying {\it global} behavior of branches of periodic solutions,} and in the case when it is defined, each coefficient $n_{\varphi_i}$ (see \eqref{eq:-equiv-degree}) can be recovered by considering {the usual crossing numbers related to the restrictions of the operator involved to $H^{\varphi_j}$-fixed point subspaces with $(H^{\varphi_j}) > (H^{\varphi_i})$ and applying the so-called Recurrence Formula (see \cite{AED}, p. 124 and Theorem 4.25).}
\begin{remark}{\rm
To verify condition \textbf{(P2)}(iii), one has to
compute multiplicities of the roots of $\Lambda_1^{H^\varphi} (\alpha_{\pm}, \tau, \beta)$ (cf. \eqref{eq:det-Lambda}).
To this end, decompose $^1\widetilde V$ into its $\Gamma \times S^1$-isotypical components
$$ ^1 \widetilde V = \, ^1 \widetilde V_1 \oplus \cdots \oplus \,^1 \widetilde V_q,$$
where each $^1\widetilde V_k$ is modeled on the irreducible $\Gamma \times S^1$-representation $^1\widetilde {\mathcal V}_k$
($k = 1,...,q$). Fix $\alpha$ and assume that $\lambda_o = \tau_o + i \beta_o$ is a root of $\Lambda^{H^\varphi}_1 (\alpha, \cdot, \cdot)$.
Denote by $E(\lambda_o)$ the (generalized) eigenspace of $\lambda_o$ with respect to $\Delta_1 (\alpha, \tau, \beta)$ and let
$\mathfrak m_k(\lambda_o) := \dim (E(\lambda_o) \cap \, ^1 \widetilde V_k)/\dim \,^1\widetilde {\mathcal V}_k$
stand for the $^1\widetilde {\mathcal V}_k$-isotypical multiplicity
of $\lambda_o$, $k = 1,...,q$ (cf. \cite{AED}). Put $d_k^{H^\varphi}:= \dim \, ^1\mathcal V_k^{H^\varphi}$.
Then, the multiplicity of $\lambda_o$ considered as a root of $\Lambda_1^{H^\varphi} (\alpha, \tau, \beta)$ is given by
$$
\sum_{k=1}^q d_k^{H^\varphi} \mathfrak m_k(\lambda_o).
$$
}
\end{remark}
\section{Proof of Theorem \ref{thm:main-theorem}}\label{proof}
\subsection{Operator {reformulation}}
\label{subsec-operator-reform}
We consider the space $C^1=C^1(S^1;V)$ equipped with the standard norm
$\|u\|_{C^1} := \sup_{x \in S^1} |u(x)| + \sup_{x \in S^1} |\dot{u}(x)|$.
Recall that $S^1$ is identified with the segment $[0,2\pi]$ and the spaces $C$, $C^1$ are identified with the spaces of $2\pi$-periodic functions.
Define the differentiation operator $L=\frac{d}{dt}: C^1\to C$ and the
projector
$K: C^1\to {C}$ onto the subspace of constant functions given by
\[
K u (t) =\frac1{2\pi}\int_0^{2\pi} u(s)\,ds.
\]
We note that the operator $L+K$ maps
$C^1$ onto ${C}$ and is invertible. Its inverse operator is defined by
\[
((L+K)^{-1} u)(t)=\int_0^{2\pi} H(t-s) u(s)\,ds,
\]
where
\[
H(\tau)=\frac1{2\pi}(1+\pi-\tau), \ \ \ 0\le \tau<2\pi; \qquad H(\tau+2\pi)=H(\tau), \ \ \ \tau\in\mathbb{R},
\]
is the impulse response function of the linear periodic problem
\[
\dot v + \frac1{2\pi}\int_0^{2\pi} v(s)\,ds = u, \qquad v(0)=v(2\pi).
\]
In other words, the bounded operator $(L+K)^{-1}: {C}\to C^1$ is the solution operator of this problem, i.e.~$v=(L+K)^{-1}u$.
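A numerical spot check of this formula (our own, with the assumed test input $u(t)=\cos t$): since $v(t)=\sin t$ solves the displayed periodic problem with right-hand side $\cos t$, and the solution is unique, convolution with the kernel $H$ must reproduce it up to quadrature error:

```python
import numpy as np

def H(tau):
    # the kernel H, extended 2*pi-periodically
    tau = np.mod(tau, 2.0 * np.pi)
    return (1.0 + np.pi - tau) / (2.0 * np.pi)

n = 20000
s = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
ds = 2.0 * np.pi / n
u = np.cos(s)

# v(t) = int_0^{2 pi} H(t - s) u(s) ds at a sample of times t
t_check = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
v = np.array([(H(tc - s) * u).sum() * ds for tc in t_check])

# v = sin t is the unique periodic solution for u = cos t
assert np.max(np.abs(v - np.sin(t_check))) < 5e-3
print("(L + K)^{-1} applied to cos t returns sin t, up to quadrature error")
```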
Rewriting {\eqref{eq:per-normalization}} as an equivalent equation
\[
\dot u + \frac1{2\pi} \int_0^{2\pi}u(s)\,ds=\beta^{-1} A (\alpha) u + \beta^{-1} f(\alpha, u) + \frac1{2\pi} \int_0^{2\pi}u(s)\,ds
\]
with the $2\pi$-periodic boundary conditions, we see that the periodic problem for \eqref{PNH} is equivalent to the fixed point problem
\begin{equation}\label{eq:fixed-point}
u= {\mathcal J} (L+K)^{-1} \bigl(\beta^{-1}A(\alpha)u +Ku +\beta^{-1} F(\alpha, u)\bigr) =: {T(\alpha,\beta,u)}
\end{equation}
in the space $\mathbb R^2 \oplus {C}$, where $\mathcal{J}$ is the compact embedding operator from $ C^1$ to
$ {C}$ and $F : \mathbb R \oplus C \to C$ is given by $F(\alpha,u)(t):= f(\alpha,u(t))$.
{Also, by condition {\bf (P0)} and compactness of $\mathcal{J}$, the vector field $\text{Id} - T$
is compact. In addition, formula
$$(g,e^{i \theta }) u(t):= g u(t - \theta), \quad\quad (g,e^{i\theta}) \in \Gamma \times S^1 = G,$$
defines {\it isometric} Banach $G$-representations on ${C}$ and $ C^1$ and $\text{Id} - T:
\mathbb R^2 \oplus C \to C$ is $G$-equivariant (we assume that $G$ acts trivially on $\mathbb R^2$). In what follows, for any $s \in (r,R)$, we are going
to prove the existence of a solution $(\alpha,\beta,u)$ to \eqref{eq:fixed-point} such that $\|u\|_C = s$ and $G_u = H^{\varphi}$. Due to $G$-equivariance, this is equivalent to studying the solution set of the equation
\begin{equation}\label{eq:wth-s}
\mathfrak{F}_s(\alpha,\beta,u):= \Big(\|u\|_C - s, u - T(\alpha,\beta, u)\Big)=0,
\end{equation}
where $(\alpha,\beta) \in \mathbb R^2,$ $u \in C^{H^{\varphi}},$ $s \in (r,R).$
}
\subsection{Auxiliary lemmas}
{It is easy to see that the subspace $C^{H^{\varphi}} \subset C$ is an isometric $S^1$-representation. Therefore,
solutions to \eqref{eq:wth-s} will be studied in the subset $[\alpha_-,\alpha_+] \times [\beta_-,\beta_+] \times C^{H^{\varphi}} \subset \mathbb R^2 \oplus C$ using the $S^1$-degree theory (see Subsection \ref{subsec:S1-degree}). As is common in applications of (equivariant) degree-based methods, our approach includes the following steps:}
{(a) Construction of an open bounded $S^1$-invariant domain $\Omega \subset [\alpha_-,\alpha_+] \times [\beta_-,\beta_+] \times C^{H^{\varphi}}$ such that related fields and homotopies are $\Omega$-admissible;}
{(b) Construction of an $\Omega$-admissible $S^1$-equivariant deformation of $\mathfrak{F}_s$ to an associated linear field $a_s$;}
{(c) Showing that the $S^1$-degree of $a_s$ is different from zero;}
{(d) Establishing the existence of (connected) branches of solutions for equation \eqref{eq:wth-s}.}
\medskip
{To simplify our notations, we identify the set
$\mathcal D \subset [\alpha_-,\alpha_+] \times \{0\} \times \mathbb R_+$ with the subset
of $ [\alpha_-,\alpha_+] \times \mathbb R_+$ via $(\alpha,0,\beta) \to (\alpha,\beta)$ (cf. condition {\bf (P3)}) for which we use the same symbol
$\mathcal D$. Also, put
$\mathfrak W := C^{H^\varphi}$ and observe that $\mathfrak W$
admits the $S^1$-isotypical decomposition
\begin{equation}\label{eq:isotypical-l}
\mathfrak W = \overline{\bigoplus_{l=0}^{\infty} \mathfrak W_l},
\end{equation}
where $\mathfrak W_0$ coincides with the subspace of $V^H$-valued constant functions and as such, can be identified with the subspace $V^H$ of the phase space $V$, while $\mathfrak W_l$ can be identified with $^l\widetilde V^{H^\varphi}$ (see Section \ref{subsec:main-results}).
\begin{remark}\label{rem:invertibility}
{\rm Due to conditions {\bf (P1)} and {\bf (P3)}(iii), the operator $\beta L - A(\alpha) : (C^1)^{H^{\varphi}} \to \mathfrak W$ is invertible for every
$(\alpha,\beta) \in \partial \mathcal D$. Therefore, applying the same argument as in Subsection \ref{subsec-operator-reform}, one can easily show that equation \eqref{eq:wth-s} restricted to $\partial \mathcal D \times \mathfrak W$ is equivalent to
\begin{equation}
\Big(\|u\|_C - s, u - \mathcal J(\beta L - A)^{-1}F(\alpha,u)\Big) = 0,
\end{equation}
where
$(\alpha,\beta) \in \partial \mathcal D,$ $u \in \mathfrak W,$ $s \in (r,R).$}
\end{remark}
\noindent
Define $\Omega \subset [\alpha_-,\alpha_+] \times [\beta_-,\beta_+] \times \mathfrak W$ by
\begin{equation}\label{eq:domain-Omega}
\Omega: = \mathcal D \times B_R(0),
\end{equation}
where $B_R(0) := \{u \in \mathfrak W \, : \, \|u\|_C < R\}$ (cf. conditions {\bf (P3)}--{\bf (P5)}). The following statement is crucial for our considerations.
\begin{lemma}\label{l1}
Assume that conditions {\bf (P0)}, {\bf (P1)}, {\bf (P3)}--{\bf (P5)} are satisfied and $\Omega$ is defined by \eqref{eq:domain-Omega}. Then, for any $\mu \in [0,1]$ and any $s \in (r,R)$, the equation
\begin{equation}\label{eq:parameter}
\mathcal{F}_s(\alpha,\beta,\mu,u) := \Big(\|u\|_C - s, u - \mu\mathcal J(\beta L - A)^{-1}F(\alpha,u)\Big) = 0
\end{equation}
does not have solutions on $\partial \Omega$.
\end{lemma}
}
\begin{proof} {Due to the restrictions on $s$, equation \eqref{eq:parameter} does not admit solutions on
\begin{equation}\label{eq:dom1}
\partial \mathcal D \times \{u \in \overline{B_R(0)} \, : \, \|u\|_C = R \,\, {\rm or} \,\, 0 \leq \|u\|_C \leq r \}.
\end{equation}
For contradiction to the statement of the lemma, assume that \eqref{eq:parameter} admits a solution on
\begin{equation}\label{eq:dom2}
\partial \mathcal D \times \{u \in \overline{B_R(0)} \, : \, r < \|u\|_C < R \}.
\end{equation}
Denote $v :=F(\alpha,u)$. With this notation, \eqref{eq:parameter}
implies
\[
u=\mu \mathcal J (\beta L - A(\alpha))^{-1} v.
\]
Hence, }
\begin{equation}\label{prohh}
\|u\|_C \le \mu
\|(\beta L - A(\alpha))^{-1}\|_{L_2\to C} \|v\|_{L_2} \leq \|(\beta L - A(\alpha))^{-1}\|_{L_2\to C} \|v\|_{L_2}.
\end{equation}
On the other hand, according to {\bf (P4)}, the relations $ r {<} \|u\|_C {<} R$ imply $\|v\|_C=\|F(\alpha,{u})\|_C\le N(\alpha)\|u\|_C$. Combining this estimate with \eqref{prohh} and $\|v\|_{L_2}\le \sqrt{2\pi}\|v\|_C$, we obtain
\begin{equation}\label{eq:estim1}
\|u\|_C \le \sqrt{2\pi} N(\alpha)\|(\beta L - A(\alpha))^{-1}\|_{L_2\to C}\|u\|_C.
\end{equation}
The quantity $\|(\beta L - A(\alpha))^{-1} \|_{L_2\to C}$ has already been defined in
\eqref{linear_bound} as $M^{H^\varphi}(\alpha,\beta)$. By
{\bf(P5)},
\begin{equation}\label{eq:estim2}
q: = \sqrt{2\pi} N(\alpha) M^{H^\varphi}(\alpha,\beta) < 1.
\end{equation}
This {together with \eqref{eq:estim1}} gives
$$
\|u\|_C\le q\|u\|_C<\|u\|_C,
$$
which is a contradiction.
\end{proof}
Define the vector field
\begin{equation}\label{10*}
a_{s,\mu}(\alpha,\beta,u)=\Big(\|u\|_C-s,\; u- \mathcal J(L+K)^{-1}\bigl(\beta^{-1}A(\alpha)u +Ku +\mu\beta^{-1} F(\alpha, u)\bigr)\Big)
\end{equation}
for $(\alpha,\beta,u)\in \partial \Omega$. We note that the zero set of each of the vector fields \eqref{eq:parameter} and \eqref{10*} corresponds to the periodic problem
\[
\begin{cases}
\beta \dot u= A(\alpha)u+\mu f(\alpha,u),\\
u(0) = u(2\pi), \ \|u\|_C=s.
\end{cases}
\]
{Therefore, as a consequence of Lemma \ref{l1}, one has the following statement.
\begin{corollary}\label{cor-homotopy}
Assume that conditions {\bf (P0)}, {\bf (P1)}, {\bf (P3)}--{\bf (P5)} are satisfied and $\Omega$ is given by \eqref{eq:domain-Omega}. Then, for any
$s \in (r,R)$, the vector field
$\mathfrak F_s$ given by \eqref{eq:wth-s} is $\Omega$-admissibly homotopic to the vector field
\begin{equation}\label{eq:linear}
a_s(\alpha,\beta, u) := \Big(\|u\|_C - s, u - {\mathcal J} (L+K)^{-1} \bigl(\beta^{-1}A(\alpha)u +Ku\bigr)\Big).
\end{equation}
In particular, $S^1\text{\rm -deg}\,(\mathfrak F_s, \Omega)$ and $S^1\text{\rm -deg}\,(a_s, \Omega)$ are correctly defined and coincide (see Subsection \ref{subsec:S1-degree}, property {\bf (A2)}).
\end{corollary}
}
\subsection{Computation of $S^1\text{\rm -deg}\,(a_s, \Omega)$}
{Corollary \ref{cor-homotopy} essentially reduces studying the solution set of equation \eqref{eq:wth-s} to the computation of $S^1\text{\rm -deg}\,(a_s, \Omega)$. To this end, it is convenient to identify $\overline{\mathcal D}$ with a subset of $\mathbb C$ via
$(\alpha,\beta) \mapsto \lambda = \alpha + i\beta$, and using {\bf (P3)}(i), to assume without loss of generality that $\overline{\mathcal D}$ is a closed disc of radius $\varepsilon$
centered at $\lambda_o$.
Put
\begin{equation}\label{eq:a-lambda}
a(\lambda)u := u - {\mathcal J} (L+K)^{-1} \bigl(\beta^{-1}A(\alpha)u +Ku\bigr),
\end{equation}
and denote by $a_l(\lambda)$ the restriction of $a(\lambda)$ to $\mathfrak W_l$ (see \eqref{eq:isotypical-l}). Also,
put
\begin{equation}\label{def_n_k}
\mathfrak n_0 := {\rm sign}\,(\text{det}(a_0(\lambda))), \quad \mathfrak n_l := {\rm deg}\,(\text{det}_{\mathbb C}(a_l(\cdot)),\mathcal D),
\end{equation}
where ``${\rm deg}$" stands for the usual winding number. By condition {\bf (P1)} (resp. {\bf (P3)}(iii)), $\mathfrak n_0$ is independent
of $\lambda \in \overline{\mathcal D}$ (resp. $\mathfrak n_l$ is correctly defined).
Observe also that by compactness of the vector field \eqref{eq:a-lambda}, only finitely many $\mathfrak n_l$ are different from zero.
}
\begin{lemma}\label{lem_deg_val} {Under the assumptions {\bf (P0)}, {\bf (P1)}, {\bf (P3)}--{\bf (P5)} and notations
\eqref{eq:domain-Omega}, \eqref{eq:linear} and \eqref{def_n_k}, one has}
\begin{equation}\label{eq:formula-S^1-lin}
S^1\text{\rm -deg}\,(a_s, \Omega) = \mathfrak n_0 \sum_{l=1}^{\infty} \mathfrak n_l(\mathbb Z_l)
\end{equation}
{for every $ r < s < R$.}
\end{lemma}
\begin{proof}
{We will use a modification of the argument given in \cite{AED}.}
The main strategy is to deform the vector field $a_s$ and to modify
$\Omega$
in such a way that the computational formula \eqref{eq:main-formula} combined with property {\bf (A5)} of the degree
(see Subsection \ref{subsec:S1-degree}) can be applied.
{The proof follows three main steps. First, we make a finite-dimensional approximation of the compact vector field $a_s$. We note that each subspace $\mathfrak W_l$ of $\mathfrak W$ is invariant for the compact linear map $a(\lambda)$, and so is any subspace $\mathfrak W^m:= {\bigoplus_{l=0}^m \mathfrak W_l}$. We choose a sufficiently large subspace
$\mathfrak W^m$ and fix a (closed) linear subspace $Y \subset \mathfrak W$ complementing $\mathfrak W^m$ (without loss of generality, one can assume that $Y$ is also invariant for $a(\lambda)$).
Now, we define
\begin{equation}\label{eq:decompos}
a^m(\lambda) := a(\lambda)|_{\mathfrak W^m} + \text{Id} |_Y, \quad\quad a^m_s(\lambda,u) := (\|u\|_C - s, a^m(\lambda)u).
\end{equation}
Due to compactness of $a_s$,
the linear homotopy joining $a_s$ and $a^m_s$ is $\Omega$-admissible for a sufficiently large $m$. Put
\begin{equation}\label{eq:finite-dimen}
\Omega^m := \Omega \cap (\mathbb C \oplus \mathfrak W^m), \quad\quad \tilde{a}^m_s := a^m_s |_{\overline{\Omega^m}}.
\end{equation}
Using properties {\bf(A2)} and {\bf(A6)} (see Subsection \ref{subsec:S1-degree}), one obtains:
\begin{equation}\label{eq:deg-approx}
S^1\text{\rm -deg}\,(a_s, \Omega) = S^1\text{\rm -deg}\,(a^m_s, \Omega) = S^1\text{\rm -deg}\,(\tilde{a}_s^m, \Omega^m).
\end{equation}
Let $P_l : \mathfrak W^m \to \mathfrak W_l$ be a canonical equivariant projection (see, for example, \cite{AED}, p. 36).
Then, $\tilde{a}_s^m$ is given by
\begin{equation}\label{eq:projection-l}
\tilde{a}_s^m(\lambda,u) = \Big(\|u\|_C - s, \bigoplus_{l = 0}^m a_l(\lambda)P_l u\Big).
\end{equation}
Since $\overline{\mathcal D}$ is contractible to $\lambda_o$, there exists a deformation $\mu : \overline{\mathcal D} \times [0,1] \to \overline{\mathcal D}$ such that $\mu(\lambda,0) =
\lambda$ and $\mu(\lambda,1) \equiv \lambda_o$.
Since $a_0(\lambda)$ is invertible for every $\lambda \in \overline{\mathcal D}$, the formula
\begin{equation}\label{eq:const-for-a0}
h(\lambda,u,\nu):= \Big(\|u\|_C - s, a_0(\mu(\lambda,\nu))P_0 u + \bigoplus_{l=1}^m a_l(\lambda) P_l u\Big)
\end{equation}
determines an $\Omega^m$-admissible homotopy joining $\tilde{a}_s^m$ with the vector field $\hat{a}$ defined by
\begin{equation}\label{eq:after-def}
\hat{a}(\lambda,u) := \Big(\|u\|_C - s, a_0(\lambda_o)P_0 u + \bigoplus_{l=1}^m a_l(\lambda) P_l u\Big).
\end{equation}
Put
\begin{equation}\label{eq:defs}
B_0: = \{u \in \mathfrak W_0 \,:\, \|u\|_C < R\}, \;\;\;\; \Omega_{\ast} := \Omega^m \cap \Big(\mathbb C \oplus \bigoplus_{l=1}^m \mathfrak W_l \Big), \;\;\;\; a_{\ast} := \hat{a}|_{\Omega_{\ast}}.
\end{equation}
Since
\begin{equation}\label{eq:product-map}
\hat{a} = a_0(\lambda_o) \times a_{\ast} : \overline{B_{0} \times \Omega_{\ast} }\to \mathfrak W_0 \times \Big(\mathbb R \oplus \bigoplus_{l=1}^m
\mathfrak W_l\Big),
\end{equation}
one has (thanks to property {\bf (A5)} of the degree, Subsection \ref{subsec:S1-degree}):
\begin{equation}\label{eq:product-map1}
S^1\text{\rm -deg}\,(a_s, \Omega) = \mathfrak n_0 \cdot S^1\text{\rm -deg}\,(a_{\ast}, \Omega_{\ast}).
\end{equation}
Finally, to compute $S^1\text{\rm -deg}\,(a_{\ast}, \Omega_{\ast})$, take $\xi : \Omega_{\ast} \to \mathbb R$ defined by
$\xi(\lambda,u):= |\lambda - \lambda_o|(\|u\|_C - r) + \|u\|_C + \varepsilon {r / 2}$, put $\overline{a}(\lambda)u:= \bigoplus_{l=1}^m a_l(\lambda) P_l u$ and observe that the field $a_{\xi} := (\xi, \overline{a})$ is $\Omega_{\ast}$-admissibly homotopic to $a_{\ast}$. Since
\begin{equation}\label{eq:zeros1}
a_{\xi}^{-1}(0) = \{(\lambda,u) \in \Omega_{\ast} \, : \, u = 0, \; \; \;\; |\lambda - \lambda_o| = {\varepsilon/ 2}\},
\end{equation}
one can combine property {\bf (A2)} of the degree with its excision property (cf. Remark \ref{rem:excision}) to obtain:
\begin{equation}\label{eq:applic-excision}
S^1\text{\rm -deg}\,(a_{\ast}, \Omega_{\ast}) = S^1\text{\rm -deg}\,(a_{\xi},\Omega_{\ast} ) = S^1\text{\rm -deg}\,(a_{\xi},\Omega_1),
\end{equation}
where
$$
\Omega_1 := \{(\lambda,u) \in \Omega_{\ast} \, : \, {\varepsilon / 4} < |\lambda - \lambda_o| < \varepsilon \}.
$$
For any $(\lambda,u) \in \Omega_1$, set
$$
\eta(\lambda):= \lambda_o + {\varepsilon (\lambda - \lambda_o) \over 2|\lambda - \lambda_o|}
$$
and define $b : \Omega_1 \to \mathbb R \oplus \mathfrak W^m$ by $b(\lambda,u) := (\xi(\lambda,u),\overline{a}(\eta(\lambda))u)$.
From \eqref{eq:zeros1} it follows that
$a_{\xi}^{-1}(0) = b^{-1}(0)$, hence $a_{\xi}$ and
$b$ are $\Omega_1$-admissibly homotopic.
To complete the proof, it remains to take a homeomorphism of $\Omega_1$ onto $\mathcal O$ (see \eqref{eq:mathcal-O}), to replace the above function
$\xi$ by $|\lambda|(\|v\|_C-1)+\|v\|_C+1$ (see \eqref{eq:f-a}),
and to apply formula \eqref{eq:main-formula}.}
\end{proof}
{As a consequence of Lemma \ref{lem_deg_val}, we have the following statement.}
\begin{corollary}\label{cor:S1-non-zero}
Under the assumptions {\bf (P0)}--{\bf (P5)} and notations \eqref{eq:domain-Omega}, \eqref{eq:linear} and \eqref{def_n_k},
$$S^1\text{\rm -deg}\,(a_s, \Omega) \not=0.$$
\end{corollary}
\begin{proof}
By condition {\bf (P2)}(i), the local Brouwer degree of $\Lambda^{H^{\varphi}}_1 : \partial \mathcal P \to \mathbb C$ is correctly defined and equal to zero.
Combining this with condition {\bf (P2)}(ii) and excision of the local Brouwer degree yields
\begin{equation}\label{eq:excis}
\deg(\Lambda^{H^\varphi}_1,\partial \mathcal P) = \deg(\Lambda^{H^\varphi}_1,\mathcal P_+) + \deg(\Lambda^{H^\varphi}_1,\mathcal P_-) +
\deg(\Lambda^{H^\varphi}_1,\mathcal P_0) = 0.
\end{equation}
Denote by $\mathfrak t_{\pm}$ the number of roots of $\Lambda^{H^\varphi}_1$ in $\mathcal P_{\pm}$ (counted according to their multiplicities). Obviously, $\mathfrak t_{\pm} = \pm \deg(\Lambda^{H^\varphi}_1,\mathcal P_{\pm})$. Combining this with the $\mathbb Z_2$-equivariance of
$\Lambda^{H^\varphi}_1$, conditions {\bf (P2)}(iii) and {\bf (P3)}(ii) and formula \eqref{eq:excis} yields
$$
0 \not= \mathfrak t_- - \mathfrak t_+ = \deg(\Lambda^{H^\varphi}_1,\mathcal P_0) = 2 \deg(\Lambda^{H^\varphi}_1,\mathcal D) = 2\mathfrak n_1,
$$
and the result follows from \eqref{eq:formula-S^1-lin}.
\end{proof}
\subsection{Completion of the proof of Theorem \ref{thm:main-theorem}}
{Combining Corollaries \ref{cor-homotopy} and \ref{cor:S1-non-zero} with properties {\bf (A1)} and {\bf (A2)} of the degree (see Subsection
\ref{subsec:S1-degree}) implies the existence of a solution to equation \eqref{eq:wth-s} for each $s\in(r,R)$ and then, by compactness, also for $s=r$ and $s=R$. }
{To show that these solutions are not constant, notice that if $(\alpha_1,\beta_1, u_1)$ is a constant solution of \eqref{eq:wth-s}, then $(\alpha_1,\beta^*, u_1)$ is a solution of \eqref{eq:wth-s} for any $\beta^*$. In particular, we can choose $\beta^*$ such that $(\alpha_1,\beta^*)\in\partial\mathcal D$, which contradicts Lemma \ref{l1}.
}
{Finally, to show that the solution set of equation \eqref{eq:fixed-point} contains a compact connected branch joining the spheres
$\{\|u\|_C=r\}$ and $\{\|u\|_C=R\}$, one can use the standard technique (see, for example, \cite{AED,orig_slid} for details) based on the following statement
(see \cite{Kurat}, Theorem 3, p. 170).
\begin{lemma}[Kuratowski]\label{lem:Kur}
Let $X$ be a metric space, $A,B \subset X$ two disjoint closed sets, and $K$ a compact set in $X$ such that $K \cap A \not=\emptyset\not=K \cap B$.
If the set $K$ does not contain a connected component $K_o$ such that $K_o \cap A \not=\emptyset\not=K_o \cap B$, then there exist two disjoint open sets $V_1$ and $V_2$ such that $A \subset V_1$, $B \subset V_2$ and $A \cup B \cup K \subset V_1 \cup V_2$.
\end{lemma}
}
\section{Examples}\label{examples}
In this section we consider applications of Theorem \ref{thm:main-theorem}.
In the first example we consider a system without symmetry and compute estimates of $r$ and $R$;
that is, we guarantee not only the existence of a branch of periodic solutions but also estimate its length.
The remaining examples refer to a number of circumstances where
standard genericity assumptions are not satisfied but Theorem \ref{thm:main-theorem} is still applicable.
Estimates of the length of the branch can be obtained there as well, but for convenience we consider nonlinearities with sublinear growth and only present results of the form ``there exists an $R$ such that a branch of solutions joins the trivial equilibrium to the sphere of radius $R$''.
In the second example, we consider a system of coupled oscillators which undergoes a Hopf bifurcation and a steady state bifurcation simultaneously. In this case, restriction to $H^\varphi$-fixed point spaces allows us to ``separate'' these two bifurcations. An additional assumption that the nonlinearity of individual oscillators is odd allows us to refine these results to be more inclusive of various {non-generic} scenarios.
Finally, in the third example we treat the case when a pair of purely imaginary eigenvalues persists
independently of the bifurcation parameter, and other eigenvalues cross through them as the parameter is varied. Again, in this case the restriction to $H^\varphi$-fixed point spaces allows us to ``separate'' these eigenvalues.
\subsection{Example 1}
Consider a single Van der Pol oscillator given by
\begin{equation}\label{ind_vdp}
\begin{aligned}
\dot x_1 &= x_2,\\
\dot x_2 &= -x_1 - x^2_1 x_2 + \alpha x_2.
\end{aligned}
\end{equation}
In this case, the group $H^\varphi$ is trivial, $$A(\alpha)= \begin{bmatrix}
0 & 1\\
-1 & \alpha
\end{bmatrix}$$ and $f(\alpha, x) = (0,-x_1^2x_2)^T.$ To apply the main theorem we begin by identifying that $$\Lambda_l(\alpha, \tau, \beta)= l^2(\tau+i\beta)^2 - \alpha l(\tau+i\beta) + 1.$$ Given any $\alpha_-<0<\alpha_+$, it is easy to see that if we take $$\mathcal P = [\alpha_-,\alpha_+] \times [0,\tau_*] \times [0,\beta_*]$$
with sufficiently large $\tau_*, \beta_*$, then
conditions {\bf(P0)}--{\bf(P2)} are satisfied. The main challenge is now to construct the set $\mathcal D$ in such a way that condition {\bf (P3)} is satisfied and the estimate for $R$ can be optimized. Satisfying condition {\bf (P3)} only requires that $(\alpha,\beta)=(0,1)\in \mathcal D$, while $(0,1/l) \notin \partial \mathcal D$ for $l\ge 2$. In particular, if $(\alpha,\beta)\neq (0, 1/l)$ for $l\in\mathbb{N}$, then the number \eqref{linear_bound} is given by
\[
M(\alpha,\beta)=\Big( \sum_{l=0}^\infty \frac{2(l^2\beta^2+1)+\alpha^2}{(l^2\beta^2-1)^2+l^2\alpha^2\beta^2} \Big)^{1/2}.
\]
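As a consistency check (ours, not part of the original argument), the $l$-th summand in this series is exactly the squared Frobenius norm of the matrix $(il\beta\,\mathrm{Id} - A(\alpha))^{-1}$ for the linearization of \eqref{ind_vdp}. The following plain-Python sketch verifies this identity numerically:

```python
# Consistency check: for A(alpha) = [[0, 1], [-1, alpha]], the l-th summand
# in the series for M(alpha, beta)^2 equals the squared Frobenius norm of
# the matrix (i*l*beta*Id - A(alpha))^{-1}.
def frobenius_sq_resolvent(l, alpha, beta):
    z = complex(0.0, l * beta)                    # i*l*beta
    m11, m12, m21, m22 = z, -1.0, 1.0, z - alpha  # entries of z*Id - A(alpha)
    det = m11 * m22 - m12 * m21                   # = 1 - l^2 b^2 - i*l*alpha*b
    inv_entries = (m22 / det, -m12 / det, -m21 / det, m11 / det)
    return sum(abs(w) ** 2 for w in inv_entries)

def summand(l, alpha, beta):
    num = 2.0 * (l * l * beta * beta + 1.0) + alpha * alpha
    den = (l * l * beta * beta - 1.0) ** 2 + l * l * alpha * alpha * beta * beta
    return num / den
```

Both functions agree to machine precision for any $(l,\alpha,\beta)$ away from the singular set $\{\alpha = 0,\ l\beta = 1\}$.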
Hence, if we take $\mathcal D$ to be any sufficiently small disk surrounding the point $(0,1)$, then $$
\mathcal N := \inf_{(\alpha,\beta)\in \partial \mathcal D} \frac{1}{\sqrt{2\pi}M(\alpha,\beta)} >0.
$$
The
form of $f$ implies that $|f(\alpha,x)|\le \mathcal N|x|$ for all $x=(x_1,x_2)^T$ with $|x|\le \sqrt{\mathcal N}$. We can therefore see that conditions {\bf (P4)}, {\bf (P5)} are satisfied with $N(\alpha)=\mathcal N$, $r=0$ and any $R <\sqrt{\mathcal N}$. Therefore Theorem \ref{thm:main-theorem} guarantees the existence of a branch of periodic solutions joining the zero equilibrium with the sphere $\{\|x\|_C=\sqrt{\mathcal N}\}$.
However, we can maximize the number $\mathcal N$ by choosing the domain $\mathcal D$ appropriately. For $\mathcal D$ to contain the point $(0,1)$, the boundary $\partial \mathcal D$ must intersect the line segment $\{(0, \beta) : \beta \in (0,1)\}$. If we therefore identify the point $(0, \beta_*)$ which minimizes the function
$
M(0,\beta)
$
along this line segment and take $\partial {\mathcal D}$ to be the level curve of $M(\alpha,\beta)$ passing through that point, then $\mathcal N$ is maximized.
Noting that the function
\[
M(0,\beta)=\sqrt{1+\left(\frac{\pi}{\beta} \csc \frac{\pi}{\beta} \right)^2}
\]
achieves its global minimum at the point $\beta_*\approx0.699$, which is determined as a root of the equation $\tan (\pi/\beta)=\pi/\beta$,
we obtain in this way
that there exists a branch of periodic solutions joining the trivial equilibrium to the sphere of radius $R\approx 0.3$, see Figure \ref{f1}. The same scheme can be used to construct the domain $\mathcal D$ satisfying the conditions of Theorem \ref{thm:main-theorem} and to obtain an estimate for $R$
for the general non-equivariant system \eqref{PNH} undergoing a generic Hopf bifurcation.
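The numerical values $\beta_*\approx 0.699$ and $R\approx 0.3$ quoted above can be reproduced directly from the series defining $M(0,\beta)$. The following sketch (plain Python; the bisection bracket and the truncation level are ad hoc choices of ours) solves $\tan(\pi/\beta)=\pi/\beta$ and evaluates $R=\sqrt{\mathcal N}$ at the minimizer:

```python
import math

def optimality(beta):
    # tan(pi/beta) = pi/beta, rewritten as sin(t) - t*cos(t) = 0
    # with t = pi/beta, to avoid the poles of tan
    t = math.pi / beta
    return math.sin(t) - t * math.cos(t)

def bisect(f, a, b, tol=1e-13):
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

beta_star = bisect(optimality, 0.6, 0.8)        # ~0.699

def M0(beta, terms=10**5):
    # truncation of the series defining M(0, beta); the tail is O(1/terms)
    s = sum(2.0 * (l * l * beta * beta + 1.0)
            / (l * l * beta * beta - 1.0) ** 2 for l in range(terms))
    return math.sqrt(s)

N = 1.0 / (math.sqrt(2.0 * math.pi) * M0(beta_star))
R_est = math.sqrt(N)                            # ~0.29
```

At the root $t_* = \pi/\beta_*$ of $\tan t = t$ one has $\sin^2 t_* = t_*^2/(1+t_*^2)$, so $M(0,\beta_*)^2 = 2 + t_*^2$, which the truncated series reproduces.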
\begin{figure}[htb!]
\centering
\includegraphics[width=0.45\textwidth]{levelline1} \qquad \includegraphics[width=0.46\textwidth]{levelline}\\
\qquad {\bf (a)} \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad {\bf (b)}
\caption{Left: The function $M(0,\beta)$ with the global minimum at the point $\beta_*\approx 0.7$. Right: The domain ${\mathcal D}$ (gray) bounded by the level curve $M(\alpha,\beta)=M(0,\beta_*)$ of the function $M(\alpha,\beta)$.}
\label{f1}
\end{figure}
\subsection{Example 2}
Our second example illustrates the situation where the spectrum of $A(\alpha)$ contains purely imaginary eigenvalues for some value $\alpha_0$ of the parameter $\alpha$ and, simultaneously, $\det A(\alpha_0) = 0$. This contradicts the usual ``absence of the steady state bifurcation'' condition. However, we overcome this by considering \eqref{eq:det-Lambda} instead of \eqref{eq:without} with properly chosen $H^{\varphi}$ and applying
Theorem \ref{thm:main-theorem}. In the following example complex eigenvalues cross the imaginary axis transversally at the bifurcation point $\alpha=\alpha_0$. We use the notations adopted
in \cite{BKRZ-Hysteresis} and \cite{HBKR}.
We begin with the system
\begin{equation}\label{eq:VdP-1}
\begin{aligned}
\dot j_m &= -{R \over L} j_m + {1 \over L} u_m,\\
\dot u_m &= -{1 \over C} j_m +{\alpha \over C} u_m - {\sigma \over C} u^3 _m
\end{aligned}
\end{equation}
describing an LCR circuit with a cubic current-voltage characteristic (here $\alpha$ is a bifurcation parameter),
which can be rewritten as the classical Van der Pol equation. We further consider a symmetrically coupled system of eight identical oscillators \eqref{eq:VdP-1}, which are arranged in a cube-like configuration. More precisely, using the vector notation $j= (j_1,...,j_8)^T$, $u = (u_1,...,u_8)^T$ and $u^3 = (u_1^3,...,u_8^3)^T$, one can represent the corresponding system as follows:
\begin{equation}\label{eq:VdP-8}
\begin{aligned}
\dot j &= -{R \over L} j + {1 \over L} u,\\
\dot u &= -{1 \over C} j +{\alpha \over C} u - {\sigma \over C} u^3 + {\rho \over 2C}\mathcal K u,
\end{aligned}
\end{equation}
where
\begin{equation}\label{eq:K}
\mathcal K =\begin{bmatrix}
-3 & 1 & 0 & 1 & 1 & 0 & 0 & 0 \\
1 & -3 & 1 & 0 & 0 & 1 & 0 & 0 \\
0 & 1 & -3 & 1 & 0 & 0 & 1 & 0 \\
1 & 0 & 1 & -3 & 0 & 0 & 0& 1 \\
1 & 0 & 0 & 0 & -3 & 1 & 0 & 1 \\
0 & 1 & 0 & 0 & 1 & -3 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 & 1 & -3 & 1 \\
0 & 0 & 0& 1 & 1 & 0 & 1 & -3
\end{bmatrix}
\end{equation}
(we assume that the oscillators are coupled by resistors having the same conductivity $\rho$ as in \cite{BKRZ-Hysteresis}). Denote
by $V:= \mathbb R^{16}$ the phase space of \eqref{eq:VdP-8}. Clearly, $V$ is an $O_4$-representation, where $O_4 = S_4 \times O_1$
acts by permuting pairs of coordinates $(j_m,u_m)$, $m = 1,...,8$. In addition, system \eqref{eq:VdP-1} respects the antipodal symmetry, meaning that
system \eqref{eq:VdP-8} is $\Gamma := \mathbb Z_2 \times O_4$-equivariant (see Appendix for the explicit description of $O_1$ and $\Gamma$). The
$\Gamma$-representation $V$ admits the isotypical decomposition
\begin{equation}\label{eq:isotyp}
V = V_0 \oplus V_1 \oplus V_2 \oplus V_3,
\end{equation}
where each $V_k$, $k = 0,1,2,3$, is of isotypical multiplicity two and is modeled on the irreducible $\Gamma$-representations $\mathcal W_0,\mathcal W_1, \mathcal W_2, \mathcal W_3$ respectively, which can be described as follows. Let $\mathcal B^+$ and $\mathcal B^-$ be the one-dimensional
$\mathbb Z_2 \times O_1$-representation where $\mathbb Z_2$ acts antipodally on both $\mathcal B^+$ and $\mathcal B^-$ while $O_1$ acts trivially on $\mathcal B^+$ and antipodally on $\mathcal B^-$. Let $\mathcal V_0$ (resp. $\mathcal V_ 3$) be the one-dimensional trivial (resp. sign)
$S_4$-representation, let $\mathcal V_2$ be the natural three-dimensional $S_4$-representation, where $S_4$ acts as a subgroup of $SO(3)$, and let $\mathcal V_1 := \mathcal V_2 \otimes \mathcal V_3$. Then,
\begin{equation}\label{eq:irred-model}
\mathcal W_0 = \mathcal V_0 \otimes \mathcal B^+, \;\; \mathcal W_1 = \mathcal V_1 \otimes \mathcal B^-, \;\; \mathcal W_2 = \mathcal V_2 \otimes \mathcal B^+, \;\; \mathcal W_3 = \mathcal V_3 \otimes \mathcal B^-.
\end{equation}
Denote by $A=A(\alpha)$ the linearization of the right-hand side of \eqref{eq:VdP-8} at the origin:
\begin{equation}\label{111}
A=
\left[\begin{array}{cc} -{R \over L} & {1 \over L} \\
-{1 \over C} & \alpha \over C
\end{array} \right]\otimes \text{Id}_8 + \rho \left[\begin{array}{cc} 0 & 0\\ 0 & {1 \over 2C}
\end{array}\right] \otimes \mathcal K.
\end{equation}
By choosing an appropriate basis in $V$ respecting isotypical decomposition \eqref{eq:isotyp}, one can show (see \cite{BKRZ-Hysteresis,HBKR}) that
$A(\alpha)$ admits a block diagonal representation with $8$ two-by-two blocks
\begin{equation}\label{eq:2-2-blocks}
A_k(\alpha)= \begin{bmatrix}
-\frac{R}{L} & \frac{1}{L}\\
-\frac{1}{C} & \frac{\alpha}{C} - \frac{k\rho}{C}
\end{bmatrix},
\end{equation}
where $A(\alpha)|_{V_k} = A_k(\alpha)$ for $k = 0,3$ and $A(\alpha)|_{V_k} = A_k(\alpha) \oplus A_k (\alpha)\oplus A_k(\alpha)$ for $k = 1,2$.
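The block structure rests on the spectrum of $\mathcal K$: since $\mathcal K$ is the adjacency matrix of the cube minus $3\,\mathrm{Id}$, its eigenvalues are $0,-2,-4,-6$ with multiplicities $1,3,3,1$, so ${\rho \over 2C}\mathcal K$ shifts the diagonal entry ${\alpha \over C}$ by $-{k\rho \over C}$, $k=0,1,2,3$. A quick pure-Python verification (ours) using the $\pm 1$ character eigenvectors of the cube, with vertex coordinates following the labeling of the Appendix:

```python
K = [
    [-3, 1, 0, 1, 1, 0, 0, 0],
    [ 1,-3, 1, 0, 0, 1, 0, 0],
    [ 0, 1,-3, 1, 0, 0, 1, 0],
    [ 1, 0, 1,-3, 0, 0, 0, 1],
    [ 1, 0, 0, 0,-3, 1, 0, 1],
    [ 0, 1, 0, 0, 1,-3, 1, 0],
    [ 0, 0, 1, 0, 0, 1,-3, 1],
    [ 0, 0, 0, 1, 1, 0, 1,-3],
]
# 0/1-coordinates of vertices 1..8: vertices 1-2-3-4 on one facet,
# 5-6-7-8 on the opposite one, vertex m adjacent to vertex m+4
coords = [(0,0,0), (1,0,0), (1,1,0), (0,1,0),
          (0,0,1), (1,0,1), (1,1,1), (0,1,1)]

multiplicities = {}
for s in [(i, j, k) for i in (0, 1) for j in (0, 1) for k in (0, 1)]:
    # character eigenvector v(x) = (-1)^(s.x) of the cube graph
    v = [(-1) ** (s[0]*x + s[1]*y + s[2]*z) for (x, y, z) in coords]
    Kv = [sum(K[i][j] * v[j] for j in range(8)) for i in range(8)]
    lam = Kv[0]                       # v[0] = 1, so Kv[0] is the eigenvalue
    assert all(Kv[i] == lam * v[i] for i in range(8))
    multiplicities[lam] = multiplicities.get(lam, 0) + 1

# multiplicities == {0: 1, -2: 3, -4: 3, -6: 1}
```

The eigenvalue $-2\,\mathrm{wt}(s)$ depends only on the weight of the character $s$, which gives the multiplicities $1,3,3,1$.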
%
Further, if ${R^2C} < L$, then $A(\alpha)$ has purely imaginary eigenvalues when $$\alpha = \alpha^h_j:= {RC}/{L} + {j}\rho,\qquad j=0,1,2,3,$$ and $A(\alpha)$ is not invertible when
$\alpha = \alpha^s_k ={1}/{R}+ {k} \rho$, $k = 0,1,2,3$. Following \cite{BKRZ-Hysteresis}, assume that ${R^2C} < L$ and define
$$
\mathcal C := \frac{1}{\rho R}\Big(1-\frac{R^2C}{L}\Big) > 0.
$$
With this notation, the scenario when $\alpha^h_j = \alpha^s_k$ corresponds to
\begin{equation}\label{eq:simult-Hopf-steady}
\mathcal C = j -k >0\quad \text{for some} \;\; k,j = 0,1,2,3.
\end{equation}
Therefore, if $j=0,1,2,3$ and $\mathcal C$ is {\em not} an integer satisfying $\mathcal C\le j$, then the steady state bifurcation is {\em a priori} excluded at the point $\alpha=\alpha^h_j$ (i.e.~$\alpha^h_j\ne \alpha^s_k$ because \eqref{eq:simult-Hopf-steady} is violated), which implies in a standard way that
$\alpha=\alpha^h_j$ is a Hopf bifurcation point for system \eqref{eq:VdP-8}. Moreover, due to symmetries, the Hopf bifurcation points $\alpha=\alpha^h_1$, $\alpha=\alpha^h_2$ give rise to multiple branches of periodic solutions, which can be distinguished by their maximal symmetry group \cite{HBKR,BKRZ-Hysteresis}.
Table \ref{tab_clas} presents the spatio-temporal symmetries of the multiple branches bifurcating from the four bifurcation points in this case.
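The threshold values $\alpha^h_j$ and $\alpha^s_k$ can be sanity-checked directly on the two-by-two blocks: a purely imaginary eigenvalue pair corresponds to zero trace with positive determinant, and non-invertibility to zero determinant. A minimal sketch (the parameter values are hypothetical, chosen only so that $R^2C < L$; the trailing underscores in the variable names are ours, to avoid clashing with Python builtins):

```python
# hypothetical circuit parameters with R^2 * C < L
R_, L_, C_, rho = 0.5, 2.0, 1.0, 0.3

def block(alpha, k):
    # the 2x2 block A_k(alpha) of the linearization
    return [[-R_ / L_, 1.0 / L_],
            [-1.0 / C_, (alpha - k * rho) / C_]]

def tr(M): return M[0][0] + M[1][1]
def det(M): return M[0][0] * M[1][1] - M[0][1] * M[1][0]

for k in range(4):
    a_hopf = R_ * C_ / L_ + k * rho    # alpha^h_k: trace 0, det > 0
    a_steady = 1.0 / R_ + k * rho      # alpha^s_k: det 0, A(alpha) singular
    assert abs(tr(block(a_hopf, k))) < 1e-12 and det(block(a_hopf, k)) > 0
    assert abs(det(block(a_steady, k))) < 1e-12
```

With these values $\mathcal C = (1 - R^2C/L)/(\rho R) \approx 5.83$ is not an integer, so no $\alpha^h_j$ coincides with an $\alpha^s_k$; choosing $\rho$ so that $\mathcal C \in \{1,2,3\}$ produces the simultaneous case \eqref{eq:simult-Hopf-steady}.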
An interesting case is when \eqref{eq:simult-Hopf-steady} holds for some $j,k$.
This case can be handled by Theorem \ref{thm:main-theorem}. In fact, by direct verification (see Appendix), all
the twisted subgroups $H^{\varphi}$ appearing in Table \ref{tab_clas} contain the element $(-1,( ),{1 \over 2})$. Therefore, $V^H$ (which can be identified with the space of $H$-fixed constant functions) is trivial, hence condition {\bf (P1)} is trivially satisfied. Since all the other assumptions of Theorem \ref{thm:main-theorem}
are obviously also satisfied, we conclude that Table \ref{tab_clas} applies to system \eqref{eq:VdP-8}
in the case \eqref{eq:simult-Hopf-steady} too.
In particular, if $\mathcal C=1,2,3$, some of the Hopf bifurcations listed in Table \ref{tab_clas} are simultaneous with a steady state bifurcation.
\begin{remark}\label{rem:oddness}
{\rm Let us consider system \eqref{PNH} with $V=\mathbb{R}^{16}$,
where $A(\alpha)$ is given by \eqref{eq:K}, \eqref{111} and $f : \mathbb R \times V \to V$ is an arbitrary
$\Gamma := \mathbb Z_2 \times O_4$-equivariant continuous function satisfying $f(\alpha,x)/|x|\to 0$ as $x\to 0$ for all $\alpha$.
Since the above argument was based on the linearization at zero, Table \ref{tab_clas} applies to this system for every $\mathcal C>0$.
}
\end{remark}
\medskip
Now, let us consider system \eqref{PNH} with the linear part defined by \eqref{eq:K}, \eqref{111} assuming that $f$ is $O_4$-equivariant but not necessarily $\mathbb Z_2 \times O_4$-equivariant (as in Remark \ref{rem:oddness}). For example, one can think of a system of eight coupled identical oscillators similar to \eqref{eq:VdP-8}, in which the cubic nonlinearity of an individual oscillator is replaced with a polynomial nonlinearity which is not odd. In this case, the results obtained in \cite{BKRZ-Hysteresis,HBKR} imply that
if $\mathcal C\ne 1,2,3$ and $\mathcal C>0$, then Table \ref{tab_clas} should be slightly modified. Namely, each twisted subgroup ${}^{+}H^{\varphi}$ (resp. ${}^{-}H^{\varphi}$) is replaced by ${}^{+}\overline{H}^{\varphi}$ (resp. ${}^{-}\overline{H}^{\varphi}$); see Appendix for the explanation of the notation. The question whether the same table applies in the cases \eqref{eq:simult-Hopf-steady}, when a Hopf bifurcation is simultaneous with a steady state bifurcation (i.e., $\mathcal C = 1,2,3$), is more subtle than for the $\mathbb Z_2 \times O_4$-equivariant system considered above. That is, Theorem \ref{thm:main-theorem} can still be used in some of the cases \eqref{eq:simult-Hopf-steady} but not in all of them.
To be more specific, consider, for example, the branch with symmetry $^+\overline{D}^d_4$ which can potentially bifurcate from the trivial solution at $\alpha^h_2$.
By definition, $^+\overline{D}^d_4$ is a graph of the homomorphism $\varphi : D_4 \times O_1 \to \mathbb Z_2 \subset S^1$. By direct computation
(cf. \eqref{eq:isotyp} and \eqref{eq:2-2-blocks}),
\begin{equation}\label{eq-V-D_4O_1}
V^{D_4 \times O_1} = V_0 \quad \text{and} \quad
A(\alpha^h_2)|_{V^{D_4 \times O_1}}= A_0(\alpha^h_2) =
\begin{bmatrix}
-\frac{R}{L} & \frac{1}{L}\\
-\frac{1}{C} & \frac{\alpha^h_2}{C}
\end{bmatrix}.
\end{equation}
Combining \eqref{eq-V-D_4O_1} with \eqref{eq:simult-Hopf-steady} implies that if $\mathcal C = 1$, then $\big(\ker A \big) \cap V^{D_4 \times O_1} = \{0\}$,
and condition {\bf (P1)} is satisfied. A similar argument shows that {\bf (P1)} is also satisfied for other branches bifurcating from the point $\alpha^h_2$ for $\mathcal C=1$. Hence, Theorem \ref{thm:main-theorem} ensures that $\alpha^h_2$ is a Hopf bifurcation point giving rise to multiple branches of periodic solutions with symmetries $(^+\overline{D}_4^d), (^+\overline{D}_3), (^+\overline{D}_2^d), (^+\overline{\mathbb Z}_4^c), (^+\overline{\mathbb Z} _3^t) $ for the $O_4$-equivariant system \eqref{PNH} with $\mathcal C=1$ and, simultaneously, $\alpha^h_2$ is a steady state bifurcation point. However, if $\mathcal C=2$, then condition {\bf (P1)} is not satisfied at the point $\alpha=\alpha^h_2$.
Similarly, Theorem \ref{thm:main-theorem} guarantees the Hopf bifurcation of periodic solutions with symmetry $(^-\overline{S}_4^-)$ at the point $\alpha=\alpha^h_3$ (with a simultaneous steady state bifurcation) for $\mathcal C=2$ but not for $\mathcal C=1$.
\begin{table}[]
\centering
\caption{Symmetries of periodic solutions bifurcating from the four Hopf bifurcation points of the $\mathbb{Z}_2\times O_4$-equivariant system \eqref{PNH} with the linear part defined by \eqref{eq:K}, \eqref{111}, for any $\mathcal C>0$.}
\medskip
\label{tab_clas}
\begin{tabular}{|c|c|}
\hline Bifurcation point &
Symmetry group of periodic solutions
\\ \hline
$\alpha =\alpha_0^h$ & $(^+S_4)$ \\ \hline
$\alpha =\alpha_1^h$ & $(^-D_4^z), (^-D_3^z), (^-D_2^d), (^-\mathbb Z_4^c), (^-\mathbb Z _3^t)$\\ \hline
$\alpha =\alpha_2^h$ & $(^+D_4^d), (^+D_3), (^+D_2^d), (^+\mathbb Z_4^c), (^+\mathbb Z _3^t) $\\ \hline
$\alpha =\alpha_3^h$ & $(^-S_4^-)$ \\ \hline
\end{tabular}
\end{table}
\begin{remark}\label{rem:hysteres}
{\rm
Including a ferromagnetic core in an inductor
can cause a hysteretic relationship between the magnetic induction $B$ and the magnetic field $H$. In this case, the instantaneous value of $B$ depends not only on the value of $H$ at the same moment, but also on some previous values of $H$. Hence, the constitutive relationship between $B$ and $H$ is an operator relationship, which translates into a similar operator relationship between the voltage $v_m$ and the current $i_m$ in an LCR contour with a ferromagnetic-core inductor. The Preisach model is a widely used description of such an operator constitutive relationship, defining the dependence of $B$ on $H$ in ferromagnetic materials (see, for example, \cite{Mayergoyz}). The hysteresis memory is the source of {\it non-smoothness} and the presence of an infinite dimensional phase space without local linear structure. Hence, the application of the classical methods based on the centre manifold reduction to systems with hysteresis meets serious difficulties. However, following the scheme described in \cite{BKRZ-Hysteresis,Rachi} and using Theorem \ref{thm:main-theorem}, one can obtain equivariant bifurcation results for networks of LCR circuits with a hysteretic relationship between $B$ and $H$, which are parallel to the results discussed above.
}
\end{remark}
\subsection{Example 3}
In this example, we consider system $\dot x=A(\rho) x + f(\rho,x)$ with the linearization $A=A(\rho)$ given by
\eqref{111} but this time we use $\rho$ (the coupling strength) as the bifurcation parameter.
The other parameters are fixed. In particular, we assume that $\alpha=RC/L$.
In this case, the spectrum of $A(\rho)$ consists of the eigenvalues $\pm i \omega$ of multiplicity 8 for $\rho=0$ (with $\omega=\sqrt{(1-R^2C/L)/(LC)}$).
Let the phase space $V:= \mathbb R^{16}$ of the system be the $O_4$-representation described in Example 2,
and assume again that $f : \mathbb R \times V \to V$ is an $O_4$-equivariant continuous function satisfying $f(\rho,x)/|x|\to 0$ as $x\to 0$.
Formula \eqref{eq:K} implies
\begin{equation}\label{eq:K-kernel}
\ker \mathcal K = \{(x_1,...,x_8) \, : \, x_1 = \cdots = x_8\} = (\mathbb R^8)^{O_4},
\end{equation}
therefore $A(\rho)$ has the same pair of eigenvalues $\pm i \omega$ corresponding to the eigen\-space $V_0$ (see \eqref{eq:isotyp}) for all values of the parameter $\rho$,
while the other seven pairs of complex conjugate eigenvalues of $A(\rho)$ cross the imaginary axis transversely through the pair $\pm i\omega$ for $\rho=0$.
This is a degenerate situation because the crossing number is not defined; however, we can still use Theorem \ref{thm:main-theorem}.
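At the level of the blocks \eqref{eq:2-2-blocks} with $\alpha = RC/L$, the real part of each complex eigenvalue pair is half the trace, namely $-k\rho/(2C)$: the $k=0$ pair is pinned to the imaginary axis, while the pairs with $k \ge 1$ cross it transversally at $\rho = 0$. A minimal numerical sketch of this spectral picture (hypothetical parameter values, ours):

```python
R_, L_, C_ = 0.5, 2.0, 1.0          # hypothetical values with R^2 * C < L
alpha = R_ * C_ / L_                # alpha fixed as in the text

def re_pair(k, rho):
    # real part of the eigenvalue pair of A_k = half the trace
    # (valid while the pair stays off the real axis)
    return 0.5 * (-R_ / L_ + (alpha - k * rho) / C_)

# k = 0: the pair stays on the imaginary axis for every rho
assert all(re_pair(0, rho) == 0.0 for rho in (-1.0, -0.1, 0.0, 0.1, 1.0))
# k >= 1: the real part changes sign transversally at rho = 0
for k in (1, 2, 3):
    assert re_pair(k, -0.1) > 0.0 > re_pair(k, 0.1)
```
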
The complexification of the phase space is an $O_4\times S^1$-representation admitting the isotypical decomposition
$$
{}^1\widetilde {V}= {}^1\widetilde V_0\oplus {}^1\widetilde V_1 \oplus {}^1\widetilde V_2 \oplus {}^1\widetilde V_3,
$$
where $^1\widetilde V_k$ is modeled on the irreducible representation $^1\widetilde {\mathcal V_k}$ (recall that all the $O_4$-isotypical components of $V$ are modeled on irreducible representations of {\it real} type). In order to apply Theorem \ref{thm:main-theorem}, one needs to choose maximal twisted subgroups $H^{\varphi}$ occurring in ${}^1\widetilde V_k$ with $k = 1,2,3$ such that
\begin{equation}\label{eq:non-itersect-cond-D3}
^1\widetilde {\mathcal V_0} ^{H^\varphi} = \{0\}.
\end{equation}
It is easy to verify that with the exception of $D_3$, condition \eqref{eq:non-itersect-cond-D3} is satisfied for all maximal twisted subgroups.
Hence,
Theorem \ref{thm:main-theorem} guarantees the existence of bifurcating branches of periodic solutions with symmetries $(^-\overline{D}_4^z),(^-\overline{D}_3^d)$,
$(^-\overline{D}_2^d),(^-\overline{\mathbb Z}_4^c),(^-\overline{\mathbb Z}_3^t), (^+\overline{D}_4^d), (^+\overline{D}_2^d),(^+\overline{\mathbb Z}_4^c),
(^+\overline{\mathbb Z}_3^t),(^-\overline{S}_4^-)$. All these branches bifurcate from the trivial solution at the bifurcation point $\rho=0$.
\section{Appendix}\label{appendix}
Given a cube with subsequent vertices $1,2,3,4$ on one facet, and subsequent vertices $5,6,7,8$ on the opposite facet, with the vertices $1$ and $5$ connected by an edge, denote by $O_4$ the subgroup of the symmetry group $S_8$ consisting of all symmetries of the above cube, and by $S_4$ the subgroup of $O_4$ consisting of all symmetries of the cube preserving its orientation. As is well-known, $S_4$ can be thought of as the group of permutations of the large diagonals of the cube. Denote by $O_1$ a subgroup
of $O_4$ generated by the permutation $(17)(28)(35)(46)$. Clearly, $O_1$ is isomorphic to $\mathbb Z_2$ and $O_4 = S_4 \times O_1$.
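As a purely illustrative aside (not part of the argument), the facts that the generator of $O_1$ is an involution and commutes with the rotations in $S_4$, so that indeed $O_4 = S_4 \times O_1$, can be checked mechanically; here permutations of the vertices $1,\dots,8$ are stored as dictionaries:

```python
def perm(*cycles):
    """Build a permutation of {1,...,8} from disjoint cycles."""
    p = {v: v for v in range(1, 9)}
    for c in cycles:
        for a, b in zip(c, c[1:] + c[:1]):
            p[a] = b
    return p

def compose(p, q):
    """(p o q)(v) = p(q(v))."""
    return {v: p[q[v]] for v in range(1, 9)}

z = perm((1, 7), (2, 8), (3, 5), (4, 6))  # generator of O_1 (the antipodal map)
r = perm((1, 2, 3, 4), (5, 6, 7, 8))      # a rotation of the cube, lying in S_4

print(compose(z, z) == perm())            # True: z has order 2, so O_1 ~ Z_2
print(compose(z, r) == compose(r, z))     # True: z commutes with the rotation
```

The same check passes for every element of $S_4$ listed in this appendix, since the antipodal map is central in the full symmetry group of the cube.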
Define two subgroups $(\mathbb Z_2\times O_1 )^{o}, (\mathbb Z_2 \times O_1 )^{oz} < \mathbb Z_2 \times O_4 \times S^1=: G$ (both isomorphic to
$\mathbb Z_2\times O_1$) by
\begin{align*}
(\mathbb Z_2\times O_1 )^{o} :=& \Big\{\big(1,( ),0\big),
\big(1,(17)(28)(35)(46),0\big), \big(-1,( ),1/2\big), \\&
\big(-1,(17)(28)(35)(46),1/2\big)\Big\},
\end{align*}
\begin{align*}
(\mathbb Z_2 \times O_1 )^{oz} :=& \Big\{\big(1,( ),0\big), \big(-1,(17)(28)(35)(46),0\big), \big(-1,( ),1/2\big), \\&
\big(1,(17)(28)(35)(46),1/2\big)\Big\},
\end{align*}
and two of their subgroups (both isomorphic to $O_1$):
\begin{equation}
(1_{\mathbb Z_2} \times O_1 )^{o}:= \Big\{\big(1,( ),0\big), \big(1,(17)(28)(35)(46),0\big)\Big\},
\end{equation}
\begin{equation}
(1_{\mathbb Z_2} \times O_1 )^{oz}:= \Big\{\big(1,( ),0\big), \big(1,(17)(28)(35)(46),1/2\big)\Big\}
\end{equation}
(here $1_{\mathbb Z_2}$ stands for the neutral element in $\mathbb Z_2$).
Then, for any $H < S_4 \times S^1 \simeq \{1\} \times S_4 \times S^1 < G$, define the subgroups ${}^+H,\; {}^-H,\; {}^+\overline{H}, \; {}^-\overline{H} < G$ by
\[
{}^+H := H \cdot (\mathbb Z_2\times O_1)^{o}, \quad {}^-H := H \cdot (\mathbb Z_2\times O_1)^{oz},\;
\]
\[
{}^+\overline{H}:= H \cdot (1_{\mathbb Z_2}\times O_1)^{o}, \quad
{}^-\overline{H} := H \cdot (1_{\mathbb Z_2}\times O_1)^{oz}.
\]
Clearly, for any $H < S_4$, one has $H \cap (\mathbb Z_2\times O_1)^{o} = H \cap (\mathbb Z_2\times O_1)^{oz} = 1_G$, where $1_G$ stands for the neutral element in $G$, therefore ${}^+H$ and ${}^-H$ are isomorphic to the direct products of their factors.
All twisted subgroups of $G$ which we deal with in Example 2 appear as
either $^+H$, or $^-H$, or $ {}^+\overline{H}$, or ${}^-\overline{H}$, where $H$ is among the following groups:
\begin{align*}
S_4:=&\Big\{\big((),0\big),\big((15)(28)(37)(46),0\big),\big((17)(26)(35)(48),0\big),\big((12)(35)(46)(78),0\big),\\&
\big((17)(28)(34)(56),0\big),\big((14)(28)(35)(67), 0\big),\big((17)(23)(46)(58),0\big),\\&
\big((13)(24)(57)(68),0\big),\big((18)(27)(36)(45),0\big),\big((16)(25)(38)(47),0\big),\big((254)(368),0\big),\\&
\big((245)(386),0\big),\big((163)(457),0\big),\big((136)(475),0\big),\big((168)(274),0\big),\\&\big((186)(247),0\big),
\big((138)(275),0\big),\big((183)(257),0\big),
\big((1234)(5678),0\big),\big((1432)(5876),0\big),\\& \big((1265)(3874),0\big),\big((1562)(3478),0\big),
\big((1485)(2376),0\big),\big((1584)(2678),0\big)\Big\}\\
D_4^z:=&\Big\{\big((),0\big),\big((1234)(5678),0\big),\big((13)(24)(57)(68),0\big),\big((1432)(5876),0\big),
\\&\big((17)(26)(35)(48),1/2\big),\big((18)(27)(36)(45),1/2\big),\big((15)(28)(37)(46),1/2\big),\\&
\big((16)(25)(38)(47),1/2\big)\Big\}\\
D_3^z:=&\Big\{\big((),0\big),\big((254)(368),0\big),\big((245)(386),0\big),\big((17)(26)(35)(48),1/2\big),
\\&\big((17)(28)(34)(56),1/2\big),\big((17)(23)(46)(58),1/2\big)\Big\}\\
D_2^d:=&\Big\{\big((),0\big),\big((17)(26)(35)(48),0\big),\big((13)(24)(57)(68),1/2\big),\big((15)(28)(37)(46),
1/2\big)\Big\}\\
\mathbb Z^c_4:=&\Big\{\big((),0\big),\big((1234)(5678),1/4\big),\big((13)(24)(57)(68),1/2\big),\big((1432)(5876),3/4\big)\Big\}
\\
\mathbb{Z}^t_3:=&\Big\{\big((),0\big),\big((254)(368),1/3\big),\big((245)(386),2/3\big)\Big\}\\
D_4^d:=&\Big\{\big((),0\big),\big((1234)(5678),1/2\big),\big((13)(24)(57)(68),0\big),\big((1432)(5876),1/2\big),
\\&\big((17)(26)(35)(48),0\big),\big((18)(27)(36)(45),1/2\big),\big((15)(28)(37)(46),0\big), \\& \big((16)(25)(38)(47),1/2\big)\Big\}\\
D_3:=&\Big\{\big((),0\big),\big((254)(368),0\big),\big((245)(386),0\big),\big((17)(26)(35)(48),0\big),\\&\big((17)(28)(34)(56),0\big),\big((17)(23)(46)(58),0\big)\Big\}
\end{align*}
\begin{align*}
S_4^-:=&\Big\{\big((),0\big),\big((15)(28)(37)(46),1/2\big),\big((17)(26)(35)(48),1/2\big),\big((12)(35)(46)(78)
,1/2\big),\\&\big((17)(28)(34)(56),1/2\big),\big((14)(28)(35)(67),1/2\big),\big((17)(23)(46)(58),1/2\big),
\\&\big((13)(24)(57)(68),0\big),\big((18)(27)(36)(45),0\big),\big((16)(25)(38)(47),0\big),\big((254)(368),0\big)
,\\&\big((245)(386),0\big),\big((163)(457),0\big),\big((136)(475),0\big),\big((168)(274),0\big),\big((186)(247),0\big),\\&
\big((138)(275),0\big), \big((183)(257),0\big),
\big((1234)(5678),1/2\big),\big((1432)(5876),1/2\big),\\&\big((1265)(3874),1/2\big),\big((1562)(3478),1/2\big),
\big((1485)(2376),1/2\big),\big((1584)(2678),1/2\big)\Big\}
\end{align*}
\section*{Acknowledgments}
This work has been done as a part of the Prospective Human
Resources Support Program of the Czech Academy of Sciences; EH acknowledges the support by this program.
\bibliographystyle{abbrv}
\section{Introduction}
The growing number of network devices and services has increased the importance of network security. There is an increasing demand for protective measures as hackers launch attacks to paralyze computer systems connected to networks or to steal information from them. Examples of these attacks are transmission of malicious files and exploitation of security vulnerabilities of targets \cite{Beaugnon2018}. In such attacks, hackers interact with the target system, generating network activity \cite{Shah2018}. A Network Intrusion Detection System (NIDS) is one of the essential elements of network perimeter security; it analyzes these activities and raises alarms \cite{Northcutt2005}. Specifically, a NIDS analyzes the header and payload data of incoming and outgoing network packets, and it invokes alerts when detecting a malicious network activity \cite{Laskov2008}.
There are two approaches for detection of malicious network activities: traditional methods and machine learning based ones \cite{Garcia2009}. Both involve a feature extraction stage, but they differ in how malicious activity is identified. In the feature extraction stage, individual packets in network activities are summarized into high-level events such as sessions \cite{Zeek}. Each summarized record consists of feature values that characterize the high-level event \cite{Lee1999}. Then, in traditional development of NIDS, security experts identify patterns of attacks, deciding threshold ranges for each feature \cite{Garcia2009}. On the other hand, in the machine learning based approach, a given model automatically learns patterns of malicious activities from a given dataset \cite{Beaugnon2018}. Recently, machine learning based methods have been attracting more attention than traditional methods, due to their potential capability to detect more complicated patterns in large-scale datasets \cite{Tang2018}.
While many researchers have experimented with various machine learning techniques, the time-series information of network traffic data has not received much attention \cite{Beaugnon2018,Tang2018}. Since network activity unfolds over time, using sequential information in machine learning models should lead to more comprehensive analysis, as long as the model has enough computational capacity for such additional information. Recurrent neural networks (RNNs) can capture temporal dependence in data, which brought significant advances in the fields of speech recognition and machine translation \cite{Moon2015,Bahdanau2015}; long short-term memory (LSTM) \cite{Hochreiter1997} and the gated recurrent unit (GRU) \cite{Cho2014} are popular RNN variants \cite{Greff2015}.
In addition to temporal dependence, categorical information has been neglected in neural network based NIDS. Categorical information means non-numeric (or symbolic) features like protocol type, state, and service in network traffic data. While such features are crucial in recognizing malicious activity patterns, traditional neural network approaches could not accept them as input. Categorical features are very common in natural language processing (NLP), because words are symbols, and there are several feature embedding (or word embedding) techniques \cite{Pennington2014,hchoi2017csl} to handle symbolic words in NLP tasks such as language modeling and neural machine translation \cite{Mikolov2011,Bahdanau2015}.
In this paper, we propose to apply LSTM and feature embedding to build intrusion detection models, where sequential information of network traffic data is captured by LSTM and categorical features are utilized with feature embedding. For evaluation, after checking open datasets for network intrusion detection, we use the UNSW-NB15 dataset \cite{Moustafa2015}, which is an up-to-date dataset for network intrusion detection. We assume that records are arranged in timely order, which is to capture temporal dependence for intrusion detection. In experiments, we present that LSTM can effectively model sequential structures for NIDS and feature embedding can make categorical features available in the neural network models. Finally, after many experiments with various options and hyper parameters, LSTM with feature embedding leads to significant improvement in detection performance compared to other machine learning techniques. We have achieved binary classification accuracy of 99.72\% over the UNSW-NB15 dataset.
The rest of this paper is organized as follows. Related works and backgrounds are described in Section 2. In Section 3, we propose new network intrusion detection models. The experiment results are presented and analyzed in Section 4, followed by Section 5 where we conclude the paper.
\section{Background}
\subsection{Network Intrusion Detection Data}
As an open dataset, we use UNSW-NB15, a comprehensive network intrusion detection dataset created for standardized evaluation of NIDS \cite{Moustafa2015}. In particular, it aimed to replace the KDD Cup 99 and NSL-KDD datasets, which have been popular datasets for NIDS over the years but do not convey newly emerging network attack behaviors. As specified in \cite{Moustafa2015}, in order to reflect contemporary hacking behaviors, attacks in UNSW-NB15 were generated using IXIA Perfect Storm, which can simulate attacks listed on the CVE website. After arranging a testbed environment with the attack generator, traffic was captured by TCP dump. Then the final dataset was formulated by conducting feature extraction with tools such as Bro and Argus.
\subsection{Network Intrusion Detection Method}
IDS has two detection mechanisms according to definitions of malicious activity \cite{Garcia2009}. Signature-based detection mechanism defines malicious activities, and recognizes behaviors that match the attacks. In contrast, anomaly-based detection mechanism defines normal activity, and recognizes behaviors that deviate from the normal ones. The former mechanism is more compatible with attacks that are already known and shows low false-positive rate compared to the latter. On the other hand, the latter has potential to recognize unknown attacks, but it can suffer from high false-positive rate.
Corresponding to the two detection mechanisms, there are two development approaches for NIDS: expert-centered and machine learning based ones \cite{Garcia2009}. In the expert-centered approach, signatures are written by security experts. For example, Snort, a renowned open-source project for NIDS, lets users write rules by which it examines network packets and creates alerts \cite{Northcutt2005}. This approach requires expert knowledge or rule sets. In the machine learning based approach, on the other hand, signatures are automatically learned by machine learning models. It requires a dataset which contains a massive amount of data and corresponding labels specifying the attack type of each datum \cite{Beaugnon2018}.
As pointed out in \cite{Niyaz2015,Beaugnon2018}, some challenges should still be addressed for deployment of machine learning based NIDS to real network environments. In addition, experiment over an open dataset assumes that network traffic can be pre-processed in advance \cite{Lee1999}. Nevertheless, experiments are useful for evaluation of potential detection performance of different machine learning techniques.
After publication of UNSW-NB15, there have been many research works applying a myriad of machine learning techniques to the dataset. Suleiman et al. applied various classical machine learning algorithms such as Random Forest, K-nearest neighbors, and Support Vector Machines \cite{Suleiman2018}. Among the experiments, the J48 and K-NN algorithms were proposed as the most suitable models with high efficiency and accuracy. Moustafa et al. experimented with an anomaly-based detection method based on geometric area analysis using trapezoidal area estimation \cite{Moustafa2017}. Meanwhile, Papamartzivanos et al. proposed a novel approach to NIDS with a genetic algorithm and decision trees \cite{Papamartzivanos2018}. In their work, they used the genetic algorithm to produce detection rules that compose a decision-tree model. The resulting model was evaluated over UNSW-NB15, and it showed good performance in detecting both attacks that are common and those that are rare in the dataset.
Recently, Tama et al. evaluated the effectiveness of deep neural networks (DNNs) for NIDS on UNSW-NB15 \cite{Tama2017}. In addition, VinayaKumar et al. carried out a comparative analysis of DNN models and classical machine learning algorithms \cite{Vinayakumar2019}. After conducting an extensive parameter search for optimization, they concluded that DNNs were suitable for development of IDS. Furthermore, Nawir et al. found that the Averaged One-Dependence Estimators (AODE) classifier achieves high accuracy with a relatively short classification time \cite{Nawir2018}.
While previous works demonstrated various machine-learning-based NIDS, most of them did not pay attention to sequential information. As exceptions, Staudemeyer applied a long short-term memory (LSTM) network to the KDD99 dataset to improve classification performance \cite{Staudmeyer2015}, and Kim et al. experimented with an LSTM-RNN model on KDD99 and obtained significant performance improvement \cite{Kim2016}. However, their experiments were performed on the KDD99 dataset, which does not reflect contemporary attack behaviors. Moreover, the previous LSTM models used categorical features as if they were continuous ordinal data. Categorical features need feature embedding, like word embedding in NLP \cite{Guo2016}.
\subsection{Long Short-Term Memory}
In this section, we briefly review recurrent neural network (RNN) and LSTM. For more information, the readers are referred to \cite{Hochreiter1997,HChoi2018neuro,Staudmeyer2015,Olah2015}.
\begin{figure}[h]
\centerline{\hbox{
\includegraphics[width=2.4in]{LSTMStructrue.PNG}\hspace{0.3in}
}}
\caption{Structure of LSTM Cell. Adapted from \cite{Moon2015}}
\label{fig:LSTM}
\end{figure}
An RNN is a modified version of the standard neural network that updates its internal state over time. By forming circular connections within the network, RNNs can memorize past inputs and capture temporal properties in sequential data. However, plain RNNs can retain memory for only a small number of time steps due to the vanishing gradient problem. LSTM \cite{Hochreiter1997} solves the vanishing gradient problem by introducing three gates (input, forget and output) around special memory units called cell states $c_t$. The gates control the update of the cell states as shown in Fig. \ref{fig:LSTM}.
In LSTM, inferences for the cell states $c_t$ and hidden states $h_t$ are given by
\begin{eqnarray}
c_t &=& f_t \odot c_{t-1} + i_t \odot \tanh(W x_t + U h_{t-1} + b), \\
h_t &=& o_t \odot \tanh(c_t),
\end{eqnarray}
where $\odot$ indicates the element-wise multiplication operation, and the three gates are defined by
\begin{eqnarray}
i_t &=& \sigma(W_i x_t + U_i h_{t-1} + b_i), \\
f_t &=& \sigma(W_f x_t + U_f h_{t-1} + b_f), \\
o_t &=& \sigma(W_o x_t + U_o h_{t-1} + b_o).
\end{eqnarray}
Here, $\sigma$ is the sigmoid function, and $U$, $W$, and $b$ are parameters. As recommended in \cite{Greff2015}, peephole connections are not included, which simplifies implementation without degrading performance.
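As an illustration of the update rules above, a minimal scalar LSTM step (hidden size 1, no peepholes) can be sketched as follows; the keys of the parameter dictionary \texttt{p} mirror the symbols of each gate, and the zero-parameter example is only a sanity check:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, p):
    """One scalar LSTM step following the update equations above."""
    i = sigmoid(p["W_i"] * x + p["U_i"] * h_prev + p["b_i"])  # input gate
    f = sigmoid(p["W_f"] * x + p["U_f"] * h_prev + p["b_f"])  # forget gate
    o = sigmoid(p["W_o"] * x + p["U_o"] * h_prev + p["b_o"])  # output gate
    c = f * c_prev + i * math.tanh(p["W"] * x + p["U"] * h_prev + p["b"])
    h = o * math.tanh(c)
    return h, c

# With all parameters zero, every gate is sigmoid(0) = 0.5, so the cell
# state halves at each step and the new input contributes tanh(0) = 0.
p0 = {k: 0.0 for k in ["W_i", "U_i", "b_i", "W_f", "U_f", "b_f",
                       "W_o", "U_o", "b_o", "W", "U", "b"]}
h, c = lstm_step(x=0.0, h_prev=0.0, c_prev=2.0, p=p0)
print(c)  # 0.5 * 2.0 = 1.0
```

In practice the scalars become vectors and the products become matrix multiplications, but the gating logic is exactly the same.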
\subsection{Word Embedding}
To use categorical (or nominal) values, like words in natural language processing, in neural networks, the values should be projected into a continuous vector space. This projection, called {\em word embedding}, captures relations among nominal values and represents them in a vector space \cite{Guo2016}, as shown in Fig. \ref{fig:word_embedding}. In our experiments, feature embedding will also be included in our network architecture, which leads to better performance.
\begin{figure}[h]
\centerline{\hbox{
\includegraphics[width=3.0in]{word_embedding.png}\hspace{0.3in}
}}
\caption{Embedded words in a continuous vector space. Words are represented as vectors with semantic meaning.}
\label{fig:word_embedding}
\end{figure}
Let a categorical variable $x \in \{1, 2, \dots, T\}$ be given, where $T$ is the number of possible values that $x$ can take, and let $e$ be the one-hot vector in a $T$-dimensional space whose $x$th element is one while the others are zeros. Then the vector representation of $x$ is defined as $W e$, where $W$ is the embedding matrix of shape $(D \times T)$, and $D$ is the embedding dimension in which the categorical variable is represented. In practice, $W e$ may be implemented by simply selecting the $x$th column of $W$, which is more efficient than the matrix-vector multiplication.
The weight matrix $W$ represents weights connecting one-hot layer to embedding layer. Notice that weights can be initialized with random values and be trained just as other parameters in neural networks. Once a categorical variable is projected into a continuous vector, then the vector can be concatenated with other continuous input feature values, and the combined data travels to upper neural network layers.
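A small sketch of this equivalence (here $W$ is stored row-wise, one row of length $D$ per categorical value, so the lookup selects a row; the matrix values are arbitrary):

```python
def one_hot(x, T):
    return [1.0 if t == x else 0.0 for t in range(T)]

def embed_via_matmul(x, W):
    """Multiply the one-hot vector of x by W (stored as T rows of length D)."""
    T, D = len(W), len(W[0])
    e = one_hot(x, T)
    return [sum(e[t] * W[t][d] for t in range(T)) for d in range(D)]

def embed_via_lookup(x, W):
    """The efficient equivalent: just select the row, no multiplication."""
    return list(W[x])

W = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]   # T = 3 categories, D = 2
print(embed_via_matmul(1, W) == embed_via_lookup(1, W))  # True
```

During training, gradients flow only into the selected row (or column, in the transposed convention), so each categorical value's vector is updated exactly when that value occurs.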
\section{Intrusion Detection based on LSTM}
Network intrusions have patterns according to their types. Generally, those patterns do not appear in a single packet, but can be dispersed over multiple packets. However, most of the previous machine learning methods for NIDS failed to address this characteristic and were not able to capture patterns that appear across multiple packets. For example, a multi-layer perceptron (MLP) performs intrusion detection with only one packet, ignoring temporal dependency. Detecting a DoS attack with an MLP would therefore be very difficult, because DoS brings down a server by sending many packets, each of which individually differs little from a normal packet. This issue is not limited to DoS attacks, but applies to other attack types as well. For more accurate intrusion detection, therefore, it is necessary to deal with multiple packets rather than a single packet.
In this study, we use LSTM to detect whether or not the current packet is normal considering the previous packets. The current packet and previous packets are put into LSTM as inputs as in Fig. \ref{fig:detection_with_many_packets}.
\begin{figure}[h]
\centerline{\hbox{
\includegraphics[width=3.0in]{detection_with_many_packets.png}\hspace{0.3in}
}}
\caption{Intrusion detection through multiple packets}
\label{fig:detection_with_many_packets}
\end{figure}
However, there are several different ways to train the network or to construct our network architecture.
First, the network can be trained with only the final label corresponding to the current packet or with all the labels of the current and the past packets. Second, the network can be constructed for binary classification (`normal' or `attack') or multi-classification (`normal' or one of several attack types). Even for binary classification, we can train a model for multi-classification and map all types of attack to the single class `attack'. Further, the network can be constructed with or without an embedding layer, which handles categorical features.
\subsection{Many-To-Many Train vs. Many-To-One Train}
Given sequential packets, NIDS performs many-to-one classification, where the current input is classified using sequential packets as in Fig. \ref{fig:detection_with_many_packets}. That is, given many input steps, the output is determined only for the last step. Basically, RNNs like LSTM take one input packet at a time and yield a prediction output at each time step. Therefore, it is natural to train the model only with the last error, as in Fig. \ref{fig:many_one_train}(a); we call this many-to-one (M2O) training. However, it is also possible to use all the errors for training, as in Fig. \ref{fig:many_one_train}(b), when the labels are available, because the labels of the previous packets carry information that can accelerate the training process; we call this many-to-many (M2M) training.
That is, in the M2M strategy, not only the attack type of the target packet but also the attack type of the previous packets are learned together.
We compare two training approaches in experiments.
\begin{figure}[h]
\centerline{\hbox{
\includegraphics[width=2.0in]{one_train.png}\hspace{0.5in}
\includegraphics[width=2.0in]{many_train.png}
}}
\vspace{0.1in}
\centerline{\hbox{(a) \hspace{2.3in} (b)}}
\vspace{0.1in}
\caption{Two learning methods: (a) M2O training learns only the last output, and (b) M2M training learns all the outputs in the sequence.}
\label{fig:many_one_train}
\end{figure}
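The difference between the two strategies amounts to which per-step errors enter the training objective; schematically (the function and variable names are illustrative, not from our implementation):

```python
def sequence_loss(step_losses, many_to_many):
    """Aggregate per-step losses over one packet sequence.

    M2M training averages the loss over every time step, while M2O
    training keeps only the loss at the final (target) step."""
    if many_to_many:
        return sum(step_losses) / len(step_losses)
    return step_losses[-1]

losses = [1.0, 2.0, 3.0]                          # per-step losses, length-3 sequence
print(sequence_loss(losses, many_to_many=True))   # 2.0 (mean over all steps)
print(sequence_loss(losses, many_to_many=False))  # 3.0 (last step only)
```

Backpropagation through time then distributes the chosen loss over the recurrent weights in either case; M2M simply provides a training signal at every step.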
\subsection{Multi Classes to Binary Class Detection}
There are various types of network intrusions, and multi-classification might be of interest. Sometimes, however, it would be interesting to classify a packet as normal or abnormal. For that, there are two approaches. First, various attack types of packets can be converted into `attack' before training. Then, the model is trained to classify a packet into binary results (`normal' or `attack'). Alternatively, without converting multiple attack types into a single class `attack', the model can be trained for multiple classification and the prediction results can be merged into binary classification results as in Fig. \ref{fig:m2b}. That is, basically the model performs multi-classification, but if the prediction is one of attack types, then it is classified to `attack'. We call it multi-to-binary (M2B) classification.
\begin{figure}[h]
\centerline{\hbox{
\includegraphics[width=2.0in]{m2b_training.png}\hspace{0.5in}
\includegraphics[width=2.0in]{m2b_testing.png}
}}
\vspace{0.1in}
\centerline{\hbox{(a) \hspace{2.3in} (b)}}
\vspace{0.1in}
\caption{M2B classification: (a) The model is trained to perform multi-classification, (b) The prediction results are merged into binary classification results.}
\label{fig:m2b}
\end{figure}
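The merging step of M2B is a simple label mapping; a sketch using the nine attack-type names of the UNSW-NB15 dataset (the exact label strings are illustrative):

```python
ATTACK_TYPES = {"Analysis", "Backdoor", "DoS", "Exploits", "Fuzzers",
                "Generic", "Reconnaissance", "Shellcode", "Worms"}

def to_binary(label):
    """Collapse a multi-class prediction into the binary scheme:
    any of the nine attack types counts as 'attack'."""
    return "attack" if label in ATTACK_TYPES else "normal"

print(to_binary("DoS"))     # attack
print(to_binary("normal"))  # normal
```

The same mapping is applied to both the ground-truth labels and the model outputs before computing the binary accuracy and F1 score.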
\subsection{Detection With Feature Embedding}
Network packets (or connections) contain several nominal (or categorical) features, such as protocol and state.
Each nominal feature indicates what role the packet plays and in what state it is. Such features have characteristics that distinguish one packet from other packets with different nominal values; at the same time, different values of one feature can exhibit very similar behavior through their functions.
Therefore, simply replacing the nominal features with one-hot encoding vectors may not be enough to represent packets. We apply the feature embedding technique to map each nominal feature to a suitable vector in a continuous vector space according to the attack types.
As in NMT \cite{hchoi2017csl}, all nominal feature embeddings are initialized to random vectors. During training, the vectors converge to appropriate points depending on the packets' attack types. For example, TCP and UDP often appear in the same attack types, so they are located close to each other in the vector space after training. With all categorical features embedded, our model can improve the detection performance by utilizing the relationships between nominal features.
\section{Experiments}
\subsection{Dataset}
To evaluate our proposed method for network intrusion detection, we adopted the UNSW-NB15 dataset \cite{Moustafa2015}. UNSW-NB15 is an open dataset published in 2015 by UNSW, a university in Australia, for network intrusion detection research. The KDD Cup 99 dataset used to be extensively used for network intrusion detection research in the past, but more recently UNSW-NB15 has been preferred because KDD Cup 99 does not contain many of the recent network hacking patterns \cite{Moustafa2015}. UNSW-NB15 consists of nine attack types and a normal type, as described in Table \ref{tbl:unsw_dist}. The dataset consists of 3 nominal, 2 binary, and 37 numerical features. The dataset is split into two sets: training (175,341 records) and testing (82,332 records). From the training set, 10\% of randomly selected samples are put aside and used for validation.
In addition, the records of UNSW-NB15 are sorted in chronological order, which provides sequential patterns \cite{Moustafa2015}.
\begin{table}[h!] \centering
\caption{UNSW-NB15 Dataset Attack Type}
\label{tbl:unsw_dist}
\begin{tabular}{| c | r | r |}
\hline
\textbf{ Category } & \textbf{ Train } & \textbf{ Test } \\ \hline \hline
Total Records & 175,341 (100\%) & 82,332 (100\%) \\
\hline
Normal & ~56,000 (31.94\%) & ~37,000 (44.94\%) \\
Analysis & 2,000 (1.14\%) & 677 (0.82\%) \\
Backdoor & 1,746 (1.00\%) & 583 (0.71\%) \\
DoS & 12,264 (6.99\%) & 4,089 (4.97\%) \\
Exploits & 33,393 (19.04\%) & 11,132 (13.52\%) \\
Fuzzers & 18,184 (10.37\%) & 6,062 (7.36\%) \\
Generic & 40,000 (22.81\%) & 18,871 (22.92\%) \\
~Reconnaissance~ & 10,491 (5.98\%) & 3,496 (4.25\%) \\
Shellcode & 1,133 (0.65\%) & 378 (0.46\%) \\
Worms & 130 (0.07\%) & 44 (0.05\%) \\
\hline
\end{tabular}
\end{table}
\subsection{Model Architecture}
\begin{figure}[h]
\centerline{\hbox{
\includegraphics[width=3.1in]{model_architecture.png}\hspace{0.3in}
}}
\caption{Model Architecture: embedding, LSTM, and fully connected layers. `Fully Connected 2' is used only for binary classification.}
\label{fig:LSTM_model}
\end{figure}
Our model is composed of 3 types of layers: embedding, LSTM, and fully connected layers. The embedding layer is only for the nominal features of an input; continuous features bypass it. The 3 nominal features (proto, service, and state) are mapped to 5-, 3-, and 2-dimensional vectors, respectively. These output vectors are concatenated with the continuous features and travel to the next layer in the model. The LSTM layer has a hidden state with 100 nodes. The fully connected layer is of size 50 with dropout. As the activation function, leaky ReLU \cite{He2015} is applied for non-linear transformation. In the case of binary classification, a second fully connected layer with 10 nodes is added. In Fig. \ref{fig:LSTM_model}, the dotted line indicates the layer used only in the case of binary classification.
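To make the layer sizes concrete, the parameter count of the LSTM layer can be estimated as follows; the resulting input dimension of 49 is an assumption based on the feature counts above (37 numerical and 2 binary features fed in directly alongside the three embeddings):

```python
def lstm_param_count(input_dim, hidden):
    """Parameters of one LSTM layer: the three gates and the cell
    candidate each use W (input_dim x hidden), U (hidden x hidden)
    and a bias vector of length hidden."""
    return 4 * (input_dim * hidden + hidden * hidden + hidden)

emb_dim = 5 + 3 + 2        # proto, service and state embeddings
num_dim = 37 + 2           # numerical plus binary features
print(lstm_param_count(emb_dim + num_dim, 100))  # 60000
```

Most of the model's capacity thus sits in the recurrent layer, since the fully connected layers above it are comparatively small.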
\subsection{Evaluation Metrics}
As evaluation metrics, we used accuracy (AC) and F1-score (F1). Given true positive (TP), true negative (TN), false positive (FP), and false negative (FN), AC and F1 are respectively calculated by
\begin{eqnarray}
AC &=& \frac{TP + TN}{ TP+TN+FP+FN}, \\
F1 &=& \frac{2P*R}{P + R},
\end{eqnarray}
where P and R stand for precision and recall, respectively as follows.
\begin{eqnarray}
P &=& \frac{TP}{TP + FP}, \\
R &=& \frac{TP}{TP + FN}.
\end{eqnarray}
As the harmonic mean of precision and recall, F1-score provides a better evaluation measure than accuracy especially for imbalanced data.
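These metrics can be computed directly from the confusion-matrix counts; a minimal sketch with an arbitrary example:

```python
def accuracy_f1(tp, tn, fp, fn):
    """Accuracy and F1 from confusion-matrix counts, per the formulas above."""
    ac = (tp + tn) / (tp + tn + fp + fn)
    p = tp / (tp + fp)   # precision
    r = tp / (tp + fn)   # recall
    f1 = 2 * p * r / (p + r)
    return ac, f1

ac, f1 = accuracy_f1(tp=8, tn=9, fp=1, fn=2)
print(round(ac, 4), round(f1, 4))  # 0.85 0.8421
```

Note that on a heavily imbalanced test set accuracy can stay high while F1 collapses, which is why both are reported.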
\subsection{Experiment Results}
We evaluate many combinations of training configurations on LSTM with feature embedding. First, the LSTM model is trained in the two ways described above: learning from the errors of every output (M2M) or learning only from the error of the last output (M2O). In addition, for binary classification, we add `multi-classification to binary-classification' (M2B), which trains a multi-classification model and converts all malicious labels and model outputs to the single label `attack'. Finally, feature embedding (EMB) is applied to each model.
\begin{table}[h!] \centering
\caption{Binary-classification LSTM Model results for test data. Validation results are in the parenthesis.}
\label{tbl:LSTM_comp_binary}
\begin{tabular}{| l| c | c | c |}
\hline
\textbf{Model} & \textbf{Sequence Length} & \textbf{ Accuracy } & \textbf{ F1 Score } \\
\hline
ANN \cite{Suleiman2018} & - & 81.91 & 95.2 \\
RepTree \cite{Belouch2017} & - & 88.95 & - \\
Random Forest \cite{Vinayakumar2019} & - & 90.3 & 92.4 \\
MLP & - & 83.55 (94.00) & 86.89 \\
LSTM(M2M) & 110 & 98.68 (99.88) & 99.16 \\
LSTM(M2O) & 310 & 98.49 (97.99) & 98.90 \\
LSTM(M2M M2B) & 130 & 98.29 (99.84) & 98.43 \\
LSTM(M2O M2B) & 210 & 99.42 (98.07) & 99.47 \\
LSTM(M2M + EMB) & 270 & ~\textbf{99.72 (99.97)}~ & \textbf{99.75} \\
LSTM(M2O + EMB) & 90 & 99.52 (97.82) & 99.56 \\
LSTM(M2M M2B + EMB) & 110 & 99.53 (99.93) & 99.67 \\
LSTM(M2O M2B + EMB) & 110 & 98.83 (98.02) & 98.93 \\
\hline
\end{tabular}
\end{table}
\begin{table}[h!] \centering
\caption{Multi-classification LSTM Model results. Validation results are in the parenthesis.}
\label{tbl:LSTM_comp_multi}
\begin{tabular}{| l| c | c |}
\hline
\textbf{Model} & \textbf{Sequence Length} & \textbf{ Accuracy } \\
\hline
Random Forest \cite{Vinayakumar2019} & - & 75.5\\
RepTree \cite{Belouch2017} & - & 81.28 \\
MLP & - & 72.81 (79.32) \\
LSTM(M2M) & 20 & 84.78 (85.52)\\
LSTM(M2O) & 250 & 83.45 (82.72)\\
LSTM(M2M + EMB) & 30 &~\textbf{86.98 (88.50)}~\\
LSTM(M2O + EMB) & 150 & 85.93 (83.00)\\
\hline
\end{tabular}
\end{table}
The MLP model and the LSTM models show apparent differences in performance, as summarized in Tables \ref{tbl:LSTM_comp_binary} and \ref{tbl:LSTM_comp_multi}. The MLP model shows accuracies of 83.55\% and 72.81\% for binary-classification and multi-classification, respectively, with a corresponding F1 score of 86.89\% in the binary case. The LSTM models show accuracies over 98\% in binary-classification (best F1 score of 99.75\%) and over 83\% in multi-classification. The LSTM models perform better because LSTM can capture the temporal dependency present in sequences of packets, while MLP cannot. In addition, our LSTM models outperform the previous works \cite{Suleiman2018,Belouch2017,Vinayakumar2019}.
\begin{figure}[h]
\centerline{\hbox{
\includegraphics[width=3.4in]{binary.png}\hspace{0.05in}}}
\caption{Binary-classification accuracy graphs on the validation data: M2M, and M2M with embedding. The horizontal axis indicates the length of sequence.}
\label{fig:binary_graph}
\end{figure}
Among the LSTM models, the M2M+EMB model achieved the highest performance for both the binary and multi-classification tasks. This is because categorical features include discriminative information, and feature embedding efficiently makes that information available to the neural network.
Indeed, when comparing EMB models to the corresponding non-EMB models, the EMB models show better performance (around 1\% higher for binary classification and 2\% higher for multi-classification) and more stable results, as shown in Figs. \ref{fig:binary_graph} and \ref{fig:multi_graph}.
\begin{figure}[h]
\centerline{\hbox{
\includegraphics[width=3.4in]{multi.png}}}
\caption{Multi-classification accuracy graphs on the validation data: M2M, and M2M with embedding. The horizontal axis indicates the length of sequence.}
\label{fig:multi_graph}
\end{figure}
In addition, for binary-classification, M2B can be applied, but it has no significant influence on performance. The results of the M2B and non-M2B models are almost the same.
For practical consideration, we checked the prediction time with different sequence lengths in the model; the results are summarized in Fig. \ref{fig:time_consumption}, where we can see that the prediction time is linear in the sequence length.
\begin{figure}[h]
\centerline{\hbox{
\includegraphics[width=3.75in]{time.png}}}
\caption{Prediction time in seconds per sequence with various sequence lengths.}
\label{fig:time_consumption}
\end{figure}
\section{Conclusion}
In this paper, we proposed and evaluated several IDS models based on LSTM and feature embedding. The evaluation used the UNSW-NB15 dataset, which reflects recent network traffic patterns. LSTM outperformed MLP by a significant margin (around 16 percentage points in accuracy and 13 in F1 score). Among the LSTM models, the one with feature embedding was the best, since the embedding technique captures categorical information that is crucial for attack recognition.
We expect that real-time detection is possible in practice. Our future work includes making the model compatible with embedded systems and Internet of Things (IoT) devices by reducing the model complexity and shortening the required sequence length.
\section*{Acknowledgement}
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2017R1D1A1B03033341), and by an Institute for Information \& communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2018-0-00749, Development of virtual network management technology based on artificial intelligence).
\bibliographystyle{splncs04}
\section{Introduction and notation}
Throughout the paper, $\CF=(0,1,+,-,\cdot)$ denotes the language of fields, $\TF$ the theory of fields and $\ACF$ the theory of algebraically closed fields. We use the notation and terminology of \cite{M} for the basic concepts of model theory. In most cases, our notation is the standard one. Nevertheless, we recall some notions at the end of this section for convenience of the reader.
In 1948 A. Tarski proved (in an unpublished paper, see \cite{Rob} for the details) that $\ACF$ admits quantifier elimination. This is one of the most fundamental facts in model theory, and consequently there are a number of proofs of Tarski's theorem in the literature. Standard ones are existential, that is, they do not provide the form of the quantifier-free formula equivalent with the given one. This paper aims to provide that form. More precisely, for a given $\CF$-formula $\varphi$ we construct a quantifier-free formula $\varphi'$ such that $\ACF\vdash\varphi\lra\varphi'$. Our construction is based on results of \cite{P}. In that paper we set a bound on the length of ascending chains of ideals in multivariate polynomial rings, where the ideals are generated by polynomials of degrees less than or equal to fixed natural numbers. In a sense, we rediscovered in \cite{P} some of the main results of \cite{Mo} and \cite{AsPo} (see also \cite{Se1}) in order to prove Tarski's theorem in a constructive way.
We emphasize that the results of \cite{Mo} and \cite{AsPo}, together with argumentation similar to that of Section 3, make it possible to give an alternative constructive proof of Tarski's theorem. Moreover, there are many results on effective quantifier elimination in $\ACF$, see for example \cite{PS}. Therefore we are not pioneers in these considerations.
The paper is organized as follows. In Section 2 we recall some results from \cite{P}. We aim to present Corollary 2.3 (Corollary 4.5 in \cite{P}) which is the main tool in our proof. The constructive proof of Tarski's theorem is presented in Section 3, which is the core of the paper. The main result is Theorem 3.2. As a corollary of Theorem 3.2 we get a computable condition for the existence of a common root of multivariate polynomials, see Corollary 3.3. By a \textit{computable condition} (or \textit{computable criterion}) we mean a procedure employing only a finite number of arithmetic operations. The last section of the paper is devoted to examples of applications of Theorem 3.2 in mathematics and physics. These applications are connected with some problems of quantum information theory which we consider in \cite{JP} and \cite{PJ} (see also \cite{JKP} and \cite{PKJ} for similar topics).
The results of Section 3 are part of the author's master's thesis, supervised by Stanis{\l}aw Kasjan in 2007. The author is grateful to the supervisor for all discussions and support during the work on the thesis.
We introduce some notation and terminology. Assume that $\CL$ is a language and $\varphi_{1},...,\varphi_{n}$ are $\CL$-formulas. Then $\bigwedge_{i=1}^{n}\varphi_{i}$ and $\bigvee_{i=1}^{n}\varphi_{i}$ denote the formulas $\varphi_{1}\wedge...\wedge\varphi_{n}$ and $\varphi_{1}\vee...\vee\varphi_{n}$, respectively. If $\und{x}=(x_{1},...,x_{m})$ is a sequence of variables and $Q$ is a quantifier, then $Q_{\und{x}}$ is the abbreviation of $Q_{x_{1}}...Q_{x_{m}}$. Generally, if $A=\{a_{1},...,a_{s}\}$ is a set of variables, then $Q_{A}$ is the abbreviation of $Q_{b_{1}}...Q_{b_{s}}$ where $b_{1},...,b_{s}$ is any permutation of $a_{1},...,a_{s}$. This is consistent since, for any $\CL$-formula $\varphi$, the formulas $Q_{b_{1}}...Q_{b_{s}}\varphi$ and $Q_{a_{1}}...Q_{a_{s}}\varphi$ are equivalent.
If $\varphi$ is an $\CL$-formula and $a_{1},...,a_{n}$ are all free variables of $\varphi$, then sometimes we write $\varphi(\und{a})$ instead of $\varphi$ where $\und{a}=(a_{1},...,a_{n})$. Recall that if $\varphi(\und{a})$ is an atomic $\CF$-formula ($\CF$ denotes the language of fields), then $\varphi(\und{a})$ has the form $F=0$ or $F\neq 0$ where $F$ is a multivariate polynomial in $\ZZ[a_{1},...,a_{n}]$.
If $\und{a}=(a_{1},...,a_{n})$, then $\ZZ[\und{a}]$ denotes the ring $\ZZ[a_{1},...,a_{n}]$. If $\und{x}=(x_{1},...,x_{m})$, then $\ZZ[\und{a}][\und{x}]$ denotes the ring of polynomials in $m$ variables $x_{1},...,x_{m}$ over the ring $\ZZ[\und{a}]$. A polynomial $F$ in $\ZZ[\und{a}][\und{x}]$ has the form $\sum_{\alpha\in\NN^{m}}f_{\alpha}\cdot\und{x}^{\alpha}$ where $f_{\alpha}\in\ZZ[\und{a}]$ for any $\alpha\in\NN^{m}$ and $f_{\alpha}=0$ for almost all $\alpha\in\NN^{m}$. Here, $\und{x}^{\alpha}$ denotes $x_{1}^{\alpha_{1}}...x_{m}^{\alpha_{m}}$ where $\alpha=(\alpha_{1},...,\alpha_{m})\in\NN^{m}$. The degree of $F$ with respect to $x_{1},...,x_{m}$ is denoted by $\deg(F)$. Generally, if $C$ is a set of variables, then $\ZZ[C][\und{x}]$ is the ring of polynomials in $m$ variables $x_{1},...,x_{m}$ over the ring $\ZZ[C]$ of polynomials in variables from $C$.
We denote by $\NN$ the set of all natural numbers and by $\NN_{1}$ the set $\NN\setminus\{0\}$. Assume that $m\in\NN_{1}$. We view the set $\NN^{m}$ as a monoid with respect to the pointwise addition, denoted by $+$. We denote by $\und{0}$ the neutral element $(0,...,0)\in\NN^{m}$ of $+$. If $\alpha,\beta\in\NN^{m}$ and $\alpha+\gamma=\beta$ for some $\gamma\in\NN^{m}$, then we write $\alpha\para\beta$. Note that $\para$ defines an order on $\NN^{m}$ and $\NN^{m}$ is an ordered monoid with respect to $+$ and $\para$. Sometimes we treat the elements of the set $\NN^{m}$ as sequences of natural numbers. If $\alpha\in\NN^{m}$ and $\alpha=(a_{1},...,a_{m})$, then we set $|\alpha|=a_{1}+...+a_{m}$.
\section{Ascending chains of ideals in the polynomial ring}
In this section we recall the results of \cite{P} which are the main tool in the constructive proof of Tarski's theorem. The first goal is to recall the construction of a function with the \textit{bounding property}. We use this function in Theorem 2.2 to set a bound on the length of ascending chains of ideals in $K[x_{1},...,x_{m}]$ ($K$ is a field) which are generated by polynomials of degrees less than or equal to fixed natural numbers. Then we present Corollary 2.3, which we directly apply in the proof of Tarski's theorem. This section does not contain any proofs. We refer to \cite{P} for all the proofs and other details.
We denote by $\FF$ the set of all non-decreasing functions $\N_{1}\ra\N_{1}$. We write $f\leq f'$ if and only if $f,f'\in\FF$ and $f(n)\leq f'(n)$ for any $n\in\NN_{1}$. If $f\in\FF$ and $s\in\NN$, then ${}^{s}f:\NN_{1}\ra\NN_{1}$ is a function such that ${}^{s}f(n)=f(s+n)$ for any $n\in\NN_{1}$. Observe that ${}^{s}f\in\FF$. A sequence $\alpha_{1},...,\alpha_{t}\in\NN^{m}$ is an \textit{antichain} if and only if $\alpha_{i}\npara\alpha_{j}$ for any $i<j$. Assume that $f\in\FF$. We say that an antichain $\alpha_{1},...,\alpha_{t}\in\NN^{m}$ is \textit{$f$-bounded} if and only if $|\alpha_{i}|\leq f(i)$ for any $i=1,...,t$. Assume that $m\geq 1$ is a natural number. We say that a function $\CB_{m}:\FF\ra\NN$ has the \textit{bounding property for $m$} if and only if the following conditions are satisfied:
\begin{enumerate}[\rm(1)]
\item $t\leq\CB_{m}(f)$ for any $f\in\FF$ and $f$-bounded antichain $\alpha_{1},...,\alpha_{t}\in\NN^{m}$ of length $t$,
\item $\CB_{m}(f)\leq\CB_{m}(f')$ for any $f,f'\in\FF$ such that $f\leq f'$.
\end{enumerate}
We say that a function $\CB:\N_{1}\times\FF\ra\NN$ has the \textit{bounding property} if and only if, for any $m\in\N_{1}$, the function $\CB_{m}:\FF\ra\NN$ defined by $\CB_{m}(f)=\CB(m,f)$, for any $f\in\FF$, has the bounding property for $m$.
The existence of a function with the bounding property is a consequence of the Compactness Theorem of first order logic, see \cite{F} and \cite[Proposition 3.25]{AsPo} for more details. However, this approach does not provide the explicit form of such a function.
We recall from \cite{P} the construction of a function with the bounding property. Equivalently, we give a sequence $(\CB_{m})_{m\in\NN_{1}}$ of functions such that $\CB_{m}:\FF\ra\NN$ has the bounding property for $m$. The construction is inductive with respect to the number $m$. It is given in two main steps, with the second step divided into three parts.
Step 1. Assume that $m=1$. We define $\CB_{1}:\FF\ra\NN$ to be a function such that $\CB_{1}(f)=f(1)+1$ for any $f\in\FF$.
Step 2. Assume that $m\geq 2$ and the function $\CB_{m-1}:\FF\ra\NN$ is defined. In order to define $\CB_{m}:\FF\ra\NN$, we construct some sequence of functions $(\CB_{m}^{k})_{k=0}^{m}$, $\CB_{m}^{k}:\FF\times\NN^{k}\ra\NN$. This is done by the backward induction with respect to the number $k$. We give the construction in three steps.
Step 2.1. Assume that $k=m$. We define $\CB_{m}^{m}:\FF\times\NN^{m}\ra\NN$ to be a function such that $\CB_{m}^{m}(f,b_{1},...,b_{m})=(b_{1}+1)\cdot...\cdot(b_{m}+1)$ for any $f\in\FF$ and $(b_{1},...,b_{m})\in\NN^{m}$.
Step 2.2. Assume that $k\in\{0,...,m-1\}$ and the function $\CB_{m}^{k+1}:\FF\times\NN^{k+1}\ra\NN$ is defined. Suppose $f\in\FF$, $\beta\in\NN^{k}$ and let $g:\NN_{1}\ra\NN_{1}$ be a function such that $g(1)=1$ and $$g(n+1)=1+g(n)+\CB^{k+1}_{m}({}^{g(n)}f,\beta,f(g(n)))$$ for any $n\geq 1$. We have $g\in\FF$ and hence there is a function $\CF_{m}^{k}:\FF\times\NN^{k}\ra\FF$ such that $(f,\beta)\mapsto g$. We set $\CB_{m}^{k}(f,\beta)=g(\CB_{m-1}(f\circ g)+1)$ for any $f\in\FF$, $\beta\in\NN^{k}$ and $g=\CF_{m}^{k}(f,\beta)$.
Step 2.3. We identify $\CB_{m}$ with $\CB_{m}^{0}$.
\vsp
The above procedure defines a sequence of functions $(\CB_{m})_{m\in\NN_{1}}$, $\CB_{m}:\FF\ra\NN$. Let $\CB:\NN_{1}\times\FF\ra\NN$ be a function such that $\CB(m,f)=\CB_{m}(f)$ for any $m\in\NN_{1}$ and $f\in\FF$. In Section 3 of \cite{P} we prove the following theorem.
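For concreteness, the two-step construction above can be transcribed directly into code. This transcription is ours, not part of \cite{P}; the values of $\CB$ grow extremely quickly, so the computation is only feasible for trivial inputs (for the constant function $f\equiv 1$ it yields $\CB_{1}(f)=2$ and $\CB_{2}(f)=25$).

```python
def B(m, f):
    """The bounding function B_m(f); f is a non-decreasing map on positive ints."""
    if m == 1:
        return f(1) + 1                      # Step 1: B_1(f) = f(1) + 1
    return Bk(m, 0, f, ())                   # Step 2.3: B_m = B_m^0

def Bk(m, k, f, beta):
    """B_m^k(f, beta), defined by backward induction on k (Steps 2.1 and 2.2)."""
    if k == m:                               # Step 2.1: product of the (b_i + 1)
        prod = 1
        for b in beta:
            prod *= b + 1
        return prod
    # Step 2.2: g(1) = 1 and
    # g(n+1) = 1 + g(n) + B_m^{k+1}({}^{g(n)}f, beta, f(g(n)))
    g_vals = {1: 1}
    def g(n):
        if n not in g_vals:
            gp = g(n - 1)
            shifted = lambda x, s=gp: f(s + x)          # the shift {}^{g(n-1)}f
            g_vals[n] = 1 + gp + Bk(m, k + 1, shifted, beta + (f(gp),))
        return g_vals[n]
    # B_m^k(f, beta) = g(B_{m-1}(f o g) + 1)
    return g(B(m - 1, lambda n: f(g(n))) + 1)

one = lambda n: 1          # the constant function f = 1
print(B(1, one))           # 2
print(B(2, one))           # 25
```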
\begin{thm} The function $\CB_{m}:\FF\ra\NN$ has the bounding property for $m$, for any $m\in\NN_{1}$. Consequently, the function $\CB:\NN_{1}\times\FF\ra\NN$ has the bounding property.
\end{thm}
{\bf Proof.} See Proposition 3.1, Proposition 3.2 and Corollary 3.4 from \cite{P}. \epv
Assume that $K$ is a field, $m\geq 1$ is a natural number and $f:\N_{1}\ra\N_{1}$ is an arbitrary function. An ascending chain $I_{1}\subsetneq ...\subsetneq I_{t}$ of ideals in $K[x_{1},...,x_{m}]$ is \textit{$f$-bounded} if and only if $I_{j}$ is generated by polynomials of degrees less than or equal to $f(j)$, for any $j=1,...,t$.
Theorem 2.1 is used in \cite{P} to give a bound on the length of $f$-bounded ascending chains of ideals in $K[x_{1},...,x_{m}]$ depending on $m$ and $f$. We recall the appropriate theorem below.
\begin{thm} Assume that $m\geq 1$ and $f:\NN_{1}\ra\NN_{1}$ is a function. Suppose that $I_{1}\subsetneq ...\subsetneq I_{t}$ is an $f$-bounded ascending chain of ideals in $K[x_{1},...,x_{m}]$ of length $t$. Let $g:\NN_{1}\ra\NN_{1}$ be a non-decreasing function such that $g(n)$ is the greatest number of the set $\{f(1),f(2),...,f(n)\}$, for any $n\in\NN_{1}$. Then $t\leq\CB(m,g)$. In particular, we have $t\leq\CB(m,f)$ if $f$ is non-decreasing.
\end{thm}
{\bf Proof.} See Theorem 4.2 from \cite{P}. \epv
Let $d\geq 1$ be a fixed natural number. We denote by $3^{n}d$ the function $f:\NN_{1}\ra\NN_{1}$ such that $f(n)=3^{n}d$. Fix $m\geq 1$, $d\geq 1$ and define the function $\gamma_{m,d}:\NN\ra\NN$ such that $$\gamma_{m,d}(i)=(3^{\CB(m,3^{n}d)-1}-1)d+i$$ for any $i\in\NN$. Applying Theorem 2.2 and the theory of Gr\"obner bases (see \cite{AL}) we prove in \cite{P} the following result, which plays a crucial role in the constructive proof of Tarski's theorem.
\begin{cor} Assume that $m\geq 1$ and $d\geq 1$. Then for any $G\in K[x_{1},...,x_{m}]$ and $F_{1},...,F_{s}\in K[x_{1},...,x_{m}]$ such that $\deg(F_{i})\leq d$ for $i=1,...,s$ the following condition is satisfied: $G\in\lan F_{1},...,F_{s}\ran$ if and only if there exist $H_{1},...,H_{s}\in K[x_{1},...,x_{m}]$ such that $G=H_{1}F_{1}+...+H_{s}F_{s}$ and $\deg(H_{i})\leq\gamma_{m,d}(\deg(G))$ for $i=1,...,s$.
\end{cor}
{\bf Proof.} See Proposition 4.3, Corollary 4.4 and Corollary 4.5 from \cite{P}. \epv
\section{Tarski's theorem}
This section is devoted to the constructive proof of Tarski's theorem. We recall that it is enough to give the construction for some special formulas over the language $\CF$ of fields which we call \textit{common root formulas}.
Let $\CL$ be a language and assume that $\varphi$ is a formula over $\CL$. It is well known that $\varphi$ can be written in \textit{prenex normal form}, see for example \cite[Chapter 3]{Rot}. It follows from De Morgan's laws that $\varphi$ is equivalent with the formula $\bigvee_{i=1}^{t}\exists_{\und{x}}(\bigwedge_{j=1}^{s_{i}}\varphi_{ij})$ where $\varphi_{ij}$ are atomic formulas or negations of atomic formulas.
An $\CL$-formula is a \textit{conjunctive prenex normal formula} if it has the form $\exists_{\und{x}}(\bigwedge_{i=1}^{s}\varphi_{i})$ where each $\varphi_{i}$ is an atomic $\CL$-formula or a negation of such. Hence a theory $T$ over $\CL$ admits quantifier elimination if and only if for any conjunctive prenex normal $\CL$-formula $\varphi$ there is a quantifier-free $\CL$-formula $\varphi'$ such that $T\vdash\varphi\lra\varphi'$. We recall below the form of conjunctive prenex normal $\CF$-formulas.
Assume that $\und{a}=(a_{1},...,a_{n})$, $\und{x}=(x_{1},...,x_{m})$ and $F_{1},...,F_{s}\in\ZZ[\und{a}][\und{x}]$. Assume that $F_{i}=\sum_{\alpha\in\NN^{m}}f_{i,\alpha}\cdot\und{x}^{\alpha}$ where $f_{i,\alpha}\in\ZZ[\und{a}]$ for any $i=1,...,s$, $\alpha\in\NN^{m}$ and $f_{i,\alpha}=0$ for almost all $\alpha\in\NN^{m}$. A formula of the form $\exists_{\und{x}}(F_{1}(\und{x})=0\wedge...\wedge F_{s}(\und{x})=0)$ is a \textit{common root formula}.
\begin{prop}
Any conjunctive prenex normal $\CF$-formula is equivalent with some common root formula.
\end{prop}
{\bf Proof.} Assume that $\und{a}=(a_{1},...,a_{n})$ and $\varphi(\und{a})$ is a conjunctive prenex normal $\CF$-formula. Then $$\varphi(\und{a})=\exists_{\und{x}}(F_{1}(\und{x})=0\wedge...\wedge F_{r}(\und{x})=0\wedge G_{1}(\und{x})\neq 0\wedge...\wedge G_{t}(\und{x})\neq 0)$$ where each $F_{i},G_{j}$ is a polynomial of the form $\sum_{\alpha\in\NN^{m}}f_{\alpha}\cdot\und{x}^{\alpha}$ where $f_{\alpha}\in\ZZ[\und{a}]$ and $f_{\alpha}=0$ for almost all $\alpha\in\NN^{m}$. Since the formula $G_{1}(\und{x})\neq 0\wedge...\wedge G_{t}(\und{x})\neq 0$ is equivalent with $(G_{1}\cdot...\cdot G_{t})(\und{x})\neq 0$, the formula $\varphi(\und{a})$ is equivalent with $$\varphi'(\und{a})=\exists_{\und{x},z}(F_{1}(\und{x})=0\wedge...\wedge F_{r}(\und{x})=0\wedge zG(\und{x})-1=0)$$ where $G=G_{1}\cdot...\cdot G_{t}$. This shows the assertion. \epv
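The trick of replacing $G\neq 0$ by $\exists_{z}\,zG-1=0$ can be sanity-checked by brute force over a small finite field (the proposition itself concerns arbitrary fields; the polynomials below are hand-picked for illustration).

```python
# Brute-force check of the Proposition 3.1 reduction over GF(7):
# exists x (F(x)=0 and G(x)!=0)  iff  exists x,z (F(x)=0 and z*G(x)-1=0).
p = 7
F = lambda x: (x**2 - 1) % p          # F(x) = x^2 - 1, roots 1 and -1
G = lambda x: (x - 1) % p             # G(x) = x - 1

lhs = any(F(x) == 0 and G(x) != 0 for x in range(p))
rhs = any(F(x) == 0 and (z * G(x) - 1) % p == 0
          for x in range(p) for z in range(p))
print(lhs, rhs)   # True True (witness x = -1 = 6, where G(6) = 5 is invertible)
```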
Common root formulas play a crucial role in the constructive proof of Tarski's theorem. We aim to give an equivalent quantifier-free form of common root formulas.
Assume that $d,d'\geq 1$ are some fixed natural numbers. Let $F_{1},...,F_{s}\in\ZZ[\und{a}][\und{x}]$ be polynomials such that $\deg(F_{i})\leq d$ and $F_{i}=\sum_{|\alpha|\leq d}f_{i,\alpha}\cdot\und{x}^{\alpha}$ where $f_{i,\alpha}\in\ZZ[\und{a}]$ for any $i=1,...,s$ and $\alpha\in\NN^{m}$. Set $X=\{\delta\in\NN^{m}\,|\,|\delta|\leq d+d'\}$ and $X'=\{\beta\in\NN^{m}\,|\,|\beta|\leq d'\}$. Let $A^{d,d'}_{F_{1},...,F_{s}}=A$ be a matrix with rows indexed by elements of $X$, columns indexed by elements of $\{1,...,s\}\times X'$ and $$A(\delta,(i,\beta))=\left\{\begin{array}{cccc}f_{i,\delta-\beta}&&\textnormal{if $\beta\para\delta$,}\\0&&\textnormal{otherwise}\end{array}\right.$$ where $\delta\in X$, $\beta\in X'$ and $i\in\{1,...,s\}$. Let $\widehat{A}^{d,d'}_{F_{1},...,F_{s}}=\widehat{A}$ be the augmented matrix $(A|B)$ where $B$ is a column with rows indexed by $X$, with the entry $1$ in the row corresponding to $\delta=\und{0}$ and $0$ elsewhere. Assume that $S(A)$ and $S(\widehat{A})$ are the sets of all square submatrices of $A$ and $\widehat{A}$, respectively. Moreover, assume that $S(\widehat{A},n)$ is the subset of $S(\widehat{A})$ consisting of the matrices of order greater than $n$. We define a quantifier-free formula $$\Delta^{d,d'}_{F_{1},...,F_{s}}(\und{a})=\bigwedge_{M\in S(A)}(\det M\neq 0\ra(\bigvee_{N\in S(\widehat{A},o_{M})}\det N\neq 0))$$ where $o_{M}$ denotes the order of the matrix $M$. Assuming that $\und{a}$ is a tuple of elements of some field, the formula $\Delta^{d,d'}_{F_{1},...,F_{s}}(\und{a})$ holds if and only if the rank of the matrix $\widehat{A}$ is greater than the rank of $A$.
In the following theorem we show that common root formulas are equivalent with quantifier-free formulas of the form $\Delta^{d,d'}_{F_{1},...,F_{s}}(\und{a})$. This theorem is a constructive version of Tarski's theorem on quantifier elimination in the theory of ACF, because any $\CF$-formula can be easily written as a disjunction of common root formulas.
The aforementioned theorem is the main result of the paper. The proof is based on Corollary 2.3 and hence on the results of \cite{P} recalled in Section 2.
\begin{thm} Assume that $\und{a}=(a_{1},...,a_{n})$, $\und{x}=(x_{1},...,x_{m})$, $F_{1},...,F_{s}\in\ZZ[\und{a}][\und{x}]$ and $\varphi(\und{a})=\exists_{\und{x}}(F_{1}(\und{x})=0\wedge...\wedge F_{s}(\und{x})=0)$. Assume that $d$ is the maximum of degrees of polynomials $F_{1},...,F_{s}$ and $d'=\gamma_{m,d}(0)$. Then $\ACF\vdash\varphi(\und{a})\lra\Delta^{d,d'}_{F_{1},...,F_{s}}(\und{a})$.
\end{thm}
{\bf Proof.} Assume that $F_{i}=\sum_{|\alpha|\leq d}f_{i,\alpha}\cdot\und{x}^{\alpha}$ where $f_{i,\alpha}\in\ZZ[\und{a}]$ for any $i=1,...,s$ and $\alpha\in\NN^{m}$. Assume that $K$ is an algebraically closed field and $\und{a}\in K^{n}$. Then $f_{i,\alpha}(\und{a})\in K$ for any $i=1,...,s$, $\alpha\in\NN^{m}$ and thus it follows from Hilbert's Nullstellensatz that $\varphi(\und{a})$ holds if and only if $1\notin\langle F_{1},...,F_{s}\rangle$. Corollary 2.3 implies that $1\notin\langle F_{1},...,F_{s}\rangle$ is equivalent with non-existence of polynomials $H_{1},...,H_{s}\in K[x_{1},...,x_{m}]$ such that $1=H_{1}F_{1}+...+H_{s}F_{s}$ and $\deg(H_{i})\leq\gamma_{m,d}(0)=d'$ for any $i=1,...,s$. The fact that $\deg(H_{i})\leq d'$ enables to write the latter condition in the first order language.
We introduce some sets of variables. Assume that $C_{i}=\{c_{i,\beta}\}_{|\beta|\leq d'}$ where $\beta\in\NN^{m}$ and $i=1,...,s$. Let $H_{i}\in\ZZ[C_{i}][\und{x}]$ be a polynomial of the form $H_{i}=\sum_{|\beta|\leq d'}c_{i,\beta}\cdot\und{x}^{\beta}$ for $i=1,...,s$. Set $C=\bigcup_{i=1}^{s}C_{i}$ and consider the formula $\psi(\und{a})=\forall_{C}H_{1}F_{1}+...+H_{s}F_{s}\neq 1$ which is equivalent with $\varphi(\und{a})$. Observe that $$H_{1}F_{1}+...+H_{s}F_{s}=\sum_{|\delta|\leq d+d'}(\sum_{\beta+\alpha=\delta}c_{1,\beta}f_{1,\alpha}+...+c_{s,\beta}f_{s,\alpha})\und{x}^{\delta}$$ where $\delta\in\NN^{m}$, and hence the formula $\psi(\und{a})$ expresses the non-existence of a solution of some system of linear equations with the set $C$ as the set of unknowns. This system can be written in such a way that the matrices $A=A^{d,d'}_{F_{1},...,F_{s}}$ and $\widehat{A}=\widehat{A}^{d,d'}_{F_{1},...,F_{s}}$ are its coefficient matrix and augmented matrix, respectively. Then it follows from the Kronecker-Capelli theorem that $\psi(\und{a})$ holds if and only if $\rk(\widehat{A})>\rk(A)$ where $\rk(M)$ denotes the rank of the matrix $M$. This is equivalent with $\Delta^{d,d'}_{F_{1},...,F_{s}}(\und{a})$. \epv
As a direct consequence of our considerations we get the following computable condition for the existence of a common root of multivariate polynomials.
\begin{cor} Assume that $K$ is an algebraically closed field, $d$ is a natural number, $F_{1},...,F_{s}\in K[x_{1},...,x_{m}]$ and $F_{i}=\sum_{|\alpha|\leq d}a_{i,\alpha}\cdot\und{x}^{\alpha}$ for $i=1,...,s$. Set $d'=\gamma_{m,d}(0)$. The polynomials $F_{1},...,F_{s}$ have a common root if and only if $\rk(\widehat{A})>\rk(A)$ where $A$ and $\widehat{A}$ are matrices obtained from $A^{d,d'}_{F_{1},...,F_{s}}$ and $\widehat{A}^{d,d'}_{F_{1},...,F_{s}}$, respectively, by replacing the elements $f_{i,\alpha}$ by $a_{i,\alpha}$ for any $i=1,...,s$, $\alpha\in\NN^{m}$.
\end{cor}
{\bf Proof.} The proof is a simplified version of the proof of Theorem 3.2. \epv
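To see the criterion at work in the simplest univariate case ($m=1$), the following sketch builds the matrices $A$ and $\widehat{A}$ over the rationals and compares their ranks. For tractability we pick a small $d'$ by hand instead of the (astronomically large) bound $\gamma_{m,d}(0)$; for these particular inputs the small $d'$ already suffices, but in general only the bound of Corollary 3.3 makes the test conclusive.

```python
from fractions import Fraction

def rank(rows):
    """Rank over the rationals via Gaussian elimination."""
    M = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def have_common_root(polys, d, dprime):
    """polys: coefficient lists [f_0, f_1, ...] of univariate polynomials (m = 1).
    Builds A (rows: monomials x^delta with delta <= d + d'; columns: pairs
    (i, beta) with beta <= d') and the augmented matrix (A|B), then applies
    the rank test rank(A|B) > rank(A) of Corollary 3.3."""
    cols = [(i, b) for i in range(len(polys)) for b in range(dprime + 1)]
    A = [[polys[i][delta - b] if 0 <= delta - b < len(polys[i]) else 0
          for (i, b) in cols]
         for delta in range(d + dprime + 1)]
    Ahat = [row + ([1] if delta == 0 else [0])   # B: the coefficients of G = 1
            for delta, row in enumerate(A)]
    return rank(Ahat) > rank(A)

print(have_common_root([[-1, 1], [-2, 1]], d=1, dprime=1))     # False: x-1, x-2
print(have_common_root([[-1, 1], [-1, 0, 1]], d=2, dprime=2))  # True: x-1, x^2-1
```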
\section{Applications}
In this section we present some applications of Theorem 3.2 in mathematics and physics, especially in quantum information theory. We concentrate on the problem of the existence of common invariant subspaces of square complex matrices and related problems. In that sense, we continue our research (and generalize the results) from \cite{JP} and \cite{PJ}, see also \cite{JKP} and \cite{PKJ}.
We give this section an expository character and leave the details for further papers. We recommend \cite{BZ} (see also \cite{HZ}) as a comprehensive monograph on quantum information theory and quantum mechanics in general.
Assume that $A,A_{1},...,A_{s}$ are $n\times n$ matrices over the field $\mathbb{C}$ of complex numbers and $V$ is a subspace of $\mathbb{C}^{n}$. We say that $V$ is \textit{$A$-invariant} if and only if $Av\in V$ for any $v\in V$. We say that $V$ is a \textit{common invariant subspace} of $A_{1},...,A_{s}$ if and only if $V$ is $A_{i}$-invariant for any $i=1,...,s$.
The problem of the existence of common invariant subspaces of square complex matrices appears in many areas of mathematics and physics. Therefore, computable conditions for the existence of such subspaces are heavily studied. In \cite{Sh} the author gives a computable condition for the existence of a common eigenvector (i.e. a common invariant subspace of dimension one) of two matrices. This result is generalized to a finite number of matrices in \cite{JP}, see also \cite{PJ} for similar concepts. In \cite{AGI}, \cite{AI}, \cite{GI} and \cite{Ts} only two matrices are considered, but the authors study common invariant subspaces of dimensions higher than one. In this case it is often assumed that given matrices have pairwise different eigenvalues. This assumption is made in \cite{GI} and \cite{Ts} where the authors reduce the general problem to the question of the existence of a common eigenvector of suitable \textit{compound matrices}, see \cite{MM}.
The general version of the problem, with arbitrary finite number of matrices and arbitrary dimension of common invariant subspaces, was solved only in 2004 in \cite{ArPe}. In the solution some basic techniques of Gr{\"o}bner bases theory and algebraic geometry are used.
Here we apply Theorem 3.2 (or Corollary 3.3) to solve the general problem of the existence of a common invariant subspace. Assume that $A_{1},...,A_{s}$ are complex $n\times n$ matrices. Let $V=\{v_{i}^{j}\,|\,i=1,...,k,\,j=1,...,n\}$ be a set of variables and $\widehat{V}$ the set of all $\mathbb{C}$-linear combinations of elements of $V$. Set $\und{v}_{i}=\left[\begin{matrix}v_{i}^{1}&&\hdots&&v_{i}^{n}\end{matrix}\right]^{T}$ for $i=1,...,k$ and denote by $M_{V}$ the matrix $(\und{v}_{1}|...|\und{v}_{k})$. The formula $\rk(M_{V})=k$ states that the vectors $\und{v}_{1},...,\und{v}_{k}$ are linearly independent, and can be written in the first order language. The formula $\bigwedge_{j=1}^{k}A_{i}\und{v}_{j}\in\widehat{V}$, for any $i=1,...,s$, states that $\widehat{V}$ is $A_{i}$-invariant, and can be written in the first order language. Thus the first order formula $$\exists_{V}(\rk(M_{V})=k\wedge(\bigwedge_{i=1}^{s}\bigwedge_{j=1}^{k}A_{i}\und{v}_{j}\in\widehat{V})),$$ where $k\leq n$, expresses the existence of a common invariant subspace of $A_{1},...,A_{s}$ of dimension $k$. It is easy to see that this is in fact a common root formula and hence Theorem 3.2 (or Corollary 3.3) yields its equivalent quantifier-free form. We call this quantifier-free form a \textit{$\CIS_{k}$-formula} for $A_{1},...,A_{s}$. Such a formula can be viewed as a computable condition for the existence of a common invariant subspace of $A_{1},...,A_{s}$ of dimension $k$.
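For $k=1$ the property in question is the existence of a common eigenvector. As a point of comparison, the sketch below performs a brute-force floating-point check of that property; it is not an implementation of the symbolic $\CIS_{1}$-formula, it is complete only when the first matrix has pairwise distinct eigenvalues, and the matrices are made up for illustration.

```python
import numpy as np

def common_eigenvector(mats, tol=1e-9):
    """Brute-force numerical check for a common eigenvector (a common invariant
    subspace of dimension one).  Complete only when mats[0] has pairwise
    distinct eigenvalues, so that its eigenvectors are unique up to scale."""
    _, vecs = np.linalg.eig(mats[0])
    for j in range(vecs.shape[1]):
        v = vecs[:, j]
        # v is a common eigenvector iff every matrix maps it onto its own line
        if all(np.linalg.norm(M @ v - np.vdot(v, M @ v) * v) < tol for M in mats[1:]):
            return v
    return None

A1 = np.diag([1.0, 2.0, 3.0])
A2 = np.array([[5.0, 0, 0], [0, 1, 1], [0, 0, 2]])   # shares e.g. e_1 with A1
B  = np.array([[2.0, 1, 1], [1, 2, 1], [1, 1, 2]])   # no coordinate eigenvector

print(common_eigenvector([A1, A2]) is not None)   # True
print(common_eigenvector([A1, B]) is not None)    # False
```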
Common invariant subspaces, sometimes satisfying some additional conditions, play a prominent role in quantum information theory. We show this role on two examples concerning quantum channels (so our treatment of the subject is far from being complete): irreducible quantum channels and decoherence-free subspaces. In these examples we apply $\CIS_{k}$-formulas and Theorem 3.2 to generalize some results from \cite{JP} and \cite{PJ}.
Assume that $\mathbb{M}_{n}(\mathbb{C})$ is the vector space of all $n\times n$ complex matrices. A \textit{quantum channel} is a trace preserving completely positive map $\Phi:\mathbb{M}_{n}(\mathbb{C})\ra\mathbb{M}_{n}(\mathbb{C})$ (we refer to \cite{HZ} for all the definitions). It follows from \cite[5.2.3]{HZ} that there are matrices $A_{1},...,A_{s}\in\mathbb{M}_{n}(\mathbb{C})$ such that $\Phi(X)=\sum_{i=1}^{s}A_{i}XA_{i}^{*}$ for any $X\in\mathbb{M}_{n}(\mathbb{C})$ where $A^{*}$ denotes the matrix adjoint to $A$.
An important subclass of the class of all quantum channels is formed by the \textit{irreducible} quantum channels. We refer to \cite{BZ} and \cite{HZ} for the definition and main properties of these channels. It is proved in \cite{Fa} that a quantum channel $\Phi(X)=\sum_{i=1}^{s}A_{i}XA_{i}^{*}$ is irreducible if and only if the matrices $A_{1},...,A_{s}$ do not have a nontrivial common invariant subspace. Hence the $\CIS_{k}$-formulas for $A_{1},...,A_{s}$ provide a computable condition for the irreducibility of $\Phi$. This generalizes the main results of \cite{JP}, see especially Sections 3 and 4 of \cite{JP}.
Quantum channels are used to transmit quantum information. Unfortunately, quantum information may be easily corrupted by a number of factors, see \cite{BL}. Any such factor is described as \textit{decoherence}. A way to overcome the effects of decoherence is to ``hide'' quantum information from the environment in some ``quiet corner''. This quiet corner is called a \textit{decoherence-free subspace} (DFS).
There are a few different mathematical definitions of DFS in the literature, see \cite{KMSW} for the details. In \cite{PJ} we define a DFS as a \textit{common reducing unitary subspace}. We recall this definition below.
Assume that $A,A_{1},...,A_{s}\in\mathbb{M}_{n}(\mathbb{C})$ and $W$ is a subspace of $\mathbb{C}^{n}$. We say that $W$ is a \textit{reducing} subspace of $A$ (or \textit{$A$-reducing}) if and only if $W$ is an invariant subspace for $A$ and $A^{*}$. We say that $W$ is a \textit{common reducing subspace} of $A_{1},...,A_{s}$ if and only if $W$ is $A_{i}$-reducing for any $i=1,...,s$.
Assume that $A_{1},...,A_{s}\in\mathbb{M}_{n}(\mathbb{C})$ and $\Phi(X)=\sum_{i=1}^{s}A_{i}XA_{i}^{*}$ is a quantum channel. A nonzero subspace $W$ of $\mathbb{C}^{n}$ is a \textit{common reducing unitary subspace} (or a \textit{decoherence-free subspace}) for $\Phi$ if and only if $W$ is a common reducing subspace of $A_{1},...,A_{s}$ and there exists a unitary matrix $U\in\mathbb{M}_{n}(\mathbb{C})$ and complex numbers $g_{1},...,g_{s}$ such that $A_{i}w=(g_{i}U)w$ for any $w\in W$ and $i=1,...,s$.
The conditions that $U\in\mathbb{M}_{n}(\mathbb{C})$ is a unitary matrix and $A_{i}w=(g_{i}U)w$ for any $w\in W$ and $i=1,...,s$ can be written in the first order language. Hence there is a formula expressing the existence of a common reducing unitary subspace of dimension $k$. This formula is similar to $\CIS_{k}$-formula. Consequently, Theorem 3.2 provides a computable condition for the existence of decoherence-free subspaces. This generalizes the main results of \cite{PJ}, see especially Section 3 of \cite{PJ}.
The contents of the section reveal that there is an impact of quantifier elimination theory on applied mathematics. This impact has been recently noticed in a number of papers, see for example \cite{WCP-G} and \cite{SZ}.
The results of Section 3 imply that every problem which can be written in the first order language of fields can be equivalently expressed as a computable condition. Moreover, Theorem 3.2 provides the exact form of this condition. It is our opinion that this opens the possibility for other applications of quantifier elimination in mathematical sciences.
\section*{Acknowledgements} This research has been supported by grant No. DEC-2011/02/A/ST1/00208 of National Science Center of Poland.
\section{Introduction}
\label{sec:intro}
Shaped and characterized femtosecond pulses are in widespread demand amongst the quantum control community \cite{Goswami2003,Ohmori2009,Dantus2004}. Through tailoring the phase, amplitude or polarization of the control pulse, the evolution of a quantum state may be manipulated in order to steer it towards a desired outcome. A typical scenario is the design of optical fields to control molecular motion, including the prospect of achieving site-specific chemistry and intramolecular rearrangements. During the last two decades, many impressive results \cite{Bonacic-Koutecky2006,Levis2001,Weinacht1999,Monmayrant2006} have arisen from technological breakthroughs in the generation of arbitrarily tailored pulses \cite{Monmayrant2010}.
Two principal active pulse-shaping techniques for ultrashort pulses are at the disposal of the experimentalist: a spatial light modulator (SLM) placed in the Fourier plane of a $4f$ zero-dispersion line \cite{Weiner2000,Monmayrant2004} or an acousto-optic programmable dispersive filter (AOPDF) \cite{Verluise2000}. Extensive studies of the $4f$ line have extended its available wavelength range and characterized its behaviour. In particular, it is now well known both experimentally and theoretically that such devices lead to spatio-temporal coupling effects, whereby the shaped electric field is dependent on the spatial position in the beam \cite{Danailov1989,Wefers1995,Wefers1996,Dorrer1998,Tanabe2002}. These studies have more recently been extended to the focal volume after a lens \cite{Sussman2008,Frei2009}.
By contrast, the AOPDF --- a newer technology within the control field --- has been less well characterized. Its first application entailed the corrective shaping of ultrashort pulses before an amplifier in order to improve compression of the amplified output \cite{Seres2003}. More recently, an angular dispersion effect which could affect such a laser chain has been presented \cite{Borzsonyi2010}. Further to this application, the AOPDF's shaping versatility, together with the large spectral range spanned (from the UV \cite{Coudreau2006} through the visible \cite{Monmayrant2005} to the near IR \cite{Verluise2000,Pittman2002}) renders it a valuable tool for control experiments \cite{Form2008}. In particular, the first--excited-state transitions of many organic and inorganic molecules lie in the UV wavelength range; hence the development of a practical UV pulse shaper is a great challenge and active field within the community. Very recently, interesting results have been obtained using shaped ultraviolet pulses \cite{Tseng2009, Roth2009,Greenfield2009}, and the variety of implementations of AOPDFs and SLMs is constantly increasing \cite{Monmayrant2010}. Amongst these contenders, the UV AOPDF based upon a KDP crystal is a good candidate, since it is versatile and tunable on a broad spectral range (\unit[250-410]{nm}) matching typical molecular electronic absorption bands \cite{Weber2010}. Nonetheless, to date no complete characterization has been performed of the spatio-temporal characteristics of AOPDF-shaped pulses --- in particular in the UV range. Indeed, some sources even assert AOPDFs to be entirely free of such effects \cite{Lee2009}, in contrast to the much maligned $4f$ line. At least one distortion, however, has already been identified: a lateral displacement which depends on the acoustic wave profile in the crystal \cite{Krebs2010}.
In this paper, we have undertaken the complete characterization of the space-time coupling effects produced by the AOPDF using spatially and spectrally resolved Fourier-transform interferometry (SSI) \cite{Monmayrant2010}. SSI is an interferometric technique that entails a relative measurement of the spectral phase between a reference and unknown pulse --- it thus lends itself to the measurement of the transfer function of a pulse shaper. As a metrology tool, SSI is suited to low pulse energies since it does not necessitate any nonlinear processes. (In the event that knowledge of the spectral phase of the input pulse \emph{per se} is required, absolute pulse characterization techniques may be applied \cite{Baum2004,Kane1994}.) This technique provides spatio-temporal resolution of the shaped pulses; it thus facilitates a comprehensive quantitative analysis of the ubiquitous spatio-temporal coupling induced by the AOPDF together with an explanation and numerical description of the physical mechanism. Our analysis encompasses a range of pulse shapes that are of the broadest utility to the control community.
\section{Methods}
\label{sec:methods}
\begin{figure}
\centering
\includegraphics[width = 0.9\columnwidth]{McCabe200910_Fig1.jpg}
\caption{The AOPDF (Fastlite Dazzler\texttrademark) SI characterization setup. The pulse shaper is placed in one arm of an interferometer. The unknown and reference arms are recombined at the entrance slit to a two-dimensional spectrometer with a slight angle and variable delay. The imaging spectrometer measures the resultant interference fringes, from which the relative spectral phase may be extracted. The spectrometer measures a spatially resolved spectrum along the slit axis $x$. A cylindrical lens focusses the beams onto the entrance slit of the spectrometer along the non-imaged spatial axis. A half-waveplate rotates the polarization in the reference beam arm.}
\label{fig:layout}
\end{figure}
The ultrafast source used for these experiments is an ultraviolet (UV) pulse train generated from a chirped-pulse amplified Ti:sapphire laser (CPA) \cite{Backus1998} via subsequent nonlinear interactions. The \unit[800]{nm} pulses are combined with their second harmonic at \unit[400]{nm} in order to generate the sum frequency at $\lambda_0 = \unit[267]{nm}$. Typical characteristics of the UV source are \unit[2]{$\mu$J} pulses with \unit[2]{nm} full-width at half-maximum (FWHM) bandwidth at the \unit[1]{kHz} repetition rate of the master laser. A typical beam width is around \unit[1-2]{mm}. Spatially resolved cross-correlation measurements of the UV pulses indicate a \unit[250]{fs} pulse duration without significant spatial wavefront distortion. (The pulse bandwidth would support a transform-limited duration of around \unit[50]{fs}; the difference is attributable to dispersive effects within the nonlinear crystals of the source.)
AOPDF pulse shapers are based on the dispersive propagation of light within an acousto-optic crystal. An incident ordinary optical wave interacts with a collinear acoustic wave, resulting in the diffraction of the optical wave onto the extraordinary axis. The spectral phase of a femtosecond optical pulse may be shaped via manipulation of the diffraction location for each spectral component along the length of the birefringent crystal; meanwhile the amplitude may be modulated via the size of the acoustic wave \cite{Kaplan2002}. A commercial AOPDF (the Fastlite Dazzler\texttrademark T-UV-260-410/T2), based on a \unit[75]{mm} KDP crystal designed for use at UV wavelengths, is employed for these experiments \cite{Coudreau2006,Weber2010}. The programmable temporal window is essentially fixed by the length of the crystal and the difference in refractive index of the crystal axes, and is about \unit[7]{ps} for this apparatus. A part of this window (for example, \unit[3]{ps} for a shaping window of three times the FWHM bandwidth according to the parameters given above) is required to self-compensate the natural dispersion induced by the KDP itself; if necessary this could be obviated by means of an external compressor.
The performance of the AOPDF is characterized using SSI \cite{Monmayrant2010,Tanabe2002}. The AOPDF is placed in one arm of an interferometer and its shaped output interferes with an unshaped reference arm (see Fig.\ \ref{fig:layout}). Since the AOPDF rotates polarization, a half-waveplate was placed in the reference beam arm. The two arms are combined with a small angle and a controllable relative delay at the entrance slit of a home-built imaging spectrometer \cite{Austin2009}. Since the spectrometer employs a two-dimensional detector, it is able to make a measurement of the spectrum as a function of position along the slit (aligned parallel with the plane of diffraction of the AOPDF). The detector is a charge-coupled device (CCD) camera (EHD Imaging UK-1158UV) with a pixel size of \unit[6.45]{$\mu$m}, and the spectrometer has an optical resolution of \unit[0.08]{nm} and \unit[40]{$\mu$m} along the spectral and spatial axes respectively. In order to increase signal, a cylindrical lens focusses the beams onto the entrance slit along the orthogonal spatial axis (i.e.\ along the non-imaged axis of the beam). The ensuing interferograms are detected with single-shot sensitivity.
The 2D interferogram measured by the spectrometer [see Fig.\ \ref{fig:data}(a)] is
\begin{align}
\label{eq:SI}
S(x,\omega) & = \modbr{A_\textrm{s}(x,\omega) e^{i\phi_\textrm{s}(x,\omega)} + A_\textrm{r}(x,\omega) e^{i\sqbr{\phi_\textrm{r}(x,\omega) + \omega \tau + k_x x}}}^2 \notag \\
& = \modbr{A_\textrm{s}(x,\omega)}^2 + \modbr{A_\textrm{r}(x,\omega)}^2 \notag \\ &+ 2 \modbr{A_\textrm{s}(x,\omega)} \modbr{A_\textrm{r}(x,\omega)} \notag \\ & \times \cos \sqbr{\phi_\textrm{s}(x,\omega) - \phi_\textrm{r}(x,\omega) - \omega \tau - k_x x}.
\end{align}
Here $\tau$ is the time delay between the two pulses and $k_x$ is the difference between the transverse components of the propagation vectors (such that their subtended angle is $\theta = k_x/\modbr{\mathbf{k}}$). $A_\textrm{s}$, $A_\textrm{r}$, $\phi_\textrm{s}$ and $\phi_\textrm{r}$ denote the spatio-spectral amplitude and phase of the shaped (s) and reference (r) pulse respectively.
\begin{figure}
\centering
\includegraphics [width = \columnwidth]{McCabe200910_Fig2.pdf}
\caption{The Fourier filtering process. (a) A raw interferogram measured by the spectrometer camera. (b) A two-dimensional Fourier transform is performed and an a.c.\ term is filtered out within the Fourier domain. (c) An inverse two-dimensional Fourier transform of this term isolates the final term of equation \ref{eq:SI}. (d) Extracted phase difference $\phi_\textrm{s}(x,\omega) - \phi_\textrm{r}(x,\omega)$, modulo $2\pi$. A subsequent procedure calibrates the camera pixels into physical units of frequency and position.}
\label{fig:data}
\end{figure}
In order to extract a measurement of the spectral phase added by the AOPDF, this interferogram is Fourier transformed along both spatial and spectral dimensions. One of the a.c.\ terms is filtered out and inverse Fourier transformed with the carrier frequency removed. This isolates the final summand of equation \ref{eq:SI}, which contains the phase difference $\phi_\textrm{s}(x,\omega) - \phi_\textrm{r}(x,\omega)$ \cite{Takeda1982}. The spatial and spectral carriers, $k_x$ and $\tau$, are chosen in order to separate the a.c.\ and d.c.\ terms in the Fourier transform whilst ensuring that the fringe period is greater than the spectrometer resolution. In order to be able to handle complex temporal structure, a predominantly spatial carrier of $\theta \approx \unit[3]{mrad}$ and $\tau \approx 0$, giving rise to predominantly spatial fringes, is employed for these experiments. Typical data treated according to this process are shown in Fig.\ \ref{fig:data}.
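As an illustration of the extraction procedure described above, the following Python sketch builds a synthetic interferogram from equation \ref{eq:SI} (with $\phi_\textrm{r}=0$, $\tau=0$ and purely spatial fringes) and recovers the shaped spectral phase by two-dimensional Fourier filtering. All parameter values, grid sizes and variable names here are illustrative choices, not the experimental ones.

```python
import numpy as np

# Synthetic SSI interferogram and its Fourier filtering (illustrative values).
nx, nw = 256, 512
dx = 15e-6                                   # spatial sampling along the slit (m)
x = (np.arange(nx) * dx)[:, None]            # position along the slit
w = np.linspace(-3e13, 3e13, nw)[None, :]    # frequency offset from omega_0 (rad/s)

A = np.exp(-2 * w**2 / (2.6e13)**2)          # common Gaussian spectral amplitude
phi_s = 5e-27 * w**2                         # shaped spectral phase (a small chirp)
kx = 2 * np.pi * 32 / (nx * dx)              # spatial carrier: 32 fringes across window
tau = 0.0                                    # predominantly spatial fringes

S = np.abs(A * np.exp(1j * phi_s) +
           A * np.exp(1j * (w * tau + kx * x)))**2

# 2D Fourier transform; keep the a.c. sideband at negative spatial
# frequencies, invert, then remove the known carrier.
F = np.fft.fft2(S)
fx = np.fft.fftfreq(nx, d=dx)
ac = np.fft.ifft2(F * (fx < 0)[:, None])
phase = np.angle(ac * np.exp(1j * (kx * x + w * tau)))

# The recovered phase matches phi_s wherever the spectrum is bright.
centre = nx // 2
core = np.abs(w[0]) < 1.5e13
max_err = np.max(np.abs(phase[centre, core] - phi_s[0, core]))
```

In this idealized (noise-free, integer-fringe) case the reconstruction is exact to machine precision; with real data, the filter width and the carrier choice trade off spatial resolution against crosstalk with the d.c.\ term, as discussed above.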
In order to calibrate the intrinsic added second- and higher-order phase associated with the two arms of the interferometer (and specifically the dispersion of the beamsplitter and waveplate), an SSI measurement was taken with the AOPDF removed. The extracted relative higher-order phase varied by less than \unit[0.4]{rad} over the extent of the imaged spectrum.
\section{Results}
\label{sec:results}
\begin{table}
\begin{center}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{|l|c|}
\hline
Pulse shape & $H(\omega)$ \\
\hline
Pulse delay & $\exp\sqbr{-2(\omega-\omega_0)^2/\Delta\omega^2 + i\omega \tau}$ \\
$N$-pulse train & $\sum_{n=1}^N \exp(i \omega\tau_n)$\\
Chirps & $\exp \sqbr{-2(\omega-\omega_0)^2/\Delta\omega^2 + i\br{\omega-\omega_0}^2 \phi^{(2)}/2}$\\
$\pi$-step & $\exp\left\{ i \arctan \sqbr{(\lambda-\lambda_0)/\Delta\lambda_{\textrm{step}}} \right\}$ \\
\hline
\end{tabular}
\end{center}
\caption{Transfer function, $H(\omega)$, for the pulse shapes presented within Section \ref{sec:results}, where $\omega_0 = 2\pi c/\lambda_0$ is the central angular frequency. For the pulse delay and chirped-pulse cases, a narrowed spectral bandwidth of \mbox{$\Delta\lambda = (\lambda_0^2/2\pi c)\Delta\omega = \unit[1]{nm}$} was employed. All other parameters are defined in the text.}
\label{tab:transfer-fns}
\end{table}
A series of different phase and amplitude profiles that are of broad utility within control experiments were programmed into the AOPDF. In each case, it was verified that the device applied the correct complex
transfer function $H(\omega)$, such that the input and shaped output pulses were related by $E_\textrm{out}(\omega)= H(\omega)E_\textrm{in}(\omega)$. The spatial resolution of our system enabled this verification to be performed independently at all points in the beam.
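As a sketch of how the transfer functions of Table \ref{tab:transfer-fns} can be evaluated numerically, the snippet below implements them as functions on a frequency grid. The function names are ours, not Fastlite's; the parameter values ($\lambda_0 = \unit[267]{nm}$, $\Delta\lambda = \unit[1]{nm}$) are those quoted in the text.

```python
import numpy as np

c = 2.99792458e8                             # speed of light (m/s)
lam0 = 267e-9                                # central wavelength (m)
w0 = 2 * np.pi * c / lam0                    # central angular frequency (rad/s)
dw = 2 * np.pi * c * 1e-9 / lam0**2          # 1 nm narrowed bandwidth in rad/s

def H_delay(w, tau):
    """Gaussian amplitude narrowing plus a pure delay tau (row 1 of Table 1)."""
    return np.exp(-2 * (w - w0)**2 / dw**2 + 1j * w * tau)

def H_train(w, taus):
    """N-pulse train: one delay term per pulse (row 2)."""
    return sum(np.exp(1j * w * t) for t in taus)

def H_chirp(w, phi2):
    """Quadratic spectral phase (chirp) with coefficient phi2 (row 3)."""
    return np.exp(-2 * (w - w0)**2 / dw**2 + 0.5j * (w - w0)**2 * phi2)

def H_pistep(lam, dlam_step):
    """pi phase-step of sharpness dlam_step, written in wavelength (row 4)."""
    return np.exp(1j * np.arctan((lam - lam0) / dlam_step))

# E_out(w) = H(w) * E_in(w), applied pointwise on a frequency grid.
w = w0 + np.linspace(-3 * dw, 3 * dw, 1001)
Ein = np.exp(-2 * (w - w0)**2 / (2 * dw)**2)
Eout = H_chirp(w, 1e-25) * Ein               # phi2 = 100 000 fs^2 in s^2
```

Note that the $\pi$-step phase tends to $\pm\pi/2$ on either side of $\lambda_0$, i.e.\ a total step of $\pi$, with $\Delta\lambda_{\textrm{step}}$ setting its sharpness.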
It was also possible to test systematically for any spatial or spatio-temporal distortions caused by the AOPDF. In all cases, exactly one such distortion was detected: a frequency-dependent lateral
displacement of the output proportional to the applied group delay. Pulse shapes that entailed a range of group delays across the spectral bandwidth featured a corresponding spatio-spectral coupling in the output. In all cases, the effect was consistent with a coupling speed of \unit[0.25]{mm/ps} (i.e.\ a relative lateral displacement of the shaped pulse of \unit[0.25]{mm} per picosecond shift in the AOPDF diffraction window). No other spatial or spatio-temporal distortions were detected.
The precision of the measurements was as follows. In measuring the zeroth and first-order phase components of the AOPDF transfer function, the dominant source of error was instability of the interferometer, typically \unit[0.5]{fs} over the approximately fifteen-minute durations of the data acquisition runs. For measuring higher-order phase terms, as well as the amplitude of the AOPDF transfer function, the two most significant sources of error were camera shot noise and shot-to-shot fluctuations in the UV source itself. These limited the root-mean-square precision of the phase and intensity measurements to \unit[0.2]{rad} and \unit[10]{\%} respectively. These figures apply to regions where the intensity is greater than \unit[10]{\%} of the peak. It was verified that the presence of the AOPDF did not increase the size of any phase or intensity fluctuations.
The demonstration of such spatio-temporal coupling effects --- well known and studied for the case of $4f$-line pulse shaping --- gives important information to experimentalists wishing to use AOPDFs in a control experiment. These results are presented individually below. For each case, a mathematical expression for the transfer function employed is presented in Table \ref{tab:transfer-fns}.
\subsection{Pulse delay}
\label{sec:delays}
\begin{figure}
\centering
\includegraphics [width = 0.9\columnwidth]{McCabe200910_Fig3.pdf}
\caption{Spatio-temporal coupling for a single optical pulse as the diffraction position within the AOPDF crystal is varied. The central beam position along the spatial axis of the spectrometer is plotted as a function of delay $\tau$. A linear dependence is observed with a best-fit gradient of \mbox{\unit[0.249 $\pm$ 0.012]{mm/ps}}.}
\label{fig:delays}
\end{figure}
For the first experiment, an acoustic wave was launched inside the AOPDF that was designed to diffract a single optical pulse within the KDP crystal. The location of the acoustic wave was scanned along the length of the crystal in order to vary the pulse delay $\tau$. The acoustic wave was tailored in order to pre-compensate for the dispersion of the crystal, and the performance of this compensation was verified via the SI measurements. The pulse spectral FWHM intensity bandwidth was also narrowed using the AOPDF to $\Delta\lambda = (\lambda_0^2/2\pi c)\Delta\omega = \unit[1]{nm}$, where $c$ is the speed of light. This reduced the length of acoustic wave required to compensate for the crystal dispersion to \unit[2]{ps}, enabling a greater range of delays to be accessed without clipping the acoustic wave on the edges of the crystal.
The measured delays were found to be in agreement with the target delays to within an error of \unit[2]{\%}. The central beam position of the diffracted pulse was observed to vary linearly with delay with a coupling speed of \mbox{\unit[0.249 $\pm$ 0.012]{mm/ps}}. No other variation, in either amplitude or phase, was identified. The results are presented in Fig.\ \ref{fig:delays}. This behaviour was also confirmed with a direct measurement of the beam position on a CCD camera.
\subsection{Pulse train}
\label{sec:train}
\begin{figure}
\centering
\includegraphics [width = \columnwidth]{McCabe200910_Fig4.pdf}
\caption{The reconstructed spatio-temporal amplitude distribution of a train of three pulses each separated by \unit[1.5]{ps}. The reconstructed pulse train exhibits a linear spatio-temporal coupling effect that is consistent with the \unit[0.25]{mm/ps} best-fit gradient observed for the pulse delay experiments (superimposed dotted line).}
\label{fig:train}
\end{figure}
Next, various trains of pulses with zero added second- and higher-order phase were prepared, with varying numbers of pulses ranging from two to thirteen. This entailed a sequence of acoustic waves localized at different points along the length of the AOPDF crystal.
A typical reconstructed spatio-temporal intensity distribution is shown in Fig.\ \ref{fig:train} for a train of three pulses separated by \unit[1.5]{ps} (such that $N=3$, $\tau_1 = \unit[-1.5]{ps}$, $\tau_2 = \unit[0]{ps}$ and $\tau_3 = \unit[1.5]{ps}$ according to the expression of Table \ref{tab:transfer-fns}). The pulse separation was verified to within \unit[1]{\%}. In order to make the most accurate measurement possible, the full temporal window of the pulse shaper was employed. The spatio-temporal coupling subsequently resulted in a worsened alignment for the third pulse in the train, concomitantly reducing the fringe visibility. This accounts for the apparent reduction in intensity for the final pulse in Fig.\ \ref{fig:train}.
A pronounced linear spatio-temporal coupling is observed in the reconstruction. The results are quantitatively consistent with the coupling speed observed during the pulse delay experiments (Section \ref{sec:delays}), as evinced by the superimposed best-fit line with the same coupling-speed gradient of \unit[0.249]{mm/ps}.
\subsection{Chirps}
\label{sec:chirps}
\begin{figure}
\centering
\includegraphics [width = \columnwidth]{McCabe200910_Fig5.pdf}
\caption{Spatio-spectral coupling effects for a series of chirped pulses ($\phi^{(2)}$ parameters as shown). (a) The reconstructed spatio-spectral intensities of pulses of different chirps. A spatio-spectral tilt is observed that is more significant for the more strongly chirped pulses and that changes sign with the sign of the chirp. (b) The extracted spatio-spectral coupling as a function of chirp (`+') together with a calculated best-fit coupling speed of \mbox{\unit[0.252 $\pm$ 0.004]{mm/ps}} (solid line). This value is in close agreement with the measurement of Section \ref{sec:delays}; the reconstructed pulse was otherwise found to be free of further spatio-temporal coupling effects. The vertical axis shows the change in central position of the beam across the spectral bandwidth of the pulse.}
\label{fig:chirp}
\end{figure}
A range of different pulses were prepared using the AOPDF bearing different chirps --- i.e.\ the parameter $\phi^{(2)}$ in Table \ref{tab:transfer-fns}. The AOPDF temporal shaping window allowed values within the range $\unit[-100000]{fs^2} \leq \phi^{(2)} \leq \unit[100000]{fs^2}$ to be assayed. A narrowed pulse bandwidth of $\Delta\lambda = \unit[1]{nm}$ was once again employed.
The extracted $\phi^{(2)}$ second-order polynomial phase coefficients matched the programmed values to within \unit[6]{\%}. The spatio-spectral intensities are shown in Fig.\ \ref{fig:chirp}(a) for a selection of $\phi^{(2)}$ values. A spatio-spectral tilt is observed that is stronger for more strongly chirped pulses and changes sign as the sign of the chirp is reversed [see the dashed lines of \mbox{Fig.\ \ref{fig:chirp}(a)}]. This observation has important consequences for control experiments with regard to spatial alignment with the sample. Besides the spatio-spectral tilt illustrated in Fig.\ \ref{fig:chirp}(a), however, the reconstructed pulse was found to reproduce the programmed pulse with good fidelity.
The spatio-spectral tilts for a range of chirps were extracted numerically and plotted in Fig.\ \ref{fig:chirp}(b). Since spectral chirp is intrinsically a frequency-dependent group delay, the best-fit gradient of these points can be related to a group-delay--dependent displacement via the corresponding chirped-pulse temporal duration. This fit took into account an intrinsic spatio-spectral tilt present on the reference beam corresponding to a \unit[0.35]{mm} shift in beam centre across the spectral bandwidth. The best-fit coupling speed for these experiments was \mbox{\unit[0.252 $\pm$ 0.004]{mm/ps}}, in very close agreement with Sections \ref{sec:delays} and \ref{sec:train}. This demonstrates that one single underlying physical mechanism is responsible for the different spatio-temporal coupling effects.
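A back-of-envelope check of this relation (a rounded estimate, not the paper's fit): a chirp $\phi^{(2)}$ spreads the group delay across the narrowed bandwidth by $\phi^{(2)}\Delta\omega$, which the \unit[0.25]{mm/ps} coupling speed converts into a beam-centre shift across the spectrum.

```python
import math

# Chirp-to-tilt estimate with the parameters quoted in the text.
c = 2.99792458e8
lam0 = 267e-9
dlam = 1e-9
dw = 2 * math.pi * c * dlam / lam0**2        # narrowed bandwidth (rad/s)

phi2 = 1e-25                                 # phi^(2) = 100 000 fs^2, in s^2
delay_spread = phi2 * dw                     # group delay across the band (s), ~2.6 ps
v = 0.25e-3 / 1e-12                          # 0.25 mm/ps coupling speed in m/s
tilt = v * delay_spread                      # beam-centre shift across the band (m)
```

For the largest programmed chirp this gives a spatio-spectral tilt of roughly \unit[0.7]{mm} across the \unit[1]{nm} bandwidth, consistent in magnitude with the trend of Fig.\ \ref{fig:chirp}(b).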
\subsection{$\pi$-step}
\label{sec:pi-step}
\begin{figure}
\centering
\includegraphics [width = \columnwidth]{McCabe200910_Fig6.pdf}
\caption{Spatio-spectral coupling effects for $\pi$ phase-steps of varying sharpnesses. (a) The reconstructed spatio-spectral intensities of a series of $\pi$ phase-steps with sharpnesses as indicated. A spatial shift is observed at the step frequency that is more pronounced for sharper steps. (b) Observed lateral displacement of the notch (data points) together with a calculation derived from the measured group delay at the phase step and a \unit[0.25]{mm/ps} spatio-temporal coupling speed (solid line). An example spectral phase across a slice through the middle of the pulse is shown inset (crosses) together with a fit of the function in Table \ref{tab:transfer-fns} (solid line).}
\label{fig:pi}
\end{figure}
The final experiment entailed the preparation of a transform-limited pulse with a $\pi$ phase-step at its central frequency as per the expression in Table \ref{tab:transfer-fns}. Phase steps of a range of sharpnesses, $\Delta\lambda_{\textrm{step}}$, were prepared. The results are shown in Fig.\ \ref{fig:pi}(a). A typical measured spectral phase across the centre of the pulse is shown inset in \mbox{Fig.\ \ref{fig:pi}(b)}. In general, the retrieved phase matched the programmed one with regard to the parameters of Table \ref{tab:transfer-fns}. The sharpest measured step sizes were of the order of \unit[0.08]{nm}; however, this was commensurate with the resolution of the spectrometer.
An important spatio-temporal coupling effect is observed in the reconstructed spectral intensities. A local spatial displacement occurs in the spectrum at the $\pi$-step frequency, resulting in a `notch' in the reconstructed spatio-spectral intensity [see arrow in \mbox{Fig.\ \ref{fig:pi}(a)]}. The size of the notch increases with the sharpness of the phase step. This spatio-temporal coupling effect has previously only been reported in a $4f$ zero-dispersion line \cite{Dorrer1998}; this study reveals similar behaviour for an AOPDF-based device.
This notch effect may once again be reconciled with a group-delay--dependent displacement of the beam. The steep phase gradient at the location of the $\pi$ step is equivalent to a local group-delay term in the spectral phase, with a sharper step implying a steeper gradient in the spectral phase and hence a larger group delay. A group-delay--dependent displacement therefore shifts spectral components spanned by $\Delta\lambda_{\textrm{step}}$ by an amount dependent on the step sharpness. A related effect in pixellated SLM pulse shapers is the complete spectral hole that appears for a sharpness equal to the spectral resolution of the device \cite{Wohlleben2004}. As the step sharpness is further increased in these AOPDF experiments, the $\pi$-step group delay will eventually exceed the temporal window of the crystal (which is inversely proportional to the AOPDF spectral resolution), and a spectral hole, rather than a notch, will be formed as a consequence.
This argument is supported by the calculations presented in Fig.\ \ref{fig:pi}(b) based on these experimental data. In this figure, the notch sizes for each image within Fig.\ \ref{fig:pi}(a) were extracted and plotted as a function of step sharpness (data points). The local group-delay terms at the phase step were calculated according to $\phi^{(1)} = \frac{\partial \phi \br{\omega}}{\partial \omega} \approx \frac{\pi}{\Delta \omega_{\textrm{step}}}$ where $\Delta \omega_{\textrm{step}} = (2\pi c/\lambda_0^2) \Delta\lambda_{\textrm{step}}$. They were then multiplied by the \unit[0.25]{mm/ps} coupling speed previously observed, and the resultant calculation (solid line) shows good agreement with experiment. Once again, the results are found to be quantitatively consistent with the same spatio-temporal coupling effect as above, reinforcing the evidence for a single underlying physical mechanism for all of these manifestations.
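For concreteness, the group delay at the step and the predicted notch size can be evaluated directly from the relations above; the step sharpness chosen below is an illustrative value, not one of the measured steps.

```python
import math

# Group delay at a pi-step of sharpness dlam_step and the predicted
# notch displacement at the 0.25 mm/ps coupling speed.
c = 2.99792458e8
lam0 = 267e-9
dlam_step = 0.2e-9                           # example sharpness: 0.2 nm
dw_step = 2 * math.pi * c * dlam_step / lam0**2

gd = math.pi / dw_step                       # local group delay (s), ~0.59 ps
notch = (0.25e-3 / 1e-12) * gd               # predicted notch size (m), ~0.15 mm
```

Sharper steps (smaller $\Delta\lambda_{\textrm{step}}$) give proportionally larger group delays and hence larger notches, in line with the trend of Fig.\ \ref{fig:pi}(b).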
\section{Discussion}
\label{sec:discussion}
Section \ref{sec:results} presented a spatially resolved SSI analysis of a range of different pulse shapes: a single transform-limited pulse with a variable delay, a train of pulses, chirped pulses and pulses with a $\pi$ phase-step at the centre of their spectrum. These pulse shapes lie at the heart of many ultrafast quantum control experiments and this study represents the first complete investigation of spatio-temporal coupling effects performed for an AOPDF pulse shaper. In each case, spatio-temporal and spatio-spectral couplings were observed in the reconstructed field. Each was shown to be consistent with a single underlying effect: a group-delay--dependent position of the shaped pulses, as mentioned previously \cite{Krebs2010}. Incidentally, the time-to-space mapping produced by this coupling means that the spatio-spectral intensity profile of the Dazzler output pulse resembles a spectrogram, assuming that the Dazzler input pulse is near transform-limited so that the only contribution to the group delay in the output arises from the Dazzler itself. Furthermore, each coupling was consistent with a coupling speed of \unit[0.25]{mm/ps}. No further spatio-temporal coupling effects were identified, and the AOPDF was otherwise found to reproduce the programmed pulse shapes faithfully. In particular, no significant angular dispersion effects (as reported by B\"{o}rzs\"{o}nyi \emph{et al.~} \cite{Borzsonyi2010}) were found. This is to be expected since B\"{o}rzs\"{o}nyi \emph{et al.~} only found this to be significant at high repetition rates where the acoustic-wave energy dissipation gave rise to thermal effects.
The results above highlight the need for experimentalists to pay close attention to these coupling issues during the design of control experiments based on an AOPDF pulse shaper. Such concerns have been studied extensively for the more widespread $4f$-line shapers, with coupling speeds ranging from \unit[0.083]{mm/ps} \cite{Monmayrant2004} through \unit[0.145]{mm/ps} \cite{Wefers1996} to \unit[0.595]{mm/ps} \cite{Tanabe2002} already reported in the literature. For the $4f$-line geometry, the coupling speed $v$ is related to the available temporal shaping window $T$ and the input beam waist $\Delta x_\textrm{in}$ by $\modbr{v} = \Delta x_\textrm{in}/T$ \cite{Monmayrant2010}. The coupling speed reported here of \unit[0.25]{mm/ps} is therefore non-negligible by comparison.
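For a rough comparison, the $4f$-line relation $\modbr{v} = \Delta x_\textrm{in}/T$ can be evaluated directly; the waist and window values below are hypothetical, chosen only to illustrate the order of magnitude of the quoted literature values.

```python
# 4f-line coupling speed |v| = dx_in / T (input waist over temporal window).
def coupling_speed_4f(waist_m, window_s):
    return waist_m / window_s                # m/s

# For example, a hypothetical 1 mm waist and 12 ps window would give
# ~0.083 mm/ps, at the low end of the quoted range.
v = coupling_speed_4f(1e-3, 12e-12)
```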
It is thus apparent that a single spatio-temporal coupling mechanism within the AOPDF accounts for all the manifestations reported in Section \ref{sec:results}. In order to explain the physical nature of this group-delay--dependent displacement, it is necessary to consider a couple of effects present within the Dazzler: the birefringent and geometrical walk-off effects of the diffracted relative to the undiffracted beam, and the fact that each optical wavelength within the ultrafast pulse is diffracted at a given position in the AOPDF. These two effects combine to lead to a natural spatial chirp, with a coupling speed as quantified above.
To recapitulate, the birefringent walk-off concerns the phenomenon that the intensity distribution of a beam in an anisotropic crystal drifts away from the direction of the wave vector. The angle between the Poynting vector (which defines the direction of energy transport) and the $k$-vector is called the walk-off angle. Spatial walk-off occurs only for a beam with extraordinary polarization, which sees a refractive index $n_{\textrm{e}}$ during its propagation that depends on the angle between $\boldsymbol{k}$ and the optical axes. This angle depends on the crystal and parameters of the optical pulse; for the KDP crystal in this experiment, at \unit[268]{nm}, the walk-off angle is $\alpha \simeq \unit[32]{mrad}$. The geometrical walk-off, meanwhile, concerns the fact that during Bragg diffraction the beam is deviated by an angle corresponding to the phase-matching condition. For this experiment, this deviation is $\theta= \unit[-5.2]{mrad}$. It should be noted that both the geometric and birefringent walk-offs actually vary as a function of wavelength; however, this effect is negligible for the pulse bandwidth employed.
Thus the spatio-temporal effect can simply be seen as a shift $\delta x$ in the position of the diffracted beam that could be expressed as $\delta x = L\tan(\theta+\alpha)$, where $L$ is the distance of propagation along the extraordinary axis. The coupling speed is thus determined by $v = \delta x/T$, being a function of the walk-off--induced shift and the temporal window, rather than the input beam waist as for the case of a $4f$ line. This experiment employs a crystal of length \unit[75]{mm} such that the maximum shift is calculated as \unit[2]{mm}. Considering the fact that the temporal window available at this wavelength is $T = \unit[7.7]{ps}$, this implies an expected group-delay--dependent displacement of \mbox{\unit[$0.260 \pm 0.005$]{mm/ps}}, which is in very close agreement with our experimental measurements.
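This estimate can be reproduced directly from the crystal parameters quoted in the text:

```python
import math

# Walk-off estimate: delta_x = L * tan(theta + alpha) and v = delta_x / T.
L = 75e-3                                    # KDP crystal length (m)
alpha = 32e-3                                # birefringent walk-off angle (rad)
theta = -5.2e-3                              # Bragg diffraction deviation (rad)
T = 7.7e-12                                  # temporal shaping window (s)

delta_x = L * math.tan(theta + alpha)        # maximum lateral shift (m), ~2.0 mm
v = delta_x / T                              # coupling speed (m/s), ~0.26 mm/ps
```

The result, about \unit[0.26]{mm/ps}, sits within the error bars of all three measured coupling speeds above.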
The birefringent and geometric walk-off effects are therefore confirmed as the single physical cause for the spatio-temporal coupling effect reported in the AOPDF pulse shaper. This coupling has important consequences for the application of AOPDF-shaped pulses to control experiments, since the displacement of the control pulses with a variation of pulse parameters may result in a worsened alignment with the target. One possible solution is to translate a lens before the AOPDF in order to bring the geometric plane of overlap of the spatially shifted output pulses into alignment with the gaussian focal plane \cite{Krebs2010}. Another might be to extend the walk-off compensation methods developed in non-linear optics by using a double-pass setup or a second crystal \cite{Smith1998}. It should be noted that the coupling speed depends on the parameters of the ultrafast pulses as well as the choice of crystal (indeed, the walk-off effects in TeO$_2$, which is used for AOPDFs in the IR wavelength range, are significantly less than in KDP); thus the calculation should be repeated along the lines above in order to make an informed choice of shaper in light of individual experimental tolerances for coupling effects.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we have presented a systematic study of spatio-temporal coupling in an AOPDF pulse shaper that operates at UV wavelength ranges via spatially resolved phase and amplitude analysis of the shaped pulses. Such coupling effects have been widely studied for $4f$ zero dispersion lines due to the importance of the ramifications for control experiments. The AOPDF is an increasingly popular alternative shaping device thanks to its versatility, compactness, ease of alignment and wide wavelength range. Until now, however, its spatio-temporal coupling effects have not been comprehensively studied for a range of complex pulse shapes of interest to the control community.
We have discovered that there is one single significant effect at kilohertz repetition rates: a group-delay--dependent displacement of the shaped output. Further to this one effect, the AOPDF was found to produce faithfully the desired pulse shape. This coupling effect was manifested differently in the measured pulse depending on the class of pulse shape employed; however, in each case the coupling effect may be described by the same mechanism with consistent quantitative agreement. We have explained the physical origin of this mechanism and have shown excellent agreement between its calculated and measured values. Finally, we have identified some approaches that may allow the impact of this spatio-temporal coupling to be minimized during applications to control experiments.
\begin{acknowledgements}
The authors are grateful to N.\ Forget and A.\ Wyatt for useful discussions as well as to E.\ Baynard and S.\ Faure for technical assistance. This work was supported by the Marie Curie Initial Training Network (grant no.\ CA-ITN-214962-FASTQUAST), EPSRC grants EP/H000178/1 and EP/G067694/1, and Alliance (PHC/British Council).
\end{acknowledgements}
\section{Introduction}
User equipment (UE) such as smartphones is expected to run increasingly computation-intensive tasks, while UEs remain limited in terms of computing power and energy supply. Thus, as in \cite{Yang08} \cite{Kumar10}, computation offloading can be considered when the computation cost exceeds the communication cost, in order to save the UE's energy and exploit the powerful computing resources of cloud servers. In cellular systems, mobile edge computing (MEC) \cite{Mach17} \cite{Abbas18} can be employed so that MEC servers integrated into base stations (BSs) provide computing power without excessive latency thanks to their physical proximity.
Multiple UEs or users can coexist in a cell and may want to perform computation offloading simultaneously. In this case, it is necessary to consider resource allocation for multiuser MEC offloading.
In \cite{Barbarossa13}, an optimization problem is formulated to minimize the total transmit power of multiple users subject to constraints on latency. Since mobile devices have limited energy sources, in \cite{You17}, using an objective function based on the total energy consumption, an optimization problem is formulated and solved. In multiuser MEC offloading, since multiple users are to compete for the shared resource (especially, limited radio bandwidth), in \cite{Chen16}, a non-cooperative game is formulated and a distributed algorithm is devised. Compared to the approaches in \cite{Barbarossa13} \cite{You17}, the approach in \cite{Chen16} is suitable for the cases where a BS
cannot obtain the necessary information to solve optimization problems. For example, some users may not want to share their computation capacity information (e.g., the number of central processing unit (CPU) cycles per second) with the BS or cannot predict available computation capacity due to other possible tasks in the future.
Multiuser MEC offloading has been extended to incorporate other aspects.
For example,
in \cite{Hu18} \cite{Feng19} \cite{Min19}, offloading is considered with wireless power transfer, and optimization problems are extended to include energy harvesting. In addition, multiuser MEC offloading is extended in \cite{Zhou21} to deal with different user requirements in heterogeneous networks, and in \cite{Anajemba20} for cooperative offloading in a heterogeneous network consisting of small-cell BSs and wireless relays.
In this paper, we consider a different setup from a conventional one, e.g., the setup used in \cite{You17} \cite{Chen16}. In particular, we consider the case where users are devices that have computation tasks at random times in Internet-of-Things (IoT) applications.
Smart sensors and devices may not only collect data, but also process data with limited computing power. Thus, there are devices that perform tasks on their own, while some devices may choose
to offload sporadic tasks to a BS that is associated with an MEC server so that the MEC server can perform tasks.
In most IoT applications, traffic is considered sporadic \cite{Bockelmann16} \cite{Metzger19}, and events of interest, especially in emergency IoT applications, are often rare and sporadic \cite{Sisinni20}.
Thus, as in \cite{Patel17}, in order
to speed up an analytic task in certain IoT applications such as real-time video analytics, cloud gaming, and smart factories,
offloading can be considered for parallelization.
Of course, offloading decisions can depend on many factors of each device and task, as in conventional offloading approaches.
The main difference, however, is that IoT devices' tasks that require offloading occur sporadically and are relatively small, so
centrally optimizing resource allocation across offloading devices and coordinating multiuser offloading at the BS (as in most conventional approaches) becomes inefficient, because the time and signaling overhead to gather the necessary information offsets the benefits of offloading.
This is quite similar to machine-type communication (MTC) \cite{Bockelmann18} where devices have sporadic traffic and become active to transmit data packets at random times in a number of IoT applications \cite{Ding_20Access}.
Thus, unlike human-type communication (HTC), non-coordinated transmission schemes such as random access are employed for MTC due to low signaling overhead without any specific channel resource allocations for multiple devices with sporadic traffic. For example, two-step random access \cite{Kim21} \cite{Choi_MWC} is an MTC scheme introduced in 5th generation (5G) for a large number of devices with sporadic traffic.
Note that in \cite{Toma14}, computation offloading for sporadic tasks is studied for a fixed number of users without any limitation on accessing wireless channels. In IoT applications, however, due to a large number of devices with limited bandwidth, it is necessary to study offloading along with how efficiently a number of devices utilize the shared radio resources. In this paper, as mentioned earlier, the proposed approach leverages the two-step
random access in 5G to access the shared radio resource, making it suitable for IoT applications.
To support offloading for devices with sporadic tasks, we propose a two-stage approach. In the first stage, devices send requests through (multiple) random access channels to a BS. If the BS can successfully receive requests without collisions, it schedules the devices' uploads of the input data for offloading. In the second stage, offloading-enabled devices send their input data according to the given schedule. The proposed approach has the advantage that it can support offloading with low signaling overhead when devices need offloading only sporadically.
The main contributions of the paper can be summarized as follows.
\begin{itemize}
\item To support offloading by devices with sporadic tasks, a two-stage offloading approach with low signaling overhead is proposed, where each device makes a decision on offloading locally. This approach leverages two-step random access, a new MTC protocol in 5G, and is suitable for IoT devices.
Note that
conventional offloading methods, in which the BS centrally optimizes radio resources, become inefficient due to devices' sporadic tasks (because the signaling overhead to coordinate multiple users' offloading tasks may be excessive and result in undesirable latency).
\item The stability of the two-stage offloading approach is studied, together with control methods to keep the system stable, which is important as the proposed approach is a distributed one.
\item The latency outage probability is analyzed to see the performance from a device perspective for statistical guarantees on task completion times.
\end{itemize}
The rest of the paper is organized as follows. In Section~\ref{S:SM}, we present a system model for the two-stage offloading approach with details for each stage. A system perspective for the two-stage offloading approach is discussed in Section~\ref{S:SP}, along with stability and methods to ensure a stable system by controlling key parameters. We also discuss a device perspective in Section~\ref{S:LO} with the analysis of latency outage probability. Simulation results are presented in Section~\ref{S:Sim}. We finally conclude the paper with some remarks in Section~\ref{S:Con}.
The definitions of key parameters are as follows.
\begin{tabular}{ll}
$K$: & number of active devices (per round) \cr
$W$: & number of contending devices \cr
$S$: & number of offloading devices \cr
$M$: & number of random access channels \cr
$B$: & total system bandwidth \cr
$\Delta$: & interval of random access round \cr
$U_{\rm max}$: & maximum size of input for offloading \cr
\end{tabular}
\section{System Model} \label{S:SM}
Suppose that a system consists of one BS and multiple devices. Each device may have a (computation) task to be performed and can choose computation offloading so that its task can be performed at an MEC server connected to the BS.
For computation offloading, we consider the following two-stage approach:
\begin{enumerate}
\item Stage 1: Once a device decides to offload its computing task, it sends a request to the BS using multichannel random access. To this end, grant-free or two-step random access \cite{Kim21} \cite{Choi_MWC} is used, where the device sends a preamble and a short packet of offloading request. This packet includes metadata such as the size of the input data in bits to be uploaded for offloading.
\item Stage 2: The BS sends a positive or negative acknowledgement (ACK or NACK) to indicate whether the request was received successfully, together with the time to start uploading the input data through the dedicated uplink channel in a scheduled time division multiple access (TDMA) manner.
\end{enumerate}
As mentioned earlier, devices have sporadic tasks, which makes resource
optimization for offloading with specific devices at any given moment difficult. Therefore, when a device needs offloading, it is appropriate for it to immediately send a packet of offloading request to the BS. Since multiple devices can send requests simultaneously, in stage 1 we consider multichannel random access, especially the two-step random access proposed in 5G.
In two-step random access, the payload is fixed in size and short \cite{Choi_MWC}, making it suitable for devices with sporadic tasks to send offloading requests. The BS can schedule uploads from the devices that successfully send requests without collisions.
For the two-stage approach, the total uplink system bandwidth, $B$, is divided into two groups as follows:
\begin{equation}
B = B_{\rm o} + B_{\rm a},
\label{EQ:BBB}
\end{equation}
where $B_{\rm a}$ is the bandwidth allocated for random access to perform the first stage and $B_{\rm o}$ is the bandwidth allocated for uploading in the second stage. The channel of bandwidth $B_{\rm o}$ is referred to as the offloading channel (OC), while that of bandwidth $B_{\rm a}$ the random access channel (RAC).
It is further assumed that
there are $M$ sub-channels within the RAC so that $B_{\rm a} = M b$, where $b$ is the bandwidth of one RAC sub-channel and
$$
M \in {\cal M} = \left\{1, \ldots, \frac{B}{b} \right\}, \ B \gg b.
$$
Note that if $M = 1$, it is single-channel ALOHA \cite{BertsekasBook}.
It is assumed that a random access round is executed periodically every $\Delta$ seconds.
In Fig.~\ref{Fig:Sys}, we illustrate the bandwidth allocation for the two stages. An active device that wants to offload its computing task performs Stage 1 in the RAC (i.e., two-step random access to send an offloading request packet). If this request is accepted by the BS, the BS sends ACK and schedules the uploading so that the device can upload its input data in a scheduled TDMA manner in the OC of bandwidth $B_{\rm o}$ as Stage 2.
By keeping the OC and RAC separate, Stages 1 and 2 can be performed concurrently for devices with sporadic tasks occurring at different times.
Hence, the devices do not need to be synchronized, nor do they need to have the same size of input data for computation offloading.
\begin{figure}[thb]
\begin{center}
\includegraphics[width=0.80\columnwidth]{Sys.pdf}
\end{center}
\caption{The bandwidth allocation for the two stages, which can take place simultaneously. The scheduled TDMA for Stage 2 with different sizes of input data, denoted by $T_n$, will be explained below.}
\label{Fig:Sys}
\end{figure}
The time for random access rounds is denoted by $\tilde t \in \{i \Delta\}$, where $i$ is the integer time index for random access rounds, as shown in Fig.~\ref{Fig:times}. We assume that the current time for random access round is $\tilde t = 0$ or $i =0$, and there are $N$ existing offloading tasks, where $N \ge 0$.
Denote by $t_n$ the time when the uploading for scheduled offloading task $n$ begins in a scheduled TDMA manner. Since $t_0$ is the time at which the upload of the earliest existing task at time $\tilde t = 0$ began, we assume that $t_0 < 0 < t_1$ as shown in Fig.~\ref{Fig:times}.
Then, we have
\begin{equation}
t_{n+1} = t_n + T_n,
\end{equation}
where
\begin{equation}
T_n = \frac{U_n}{B_{\rm o} \log_2 (1 + \gamma_n) } .
\label{EQ:T_n}
\end{equation}
Here, $U_n$ is the number of bits to be uploaded for offloading
and $\gamma_n$ the signal-to-noise ratio (SNR) of the device associated with existing offloading task $n$.
\begin{figure}[thb]
\begin{center}
\includegraphics[width=0.80\columnwidth]{Fig1.pdf}
\end{center}
\caption{Random access rounds and existing tasks at time $\tilde t = 0$
(i.e., the 0th random access round), where $t_N = X_0$.}
\label{Fig:times}
\end{figure}
We assume that the BS broadcasts, in each random access round, the time at which all the existing uploads are complete, i.e., $t_N$ is available as the state of the system at the devices in the random access round at $\tilde t = 0$
according to Fig.~\ref{Fig:times}.
For convenience,
denote by $X_i$
the time that all the existing uploads scheduled prior to the $i$th random access round finish. For example, as shown in Fig.~\ref{Fig:times},
when $i = 0$,
$t_N$ becomes $X_i = X_0$. Throughout the paper, $X_i$ is referred to as the state variable.
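As a small numerical illustration of the schedule recursion $t_{n+1} = t_n + T_n$ with $T_n$ given by \eqref{EQ:T_n}, the following sketch computes the start times and the state variable $t_N$; all sizes, SNRs, and bandwidth values below are hypothetical, not taken from the paper.

```python
# Sketch of the Stage-2 TDMA schedule: t_{n+1} = t_n + T_n with
# T_n = U_n / (B_o * log2(1 + gamma_n)), as in (EQ:T_n).
# All numbers below are hypothetical illustrations.
import math

def upload_time(U_n, B_o, gamma_n):
    """Upload duration T_n (seconds) for U_n bits over bandwidth B_o (Hz)."""
    return U_n / (B_o * math.log2(1.0 + gamma_n))

def schedule(t0, sizes, snrs, B_o):
    """Start times t_0, t_1, ..., t_N of the scheduled uploads."""
    times = [t0]
    for U_n, g_n in zip(sizes, snrs):
        times.append(times[-1] + upload_time(U_n, B_o, g_n))
    return times

# Three existing tasks; ts[-1] is t_N, the state variable broadcast by the BS.
ts = schedule(t0=-0.2, sizes=[800, 1200, 500], snrs=[15.0, 7.0, 31.0], B_o=40.0)
print(ts)
```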
\section{System Perspective: Stability and Offloading Criteria} \label{S:SP}
In this section, we discuss the random access based offloading approach
in Section~\ref{S:SM} from a system perspective.
Under certain assumptions, the stability of the system will be considered to prevent the upload time from growing without bound.
\subsection{Stability and Main Assumptions}
For a given round of random access of duration $\Delta$, say round $i$,
denote by $D_i$ the total
upload time during which the devices granted offloading in stage 1 (the random access round) upload their data through the OC.
Thus, it can be shown that
$$
D_i = X_{i+1} - X_i,
$$
which is known to the devices once $X_{i+1}$ is sent by the BS (clearly, $D_i$ is unavailable to devices at the beginning of random access round $i$).
While the duration of a random access round is fixed, the total upload time can vary as it depends on the number of the offloading devices\footnote{A device that successfully sends its offloading request in stage 1 and whose upload is scheduled in stage 2 is called an offloading device.}, the sizes of the data to be uploaded, and channel conditions.
Suppose that there are $Q$ rounds. It can be seen that the system may not suffer from a significant uploading delay if
\begin{equation}
\sum_{i=0}^{Q-1} D_i \le Q \Delta.
\end{equation}
If $D_i$ is independent and identically distributed (iid), due to the law of large numbers \cite{Mitz05}, we have
\begin{equation}
\lim_{Q \to \infty} \frac{1}{Q}\sum_{i=0}^{Q-1} D_i = {\mathbb E}[D_i] \le \Delta,
\label{EQ:stable}
\end{equation}
where ${\mathbb E}[\cdot]$ represents the statistical expectation.
Consequently, \eqref{EQ:stable} can be seen as an (asymptotic) stability condition for a large $Q$.
To find ${\mathbb E}[D_i]$, we consider one random access round and omit the index of round, $i$, in the rest of this section.
For a given round,
suppose that there are $K$ active devices with computation tasks at time $\tilde t = 0$.
Denote by $U_{(k)}$ and $\gamma_{(k)}$ the number of bits to be uploaded and the SNR of device $k$, respectively.
Throughout this section, we will consider the following assumptions.
\begin{itemize}
\item[{\bf A1})] The number of new active devices with computation tasks in each round of length $\Delta$, i.e., $K$, follows a Poisson distribution with parameter ${\mathbb E}[K] = \lambda$. For a finite number of devices in a cell, this assumption is an approximation. In particular, if the total number of devices is $G$ and each device can have a task sporadically with a probability of $\epsilon$, $K$ will follow a binomial distribution, i.e.,
$\Pr(K = k) = \binom{G}{k} \epsilon^k (1-\epsilon)^{G-k}$. When $G\gg 1$ and $\epsilon$ is sufficiently small, $K$ can be well approximated by a Poisson random variable with mean $\lambda = G \epsilon$ \cite{Mitz05}.
\item[{\bf A2})] The $U_{(k)}$'s are independent and follow the same distribution that is an exponential distribution with ${\mathbb E}[U_{(k)}] = \frac{1}{\mu}$, i.e.,
\begin{equation}
U_{(k)} \sim {\rm Exp}(\mu) = f_U (u) = \mu e^{-\mu u}, \ u \ge 0.
\end{equation}
Here, $\frac{1}{\mu}$ represents the average size of input.
\item[{\bf A3})] The SNRs, $\gamma_{(k)}$, are independent
and ${\mathbb E}\left[ \frac{1}{\gamma_{(k)}} \right]$ is finite. For example,
$\gamma_{(k)}$ follows a truncated exponential distribution, i.e.,
\begin{equation}
f(\gamma_{(k)} = \gamma) =
\left\{
\begin{array}{ll}
\frac{ \exp\left( - \frac{\gamma- \Gamma_{(k)}}{\bar \gamma } \right) }{\bar \gamma } , & \mbox{if $\gamma \ge \Gamma_{(k)}$;} \cr
0, & \mbox{o.w.} \cr
\end{array}
\right.
\label{EQ:gdist}
\end{equation}
Here, $\bar \gamma$ is the scale parameter of the distribution (so that ${\mathbb E}[\gamma_{(k)}] = \Gamma_{(k)} + \bar \gamma$) and
$\Gamma_{(k)} > 0$ is the SNR threshold.
\end{itemize}
To model sporadic tasks generated at devices, the assumption of {\bf A1} is considered. In general, it is expected that $\lambda$ increases with $\Delta$. The size of the input data varies from one task to another. Thus, in the assumption of {\bf A2}, an exponential distribution is considered. It is noteworthy that the two assumptions of {\bf A1} and {\bf A2} result in an M/M/1 queue in queueing theory \cite{Kleinrock79}.
The assumption of {\bf A3} is essential to avoid an excessive upload time due to a low SNR.
In fact, if the SNR is low in stage 1, the device cannot send an offloading request packet, so it can be assumed that the SNR in stage 2 is high enough.
\subsection{Decision Criteria}
For each active device to choose offloading, there should be a criterion. We can consider two different types of criteria as follows.
\begin{itemize}
\item Energy-based criteria: For a given task, each active device can find the energy for local computing and that for offloading. If the energy for offloading is lower than that for local computing, it can choose to offload the task. In multiuser cases, optimization problems can be formulated and solved for radio resource allocation (e.g., \cite{Chen16} \cite{You17} \cite{Wang17}). To this end, devices need to send information to the BS including their computing power (i.e., CPU cycles per second), levels of energy for computing, and so on.
\item Latency-based criteria: Each active device needs to find the time to complete a task under two possible scenarios: local computing and offloading. In multiuser cases, the device cannot exactly determine the time to complete task under offloading as there can be other devices. Thus, the devices need to send their information to the BS so that the BS can optimize and decide whether or not devices can offload.
\end{itemize}
In this paper, we consider local decision for offloading at devices. In particular, in this section, a local decision rule based on the size of input data is considered, where active device $k$ chooses local computing
if the size of input data, $U_{(k)}$, is greater than a threshold, denoted by $U_{\rm max}$, as otherwise the transmission time and the energy consumption to upload the input data can be excessive.
\subsection{Mean of Total Uploading Time}
Let $W \ (\le K)$ be the number of devices, among the $K$ active devices, that decide to send offloading requests; these are called the contending devices. Each active device becomes a contending device with probability $q_{\rm o}$, which is referred to as the offloading probability. For a given $U_{\rm max}$, according to the assumption of {\bf A2}, the offloading probability is given by
\begin{align}
q_{\rm o} &= \Pr(U_{(k)} \le U_{\rm max} ) \cr
& = 1 - e^{-\mu U_{\rm max}}.
\label{EQ:qo}
\end{align}
Then, for given $K$, the number of the contending devices has the following distribution:
\begin{equation}
\Pr(W=w\,|\,K) = \binom{K}{w} q_{\rm o}^w (1- q_{\rm o})^{K-w}.
\end{equation}
From this, according to the assumption of {\bf A1}, it can be shown that
\begin{equation}
W \sim {\rm Pois}(\lambda q_{\rm o}).
\label{EQ:WP}
\end{equation}
Then, there are $W$ contending devices in stage 1.
Let $S \ (\le M)$ be the number of contending devices that can successfully transmit their requests without collisions in stage 1; these are called the offloading devices.
Clearly, $S \le \min\{W,M\}$.
Finally, the total upload time is given by
\begin{equation}
D = \sum_{m=1}^S \tilde T_{(m)},
\label{EQ:DX}
\end{equation}
where $\tilde T_{(m)}$ represents the upload time of the $m$th offloading device through OC, which is
\begin{equation}
\tilde T_{(m)} = \frac{\tilde U_{(m)}}{B_{\rm o} \log_2 (1 + \tilde \gamma_{(m)}) } .
\label{EQ:tXm}
\end{equation}
Here, $\tilde U_{(m)}$ and
$\tilde \gamma_{(m)}$ are the number of bits for offloading and the SNR of the $m$th offloading device, respectively.
Clearly, $\{\tilde T_{(1)}, \ldots, \tilde T_{(S)}\}$ is a subset of
$\{T_{(1)}, \ldots, T_{(K)} \}$.
\begin{mylemma} \label{L:1}
Under the assumptions of {\bf A1} -- {\bf A3} with an identical distribution of $\gamma_{(k)}$, the mean of the total upload time is given by
\begin{align}
{\mathbb E}[D]
& = {\mathbb E}[S] {\mathbb E}[\tilde T_{(m)}] \cr
& = \underbrace{\lambda q_{\rm o} e^{- \frac{\lambda q_{\rm o}}{M}}}_{={\mathbb E}[S]} \frac{1}{B_{\rm o} \mu_{\rm max} } {\mathbb E}\left[ \frac{1}{\log_2 (1+ \tilde \gamma_{(m)})}
\right] \cr
& \le \lambda q_{\rm o} e^{- \frac{\lambda q_{\rm o}}{M}}
\frac{\ln 2}{B_{\rm o} \mu_{\rm max}} \left(\frac{1}{2} + {\mathbb E}\left[\frac{1}{ \tilde \gamma_{(m)}} \right]
\right) \label{EQ:L1} \\
& \le \lambda q_{\rm o} e^{- \frac{\lambda q_{\rm o}}{M}}
\frac{\ln 2}{B_{\rm o} \mu} \left(\frac{1}{2} + {\mathbb E}\left[\frac{1}{ \tilde \gamma_{(m)}} \right] \right), \label{EQ:L1_b}
\end{align}
where
\begin{equation}
\mu_{\rm max} =
\frac{1}{{\mathbb E}[\tilde U_{(m)}\,|\, \tilde U_{(m)} \le U_{\rm max} ]} = \frac{\mu (1 - e^{-\mu U_{\rm max}})}
{1 - e^{-\mu U_{\rm max}} (1+ \mu U_{\rm max})}.
\end{equation}
\end{mylemma}
As shown in \eqref{EQ:L1}, ${\mathbb E} \left[\frac{1}{\tilde \gamma_{(m)} } \right] < \infty$ of the assumption of {\bf A3} is a sufficient condition for the existence of a finite ${\mathbb E}\left[ \frac{1}{\log_2 (1+ \tilde \gamma_{(m)})} \right]$.
\begin{IEEEproof}
See Appendix~\ref{A:1}.
\end{IEEEproof}
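The factor ${\mathbb E}[S] = \lambda q_{\rm o} e^{-\lambda q_{\rm o}/M}$ in Lemma~\ref{L:1} can be checked with a minimal Monte Carlo sketch: $W \sim {\rm Pois}(\lambda q_{\rm o})$ contenders each pick one of the $M$ sub-channels uniformly, and a request succeeds when its channel carries exactly one request. The parameter values below are illustrative.

```python
# Monte Carlo sketch of E[S] for M-channel slotted random access with
# Poisson(lambda * q_o) contenders; parameter values are illustrative.
import math, random
from collections import Counter

def poisson(rng, mean):
    """Knuth's method (adequate for small means)."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_E_S(lam, q_o, M, rounds=100_000, seed=1):
    """Average number of collision-free requests per round."""
    rng = random.Random(seed)
    total = 0
    for _ in range(rounds):
        W = poisson(rng, lam * q_o)          # contending devices (thinned Poisson)
        picks = Counter(rng.randrange(M) for _ in range(W))
        total += sum(1 for c in picks.values() if c == 1)
    return total / rounds

lam, q_o, M = 10.0, 0.6, 30
analytic = lam * q_o * math.exp(-lam * q_o / M)   # E[S] as in Lemma 1
est = simulate_E_S(lam, q_o, M)
print(analytic, est)
```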
Note that we can have the following closed-form expression for ${\mathbb E}\left[\frac{1}{ \tilde \gamma_{(m)}} \right]$ if the distribution of $\tilde \gamma_{(m)}$ is given as in \eqref{EQ:gdist}:
\begin{align}
{\mathbb E}\left[\frac{1}{ \tilde \gamma_{(m)}} \right]
= \frac{1}{\bar \gamma} e^{\frac{\Gamma}{\bar \gamma}}
E_1 \left( \frac{\Gamma}{\bar \gamma} \right),
\end{align}
where $E_1 (x) = \int_x^\infty \frac{e^{-z}}{z} d z$ is the exponential integral and $\Gamma = \Gamma_{(k)}$ for all $k$.
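This closed form can be verified numerically. The sketch below (illustrative $\Gamma$ and $\bar\gamma$ values) evaluates $E_1$ via its standard convergent series and compares the closed form with a Monte Carlo average over $\gamma = \Gamma + X$, where $X$ is exponential with scale $\bar\gamma$ as in \eqref{EQ:gdist}.

```python
# Numerical check of E[1/gamma] = (1/gbar) * exp(Gamma/gbar) * E1(Gamma/gbar)
# for the shifted-exponential SNR density in (EQ:gdist); Gamma and gbar
# values below are illustrative.
import math, random

EULER_GAMMA = 0.5772156649015329

def exp1(x):
    """Exponential integral E1(x) via its convergent series (good for 0 < x <~ 1)."""
    s, term = 0.0, 1.0
    for n in range(1, 40):
        term *= -x / n                 # term = (-x)^n / n!
        s -= term / n                  # adds (-1)^{n+1} x^n / (n * n!)
    return -EULER_GAMMA - math.log(x) + s

def closed_form(Gamma, gbar):
    return math.exp(Gamma / gbar) * exp1(Gamma / gbar) / gbar

def monte_carlo(Gamma, gbar, n=300_000, seed=7):
    rng = random.Random(seed)
    # gamma = Gamma + Exp(scale gbar), matching the density in (EQ:gdist)
    return sum(1.0 / (Gamma + rng.expovariate(1.0 / gbar)) for _ in range(n)) / n

Gamma, gbar = 2.0, 10.0
cf, mc = closed_form(Gamma, gbar), monte_carlo(Gamma, gbar)
print(cf, mc)
```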
Recall that active devices are to send the offloading requests through the RAC in stage 1. Since there are $M$ channels, multiple active devices can successfully send their offloading requests at a time. As shown in \eqref{EQ:BBB}, as $M$ increases, the uploading time increases (because $B_{\rm o}$ decreases, as shown in \eqref{EQ:T_n}). In addition, an increase of $U_{\rm max}$ results in more offloading devices and a longer uploading time. Formally, we have the following observations.
\begin{mylemma} \label{L:2}
The mean of $D$, ${\mathbb E}[D]$, increases \emph{i)} with $M$ and \emph{ii)} with $U_{\rm max}$ for $\lambda \le M$.
\end{mylemma}
\begin{IEEEproof}
For the first part, in \eqref{EQ:L1}, we can see that ${\mathbb E}[S]$ increases with $M$.
In addition, we have
${\mathbb E}[\tilde T_{(m)}] \propto \frac{1}{B_{\rm o}} =
\frac{1}{B - M b}$ from \eqref{EQ:BBB}. Thus, ${\mathbb E}[D]$ increases with
$M$.
For the second part, note that ${\mathbb E}[ \tilde U_{(m)} \,|\, \tilde U_{(m)} \le U_{\rm max}] = \frac{1}{\mu_{\rm max}}$ increases with $U_{\rm max}$. In addition, from \eqref{EQ:qo}, we can also see that $q_{\rm o}$ increases with $U_{\rm max}$. Thus, if $\lambda \le M$, ${\mathbb E}[S] = \lambda q_{\rm o} e^{- \frac{\lambda q_{\rm o}}{M}}$ increases with $q_{\rm o}$ or $U_{\rm max}$. Since ${\mathbb E}[S]$ and ${\mathbb E}[ \tilde U_{(m)} \,|\, \tilde U_{(m)} \le U_{\rm max}]$ increase with $U_{\rm max}$, ${\mathbb E}[D]$ increases with $U_{\rm max}$,
which completes the proof.
\end{IEEEproof}
In addition, suppose that each offloading device can perform ideal power control so that the SNR can be constant, i.e., $\gamma_{(k)} = \bar \gamma > 0$.
Then, from \eqref{EQ:L1}, we can show that
\begin{align}
\mu_{\rm max} {\mathbb E}[D]
& = \frac{{\mathbb E}[S] }{B_{\rm o} \log_2 (1+ \bar \gamma)} \cr
& \le \frac{M e^{-1}}{B_{\rm o} \log_2 (1+ \bar \gamma)},
\end{align}
since $x e^{-x} \le e^{-1}$ for $x\ge 0$.
From this,
a sufficient condition for the stable system becomes
\begin{equation}
\frac{M e^{-1}}{ \mu_{\rm max} } \le
\underbrace{\Delta B_{\rm o} \log_2 (1+ \bar \gamma) }_{= \ {\rm number\ of\ Bits\ per\ Round}} ,
\label{EQ:SC}
\end{equation}
where $\frac{e^{-1}M}{ \mu_{\rm max} } $ is the product of the maximum average number of offloading devices, $\max {\mathbb E}[S]= M e^{-1}$, and the average size of input data, ${\mathbb E}[ \tilde U_{(m)} \,|\, \tilde U_{(m)} \le U_{\rm max}] = \frac{1}{\mu_{\rm max}}$.
That is, the left-hand side (LHS) of \eqref{EQ:SC} is the maximum average number of input bits per round, while the right-hand side (RHS) is the average number of bits that can be transmitted through the OC per round.
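The sufficient condition \eqref{EQ:SC} is straightforward to evaluate numerically. The sketch below checks it for hypothetical parameter values (not the paper's simulation setting): the LHS is the maximum average number of input bits per round and the RHS is the OC capacity per round.

```python
# Numeric check of the sufficient stability condition (EQ:SC):
#   M e^{-1} / mu_max  <=  Delta * B_o * log2(1 + gbar).
# The parameter values are purely illustrative.
import math

def mu_max(mu, U_max):
    """1 / E[U | U <= U_max] for U ~ Exp(mu), as in Lemma 1."""
    a = mu * U_max
    return mu * (1.0 - math.exp(-a)) / (1.0 - math.exp(-a) * (1.0 + a))

def is_stable(M, mu, U_max, Delta, B_o, gbar):
    lhs = M * math.exp(-1.0) / mu_max(mu, U_max)   # max mean input bits per round
    rhs = Delta * B_o * math.log2(1.0 + gbar)      # OC bits per round
    return lhs <= rhs

print(is_stable(M=30, mu=1 / 200.0, U_max=2000.0, Delta=1e-3, B_o=20e6, gbar=15.0))
```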
Consequently, there are two key control parameters, $M$ and $U_{\rm max}$ in the two-stage approach. While $U_{\rm max}$ is used for local decision of offloading at devices, $M$ is a system parameter that can be decided to limit the number of offloading devices, $S$, so that the total upload time does not grow for a stable system (e.g., if $M$ decreases, the total upload time decreases because the number of offloading devices, $S$, decreases as well as the bandwidth for uploading, $B_{\rm o}$, increases).
That is,
to stabilize the system, the value of $M$ or $U_{\rm max}$ can be controlled. For example, from Lemma~\ref{L:2}, $U_{\rm max}$ can be adjusted as follows:
\begin{equation}
\hat U_{\rm max} (i+1) = \hat U_{\rm max} (i) - \eta_i ( D_{i}-\Delta),
\label{EQ:SA}
\end{equation}
where $\eta_i > 0$ is the step size and $\hat U_{\rm max} (i)$ stands for the value of $U_{\rm max}$ used in round $i$. Provided that
$$
\lim_{U_{\rm max} \to \infty} {\mathbb E}[D] > \Delta,
$$
$\hat U_{\rm max}(i)$ in \eqref{EQ:SA} can approach the value of $U_{\rm max}$ that satisfies ${\mathbb E}[D] = \Delta$. In particular, as shown in Lemma~\ref{L:2}, ${\mathbb E}[D]$ is a nondecreasing function of $U_{\rm max}$. Thus,
if $D_i$ is iid, with a sequence of $\eta_i$ satisfying $\sum_i \eta_i = \infty$ and $\sum_i \eta_i^2 < \infty$, $\hat U_{\rm max} (i)$ in \eqref{EQ:SA} converges to the solution of the equation ${\mathbb E}[D] = \Delta$ with probability 1 \cite{Kushner03}.
Likewise, the value of $M$ can be adaptively decided to keep the system stable.
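The update \eqref{EQ:SA} can be sketched end to end with a short simulation. The sketch below assumes ideal power control (constant SNR $\bar\gamma$), normalized bandwidth $b=1$, and illustrative values for $\lambda$, $\mu$, $M$, $\Delta$, and the step-size sequence; none of these are the paper's simulation parameters. The running average of $D_i$ should settle near $\Delta$.

```python
# Sketch of the stochastic-approximation control (EQ:SA) of U_max, assuming
# ideal power control (constant SNR gbar) and illustrative parameters.
import math, random

def poisson(rng, mean):
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def adapt_U_max(lam=20.0, mu=1 / 30.0, M=20, B=50.0, gbar=15.0,
                Delta=1.0, U0=100.0, rounds=8000, seed=3):
    """Run rounds of the two-stage scheme, updating
    U_max(i+1) = U_max(i) - eta_i * (D_i - Delta)."""
    rng = random.Random(seed)
    B_o = B - M                              # b = 1 (normalized), B_a = M
    rate = B_o * math.log2(1.0 + gbar)       # OC bits per unit time
    U_hat, Ds = U0, []
    for i in range(rounds):
        sizes = [rng.expovariate(mu) for _ in range(poisson(rng, lam))]
        contenders = [u for u in sizes if u <= U_hat]    # local decision
        picks = {}                                       # Stage 1: RAC contention
        for u in contenders:
            picks.setdefault(rng.randrange(M), []).append(u)
        winners = [us[0] for us in picks.values() if len(us) == 1]
        D = sum(winners) / rate                          # Stage 2: total upload time
        Ds.append(D)
        eta = 50.0 / (10.0 + i)                          # diminishing step size
        U_hat = max(1.0, U_hat - eta * (D - Delta))
    return U_hat, sum(Ds[-2000:]) / 2000

U_final, D_avg = adapt_U_max()
print(U_final, D_avg)
```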
It is noteworthy that the devices are discouraged from offloading their tasks as $U_{\rm max}$ and $M$ decrease, while decreasing $M$ and decreasing $U_{\rm max}$ have different implications. The decrease of $M$ is independent of the offloading probability. Thus, when $M$ decreases, more contending devices experience collisions in stage 1 and are then forced to perform local computing (accordingly, such devices have the disadvantage that local computing is delayed by $\Delta$),
while the total upload time effectively decreases as the bandwidth of OC increases (see \eqref{EQ:BBB}) as well as the number of offloading devices decreases.
On the other hand, a decrease in $U_{\rm max}$ can directly reduce the number of contending devices, thereby reducing the burden on stage 1 and saving the decision time for local computing. However, adjusting the value of $M$ provides better control over the total upload time, $D_i$, than adjusting $U_{\rm max}$ (related simulation results will be presented in Section~\ref{S:Sim}).
Unfortunately, while a finite offloading latency can be obtained (by ensuring a stable system), it is difficult to guarantee a specific offloading latency. Thus, in the next section, we consider the latency outage from a device perspective to see if a certain offloading latency target can be met.
\section{Device Perspective: Analysis of Latency Outage} \label{S:LO}
In this section, we analyze the latency outage probability of the proposed approach
from a device perspective. For simplicity, we assume that $U_{\rm max} = \infty$ (i.e., all the active devices become the contending devices so that $W = K$) unless stated otherwise.
\subsection{Latency Outage Probability} \label{SS:LOP}
Suppose that active
device $k$ can have
the following latency threshold that increases with its upload time:
\begin{equation}
\tau_{(k)} = T_{(k)} +\tau,
\label{EQ:tau}
\end{equation}
where $\tau > 0$ (which
is an additional latency, in addition to its own upload time $T_{(k)}$, to account for other previously scheduled uploads). The device wants to know the probability that the upload can be completed within the latency threshold, $\tau_{(k)}$.
The BS will schedule their uploading according to a certain order, which is unknown to active devices.
Active device $k$ can assume that it succeeds in sending its offloading request and that its upload is placed in the $S_{(k)}$th position among the $S$ active devices that successfully send their requests in stage 1,
where $1 \le S_{(k)} \le S$.
Then, active device $k$ can expect to complete the upload at the following time:
\begin{equation}
Y_{(k)} = t_N + \underbrace{ \sum_{m=1}^{S_{(k)} -1} \tilde T_{(m)} }_{=Z_{(k)}} + T_{(k)}.
\label{EQ:Yk}
\end{equation}
In \eqref{EQ:Yk}, $t_N$ is known as the BS sends this information prior to the random access round and $T_{(k)}$ is also known to active device $k$. However, $Z_{(k)}$ is unknown as it depends on $S$ and the scheduling order.
Then, from \eqref{EQ:tau} and \eqref{EQ:Yk},
the probability
that active device $k$ fails to meet the latency constraint
is given by
\begin{align}
\Pr(Y_{(k)} > \tau_{(k)} )
& = \Pr \left( Z_{(k)} > \tau- t_N \right) \cr
& = g\left( \tau- t_N \right),
\label{EQ:Yt}
\end{align}
where $g(\cdot)$ is the complementary cumulative distribution function (ccdf)
of $Z_{(k)}$, which will be discussed in Subsection~\ref{SS:UB}.
This probability, which will be referred to as the latency outage probability, is a function of $\tau-t_N$. From the known state variable, $X_i = x_N$, at round $i$, each device can find the latency outage probability as a quality-of-service (QoS) indicator.
\subsection{An Upper-bound} \label{SS:UB}
In this subsection, we consider a closed-form expression for an upper-bound on the function $g(\cdot)$ in \eqref{EQ:Yt}.
In \eqref{EQ:Yk}, $Z_{(k)}$ represents the delay due to the uploads of the other active devices at $\tilde t = 0$ scheduled before device $k$, which is referred to as the intra-delay.
Note that
in \eqref{EQ:Yk}, we simply assume that devices $m = 1, \ldots, S_{(k)}-1$ are the devices that successfully send their requests and whose uploads are scheduled before that of device $k$. To device $k$, it is unknown which devices successfully send their requests and are scheduled earlier for uploading. Thus, $\tilde T_{(m)}$, $m \ne k$, are random variables to active device $k$.
To characterize $Z_{(k)}$ at the $k$th active device, under
the assumptions of {\bf A1} and {\bf A2},
we can consider the following approximation:
\begin{equation}
\tilde T_{(m)} \approx
V_{(m)} = \frac{\tilde U_{(m)}}{B_{\rm o}}
{\mathbb E} \left[ \frac{1}{\log_2 (1+ \gamma)} \right]
\sim {\rm Exp} (\theta),
\label{EQ:tXe}
\end{equation}
where
\begin{equation}
\frac{1}{\theta} = \frac{1}{\mu B_{\rm o}} {\mathbb E} \left[ \frac{1}{\log_2 (1+ \gamma)} \right] .
\end{equation}
Thus, $Z_{(k)}$ can be seen as a sum of $S_{(k)} - 1$ exponential random variables with parameter $\theta$. If the variation of the SNR is not significant, \eqref{EQ:tXe} becomes a good approximation. In particular, if an ideal power control is employed so that $\gamma_{(k)}$ becomes a constant, \eqref{EQ:tXe} becomes accurate.
Using the Chernoff bound \cite{Mitz05}, an upper-bound on the latency outage probability can be obtained as follows:
\begin{align}
\Pr (Z_{(k)} > \tau)
& = g(\tau) \cr
& \le \bar g(\tau) = \min_{\nu \ge 0} e^{-\nu \tau}
{\mathbb E}[ e^{\nu Z_{(k)}} ] ,
\label{EQ:CB}
\end{align}
where the upper-bound, $\bar g(\tau)$, decreases exponentially with $\tau$.
Assuming that the BS randomly schedules the uploads for $S$
offloading devices, $S_{(k)}$ can have the following
distribution:
\begin{equation}
\Pr(S_{(k)} = s\,|\, S) = \frac{1}{S}, \ s \in \{ 1,\ldots, S \}.
\end{equation}
Then, from \eqref{EQ:tXe}, after some manipulations, we can show that
\begin{align}
{\mathbb E} [ e^{\nu Z_{(k)}} ]
& = {\mathbb E} \left[ \left(
\frac{\theta}{\theta - \nu} \right)^{S(k)-1}
\right] \cr
& = {\mathbb E} \left[{\mathbb E} \left[ \frac{1}{S} \sum_{s=0}^{S-1}\left(
\frac{\theta}{\theta - \nu} \right)^{s} \,\biggl|\, S
\right] \right] \cr
& = {\mathbb E} \left[ \frac{1}{S} \left(\frac{1- z^S}{1-z} \right) \right],
\label{EQ:EvZ}
\end{align}
where $z = \frac{\theta}{\theta - \nu}$.
To find a tight upper-bound in \eqref{EQ:CB}, we need to have a closed-form expression for ${\mathbb E} [ e^{\nu Z_{(k)}} ]$. To this end, we consider
another approximation.
For a sufficiently large $M$, the number of offloading devices, $S$, can be approximated by a Poisson random variable with mean ${\mathbb E}[S] =
\bar \lambda = \lambda e^{-\frac{\lambda }{M}}$ (because we consider the case of $q_{\rm o}=1$).
Based on this Poisson approximation, we have the following result.
\begin{mylemma} \label{L:3}
If $S$ is a Poisson random variable and $\tilde T_{(m)}$ is an
exponential random variable as in \eqref{EQ:tXe}, we have
\begin{align}
{\mathbb E} [ e^{\nu Z_{(k)}} ] =
\frac{e^{-\bar \lambda}}{1-z} \lim_{n_{\rm max} \to \infty}
\sum_{n=2}^{n_{\rm max}} \left( \bar \lambda \beta_n (\bar \lambda) -
\bar \lambda z \beta_n (\bar \lambda z) \right),
\label{EQ:L3}
\end{align}
where
\begin{equation}
\beta_n (x) =
\frac{(n-2)!}{x^n} \left( e^x - \sum_{s=0}^{n-1} \frac{x^s}{s!} \right) .
\end{equation}
Here, truncating the series at a finite $n_{\rm max} \ge 2$ gives an approximation of ${\mathbb E} [ e^{\nu Z_{(k)}}]$.
\end{mylemma}
\begin{IEEEproof}
See Appendix~\ref{A:3}.
\end{IEEEproof}
With a sufficiently large $n_{\rm max}$, we can obtain
${\mathbb E} [ e^{\nu Z_{(k)}} ]$ in \eqref{EQ:L3}, which can be used for the minimization in \eqref{EQ:CB} for the Chernoff-bound.
Note that if $U_{\rm max}$ is finite, $q_{\rm o}$ can be less than 1. In this case, with $\bar \lambda =\lambda q_{\rm o} e^{-\frac{\lambda q_{\rm o}}{M}}$, we can also obtain ${\mathbb E} [ e^{\nu Z_{(k)}} ]$ from \eqref{EQ:L3}.
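As a sanity check of Lemma~\ref{L:3}, the truncated series in \eqref{EQ:L3} can be compared with a direct numerical evaluation of \eqref{EQ:EvZ} under the Poisson approximation for $S$. The following sketch (Python; the values $\bar\lambda = 5$, $\theta = 1$, $\nu = 0.3$ are arbitrary) implements $\beta_n(x)$ and both evaluations:

```python
import math

def beta(n, x):
    # beta_n(x) = (n-2)!/x^n * ( e^x - sum_{s=0}^{n-1} x^s/s! )
    tail = math.exp(x) - sum(x**s / math.factorial(s) for s in range(n))
    return math.factorial(n - 2) / x**n * tail

def mgf_lemma(bar_lam, z, n_max=20):
    # truncated series of (EQ:L3)
    s = sum(bar_lam * beta(n, bar_lam) - bar_lam * z * beta(n, bar_lam * z)
            for n in range(2, n_max + 1))
    return math.exp(-bar_lam) / (1 - z) * s

def mgf_direct(bar_lam, z, s_max=150):
    # direct evaluation of E[(1/S)(1 - z^S)/(1 - z)] with S ~ Pois(bar_lam)
    tot = sum((1 - z**s) / s * math.exp(-bar_lam) * bar_lam**s / math.factorial(s)
              for s in range(1, s_max + 1))
    return tot / (1 - z)

bar_lam, theta, nu = 5.0, 1.0, 0.3
z = theta / (theta - nu)
approx, direct = mgf_lemma(bar_lam, z), mgf_direct(bar_lam, z)
```

With $n_{\rm max} = 20$, as used in Section~\ref{S:Sim}, the truncated series already matches the direct evaluation closely; the remainder terms decay like $1/n^2$, since $\beta_n(x) \to \frac{1}{n(n-1)}$ as $n \to \infty$.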
\section{Numerical Results} \label{S:Sim}
In this section, we present theoretical and simulation results under the assumptions of {\bf A1} -- {\bf A3} with \eqref{EQ:gdist}. For convenience, the bandwidth of one channel for RAC, $b$, is normalized (i.e., $b = 1$), and $B = b M_{\rm max} = M_{\rm max}$, where $M_{\rm max}$ represents the maximum number of channels for RAC.
\subsection{System Perspective}
In Fig.~\ref{Fig:plt1}, we show the performance in terms of
the percentage of offloading devices, $\frac{{\mathbb E}[S]}{\lambda}$ in \%, and the average total upload time, ${\mathbb E}[D]$,
for different values of the traffic intensity or average number of new devices per round, ${\mathbb E}[K] = \lambda$, and the average size of input, ${\mathbb E}[U_{(k)}]= \frac{1}{\mu}$,
when $M_{\rm max} = B = 50$, $M = 30$, $\Delta = 10^{-3}$, $U_{\rm max} = 10 \Delta$,
and $(\bar \gamma, \Gamma_{(k)}) = (10, 6)$ in dB. We can see that most devices can offload
when the system is lightly loaded (i.e., $\lambda$ is small) in Fig.~\ref{Fig:plt1} (a). As $\lambda$ approaches $M$, the system becomes fully loaded and about $36\%$ of active devices can offload their tasks as shown in Fig.~\ref{Fig:plt1} (a) and (b) (as ${\mathbb E}[S] \le M e^{-1}$).
It is also shown that ${\mathbb E}[D]$ increases with $\lambda$ and $\frac{1}{\mu}$, while ${\mathbb E}[D]$ remains less than $\Delta = 10^{-3}$, which shows that the system is stable (i.e., ${\mathbb E}[D] \le \Delta$). In fact, with $M = 30$ and $\frac{1}{\mu} = U_{\rm max} = 10 \Delta$, from \eqref{EQ:L1}, we have ${\mathbb E}[D] = 6.06 \times 10^{-4} < \Delta = 10^{-3}$, which indicates that the system is stable over the whole range of parameter values in Fig.~\ref{Fig:plt1} (a) and (b). We can also see that the theoretical results agree with the simulation results.
\begin{figure}[thb]
\begin{center}
\subfigure[Performance in terms of $\lambda$]{\label{fig 0 ay}
\includegraphics[width=0.4\textwidth]{plt_lams.pdf}}
\subfigure[Performance in terms of $\mu$]{\label{fig 0 by}
\includegraphics[width=0.4\textwidth]{plt_mus.pdf}}
\end{center}
\caption{Percentage of offloading devices and total upload time
when $M_{\rm max} = B = 50$, $M = 30$, $\Delta = 10^{-3}$, $U_{\rm max} = 10 \Delta$,
and $(\bar \gamma, \Gamma_{(k)}) = (10, 6)$ in dB: (a) for different values of $\lambda$ with $\frac{1}{\mu} = 2 \Delta$; (b) for different values of $\mu$ with $\lambda = 30$.}
\label{Fig:plt1}
\end{figure}
In Fig.~\ref{Fig:plt2}, $\lambda$ and $\mu$ are fixed, while
$M$ and $U_{\rm max}$ vary when $M_{\rm max} = B = 50$, $\Delta = 10^{-3}$, $\frac{1}{\mu} = 2 \Delta$, $\lambda = 30$,
and $(\bar \gamma, \Gamma_{(k)}) = (10, 6)$ in dB. We can see that ${\mathbb E}[D]$ becomes greater than $\Delta$ when $M \ge 40$ in Fig.~\ref{Fig:plt2} (a). Clearly, since more devices choose to offload as $M$ increases, the total upload time increases and the system can become unstable. On the other hand, in Fig.~\ref{Fig:plt2} (b), we see that even as $U_{\rm max} \to \infty$, ${\mathbb E}[D]$ remains less than $\Delta$, which results from the fact that ${\mathbb E}[D] \to 3.17 \times 10^{-4}$ as $\mu_{\rm max} \to \mu$ (which is the case when $U_{\rm max} \to \infty$). As mentioned earlier,
when comparing Fig.~\ref{Fig:plt2} (a) and (b), it can be seen that $M$ is a more effective system parameter than $U_{\rm max}$ for controlling the mean of the total upload time, ${\mathbb E}[D]$.
\begin{figure}[thb]
\begin{center}
\subfigure[Performance in terms of $M$]{\label{fig 1 ay}
\includegraphics[width=0.4\textwidth]{plt_Ms.pdf}}
\subfigure[Performance in terms of $U_{\rm max}$]{\label{fig 1 by}
\includegraphics[width=0.4\textwidth]{plt_Us.pdf}}
\end{center}
\caption{Percentage of offloading devices and total upload time
when $M_{\rm max} = B = 50$, $\Delta = 10^{-3}$, $\frac{1}{\mu} = 2 \Delta$, $\lambda = 30$,
and $(\bar \gamma, \Gamma_{(k)}) = (10, 6)$ in dB: (a) for different values of $M$ with $U_{\rm max} = 10 \Delta$; (b) for different values of $U_{\rm max}$ with $M = 30$.}
\label{Fig:plt2}
\end{figure}
As mentioned earlier, it is possible to adjust the values of $M$ and/or $U_{\rm max}$ to ensure ${\mathbb E}[D] = \Delta$ using the iterative method for stochastic approximation in \eqref{EQ:SA}.
In Fig.~\ref{Fig:plt3}, we show the results when
$M_{\rm max} = B = 40$, $\Delta = 10^{-3}$, $\lambda = 30$,
and $(\bar \gamma, \Gamma_{(k)}) = (10, 6)$ in dB. For the case that $M$ is adapted as in Fig.~\ref{Fig:plt3} (a), we assume that $\frac{1}{\mu} = 2 \Delta$ and $U_{\rm max} = 5 \Delta$. It is shown that $M (i) \to 34$ as the number of rounds, $i$, increases, while $D_i$ is around $\Delta$.
As shown in Fig.~\ref{Fig:plt3} (b), $U_{\rm max}$ can also be adjusted, where $U_{\rm max}(i) \to 5.94 \times 10^{-3}$ as $i$ increases. Note that ${\mathbb E}[D] \to 1.58 \times 10^{-3} > \Delta$ as $U_{\rm max} \to \infty$ in the case of Fig.~\ref{Fig:plt3} (b). Since ${\mathbb E}[D]$ increases with $U_{\rm max}$, there exists a finite $U_{\rm max}$ that satisfies ${\mathbb E}[D] = \Delta$. As shown in Fig.~\ref{Fig:plt3} (a) and (b),
we can see that the adaptation of $M$ and $U_{\rm max}$ can make the system stable using an iterative method for stochastic approximation.
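The adaptation above can be illustrated with a generic Robbins--Monro recursion. The sketch below (Python) shows only the form of an update $x_{i+1} = x_i + a_i(\Delta - D_i)$ with diminishing step sizes $a_i = a_0/i$; it is not the exact update in \eqref{EQ:SA}, and the response of the observed upload time to the tuned parameter is replaced by a hypothetical noisy linear map, since simulating the full system is beyond this illustration:

```python
import random

def adapt(target, observe, x0, n_iter=5000, a0=2.0):
    """Robbins-Monro update x_{i+1} = x_i + a_i * (target - observe(x_i)),
    with diminishing step sizes a_i = a0 / i."""
    x = x0
    for i in range(1, n_iter + 1):
        x += (a0 / i) * (target - observe(x))
    return x

random.seed(1)
# Hypothetical noisy response standing in for the observed upload time D_i:
# E[observe(x)] = 0.5 * x, so the target value 1.0 is met at x = 2.0.
noisy_response = lambda x: 0.5 * x + random.gauss(0.0, 0.05)
x_star = adapt(target=1.0, observe=noisy_response, x0=0.0)
```

Despite the noise in each observation, the diminishing steps average it out, and the iterate settles at the root of ${\mathbb E}[\textnormal{observe}(x)] = \textnormal{target}$.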
\begin{figure}[thb]
\begin{center}
\subfigure[Adaptation of $M$]{\label{fig 2 ay}
\includegraphics[width=0.4\textwidth]{sys_M.pdf}}
\subfigure[Adaptation of $U_{\rm max}$]{\label{fig 2 by}
\includegraphics[width=0.4\textwidth]{sys_U.pdf}}
\end{center}
\caption{Total upload time and trajectory of key parameters over time: (a) $\hat M(i)$ with $\frac{1}{\mu} = 2 \Delta$ and $U_{\rm max} = 5 \Delta$; (b) $\hat U_{\rm max}(i)$ with $\frac{1}{\mu} = 5 \Delta$ and $M = 30$.}
\label{Fig:plt3}
\end{figure}
\subsection{Latency Outage Probability}
In this subsection, we focus on the latency outage probability with
$U_{\rm max} = \infty$ (unless stated otherwise) and $\tau_N = 0$. In addition, $n_{\rm max}$ in \eqref{EQ:L3} is set to 20 in finding the upper-bound using \eqref{EQ:CB}.
In Fig.~\ref{Fig:ub1}, the latency outage probability is shown as a function of $\tau$
when $M_{\rm max} = B = 50$, $\Delta = 10^{-3}$, $\frac{1}{\mu} = 3 \Delta$, $\lambda = 20$, $M = 30$,
and $(\bar \gamma, \Gamma_{(k)}) = (10, 6)$ in dB. As expected,
the latency outage probability decreases exponentially with $\tau$. We can also see that \eqref{EQ:CB} is an upper-bound on the latency outage probability, although it is not tight. At reasonably low outage probabilities, say $10^{-4}$ to $10^{-2}$, the actual outage probability is about $1/10$ of the upper-bound. Since the Chernoff bound in \eqref{EQ:CB} is known to be asymptotically tight, a scaling factor can be introduced for a good prediction, i.e., $g(\tau) \approx c_0 \bar g(\tau)$, where $c_0$ is a constant; in our case, $c_0$ is around $1/10$.
\begin{figure}[thb]
\begin{center}
\includegraphics[width=0.80\columnwidth]{plt_ub1.pdf}
\end{center}
\caption{The latency outage probability as a function of $\tau$
when $M_{\rm max} = B = 50$, $\Delta = 10^{-3}$, $\frac{1}{\mu} = 3 \Delta$, $\lambda = 20$, $M = 30$,
and $(\bar \gamma, \Gamma_{(k)}) = (10, 6)$ in dB.}
\label{Fig:ub1}
\end{figure}
Fig.~\ref{Fig:ub2} shows the latency outage probability as a function of $M$
when $M_{\rm max} = B = 50$, $\Delta = \tau = 10^{-3}$, $\frac{1}{\mu} = 2 \Delta$, $\lambda = 20$,
and $(\bar \gamma, \Gamma_{(k)}) = (10, 6)$ in dB. The latency outage probability decreases as $M$ decreases. Since a decrease of $M$ leads to an increase of the bandwidth of the OC, $B_{\rm o}$, and a decrease of the number of offloading devices, $S$, the latency outage probability can be significantly lowered by decreasing $M$. Thus, in order to ensure a low latency outage probability, the number of channels for RAC, $M$, should be limited.
\begin{figure}[thb]
\begin{center}
\includegraphics[width=0.80\columnwidth]{plt_ub2.pdf}
\end{center}
\caption{The latency outage probability as a function of $M$
when $M_{\rm max} = B = 50$, $\Delta = \tau = 10^{-3}$, $\frac{1}{\mu} = 2 \Delta$, $\lambda = 20$,
and $(\bar \gamma, \Gamma_{(k)}) = (10, 6)$ in dB.}
\label{Fig:ub2}
\end{figure}
The impact of the average size of input, $\frac{1}{\mu}$, on the latency outage probability is shown in Fig.~\ref{Fig:ub3}
when $M_{\rm max} = B = 50$, $\Delta = \tau = 10^{-3}$, $\lambda = 20$, $M = 30$,
and $(\bar \gamma, \Gamma_{(k)}) = (10, 6)$ in dB. It is shown that the increase of $\frac{1}{\mu}$ results in a rapid increase of the
latency outage probability. By limiting the size of input data using $U_{\rm max}$, a lower latency outage probability can be achieved
as shown in Fig.~\ref{Fig:ub3}. It is also shown in Fig.~\ref{Fig:plt2} (b) that the decrease of $U_{\rm max}$ leads to the decrease of ${\mathbb E}[D]$. Thus, the control of $U_{\rm max}$ can be seen as a self-censoring mechanism where a small value of $U_{\rm max}$ discourages offloading. In other words, each device compares the size of its input to $U_{\rm max}$ and decides on its own whether to offload,
so that the system stays within a stabilization range or a sufficiently low latency remains achievable.
\begin{figure}[thb]
\begin{center}
\includegraphics[width=0.80\columnwidth]{plt_ub3.pdf}
\end{center}
\caption{The latency outage probability as a function of $\frac{1}{\mu}$
when $M_{\rm max} = B = 50$, $\Delta = \tau = 10^{-3}$, $\lambda = 20$, $M = 30$,
and $(\bar \gamma, \Gamma_{(k)}) = (10, 6)$ in dB.}
\label{Fig:ub3}
\end{figure}
\section{Concluding Remarks} \label{S:Con}
In this paper, we studied multiuser MEC offloading for devices with sporadic tasks in IoT applications. To support offloading of sporadic tasks with low signaling overhead, multichannel random access was employed for offloading requests in the proposed two-stage approach. Two key parameters, the number of channels for RAC, $M$, and the maximum size of input for offloading, $U_{\rm max}$, have been identified to stabilize the system using stochastic approximation, where their values can be adaptively adjusted. Although a finite upload time is guaranteed in a stable system, each device may want to see a QoS indicator. To this end, we analyzed the latency outage probability and derived an upper-bound on it.
The approach in this paper can be extended in a number of directions. For example, NOMA can be used in stage 2 so that the upload time can be shortened, and the notion of a priority queue can be introduced to provide different QoS levels for devices with different latency constraints.
\appendices
\section{Proof of Lemma~\ref{L:1}} \label{A:1}
In \eqref{EQ:DX}, since $S$ and $\tilde T_{(m)}$ are independent,
due to Wald's identity \cite{Mitz05}, we have
${\mathbb E}[D] = {\mathbb E}[S] {\mathbb E}[\tilde T_{(m)}]$, which is the equality in \eqref{EQ:L1}.
From \eqref{EQ:WP}, it can be shown that
\begin{align}
{\mathbb E}[S] & = {\mathbb E}\left[{\mathbb E}[S\,|\, W] \right] \cr
& = {\mathbb E} \left[ W \left(1 - \frac{1}{M} \right)^{W-1}
\right] = \lambda q_{\rm o} e^{ - \frac{\lambda q_{\rm o}}{M}}.
\label{EQ:ES}
\end{align}
From \eqref{EQ:tXm}, we have
\begin{align}
{\mathbb E}[\tilde T_{(m)} ] & = {\mathbb E}\left[ \frac{\tilde U_{(m)}}{B_{\rm o} \log_2 (1 + \tilde \gamma_{(m)}) } \right] \cr
& = \frac{{\mathbb E}[\tilde U_{(m)}]}{B_{\rm o} }
{\mathbb E}\left[ \frac{1}{ \log_2 (1+ \tilde \gamma_{(m)} )}
\right].
\label{EQ:tXx}
\end{align}
Under the assumption of {\bf A2}, we have
$$
{\mathbb E}[\tilde U_{(m)} \,|\, \tilde U_{(m)} \le U_{\rm max}] = \frac{1}{\mu_{\rm max}}.
$$
In addition, since
$\ln (1+x) \ge \frac{2x}{2+x}$, $x \ge 0$, we have \eqref{EQ:L1} from
\eqref{EQ:ES} and \eqref{EQ:tXx}.
The inequality in \eqref{EQ:L1_b} is due to ${\mathbb E}[\tilde U_{(m)}] = \frac{1}{\mu} \ge {\mathbb E}[\tilde U_{(m)} \,|\, \tilde U_{(m)} \le U_{\rm max}] =
\frac{1}{\mu_{\rm max}}$.
\section{Proof of Lemma~\ref{L:3}} \label{A:3}
From \eqref{EQ:EvZ}, it follows
\begin{align}
{\mathbb E} [ e^{\nu Z_{(k)}} ] & = \frac{1}{1-z}
{\mathbb E} \left[ \frac{1- z^S}{S} \right] \cr
& = \frac{1}{1-z} \sum_{s=1}^\infty \frac{1- z^s}{s}
\frac{e^{-\bar \lambda} \bar \lambda^s}{s!} \cr
& = \frac{e^{-\bar \lambda}}{1-z}
\left( \sum_{s=0}^\infty \frac{\bar \lambda \psi_{\bar \lambda} (s)}{(s+1)^2} -
\frac{\bar \lambda z \psi_{\bar \lambda z} (s)}{(s+1)^2}
\right),
\label{EQ:A31}
\end{align}
where $\psi_x (s) = \frac{x^s}{s!}$.
In order to find a closed-form expression, we need to have the following
result.
\begin{myproposition}
It can be shown that
\begin{align}
\frac{1}{(s+1)^2} & = \frac{1}{(s+1) (s+2)}
+ \sum_{n=3}^\infty \frac{(n-2)!}{\prod_{i=1}^n (s+i)} \cr
& = \sum_{n=2}^\infty \frac{(n-2)!}{\prod_{i=1}^n (s+i)} .
\label{EQ:app_s12}
\end{align}
\end{myproposition}
\begin{IEEEproof}
It can be shown that
\begin{align*}
\frac{1}{(s+1)^2} & = \frac{1}{(s+1) (s+2)}
+\left( \frac{1}{(s+1)^2} - \frac{1}{(s+1) (s+2)} \right) \cr
& = \frac{1}{(s+1) (s+2)} +
\underbrace{\frac{1}{(s+1)^2 (s+2)}}_{=R_3(s)}.
\end{align*}
Then, the difference term, $R_3 (s)$, can also be written as
\begin{align*}
R_3(s)
= \frac{1}{(s+1) (s+2) (s+3)}
+ \underbrace{\frac{2}{(s+1)^2 (s+2) (s+3)}}_{=R_4 (s)}.
\end{align*}
The difference term, $R_4 (s)$, can be further expressed as
\begin{align*}
R_4 (s) =
\frac{2}{(s+1) (s+2) (s+3) (s+4)} + R_5(s),
\end{align*}
where $R_n (s)$, $n \ge 4$, can be defined as
$$
R_{n+1} (s) = R_{n} (s) - \frac{(n-2)!}{\prod_{i=1}^n (s+i)}.
$$
Then, iterating this procedure and noting that the difference term, $R_n (s)$, is positive and decreases to $0$ as $n$ increases for $s \ge 0$,
we can show \eqref{EQ:app_s12}.
\end{IEEEproof}
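The identity \eqref{EQ:app_s12} can also be checked numerically. Since consecutive terms satisfy $\bigl((n-1)!/\prod_{i=1}^{n+1}(s+i)\bigr) / \bigl((n-2)!/\prod_{i=1}^{n}(s+i)\bigr) = \frac{n-1}{s+n+1}$, the partial sums can be accumulated without evaluating large factorials. A sketch (Python):

```python
def partial_identity(s, n_max):
    """Partial sum of sum_{n=2}^{n_max} (n-2)! / prod_{i=1}^n (s+i),
    which converges to 1/(s+1)^2 as n_max -> infinity."""
    term = 1.0 / ((s + 1) * (s + 2))          # n = 2 term: 0!/((s+1)(s+2))
    total = term
    for n in range(2, n_max):
        term *= (n - 1) / (s + n + 1)         # ratio of term n+1 to term n
        total += term
    return total
```

For $s = 0$ the terms are $\frac{1}{n(n-1)}$, the partial sum up to $n_{\rm max}$ is $1 - \frac{1}{n_{\rm max}}$, and the remainder matches the behaviour of the difference term $R_n(s)$ above.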
In \eqref{EQ:A31}, to find each term on the right-hand side (RHS),
using \eqref{EQ:app_s12},
we can use the following expression:
\begin{align}
\sum_{s=0}^\infty \frac{\psi_x(s)}{(s+1)^2}
& = \sum_{s=0}^\infty \frac{x^s}{(s+1)^2 s!} \cr
& = \sum_{n=2}^\infty
\frac{(n-2)!}{x^n} \sum_{s=0}^\infty \frac{x^{s+n}}{(s+n)!}
= \sum_{n=2}^\infty \beta_n (x). \qquad
\label{EQ:beta_a}
\end{align}
Substituting \eqref{EQ:beta_a} into \eqref{EQ:A31},
we have \eqref{EQ:L3}.
\bibliographystyle{ieeetr}
Let $X$ and $X'$ be two compact connected Riemann surfaces of genus $g$. The classical Torelli theorem says that if their Jacobians $J(X)$ and $J(X')$ are isomorphic as polarized varieties, with the polarization given by the theta divisor, then $X$ is isomorphic to $X'$.\\
In \cite{MN68} Mumford and Newstead proved a similar result for $\mathcal{M}_X^{2,\xi}$, the moduli space of stable vector bundles over $X$ of rank $2$ and fixed determinant $\xi$ (assuming $g\geq 2$). Later, in \cite{NR75} Narasimhan and Ramanan extended this result to any rank. They considered the intermediate Jacobian associated to $H^3(\mathcal{M}_X^{r,\xi})$. They showed that it carries a polarization defined by the positive generator of $\textnormal{Pic}(\mathcal{M}_X^{r,\xi})$, and that this polarized intermediate Jacobian is isomorphic (as a polarized variety) to the Jacobian of the curve $X$, with the polarization given by the theta divisor. So the result follows from the classical Torelli theorem. A Torelli theorem for the moduli space of symplectic bundles was proved in \cite{BH12} and \cite{BGM12}.\\
The notion of parabolic bundles over a curve was introduced by Mehta and Seshadri in \cite{MS80}, where they also constructed their moduli space using Geometric Invariant Theory. In \cite{BR89} Bhosle and Ramanathan extended this notion to parabolic $G$-bundles, where $G$ is a connected reductive group, and constructed their moduli space. The notion of symplectic parabolic bundles was introduced in \cite{BMW11}. They are parabolic vector bundles with a nondegenerate (in a suitable sense) anti-symmetric form taking values in a parabolic line bundle.\\
Hitchin, in \cite{H87}, showed that the moduli space of stable Higgs bundles over a curve $X$ forms an algebraically completely integrable system, fibered over a space of invariant polynomials by Jacobians or Prym varieties of spectral curves. Later, in \cite{M94} Markman extended this result to the moduli space of stable $L$-twisted Higgs bundles, where $L$ is a positive line bundle on $X$ satisfying $L \geq K_X$.\\
In \cite{BBB01} Balaji, del Baño and Biswas proved a Torelli theorem for parabolic bundles of rank $2$, and in \cite{BG03} Biswas and Gómez proved a Torelli theorem for Higgs bundles (with genus $g\geq 2$). Gómez and Logares in their paper \cite{GL11} proved a Torelli theorem for parabolic Higgs bundles of rank $2$ by applying the Torelli theorem of \cite{BBB01}. In \cite{AG19}, Alfaya and Gómez proved a Torelli theorem for parabolic bundles of any rank (assuming $g \geq 4$). Our goal in this article is to prove a Torelli theorem for symplectic parabolic Higgs bundles.\\
The main result is the following theorem (see section $2$ for notation):
\begin{theorem}\label{thm2}
Let $(X,D)$ and $(X',D')$ be two compact Riemann surfaces of genus $g \geq 4$ with sets of marked points $D \subset X$ and $D' \subset X'$. Let $\mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L)$ and $\mathcal{N}'_{\textnormal{Sp}}(2m,\alpha,L)$ be the moduli spaces of stable symplectic parabolic Higgs bundles over $X$ and $X'$, respectively. If $\mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L)$ is isomorphic to $\mathcal{N}'_{\textnormal{Sp}}(2m,\alpha,L)$, then $(X,D)$ is isomorphic to $(X',D')$, i.e. there exists an isomorphism $X \cong X'$ sending $D$ to $D'$.
\end{theorem}
In section $2$ we recall the notion of symplectic parabolic bundles and some properties of their moduli space. We also prove some technical lemmas regarding $(1,0)$-stable symplectic parabolic bundles. In section $3$ we give a description of the \textit{Hitchin map} and then we study the locus of the singular spectral curves, called the \textit{Hitchin discriminant}. We can intrinsically describe the image of the Hitchin discriminant as an abstract variety from the geometry of $\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)$, the moduli space of stable symplectic parabolic bundles. In section $4$ we will use this description to prove the Torelli theorem for the moduli space of symplectic parabolic bundles (Theorem \ref{thm1}).\\
In section $5$ we again consider the Hitchin map for $\mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L)$, the moduli space of symplectic parabolic Higgs bundles. The fiber over the origin is called the nilpotent cone. The moduli space $\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)$ is embedded in the nilpotent cone as a component and we will see that it is the unique irreducible component of the nilpotent cone that doesn't admit a non-trivial $\mathbb{C}^*$-action. In other words, if we were given $\mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L)$ and the Hitchin map, we would recover $\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)$, and then we can apply the Torelli theorem for $\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)$ to recover $X$.\\
We are given only the isomorphism class of $\mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L)$, so we have to recover the Hitchin map. The idea is the same as in \cite{BG03}. Consider an algebraic variety $Y$ which is isomorphic to $\mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L)$. Then the natural map
\[
m : Y \to \textnormal{Spec}(\Gamma(Y))
\]
is isomorphic to the Hitchin map up to an automorphism. More precisely, by Proposition \ref{prop5}, $\textnormal{Spec}(\Gamma(Y)) \cong \mathbb{A}^N$, where $N= \dim \mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)$, and there is a commutative diagram \ref{diag1}.\\
The standard $\mathbb{C}^*$-action on $\mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L)$, given by sending a Higgs pair $(E,\Phi)$ to $(E,t\Phi)$, descends to an action on the Hitchin space, and the origin is the only fixed point of this descended action. In section $6$ we use the Kodaira-Spencer map to prove that if a $\mathbb{C}^*$-action on the Hitchin space has exactly one fixed point and admits a lift to $\mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L)$, then this fixed point is the origin of the Hitchin space (see Proposition \ref{prop10}). So using this property we recover the origin of the Hitchin space. Hence we recover the nilpotent cone, and by the previous observations, the proof of the theorem is complete.
\section{Preliminaries}
\subsection{Parabolic Vector Bundles}Let $X$ be a compact Riemann surface, and let $D \subset X$ be a finite subset of $n \geq 1$ distinct points. A \textit{parabolic vector bundle} $E_*$ on $X$ is a holomorphic vector bundle $E$ of rank $r$ over $X$ together with a parabolic structure, i.e. for every point $p \in D$, we have
\begin{enumerate}
\item a filtration of subspaces $$E|_p=: E_{p,1}\supsetneq \dots \supsetneq E_{p,r(p)} \supsetneq \{0\}, $$
\item a sequence of real numbers (parabolic weights) satisfying $$0\leq \alpha_1(p) < \alpha_2(p) < \dots < \alpha_r(p) < 1.$$
\end{enumerate}
The parabolic structure is said to have \textit{full flags} whenever $\dim(E_{p,i}/E_{p,i+1}) = 1$ for all $i$ and all $p\in D$. We denote by $\alpha = \{(\alpha_1(p),\dots,\alpha_r(p)) \}_{p \in D}$ the system of weights corresponding to the fixed parabolic structure. \\
The \textit{parabolic degree} of a parabolic vector bundle $E_*$ is defined as
\[
\operatorname{pardeg}(E_*):= \deg(E)+ \sum\limits_{p\in D}\sum\limits_{i} \alpha_i(p) \cdot \dim(E_{p,i}/E_{p,i+1})
\]
and the real number $\operatorname{pardeg}(E_*)/\text{rank}(E)$ is called the \textit{parabolic slope} of $E_*$ and it is denoted by $\mu_{par}(E_*)$. The dual of a parabolic bundle and the tensor product of two parabolic bundles can be defined in a natural way (see \cite{Y95}).\\
A \textit{parabolic homomorphism} $\phi : E_* \to E^\prime_*$ between two parabolic bundles is a homomorphism of vector bundles that satisfies
the following: at each $p \in D$ we have $\phi_p(E_{p,i}) \subset E_{p,i+1}^\prime$ whenever $\alpha_i(p) > \alpha_{i+1}^\prime(p)$. Furthermore, we call such a morphism \textit{strongly parabolic} if $\alpha_i(p) \geq \alpha_{i+1}^\prime(p)$ implies $\phi_p(E_{p,i}) \subset E_{p,i+1}^\prime$ for every $p \in D$.
\subsection{Symplectic Parabolic Bundles} Fix a parabolic line bundle $L_*$. Let $E_*$ be a parabolic bundle and let
\[
\psi : E_* \otimes E_* \to L_*
\]
be a homomorphism of parabolic bundles. Tensoring both sides with the parabolic dual $E^\vee_*$ we get a homomorphism
\[
\psi \otimes Id : (E_* \otimes E_*) \otimes E^\vee_* \to L_* \otimes E^\vee_*.
\]
The trivial line bundle $\mathcal{O}_X$ equipped with the trivial parabolic structure (meaning parabolic weights are all zero) is realized as a parabolic subbundle of $E_* \otimes E^\vee_*$. Let
\[
\tilde{\psi} : E_* \to L_* \otimes E^\vee_*
\]
be the homomorphism defined by the composition
\[
E_* = E_* \otimes \mathcal{O}_X \xhookrightarrow{} E_* \otimes (E_* \otimes E^\vee_*) = (E_* \otimes E_*) \otimes E^\vee_* \xrightarrow{\psi \otimes Id} L_* \otimes E^\vee_*.
\]
\begin{definition} A \textit{symplectic parabolic bundle} is a pair $(E_*,\psi)$ of the above form such that $\psi$ is anti-symmetric and the homomorphism $\tilde{\psi}$ is an isomorphism.
\end{definition}
\subsection{Parabolic Higgs Bundles} Let $K$ be the canonical bundle on $X$. We write $K(D) \coloneqq K \otimes \mathcal{O}(D)$. A \textit{parabolic Higgs bundle} on $X$ is a parabolic bundle $E_*$ on $X$ together with a Higgs field $\Phi : E_* \to E_* \otimes K(D)$ such that $\Phi$ is strongly parabolic.\\
The Higgs field associated to a parabolic Higgs bundle is called a \textit{parabolic Higgs field}.
\subsection{Symplectic Parabolic Higgs Bundles} Let $(E_*,\psi)$ be a symplectic parabolic bundle on $X$. A parabolic Higgs field on $E_*$ induces a parabolic Higgs field on $L_* \otimes E^\vee_*$. A parabolic Higgs field $\Phi$ is said to be compatible with $\psi$ if $\tilde{\psi}$ takes $\Phi$ to the induced parabolic Higgs field on $L_* \otimes E^\vee_*$.\\
A \textit{symplectic parabolic Higgs bundle} $(E_*,\Phi,\psi)$ is a symplectic parabolic bundle $(E_*,\psi)$ together with a parabolic Higgs field $\Phi$ on $E_*$ which is compatible with $\psi$.
\begin{definition} A holomorphic subbundle $F \subset E$ is called \textit{isotropic} if $\psi(F \otimes F) = 0$. The parabolic structure on $E$ will induce a parabolic structure on $F$. A symplectic parabolic Higgs bundle $(E_*,\Phi,\psi)$ is called \textit{stable} (resp. \textit{semistable}) if for every isotropic subbundle $F \subset E$ of positive rank, the following condition holds
\[
\mu_{par}(F_*) < \mu_{par}(E_*) \hspace{0.4cm}(\text{resp.} \hspace{0.15cm} \mu_{par}(F_*) \leq \mu_{par}(E_*)).
\]
\end{definition}\
In \cite{BBN01} and \cite{BBN03}, principal bundles with parabolic structures were defined when all parabolic weights are rational. By \cite{BMW11}, the definition of symplectic parabolic bundle coincides with the definition in \cite{BBN01} and \cite{BBN03} when all parabolic weights are rational.\\
Let
\[
J = \begin{bmatrix} O_{m \times m} & I_{m \times m} \\ -I_{m \times m} & O_{m \times m} \end{bmatrix}
\]
be the standard symplectic form on $\mathbb{C}^{2m}$. Consider the group
\begin{equation}\label{eqn6}
\text{Gp}(2m,\mathbb{C}) = \{ A \in \text{GL}(2m,\mathbb{C}) : A^tJA=cJ \text{ for some } c \in \mathbb{C}^*\}.
\end{equation}
This group is an extension of $\mathbb{C}^*$ by the symplectic group $\text{Sp}(2m,\mathbb{C})$
\[
1 \to \text{Sp}(2m,\mathbb{C}) \xrightarrow{} \text{Gp}(2m,\mathbb{C}) \xrightarrow[]{p} \mathbb{C}^* \xrightarrow[]{} 1,
\]
where $p(A)=c$ for $A$ and $c$ as in (\ref{eqn6}). It follows that $\det(A)=p(A)^m$ for all $A \in \text{Gp}(2m,\mathbb{C})$. When all parabolic weights are rational, giving a symplectic parabolic bundle of rank $2m$ is equivalent to giving a parabolic principal $\text{Gp}(2m,\mathbb{C})$-bundle.
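For completeness, we note the short computation behind the determinant identity: if $A^t J A = cJ$, choose $t \in \mathbb{C}^*$ with $t^2 = c$; then $(t^{-1}A)^t J (t^{-1}A) = J$, so $t^{-1}A \in \text{Sp}(2m,\mathbb{C})$ and
\[
\det(A) = t^{2m} \det(t^{-1}A) = (t^2)^m = c^m = p(A)^m .
\]
In particular, $\text{Gp}(2m,\mathbb{C}) = \mathbb{C}^* \cdot \text{Sp}(2m,\mathbb{C})$, so $\text{Gp}(2m,\mathbb{C})$ is connected.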
\subsection{Moduli Space of Symplectic Parabolic Higgs bundles}
The moduli space of stable parabolic $G$-bundles of rank $r$ and degree $d$ and fixed parabolic structure $\alpha$ was described in \cite{BR89} and \cite{BBN01}. It is a normal projective variety. Fix a parabolic line bundle $L$ with trivial parabolic structure.
Let $\mathcal{M}_{\text{Sp}}(2m,\alpha,L)$ denote the moduli space of stable symplectic parabolic bundles of rank $2m$ ($m > 1$) and fixed parabolic structure $\alpha$, with the symplectic form taking values in $L$. When the parabolic structure $\alpha$ has full flags, it is of dimension
\[
\dim \mathcal{M}_{\text{Sp}}(2m,\alpha,L) = m(2m+1)(g-1) + m^2n,
\]
where $n$ is the number of marked points on $X$.\\
The moduli space $\mathcal{N}_{\text{Sp}}(2m,\alpha,L)$ of stable symplectic parabolic Higgs bundles of rank $2m$ is a smooth irreducible complex variety. The moduli space $\mathcal{M}_{\text{Sp}}(2m,\alpha,L)$ is embedded in $\mathcal{N}_{\text{Sp}}(2m,\alpha,L)$, by considering the zero Higgs fields. By the parabolic version of Serre duality, $\mathcal{N}_{\text{Sp}}(2m,\alpha,L)$ contains the cotangent bundle $T^*\mathcal{M}_{\text{Sp}}(2m,\alpha,L)$ as an open subset. Therefore,
\[
\dim\mathcal{N}_{\text{Sp}}(2m,\alpha,L) = 2\dim \mathcal{M}_{\text{Sp}}(2m,\alpha,L) = 2m(2m+1)(g-1) + 2m^2n.
\]
From now on we assume the parabolic structure to have full flags and rational parabolic weights. We also assume that the parabolic weights are small enough so that the stability of the symplectic parabolic Higgs bundle is equivalent to the stability of the underlying vector bundle.
\begin{definition}
A symplectic parabolic bundle $(E_*,\psi)$ is \textit{$(k,l)$-stable} (resp. \textit{$(k,l)$-semistable}) if for all non-zero isotropic subbundles $F \subset E$,
\[
\frac{\text{pardeg}(F_*) + k}{\text{rank}(F)} < \frac{\text{pardeg}(E_*) - l}{\text{rank}(E)} \hspace{1cm}(\text{resp. } \leq )
\]
holds.
\end{definition}
Observe that if $k,l \geq 0$, then a $(k,l)$-stable symplectic parabolic bundle is stable in the usual sense.
\begin{prop}\label{prop1}
For $ g \geq 3$, the locus of $(1,0)$-stable symplectic parabolic bundles is a non-empty Zariski open subset of $\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)$.
\end{prop}
\begin{proof} The proof is analogous to the proof of \cite[Proposition 2.7]{BG06}. If a stable symplectic parabolic bundle $(E_*,\psi)$ is not $(1,0)$-stable, then there exists an isotropic subbundle $F\subset E$ such that
\[
\frac{\text{pardeg}(F_*) + 1}{\text{rank}(F)} \geq \frac{\text{pardeg}(E_*)}{\text{rank}(E)}.
\]
Or equivalently, $F_*$ satisfies the following
\[
\frac{\text{pardeg}(F_*) + 1}{\text{rank}(F)} \geq \frac{\text{pardeg}(E_*/F_*)}{\text{rank}(E/F)}.
\]
Therefore the ranks, degrees and weight-multiplicities for quotients $E_*/F_*$ of $E_*$ vary over finite sets. Hence, by the properness of the (parabolic) Quot scheme, the complement of $(1,0)$-stable bundles in $\mathcal{M}_{\text{Sp}}(2m,\alpha,L)$ is a finite union of closed sets. Thus the locus of $(1,0)$-stable bundles is an open subset of $\mathcal{M}_{\text{Sp}}(2m,\alpha,L)$. So it remains to show that the locus of $(1,0)$-stable bundles is non-empty.\\
Let $(E_*,\psi)$ be a stable symplectic parabolic bundle which is not $(1,0)$-stable and let $P$ be the parabolic principal $\text{Gp}(2m,\mathbb{C})$-bundle corresponding to $(E_*,\psi)$. Let $F\subset E$ be an isotropic subbundle which contradicts the $(1,0)$-stability condition. This gives a reduction of the structure group $P^H \subset P$ to a maximal parabolic subgroup $H \subset \text{Gp}(2m,\mathbb{C})$.\\
From the openness of the stability condition, it follows that a deformation of $P^H$ as parabolic principal $H$-bundle will give deformations of $P$ which are not $(1,0)$-stable but are still stable. Also, any deformation of $P$ which is not $(1,0)$-stable comes from a deformation of $P^H$, for some maximal parabolic subgroup $H \subset \text{Gp}(2m,\mathbb{C})$.\\
The dimension of the tangent space of these deformations is $h^1(P^H(\mathfrak{H})) - g$, where $\mathfrak{H}$ is the Lie algebra of $H$ and $P^H(\mathfrak{H})$ is the adjoint bundle. \\
We will show that $h^0(P^H(\mathfrak{H})) = 1$. First note that \begin{equation}\label{eqn4}
h^0(P^H(\mathfrak{H})) \geq \dim\mathfrak{z}(\mathfrak{H})=1,
\end{equation}
where $\mathfrak{z}(\mathfrak{H}) \subset \mathfrak{H}$ is the center.\\ Since $P$ is a stable parabolic $\text{Gp}(2m,\mathbb{C})$-bundle and the parabolic weights are small enough such that the underlying vector bundle $E$ is also stable, we have
\[
H^0(X,P(\mathfrak{gp}(2m,\mathbb{C}))) \subset H^0(X,E(\mathfrak{gl}(2m,\mathbb{C})))= \mathfrak{z}(\mathfrak{gl}(2m,\mathbb{C})).
\]
Since $\mathfrak{H}$ is a submodule of the $H$-module $\mathfrak{gp}(2m,\mathbb{C})$, the adjoint bundle $P^H(\mathfrak{H})$ is a subbundle of $P(\mathfrak{gp}(2m,\mathbb{C}))$. So,
\[
h^0(P^H(\mathfrak{H})) \leq h^0(P(\mathfrak{gp}(2m,\mathbb{C}))) \leq \dim \mathfrak{z}(\mathfrak{gl}(2m,\mathbb{C})) =1.
\]
This inequality together with the inequality in (\ref{eqn4}) gives $h^0(P^H(\mathfrak{H})) = 1$.\\
By Riemann-Roch theorem,
\[h^1(P^H(\mathfrak{H})) - g = -\text{pardeg}(P^H(\mathfrak{H})) + (\text{rank}(P^H(\mathfrak{H})) - 1)(g - 1).
\]
Hence by Lemma \ref{lemma1}, the dimension of the subscheme $Z \subset \mathcal{M}_{Sp}(2m,\alpha,L)$ defined by all stable symplectic parabolic bundles which are not $(1,0)$-stable satisfies the inequality
\[
\begin{aligned}
\dim Z & \leq \max_{1\leq r \leq m}\{2m + (2m^2 + m -2mr + \frac{3r^2 - r}{2})(g-1)\}\\
& = \dim\mathcal{M}_{Sp}(2m,\alpha,L) - m^2n +2m + \max_{1\leq r \leq m}r(-2m + \frac{3r - 1}{2})(g-1)\\
& \leq \dim\mathcal{M}_{Sp}(2m,\alpha,L) - m^2n +2m - 1\\
& < \dim\mathcal{M}_{Sp}(2m,\alpha,L).
\end{aligned}
\]
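For the maximum appearing above, note that $\varphi(r) := r\big(-2m + \frac{3r-1}{2}\big)$ is convex in $r$, so its maximum over $1 \leq r \leq m$ is attained at an endpoint; since $\varphi(1) = 1-2m$, $\varphi(m) = -\frac{m(m+1)}{2}$ and $(m-1)(m-2) \geq 0$, the maximum is $\varphi(1) = 1 - 2m$. Hence
\[
\max_{1\leq r \leq m} r\Big(-2m + \frac{3r-1}{2}\Big)(g-1) = (1-2m)(g-1) \leq 2(1-2m) \leq -1.
\]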
Here we have used that $m > 1$ and $g \geq 3$. This completes the proof assuming Lemma \ref{lemma1}.
\end{proof}
\begin{lemma}\label{lemma1}
Let $(E_*, \psi)$ be a stable symplectic parabolic bundle of rank $2m$, where $\psi : E_* \otimes E_* \to L$. Let $P$ be the parabolic principal $\textnormal{Gp}(2m,\mathbb{C})$-bundle corresponding to $(E_*,\psi)$. Let $F \subset E$ be an isotropic subbundle of rank $r$ that contradicts the $(1,0)$-stability condition for $(E_*,\psi)$. The subbundle $F_*$ gives a reduction of the structure group $P^H \subset P$ to a parabolic subgroup $H \subset \textnormal{Gp}(2m,\mathbb{C})$. Then
\[
\textnormal{pardeg}(P^H(\mathfrak{H})) \geq -2m,
\]
where $P^H(\mathfrak{H})$ is the adjoint bundle of $P^H$. Also, $\textnormal{rank}(P^H(\mathfrak{H})) = 2m^2 + m - 2mr + \frac{3r^2-r}{2} + 1$.
\end{lemma}
\begin{proof}
Let $P^H(\mathfrak{gl}(2m,\mathbb{C}))$ be the associated bundle for the adjoint action of $H$ on $\mathfrak{gl}(2m,\mathbb{C})$. So, $P^H(\mathfrak{gl}(2m,\mathbb{C})) = \text{End}(E_*)$. Since $\mathfrak{H}$ is a submodule of the $\mathfrak{gp}(2m,\mathbb{C})$-module $\mathfrak{gl}(2m,\mathbb{C})$, the adjoint bundle $P^H(\mathfrak{H})$ is a subbundle of $P^H(\mathfrak{gl}(2m,\mathbb{C})) = \text{End}(E_*)$. The subbundle $P^H(\mathfrak{H})$ preserves the filtration
\begin{equation}\label{eqn3}
F_*\subset F_*^\perp \subset E_*,
\end{equation}
where $F_*^\perp$ is the orthogonal complement of $F_*$ with respect to the symplectic structure on $E_*$. So $\textnormal{rank}(F_*)+ \textnormal{rank}(F_*^\perp)=2m$.\\
Let $L(H)$ denote the Levi quotient of $H$. Fixing $T \subset B \subset H$, where $T \subset \text{Gp}(2m,\mathbb{C})$ is a maximal torus and $B \subset \text{Gp}(2m,\mathbb{C})$ is a Borel subgroup, $L(H)$ can be realized as a subgroup of $H$. Indeed, $L(H)$ is identified with the maximal connected $T$-invariant reductive subgroup of $H$.\\
Using the projection of $H$ to the Levi quotient $L(H)$, we can extend the structure group of $P^H$ to obtain a $L(H)$-bundle, which we denote by $P^{L(H)}$. Let $P^{L(H)}(H)$ denote the $H$-bundle obtained by extending the structure group of $P^{L(H)}$ using the inclusion $L(H)\subset H$. The $H$-bundle $P^H$ is topologically isomorphic to the $H$-bundle $P^{L(H)}(H)$. Therefore their adjoint bundles $P^H(\mathfrak{H})$ and $P^{L(H)}(H)(\mathfrak{H})$ are topologically isomorphic. Hence we can assume that $P^H$ admits a reduction of the structure group $P^{L(H)}\subset P^H$ to the subgroup $L(H) \subset H$. Fix one such reduction of the structure group $P^{L(H)}\subset P^H$ to $L(H)$.\\
Using the reduction of the structure group $P^{L(H)} \subset P^H$ the above filtration (\ref{eqn3}) splits, i.e.
\begin{equation}\label{eqn1}
E_* \cong F_* \oplus (F_*^\perp/F_*) \oplus (E_*/F_*^\perp).
\end{equation}
Therefore, a locally defined section of $P^H(\mathfrak{H})$ has the form
\begin{equation}\label{eqn2}
A = \begin{pmatrix}
\alpha & \beta & \gamma \\
0 & \delta & \epsilon \\
0 & 0 & \eta
\end{pmatrix}.
\end{equation}
The symplectic structure $\psi : E_* \otimes E_* \to L$ induces an isomorphism
\[
\psi : E_* \to E_*^\vee \otimes L,
\]
which has the property that the composition
\[
F_*^\perp \xhookrightarrow{} E_* \xrightarrow{\psi} E_*^\vee \otimes L \xrightarrow{} F_*^\vee \otimes L
\]
vanishes. So, we obtain an induced isomorphism
\[
E_*/F_*^\perp \cong F_*^\vee \otimes L,
\]
denoted by $id$. Also, $\psi$ induces a symplectic structure on the parabolic bundle $F_*^\perp/F_*$, and this induced symplectic structure produces an isomorphism
\[
\psi' : F_*^\perp/F_* \xrightarrow{} (F_*^\perp/F_*)^\vee \otimes L .
\]
Using (\ref{eqn1}), the isomorphism $\psi$ is of the form
\[
\psi = \begin{pmatrix}
0 & 0 & -id \\
0 & \psi' & 0 \\
id & 0 & 0
\end{pmatrix}.
\]
A parabolic subalgebra $\mathfrak{H}$ of $\mathfrak{gp}(2m,\mathbb{C})$ is of the form $ \mathfrak{H} = \mathfrak{H'}\oplus \mathbb{C}$, where $\mathfrak{H'}$ is a parabolic subalgebra of $\mathfrak{sp}(2m,\mathbb{C})$, and $\mathbb{C}$ is the center of $\mathfrak{gp}(2m,\mathbb{C})$. Since this decomposition is preserved by the adjoint action of $\text{Gp}(2m,\mathbb{C})$, we get that
\[
P^H(\mathfrak{H}) = P^H(\mathfrak{H'})\oplus \mathcal{O}_X.
\]
The local section $A$ of $P^H(\mathfrak{H})$, defined in (\ref{eqn2}), lies in $P^H(\mathfrak{H'})$ if and only if
\[
\psi \circ A = \begin{pmatrix}
0 & 0 & -\eta \\
0 & \psi'\circ \delta & \psi' \circ \epsilon \\
\alpha & \beta & \gamma
\end{pmatrix} : E_* \to E_*^\vee \otimes L
\]
is symmetric, i.e. the following conditions hold:\\
$(1) \hspace{0.2cm}-\eta = \alpha^t$,\\
$(2) \hspace{0.2cm}\psi' \circ \epsilon = \beta^t$, and\\
$(3) \hspace{0.2cm}\psi' \circ \delta$ and $\gamma$ are symmetric. \\
Hence, we have an isomorphism
\[
P^H(\mathfrak{H'}) \cong \text{End}(F_*) \oplus \Bigg(\bigg(\frac{F_*^\perp}{F_*}\bigg)^\vee \otimes F_*\Bigg) \oplus ((\text{Sym}^2F_*) \otimes L^\vee) \oplus \Bigg(\text{Sym}^2\bigg(\frac{F_*^\perp}{F_*}\bigg)^\vee \otimes L\Bigg)
\]
defined by
\[
A \mapsto (\alpha,\beta, \gamma, \psi'\circ \delta).
\]
A straightforward calculation using this isomorphism gives \[\textnormal{rank}(P^H(\mathfrak{H}))= \textnormal{rank}(P^H(\mathfrak{H'})) + 1 = 2m^2 + m - 2mr + \frac{3r^2-r}{2} + 1.
\]
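Indeed, the four summands of $P^H(\mathfrak{H'})$ have ranks
\[
r^2, \qquad r(2m-2r), \qquad \frac{r(r+1)}{2}, \qquad \frac{(2m-2r)(2m-2r+1)}{2},
\]
which sum to $2m^2 + m - 2mr + \frac{3r^2-r}{2}$; the extra $+1$ accounts for the trivial summand $\mathcal{O}_X$ coming from the center.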
Also,
\[
\textnormal{pardeg}(P^H(\mathfrak{H}))=\textnormal{pardeg}(P^H(\mathfrak{H'})) = 2mf - rf + f -rml + \frac{r^2 - r}{2}l = \frac{(2m-r+1)(2f - rl)}{2},
\]
where $l=\deg L$ and $f=\textnormal{pardeg}(F_*)$. Since $F_*$ contradicts the $(1,0)$-stability condition, we have $2f \geq lr - 2$, and since $E_*$ is stable, we have $2f < lr$. As $1 \leq r \leq m$, the first inequality gives \[\textnormal{pardeg}(P^H(\mathfrak{H})) \geq -(2m-r+1) \geq -2m.
\]
This completes the proof of the lemma.
\end{proof}
\begin{lemma}\label{lemma2}
Let $(E_*,\psi)$ be a $(1,0)$-stable symplectic parabolic bundle. Let $x \in D$ and let $1 \leq k \leq 2m$ be an integer. Suppose $E_{x,k}' \subsetneq E|_x$ is a subspace such that
\[
E_{x,k-1} \supsetneq E_{x,k}' \supsetneq E_{x,k+1}.
\]
Substitute $E_{x,k}$ with $E_{x,k}'$ to obtain a new quasi-parabolic structure. Then the symplectic parabolic bundle $E_{*'}$ equipped with this new quasi-parabolic structure is stable.
\end{lemma}
\begin{proof}
Let $F \subset E$ be an isotropic subbundle. Let $F_*$ and $F_{*'}$ be the vector bundles with parabolic structures induced by $E_*$ and $E_{*'}$ respectively. Therefore, we have the relation
\[
\text{wt}_x(F_{*'})= \text{wt}_x(F_*) + (\alpha_k(x) - \alpha_{k-1}(x))(\dim(F|_x \cap E_{x,k}')- \dim(F|_x \cap E_{x,k})).
\]
Note that $E_{x,k+1}\subseteq E_{x,k}\cap E_{x,k}'$, and $E_{x,k+1}$ has codimension $1$ in both $E_{x,k}$ and $E_{x,k}'$. So $\dim(F|_x \cap E_{x,k}') \leq \dim(F|_x \cap E_{x,k}) + 1 $, and hence
\[
\text{wt}_x(F_{*'}) \leq \text{wt}_x(F_*) + (\alpha_k(x) - \alpha_{k-1}(x)) < \text{wt}_x(F_*) + 1.
\]
By the $(1,0)$-stability condition, we get
\[
\begin{aligned}
\frac{\text{pardeg}(F_{*'})}{\text{rank}(F)} &= \frac{\deg(F)+\sum_{x \in D}\text{wt}_x(F_{*'})}{\text{rank}(F)}\\
& < \frac{\deg(F)+\sum_{x \in D}\text{wt}_x(F_{*})+1}{\text{rank}(F)}\\
&= \frac{\text{pardeg}(F_{*})+1}{\text{rank}(F)}\\
&< \frac{\text{pardeg}(E_{*})}{\text{rank}(E)}\\
&= \frac{\text{pardeg}(E_{*'})}{\text{rank}(E)}
\end{aligned}
\]
Since this strict inequality holds for any isotropic subbundle $F \subset E$, we conclude that $E_{*'}$ is stable.
\end{proof}
We also need the following lemma for our purpose.
\begin{lemma}\label{lemma3}
Suppose that $g \geq 4$ and the parabolic weights are small enough so that the stability of a symplectic parabolic bundle is equivalent to the stability of the underlying vector bundle. Then $H^0(\textnormal{End}(E_*)(x))=0$ for a generic stable symplectic parabolic bundle $E_*$.
\end{lemma}
\begin{proof}
Let $f : \mathcal{M}_{\text{Sp}}(2m,\alpha,L) \to \mathcal{M}(2m)$ be the forgetful morphism sending a symplectic parabolic bundle to the underlying vector bundle. This morphism is well defined because of the small weights. For $g \geq 4$, there exists an open set $U \subset \mathcal{M}(2m)$ such that for every $E \in U$ we have $H^0(\textnormal{End}(E)(x))=0$ \cite[Lemma 2.2]{BGM13}. Hence, for every $E_* \in f^{-1}(U)$, we have $H^0(\textnormal{End}(E_*)(x)) =0$.
\end{proof}
\section{Hitchin discriminant}
We will now describe the Hitchin map for symplectic parabolic Higgs bundles. An element of $\mathcal{N}_{\text{Sp}}(2m,\alpha,L)$ can be viewed as a stable parabolic bundle $E_*$ of rank $2m$ with a nondegenerate symplectic form $\langle\, ,\,\rangle$, together with a holomorphic section $\Phi \in H^0(X, \operatorname{End}(E_*) \otimes K(D))$ which satisfies
\[
\langle\Phi v,w\rangle = - \langle v,\Phi w\rangle.
\]
From the nondegeneracy of the symplectic form it follows that if $\mu$ is an eigenvalue then $-\mu$ is also an eigenvalue. Thus the characteristic polynomial of $\Phi$ is of the form
\[
\det(\lambda - \Phi) = \lambda^{2m} + s_2\lambda^{2m-2} + \cdots + s_{2m},
\]
where $s_{2i} \in H^0(X, K(D)^{2i})$ for all $1\leq i \leq m$. In fact, since $\Phi$ is strongly parabolic, the residue at each point of $D$ is nilpotent. Therefore, $s_{2i} \in H^0(X, K^{2i}D^{2i-1})$ for all $1 \leq i \leq m$. We define the \textit{Hitchin space} as
\[
W = H^0(K^2D) \oplus H^0(K^4D^3) \oplus \dots \oplus H^0(K^{2m}D^{2m-1}).
\]
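Note that $\deg(K^{2i}D^{2i-1}) = 2i(2g-2) + (2i-1)n > 2g-2$, so $H^1$ of each summand vanishes and, by the Riemann-Roch theorem, $h^0(K^{2i}D^{2i-1}) = (4i-1)(g-1) + (2i-1)n$; summing over $1 \leq i \leq m$ gives the dimension of $W$.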
Its dimension is $m(2m+1)(g-1) + m^2n$. The \textit{Hitchin map}
\[
h : \mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L) \longrightarrow W
\]
is defined by $h(E_*,\Phi,\psi) = (s_2,s_4,\dots,s_{2m})$ where $s_{2i}$'s are the coefficients of the characteristic polynomial of $\Phi$. Also, we have the restriction map
\[
h_0 : T^*\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L) \longrightarrow W.
\]
Let $\mathcal{S}$ denote the total space of $K(D)$, let $p : \mathcal{S} \to X$ be the natural projection map, and let $t \in H^0(\mathcal{S},p^*K(D))$ be the tautological section. Given $s=(s_2,\dots, s_{2m}) \in W$, the \textit{spectral curve} $X_s$ in $\mathcal{S}$ is defined by the equation
\[
t^{2m} + s_2 t^{2m-2} + \cdots + s_{2m}=0,
\]
and it possesses an involution $\sigma(t) = -t$ (since all the exponents of $t$ are even). The involution $\sigma$ acts on the Jacobian $J(X_s)$ of the spectral curve. If $X_s$ is smooth, then the fiber $h^{-1}(s)$ is isomorphic to the Prym variety $\text{Prym}(X_s, X)=\{M \in J(X_s) : \sigma^*M \cong M^\vee \}$ \cite[Theorem 4.1]{R20}.\\
Let $\mathcal{D} \subset W$ be the divisor consisting of the characteristic polynomials whose corresponding spectral curve is singular. The inverse image $h^{-1}(\mathcal{D})$ is called the \textit{Hitchin discriminant}. If a spectral curve is singular over a parabolic point $x \in D$, it is singular precisely at $(x,0)$. Consider the following subsets of $\mathcal{D}$:\\
$(a)$ For each parabolic point $x \in D$, let $\mathcal{D}_x$ be the set of points whose spectral curve is singular over $x$.\\
$(b)$ Let $\mathcal{D}_1$ be the set of points whose spectral curve is smooth over every $x\in D$ but singular at $(y,0)$, where $y \notin D$.\\
$(c)$ Let $\mathcal{D}_2$ be the set of points whose spectral curve has two symmetric nodes (i.e. the curve $u^m + s_2u^{m-1} + \cdots + s_{2m-2}u + s_{2m} = 0$, where $u = t^2$, has a node).\\
Therefore, $\mathcal{D} = \underset{x \in D}{\bigcup}\mathcal{D}_x \cup \overline{\mathcal{D}_1} \cup \mathcal{D}_2$, where $\overline{\mathcal{D}_1}$ is the set of points whose spectral curve is singular at $(y,0)$ where $y \notin D$ (but not necessarily smooth over $D$).\\
Let $\mathcal{D}_i^\mathrm{o} \subset \mathcal{D}_i, i=1,2$, denote the locus of the spectral curves which do not contain extra singularities.
\begin{lemma}\label{lemma4}
$\mathcal{D}_1$ and $\mathcal{D}_x$ are irreducible for every $x \in D$.
\end{lemma}
\begin{proof}
Suppose $s \in \mathcal{D}_x$; then the spectral curve is singular at $(x,0)$, i.e. $t=0$ and $s_{2m}$, viewed as a section of $K(D)^{2m}$, has a double root at $x$. So $s_{2m} \in H^0(K^{2m}D^{2m-1}(-x))$. Therefore,
\[
\mathcal{D}_x = \bigoplus\limits_{i=1}^{m-1}H^0(K^{2i}D^{2i-1}) \oplus H^0(K^{2m}D^{2m-1}(-x))
\]
is irreducible for all $x \in D$. \\
Consider $s\in \mathcal{D}_1$, i.e. the spectral curve is singular at $(y,0)$ for some $y \notin D$, but smooth over every $x \in D$. So
\[
s_{2m} \in H^0(K^{2m}D^{2m-1}(-2y)) \setminus \bigcup_{x\in D} H^0(K^{2m}D^{2m-1}(-2y -x)) \coloneqq \mathcal{H}_y.
\]
Hence, we get that
\[
\mathcal{D}_1 = \bigcup_{y \in X \setminus D}\bigg(\bigoplus\limits_{i=1}^{m-1}H^0(K^{2i}D^{2i-1}) \oplus \mathcal{H}_y\bigg).
\]
By the Riemann-Roch theorem, $\mathcal{H}_y$ is the complement of a finite union of hyperplanes in $H^0(K^{2m}D^{2m-1}(-2y))$, hence irreducible. Since these subsets form a family over the irreducible base $X \setminus D$, the union $\mathcal{D}_1$ is irreducible.
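Concretely, $\deg(K^{2m}D^{2m-1}(-2y-x)) = 2m(2g-2) + (2m-1)n - 3 > 2g-2$, so by the Riemann-Roch theorem $h^0(K^{2m}D^{2m-1}(-2y-x)) = h^0(K^{2m}D^{2m-1}(-2y)) - 1$ for each $x \in D$; thus each subset removed from $H^0(K^{2m}D^{2m-1}(-2y))$ in the definition of $\mathcal{H}_y$ is a hyperplane.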
\end{proof}
\begin{prop}\label{prop2}
Let $Y$ be an integral curve which has a unique simple node, and assume that $Y$ possesses an involution $\sigma$. Let $\pi_Y : \tilde{Y} \to Y$ be the normalization. Then the compactified Jacobian $\bar{J}(Y)$ is birational to a $\mathbb{P}^1$-fibration over $J(\tilde{Y})$. \\
Analogously, let $Y$ be an integral curve with two simple nodes possessing an involution $\sigma$ which interchanges these two nodes, and let $\tilde{Y}$ be the normalization of $Y$. Then $\bar{J}(Y)$ is birational to a $\mathbb{P}^1 \times \mathbb{P}^1$-bundle over $J(\tilde{Y})$.\\
In either case, the Prym variety, which is the fixed point variety of the involution, is a uniruled variety.
\end{prop}
\begin{proof}
The proof is the same as in \cite{BGM12}. For the convenience of the reader, we give the details. Suppose $Y$ has a simple node at $y$, and let $x$ and $z$ be the preimages of $y$ in $\tilde{Y}$. Let $P$ be a $\mathbb{P}^1$-fibration over $J(\tilde{Y})$ whose fiber over any point $L \in J(\tilde{Y})$ is $\mathbb{P}(L_{x} \oplus L_{z})$. We claim that the compactified Jacobian $\bar{J}(Y)$ is birational to $P$. Let us construct a morphism $P \to \bar{J}(Y)$ as follows: a point of $P$ corresponds to a line bundle $L \in J(\tilde{Y})$ and a one-dimensional quotient $q : L_{x} \oplus L_{z} \twoheadrightarrow \mathbb{C}$ (up to a scalar multiple). The image $L'$ of $L$ under this morphism is defined by the exact sequence
\[
0 \rightarrow L' \rightarrow (\pi_Y)_*L \overset{q}\rightarrow \mathbb{C}_y \rightarrow 0.
\]
For the details of the proof, see \cite[Theorem 4]{B96}.\\
The involution $\sigma$ lifts to an involution $\tilde{\sigma}$ of $\tilde{Y}$, which induces an involution on $P$ as follows: suppose $(L,q)$ is a point of $P$, and the quotient $q : L_{x} \oplus L_{z} \twoheadrightarrow \mathbb{C}$ is represented by $[a : b]$. Then the induced involution sends the point $(L,q)$ to $(\tilde\sigma^*L^\vee, q^\vee := [b:a])$. The definition of $q^\vee$ makes sense: if $[a : b] \in \mathbb{P}(L_{x} \oplus L_{z})$, then
\[
[b : a] \in \mathbb{P}(L_{z} \oplus L_{x}) = \mathbb{P}(L_{x}^\vee \oplus L_{z}^\vee).
\]
Therefore, the involution on $P$ induces an involution on $\bar{J}(Y)$, which restricts to $M \mapsto \sigma^*M^\vee$ on $J(Y)$. The Prym variety $\textnormal{Prym}(Y,\sigma) = \{M \in J(Y): \sigma^*M\cong M^\vee\} \subset \bar{J}(Y)$ is a uniruled variety since we have a surjective morphism from the $\mathbb{P}^1$-fibration $P|_{\text{Prym}}$ defined by the pullback
\[
\begin{tikzcd}
P|_{\text{Prym}} \arrow{r}{} \arrow[swap]{d}{} & P \arrow{d}{} \\%
\textnormal{Prym}(\tilde{Y},\tilde{\sigma}) \arrow{r}{}& J(\tilde{Y})
\end{tikzcd}
\]
Analogously, if $Y$ is an integral curve with two simple nodes $y_1$ and $y_2$ and $\pi_Y^{-1}(y_i) = \{x_i,z_i\}$, then $\bar{J}(Y)$ is birational to a $\mathbb{P}^1 \times \mathbb{P}^1$-bundle $P$ over $J(\tilde{Y})$, where the two $\mathbb{P}^1$-factors correspond to the quotients $q_1 : L_{x_1} \oplus L_{z_1} \twoheadrightarrow \mathbb{C}$ and $q_2 : L_{x_2} \oplus L_{z_2} \twoheadrightarrow \mathbb{C}$.\\
The induced involution on $P$ sends $(L,q_1,q_2)$ to $(\tilde{\sigma}^*L^\vee, q_2^\vee, q_1^\vee)$, and hence induces an involution on $\bar{J}(Y)$. A fixed point $(L,q_1,q_2)$ in $P$ for this involution has $L \cong \tilde{\sigma}^*L^\vee$ and $q_2 = q_1^\vee$; therefore the fixed point variety is a $\mathbb{P}^1$-fibration over $\textnormal{Prym}(\tilde{Y},\tilde{\sigma})$, and its image under the birational map is the fixed point locus on $\bar{J}(Y)$, denoted by $\textnormal{Prym}(Y,\sigma)$. So this Prym variety is also a uniruled variety.
\end{proof}
\begin{prop}\label{prop3}
Let $g \geq 4$. The following statements hold for the Hitchin map $h_0 : T^*\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L) \to W$: \\
(a) For $s \in W -\mathcal{D}$, the fiber $h_0^{-1}(s)$ is an open subset of an abelian variety.\\
(b) For $s \in \mathcal{D}_i^\mathrm{o}$ $(i = 1,2)$, the fiber $h_0^{-1}(s)$ is an open subset of a uniruled variety.\\
(c) For a generic $s \in \mathcal{D}_x$, the fiber $h_0^{-1}(s)$ contains a complete rational curve.
\end{prop}
\begin{proof}
The Hitchin map $h : \mathcal{N}_{\text{Sp}}(2m,\alpha,L) \to W$ is proper (see \cite{M94} for details). By \cite[Theorem 4.1]{R20}, $h^{-1}(s)$ is an open subset of an abelian variety (a Prym variety) for $s \in W - \mathcal{D}$.\\
The complement $\mathcal{N}_{\text{Sp}}(2m,\alpha,L) - T^*\mathcal{M}_{\text{Sp}}(2m,\alpha,L)$ has codimension $\geq 3$ (following the computations in Faltings \cite[Theorem II.6(iii)]{F93}, for $g \geq 4$). Therefore $(\mathcal{N}_{\text{Sp}}(2m,\alpha,L) - T^*\mathcal{M}_{\text{Sp}}(2m,\alpha,L)) \cap h^{-1}(\mathcal{D}_i)$ is of codimension at least $2$ in $h^{-1}(\mathcal{D}_i)$, so for $s\in \mathcal{D}_i^\mathrm{o}$, we get that
\[
h^{-1}(s) - h_0^{-1}(s) \subset h^{-1}(s)
\]
has codimension at least two. Therefore, by Proposition \ref{prop2}, $h_0^{-1}(s)$ is an open subset of a uniruled variety.\\
It remains to prove part $(c)$, that a generic fiber over $\mathcal{D}_x$ contains a complete rational curve. The idea of the proof is the same as in \cite[Proposition 4.2]{AG19}. Let $V \subset \mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)$ denote the intersection of the open subsets defined by Proposition \ref{prop1} and Lemma \ref{lemma3}, i.e. $V$ consists of $(1,0)$-stable symplectic parabolic bundles $(E_*,\psi)$ such that $H^0(\textnormal{End}(E_*)(x))=0$. Thus, for all $(E_*,\psi) \in V$ and all $x \in D$, we have
\[
H^1(\textnormal{End}(E_*)\otimes K(D-x))= H^0(\textnormal{End}(E_*)(x))^\vee = 0.
\]
Therefore the evaluation map
\[
\text{ev} : H^0(\textnormal{End}(E_*)\otimes K(D)) \to \textnormal{End}(E_*)\otimes K(D)|_x
\]
is surjective.\\
For $1 < k \leq 2m$, consider the subspace $N_k(E_*) \subset \textnormal{End}(E_*)\otimes K(D)|_x$ whose elements are matrices with a zero at $(k-1,k)$. For $k=1$, the subspace $N_1(E_*)$ consists of matrices with a zero at $(2m,1)$. Let $\tilde{N}_k(E_*)$ be the preimage of $N_k(E_*)$ under the evaluation map. For $k>1$, we can describe $\tilde{N}_k(E_*)$ as follows: consider the subfiltration of $E|_x$ obtained by removing the subspace $E_{x,k}$. Denote the parabolic bundle with this new filtration by $E_{*_k}$. Then
\[
\tilde{N}_k(E_*) = H^0(\textnormal{End}(E_{*_k}) \otimes K(D)).
\]
Let $(E_*,\Phi,\psi) \in h^{-1}(\mathcal{D}_x)\cap T^*V$. We can describe the Higgs field $\Phi$ in a basis corresponding to the parabolic filtration as
\[
\Phi(z) = \begin{bmatrix} za_{1,1} & a_{1,2} & \cdots & a_{1,2m} \\ za_{2,1} & za_{2,2} & \cdots & a_{2,2m} \\ \vdots & \vdots & \ddots & \vdots \\ za_{2m,1} & za_{2m,2} & \cdots & za_{2m,2m} \end{bmatrix}
\]
where $a_{i,j}$ are local sections of $K(D)$. Then $(E_*,\Phi,\psi) \in h^{-1}(\mathcal{D}_x)$ if and only if $z^2 \mid \det(\Phi(z))$. The only summand of the determinant that is not a multiple of $z^2$ is $za_{2m,1}a_{1,2}a_{2,3}\cdots a_{2m-1,2m}$. Therefore, $z^2\mid\det(\Phi(z))$ if and only if $\text{ev}(\Phi) \in N_k(E_*)$ for some $1\leq k \leq 2m$. Since the evaluation map is surjective, we conclude that
\[
h^{-1}(\mathcal{D}_x) \cap T_{E_*}^*\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L) = \bigcup_{k=1}^{2m}\tilde{N}_k(E_*)
\]
By Lemma \ref{lemma2}, for all $E_* \in V$, all $x \in D$, all $1 \leq k \leq 2m$ and all $E_{x,k}'$ such that $E_{x,k-1} \supsetneq E_{x,k}' \supsetneq E_{x,k+1}$, we know that $E_{*'}$ is stable. Since $\Phi$ sends $E_{x,k-1}$ to $E_{x,k+1}$, we have that $\Phi \in H^0(\textnormal{End}(E_{*'})\otimes K(D))$ for all such $E_{x,k}'$. Therefore $(E_{*'},\Phi,\psi) \in h_0^{-1}(\mathcal{D}_x)$ for all such $E_{x,k}'$. Since $E$ and $\Phi$ remain the same, all these Higgs bundles lie over the same point of $W$. The space of possible choices of $E_{x,k}'$ is $\mathbb{P}(E_{x,k-1}/E_{x,k+1}) \cong \mathbb{P}^1$, so they form a complete rational curve in the fiber.
\end{proof}
\begin{prop}\label{prop4}
Let $\mathcal{C}\subset T^*\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)$ be the union of the (complete) rational curves in $T^*\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)$. Then $\mathcal{D}$ is the closure of $h_0(\mathcal{C}) \subset W$.
\end{prop}
\begin{proof}
Let $l \cong \mathbb{P}^1 \subset T^*\mathcal{M}_{\text{Sp}}(2m,\alpha,L)$ be a complete rational curve. Then $h_0(l) \subset W$ is a point, as $l$ is a complete curve. Therefore $l$ is contained in a fiber of the Hitchin map. By Proposition \ref{prop3}, the fiber over $s \in W - \mathcal{D}$ is an open subset of an abelian variety, so $l$ cannot be contained in a fiber over $W-\mathcal{D}$.\\
Again by Proposition \ref{prop3}, we know that the generic fibers over $\mathcal{D}_x$ and $\mathcal{D}_i^\mathrm{o}$, $i= 1,2$, contain a complete rational curve. Therefore, $h_0(\mathcal{C})$ is dense in $\mathcal{D}$. Since $\mathcal{D}\subset W$ is closed, we get that $\mathcal{D}= \overline{h_0(\mathcal{C})}$.
\end{proof}
\section{Torelli theorem for symplectic parabolic bundles}
\begin{prop}\label{prop5}
The global algebraic functions $\Gamma(T^*\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L))$ produce a map
\[
\tilde{h} : T^*\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L) \longrightarrow \mathrm{Spec}(\Gamma(T^*\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L))) \cong W \cong \mathbb{C}^N,
\]
which is the Hitchin map up to an automorphism of $\mathbb{C}^N$, where $N=\dim W =\dim\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)$.
Moreover, if we consider the standard dilation action of $\mathbb{C}^*$ on the fibers of the cotangent bundle $T^*\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)$, then there is a unique $\mathbb{C}^*$-action on $W$ such that $\tilde{h}$ is $\mathbb{C}^*$-equivariant, i.e. $\tilde{h}(E_*, \lambda\Phi)= \lambda \cdot \tilde{h}(E_*,\Phi)$.
\end{prop}
\begin{proof}
The Hitchin map
\[
h : \mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L) \longrightarrow W
\]
is proper and the generic fibers are compact and connected (Prym varieties). Since $W$ is smooth, every fiber is compact and connected. Therefore any holomorphic function $f : \mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L) \to \mathbb{C}$ is constant on each fiber, so $f$ factors through $W$; as $W$ is affine, we get that
\[
\mathrm{Spec}(\Gamma(\mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L))) \cong \mathrm{Spec}(\Gamma(W)) \cong W.
\]
Let $g : T^*\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L) \to \mathbb{C}$ be a holomorphic function. Since the complement of $T^*\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)$ in $\mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L)$ has codimension at least two, and $\mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L)$ is smooth, by Hartogs' theorem $g$ extends to a holomorphic function $\tilde{g} : \mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L) \to \mathbb{C}$. Therefore, we get \[\Gamma(T^*\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)) \cong \Gamma(\mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L)).
\]
\]
Hence, we obtain a map
\[
\tilde{h} : T^*\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L) \longrightarrow \mathrm{Spec}(\Gamma(T^*\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L))) \cong W,
\]
and the $\mathbb{C}^*$-action on the cotangent bundle $T^*\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)$ then induces a unique action on $\mathrm{Spec}(\Gamma(T^*\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)))$ making $\tilde{h}$ a $\mathbb{C}^*$-equivariant map.
\end{proof}
Proposition \ref{prop5} allows us to recover the base $W$ of the Hitchin fibration, and gives us the natural $\mathbb{C}^*$-action on $W$. Also, the subspaces $W_{\geq 2k} = \bigoplus_{i=k}^{m}W_{2i}$ (where $W_{2i} = H^0(K^{2i}D^{2i-1})$) are uniquely determined for each $k=1,\dots, m$ (these are the subspaces where the rate of decay is at least $\lambda^{2k}$, as $\lambda \to 0$). In particular, we can uniquely recover the subspace $W_{2m} = H^0(K^{2m}D^{2m-1}) \subset W$.
\begin{prop}\label{prop6}
The intersection $\mathcal{C} := W_{2m} \cap \mathcal{D} \subset W_{2m}$ has $n+1$ irreducible components
\[
\mathcal{C} = \mathcal{C}_X \cup \bigcup_{x \in D} \mathcal{C}_x.
\]
Moreover $\mathbb{P}(\mathcal{C}_X ) \subset \mathbb{P}(W_{2m})$ is the dual variety of $X \subset \mathbb{P}(W_{2m}^*)$ and for each $x \in D$, $\mathbb{P}(\mathcal{C}_x) \subset \mathbb{P}(W_{2m})$ is the dual variety of $x \xhookrightarrow{} X \subset \mathbb{P}(W_{2m}^*)$ for the embedding given by the linear series $|K^{2m}D^{2m-1}|$.
\end{prop}
\begin{proof}
Let $s = s_{2m} \in H^0(K^{2m}D^{2m-1}) \subset H^0(K(D)^{2m})$. Then the spectral curve $X_s$ is given by the equation $t^{2m} + s_{2m}(y)=0$. Therefore, $X_s$ is singular at the points $(x,0)$, where $x$ is a zero of order at least two of $s_{2m}$. Hence $s \in \mathcal{C}$ if and only if $s_{2m} \in H^0(K(D)^{2m}(-2x))$ for some $x \in X$, considering $s_{2m}$ as a section of $K(D)^{2m}$. Since $s_{2m} \in H^0(K^{2m}D^{2m-1})$, the following are the possible cases:\\
$(a)$ $s_{2m} \in H^0(K^{2m}D^{2m-1}(-2x))$ for some $x \notin D$;\\
$(b)$ $s_{2m} \in H^0(K^{2m}D^{2m-1}(-x))$ for some $x \in D$.\\
So, we can write
\begin{align*}
\mathcal{C} &= \bigcup_{x\in X \setminus D} H^0(K^{2m}D^{2m-1}(-2x)) \cup \bigcup_{x \in D} H^0(K^{2m}D^{2m-1}(-x)) \\
&= \bigcup_{x\in X} H^0(K^{2m}D^{2m-1}(-2x)) \cup \bigcup_{x \in D} H^0(K^{2m}D^{2m-1}(-x)).
\end{align*}
Denote
\begin{align*}
\mathcal{C}_X &= \bigcup_{x\in X} H^0(K^{2m}D^{2m-1}(-2x))\\
\mathcal{C}_x &= H^0(K^{2m}D^{2m-1}(-x)), \hspace{0.3cm} x \in D.
\end{align*}
By Lemma \ref{lemma4}, we know that $\mathcal{C}_X$ and $\mathcal{C}_x$ are irreducible for all $x \in D$, and both have codimension $1$ in $W_{2m}$. Therefore, the first statement follows.\\
The linear system $|K^{2m}D^{2m-1}|$ is very ample and induces an embedding $X \subset \mathbb{P}(W_{2m}^*)$. The set of hyperplanes in $\mathbb{P}(W_{2m}^*)$ which are tangent to $X$ at $x$ is precisely $\mathbb{P}(H^0(K^{2m}D^{2m-1}(-2x)))$. Therefore, $\mathbb{P}(\mathcal{C}_X ) \subset \mathbb{P}(W_{2m})$ is the dual variety of $X$. Furthermore, for every $x \in D$, the set of hyperplanes in $\mathbb{P}(W_{2m}^*)$ passing through $x$ is $\mathbb{P}(H^0(K^{2m}D^{2m-1}(-x)))$. Therefore, $\mathbb{P}(\mathcal{C}_x) \subset \mathbb{P}(W_{2m})$ is the dual variety of $x \xhookrightarrow{} X \subset \mathbb{P}(W_{2m}^*)$.
\end{proof}
Note that $\mathbb{P}(\mathcal{C}_x) \not \cong \mathbb{P}(\mathcal{C}_X)$, as $\mathbb{P}(\mathcal{C}_x)$ is the dual variety of a point and $\mathbb{P}(\mathcal{C}_X)$ is the dual variety of a compact Riemann surface. Also, $\mathcal{C}_x \subset W_{2m}$ is a hyperplane for all $x \in D$. So $\mathcal{C}_X \subset \mathcal{C}$ is the only irreducible component which is not a hyperplane in $W_{2m}$.
\begin{theorem}\label{thm1}
Let $(X,D)$ and $(X',D')$ be two compact Riemann surfaces of genus $g \geq 4$ with sets of marked points $D \subset X$ and $D' \subset X'$. Let $\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)$ and $\mathcal{M}'_{\textnormal{Sp}}(2m,\alpha,L)$ be the moduli spaces of stable symplectic parabolic bundles over $X$ and $X'$ respectively. If $\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)$ is isomorphic to $\mathcal{M}'_{\textnormal{Sp}}(2m,\alpha,L)$, then $(X,D)$ is isomorphic to $(X',D')$, i.e. there exists an isomorphism $X \cong X'$ sending $D$ to $D'$.
\end{theorem}
\begin{proof}
Suppose $\Psi : \mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L) \longrightarrow \mathcal{M}'_{\textnormal{Sp}}(2m,\alpha,L)$ is an isomorphism. Then there is an induced isomorphism $d(\Psi^{-1}) : T^*\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L) \longrightarrow T^*\mathcal{M}'_{\textnormal{Sp}}(2m,\alpha,L)$, which is $\mathbb{C}^*$-equivariant for the standard dilation action. By Proposition \ref{prop5}, there exist unique $\mathbb{C}^*$- actions on $W$ and $W'$ induced from the $\mathbb{C}^*$-action by dilations on the fibers. Therefore, we have the following commutative diagram
\[
\begin{tikzcd}
T^*\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L) \arrow{r}{d(\Psi^{-1})} \arrow[swap]{d}{} & T^*\mathcal{M}'_{\textnormal{Sp}}(2m,\alpha,L) \arrow{d}{} \\%
W \arrow{r}{f}& W'
\end{tikzcd}
\]
for some $\mathbb{C}^*$-equivariant isomorphism $f : W \longrightarrow W'$. Hence, $f$ sends the subspace $W_{2m} \subset W$ to the subspace $W_{2m}' \subset W'$. The restriction $f : W_{2m} \longrightarrow W_{2m}'$ satisfies $f(\lambda^{2m}w) = \lambda^{2m}f(w)$ for all $\lambda \in \mathbb{C}^*$; since every nonzero scalar is a $2m$-th power, $f$ is homogeneous of degree one and hence linear. \\
Since $d(\Psi^{-1})$ is an isomorphism, it maps complete rational curves to complete rational curves. By Proposition \ref{prop4}, the locus of singular spectral curves is preserved, i.e. $f$ sends $\mathcal{D} \subset W$ to $\mathcal{D}' \subset W'$. So the restriction map sends $\mathcal{C}= \mathcal{D} \cap W_{2m}$ to $\mathcal{C}' = \mathcal{D}' \cap W_{2m}'$. This induces an isomorphism $f^\vee : \mathbb{P}(W_{2m}^*) \longrightarrow \mathbb{P}((W_{2m}^{'})^*)$. Since $\mathcal{C}_X \subset \mathcal{C}$ is canonically identified as the only irreducible component which is not a hyperplane, by Proposition \ref{prop6} $f^\vee$ sends $X$ to $X'$. Moreover, again by Proposition \ref{prop6}, the divisor $D \subset X$ is the dual of the remaining components $\mathbb{P}(\mathcal{C}_x) \subset \mathbb{P}(\mathcal{C})$. Therefore, $f^\vee$ must send $D$ to $D'$. Hence, we obtain an isomorphism $f^\vee : (X,D) \longrightarrow (X',D')$.
\end{proof}
\section{The Nilpotent Cone}
The fiber $h^{-1}(0)$ is called the nilpotent cone. It is Lagrangian (see \cite{G01}; the proof also works in the parabolic case). A symplectic parabolic Higgs bundle is in the nilpotent cone if and only if its Higgs field is a nilpotent endomorphism. The moduli space of symplectic parabolic bundles is embedded in the nilpotent cone as a component, and we will see that it is the unique component of the nilpotent cone that does not admit a non-trivial $\mathbb{C}^*$-action. A non-trivial $\mathbb{C}^*$-action gives a non-zero vector field, so it is enough to show that $H^0(\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L),T_{\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)})=0$.
\begin{prop}\label{prop7}
$\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)$ does not admit a non-zero vector field, i.e.\\ $H^0(\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L),T_{\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)})=0$.
\end{prop}
\begin{proof}
Let $s \in H^0(\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L),T_{\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)})$. It gives a holomorphic function $\tilde{s}$ on $T^*\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)$. By Hartogs' theorem, $\tilde{s}$ extends to a holomorphic function on $\mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L)$. Recall that the generic fiber of the Hitchin map $h$ is compact and connected. Since $h$ is proper and $W$ is smooth, every fiber is compact and connected. Therefore $\tilde{s}$ is constant on each fiber and hence
\[
\tilde{s} = f \circ h
\]
for some holomorphic function $f : W \to \mathbb{C}$.\\
Now consider the standard $\mathbb{C}^*$-action (induced by the map $\Phi \to \lambda\Phi$) on $\mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L)$. Since $s$ is a vector field, we have
\[
\tilde{s}(\lambda \cdot E_*)= \lambda\tilde{s}(E_*) \hspace{0.3cm} \text{for} \hspace{0.1cm} E_* \in \mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L)
\]
If $h(E_*) = (s_2, \dots , s_{2m})$, then $h(\lambda \cdot E_*) = (\lambda^2s_2, \dots, \lambda^{2m}s_{2m}) $. Therefore $f$ satisfies
\begin{equation}\label{eqn5}
f(\lambda^2s_2, \dots, \lambda^{2m}s_{2m})= \lambda f(s_2,\dots, s_{2m}).
\end{equation}
Setting $s = 0$ in (\ref{eqn5}) gives $f(0) = \lambda f(0)$, so $f(0)=0$; moreover, under the scaling on the left-hand side every monomial in a power series expansion of $f$ is multiplied by an even power of $\lambda$, which can never equal the factor $\lambda$ on the right-hand side. Hence there is no nonzero holomorphic function $f : W \to \mathbb{C}$ satisfying the condition (\ref{eqn5}), and we conclude that $\tilde{s} = 0$.
\end{proof}
We can modify Simpson's Lemma 11.9 in \cite{S95} to the symplectic parabolic case with nonzero degree, and hence we get the following
\begin{lemma}\label{lemma5}
Let $(E_*,\Phi)$ be a symplectic parabolic Higgs bundle in the nilpotent cone, with $\Phi \not= 0$. Consider the standard $\mathbb{C}^*$-action sending $(E_*,\Phi)$ to $(E_*,t\Phi)$. Assume that $(E_*,\Phi)$ is a fixed point, i.e. for every $t$ there is an isomorphism with $(E_*,t\Phi)$. Then there is another Higgs bundle $(E_*',\Phi')$ in the nilpotent cone, not isomorphic to $(E_*,\Phi)$, such that $\lim_{t \to \infty}(E_*',t\Phi')=(E_*,\Phi)$.
\end{lemma}
Therefore we obtain the following
\begin{prop}\label{prop8}
There is only one component inside the nilpotent cone that does not admit a nontrivial $\mathbb{C}^*$-action, and it is the moduli space of symplectic parabolic bundles $\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)$ on $X$.
\end{prop}
\begin{proof}
The map that sends $E_*$ to $(E_*,0)$ defines an embedding of $\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)$ in the nilpotent cone. Since both have the same dimension, its image is an irreducible component of the nilpotent cone. By Proposition \ref{prop7}, we know that $\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)$ does not admit a non-trivial $\mathbb{C}^*$-action.\\
The $\mathbb{C}^*$-action on the rest of the components is given by $(E_*,\Phi) \mapsto (E_*,t\Phi)$. This action is nontrivial by Lemma \ref{lemma5}.
\end{proof}
\section{Kodaira-Spencer map}
Our next goal is to identify the nilpotent cone among the fibers of the Hitchin map.\\
Let $\mathcal{N}(2m,\alpha,\xi)$ denote the moduli space of parabolic Higgs bundles of rank $2m$ on $X$ with fixed determinant $\xi$. Then there is a Hitchin map
\[
h' : \mathcal{N}(2m,\alpha,\xi) \to W'
\]
where $W' = H^0(K^2D) \oplus H^0(K^3D^2) \oplus \cdots \oplus H^0(K^{2m}D^{2m-1})$. Let $s \in W'$ be a point such that the spectral curve $X_s$ is smooth. The Kodaira-Spencer map
\[
u' : T_sW' \cong W' \to H^1(X_s,T_{X_s})
\]
sends a vector in the tangent space $T_sW'$ to the corresponding infinitesimal deformation of $X_s$, which is an element of $H^1(X_s,T_{X_s})$. We can also consider the restriction of $u'$ to $W$ obtaining another Kodaira-Spencer map
\[
u : T_sW \cong W \to H^1(X_s,T_{X_s}).
\]
By \cite[Proposition 5.2]{GL11}, we have an exact sequence
\[
0 \to H^0(X,\mathcal{O}_X) \xrightarrow{} T_sW' \xrightarrow{u'} H^1(X_s,T_{X_s}),
\]
and hence $\dim\text{Ker}(u')=1$. The elements of $H^0(X,\mathcal{O}_X)$ also lie in the kernel of the restricted Kodaira-Spencer map $u$. Indeed, let $\lambda \in \mathbb{C} \cong H^0(X, \mathcal{O}_X)$ and $(x,v) \in X_s$. The deformation sending $(x,v)$ to $(x,e^\lambda v)$ does not change the isomorphism class of $X_s$; it is produced by the standard $\mathbb{C}^*$-action and lies in the kernel of the restricted Kodaira-Spencer map $u$. Therefore we obtain the following
\begin{prop}\label{prop9}
The kernel of the Kodaira-Spencer map $u$ has dimension $1$.
\end{prop}
The standard $\mathbb{C}^*$-action on $\mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L)$, sending $(E_*,\Phi)$ to $(E_*,t\Phi)$ induces an action on $W$, whose only fixed point is the origin.
\begin{prop}\label{prop10}
Let $g : \mathbb{C}^* \times W \to W$ be an action, having exactly one fixed point, and admitting a lift to $\mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L)$. Then this fixed point is the origin of $W$.
\end{prop}
\begin{proof}
The proof is completely analogous to the proof in \cite[Proposition 5.1]{BG03}. For convenience of the reader, we only give a sketch of the proof.\\
Let $s \in W$ be a point such that the corresponding spectral curve $X_s$ is smooth. The tangent vector defined at $s$ by the standard $\mathbb{C}^*$-action is contained in the kernel of the Kodaira-Spencer map $u$, because the standard action does not change the spectral curve (up to isomorphism). We are going to show that the tangent vector defined by any action which admits a lift to $\mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L)$ is also in the kernel of the Kodaira-Spencer map $u$.\\
Denote $J=J(X)$, $J_s = J(X_s)$ and $P_s = \text{Prym}(X_s/X)$. There is an unramified covering map
\[
q : J \times P_s \to J_s
\]
sending $(L_1,L_2)$ to $\pi^*L_1 \otimes L_2$, where $\pi : X_s \to X$ is the projection map. Therefore, a deformation of $J_s$ gives a deformation of $J \times P_s$.\\
Let $\eta$ be the tangent vector defined by the given action $g : \mathbb{C}^* \times W \to W$. Its image under the Kodaira-Spencer map, $u(\eta)= \eta_1 \in H^1(X_s,T_{X_s})$, is a deformation of the spectral curve; it induces a deformation $\eta_2 \in H^1(J_s, T_{J_s})$ of its Jacobian and a deformation $\eta_3 \in H^1(P_s,T_{P_s})$ of the Prym variety. We have the following homomorphisms
\[
H^1(X_s,T_{X_s}) \xhookrightarrow{i} H^1(J_s,T_{J_s}) \xhookrightarrow{\epsilon} H^1(J\times P_s,T_{J\times P_s}) \xhookleftarrow{} H^1(P_s,T_{P_s})
\]
The map $i$ is injective because of the classical Torelli theorem (a nonzero deformation of a curve produces a nonzero deformation of its Jacobian). The image of $\epsilon \circ i$ actually lies in $H^1(P_s,T_{P_s})$, since a deformation of $J\times P_s$ induced by a deformation of $X_s$ is induced by a deformation of $P_s$.\\
The fiber of the Hitchin map $h$ over $s$ is isomorphic to $P_s$. Since the action $g$ lifts to $\mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L)$, the fiber $P_s$ over $s$ is isomorphic to the fiber $P_{g(t,s)}$ over $g(t,s)$ for all $t \in \mathbb{C}^*$. Therefore we have $\eta_3=0$, and hence $\eta_1=0$. So the tangent vector $\eta$ defined by the action $g$ lies in the kernel of the Kodaira-Spencer map $u$. By Proposition \ref{prop9}, the orbit of $g$ through $s$ is included in the orbit of the standard action through $s$. In particular, since the origin is the fixed point of the standard action, it is a limiting point of the orbit of $g$ through $s$. But the origin is not in the orbit, as the fiber over the origin is not isomorphic to the fiber over $s$. Since the limiting points of an orbit are fixed points, the origin is a fixed point of the action. Since $g$ has exactly one fixed point, the origin is the only fixed point.
\end{proof}
\section{Proof of main theorem}
\begin{proof}[Proof of Theorem \ref{thm2}]
Let $Y$ be an algebraic variety isomorphic to the moduli space $\mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L)$. Since the fibers of the Hitchin map $h$ are compact, the ring of global functions of $\mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L)$ factors through $h$, and therefore
\[
\text{Spec}(\Gamma(Y)) \cong \text{Spec}(\Gamma(W)) \cong \text{Spec}\,\mathbb{C}[y_1,y_2,\dots,y_{N}] = \mathbb{A}^{N},
\]
where $N = \dim \mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L) = m(2m+1)(g-1)+m^2n$. Hence, there is an isomorphism $\beta : \mathbb{A}^{N} \to W$ such that the following diagram commutes
\begin{equation}\label{diag1}
\begin{tikzcd}
Y \arrow[r, "\sim", "\alpha"'] \arrow[swap]{d}{m} & \mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L) \arrow{d}{h} \\%
\mathbb{A}^{N} \arrow[r,"\sim", "\beta"']& W
\end{tikzcd}
\end{equation}
Consider a $\mathbb{C}^*$-action $g : \mathbb{C}^* \times \mathbb{A}^{N} \to \mathbb{A}^{N}$ with exactly one fixed point $v$ and admitting a lift to $Y$. Such an action exists because of the standard $\mathbb{C}^*$-action on $W$ and the isomorphism $\beta$.\\
By Proposition \ref{prop10}, $\beta(v)$ is the origin of $W$. Therefore, the fiber $m^{-1}(v)$ is isomorphic to the nilpotent cone. By Proposition \ref{prop8}, the moduli space $\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)$ of stable symplectic parabolic bundles is the unique irreducible component of the nilpotent cone which doesn't admit a nontrivial $\mathbb{C}^*$-action. Therefore, if $\mathcal{N}_{\textnormal{Sp}}(2m,\alpha,L)$ is isomorphic to $\mathcal{N}'_{\textnormal{Sp}}(2m,\alpha,L)$, then $\mathcal{M}_{\textnormal{Sp}}(2m,\alpha,L)$ is isomorphic to $\mathcal{M}'_{\textnormal{Sp}}(2m,\alpha,L)$, and by Theorem \ref{thm1}, $(X,D)$ is isomorphic to $(X',D')$.
\end{proof}
\section{Introduction}
Many current research areas rely on quantized conductances, such as the quantized electric \cite{Wees_1988} and thermal \cite{schwab2000,schwab2001,schwab2006} conductance, and the integer \cite{Klitzing1980}, fractional \cite{tsui1982,laughlin1983} and spin quantum Hall effects. Many recent studies focus on the half-integer quantum Hall effect \cite{dora2015} in topological insulators, and on the quantized light–matter interaction on the edge state of a quantum spin Hall insulator \cite{gulacsi2016}. Furthermore, electron transport through individual molecules, a candidate for future nanoelectronic circuits, exhibits similarly interesting properties \cite{dora2009,geresdi2011,geresdi2014}. \\
Based on the example of the quantized electric and thermal conductances, the quantized entropy current can be introduced. This step enables us to incorporate thermodynamic concepts into quantum processes. We show that the entropy change during the transfer of an energy package $h \nu$ can be expressed explicitly. The description allows us to formulate a mathematical equation that expresses the minimal entropy production principle for the microscopic world. These results may be useful for establishing the maximal physical limits of reversibility of coherent quantum systems, e.g. in quantum dots and in quantum computers relying on these. \\
The discussed theory incorporates various fields of physics. To aid the reader, we give a brief summary of the relevant phenomena. Afterwards, a coherent framework is presented in which the main aims and the applicable methods are discussed.
\section{Historical considerations}
\subsection{The quantized electric conductance}
First, a short introduction to the quantized electric conductance is given. Landauer \cite{landauer_1957,landauer_1981,landauer_1989} theoretically predicted the existence and the value of the quantized electric conductance, which can be expressed as
\begin{equation} \label{kvantalt_el_vez}
G = \frac{2e^2}{h} = 7.75 \cdot 10^{-5} \, \textrm{S},
\end{equation}
where $e$ is the elementary charge and $h$ is the Planck constant. In general
\begin{equation} \label{kvantalt_el_vez_kond}
G = R^{-1} = \sigma A/L,
\end{equation}
where $R$ is the ohmic resistance, $\sigma$ is the specific electric conductivity, $A$ is the cross section and $L$ is the length of the conductor. The experimental proof of the theoretical prediction was given by van Wees \emph{et al} \cite{Wees_1988}. Theoretically, the quantized conductance may appear in a nanowire whose width $w$ is comparable to the Fermi wavelength $\lambda_F$ and whose length $L$ is less than the mean free path of the electrons. The quantized electric conductance was measured in a 2D electron gas realized within an AlGaAs–GaAs boundary layer. The gate voltage dependence of the electric conductance is shown in Fig. \ref{kvantalt_elektromos_vezetokepesseg}.
\begin{figure}[h]
\centering
\includegraphics[width=8 cm, height=6 cm]{kvantalt_elektromos_vezetokepesseg.eps}
\caption{The electric conductance as a function of gate voltage. The channel width, $w$, can be modulated by the gate voltage. The quantized behavior can be read out directly from the figure. The quantum of electric conductance is $2e^{2}/h$ based on theoretical predictions. Adopted from Reference \cite{Wees_1988} $\copyright$ (1988) American Physical Society.} \label{kvantalt_elektromos_vezetokepesseg}
\end{figure}
Theoretically, the confined electrons, present in a long, straight 2D quantum wire can be described by the following wave function:
\begin{equation}
\Psi_{k,j}(x,y) \sim \sin{(ikx)} \sin{\left( \frac{j\pi}{w}y \right)},
\end{equation}
\noindent where $k$ is the wavenumber and $j$ is an integer quantum number. The first factor pertains to the plane wave propagating in the direction $x$, while the second factor describes the cross-modes in the direction $y$. The energy corresponding to the wave function $\Psi_{k,j}(x,y)$ contains a continuous part arising from the motion along $x$, and quantized levels due to the finite width $w$ in the $y$ direction \cite{nawrocki2008}:
\begin{equation}
\varepsilon (k,j) = \frac{{\hbar}^{2}k^{2}}{2m} + \frac{{\hbar}^{2}{\pi}^{2}}{2mw^{2}} j^{2}.
\end{equation}
The number of quantized states $\varepsilon (k,j)$ below the Fermi surface is $N \sim 2w/\lambda_F$. Moreover, if the thermal energy is negligible compared to the chemical potential difference $\Delta\mu$ between the contacts, the electric current of the $j^{\textrm{th}}$ channel can be expressed as
\begin{equation}
I_j = e v_j \left( \frac{dn}{dE} \right)_j \Delta\mu = e^2 v_j \left( \frac{dn}{dE} \right)_j V,
\end{equation}
where $v_j$ is the propagation velocity along the direction $x$, $(dn/dE)_j$ is the density of states at the Fermi level for the $j^{\textrm{th}}$ state, and $V = \Delta\mu / e$ is the voltage difference. The number of states between $k$ and $k + dk$ in one dimension per unit length is
\begin{equation}
\frac{dn}{dk} = \frac{1}{2\pi},
\end{equation}
by which the density of states can be obtained as
\begin{equation}
\left( \frac{dn}{dE} \right)_j = \left( \frac{dn}{dk} \frac{dk}{dE} \right)_j = \frac{2}{h v_j},
\end{equation}
where the factor $2$ arises from the spin degeneracy. The total current
\begin{equation}
I = \sum_{j=1}^{N} I_j = \frac{2e^2}{h} N V
\end{equation}
can be expressed, where $N$ is the number of channels. The quantized electric conductance can be read out from the formula.
\subsection{The quantized thermal conductance}
Based on thermodynamic and information theory assumptions, Pendry --- analogously to Landauer's formula --- intuitively predicted the expression for the maximal rate of cooling (transferable energy $Q$ per unit time, giving the quantum limit for information flow) in one channel \cite{pendry1983} as
\begin{equation} \label{pendry_flow}
\frac{dQ}{dt} \leq \frac{{\pi} k_{B}^{2} T^2}{3\hbar},
\end{equation}
\noindent where $k_B$ is the Boltzmann constant, $\hbar$ is the reduced Planck constant, and $T$ is the temperature. Dividing by $T$, the maximal entropy current ($dS/dt$) in one channel can be obtained as
\begin{equation} \label{pendry_entropy}
\frac{dS}{dt} \leq \frac{{\pi} k_{B}^{2} T}{3\hbar}.
\end{equation}
Later, Rego and Kirczenow \cite{rego1998} deduced the thermal conductance of a quantum wire using more sophisticated calculations. Their result is
\begin{equation} \label{Rego_Kirczenow}
\Lambda = \frac{{\pi}^2 k_{B}^{2} T}{3h}.
\end{equation}
\noindent Here, the notation $\Lambda$ is introduced for the quantum of thermal conductance. Comparing Eqs. (\ref{pendry_entropy}) and (\ref{Rego_Kirczenow}), a factor of 2 difference can be noted. Furthermore, Eq. (\ref{pendry_entropy}) indicates that the maximal entropy change and the quantized thermal conductance are related to each other. The origin of the quantized thermal conductance has been explored by many theoretical groups from various viewpoints \cite{angelescu98,nishiguchi97,blencowe99,blencowe2004,li03}. \\
Other considerations, based on the Drude-Lorentz theory, also hint at the existence of quantized thermal conductance. In this model, the relation between the thermal conductivity $\lambda$ and the electric conductivity $\sigma$ can be expressed as
\begin{equation}
\lambda = \frac{{\pi}^{2}}{3} \left( \frac{k_{B}}{e} \right)^{2} T \sigma,
\end{equation}
\noindent where $\sigma$ is the specific electric conductivity \cite{solyom}. Substituting half of the specific electric conductivity from Eq. (\ref{kvantalt_el_vez_kond}), the following expression for the thermal conductivity can be derived:
\begin{equation} \label{kvantalt_hovezeto_kepesseg}
\lambda = \frac{{\pi}^{2} k_{B}^{2} T}{3h} \frac{L}{A}.
\end{equation}
\noindent This might seem to be only a formal analogy, since the quantized thermal measurements were performed in semiconductors, while the Drude-Lorentz model describes conduction electrons. The factor 2 appears in the electric conductance $G=2e^{2}/h$ (Eq. (\ref{kvantalt_el_vez})) due to the two spin states of the electron. However, in semiconductors the heat is carried by phonons, thus $G = e^{2}/h$ can be considered. Similarly to the electric conductance, the thermal conductance
\begin{equation} \label{kvantalt_termikus_vezetokepesseg}
\Lambda = \lambda \frac{A}{L} = \frac{{\pi}^{2} k_{B}^{2} T}{3h} = 9.46 \cdot 10^{-13} T \left[ \frac{\textrm{W}}{\textrm{K}} \right]
\end{equation}
can be obtained. This means that the thermal conductance has a close relation with the quantum of entropy current. \\
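The numerical coefficient quoted above can be reproduced directly (a minimal sketch with hard-coded CODATA constants):

```python
import math

KB = 1.380649e-23    # Boltzmann constant [J/K] (CODATA, exact)
H  = 6.62607015e-34  # Planck constant [J s] (CODATA, exact)

def thermal_conductance_quantum(T: float) -> float:
    """Quantum of thermal conductance: Lambda = pi^2 kB^2 T / (3 h) [W/K]."""
    return math.pi**2 * KB**2 * T / (3.0 * H)

# the coefficient Lambda/T should reproduce the 9.46e-13 W/K^2 of the text
print(f"Lambda/T = {thermal_conductance_quantum(1.0):.3e} W/K^2")
```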
The quantized thermal conductance was first measured in a Si$_{3}$N$_{4}$ waveguide (see Fig. \ref{termikus_hullamvezeto}) by Schwab et al \cite{schwab2000}. Both the experimental setup and the measurement technique are fascinatingly sophisticated.
\begin{figure}[h]
\centering
\includegraphics[width=8 cm, height=6 cm]{termikus_hullamvezeto.eps}
\caption{Experimental realization of a Si$_{3}$N$_{4}$ waveguide \cite{schwab2000}. Physical dimensions: length: $L \sim 1 \, \mu$m; width: $w = 200$ nm; layer thickness: $d = 60$ nm. Adopted from Reference \cite{schwab2000} $\copyright$ (2000) Springer Nature.} \label{termikus_hullamvezeto}
\end{figure}
The obtained result for the thermal conductance in the temperature range of 60 mK to 6 K is shown in Fig. \ref{termikus_vezetes_kvantum}. The quantum of thermal conductance based on theoretical considerations is
\begin{equation}
g_{0} = \frac{{\pi}^{2} k_{B}^{2} T}{3h}.
\end{equation}
Taking into account that four waveguides are present in the measurement, and that each is expected to carry just four populated modes below the critical temperature of
\begin{equation}
T < T_{c} = \frac{\pi \hbar v}{k_B w} = 0.8 \, \text{K},
\end{equation}
the thermal conductance of the arrangement should approach a limiting value of $16 g_{0}$. In the measurement the width of the channel was $200$ nm, and $v = 6000$ m/s is the speed of sound in the material. The measurement data, normalized by $16 g_{0}$ \cite{schwab2000,schwab2001,schwab2006}, are presented in Fig. \ref{termikus_vezetes_kvantum}. Please note that below 700 mK the data points significantly deviate from the linear trend and converge to the value of $16 g_0$.
\begin{figure}[h]
\centering
\includegraphics[width=8 cm, height=6 cm]{Schwab2000_quantum_thermal_conductivity.eps}
\caption{The quantized behavior of the thermal conductance \cite{schwab2000,schwab2001,schwab2006}. The plateau can be clearly recognized in the temperature range 60-700 mK. Adopted from Reference \cite{schwab2000} $\copyright$ (2000) Springer Nature.} \label{termikus_vezetes_kvantum}
\end{figure}
The constant thermal conductance of $16g_0$ in the temperature range of $60-700$ mK proves the quantized behavior of the thermal conductance.
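A short sketch reproducing the numbers of this experiment (using the stated $w = 200$ nm and $v = 6000$ m/s; note that these values give $T_c \approx 0.72$ K, slightly below the quoted 0.8 K):

```python
import math

KB   = 1.380649e-23     # Boltzmann constant [J/K]
HBAR = 1.054571817e-34  # reduced Planck constant [J s]
H    = 6.62607015e-34   # Planck constant [J s]

def critical_temperature(v: float, w: float) -> float:
    """T_c = pi hbar v / (kB w): below this, each guide carries 4 modes."""
    return math.pi * HBAR * v / (KB * w)

def plateau_conductance(T: float) -> float:
    """Expected low-temperature plateau 16 g0 (4 guides x 4 modes) [W/K]."""
    g0 = math.pi**2 * KB**2 * T / (3.0 * H)
    return 16.0 * g0

print(f"T_c ~ {critical_temperature(6000.0, 200e-9):.2f} K")
print(f"16 g0 at 100 mK = {plateau_conductance(0.1):.3e} W/K")
```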
\subsection{Lagrangian description of heat conduction}
The quantum limit for information flow raised by Pendry is presumably related to an extreme-value problem. If so, it is probably inherited from its thermodynamic background. To investigate this, it is worth reviewing the extremum principle formulated for thermodynamics and the generating potential $\varphi$ introduced in it. The significance of these will be highlighted below. \\
The theory is based on the least action principle
\begin{equation}
S_{\textrm{action}} = \int_{t_1}^{t_2} L dt = extremum,
\end{equation}
where $L$ is the Lagrange function of the problem. The principle has to be applied to the Fourier equation of heat conduction, which is a constant-coefficient linear parabolic differential equation for the temperature $T$,
\begin{equation} \label{foueq}
\varrho c_{v}\frac{{\partial}T}{{\partial}t} - {\lambda\nabla^2}T = 0,
\end{equation}
\noindent an equation which cannot be derived directly from the Hamiltonian principle for the temperature $T$. Here, $\lambda$ is the thermal conductivity introduced earlier, $c_v$ is the specific heat, and $\varrho$ denotes the mass density. Assume that a potential space $\varphi$ exists which produces a measurable local equilibrium (classical) temperature field as follows:
\begin{equation} \label{temp}
T(x,y,z,t) - T_0 = - \frac{{\partial}\varphi}{{\partial}t} - \frac{\lambda}{\varrho c_{v}} \nabla^2 \varphi = - \frac{{\partial}\varphi}{{\partial}t} - D \nabla^2 \varphi,
\end{equation}
\noindent where $T_0$ is a reference temperature. The presence of this reference temperature grants that the potential $\varphi$ has at least one well-defined zero value and does not increase beyond all bounds, i.e., it is bounded from above. To simplify the notation, it is worth introducing the thermal diffusivity
\begin{equation}
D = \frac{\lambda}{\varrho c_{v}}.
\end{equation}
\noindent Substituting the expression of Eq. (\ref{temp}) into the equation of heat conduction, Eq. (\ref{foueq}), the equation of motion of the problem can be obtained for the potential function $\varphi$ as
\begin{equation} \label{mozgasegy}
0 = - \frac{{\partial}^{2}\varphi}{{\partial}t^{2}} +
D^{2} \nabla^2 (\nabla^2 \varphi).
\end{equation}
\noindent The equation of motion (field equation) in Eq. (\ref{mozgasegy}) is the Euler-Lagrange equation of the heat conduction problem. The equation contains only self-adjoint operators, thus it can be deduced from the following Lagrangian
\begin{equation} \label{lagrange}
L = \frac{1}{2} \left( \frac{{\partial}\varphi}{{\partial}t} \right)^{2} +
\frac{1}{2} D^{2} {({\nabla^2 \varphi})^{2}}
\end{equation}
\noindent \cite{mg,gm1994,gambar2016}. The presented method is also a good example of how the Hamilton's principle can be applied to dissipative processes \cite{szegleti2020}.\\
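As a hedged numerical sketch (all parameter values below are illustrative, not taken from the text), the structure above can be verified in one dimension: a single mode $\varphi(x,t)=e^{\omega t}\cos(kx)$ with $\omega=-Dk^2$ satisfies the fourth-order field equation, and the temperature field built from it through the potential relation satisfies the Fourier equation, as checked here by central finite differences:

```python
import math

D = 1e-4  # thermal diffusivity [m^2/s] (illustrative value)
K = 50.0  # wavenumber [1/m] (illustrative value)
OMEGA = -D * K**2  # one root of the dispersion relation omega^2 = D^2 k^4

def phi(x: float, t: float) -> float:
    """A single decaying mode of the fourth-order field equation."""
    return math.exp(OMEGA * t) * math.cos(K * x)

def temperature(x: float, t: float, dh: float = 1e-4) -> float:
    """T - T0 = -d(phi)/dt - D d2(phi)/dx2, by central differences."""
    phi_t = (phi(x, t + dh) - phi(x, t - dh)) / (2.0 * dh)
    phi_xx = (phi(x + dh, t) - 2.0 * phi(x, t) + phi(x - dh, t)) / dh**2
    return -phi_t - D * phi_xx

def fourier_residual(x: float, t: float, dh: float = 1e-4) -> float:
    """Residual of dT/dt - D d2T/dx2; vanishes for the mode above."""
    T_t = (temperature(x, t + dh) - temperature(x, t - dh)) / (2.0 * dh)
    T_xx = (temperature(x + dh, t) - 2.0 * temperature(x, t)
            + temperature(x - dh, t)) / dh**2
    return T_t - D * T_xx

print(f"residual = {fourier_residual(0.3, 2.0):.2e}")  # numerically ~0
```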
The Lagrangian theory can be quantized and the arising energy packages can be assigned to the thermal propagation \cite{markus95}. Performing the calculations for a silicon film with a width of 100 nm and a cross section of $10^{-6}$ m$^{2}$ at temperature $T=80$ mK, the obtained first energy level is ${\epsilon}_{1}=7.0 \cdot 10^{-14}$ J = $4.375 \cdot 10^{5}$ eV \cite{vmg2009pre}. This energy agrees well with the value of the transferred energy per unit time per unit temperature, ${\epsilon}=7.6 \cdot 10^{-14}$ J = $4.75 \cdot 10^{5}$ eV, calculated from the previously mentioned experimental results for silicon nitride by Schwab et al \cite{schwab2000,schwab2001,schwab2006}. These results support the material independence of the quantized thermal conductance.
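A quick cross-check of the unit conversion in this paragraph (the conversion factor $1$ eV $= 1.602\cdot10^{-19}$ J is hard-coded; the two energies are the values quoted above and differ by roughly 8\%):

```python
E_CHARGE = 1.602176634e-19  # J per eV

def joule_to_ev(energy_j: float) -> float:
    """Convert an energy from joules to electronvolts."""
    return energy_j / E_CHARGE

eps_theory = 7.0e-14  # J, first quantized level (theory)
eps_exp    = 7.6e-14  # J, from the Schwab et al. data

print(f"theory: {joule_to_ev(eps_theory):.3e} eV")  # ~4.37e5 eV
print(f"exp:    {joule_to_ev(eps_exp):.3e} eV")     # ~4.74e5 eV
```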
\section{The quantized behavior of the conductance of entropy current and the entropy production}
The change of an extensive physical quantity in a volume depends on the flow in or out through the total surface of the considered volume, and on the production or loss of the quantity within the volume. If the extensive quantity is the entropy $S$, then the balance equation reads
\begin{equation}
\frac{dS}{dt} = -I_S + \Sigma ,
\end{equation}
where $I_S$ is the entropy current and $\Sigma$ is the entropy production. \\
Upon thermal propagation, the entropy current density $J_s$ ($J_s = I_S / A$) can be related to the heat current density $J_q$ in the usual way. Fourier's law states
\begin{equation} \label{Fourier_tv}
J_q = -\lambda \nabla T .
\end{equation}
With the use of Eq. (\ref{Fourier_tv}), the entropy current density $J_s$ in the presence of thermal conduction can be formulated as
\begin{equation} \label{entropy_current}
J_s = \frac{J_q}{T} = -\frac{\lambda}{T} \nabla T .
\end{equation}
At this point we take the form of the quantized thermal conductivity given by Eq. (\ref{kvantalt_termikus_vezetokepesseg}). Similarly to $\Lambda$
\begin{equation}
\Lambda_s = \frac{\Lambda}{T} = \frac{{\pi}^{2} k_{B}^{2}}{3h} = 9.46 \cdot 10^{-13} \frac{\text{J/K}}{\text{K s}}
\end{equation}
can be introduced as the quantized entropy conductance. It denotes the entropy flow per unit temperature difference. For a given temperature difference $dT$ the entropy current is
\begin{equation}
I_S = -\Lambda_s dT.
\end{equation}
Recalling the relation from Eq. (\ref{temp}) --- and neglecting the $D \nabla^2 \varphi$ term --- yields
\begin{equation}
dT = T - T_0 \sim -\frac{\partial\varphi}{\partial t}.
\end{equation}
In this form the expression can be integrated in time, by which the transferred entropy $S_{tr}$ can be expressed by the potential $\varphi$ as
\begin{equation} \label{transferred_entropy}
S_{tr} = \Lambda_s (\varphi - \varphi_0) = \frac{{\pi}^{2} k_{B}^{2}}{3h}(\varphi - \varphi_0).
\end{equation}
At this point it is noted that the potential difference drives the system to equilibrium, and this leads to the entropy change. This gives a deeper meaning to the coefficient $\Lambda_s$ as the quantum of entropy conductance. \\
On the other hand, if $dT$ is related to the transmission of an energy packet, then using the relation $\varepsilon = k_{B} dT$, the entropy current can be formulated as
\begin{equation}
I_S = \frac{\Lambda_s}{k_{B}} \varepsilon = \frac{{\pi}^{2} k_{B}}{3h} \varepsilon .
\end{equation}
If the energy package is a single quantum with the energy of $\varepsilon = h \nu$ (e.g. a phonon or a photon) then the entropy current carried by it is
\begin{equation} \label{entropia_aram}
I_S = \frac{{\pi}^{2} k_{B}}{3} \nu ,
\end{equation}
where $\nu$ is the frequency. \\
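As an illustration of this formula (the 1 THz phonon frequency is an assumed example value, and the Boltzmann constant is hard-coded):

```python
import math

KB = 1.380649e-23  # Boltzmann constant [J/K]

def entropy_current_per_quantum(nu: float) -> float:
    """Entropy flow carried by a single h*nu packet: pi^2 kB nu / 3."""
    return math.pi**2 * KB * nu / 3.0

nu = 1e12  # 1 THz phonon (illustrative frequency)
print(f"I_S = {entropy_current_per_quantum(nu):.3e} J/(K s)")  # ~4.54e-11
```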
To proceed, the entropy production density of the thermal transfer has to be formulated \cite{groot}:
\begin{equation}
\sigma = J_q {\nabla} \frac{1}{T} = \lambda \left( \frac{\nabla T}{T} \right)^2 .
\end{equation}
This can be readily reformulated in quantized form using Eqs. (\ref{entropy_current}) and (\ref{kvantalt_hovezeto_kepesseg}). The obtained result is
\begin{equation}
\sigma = \frac{J_s^2}{\lambda} = \frac{1}{T} \frac{\pi^2 \varepsilon^2}{3h} \frac{1}{A L}.
\end{equation}
Multiplying this equation by the volume $V = A L$ and introducing the entropy production $\Sigma = \sigma V$ of the considered system, we obtain
\begin{equation}
\Sigma = \frac{1}{T} \frac{\pi^2 \varepsilon^2}{3h}.
\end{equation}
If the energy transfer can be expressed by an energy packet (or a quasi particle) with frequency $\nu$, then for the entropy production the following relation holds:
\begin{equation} \label{entropia_prod}
\Sigma = \frac{1}{T} \frac{\pi^2}{3} h \nu^2 .
\end{equation}
It is remarkable that the expression scales with the square of the frequency $\nu$.
\section{Examples and applications}
The examples detailed below apply the elaborated framework of quantized thermodynamic conductances.
\subsection{Entropy increase during a single quantum transfer}
Let us consider two subdomains $\fbox{1}$ and $\fbox{2}$ with temperatures $T_1$ and $T_2 < T_1$, respectively. An energy transfer $h\nu$ between the two subdomains is considered. The energy packet is created in subdomain $\fbox{1}$, which thereby undergoes an entropy production of
\begin{equation}
\Sigma_{1} = -\frac{1}{T_1} \frac{\pi^2}{3} h \nu^2.
\end{equation}
The negative sign is due to the formation of the quantum. The generated wave packet leaves the domain $\fbox{1}$, carrying an entropy current of
\begin{equation}
I_{S_1} = -\frac{{\pi}^{2} k_{B}}{3} \nu
\end{equation}
out of the domain $\fbox{1}$. Thus the total entropy decrease of the domain $\fbox{1}$ reads
\begin{equation}
\frac{dS_1}{dt} = -\frac{{\pi}^{2}k_{B}}{3} \nu -\frac{1}{T_1} \frac{\pi^2}{3} h \nu^2 .
\end{equation}
The energy packet arrives at the domain $\fbox{2}$, which yields an incoming entropy current of
\begin{equation}
I_{S_2} = \frac{{\pi}^{2} k_{B}}{3} \nu .
\end{equation}
On the other hand, the energy packet spreads in the volume, causing an entropy production of
\begin{equation}
\Sigma_{2} = \frac{1}{T_2} \frac{\pi^2}{3} h \nu^2
\end{equation}
during this dissipation process. Consequently, the total entropy increase of domain $\fbox{2}$ is:
\begin{equation}
\frac{dS_2}{dt} = \frac{{\pi}^{2}k_{B}}{3} \nu + \frac{1}{T_2} \frac{\pi^2}{3} h \nu^2 .
\end{equation}
The total entropy increase in the volume containing $\fbox{1+2}$ is thus:
\begin{equation}
\frac{dS}{dt} = \frac{dS_1}{dt} + \frac{dS_2}{dt} = \left( -\frac{1}{T_1} + \frac{1}{T_2} \right) \frac{\pi^2}{3} h \nu^2 > 0,
\end{equation}
satisfying the $2^{\textrm{nd}}$ law of thermodynamics, as expected. During the formation of the quantum, the entropy decreases in the subdomain $\fbox{1}$ at temperature $T_1$. During the absorption, the entropy increases in the subdomain $\fbox{2}$ at $T_2$. Since $T_1 > T_2$, the net entropy change is positive. If $T_1 = T_2$, i.e. in thermal equilibrium, no net entropy is produced. The entropy current is independent of the temperature, so the transfer process itself has no further contribution to the entropy increase. A cooler subdomain may emit a quantum to the hotter one, but the resulting total entropy change must be positive, so reciprocally the hotter subdomain must emit a higher energy (higher frequency) packet to the cooler one. This is the quantized formulation of the second law of thermodynamics.
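The bookkeeping above can be checked numerically (a sketch; the frequency and temperatures are illustrative values, and the physical constants are hard-coded CODATA values):

```python
import math

KB = 1.380649e-23    # Boltzmann constant [J/K]
H  = 6.62607015e-34  # Planck constant [J s]

def entropy_current(nu: float) -> float:
    """Entropy carried per unit time by one h*nu packet: pi^2 kB nu / 3."""
    return math.pi**2 * KB * nu / 3.0

def entropy_production(nu: float, T: float) -> float:
    """Dissipation on creation/absorption of the packet: (pi^2/3) h nu^2 / T."""
    return math.pi**2 * H * nu**2 / (3.0 * T)

def net_entropy_rate(nu: float, T1: float, T2: float) -> float:
    """Total dS/dt for a packet emitted at T1 and absorbed at T2.

    The current terms cancel in the sum; only the production terms
    survive: dS/dt = (1/T2 - 1/T1) (pi^2/3) h nu^2.
    """
    dS1 = -entropy_current(nu) - entropy_production(nu, T1)
    dS2 = +entropy_current(nu) + entropy_production(nu, T2)
    return dS1 + dS2

print(f"dS/dt = {net_entropy_rate(1e12, 2.0, 1.0):.3e} J/(K s)")  # positive
```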
\subsection{Spin-lattice relaxation}
Spin-lattice relaxation is a mechanism in which the parallel component of the nuclear magnetic moment relaxes from a higher-energy unstable equilibrium state to the thermodynamic equilibrium. In the initial state the magnetic moment is antiparallel to the constant magnetic field, and the temperature equals that of the surrounding thermal bath. During the relaxation process the energy difference of the Zeeman levels has to be considered,
\begin{equation}
\Delta E = \varepsilon = \gamma \hbar B_0,
\end{equation}
where $\gamma$ is the gyromagnetic ratio and $B_0$ is the external magnetic field. To apply the results obtained above, the relevant frequency, namely the Larmor frequency, has to be expressed:
\begin{equation}
\omega = \gamma B_0 \,\,\, \textrm{or} \,\,\, \nu = \frac{1}{2\pi} \gamma B_0.
\end{equation}
The entropy current of a single spin relaxation, using the expression in Eq. (\ref{entropia_aram}), can be formulated as
\begin{equation}
I_{S} = \frac{{\pi}^{2} k_{B}}{3} \nu = \frac{{\pi} k_{B} \gamma B_0}{6}.
\end{equation}
The entropy production during the relaxation process can be obtained by the application of Eq. (\ref{entropia_prod}) as
\begin{equation}
\Sigma = \frac{1}{T} \frac{\pi^2}{3} h \nu^2 = \frac{1}{T} \frac{1}{12} h \gamma^2 B_0^2 .
\end{equation}
These results may be useful for understanding spin relaxation in general, basic thermodynamic relations in spintronics aiming at the minimal loss of spin waves \cite{qin2021}, magnetic resonance \cite{csosz2020}, and for the study of magnetic storage systems or quantum computing. Similar considerations can be made for any process involving relaxation or interaction with light, e.g. photoluminescence.
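A hedged worked example (the proton gyromagnetic ratio, field and temperature below are assumed example values, not taken from the text); it also cross-checks the specialized spin formulas against the generic per-quantum expressions via $\nu = \gamma B_0 / 2\pi$:

```python
import math

KB    = 1.380649e-23    # Boltzmann constant [J/K]
H     = 6.62607015e-34  # Planck constant [J s]
GAMMA = 2.6752e8        # proton gyromagnetic ratio [rad/(s T)] (assumed)

def larmor_frequency(B0: float) -> float:
    """nu = gamma B0 / (2 pi)."""
    return GAMMA * B0 / (2.0 * math.pi)

def entropy_current_spin(B0: float) -> float:
    """Entropy current of a single spin relaxation: pi kB gamma B0 / 6."""
    return math.pi * KB * GAMMA * B0 / 6.0

def entropy_production_spin(B0: float, T: float) -> float:
    """Entropy production of the relaxation: h gamma^2 B0^2 / (12 T)."""
    return H * GAMMA**2 * B0**2 / (12.0 * T)

B0, T = 1.0, 300.0  # illustrative field [T] and temperature [K]
nu = larmor_frequency(B0)

# the specialized formulas must agree with the generic per-quantum ones
assert math.isclose(entropy_current_spin(B0), math.pi**2 * KB * nu / 3.0)
assert math.isclose(entropy_production_spin(B0, T),
                    math.pi**2 * H * nu**2 / (3.0 * T))

print(f"nu  = {nu:.3e} Hz")
print(f"I_S = {entropy_current_spin(B0):.3e} J/(K s)")
```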
\section{An additional consequence of the least action principle}
Let us turn back to the action principle of heat conduction. We have seen that the action
\begin{equation} \label{action}
\tilde{S}(t) = \int\limits_{0}^{t} \frac{1}{2} (T-T_0)^{2} dt
\end{equation}
is extremal (minimal) for the realizable motion of the equalization process. As can be seen from Eq. (\ref{transferred_entropy}), the potential difference pertains to the entropy transfer; thus the transferred entropy should be minimal for the realizable motion. Furthermore, this also means that the energy transferred during time $t$ is related to the minimal entropy conductance. The factor $\Lambda_s$, which appeared in Eq. (\ref{transferred_entropy}), multiplied by the action in Eq. (\ref{action}), returns a quantity which has the units of energy. Considering the above reasoning, the obtained quantity is
\begin{equation}
\tilde{E} = \Lambda_s \tilde{S}(t) = \frac{{\pi}^{2} k_{B}^{2}}{3h} \int\limits_{0}^{t} \frac{1}{2} (T-T_0)^{2} dt,
\end{equation}
which corresponds to the transferred energy during the process. Finally, we conclude that the above formulated action principle is equivalent to the minimal entropy production principle for time-dependent (non-stationary) processes. The formulated relations bring us closer both to the understanding of entropy current conductance and, eventually, to the meaning of the Lagrangian. \\
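As an illustration of how $\tilde{E}$ can be evaluated, consider an exponential equalization $T(t) = T_0 + \Delta T\, e^{-t/\tau}$ (a profile of our own choosing, not one from the text); the action integral then has the closed form $\Delta T^2 \tau/4$ as $t \to \infty$, which a simple quadrature reproduces.

```python
import math

k_B = 1.380649e-23
h = 6.62607015e-34
Lambda_s = math.pi ** 2 * k_B ** 2 / (3.0 * h)   # entropy conductance factor [W/K^2]

def E_tilde_closed(dT, tau):
    """Lambda_s * int_0^inf (1/2)(dT e^{-t/tau})^2 dt = Lambda_s dT^2 tau / 4."""
    return Lambda_s * dT ** 2 * tau / 4.0

def E_tilde_numeric(dT, tau, n=100000, t_max=40.0):
    """Trapezoidal quadrature of the action integral over [0, t_max * tau]."""
    dt = t_max * tau / n
    total = 0.0
    for i in range(n + 1):
        f = 0.5 * (dT * math.exp(-i * dt / tau)) ** 2
        total += f * (0.5 * dt if i in (0, n) else dt)
    return Lambda_s * total
```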
\section{Summary}
It is increasingly essential to understand the irreversibility of quantum mechanical and quantized transport processes on a microscopic scale. We have pointed out that both a quantized entropy current and an entropy production can be introduced and interpreted during the transfer of a single energy quantum. This completes the thermodynamic description of the process, including the validity of the second law of thermodynamics. Integrating these results into the theoretical framework of the least action principle for thermal propagation, it became apparent that this principle may be expressed as the principle of minimum entropy production on a quantum scale. We believe that our work is useful in the field of quantum computing for understanding how information loss can occur and thus how it can be tackled. \\
\section{Acknowledgments}
We acknowledge the support of the NKFIH Grant nos. K119442 and 2017-1.2.1-NKP-2017-00001. The research has also been supported by the NKFIH Fund (TKP2020 IES, grant no. BME-IE-NAT) based on the charter of bolster issued by the NKFIH Office under the auspices of the Ministry for Innovation and Technology.
\baselineskip 12pt
\bibliographystyle{ieeetr}
\section{Introduction}
Over the last two decades, the rogue wave phenomenon has received acute attention, with extensive research being performed theoretically, numerically and experimentally. One particular area of interest within this field is that of the emergence of rogue waves within crossing sea states, or in other words a sea state composed of two distinct wave systems, propagating with different directions relative to each other. Such a wave system is not an uncommon occurrence in the ocean, and there are notable rogue wave incidents recorded in such states, such as the Suwa Maru (see Tamura \textit{et al.} (2009)), Louis Majesty (see Cavaleri \textit{et al.} (2012)) and Prestige incidents (see Trulsen \textit{et al.} (2015)). The Draupner incident, which is perhaps the most famous rogue wave event, has also been linked (for example, see Adcock \textit{et al.} (2011)) with potential crossing sea activity. More recently, it was conjectured by Fedele \textit{et al.} (2017) that the El Faro rogue wave also occurred in a multidirectional sea state.
We note here that the term ``bimodality" has sometimes been used to describe crossing seas (for example, see Toffoli \textit{et al.} (2006)). However, the classical definition of bimodality refers to a different phenomenon, namely the propagation of more wave energy at an angle to the wind than in the wind direction. Time-resolved measurements of ocean waves have shown a prevalence of directional bimodality at frequencies above twice the peak frequency (for example, see Young \textit{et al.} (1995) and Wang and Hwang (2001)). These were confirmed by airborne remote sensing techniques (for example, see Romero and Melville (2010)), and by stereo-video systems (for example, Peureux \textit{et al.} (2018)). The bimodality is apparently caused by the nonlinear cascade of free wave energy from dominant to high frequencies. Bimodality is also found by solving the free-surface Euler equations for the temporal evolution of initially unimodal directional wave spectra (Toffoli \textit{et al.} (2010)). We will not consider bimodality any further in this paper.
Theoretically, coupled nonlinear Schr\"{o}dinger (CNLS) equations have been used in the study of crossing wave-trains and their nonlinear interactions, and in particular, the Benjamin--Feir (or modulational) instability. Narrowband wave trains are well known to be susceptible to this, and the instability has been studied thoroughly in terms of the nonlinear Schr\"{o}dinger Equation (NLS) for deep water narrowband wave envelopes. Indeed, the Benjamin--Feir instability has been put forward as a possible generating mechanism for rogue waves, given the appropriate conditions. Onorato \textit{et al.} (2006) derived a set of CNLS equations in 2 + 1 dimensions to study the instability in non-colinearly propagating wave trains, and performed a stability analysis for perturbations confined to the $x-$axis (i.e. propagation in one dimension). Shukla \textit{et al.} (2006), using these equations, performed a similar analysis incorporating perturbations in two directions. Laine-Pearson (2010) developed a theory for the long-wave instability of short-crested waves. Short-crested waves are the resonant interaction of two wave systems each with a different direction of propagation. The stability of these wave interactions is closely associated with the stability of the oblique nonresonant interaction between two waves. By considering the long-wave instability of such waves, Laine-Pearson demonstrated that instability growth rates of two crossing waves can be larger than those given by short-crested waves, concluding that only considering true resonant interactions can underestimate the contribution from unstable crossing sea states to the possible formation of rogue waves. Ruban (2009,2010) considered the formation of rogue waves in crossing seas using a system of equations with weak three-dimensional effects.
A stability analysis for the CNLS system yields an expression for the instability growth rate in the crossing wave trains, which is in fact dependent on the angle between the directions of propagation of each wave train. Let the wave vectors of the two propagation directions be given by $(k_x,k_y)$ and $(k_x,-k_y)$, where $k_x$ is the wave vector component along the $x-$axis and $\pm k_y$ the wave vector components along the $y-$axis. The crossing angle between them is given by $\Omega = 2\tan^{-1}(k_y/k_x)$. Focusing instability was found for angles $0 < \Omega < \Omega_c$, where $\Omega_c = 2\tan^{-1}(1/\sqrt{2}) = 70.53^{\circ}$ is the critical angle. The instabilities become defocusing beyond this point, and nonlinear interactions were found to strengthen as the crossing angle approached $\Omega_c$.
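The crossing-angle geometry quoted above is easy to check numerically; the short script below only restates the relations given in the text.

```python
import math

def crossing_angle(k_x, k_y):
    """Angle between the wave vectors (k_x, k_y) and (k_x, -k_y)."""
    return 2.0 * math.atan2(k_y, k_x)

# Critical angle Omega_c = 2 atan(1/sqrt(2)) ~ 70.53 degrees
Omega_c = 2.0 * math.atan(1.0 / math.sqrt(2.0))
Omega_c_deg = math.degrees(Omega_c)
```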
Masson (1993) considered the nonlinear coupling by way of resonant interactions between swell and wind wave sea states via the Hasselmann equation. It was found that significant energy transfer may take place, causing the swell system to grow at the expense of the short systems. This transfer depends on the crossing angle, reaching its maximum at approximately $40^{\circ}$. Furthermore, the ratio between the peak frequencies of each system influenced the strength of the coupling, with no significant interaction for ratios less than $0.6$. However, given that ocean waves are typically low frequency, this type of coupling is thus negligible unless the spectral peaks are so close together that their double peaked structure is difficult to identify.
As touched on earlier, it is reasonable to speculate that at the time of both the Draupner and the El Faro events, the directional spectra actually resulted from the crossing of two spectra travelling at an angle to each other. This speculation is supported by the fact that a positive mean sea level, or `set up', was measured at the apex of both rogue waves. This is quite puzzling, as a `set down' (i.e., a negative mean sea level) is generally expected to be observed beneath such large crests, and this is probably a consequence of the potential crossing sea feature. In this article, we seek to address the short time evolution of crossing sea states, examining the statistical moments linked to nonlinear wave interactions. Two classes of crossing sea states are analysed: one using directional spectra from the Draupner wave crossing at different angles, the other crossing a Draupner-like spectrum with a narrowband JONSWAP state to model spectral growth between wind sea and swell. These two classes of crossing sea states are constructed using the spectral output of a WAVEWATCH III hindcast of the Draupner rogue wave event. We measure ensemble statistical moments as functions of time, finding that although the crossing angle influences the statistical evolution to some degree, there are no significant third order effects present. Furthermore, we inquire into the nature of the mean sea level measured beneath extreme crest heights, the elevation of which (set up or set down) is shown to be related to the spectral content in the low wavenumber region of the corresponding spectrum.
\section{Crossing sea model}
To efficiently perform high resolution, phase resolved simulations of the evolution of crossing sea states, we employ the Higher Order Spectral (HOS) numerical method of West \textit{et al.} (1987), to solve the free surface Euler equation system with a flat sea bottom. This requires appropriate initial conditions for the free surface $\eta(x,y,t)$ and associated surface velocity potential $\psi(x,y,t)$, which is obtained as follows. We begin by utilising a hindcast directional spectrum $S(\omega, \theta)$ for the Draupner wave event (Fig. \ref{fig:dsts}), produced using WAVEWATCH III, which is the same hindcast spectrum used in Fedele \textit{et al.} (2016). The Draupner wave itself was measured by Statoil at the Draupner oil platform on the $1^{st}$ of January, 1995, where the water depth is $70\,$m.
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{figs/draupner_S_ts.eps}\\
\caption{Left panel: WAVEWATCH III hindcast directional wave spectrum $S(\omega,\theta)$ used as input for the HOS simulations (Draupner). Here, $\omega$ is the angular frequency and $\theta$ the direction in degrees. The spectrum has been normalized with respect to the spectral peak value. Right Panel: Measured Draupner time series. Thick line: Crest profile. Thin line: mean sea level. Dashed line: zero sea level. Note the positive mean sea level beneath the crest. Here, $\eta/\eta_{\max}$ is the surface displacement relative to its maximum value and $t$ is the time relative to the dominant wave period $T_p$ (which is 14.93 seconds for the Draupner spectrum).}
\label{fig:dsts}
\end{figure}
It is important to recognise that nonlinear wave-wave interactions are modelled in WAVEWATCH III by the Hasselmann equation, and thus non-resonant interactions are not included (see Tolman \textit{et al.} (2014)). As such, the coupling between the HOS method and WAVEWATCH III is advantageous in the sense that the latter can perform wave hindcast or forecast on a global scale, over large periods, which is not feasible due to the resources required to do so with the HOS method. The HOS method provides higher resolution simulations and a more complete picture of the nonlinear wave-wave interactions. Note that we use a third order HOS expansion in our simulations, which is equivalent to the Zakharov Hamiltonian formalism.
Given that this hindcast spectrum does not feature a crossing sea state, we use it to artificially construct such a spectrum from the base hindcast by summing together our hindcast Draupner spectrum, with an identical copy of itself rotated by a crossing angle $\Omega = 90^{\circ}$. Additionally, we consider two other crossing angles of $45^{\circ}$ and $22.5^{\circ}$, which allow us to see the influence of $\Omega$ on the sea state evolution. Note that we refer below to this case as ``Draupner-Draupner (DD)". To simulate spectral growth between wind sea and swell, we also construct a crossing sea state using again the Draupner spectrum (swell) and a narrowband JONSWAP spectrum (wind), for the same crossing angles as before. The JONSWAP peak is constructed using the following parameters: peak shape parameter $\gamma = 10$, angular frequency, $\omega_p = 0.8 \omega_{p_D}$, $H_s = 1.1 H_{s_D}$, where the subscript $D$ indicates Draupner hindcast values, and the subscript $p$ implies dominant (or peak) wave modes. The directional spread of the JONSWAP spectrum is set to $20^{\circ}$. Note that we refer below to this case as ``Draupner-JONSWAP (DJ)''.
Once we have obtained the directional spectrum $S(\omega, \theta)$, we convert to a cartesian wavenumber coordinate system $(k_x, k_y)$ via the Jacobian transform
\begin{equation}
S(k_x,k_y) = \frac{1}{k}\frac{d\omega(k)}{dk}S(\omega(k),\theta),
\end{equation}
where $k = |\mathbf{k}|$, and the direction is assumed to be $\theta = \tan^{-1}(k_y/k_x)$. The initial wavenumber spectra for each case are presented in Fig. \ref{fig:init_spec}.
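A sketch of this transform, using the finite-depth dispersion relation $\omega^2 = gk\tanh(kd)$ implied by the velocity-potential expression used later in this section; the function-based interface is an assumption of ours, not the WAVEWATCH III output format.

```python
import math

g = 9.81   # gravitational acceleration [m/s^2]

def omega(k, d):
    """Finite-depth dispersion relation: omega(k) = sqrt(g k tanh(k d))."""
    return math.sqrt(g * k * math.tanh(k * d))

def domega_dk(k, d):
    """Derivative of the dispersion relation with respect to k."""
    t = math.tanh(k * d)
    return g * (t + k * d * (1.0 - t ** 2)) / (2.0 * omega(k, d))

def spectrum_k(S_omega_theta, k_x, k_y, d):
    """Jacobian transform S(k_x, k_y) = (1/k) (domega/dk) S(omega(k), theta)."""
    k = math.hypot(k_x, k_y)
    theta = math.atan2(k_y, k_x)
    return (1.0 / k) * domega_dk(k, d) * S_omega_theta(omega(k, d), theta)
```

In the deep-water limit ($kd \gg 1$) the factor $d\omega/dk$ reduces to the familiar $g/(2\omega)$.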
\begin{figure}
\centering
\caption{Initial crossing spectra used in the present HOS simulations, plotted in cartesian coordinates (as defined in the text). Results are shown for the Draupner-Draupner and Draupner-JONSWAP cases for different crossing angles.}
\label{fig:init_spec}
\end{figure}
Random phase is introduced via the random variable $\beta$, uniformly distributed over $[0,2\pi]$, and finally, the Fourier spectra for $\eta$ and $\psi$ are given by
\begin{eqnarray}
\hat{\eta}(k_x,k_y) =& \; \exp(i\beta)\sqrt{S(k_x,k_y)} + c.c., \\
\hat{\psi}(k_x,k_y) =& \; -i\sqrt{\frac{g}{k\tanh(kd)}}\exp(i\beta)\sqrt{S(k_x,k_y)} + c.c.,
\end{eqnarray}
where we have used linear theory to construct $\hat{\psi}$. The physical initial conditions are then recovered via an inverse Fourier transform. Note that this initial condition is linear, and so to ensure stable initial evolution, the nonlinear terms in the evolution equations are smoothly introduced via the Dommermuth ramping function (Dommermuth 2000), over a period $T_r \approx O(5T_p)$, where $T_p$ is the peak wave period of the spectrum. Furthermore, we implement the phenomenological based filter proposed by Xiao \textit{et al.} (2013) to account for energy dissipation due to wave breaking:
\begin{equation}
F(\mathbf{k}|k_p, f_1, f_2) = \exp\left(-\bigg{\vert} \frac{\mathbf{k}}{f_1k_p} \bigg{\vert}^{f_2} \right),
\end{equation}
with parameters set as $[f_1, f_2] = [8,30]$.
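The linear initialisation and the dissipation filter described above can be sketched as follows; direct mode summation is used instead of an inverse FFT for clarity, and the grid and mode choices are illustrative assumptions.

```python
import math
import random

g, d = 9.81, 70.0   # gravity [m/s^2] and water depth [m] (Draupner site, from the text)

def linear_initial_condition(modes, x, y):
    """Free surface eta and surface potential psi at (x, y) from linear theory.

    modes: list of (k_x, k_y, a) with amplitude a = sqrt(2 S(k_x,k_y) dk_x dk_y);
    each mode receives an independent uniform random phase beta in [0, 2 pi).
    """
    eta, psi = 0.0, 0.0
    for k_x, k_y, a in modes:
        k = math.hypot(k_x, k_y)
        beta = random.uniform(0.0, 2.0 * math.pi)
        phase = k_x * x + k_y * y + beta
        eta += a * math.cos(phase)
        # the -i factor in psi-hat becomes a 90-degree phase lag in physical space
        psi += a * math.sqrt(g / (k * math.tanh(k * d))) * math.sin(phase)
    return eta, psi

def breaking_filter(k, k_p, f1=8.0, f2=30.0):
    """Wavenumber filter F = exp(-|k/(f1 k_p)|^f2) modelling breaking dissipation."""
    return math.exp(-abs(k / (f1 * k_p)) ** f2)
```

With $f_2 = 30$ the filter is essentially flat up to $k \approx f_1 k_p$ and cuts off sharply beyond it.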
Simulations are performed using $1024 \times 1024$ dealiased Fourier modes. The wave fields are scaled as $L_{x,y} = O(\epsilon^{-2} k_p^{-1})$, based on the Benjamin--Feir scale, and simulations run for times similarly chosen as $T = O(\epsilon^{-2} T_p)$.
\section{Numerical simulation results}
Ensemble statistics are measured for each crossing sea spectrum considered. The temporal evolution of excess kurtosis and skewness are computed from an ensemble of 20 simulations for each case. Note that each member of the various ensembles is initialised with a newly generated random phase (i.e., we are performing a Monte Carlo type analysis). Skewness is correlated with three-wave interactions, while kurtosis is correlated with four-wave interactions, in particular the Benjamin--Feir instability. There is a known relationship between kurtosis and the Benjamin--Feir index (BFI), a measure of a wave system's susceptibility to the instability. Thus, knowledge of these statistical moments can be used to infer the strength of second and third order nonlinearities. Onorato \textit{et al.} (2006) and Toffoli \textit{et al.} (2011) performed similar simulations using narrowband JONSWAP spectra, measuring the evolution of kurtosis as a function of crossing angle $\Omega$. By increasing $\Omega$, they found increased values of kurtosis, with maximum kurtosis occurring for $40^{\circ} < \Omega < 60^{\circ}$, in agreement with theoretical work on the CNLS system.
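The moments tracked here are the standard normalised ones; computed from a snapshot of surface elevations they might look as follows (flattening the 2-D field into a single list is our simplification):

```python
def surface_moments(eta):
    """Skewness C3 and excess kurtosis C4 of a list of surface elevations."""
    n = len(eta)
    mean = sum(eta) / n
    dev = [e - mean for e in eta]
    m2 = sum(x * x for x in dev) / n
    m3 = sum(x ** 3 for x in dev) / n
    m4 = sum(x ** 4 for x in dev) / n
    C3 = m3 / m2 ** 1.5        # skewness: second order (bound) nonlinearities
    C4 = m4 / m2 ** 2 - 3.0    # excess kurtosis: third order (free-wave) nonlinearities
    return C3, C4
```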
Ensemble statistical measurements for all crossing cases are analogous to the measurements from the singular Draupner hindcast simulations of Fedele \textit{et al.} (2016). The ensemble averages of kurtosis and skewness are shown in Fig. \ref{fig:dd_stat} (Draupner-Draupner case) and Fig. \ref{fig:dj_stat} (Draupner-JONSWAP case), while the mean measurements are given in Table \ref{tb:cross_stats}. These statistical moments are computed spatially from each snapshot of the temporal evolution of the wave field. It is clear from the table that decreasing the crossing angle towards $22.5^{\circ}$ actually leads to increased kurtosis and skewness, contrasting with the narrowband simulation results mentioned above, though this is not surprising, given that the particular phenomenon is associated with the Benjamin--Feir instability. This effect is more dramatic for the Draupner-Draupner cases. The presence of a crossing spectral peak does not seem to significantly influence the nature of the statistical moments, at least for these broad banded oceanic spectra.
\begin{table}
\begin{center}
\def~{\hphantom{0}}
\begin{tabular}{lcc}
\textbf{Crossing Spectrum} & \textbf{Mean kurtosis} & \textbf{Mean skewness} \\[3pt]
DD: $\Omega = 22.5^{\circ}$ & 0.0588 & 0.1863 \\
DD: $\Omega = 45^{\circ}$ & 0.0462 & 0.1792 \\
DD: $\Omega = 90^{\circ}$ & 0.0395 & 0.1520 \\
DJ: $\;\Omega = 22.5^{\circ}$ & 0.0384 & 0.1612 \\
DJ: $\; \Omega = 45^{\circ}$ & 0.0386 & 0.1577 \\
DJ: $\; \Omega = 90^{\circ}$ & 0.0296 & 0.1582 \\
\end{tabular}
\caption{Mean values for kurtosis and skewness for Draupner-Draupner (DD) and Draupner-JONSWAP (DJ) crossing seas.}
\label{tb:cross_stats}
\end{center}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{figs/DD_STATS.eps}\\
\caption{Ensemble (averaged over 20 simulations each) evolution of kurtosis, $C_4$, and skewness, $C_3$, for Draupner-Draupner simulations. Solid blue lines: ensemble averaged measurements, dashed black lines: ensemble variance, dotted black lines: $95\%$ confidence intervals.}
\label{fig:dd_stat}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{figs/DJ_STATS.eps}\\
\caption{Ensemble (averaged over 20 simulations each) evolution of kurtosis, $C_4$, and skewness, $C_3$, for Draupner-JONSWAP simulations. Solid blue lines: ensemble averaged measurements, dashed black lines: ensemble variance, dotted black lines: $95\%$ confidence intervals.}
\label{fig:dj_stat}
\end{figure}
Next we inspect spectral growth, examining both the omnidirectional spectra $O_k(k)$ (Fig. \ref{fig:SK_plot}) and directional distributions $D(\theta)$ (Fig. \ref{fig:DT_plot}) associated with each crossing case. We define them as follows:
\begin{align}
O_k(k) = & \int_{\theta}kS(k,\theta)d\theta, & \\
D(\theta) = & \int_{0}^{\infty}S(k,\theta)dk. &
\end{align}
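These two reductions can be sketched by trapezoidal quadrature on a polar grid (uniform grids are assumed for simplicity; the function-based interface is our own):

```python
import math

def omnidirectional(S, k_grid, theta_grid):
    """O_k(k) = integral over theta of k S(k, theta), trapezoidal rule in theta."""
    out = []
    dth = theta_grid[1] - theta_grid[0]
    for k in k_grid:
        vals = [k * S(k, th) for th in theta_grid]
        out.append(dth * (sum(vals) - 0.5 * (vals[0] + vals[-1])))
    return out

def directional(S, k_grid, theta_grid):
    """D(theta) = integral over k of S(k, theta), trapezoidal rule in k."""
    out = []
    dk = k_grid[1] - k_grid[0]
    for th in theta_grid:
        vals = [S(k, th) for k in k_grid]
        out.append(dk * (sum(vals) - 0.5 * (vals[0] + vals[-1])))
    return out
```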
There is modest spectral distortion through the simulation, with some energy leaking from spectral sidebands to the peak. The Draupner-Draupner crossing case with $\Omega = 22.5^{\circ}$ is quite similar to the original Draupner simulation; given the small crossing angle and identical peak wavenumbers, this is not unexpected. Finally, there does not seem to be any clear case of one peak growing significantly at the expense of the other. In other words, there is little energy transfer between the peaks. Note that we normalise each spectrum by the initial peak value.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figs/omni_dir1.eps}\\
\caption{Omnidirectional spectrum at the end of the simulations (solid lines), compared with the spectrum used as initial condition (dashed lines).}
\label{fig:SK_plot}
\end{figure}
Finally, for the various rogue waves recorded in our simulations (typically 10-15 waves recorded per ensemble member), we compare their measured crest height with the associated mean sea level (Fig. \ref{fig:msl_plot}). Rogue waves themselves are identified by measuring the local space-time maxima within each simulation, and taking those whose crest height exceeds $1.25$ times the significant wave height of the wave field at the time of the observation. As in Fedele \textit{et al.} (2016), the mean sea level is estimated by low pass filtering the time series of each rogue crest below the cut off frequency $f \sim f_p/2$. Interestingly, the crossing angle markedly influences the mean sea level beneath the measured extreme crests. For the Draupner-Draupner cases, increasing the angle leads to increased development of set up beneath the crest. In fact, for a crossing angle $\Omega = 90^{\circ}$, nearly all the measured crests coincided with a set up of mean sea level! As already mentioned, Adcock \textit{et al.} (2011) pointed out that the presence of a set up beneath the Draupner wave was possibly due to the perpendicular crossing of the spectrum with one of similar design. Remarkably, this phenomenon is reversed for the Draupner-JONSWAP cases, although we note that even for $\Omega = 22.5^{\circ}$ there is still a noteworthy portion of set up measurements.
Comparing with Figs. \ref{fig:SK_plot} and \ref{fig:DT_plot}, it would appear that the prominence of a measured set up is related to the presence of a distinct second spectral peak in the low end of the wavenumber spectrum. Given that it is this part of the spectrum that will remain after the second order difference low pass filter is applied, an associated positive mean sea level is reasonable. The case which features this second peak most prominently is the $\Omega = 90^{\circ}$ Draupner-Draupner case, and moreover, all Draupner-JONSWAP cases contain this feature to some degree. For small crossing angles in the Draupner-Draupner regime, the peaks are simply not discernible enough.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figs/dir_dist1.eps}\\
\caption{Directional distribution at the end of the simulation (solid lines), compared with the initial condition (dashed lines). Note that we take the square root of the directional distribution, to emphasise the difference between initial condition and end of simulation.}
\label{fig:DT_plot}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figs/msl_plot.eps}\\
\caption{Mean sea level versus crest height. Results are shown for the Draupner-Draupner and Draupner-JONSWAP cases for different crossing angles.}
\label{fig:msl_plot}
\end{figure}
\section{Conclusion}
We have considered the evolution of crossing sea states, simulated via the higher order spectral method of West \textit{et al.} (1987). Based on speculation that the Draupner wave may have occurred in a crossing sea state, two different spectra are considered: a crossing between the original Draupner spectrum with itself at various crossing angles ($22.5^{\circ}, 45^{\circ},$ and $90^{\circ}$), and also, the Draupner spectrum with a JONSWAP `swell' system for the same angles.
The statistical evolution of the studied wave fields is pointedly similar to the Draupner simulations found in Fedele \textit{et al.} (2016). Although it is known that crossing wave systems can possess enhanced growth rates of modulational instability, we note that if the isolated systems themselves are notably insusceptible to the instability (i.e., broad short crested ocean systems), the crossing of such systems likely will not stimulate it. Systems with small crossing angles seem to possess somewhat larger kurtosis and skewness, although the measurements are not outside the realms of possibility for wave fields with mono-peaked wave spectra. It would appear, at least for the cases considered, that there are no extraordinary third order nonlinear interactions - at least those responsible for the Benjamin--Feir instability - when compared to regular oceanic sea states. We do note, however, that this does not rule out enhanced growth rates of the instabilities in the right circumstances, i.e., the crossing of narrowband spectra. As observed in Fedele \textit{et al.} (2016), evidence suggests that the evolution of crossing sea states in typical oceanic conditions is most likely dominated by second order nonlinearities, with extreme or rogue waves developing as a result of constructive interference enhanced by second-order bound nonlinearities.
By simulating these bi-peaked spectra, we have observed a possible explanation for the set up of the mean sea level beneath the Draupner rogue wave. It would appear that prominent set up is connected to a significant secondary peak in the low wavenumber portion of the wave field's omnidirectional spectrum. Although we have induced this secondary peak by simulating crossing sea states, it is not implausible for it to develop naturally in a mono-peaked system. Thus, it transpires that a set up of the mean sea level beneath a large wave crest is also not a remarkable feature, and may just be a consequence of the low wavenumber (frequency) portion of the spectrum containing a relatively large proportion of the energy of the system.
\clearpage
\section{Acknowledgements}
This work is supported by the European Research Council (ERC) under the research projects ERC-2011-AdG
290562-MULTIWAVE and ERC-2013-PoC 632198-WAVEMEASUREMENT, and Science Foundation Ireland
under grant number SFI/12/ERC/E2227.
\section{Introduction}
\label{sec:intro}
The goal of \acf{LTR} in \acf{IR} is to optimize models that rank documents according to user preferences. As modern search engines may combine hundreds of ranking signals, they rely on models that can combine such signals to form optimal rankings. Traditionally, this was done through Offline Learning to Rank, which relies on annotated sets of queries and documents with their relevance assessed by human raters. Over the years, the limitations of this supervised approach have become apparent: annotated sets are expensive and time-consuming to produce \cite{letor,Chapelle2011}; in some settings creating such a dataset would be a serious breach of privacy \cite{najork2016using, wang2016learning}; and annotations are not necessarily in line with user preferences \cite{sanderson2010}. As a reaction, interest in \acf{OLTR}, where models learn from interactions with users, has increased \cite{chaudhuri2016online, zhao2016constructing, schuth2016mgd, oosterhuis2016probabilistic}. While this resolves many of the issues with the offline \ac{LTR} setting, it brings challenges of its own. Firstly, \ac{OLTR} algorithms cannot directly observe their performance and thus have to infer from user interactions how they can improve. Secondly, they have to perform their task, i.e., decide what rankings to display, while simultaneously learning from user interactions.
In stark contrast with other work on \ac{LTR}, existing work in \ac{OLTR} has only considered optimizing linear models and merely focussed on improving gradient estimation. We argue that this limitation is due to a \emph{speed-quality tradeoff} that previous work has faced. This tradeoff is a result of the dual nature of the \ac{OLTR} task: algorithms are evaluated both on how they perform the task while learning and on the final ranking model they converge towards. This duality is especially important as \ac{OLTR} involves human interactions: some strategies may result in an optimal ranking model but may frustrate users during learning.
Consider the experiment visualized in Figure~\ref{fig:intro:tradeoff}. Here, a Linear Model (MGD) and a simpler \ac{Sim-MGD} are optimized on user interactions. The latter learns faster and fully converges in fewer than 200 impressions, while the Linear Model initially trails Sim-MGD but is more expressive, requires more impressions, and ultimately exceeds Sim-MGD in offline performance (as measured in NDCG).
\begin{figure}[h]
\centering
\includegraphics[clip,trim=0mm 2mm 0mm 0mm,width=\columnwidth]{img/introductionexample/ExampleGap_offline_informational}
\caption{Offline performance (NDCG) of a Linear Model (MGD) and a Similarity Model (Sim-MGD) on the NP2003 dataset under an informational click model. (Full details of the experimental setup are provided in Section~\ref{sec:experiments}.)}
\label{fig:intro:tradeoff}
\end{figure}
\ac{OLTR} models that are less complex, i.e., that require fewer user interactions to converge, may provide a good user experience as they adapt quickly. However, because of their limited complexity they often lack expressiveness, causing them to learn suboptimal rankings. Conversely, a more complex \ac{OLTR} model may ultimately find the optimal rankings but requires more user interactions. Thus, such models ultimately produce a better experience but risk deterring users before this level of performance is reached. As a result, a fundamental tradeoff has to be made: a good user experience during training resulting in suboptimal rankings vs.\ the risk of frustrating users while finding superior rankings in the end. We call this dichotomy the \emph{speed-quality tradeoff}.
To address the speed-quality tradeoff, a method for combining the properties of multiple models is required. In this paper we meet this challenge by making two contributions. First, we introduce a novel model that uses document feature similarities (\ac{Sim-MGD}) to learn more rapidly than the state-of-the-art, \ac{MGD} \cite{schuth2016mgd, oosterhuis2016probabilistic}. However, \ac{Sim-MGD} converges towards rankings inferior to \acs{MGD} as predicted by the speed-quality tradeoff.
Second, we propose a novel cascading \ac{OLTR} approach, called \ac{C-MGD},
that uses two \ac{OLTR} models, a fast simple model and a slower complex model. Initially the cascade lets the faster model learn by interacting with its users. Later, when the faster learner has converged it is used to initialize the expressive model and discarded. \ac{C-MGD} then continues optimization by letting the expressive model interact with the user.
Consequently, the user experience is improved, both short term and long term, as users initially interact with a fast adapting model, while ultimately the better ranker using the complex model is still found.
Our empirical results show that the cascade approach, i.e., \ac{C-MGD}, can combine the improved user experience from \ac{Sim-MGD} while still maintaining the optimal convergence of the state-of-the-art.
In this paper we address the following research questions:
\begin{enumerate}[label={\bf RQ\arabic*},leftmargin=*,nosep,topsep=1pt]
\item Is the user experience significantly improved when using \ac{Sim-MGD}? \label{rq:simmgd}
\item Can the cascading approach, \ac{C-MGD}, combine an improvement in user experience while maintaining convergence towards state-of-the-art performance levels? \label{rq:cmgd}
\end{enumerate}
To facilitate replicability and repeatability of our findings, we provide open source implementations of both \ac{Sim-MGD} and~\ac{C-MGD}.\footnote{\url{https://github.com/HarrieO/BalancingSpeedQualityOLTR}}
\section{Related Work}
\label{sec:relatedwork}
We provide a brief overview of \ac{LTR} and \ac{OLTR} before describing methods for combining multiple models in Machine Learning.
\subsection{Learning to rank}
\acf{LTR} is an important part of \acf{IR} and allows modern search engines to base their rankings on hundreds of relevance signals \cite{liu2009learning}. Traditionally, a supervised approach is taken where human raters annotate whether a document is relevant to a query \cite{Qin2013Letor, Chapelle2011}. Additionally, previous research has considered semi-supervised approaches that use unlabeled sample data next to annotated data \cite{szummer11:semi, Joachims2002}. Both supervised and semi-supervised approaches are typically performed offline, meaning that training is performed after annotated data has been collected. When working with previously collected data, the speed-quality tradeoff does not arise, since users are not involved during training. Consequently, complex and expressive models have been very successful in the offline setting \cite{Joachims2002, burges2010ranknet}.
However, in recent years several issues with training on annotated datasets have been found. Firstly, gathering annotations is time-consuming and costly \cite{letor, Qin2013Letor, Chapelle2011}, making it infeasible for smaller organisations to collect such data. Secondly, for certain search contexts collecting data would be unethical, e.g., in the context of search within personal emails or documents \cite{wang2016learning}. Thirdly, since the datasets are static, they cannot account for future changes in what is considered relevant. Models derived from such datasets are not necessarily aligned with user satisfaction, as annotators may interpret queries differently from actual users \cite{sanderson2010}.
\subsection{Online learning to rank}
\acf{OLTR} attempts to solve the issues with offline annotations by directly learning from user interactions~\cite{yue09:inter}, as direct interactions with users are expected to be more representative of their preferences than offline annotations~\cite{radlinski08:how}. The task of \ac{OLTR} algorithms is two-fold: they must choose what rankings to display to users while simultaneously learning from interactions with the presented rankings. Although the \ac{OLTR} task can be modeled as a \ac{RL} problem~\cite{sutton1998:introduction}, it differs from a typical \ac{RL} setting because there is no observable reward.
The main difficulties with performing both aspects of the \ac{OLTR} task come in the form of \emph{bias} and \emph{noise}. Noise occurs when the user's interactions do not represent their true preferences, e.g., users often click on a document for unexpected reasons \cite{sanderson2010}. Bias arises in different ways, e.g., there is selection bias as interactions only involve displayed documents \cite{wang2016learning} and position bias as documents at the top of a ranking are more likely to be considered \cite{yue2010beyond}. These issues complicate relevance inference, since the most clicked documents are not necessarily the most relevant.
Consequently, state-of-the-art \ac{OLTR} algorithms do not attempt to predict the relevance of single documents. Instead, they approach training as a dueling bandit problem~\cite{yue09:inter} which relies on methods from online evaluation to compare rankers based on user interactions~\cite{radlinski2010comparing, radlinski2013practical}. Interleaving methods combine rankings from two rankers to produce a single result list; from large numbers of clicks on interleavings a preference for one of the two rankers can be inferred~\cite{radlinski2013optimized, hofmann2011probabilistic}. This approach has been extended to find preferences between larger sets of rankers in the form of multileaving~\cite{Schuth2014a,schuth2015probabilistic}. These comparison methods have recently given rise to \ac{MGD}, a more sensitive \ac{OLTR} algorithm that requires fewer user interactions to reach the same level of performance \cite{schuth2016mgd}. The improvement is achieved by comparing multiple rankers at each user impression, the results of which are then used to update the \ac{OLTR} model. Initially, the number of rankers in the comparison was limited to the SERP length \cite{Schuth2014a}. Probabilistic multileaving~\cite{schuth2015probabilistic} allows comparisons of a virtually unlimited size, leading to even better gradient estimation~\cite{oosterhuis2016probabilistic}.
In contrast to offline \acs{LTR} \cite{Joachims2002, burges2010ranknet}, work in \ac{OLTR} has only considered optimizing linear combinations of ranking features~\cite{yue09:inter, hofmann12:balancing}.
Recent research has focused on improving the gradient estimation of the \ac{MGD} algorithm \cite{schuth2016mgd, oosterhuis2016probabilistic}.
We argue this focus is a consequence of the speed-quality tradeoff: since \acs{OLTR} algorithms are evaluated by the final model they produce (i.e., offline performance) and the user experience during training (i.e., online performance), improvements should not sacrifice either of these aspects. Unfortunately, every model falls on one side of the tradeoff. For instance, more complex models such as regression forests or neural networks \cite{burges2010ranknet} are very prominent in offline \acs{LTR}, but they require much larger amounts of training data than, for instance, a simpler linear model. Thus, initially more users will be shown inferior rankings when training such a complex model. Although such models may eventually find the optimal rankings, they sacrifice the user experience during training and thus will not beat the \acs{MGD} baseline in online performance. Our solution to this tradeoff is meant to stimulate the exploration of a wider range of ranking models in \acs{OLTR}.
\subsection{Multileave gradient descent}
We build on the \acl{MGD} algorithm \cite{schuth2016mgd}; see Algorithm~\ref{alg:mgd}. Briefly, at all times the algorithm has a \emph{current best} ranker $\mathbf{w}^0_t$ that is the estimate of the optimal ranker at timestep $t$. Initially, this model starts at the origin $\mathbf{w}^0_0=\mathbf{0}$; then, after each issued query, another $n$ rankers $\mathbf{w}^i_t$ are sampled from the unit sphere around the \emph{current best} ranker (Line~\ref{line:mgd:candidate}). These sampled rankers are candidates: slight variations of the \emph{current best}; \acs{MGD} tries to infer whether these variations are an improvement and updates accordingly. The candidates produce rankings for the query, which are combined into a single multileaved result list, e.g., by using Probabilistic Multileaving \cite{oosterhuis2016probabilistic, schuth2015probabilistic} (Line~\ref{line:mgd:multileave}). The multileaved result list is displayed to the user and clicks are observed (Line~\ref{line:mgd:click}); from the clicks the rankers preferred over the \emph{current best} are inferred (Line~\ref{line:mgd:infer}).
If none of the other rankers is preferred, the \emph{current best} is kept; otherwise, the model takes an $\eta$ step towards the mean of the winning rankers (Line~\ref{line:mgd:update}). After the model has been updated, the algorithm waits for the next query to repeat the process.
\begin{algorithm}[t]
\caption{Multileave Gradient Descent (MGD)~\cite{schuth2016mgd}.}
\label{alg:mgd}
\begin{algorithmic}[1]
\STATE \textbf{Input}: $n$, $\delta$, $\mathbf{w}^0_0$, $\eta$
\FOR{$t \leftarrow 1..\infty$ }
\STATE $q_t \leftarrow \mathit{receive\_query}(t)$\hfill \textit{\small // obtain a query from a user} \label{line:mgd:query}
\STATE $\mathbf{l}_0 \leftarrow \mathit{generate\_list}(\mathbf{w}^0_t,q_t)$ \hfill \textit{\small // ranking of current best}
\FOR{$i \leftarrow 1..n$}\label{line:mgd:loopstart}
\STATE $\mathbf{u}^i_t \leftarrow \mathit{sample\_unit\_vector}()$ \label{line:mgd:unitsphere}
\STATE $\mathbf{w}_t^i \leftarrow \mathbf{w}^0_t + \delta \mathbf{u}^i_t $ \hfill \textit{\small // create a candidate ranker} \label{line:mgd:candidate}
\STATE $\mathbf{l}_i \leftarrow \mathit{generate}\_list(\mathbf{w_t^i},q_t)$ \hfill \textit{\small // exploratory ranking} \label{line:mgd:loopstop}
\ENDFOR
\STATE $\mathbf{m}_t \leftarrow \mathit{multileave}(\mathbf{l})$\hfill \textit{\small // multileaving} \label{line:mgd:multileave}
\STATE $\mathbf{c}_t \leftarrow \mathit{receive\_clicks}(\mathbf{m}_t)$\hfill \textit{\small // show multileaving to the user} \label{line:mgd:click}
\STATE $\mathbf{b}_t \leftarrow \mathit{infer\_preferences}(\mathbf{l},\mathbf{m}_t,\mathbf{c}_t)$ \hfill \textit{\small // winning rankers} \label{line:mgd:infer}
\STATE $\mathbf{w}^0_{t+1} \leftarrow \mathbf{w}^0_{t} + \eta \frac{1}{|\mathbf{b}_t|}\sum_{j \in \mathbf{b}_t} \mathbf{u}^j_t$ \label{line:mgd:update} \hfill \textit{\small // winning set may be empty}
\ENDFOR
\end{algorithmic}
\end{algorithm}
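Ignoring the multileaving machinery, the core update of Algorithm~\ref{alg:mgd} can be sketched as follows (a minimal NumPy sketch with our own names; the preference function is a stand-in for the click-based inference, not part of the reference implementation):

```python
import numpy as np

def sample_unit_vector(dim, rng):
    """Draw a direction uniformly from the unit sphere."""
    u = rng.standard_normal(dim)
    return u / np.linalg.norm(u)

def mgd_step(w_best, infer_preferences, n=19, delta=1.0, eta=0.01, rng=None):
    """One MGD iteration: sample n candidate rankers on the delta-sphere
    around the current best, infer which candidates won the comparison,
    and take an eta step towards the mean of the winning directions."""
    rng = rng if rng is not None else np.random.default_rng()
    units = [sample_unit_vector(len(w_best), rng) for _ in range(n)]
    candidates = [w_best + delta * u for u in units]
    # in the real algorithm this preference comes from clicks on a multileaving
    winners = infer_preferences(w_best, candidates)
    if winners:  # the winning set may be empty; then no update is made
        w_best = w_best + eta * np.mean([units[j] for j in winners], axis=0)
    return w_best
```

Because the update is the mean of unit vectors scaled by $\eta$, a single step never moves the model by more than $\eta$.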
\subsection{Combining models in machine learning}
Combining models is a prevalent approach in machine learning~\cite{bishop2006pattern}; often, this is done by averaging the predictions of a set of models~\cite{breiman2001random}. Alternatively, some methods select which model to use based on the input variables~\cite{jacobs1991adaptive}. A set of multiple models whose output is averaged is called a committee, a concept that can be applied in different ways. The simplest way is by \emph{bagging}: training different models on bootstrapped datasets and taking the mean of their predictions \cite{breiman1996bagging}.
A more powerful committee technique is boosting \cite{freund1996experiments}, which trains models in sequence. Each model is trained on a weighted form of the dataset, where the weight of a datapoint depends on the performance of the committee thus far. Hence, training will give more weight to points that are misclassified by the previous models. When the committee is complete, its predictions are combined using a weighted voting scheme. This form of boosting is applicable to supervised classification \cite{freund1996experiments} and regression \cite{friedman2000additive}; it has also been used extensively in offline \acs{LTR}, e.g., in LambdaMART \cite{burges2010ranknet}. The main difference between our approach and ensemble methods is that the latter aim to reduce the final error of the committee. None of the ensemble methods are based around user interactions; hence, none deal with the speed-quality tradeoff.
\smallskip\noindent%
On top of the related work discussed above we contribute the following: a novel \ac{OLTR} method that ranks based on feature similarities with example documents. This is the first \ac{OLTR} model that is not a direct linear model. Furthermore, we introduce a novel \ac{OLTR} algorithm that combines multiple ranking models. Unlike the model-combining methods discussed above, this algorithm does not combine the output of two models. Instead, different parts of the learning process are assigned to the model that is expected to perform best during that period, i.e., a model that requires less data will perform better in the initial phase of learning. This makes it the first algorithm that uses multiple ranking models to improve the user experience during learning.
\section{Sim-MGD: A Fast \ac{OLTR} Model Based on Document Feature Similarity}
\label{sec:simmgd}
In this section we introduce a novel ranking model for \ac{OLTR}; by basing result lists on feature similarities with reference documents, it learns more rapidly than \ac{MGD}. However, as predicted by the speed-quality tradeoff, the increase in speed sacrifices some of the expressiveness of the model; Section~\ref{sec:cmgd} provides a method for dealing with this tradeoff.
Previous work in \ac{OLTR} has only considered optimizing linear combinations of features of documents.\footnote{Though learning the parameters of individual ranking features has been researched \cite{schuth2014optimizing}.} Let $\mathbf{w}$ be the set of weights that is learned and $\mathbf{d}$ the feature representation of a query-document pair. A document is then ranked according to the score:
\begin{align}
R_\mathit{MGD}(\mathbf{d}) = \sum_i w_i d_i. \label{eq:mgdmodel}
\end{align}
There are several properties of the \acs{LTR} problem that this model does not make use of. For instance, almost all~\ac{LTR} features are relevance signals (e.g., BM25 or PageRank), so it is very unlikely that any should be weighted negatively. However, \ac{MGD} does not consider this when exploring; it may even consider a completely negative ranker.
As an alternative, we propose a ranking model based on the assumption that relevant documents have similar features.
Here, a set of document-query pairs $\mathbf{D}_M = \{\mathbf{d}_1,\ldots,\mathbf{d}_M\}$ is used as reference points; documents are then ranked based on their weighted similarity to those in the set:
\begin{align}
R_\mathit{sim}(\mathbf{d}) = \sum^M_{m=1} \frac{w_m}{|\mathbf{d}_m|} \mathbf{d}^T\mathbf{d}_m \label{eq:simmodel}
\end{align}
where the documents in $\mathbf{D}_M$ are $L_2$-normalized.
Since this model consists of a linear combination, optimizing its weights $\mathbf{w}$ is straightforward with the existing \ac{MGD} (Algorithm~\ref{alg:mgd}) or with our novel algorithm \ac{C-MGD} (Algorithm~\ref{alg:cmgd}), to be introduced below. For clarity, Algorithm~\ref{alg:simmgd} displays \ac{MGD} optimizing the similarity model (\ac{Sim-MGD}). Unlike \ac{MGD}, \ac{Sim-MGD} requires a collection of document-query pairs from which the set $\mathbf{D}_M$ is sampled (Line~\ref{line:simmgd:sample}). \ac{Sim-MGD} is still initialized with $\mathbf{w}^0_0 = \mathbf{0}$, but the number of weights is now determined by the size of the reference set $M$. For each query that is received, a result list is created by the \emph{current best} ranker (Line~\ref{line:simmgd:cbranking}); here, the ranker is defined by the weights $\mathbf{w}^0_t$ and the set $\mathbf{D}_M$ according to Equation~\ref{eq:simmodel}. Then $n$ candidates are sampled around the \emph{current best} ranker (Line~\ref{line:simmgd:candidate}) and their result lists are also created using Equation~\ref{eq:simmodel} (Line~\ref{line:simmgd:loopstop}). The result lists are combined into a multileaving and presented to the user (Line~\ref{line:simmgd:multileave}--\ref{line:simmgd:click}); if preferences are inferred from the user's interactions with the displayed result list, the \emph{current best} ranker is updated accordingly (Line~\ref{line:simmgd:infer}--\ref{line:simmgd:update}).
The intuition behind \ac{Sim-MGD} is that it is easier to base a result list on good or bad examples than it is to discover how each feature should be weighed. Moreover, \ac{MGD} optimizes faster in spaces with a lower dimensionality \cite{yue09:inter}; thus, a small number of reference documents $M$ speeds up learning further.
In spite of this speedup, the similarity model is less expressive than the standard linear model (Equation~\ref{eq:mgdmodel}). Regardless of $\mathbf{D}_M$, the similarity model can always be rewritten to a linear model:
\begin{align}
R(\mathbf{d}) = \sum^M_{m=1} \frac{w_m}{|\mathbf{d}_m|} \mathbf{d}^T\mathbf{d}_m
= \mathbf{d}^T \sum^M_{m=1} \frac{w_m}{|\mathbf{d}_m|} \mathbf{d}_m. \label{eq:simtolin}
\end{align}
However, not every linear model can necessarily be rewritten as a similarity model, especially if the reference set $\mathbf{D}_M$ is small. Thus the space of models is limited by $\mathbf{D}_M$, providing faster learning but potentially excluding the optimal ranker. Therefore, the similarity model falls on the speed side of the speed-quality tradeoff.
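This rewrite is easy to verify numerically; the sketch below (our own, with random data) scores a document both with the similarity model and with the folded-in linear weight vector:

```python
import numpy as np

rng = np.random.default_rng(42)
M, dim = 5, 10
D_M = rng.random((M, dim))        # reference documents d_1, ..., d_M
w = rng.standard_normal(M)        # similarity-model weights
d = rng.random(dim)               # document to score

norms = np.linalg.norm(D_M, axis=1)
# similarity model: sum of weighted, normalized similarities to the references
r_sim = np.sum(w / norms * (D_M @ d))
# equivalent linear model: fold the references into one weight vector
w_lin = (w / norms) @ D_M
assert np.isclose(r_sim, w_lin @ d)
```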
For this paper, different sampling methods for creating $\mathbf{D}_M$ (Line~\ref{line:simmgd:sample}) are investigated. First, uniform sampling is considered, which is expected to cover all documents evenly. Additionally, $k$-means clustering is used, where $k=M$ and the centroid of each cluster serves as a reference document; this increases the chance that all different document types are represented in the reference set.
\ac{Sim-MGD} is expected to learn faster and provide a better initial user experience than \ac{MGD}. However, it is less expressive
and is thus expected to converge at an inferior optimum. Again, without the use of \ac{C-MGD} the similarity model falls on the speed side of the speed-quality tradeoff.
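The two strategies for constructing $\mathbf{D}_M$ described above can be sketched as follows (a minimal sketch using a plain Lloyd's $k$-means; the helper names are ours, and the actual implementation may differ):

```python
import numpy as np

def uniform_reference_set(docs, M, rng):
    """Uniformly sample M reference documents from the collection."""
    idx = rng.choice(len(docs), size=M, replace=False)
    return docs[idx]

def kmeans_reference_set(docs, M, rng, iters=20):
    """Use the M centroids of a k-means clustering (plain Lloyd's
    algorithm) as the reference 'documents'."""
    centroids = uniform_reference_set(docs, M, rng)  # random initialization
    for _ in range(iters):
        # assign each document to its nearest centroid
        dists = np.linalg.norm(docs[:, None, :] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its assigned documents
        for m in range(M):
            members = docs[labels == m]
            if len(members):
                centroids[m] = members.mean(axis=0)
    return centroids
```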
\begin{algorithm}[t]
\caption{\ac{MGD} with the Similarity Model (\ac{Sim-MGD}).}
\label{alg:simmgd}
\begin{algorithmic}[1]
\STATE \textbf{Input}: $\mathbf{C}$, $M$, $n$, $\delta$, $\mathbf{w}^0_0$, $\eta$
\STATE $\mathbf{D}_M \leftarrow \{\mathbf{d}_1,\ldots,\mathbf{d}_M\} \sim \textit{sample}(\mathbf{C})$ \hfill \textit{\small// sample reference documents} \label{line:simmgd:sample}
\FOR{$t \leftarrow 1..\infty$ }
\STATE $q_t \leftarrow \mathit{receive\_query}(t)$\hfill \textit{\small // obtain a query from a user} \label{line:simmgd:query}
\STATE $\mathbf{l}_0 \leftarrow \mathit{generate\_list}(\mathbf{w}^0_t,q_t,\mathbf{D}_M)$ \hfill \textit{\small// exploitive ranking (Eq.~\ref{eq:simmodel})} \label{line:simmgd:cbranking}
\FOR{$i \leftarrow 1..n$}\label{line:simmgd:loopstart}
\STATE $\mathbf{u}^i_t \leftarrow \mathit{sample\_unit\_vector}()$ \label{line:simmgd:unitsphere}
\STATE $\mathbf{w}_t^i \leftarrow \mathbf{w}^0_t + \delta \mathbf{u}^i_t $ \hfill \textit{\small // create a candidate ranker} \label{line:simmgd:candidate}
\STATE $\mathbf{l}_i \leftarrow \mathit{generate}\_list(\mathbf{w_t^i},q_t,\mathbf{D}_M )$ \hfill \textit{\small// exploratory ranking (Eq.~\ref{eq:simmodel})} \label{line:simmgd:loopstop}
\ENDFOR
\STATE $\mathbf{m}_t \leftarrow \mathit{multileave}(\mathbf{l})$\hfill \textit{\small // multileaving} \label{line:simmgd:multileave}
\STATE $\mathbf{c}_t \leftarrow \mathit{receive\_clicks}(\mathbf{m}_t)$\hfill \textit{\small // show multileaving to the user} \label{line:simmgd:click}
\STATE $\mathbf{b}_t \leftarrow \mathit{infer\_preferences}(\mathbf{l},\mathbf{m}_t,\mathbf{c}_t)$ \hfill \textit{\small // winning rankers} \label{line:simmgd:infer}
\STATE $\mathbf{w}^0_{t+1} \leftarrow \mathbf{w}^0_{t} + \eta \frac{1}{|\mathbf{b}_t|}\sum_{j \in \mathbf{b}_t} \mathbf{u}^j_t$ \label{line:simmgd:update} \hfill \textit{\small // winning set may be empty}
\ENDFOR
\end{algorithmic}
\end{algorithm}
\section{C-MGD: Combining \ac{OLTR} Models as a Cascade}
\label{sec:cmgd}
We aim to combine the initial learning speed of one model and the final convergence of another. This provides the best performance and user experience in the short and long term. Our proposed algorithm makes use of a cascade: initially it optimizes the faster model by letting it interact with the users until convergence is detected. At this point, the learning speed of the faster model will no longer be of advantage as the model is oscillating around a (local) optimum. Furthermore, it is very likely that a better optimum exists in a more expressive model space, especially if the faster model is relatively simple.
To make use of this likelihood, optimization is continued using a more complex model that is initialized with the first model. If this switch is made appropriately, the advantages of both models are combined: a fast initial learning speed and convergence at a better optimum. We call this algorithm \acfi{C-MGD}; before it is detailed, we discuss the main challenges of switching between models during learning.
\subsection{Detecting convergence}
\ac{C-MGD} has to detect convergence during optimization. After sufficiently many interactions, the performance of \ac{MGD} plateaus \cite{oosterhuis2016probabilistic, schuth2016mgd}. However, in the online setting there is no validation set to verify this. Instead, convergence of the model itself can be measured by looking at how much it has changed over a recent period of time. If the ranker has barely changed,
then either the estimated gradient is oscillating around a point in the model space, or few of the clicks prefer the candidates that \ac{MGD} has proposed. Both cases are indicative of a (local) optimum. Correspondingly, during \ac{MGD} optimization, convergence of a model $\mathbf{w}^t$ at timestep $t$ can be assumed if it has not changed substantially during the past $h$ iterations. \ac{C-MGD} assumes convergence if the cosine similarity between the current model and the model of $h$ iterations earlier deviates from one by less than a chosen threshold $\epsilon$:
\begin{align}
1 - \frac{\mathbf{w}^{t} \cdot \mathbf{w}^{t-h}}{\|\mathbf{w}^{t}\|\cdot\|\mathbf{w}^{t-h}\|} < \epsilon.
\end{align}
The cosine similarity is appropriate here since linear combinations are characterized by their direction and not their norm: scaling the weights of a model produces the same rankings, i.e., for any document pair $\{ \mathbf{d}_i , \mathbf{d}_j\}$ and any $\beta > 0$:
\begin{align}
\mathbf{w} \cdot \mathbf{d}_i > \mathbf{w}\cdot \mathbf{d}_j \rightarrow \beta \mathbf{w}\cdot \mathbf{d}_i > \beta \mathbf{w}\cdot \mathbf{d}_j.
\end{align}
Therefore, a cosine similarity close to one indicates that the model produces rankings that have changed only slightly.
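The convergence test and its invariance to rescaling can be sketched as follows (our own minimal version; the caller is assumed to keep the model from $h$ iterations ago):

```python
import numpy as np

def has_converged(w_now, w_past, epsilon=0.01):
    """Assume convergence when the model's direction has barely changed
    over the last h iterations: 1 - cos(w_now, w_past) < epsilon."""
    cos = w_now @ w_past / (np.linalg.norm(w_now) * np.linalg.norm(w_past))
    return 1.0 - cos < epsilon
```

Because only the direction matters, a pure rescaling of the weights, which yields identical rankings, always passes the test.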
\subsection{Difference in confidence}
\ac{C-MGD} has to account for the difference in confidence when changing model space.
Convergence in the simpler model space gives \ac{C-MGD} confidence that an optimum was found, but some of this confidence is lost when switching model spaces, since much of the new space has not been explored.
\ac{MGD}'s confidence is indicated by the norm of its model's weights, which increases if a preference in the same direction is repeatedly found.
Consequently, when initializing the subsequent model, \ac{C-MGD} has to renormalize for the difference in confidence caused by switching model spaces.
This is not trivial, as the norm determines how dissimilar the sampled candidates will be, and thus how much exploration takes place.
If the norm is set too low, \ac{C-MGD} will explore a large region of the model space, neglecting the learning done by the previous model. But if it starts with a norm that is too large, it will perform so little exploration that it may not find the new optimum in a reasonable amount of time.
Directly measuring confidence is not possible in the online setting. Instead, rescaling is estimated from the difference in dimensionality of the models:
\begin{align}
\|\mathbf{w}_\textit{complex}\| = \|\mathbf{w}_\textit{simple}\| \cdot \frac{\sqrt{D_\textit{simple}}}{\sqrt{D_\textit{complex}}},
\end{align}
where $D_\textit{simple}$ and $D_\textit{complex}$ are the dimensionality of the simple and complex model respectively.
In line with the regret bounds found by \citet{yue09:inter}, the algorithm's confidence decreases when more parameters are introduced.
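This renormalization step can be sketched as follows (our own helper name; it corresponds to Line~\ref{line:cmgd:rescaling} of Algorithm~\ref{alg:cmgd}):

```python
import numpy as np

def rescale_for_model_switch(w_complex_init, w_simple, d_simple, d_complex):
    """Rescale the converted weights so their norm matches the simple
    model's confidence, shrunk by sqrt(D_simple / D_complex)."""
    target = np.linalg.norm(w_simple) * np.sqrt(d_simple / d_complex)
    return w_complex_init * (target / np.linalg.norm(w_complex_init))
```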
Lastly, another issue with swapping models is that some complex models cannot be initialized with the simpler model. For instance, if a non-linear kernel is used in the similarity model, then there is no equivalent linear model. This issue can be solved by interpreting the simpler model as another ranking signal. For the similarity model, a new feature $\phi_0(\mathbf{d}) = R_\mathit{sim}(\mathbf{d})$ can be added to a linear model with all weights set to zero except for the corresponding weight $w_0$. Since most models consist of linear combinations, this approach is usually applicable; e.g., regression forests and neural networks are both linear combinations (of trees or of output nodes), and adding the previous model to such a combination is straightforward. As a result, any combination of simple and complex models can be used by ultimately optimizing their linear combination.
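A sketch of this wrapping (our own helper; for illustration we use the linear-kernel similarity model, though the construction is meant for cases where no exact conversion exists): appending the simple model's score as an extra feature $\phi_0$ and initializing the complex linear model with weight one on $\phi_0$ and zero elsewhere reproduces the simple model's scores exactly.

```python
import numpy as np

def add_simple_model_as_feature(d, w_sim, D_M):
    """Append the simple model's score as an extra feature phi_0, so a
    complex linear(-combination) model can be initialized from it."""
    norms = np.linalg.norm(D_M, axis=1)
    phi0 = (w_sim / norms) @ (D_M @ d)  # R_sim(d)
    return np.concatenate(([phi0], d))
```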
\subsection{A walkthrough of \ac{C-MGD}}
Finally, \ac{C-MGD} is formulated in Algorithm~\ref{alg:cmgd}. As input, \ac{C-MGD} takes two ranking models $R_\mathit{simple}$ and $R_\mathit{complex}$ with dimensionalities $D_\mathit{simple}$ and $D_\mathit{complex}$.
\ac{C-MGD} will optimize its \emph{current best} weights $\mathbf{w}^0_t$ for its current model $R_*$. Initially, $R_*$ is set to the fast learner: $R_\mathit{simple}$ (Line~\ref{line:cmgd:initmodel}). Then, for each incoming query (Line~\ref{line:cmgd:query}) the ranking of the current model ($R_{*},\mathbf{w}^0_t$) is generated (Line~\ref{line:cmgd:generatelist}). Subsequently, $n$ candidates are sampled from the unit sphere around the current weights and the ranking of each candidate is generated (Line~\ref{line:cmgd:sample}--\ref{line:cmgd:candlist}). All of the rankings are then combined into a single multileaving \cite{schuth2015probabilistic} and displayed to the user (Line~\ref{line:cmgd:multileave}--\ref{line:cmgd:clicks}). Based on the clicks of the user, a preference between the candidates and the \emph{current best} can be inferred (Line~\ref{line:cmgd:pref}). If some candidates are preferred over the \emph{current best}, an update is performed to take an $\eta$ step towards them (Line~\ref{line:cmgd:update}). Otherwise, the \emph{current best} weights will be carried over to the next iteration. At this point \ac{C-MGD} checks for convergence by comparing the cosine similarity between the \emph{current best} and the weights from $h$ iterations before, $\mathbf{w}^0_{t-h}$ (Line~\ref{line:cmgd:convergence}).
If convergence is detected, \ac{C-MGD} switches to the complex model (Line~\ref{line:cmgd:swap}) and the \emph{current best} weights are converted to the new model space (Line~\ref{line:cmgd:project}).
The weights now have to be renormalized to account for the change in model space and rescaled for the difference in confidence (Line~\ref{line:cmgd:rescaling}).
Optimization now continues without the check for convergence.
The result is an algorithm that optimizes a cascade of two models, combining the advantages of both. For this study we only considered a cascade of two models; extending this approach to a larger number is straightforward.
\begin{algorithm}[t]
\caption{Cascading Multileave Gradient Descent (C-MGD).}
\label{alg:cmgd}
\begin{algorithmic}[1]
\STATE \textbf{Input}: $n$, $\delta$, $\mathbf{w}^0_0$, $h$, $\epsilon$, $R_\mathit{simple}$, $R_\mathit{complex}$, $D_\mathit{simple}$, $D_\mathit{complex}$
\STATE $R_{*} \gets R_\textit{simple}$ \label{line:cmgd:initmodel}
\FOR{$t \leftarrow 1..\infty$ }
\STATE $q_t \leftarrow \mathit{receive\_query}(t)$\hfill \textit{\small // obtain a query from a user} \label{line:cmgd:query}
\STATE $\mathbf{l}_0 \leftarrow \mathit{generate\_list}(R_{*},\mathbf{w}^0_t,q_t)$ \hfill \textit{\small // ranking of current best} \label{line:cmgd:generatelist}
\FOR{$i \leftarrow 1..n$}
\STATE $\mathbf{u}^i_t \leftarrow \mathit{sample\_unit\_vector}()$ \label{line:cmgd:sample}
\STATE $\mathbf{w}_t^i \leftarrow \mathbf{w}^0_t + \delta \mathbf{u}^i_t $ \hfill \textit{\small // create a candidate ranker}
\STATE $\mathbf{l}_i \leftarrow generate\_list(\mathbf{w_t^i},q_t,R_{*})$ \hfill \textit{\small // exploratory ranking} \label{line:cmgd:candlist}
\ENDFOR
\STATE $\mathbf{m}_t, \mathbf{t}_t \leftarrow \mathit{multileave}(\mathbf{l})$\hfill \textit{\small // multileaving and teams} \label{line:cmgd:multileave}
\STATE $\mathbf{c}_t \leftarrow \mathit{receive\_clicks}(\mathbf{m}_t)$\hfill \textit{\small // show multileaving to the user} \label{line:cmgd:clicks}
\STATE $\mathbf{b}_t \leftarrow \mathit{infer\_preferences}(\mathbf{t}_t,\mathbf{c}_t)$ \hfill \textit{\small // winning candidates} \label{line:cmgd:pref}
\STATE $\mathbf{w}^0_{t+1} \leftarrow \mathbf{w}^0_{t} + \eta \frac{1}{|\mathbf{b}_t|}\sum_{j \in \mathbf{b}_t} \mathbf{u}^j_t$ \hfill \textit{\small // winning set may be empty} \label{line:cmgd:update}
\IF{$t \geq h \land R_{*} = R_\mathit{simple} \land 1 - \cos(\mathbf{w}^0_{t+1},\mathbf{w}^0_{t-h}) < \epsilon$} \label{line:cmgd:convergence}
\STATE $R_{*} \gets R_\mathit{complex}$ \label{line:cmgd:swap}
\STATE $\mathbf{w}' \gets \mathit{convert}_{R_\mathit{simple} \rightarrow R_\mathit{complex}}(\mathbf{w}^0_{t+1})$ \hfill \textit{\small // new model space} \label{line:cmgd:project}
\STATE $\mathbf{w}^0_{t+1} \gets \mathbf{w}' \cdot \frac{\|\mathbf{w}^0_{t+1}\|}{\|\mathbf{w}'\|} \cdot \frac{\sqrt{D_\mathit{simple}}}{\sqrt{D_\mathit{complex}}}$\label{line:cmgd:rescaling}
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
\section{Experiments}
\label{sec:experiments}
This section describes the experiments we run to answer the research questions posed in Section~\ref{sec:intro}.
Firstly (\ref{rq:simmgd}), we are interested in whether \ac{Sim-MGD} provides a better user experience, i.e., online performance, than \ac{MGD}. Secondly (\ref{rq:cmgd}), we wish to know if \ac{C-MGD} is capable of dealing with the speed-quality tradeoff, that is, whether \ac{C-MGD} can provide the improved user experience of \ac{Sim-MGD} (online performance) while also having the optimal convergence of \ac{MGD} (offline performance).
Every experiment below is based around a stream of independent queries coming from users. The system responds to a query by presenting a list of documents to the user in an impression. The user may or may not interact with the list by clicking on one or more documents. The queries and documents come from static datasets (Section~\ref{sec:experiments:datasets}); users are simulated using click models (Section~\ref{sec:experiments:users}).
Our experiments are described in Section~\ref{sec:experiments:runs}
and our metrics in Section~\ref{sec:experiments:evaluation}.
\subsection{Datasets}
\label{sec:experiments:datasets}
Our experiments are performed over eleven publicly available \acs{OLTR} datasets of varying sizes, representing different search tasks. Each dataset consists of a set of queries and a set of corresponding documents for every query. While queries are represented only by their identifiers, feature representations and relevance labels are available for every document-query pair. Relevance labels are graded differently by the datasets depending on the task they model; for instance, the navigational datasets have binary labels for not relevant (0) and relevant (1), whereas most informational tasks have labels ranging from not relevant (0) to perfect relevancy (4).
Every dataset is divided into training, validation and test partitions.
The first publicly available \emph{Learning to Rank} datasets are distributed as LETOR 3.0 and 4.0~\cite{letor}; they use representations of 45, 46, or 64 features that encode ranking models such as TF.IDF, BM25, Language Modelling, PageRank, and HITS computed on different parts of the documents. The datasets in LETOR are divided by their tasks, most of which come from the TREC Web Tracks between 2003 and 2008 \cite{craswell2003overview,clarke2009overview}: \emph{HP\-2003}, \emph{HP\-2004}, \emph{NP\-2003}, and \emph{NP\-2004} are based on navigational tasks; both \emph{TD\-2003} and \emph{TD\-2004} implement the informational task of topic distillation. \emph{HP2003, HP2004, NP2003, NP2004, TD2003} and \emph{TD2004} each contain between 50 and 150 queries and 1,000 judged documents per query. The \emph{OH\-SU\-MED} dataset is based on a query log of the search engine on the MedLine abstract database and contains 106 queries. Lastly, the two most recent datasets, \emph{MQ2007} and \emph{MQ2008}, are based on the Million Query Track \cite{allan2007million} and consist of 1,700 and 800 queries, respectively, but have far fewer assessed documents per query.
In 2010 Microsoft released \emph{MSLR-WEB30k} and \emph{MSLR-WEB10K} \cite{Qin2013Letor}; the former consists of 30,000 queries obtained from a retired labelling set of a commercial web search engine (Bing), the latter is a subsample of 10,000 queries from the former dataset. Both datasets use 136 features to represent their documents, and each query has around 125 assessed documents.
For practical reasons only \emph{MSLR-WEB10K} was used for this paper.
Lastly, also in 2010, Yahoo! organised a public Learning to Rank Challenge \cite{Chapelle2011} with an accompanying dataset. This set consists of 709,877 documents, encoded in 700 features and sampled from query logs of the Yahoo! search engine, spanning 29,921 queries.
\subsection{Simulating user behavior}
\label{sec:experiments:users}
\begin{table}[tb]
\caption{Instantiations of Cascading Click Models~\cite{guo09:efficient} as used for simulating user behavior in experiments.}
\centering
\begin{tabularx}{\columnwidth}{ l c c c c c c c c c c }
\toprule
& \multicolumn{5}{c}{\small $P(\mathit{click}=1\mid R)$} & \multicolumn{5}{c}{\small $P(\mathit{stop}=1\mid R)$} \\
\small $R$ & \small \emph{$ 0$} & \small \emph{$ 1$} & \small \emph{$ 2$} & \small \emph{$ 3$} & \small \emph{$ 4$}
& \small \emph{$0$} & \small \emph{$ 1$} & \small \emph{$ 2$} & \small \emph{$ 3$} & \small \emph{$ 4$} \\
\midrule
\small \emph{perf} & \small 0.0 & \small 0.2 & \small 0.4 & \small 0.8 & \small 1.0 & \small 0.0 & \small 0.0 & \small 0.0 & \small 0.0 & \small 0.0 \\
\small \emph{nav} & \small ~~0.05 & \small 0.3 & \small 0.5 & \small 0.7 & \small ~~0.95 & \small 0.2 & \small 0.3 & \small 0.5 & \small 0.7 & \small 0.9 \\
\small \emph{inf} & \small 0.4 & \small 0.6 & \small 0.7 & \small 0.8 & \small 0.9 & \small 0.1 & \small 0.2 & \small 0.3 & \small 0.4 & \small 0.5 \\
\bottomrule
\end{tabularx}
\label{tab:clickmodels}
\end{table}
Users are simulated using the standard setup for \acs{OLTR} simulations \cite{Hofmann2013a,schuth2016mgd,oosterhuis2016probabilistic}. First, a user issues a query, simulated by uniformly sampling a query from the static dataset. Subsequently, the algorithm decides the result list of documents to display. The behavior of the user after receiving this result list is simulated using a \emph{cascade click model}~\cite{chuklin-click-2015,guo09:efficient}. This model assumes that a user examines the documents of the result list in their displayed order. For each document that is considered, the user decides whether it warrants a click. This is modelled as the conditional probability $P(\mathit{click}=1\mid R)$, where $R$ is the relevance label provided by the dataset. Accordingly, \emph{cascade click model} instantiations increase the probability of a click with the degree of the relevance label. After the user has clicked on a document, their information need may be satisfied; otherwise, they will continue by considering the remaining documents. The probability of the user not examining more documents after a click is modelled as $P(\mathit{stop}=1\mid R)$,
where it is more likely that the user is satisfied by a very relevant document. For this paper, $\kappa=10$ documents are displayed to the user at each impression.
Table~\ref{tab:clickmodels} lists the three instantiations of cascade click models that were used for this paper. The first models a \emph{perfect} user that considers every document and clicks on all relevant documents and nothing else.
Secondly, the \emph{navigational} instantiation models a user performing a navigational task and is mostly looking for a single highly relevant document. Finally, the \emph{informational} instantiation models a user without a very specific information need that typically clicks on multiple documents.
These three models have increasing levels of noise, as the behavior of each depends less on the relevance labels of the displayed documents.
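As an illustration, one impression under these click models can be simulated as follows. This is a minimal sketch: the function and constant names are ours, while the probabilities are taken from Table~\ref{tab:clickmodels}.

```python
import random

# P(click=1|R) and P(stop=1|R) for relevance labels R in {0,...,4},
# copied from the table for the perfect / navigational / informational users.
P_CLICK = {
    "perf": [0.0, 0.2, 0.4, 0.8, 1.0],
    "nav":  [0.05, 0.3, 0.5, 0.7, 0.95],
    "inf":  [0.4, 0.6, 0.7, 0.8, 0.9],
}
P_STOP = {
    "perf": [0.0, 0.0, 0.0, 0.0, 0.0],
    "nav":  [0.2, 0.3, 0.5, 0.7, 0.9],
    "inf":  [0.1, 0.2, 0.3, 0.4, 0.5],
}

def simulate_impression(relevance_labels, user="nav", rng=random):
    """Cascade click model: scan the displayed list top-down, click with
    P(click=1|R); after a click, stop examining with P(stop=1|R)."""
    clicks = []
    for rank, r in enumerate(relevance_labels):
        if rng.random() < P_CLICK[user][r]:
            clicks.append(rank)
            if rng.random() < P_STOP[user][r]:
                break  # information need satisfied, user leaves
    return clicks
```

Note that for the \emph{perfect} user the model is deterministic: every relevant document is clicked, no non-relevant document is, and the user never stops early.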
\subsection{Experimental runs}
\label{sec:experiments:runs}
As a baseline, Probabilistic-\acs{MGD}~\cite{oosterhuis2016probabilistic} is used. Based on previous work, this study uses $n=19$ candidates per iteration sampled from the unit sphere with $\delta=1$; updates are performed with $\eta = 0.01$ and weights are initialized as $\mathbf{w}^0_0 = \mathbf{0}$ \cite{yue09:inter, hofmann11:balancing, schuth2016mgd, oosterhuis2016probabilistic}. All runs last 10,000 impressions.
Probabilistic Multileaving inferences are computed using a sample-based method \cite{schuth2015probabilistic}, where the number of document assignments sampled for every inference is 10,000 \cite{oosterhuis2016probabilistic}.
\ac{Sim-MGD} uses $M=50$ reference documents that are selected from the training set at the start of each run.
The choice for $M=50$ was based on preliminary results on the evaluation sets.
Two selection methods are investigated: uniform sampling and k-means clustering. The clustering method uses $k=M$, i.e., producing a reference document for every cluster it finds. The expectation is that \ac{Sim-MGD} has a higher learning speed but is less expressive than \ac{MGD}, thus, we expect to see a substantial increase in online performance but a decrease in offline performance compared to \ac{MGD}.
Clustering is expected to provide reference documents that cover all kinds of documents better, potentially resulting in a further increase of online performance and a lower standard deviation compared to uniform sampling.
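The two selection strategies can be sketched as follows. This is our own minimal sketch: the paper's exact clustering configuration is not restated here, so plain Lloyd's algorithm with $k = M$ is an assumption, and each returned row stands in for one reference document of \acs{Sim-MGD}.

```python
import numpy as np

def select_reference_docs(features, M=50, method="kmeans", iters=20, seed=0):
    """Pick M reference documents from a feature matrix (n_docs x n_feats).

    'uniform' samples rows at random; 'kmeans' runs a minimal Lloyd's
    k-means with k = M and returns one centroid per cluster, so the
    reference set covers the document space more evenly.
    """
    rng = np.random.default_rng(seed)
    if method == "uniform":
        idx = rng.choice(len(features), size=M, replace=False)
        return features[idx]
    # k-means initialization: M random documents as starting centroids
    centroids = features[rng.choice(len(features), size=M, replace=False)]
    for _ in range(iters):
        # assign every document to its nearest centroid
        d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # move each centroid to the mean of its members (keep empty ones)
        for k in range(M):
            members = features[assign == k]
            if len(members):
                centroids[k] = members.mean(axis=0)
    return centroids
```

With k-means, dissimilar reference documents are favored by construction, which matches the intuition above that the resulting \acs{Sim-MGD} parameters are less correlated.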
Finally, to evaluate whether \ac{C-MGD} can successfully combine speed and quality of two models, \ac{C-MGD} is run with \ac{Sim-MGD} as $R_\textit{simple}$ (Equation~\ref{eq:simmodel}) and the linear model as $R_\textit{complex}$ (Equation~\ref{eq:mgdmodel}). If the cascade can successfully swap models then we expect to see no significant decrease in offline performance but a substantial increase in online performance compared to \ac{MGD}. When comparing to \ac{Sim-MGD} we expect a significant increase in offline performance due to \ac{C-MGD}'s ability to switch models. However, it is very likely that a slight decrease in online performance is observed, since the change of model space introduces more exploration. Lastly, the reference document selection methods are expected to have the same effects on \ac{C-MGD} as they have on \ac{Sim-MGD}.
\subsection{Metrics and tests}
\label{sec:experiments:evaluation}
The task in \acs{OLTR} consists of two parts: a ranker has to be optimized and users have to be served well during optimization. Accordingly, both aspects are evaluated separately.
\emph{Offline performance} considers the quality of the learned model by taking the average NDCG score of the \emph{current best ranker} over a held-out set.
Performance is assessed using the NDCG~\citep{jarvelin2002:cumulated} metric:
\begin{align}
\mathit{NDCG} = \sum^{\kappa}_{i=1} \frac{2^{\mathit{rel}(\mathbf{r}[i])}-1}{\log_2(i+1)} \mathit{iDCG}^{-1}.
\end{align}
This metric calculates the Discounted Cumulative Gain (DCG) over the relevance labels $\mathit{rel}(\mathbf{r}[i])$ for each document in the top $\kappa$ of a ranking. Subsequently, this is normalized by the maximal DCG possible for a query: the ideal DCG (iDCG).
This results in Normalized DCG (NDCG) which measures the quality of a single ranked list of documents.
Offline performance is averaged over a held-out set after 10,000 impressions to give an indication at what performance the algorithms converge.
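The metric above can be sketched in code as follows (a minimal sketch; the function names are ours, and $\kappa=10$ matches the display depth used in the experiments):

```python
import numpy as np

def dcg_at_k(relevances, k=10):
    """DCG over the top-k relevance labels: sum of (2^rel - 1) / log2(i + 1)."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))  # log2(i + 1) for i = 1..k
    return float(np.sum((2.0 ** rel - 1.0) / discounts))

def ndcg_at_k(ranked_relevances, all_query_relevances, k=10):
    """Normalize by the ideal DCG: the DCG of the best possible ordering."""
    ideal = dcg_at_k(sorted(all_query_relevances, reverse=True), k)
    return dcg_at_k(ranked_relevances, k) / ideal if ideal > 0 else 0.0
```

A ranking that places documents in decreasing order of relevance scores exactly $1.0$; any other order scores strictly less.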
Conversely, the user experience during training is essential as well, since deterring users during training would compromise the purpose of the system. \emph{Online performance} is assessed by computing the cumulative NDCG of the rankings shown to the users \cite{Hofmann2013a, sutton1998:introduction}. For $T$ successive queries this is the discounted sum:
\begin{align}
\mathit{Online\_Performance} = \sum_{t=1}^T \mathit{NDCG}(\mathbf{m}_t) \cdot \gamma^{(t-1)}
\end{align}
where $\mathbf{m}_t$ is the ranking displayed to the user at timestep $t$. This metric is common in \emph{online learning} and can be interpreted as the expected reward with $\gamma$ as the probability that another query will be issued.
For \emph{online performance} a discount factor of $\gamma = 0.9995$ was chosen so that queries beyond the horizon of 10,000 queries have a less than $1\%$ impact \cite{oosterhuis2016probabilistic}.
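The discounted sum can be sketched as follows (the function name is ours); with $\gamma = 0.9995$, the 10,000th impression indeed carries less than $1\%$ of the weight of the first:

```python
def online_performance(ndcg_per_impression, gamma=0.9995):
    """Discounted cumulative NDCG of the result lists shown during learning."""
    return sum(ndcg * gamma ** t for t, ndcg in enumerate(ndcg_per_impression))
```

For instance, $\gamma^{9999} \approx 0.007$, so rankings shown near the end of a run barely affect the score, while early mistakes are costly.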
Finally, all runs are repeated 125 times, spread evenly over the dataset's folds; results for each run are averaged and a two-tailed Student's t-test is used to verify whether differences are statistically significant \cite{zimmerman1987comparative}. In total, our experiments are based on over 200 million user impressions.
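A simplified version of this significance test can be sketched as follows. This is our own sketch: Welch's unequal-variance statistic with a normal approximation to the two-tailed p-value, which is accurate for sample sizes around the 125 repetitions used here; the paper's exact test configuration may differ.

```python
import math

def two_tailed_t_test(a, b):
    """Welch's two-sample t statistic and an approximate two-tailed p-value.

    For large samples the Student-t distribution is close to normal, so the
    p-value is approximated with the normal CDF via math.erf.
    """
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # unbiased variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    t = (ma - mb) / math.sqrt(va / na + vb / nb)
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(t) / math.sqrt(2.0))))
    return t, p
```

Identical samples give $t = 0$ and $p = 1$; clearly separated samples give a vanishing p-value.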
\section{Results and Analysis}
\label{sec:results}
\begin{table*}[tb]
\centering
\caption{Online performance (Discounted Cumulative NDCG, Section~\ref{sec:experiments:evaluation}) for different instantiations of CCM (Table~\ref{tab:clickmodels}). The standard deviation is shown in brackets, bold values indicate the highest performance per dataset and click model, significant improvements and losses over the \acs{MGD} baseline are indicated by $^{\vartriangle}$\ (p $<$ 0.05) and $^{\blacktriangle}$\ (p $<$ 0.01) and by $^{\triangledown}$\ and $^{\blacktriangledown}$, respectively.}
\input{tables/main-online}
\label{tab:online}
\end{table*}
\begin{table*}[tb]
\centering
\caption{Offline performance (NDCG) after 10,000 impressions, notation is identical to Table~\ref{tab:online}.}
\input{tables/main-offline}
\label{tab:offline}
\end{table*}
This section presents the results of our experiments and answers the research questions posed in Section~\ref{sec:intro}.
\subsection{Improving the user experience with \ac{Sim-MGD}}
First we consider \ref{rq:simmgd}: whether \ac{Sim-MGD} improves the user experience compared to \ac{MGD}.
\subsubsection{Online performance.}
Table~\ref{tab:online} (Columns 2--4) displays the online performance of \ac{Sim-MGD} and \ac{MGD}. In the large majority of cases \ac{Sim-MGD} provides a significant increase in online performance over \ac{MGD}, both with the uniform and k-means document selection strategies; e.g., under the perfect user model this holds for 7 out of 11 datasets with uniform selection and 8 out of 11 with k-means. Significant decreases in online performance are found for \emph{HP2003}, \emph{TD2003}, \emph{TD2004} and \emph{OHSUMED} for uniform and for
\emph{TD2003}, \emph{TD2004} and \emph{OHSUMED} for k-means. Interestingly, all of these datasets model informational tasks, which suggests that it is more difficult to create an appropriate reference set in these cases. Furthermore, the differences between \ac{Sim-MGD} and \ac{MGD} are consistent over the different click-models. Therefore, we conclude that \ac{Sim-MGD} is as robust to noise as \ac{MGD}.
Finally, Table~\ref{tab:online} (Columns 3--4) allows us to contrast the online performance of different document selection strategies: k-means beats uniform on the majority of datasets under all user models, and the noisier the user model is, the bigger the majority is. Therefore, it seems that clustering results in a faster learning speed of \ac{Sim-MGD}; this could be because k-means will provide more dissimilar reference documents. Hence, the parameters in \ac{Sim-MGD} will be less correlated making learning faster than for uniform sampling.
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{img/Main_offline_perfect}
\includegraphics[width=\columnwidth]{img/Main_offline_navigational}
\includegraphics[width=\columnwidth]{img/Main_offline_informational}
\caption{Offline performance (NDCG) of \ac{MGD}, the \acs{Sim-MGD} and \ac{C-MGD} (k-means initialization) on the NP2003 dataset under three click models.}
\label{fig:mainoffline}
\end{figure}
In conclusion, \ac{Sim-MGD} improves the user experience most of the time, but is not reliable as it may provide a significantly worse experience depending on the dataset.
\subsubsection{Offline performance.}
Table~\ref{tab:offline} (Columns 2--4) displays the offline performance of \ac{Sim-MGD} and \ac{MGD}. As predicted by the speed-quality tradeoff, we see that the convergence of \ac{Sim-MGD} after 10,000 impressions is substantially worse than \ac{MGD}. This suggests that the optimum found by \ac{MGD} can generally not be expressed by the similarity model in \ac{Sim-MGD}, i.e., it is not a linear combination of document features.
Figure~\ref{fig:mainoffline} shows the offline performance of \ac{MGD} and \acs{Sim-MGD} on the \emph{NP2003} dataset for the three click models. Here, the improved learning speed is visible as \ac{Sim-MGD} outperforms \ac{MGD} in the initial phase of learning; under more click noise, \ac{MGD} requires more impressions to reach the same performance. For the \emph{informational} click model, over 2,000 impressions are required for \ac{MGD} to reach the performance \ac{Sim-MGD} had in fewer than 200. However, it is clear that \ac{Sim-MGD} has an inferior point of convergence, as it is eventually overtaken by \ac{MGD} under all click models.
Lastly, Table~\ref{tab:offline} (Columns 3--4) shows the scores for \ac{Sim-MGD} with different reference document selection methods. The k-means selection method provides a higher online performance and a slightly better point of convergence. Therefore, it seems that clustering helps in selecting reference documents but has a limited effect.
To answer \ref{rq:simmgd}, \ac{Sim-MGD} improves the user experience in most cases, i.e., on most datasets and under all user models, with a consistent benefit for the k-means document selection strategy. As predicted by the speed-quality tradeoff, \ac{Sim-MGD} converges towards inferior rankings than \ac{MGD}, due to its less expressive model.
\subsection{Resolving the speed-quality tradeoff with \ac{C-MGD}}
Next, we address the speed-quality tradeoff with \ref{rq:cmgd}: whether \ac{C-MGD} is capable of improving the user experience while maintaining the state-of-the-art convergence point.
\subsubsection{Learning speed} To evaluate the user experience, the online performance of \ac{C-MGD} and \ac{MGD} can be examined in Table~\ref{tab:online} (Column~2 vs.~5 and~6). The online performance of \ac{C-MGD} is predominantly a significant improvement over that of \ac{MGD}. Moreover, when k-means document selection is used, no significant decreases are measured on any dataset or click model. Even on datasets where \ac{Sim-MGD} performs significantly worse than \ac{MGD} in terms of online performance, no significant decrease is observed for \ac{C-MGD}. Thus, \ac{C-MGD} deals with the inferior performance of its starting model by effectively switching to a more expressive model space.
\subsubsection{Quality convergence}
Furthermore, the quality side of the tradeoff is examined by considering the offline performance after 10,000 impressions, displayed in Table~\ref{tab:offline} (Column~2 vs.~5 and~6).
In the vast majority of cases \ac{C-MGD} shows no significant change in offline performance compared to \ac{MGD}.
For \acs{C-MGD} with uniform selection only four instances of significant decreases in offline performance w.r.t.\ \acs{MGD} are found, scattered over different datasets and user models; this number is further reduced when k-means selection is used. Only for MQ2008 under the informational user model is this difference greater than 0.1 NDCG. In all other cases, the offline performance of \acs{MGD} is maintained by \acs{C-MGD} or slightly improved. We conclude that \acs{C-MGD} converges towards rankings of the same quality as \acs{MGD}.
\subsubsection{Switching models}
Lastly, we consider whether \ac{C-MGD} is able to effectively switch between model spaces. As discussed in the previous paragraphs, Tables~\ref{tab:online} and~\ref{tab:offline} show that \ac{C-MGD} improves the user experience of \acs{MGD} while maintaining the final performance at convergence. This switching of models can also be observed in Figure~\ref{fig:mainoffline}, where the offline performance of \ac{Sim-MGD}, \ac{MGD} and \ac{C-MGD} on the \emph{NP2003} dataset for the three click models is displayed. As expected, we see that initially \acs{Sim-MGD} learns very fast and converges in fewer than 300 impressions; \ac{C-MGD} has the same performance during this period. When convergence of \acs{Sim-MGD} is approached, \ac{C-MGD} switches to the linear model. A small drop in NDCG is visible when this happens under the informational click model. However, from this point on \ac{C-MGD} uses the same model as \ac{MGD} and eventually reaches a higher performance than it had before the switch was made. This indicates that the switch was made effectively but had some minor short-term costs, which can be attributed to the change in confidence: after switching, \ac{C-MGD} will perform more exploration in the new model space. As a result, \ac{C-MGD} may explore inferior parts of the model space before oscillating towards the optimum. Despite these costs, when switching \acs{C-MGD} is able to provide a reliable improvement in user experience over \ac{MGD} while \ac{Sim-MGD} cannot (Table~\ref{tab:online}). Thus we conclude that the switching of models is done effectively by \ac{C-MGD}, as evidenced by the reliable improvement in online performance over \acs{MGD} combined with the same final offline performance.
In conclusion, we answer \ref{rq:cmgd} positively: our results show that in spite of the speed-quality tradeoff, \acs{C-MGD} improves the user experience of \acs{MGD} while still converging towards the same quality rankings. These findings were made across eleven datasets and varying levels of noise in the user models employed.
\section{Conclusion}
\label{sec:conclusion}
In this paper we have addressed the speed-quality tradeoff that has been facing the field of \ac{OLTR}. Expressive models are capable of learning the optimal rankings but require more user interactions and, as a result, frustrate more users during training. To put it bluntly: the models that converge on the best rankings frustrate users for the longest period in the initial phase of learning.
As a solution we have introduced two methods. The first method is a ranking model that ranks by feature similarities with reference documents (\acs{Sim-MGD}). \acs{Sim-MGD} learns faster and consequently provides a much better initial user experience. As predicted by the speed-quality tradeoff it converges towards rankings inferior to \acs{MGD}. The second is a cascading approach, \acs{C-MGD}, that deals with the speed-quality tradeoff by using a cascade of models. Initially the simplest model in the cascade interacts with the users until convergence is detected; at this point a more expressive model continues the learning process. By doing so the cascade combines the best of both models: fast initial learning speed and optimal convergence.
The introduction of \acs{C-MGD} opens an array of possibilities.
A natural extension is to consider expressive models that have been successful in Offline-\acs{LTR} and place them in \acs{C-MGD}, since the short-term user experience can be addressed by \ac{C-MGD}. E.g., an \acs{OLTR} version of LambdaMART \cite{burges2010ranknet} could be appended to
a cascade that starts with the \acs{Sim-MGD} model, then switches to \acs{MGD}, and finally switches to a novel \acs{OLTR} regression forest.
Currently, there is no \acs{OLTR} method of gradient estimation for non-linear structures like regression trees: the introduction of \acs{C-MGD} removes an important hurdle for research into such methods. Additionally, an initialization method has to be introduced to enable the switch between such models. Ideally, the cascading approach should be extended to predict whether switching model space will have a positive effect; multileaving may be adapted to infer such differences.
\section{Introduction}
Axion-like particles (ALPs) are hypothetical pseudoscalar particles appearing in
several extensions of the Standard Model \cite{Irastorza:2018dyq}. Original axions
were introduced in order to resolve the strong CP-problem in QCD
\cite{Peccei:1977hh,Peccei:1977ur}. Later, it was argued that ALPs can appear in a
low-energy phenomenological description of string theory
\cite{Svrcek:2006yi,Arvanitaki:2009fg}.
The efforts toward the searches for ALPs include such types of experiments as:
helioscopes~\cite{Anastassopoulos:2017ftl},
haloscopes~\cite{Kahn:2016aff,Asztalos:2001tf,JacksonKimball:2017elr},
light-shining-through-wall (LSW)
experiments~\cite{Spector:2019ooq,Tan:2018tcs}, space-based gamma-ray telescopes~\cite{Egorov:2020cmx},
accelerator-based experiments~\cite{Feng:2018pew,Berlin:2018bsc,Dusaev:2020gxi,Banerjee:2020fue},
neutrino experiments~\cite{Brdar:2020dpr} and reactor experiments~\cite{Dent:2019ueq}.
In addition, astrophysical and cosmological observations imply that ALPs are well-motivated candidates for the dark matter (DM) content of the Universe \cite{Preskill:1982cy, Abbott:1982af, Dine:1982ah}.
Moreover, several exotic scenarios of DM can be associated with
ALPs~\cite{Berezhiani:1989fp, Berezhiani:1992rk}.
The properties of axion dark matter are sensitive to the self-interaction parameters. In particular, such dark matter can clump into miniclusters \cite{Hogan:1988mp, Kolb:1993zz} or form other inhomogeneous structures \cite{Vilenkin:1984ib,Sakharov:1994id, Sakharov:1996xg}. The axion field can also form exotic compact objects (Bose stars \cite{Colpi:1986ye, Tkachev:1991ka}), providing a possible explanation for fast radio bursts \cite{Levkov:2020txo, Buckley:2020fmh}.
More generally, the axion field $a$ with mass $m_a$ and dimensionful coupling to
photons $g_{a\gamma\gamma}$ is described by the Lagrangian
\begin{equation}\label{Lagrangian}
\mathcal{L}=-\frac{1}{4} F_{\mu\nu}F^{\mu\nu}+\frac12(\partial_\mu a)^2 - \frac12{m_a}^2 a^2 +\frac{g_{a\gamma\gamma}}{4}\,a\,F_{\mu\nu}\tilde{F}^{\mu\nu}\;,
\end{equation}
where $F_{\mu\nu}$ is the electromagnetic tensor and $\tilde{F}^{\mu\nu}=\epsilon^{\mu\nu\rho\sigma}F_{\rho\sigma}/2$ is its dual.
The Lagrangian~\eqref{Lagrangian} yields the following
equations for axion and electromagnetic fields,
\begin{equation}
(\partial_\mu \partial^\mu +{m_a}^2)\,a=\frac{g_{a\gamma\gamma}}{4} F_{\mu\nu}\tilde{F}^{\mu\nu}\, ,
\label{EqForAx}
\end{equation}
\begin{equation}
\label{Max-eqns}
\partial_\mu F^{\mu\nu}=g_{a\gamma\gamma}\,\tilde{F}^{\mu\nu}\partial_\mu a.
\end{equation}
If the electromagnetic invariant $F_{\mu\nu}\tilde{F}^{\mu\nu} = - 4 (\Vec{E}\cdot\Vec{B})$ is non-vanishing, then Eq.~\eqref{EqForAx} implies that the axion field can be produced.
This may be realized in the laboratory by a combination of two strong electromagnetic (EM) waves\footnote{For a monochromatic EM wave in vacuum $(\Vec{E}\cdot\Vec{B})$ vanishes.} or by a single EM wave in a strong magnetic field. A strong enough EM field with a high level of coherence can be produced in the optical range by lasers or in the radio-frequency range inside SRF cavities.
The axion field, once produced, may interact with the EM field in a non-linear way according to Eq.~\eqref{Max-eqns}, so that the axion-induced EM field may be detected within the same production cavity~\cite{Bogorad:2019pbu} or within an additional detection cavity~\cite{Hoogeveen:1992nq,Janish:2019dpr,Gao:2020anb}. In the latter case both cavities should be screened in order to suppress the penetration of external EM fields. This setup illustrates the so-called LSW type of laboratory experiments for axion searches: both the production and detection cavities are filled with a strong magnetic field in order to initiate effective axion-photon conversion.
This setup was proposed and realized for both optical \cite{VanBibber:1987rq} and RF ranges \cite{Hoogeveen:1992nq}.
Both the optical LSW experiment ALPS \cite{Ehret:2010mh} and the RF experiment CROWS \cite{Betz:2013dza} give constraints of the same order of magnitude\footnote{A constraint of the same order of magnitude comes from the optical polarization experiment PVLAS \cite{Zavattini:2012zs}.} (for details see Ref.~\cite{Irastorza:2018dyq}) in the plane $(g_{a\gamma\gamma},m_a)$. In particular, for small axion masses one has $g_{a\gamma\gamma} \lesssim 10^{-7}\,\mbox{GeV}^{-1}$, which is still the best purely laboratory constraint. Although significantly better constraints (up to $g_{a\gamma\gamma} \lesssim 10^{-10}\, \mbox{GeV}^{-1}$) come from null results of dark matter searches or solar axion detection (see, e.~g., Ref.~\cite{Anastassopoulos:2017ftl}), these constraints are sensitive to the model of axion production. On the other hand, the production of ALPs in laboratory experiments is straightforward. In this sense, cosmic and laboratory methods for ALP searches complement each other.
The classical LSW setup requires an external magnetic field in both the production and detection cavities. However, the quality factor of such cavities is constrained at the level $Q\lesssim 10^5$, which limits the amplitude of the cavity modes and therefore the sensitivity of ALP detection. A much larger quality factor, $Q\sim 10^{12}$, can be achieved with superconducting radio-frequency (SRF) cavities, but the price to be paid is that one cannot apply a strong magnetic field inside the cavity due to degradation of the superconducting state.
The maximal amplitudes of SRF cavity modes are constrained by the overall magnetic field near the cavity walls. In particular, for superconducting niobium \cite{Kittel} the critical magnetic field is $\sim 0.2$~T. Given that constraint, the authors of Ref.~\cite{Janish:2019dpr} suggested an LSW setup involving a cylindrical SRF production cavity and a toroidal SRF detector. Moreover, it was pointed out that the sensitivity depends essentially on the geometrical form factors of the emitter and converter cavities.
In our paper we address this issue in detail. In particular, we discuss spatial distributions of the produced axion field in cylindrical SRF cavities. We also estimate the detection sensitivity of a cylindrical RF cavity ($Q\sim 10^{5}$) filled with a strong magnetic field, which can reach $10$~T. Recently, a similar setup was suggested in Ref.~\cite{Gao:2020anb}; in particular, the authors discuss an LSW facility to probe ALPs with two screened cylindrical SRF cavities, which serve as emitter and receiver of the axion field, respectively. That setup allows one to achieve relatively large quality factors, $Q\sim 10^{12}$; however, the peak EM field in the cavities is constrained to $\lesssim 0.2$~T. We show that the sensitivity of our LSW setup, $g_{a\gamma\gamma} \lesssim \mathcal{O}(1)\times 10^{-11}\, \mbox{GeV}^{-1}$, is comparable to that obtained in Ref.~\cite{Gao:2020anb}.
In Ref.~\cite{Bogorad:2019pbu} the authors suggested a setup to probe ALPs inside a single SRF cavity filled with different modes; axion-like particles induce a non-linear coupling between those modes.
The absence of an external magnetic field allows one to achieve a quality factor $Q\sim 10^{12}$.
However, the sensitivity is limited by the relatively small magnitude of the magnetic field, $\lesssim 0.2$~T, allowed in SRF cavities.
We show that our LSW setup is also sensitive to ALPs in a region of parameter space close to the one discussed in Ref.~\cite{Bogorad:2019pbu}.
This paper is organized as follows. In Sec.~\ref{sec:axion-production} we consider ALP production using the Green function approach. In Sec.~\ref{sec:num-results} we present results of numerical calculations of the time-averaged energy density of the axion field generated by different pairs of cylindrical cavity eigenmodes. In Sec.~\ref{sec:detection} we consider ALP detection in an RF cavity with a strong magnetic field and estimate constraints in the parameter space $(g_{a\gamma\gamma}, m_a)$.
In Sec.~\ref{sec:discuss} we discuss the obtained results. The appendices contain technical details.
\section{ALP production in a superconducting cavity}\label{sec:axion-production}
In this section we consider production of the axion field by electromagnetic radio-frequency modes pumped into a superconducting cavity. The generated axion
field is described by a solution of Eq.~\eqref{EqForAx} respecting causality. In particular, it is given by
\begin{equation}\label{Green}
a(\Vec{x},t) = \int\limits_{-\infty}^{\infty} dt'
\int\limits_{V_{\mathrm{cav}}}d^3x'\,G_{\mathrm{ret}}(\vec{x}-\vec{x}',t-t')\;\times\; \left[ -g_{a\gamma\gamma}\,\left( \vec{E}(\vec{x}',t')\cdot \vec{B}(\vec{x}',t') \right)\right]\;
\end{equation}
where $G_{\mathrm{ret}}(\vec{x}-\vec{x}',t-t')$ is the retarded Green function,
$\vec{E}(\vec{x}',t')$ and $\vec{B}(\vec{x}',t')$ are the electric and magnetic fields, respectively, inside the cavity of volume $V_{\mathrm{cav}}$. Since for a single cavity mode the electric field is orthogonal to the magnetic one, at least two cavity modes are necessary for ALP production. Therefore, one writes $\vec{E}(\Vec{x},t)=\vec{E}_1(\Vec{x},t)+\vec{E}_2(\Vec{x},t)$ and $\vec{B}(\Vec{x},t)=\vec{B}_1(\Vec{x},t)+\vec{B}_2(\Vec{x},t)$ for the electric and magnetic fields, respectively, where the subscripts refer to the cavity modes at given frequencies $\omega_{1,\,2}$.
The time dependence for each mode decouples as follows
$$ \vec{E}_i(\Vec{x},t)=\sqrt{2} \Re \mathrm{e} \left[ \Vec{{\cal E}}_i(\vec{x},\omega_i) e^{-i\omega_i t} \right]\;, \qquad\qquad \vec{B}_i(\Vec{x},t)=\sqrt{2} \Re \mathrm{e} \left[ \Vec{{\cal B}}_i(\vec{x},\omega_i) e^{-i\omega_i t} \right]\;, $$
where $\Vec{{\cal E}}_i(\vec{x},\omega_i)$ and $\Vec{{\cal B}}_i(\vec{x},\omega_i)$ are the cavity eigenmodes; we do not fix their normalization, so that their amplitudes remain arbitrary. The electromagnetic invariant $(\vec{E}\cdot\vec{B})$ for two modes can be represented as
\begin{equation}\label{Fpm}
\left( \vec{E}(\vec{x},t)\cdot \vec{B}(\vec{x},t) \right) = \Re \mathrm{e}\left[ F_+(\Vec{x})\cdot\mathrm{e}^{-i\omega_+ t} + F_-(\Vec{x})\cdot\mathrm{e}^{-i\omega_- t}\right]\;,
\end{equation}
where we defined $\omega_\pm=\omega_2 \pm \omega_1$ and
$$ F_+(\Vec{x}) \equiv \vec{{\cal E}}_1(\Vec{x}) \cdot \vec{{\cal B}}_2(\Vec{x}) + \vec{{\cal E}}_2(\Vec{x}) \cdot \vec{{\cal B}}_1(\Vec{x})\;, \qquad F_-(\Vec{x}) \equiv \vec{{\cal E}}_1^*(\Vec{x}) \cdot \vec{{\cal B}}_2(\Vec{x}) + \vec{{\cal E}}_2(\Vec{x}) \cdot \vec{{\cal B}}_1^*(\Vec{x})\;.
$$
Note that we consider cavity modes where the electric and magnetic fields are orthogonal, so that
$(\vec{{\cal E}}_i(\vec{x})\cdot\vec{{\cal B}}_i(\vec{x}))=0$ everywhere inside the cavity.
Since Eq.~(\ref{EqForAx}) is linear with respect to $a$, one has independent
propagation for each frequency component in Eq.~(\ref{Fpm}). The produced axion
field is then given by $a(\Vec{x},t)=a_+(\Vec{x},t)+a_-(\Vec{x},t)$, where
$a_\pm(\Vec{x},t)$ are independent components for $\omega_\pm$. One can easily
integrate out $t'$ in Eq.~\eqref{Green} for two cases, depending on the axion mass (see, e.~g., Appendix~\ref{app}),
\begin{equation}\label{GretTileSup2}
m_a< \omega_\pm:\qquad \qquad a_\pm(\Vec{x},t) = -g_{a\gamma\gamma} \; \Re \mathrm{e}\,\int\limits_{V_{\mathrm{cav}}} d^3x'\,\frac{F_\pm(\Vec{x}')}{4\pi |\Vec{x}-\Vec{x}'|} \mathrm{e}^{-i\omega_\pm t+i|\Vec{x}-\Vec{x}'|k_\pm }\;.
\end{equation}
\begin{equation}\label{GretTileSup3}
m_a> \omega_\pm:\qquad \qquad a_\pm(\Vec{x},t) = -g_{a\gamma\gamma} \; \Re \mathrm{e}\,\int\limits_{V_{\mathrm{cav}}} d^3x'\,\frac{F_\pm(\Vec{x}')}{4\pi |\Vec{x}-\Vec{x}'|} \mathrm{e}^{-i\omega_\pm t-|\Vec{x}-\Vec{x}'|\kappa_\pm }\;,
\end{equation}
where $k_\pm \equiv \sqrt{\omega_\pm^2-m_a^2}$. For the case $m_a>\omega_\pm$ Eq.~(\ref{GretTileSup3}) is obtained formally by a replacement of $i k_\pm$ in Eq.~(\ref{GretTileSup2}) by $\kappa_\pm = \sqrt{m_a^2-\omega_\pm^2}$, so that the axion field amplitude decreases exponentially far from the cavity volume.
We note that the functions $F_\pm(\Vec{x}')$ are generally complex, which may give an additional phase to the integrand in Eqs.~(\ref{GretTileSup2})-(\ref{GretTileSup3}).
Since the axion field oscillates harmonically, the time-averaged $a_\pm(\vec{x},t)$ vanishes. Instead, we consider the following time average,
\begin{equation}
\braket{a^2}= \frac{1}{T}\int\limits^{T}_0 dt \,\, a^2(t) = \frac{1}{T} \int\limits^{T}_0 dt \,\, (a_+(t) + a_-(t))^2 = \braket{a_+^2} +\braket{a_-^2}\;, \label{eq:averaging}
\end{equation}
where the mixed term $\braket{a_+a_-}$ vanishes because $a_+a_-$ is a sum of products of two harmonic functions with different frequencies and averaging over time period yields zero.
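This cancellation is easy to verify numerically; the toy sketch below (with made-up frequencies and phases, chosen only for illustration) averages products of harmonics over a long window:

```python
import numpy as np

# Toy check that <a_+ a_-> averages to zero when omega_+ != omega_-,
# while the diagonal terms <a_pm^2> survive the averaging (<cos^2> = 1/2).
t = np.linspace(0.0, 2000.0, 200_001)   # long window, fine uniform grid
a_plus = np.cos(2.0 * t + 0.3)          # arbitrary frequencies and phases
a_minus = np.cos(3.0 * t - 1.1)

mixed = float(np.mean(a_plus * a_minus))  # cross term: tends to 0
diag = float(np.mean(a_plus ** 2))        # diagonal term: tends to 1/2
```

The residuals scale as $1/(\omega T)$, so the cross term is suppressed further the longer the averaging window.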
For two non-zero terms in Eq.~(\ref{eq:averaging}) we obtain
\begin{align}
m_a< \omega_\pm:\qquad \qquad
&\braket{a^2_\pm} = \frac{1}{2}\left(\left(A_\pm^C\right)^2 + \left(A_\pm^S\right)^2\right)\;,\\
m_a>\omega_\pm:\qquad \qquad
&\braket{a^2_\pm} = \frac{1}{2}B_\pm^2\;,
\end{align}
where
\begin{equation}
\label{ACAS}
A^{C(S)}_\pm = g_{a\gamma\gamma} \, \int\limits_{V_{\mathrm{cav}}} d^3x'\,
\frac{\left|F_\pm(\vec{x}')\right|}{4\pi|\Vec{x}-\Vec{x}'|} \left\{\begin{array}{c}
\cos \left(k_\pm |\Vec{x}-\Vec{x}'|\right) \\
\sin \left(k_\pm |\Vec{x}-\Vec{x}'|\right)
\end{array} \right\}\;,
\end{equation}
\begin{equation}
\label{Bpm}
B_\pm = g_{a\gamma\gamma} \, \int\limits_{V_{\mathrm{cav}}} d^3x'\,
\frac{\left|F_\pm(\vec{x}')\right|}{4\pi |\Vec{x}-\Vec{x}'|} \mathrm{e}^{-\kappa_\pm |\Vec{x}-\Vec{x}'|}\;.
\end{equation}
In the resonant case $m_a = \omega_\pm$ one has $k_\pm=\kappa_\pm=0$, so that
\begin{equation}
\label{Ares}
\left. A^C_\pm \right|_{\mathrm{res}}= \left. B_\pm \right|_{\mathrm{res}} = g_{a\gamma\gamma} \, \int\limits_{V_{\mathrm{cav}}} d^3x'\,
\frac{\left|F_\pm(\vec{x}')\right|}{4\pi |\Vec{x}-\Vec{x}'|}\;, \qquad \left. A^S_\pm \right|_{\mathrm{res}}=0\;.
\end{equation}
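The resonant integral can be estimated numerically, e.g.\ by Monte Carlo sampling over the cavity volume. The sketch below is ours and uses a constant toy profile $|F_\pm| = 1$ instead of a real mode-overlap function; far from the cavity the result should approach $g_{a\gamma\gamma} V_{\mathrm{cav}} |F|/(4\pi d)$, with $d$ the distance to the cavity.

```python
import numpy as np

def resonant_amplitude(x_obs, R=0.1, L=0.3, g_agg=1.0,
                       F_abs=lambda p: 1.0, n_samples=200_000, seed=1):
    """Monte Carlo estimate of g * int_Vcav d^3x' |F(x')| / (4 pi |x - x'|)
    over a cylinder of radius R and height L (axis along z, base at z = 0)."""
    rng = np.random.default_rng(seed)
    # uniform sampling inside the cylinder: rho = R*sqrt(u) is uniform in area
    rho = R * np.sqrt(rng.random(n_samples))
    phi = 2.0 * np.pi * rng.random(n_samples)
    z = L * rng.random(n_samples)
    pts = np.stack([rho * np.cos(phi), rho * np.sin(phi), z], axis=1)
    r = np.linalg.norm(pts - np.asarray(x_obs, dtype=float), axis=1)
    volume = np.pi * R**2 * L
    return float(g_agg * volume * np.mean(F_abs(pts) / (4.0 * np.pi * r)))
```

Close to the cavity the full integral is needed, while in the far zone the estimate reproduces the $1/d$ fall-off of a point-like source.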
Next, far away from the cavity the integral $\int d^3x' \left|F_\pm(\vec{x}')\right| /|\Vec{x}-\Vec{x}'|$ is suppressed
for two transverse magnetic modes in the cavity (TM+TM),
since their overlap factor $\int d^3x' \left|F_\pm(\vec{x}')\right|$
is negligible. In that case the resonant production of ALPs in the SRF cavity is
ineffective. In contrast, for two transverse electric modes (TE+TE) or a combination of transverse magnetic and electric modes (TM+TE) this overlap integral can be significant, so the resonant production of ALPs for these combinations of modes is more efficient.
To conclude this section, let us list the formulae for the energy density of the generated axion field,
\begin{equation}
\rho^E_\pm = \frac{1}{2}\dot{a}^2 + \frac{1}{2}(\partial_i a)^2 + \frac{m_a^2}{2}a^2\;. \label{eq:energy-den}
\end{equation}
In particular, for various masses $m_a$ of the axion field the time-averaged value $\langle\rho^E_\pm\rangle$ can be written as follows,
\begin{align}
m_a < \omega_\pm:& \qquad
\braket{\rho^E_\pm} =\frac{1}{4}
\left(
\left[m_a^2+\omega^2_\pm\right]((A_{\pm}^S)^2+(A_{\pm}^C)^2)+
(\partial_i A_{\pm}^S)^2+(\partial_i A_{\pm}^C)^2
\right)\;,
\\
m_a> \omega_\pm:& \qquad
\braket{\rho^E_\pm} =\frac{1}{4}
\left(
\left[m_a^2+\omega^2_\pm\right]B_{\pm}^2+
(\partial_i B_{\pm})^2
\right)\;,
\end{align}
where $A_{\pm}^{C(S)}$ and $B_\pm$ are given by
Eqs.~(\ref{ACAS})-(\ref{Bpm}). In Sec.~\ref{sec:num-results} we discuss the spatial distributions of these quantities and study their properties for the resonant case $m_a \simeq \omega_+$.
\section{Numerical results for ALP production in cylindrical cavity}\label{sec:num-results}
In this Section we consider axion production in a cylindrical cavity. We use
TE$npq$/TM$npq$ notation to classify EM cavity modes \cite{Hill}. Given a height
$L$ and a radius $R$ of the cavity one has the following dispersion relations
for TM and TE modes respectively,
\begin{equation}
\omega^{TM}_{npq} =\sqrt{\left(\frac{x_{np}}{R}\right)^2 + \left( \frac{q\pi}{L} \right)^2}, \qquad\qquad \omega^{TE}_{npq} =\sqrt{\left(\frac{x'_{np}}{R}\right)^2 + \left( \frac{q\pi}{L} \right)^2}\;,
\end{equation}
where $x_{np}$ and $x'_{np}$ are the $p$-th zeros of the $n$-th order Bessel function $J_n(x)$ and of its derivative $J'_n(x)$, respectively. The integers $n,\,p,\,q$ enumerate the full set of modes and refer to the ``winding''
numbers in the $\phi,\,\rho,\, z$ directions, respectively. The explicit expressions
for the given set of modes are presented in Appendix~\ref{AppB}.
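As a quick numerical illustration of these dispersion relations (not part of the original analysis), one can evaluate the pump-mode frequencies used below. The Bessel zeros are hardcoded to avoid external dependencies (\texttt{scipy.special.jn\_zeros}/\texttt{jnp\_zeros} would give the same values), and $\hbar c$ converts inverse meters to eV in natural units:

```python
import math

# First zeros of J_0 (TM modes) and of J_0' (TE modes), hardcoded.
X_01 = 2.404825557695773    # first zero of J_0
XP_01 = 3.831705970207512   # first zero of J_0' (= first zero of J_1)

HBAR_C = 1.973269804e-7     # eV * m, converts 1/m to eV in natural units

def omega_mode(radial_zero, q, R, L):
    """Frequency (in eV) of a cavity mode with radial eigenvalue radial_zero/R
    and q axial half-wavelengths; same formula for TM (x_np) and TE (x'_np)."""
    return HBAR_C * math.sqrt((radial_zero / R) ** 2 + (q * math.pi / L) ** 2)

R = L = 1.0                            # cavity dimensions in meters
w_tm010 = omega_mode(X_01, 0, R, L)    # TM010
w_te011 = omega_mode(XP_01, 1, R, L)   # TE011

w_plus, w_minus = w_tm010 + w_te011, abs(w_tm010 - w_te011)
print(w_tm010, w_te011, w_minus, w_plus)
```

For $R=L=1$~m this gives $\omega_{TM010}\simeq 4.7\cdot 10^{-7}$~eV and $\omega_{TE011}\simeq 9.8\cdot 10^{-7}$~eV, hence $\omega_-\simeq 5.0\cdot 10^{-7}$~eV and $\omega_+\simeq 1.45\cdot 10^{-6}$~eV, in agreement with the values quoted in Fig.~\ref{fig:1} at the few-percent level.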
One has to consider the following combinations of pump cavity modes:
(i) TM+TM, (ii) TE+TE, (iii) TE+TM.
It is straightforward to show, using the expressions from Appendix~\ref{AppB}, that
for cases (i) and (ii) the functions $F_+$ and $F_-$ are purely imaginary, while for
case (iii) $F_+$ and $F_-$ are purely real. Since ${\cal E}^{TE}_z={\cal B}^{TM}_z=0$ one has
\begin{equation}
\label{FTETM}
(TE+TM): \qquad \left| F_\pm\right| = \left| {\cal E}_z^{TM}{\cal B}_z^{TE} + {\cal E}_\rho^{TM}{\cal B}_\rho^{TE} + {\cal E}_\phi^{TM}{\cal B}_\phi^{TE} \pm \left( {\cal E}_\rho^{TE}{\cal B}_\rho^{TM} + {\cal E}_\phi^{TE}{\cal B}_\phi^{TM} \right) \right|\;,
\end{equation}
\begin{equation}
\label{FTT}
(TE+TE\text{ or }TM+TM): \qquad\left| F_\pm\right| = \left| {\cal E}_\rho^1{\cal B}_\rho^2 + {\cal E}_\phi^1{\cal B}_\phi^2 \pm \left( {\cal E}_\rho^2{\cal B}_\rho^1 + {\cal E}_\phi^2{\cal B}_\phi^1 \right) \right|\;.
\end{equation}
The functions $F_\pm$ vanish for cases (i) and (ii) whenever the ``winding'' numbers satisfy $n_1=n_2=0$. On the other hand, two modes with zero $n$ give
non-vanishing terms in case (iii).
We performed numerical calculations\footnote{Multidimensional numerical integration in \cite{Github} is based on the package \cite{package}.} \cite{Github}
of the time-averaged axion energy density $\braket{\rho^E_\pm}$ generated by two TE/TM modes in a cylindrical cavity of various dimensions $R$ and $L$. We assumed $g_{a\gamma\gamma}=10^{-10}\,\mbox{GeV}^{-1}$ as a benchmark for the ALP coupling. One also has to fix the amplitudes ${\cal E}_0$ (${\cal B}_0$), which appear as normalization constants in the expressions for the mode components ${\cal E}_z^{TM}$ (${\cal B}_z^{TE}$) of Appendix~\ref{AppB}. Their maximum values are limited by the requirement that the magnetic field on the superconducting cavity walls must not exceed the critical value $\sim 0.1$~T. We note that the components ${\cal E}^{TM}_\rho$ (${\cal B}^{TE}_\rho$) and ${\cal E}^{TM}_\phi$ (${\cal B}^{TE}_\phi$) can be larger than ${\cal E}_0$ (${\cal B}_0$) for a ``pancake-like'' design of the cylindrical cavity with $R\gg L$. Therefore, in the numerical calculations we require typical values $|\Vec{{\cal E}}|$, $|\Vec{{\cal B}}| \lesssim 0.1$~T for both TM and TE modes.
\begin{figure}[h!]
\begin{minipage}[h]{0.49\linewidth}
\center{\includegraphics[width=1\linewidth]{minusTM010TE011m.pdf}}
\center{\includegraphics[width=1\linewidth]{minusTM010TE011rho.pdf}}
\end{minipage}
\hfill
\begin{minipage}[h]{0.49\linewidth}
\center{\includegraphics[width=1\linewidth]{TM010TE011m.pdf}}\\
\center{\includegraphics[width=1\linewidth]{TM010TE011rho.pdf}}
\end{minipage}
\caption{Results of numerical calculations for the TM010+TE011 pump modes in a cavity with dimensions $L = 1$~m, $R = 1$~m. Top: contour plots of the time-averaged energy densities $\braket{\rho^E_{-}}$ (left, $\omega_-=5.1 \cdot 10^{-7}$~eV) and $\braket{\rho^E_{+}}$ (right, $\omega_+=14.7\cdot 10^{-7}$~eV), evaluated on the cylinder axis as functions of the axion mass $m_a$ and the distance $z$ from the center of the cavity. Bottom: spatial distributions of the time-averaged energy densities $\braket{\rho^E_-}$ (left, $m_a=0$) and $\braket{\rho^E_+}$ (right, $m_a=\omega_+$) on the cavity section along its axis $(\rho,\,z)$.} \label{fig:1}
\end{figure}
Let us consider the production cavity with dimensions $R=L=1$~m and
the simplest combination of pump modes, TM010+TE011. The evaluated axion energy density for each frequency component $a_\pm$ is presented in Fig.~\ref{fig:1}. The plots at the top of Fig.~\ref{fig:1} show the
time-averaged energy density on the cylinder axis ($\rho=0$) as a function of both the distance $z$ from the cavity center and the axion mass $m_a$. The top-right panel of
Fig.~\ref{fig:1} shows the resonance in $\langle \rho^E_+ (m_a\simeq\omega_+) \rangle$ at the center of the cavity. On the other hand, in the top-left panel of Fig.~\ref{fig:1} one can see a significant suppression of $\langle \rho^E_- \rangle$ at the center of the cavity and a relatively larger amplitude at $m_a=0$.
That suppression can be explained as follows: $F_-$ in Eq.~(\ref{FTETM}) is a difference of two positive terms of the same order of magnitude, which compensate each other,
whereas $F_+$ is a sum of the same terms. Therefore the energy density associated with $\omega_+$ is almost two orders of magnitude larger than that for $\omega_-$.
The spatial distribution of the energy density as a function of the radius $\rho$
and the height $z$ is shown at the bottom of Fig.~\ref{fig:1}. These plots
exhibit axial symmetry, since the chosen cavity modes do not depend on $\phi$. The distributions for the $\omega_+$ and $\omega_-$
components of the axion field were calculated for the cases
$m_a=\omega_+$ and $m_a=0$, respectively. In both
cases the axion energy density is localized close to the center of the production
cavity and decreases outside it.
In particular, the energy density $\langle \rho_+^E \rangle$ near the cavity ends at $z=0.5$~m drops by a factor of 2 with respect to the resonant value at $z=0$~m.
The results of numerical calculations for other modes are shown in Figs.~\ref{fig:C1}-\ref{fig:C2}. For these modes we consider only the ALP density
$\langle \rho_+^E \rangle$, because the $\langle \rho_-^E \rangle$ term is negligible.
One finds the resonances at $m_a \simeq \omega_+$, as expected. However, the intensity of the
resonance can depend drastically on the combination of modes. In particular,
for certain combinations of
modes the axion production rate is suppressed by a factor $\sim 10^{2}$ in
comparison with other combinations. This suppression occurs if (i) $q_1 + q_2$ is
even, or (ii) $n_1 \neq n_2$. In fact, in these two cases the overlap factor of the two modes vanishes, $\int d^3x' F_\pm (x') = 0$ (see Appendix in
Ref.~\cite{Berlin:2019ahk}). Therefore, for such modes the resonant amplitude
(\ref{Ares}) falls off more rapidly far away from the cavity.
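The parity part of this selection rule can be checked directly: for a cylindrical cavity the $z$-overlap of a TM axial profile $\cos(q_1\pi z/L)$ with a TE axial profile $\sin(q_2\pi z/L)$ vanishes precisely when $q_1+q_2$ is even. A minimal quadrature check (illustrative only; the full overlap factor also involves the radial and angular parts of the modes):

```python
import math

def z_overlap(q1, q2, L=1.0, n=20000):
    """Midpoint-rule estimate of int_0^L cos(q1*pi*z/L) sin(q2*pi*z/L) dz."""
    h = L / n
    total = 0.0
    for k in range(n):
        z = (k + 0.5) * h
        total += math.cos(q1 * math.pi * z / L) * math.sin(q2 * math.pi * z / L)
    return total * h

# Nonzero only when q1 + q2 is odd, matching the suppression pattern above.
for q1, q2 in [(0, 1), (1, 2), (1, 1), (0, 2), (2, 2)]:
    parity = "odd" if (q1 + q2) % 2 else "even"
    print(q1, q2, parity, round(z_overlap(q1, q2), 6))
```

The integral equals $2/\pi$ for $(q_1,q_2)=(0,1)$ (the TM010+TE011 case) and vanishes for even $q_1+q_2$, consistent with the suppression rule stated above.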
\section{Detection}
\label{sec:detection}
In order to obtain information about the produced axions we have to include a second cavity as a detector in the setup\footnote{It is feasible to detect axions within the same cavity, as was proposed in Ref.~\cite{Bogorad:2019pbu}. However, we leave resonant detection of the ALPs in such a setup for future work.}. There are two detection options: in the first, one assumes that the detection cavity is filled with a strong static magnetic field (a setup similar to a haloscope \cite{Sikivie:2020zpn}); the second relies on an oscillating electromagnetic field inside the detection cavity (see, e.g., the recent Ref.~\cite{Berlin:2019ahk}). The detection cavity in our setup is filled with the constant magnetic field $\Vec{B}_e(\Vec{x})$. In particular, we consider a setup with two coaxial cylindrical cavities separated by a screening plate of width $\Delta$, see Fig.~\ref{fig:pic}, where indices $1$ and $2$ refer to the production and detection regions respectively.
\begin{figure}[t]
\center{\includegraphics[width=0.5\linewidth]{Pic.pdf}}
\caption{ A scheme for the setup considered. The detection cavity ($R_2$) is separated from the production cavity ($R_1$) by a screening plate of width $\Delta$. }
\label{fig:pic}
\end{figure}
The detection cavity response to the external axion field is determined by Eq.~\eqref{Max-eqns}.
The latter one decouples into a pair of Maxwell's equations with axion-induced current,
\begin{equation}
(\vec{\nabla}\cdot\Vec{E})=-g_{a\gamma\gamma}\,(\Vec{B}\cdot\vec{\nabla}\,a)\;, \qquad\qquad
[\vec{\nabla}\times\Vec{B}]=\frac{\partial}{\partial t}\Vec{E}+g_{a\gamma\gamma}\left(\Vec{B}\frac{\partial}{\partial t}a-[\Vec{E}\times\vec{\nabla}\,a]\right)\;.
\end{equation}
The electric field of a given signal mode $\vec{E}^{\mathrm{sig}}(\Vec{x},t)$ inside the detection cavity obeys
the following equation
\begin{align}
\left( \frac{\partial^2}{\partial t^2} + \Gamma \frac{\partial}{\partial t} - \Delta \right)\Vec{E}^{\mathrm{sig}}(\Vec{x},t)=g_{a\gamma\gamma}\left(\vec{\nabla}\,(\vec{B}_e(\Vec{x})\cdot\vec{\nabla}\,a(\Vec{x},t))-\vec{B}_e(\Vec{x})\frac{\partial^2}{\partial t^2}a(\Vec{x},t)\right)\;,
\label{MainEqOnE}
\end{align}
where we have introduced the damping coefficient $\Gamma$ to take into account dissipation effects; the r.h.s.
is associated with the ALPs and the magnetic field $\Vec{B}_e(\vec{x})$ inside the cavity.
Next, to simplify our considerations we take a uniform magnetic field $\vec{B}_e$ directed along the $z$ axis inside the cavity.
We use a mode expansion for the signal electric field inside the detection cavity,
\begin{equation}
\vec{E}^{\mathrm{sig}} (\Vec{x},t)=\sum_m\Vec{{\cal E}}_m(\Vec{x})E^{\mathrm{sig}}_m(t)\;,
\label{SeriesEm}
\end{equation}
where $\Vec{{\cal E}}_m(\Vec{x})$ are cavity eigenmodes with fixed normalization,
\begin{equation}
\Delta\,\Vec{{\cal E}}_m(\Vec{x})+\omega_m^2\Vec{{\cal E}}_m(\Vec{x})=0\;, \qquad \qquad \int\limits_{\mathrm{2\,cav}}d^3x\,(\Vec{{\cal E}}_m(\Vec{x})\cdot\Vec{{\cal E}}_n(\Vec{x}))=V_2\cdot \delta_{mn}\;,
\label{norm}
\end{equation}
satisfying the necessary boundary conditions. We note that the index $m$ accounts for the $n,\,p,\,q$ winding numbers of the
cavity. The integration in (\ref{norm}) is performed
over the volume $V_2$ of the detection cavity.
Since the signal modes are orthogonal, one obtains the following equation by
substituting \eqref{SeriesEm} into (\ref{MainEqOnE}) and multiplying each cavity mode by $\Vec{{\cal E}}_m(\Vec{x})$,
\begin{equation}
\label{DetectEq}
\left(\frac{\partial^2}{\partial t^2}+\Gamma\frac{\partial}{\partial t}+\omega_m^2\right)E_m^{\mathrm{sig}}(t)=-g_{a\gamma\gamma}\int\limits_{\mathrm{2\,cav}}\frac{d^3x}{V_2}\left[(\Vec{{\cal E}}_m(\Vec{x})\cdot\Vec{B}_e)\,\ddot{a}(\vec{x},t) - \left(\Vec{{\cal E}}_m(\Vec{x})\cdot \left(\vec{B}_e \vec{\nabla}\right)\vec{\nabla} a(\Vec{x},t) \right)\right]\;,
\end{equation}
where the r.h.s. is an axion-induced driving force. The second term on the r.h.s. of
(\ref{DetectEq}) is suppressed compared to the first one if the produced axions are non-relativistic.
In the general case, however, these two terms are of the
same order. Since the magnetic field in the detection cavity $\vec{B}_e$ is directed along the $z$ axis, the scalar product
$(\Vec{{\cal E}}_m(\Vec{x})\cdot\Vec{B}_e)$ does not vanish only if ${\cal E}^z_m$ does not depend on $z$, i.e., the winding number of the detection mode satisfies $q=0$. It is most convenient to use the detection mode TM010, for which the
only nonzero component is ${\cal E}^z_m$. The corresponding electric field, including the normalization factor, reads ${\cal E}^z_{TM010}(\Vec{x}) = 1.92 \, J_{0}\left(\dfrac{x_{01}}{R_2}\rho\right) $.
Thus, (\ref{DetectEq}) simplifies,
\begin{equation}
\label{DetectEq1}
\left(\frac{\partial^2}{\partial t^2}+\Gamma\frac{\partial}{\partial t}+\omega_m^2\right)E_m^{\mathrm{sig}}(t)= -g_{a\gamma\gamma}B_e^z\int\limits_{\mathrm{2\,cav}}\frac{d^3x}{V_2}{\cal E}_m^z(\vec{x})\,\left( \ddot{a}(\vec{x},t) - \partial_z^2 a(\vec{x},t) \right)\,.
\end{equation}
If the r.h.s. of Eq.~(\ref{DetectEq1}) oscillates as $e^{-i\omega t}$, the complex solution of Eq.~(\ref{DetectEq1})
is given by the r.h.s. multiplied by $ (-\omega^2 - i\omega \Gamma + \omega_m^2)^{-1}$.
This is exactly the case of aforementioned produced axion field determined by Eqs.~(\ref{GretTileSup2}) and~(\ref{GretTileSup3}). For
$m_a < \omega_\pm$ the signal electric field $E_m^{\mathrm{sig}}(t)$ reads
\begin{equation}
\label{DetectSol}
E_m^{\mathrm{sig}}(t)=\frac{-g_{a\gamma\gamma}^2 B_e^z}{\omega_\pm^2-\omega_m^2 + i\omega_\pm \Gamma} \int\limits_{\mathrm{2\,cav}}\frac{d^3x}{V_2}{\cal E}_m^z(\vec{x})\,\int\limits_{\mathrm{1\,cav}} d^3x'\frac{F_\pm(x')}{4\pi}\left( \omega_\pm^2+\partial_z^2\right) \frac{\mathrm{e}^{-i\omega_\pm t + i k_\pm|x-x'|}}{|x-x'|}\;.
\end{equation}
The signal field $E_m^{\mathrm{sig}}(t)$ has a narrow peak at $\omega_m \simeq \omega_\pm$.
The frequency $\omega_m$ of the detection-cavity eigenmode can be adjusted to a
value $\omega_m \simeq \omega_\pm$ by an appropriate choice of the radius $R_2$ of the
detection cavity.
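The resonant gain implied by the Lorentzian factor $(-\omega^2 - i\omega\Gamma + \omega_m^2)^{-1}$ with $\Gamma=\omega_m/Q$ can be illustrated with a few lines of code (a generic driven-oscillator sketch in arbitrary units, not specific to the cavity geometry):

```python
def response(omega, omega_m, Q):
    """Magnitude of the Lorentzian factor 1/(omega_m^2 - omega^2 - i*omega*Gamma)
    with Gamma = omega_m / Q (arbitrary units)."""
    gamma = omega_m / Q
    return 1.0 / abs(omega_m**2 - omega**2 - 1j * omega * gamma)

omega_m, Q = 1.0, 1e5
on_res = response(omega_m, omega_m, Q)        # equals Q / omega_m^2
off_res = response(0.5 * omega_m, omega_m, Q)
print(on_res, off_res, on_res / off_res)
```

On resonance the response equals $Q/\omega_m^2$, an enhancement by the quality factor $Q$ relative to the static limit; this is why tuning $R_2$ so that $\omega_m\simeq\omega_\pm$ is essential.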
The damping coefficient $\Gamma$ can be expressed via the quality factor $Q$ of the
detection cavity, $\Gamma = \omega_m/Q$. Thus, the time-averaged squared amplitude of the
electric field of the signal mode reads
\begin{equation}
\langle |E_m^\mathrm{sig}(t)|^2\rangle
\equiv \frac{1}{2} \left(E^{\pm c}_m\right)^2+\frac{1}{2} \left(E^{\pm s}_m\right)^2\;,
\end{equation}
where
\begin{equation}
E^{\mathrm{\pm c(s)}}_m=\frac{g_{a\gamma\gamma}^2E_0^2Q {B}^z_c}{4\pi } \cdot
\frac{V_{\mathrm{1\,cav}}}{\Delta}\cdot \kappa_m^{\pm c(s)}, \qquad
\kappa_m^{\pm c(s)}=
\left(\alpha^{\mathrm{\pm c(s)}}_m + \frac{\beta^{\mathrm{\pm c(s)}}_m}{\omega_\pm^2 L_1^2}\right)\;,
\end{equation}
and $\Delta$ is the distance between the borders of the two cavities (see Fig.~\ref{fig:pic}), and $\alpha^{\mathrm{\pm c(s)}}_m$ and $\beta^{\mathrm{\pm c(s)}}_m$ are dimensionless geometrical form-factors,
\begin{equation}
\alpha^{\mathrm{\pm c(s)}}_m= \int\limits_{\mathrm{2\,cav}}\frac{d^3x}{V_2}\,{\cal E}^z_m(\Vec{x})\int\limits_{\mathrm{1\,cav}} \frac{d^3x'}{V_1}\frac{\left|F_\pm(\Vec{x}\,')\right|}{E_0^2}\cdot \frac{\Delta}{|\Vec{x}-\Vec{x}'|}\left\{\begin{array}{c} \cos \left(k_\pm|\Vec{x}-\Vec{x}'|\right) \\ \sin \left(k_\pm|\Vec{x}-\Vec{x}'|\right) \end{array} \right\},
\end{equation}
\begin{eqnarray}
\beta^{\mathrm{\pm c(s)}}_m & = & \int\limits_{\mathrm{2\,cav}}\frac{d^3x}{V_2}\,{\cal E}^z_m(\Vec{x})\int\limits_{\mathrm{1\,cav}} \frac{d^3x'}{V_1}\frac{\partial^2_{z'}\left|F_\pm(\Vec{x}\,')\right|}{E_0^2}\cdot \frac{\Delta \cdot L_1^2}{|\Vec{x}-\Vec{x}'|}\left\{\begin{array}{c} \cos \left(k_\pm|\Vec{x}-\Vec{x}'|\right) \\ \sin \left(k_\pm|\Vec{x}-\Vec{x}'|\right) \end{array} \right\} - \notag\\
& - & \int\limits_{\mathrm{2\,cav}} \dfrac{d^3 x}{V_2} \,{\cal E}^z_m(\vec{x})\int\limits_{\mathrm{1\,S}} \dfrac{\rho'd\rho'd\varphi'}{V_1} \dfrac{\partial_{z'}|F_{\pm}(\Vec{x'})|}{E_0^2}\cdot \dfrac{\Delta \cdot L_1^2}{|\Vec{x} - \Vec{x}'|}\left\{\begin{array}{c} \cos \left(k_\pm|\Vec{x}-\Vec{x}'|\right) \\ \sin \left(k_\pm|\Vec{x}-\Vec{x}'|\right) \end{array} \right\} \bigg|_{z' = 0}^{z' = L}\;.
\label{beta}
\end{eqnarray}
In deriving the expression (\ref{beta}), for simplicity of the numerical calculations, we transferred the derivative with respect to $z$ to $z'$. The last term of Eq.~(\ref{beta}) arises from integration by parts; see Appendix C for details.
We estimate the number of signal photons for a given mode in the receiving
cavity as follows~\cite{Irastorza:2018dyq},
\begin{equation}
N_s \simeq \frac{1}{2 \omega} \int\limits_{\mathrm{2\,cav}}d^3x\,\langle |\Vec{E}^{\mathrm{sig}}(\vec{x},t)|^2 \rangle \simeq \frac{V_2}{2 \omega} \langle |E_m^{\mathrm{sig}}(t)|^2\rangle\;.
\end{equation}
The signal-to-noise ratio has the following form~\cite{Bogorad:2019pbu},
\begin{equation}
\text{SNR} \simeq \frac{N_s}{N_{\mathrm{th}}} \frac{1}{2L_2 Q} \sqrt{\frac{t}{B}}\;,
\end{equation}
where $t$ is the measurement time, $B$ is the signal bandwidth,
$L_2$ is the length of the receiving cavity, and $N_{\mathrm{th}}=T/\omega$
is the number of thermal photons in the regime $T\gg \omega$.
One then obtains the following limit on the coupling,
\begin{equation}
g_{a \gamma \gamma} \simeq \left(\frac{128 \pi^2 TL_2 \Delta^2}{E_0^4 (B^z_c)^2 Q ((\kappa_m^{c+})^2 +(\kappa_m^{s+})^2)V_1^2 V_2} \sqrt{\frac{B}{t}} \text{SNR}\right)^{1/4}\;.
\label{gaggLinear}
\end{equation}
However, the volume of the detection cavity $V_2=\pi R_2^2\cdot L_2$ is not an independent variable for a fixed detection mode. The resonant condition $\omega_{TM010}=\omega_\pm$
for detection with the TM010 mode yields $R_2=x_{01}/\omega_\pm$. Thus, we have
\begin{equation}
\label{gaaa}
g_{a \gamma \gamma} \simeq 2.52\left(\frac{ T \cdot \Delta^2 \,\cdot \, \omega_\pm^2}{E_0^4 (B^z_c)^2 Q ((\kappa_m^{c+})^2 +(\kappa_m^{s+})^2)V_1^2} \sqrt{\frac{B}{t}} \text{SNR}\right)^{1/4}\;.
\end{equation}
In particular, Eq.~(\ref{gaaa})
can be expressed as
\begin{figure}[t]
\centering{\includegraphics[width=0.9\linewidth]{gagg.png}
\caption{Projected sensitivity of the proposed setup to ALP mass $m_a$ and $g_{a\gamma\gamma}$ parameter for different sets of pump modes in the production cavity. The black line shows the solar axion constraint from CAST \cite{Anastassopoulos:2017ftl}.
}
\label{fig:Exclusion_plot1}
\end{figure}
\begin{align}
\label{gagg_dependence}
g_{a \gamma \gamma} &\simeq 2.7\cdot 10^{-12}\, \mbox{GeV}^{-1} \left(\frac{T}{1.5\, \mbox{K}}\right)^{1/4} \left(\frac{\Delta}{0.2 \, \mbox{m}}\right)^{1/2}
\left(\frac{\omega_\pm}{1.5\,\cdot \,10^{-6}\, \mbox{eV}}\right)^{1/2}
\left(\frac{V_1}{ 1\, \mbox{m}^3}\right)^{-1/2} \left(\frac{Q}{10^5}\right)^{-1/4}
\times \notag
\\
&\times\left(\left(\kappa_c^+\right)^2 + \left(\kappa_s^+\right)^2\right)^{-1/4}
\left(\frac{E_0}{0.1\, \mbox{T}}\right)^{-1}
\left(\frac{B_e^z}{10\, \mbox{T}}\right)^{-1/2}
\left(\frac{t}{10^6\, \mbox{sec}}\right)^{-1/8} \left(\frac{B}{1\, \mbox{Hz}}\right)^{1/8}
\left(\frac{\mbox{SNR}}{5}\right)^{1/4}.
\end{align}
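For convenience, the scaling relation of Eq.~(\ref{gagg_dependence}) can be packaged as a small helper function (a sketch; the form-factor combination $\kappa^2\equiv(\kappa_c^+)^2+(\kappa_s^+)^2$ is mode dependent and is left as a free parameter, set to $1$ for illustration):

```python
def g_agg_limit(T=1.5, Delta=0.2, omega=1.5e-6, V1=1.0, Q=1e5,
                kappa2=1.0, E0=0.1, Bez=10.0, t=1e6, B=1.0, snr=5.0):
    """Projected limit on g_agg in GeV^-1 from the scaling relation above.
    Units: T in K, Delta in m, omega in eV, V1 in m^3, E0 and Bez in T,
    t in s, B in Hz; kappa2 is the mode-dependent form factor (assumed 1)."""
    return (2.7e-12
            * (T / 1.5) ** 0.25 * (Delta / 0.2) ** 0.5
            * (omega / 1.5e-6) ** 0.5 * (V1 / 1.0) ** -0.5
            * (Q / 1e5) ** -0.25 * kappa2 ** -0.25
            * (E0 / 0.1) ** -1 * (Bez / 10.0) ** -0.5
            * (t / 1e6) ** -0.125 * (B / 1.0) ** 0.125
            * (snr / 5.0) ** 0.25)

print(g_agg_limit())         # baseline parameters: 2.7e-12 GeV^-1
print(g_agg_limit(E0=0.01))  # a 10x weaker pump field weakens the bound 10x
```

The baseline arguments reproduce the prefactor $2.7\cdot 10^{-12}\,\mbox{GeV}^{-1}$, and the $E_0$ and volume scalings discussed below can be read off directly.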
The constraints (\ref{gagg_dependence}) in the parametric plane $(g_{a\gamma\gamma},m_a)$ are shown
in Fig.~\ref{fig:Exclusion_plot1} for different sets of pump modes. The magnitudes of the external parameters
$T,\,\Delta,\,Q,\,E_0,\,B_e^z,\,t,\,B$ and SNR correspond to the typical values indicated in the brackets of Eq.~(\ref{gagg_dependence}).
The resonant constraints on $g_{a\gamma\gamma}$ for a larger number of pump-mode combinations and for different ratios $R_1/L_1$ are shown in Table~1.
Let us discuss the parametric dependence in Eq.~(\ref{gagg_dependence}).
First, these bounds are sensitive to the magnitude of the field $E_0$: reducing the amplitude by a factor of $10$ weakens the bound (\ref{gagg_dependence}) by a factor of $10$. Second, the dependence on the linear sizes of the setup can be seen from Eq.~(\ref{gaggLinear}). We note that the dimensionless form-factors $\kappa_{c(s)}^\pm$ depend only weakly on $V_1$ and $V_2$. Therefore, by increasing the linear sizes of the setup by a factor of $2$ one improves the limit on $g_{a\gamma\gamma}$ by a factor of $2^{3/2}\simeq 2.8$.
\section{Discussion}\label{sec:discuss}
In the present paper we have numerically calculated the time-averaged axion energy
density $\langle \rho^E \rangle$
generated by two electromagnetic modes in a superconducting cylindrical cavity. In
particular, we have studied
the spatial distribution of $\langle \rho^E \rangle$ for various axion masses and for
different types of cylindrical cavity.
We have shown numerically that there is a parametric resonance, if the axion is produced in the nonrelativistic regime, for
the frequency component $\omega_+ = |\omega_1+\omega_2|$ and for certain
combinations of pump modes.
On the contrary, for the frequency component $\omega_-=|\omega_1-\omega_2|$ there is no significant
resonance in that regime.
Considering different combinations of pump modes in the production cavity, we have shown that they may
have different
efficiencies for axion production. In particular, the highest amplitude of the produced axion field
comes from TE$n_1 p_1 q_1$+TM$n_2 p_2 q_2$ modes with
$n_1=n_2$ and odd $q_1+q_2$. Concerning
different radius-to-length ratios $R/L$ of the
cavity, we have shown that for a ``pancake-like'' cavity, $L/R \lesssim 1$, the maximum ALP energy
density outside the cavity is produced along the cylinder axis $z$.
Otherwise, for an elongated design of the cavity,
$L/R \gg 1$, the maximum energy density outside the cylinder is
produced near the side surface (this case was studied in Ref.~\cite{Janish:2019dpr}).
We have also discussed the detection of ALPs in an additional
separated cavity, which is filled with a strong magnetic field.
We estimated the projected sensitivity of the proposed setup in
the ALP parameter space $(m_a,g_{a\gamma\gamma})$.
We have shown that the best constraint comes from the mode combination TM010+TE011 in the production cavity.
In addition, we have calculated the sensitivity curves
for higher-order pump modes of the production cavity.
The advantage of generating higher-order pump modes is that one can
probe the region of higher ALP masses;
however, the peak sensitivity to the ALP coupling is decreased in
that case. Although the resonances in
Fig.~\ref{fig:Exclusion_plot1} are quite narrow, one can test a
larger region of parameter space by exciting
different sets of higher-order pump modes. Nevertheless, for a fixed cavity geometry
there are still gaps between the resonant peaks in the exclusion
plot. One can modify the eigenmodes by adjusting the geometry of a given cavity; therefore, the
relevant unconstrained area can be probed.
Note that our setup can easily be generalized to
other designs of the detection cavity. In particular, the analysis
can easily be applied to the toroidal detection cavity of
Ref.~\cite{Janish:2019dpr}.
By exciting higher-order pump modes one can shift the resonant
bounds of \cite{Janish:2019dpr} to the region of higher ALP masses.
Another interesting task is to consider parametric resonances
for a single cavity which both produces and detects
ALPs~\cite{Bogorad:2019pbu}. In addition, it is instructive to probe
hidden photons~\cite{Kim:2020ask} in SRF cavities.
We leave these tasks for future work.
\paragraph{Acknowledgements} The authors thank Sergey Troitsky, Dmitry Levkov, Alexander Panin, Ilya Kopchinskii, Yuri Senichev, Leonid Kravchuk, Valentin Paramonov, Andrey Egorov, Leonid Kuzmin and Andrey Pankratov for helpful discussions. Numerical calculations were performed on the Computational Cluster of the Theoretical division of INR RAS. The work of P.S. and M.F. was supported by RSF grant 18-12-00258.
The Diophantine equation $x^4\pm y^4=z^2$, where $x,y,$ and $z$ are integers, was studied by Fermat, who proved that there exist no nontrivial solutions. Fermat proved this using the \emph{infinite descent} method, proving that if a solution can be found, then there exists a smaller solution (see for example \cite[Proposition 6.5.3]{coh}). This was the first particular case of Fermat's Last Theorem proven (the theorem was completely proven by Wiles in \cite{wil}).
The same Diophantine equation, but now with $x,y,$ and $z$ being Gaussian integers, i.e., elements of $\mathbb Z[i]$, was later examined by Hilbert (see \cite[Theorem 169]{hil}). Once again, it was proven that there exist no nontrivial solutions. Other authors also examined similar problems. In \cite{sz} equations of the form $ax^4+by^4=cz^2$ in Gaussian integers with only trivial solutions were studied. In \cite{cr} a different proof than Hilbert's is given, using descent, that $x^4+y^4=z^4$ has only trivial solutions in Gaussian integers. The equations $x^4\pm y^4=z^2$ over some quadratic fields were also considered in \cite{sz2} and \cite{sz3}, again proving that there exist no nontrivial solutions. Some applications of Diophantine equations of this type can be found in \cite{xu1} and \cite{xu2}.
In this short note, we will examine the Diophantine equation
$$x^4\pm y^4=iz^2$$
in Gaussian integers and find all the solutions of this equation. Also, we will give a new proof of Hilbert's results. Our strategy will differ from the one used by Hilbert and will be based on elliptic curves. Elliptic curves have also been used in \cite{lam} to prove that the Diophantine equation $x^3+y^3=z^3$ has only trivial solutions in Gaussian integers, but in a somewhat different way than in this note. We will use elliptic curves over quadratic fields that have nontrivial torsion, while in \cite{lam}, an elliptic curve with trivial torsion over the rationals was examined.
For an elliptic curve $E$ over a number field $K$, it is well known, by the Mordell-Weil theorem, that the set $E(K)$ of points on the curve $E$ with coordinates in the field $K$ is a finitely generated abelian group. The group $E(K)$ is isomorphic to $T\oplus\mathbb Z^r$, where $r$, which is called the \emph{rank}, is a nonnegative integer and $T$, which is called the \emph{torsion subgroup}, is the group of all elements of finite order. Thus, there are finitely many points on an elliptic curve over a field if and only if it has rank 0.
We will be interested in the case when $K=\mathbb Q(i)$. We will work only with elliptic curves with rational coefficients, and by a recent result of the author (see \cite{fn}), if an elliptic curve has rational coefficients, then the torsion of the elliptic curve over $\mathbb Q(i)$ is either cyclic of order $m$, where $1 \leq m \leq 10$ or $m=12$, of the form $\mathbb Z_2 \oplus \mathbb Z_{2m}$, where $1 \leq m \leq 4$, or $\mathbb Z_4 \oplus \mathbb Z_4$.
Throughout this note, the following extension of the Lutz-Nagell Theorem is used to compute torsion groups of elliptic curves.
\newtheorem*{tm100}{Theorem (Extended Lutz-Nagell Theorem)}
\begin{tm100}
Let $E$ be an elliptic curve in the form $E: y^2=x^3+Ax+B$ with $A,B\in \mathbb Z[i]$. If a point $(x,y)\in E(\mathbb Q(i))$ has finite order, then:
\begin{enumerate}
\item $x,y\in \mathbb Z[i].$
\item Either $y=0$ or $y^2|4A^3+27B^2.$
\end{enumerate}
\end{tm100}
The main step in the proof of the Lutz-Nagell Theorem (for curves over $\mathbb Q$) is to show that all the torsion points have integer coordinates. This is done by showing that no prime can divide the denominators of the coordinates of the torsion points. The proof of the Lutz-Nagell Theorem can easily be extended to elliptic curves over $\mathbb Q(i)$. Details of the proof can be found in \cite[Chapter 3]{th}. An implementation in Maple can be found in \cite[Appendix A]{th}.
Note that every elliptic curve can be put in the form $E: y^2=x^3+Ax+B$ (this is the \emph{short Weierstrass form}) over any field of characteristic zero, and thus in particular over $\mathbb Q(i)$. The Extended Lutz-Nagell Theorem enables us to easily get a finite list of possible candidates for the torsion points, and then check which ones are actually torsion points. Although when all the torsion points are found, one could easily compute the group structure of the torsion subgroup using addition laws, all we need is the list of all the torsion points.\\
\vspace{0.1cm}
\section{The Diophantine equation $x^4\pm y^4=iz^2$.}
\theoremstyle{definition}
\newtheorem{tm5}{Definition}
\begin{tm5}
We call a solution $(x,y,z)$ of the Diophantine equation
$$x^4\pm y^4=cz^2,$$
for some given $c\in \mathbb C$, \emph{trivial} if $xyz=0$.
\end{tm5}
We are now ready to prove our main result.
\theoremstyle{plain}
\newtheorem{tm2}{Theorem}
\begin{tm2}
\begin{itemize}
\item[(i)] The equation $x^4-y^4=iz^2$ has only trivial solutions in Gaussian integers.
\item[(ii)] The only nontrivial solutions satisfying $\gcd(x,y,z)=1$ in Gaussian integers of the equation $x^4+y^4=iz^2$ are $(x,y,z)$, where $x,y\in \{\pm i, \pm 1\},\ z=\pm i(1+i)$.
\end{itemize}
\end{tm2}
\emph{Proof:}\\
(i) Suppose $(x,y,z)$ is a nontrivial solution. Dividing the equation by $y^4$ and making the variable change $s= x/y,\ t= z/y^2$, we obtain the equation
$s^4-1=it^2$, where $s,t\in \mathbb Q(i)$. We can rewrite this equation as
\begin{equation}
r=s^2,
\end{equation}
\begin{equation}
r^2-1=it^2.
\end{equation}
Multiplying these equations we obtain $i(st)^2=r^3-r$. Again, making the variable change $a=st,\ b=-ir$ and dividing by $i$, we obtain an equation defining an elliptic curve $$E:a^2=b^3+b.$$ Using the program \cite{sim}, written in PARI, we compute that the rank of this curve is 0. It is easy to compute, using the Extended Lutz-Nagell Theorem, that $E(\mathbb Q(i))_{tors}=\mathbb Z_2\oplus\mathbb Z_2$ and that $b\in\{0,\pm i\}$. It is obvious that all the possibilities lead to trivial solutions.\\
(ii) Suppose $(x,y,z)$ is a nontrivial solution satisfying $\gcd(x,y,z)=1$. Dividing the equation by $y^4$ and making the variable change $s= x/y,\ t=z/y^2$, we obtain the equation
$s^4+1=it^2$, where $s,t\in \mathbb Q(i)$. We can rewrite this equation as
\begin{equation}
r=s^2,
\label{eq1}
\end{equation}
\begin{equation}
r^2+1=it^2.
\end{equation}
Multiplying these equations we obtain $i(st)^2=r^3+r$. Again, making the variable change $a=st,\ b=-ir$ and dividing by $i$, we obtain an equation defining an elliptic curve $$E:a^2=b^3-b.$$ Using the program \cite{sim}, we compute that the rank of this curve is 0. Using the Extended Lutz-Nagell Theorem we compute that $E(\mathbb Q(i))_{tors}=\mathbb Z_2\oplus\mathbb Z_4$ and that $b\in\{0,\pm i,\pm 1\}$. Obviously $b=0$ leads to a trivial solution. It is easy to see that $b=\pm 1$ leads to $r=\pm i$, which is impossible, since $r$ has to be a square by (\ref{eq1}). This leaves us with the possibility $b=\pm i$. Since we may assume that $x$ and $y$ are coprime, this case leads to the solutions stated in the theorem. \qed\\
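The nontrivial solutions in part (ii) are easy to verify directly; Python's complex arithmetic is exact for these small Gaussian-integer values (a sanity check, not part of the proof):

```python
I = 1j  # the imaginary unit, playing the role of i in Z[i]
units = (1, -1, I, -I)

# Part (ii): every x, y in {+-1, +-i} with z = +-i(1+i) solves x^4 + y^4 = i z^2,
# since x^4 = y^4 = 1 and (+-i(1+i))^2 = -2i, so i z^2 = 2.
sols = [(x, y, z) for x in units for y in units
        for z in (I * (1 + I), -I * (1 + I))
        if x**4 + y**4 == I * z**2]
print(len(sols))  # 32: all sign/unit combinations work

# Part (i): the same candidates never solve x^4 - y^4 = i z^2 with z != 0,
# since x^4 - y^4 = 0 while i z^2 = 2.
assert all(x**4 - y**4 != I * z**2
           for x in units for y in units for z in (I * (1 + I), -I * (1 + I)))

# The corresponding torsion point: (b, a) = (i, 1-i) lies on a^2 = b^3 - b.
assert (1 - I) ** 2 == I**3 - I
```

In particular $x^4+y^4=2$ and $iz^2=i\cdot(-2i)=2$ for all listed sign choices, while $x^4-y^4=0$ can never equal $2$.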
\section{A new proof of Hilbert's results.}
We now give a new proof of Hilbert's result, which is very similar to Theorem 1.
\newtheorem{tm}[tm2]{Theorem}
\begin{tm}
The equation $x^4\pm y^4=z^2$ has only trivial solutions in Gaussian integers.
\end{tm}
\emph{Proof:}\\
Suppose $(x,y,z)$ is a nontrivial solution. Dividing the equation by $y^4$ and making the variable change $s= x/y,\ t=z/y^2$, we obtain the equation
$s^4\pm 1=t^2$, where $s,t\in \mathbb Q(i)$. We can rewrite this equation as
\begin{equation}
r=s^2,
\end{equation}
\begin{equation}
r^2\pm 1=t^2,
\end{equation}
and by multiplying these two equations, and making the variable change $a=st$, we get the two elliptic curves
$$a^2=r^3\pm r.$$
As in the proof of Theorem 1, both elliptic curves have rank 0 and it is easy to check that all the torsion points on both curves lead to trivial solutions.\qed\\
\theoremstyle{remark}
\newtheorem*{rem}{Remark}
\begin{rem}
Note that from the proofs of Theorems 1 and 2 it follows that the mentioned solutions are actually the only solutions over $\mathbb Q(i)$, not just $\mathbb Z[i]$.
\end{rem}
\paragraph{Acknowledgments.} The author would like to thank the referees for many helpful suggestions. The author was supported by the Ministry of Science, Education and Sports, Republic of Croatia, Grant 037-0372781-2821.
As genomic-scale sequencing has become increasingly common, attention in phylogenetics has shifted from inferring trees of evolutionary relationships for individual genetic loci from a set of species to inferring relationships between the species themselves. A substantial complication is that population genetic processes within species, as modeled by the \emph{Multispecies Coalescent} (MSC) model can lead to individual gene trees having quite different topological structures than the tree relating the species overall. If the evolutionary history of the species also involved hybridization or other forms of horizontal gene flow, so that a species network is a more suitable depiction of relationships, the relationships of gene trees to the network, as modeled by the \emph{Network Multispecies Coalescent} (NMSC) model, is even more complex.
Inference of species networks, through a combined NMSC and sequence substitution model, can be performed in a Bayesian framework \cite{Zhang2017,Wen2018} but computational demands severely limit both the number of taxa and the number of genetic loci considered. Other methods take a faster two-stage approach, first inferring gene trees which are treated as ``data" for a second inference of a species network. Approaches include maximum pseudolikelihood using either rooted triples (PhyloNet) or quartets (SNaQ) displayed on the gene trees \cite{Yu2015,Solis-Lemus2016}, or the faster, distance-based analysis built on gene quartets of NANUQ \cite{ABR2019}. Still, the first stage of these approaches, the inference of individual gene trees, can be a major computational burden. Avoiding such gene tree inference, and passing more directly from sequences to an inferred network, could substantially reduce total computational time in data analysis pipelines.
The goal of this paper is to show that most topological features of a level-1 species network can be identified from logDet intertaxon distances computed from aligned genomic-scale sequences. In particular this can be done without partitioning the sequences by genes, under a combined model of the NMSC and a mixture of general time-reversible (GTR) substitution processes on gene trees. While the main result, that the logDet distances retain enough information to recover most of the species network, despite having lost information on individual genes, is a theoretical one, it points the way toward faster algorithms for practical inference. In particular, since the computation of logDet distances requires little effort, it suggests that a distance-based approach similar to NANUQ's, but avoiding individual gene tree inference, may offer substantially faster analyses than current methods.
\medskip
The model of sequence evolution underlying our result accounts not only for base substitutions along each gene tree, but also for variation in gene trees due to their formation under a coalescent process combined with hybridization or similar gene transfer. Our model extends to networks the mixture of coalescent mixtures model on species trees of \cite{Allman2019},
which itself extended the coalescent mixture introduced by Chifman and Kubatko \cite{Chifman2015}. More specifically, for a fixed species network, gene trees are formed under the Network Multispecies Coalescent model \cite{Meng2009,Yu2011,Zhu2016} for each site independently. GTR substitution parameters for base evolution on each site's tree are then independently chosen from some distribution, leading to a site pattern distribution. These site distributions are finally combined to give a
site pattern distribution for genomic sequences. (As discussed in Section \ref{sec::networksmodels}, this distribution also applies to a more realistic model in which multisite genes with a single substitution process have lengths chosen independently from some distribution.)
While this pattern frequency distribution thus reflects the substitution processes on all the gene trees, information about pattern frequencies arising on any individual gene tree is hidden.
The logDet distance was first introduced in the context of a single class general Markov model of sequence evolution on a single gene tree \cite{Steel94,Lockhart94}, and has been used both to obtain gene tree identifiability results and to infer individual gene trees.
Considering genomic sequences, Liu and Edwards \cite{Liu2009}, and independently
Dasarathy et al. \cite{Roch2015}, showed that for a Jukes-Cantor substitution model and an ultrametric species tree, the Jukes-Cantor distances obtained under the coalescent mixture model still allowed for consistent inference of topological species trees. By passing to the logDet distance, Allman et al. \cite{Allman2019} extended this result to the more realistic mixture of coalescent mixtures model, showing that the logDet distance allowed for consistent inference of a topological species tree, assuming it is ultrametric in generations. This study builds on all these works on gene and species tree models, but considers level-1 species networks on which all extant species are equidistant from the root.
Passing from species trees to networks is a substantial step, however, and our approach is strongly motivated by the work of Ba\~nos \cite{Banos2019} on identifiability of features of unrooted level-1 topological species networks from gene tree quartet concordance factors (probabilities of the different quartet topologies displayed on gene trees). In the ultrametric setting of this work, we show that logDet distances computed from genomic sequences suffice to determine 4-cycles on undirected rooted triple networks, and then that this 4-cycle information for different rooted triples can be combined to determine all cycles of size 4 or more, and even all hybrid nodes in those cycles of size 5 or more. We do not obtain information on 2- or 3-cycles, so our results closely parallel those in \cite{Banos2019}, despite the rather different source of information.
\medskip
There are a number of other theoretical works in the literature on determining phylogenetic networks from limited information. For instance, \cite{Jansson2006} investigates determining a level-1 network from the rooted triple trees it displays,
\cite{HuberEtAl2017,HuberEtAl2018} discuss how knowledge of trinets (induced 3-taxon directed rooted networks) and quarnets (induced 4-taxon undirected unrooted networks) determine larger networks, and \cite{VanIersaletal2020} explores determination of networks from distances. However, the question of how, or whether, these results can be applied to biological data is not addressed, and the setting of these works is not directly applicable to obtaining our results.
Other works \cite{GrossLong18,GrossEtAl2020,HolleringSullivant2021} use algebraic approaches to show that certain types of level-1 networks can be identified from joint pattern frequency arrays under group-based models of sequence evolution such as the Jukes-Cantor and Kimura models. In addition to their restriction on sequence evolution models, these works do not incorporate a coalescent process. That is, all sequence sites are assumed to have evolved on one of the finitely-many trees displayed on the network. Since the absence of a coalescent process is a limiting case of our coalescent-based model, our results allowing for mixtures of more general sequence evolution models extend those results in the ultrametric case. Algebraic study of a network model combined with the general Markov model, again with no coalescent process, was also conducted in \cite{CasanellasF-S2020}.
\medskip
This paper proceeds as follows. Section \ref{sec::networksmodels} defines the networks and models under consideration, as well as the logDet distance. Section
\ref{sec::comb} uses combinatorial arguments to show how information on undirected rooted triple networks can be used to determine features of a larger directed network from which they are induced. Expected frequencies of site patterns for sequences produced by the mixture of coalescent mixtures model are studied in Section \ref{sec::freqs}, and shown to be expressible as convex combinations of pattern frequencies from simpler networks. In Section \ref{sec::logDet} we show that the ordering by magnitude of logDet distances for triples of taxa tells us about the induced rooted triple species network, and by combining this with the result of Section \ref{sec::comb} we obtain our main identifiability result, Theorem \ref{thm::main}. Section \ref{sec::other} further studies the logDet distances from a rooted triple network, in order to better understand what triples of distances can arise under the mixture of coalescent mixtures model. We conclude in Section \ref{sec:discussion} with an outline of how these results can be developed into a practical inference algorithm.
\section{Networks and models}\label{sec::networksmodels}
\subsection{Phylogenetic Networks}\label{sec::networks}
Although there are many variations on the notion of a phylogenetic network in the literature, we adopt ones appropriate to the Network Multispecies Coalescent (NMSC) model. This model, which describes the formation of trees of gene lineages in the presence of both incomplete lineage sorting and hybridization, will be further developed in the next subsection. First, we focus on setting forth combinatorial aspects of the networks.
\begin{definition} \label{def::network} \cite{Solis-Lemus2016,Banos2019}
A \emph{topological binary rooted phylogenetic network} $\mathcal{N}^+$
on taxon set $X$ is a connected directed acyclic graph with vertices $V=V(\mathcal N^+)$ and edges $E=E(\mathcal N^+)$, where $V$ is a disjoint union $V = \{r\} \sqcup V_L \sqcup V_H \sqcup V_T$ and $E$ is a disjoint union $E = E_H \sqcup E_T$, with a bijective leaf-labeling function $f : V_L \to X$ with the
following characteristics:
\begin{itemize}
\item[1.] The \emph{root} $r$ has indegree 0 and outdegree 2.
\item[2.] A \emph{leaf} $v \in V_L$ has indegree 1 and outdegree 0.
\item[3.] A \emph{tree node} $v\in V_T$ has indegree 1 and outdegree 2.
\item[4.] A \emph{hybrid node} $v\in V_H$ has indegree 2 and outdegree 1.
\item[5.] A \emph{hybrid edge} $e=(v,w) \in E_H$ is an edge whose child node $w$ is hybrid.
\item[6.] A \emph{tree edge} $e=(v,w) \in E_T$ is an edge whose
child node $w$ is either a tree node or a leaf.
\end{itemize}
\end{definition}
When $|X|=3$ or $4$, we refer to $\mathcal{N}^+$ as a \emph{rooted triple network} or a \emph{rooted quartet network}, respectively.
The vertices, and edges, of $\mathcal N^+$ are partially ordered by the directedness of the graph. For instance, a node $u$ is \emph{below} a node $v$, and $v$ is \emph{above} $u$, if there exists a non-empty directed path in $\mathcal N^+$ from $v$ to $u$. The root is thus above all other nodes.
\medskip
A metric notion of the network above incorporates some of the parameters of the NMSC model. This introduces edge lengths, measured in generations throughout this article, as well as probabilities that a gene lineage at a hybrid node follows one or the other hybrid edge as it traces back in time toward the network root. Since we focus on binary networks, only hybrid edges are allowed to have length 0, to model possibly instantaneous jumping of a lineage from one population to another.
\begin{definition} A \emph{metric binary rooted phylogenetic network} $( \mathcal N^+, \{\ell_e\}_{e\in E},\{\gamma_e\}_{e\in E_H})$ is a topological binary rooted phylogenetic network together with an assignment of weights or \emph{lengths} $\ell_e$ to all edges and \emph{hybridization parameters} $\gamma_e$ to all hybrid edges, subject to the following restrictions:
\begin{itemize}
\item[1.] The length $\ell_e$ of a tree edge $e\in E_T$ is positive.
\item[2.] The length $\ell_e$ of a hybrid edge $e\in E_H$ is non-negative.
\item[3.] The hybridization parameters $\gamma_e$ and $\gamma_{e'}$ for a pair of hybrid edges $e,e'\in E_H$ with the same child hybrid node are positive and sum to 1.
\end{itemize}
\end{definition}
A metric network of this sort is said to be \emph{ultrametric} if every directed path from the root to a leaf has the same total length. This is equivalent to requiring the ultrametricity of
all trees displayed on the network. An example of a simple ultrametric network
is shown in Figure \ref{fig::network} (Right).
\begin{figure}
\begin{center}
\includegraphics[width=3.8in]{network.pdf}
\end{center}
\caption{ (Left) An ultrametric species network $\mathcal N^+$ with time $t$ in generations before the present, hybrid edges $h$ and $h'$ shown in red, and population functions $N_e(t)$ on each edge depicted by widths of ``tubes.'' The edge lengths $\tau$ are measured on the $t$-axis between the dashed lines indicating speciation and hybridization events. The dashed red/blue boundary represents a hybrid node, the top dashed line the root of the network, and other dashed lines tree nodes. (Right) A schematic of the same species network, which does not show population sizes. Hybridization parameters $\gamma$ and $\gamma'$ are omitted from both drawings.}\label{fig::network}
\end{figure}
On directed networks there are several analogs \cite{Steel2016} of the most recent common ancestor of a set of taxa on a tree. The following is the most useful in this work.
\begin{definition}\label{def::LSA}\cite{Steel2016}
Let $\mathcal{N}^+$ be a (metric or topological) binary rooted phylogenetic network on a set of taxa $X$ and let $Z\subseteq X$. Let $D$ be the set of
nodes which lie on every directed path from the root $r$ of $\mathcal{N}^+$ to any $z\in Z$. Then the \emph{lowest stable
ancestor of $Z$ on $\mathcal{N}^+$}, denoted ${\operatorname{LSA}}(Z,{\mathcal{N}^+})$, is the unique node $v\in D$ such that $v$ is below all $u\in D$
with $u\neq v$. The \emph{lowest stable ancestor (LSA)} of a network on $X$ is ${\operatorname{LSA}}(X)$.
\end{definition}
Phylogenetic networks as defined here have no cycles in the usual sense for a directed graph. The term \emph{cycle} will thus be used to refer to a collection of edges that form a cycle when all edges are undirected. A cycle must contain at least two hybrid edges sharing a hybrid node, and may contain any non-negative number of tree edges. The class of networks we focus on is those in which cycles are separated, in the following sense.
\begin{definition}\label{def::level1}
A rooted binary phylogenetic network $\mathcal{N}^+$ is said to be \emph{level-1} if no two distinct cycles in $\mathcal{N}^+$ share an edge.
\end{definition}
Although this is not the standard definition of level-1 \cite{Rossello2009}, in the setting of binary networks it is equivalent.
Each cycle on a level-1 phylogenetic network contains exactly one hybrid node and two hybrid edges with that node as a child. Thus there is a one-to-one correspondence between cycles and the hybrid nodes they contain. A cycle composed of $n$ edges, 2 of which are hybrid, is called an \emph{$n$-cycle}. If the cycle's hybrid node has $k$ leaf descendants, it is an \emph{$n_k$-cycle}.
\medskip
Passing from a large network to one on a subset of the taxa is similar to the process for trees.
\begin{definition}
\emph{Suppressing a node} with both in- and out-degree 1 in a directed phylogenetic network means replacing it and its two incident edges with a single edge from its parent to its child. For a metric network, the new edge is assigned a length equal to the sum of lengths of the two replaced. If the outedge was hybrid, the new edge is also hybrid and retains the hybridization parameter.
Similarly, suppressing a node of degree 2 between two undirected edges means replacing it and its two incident edges with a single undirected edge.
\end{definition}
\begin{definition}\label{def::triplet}
Let $\mathcal{N}^+$ be a (metric or topological) binary rooted phylogenetic network on $X$ and let $Y\subset X$. The \emph{induced rooted network} $\mathcal{N}^+_{Y}$ on $Y$ is the network obtained from $\mathcal{N}^+$ by retaining nodes and edges in every path from the root $r$ on $\mathcal N^+$ to any $y\in Y$, and then suppressing all nodes with in- and out-degree 1. We then say $\mathcal{N}^+$ \emph{displays} $\mathcal{N}^+_{Y}$.
\end{definition}
We need the notion of a \emph{rooted undirected network}, in which all edges have been undirected but the root retained. Note that if a rooted network is a tree, knowledge of the root alone is enough to recover the direction of every edge, so this notion is not useful in that setting. If cycles are present, knowledge of the root determines only the direction of every cut edge (an edge whose deletion results in a graph with two connected components), and edges directly descended from cut edges. Knowing the root and all hybrid nodes in an undirected level-1 network does, however, determine the full directed network.
Several other notions of networks induced from a directed one are needed.
\begin{definition}\label{def::undirected}
Let $\mathcal{N}^+$ be a (metric or topological) binary rooted phylogenetic network on $X$.
\begin{enumerate}
\item \cite{Banos2019}
The \emph{LSA network} $\mathcal N^{\oplus }$
induced from $\mathcal N^+$ is the network on $X$ obtained by
deleting all edges and nodes above ${\operatorname{LSA}}(X, \mathcal{N}^+)$, and designating ${\operatorname{LSA}}(X, \mathcal{N}^+)$ as the root node.
\item
The \emph{undirected LSA network} $\mathcal N^\ominus$ is the rooted network obtained from the LSA network $\mathcal{N}^\oplus$ by undirecting all edges.
\item \cite{Banos2019}
The \emph{unrooted semidirected network} $\mathcal N^-$ is the unrooted network obtained from the LSA network $\mathcal{N}^\oplus$ by undirecting all tree edges and suppressing the root, but retaining directions of hybrid edges.
\end{enumerate}
\end{definition}
For a binary level-1 network $\mathcal N^+$, the only possible structure above the LSA has the form of a (possibly empty) chain of 2-cycles \cite{Banos2019}, an example of which is shown in Figure \ref{fig::chain2cyc}. The LSA network $\mathcal N^\oplus$ is obtained by simply deleting that chain.
\begin{figure}
\begin{center}
\includegraphics{chain2cyc.pdf}
\end{center}
\caption{ A rooted network $\mathcal N^+$ whose LSA network $\mathcal N^\oplus$ is the rooted tree $((a,b),c)$, but which has a chain of 2-cycles above ${\operatorname{LSA}}(a,b,c)$. }\label{fig::chain2cyc}
\end{figure}
Note that the terminology of ``$n_k$-cycles" can be applied to LSA networks $\mathcal N^\oplus$, as hybrid edges retain their direction.
On undirected LSA networks $\mathcal N^\ominus$, however, ``$n$-cycle" can still be applied, but ``$n_k$-cycle" generally cannot.
\begin{definition}
By \emph{suppressing a cycle} $C$ in a topological level-1 network we mean deleting all edges in $C$, identifying all nodes in $C$, and if the resulting node is of degree 2 suppressing it. If the network is rooted and this results in the root becoming a degree-1 node, then the resulting edge below the root is also deleted, with its child becoming the root. \end{definition}
Suppressing an $n$-cycle in a binary level-1 network results in a non-binary network when $n\ge 4$. However if only 2- and 3-cycles are suppressed, the result is binary.
\subsection{Coalescent Model on Networks}\label{ssec:NMSC} The
formation of gene trees within a species network, as ancestral lineages of sampled loci from extant taxa join together moving backwards in time, is given a
mechanistic description by the Network Multispecies Coalescent Model (NMSC) \cite{Meng2009,Yu2011,Zhu2016}.
Parameters of the NMSC for a set of taxa $X$
include a metric rooted binary phylogenetic network $(\mathcal N^+,\{\ell_e\},\{\gamma_e\})$ on $X$, with edge lengths $\ell_e$ in generations. In addition, for each edge $e=(u,v)$ fix a function $N_e:[0,\ell_e)\to \mathbb R^{>0}$ giving the (haploid) population size along the edge, where $N_e(0)$ is the population size at the child node $v$ and $N_e(t)$ is the population at time $t$ units above it. Finally, let $N_r:[0,\infty)\to \mathbb R^{>0}$ be an additional population size function for an infinite length `edge' ancestral to the root $r$ of the network.
The $N_e$ need not be constant nor equal, although those are common assumptions in other works. As in \cite{Allman2019} we make the biologically-plausible technical assumptions that the functions $N_e$ are bounded, and that all $1/N_e(t)$ are integrable over finite intervals.
Figure \ref{fig::network} (Left) depicts an example species network that is ultrametric in generations, with hybrid edges $h$ and $h'$, and population functions $N_e$ on each edge depicted by time-varying widths of the network edges. The edge lengths $\ell_e$ are measured on the $t$-axis between the horizontal lines indicating speciation and hybridization events. Figure \ref{fig::network} (Right) gives a schematic of the same species network, without a depiction of population functions.
The standard Kingman coalescent models the formation of gene trees, with edge lengths in generations, within a single population edge $e$, with pairs of lineages coalescing independently as they trace backward in time, at instantaneous rate $1/N_e(t)$. The multispecies coalescent model (MSC) extends this to a tree of populations, by using the standard coalescent on each edge, as well as an infinite length edge above the root, allowing multiple gene lineages to enter a population from its descendant ones at a tree node. The NMSC extends this further, so that
lineages reaching hybrid nodes randomly enter one or the other hybrid edge above them, with the choice determined independently according to the hybridization parameter probabilities. Thus the NMSC parameters $(\mathcal N^+,\{\ell_e\},\{\gamma_e\})$ and $\{N_e\}$ determine a distribution of rooted metric gene trees. The structure of the NMSC also ensures that the distributions of gene trees obtained by marginalization to a subset $Y$ of taxa are the same as the distributions obtained from the NMSC on the displayed network $\mathcal N^+_Y$.
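The coalescent process within a single population edge is simple to simulate: with $k$ lineages present and population size $N$, each of the $\binom{k}{2}$ pairs coalesces at rate $1/N$, so the waiting time to the next coalescence is exponential with rate $\binom{k}{2}/N$. A minimal illustrative sketch (not from the paper; constant $N$ assumed, whereas the model allows time-varying $N_e(t)$):

```python
import random

def coalesce_times(num_lineages, N, rng=None):
    """Simulate coalescence times (in generations, backwards from the
    present) for num_lineages lineages in one population of constant
    haploid size N, under the standard Kingman coalescent."""
    rng = rng or random.Random(0)
    times, t, k = [], 0.0, num_lineages
    while k > 1:
        rate = k * (k - 1) / 2 / N    # C(k,2) pairs, each coalescing at rate 1/N
        t += rng.expovariate(rate)    # waiting time to the next coalescent event
        times.append(t)
        k -= 1                        # two lineages merge into one
    return times
```

For $k$ sampled lineages the expected time to the most recent common ancestor under this process is $2N(1-1/k)$ generations.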
\subsection{Sequence substitution models on gene trees}\label{ssec::substitution}
The $k$-state \emph{general time-reversible model} (GTR) for sequence evolution is a continuous-time Markov process on a metric gene tree. Gene tree edge lengths are in substitution units, and sequences are composed of $k$ possible states, or bases. Model parameters are a $k\times k$ instantaneous rate matrix $Q$ together with a $k$-state distribution $ \pi $, with non-negative entries summing to 1, satisfying the following:
\begin{enumerate}
\item off-diagonal entries of $Q$ are positive,
\item row sums of $Q$ are 0,
\item $\operatorname{trace} Q=-1,$
\item $\pi Q=0$,
\item ${\operatorname{diag}}(\pi)Q$ is symmetric.
\end{enumerate}
In the ultrametric framework for our species networks, we introduce an additional time-dependent but lineage-independent rate scalar $\mu(t)$ for $Q$, where $t$ is measured in generations from leaves to the root and beyond, and $\mu(t)$ has units of substitutions/generation. We assume $\mu$ is piecewise-continuous, $\mu(t)>0$ for all $t\ge 0$ so that the mutation process never stops, and $\int_0^\infty \mu(t)dt=\infty$ so that the total amount of possible mutation is unbounded. Following \cite{Allman2019}, this substitution model is denoted by GTR+$\mu$.
For any node $u$ on a gene tree, let $t_u$ denote the distance, in generations, to that node from its descendant leaves. The states at a single site in sequences at the taxa at the leaves on the gene tree are then determined as follows:
A state is randomly chosen at the root of the tree from the distribution $ \pi$. For each edge $e=(u,v)$ descendant from a node $u$ the site undergoes random state changes with rates $\mu(t)Q$ for times $t\in[t_v,t_u]$ to obtain states at the child nodes. The full substitution process on the edge is thus described by the Markov matrix
$$M_e=\exp\left(Q\int_{t_v}^{t_u}\mu(t)\,dt\right),$$
since the rate matrices $\mu(t)Q$ commute across times $t$.
A similar process is then repeated for those nodes' children, and so on, until states at the taxa have been determined.
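As a concrete sketch (not from the paper), the transition matrix for one edge can be computed from $Q$ and the integrated rate $\int_{t_v}^{t_u}\mu(t)\,dt$; since the matrices $\mu(t)Q$ commute, the time-inhomogeneous process integrates to $M_e=\exp(Q\int\mu)$. Here the Jukes-Cantor instance of GTR is used, normalized so $\operatorname{trace}Q=-1$, and the matrix exponential is taken by eigendecomposition (valid because this $Q$ is symmetric; a general GTR $Q$ needs a generic expm):

```python
import numpy as np

def edge_transition_matrix(Q, mu_integral):
    """Markov matrix M_e = exp(Q * int mu(t) dt) for a symmetric rate
    matrix Q, via eigendecomposition."""
    w, V = np.linalg.eigh(Q * mu_integral)
    return V @ np.diag(np.exp(w)) @ V.T

# Jukes-Cantor as the simplest GTR instance, with trace(Q) = -1:
k = 4
Q = np.full((k, k), 1 / 12) - np.eye(k) / 3   # off-diagonals 1/12, diagonal -1/4
pi = np.full(k, 1 / k)                        # uniform stationary distribution
M = edge_transition_matrix(Q, mu_integral=2.0)
```

The result is a proper Markov matrix (rows summing to 1, all entries positive) with $\pi M=\pi$, as the time-reversibility conditions require.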
\subsection{Mixture of coalescent mixtures}\label{ssec::mixtures}
The model we focus on is the $m$-class \textit{mixture of coalescent mixtures} \cite{Allman2019} on an ultrametric network. This model has as parameters an ultrametric species network $( \mathcal N^+,
\{\ell_e\},\{\gamma_e\})$, population size functions
$\{N_e\}$, a finite collection $\{(Q_i,{ \pi}_i; \mu_i)\}_{i=1}^m$ of GTR+$\mu$ parameters for the $m$ classes, and a vector $ \lambda$ of $m$ positive class size parameters summing to 1.
Sequence data are generated as follows. For each site:
\begin{enumerate}
\item a gene tree $T$ is sampled according to the NMSC model on $(\mathcal N^+, \{\ell_e\},\{\gamma_e\})$ with population sizes $\{N_e\}$,
\item class $i$ is sampled from the distribution $ \lambda$ to determine parameters $(Q_i,{ \pi}_i; \mu_i)$,
\item the bases for each $x\in X$ are sampled under the GTR+$\mu$ process on $T$ with parameters $(Q_i,{ \pi}_i; \mu_i)$.
\end{enumerate}
This model is denoted by $\mathcal M= \mathcal M(\theta)$ where $$\theta=( (\mathcal N^+, \{\ell_e\},\{\gamma_e\}),\{N_e\}, \lambda, \{(Q_i,{ \pi}_i; \mu_i)\}).$$
Sampling $n$ independent sites from this model produces $k$-state aligned sequences of length $n$. As usual in phylogenetics, these are summarized through counts of site patterns across the sequences in an $|X|$-dimensional $k\times k\times\cdots\times k$ array. Marginalizations of this array to 2-dimensions give pairwise $k\times k$ site pattern count matrices that compare only the sequences for two taxa in $X$.
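The pattern-count array and its pairwise marginalizations can be sketched as follows (an illustration with hypothetical toy counts, not data from the paper), here for $k=4$ states and three taxa $a,b,c$:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_taxa, n_sites = 4, 3, 1000
# toy site-pattern counts: counts[i, j, l] = number of sites showing
# base i in taxon a, base j in taxon b, base l in taxon c
counts = rng.multinomial(n_sites, np.full(k**n_taxa, k**(-n_taxa))).reshape(k, k, k)

# pairwise k x k count matrices are marginalizations over the remaining taxon
F_ab = counts.sum(axis=2)   # compares the sequences of a and b only
F_ac = counts.sum(axis=1)
F_bc = counts.sum(axis=0)
```

Each pairwise matrix retains all $n$ sites; only the information about the third taxon is summed out.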
\medskip
In the tree context, two extensions of this model were discussed in \cite{Allman2019}. For the first, the model assumption of one independently drawn gene tree for each site is modified to a more realistic one for genomic sequences in which all sites for a genetic locus share a gene tree. If the lengths (in number of sites) of the loci are independent identically distributed draws from some distribution, then the expected site pattern distribution for such a model is unchanged from that determined by $\mathcal M$. Only the rate of convergence, as the number of sampled genes grows, of frequencies of sampled site patterns to the asymptotic distribution will be slowed. Although
the identifiability results of this paper are only formally stated for the model $\mathcal M$ on networks, they apply more generally to a similarly extended network model.
Another extension in the tree setting in \cite{Allman2019} allowed for relaxing the ultrametric condition while retaining strong results on identifiability from the logDet distances. In that extension, the scalar rate function was allowed to be edge dependent as long as a certain symmetry condition on mixture components resulted in ultrametricity in substitution units ``on average" across gene trees. While a similar model extension in the network setting seems likely to lead to similar results, it is not explored here, as the technical complications are greater than in the tree case.
\subsection{LogDet distance}\label{ssec:logDet}
The fundamental tool we use to study relationships of taxa under the mixture of coalescent mixtures model $\mathcal M$ is the logDet distance between a pair of aligned sequences.
It is computed as follows: For taxa $a,b\in X$, let $\widehat F^{ab}$ be
a $k \times k$ matrix of
empirical relative site-pattern frequencies, obtained by normalizing the site pattern count matrix for $a$ and $b$, so that its entries sum to 1. Thus the $ij$ entry of $\widehat F^{ab}$ is the proportion of sites in the
sequences exhibiting base $i$ for $a$ and base $j$ for $b$. With $\hat f_a$ and $\hat f_b$ the vectors of row and column sums
of $\widehat F^{ab}$, which give the
proportions of various bases in the sequences for $a$ and $b$, let $\hat g_a$ and $\hat g_b$ be the products of
the entries of $\hat f_a$ and $\hat f_b$, respectively. Then the empirical logDet distance is
\begin{align}\label{formula::logdet}
\hat d_{LD}(a,b)=- \frac{1}{k}\left(\ln \det \left(\widehat F^{ab}\right )-\frac{1}{2}\ln (\hat g_a \hat g_b)\right).
\end{align}
Under most phylogenetic models, including the mixture of coalescent mixtures model, individual site patterns in sequences are assumed to be independent and identically distributed. By the weak law of large numbers, $\widehat F^{ab}$ computed from a sample will converge in probability to its expected value $F^{ab}$ as the sequence length goes to $\infty$. By the continuous mapping theorem (e.g., \cite{vanderVaart}), the empirical logDet distance thus converges in probability to the logDet distance computed by the same formula from the expected $F^{ab}$, a quantity we refer to as the \emph{theoretical logDet distance} and denote by $d_{LD}(a,b)$.
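The empirical logDet formula above translates directly into code. A minimal numpy sketch (for illustration; not from the paper), applied to a pairwise $k\times k$ site-pattern count matrix:

```python
import numpy as np

def logdet_distance(counts):
    """Empirical logDet distance from a k x k site-pattern count matrix,
    where counts[i, j] = number of sites with base i in taxon a and
    base j in taxon b.  Implements
        d = -(1/k) * ( ln det F - (1/2) ln(g_a * g_b) ).
    """
    counts = np.asarray(counts, dtype=float)
    k = counts.shape[0]
    F = counts / counts.sum()      # relative site-pattern frequencies
    f_a = F.sum(axis=1)            # row sums: base proportions for taxon a
    f_b = F.sum(axis=0)            # column sums: base proportions for taxon b
    g_a, g_b = np.prod(f_a), np.prod(f_b)
    return -(np.log(np.linalg.det(F)) - 0.5 * np.log(g_a * g_b)) / k
```

A diagonal count matrix (identical sequences) gives distance 0, and the distance is symmetric in the two taxa since transposing $\widehat F^{ab}$ swaps $\hat f_a$ and $\hat f_b$.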
\section{Rooted Networks from Undirected Rooted Triple Networks}\label{sec::comb}
The goal of this section is to establish Proposition \ref{prop::combnet}, a combinatorial result indicating features of a topological level-1 rooted $n$-taxon network that can be recovered from its induced undirected rooted triple networks with 2- and 3-cycles suppressed. This is a rooted analog of a key result of \cite{Banos2019} relating unrooted semidirected networks and their induced undirected quartet networks. Later sections of this paper focus on identifying these rooted triple networks under the model $\mathcal M$.
There are several possible routes to Proposition \ref{prop::combnet}. One approach would be to follow the argument of the quartet analog, with modifications throughout due to the rooted setting. Another would be to imitate the alternate proof of the quartet result given in \cite{ABR2019}, based on an extension of the intertaxon quartet distance of \cite{Rhodes2019}, but instead using the rooted triple distance also introduced in that work.
The argument presented here is shorter than these approaches, as it leverages information about undirected rooted triple networks to obtain information about undirected quartet networks, and then applies the theory of \cite{Banos2019}.
The following result, extracted from the proof of Theorem 4 of \cite{Banos2019}, will be used. In it, and throughout this work, by a network \emph{modulo 2- and 3-cycles} we mean the network obtained by suppressing all 2- and 3-cycles. Similarly, \emph{modulo directions of edges in 4-cycles} means that all edges in 4-cycles are undirected. As a result, which of the edges in a 4-cycle are hybrid, and therefore which node is hybrid, is not indicated.
\begin{lemma}[\cite{Banos2019}]\label{lem:banos}
Let $\mathcal N^+$ be a level-1 rooted binary topological phylogenetic network on $X$. Let $Q$ be the set of undirected quartet networks obtained from those displayed on $\mathcal N^+$ by unrooting, suppressing all cycles of size 2 and 3, and undirecting all edges. Then modulo 2- and 3-cycles and directions of edges in 4-cycles, the semidirected unrooted network $\mathcal N^-$ is determined by $Q$.
\end{lemma}
In order to apply this to rooted triples, we first recall some combinatorial properties of rooted triple and quartet networks.
\begin{lemma}[\cite{Banos2019}]\label{lem:numberofcycles}
Let $\mathcal{Q}^-$ be a level-1 unrooted semidirected binary quartet network. Then $\mathcal{Q}^-$ has no $k$-cycles for $k\geq 5$, and at most one $4$-cycle. If
$\mathcal{Q}^-$ has a 4-cycle, then it has neither $3$- nor $2_2$-cycles. If there is no $4$-cycle, then there are at most two $3$-cycles, with at most one of these a $3_2$-cycle.
\end{lemma}
Lemma \ref{lem:numberofcycles} can be used to characterize possible cycles in a rooted triple network, by attaching an outgroup at the root.
More specifically, by \emph{attaching an outgroup $o$ to the root} of an $n$-taxon network on taxa $X$ with $o\notin X$ we mean identifying the root $r$ of the network with the node $r$ on an edge $(r,o)$ and undirecting all tree edges.
This gives an
$(n+1)$-taxon unrooted semidirected network. The rooted triple networks displayed on the original network are then in one-to-one correspondence with induced semidirected quartet networks containing $o$ on the new network. This construction yields the following.
\begin{corollary}\label{lem::numcyctriplet}
Let $\mathcal N^+$ be a level-1 binary rooted triple network. Then $\mathcal N^+$ has no $k$-cycles for $k\geq 5$, and at most one $4$-cycle, in which case there are no $3$- or $2_2$-cycles. If there is no $4$-cycle, then there are at most two $3$-cycles, with at most one of these a $3_2$-cycle.
\end{corollary}
Considering a rooted quartet network $\mathcal Q^+$, and the impact of passing to its associated unrooted semidirected quartet network $\mathcal Q^-$, Lemma \ref{lem:numberofcycles} also immediately yields the following.
\begin{corollary}\label{cor::numcycquart}
Let $\mathcal Q^+$ be a level-1 rooted binary quartet network. Then $\mathcal Q^+$ has no $k$-cycles for $k\geq 6$, and has at most one 5-cycle or 4-cycle, but not both.
\end{corollary}
We now catalog the rooted quartet networks with 4- or 5-cycles, modulo smaller cycles.
\begin{lemma}\label{lem::triplets4c}
Let $\mathcal Q^+$ be a level-1 binary rooted quartet network with one $4$-cycle or one $5$-cycle. Then modulo 2- and 3-cycles and up to taxon relabelling, the LSA network $\mathcal Q^\oplus$ is one of those shown in Figure \ref{fig::quartet4c5c}. Thus $\mathcal Q^+$ displays either 1, 2, or 3 rooted triples with a 4-cycle.
\end{lemma}
\begin{proof}
Let $\mathcal Q^+$ be a rooted level-1 network on $\{a,b,c,d\}$ with a cycle $C$ of size $4$ or $5$. By Corollary \ref{cor::numcycquart}, $C$ is the only cycle of size greater than 3. Figure \ref{fig::quartet4c5c} shows the topologies, up to taxon relabeling, of all the rooted quartet networks with a $4$- or $5$-cycle and no $2$- or $3$-cycles, as determined by enumerating all possible locations for adding hybrid edges to a rooted 4-taxon tree. The top row of Figure \ref{fig::quartet4c5c} shows the quartet networks with exactly one displayed rooted triple, on $\{a,b,c\}$, having a 4-cycle. The middle row shows the networks with exactly two displayed rooted triples, on $\{a,b,c\}$ and $\{a,b,d\}$, having a 4-cycle. The bottom row shows those with exactly three displayed rooted triples, on $\{a,b,c\}$, $\{a,b,d\}$, and $\{a,c,d\}$, having a 4-cycle. \qed\end{proof}
\begin{figure}
\begin{center}
\includegraphics{quartet4c5c_v2.pdf}
\end{center}
\caption{All rooted directed topological quartet networks with a single $4$- or $5$-cycle, and no other cycles, up to relabeling of taxa. Networks in the top row display exactly one rooted triple with a 4-cycle, those in the middle row display two, and those in the bottom row display three.}\label{fig::quartet4c5c}
\end{figure}
Now we proceed to the main result of this section.
\begin{proposition}\label{prop::combnet}
Let $\mathcal N^+$ be a level-1 rooted binary topological phylogenetic network on $X$. Let $S$ be the set of undirected rooted triple networks obtained from those displayed on $\mathcal N^+$ by suppressing all cycles of size 2 and 3 and undirecting all edges. Then modulo 2- and 3-cycles and directions of edges in 4-cycles, the LSA network $\mathcal N^\oplus$ is determined by $S$.
\end{proposition}
\begin{proof}
We first build a set of rooted quartet networks from $S$.
Let $\{a,b,c,d\}\subseteq X$ and let $S_{abcd}\subseteq S$ be the set of undirected rooted triple networks on any three elements of $\{a,b,c,d\}$, so $|S_{abcd}|=4$.
By Corollary \ref{cor::numcycquart} and Lemma \ref{lem::triplets4c}, there are $k=0$, $1$, $2$, or $3$ elements of $S_{abcd}$ with a 4-cycle. We consider each possibility in turn, showing that we can determine the undirected rooted quartet network $\mathcal N^\ominus_{abcd}$ modulo 2- and 3-cycles.
\smallskip
If $k=0$, all rooted triple networks in $S_{abcd}$ are trees, and since $\mathcal N^+_{abcd}$ has no 4- or 5-cycles by Lemma \ref{lem::triplets4c}, the undirected LSA network $\mathcal N^\ominus_{abcd}$ modulo 2- and 3-cycles is a tree. By a well-known result for trees \cite{Semple2005}, $S_{abcd}$ determines $\mathcal N^\ominus_{abcd}$ modulo 2- and 3-cycles.
If $k=1$, then modulo 2- and 3-cycles and relabelling of taxa,
$\mathcal N^+_{abcd}$ is isomorphic to one of the networks in the top row of Figure \ref{fig::quartet4c5c}. For each of these networks, if $a,b,c$ are the taxa in the rooted triple network with a 4-cycle, then the rooted 4-taxon network is obtained by attaching $d$ as an outgroup to it. Thus $\mathcal N^\ominus_{abcd}$ is determined modulo 2- and 3-cycles.
If $k=2$, $\mathcal N^+_{abcd}$ is isomorphic, modulo 2- and 3-cycles and relabeling, to one of the networks in the middle row of Figure \ref{fig::quartet4c5c}. Note that for all those rooted quartet networks, the displayed rooted triple networks with 4-cycles are on $\{a,b,c\}$ and $\{a,b,d\}$, and the 4-taxon network can be obtained from either of these by replacing $c$ or $d$ with a cherry on $\{c,d\}$, thus determining $\mathcal N^\ominus_{abcd}$ modulo 2- and 3-cycles.
If $k=3$, $\mathcal N^+_{abcd}$ is isomorphic, modulo $2$- and $3$-cycles and relabeling, to one of the networks in the bottom row of Figure \ref{fig::quartet4c5c}. In both of these, there is exactly one taxon, $a$, that is in all three rooted triple networks with 4-cycles, and there is exactly one taxon, $c$, that has graph-theoretic distance 3 from $a$ in exactly one of the two rooted triple networks with 4-cycles it appears in.
Thus we can determine which taxon is $a$, and which is $c$. For the remaining pair $b,d$, if there is a taxon that is at distance 4 from $a$ in both 4-cycle rooted triple networks it appears in, then the 4-taxon network is the one shown on the left, and that taxon is $d$. Otherwise, the network is the one shown on the right.
In this case there is exactly one rooted triple network on $a$ and $c$ which has its third taxon at distance 2 from the root, and this determines $b$.
Thus we obtain the rooted 4-taxon network $\mathcal N^\oplus_{abcd}$ modulo 2- and 3-cycles, and hence $\mathcal N^\ominus_{abcd}$ modulo 2- and 3-cycles.
With all rooted 4-taxon networks $\mathcal N^\ominus_{abcd}$ modulo 2- and 3-cycles determined, we attach an outgroup $o$ to all, giving the collection of all 5-taxon unrooted networks including $o$, modulo 2- and 3-cycles, induced from the unrooted network $\mathcal N'$ formed by attaching $o$ to the root of $\mathcal N^+$. But the unrooted 4-taxon networks displayed on these 5-taxon ones form the collection of all 4-taxon undirected networks (possibly including $o$) modulo 2- and 3-cycles displayed on $\mathcal N'$.
Lemma \ref{lem:banos} now determines $\mathcal N'$ modulo 2- and 3-cycles, with directions of cut edges and edges in cycles of size $\ge 5$, though not in 4-cycles. Rooting $\mathcal N'$ by the outgroup $o$ we recover the topology of $\mathcal N^\oplus$ modulo 2- and 3-cycles and directions of edges in 4-cycles.
\qed\end{proof}
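The identification of the taxa $a$ and $c$ in the $k=3$ case of the preceding proof is algorithmic. The sketch below applies the stated rules to hypothetical input (the taxon sets and distance data are fabricated for illustration, and are not read off Figure \ref{fig::quartet4c5c}):

```python
from functools import reduce

def identify_a_and_c(triples, dist3):
    """Apply the k=3 identification rules.

    triples: three frozensets, the taxon sets of the displayed rooted
             triple networks having a 4-cycle.
    dist3:   set of (taxon_set, taxon) pairs where the taxon has
             graph-theoretic distance 3 from a in that triple network.
    """
    # a is the unique taxon appearing in all three 4-cycle triples.
    (a,) = reduce(frozenset.intersection, triples)
    others = frozenset.union(*triples) - {a}
    # c is at distance 3 from a in exactly one of its two triples.
    (c,) = (t for t in others
            if sum((s, t) in dist3 for s in triples if t in s) == 1)
    return a, c

# Hypothetical data: 4-cycle triples on {a,b,c}, {a,b,d}, {a,c,d},
# with c at distance 3 from a only in the triple on {a,b,c}.
T = [frozenset("abc"), frozenset("abd"), frozenset("acd")]
a, c = identify_a_and_c(T, {(frozenset("abc"), "c")})
```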
\section{Expected pattern frequencies as convex sums}\label{sec::freqs}
The theoretical logDet distance between taxa depends on the matrix of expected relative site-pattern frequencies $F^{xy}$ in aligned sequences for taxa $x,y$, under the mixture of coalescent mixtures model $\mathcal M(\theta)$. The goal of this section is to show that $F^{xy}$ on a level-1 ultrametric rooted triple network can be expressed as a convex combination of frequency matrices for networks with no cycles below the LSA of the taxa. In this way, we reduce the computation of $F^{xy}$ to its computation on simpler networks. This is complicated somewhat by the fact that the convex combination may have terms which are expected pattern frequencies conditioned on a pair of lineages coalescing below a certain node in a network.
The lemmas that follow often involve modifying a network $\mathcal N^+$ by removing a hybrid edge, to obtain a new network $\mathcal N^+_i$. If one hybrid edge in a cycle is removed, the hybrid node is then suppressed as the other hybrid edge is joined to the descendant tree edge and given the induced length and population size.
We retain all other edge lengths and population sizes, as well as hybrid parameters for unaffected cycles. The parameters for the substitution process describing sequence evolution on gene trees are also retained. If $\theta$ denotes the full set of parameters associated to $\mathcal N^+$, then $\theta_i$ denotes the full set of parameters associated to $\mathcal N^+_i$ in this way.
Notation such as $F^{xy}(\theta)$ or $F^{xy}(\theta_i)$ denotes the dependence of $F^{xy}$ on the parameters $\theta$ or $\theta_i$, which include the network $\mathcal N^+$ or $\mathcal N_i^+$.
\begin{figure}
\begin{center}
\includegraphics[width=4.in]{cycleTypes.pdf}
\end{center}
\caption{Examples of level-1 rooted triple networks with $2_1$-, $3_1$-, and $4_1$-cycles. While multiple $2_1$-cycles may be present along any pendant edge shown here in dashes, there can be at most two $3_1$-cycles, whose hybrid nodes are located on a dashed pendant edge. At most one $4_1$-cycle can be present.
Site-pattern frequency matrices from the model $\mathcal M$ on rooted triple networks with these types of cycles are convex combinations of such matrices for 1, 2, or 4 networks without those cycles, as shown by Lemmas \ref{lem::2_1cycles} and \ref{lem::3_1and4cycles}.}\label{fig:234cycles}
\end{figure}
The most straightforward network simplifications occur when the hybrid node of a cycle has a single descendant leaf, as depicted by the example $2_1$-, $3_1$- and $4_1$-cycles in Figure \ref{fig:234cycles}.
\begin{lemma}\label{lem::2_1cycles} (Removing $2_1$-cycles)
Let $\mathcal N^+$ be a binary level-1 ultrametric rooted triple network on $\{a,b,c\}$ and let $C$ be a $2_1$-cycle in $\mathcal N^+$ with hybrid edges $h_1,h_2$. Let $\mathcal N^+_1$ be the network obtained from $\mathcal N^+$ by removing $h_2$. Then, under the model $\mathcal M$ for any $x,y\in\{a,b,c\}$,
$$F^{xy}(\theta)= F^{xy}(\theta_1).$$
\end{lemma}
\begin{proof} Let $\mathcal N^+_2$ be the network obtained from $\mathcal N^+$ by removing $h_1$. Since the hybrid node of $C$ has only one descendant, the combined coalescent and substitution process on $\mathcal N^+$ can be expressed as a linear combination of those processes on $\mathcal N^+_1$ and $\mathcal N^+_2$, weighted by $\gamma_1=\gamma(h_1)$ and $\gamma_2=\gamma(h_2)$. That is, for any $x,y\in\{a,b,c\}$,
$$F^{xy}(\theta)= \gamma_1 F^{xy}(\theta_1)+\gamma_2 F^{xy}(\theta_2).$$
But $\mathcal N^+_1$ and $\mathcal N^+_2$ only differ by $h_1$ and $h_2$ which have the same length, though possibly different population sizes. However, since only one lineage can be present in the population for those edges, those population sizes have no impact in model $\mathcal M$, so $F^{xy}(\theta_2)=F^{xy}(\theta_1)$. Since $\gamma_1+\gamma_2=1$, the claim follows.
\qed\end{proof}
If a network $\mathcal N^+$ has multiple $2_1$-cycles, then applying Lemma \ref{lem::2_1cycles} repeatedly gives $ F^{xy}(\theta)=F^{xy}(\widetilde \theta)$ where $\widetilde{\mathcal N}^+$ is a rooted network with no $2_1$-cycles obtained from $\mathcal N^+$ by deleting one hybrid edge in each of the $2_1$-cycles on $\mathcal N^+$.
\begin{lemma}\label{lem::3_1and4cycles} (Decomposing $3_1$- and $4_1$-cycles)
Let $\mathcal N^+$ be a binary level-1 ultrametric rooted triple network on $\{a,b,c\}$ and let $C$ be either a $3_1$- or a $4_1$-cycle on $\mathcal N^+$. Let $h_1,h_2$ be the hybrid edges of $C$ with $\gamma_i=\gamma(h_i)$. Let $\mathcal N^+_i$ be the network obtained from $\mathcal N^+$ by removing $h_j$, $j\ne i$. Then, under the model $\mathcal M$ for any $x,y\in\{a,b,c\}$,
$$F^{xy}(\theta)= \gamma_1 F^{xy}(\theta_1)+\gamma_2 F^{xy}(\theta_2).$$
\end{lemma}
\begin{proof}
Since the hybrid node of $C$ has only one descendant, we can express the combined coalescent and substitution process on $\mathcal N^+$ as a linear combination of the processes on $\mathcal N^+_1$ and $\mathcal N^+_2$, with coefficients $\gamma_1$ and $\gamma_2$.
\qed\end{proof}
A level-1 rooted triple network may have one $4_1$-cycle, one $3_1$-cycle, or two $3_1$-cycles. In the last case, Lemma \ref{lem::3_1and4cycles} may be applied twice, to express the pattern frequency matrix under the model as a convex combination of four such matrices for networks with no $3_1$-cycles.
With Lemma \ref{lem::2_1cycles} this shows that computation of the matrix of relative site-pattern frequencies of a level-1 ultrametric rooted triple network $\mathcal N^+$ reduces to cases where there are no $2_1$-, $3_1$-, or $4_1$-cycles. The effects of $2_2$- and $3_2$-cycles are more complicated, however, as a coalescent event may or may not occur below the hybrid nodes of such cycles.
The following definition facilitates studying the impact of such cycles. In it a node $p$ may be either an existing node or a new node introduced along an edge of a network, with appropriate division of the original edge length and population function. Although strictly speaking this second case passes out of the class of binary networks, we allow this only to simplify reference to intermediate states of the coalescent process.
\begin{definition} Let $K_p(\theta)$ be the random variable giving the number of lineages at node $p\in V(\mathcal N ^+)$ under the NMSC. With $X_p\subseteq X$ denoting the set of taxa below $p$, $K_p(\theta)$ has sample space $\left\{1,2,\dots,|X_p|\right\}$.
\end{definition}
When $\theta$ is clear from context we write $K_p=K_p(\theta)$. We also use the notation $F^{xy}_{|K_p=m}(\theta)$ to denote the joint distribution of site patterns conditioned on $K_p=m$ under the model $\mathcal M$ with parameters $\theta$.
\begin{figure}
\begin{center}
\includegraphics[width=4.in]{2cyc.pdf}
\end{center}
\caption{ (Top) A rooted level-1 ultrametric network on $\{a,b,c\}$, with the $2_2$-cycle closest to LSA($a,b$) shown. (Bottom) The networks $\mathcal N^+_1$, $\mathcal N^+_2$, and $\mathcal N^+_0$ obtained from $\mathcal N^+$, respectively, as described in Lemma
\ref{lem::2cyc}. Note that there may be additional cycles along the dashed lines, with hybrid nodes above node $q$ and taxon $c$.}\label{fig::2cyc}
\end{figure}
\begin{lemma}\label{lem::2cyc}(Decomposing $2_2$-cycles)
Let $\mathcal N^+$ be a binary level-1 ultrametric rooted triple network on $\{a,b,c\}$ without $2_1$- or $3_1$-cycles. Suppose, as depicted in Figure \ref{fig::2cyc}, $C$ is a $2_2$-cycle on $\mathcal N^+$, with edges $h_1,h_2$ from node $q$ to hybrid node $p$, hybridization parameters $\gamma_i=\gamma(h_i)$, leaf descendants $a,b$ of $p$, and no cycles below $p$. Denote by $\mathcal N^+_i$, $i=1,2$ the network obtained from $\mathcal N^+$ by removing $h_j$, $j\ne i$ and by $\mathcal N^+_0$
the network obtained from $\mathcal N^+$ by deleting all edges and nodes below $q$ and attaching edges $(q,a)$ and $(q,b)$ of appropriate length so that $\mathcal N^+_0$ is ultrametric. Then, under the model $\mathcal M$ for any $x,y\in\{a,b,c\}$,
\begin{align*}
F^{xy}(\theta)= & \gamma_1^2 F^{xy}(\theta_1 )+\gamma_2^2 F^{xy}(\theta_2 )+ P(K_p=2)2\gamma_1\gamma_2F^{xy}(\theta_0 )+P(K_p=1)2\gamma_1\gamma_2F^{xy}_{|K_p=1}(\theta_1).
\end{align*}
\end{lemma}
\begin{proof}
Since the structure of the model for $\mathcal N^+$, $\mathcal N^+_1$, and $\mathcal N^+_2$ is identical below $p$, we may also use $K_p$ to denote
$K_p(\theta_1)$ and $K_p(\theta_2)$. Thus
\begin{align}
F^{xy}(\theta)&=P(K_p=2) F^{xy}_{|K_p=2}(\theta)+P(K_p=1) F^{xy}_{|K_p=1}(\theta)\notag\\
&= P(K_p=2)\left [\gamma_1^2 F^{xy}_{|K_p=2}(\theta_1)+\gamma_2^2 F^{xy}_{|K_p=2}(\theta_2)+2\gamma_1\gamma_2F^{xy}(\theta_0)\right ] + P(K_p=1) F^{xy}_{|K_p=1}(\theta ).\label{eq:2cyceq}
\end{align}
But since $F^{xy}_{|K_p=1}(\theta)=F^{xy}_{|K_p=1}(\theta_i)$ for $i=1,2$ by the argument used for Lemma \ref{lem::2_1cycles}, and the identity $1=\gamma_1^2+\gamma_2^2+2\gamma_1\gamma_2$,
$$
F^{xy}_{|K_p=1}(\theta)=\gamma_1^2 F^{xy}_{|K_p=1}(\theta_1)+ \gamma_2^2 F^{xy}_{|K_p=1}(\theta_2)+2\gamma_1\gamma_2F^{xy}_{|K_p=1}(\theta_1).
$$
Substituting this into equation \eqref{eq:2cyceq} and using $P(K_p=1)+P(K_p=2)=1$ yields the claim.
\qed\end{proof}
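The four coefficients appearing in the conclusion of Lemma \ref{lem::2cyc} always form a convex combination, since $\gamma_1^2+\gamma_2^2+2\gamma_1\gamma_2=(\gamma_1+\gamma_2)^2=1$ and $P(K_p=1)+P(K_p=2)=1$. A quick numeric sanity check of this identity (illustrative only, not part of the proof):

```python
import random

random.seed(0)
for _ in range(100):
    g1 = random.random(); g2 = 1.0 - g1      # hybridization parameters
    p2 = random.random(); p1 = 1.0 - p2      # P(K_p = 2), P(K_p = 1)
    coeffs = [g1**2, g2**2, p2 * 2 * g1 * g2, p1 * 2 * g1 * g2]
    assert all(w >= 0 for w in coeffs)
    assert abs(sum(coeffs) - 1.0) < 1e-12    # (g1 + g2)^2 = 1
```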
Note that while $\mathcal N^+_1$ and $\mathcal N^+_2$ of Lemma \ref{lem::2cyc} have the same topology and edge lengths, the hybrid edges $h_1,h_2$ may have different population sizes. Thus $F^{xy}(\theta_1 )\ne F^{xy}(\theta_2 )$ is possible. This is in contrast to the argument on removing $2_1$-cycles in Lemma \ref{lem::2_1cycles}, in which hybrid edge population sizes did not play a role.
Since a level-1 3-taxon rooted network cannot have a $2_2$-cycle above a $3_2$-cycle, Lemma \ref{lem::2cyc} can be applied recursively to the $\mathcal N^+_i$, $i\in\{1,2\}$ to eliminate all $2_2$-cycles. Thus the remaining complication to producing an
expression
for $F^{xy}(\theta)$ as a convex combination of such matrices for networks without $2_1$-, $3_1$-, or $2_2$-cycles is the presence of
terms of the form $F^{xy}_{|K_p=1}({\theta}')$ where ${\mathcal N'^+}$ has cherry $\{a,b\}$ and neither $2_1$- nor $3_1$-cycles. Such terms are handled with the following.
\begin{lemma} \label{lem::CondCoal} (Decomposing $2_2$-cycles conditioned on coalescence)
Let $\mathcal N^+$ be a binary level-1 ultrametric rooted triple network on $\{a,b,c\}$ without $2_1$- or $3_1$-cycles, and
on which $\{a,b\}$ form a cherry. Let $p$ be the parent of the common parent of $a,b$.
Denote by ${\widetilde {\mathcal N}^+}$ a network obtained from $\mathcal N^+$ by removing one hybrid edge from each $2_2$-cycle.
If $\mathcal N^+$ has no $3_2$-cycle, then
$$F^{xy}_{|K_p=1}(\theta)= F^{xy}_{|K_p=1}({\widetilde\theta}).$$
If $\mathcal N^+$ has a $3_2$-cycle, with hybrid edges $h_1,h_2$ and hybridization parameters $\gamma_i=\gamma(h_i)$, then let ${\widetilde {\mathcal N}}^+_i$ be the network obtained from ${\widetilde {\mathcal N}}^+$ by removing $h_j$, $j\ne i$. Then
$$F^{xy}_{|K_p=1}(\theta)= \gamma_1F^{xy}_{|K_p=1}(\widetilde \theta_1)+\gamma_2F^{xy}_{|K_p=1}(\widetilde \theta_2).$$
\end{lemma}
\begin{proof}
Conditioned on $K_p=1$, there is only one lineage in any population above $p$ and below the hybrid node of a $3_2$-cycle, if such a cycle is present, or the LSA otherwise.
Thus, as in the proof of Lemma \ref{lem::2_1cycles}, no $2_2$-cycle will have any effect on the joint distribution.
If there is no $3_2$-cycle on $\mathcal N^+$ this yields the claim. If there is a $3_2$-cycle, since only one lineage reaches the hybrid node of the $3_2$-cycle, we obtain the claim as in the proof of Lemma \ref{lem::3_1and4cycles}.
\qed\end{proof}
\begin{lemma}\label{lem::3_2cyc} (Decomposing $3_2$-cycles)
Let $\mathcal N^+$ be a binary level-1 ultrametric rooted triple network on $\{a,b,c\}$ with no cycles below its LSA except a $3_2$-cycle $C$.
Let $p$ denote the hybrid node of $C$, and $h_1,h_2$ the hybrid edges with hybridization parameters $\gamma_i=\gamma(h_i)$ and lengths $y,z$, as depicted at the top of Figure \ref{fig::3_2c}. Let $\mathcal N^+_1$, $\mathcal N^+_2,$ $\mathcal N^+_3,$ and $\mathcal N^+_4$ be the networks derived from $\mathcal N^+$ shown at the bottom of Figure \ref{fig::3_2c}. Then, under the model $\mathcal M$, for any $x,y\in\{a,b,c\}$, with $K_p=K_p(\theta)$,
\begin{equation*}
\begin{split}
F^{xy}(\theta)=& \gamma_1^2 F^{xy}(\theta_1) + \gamma_2^2 F^{xy}(\theta_2)+P(K_p=2) \gamma_1\gamma_2 \left ( F^{xy}(\theta_3) + F^{xy}(\theta_4)\right ) \\
&\ \ \ \ +P(K_p=1) \gamma_1 \gamma_2 \left (F^{xy}_{|K_p=1}(\theta_1)+F^{xy}_{|K_p=1}(\theta_2) \right ).\\
\end{split}
\end{equation*}
\end{lemma}
\begin{proof}
Observe that
\begin{equation}
\begin{split}
F^{xy}(\theta)&= P(K_p=2)F^{xy}_{|K_p=2}(\theta)+P(K_p=1)F^{xy}_{|K_p=1}(\theta)\\
&= P(K_p=2)\left [ \gamma_1^2 F^{xy}_{|K_p=2}(\theta_1) + \gamma_2^2 F^{xy}_{|K_p=2}(\theta_2) + \gamma_1\gamma_2 F^{xy}(\theta_3) +\gamma_1\gamma_2 F^{xy}(\theta_4)\right ] \\
&\ \ \ \ \ +P(K_p=1)F^{xy}_{| K_p=1}(\theta).
\end{split}\label{eq:3_2cyc}
\end{equation}
Since $F^{xy}_{|K_p=1}(\theta)=\gamma_1 F^{xy}_{|K_p=1}(\theta_1)+\gamma_2 F^{xy}_{|K_p=1}(\theta_2)$
and $\gamma_1+\gamma_2=1$,
$$ F^{xy}_{|K_p=1}(\theta)=\gamma_1^2 F^{xy}_{|K_p=1}(\theta_1)+ \gamma_2 ^2 F^{xy}_{|K_p=1}(\theta_2) +\gamma_1 \gamma_2 \left(F^{xy}_{|K_p=1}(\theta_1)+F^{xy}_{|K_p=1}(\theta_2)\right).
$$
Using this and $P(K_p=1)+P(K_p=2)=1$
in equation \eqref{eq:3_2cyc} yields the claim.
\qed\end{proof}
\begin{figure}
\begin{center}
\includegraphics{3_2a.pdf}
\end{center}
\caption{ (Top) A rooted level-1 ultrametric network with a $3_2$-cycle, and (Bottom) the networks $\mathcal N^+_1$, $\mathcal N^+_2$, $\mathcal N^+_3$, and $\mathcal N^+_4$ used in Lemma \ref{lem::3_2cyc}. Although only topology and branch lengths are shown, population size parameters for each edge of $\mathcal N^+_i$ are obtained from the corresponding ones of $\mathcal N^+$.
}\label{fig::3_2c}
\end{figure}
\section{Theoretical logDet distances}\label{sec::logDet}
In this section, we show that, under the mixture of coalescent mixtures model $\mathcal M$ on an ultrametric level-1 rooted triple network, the theoretical logDet distances between taxa determine most topological features of the network. The previous section established that the pattern frequency matrices for the model on such networks can be expressed as convex combinations of those on simpler networks (possibly subject to conditioning), whose only cycles are $2_3$-cycles located above
${\operatorname{LSA}}(a,b,c)$, such as depicted in Figure \ref{fig::chain2cyc}. The following algebraic lemma is key to drawing conclusions about the determinants
of such linear combinations of matrices.
\begin{lemma}[\cite{Allman2019}, Lemma 3.1]\label{lem::main}
Suppose for each $i$, $F_i$ and $G_i$ are $\kappa\times \kappa$ symmetric positive definite matrices such that $y^TF_iy\geq y^TG_iy$ for every $y\in {\mathbb R}^\kappa$ with the inequality strict for some $y$ and some $i$. For $\alpha_i\geq 0$, let
$$ F=\sum_{i=1}^{m}\alpha_i F_i,\quad G=\sum_{i=1}^{m}\alpha_i G_i.$$
Then
$$ \det F> \det G.$$
\end{lemma}
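Lemma \ref{lem::main} can be illustrated numerically: perturbing symmetric positive definite matrices $F_i$ downward in the quadratic-form order strictly decreases the determinant of any convex combination. The random instances below are a sketch of our own, not part of the proof:

```python
import numpy as np

rng = np.random.default_rng(1)
k, m = 4, 3
Fs, Gs = [], []
for _ in range(m):
    A = rng.standard_normal((k, k))
    F_i = A @ A.T + k * np.eye(k)           # symmetric positive definite
    v = rng.standard_normal((k, 1))
    G_i = F_i - 0.1 * (v @ v.T)             # y^T F_i y >= y^T G_i y for all y
    assert np.all(np.linalg.eigvalsh(G_i) > 0)  # G_i still positive definite
    Fs.append(F_i); Gs.append(G_i)
alpha = rng.random(m); alpha /= alpha.sum()  # convex weights
F = sum(a * M for a, M in zip(alpha, Fs))
G = sum(a * M for a, M in zip(alpha, Gs))
assert np.linalg.det(F) > np.linalg.det(G)
```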
Analyzing the pattern frequency matrix for networks with $2_3$-cycles above ${\operatorname{LSA}}(a,b,c)$ requires a detailed look at the coalescent process
in such a chain of $2$-cycles.
For a simple case, assume lineages $x$ and $y$ enter the chain consisting of the single 2-cycle depicted in Figure \ref{fig::single2cycle}.
Population functions $N_1$, $N_2$, $N_3$, and $N_4$ are fixed for each edge, where for convenience we shift domains from the convention in Section \ref{ssec:NMSC} so that $N_1$ is defined on $[0,t_0)$, $N_2,N_3$ on $[t_0,t_1)$, and $N_4$ on $[t_1,\infty)$.
The probability density $c(t)$ for time to coalescence of the lineages $x,y$ entering at the bottom node ($t=0$) can be calculated piecewise as follows:
For $t\in[0,t_0)$,
$$c(t)=\frac 1{{ N}_1(t)} \exp \left ( -\int_0^t \frac 1{{ N}_1(\tau)}\, d\tau\right ),$$
as given in \cite{Allman2019}.
For $t\in[t_0,t_1)$,
$$c(t)={p_0}\left( \gamma^2 c_2(t) +(1-\gamma)^2 c_3(t)\right )$$
where $\gamma$ and $1-\gamma$ are the hybridization parameters of the two hybrid edges and $p_0=1-\int_0^{t_0} c(t)\,dt$ is the probability of no coalescence before $t_0$. (With probability $2\gamma(1-\gamma)$ the lineages enter different hybrid edges and cannot coalesce before $t_1$, so that case contributes no density on this interval.) For $i=2,3$,
$$c_i(t)=\frac 1{N_i(t)} \exp \left (-\int_{t_0} ^t \frac 1{N_i(\tau)}\, d\tau\right).$$
Finally, for $t\in[t_1,\infty)$, with $p_1=1-\int_0^{t_1} c(t)\,dt$ the probability of no coalescence before $t_1$,
$$c(t)={p_1} \frac 1{N_4(t)} \exp \left (-\int_{t_1}^t \frac 1{N_4(\tau)}\, d\tau\right).$$
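For constant population sizes the three pieces of $c(t)$ are explicit exponentials. The following numerical sketch (with illustrative parameter values of our own choosing) verifies that the piecewise density integrates to 1:

```python
import numpy as np

# Illustrative constants: constant population size on each edge.
N1, N2, N3, N4 = 1.0, 0.5, 2.0, 1.0
t0, t1, gam = 0.4, 1.0, 0.3

p0 = np.exp(-t0 / N1)                    # P(no coalescence before t0)
d = t1 - t0
p1 = p0 * (gam**2 * np.exp(-d / N2)      # P(no coalescence before t1)
           + (1 - gam)**2 * np.exp(-d / N3)
           + 2 * gam * (1 - gam))

def c(t):
    t = np.asarray(t, dtype=float)
    out = np.empty_like(t)
    lo, mid, hi = t < t0, (t >= t0) & (t < t1), t >= t1
    out[lo] = np.exp(-t[lo] / N1) / N1
    out[mid] = p0 * (gam**2 * np.exp(-(t[mid] - t0) / N2) / N2
                     + (1 - gam)**2 * np.exp(-(t[mid] - t0) / N3) / N3)
    out[hi] = p1 * np.exp(-(t[hi] - t1) / N4) / N4
    return out

def trapz(y, x):  # simple trapezoid rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

T = 40.0
mass = 0.0
for seg_lo, seg_hi in [(0.0, t0), (t0, t1), (t1, T)]:
    xs = np.linspace(seg_lo, seg_hi, 200001)
    mass += trapz(c(xs), xs)
mass += p1 * np.exp(-(T - t1) / N4)      # analytic tail beyond t = T
```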
\begin{figure}
\begin{center}
\includegraphics[width=5cm]{2cycdist_v2.pdf}
\end{center}
\caption{A 2-cycle and adjacent tree edges in a species network, depicted (Left) with pipes whose widths represent population sizes, and (Right) as a schematic.}\label{fig::single2cycle}
\end{figure}
It is straightforward to extend this analysis of $c(t)$ to a chain with an arbitrary number of 2-cycles.
Since we will not need an explicit formula for the distribution of coalescent times for two lineages entering such a chain of 2-cycles, we omit a complete derivation, and only state the properties of it that we use.
Formally, a \emph{chain of 2-cycles} is a species network with leaf $a_0$, internal vertices $b_1$, $a_1$, $b_2$, $a_2,\dots, a_n$, with root $r=a_n$, tree edges $e_i=(b_i,a_{i-1})$, and hybrid edges
$e_i'=(a_i,b_i)$, $e_i''=(a_i,b_i)$, together with edge lengths, piecewise-continuous population size functions on each edge, including above the root, and hybrid parameters $\gamma_i',\gamma_i''=1-\gamma_i'$ for each pair of hybrid edges $e_i',e_i''$.
Using the technical assumptions given in Subsection \ref{ssec:NMSC}, it is straightforward to deduce the following.
\begin{lemma}\label{lem::coalFacts} Consider a fixed chain of 2-cycles with leaf $a_0$. Let $c:[0,\infty)\to \mathbb R^{\ge 0}$ denote the probability density function under the NMSC for the time $T$ of coalescence of two lineages entering the chain at $a_0$. Then $c(t)$ is piecewise
continuous, and $c(t)>0$ for all $t\in[0,\infty)$.
\end{lemma}
The next three technical lemmas generalize Lemmas 4.1, 4.4, and 4.5 of \cite{Allman2019} from a tree to a network setting. These culminate in Proposition \ref{prop::chain2cyc} below, which justifies the application of Lemma \ref{lem::main}.
\begin{lemma} \label{lem:Cineq} Let $c:[0,\infty)\to \mathbb R^{\ge 0}$ be the probability density function under the NMSC for the time $T$ of coalescence of two lineages entering a chain of 2-cycles, and for times $t_2>t_1\ge 0$
let $c_i$ be the conditional density given $T\ge t_i$. Then the cumulative distribution functions for $c_1$ and $c_2$
satisfy
$$C_1(t)\ge C_2(t),$$
with the inequality strict on some interval.
\end{lemma}
\begin{proof}
Since $c_2(t)=0\le c_1(t)$ for all $t< t_2$, we have $C_1(t)\ge C_2(t)=0$ for $t\le t_2$. Moreover, Lemma \ref{lem::coalFacts} gives $c_1(t)>0=c_2(t)$ for $t\in(t_1,t_2)$, so
the inequality is strict on that subinterval.
For $t\ge t_2$, let $J=\int_{t_1}^{t_2} c_1(s) \,ds$ and $I(t)=\int_{t_2}^t c_1(s) \,ds$. Since $c_2(s)=c_1(s)/(1-J)$ for $s\ge t_2$, we have $C_1(t)=J+I(t)$ and $C_2(t)=I(t)/(1-J)$, so
\begin{align*}
C_1(t)-C_2(t)&=J+I(t)-\frac {I(t)}{1-J}\\
&= J-\frac J{1-J} I(t).
\end{align*}
Differentiating and using Lemma \ref{lem::coalFacts} shows $C_1(t)-C_2(t)$ is decreasing for $t>t_2$. Since $C_1(t)-C_2(t)\to 0$ as $t\to\infty$, this implies $C_1(t)-C_2(t)\ge 0$, as claimed.
\qed\end{proof}
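As a concrete check of Lemma \ref{lem:Cineq}, take $c(t)=e^{-t}$, the density for a single population of constant size 1. The conditional CDFs then have the closed form $C_i(t)=1-e^{-(t-t_i)}$ for $t\ge t_i$, and the inequality can be verified on a grid (illustrative only):

```python
import numpy as np

t1, t2 = 0.5, 1.2   # conditioning times, t2 > t1 >= 0

def C(t, s):
    """CDF of an Exp(1) coalescence time conditioned on T >= s."""
    return np.where(t >= s, 1.0 - np.exp(-np.maximum(t - s, 0.0)), 0.0)

ts = np.linspace(0.0, 20.0, 4001)
C1, C2 = C(ts, t1), C(ts, t2)
assert np.all(C1 >= C2)          # C1(t) >= C2(t) everywhere
assert np.any(C1 > C2 + 1e-3)    # strict on an interval
```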
\begin{lemma}\label{lem:inteig} Let $c_1,c_2$ be probability density functions on $[0,\infty)$, with cumulative distribution functions $C_1,C_2$, such that $C_1(t)\ge C_2(t)$ for all $t$, with the inequality strict on some interval. Let
$s(t)=\int_{0}^{t} \mu(x)\,dx$ for a positive, piecewise-continuous $\mu$ on $[0,\infty)$ such that $s(\infty)=\infty$.
For $\lambda\le 0$ let
$$f(\lambda,\mu, C_i)=\int_0^\infty \exp(2 \lambda s(t))c_i(t) \,dt.$$
Then if $\lambda=0$,
$$f(0,\mu, C_1)=f(0,\mu, C_2)=1,$$
while for $\lambda<0$
$$f(\lambda,\mu, C_1)>f(\lambda,\mu, C_2).$$
\end{lemma}
\begin{proof}
For $\lambda=0$ we find $f(0,\mu, C_i)=\int_0^\infty c_i(t)\,dt=1$.
If $\lambda<0$, integrating by parts yields
\begin{align*}
f(\lambda,\mu, C_i) &= \exp(2 \lambda s(t))C_i(t)\bigg |_{t=0}^\infty - 2\lambda \int_0^\infty \mu(t) \exp(2 \lambda s(t))C_i(t) dt\\
&= - 2\lambda \int_0^\infty \mu(t) \exp(2 \lambda s(t))C_i(t) dt,
\end{align*}
since the boundary term vanishes: $C_i(0)=0$, while $\exp(2\lambda s(t))\to 0$ as $t\to\infty$ because $\lambda<0$ and $s(\infty)=\infty$.
Thus
$$
f(\lambda,\mu, C_1)- f(\lambda,\mu, C_2)=
-2\lambda \int_0^\infty \mu(t) \exp(2 \lambda s(t)) (C_1(t)-C_2(t)) dt.
$$
As the integrand is non-negative, and positive on some interval, the claim for $\lambda<0$ follows.
\qed\end{proof}
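Continuing the exponential example, take $\mu\equiv 1$, so $s(t)=t$. Then the integral of Lemma \ref{lem:inteig} for the conditional density given $T\ge t_i$ reduces to $f(\lambda,\mu,C_i)=e^{2\lambda t_i}/(1-2\lambda)$, and the dichotomy between $\lambda=0$ and $\lambda<0$ can be checked directly (illustrative only):

```python
import numpy as np

def f(lam, ti):
    """Closed form of the integral of exp(2*lam*t) * exp(-(t - ti))
    over [ti, infinity), for an Exp(1) time conditioned on T >= ti."""
    return np.exp(2 * lam * ti) / (1 - 2 * lam)

t1, t2 = 0.5, 1.2
assert abs(f(0.0, t1) - 1.0) < 1e-12 and abs(f(0.0, t2) - 1.0) < 1e-12
for lam in (-0.1, -1.0, -5.0):
    assert f(lam, t1) > f(lam, t2)   # strict inequality for lambda < 0
```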
\begin{lemma}\label{lem:41}
Consider a GTR substitution model with rate matrix $Q\ne 0$, a scalar-valued rate function $\mu(t)$ satisfying the assumptions of Subsection \ref{ssec::substitution}, and a cumulative distribution function $C(t)$ for the time $T$ to coalescence of 2 lineages in a population.
Let $F(x)=F(Q,\mu,C,x)$ be the expected site-pattern frequency array for two lineages that enter a population at time 0 and undergo substitutions at rate $\mu(t)Q$ conditioned on $T\ge x$.
For $x<x_1$ let $\widetilde F(x,x_1) =\widetilde F(Q,\mu,C,x,x_1)$ be the expected site-pattern frequency array for two lineages that enter a population at time 0 and undergo substitutions at rate $\mu(t)Q$ conditioned on $x< T< x_1$.
Then for all $0\ne y\in \mathbb R^k$ the functions $y^TF(x)y$ and $y^T\widetilde F(x,x_1)y$ are positive-valued and decreasing in $x$.
Moreover there exists a $y$ for which both are strictly decreasing, and for which if $x_0<x_1\le x_2$
$$y^T\widetilde F(x_0,x_1)y>y^TF(x_2)y.$$
\end{lemma}
\begin{proof} Let $c_x(t)$ denote the conditional probability density function for the coalescent time $T$ given $T>x$.
With $s(t)=\int_{0}^{t} \mu(\tau)\,d\tau$, the Markov matrix describing the substitution process on a single lineage from time $0$ to time $t$ is
$$M(\mu,Q,t)=\exp\left (s(t)Q \right).$$
Thus using time-reversibility of the substitution process, with $\pi$ the stationary distribution for $Q$,
$$F(x)={\operatorname{diag}}(\pi) \int_0^\infty (M(\mu,Q,t))^2 c_x(t)\, dt.$$
Here the square of the Markov matrix accounts for substitutions in the two lineages before coalescence.
Now $S^{-1}QS$ is diagonal for a matrix $S={\operatorname{diag}}(\pi)^{-1/2} U$ with $U$ orthogonal, and $Q$'s eigenvalues satisfy $0=\lambda_1\ge \lambda_2\ge\cdots\ge\lambda_k$ with at least one $\lambda_i<0$ (Lemma 2.2 of \cite{Allman2019}). Thus diagonalizing the Markov matrix yields
\begin{align*}
U^T{\operatorname{diag}}(\pi)^{-1/2} F(x){\operatorname{diag}}(\pi)^{-1/2} U&=\int_0^\infty \Lambda_{M(\mu,Q,t)} c_x(t) \, dt
\end{align*}
where $\Lambda_{M(\mu,Q,t)}$ is diagonal with entries $\exp(2s(t)\lambda_i)$. The diagonal entries of this integral are thus
$$\int_0^\infty \exp(2s(t)\lambda_i) c_x(t) \, dt.$$
But Lemmas \ref{lem:Cineq} and \ref{lem:inteig}
show this is positive, decreasing in $x$, and strictly decreasing for some $i$.
This establishes the claims about $F$, by choosing $y$ to be any eigenvector of $Q$ whose eigenvalue is negative to obtain a strictly decreasing function.
The corresponding claims about $\widetilde F$ are given by the same argument with the cumulative distribution function $C$ replaced by the conditional distribution function given the coalescent time $T<x_1$, that is, with
$$\widetilde C_{x_1} (t) =\begin {cases}
C(t)/C(x_1) & \text{if $t\le x_1$}\\
1&\text{ if $t>x_1$}
\end{cases}.$$
Finally, since for every $t$ the function $\widetilde C_{x_1} (t)$ is decreasing in $x_1$, then for any $y$ and $x_0$, a similar diagonalization argument and again using Lemma \ref{lem:inteig} shows the function $y^T\widetilde F(x_0,x_1)y$ is decreasing in $x_1$. Thus
if $x_0<x_1\le x_2$, then
$$y^T\widetilde F(x_0,x_1)y\ge\lim_{x_1\to\infty} y^T\widetilde F(x_0,x_1)y=y^T F(x_0)y\ge y^T F(x_2)y.$$
Moreover, if $y$ is an eigenvector of $Q$ whose eigenvalue is negative, then strict inequality holds.
\qed\end{proof}
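As a concrete instance of the lemma, consider the Jukes--Cantor model (a GTR model with uniform $\pi$ and closed-form $e^{tQ}$), clock rate $\mu\equiv 1$, and an exponential coalescent with unit population size. The sketch below (parameter values are ours, for illustration) computes $F(x)$ by numerical integration and checks that it is symmetric positive definite with determinant decreasing in $x$:

```python
import numpy as np

alpha = 1.0 / 3.0   # Jukes-Cantor off-diagonal substitution rate

def F(x, T=60.0, n=200001):
    """Pattern frequency matrix for coalescence time Exp(1) given T >= x."""
    ts = np.linspace(x, T, n)
    cx = np.exp(-(ts - x))                      # conditional density
    # For JC, M(t)^2 = M(2t) = a(t) I + b(t) (J - I), with J all-ones:
    a = (1 + 3 * np.exp(-8 * alpha * ts)) / 4
    b = (1 - np.exp(-8 * alpha * ts)) / 4
    def trapz(y):
        return float(np.sum((y[1:] + y[:-1]) * np.diff(ts)) / 2)
    A, B = trapz(a * cx), trapz(b * cx)
    return 0.25 * (B * np.ones((4, 4)) + (A - B) * np.eye(4))

F1, F2 = F(0.2), F(0.8)
assert np.allclose(F1, F1.T) and np.all(np.linalg.eigvalsh(F1) > 0)
assert np.linalg.det(F1) > np.linalg.det(F2)    # det decreasing in x
```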
\begin{proposition}\label{prop::chain2cyc}
Let $\mathcal N^+$ be a binary level-1 ultrametric rooted triple network on $\{a,b,c\}$ whose LSA network has topology $((a,b),c)$, but above ${\operatorname{LSA}}(\{a,b,c\}, \mathcal N^+)$ there is possibly a chain of 2-cycles. Then, under a coalescent mixture model on $\mathcal N^+$ with fixed parameters $\mu(t)$, $\{N_e\}$, $Q$, $\pi$, the relative site-pattern frequency matrices $F^{ab}$, $F^{bc}$, and $F^{ac}$ are symmetric positive definite, with $F^{ac}=F^{bc},$ and satisfy
$$y^TF^{ab}y\geq y^TF^{ac}y$$ for every $y\in {\mathbb R}^k$, with the inequality strict for some $y$.
Moreover, the same statements hold when the arrays $F^{xy}$ are replaced by $F^{xy}_{|K_p=1}$ with $p$ a node placed above the parent of $a,b$ and below the parent of $c$.
\end{proposition}
\begin{proof}
Let $x_1$ be the length of the pendant edges to $a$ and $b$, and $x_2$ the length of the pendant edge to $c$, so $x_2>x_1$. Then applying Lemma \ref{lem:41} for an appropriately chosen distribution $C(t)$ of coalescent times so
$$F^{ab}=F(x_1),\ \ F^{ac}=F^{bc}=F(x_2),$$
the result is immediate.
Let $x_p$ denote the distance from $a$ or $b$ to $p$, so $x_1<x_p< x_2$. Then conditioning on $K_p=1$, in the notation of Lemma \ref{lem:41} we have
$$F^{ab}_{|K_p=1}=\widetilde F(x_1,x_p),\ \ F^{ac}_{|K_p=1}=F^{bc}_{|K_p=1}=F^{bc}=F(x_2),$$ so again Lemma \ref{lem:41} yields the claim.
\qed\end{proof}
We now turn from considering a coalescent mixture model, with a single substitution model class, to the mixture of coalescent mixtures $\mathcal M$.
\begin{lemma}\label{lem::switchability}
Let $\mathcal N^+$ be a level-1 ultrametric rooted triple network on $\{a,b,c\}$ with no $4$-cycle. Suppose $\{a,b\}$ form a cherry in the tree topology obtained from suppressing all cycles of $ \mathcal N^+$. Then, under the mixture of coalescent mixtures model $\mathcal M$ on $\mathcal N^+$,
$ F^{ac}(\theta)= F^{bc}(\theta).$
\end{lemma}
\begin{proof}
By Lemmas \ref{lem::2_1cycles} and \ref{lem::3_1and4cycles}, we may assume $\mathcal N^+$ has neither a $2_1$- nor a $3_1$-cycle, so there are no cycles below the parent of $a,b$. By the ultrametricity of the network, $a$ and $b$ are exchangeable under the combined coalescent and substitution model for each substitution model class, and therefore for the model $\mathcal M$.
\qed\end{proof}
This result is used to show that logDet distances from rooted triple networks with only $2$- and $3_1$-cycles satisfy the same equality and inequality relationships as those from trees.
\begin{proposition} (No $4_1$-cycles or $3_2$-cycles) \label{prop::2and3_1cyc}Let $ \mathcal N^+$ be a level-1 ultrametric rooted triple network on $\{a,b,c\}$ with neither a 4-cycle nor a $3_2$-cycle. Let ${\mathcal T}=((a,b),c)$ be the tree topology obtained after suppressing all cycles in $ \mathcal N^+$. Under the mixture of coalescent mixtures model $\mathcal M$ on $ \mathcal N^+$ the theoretical logDet distances satisfy
$$d_{LD}(a,c)=d_{LD}(b,c)> d_{LD}(a,b).$$
\end{proposition}
\begin{proof} Under the model $\mathcal M$, the frequencies of bases at any taxon are identical, given by the same convex combination of the base frequency vectors $ \pi_i$ for substitution classes $i$. Thus the value of $\ln(g_ug_v)$ in the definition of the logDet distance, equation \eqref{formula::logdet}, is identical for every pair of distinct taxa $x,y\in\{a,b,c\}$.
It thus suffices to show
$$\det F^{ab}(\theta) \geq \det F^{ac}(\theta) =\det F^{bc}(\theta).$$
Lemma \ref{lem::switchability} gives the equality. By Lemmas \ref{lem::2_1cycles}, \ref{lem::3_1and4cycles}, and \ref{lem::2cyc}, we can express $ F^{xy}(\theta)$ as a convex combination of relative site-pattern frequency matrices, possibly conditioned on $K_p=1$, of networks of the form of the tree ${\mathcal T}$ joined to a (possibly empty) chain of 2-cycles above ${\mathcal T}$'s root, such as depicted in Figure \ref{fig::chain2cyc}. By Proposition \ref{prop::chain2cyc} each of those matrices for coalescent mixture models satisfies the hypotheses of Lemma \ref{lem::main}. Lemma \ref{lem::main} thus yields the claim for mixtures of coalescent mixtures by considering a convex combination across both the networks and substitution model classes.\qed\end{proof}
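The determinant comparison in the final step rests on the fact that dominance in the quadratic-form (Loewner) order between symmetric positive definite matrices implies a determinant inequality. The following is a small numerical sketch of that fact only (Lemma \ref{lem::main} itself is not restated here); the random matrices and seed are illustrative choices of our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(k):
    # A random symmetric positive definite k x k matrix.
    a = rng.normal(size=(k, k))
    return a @ a.T + k * np.eye(k)

k = 4
F2 = random_spd(k)

# Build F1 dominating F2 in the Loewner order: F1 = F2 + (PSD term),
# so y^T F1 y >= y^T F2 y for every y.
b = rng.normal(size=(k, k))
F1 = F2 + b @ b.T

# Loewner dominance of positive definite matrices forces det F1 >= det F2.
print(np.linalg.det(F2), np.linalg.det(F1))
assert np.linalg.det(F1) >= np.linalg.det(F2)
```

The inequality follows since $F_2^{-1/2} F_1 F_2^{-1/2} \succeq I$ has all eigenvalues at least 1.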
A weaker result, without the inequality, applies to networks with $3_2$-cycles.
\begin{proposition} ($3_2$-cycle) \label{prop::3_2cyc} Let $ \mathcal N^+$ be a level-1 ultrametric rooted triple network on $\{a,b,c\}$ with a $3_2$-cycle. Let ${\mathcal T}=((a,b),c)$ be the tree topology obtained after suppressing all cycles in $ \mathcal N^+$. Then under the mixture of coalescent mixtures model $\mathcal M$ on $ \mathcal N^+$, the theoretical logDet distances satisfy
$$d_{LD}(a,c)=d_{LD}(b,c).$$
\end{proposition}
\begin{proof}
From Lemma \ref{lem::switchability}, $ F^{ac}(\theta) =F^{bc}(\theta)$, so the result follows as in the previous proof.
\qed\end{proof}
Proposition \ref{prop::2and3_1cyc}, and the arguments leading to it, show that the equality and inequality relationships of logDet distances between only 3 taxa carry no signal of either $2$- or $3_1$-cycles. Proposition \ref{prop::3_2cyc}, however, leaves open the possibility that for a network with a $3_2$-cycle the smallest distance may not necessarily correspond to the taxa which are neighbors after 2- and 3-cycles are suppressed. This suggests that the presence of a $3_2$-cycle might be detectable, at least under some circumstances. In Section \ref{sec::other} we return to this issue, providing a more in-depth analysis of triples of logDet distances.
\begin{proposition}\label{prop::4cyc} ($4_1$-cycle) Let $ \mathcal N^+$ be a level-1 ultrametric rooted triple network on $\{a,b,c\}$ with a $4$-cycle, such that contracting all cycles except the 4-cycle and then deleting one of its hybrid edges gives the trees $((a,b),c)$ and $((a,c),b)$. (See Figure \ref{fig::4c}.)
Then under the mixture of coalescent mixtures model $\mathcal M$ on $ \mathcal N^+$, the theoretical logDet distances satisfy
$$d_{LD}(b,c) > d_{LD}(a,b)\text{ and } d_{LD}(b,c) > d_{LD}(a,c).$$
Moreover, if all other parameters are fixed, then for generic values of the hybridization parameters,
$$ d_{LD}(a,b)\neq d_{LD}(a,c).$$
\end{proposition}
\begin{proof} As in Proposition \ref{prop::2and3_1cyc}, to establish these inequalities for the logDet distance, it is enough to show
\begin{equation}\det F^{bc} (\theta)< \det F^{ab}(\theta)\text{ and }\det F^{bc}(\theta)< \det F^{ac}(\theta).\label{eq:detineq}
\end{equation}
From Lemmas \ref{lem::2_1cycles} and \ref{lem::3_1and4cycles}, for $x,y\in\{a,b,c\}$
$$F^{xy}(\theta)= \gamma_1 F^{xy}(\theta_1)+\gamma_2 F^{xy}(\theta_2)$$
where $\mathcal N^+_1$ and $\mathcal N^+_2$ have the structure of the trees $((a,b),c)$ and $((a,c),b)$ with chains of 2-cycles possibly attached above their roots.
Proposition \ref{prop::chain2cyc} implies that for each GTR substitution model class
$$y^TF^{ab}(\theta_1)y\geq y^TF^{bc}(\theta_1)y=y^TF^{ac}(\theta_1)y\quad\text{ and }\quad y^TF^{ac}(\theta_2)y\geq y^TF^{ab}(\theta_2)y= y^TF^{bc}(\theta_2)y,$$
for every $y\in {\mathbb R}^k$, with the inequalities strict for some choices of $y$.
From this and Lemma \ref{lem::main} we obtain the inequalities \eqref{eq:detineq}.
To see $ d_{LD}(a,b)\neq d_{LD}(a,c)$ for generic hybridization parameters, first observe that these distances extend to analytic functions of the $\gamma$ on all of $\mathbb C$. To show the inequality for generic $\gamma$, it is enough to show there exists one specific choice of $\gamma \in \mathbb C$ for which they are not equal. First consider a choice on the boundary of the parameter space, by letting $\gamma_e=1$, $\gamma_{e'}=0$ for every pair $e,e'$ of hybrid edges with a common child so that the model
reduces to one on the tree $((a,c),b)$.
In this case Theorem 1 of
\cite{Allman2019} establishes the inequality. Continuity implies that there are then choices of $0<\gamma_e<1$, where the model does not degenerate to one on a tree, for which these distances are also not equal.
\qed\end{proof}
Assuming generic parameter values, Proposition \ref{prop::4cyc} combined with earlier results implies that the presence of a $4$-cycle is indicated by three distinct logDet distances computed from expected pattern frequencies. However, the three networks at the top of Figure \ref{fig::4c} all satisfy the hypothesis of Proposition \ref{prop::4cyc}, but using equalities and inequalities of
logDet distances we cannot distinguish them. We can only identify their undirected version as depicted in the bottom of Figure \ref{fig::4c}.
\begin{figure}
\begin{center}
\includegraphics[width=4.in]{4c.pdf}
\end{center}
\caption{ (Top) Three topologically-distinct rooted triple networks with a 4-cycle displaying the trees $((a,b),c)$ and $((a,c),b)$. (Bottom) The undirected rooted topology shared by them. }\label{fig::4c}
\end{figure}
Nonetheless, the combinatorial result of Proposition \ref{prop::combnet} yields information on larger cycles and their hybrid nodes by first using logDet distances to determine undirected rooted triple networks. This gives our main result.
\begin{theorem}\label{thm::main}Let $ \mathcal N^+$ be a binary level-1 ultrametric network on $X$ with $|X|\geq 3$. Let $\widetilde N$ denote the topological LSA network $\mathcal N^\oplus$ modulo 2- and 3-cycles and directions of edges in 4-cycles. Then for generic hybridization parameters under the mixture of coalescent mixtures model $\mathcal M$ on $ \mathcal N^+$, $\widetilde N$ is identifiable from the theoretical logDet distances for pairs of taxa.
\end{theorem}
\begin{proof}
Propositions \ref{prop::2and3_1cyc}, \ref{prop::3_2cyc}, and \ref{prop::4cyc} imply that for generic parameters the three logDet distances for any choice of 3 taxa are distinct if, and only if, the induced rooted triple network has a 4-cycle. Moreover, the unrooted topology of the 4-cycle is determined by the largest of the three distances. Thus
the set $S$ of Proposition \ref{prop::combnet} is determined, yielding the result.
\qed\end{proof}
An example of a rooted level-1 network and the structure that we have shown to be identifiable from logDet distances under the model $\mathcal M$ is given in Figure
\ref{fig::logdetnet}. On the left is a level-1 rooted phylogenetic network
with cycles of various sizes, and on the right the partially directed network that could be inferred from it for generic parameters.
\begin{figure}
\begin{center}
\includegraphics{logdetnet.pdf}
\end{center}
\caption{(Left) A rooted binary level-1 network and (Right) that part of its structure that Theorem \ref{thm::main} identifies from logDet distances under the model $\mathcal M$ for generic parameters. Both $2$- and $3$- cycles are lost, as are the directions of $4$-cycle edges, and hence knowledge of the hybrid nodes in 4-cycles. Directed edges in cycles of size greater than 4 are identifiable.}\label{fig::logdetnet}
\end{figure}
\section{Normalized triples of logDet distances.}\label{sec::other}
In the previous section, we obtained linear equalities and inequalities that the logDet distances between three taxa must satisfy if they are related by various level-1 rooted networks. Combined with the combinatorial result of Section \ref{sec::comb} these are sufficient for proving the identifiability claim that is the main focus of this work. However, it is worthwhile to seek a more complete characterization of what distances are achievable by various network topologies. In particular, with an eye toward practical application, any tighter characterizations would enable stronger testing for network topology from the empirical distances.
Here we conduct a partial investigation, characterizing not the triple of theoretical logDet distances that may be produced on rooted 3-taxon networks, but rather the \emph{normalized triple} obtained by dividing the distances by their sum. The triple of distances forms a point in the non-negative octant $\left (\mathbb R_{\ge0}\right )^3$, while the normalized triple gives a point in the 2-dimensional simplex. Thus plots can be made with the normalized distances that are analogous to the simplex plots for visualizing gene quartet concordance factors \cite{Banos2019,MAR2019,AMR2020}.
Just as simplex plots of concordance factors aid in understanding genomic data sets, we anticipate that the 2-simplex visualization of the normalized logDet distance triples will be similarly useful.
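As a concrete sketch of this visualization, a normalized triple can be mapped to planar coordinates via barycentric weights; the equilateral-triangle vertex placement below is one arbitrary convention, not a prescription from the text.

```python
import math

def simplex_xy(l_ab, l_ac, l_bc):
    """Map a distance triple to planar coordinates in the 2-simplex,
    one triangle vertex per coordinate (barycentric combination)."""
    total = l_ab + l_ac + l_bc
    w = (l_ab / total, l_ac / total, l_bc / total)
    # Vertices of an equilateral triangle, one per coordinate.
    verts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
    x = sum(wi * vx for wi, (vx, vy) in zip(w, verts))
    y = sum(wi * vy for wi, (vx, vy) in zip(w, verts))
    return x, y

# The centroid (1/3, 1/3, 1/3) maps to the center of the triangle.
cx, cy = simplex_xy(1/3, 1/3, 1/3)
print(round(cx, 6), round(cy, 6))  # 0.5 0.288675
```

Points from such triples can then be scattered over the triangle, as in the simplex plots of Figure \ref{fig::regions}.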
We begin with the logDet triples from 3-taxon trees.
\begin{proposition}
Let $\ell=(\ell_{ab},\ell_{ac},\ell_{bc})$ with $0<\ell_{ab}\le \ell_{ac}=\ell_{bc}$ be a triple of positive numbers summing to 1. Then there exists an ultrametric rooted tree with topology $((a,b),c)$ and GTR substitution model parameters such that
the normalized theoretical logDet distances of sequences generated under the coalescent mixture model are $\ell$.
\end{proposition}
\begin{proof} Consider the metric species tree $((a\text{:} 0,b\text{:} 0)\text{:} x/2,c\text{:} x/2)$, and constant population sizes $\epsilon>0$ on all edges. Fix a single substitution model, say the Jukes-Cantor, for sequence generation. Since small population sizes $\epsilon$ result in rapid coalescence with arbitrarily high probability, by taking $\epsilon$ sufficiently small one can show the expected frequency array can be made arbitrarily close to that which would arise if all gene trees exactly matched the species tree.
Thus the theoretical logDet distances can be made arbitrarily close to $d_{LD}(a,b)=0$ and $d_{LD}(a,c)=d_{LD}(b,c)=x$, which normalizes to $(0,1/2,1/2)$.
The unresolved species tree $(a\text{:} x/2,b\text{:} x/2,c\text{:} x/2)$, regardless of the choice of population functions on its edges, yields, by exchangeability of the taxa, a triple of equal logDet distances, which normalizes to $(1/3,1/3,1/3)$.
While the two trees above have 0-length edges and hence are non-binary, perturbations to binary trees with positive length edges can produce normalized logDet distances that are arbitrarily close.
Since the normalized logDet distances are continuous functions of parameters, the parameter space is connected, and the image of the normalized distances lies in a line segment by Proposition \ref{prop::2and3_1cyc}, the claim follows.
\qed\end{proof}
We turn now to networks with a single cycle.
\begin{proposition}\label{prop:4cyclefills}
Let $\ell=(\ell_{ab},\ell_{ac},\ell_{bc})$ with $0<\ell_{ab}\le \ell_{ac}<\ell_{bc}$ be a triple of positive numbers summing to 1. Then there exists a binary ultrametric rooted network on taxa $a,b,c$ with a single 4-cycle and GTR substitution model parameters such that
the normalized theoretical logDet distances of sequences generated under a single-class coalescent mixture model are $\ell$.
\end{proposition}
\begin{proof}
The 4-cycle network we construct is shown in Figure \ref{fig::4c_bac}, with $t_0,t_1$ measured in generations, and the hybrid edges of length 0. Consider a single constant population size $N>0$ for all populations over the tree and above the root, and a Jukes-Cantor substitution process with constant rate $\mu>0$. We will choose values for $t_0, t_1>0$, $\gamma\in[1/2,1)$ so that the normalized distances for the coalescent mixture model with this single substitution process are given by $\ell$.
\begin{figure}
\begin{center}
\includegraphics{4c_bac.pdf}
\end{center}
\caption{The 4-cycle network, with times in generations, constructed in Proposition \ref{prop:4cyclefills}. Hybridization parameters are $\gamma$, $1-\gamma$, and hybrid edges have length 0. }\label{fig::4c_bac}
\end{figure}
Recall that if $M(t)$ denotes the Jukes-Cantor Markov matrix for a substitution process over time $t$ with rate 1, then the common value of all
its off-diagonal entries is
$$f(t)=\frac 14\left (1- e^{-\frac 43 t}\right ) .$$
With $D={\operatorname{diag}} (1/4,1/4,1/4,1/4)$, the Jukes-Cantor pattern frequency array is
$DM(t)$, and the logDet distance (equal to Jukes-Cantor distance) is $$t=f^{-1}(f(t))=-\frac 34 \log \left (1-4f(t) \right ).$$ Note that $f$ is an increasing function.
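These relations can be checked numerically. The sketch below assumes the standard $k=4$ logDet normalization (equation \eqref{formula::logdet} is not restated in this section), under which the logDet distance for the Jukes-Cantor frequency array $DM(t)$ reduces to the Jukes-Cantor distance $t$.

```python
import numpy as np

def jc_matrix(t):
    # Jukes-Cantor Markov matrix M(t) at rate 1: off-diagonal entries f(t),
    # diagonal entries 1 - 3 f(t).
    f = 0.25 * (1.0 - np.exp(-4.0 * t / 3.0))
    return (1.0 - 4.0 * f) * np.eye(4) + f * np.ones((4, 4))

def logdet_distance(F):
    # Standard k=4 logDet formula; g_u, g_v are the marginal base
    # frequencies (row and column sums) of the frequency array F.
    gu, gv = F.sum(axis=1), F.sum(axis=0)
    return -0.25 * (np.log(np.linalg.det(F))
                    - 0.5 * np.log(np.prod(gu) * np.prod(gv)))

t = 0.7
F = np.diag([0.25] * 4) @ jc_matrix(t)   # F = D M(t)
assert abs(logdet_distance(F) - t) < 1e-10
```

Here $\det M(t)=(1-4f(t))^3$ since $M(t)$ has eigenvalue 1 once and $1-4f(t)$ with multiplicity three, which recovers $-\frac34\log(1-4f(t))=t$.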
From equation 4.1 of \cite{Allman2019}, for a coalescent mixture Jukes-Cantor model on an ultrametric tree with uniform population size $N$ and mutation rate $\mu$, the sequences for two taxa $x,y$ whose MRCA is at time $t$ before the present have expected pattern frequency array
$$F(t)=D M(2t\mu)\tilde M(\mu,N),$$
where $\tilde M(\mu,N)$ is a Markov matrix of Jukes-Cantor form describing the expected additional substitutions due to the coalescent model delaying lineages merging until some time above the MRCA.
The logDet distance between $x,y$ is then the same as the Jukes-Cantor distance, which is computed to be
$$d_{LD}(x,y)=2t\mu + \beta$$
where $\beta=\beta(\mu,N)>0$ can be explicitly computed from $\tilde M(\mu, N)$, though we will not do so here. Since $\beta$ is continuous and $\beta(\mu,N)\to 0$ as $N\to 0$ and $\beta(\mu,N)\to \infty$ as $N\to \infty$,
it follows that $\beta$ takes on all positive values.
Now by Lemma \ref{lem::3_1and4cycles} on the 4-cycle network of Figure \ref{fig::4c_bac} the expected pattern frequency array for $a,b$ is
$$\gamma F(t_0)+(1-\gamma) F(t_1) = D M_{ab} \tilde M(\mu, N)$$
where $$M_{ab}=\gamma M(2t_0\mu)+(1-\gamma) M(2t_1\mu)$$
has the usual Jukes-Cantor form, with
off-diagonal entries
$$f_{ab}=\gamma f(2t_0\mu)+(1-\gamma)f(2t_1\mu).$$
This shows
$$d_{LD}(a,b)=f^{-1}(f_{ab}) +\beta.$$
A similar calculation shows
$$d_{LD}(a,c)=f^{-1}(f_{ac}) +\beta,$$
where $$f_{ac}= \gamma f(2t_1\mu)+(1-\gamma)f(2t_0\mu).$$
The expected pattern frequency array for the $b,c$ sequences is $F(t_1)$, so
$$d_{LD}(b,c)=f^{-1}(f_{bc}) +\beta$$
where
$$f_{bc} =f(2t_1\mu).$$
We now determine parameters which produce the normalized triple of distances $\ell$.
Fixing values of $\mu$, $N$ determines a fixed value of $\beta>0$. Next, choose some value $m$ so that $$f\left (m\ell_{ab}-\beta\right)>\frac 1{8},$$ which can be done since
$f:\mathbb R^{>0} \to (0,1/4)$ is surjective and increasing. Then, with $x_{ij}=f \left (m\ell_{ij}-\beta\right)$, because $\ell_{ab}\le \ell_{ac}< \ell_{bc}$ we have
$$\frac 18<x_{ab}\le x_{ac}< x_{bc}<\frac 14.$$
Let $x_0=x_{ab}+x_{ac}-x_{bc}$, so $0< x_0<\frac 14$. Determine $t_0$ by $f(2t_0\mu)=x_0$, and $\gamma\in [1/2,1)$ by
$$\gamma=\frac {x_{bc}-x_{ab}}{2x_{bc}-x_{ab}-x_{ac}}, \text{ so } 1-\gamma= \frac {x_{bc}-x_{ac}}{2x_{bc}-x_{ab}-x_{ac}}.$$
Then choose $t_1$ by $f(2t_1\mu)=x_{bc}.$
To verify that these parameter choices give the desired normalized triple of distances,
the expected distance between $a,b$ is
\begin{align*}
d_{LD}(a,b)&=f^{-1}( \gamma f(2t_0\mu) +(1-\gamma)f(2t_1\mu ) ) +\beta\\
&=f^{-1}( \gamma x_0 +(1-\gamma)x_{bc}) +\beta\\
&=f^{-1}(x_{ab})+\beta\\
&=m\ell_{ab}.
\end{align*}
Similarly, we see $d_{LD}(a,c)=m\ell_{ac}$.
Finally we have
$$d_{LD}(b,c)=f^{-1}(f(2t_1\mu))+\beta= f^{-1}(x_{bc})+\beta=m\ell_{bc}.$$
\qed\end{proof}
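The parameter construction in the proof can be verified numerically for a particular target triple; the specific values of $\ell$, $\beta$, $\mu$, and $m$ below are illustrative choices satisfying the stated constraints.

```python
import numpy as np

f    = lambda t: 0.25 * (1.0 - np.exp(-4.0 * t / 3.0))
finv = lambda x: -0.75 * np.log(1.0 - 4.0 * x)

# Target normalized triple with l_ab <= l_ac < l_bc, summing to 1.
l_ab, l_ac, l_bc = 0.20, 0.30, 0.50
beta, mu = 0.1, 1.0

m = 4.0                                   # chosen so f(m*l_ab - beta) > 1/8
x_ab, x_ac, x_bc = (f(m * l - beta) for l in (l_ab, l_ac, l_bc))
assert 0.125 < x_ab <= x_ac < x_bc < 0.25

x0 = x_ab + x_ac - x_bc                   # 0 < x0 < 1/4
gamma = (x_bc - x_ab) / (2 * x_bc - x_ab - x_ac)
t0, t1 = finv(x0) / (2 * mu), finv(x_bc) / (2 * mu)

# Distances as derived in the proof.
d_ab = finv(gamma * f(2 * t0 * mu) + (1 - gamma) * f(2 * t1 * mu)) + beta
d_ac = finv(gamma * f(2 * t1 * mu) + (1 - gamma) * f(2 * t0 * mu)) + beta
d_bc = finv(f(2 * t1 * mu)) + beta

# Normalizing recovers the target triple (the sum is m).
s = d_ab + d_ac + d_bc
assert np.allclose([d_ab / s, d_ac / s, d_bc / s], [l_ab, l_ac, l_bc])
```

The key algebraic identities being checked are $\gamma x_0+(1-\gamma)x_{bc}=x_{ab}$ and $\gamma x_{bc}+(1-\gamma)x_0=x_{ac}$.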
Note that even if $\ell_{ac}=\ell_{bc}$, the argument of Proposition \ref{prop:4cyclefills} can be modified slightly by taking $\gamma=1$ in the analytic continuation of the parameterization. However, that choice of the hybridization parameter essentially means that in place of a 4-cycle network we have a tree.
\begin{figure}
\begin{center}
\includegraphics{Network_3_2.pdf}
\end{center}
\caption{A $3_2$-network, with numbered edges, as used in Proposition \ref{prop:32cyclefills}. The hybridization parameter on edge $e_5$ is $\gamma$, and on $e_4$ is $1-\gamma$.}\label{fig::32cycle}
\end{figure}
Finally, we consider a network with a $3_2$-cycle. While Proposition \ref{prop::3_2cyc} shows the normalized triples of theoretical logDet distances
lie on the same line as those for a tree, we establish they need not be restricted to the same line segment of tree-like distances. However, we do not completely characterize the extent of the segment they fill out.
\begin{proposition}\label{prop:32cyclefills}
Let $\ell=(\ell_{ab},\ell_{ac},\ell_{bc})$ with $\ell_{ac}= \ell_{bc}$ be a triple of positive numbers summing to 1 with $0<\ell_{ab}<\frac 12$. Then there exists a binary ultrametric rooted network on taxa $\{a,b,c\}$ with a single $3_2$-cycle whose leaf-descendants are $a,b$ and GTR substitution model parameters such that
the normalized theoretical logDet distances of sequences generated under the coalescent mixture model are $\ell$.
\end{proposition}
\begin{proof}
We construct several $3_2$-cycle species networks of the form shown in Figure \ref{fig::32cycle}, with edge lengths $t_i=\ell(e_i)$.
In making choices of numerical parameters, since the network is ultrametric we view $t_1,t_3,t_5,t_7$ as independent, determining $t_2,t_4,t_6$.
The population sizes on edges $e_i$ for $3\le i \le 8$ are constants $N_i$, with the sizes on terminal edges irrelevant. The hybridization parameters are $1-\gamma$ and $\gamma$ on edges $e_4$ and $e_5$ respectively. We also fix a single Jukes-Cantor substitution process with any constant rate $\mu>0$.
By Proposition \ref{prop::3_2cyc}, for any choices of the $t_i,N_i,\gamma$, the theoretical logDet distances will satisfy $d_{LD}(a,c)=d_{LD}(b,c)$ so the normalized theoretical logDet distance triple lies on a line.
Since the parameter space is connected, it is enough to show that
\begin{equation}
\frac{d_{LD}(a,b)}{2d_{LD}(a,c)+d_{LD}(a,b)}\label{eq:ndist}
\end{equation}
is arbitrarily close to $0$ for some choice of the parameters, and arbitrarily close to $1/2$ for others, to conclude that the rescaled expected distances give all the described triples.
To make expression \eqref{eq:ndist} near $0$, we choose parameters with $t_1$ and $N_3$ sufficiently small so that with high probability the $a,b$ lineages coalesce quickly.
Specifically, let $t_3=1$, and fix any positive values for $t_5,t_7$ and $N_i$ for $i\ne 3$. Now for any $\epsilon>0$, as $N_3\to 0^+$, the probability of lineages from $a,b$ coalescing on $e_3$ within $\epsilon$ of entering it approaches 1.
Using this, it is straightforward to show that as $N_3\to 0^+$ the expected pattern frequency array for $a,b$ approaches that for the JC model on a 2-taxon tree of total length $2t_1$. This then implies that $d_{LD}(a,b)\to 2\mu t_1$ as $N_3\to 0^+$. On the other hand, for all values of $N_3>0$ one can show $d_{LD}(a,c)>2\mu (t_1+2)$. Thus for sufficiently small choices of $t_1$ and $N_3$, we can make $d_{LD}(a,b)/(2d_{LD}(a,c)+d_{LD}(a,b))$ as close to $0$ as desired.
To produce a value of expression \eqref{eq:ndist} near $1/2$ is more subtle. We choose parameters so that $a,b$ lineages are likely to enter $e_5$, but if they both do they are then unlikely to coalesce in it, and coalescence of any pair of lineages in $e_7$ is likely to occur quickly. First set $t_5=0$, $t_7=1$ and $N_8$ arbitrary.
For any $t_1,t_3$ and $\gamma$, by choosing $N_3=N_4=N_5$ sufficiently large, the probability that
the $a,b$ lineages coalesce on $e_3$, $e_4$, or $e_5$ can be made arbitrarily small, so that if they coalesce below the root, then with (conditional) probability approaching 1 they do so on $e_7$. This requires that both the $a,b$ lineages follow $e_5$, which occurs with probability $\gamma^2$. If lineages $a,c$ coalesce below the root, they must do so on $e_7$, requiring the $a$ lineage to follow $e_5$, which occurs with probability $\gamma$.
By picking $N_7$ sufficiently small, the probability that two lineages in edge $e_7$ coalesce near the lower end can be made close to 1. All this shows that once $t_1,t_3$ and $\gamma$ are chosen, by appropriate choices of the $N_i$ we can ensure the expected frequency arrays for $a,b$
and $a,c$ are arbitrarily close to
$$ \gamma^2 F(t_1+t_3) + (1-\gamma^2) G(t_1+t_3+1,N_8)$$
and
$$ \gamma F(t_1+t_3) + (1-\gamma) G(t_1+t_3+1,N_8),$$
respectively, where $F(t)$ is the expected pattern frequency array for two samples at distance $2t$ and $G(t,N)$ is the expected array under the coalescent
for 2 lineages which enter a common population of size $N$ at time $t$. Further picking sufficiently small values for $t_1,t_3$, the pattern frequency arrays for $a,b$ and $a,c$ can be made arbitrarily close to
$$ \gamma^2\frac 14 I + (1-\gamma^2) G(1,N_8)
$$
and
$$ \gamma \frac 14 I + (1-\gamma) G(1,N_8),
$$
respectively. Thus for any $\gamma$ the theoretical distance can be made arbitrarily close to the distance computed from the above arrays.
Using the formulas defined in the proof of Proposition \ref{prop:4cyclefills}, we find these distances are
$$d_{LD}(a,b)=f^{-1}\left ( (1-\gamma^2) \delta \right)$$
and
$$d_{LD}(a,c)=f^{-1}\left ( (1-\gamma) \delta \right)$$
where $\delta>0$ is the off-diagonal entry of $G(1,N_8)$.
Thus once $\gamma$ is specified, by choosing $t_1$, $t_3$, $N_3=N_4=N_5$, $N_7$ we can ensure expression \eqref{eq:ndist}
is arbitrarily close to
\begin{equation}
\frac {\log(1-4\delta(1-\gamma^2))}{2\log(1-4\delta(1-\gamma))+\log(1-4\delta(1-\gamma^2))}.
\label{eq:gamma}
\end{equation}
Applying L'Hopital's rule shows the limit of expression \eqref{eq:gamma} as $\gamma\to 1$ is $\frac 12$. Thus for any $\epsilon>0$, by first choosing $\gamma$ near 1 so that the expression \eqref{eq:gamma} is within $\epsilon/2$ of $1/2$, and then choosing $t_1=t_3$, $N_3=N_4=N_5$, $N_7$ so that expression \eqref{eq:ndist} is within $\epsilon/2$ of expression \eqref{eq:gamma},
we obtain the desired result.
\qed\end{proof}
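The limiting behavior of expression \eqref{eq:gamma} as $\gamma\to 1$ can be illustrated numerically; the value $\delta=0.1$ below is an arbitrary illustrative choice.

```python
import math

def ratio(gamma, delta):
    # Expression (eq:gamma): normalized d_LD(a,b) against the triple sum,
    # in the limiting arrays from the proof.
    num = math.log(1 - 4 * delta * (1 - gamma**2))
    den = 2 * math.log(1 - 4 * delta * (1 - gamma)) + num
    return num / den

delta = 0.1
for gamma in (0.9, 0.99, 0.999):
    print(gamma, ratio(gamma, delta))

# Consistent with the L'Hopital computation, the values approach 1/2.
assert abs(ratio(0.9999, delta) - 0.5) < 1e-3
```

For $\gamma$ near 1, the numerator behaves like $-8\delta(1-\gamma)$ and the denominator like $-16\delta(1-\gamma)$, giving the limit $\frac12$.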
The results of this section, combined with those of Section \ref{sec::logDet} are summarized by Figure \ref{fig::regions}, which indicates the various regions of the simplex which normalized logDet triples fill, according to whether the network has a $4$-cycle, a $3_2$-cycle, or neither.
\begin{figure}
\begin{center}
\includegraphics{simplex.pdf}
\end{center}
\caption{The regions of the simplex filled by normalized triples of logDet distances under the model $\mathcal M$ on a 3-taxon network. The networks shown are those obtained by suppressing all cycles other than $4$- and $3_2$-cycles, and then undirecting the 4-cycle edges. Normalized logDet distances are ordered as $(\ell_{ab},\ell_{ac},\ell_{bc}).$ Networks with $3_2$-cycles fill the solid line segments in the center simplex, but it is unknown whether they may also produce points in the dashed line segments.}\label{fig::regions}
\end{figure}
\section{Conclusion}\label{sec:discussion}
Theorem \ref{thm::main} states that most topological features of an ultrametric level-1 network can be identified from theoretical logDet distances under a fairly general model of sequence evolution with incomplete lineage sorting. It more generally implies network identifiability from pattern frequency arrays, since logDet distances are functions of these.
In particular, individual gene trees, or even sequences partitioned into genes, are not required for network identifiability.
While identifiability is a theoretical question about the model, it has important implications for data analysis. Indeed, it is a key requirement for a statistically consistent inference procedure to exist. While our method of proof of identifiability, using the logDet distance, suggests using that distance as a basis for an inference procedure, others might be developed as well.
In subsequent work, we will explore using the logDet distance in a procedure for level-1 network inference following the framework of NANUQ \cite{ABR2019}. In outline, for each triple of taxa, the location of the normalized triple of logDet distances in simplex plots such as those of Figure \ref{fig::regions} can indicate whether the rooted triple has a 4-cycle or not. A triple near the lines through the centroid can, through some statistical test, be judged unlikely to have arisen from a 4-cycle, while those farther away are judged to have arisen from a 4-cycle. Then, modifying the rooted triple distance of \cite{Rhodes2019} to a network setting, similarly to how NANUQ modified the quartet distance, an intertaxon distance can be computed from the results of these statistical tests. Rules for relating a splits graph for the expected rooted triple distance to the original network will be developed. When applied to the splits graph constructed by NeighborNet from the empirically-derived distance, this should lead to consistent network inference. Since individual gene trees are never inferred, this will potentially give a much faster data analysis pipeline than the current version of NANUQ, which is built on quartet concordance factors across gene trees.
\section{Acknowledgements}
This work was supported by the National Institutes of Health [R01 GM117590],
under the Joint DMS/NIGMS Initiative to Support Research at the Interface of the Biological Mathematical Sciences, and
[2P20GM103395], an NIGMS Institutional Development Award (IDeA).
\bibliographystyle{alpha}
\section{Introduction}
\label{sec:intro}
Haldane's prediction concerning the finite spectral gap of the 1D
integer-spin antiferromagnetic Heisenberg chain and its featureless ground state
was quite unexpected\cite{Haldane1D}, as it seemed incompatible with the
theorem of Lieb, Schultz and Mattis \cite{LSM} on half-integer spin chains, with
the difference coming from the presence or absence of
a topological $\theta$ term. In order to understand the integer-spin case,
Affleck, Kennedy, Lieb and Tasaki (AKLT) constructed a state which admits no
local order parameter and which is the exact ground state of a
Hamiltonian whose finite gap can be proven rigorously.\cite{AKLT_PRL,AKLT_1988}
Their construction, which used valence bonds, was then generalized to two
dimensions, for example on the honeycomb and square lattices.
Recently, these two-dimensional valence-bond AKLT states and parent
Hamiltonians have also been recognized as examples of systems with
weak symmetry-protected topological order\cite{CZX,AKLTstrange,SREstrange},
and, somewhat unexpectedly, as a means to realize universal quantum
computation in a measurement-based approach.\cite{AKLT_QC_honeycomb,AKLT_QC_square}
Here we study a two-parameter family of wave functions on the square lattice
as constructed by Niggemann, Kl\"umper, and Zittartz (NKZ)
\cite{NKZspin2} and investigate the corresponding phase diagram.
These wave functions, which contain the AKLT state as a special case,
are ground states of a class of two-site interacting frustration-free
spin-2
Hamiltonians on the square lattice, which have spin-flip and rotation symmetry
in the $z$ direction and are symmetric under lattice rotations, translations,
and reflections.\cite{NKZspin2}
We shall refer to these states as the ``deformed-AKLT'' family of states,
as they may be obtained by applying an on-site deformation to the AKLT state.
In their original work, through a combination of Monte-Carlo analysis and
approximation by an exactly-solvable classical model, Niggemann,
Kl\"umper, and Zittartz predicted that an
Ising-like transition divides
the two-parameter phase diagram into a N\'eel-ordered phase and a disordered
phase. This matches their result for the spin-$\frac{3}{2}$ model,
which Hieida et al.\cite{Hieida} further confirmed by applying a progenitor
of the CTMRG approach that we describe in Appendix~\ref{app:CTMRG}.
As in the preceding work by Huang, Wagner,
and Wei\cite{AKLTspin32},
we apply tensor-network analyses to this system in order to better
understand and characterize these phases. In addition,
following the evidence for an
XY-like phase in that work, we seek to determine whether or not the disordered
``phase'' further divides into multiple phases,
which we strongly expect to find since the
disordered region of the phase diagram contains both the AKLT point,
which possesses symmetry-protected topological (SPT) order,
in its interior, and a product state at its boundary.
Among the phases, the ordered phase can be easily characterized by spontaneous
symmetry breaking using a staggered $S_z$ as a local order parameter; as this
order parameter can be directly calculated by tensor-network methods,
we can accurately locate the boundary between this phase and the
AKLT phase. For the featureless valence-bond AKLT phase,
we can use simulated modular $S$ and $T$ matrices to distinguish its SPT order
from other phases.\cite{tnST,LevinGuSPT,HungWenSPT}
We also isolate, in a region surrounding the product-state
point at the origin of the parameter space, a critical phase with distinctive
properties that we can examine in terms of the conformal field theory of the
classical XY model. By doing so we reveal
robust evidence for the existence of such a phase and for a Kosterlitz-Thouless
transition between it and the SPT-ordered AKLT phase. This is in contrast to
the spin-3/2 case\cite{AKLTspin32}, which new evidence
presented in Sec.~\ref{sec:honeycomb}
suggests does not contain a truly critical, or quasi-long-range ordered,
XY phase, but instead only has a region of
very long correlation length. Such pseudo-quasi-long-range order also exists on
the square lattice, in a region of the AKLT phase adjacent to the
true XY phase. Its existence is related to the suppression of $S_z=\pm 2$
components in this region, resulting in approximate spin-1
behavior for which the Berry
phase from the topological $\theta$ term almost suppresses isolated
tunneling processes.\cite{Haldane2D}
We also examine the possibility of a third disordered phase, a trivial phase
adiabatically connected to the product state at the origin
$a_1=a_2=0$ of the two-parameter
space (as shown in Fig.~\ref{fig:squarephase}). With any single fixed bond-dimension sweep of the phase diagram,
we find that the trivial phase occupies only a very small region near the
origin; as we increase the bond dimension of the tensor-network
algorithm being used, we find that that region shrinks,
suggesting that this ``phase'' might not be anything more than an
isolated point in the phase diagram.
In Sec.~\ref{sec:state}, we begin by describing the family of states we will
be working with and their inherent properties, in addition to how
tensor-network algorithms can apply to them.
Then in Sec.~\ref{sec:phases} we will describe the phases that
we expect to find in the phase diagram of the system on the square
lattice, and detail our results, as obtained
using the tensor-network renormalization (TNR) and higher-order tensor
renormalization group (HOTRG) methods and summarized in the phase
diagram in Fig.~\ref{fig:squarephase}.
Finally, in Sec.~\ref{sec:honeycomb}, we return to the honeycomb lattice
to re-evaluate the evidence for the XY phase there.
\section{ The valence-bond state }
\label{sec:state}
\begin{figure}[t]
\includegraphics[width=0.5\textwidth]{images/squarephase}
\caption{The phase diagram of the square-lattice deformed-AKLT model with
deformation parameterized by $a_2$ and $a_1$ as given in
\eqref{eqn:deformation-definition}.
N\'eel indicates the N\'eel-ordered phase, with boundary determined as in
Fig.~\ref{fig:curveVBSNeel}; XY indicates the XY-like phase with
quasi-long-range order, with boundary estimated by interpolating from the
data in Fig.~\ref{fig:XYmaps20}b; and AKLT indicates the AKLT phase, with
the isotropic AKLT point indicated as $|\text{AKLT}\rangle$. Likewise the
product state at the origin of parameter space is noted as
$|0^{\otimes N}\rangle$.
The green dotted line demarcates the pseudo-quasi-long-range-ordered region;
points on this line have correlation length $\xi \sim 10^3$
estimated from TNR data by interpolating the parameter where the classical
central charge takes the value $c \simeq 0.35$ after 10 RG steps, as indicated
by Fig.~\ref{fig:VBScorr}.
}
\label{fig:squarephase}
\end{figure}
To define the deformed-AKLT state, we first write a general AKLT state (a
tensor-network state with bond dimension $\chi=2$ on an arbitrary lattice)
and then introduce a continuously parameterized deformation.
We start with some lattice with coordination number $q$.
On each link we place a state of two spin-$\frac{1}{2}$ virtual spins
such that each vertex has $q$ such spins. We then produce the
physical degree of freedom by applying a projector $\mathbb{P}_q$ from the $q$
spins $|\eta_i \rangle$ onto the spin-$q/2$ subspace:
\begin{align}
\mathbb{P}_q = \sum_{\eta_1, \eta_2,...,\eta_q} c_s|s \rangle \langle \eta_1, \eta_2,...,\eta_q|,
\end{align}
where $s=\sum_i\eta_i$ is the physical index, $\eta_i=\pm\frac{1}{2}$ represent
the virtual spins in their $S_z$ basis, and $c_s$ are Clebsch-Gordan
coefficients. This yields the AKLT state
\begin{align}
|\psi_\text{AKLT} \rangle= \bigotimes_{v\in V}(\mathbb{P}_q)_v \bigotimes_{l\in L} | \psi^- \rangle_{l},
\end{align}
where the singlet states
$| \psi^- \rangle = |\! \uparrow \downarrow \rangle - | \! \downarrow \uparrow \rangle $
are placed on every link $l$ of the lattice.
We then apply a diagonal, spin-flip-invariant deformation
\begin{equation}
D(\vec{a})=\sum_{s=-q/2} ^{q/2} \frac{a_{|s|}}{c_s} |s \rangle \langle s|
\label{eqn:deformation-definition}
\end{equation}
in the $S^z$ basis to the physical indices. Then we arrive at a family
of deformed-AKLT states,
\begin{align}
|\Psi(\vec{a})_{\rm deformed} \rangle \propto D(\vec{a})^{ \otimes N} |\psi_\text{AKLT} \rangle.
\end{align}
For the remainder of this work, we will fix $a_0 = 1$ (or $a_{\frac{1}{2}}=1$
for half-integer-spin cases).
We thus, for example, end up with two independent parameters in
the spin-2 case and only one independent parameter in the spin-3/2 case.
In short, the deformed-AKLT family of wave functions can be written as
\begin{align}
\label{eqn:deformed}
|\Psi(\vec{a})_{\rm deformed} \rangle = \bigotimes_{v\in V} \left( D(\vec{a})\mathbb{P}_q \right)_{v} \bigotimes_{l\in L} | \psi^- \rangle_{l},
\end{align}
where the operator
$D(\vec{a}) \mathbb{P}_q$ maps the virtual spaces (which represent the
entanglement between the virtual spins) at each vertex $v$ to the
physical space.
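For concreteness, the on-site map $D(\vec{a})\,\mathbb{P}_4$ of the spin-2 case can be assembled explicitly. The sketch below is our own illustration, assuming the symmetric-state convention $c_s = \binom{4}{2+s}^{-1/2}$ for the Clebsch-Gordan coefficients; a consistency check on this assumption is that at the AKLT point $(a_2,a_1,a_0)=(\sqrt{6},\sqrt{3/2},1)$ quoted in Sec.~\ref{sec:AKLT}, the deformation reduces to a multiple of the identity.

```python
import numpy as np
from math import comb

q = 4  # coordination number of the square lattice; physical spin is q/2 = 2

# P4: 5 x 16 matrix taking four virtual qubits to the spin-2 subspace.
# A qubit basis state is a bitstring with k "up" bits; it maps to |s> with
# s = k - q/2, weighted by the symmetric-state overlap c_s = 1/sqrt(C(q, k)).
P4 = np.zeros((q + 1, 2 ** q))
for bits in range(2 ** q):
    k = bin(bits).count("1")
    P4[k, bits] = 1.0 / np.sqrt(comb(q, k))  # row k corresponds to s = k - 2

# Rows are orthonormal, so P4.T @ P4 is the projector onto the
# spin-2 subspace of the four virtual spins:
assert np.allclose(P4 @ P4.T, np.eye(q + 1))

def deformation(a1, a2, a0=1.0):
    """Diagonal deformation D(a) in the S_z basis: entries a_{|s|}/c_s."""
    a = {0: a0, 1: a1, 2: a2}
    return np.diag([a[abs(k - 2)] * np.sqrt(comb(q, k)) for k in range(q + 1)])

# At the AKLT point the deformation is proportional to the identity:
assert np.allclose(deformation(a1=np.sqrt(1.5), a2=np.sqrt(6.0)),
                   np.sqrt(6.0) * np.eye(q + 1))

# The on-site map of the deformed-AKLT state (before absorbing bond states):
site_map = deformation(a1=0.5, a2=0.3) @ P4
```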
We can modify the original two-site AKLT Hamiltonian\cite{AKLTgap} to obtain a
parent Hamiltonian which locally annihilates this state:
\begin{align}
\label{eqn:Ha}
H(\vec{a})\equiv &\sum_{\langle i,j\rangle} D(\vec{a})^{-1}_i \!\otimes\! D(\vec{a})^{-1}_j \, h_{ij}^{(\rm AKLT)} \, D(\vec{a})^{-1}_i \!\otimes\! D(\vec{a})^{-1}_j,\\
h_{ij}^{(\rm AKLT)} &\equiv \frac{1}{28}\left(S_{ij} + \frac{7}{10}S_{ij}^2+\frac{7}{45}S_{ij}^3+\frac{1}{90}S_{ij}^4\right)\notag\\
S_{ij} &\equiv \vec{S}_i\cdot\vec{S}_j\notag
\end{align}
As $h_{ij}^{(\rm AKLT)}$ annihilates the AKLT state, it follows
that $H(\vec{a})$ annihilates the deformed AKLT state.
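Although not spelled out above, the quartic polynomial $h_{ij}^{(\rm AKLT)}$ is precisely the projector onto the total-spin-4 sector of the two-site Hilbert space, which is why it annihilates the AKLT state (neighboring sites share a singlet and therefore never carry total spin 4). This can be confirmed with a short numerical check; the sketch below is our own.

```python
import numpy as np

def spin_matrices(s):
    """Spin-s operators (Sx, Sy, Sz) in the S_z basis, m = s, s-1, ..., -s."""
    d = int(2 * s + 1)
    m = s - np.arange(d)
    sp = np.zeros((d, d))  # raising operator S+
    for i in range(1, d):
        sp[i - 1, i] = np.sqrt(s * (s + 1) - m[i] * (m[i] + 1))
    return (sp + sp.T) / 2, (sp - sp.T) / 2j, np.diag(m)

Sx, Sy, Sz = spin_matrices(2)
# Heisenberg coupling S_ij = S_i . S_j on the 25-dimensional two-site space
S = sum(np.kron(a, a) for a in (Sx, Sy, Sz))
h = (S + 7/10 * S @ S + 7/45 * S @ S @ S + 1/90 * S @ S @ S @ S) / 28

# h is a Hermitian projector...
assert np.allclose(h, h.conj().T)
assert np.allclose(h @ h, h)
# ...onto the (2*4 + 1) = 9-dimensional total-spin-4 sector:
assert np.isclose(np.trace(h).real, 9.0)
```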
Additionally, Niggemann, Kl\"umper, and Zittartz constructed
a more general, five-parameter family of two-site, frustration-free
Hamiltonians that are invariant under the lattice symmetries as well as under
on-site spin-flip and $S_z$ rotations.
We note however that the above Hamiltonian is not well-defined when any
component $a_i$ of $\vec{a}$ is zero (under which circumstance we
would need to increase the rank of the two-site Hamiltonian, which is
impossible with a continuous deformation).
We shall also consider below AKLT-like states constructed using
maximally-entangled two-qubit states other than $|\psi^-\rangle$ as the valence bonds.
\subsection{Tensor network representation, bond states, and symmetry}
\label{sec:symmetry}
\begin{figure}[h]
\centering
\begin{subfigure}
\centering
\begin{tikzpicture}
\foreach \i in {0,...,2}
\foreach \j in {0,...,2} {
\node[coordinate] (\i\j) at (\i,\j) {};
\foreach \a in {0,90,180,270} {
\node[circle,fill=black,inner sep=0pt,minimum size=1mm] (\i\j\a) at ($(\i\j)+(\a:.17)$) {};
\draw (\i\j\a) -- +(\a:.33);
}
}
\node[right] (label) at ($.5*(11)+.5*(12)$) {$|\omega\rangle$};
\node[below] at (current bounding box.south) {\fontsize{8}{8}\selectfont (a)};
\end{tikzpicture}
\end{subfigure}
\begin{subfigure}
\centering
\begin{tikzpicture}
\foreach \i in {0,...,2}
\foreach \j in {0,...,2} {
\node[circle,fill=black!10,draw=gray,inner sep=3pt,minimum size=5mm] (\i\j) at (\i,\j) {};
\draw (\i\j.center) -- +(110:.35);
\foreach \a in {0,90,180,270} {
\node[circle,fill=black,inner sep=0pt,minimum size=1mm] (\i\j\a) at ($(\i\j)+(\a:.17)$) {};
\draw (\i\j\a) -- +(\a:.33);
}
}
\node[circle,fill=none,draw=none,inner sep=0pt,label={[label distance=.26mm]45:$\mathbb{P}_4$}] at (12) {};
\node[below] at (current bounding box.south) {\fontsize{8}{8}\selectfont (b)};
\end{tikzpicture}
\end{subfigure}
\begin{subfigure}
\centering
\begin{tikzpicture}
\foreach \i in {0,...,2}
\foreach \j in {0,...,2} {
\node[circle,fill=black!10,draw=gray,inner sep=3pt,minimum size=5mm] (\i\j) at (\i,\j) {};
\draw (\i\j.center) -- +(110:.35);
\foreach \a in {0,90,180,270} {
\node[circle,fill=black,inner sep=0pt,minimum size=1mm] (\i\j\a) at ($(\i\j)+(\a:.17)$) {};
\draw (\i\j\a) -- +(\a:.33);
}
\node[circle,fill=black!50,inner sep=0pt,minimum size=2mm] (\i\j q) at ($(\i\j)+(110:.2)$) {};
}
\draw[<-] (12q) -- node[left] {\tiny$D$} +(-.3,.05) ;
\node[below] at (current bounding box.south) {\fontsize{8}{8}\selectfont (c)};
\end{tikzpicture}
\end{subfigure}
\begin{subfigure}
\centering
\begin{tikzpicture}
\foreach \i in {0,...,2}
\foreach \j in {0,...,2} {
\node[circle,fill=black!10,draw=gray,inner sep=3pt,minimum size=5mm] (\i\j) at (\i,\j) {};
\draw (\i\j.center) -- +(110:.35);
\foreach \a in {0,90,180,270} {
\node[circle,fill=black,inner sep=0pt,minimum size=1mm] (\i\j\a) at ($(\i\j)+(\a:.17)$) {};
\draw (\i\j\a) -- ++(\a:.18) node[circle,fill=black,inner sep=0pt,minimum size=.5mm] (\i\j\a b){};
\draw[densely dotted] (\i\j\a b) -- +(\a:.15);
}
\node[circle,fill=black!50,inner sep=0pt,minimum size=2mm] (\i\j q) at ($(\i\j)+(110:.2)$) {};
}
\node[below] at (current bounding box.south) {\fontsize{8}{8}\selectfont (d)};
\end{tikzpicture}
\end{subfigure}
\caption{The valence-bond state and tensor-network state pictures of the
spin-2 deformed-AKLT state on the square lattice.
(a) A singlet state (or, more generally, a bond state $|\omega\rangle$
composed from two virtual spin-$\frac{1}{2}$ degrees of freedom) is placed
on each edge of the lattice.
(b) The AKLT state is formed by placing a spin-2 projector $\mathbb{P}_4$ on
each site; the spin-2 indices of the projectors indicate the physical
degrees of freedom.
(c) From there the deformation $D(\vec{a})$ is applied to each site, resulting
in the deformed-AKLT state.
(d) In order to represent this in the typical manner of a PEPS, we may for
example perform a Schmidt decomposition on the bond states; the Schmidt
indices then become the ``bond'' indices of the tensor network, and the
contraction of the resulting objects and the deformation matrix with the
five indices of the projection matrix yields the on-site tensor.}
\label{fig:aklt_structure_square}
\end{figure}
Given a state in the valence-bond picture we have just presented, it is natural
to represent it as a tensor network state (TNS), namely a projected entangled
pair state (PEPS). For those who are not familiar with tensor
network states, we recommend several of the cited review and pedagogical
papers\cite{Vidal_iPEPS_2009,PerezGarciaMPS,PerezGarciaPEPS,BridgemanChubb,
MPS_PEPS_phases}.
In this representation we place on each lattice site
a rank-$(q+1)$ tensor with one physical index (the spin on the site) and
$q$ virtual indices (corresponding to the $q$ virtual spins at the site,
or more precisely their Schmidt index);
pairs of virtual indices of adjacent sites are contracted over in a tensor trace
(tTr) to yield the physical state. In an AKLT system we begin with a
rank-$(q+1)$
tensor on each site (the projector $\mathbb{P}_q$) as well as a rank-2 tensor
on each link (the virtual singlets). To get a PEPS description we may
assign each singlet to a neighboring site and contract it with the
corresponding index of that site's projector, although the way in
which the bonds are defined is essentially a gauge choice and
thus can be easily varied. From this we obtain a PEPS description of a general
deformed-AKLT state by contracting the AKLT physical indices with the
deformation matrix $D(\vec{a})$.
We may also alter the state by replacing the singlet state $|\psi^-\rangle$
in the above description with a more general bond state $|\omega\rangle$.
In particular we may use the Bell states
\begin{align}
& | \phi^+ \rangle= |\uparrow \uparrow \rangle+| \downarrow \downarrow \rangle \notag \\
& | \phi^- \rangle = | \uparrow \uparrow \rangle-|\downarrow \downarrow \rangle = I \otimes \sigma^z | \phi^+ \rangle \notag \\
& | \psi^+ \rangle = | \uparrow \downarrow \rangle+| \downarrow \uparrow \rangle = I \otimes \sigma^x | \phi^+ \rangle \notag \\
& | \psi^- \rangle = | \uparrow \downarrow \rangle-|\downarrow \uparrow\rangle = I \otimes i \sigma^y | \phi^+ \rangle,
\end{align}
where $ \sigma^k, k\in \{0,x,y,z\}$ are Pauli matrices and $ \sigma^0= \mathbb{I}$; we may refer to the states $|\phi^\pm\rangle$ as ``ferromagnetic''
bond states and the states $|\psi^\pm\rangle$ as ``antiferromagnetic'' bond
states, due to the behavior of the respective systems in the ordered regime
as discussed below.
This construction is shown graphically in Fig.~\ref{fig:aklt_structure_square}.
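These Pauli relations are straightforward to verify numerically. In the sketch below (notation ours), the bond states are unnormalized vectors in the basis $\{|\!\uparrow\uparrow\rangle, |\!\uparrow\downarrow\rangle, |\!\downarrow\uparrow\rangle, |\!\downarrow\downarrow\rangle\}$; the last relation holds up to an overall sign, which is immaterial for states defined up to phase.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])

# Unnormalized Bell states in the basis (uu, ud, du, dd)
phi_p = np.array([1, 0, 0, 1])   # |phi+>
phi_m = np.array([1, 0, 0, -1])  # |phi->
psi_p = np.array([0, 1, 1, 0])   # |psi+>
psi_m = np.array([0, 1, -1, 0])  # |psi->

assert np.allclose(np.kron(I2, sz) @ phi_p, phi_m)
assert np.allclose(np.kron(I2, sx) @ phi_p, psi_p)
# With the standard sigma_y convention this comes out as -|psi->:
assert np.allclose(np.kron(I2, 1j * sy) @ phi_p, -psi_m)
```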
When working on a bipartite lattice, we may change from one such
bond state to another
by applying $SU(2)$ transformations $U_A$ and $U_B$ which commute with the
deformation $D(\vec{a})$ to all of the sites of
the sublattices $A$ and $B$, respectively. Due to the $SU(2)$-invariance of
the projector $\mathbb{P}_q$, this is equivalent to performing the
transformation $|\omega\rangle \mapsto
U_A^{(1/2)}\otimes U_B^{(1/2)}|\omega\rangle$ to every bond state.
Therefore, if we start with the singlet $|\psi^-\rangle$ as our bond state,
we may then convert it to
\begin{enumerate}[i]
\item $|\phi^+\rangle$ by applying $U_A = R_y^{-\frac{\pi}{2}}$
and $U_B = R_y^\frac{\pi}{2}$ to the $A$ and $B$ sublattices, respectively;
\item $|\phi^-\rangle$ by applying $U_A = R_x^{-\frac{\pi}{2}}$ and
$U_B = R_x^{\frac{\pi}{2}}$; and
\item $|\psi^+\rangle$ by applying $U_A = R_z^{-\frac{\pi}{2}}$ and
$U_B = R_z^{\frac{\pi}{2}}$,
\end{enumerate}
where the $SU(2)$ rotation $R_j^\phi \equiv e^{-i\phi S_j}$.
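Case (i) can be checked directly in a few lines; this is an illustrative sketch of ours using the spin-$\frac{1}{2}$ rotation matrices, and the overall sign of the resulting bond state is a matter of convention.

```python
import numpy as np

def Ry(phi):
    """Spin-1/2 rotation R_y^phi = exp(-i phi S_y), with S_y = sigma_y / 2."""
    c, s = np.cos(phi / 2), np.sin(phi / 2)
    return np.array([[c, -s], [s, c]])

psi_m = np.array([0, 1, -1, 0])  # singlet |ud> - |du>
phi_p = np.array([1, 0, 0, 1])   # |uu> + |dd>

# Case (i): U_A = R_y^{-pi/2} on one end of a bond, U_B = R_y^{pi/2} on the other.
out = np.kron(Ry(-np.pi / 2), Ry(np.pi / 2)) @ psi_m
# The result is proportional to |phi+> (here it comes out as -|phi+>):
assert np.allclose(out, -phi_p)
```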
Thus, given physical data from any of these four systems, we may easily
produce the corresponding
information about any of the others. If, for example,
we find it simpler to manipulate the tensors we use in the case
$\ket\omega = \ket{\phi^+}$, we can apply any conclusions
we draw about that case,
such as boundaries of phase diagrams, to the more standard case of
$\ket\omega=\ket{\psi^-}$.
However, on a lattice which is not bipartite, this mapping is not generally
possible; in fact only the $\phi^+$ and $\phi^-$ bond states can be identified
with each other (by applying $R_z^\frac{\pi}{2}$ to every site), and we
expect in general to get three distinct phase diagrams from these four
bond states.
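The non-bipartite identification of $|\phi^+\rangle$ with $|\phi^-\rangle$ is likewise easy to verify; in this sketch of ours the overall phase picked up is immaterial.

```python
import numpy as np

# R_z^{pi/2} = exp(-i (pi/2) S_z) for spin 1/2
Rz = np.diag(np.exp([-1j * np.pi / 4, 1j * np.pi / 4]))
phi_p = np.array([1, 0, 0, 1])   # |uu> + |dd>
phi_m = np.array([1, 0, 0, -1])  # |uu> - |dd>

# Applying R_z^{pi/2} to *every* site (both ends of each bond):
out = np.kron(Rz, Rz) @ phi_p
# Proportional to |phi->, with proportionality constant -i:
assert np.allclose(out, -1j * phi_m)
```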
As both the tensors used to build the AKLT state, that is
the projector $\mathbb{P}_q$ and the
singlet state $\psi^-$, are invariant under $SU(2)$ transformations,
the state itself maintains a global $SU(2)$ invariance. However, the deformation
$D(\vec{a})$
breaks this symmetry down to a subgroup isomorphic to $O(2)$ which
can be characterized by its action on the $xy$ plane, on which rotations are
generated by $S_z$ and reflections are produced by spin-flips such
as $R^\pi_x = e^{\pi i S_x}$ and $R^\pi_y = e^{\pi i S_y}$.
Now consider states $\ket{\Psi_\omega}$ with different bond states,
obtained by applying $U_A$ and $U_B$ to the
state $\ket{\Psi_{\psi^-}}$. If $g \in O(2)$ preserves $\ket{\Psi_{\psi^-}}$,
then $\ket{\Psi_\omega}$ will be preserved by applying
$U_AgU_A^\dagger$ to sublattice A and $U_BgU_B^\dagger$ to sublattice
B.\footnote{This cannot be done when the lattice is not bipartite;
and in fact the choice of bond state in combination with the deformation
breaks the $SU(2)$ symmetry of the AKLT state down to $\mathbb{Z}_2\times
\mathbb{Z}_2$ when the bond state is ferromagnetic.}
For the antiferromagnetic bond state $\psi^+$, $U_A$ and $U_B$ commute with
$U(1)$ rotations and are exchanged by $O(2)$ reflections, so that the symmetry
applied in that case will still preserve the state (assuming there are an even
number of sites).
When performing numerical analysis, it may be useful, for data collection
and/or for numerical stability, to explicitly preserve the global on-site
symmetry\cite{Singh1,HeST} by ensuring that the tensors produced in each
step of the renormalization procedures remain invariant. (However, due to
limitations in our code as of when these data were collected, we have not
preserved $O(2)$ itself but rather some adequately large finite subgroup
thereof, either $\mathbb{Z}_2\times \mathbb{Z}_2$, $D_{40}$, or $D_{80}$.)
\subsection{Representation as a (pseudo)classical model}
\label{sec:pseudoclassical}
\begin{figure}[h]
\centering
\begin{subfigure}
\centering
\begin{tikzpicture}[scale=0.7, every node/.style={transform shape}]
\foreach \i in {0,...,2}
\foreach \j in {0,...,4} {
\node[siteq] (\i\j b) at (1.5*\i,1.5*\j) {};
\foreach \a in {0,90} \draw (\i\j b) -- +(\a:.75);
\foreach \a in {0,90} \draw (\i\j b) -- +(\a:-.75);
\node[siteq] (\i\j k) at ($(\i\j b)+(.2,-.6)$) {};
\draw (\i\j k) -- (\i\j b.center);
\foreach \a in {0,90} \draw (\i\j k) -- +(\a:.75);
\foreach \a in {0,90} \draw (\i\j k) -- +(\a:-.75);
}
\node[qop] (Aop) at ($.6*(00b)+.4*(00k)$) {};
\draw [-,line width=.4mm] (Aop) to (00b.center);
\node[siteq] (00k2) at (00k) {};
\draw [-,line width=.4mm] (00k2) to (Aop.center);
\node[qop] (Bop) at ($.6*(24b)+.4*(24k)$) {};
\draw [-,line width=.4mm] (Bop) to (24b.center);
\node[siteq] (24k2) at (24k) {};
\draw [-,line width=.4mm] (24k2) to (Bop.center);
\foreach \i in {0,...,2} {
\draw[dotted,thick] ($.5*(\i 0b)+.5*(\i 0k)+(0,-1)$) -- +(0,-.25);
\draw[dotted,thick] ($.5*(\i 4b)+.5*(\i 4k)+(0,1)$) -- +(0,.25);
}
\foreach \j in {0,...,4} {
\draw[dotted,thick] ($.5*(0\j b)+.5*(0\j k)+(-1,0)$) -- +(-.25,0);
\draw[dotted,thick] ($.5*(2\j b)+.5*(2\j k)+(1,0)$) -- +(.25,0);
}
\node[below] at (current bounding box.south) {\fontsize{11.43}{11.43}\selectfont (a)};
\end{tikzpicture}
\end{subfigure}
\begin{subfigure}
\centering
\begin{tikzpicture}[scale=0.7, every node/.style={transform shape}]
\foreach \i in {0,...,2}
\foreach \j in {0,...,4} {
\node[sitec] (\i\j) at (1.5*\i,1.5*\j) {};
\foreach \a in {0,90} \draw[double] (\i\j) -- +(\a:.75);
\foreach \a in {0,90} \draw[double] (\i\j) -- +(\a:-.75);
}
\node[sitenimp] (Aop) at (00) {};
\node[sitenimp] (Bop) at (24) {};
\foreach \i in {0,...,2} {
\draw[dotted,thick] ($(\i 0)+(0,-1)$) -- +(0,-.25);
\draw[dotted,thick] ($(\i 4)+(0,1)$) -- +(0,.25);
}
\foreach \j in {0,...,4} {
\draw[dotted,thick] ($(0\j)+(-1,0)$) -- +(-.25,0);
\draw[dotted,thick] ($(2\j)+(1,0)$) -- +(.25,0);
}
\node[below] at (current bounding box.south) {\fontsize{11.43}{11.43}\selectfont (b)};
\end{tikzpicture}
\end{subfigure}
\caption{Correlations in a PEPS and the corresponding representation in a
classical model. (a) The correlation function $\langle\Psi|AB|\Psi\rangle$
of two one-site operators in a quantum state $\Psi$ represented by a PEPS.
(b) The correlation function $\langle\mathcal{O}_A\mathcal{O}_B\rangle$
of two classical ``operators,'' shaded, which replace the weight matrix at a
site
with a different tensor. In this case the classical model is the ``doubled
vertex model'' and the operators in it are determined by contracting the
quantum operator with bra and ket tensors.}
\label{fig:quantumclassical}
\end{figure}
As illustrated above, a two-dimensional quantum state can often be represented
as a
tensor network state (TNS), whose coefficients in a fixed basis are expressed
as a contraction of a tensor network, that is, a tensor trace:
\begin{align}
|\psi\rangle=\sum_{s_1,s_2,\cdots s_m\cdots}\text{tTr}(A^{s_1}A^{s_2}\cdots A^{s_m}\cdots)|s_1 s_2\cdots s_m\cdots\rangle,
\label{TNS}
\end{align}
where $A^s_{\alpha,\beta,\gamma,\ldots}$ is a local tensor with a physical index
$s$ and internal or bond indices $\alpha,\beta,\gamma,\ldots$, and $\text{tTr}$
denotes tensor contraction of all the connected inner indices according to the
underlying lattice structure. TNS defined on two- or higher-dimensional lattices
are often referred to as PEPS.
The norm squared of a TNS is given by
\begin{align}
\langle \psi | \psi \rangle =\text{tTr} (\mathbb{T}^1 \mathbb{T}^2 \mathbb{T}^3\cdots \mathbb{T}^m\cdots ),
\label {ST_TNS}
\end{align}
where we form the local {\it doubled tensor\/} $\mathbb{T}^i$ by
merging a bra layer and a ket layer, contracting the physical indices of
corresponding pairs of tensors $A$ and $A^*$:
\begin{align}
\mathbb{T}\equiv \sum _s (A^s_{\alpha,\beta, \gamma,\delta,\ldots }) \times (A^s_{\alpha',\beta', \gamma',\delta',\ldots }) ^*.
\label{eqn:doubletensor}
\end{align}
In this way, a quantum model maps into something resembling a classical
vertex model on the same lattice, in which the doubled tensor plays the role
of the weight matrix in the corresponding classical model\footnote{However,
it is not guaranteed that the tensor will correspond to a true classical model
as it may not contain strictly real, nonnegative entries.}. As in
Fig.~\ref{fig:quantumclassical}, we can often translate observable quantities
describing the
quantum state into observable quantities describing the classical
model, which helps us get information about the former from the latter.
We will refer to this as the ``doubled vertex'' model.
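As a concrete illustration (using an arbitrary random tensor rather than the AKLT tensor), the doubled tensor of \eqref{eqn:doubletensor} can be formed with a single einsum; the one-site-torus check below confirms that the tensor trace of the doubled tensor reproduces the norm $\langle\psi|\psi\rangle$. This sketch and its variable names are ours.

```python
import numpy as np

d, chi = 5, 2  # physical dimension (spin-2) and PEPS bond dimension
rng = np.random.default_rng(0)
A = rng.normal(size=(d, chi, chi, chi, chi))  # A^s_{up, left, down, right}

# Doubled tensor: contract the physical index between the ket layer A and
# the bra layer A*, keeping ket and bra virtual indices separate for now.
T8 = np.einsum('suldr,sULDR->uUlLdDrR', A, A.conj())

# Sanity check on a one-site torus (up contracted with down, left with right):
# the classical partition function equals the quantum norm <psi|psi>.
psi = np.einsum('sulul->s', A)
assert np.isclose(np.einsum('uUlLuUlL->', T8), psi @ psi.conj())

# Fusing each (ket, bra) pair yields the chi^2-dimensional legs of the
# weight tensor of the "doubled vertex" model:
T = T8.reshape(chi**2, chi**2, chi**2, chi**2)
```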
However, in two and higher dimensions it is in general computationally
intractible to exactly calculate the tensor trace, that is, to contract the
whole tensor network, for reasonably large system sizes.
Several approximation schemes have been proposed as solutions in this context,
such as the iPEPS algorithm\cite{Vidal_iPEPS_2009}, the corner transfer matrix
method (CTMRG)\cite{Nishino_CTMRG_1997,OrusCTM}, and coarse-graining
approaches\cite{Levin_TRG_2007,Xiang_TRG_2008,Wen_Gu_Levin_TRG_2008,TNR}, all
of which tackle the contraction problem essentially by truncating information
and thus scaling down the computational complexity to the polynomial level.
In Appendix~\ref{app:methods},
we will discuss those methods we have used, namely the corner transfer matrix,
quantum-state renormalization group, higher-order tensor renormalization group,
tensor network renormalization, and loop-TNR methods.
\FloatBarrier
\section{Results}
\label{sec:phases}
Here we describe how we characterize the distinct phases that
appear in this two-parameter family of states, as shown in
Fig.~\ref{fig:squarephase}, and then
present the numerical results arising
from this analysis.
\subsection{The N\'eel-ordered phase}
\label{sec:ordered}
\begin{figure}[h!]
\includegraphics[width=.5\textwidth]{images/aklt_square_Mz}
\caption{Applying HOTRG with $\chi=40$, and extracting the magnetization
$\langle S_z\rangle$, we find a sharp phase transition from a disordered
region (containing the AKLT, XY, and product-state phases) into an ordered
region, with the magnetization rapidly increasing to 2.}
\label{fig:square_Mz}
\end{figure}
In the limit $a_2\to \infty$ (equivalently, $a_0,a_1 \to 0$, where
the deformation becomes a projection onto $S_z=\pm 2$) the tensors
$T = D(\vec{a})\mathbb{P}_4$ effectively become the mapping
$|2\rangle\langle\uparrow\uparrow\uparrow\uparrow\!\!| +
|\!-2\rangle\langle\downarrow\downarrow\downarrow\downarrow\!\!|$.
Assuming the standard bond state $\psi^-$, the deformed-AKLT state
will then be a cat
state with two dominant configurations $|\!+\!2,-2,\ldots,+2,-2\rangle$ and
$|\!-\!2,+2,\ldots,-2,+2\rangle$. Thus, as we approach this limit we expect a
phase where these states
will, in the thermodynamic limit, exhibit spontaneous symmetry breaking
to N\'eel-ordered states.
We can detect this order using the staggered order parameter $(-1)^{n+m}S_z$.
In Fig.~\ref{fig:square_Mz}, we see that this order parameter acquires a nonzero
expectation value within a well-defined region surrounding the $a_2\to \infty$
limit (and nowhere else).
\subsection{The AKLT phase}
\label{sec:AKLT}
\begin{figure}[h!]
\includegraphics[width=0.5\textwidth]{images/tnST_square}
\caption{ The trace of the simulated modular matrix $T$ of symmetry twists (a)
$\sigma_x$ and (b) $\sigma_z$, calculated using HOTRG with bond dimension
$\chi=30$ after 10 RG steps,
plotted over all the regions of the phase diagram we have studied.
These quantities sharply define the AKLT-N\'eel phase transition,
and approximately define the KT transition.}
\label{fig:tnST_square}
\end{figure}
The isotropic AKLT state is known to have nontrivial but weak symmetry-protected
topological order, preserved under translations combined with on-site
$SU(2)$ transformations\cite{AKLTstrange,SREstrange,CZX} or a suitable subgroup
thereof; for our purposes, these are the
rotations and reflections in the $xy$ plane, which combine to form $O(2)$.
As discussed in Sec.~\ref{sec:symmetry}, the deformations we are
considering commute with these symmetries, so we expect the AKLT point
$(a_2,a_1,a_0)=(\sqrt{6}, \sqrt{\frac{3}{2}},1 )$ to be contained within a
larger disordered-antiferromagnet phase behaving as a nontrivial weak SPT phase
under these symmetries.
In order to detect this phase we use simulated modular matrices of
Huang and Wei\cite{tnST,HungWenSPT}, which originated from the idea that
gauging SPT order yields intrinsic topological order\cite{LevinGuSPT}.
Applying $R^\pi_i$ to the physical index of a site tensor
is equivalent to applying the Pauli matrix $\sigma^i$ to the virtual indices.
Because of this we can extract simulated $S$ and $T$ matrices by
representing symmetry twists in the Hamiltonian with strings of $\sigma^i$
operators applied to virtual bonds. At the AKLT point, the modular matrices
arising from symmetry twists $\sigma^x$, $\sigma^y$, or $\sigma^z$ should be
\begin{align}
\label{nontri_oZ2}
S = \begin{pmatrix}
1 & 0&0 &0 \\
0& 0&1 &0 \\
0 & 1&0 &0\\
0 & 0&0 &1
\end{pmatrix}, \ \
T = \begin{pmatrix}
1 & 0&0 &0 \\
0& 1&0 &0 \\
0 & 0&0 &1\\
0 & 0&1 &0
\end{pmatrix},
\end{align}
so that $\op{tr}(S) = \op{tr}(T) = 2$. As we expect this quantity to remain
constant throughout an SPT phase, we may track these traces and use
$\op{tr}(T) = 2$ as an indicator of nontrivial SPT order; when $\op{tr}(T)$
is 1 or 4, we instead identify a trivial or symmetry-breaking phase.
In Fig.~\ref{fig:tnST_square}, we see that, in precisely the ordered region
indicated by Fig.~\ref{fig:square_Mz}, the traces of the
modular $T$ matrices corresponding to each of the symmetry twists
$\sigma_x$ and $\sigma_z$ take the trivial values of 1 and 4,
respectively. In most of the disordered region, meanwhile, both of these
traces equal 2, with a very sharp transition between these two regimes. Where
we find $\op{tr}(T_x) = \op{tr}(T_z) = 2$, we are within the SPT-ordered
AKLT phase. We will discuss the region in which these traces appear to
continuously vary shortly.
\subsection{Characteristics of the transition between N\'eel-ordered and AKLT phases}
\begin{figure}[h!]
\includegraphics[width=0.5\textwidth]{images/curveVBSNeel}
\caption{The line of transition between the N\'eel and AKLT phases, as
determined by sweeping $a_2$ given fixed $a_1$ with TNR for nontrivial
central charge at scales of up to 12 coarse-graining steps, primarily using
bond dimension $\cchip{12}{10}$ but with $\cchip{16}{12}$ for confirmation.
We find reasonable agreement with the findings of NKZ in the asymptotic limit
$a_1,a_2\to \infty$, but for $a_1\to 0$ we find disagreement, confirmed by
increasing bond dimension as in Table~\ref{tab:VBSNeel}, that exceeds their
estimates of error.}
\label{fig:curveVBSNeel}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=0.5\textwidth]{images/cVBSNeel}
\caption{Scanning in the neighborhood of the AKLT-N\'eel critical line,
we find as we coarse-grain that the curve of estimated $c$ versus $a_2$
forms an increasingly narrow
peak, with height $c=1$ at $a_1=0$ and $c=1/2$ elsewhere.}
\label{fig:cVBSNeel}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=0.5\textwidth]{images/CFTVBSNeelI}
\caption{The conformal tower at the critical line between the AKLT and
N\'eel phases, specifically $a_1 = 1.0$, $a_2 = 2.575228$, with
$\cchip{12}{10}$, after 7 RG steps.
We find excellent replication of the conformal
tower of the Ising CFT. Here we have marked theoretical values for the
conformal towers of primary operators
$\mathbbm{1}$, $\epsilon$, and $\sigma$. We find no scaling
operators with nontrivial $U(1)$ charge; what we find instead, with the
spin operator $\sigma$ in the parity -1 sector, is that (arbitrary) $O(2)$
reflections play the role of spin-flips in the Ising model.}
\label{fig:CFTVBSNeelIsing}
\end{figure}
The disordered and N\'eel-ordered phases described above were previously
identified by Niggemann, Kl\"umper, and Zittartz in their original
work, using Monte Carlo methods. They additionally claim that the critical line
separating these two phases has Ising-like critical exponents and is located at
$a_2^2 = (3.0\pm0.1)a_1^2 + (3.7\pm0.3)$. We evaluate this claim using TNR:
given a value of $a_1$, we analyze several candidate values of $a_2$,
coarse-graining until the estimated value of $c$ passes below a threshold,
and then take a refined selection of values of $a_2$ around the one that
retained the greatest $c$. By using this method to seek a point at which $c$
maintains an asymptotic value up to 12 coarse-graining steps,
we can resolve $a_2$ to several parts per hundred thousand,
as Fig.~\ref{fig:cVBSNeel} demonstrates. Fig.~\ref{fig:curveVBSNeel}
compares results from two values of the bond dimension and from the
estimates in the original work. Moreover, this analysis neatly
confirms the ``Ising-like'' nature of the transition; in
Fig.~\ref{fig:CFTVBSNeelIsing}, we see with TNR that the IR limit of the
doubled vertex model along
the transition exactly matches the Ising CFT with $c=\frac{1}{2}$, with the
spin-flips $R^\pi_\phi$ (the $O(2)$ reflections) playing the role of the
Ising spin-flip. However, in the $a_1\to 0$ limit we find that $c$ becomes 1,
as also demonstrated in Fig.~\ref{fig:cVBSNeel}.
\begin{table}
\[
\footnotesize
\begin{array}{r || c c c c | c }
a_1 & \chi=12, & \chi=16, & \chi=20, & \chi=24, & \text{NKZ}\\
& \chi'=10 & \chi'=12 & \chi'=14 & \chi'=16\\
\hline
0.0 & 1.774789(6) & 1.779038(1) & 1.779243(1) & 1.779348(1) & 1.92(8)\\
0.1 & 1.795103(6) & 1.798905(3) & - & - & 1.93(8)\\
0.2 & 1.835964(6) & 1.839119(6) & - & - & 1.95(8)\\
0.3 & 1.892065(6) & 1.894952(6) & - & - & 1.99(8)
\end{array}
\]
\caption{We use TNR to determine the critical line between the AKLT and
N\'eel-ordered phases, increasing bond dimension from $\chi=12$ to $\chi=16$
at four points and then to $\chi=20$ and $\chi=24$ at one point
to determine the accuracy
of our estimates. Although we determine that the bias is much greater than
the uncertainty of these estimates, we find that it appears nonetheless to
be within $\Delta a_2 < 0.01$ for our least accurate, $\chi=12$ estimates,
and within $\Delta a_2 < 0.001$ for our $\chi=16$ estimates. We also find that
the error appears to decrease for increasing $a_1$, which also reduces
the difference between the original Niggemann, Kl\"umper, and Zittartz (NKZ)
estimates and ours until they are
within appropriate error of each other.}
\label{tab:VBSNeel}
\end{table}
\subsection{The XY-like phase}
\label{sec:XY-properties}
In a region near the origin $a_1=a_2=0$ of the phase diagram,
we will find that the state has infinite correlation length (or equivalently,
quasi-long-range order), as was reported for the analogous model on the
honeycomb lattice.\cite{AKLTspin32} This quasi-long-range-ordered region
will explain much of the anomalous behavior observed in
Fig.~\ref{fig:tnST_square}. In this phase, the doubled vertex model of
\eqref{eqn:doubletensor} is described in the infrared limit by the
continuously-parametrized field theory of the compactified free boson, much like
the XY model in its low-temperature phase. We will begin by describing this
conformal field theory (CFT), following Fendley \cite{Fendley}
and Di Francesco, Mathieu, and S\'en\'echal\cite{DiFrancesco}.
The compactified-free-boson CFT has central charge 1 and
is characterized by a bosonic field $\phi$ whose values are angles and which
has some coupling constant $g$.\footnote{In the XY model, under the original
approximation scheme of Kosterlitz and Thouless\cite{Kosterlitz},
$g = \frac{k_BT}{4\pi J}$, based on the critical exponents they derive.}
The field $\phi$ itself is not a valid operator on the CFT due to its
logarithmic
divergences; however the theory admits derivative and vertex operators,
represented in terms of the holomorphic and antiholomorphic components
$\phi~=~\varphi~+~\bar\varphi$ as
\begin{equation}
\begin{array}{c|c|c}
\text{Field}&\Delta&s\\\hline
\partial\varphi&1&1\\
\bar\partial\bar\varphi&1&-1\\
V_{e,m}&\frac{e^2}{2g} + \frac{m^2g}{2}&em
\end{array}
\end{equation}
where $\Delta = h+\bar{h}$ is the scaling dimension and $s=h-\bar{h}$ is the
conformal spin, $h$ and $\bar{h}$ being the holomorphic and antiholomorphic
conformal dimensions. The vertex operators $V_{e,m} \propto \ :\mathrel{e^{i(e+gm)\varphi}e^{i(e-gm)\bar\varphi}}:$ are indexed by an
``electric charge'' $e \in \mathbb{Z}$ and ``magnetic charge'' $m\in\mathbb{Z}$.
Here $e$ is a $U(1)$ charge, which is to say that a global rotation
$\phi \mapsto \phi+\phi_0$
will send $V_{e,m}\mapsto e^{ie\phi_0}V_{e,m}$; magnetic charge indicates
vortex winding number: in a configuration produced by inserting
$V_{e,m}(z)$, there is a branch cut from $z$ to infinity (or to another vortex)
around which $\phi$ picks up $2\pi m$.
In the full global on-site $O(2)$ symmetry of the compactified free boson or of
the XY model, the charge-$\pm k$ representations
$v \mapsto e^{\pm i k\phi}v$
of the subgroup $U(1)~\subset~O(2)$, of rotations
$|s\rangle \mapsto e^{-i \phi S_z}|s\rangle$, pair to form
doublets; an additional pair of 1D representations have no $U(1)$ charge but
are either even or odd under reflections. Under this symmetry, the derivative
operators belong to the rotation-invariant reflection-odd representation;
the two electric operators $V_{\pm e,0}$ for $e \neq 0$ form a doublet; the four
electromagnetic operators $V_{\pm e, \pm m}$ for $e,m \neq 0$ form a pair of
doublets; and the magnetic operators $V_{0,\pm m}$ for $m > 0$ are exchanged
under reflections, so that $V_{0,m}+V_{0,-m}$ belongs to the
trivial representation and $V_{0,m}-V_{0,-m}$ belongs to the
reflection-odd representation. In particular, $V_{0,1}+V_{0,-1}$
has the smallest scaling dimension of any
$O(2)$-invariant primary operator other than the identity, and will therefore
induce a phase transition when it becomes relevant. As
$\Delta_{0,\pm 1}=\frac{g}{2}$, and a scaling operator is relevant
when $\Delta < d$, this occurs at coupling $g=2d=4$.
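The counting above is easy to verify numerically. The following sketch is our own illustrative check (the function name is ours, not from any codebase): it evaluates $\Delta$ and $s$ from the table and confirms that the leading $O(2)$-invariant perturbation $V_{0,\pm 1}$ crosses relevance exactly at $g=4$.

```python
# Illustrative check (not the paper's code): scaling data of the vertex
# operators V_{e,m}, with Delta = e^2/(2g) + m^2*g/2 and spin s = e*m.

def vertex_dims(e, m, g):
    """Scaling dimension and conformal spin of V_{e,m}."""
    return e**2 / (2 * g) + m**2 * g / 2, e * m

d = 2  # dimension of the classical model; "relevant" means Delta < d

# The leading O(2)-invariant perturbation V_{0,1}+V_{0,-1} has Delta = g/2,
# so it crosses relevance exactly at g = 2d = 4:
for g in (3.5, 4.0, 4.5):
    delta, spin = vertex_dims(0, 1, g)
    status = "relevant" if delta < d else "marginal/irrelevant"
    print(f"g={g}: Delta={delta}, s={spin} -> {status}")
```

The same function reproduces the electric dimension $\Delta_{1,0}=\frac{1}{2g}$ used below for the correlation exponent $\eta=2\Delta=1/g$.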
\begin{figure} [h!]
\includegraphics[width=0.5\textwidth]{images/mapXY20}
\caption{Basic estimates of $g$ and $c$ from TNR, $\cchip{20}{14}$.
(a) After six coarse-graining steps we can see that the region $g \geq 4$ is
bounded by a curve which intersects the axes at roughly $a_1 = 1.2$ and
$a_2=1.0$. As $a_2$ is held constant and $a_1$ decreases, $g$ increases to as
much as 5.5 before falling off towards the $a_2$ axis.
(b) After 12 coarse-graining steps, $g$ has remained approximately constant
within the region where $g \geq 4$, but it
has decreased, sometimes substantially,
where $g < 4$. (c) After six RG steps $c$ maintains a value very close to 1 in
a region roughly corresponding to $g > 2.5$. (d) This is also true after 12
RG steps, but that region has shrunk due to changes in the estimated value of
$g$. Combining the estimates for $g$ and $c$, we end up with an XY-like phase
that is quite well defined away from the pseudo-quasi-long-range ordered region.}
\label{fig:XYmaps20}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=0.5\textwidth]{images/XYcompare-interior}
\caption{Following the estimated quantities $g$ and $c$ with increasing
system size - measured in number of coarse-graining steps - at
different bond dimensions. (a),(b) For both $a_1\!=\!.8,\,a_2\!=\!0$ and $a_1\!=\!.5,\,a_2\!=\!.3$,
$c$ converges as the bond dimension increases, and by $\chi\!=\!26$ maintains
a value of $1.005\pm.005$ as coarse-graining increases the length scale.
(c) At $a_1\!=\!.8,\,a_2\!=\!0$,
$g$ converges with increasing bond dimension, with a value of $4.80\pm .01$
at $\chi\!=\!26$
which is steady under coarse-graining. (d) At $a_1\!=\!.5,\,a_2\!=\!.3$, $g$ converges
with increasing bond dimension, with a value of $5.30\pm .01 $ at $\chi\!=\!26$
which is steady under coarse-graining.
(Here, and in any other such presentations of data from TNR, we use
$+$ to mark data obtained while preserving $\mathbb{Z}_2\times\mathbb{Z}_2$
and $\times$ to mark data obtained while preserving
$D_{2N}$, typically $D_{80}$; the latter is usually more consistent and
demonstrates more stability.)}
\label{fig:XYcompare-interior}
\end{figure}
We will conclude that this CFT describes the doubled vertex model, including a
transition at $g=4$. By performing TNR (see Appendix~\ref{sec:TNR}), we can
approximate the value of the classical central charge $c$ and the coupling $g$
in much of the XY phase and estimate
the contours of that phase. We find a region in which the
estimated value of $c$ converges to approximately 1 and the estimated value of
$g$ converges to varying values: $g \simeq 4$ on the boundary of this region,
increasing to about 5.5 moving inward towards the origin.
In Fig.~\ref{fig:XYmaps20} we use TNR with bond dimensions
$\cchip{20}{14}$ (which regulate
the size of renormalized degrees of freedom in each half-step and intermediate
step, respectively) to define this region:
Its outer boundary intersects the $a_1$ axis at approximately 1.2 and the
$a_2$ axis at approximately 1.1; its inner boundary is unclear. As
will be discussed later, the results of this scan are not conclusive in the
inner region, which requires analysis with higher bond dimensions.
Up to a certain point, however, the values from this analysis prove robust when
we increase bond dimension, as shown for two values from the interior of the XY
region in Fig.~\ref{fig:XYcompare-interior}. Furthermore, in
Appendix~\ref{app:tower}, we analyze the conformal tower at these points
in order to get a convincing confirmation that the doubled vertex model
has as its infrared limit a compactified-free-boson CFT.
\begin{figure}[h!]
\includegraphics[width=.5\textwidth]{images/XYcorr}
\caption{Correlation functions of $S_x$ from HOTRG, on a $2^{21}\times 2^{21}$
torus, compared with power-law
estimates of the form $C(L) = C_0L^{-\eta}$, where $\eta=2\Delta=1/g$ is
determined by the TNR estimates in Fig.~\ref{fig:XYcompare-interior} and
$C_0$ is estimated to provide the best tangent line. In both cases
we see the measured curve approaching the power-law estimate with
increasing bond dimension. (a) At $a_1=.8,a_2=0$, the TNR estimate
gives $\eta=0.2083$, and we estimate $C_0=1.711$. (b) At $a_1=.5,a_2=.3$, the
TNR estimate gives $\eta=0.1887$, and we estimate $C_0=1.2309$.}
\label{fig:XYcorr}
\end{figure}
In Appendix~\ref{app:XY-theory}, we attempt
to explain this using the spin-coherent-state
picture as presented by Haldane\cite{Haldane2D} (see also
Auerbach\cite{Auerbach}). Within that framework,
$S_x$ and $S_y$, as classical observables in the sense of
Fig.~\ref{fig:quantumclassical}, should be proportional to the
primary operators
$V_{1,0}\pm V_{-1,0}$, which have scaling dimension $\frac{1}{2g}$ and conformal
spin 0 and which transform into each other under the $k=1$ irrep of the
on-site $O(2)$ symmetry. In particular, we expect the correlation-function
behavior
\begin{equation}
\langle S_\phi(\vec{r})S_\phi(\vec{r}')\rangle \sim |\vec{r}-\vec{r}'|^{2-d-\eta} = |\vec{r}-\vec{r}'|^{-\frac{1}{g}},
\end{equation}
where $S_\phi$ is the combination $S_x\cos\phi + S_y\sin\phi$. As
$S_\phi$ is a quantum observable as well, if this power-law relation holds
it implies quasi-long-range ordered behavior of the quantum model.
To this end we have attempted to extract
the correlations of $S_x$ at these points. Our current TNR methods
cannot efficiently calculate correlation functions at the
bond dimensions of our probe, so we have instead used HOTRG; but this method
cannot replicate long-range behavior of critical systems with finite bond
dimension, so we instead try to replicate the $\chi\to\infty$ limit by
increasing the bond dimension. In Fig.~\ref{fig:XYcorr} we demonstrate
that the correlation functions thus obtained approach
a curve $C_0r^{-\eta}$, where $\eta$ is the critical exponent expected from the
previously-discussed TNR studies and the coefficient $C_0$ is chosen to fit
the HOTRG data. This demonstrates that the quasi-long-range
order of the doubled vertex model does in fact reflect quasi-long-range order
of the quantum state. In Appendix~\ref{app:delta}, we additionally study
the critical exponent $\delta$.
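As a concrete sketch of the fitting step just described (our own minimal version, with synthetic data rather than the actual HOTRG output), one can hold $\eta=1/g$ fixed at the TNR value and choose $C_0$ by least squares in log-log space, which is equivalent to picking the best tangent line:

```python
import math

def fit_C0(rs, Cs, eta):
    # With eta held fixed, log C = log C0 - eta*log r, so the least-squares
    # choice of C0 is the mean of (log C + eta*log r), exponentiated.
    logs = [math.log(C) + eta * math.log(r) for r, C in zip(rs, Cs)]
    return math.exp(sum(logs) / len(logs))

# Synthetic data lying exactly on a power law, taking for illustration the
# interior TNR value g = 4.80 quoted above (amplitude 1.711 is arbitrary):
eta = 1 / 4.80
rs = [2**k for k in range(3, 12)]
Cs = [1.711 * r**(-eta) for r in rs]
print(round(fit_C0(rs, Cs, eta), 3))  # recovers the amplitude 1.711
```

With real HOTRG data the measured points deviate from the power law at short and long distances, so in practice only an intermediate window of $r$ would enter the fit.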
\subsection{The Kosterlitz-Thouless transition between the XY and AKLT phases}
\label{sec:KT}
\begin{figure}[h!]
\includegraphics[width=.5\textwidth]{images/KTcomp}
\caption{We use TNR at $\cchip{20}{14}$ (dashed lines) and $\cchip{30}{20}$
(solid lines) to analyze the long-range behavior of the system in the
immediate vicinity of the KT-like transition at $a_1=0.2$ (a),(b) and
$a_2=0.5$ (c),(d). We examine values of our estimates for the classical central
charge $c$ (a),(c) and the coupling $g$ (b),(d) after successive coarse-graining
steps, and conclude:
(i) While corrections to behavior from increasing bond dimension are
present, they are unlikely to influence estimates of the location of the
transition by more than about $|\Delta\vec{a}| \simeq 0.05$, which indicates
that the $\chi=20$ behavior of Fig.~\ref{fig:XYmaps20} is largely accurate.
(ii) The idea that a coupling $g<4$ induces the appearance of a length
scale is readily confirmed; in fact in (b) we see that, at $a_1=0.2,a_2=1.05$,
the system appears to flow to a non-trivial fixed point when $g$ barely fails
to cross below 4 at $\chi=20$, but at $\chi=30$ a small correction to
shorter-ranged values of $g$ induces this crossing and causes a violation
of scale invariance. Moreover, $c$ only deviates noticeably from 1 at
length scales where $g$ is rapidly flowing towards 0, but invariably does
so under those conditions.
(iii) In (b) we see much more substantial corrections to $g$ at shorter
length scales than in (d), likely related to the much stronger persistence
of pseudo-quasi-long-range behavior near the points analyzed in (d) than
(b) - specifically, the same perturbations which we expect to induce the
transition for $g<4$ may also lead to corrections to the coupling for $g>4$,
such that the suppression of these perturbations that increases the
persistence of pseudo-quasi-long-range order near the $a_1$ axis may
also reduce corrections to $g$ near the transition in that region.
}
\label{fig:KTcomp}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=.5\textwidth]{images/loop_TNR}
\caption{We use loop-TNR to analyze the same points in the phase diagram
as in Fig.~\ref{fig:KTcomp}, estimating $c$ for $\chi=16$ and $\chi=20$.
The results are noisier than those obtained with TNR (possibly in part because
we did not preserve symmetries when using loop-TNR), but by $\chi=20$ they are
largely similar to the TNR results: when we
consider the KT transition at $a_1=0.2$, we observe that $c$ appears stable
for $a_2=1$ and $a_2=1.05$ but not $a_2=1.1$ or $a_2=1.15$; meanwhile, when
we consider the KT transition at $a_2=0.5$, we observe that $c$ appears stable
for $a_1=1.1$ and $a_1=1.15$ but not for $a_1=1.2$.}
\label{fig:loopKT}
\end{figure}
We have hypothesized that the XY phase is stable only when the long-range
value of the coupling $g$ is at least 4. Fig.~\ref{fig:XYmaps20} gives some
evidence for this at selected length scales and fixed bond dimension. In
Fig.~\ref{fig:KTcomp}, we take some points that we can predict to be near
the phase transition and observe how estimates for $c$ and $g$ behave at very
large length scales. The result confirms the expected behavior of $c$ and $g$ as
asymptotic values. Moreover, a substantial increase in bond dimension from that
used for Fig.~\ref{fig:XYmaps20} does not substantially alter the result,
lending confidence to our conclusion. In Fig.~\ref{fig:loopKT} we apply
loop-TNR and find results compatible with those obtained from
Evenbly and Vidal's TNR algorithm.
In Appendix~\ref{app:KT} we examine many points around the
transition to try to see a more thorough picture.
We can also use correlation functions to confirm that points on either side of
the supposed transition lie in, respectively, critical and non-critical phases.
Contrasting Fig.~\ref{fig:XYcorr} with
corresponding data from outside of the transition as in Fig.~\ref{fig:VBScorr},
we find that in the latter case the correlation function converges to a form
that exponentially decays to machine epsilon.
\begin{figure}[h!]
\includegraphics[width=.5\textwidth]{images/VBScorr}
\caption{Using HOTRG, we examine the correlation function
$\langle S_xS_x\rangle$ at several points inside the AKLT phase, varying
bond dimension to confirm convergence and fitting the highest-bond-dimension
curves to the exponential-decay form $C(r) = C_0 r^{-\eta}e^{-r/\xi}$.
Additionally, by interpolating from TNR data with $\cchip{20}{14}$, we
estimate the classical central charge at the length scale of the
correlation length $\xi$ determined by fitting and find
that the sharp falloff in estimated $c$ roughly predicts the correlation
length, with $c(\xi) \sim 0.35$. (Note that, as we obtain $c$ by comparing
transfer matrices $M$ of different length, our estimates of $c$ do not
correspond to a precise length; here we use $c(L)$ to refer to the $c$
obtained by comparing $M(3L)$ and $M(2L)$.)
(a),(b) At $a_1=2.5,a_2=0.5$, toward the outside of the pseudo-quasi-long-range
ordered region, we obtain $C_0 \simeq 0.853$, $\eta \simeq 0.247$,
and $\xi \simeq 365$, with $c(\xi) \simeq 0.30$.
(c),(d) At $a_1=2.0,a_2=0.5$, deeper within the pseudo-quasi-long-range ordered
region, we obtain $C_0 \simeq 1.113$, $\eta \simeq 0.223$,
and $\xi \simeq 693$, with $c(\xi) \simeq 0.44$.
(e),(f) At $a_1=1.2,a_2=0.2$, near the $a_2$ axis and thus further from
pseudo-quasi-long-range ordered effects,
we obtain $C_0 \simeq 0.287$, $\eta \simeq 0.245$,
and $\xi \simeq 203$, with $c(\xi) \simeq 0.21$.
}
\label{fig:VBScorr}
\end{figure}
We see that, within what we label the XY phase, our estimates of $g$
converge with successive coarse-graining operations, with small negative
corrections that grow as we approach the transition,
at which $g$ converges to a value
of approximately 4. Beyond the transition, the estimated $g$ falls to zero,
either starting below 4 and quickly dropping, or starting above 4, slowly
declining until it passes below 4, and then quickly dropping,
in both cases indicating that scale-invariance fails and so the doubled
vertex model departs from the conformal invariance of the XY
phase\footnote{Recall that
our estimates of $g$ are inversely proportional to our estimates of the first
scaling dimension, determined from the logarithm of the ratio of
the first two eigenvalues of the transfer matrix
(or the first two with $U(1)$ charges 0 and 1, respectively).
Thus, when we observe the estimated value of $g$ falling to 0 under
coarse-graining, we can typically assume that this implies the opening of a gap.
These cases should not be confused
with actual free-boson CFTs with $g < 1$, which are equivalent to
$g > 1$ theories under T-duality, which exchanges ``magnetic'' vertex operators
with ``electric'' operators and takes $g \leftrightarrow 1/g$ -
in which case the vertex operators with the least scaling dimension are now
``magnetic'' rather than ``electric''.}. Either way, the estimate
for $c$ plateaus at approximately 1 and diverges noticeably from that value
only when $g$ is significantly less than 4 at the length scale in question.
In fact, in Fig.~\ref{fig:VBScorr} we will find that the length scale at which
$c$ measurably decays approximately predicts the correlation length.
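To make the footnote's procedure concrete, here is a minimal sketch in our own notation (not the paper's code), assuming the standard cylinder relations $\lambda_1/\lambda_0 = e^{-2\pi\Delta_1/L}$ for the first gap and $\ln\lambda_0(L) = fL + \pi c/(6L)$ for the leading eigenvalue of a circumference-$L$ transfer matrix:

```python
import math

def delta_from_ratio(lam0, lam1, L):
    """First scaling dimension from lambda_1/lambda_0 = exp(-2*pi*Delta_1/L)."""
    return -L / (2 * math.pi) * math.log(lam1 / lam0)

def g_from_delta(delta1):
    """Coupling estimate, assuming the first gap is the electric
    dimension Delta_{1,0} = 1/(2g)."""
    return 1.0 / (2.0 * delta1)

def c_from_two_sizes(F2L, F3L, L):
    """Central charge from F(L) := ln(lambda_0(L)) = f*L + pi*c/(6*L),
    eliminating the nonuniversal bulk term f between circumferences
    2L and 3L (mirroring the comparison of M(3L) and M(2L))."""
    return 36 * L * (3 * F2L - 2 * F3L) / (5 * math.pi)

# Round-trip check at the interior XY value g = 4.8:
L = 16
ratio = math.exp(-2 * math.pi * (1 / (2 * 4.8)) / L)
print(round(g_from_delta(delta_from_ratio(1.0, ratio, L)), 3))  # 4.8
```

In the actual TNR calculation the eigenvalues come from the renormalized transfer matrix at each coarse-graining step, so these formulas yield the scale-dependent estimates of $g$ and $c$ plotted above.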
\subsection{The pseudo-quasi-long-range ordered region}
\label{sec:pqlro}
Informed by the observation that the decay of $c$ roughly predicts
correlation length, we note that we can generally estimate $c$ to be
about 1 much further into the AKLT phase for small $a_2$
than we can for small $a_1$; in particular this behavior is most evident in
Figs.~\ref{fig:XYmaps20} and \ref{fig:KTlines}.
This suggests an extended region of
``pseudo-quasi-long-range order'' (behavior which imitates that of the
quasi-long-range ordered XY phase up to length scales large enough that they
may be experimentally impractical)
near the $a_1$ axis. In fact in Fig.~\ref{fig:squarephase}, we delineate
a region ``above'' the XY phase that is nearly as large, which we believe to
have correlation length of about 1000 times the lattice spacing or more.
(Such behavior has, notably, interfered with attempts to delineate the XY phase
using methods other than TNR, even at relatively high bond dimensions.)
Based on the
analysis of Haldane\cite{Haldane2D}, we believe this is because the system in
this region resembles a spin-1 antiferromagnet well enough that tunneling
processes of odd
winding number, including those that induce the KT transition, are almost
entirely suppressed and can therefore only weakly break scale invariance. We
present this argument in somewhat more detail in Appendix~\ref{app:spin1}.
We may also confirm the KT transition using the physical critical
exponent $\delta$, as in Appendix~\ref{app:delta}, or
use the corner entropy to approximate the boundary of the XY
phase, as in Appendix~\ref{app:corner}. The results are consistent with those
from TNR, although not as precise.
\subsection{The elusive product-state phase}
\label{sec:prodstate}
\begin{figure}[h!]
\includegraphics[width=.5\textwidth]{images/prodchi}
\caption{A very rough estimate of the bounds of the product-state region
obtained using TNR at various bond dimensions. For a given bond dimension
$\chi$, points within the regions labeled with $\chi$ or $\chi_2>\chi$
are expected to correspond to parameters for which TNR will report trivial
behavior at that bond dimension; outside of that region XY-like behavior
is expected at that bond dimension. Most of the data used to
establish these boundaries can be found in Figs. \ref{fig:prodpts} and
\ref{fig:prodKT}.}
\label{fig:prodchi}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=.5\textwidth]{images/prodSTall}
\caption{Traces of the simulated modular $S$ and $T$ matrices obtained by
inserting virtual symmetry matrices
$\sigma_z$, using HOTRG with bond dimensions
(a),(b) $\chi\nobreak\!=\nobreak\!16$ (c),(d) $\chi\nobreak\!=\nobreak\!24$ (e),(f) $\chi\nobreak\!=\nobreak\!32$. We see that both traces
appear to be
close to 1 in much of the XY phase, but rise toward the trivial value of
4 in a small region within $a_1\!<\!.04, \,a_2\!<\!.4$, similar to
the estimates in Fig.~\ref{fig:prodKT}. However, as the bond dimension grows,
the extent of this region appears to shrink, much as in Fig.~\ref{fig:prodchi}.}
\label{fig:prodST}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=.5\textwidth]{images/prodpts}
\caption{We perform 12 coarse-graining steps of TNR at several points near
the origin, estimating $g$ after the final coarse-graining step and using
that to determine the behavior in this region. (a) For $\cchip{30}{20}$,
$g$ consistently falls as we approach the origin within the area shown,
inducing a KT-like transition into an apparent trivial phase in a region
which intersects the $a_1$ axis at $a_1 \simeq 0.01$ and the $a_2$ axis at
$a_2 \simeq 0.2$. (b) For $\cchip{36}{24}$, however, there does not
appear to be a well-defined product-state phase, at least for $a_1\geq 0.001$.
Rather, we observe fluctuations in the estimated value of $g$ with no
discernible pattern and with non-trivial limiting values of $g$ found
as close to the origin as $(a_1,a_2) = (0.001,0.001)$.}
\label{fig:prodpts}
\end{figure}
When $a_1 = a_2 = 0$, however, the deformation projects the state onto the
product $|0,0,\ldots,0,0\rangle$. Na\"ively, we expect to find a gapped,
entirely trivial phase in a region around this. But the amplitude of
$|0,0,\ldots,0,0\rangle$ in the AKLT state is precisely
equal to the partition function
of a six-vertex model (specifically at the ``square-ice'' point
$a=b=c=1$), whose degrees of freedom are reflected in the
virtual, or entanglement, degrees of freedom of the PEPS
representation of this state. On the square
lattice this model has been well-studied and is known to be critical,
behaving in the infrared as the compactified free boson CFT with coupling
$g=\frac{1}{3}$. At this point,
therefore, the doubled vertex model is critical with $c=2$ despite the
quantum state being trivial in every way.
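As a self-contained toy check of the square-ice statement (our own encoding, not the TNR tensors used elsewhere in this work), one can build the row-to-row six-vertex transfer matrix on a small periodic cylinder and watch the per-site entropy approach Lieb's exact square-ice value $W=(4/3)^{3/2}\approx 1.5396$:

```python
# Vertical/horizontal arrows are encoded as bits; the ice rule becomes the
# line-conservation condition h_next = h + s - s' at each vertex, with h
# constrained to {0, 1} and periodic around the cylinder.

def ice_transfer_matrix(N):
    dim = 2 ** N
    T = [[0] * dim for _ in range(dim)]
    for s in range(dim):           # incoming row of vertical arrows
        for sp in range(dim):      # outgoing row of vertical arrows
            for h0 in (0, 1):      # leftmost horizontal arrow
                h, ok = h0, True
                for i in range(N):
                    h = h + ((s >> i) & 1) - ((sp >> i) & 1)
                    if h not in (0, 1):
                        ok = False
                        break
                if ok and h == h0:  # periodic horizontal boundary
                    T[sp][s] += 1
    return T

def largest_eigenvalue(T, iters=200):
    """Power iteration; T is small and nonnegative, so this converges."""
    dim = len(T)
    v = [1.0] * dim
    lam = 1.0
    for _ in range(iters):
        w = [sum(Tij * vj for Tij, vj in zip(row, v)) for row in T]
        lam = max(w)
        v = [x / lam for x in w]
    return lam

for N in (2, 4):
    lam = largest_eigenvalue(ice_transfer_matrix(N))
    # ~1.732 (N=2), ~1.589 (N=4): decreasing toward (4/3)**1.5 ~ 1.5396
    print(N, lam ** (1 / N))
```

Since the doubled vertex model squares these weights, the criticality of square ice directly forces the doubled model to be critical at the origin.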
In TNR studies of the region of the phase diagram surrounding this
point, when we approach the origin
\textit{with fixed bond dimension}
we find that the system appears to encounter a second
KT-like transition, visible in Fig.~\ref{fig:XYmaps20}: $g$ appears to fall
from a maximum value, ultimately reaching a value of 4 after which the behavior
ceases to be critical, with $c$ ultimately falling to 0 rather than remaining
stable. This behavior is shown in more detail in Appendix~\ref{app:KT}.
We may also observe such behavior by analyzing simulated $S$ and $T$ traces
as in Fig.~\ref{fig:prodST}, which appear to reach a trivial value of 4 in
this region.
However, when we increase the bond dimension, we find that the boundary of
this transition recedes towards the origin as in Fig.~\ref{fig:prodchi},
and points which appear to have
finite correlation length at lower bond dimension tend to obtain a central
charge of 1 at higher bond dimensions.
In Appendix~\ref{app:linecomp} we analyze individual points
near the origin of the phase diagram and find that results are highly sensitive
to bond dimension. In fact, when we
increase the bond dimension from $\cchip{30}{20}$ to $\cchip{36}{24}$,
rather than straightforward behavior in a well-defined region we find noisy
fluctuations in $g$ (some of which do pass below 4), as in
Fig.~\ref{fig:prodpts}. Some results from $\cchip{42}{28}$ are also
presented in Appendix~\ref{app:linecomp}; they do little to clarify this
picture.
The principal alternatives we should consider are that:
\begin{enumerate}
\item The product-state phase has finite, but small, extent and is defined
by a KT-like transition much as the data in Fig.~\ref{fig:XYmaps20} suggests;
\item The product-state phase does not exist; rather, the coupling $g$
\begin{enumerate}
\item peaks along some curve, before falling to a limiting value,
likely 4, approaching the origin;
\item keeps rising to a limiting value of 6 or greater approaching the
origin; or
\item keeps rising to $\infty$, much as at the ferromagnetic Heisenberg
point of the XXZ model.\cite{Fendley}
\end{enumerate}
\end{enumerate}
It is currently unclear which of these is most likely, but evidence does suggest
the absence of a separate gapped and completely trivial phase.
To this lack of evidence we add that the arguments we have used to
justify our expectation of a trivial phase are inherently flawed.
The parent Hamiltonian\cite{NKZspin2} derived by Niggemann, Kl\"umper,
and Zittartz does not
extend to the limiting case $a_1=a_2=0$, nor does the formulation in
\eqref{eqn:Ha}: in order to project out spin values, a parent Hamiltonian
will generally have to increase its rank, which is impossible to do via
continuous deformation. For example, at all points in the interior of the
phase diagram, the two-site parent Hamiltonian will annihilate
$\ket{12}-\ket{21}$ (see Eq.~11 of Niggemann, Kl\"umper, and
Zittartz\cite{NKZspin2}); therefore, so must the limit of any sequence of
such Hamiltonians. For $a_1\to 0$ or $a_2\to 0$, this introduces the
possibility that
some sites may have (respectively) $S_z=1$ or $S_z=2$ with nonzero probability
in the ground space of the limiting Hamiltonian, even though they have been
projected out of the deformed
state.
We also note that, if we were to include the product state at $a_1=a_2=0$ in the
phase diagram, we would be implicitly suggesting that nearby states in
the phase diagram could be obtained by perturbing this product state, which
should in turn imply that it is enclosed in the phase diagram by a phase
of finite correlation length. However, we suggest that such ``perturbations''
may instead have quasi-long-range correlations and therefore should, in the
thermodynamic limit, dominate the original (product) state at any magnitude.
We conclude, therefore, that if we wish to connect the point $a_1=a_2=0$ with
the surrounding points of the phase diagram, we should do so with extreme
caution.
\section{Re-examining the XY phase on the honeycomb lattice}
\label{sec:honeycomb}
\begin{figure}[h!]
\includegraphics[width=0.5\textwidth]{images/hexline}
\caption{We use TNR at various bond dimensions to analyze the honeycomb-lattice
system.
(a) At linear system size $O(10)$, a region extending to $a \simeq 0.4$ appears
to have long-range behavior with $c=1$.
(b) For system length $O(100)$, this has shrunk to
within $a \simeq 0.1$. (c) By system length $O(1000)$, all points clearly do
not exhibit quasi-long-range ordered behavior.
(d) The value of $g$ estimated at small system sizes is close to 3.5 at
$a=0$, and smoothly falls off leaving the ``pseudo-quasi-long-range ordered''
region. (e),(f) At all points the estimate of $g$ gradually falls off to 0.
(g),(h),(i) TNR estimates of the Chen-Gu-Wen $X$-ratio \cite{ChenGuWen} rise towards
the AKLT-phase value of 1. Within the pseudo-quasi-long-range ordered region,
however, they may appear to take a different, nontrivial intermediate value
up to fairly large length scales.}
\label{fig:hexregion}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=0.5\textwidth]{images/hex0}
\caption{We use TNR to analyze the honeycomb system at $a=0$, the most likely
candidate for critical behavior.
(a) Although increasing bond dimension tends to raise the estimated
value of the classical central charge $c$ towards 1, as a function of
the length scale $c$ appears to converge to
a decaying form at the highest bond dimensions tested.
(b) When we estimate the coupling $g$, we find that its initial value always
appears to be close to $3.6$, below the critical value of 4, and that it never
appears to be stable under coarse-graining.
We additionally note that several successive increases in bond dimension, from
$\cchip{26}{20}$ to ultimately $\cchip{42}{28}$,
do not substantially affect the estimates for either $c$ or $g$.
}
\label{fig:hexpoint}
\end{figure}
In Huang, Wagner and Wei\cite{AKLTspin32}, the analogous model for the
honeycomb lattice was
examined using tensor-network methods, and it was concluded that a
quasi-long-range ordered phase
exists close to the value $a=0$ of the perturbation parameter.
Having refined our analysis to more sensitively
judge the properties and boundaries of such a phase, as in
Sec.~\ref{sec:XY-properties}, we return to
this model. The numerical methods are the same as for the
square-lattice model (save for a
correction in order to account for the anisotropy induced by blocking
honeycomb-lattice sites into square-lattice sites).
In Fig.~\ref{fig:hexregion}, we observe that successive applications
of TNR coarse-graining, with increasing bond dimension, fail to maintain
evidence of a ``transition'' to a quasi-long-range ordered region; rather,
we observe that what appears to be an XY phase shrinks in size as
RG proceeds. Nowhere, and at no scale, do we estimate $g\geq 4$, and what
was previously believed to be a phase with quasi-long-range order appears to
give way to a region with pseudo-quasi-long-range order similar to the
analogous region of the AKLT phase on the square lattice. Much as we claim
that, when $a_2\sim 0$, the square-lattice state approximates a spin-1
antiferromagnet in which isolated vortices of winding number 1 (modulo 2) are
suppressed by square-lattice symmetries, we claim that, when $a\sim 0$,
the honeycomb-lattice state approximates a spin-$\frac{1}{2}$ antiferromagnet
in which isolated vortices of winding number 1 and 2 (modulo 3) are
suppressed by honeycomb-lattice symmetries. If this is true, we expect
that XY couplings as low as $g=\frac{4}{9}$ should become ``approximately
stable,'' reproducing pseudo-quasi-long-range ordered behavior.
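The arithmetic behind these thresholds is short enough to spell out as our own check: if lattice symmetries suppress isolated vortices of winding number not divisible by $k$, the leading surviving magnetic perturbation is $V_{0,\pm k}$ with $\Delta = k^2 g/2$, so the KT-like instability requires $\Delta < 2$, i.e. $g < 4/k^2$.

```python
def kt_threshold(k):
    """Coupling below which V_{0,k}+V_{0,-k} (Delta = k^2*g/2) is relevant,
    i.e. Delta < d = 2, when vortices of winding not divisible by k are
    suppressed."""
    return 4 / k**2

print(kt_threshold(1))  # 4.0: generic case, KT transition at g = 4
print(kt_threshold(2))  # 1.0: square lattice, odd winding suppressed
print(kt_threshold(3))  # 0.444...: honeycomb, winding 1 and 2 (mod 3) suppressed
```

The $k=3$ value reproduces the quoted bound $g=\frac{4}{9}$ for approximate stability on the honeycomb lattice.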
In Fig.~\ref{fig:hexpoint}, we see that even at the point $a=0$, as we increase
the bond dimension we consistently observe behavior
consistent with finite, but large, correlation length.
\section{Discussion} \label{sec:Conclusion}
We have used tensor-network methods to explore the two-parameter phase diagram
of the deformed-AKLT model on the square lattice. In addition to confirming
our expectations about the AKLT and N\'eel phases, we find a well-defined
quasi-long-range ordered phase with properties resembling those of the classical
XY model, including a Kosterlitz-Thouless (KT)-like transition.
Evenbly and Vidal's
TNR algorithm\cite{TNR} gives us a way to effectively and accurately extract a
substantial amount of information about this behavior; prior to its
development, we may not have even been able to conclusively demonstrate the
phase's existence, as was the case when tensor-network methods were previously
applied to a similar question.\cite{AKLTspin32}
Although we have \textit{not} been
able to efficiently use TNR to directly compute correlation functions, it has
yielded predictions about critical exponents $\eta$ and $\delta$ that we
have been able to roughly confirm with HOTRG. We also find from our
analysis that a ``pseudo-quasi-long-range ordered'' region of persistently large
correlation length extends from part of that transition. We explain this
by arguing that isolated tunneling processes are approximately suppressed,
a claim which could benefit from more rigorous analysis.
We have also re-examined the honeycomb case. Using the analysis that we have
applied to the XY phase of the square-lattice model, we have
found that the region previously identified
as an XY phase is instead a pseudo-quasi-long-range ordered region of the
AKLT phase; that model has no true XY phase.
We also find some peculiar behavior when the parameter $a_1$ is very small.
Aside from an apparent crossover in the AKLT/N\'eel transition in this limit,
we find that there is a region close to the origin ($a_2<0.3$, $a_1\ll 0.1$)
where the system's behavior is no longer evident. Although we have largely
exhausted our resources in attempting to determine the exact behavior in that
region using current methods, we may be able to extract more information
either by refining our techniques, for example by taking further advantage of
the symmetry\cite{Singh1}, or by analyzing the $a_1=0$ line specifically
with approaches that may only apply there. In doing so we would wish to
determine whether or not this region contains a distinct
phase with no long-range order; if so, what the nature of the transition is;
and if not, what the system's behavior is as $a_1$ and $a_2$ are both reduced
to 0.
Future work may examine the mechanism of the Kosterlitz-Thouless transition,
including the origin of the coupling which we have labeled $g$ and the role of
tunneling processes. Extensions to this system, such as deformations of
spin-$2m$ AKLT states, a spin-1 AKLT-like state, or the kagome-lattice AKLT
state, may also give us more information about the underlying physics.
The state's behavior along the $a_1=0$ axis, and how it relates to the
behavior in the interior of the phase diagram as $a_1\to 0$,
may have much to tell us about the behavior of the XY phase near or in the
``product-state'' region.
\nocite{Kosterlitz}
\acknowledgments
The authors would like to acknowledge useful discussions with
Alexander Abanov, Ian Affleck, and
especially Cenke Xu, who suggested the physical picture for the
pseudo-quasi-long-range ordered region.
This work was partially supported by
the National Science Foundation under Grant No. PHY 1620252 and Grant
No. PHY 1314748.
\subsection*{Methods}
\noindent The memory nuclear
spin qubit can be initialized by swapping the initialized NV electron
qubit state to the memory spin or by using dynamical nuclear polarization.
Details on the swap operations are presented in Supplemental Information.
The LG decoupling field~\cite{lee1963nuclear,cai2013large,wang2015positioning} can remain turned on for the entire duration of our protocol, including NV electron spin initialization and readout, because the frequency of the rf decoupling field is far off-resonance from the transition frequencies of the NV electron spin. This allows the rf decoupling field to be applied by external coils and resonators to avoid possible heating of the diamond sample. The LG decoupling with $\Delta_{\text{LG}}=2\pi \times 20$ kHz requires an rf field amplitude
much smaller than the values of $\sim0.1$ T in the control fields reported in refs.~\onlinecite{michal2008two,fuchs2009gigahertz}.
\section{Introduction}
The X-ray variability in both AGN and X-ray binaries can be assessed through the use of the Power Spectral Density (PSD). The PSD describes how the X-ray power varies on different timescales (or different temporal frequencies), which depends on the mechanisms responsible for the X-ray production near black holes \cite[e.g.,][]{Martin2012}. Another timing technique to probe the innermost regions is measuring X-ray reverberation lags, i.e., the time delays between the changes in the direct continuum and reprocessing echoes from the disc \citep[see][for a review]{Uttley2014}. Due to the longer distance travelled by the reflected photons, changes in energy bands dominated by reflection, or reprocessing, lag behind changes in the continuum-dominated band. The first hint of such reverberation delays was reported by \cite{McHardy2007} in the AGN Ark 564, followed by the first robust detection by \cite{Fabian2009} in the AGN 1H0707--495.
X-ray reverberation in black hole X-ray binaries was first robustly detected in GX~339--4 by \cite{Uttley2011} when the source was in its hard state. Previous studies of GX~339--4 pointed to the approximate central mass being $\geq 6 M_{\odot}$ \citep[e.g.,][]{Hynes2003} and a small disc inclination angle \citep{Demarco2015a}. \cite{Miller2008} fit the {\it Suzaku} spectra and found the central black hole has a very high spin $a \sim 0.998$. The X-ray spectroscopic analysis of the hard state spectra from the {\it RXTE} archive carried out by \cite{Garcia2015} suggested the black hole spin to be $a \sim 0.95$. Spectral fitting of GX~339--4 during its very high flux state using $\emph{NuSTAR}$ and $\emph{Swift}$ also suggested a high spin of $a \sim 0.95$ \citep{Parker2016}. According to the time-lag analysis, \cite{Uttley2011} found that the disc thermal emission ($\sim 0.3$--0.7~keV, soft band) leads the power-law variations ($\sim 0.7$--1.5~keV, hard band) on long timescales ($>1$s). \cite{Mahmoud2019} assumed the soft component that leads the power-law emission is a soft Comptonized component. \cite{Rapisarda2016, Rapisarda2017} instead modelled it as a variable inner region of the thin disc. However, the disc blackbody variations lag behind the power-law variations by a few milliseconds on short timescales ($< 1$s). This switch from low-frequency hard to high-frequency soft lags is thought to be produced by two distinct mechanisms. While the hard lags are likely due to inward propagating fluctuations \citep[e.g.,][]{Kotov2001,Arevalo2006}, the soft lags can be explained by thermal reverberation associated with the longer light-travel time the hard photons take from the central power-law X-ray source to the disc where they are reprocessed into relatively soft blackbody emission. The thermal reverberation lags then provide clues to the geometry of the X-ray source and the inner accretion flow close to the event horizon of the central black hole.
The timing analysis of GX~339--4 including additional {\it XMM-Newton} observations revealed that the changes in time lags in the high flux states can be characterized as a function of luminosity \citep{Demarco2015a}. The luminosity may change with the truncation radius. When the luminosity increases, the truncation radius becomes smaller, so the observed flux increases due to additional disc dissipation and reflection from the inner disc. Meanwhile, the amplitude of the reverberation lags becomes smaller as the photons reflecting off the inner radii of the disc have a smaller average light-crossing time. On the other hand, if the truncation radius increases, the luminosity decreases and the reverberation lag increases. \cite{Reig2018} demonstrated that the correlation between the average time lag and the photon index (i.e., the lags increase as the X-ray continuum becomes softer) in black-hole X-ray binaries could be a result of inverse Comptonization at the base of the jet. \cite{Kylafis2018} also reported this tight correlation in GX~339--4. \cite{Sridhar2020} found that the temperatures of both the inner disc and the corona are sensitive to the luminosity of the state transition.
The physical properties as well as the geometry of the Comptonizing region in GX~339--4, however, are still unclear. Different fitting techniques also lead to different implied values of the truncation radius. \cite{Wang2018} investigated the mean spectrum of this source during a failed outburst in 2013 observed by {\it NuSTAR} and {\it Swift}. They found that the truncation radius could reach a maximum of $\sim 37r_{\rm g}$ ($1r_{\rm g}=GM/c^{2}$ is the gravitational radius, where $G$ is the gravitational constant, $M$ is the black hole mass, and $c$ is the speed of light). They also reported a smaller truncation radius of $\sim 3$--$15r_{\rm g}$ during the 2015 outburst. Evolution of the reverberation lag at the end of the 2015 outburst and during the return to quiescence was investigated by \cite{Demarco2017}. By converting the amplitude of the lags to a light-crossing distance, \cite{Demarco2017} estimated the truncation radius to be $\sim40$--$200r_{\rm g}$. \cite{Kylafis2018} showed that during the hard and hard-intermediate states the inner disc could extend inwards, shrinking the hot inner flow and the jet base. \cite{Garcia2019} carried out a spectroscopic analysis of this source when it went through the failed outburst in 2017. They found that a dual-lamppost model provides a better fit than the standard single lamppost source and implied that the truncation radius reaches a few gravitational radii as the luminosity increases in the hard state.
While some of the spectroscopic studies of the reflection component in the bright hard state suggested that the inner disc could extend very close to the innermost stable circular orbit
\citep[e.g.][]{Garcia2015, Steiner2017, Wang2018}, previous literature has also reported significantly larger truncation radii of, e.g., $>35r_{\rm g}$ \citep{Tomsick2009} during the hard state. The truncation of the optically thick, geometrically thin disc in the hard state can be the result of thermal conduction of heat from the corona that causes the inner disc to evaporate \citep{Meyer2000}. \cite{Mahmoud2019} modelled the hard state spectral-timing data of GX~339--4 taking into account the time lags due to reverberation and found evidence of a truncation radius of the order of $\sim20r_{\rm g}$.
\cite{Veledina2016, Veledina2018} investigated the interference of X-rays under the propagating-fluctuations framework and found that the presence of multiple sources (e.g., disc Comptonization and synchrotron Comptonization sources) could produce the complex PSD shapes seen in X-ray binaries. Reverberation signatures can also be imprinted on the PSD profiles \citep{Papadakis2016, Chainakun2019a}. The aim of our work is to develop a PSD model, taking into account both propagating-fluctuations and reverberation effects, to predict the changes in the accretion disc and the inner hot flows of GX~339--4. It is clear that there is a large scatter in the reported values of the truncation radius in GX~339--4 \citep[see, e.g.,][and references therein]{Wang2018}. Here we focus on the very last stages of the soft-to-hard transition as the source returns to quiescence, similar to the observations analyzed by \cite{Demarco2017}.
The GX~339--4 observations used here and the data reduction are explained in Section 2. The theoretical PSD models are presented in Section 3. The fitting results are shown in Section 4, followed by the discussion in Section 5. The conclusions are drawn in Section 6.
\section{Observations and data reduction}
The data analysed in this paper were selected from the final phase of the GX~339--4 outburst observed during August--September 2015, and were obtained from the {\it XMM-Newton} Science Archive.\footnote{\url{http://nxsa.esac.esa.int/}} The selected observations are listed in Table~\ref{tab:xmm_obs}. All observational data were cleaned following the standard data reduction method as described on the {\it XMM-Newton} data analysis webpage{\footnote{\url{https://www.cosmos.esa.int/web/xmm-newton/sas-threads}}} using the Science Analysis Software (SAS) version 18.0.0 with the latest calibration files. We then created EPIC-pn light curves using the SAS task {\sc evselect} with the selection criteria of PATTERN $\leqslant$4 and a time bin size of 6~ms. The source extraction region for the data in timing mode was defined as the columns 28 $\leqslant$ RAWX $\leqslant$ 48, while the region for those observed in small window mode was defined as a circular area centred on the source position with a radius of $40''$.
Since the source is very bright, the light curves should be dominated by the source counts. Here, we followed the data reduction outlined in \cite{Demarco2017}. To maximise the signal-to-noise ratio, we did not remove background flaring events from the observational data. In fact, the fraction of the useful exposure time affected by the flaring events for each observation is $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}{\raise2pt\hbox{$<$}}}}$ 7\%, except for observation O1, in which $\sim60$\% of the exposure time is affected. Nevertheless, we checked that the timing products, i.e. the power spectra, obtained from the observations with and without the flaring events removed are consistent. This verifies that the flares did not affect the GX~339--4 timing properties in our timing analysis. We also note that none of the observations were affected by pile-up. In fact, the source count rates during the observations O1--O4 are substantially below the maximum count rate allowed for an observation in timing mode of the pn detector ($<$240 count s$^{-1}$). For the observations O5--O6, which were observed in small window mode, we ran the SAS task {\sc epatplot} to check and confirm that the observations were not significantly affected by pile-up. The useful exposure time for each observation after performing the data reduction is shown in column 4 of Table~\ref{tab:xmm_obs}. The light curves in two different energy bands -- 0.3--0.7 keV and 0.7--1.5 keV -- were extracted from all pn observations. These bands are dominated by thermal reverberation and power-law continuum, respectively, as previously reported by \citet{Demarco2017}.
Finally, based on the light curves obtained, we created the power spectra using the {\sc ftools} task {\sc powspec}.\footnote{\url{https://heasarc.gsfc.nasa.gov/lheasoft/ftools/fhelp/powspec.txt}} In brief, each light curve was divided into a number of segments with an interval of 41~s and a time bin size of 6~ms; these segments were then converted into power spectra and averaged to create a single power spectrum for each light curve. The power spectra were rebinned by a factor of 1.04 dex for the data below 2 Hz and 1.2 dex for the data above 2 Hz. We ignored data points consistent with zero (or negative) power. The power spectra obtained from this method were then used as the basis for further analysis.
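The segment-averaging procedure above can be sketched in a few lines. This is a minimal illustration of a standard rms-normalized averaged periodogram (not the actual {\sc powspec} implementation), using the 41~s segment length and 6~ms binning quoted in the text:

```python
import numpy as np

def averaged_psd(rate, dt=0.006, seg_len=41.0):
    """Segment-averaged periodogram in squared fractional rms units.

    rate    : evenly sampled count-rate light curve (counts/s)
    dt      : time bin size in seconds (6 ms in the text)
    seg_len : segment length in seconds (41 s in the text)
    """
    n = int(seg_len / dt)                  # time bins per segment
    nseg = len(rate) // n
    powers = []
    for k in range(nseg):
        seg = rate[k * n:(k + 1) * n]
        dft = np.fft.rfft(seg)
        # rms^2 (Miyamoto) normalization: P = 2 dt |X|^2 / (n mean^2)
        powers.append(2.0 * dt * np.abs(dft[1:])**2 / (n * seg.mean()**2))
    freqs = np.fft.rfftfreq(n, d=dt)[1:]   # drop the zero-frequency bin
    return freqs, np.mean(powers, axis=0)
```

Logarithmic rebinning and Poisson-noise subtraction would then be applied to the averaged spectrum before fitting.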
\begin{table}
\begin{center}
\caption{\emph{XMM-Newton} observations of GX~339--4.} \label{tab:xmm_obs}
\begin{tabular}{lccc}
\hline
Obs. ID & Date& Mode$^{a}$ & Exposure$^{b}$ \\
& & & (ks) \\
\hline
0760646201 (O1) & 2015-08-28 & Timing & 14.7 \\
0760646301 (O2) & 2015-09-02 & Timing & 15.7 \\
0760646401 (O3) & 2015-09-07 & Timing & 20.2 \\
0760646501 (O4) & 2015-09-12 & Timing & 18.6 \\
0760646601 (O5) & 2015-09-17 & Small Window & 36.5 \\
0760646701 (O6) & 2015-09-30 & Small Window & 33.4 \\
\hline
\multicolumn{4}{l}{\footnotesize \textit{Note.} $^{a}${X-ray instrument operating mode.} $^{b}${Useful exposure}} \\
\multicolumn{4}{l}{\footnotesize {time after data cleaning.}} \\
\end{tabular}
\end{center}
\end{table}
\section{Theoretical model}
\subsection{Geometry setup}
To avoid having too many free parameters, we fix the black hole mass of GX~339--4 to $10M_{\odot}$, the inclination angle to $i=30^{\circ}$, and the black hole spin to $a=0.998$ \citep[e.g.,][]{Hynes2003, Demarco2015a,Garcia2015,Parker2016}. Therefore, the innermost stable circular orbit (ISCO) is at $\sim 1.23r_{\rm g}$. The disc is geometrically thin and optically thick \citep{Shakura1973}, ranging between $400r_{\rm g}$--$r_{\rm trc}$, where $r_{\rm trc}$ is the truncation radius. Inside $r_{\rm trc}$ the disc is replaced by two hot-flow zones responsible for the emission of a soft spectrum ($\Gamma_{\rm sz}$) and a hard spectrum ($\Gamma_{\rm hz}$), ranging between $r_{\rm trc}$--$r_{\rm sh}$ and $r_{\rm sh}$--$1.23r_{\rm g}$, respectively. Here $\Gamma_{\rm sz}$ and $\Gamma_{\rm hz}$ are the photon indices of the X-ray continuum emitted from the inner soft and hard zones, respectively, and $r_{\rm sh}$ is the transition radius between the two zones. The accretion disc varies on long timescales while the turbulent inner hot flows vary intrinsically on relatively fast timescales. The parameter $t_{\rm p}$ characterises the time taken for fluctuations to propagate from $r_{\rm trc}$ to $r_{\rm sh}$. Our geometric setup is presented in Fig.~\ref{geometry}.
We consider two distinct mechanisms that take place in our system: fluctuations in the mass accretion rate and thermal reverberation. The accretion disc provides seed photons to the inner hot-flow zones, and reflects the Comptonized photons travelling outwards from the hot flows. The continuum fluxes in the energy band of interest, $E_{j}$, emitted from the spectrally soft and hard hot-flow zones are
\begin{eqnarray}
F_{\rm sz}(E_{j}) & \propto & \int_{E_{j,\rm low}}^{E_{j,\rm high}} {E_{j}^\prime}^{-\Gamma_{\rm sz}} {\rm d} E_{j}^\prime \;, \\
\label{eq:f1}
F_{\rm hz}(E_{j}) & \propto & \int_{E_{j,\rm low}}^{E_{j,\rm high}} {E_{j}^\prime}^{-\Gamma_{\rm hz}} {\rm d} E_{j}^\prime \;,
\label{eq:f2}
\end{eqnarray}
where $E_{j,\rm low}$ and $E_{j,\rm high}$ are the lowest and highest energy of the specific band of interest, respectively.
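Equations (1)--(2) are simple power-law band integrals with a closed form. As an illustrative sketch (the band edges and photon indices below are just the example values used later in the text), the intrinsic flux ratio entering the model can be evaluated directly:

```python
import math

def band_flux(gamma, e_low, e_high):
    """Integral of E^-gamma over [e_low, e_high] (arbitrary normalization)."""
    if abs(gamma - 1.0) < 1e-12:           # the Gamma = 1 case is logarithmic
        return math.log(e_high / e_low)
    return (e_high**(1.0 - gamma) - e_low**(1.0 - gamma)) / (1.0 - gamma)

# intrinsic flux ratio F_sz/F_hz in the 0.3-0.7 keV band, for
# Gamma_sz = 2.5 and Gamma_hz = 2.0 (the example values of Fig. 2)
ratio = band_flux(2.5, 0.3, 0.7) / band_flux(2.0, 0.3, 0.7)
```

The softer index concentrates relatively more flux in the soft band, so the ratio exceeds unity there.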
Recently, \cite{Ingram2019} showed that, because the {\it XMM-Newton} response matrix is non-diagonal, some photons registered in the energy band $<0.7$~keV may be misclassified and should belong to higher energy bands. Taking this effect into account, as well as the absorption column found in GX~339--4, could change the flux normalization in a particular soft energy band. In this work, these effects are included indirectly by employing the reflected response fraction obtained from the spectral fits by \cite{Demarco2017} to produce the PSD, as described later in Section 4. The PSD results here are therefore tied to the mean-spectrum fits of the data, which have already been folded through the instrument responses and should thus incorporate the flux contamination induced by the response matrix as well.
\begin{figure}
\centerline{
\includegraphics[width=0.45\textwidth]{fig-geometry-r2.png}
}
\caption{The geometry of our model. The accretion disc varies on long timescales and extends from $400r_{\rm g}$--$r_{\rm trc}$. The inner regions from $r_{\rm trc}$--$r_{\rm sh}$ and $r_{\rm sh}$--$1.23r_{\rm g}$ are turbulent inner flows varying intrinsically on fast timescales, which are responsible for the soft and hard spectral emission, respectively. The accretion disc provides seed photons to the inner hot flows, and reprocesses and reflects the Comptonized photons coming back from the inner regions. The parameter $t_{\rm p}$ is the time taken for fluctuations to propagate from $r_{\rm trc}$ to $r_{\rm sh}$.
}
\label{geometry}
\end{figure}
\subsection{Mass accretion rate fluctuations and PSD}
When fluctuations in mass accretion rate, $\dot{m}$, propagate through the accretion disc, they induce variations in the seed photon emission. The disc seed photons are then Compton up-scattered inside the hot corona (or hot flow) producing the Comptonized photons whose variability is directly proportional to that of the mass accretion rate fluctuations. The observed continuum flux in the energy band $E_{j}$ then consists of two components associated with the emission from soft and hard zones, which can be described by
\begin{equation}
x_{\rm 0}(t,E_{j}) = x_{\rm hz}(t,E_{j}) + x_{\rm sz}(t,E_{j})\;,
\label{eq:lc}
\end{equation}
where
\begin{eqnarray}
x_{\rm hz}(t,E_{j}) & \propto & F_{\rm hz}(E_{j})\dot{m}(t) \;, \\
x_{\rm sz}(t,E_{j}) & \propto & F_{\rm sz}(E_{j})\dot{m}(t+t_{\rm p})*h(t) \;.
\label{eq:lc2}
\end{eqnarray}
$\dot{m}(t)$ describes the mass accretion rate fluctuations. $h(t)$ is the filter function that screens out the high-frequency variability of signals in the soft hot-flow zone, so that the harder, inner zone produces hard continuum spectra varying more at high frequencies, which is the behaviour traditionally observed in the PSDs of AGN \citep{Martin2012} and X-ray binaries \citep{Cui1997, Remillard2006}. Also, $h(t)$ produces the different X-ray variability in the two hot-flow components, so that the oscillatory interference features in the PSD can be produced. There is no unique framework to explain the distinct emission regions and the peaks seen in the PSD. This interference framework was previously proposed by \cite{Veledina2016, Veledina2018}, and \cite{Mahmoud2018} suggested that it can encompass other current models \citep[e.g.,][]{Ingram2012} within its parameter space \citep[see][for further discussion]{Mahmoud2019}. The variability produced from the soft hot-flow zone is modelled by a convolution ($*$ sign) of the filter function with the mass accretion rate. The parameter $t_{\rm p}$ is the characteristic propagation time from $r_{\rm trc}$ to $r_{\rm sh}$ where the hard Comptonization operates.
The PSD is estimated by the modulus squared of the discrete Fourier transform of the light curve \citep[e.g.,][]{Nowak1999,Emmanoulopoulos2013} which can be written as
\begin{equation}
P_{0}(f,E_{j}) \propto |X_{0}(f,E_{j})|^{2} / \bar{x_{0}}^2 \;,
\label{eq:psd}
\end{equation}
where upper case letters represent the quantities in the frequency domain corresponding to those in the time domain written in lower case letters. According to eqs.~\ref{eq:lc}--\ref{eq:psd}, the PSD under this framework can be calculated via \citep[e.g.,][]{Veledina2016}
\begin{equation}
P_{0}(f,E_{j}) = \frac{1+\epsilon^{2} H^{2}(f) + 2 \epsilon H(f)\cos(2\pi f t_{\rm p}) }{(1+\epsilon)^{2}}|\dot{M}(f)|^{2} \;,
\label{eq:psd2}
\end{equation}
where it is normalized in terms of the squared fractional rms \citep{Miyamoto1991} and $h(t)$ is a zero-lag function, so that $H(f)$ is real; this is instrumental in the derivation of eq.~\ref{eq:psd2}. $\epsilon = N_{\rm F}(F_{\rm sz}/F_{\rm hz})$ is the relative flux ratio of the X-ray continuum emitted from the two hot-flow zones. While $F_{\rm sz}/F_{\rm hz}$ depends on $\Gamma_{\rm sz}$, $\Gamma_{\rm hz}$, and the energy band being considered (see eqs.~1--2), we assume the normalization factor $N_{\rm F}$ is the same for all energy bands in the same observation. The $N_{\rm F}$ in the same energy band, however, could differ among observations. $H(f)$ is the Fourier transform of the filter function, which can be expressed in the form \citep{Veledina2016}
\begin{equation}
H(f) = \frac{1}{(f/f_{\rm filt})^{4}+1} \;
\label{eq:ffilt}
\end{equation}
so that the transmitted signals above the frequency $f_{\rm filt}$ are damped.
The variability power of the mass accretion rate significantly decays at the viscous frequency \citep{Ingram2013}
\begin{equation}
f_{\rm visc}(r_{n}) = r_{n}^{-3/2}(H/R)^{2}\alpha/2\pi \;,
\label{eq:fvis}
\end{equation}
where $H/R$ is the disc scale-height ratio and $\alpha$ is the viscosity parameter. The radius $r_{n}$ is in units of $r_{\rm g}$, so the viscous frequency $f_{\rm visc}(r_{n})$ is in units of $1/t_{\rm g}$, where $t_{\rm g} = GM/c^{3}$. One can convert gravitational units to physical units when the black hole mass is known. For a black hole mass of $10M_{\odot}$, $1r_{\rm g} = 1.48\times10^{4}$~m and $1t_{\rm g} = 4.9\times10^{-5}$~s.
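The unit conversion above can be checked with a short sketch (using standard SI constants; the function and its defaults are illustrative, not part of the fitting code):

```python
import math

G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30    # SI units

def f_visc_hz(r, mass_msun=10.0, hr2_alpha=0.01):
    """Viscous frequency of eq. (7), converted from 1/t_g to Hz.

    r is in units of r_g; hr2_alpha is the combined disc parameter (H/R)^2 alpha.
    """
    t_g = G * mass_msun * M_SUN / c**3        # ~4.9e-5 s for 10 M_sun
    return r**-1.5 * hr2_alpha / (2.0 * math.pi) / t_g
```

For $10M_{\odot}$ and $(H/R)^{2}\alpha=0.01$, the viscous frequency at $r=10r_{\rm g}$ comes out close to 1~Hz, which sets the scale of the PSD break frequencies discussed below.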
To describe the broken power-law shape usually seen in the PSD, the form of $|{\dot{M}}|^{2}(f)$ is assumed to be
\begin{equation}
|{\dot{M}}(f)|^{2} = \frac{1}{1+(f/f_{\rm break})^{\beta}}\;,
\label{eq:mdot_f}
\end{equation}
where $f_{\rm break}$ is the break frequency and $\beta$ is an arbitrary index. We fix $\beta=1.5$ when investigating other model parameters, but allow it to be free when we fit the data. ${\dot{M}}(f)$ as represented in eq.~\ref{eq:mdot_f} is then the frequency-domain form of the driving signal that is used to activate the X-ray continuum emission and variability from the hot flows.
There are two important parameters in our model giving the radial positions of the truncation radius, $r_{\rm trc}$, and the transition radius where the hot flows change from the spectrally soft to spectrally hard zone, $r_{\rm sh}$. We then relate the break and the filter frequency to the viscous timescales at the truncation and transition radius, respectively,
\begin{eqnarray}
f_{\rm break} & = &f_{\rm visc}(r_{\rm trc})\;, \\
f_{\rm filt} & = &f_{\rm visc}(r_{\rm sh})\;.
\label{eq:fbff}
\end{eqnarray}
The characteristic time for fluctuations that originate at a radius $r$ to propagate across a radial distance $\Delta r$ is $\Delta t =\Delta r /[r f_{\rm visc}(r)]$ \citep{Ingram2013}. Assuming the characteristic time for fluctuations to propagate through the soft zone depends on the viscous timescale at the truncation radius, the value of $t_{\rm p}$ in eqs.~\ref{eq:lc2} and \ref{eq:psd2} then can be estimated as
\begin{equation}
t_{\rm p} = (r_{\rm trc} - r_{\rm sh})/(r_{\rm trc} f_{\rm break})\;.
\label{eq:tp}
\end{equation}
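Putting eqs. (5)--(11) together, the intrinsic (pre-reverberation) PSD can be sketched as below. This is an illustrative implementation rather than the fitting code, with $t_{\rm g}$ hard-wired for the $10M_{\odot}$ black hole assumed in this work:

```python
import numpy as np

T_G = 4.9e-5                                     # seconds per t_g for 10 M_sun

def f_visc(r, hr2_alpha):
    """Eq. (7) converted to Hz (r in units of r_g)."""
    return r**-1.5 * hr2_alpha / (2.0 * np.pi) / T_G

def model_psd(f, r_trc, r_sh, eps, beta=1.5, hr2_alpha=0.005):
    """Intrinsic PSD of eq. (5): propagating fluctuations with two-zone interference."""
    f_break = f_visc(r_trc, hr2_alpha)           # eq. (9)
    f_filt = f_visc(r_sh, hr2_alpha)             # eq. (10)
    t_p = (r_trc - r_sh) / (r_trc * f_break)     # eq. (11)
    H = 1.0 / ((f / f_filt)**4 + 1.0)            # eq. (6)
    mdot2 = 1.0 / (1.0 + (f / f_break)**beta)    # eq. (8)
    return (1.0 + eps**2 * H**2
            + 2.0 * eps * H * np.cos(2.0 * np.pi * f * t_p)) / (1.0 + eps)**2 * mdot2
```

The cosine term generates the interference humps, with their spacing set by $t_{\rm p}$, while the filter $H(f)$ and $|\dot{M}(f)|^{2}$ control how quickly the power declines towards high frequencies.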
Fig.~\ref{fig2} shows examples of the model PSD for different energy bands when the photon indices of the continuum emitted from the soft and hard zones are $\Gamma_{\rm sz} = 2.5$ and $\Gamma_{\rm hz} = 2.0$, respectively. We assume the disc is truncated at $r_{\rm trc}=10r_{\rm g}$ and the transition radius is $r_{\rm sh}=5r_{\rm g}$. The disc parameter $(H/R)^{2}\alpha$ is set to 0.01. The model can produce larger high-frequency power in higher energy bands, which is the behaviour traditionally seen in the PSDs of both AGN and X-ray binaries. Furthermore, the dips and humps are naturally produced by the model. We fix $N_{\rm F}=0.5$ so the parameter $\epsilon = N_{\rm F}(F_{\rm sz}/F_{\rm hz})$ that regulates the importance of the filter level depends only on $F_{\rm sz}/F_{\rm hz}$, which is different in each energy band. Generally, the dips and humps could also depend on $r_{\rm trc}$ and $r_{\rm sh}$, which determine $f_{\rm break}$, $f_{\rm filt}$, and the characteristic propagation time $t_{\rm p}$.
\begin{figure}
\centerline{
\includegraphics[width=0.45\textwidth]{fig2-r1.pdf}
}
\caption{The PSD in the 0.3--0.7~keV energy band (black line), 0.7--1.5~keV (blue line), and 3--5~keV (red line) with $r_{\rm trc}=10r_{\rm g}$ and $r_{\rm sh}=5r_{\rm g}$. The soft and hard zones of the inner flow have the spectral photon indices $\Gamma_{\rm sz}=2.5$ and $\Gamma_{\rm hz}=2.0$, respectively. We fix $N_{\rm F}=0.5$ so that the differences between energy bands are due only to the values of the intrinsic flux ratio $F_{\rm sz}/F_{\rm hz}$.
}
\label{fig2}
\end{figure}
How the geometry of the inner flows affects the PSD in the 0.3--0.7 keV band when $\epsilon=0.05$ and $(H/R)^{2}\alpha=0.01$ is shown in Fig.~\ref{fig3}. We note that when $\epsilon$ is fixed, there can be different pairs of $\Gamma_{\rm sz}$ and $\Gamma_{\rm hz}$ that produce the same PSD profile for each value of $N_{\rm F}$. To illustrate this, Fig.~\ref{fig3} (top panel) shows the $\Gamma_{\rm sz}$ vs. $\Gamma_{\rm hz}$ plots that provide $\epsilon=0.05$ for different $N_{\rm F}$. In principle, we expect $\Gamma_{\rm sz} > \Gamma_{\rm hz}$, so the black solid line marks the upper limit of $N_{\rm F}$, above which the model returns $\Gamma_{\rm sz} < \Gamma_{\rm hz}$ in this energy band, which is unrealistic and can be neglected. Fig.~\ref{fig3} (middle panel) shows that stronger high-frequency power is obtained with decreasing $r_{\rm trc}$. Since we fix $r_{\rm sh} = 2r_{\rm g}$, larger $r_{\rm trc}$ results in a longer propagation-time delay, $t_{\rm p}$, through the inner soft to the inner hard hot-flow zone. Increasing $t_{\rm p}$ results in more humps being imprinted on the PSD due to the effect of the cosine term in eq.~\ref{eq:psd2}. This is also true when we fix $r_{\rm trc}=20r_{\rm g}$, but vary $r_{\rm sh}$ and hence also $t_{\rm p}$ (Fig.~\ref{fig3}, bottom panel). Moreover, the smaller $r_{\rm sh}$ is, the higher the frequency above which the oscillations are significantly filtered, because $f_{\rm filt}$ is larger for smaller $r_{\rm sh}$. Decreasing $r_{\rm sh}$ means the inner soft zone can produce X-rays on shorter characteristic timescales that interfere with X-rays from the hard hot flows, leading to oscillatory interference structures in the PSD profiles towards higher frequencies.
\begin{figure}
\centerline{
\includegraphics[width=0.45\textwidth]{fig1-d-r1.pdf}
}
\vspace{0.2cm}
\centerline{
\includegraphics[width=0.45\textwidth]{fig1-c-r1.pdf}
}
\vspace{0.2cm}
\centerline{
\includegraphics[width=0.45\textwidth]{fig1-b-r1.pdf}
}
\caption{\emph{Top panel}: $\Gamma_{\rm sz}$ vs. $\Gamma_{\rm hz}$ plots that give $\epsilon=0.05$ for different $N_{\rm F}$ values for a 0.3--0.7 keV band of interest. \emph{Middle panel}: The corresponding PSD in 0.3--0.7~keV band varying with $r_{\rm trc}$. \emph{Bottom panel}: The corresponding PSD in the same band varying with $r_{\rm sh}$.}
\label{fig3}
\end{figure}
\subsection{Simulated PSD including reverberation}
Since the radial size of the disc is relatively large compared to that of the inner hot flows, the source illuminating the disc can be approximated as a central illuminating source \citep[e.g.][]{Gardner2014, Mahmoud2019}. Therefore, the time delay of photons travelling from the central source to be reprocessed by the disc at radius $r$ is given by \citep{Welsh1991}
\begin{equation}
\tau = \frac{r}{c}\bigg{(}1- \sin{i} \cos{\phi} \bigg{)} \;,
\label{eq:impuse_res}
\end{equation}
where $i$ is the inclination angle measured from the observer's line of sight to the disc axis and $\phi$ is the azimuthal angle between each specific point on the disc and the projection of the line of sight onto the disc. We note that eq.~\ref{eq:impuse_res} does not include general relativistic effects but is still acceptable since we do not specify the exact geometry of the flows. The impulse response function due to reverberation, $\psi(\tau)$, can be produced by summing the photon counts over a given range of radii $r_{\rm trc} \leq r \leq 400r_{g}$ and all azimuthal angles $0 \leq \phi \leq 2\pi$ as a function of the time delay using eq.~\ref{eq:impuse_res}. At radii smaller than $r_{\rm trc}$, the disc is replaced by the hot flow and X-ray reverberation does not take place.
We introduce the parameter $\gamma$ to describe the emissivity of the accretion disc so that the number of emitted photons from each annulus of area $2 \pi r {\rm d} r$ is proportional to $r^{-\gamma}$. Higher $\gamma$ means the X-ray reflection is more concentrated at the inner part of the disc. Meanwhile, $\gamma=0$ means that the same number of photons is produced in each annulus of area $2 \pi r {\rm d} r$ and that the emissivity, in terms of flux per unit area on the disc, is proportional to $r^{-1}$. Examples of the disc response functions are presented in Fig.~\ref{p4}. Here we fix $M_{\rm BH}=10M_{\odot}$ and $i=30^{\circ}$ while allowing $r_{\rm trc}$ to be a free parameter. The solid and dotted lines represent the cases of $\gamma=0$ and $1.5$, respectively.
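The construction of $\psi(\tau)$ described above can be sketched by histogramming the delays of eq. (12) over a disc grid. The grid resolutions below are arbitrary illustrative choices, and $t_{\rm g}$ is taken from the unit conversion quoted earlier:

```python
import numpy as np

def disc_response(r_trc, r_out=400.0, incl_deg=30.0, gamma=0.0,
                  mass_msun=10.0, n_r=400, n_phi=400, n_bins=200):
    """Histogram the source-to-disc delays of eq. (12) into psi(tau).

    Radii are in r_g; returns (tau_centres_s, psi) with psi normalized
    to unit sum so it acts as a probability distribution of delays.
    """
    t_g = 4.9e-5 * (mass_msun / 10.0)              # r_g/c in seconds
    i = np.radians(incl_deg)
    r = np.linspace(r_trc, r_out, n_r)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    rr, pp = np.meshgrid(r, phi)
    tau = rr * (1.0 - np.sin(i) * np.cos(pp)) * t_g    # eq. (12)
    w = rr**-gamma          # photons per annulus ~ r^-gamma, split over phi
    psi, edges = np.histogram(tau, bins=n_bins, weights=w)
    return 0.5 * (edges[1:] + edges[:-1]), psi / psi.sum()
```

The delays are bounded by $r(1 \mp \sin i)/c$ at the inner and outer edges, so enlarging $r_{\rm trc}$ removes the shortest lags, as seen in Fig.~\ref{p4}.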
\begin{figure}
\centerline{
\includegraphics[width=0.45\textwidth]{fig3-a-r1.pdf}
}
\caption{Simulated impulse disc responses in 0.3--0.7~keV band varying with the disc truncation radius. The solid and dotted lines represent the cases when the emissivity index $\gamma=0$ and 1.5, respectively. The inclination angle is fixed at $i=30^{\circ}$ and $M_{\rm BH}=10M_{\odot}$.}
\label{p4}
\end{figure}
The impulse reverberation response also acts as a filter on the driving signal that affects the shape of the observed PSD \citep{Papadakis2016, Chainakun2019a}. According to the convolution theorem, the Fourier transform of an observed light curve including reverberation effects can be written as \citep{Uttley2014, Papadakis2016}
\begin{equation}
X(f,E_{j}) = X_{0}(f,E_{j})\Psi(f,E_{j}) \;,
\end{equation}
where $X_{0}(f)$ and $\Psi(f)$ are the Fourier transform of the driving signal and the impulse response, respectively. The observed PSD in the energy band $E_{j}$ is then given by
\begin{equation}
P_{\rm obs}(f, E_{j}) = \frac{|X(f,E_{j})|^{2}}{\big{(}1+R(E_{j})\big{)}^{2}} = \frac{|X_{0}(f,E_{j})|^{2}|\Psi(f,E_{j})|^{2}}{\big{(}1+R(E_{j})\big{)}^{2}}\; ,
\end{equation}
where $R(E_{j})$ is the reflected response fraction defined as the (reflection flux)/(continuum flux) measured in the specific energy band $E_{j}$, which can vary between observations.
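Equation (14) can be illustrated with the sketch below, under the assumption (our reading of $\Psi$, not stated explicitly in the text) that the full response is a direct delta-function continuum plus the disc echo scaled by the response fraction $R$:

```python
import numpy as np

def observed_psd(f, p0, tau, psi, refl_frac):
    """Apply the reverberation filter of eq. (14): P_obs = P0 |Psi|^2 / (1+R)^2.

    f   : frequencies (Hz);  p0  : intrinsic PSD sampled at f
    tau : delay grid (s);    psi : unit-sum impulse response on that grid
    """
    # Psi(f) = 1 + R * sum_k psi_k exp(-2 pi i f tau_k)
    phase = np.exp(-2j * np.pi * np.outer(f, tau))
    big_psi = 1.0 + refl_frac * (phase @ psi)
    return p0 * np.abs(big_psi)**2 / (1.0 + refl_frac)**2
```

At frequencies well below the inverse of the reverberation delays, $|\Psi| \rightarrow 1+R$ and the filter leaves the PSD unchanged, while at high frequencies the echo averages out and the power is suppressed towards $1/(1+R)^{2}$, reproducing the high-frequency reduction seen in Fig.~\ref{p5}.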
The 0.3--0.7 keV PSD predicted by the model, including both the effects of disc fluctuations and reverberation, is presented in Fig.~\ref{p5} (solid lines). We fix $\epsilon=0.1$ and $(H/R)^{2}\alpha=0.01$, but vary $\gamma$, $r_{\rm trc}$ and $r_{\rm sh}$. For comparison, the corresponding PSD with reverberation excluded is also shown as the dotted lines. It is clear that the reverberation acts as a filter that reduces the high-frequency power in the observed PSD, independently of the input signals \citep[see, also, discussion in][]{Papadakis2016,Chainakun2019a}. Interestingly, if the emissivity $\gamma$ is too high (i.e., the source illumination is too centrally concentrated), most photons reflect from the inner disc, so the reverberation delays are quite short and, consequently, relatively small effects of reverberation are seen in the PSD profiles towards low temporal frequencies (i.e., long timescales). We note that the reverberation features are imprinted on the PSD at high temporal frequencies associated with the short timescales of the disc reflection. Given $M_{\rm BH}=10M_{\odot}$, the significant drop of power due to reverberation at frequencies around a few tens of Hz should be the beginning of the main dip of the oscillatory structures produced by reverberation in the PSD, as in the case of AGN \citep{Papadakis2016, Chainakun2019a}. Furthermore, the result in Fig.~\ref{p5} (bottom panel) suggests that the overall features of the PSD are quite different for different combinations of key model parameters such as $r_{\rm trc}$, $r_{\rm sh}$, $\epsilon$ and $\gamma$.
\begin{figure}
\centerline{
\includegraphics[width=0.45\textwidth]{fig4-r1.pdf}
}
\vspace{0.2cm}
\centerline{
\includegraphics[width=0.45\textwidth]{fig4b-r1.pdf}
}
\caption{\emph{Top panel}: The total PSD in the 0.3--0.7~keV band varying with $\gamma$ when $i=30^{\circ}$, $r_{\rm trc}=10r_{\rm g}$, $r_{\rm sh}=5r_{\rm g}$, $\epsilon=0.1$ and $(H/R)^{2}\alpha=0.01$. \emph{Bottom panel}: The total PSD in the 0.3--0.7~keV band varying the geometry of the system with $\gamma=1.5$. All dotted lines represent the corresponding cases with reverberation excluded. The reduction of power at high frequencies is produced by reverberation.}
\label{p5}
\end{figure}
\section{Fitting results}
Our model consists of nine parameters in total: the truncation radius, $r_{\rm trc}$, the soft-to-hard transition radius, $r_{\rm sh}$, the disc parameter, $(H/R)^{2}\alpha$, the flux normalization, $N_{\rm F}$, the photon index of the hard zone, $\Gamma_{\rm hz}$, the photon index of the soft zone, $\Gamma_{\rm sz}$, the low-frequency break index, $\beta$, the reflected response fraction, $R(E_{j})$, and the disc emissivity index, $\gamma$. To minimize the number of free parameters, $R(E_{j})$ is fixed to the values obtained from the spectral fits by \cite{Demarco2017}, which are approximately 4.0 and 1.0 in the 0.3--0.7 keV and 0.7--1.5 keV bands, respectively. The reflected response fractions employed here describe the effects of contamination flux between cross-components in the reverberation-dominated and continuum-dominated energy bands. In principle, they should also be able to describe the effect of misclassified photons due to the non-diagonal response matrix of the instrument, which plays a role in changing the contamination flux as well. The effects of changing the photon index on the PSD are similar to those of changing $N_{\rm F}$, which is a free parameter. Since the photon index of the X-ray continuum in these GX~339--4 observations was previously reported to vary between 1.5--2 \citep[e.g.,][]{Demarco2017}, we fix $\Gamma_{\rm hz} = 1.5$ and $\Gamma_{\rm sz} = 2.0$. This helps avoid degeneracies in the model and also avoids predicting a spectrum that is wildly different from the observed one. The parameter $N_{\rm F}$ is tied between the two energy bands, but allowed to vary across the observations.
According to \cite{Veledina2016}, the disc and viscous parameters of GX~339--4, obtained via modelling the PSD in 2--15 keV, were found to be $H/R > 0.23$ and $0.01 < \alpha < 0.5$. We consider the cases when $0.0005 \lesssim (H/R)^{2}\alpha \lesssim 0.1$, which spans approximately from the lower limit up to a reasonable upper limit of $(H/R)=1$ and $\alpha=0.1$, while remaining in the range previously reported \citep{Veledina2016}. Firstly, global grids of the model were produced and the PSD data in the two energy bands were simultaneously fitted in ISIS \citep{Houck2000}. We find that for the majority of fits the value of $(H/R)^{2}\alpha$ is 0.005 and $\gamma$ is 0. To improve the fit while limiting computation time, finer local grids of the model were produced independently for each observation, but this time both $(H/R)^{2}\alpha$ and $\gamma$ were fixed at the values constrained by the global grids, under the assumption that they do not vary much during these observations. The fitting was repeated with the finer, local grids for each observation with fixed $(H/R)^{2}\alpha = 0.005$ and $\gamma = 0$. The fitting results are presented in Fig.~\ref{fig_best_fit}. The corresponding best-fit parameters are listed in Table~\ref{tab_fit_para}. We found an increase in the truncation radius from $\sim 10r_{\rm g}$ to $\sim 55r_{\rm g}$ during O1--O6 as the source decreased in flux towards the end of the outburst. An increase of the soft--hard transition radius and a variation of $N_{\rm F}$ are also found during O1--O6. Meanwhile, $\beta$ seems to decrease towards the end of the outburst.
\begin{figure*}
\centerline{
\includegraphics*[width=0.55\textwidth]{0760646201_best_fit.pdf}
\hspace{-1.0cm}
\includegraphics*[width=0.55\textwidth]{0760646301_best_fit.pdf}
\vspace{-0.5cm}
}
\centerline{
\includegraphics*[width=0.55\textwidth]{0760646401_best_fit.pdf}
\hspace{-1.0cm}
\includegraphics*[width=0.55\textwidth]{0760646501_best_fit.pdf}
\vspace{-0.5cm}
}
\centerline{
\includegraphics*[width=0.55\textwidth]{0760646601_best_fit.pdf}
\hspace{-1.0cm}
\includegraphics*[width=0.55\textwidth]{0760646701_best_fit.pdf}
}
\caption{Data, model, and residuals from fitting the PSD model to the 0.3--0.7 keV (blue) and 0.7--1.5 keV (red) PSD data, with $(H/R)^{2}\alpha$ fixed at 0.005 and $\gamma$ fixed at 0 as obtained from the global fits. The PSDs displayed here are Poisson-noise-subtracted and normalized in terms of squared fractional rms \citep{Miyamoto1991}. Data points consistent with zero (or negative) power have been excluded. The best-fit model parameters for each observation are listed in Table~\ref{tab_fit_para}. }
\label{fig_best_fit}
\end{figure*}
\begin{table*}
\begin{center}
\caption{Fit parameters for GX~339--4 with $(H/R)^2 \alpha = 0.005$ and $\gamma = 0$.}
\label{tab_fit_para}
\begin{tabular}{lllllll}
\hline
Parameter & O1 & O2 & O3 & O4 & O5 & O6 \\
\hline
$r_\text{trc} (r_g)$ & $10^{+1.14}_{-0.89}$ & $15^{+1.16}_{-1.89}$ & $19^{+0.19}_{-0.20}$ & $26^{+0.16}_{-0.00}$ & $39^{+1.12}_{-1.34}$ & $55^{+2.38}_{-3.98}$ \\
$r_\text{sh} (r_g)$ & $7^{+0.64}_{-0.75}$ & $9^{+1.24}_{-1.93}$ & $12^{+0.47}_{-0.20}$ & $17^{+1.48}_{-0.50}$ & $27^{+0.13}_{-1.36}$ & $29^{+2.90}_{-3.20}$ \\
$(H/R)^2 \alpha$ & 0.005$^f$ & 0.005$^f$ & 0.005$^f$ & 0.005$^f$ & 0.005$^f$ & 0.005$^f$ \\
$\beta$ & $1.8^{+0.01}_{-0.19}$ & $1.7^{+0.01}_{-0.20}$ & $1.6^{+0.01}_{-0.20}$ & $1.5^{+0.00}_{-0.08}$ & $1.4^{+0.03}_{-0.10}$ & $1.4^{+0.02}_{-0.08}$ \\
$\gamma$ & 0$^f$ & 0$^f$ & 0$^f$ & 0$^f$ & 0$^f$ & 0$^f$ \\
$N_F$ & $0.05^{+0.01}_{-0.03}$ & $0.08^{+0.01}_{-0.07}$ & $0.07^{+0.01}_{-0.03}$ & $0.04^{+0.00}_{-0.02}$ & $0.05^{+0.00}_{-0.02}$ & $0.03^{+0.02}_{-0.00}$ \\
\hline
$\chi^2 / \text{d.o.f.}$ & 195 / 96 & 175 / 94 & 151 / 100 & 137 / 97 & 130 / 79 & 134 / 69 \\
\hline
\end{tabular}
~\\
$^f$ parameter frozen.
\end{center}
\end{table*}
\nopagebreak
\section{Discussion}
For GX~339--4, there is strong evidence that the central black hole has a spin parameter close to the maximal value of $a=0.998$ \citep[e.g.][]{Miller2008, Garcia2015, Parker2016}. During the high soft state, the inner disc could extend down to the ISCO \citep{Plant2014}. Recently, \cite{Sridhar2020} found that the truncation radius remains low ($< 9r_{\rm g}$) throughout the bright intermediate state (i.e., the width of the Fe~K emission line is nearly constant throughout this transition, indicating a quasi-static truncation radius). The evolution of the accretion disc during the intermediate state should then be driven by the variation in accretion rate rather than by changes in the truncation radius. The data analysed here capture the very last stages of the soft-to-hard transition (O1--O2) and the return towards quiescence, where the source luminosity was decreasing (O3--O6).
The data analysed here are similar to those used for the reverberation lag analysis of \cite{Demarco2017}, who reported a decrease of the reverberation lag as a function of the source luminosity. Our results suggest that the truncation radius increases from $\sim 10 r_{\rm g}$ to $\sim 55 r_{\rm g}$ (see Table~\ref{tab_fit_para}) as the outburst proceeds and the flux decreases from O1--O6. The truncation radius versus the Eddington-scaled luminosity is shown in Fig.~\ref{compare-rtrc}. The truncation radii from \cite{Demarco2017} are also shown in Fig.~\ref{compare-rtrc}; these are estimated by converting lag amplitudes to the light-crossing distance from the centre to the inner edge of the disc, assuming a black hole mass of $10M_{\odot}$. The truncation radii directly converted from the lag amplitudes are significantly larger than those constrained by our PSD model. This is expected, because the estimates by \cite{Demarco2017} are highly simplistic, whereas our model accounts for the dilution. Our results support the commonly agreed framework in which the disc truncation radius increases as the luminosity decreases \citep[e.g.][and references therein]{Wang2018}.
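The simple lag-to-radius conversion used for that comparison can be sketched as follows, assuming only the light-crossing interpretation of the lag and a $10M_{\odot}$ black hole; the 5~ms lag in the example is an illustrative value, not a measured one.

```python
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8          # speed of light [m s^-1]
M_SUN = 1.989e30     # solar mass [kg]

def lag_to_radius_rg(lag_seconds, mass_msun=10.0):
    """Naive conversion of a reverberation lag to a light-crossing
    distance in units of r_g = GM/c^2, ignoring dilution and the
    geometry of the source and disc."""
    r_g = G * mass_msun * M_SUN / C**2   # ~1.48e4 m for 10 solar masses
    return C * lag_seconds / r_g

# An illustrative ~5 ms lag maps to ~100 r_g under this simple estimate
print(lag_to_radius_rg(5e-3))
```

Because dilution makes the observed lag an underestimate of the intrinsic light-travel time in a model-dependent way, radii obtained this way and radii from full PSD or spectral-timing modelling need not agree.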
\cite{Mahmoud2018} proposed a framework to explain the time-averaged spectrum, the PSDs in different energy bands, and the time lags simultaneously. Their model includes at least three distinct Compton components associated with three specific radii: the truncation radius, the inner radius of the hot flow, and the jet-launch radius. Later, \cite{Mahmoud2019} used their spectral-timing model, taking into account the effects of reverberation, to fit the O1 data and found a truncation radius of $\sim 19.5 r_{\rm g}$. Their result is also plotted in Fig.~\ref{compare-rtrc}. The truncation radius for O1 constrained by our PSD model is $\sim 10 r_{\rm g}$, which is smaller but still comparable to that of \cite{Mahmoud2019}. Spectral fits of the 2015 outburst data observed by {\it NuSTAR} and {\it Swift} were carried out by \cite{Wang2018}, where the truncation radius was found to be $\sim 3$--$15r_{\rm g}$. The accretion disc implied by spectroscopic analyses \citep[e.g.,][]{Wang2018,Garcia2019} is thus likely truncated at smaller radii than suggested by all of the timing analyses.
In principle, the truncation radius, $r_{\rm trc}$, and the X-ray source height cannot be independently well constrained via reverberation lag measurements alone, since both directly determine the measured light-travel distance. Furthermore, from Table~\ref{tab_fit_para}, the frequency break index $\beta$ decreases as the outburst proceeds, suggesting that there is a variation in accretion rate as well as in $r_{\rm trc}$. Decreasing $\beta$ means that the variability power induced by the mass accretion rate fluctuations changes, from O1--O6, in a way that produces relatively high power at frequencies $f > f_{\rm break}$ (see eq.~\ref{eq:mdot_f}). Recently, \cite{Mushtukov2018} described the propagation of the fluctuations in the disc using a diffusion equation whose solution was derived with the method of Green's functions. There, the suppression of variability at high frequencies was produced by the Green's functions and was stronger for lower values of the parameter associated with the radial kinematic viscosity. Our parameter $\beta$, which suppresses variability at frequencies above the viscous frequencies, could then encode a dependence on the kinematic viscosity and possibly tie back to Green's-function treatments such as that of \cite{Mushtukov2018}. If the changes in luminosity are also driven by variations in the viscous parameters, the break frequency $f_{\rm break}$ changes as well (see eqs.~\ref{eq:fvis}--\ref{eq:fbff}). We note that, to limit the number of free parameters and avoid model degeneracy, we fix $(H/R)^2 \alpha=0.005$ (i.e., assuming this parameter does not vary significantly during O1--O6). This value is obtained from the fits using the global grid and lies in the acceptable range reported in the literature \citep[e.g.,][]{Veledina2016}.
\begin{figure}
\centerline{
\includegraphics[width=0.5\textwidth]{fig_compare_rtrcvsz.pdf}
}
\vspace{-0.2cm}
\caption{Comparison of the truncation radius versus Eddington-scaled luminosity for GX~339--4 as inferred by this work (red diamonds), by \cite{Demarco2017} via approximate reverberation measurements (circles), and by \cite{Mahmoud2019} via spectral-timing fits (crosses).
}
\label{compare-rtrc}
\end{figure}
Furthermore, \cite{Mahmoud2019} fitted the spectral-timing data of O1 and reported the transition radius, where the spectrally soft flow changes to the hard inner flow, to be $\sim 6.6 r_{\rm g}$. We estimate $r_\text{sh}$ for O1 to be $\sim 7 r_{\rm g}$, in agreement with their result. Starting from $r_\text{sh} = 7 r_{\rm g}$ at O1, our fitting also suggests that the soft-to-hard transition radius keeps increasing, reaching $r_\text{sh} = 29 r_{\rm g}$ towards O6. The radial extents of the spectrally soft and spectrally hard inner-flow zones can be estimated as $r_{\rm trc} - r_{\rm sh}$ and $r_{\rm sh} - 1.235r_{\rm g}$, respectively. The comparative size of these flows is shown in Fig.~\ref{inner-flow-size}. As the flux decreases from O1--O6, the size of the hard zone expands from $\sim 5r_{\rm g}$ to $\sim 27r_{\rm g}$, which is $\sim 1.1$--$2.2$ times the radial extent of the soft zone, which expands from $\sim 3r_{\rm g}$ to $\sim 26r_{\rm g}$.
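The quoted zone sizes follow directly from the radii in Table~\ref{tab_fit_para}; a quick check, taking the inner edge of the hard zone at the ISCO of a near-maximally spinning black hole, $1.235r_{\rm g}$:

```python
# Best-fit radii from the table, in units of r_g
r_trc = [10, 15, 19, 26, 39, 55]   # truncation radius, O1--O6
r_sh = [7, 9, 12, 17, 27, 29]      # soft-to-hard transition radius
R_ISCO = 1.235                     # ISCO radius for spin a = 0.998

for obs, (rt, rs) in enumerate(zip(r_trc, r_sh), start=1):
    soft = rt - rs        # radial extent of the spectrally soft zone
    hard = rs - R_ISCO    # radial extent of the spectrally hard zone
    print(f"O{obs}: soft = {soft:5.2f} r_g, hard = {hard:5.2f} r_g, "
          f"hard/soft = {hard / soft:.2f}")
```

The hard-to-soft size ratios computed this way span roughly 1.1 (O6) to 2.2 (O5), matching the range quoted in the text.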
Since we fixed $\Gamma_\text{sz}=2.0$ and $\Gamma_\text{hz}=1.5$ (consistent with the values reported by \cite{Demarco2017}), variations in $N_{F}$ (see Table~\ref{tab_fit_para}) could actually be due to variations in the spectral indices of the hot flows between different observations. The viscous timescale at the truncation radius, i.e.\ the outer radius of the hot flow, sets the low-frequency break of the PSD. A low-frequency QPO due to Lense-Thirring precession of the hot flow \citep{Ingram2009} is sometimes observed in BHXBs, such that the QPO frequency moves with the low-frequency break. However, the QPO was previously found to be weak throughout O1--O6 \citep{Demarco2017, Mahmoud2019}, so QPO components are not included in our fits. Additional Lorentzian or Gaussian components associated with these QPOs could certainly improve the fits, especially for O1, but the trend of the PSD model and the implied parameters should not change significantly.
\begin{figure}
\centerline{
\includegraphics[width=0.5\textwidth]{fig8_inner_size_v2.png}
}
\vspace{-0.2cm}
\caption{The comparative size of the inner hot-flow zones between the different observations (O1--O6), implied by the best-fit parameters. Error bars are small and are omitted from this plot.
}
\label{inner-flow-size}
\end{figure}
Our model could allow the emissivity indices to be free, but their exact values should depend on the assumed source geometry. Different geometries of the flow produce different illumination patterns on the disc, and hence $\gamma$ could also vary among observations. However, we find that $\gamma \sim 0$ is obtained from the global fits in all six observations. It should be noted that $\gamma = 0$ produces the same number of photons in each annulus of area $2\pi r\, {\rm d}r$, so that the emissivity in terms of flux per unit area on the disc scales as $r^{-1}$. This may suggest that the inner flows are sufficiently vertically extended that the number of photons illuminating the outer part of the disc is large enough to flatten the emissivity profile. It may also require the inner flow to be more vertically extended than the outer flow, so that some photons that would otherwise illuminate the thin disc near $r_{\rm trc}$ are obscured by the outer flow, flattening the emissivity especially at the inner parts of the disc. We caution that this is only a rough interpretation suggested by the global fits. In principle, $\gamma$ should not be thought of as an independent parameter but should instead be tied to the exact geometry of the flow. An additional function may be required to tie $\gamma$ to the characteristic radii of the flow; in that case, the exact geometry of the flow and of the disc (e.g., a flared disc) must be explicitly specified, which makes the model much more complicated. We therefore choose to fix $\gamma$ to the values obtained from the global fits in this work. Nevertheless, the disc emissivity profile is worth investigating in the future, for example using high-density disc models \citep{Jiang2019}.
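A quick numerical check of this statement, assuming the flux per unit disc area scales as $r^{-(\gamma+1)}$ so that $\gamma=0$ corresponds to an $r^{-1}$ emissivity (the radial range and binning below are illustrative):

```python
import numpy as np

# With flux per unit area ∝ r^-(gamma+1), the photon count in an annulus
# of width dr is ∝ r^-(gamma+1) * 2*pi*r*dr ∝ r^-gamma dr, so gamma = 0
# gives equal counts in equal-width annuli.
r_edges = np.linspace(10.0, 400.0, 40)          # annulus edges [r_g]
r_mid = 0.5 * (r_edges[:-1] + r_edges[1:])
dr = np.diff(r_edges)

gamma = 0.0
flux_per_area = r_mid ** (-(gamma + 1.0))       # ∝ r^-1 for gamma = 0
photons = flux_per_area * 2.0 * np.pi * r_mid * dr
print(photons.std() / photons.mean())           # ~0: flat photon counts
```

Repeating this with a steeper $\gamma$ would concentrate the photon counts towards small radii, which is the usual centrally peaked illumination pattern.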
Dual lamppost sources illuminating the disc were found to explain the spectroscopic data of GX~339--4 better than a single lamppost source in some observations \citep{Garcia2019}. The X-ray source could therefore be extended, and realistic impulse responses need to be produced through ray-tracing simulations \citep{Wilkins2016, Chainakun2017, Chainakun2019b, Garcia2019}. Here, we assume the size scale of the source, or the hot flows, to be relatively small compared to the radial size of the accretion disc \citep{Gardner2014, Mahmoud2019}. However, \cite{Chainakun2019a} reported that the reverberation signatures in the PSD profiles (e.g., the main dip and the oscillation features) would be relatively weak and more difficult to probe if the source is significantly extended. Using more realistic, ray-traced impulse responses for different spatially extended source geometries is beyond the scope of this paper, but is planned for the future.
Although we consider only a two-temperature inner flow plus thermal reverberation, the model naturally produces the wiggles in the PSD profile that are usually seen in the observed data. A bumpy PSD could also be produced by other realistic treatments, such as when the propagating fluctuations are solved through the diffusion equation using Green's functions \citep{Mushtukov2018}, or when extra variability is injected at different characteristic radii \citep[e.g.,][]{Rapisarda2016,Rapisarda2017}. Since the number of free parameters has been minimized to avoid degeneracies, the model provides tight constraints on each key parameter with small statistical errors. The power extracted from the observational data towards the high-frequency end turns out to be zero or negative at some points, which are excluded from our analysis. Upcoming X-ray observatories could deliver higher quality data (e.g., mean spectra, lags, and PSDs) that could be robustly constrained with the model. High-quality PSDs extending to higher frequencies may reveal the clear dips and oscillatory structures imprinted by reverberation.
At the moment, we do not directly consider the effects of the response matrix of {\it XMM-Newton}, which can misclassify photons at low energies \citep[see][and discussion therein]{Ingram2019}. This, however, affects the timing data in the same way as changing the contamination flux in the thermal-reverberation-dominated and continuum-dominated bands, which has been taken into account in this work by employing the reflected response fraction to determine the contamination ratio between these cross-components. Applying the response matrix directly to the model would be more straightforward and is worth investigating in the future. Last but not least, our model can be further developed to predict the lag-frequency spectrum between these two bands. Fitting the PSD and the lags simultaneously (or the full cross-spectrum) is challenging, and optimal statistics must be investigated to account for the significantly different numbers of bins between different data sets; otherwise the fitting might be biased towards the best-fit parameters for the PSD data, which have significantly larger numbers of data points. The development of a more self-consistent model that simultaneously explains multiple timing data sets, including the effects of the instrument response, is planned for the future.
\section{Conclusion}
In this study we develop a PSD model that is fit to the data of GX~339--4 observed during the end of the 2015 outburst. We model a truncated accretion disc from $r_{\rm trc}$ to $400r_{\rm g}$. Inside $r_{\rm trc}$, spectrally soft and hard hot flows extend down to the ISCO. The model incorporates both disc-fluctuation and reverberation signals. The fluctuations inside the hot flows propagate inwards on the viscous timescale of the truncation radius. The model can qualitatively reproduce the traditional PSD profiles, exhibiting increased high-frequency power in the higher energy band. Furthermore, stronger high-frequency power is produced with decreasing $r_{\rm trc}$, as expected. Including the reverberation signals produces a dip at the high-frequency end, which should be the beginning of the oscillatory structures imprinted in the PSD profiles as discussed by \cite{Papadakis2016} and \cite{Chainakun2019a}.
To produce the model grids, the reflected response fractions of the 0.3--0.7 and 0.7--1.5 keV bands are fixed at the values constrained by the time-averaged spectral analysis \citep{Demarco2017}, as are the photon indices of the X-ray continuum associated with the soft and hard hot-flow zones. In doing this, the PSD models are tied to realistic parameters obtained from spectral fitting, and the contribution of continuum flux in each energy band, i.e.\ the dilution of reverberation, is properly taken into account. The PSD data for both energy bands are fitted simultaneously for each observation. We find that the disc parameter $(H/R)^2 \alpha = 0.005$ provides a good fit for the majority of observations using the global grid of the model; it is hence fixed when the finer, local grids are produced independently for each observation and the fitting is repeated.
The fitting results suggest that the truncation radius moves outward from $\sim 10$ to $\sim 55r_{\rm g}$ as the source luminosity decreases from O1--O6. Although the trend of increasing $r_{\rm trc}$ with decreasing luminosity is in agreement with previous studies, the values we obtain are smaller than those from previous reverberation lag analysis \citep{Demarco2017} and spectral-timing modelling \citep{Mahmoud2019}, but larger than some of those constrained using spectroscopic data alone \citep[e.g.][]{Wang2018,Garcia2019}. We find that the transition radius also increases from $7$ to $29r_{\rm g}$ during O1--O6, meaning that the size of the inner hard hot flow increases from $\sim 5$ to $\sim 27r_{\rm g}$, always spanning a slightly larger radial range than the spectrally soft hot flow, by a factor of $\sim 1.1$--$2.2$. The current PSD model can be straightforwardly adapted to different source and inner-flow geometries, e.g., dual-lamppost cases, that may suit the unique data of different X-ray binaries and AGN.
\begin{acknowledgements}
The calculations in this work were carried out using the BlueCrystal supercomputer of the Advanced Computing Research Centre, University of Bristol, UK. All data analysed are based on observations obtained with {\it XMM-Newton}, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA. PC thanks the Thailand Research Fund (TRF) for support under grant number MRG6280086, and acknowledges useful discussions with Utane Sawangwit. WL thanks the Faculty of Science, Srinakharinwirot University, for financial support. We thank the anonymous referee for their comments, which led to a clarification of some important points in the paper and have inspired some interesting work for the future.
\end{acknowledgements}
\section{Introduction}
\label{sec:intro}
Detection of primordial gravitational waves (PGWs) is considered to be a smoking gun for the inflationary universe.
Their amplitude is often characterized by the tensor-to-scalar ratio $r$, and a number of experiments aim to pin down the value of $r$ through the B-mode polarization of the Cosmic Microwave Background (CMB).
While the current joint Planck and BICEP2/Keck Array observations constrain its amplitude to $r \lesssim 0.06$ \cite{Ade:2015lrj}, the observational sensitivity is expected to reach $\Delta r = \mathcal{O}(10^{-3})$ in the next decade through upcoming missions, including the CMB-S4 project \cite{Abazajian:2016yjj} and the LiteBIRD satellite \cite{Matsumura:2013aja}.
In the standard prediction, the value of $r$ is directly related to the inflationary energy scale by $E_{\rm inf} \sim 10^{15} {\rm GeV} \times ( r / 10^{-3} )^{1/4}$, and thus
if PGWs should be detected by these missions, the energy scale of inflation
would naively be estimated around GUT scales, $\sim 10^{15}{\rm GeV}$.
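The quoted scaling can be checked with a one-line numerical sketch of the stated relation $E_{\rm inf} \sim 10^{15}\,{\rm GeV}\,(r/10^{-3})^{1/4}$ (the function name is ours, and the relation is the naive single-field estimate quoted above):

```python
def inflation_energy_scale_gev(r):
    """Naive inflationary energy scale from the tensor-to-scalar ratio,
    E_inf ~ 1e15 GeV x (r / 1e-3)^(1/4)."""
    return 1.0e15 * (r / 1.0e-3) ** 0.25

print(inflation_energy_scale_gev(0.06))   # current bound r ~ 0.06
print(inflation_energy_scale_gev(1e-3))   # target sensitivity
```

The shallow quarter-power dependence is why the whole observable range of $r$ maps onto a narrow band of energy scales around $10^{15}$ GeV.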
Therefore, in view of studies on fundamental physics, it is extremely important to test the validity of this prediction and to explore viable inflationary mechanisms to provide PGWs.
In the standard inflationary scenario, PGWs are generated by vacuum fluctuations of spacetime amplified due to the quasi-de Sitter expansion of the universe.
The resultant statistical properties are encoded in the tensor spectrum of cosmological perturbations and are (i) nearly scale invariant, (ii) statistically isotropic, (iii) parity symmetric and (iv) almost gaussian.
However, these features are not necessarily true if other sources of gravitational waves do exist in the early universe, such as gauge fields.
In string theory or supergravity, gauge sectors are kinetically or topologically coupled with scalar sectors even if they are neutral under the corresponding gauge group.
Through these couplings a background motion of scalar fields can violate the conformal invariance of gauge fields and amplify gauge quanta during inflation.
Historically their cosmological role has been discussed in the context of primordial magnetogenesis \cite{Ratra:1991bn, Garretson:1992vt, Martin:2007ue, Demozzi:2009fu, Kanno:2009ei, Fujita:2012rb, Ferreira:2013sqa, Fujita:2013pgp, Fujita:2014sna, Obata:2014qba, Fujita:2015iga, Fujita:2016qab, Adshead:2016iae, Caprini:2017vnn}.
Recently, it has been revealed that amplified gauge fields also enhance other fluctuations that are coupled to them and imprint observable signatures in the spectrum of scalar and/or tensor perturbations.
Depending on the dynamics of gauge field production during inflation, the curvature perturbation sourced by produced gauge quanta can be highly non-gaussian \cite{Barnaby:2010vf, Barnaby:2011qe, Barnaby:2012tk, Anber:2012du, Barnaby:2012xt, Linde:2012bt, Ferreira:2014zia, Peloso:2016gqs}, statistically anisotropic \cite{Watanabe:2009ct, Himmetoglu:2009mk, Gumrukcuoglu:2010yc, Watanabe:2010fh, Kanno:2010nr, Watanabe:2010bu, Soda:2012zm, Bartolo:2012sd, Ohashi:2013qba, Ohashi:2013pca, Naruko:2014bxa, Ito:2015sxj, Abolhasani:2015cve, Ito:2017bnn}, and sufficiently large to form primordial black holes after inflation \cite{Linde:2012bt, Garcia-Bellido:2016dkw, Domcke:2017fix, Garcia-Bellido:2017aan, Cheng:2018yyr}.
Also, as sourcing effects on tensor modes, several types of inflationary models with gauge fields have been suggested, predicting scale-dependent, statistically anisotropic, parity-violating, and/or non-gaussian PGWs \cite{Ito:2016aai, Sorbo:2011rz, Cook:2011hg, Mukohyama:2014gba, Choi:2015wva, Namba:2015gja, Domcke:2016bkh, Guzzetti:2016mkm, Obata:2016oym, Obata:2016xcr, Fujita:2017jwq, Ozsoy:2017blg, Fujita:2018zbr}.
These significant deviations from the conventional vacuum modes are potentially testable with the correlations of CMB temperature and polarization anisotropies \cite{Saito:2007kt, Bartolo:2014hwa, Bartolo:2015dga, Bartolo:2017sbu, Thorne:2017jft,Hiramatsu:2018vfw}, laser interferometers \cite{Seto:2006hf, Seto:2006dz, Seto:2008sr, Bartolo:2018qqn} or the measurement of pulsar timing arrays \cite{Kato:2015bye}.
In particular, $SU(2)$ gauge fields coupled to an axionic field are known as an interesting alternative PGW source.
Chromo-natural inflation was proposed as the first axion-$SU(2)$ model for inflation \cite{Adshead:2012kp}, where a large axion-gauge coupling allows the vacuum expectation value (vev) of $SU(2)$ gauge fields to support an isotropic inflationary attractor.
Its background solution is realized over a broad parameter region, which includes seemingly different inflationary models such as gauge-flation and non-canonical single-field inflation \cite{Maleknejad:2011jw, SheikhJabbari:2012qf, Adshead:2012qe, Dimastrogiovanni:2012st, Maleknejad:2012fw, Maleknejad:2012dt}, and the background isotropy is stable against small anisotropies of Bianchi type I \cite{Maleknejad:2013npa}.
Although the original scenario is excluded from CMB data \cite{Adshead:2013nka}, extended models have been suggested where additional fields can resolve the observational conflict \cite{Obata:2014loa, Obata:2016tmo, Maleknejad:2016qjz, Dimastrogiovanni:2016fuu, Adshead:2016omu, DallAgata:2018ybl, Domcke:2018rvv}.
Intriguingly, in axion-$SU(2)$ models, the rotationally symmetric background configuration enforces fluctuations of $SU(2)$ gauge fields to have components of scalar and tensor types that interact with density perturbations and gravitational waves, respectively, at the linearized level \cite{Dimastrogiovanni:2012ew, Adshead:2013qp, Namba:2013kia}.
Since one polarization mode of the tensor components of the gauge fields experiences a tachyonic instability around horizon crossing in this class of models, an exponential enhancement of parity-violating gravitational waves is generated.
Remarkably, the linear interactions of perturbations allow for a parameter region where the tensor components are amplified while the scalar ones are not, which makes it possible to provide a sizable amount of chiral gravitational waves consistent with CMB data.
In addition to two-point functions, three-point correlations sourced by $SU(2)$ gauge fields are also important observables in this scenario.
Recently, non-linear analyses of this type of models have been explored regarding tensor-tensor-tensor non-gaussianity~\cite{Agrawal:2017awz, Agrawal:2018mrg}, scalar-tensor-tensor mixed non-gaussianity~\cite{Dimastrogiovanni:2018xnn}, and the one-loop contribution to the curvature power spectrum~\cite{Dimastrogiovanni:2018xnn,Papageorgiou:2018rfx}.
Along with such a growing interest, in this paper we calculate a three-point cross-correlation function that is a mixed non-gaussianity between scalar and tensor sectors in the framework of axion-$SU(2)$ model proposed in \cite{Dimastrogiovanni:2016fuu}, where an $SU(2)$ gauge field is coupled to a spectator axion field.
The mechanism generating such a correlation in this model has several steps: first, one of the tensor components of the gauge-field perturbations is copiously produced by the transient tachyonic instability described in the previous paragraph. The metric tensor modes $h$ (gravitational waves) inherit the effect of this production through linear mixings. The curvature perturbation $\zeta$, on the other hand, gravitationally interacts with the scalar modes in the axion-$SU(2)$ sector, which have direct three-point interactions with the gauge-field tensor modes. In this way, $\zeta$ and $h$ correlate to induce the scalar-tensor-tensor $\zeta hh$ mixed non-gaussianity, mediated by the gauge-field tensor mode.
As a first step, we focus on the gravitational interaction between $\zeta$ and the axion-field perturbation in the scalar sector.
While a similar cross-correlation has been recently discussed in \cite{Dimastrogiovanni:2018xnn}, we introduce a new calculation approach, in which the mixing effect between the axion and $SU(2)$ fields in the quadratic action is fully taken into account and is not disregarded as a higher-order contribution.
More precisely, we apply a non-perturbative formalism to quantize a coupled system \cite{Nilles:2001fg} and include linear mixing terms in the calculation of the mixed scalar-tensor-tensor non-gaussianity, by employing the in-in formalism \cite{Weinberg:2005vy}.
In order to correctly derive the three-point vertices in the interaction Hamiltonian, we find that not only the axion perturbation but also the scalar components of the $SU(2)$ gauge field are relevant; the latter have been neglected in previous works \cite{Dimastrogiovanni:2018xnn, Papageorgiou:2018rfx}.
We show that the resultant spectrum has a folded shape and that its non-linearity parameter can be $\mathcal{O}(1)$ with our fiducial parameter choice. While a careful analysis of the signal detectability must await future studies, we expect that this novel feature will serve as a distinct signature of the present mechanism when confronted with future CMB measurements.
This paper is organized as follows.
In Sec.~\ref{sec:overview}, we briefly summarize the setup of our model and then explain our approach to calculate the mixed non-gaussianity. In Sec.~\ref{sec:quadratic}, we show the quadratic action for the scalar and tensor sectors and review how to quantize the system in the initial vacuum. We then derive the cubic interaction Hamiltonian in Sec.~\ref{sec:cubic}. Here we also describe our target observable and obtain its formal expression. We show our results in Sec.~\ref{sec:results}. Sec.~\ref{sec:summary} is devoted for the summary and discussion of our result. We show explicit expressions for shape functions in Appendix \ref{appen:explicit} and compare our approach and result to previous works in Appendix \ref{appen:comparison}.
\section{Model and Our Approach}
\label{sec:overview}
\subsection{The model setup}
\label{subsec:model}
Here we briefly describe our model, while the readers are referred to Ref.~\cite{Dimastrogiovanni:2016fuu} for more detailed discussions.
We consider the following matter action under the general relativity
\begin{equation}
S=\int \mathrm{d}^4x\sqrt{-g}\left[-\frac{1}{2}\left(\partial\phi\right)^{2}-V(\phi)-\frac{1}{2}\left(\partial\chi\right)^{2}-W(\chi)-\frac{1}{4}F_{\mu\nu}^{a}F^{a, \mu\nu}+\frac{\lambda\chi}{4f}F_{\mu\nu}^{a}\tilde{F}^{a, \mu\nu}\right]\,,
\end{equation}
where $\phi$ is an inflaton with potential $V(\phi)$,
$\chi$ is a pseudo-scalar field (axion) with potential $W(\chi)$, $F_{\mu\nu}^{a}\equiv\partial_{\mu}A^{a}_{\nu}-\partial_{\nu}A^{a}_{\mu}-g \epsilon^{abc} A^{b}_{\mu}A_{\nu}^{c}$ is the field strength of an $SU(2)$ gauge field $A_{\mu}^{a}$, and $\tilde{F}^{a, \mu\nu}\equiv\epsilon^{\mu\nu\rho\sigma}F^{a}_{\rho\sigma}/(2\sqrt{-g})$ is its dual, with $\epsilon^{\mu\nu\rho\sigma}$ the flat-spacetime totally anti-symmetric symbol with the convention $\epsilon^{0123} = +1$.
The constants $\lambda$ and $f$ are dimensionless and dimensionful parameters of the model, respectively.
Although we do not specify any concrete model of $V(\phi)$, the inflaton is assumed to cause a quasi-de Sitter expansion $a(t)\propto e^{H t}$ with a nearly constant Hubble parameter $H$ during inflation and to produce the observed amplitude of the curvature perturbation $\zeta$ through the relation,
\begin{equation}
\zeta=-H\frac{\delta\varphi}{\dot{\phi}_0} \; ,
\label{zeta-def}
\end{equation}
in the spatially flat gauge,
where the inflaton $\phi(t,\bm x)$ is split into the background part $\phi_0(t)$ and the perturbation $\delta\varphi(t,\bm x)$, and dot denotes the cosmic time derivative $\dot{}\equiv \partial_t$\,.
We assume that the coupling between the axion and the gauge fields is sufficiently strong and the background value of the gauge field is not negligibly small compared to that of the axion, while the axion-gauge field system is still a spectator sector in that their energy densities do not alter the background expansion driven by the inflaton $\phi_0$.
In that case, the background fields have an attractor solution where the $SU(2)$ gauge fields take an isotropic configuration,
\begin{equation}
A_0^a(t)=0,
\quad
A_i^a(t)=\delta^a_i a(t)Q(t),
\label{A configuration}
\end{equation}
which is compatible with the Friedmann-Lema\^{i}tre-Robertson-Walker (FLRW) metric.
It is useful to introduce two dimensionless quantities,
\begin{equation}
m_Q\equiv \frac{g Q}{H},
\qquad
\Lambda\equiv \frac{\lambda Q}{f}.
\end{equation}
With them, our assumptions of a strong axion-gauge coupling and a significant vev of the gauge fields are quantified as $\Lambda\gg 1$ and $m_Q\gtrsim 1$, respectively.
In this regime, one can show that the background attractor solution reads
\begin{equation}
\xi\equiv \frac{\lambda \dot{\chi}_0}{2fH}\simeq m_Q+m_Q^{-1},
\qquad
m_Q\simeq \left(\frac{-g^2 f\,W_\chi(\chi_0)}{3\lambda H^4}\right)^{\frac{1}{3}},
\label{attractor}
\end{equation}
where $\chi_0(t)$ is the background part of the axion field, and subscript $\chi$ on $W$ denotes derivative with respect to $\chi$.
Furthermore, the kinetic part of the energy fraction of the background gauge field $\epsilon_E\equiv (\dot{Q}+HQ)^2/(M_{\rm Pl}^2H^2)$ and its self-interaction part $\epsilon_B\equiv g^2 Q^4/(M_{\rm Pl}^2H^2)$
satisfy the simple relation, $\epsilon_B\simeq m_Q^2\epsilon_E$.
Throughout this paper, we work in this attractor regime.%
\footnote{We use $m_Q\simeq 3.5$ and $\Lambda\simeq 160$ in our main calculation in Sec.~\ref{sec:results}.}
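The attractor relations can be evaluated numerically for the fiducial $m_Q \simeq 3.5$ used later. In the sketch below, the values of $H$ and $Q$ are illustrative placeholders (in reduced Planck units), chosen only to verify $\xi \simeq m_Q + m_Q^{-1}$ and $\epsilon_B \simeq m_Q^2\,\epsilon_E$ in the slowly varying limit $|\dot Q| \ll HQ$:

```python
# Fiducial m_Q ~ 3.5 as in the main text; H and Q are illustrative
# placeholders in reduced Planck units (assumptions for this check only)
M_PL = 1.0
H = 1.0e-5 * M_PL
m_q = 3.5
Q = 1.0e-2 * M_PL            # illustrative vev of the gauge field
g = m_q * H / Q              # from the definition m_Q = g Q / H

xi = m_q + 1.0 / m_q         # attractor relation for the axion velocity
eps_E = (H * Q) ** 2 / (M_PL * H) ** 2   # neglecting dQ/dt (slow evolution)
eps_B = g ** 2 * Q ** 4 / (M_PL * H) ** 2
print(xi, eps_B / eps_E)     # ratio reduces to m_Q^2 = 12.25
```

The ratio $\epsilon_B/\epsilon_E = g^2 Q^2/H^2 = m_Q^2$ is independent of the placeholder values of $H$ and $Q$, as expected from the analytic relation.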
Around the above well-behaved background, we introduce perturbations as
\begin{equation}
\chi=\chi_0+\delta\chi,
\quad
A^a_0=a\partial_aY,
\quad
A^a_i=a \delta^a_i (Q+\delta Q)+a\epsilon^{iab}\partial_bU+aT_{ai},
\quad
g_{ij}=a^2(\delta_{ij}+h_{ij}),
\label{pert_decomp}
\end{equation}
where $\delta \chi,Y, \delta Q$ and $U$ are scalar perturbations while $T_{ai}$ and $h_{ij}$ are tensor perturbations. We suppress the vector perturbations as they are irrelevant for our target observable in this paper. The $SU(2)$ gauge freedom is already fixed in the expression \eqref{pert_decomp}, and $Y$ is a non-dynamical variable that can be expressed in terms of dynamical degrees of freedom.
Note that the $SU(2)$ indices and the spatial indices are identified under the attractor configuration eq.~\eqref{A configuration} \cite{Maleknejad:2011jw}.%
\footnote{The spatial component of the gauge field vector potential $\bm{A}_i \equiv A^a_i \tau^a$, where $\tau^a$ is the generator of $SU(2)$, transforms under the global part of $SU(2)$ in its adjoint representation, which is isomorphic to $SO(3)$. The vector configuration \eqref{A configuration} precisely enforces the identification of this (global) $SO(3)$ with the background spatial rotation.}
As we will see in Sections \ref{sec:quadratic} and \ref{sec:cubic}, these perturbations interact through the quadratic and cubic actions of this model, which leads to the mixed non-gaussianity.
\subsection{New calculation approach}
\label{subsec:outline}
In this paper we compute the scalar-tensor-tensor non-gaussianity $\langle\zeta hh\rangle$ by using a new calculation approach.
As we discuss in Sec.~\ref{sec:cubic}, the relation between $\zeta$ and $\delta\chi$ is classical and linear.
Hence it is essential to evaluate $\langle \delta\chi hh\rangle$ in a quantum mechanical way to obtain $\langle \zeta hh\rangle$ in this model, which is concretely shown in Sec.~\ref{sec:cubic}.
Here we explain the reason why we introduce a new approach to compute $\langle \delta\chi hh\rangle$ and outline its calculation scheme.
Since $\langle \delta\chi hh\rangle$ is a cross-correlation, some interactions between fields must be involved in the calculation. In our model, we have relevant interaction terms in the quadratic action and cubic action, whose explicit expressions are shown in Sections \ref{sec:quadratic} and \ref{sec:cubic}, respectively.
We call the former mixing terms (e.g.~$\delta \chi \delta Q$) and the latter 3-point vertex terms (e.g.~$\delta\chi T_{ij} T_{ij}$; recall that $T$ has a linear mixing with $h$). The importance of the mixing effects is illustrated in Fig.~\ref{Pchi-comparison}.
One can see that the behavior of $\delta\chi$ is drastically changed by the mixing.
Therefore, although one may obtain a non-zero value of $\langle \delta\chi hh\rangle$ by considering only the 3-point vertices and ignoring the mixing, it is indispensable to take into account the mixing effects in order to properly
evaluate $\langle \delta\chi hh\rangle$.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=90mm]{Pchi-comparison.pdf}
\end{center}
\caption
{The power spectra of $\delta\chi$ normalized by $(H/2\pi)^2$ which are computed in different ways are compared. The black solid line shows the result of the NPS method where the mixing effects between the scalar perturbations are fully taken into account.
The other two lines neglect the mixing: $\delta\chi$ is massless (orange dot-dashed line) or has a mass $m_\chi^2\approx 40H^2$ (blue dashed line).
The significant deviations imply that one should not regard the mixing effects as a small correction.}
\label{Pchi-comparison}
\end{figure}
One might consider putting both the mixing and vertex terms into the interaction Hamiltonian in the in-in formalism and perturbatively calculating their contributions to $\langle \delta\chi hh\rangle$.
In that case, one would expect that the contribution from the diagram in the right panel of Fig.~\ref{diagram} is suppressed compared to the left panel contribution, because an extra mixing effect is involved in the right panel.
Nevertheless, we find that this naive perturbative counting fails and the two contributions are actually comparable.
It suggests that the mixing effects should be taken into account in a non-perturbative manner.
Why are the mixing effects between the scalar perturbations so significant that a perturbative treatment is invalidated in the present model?
To understand the reason, it is useful to consider the background dynamics.
In the previous subsection, we have assumed that the coupling between the axion and the gauge fields is strong enough to achieve the slow-roll regime.
The background axion $\chi_0$, which is pushed by its own potential force, acquires a (non-Hubble) friction from the gauge field $Q_0$.
At the same time, as the backreaction from $\chi_0$ to $Q_0$, the gauge field in turn sustains its background value $Q_0$ by gaining the kinetic energy of $\chi_0$.
If $Q_0$ becomes too big (small), the friction on $\chi_0$ increases (decreases) and the energy transfer from $\dot{\chi}_0$ to $Q_0$
diminishes (grows). As a result of this continuous mutual feedback, $\chi_0$ and $Q_0$ end up balancing each other and maintain the slow-roll regime.
When it comes to the perturbations, the scalar degrees of freedom, $\delta\chi, \delta Q$ and $U$, must share similar properties with the background, because their behaviors should converge to those of the background fields in the long-wavelength limit.
Their leading interactions (or feedback) are represented by the mixing terms. If one employed the in-in formalism, or equivalently the Green function method, and computed the mixing effects only perturbatively, the interaction in one direction (e.g.~$\delta Q$ slows down $\delta\chi$) and its subsequent backreaction ($\delta\dot{\chi}$ sources $\delta Q$) would be treated hierarchically, appearing at different orders in perturbation theory. As a result, one would not correctly reproduce the mutual feedback system.%
\footnote{This argument does not apply to the tensor perturbations. Because of the hierarchy $T^R\gg h^R$ due to a suppressed mixing between them (the right-handed modes ($R$) are the ones amplified with our choice of parameters), one can focus on the source effect from $T^R$ to $h^R$ and its backreaction is negligible to compute $h^R$. Thus, the result of the non-perturbative calculation is well approximated by a perturbative one in the case of the tensor perturbations.}
It is therefore mandatory to treat the mixing in the scalar sector non-perturbatively.
\begin{figure}[tbp]
\hspace{7mm}
\includegraphics[width=53mm]{diagram3.pdf}
\hspace{20mm}
\includegraphics[width=60mm]{diagram2.pdf}
\caption
{Schematic illustration of the two channels producing
$\langle\delta\chi h h\rangle$, contributed by the vertices
$\delta\chi TT$ (left) and $\delta Q TT$ (right).
The black circle and the crossed circle denote the 3-point vertex and
the mixing effect, respectively.
If the mixing between $\delta\chi$ and $\delta Q$ is treated as a perturbation,
the right diagram is considered as a higher-order contribution and should be suppressed compared to the left one. Nevertheless, our calculation shows the contributions of the two channels are comparable.}
\label{diagram}
\end{figure}
Fortunately, a non-perturbative quantum formalism to include the mixing terms is known.
Nilles, Peloso and Sorbo (NPS) have developed the formalism in Ref.~\cite{Nilles:2001fg}, and Dimastrogiovanni and Peloso have applied it to the axion-$SU(2)$ coupled system in the context of Chromo-natural inflation~\cite{Dimastrogiovanni:2012ew}.
With the NPS formalism, one can quantize a coupled quadratic action and solve the corresponding coupled system of linear equations even with arbitrary mixing.
On the other hand, the in-in formalism is the technique that readily takes care of cubic and higher order interaction terms.
Therefore, the state-of-the-art calculation approach is the combination of the NPS method and the in-in formalism: we solve the linear equations of motion derived from the quadratic action, including all the mixing terms, through the NPS formalism. Then, we calculate the contribution of each 3-point vertex to $\langle \delta\chi hh\rangle$ by using the in-in formalism.
Note that we demonstrate this hybrid approach for the first time in the literature.
\section{Quadratic Action with Non-perturbative Treatment}
\label{sec:quadratic}
In this section, we write down the quadratic actions of the scalar and tensor perturbations, quantize them with the NPS method developed in~\cite{Nilles:2001fg}, and numerically solve their equations of motion (EoM).
First, we transform the perturbations into Fourier space,
\begin{align}
S_I(\tau,\bm x) &= \int \frac{\mathrm{d}^3 k}{(2\pi)^{3/2}}\, {\rm e}^{i\bm k \cdot \bm x} \hat S_I(\tau,\bm k),
\\
T_{I, ij}(\tau,\bm x) &= \sum_{\sigma=L,R}\int \frac{\mathrm{d}^3 k}{(2\pi)^{3/2}}\, {\rm e}^{i\bm k \cdot \bm x}\, e_{ij}^{\sigma}(\hat{\bm k})\, \hat T_I^\sigma(\tau,\bm k),
\end{align}
where subscript $I (=\chi, Q,U, h,T)$ is the label of the perturbations,
\begin{equation}
\hat S_\chi \equiv \delta\hat\chi, \quad \hat S_Q\equiv \delta \hat Q,\quad
\hat S_U \equiv k \hat U,\quad \hat T_h^\sigma \equiv \hat h^\sigma,\quad \hat T_T^\sigma\equiv\hat T^\sigma,
\label{SITI-def}
\end{equation}
hat denotes operators in Fourier space, $\sigma$ is the label of the tensor polarizations, and $e_{ij}^{L,R}$ are the circular polarization tensors satisfying
$e^{L}_{ij} (-\hat{\bm{k}})= e^{L*}_{ij} (\hat{\bm{k}})=e^{R}_{ij} (\hat{\bm{k}}),\
i \epsilon_{ijk} k_i e_{jl}^{L/R}(\hat{\bm{k}})=
\pm k e_{kl}^{L/R}(\hat{\bm{k}})$.%
\footnote{See e.g.~\cite{Agrawal:2018mrg} for details. We report in passing that there is a typo in \cite{Dimastrogiovanni:2016fuu}, and the definition of $L$ and $R$ below eq.~(2.12) in that paper should be inverted, while all the subsequent calculations were done consistently with the same definition as the one in this paper.}
Note that we have switched the time variable to conformal time $\tau$, which is convenient for analyzing the perturbations.
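As a concrete check of the polarization-tensor identities above, one can construct the circular polarization tensors for $\hat{\bm k}=\hat{\bm z}$ and verify that the two helicities are transverse, traceless eigentensors of the curl operation with opposite eigenvalues $\pm k$. Since the overall assignment of $L$ and $R$ is convention dependent, the sketch below only checks this eigenvalue structure.

```python
import numpy as np

# Circular polarization tensors for k along the z-axis, built from
# e^{+-} = (e1 +- i e2)/sqrt(2). Which helicity is called L or R is a
# convention; we only check transversality, tracelessness and the
# curl eigenvalues +-k.
k = 2.0
kvec = np.array([0.0, 0.0, k])
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
pols = [np.outer(v, v) for v in
        ((e1 + 1j * e2) / np.sqrt(2.0), (e1 - 1j * e2) / np.sqrt(2.0))]

# Levi-Civita symbol
eps = np.zeros((3, 3, 3))
for i, j, l in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, l], eps[i, l, j] = 1.0, -1.0

def curl_eigenvalue(e):
    """Return s such that i eps_{ijk} k_i e_{jl} = s k e_{kl}."""
    M = 1j * np.einsum('ijk,i,jl->kl', eps, kvec, e)
    idx = np.unravel_index(np.argmax(np.abs(e)), e.shape)
    return (M[idx] / (k * e[idx])).real

s_plus, s_minus = (curl_eigenvalue(e) for e in pols)
transverse = all(np.allclose(kvec @ e, 0.0) for e in pols)
traceless = all(abs(np.trace(e)) < 1e-14 for e in pols)
print(s_plus, s_minus, transverse, traceless)
```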
After integrating out the non-dynamical variable $Y$, we obtain the scalar quadratic action $S_S^{(2)}$ in Fourier space as,
\begin{equation}
S_S^{(2)}=\frac{1}{2}\int \mathrm{d}\tau\,\mathrm{d}^3 k
\left[
\hat\Delta'^\dag_I \hat\Delta'_I+ \hat\Delta'^\dag_I K_{IJ} \hat\Delta_J- \hat\Delta^\dag_I K_{IJ} \hat\Delta'_J
- \hat\Delta^\dag_I \Omega^2_{IJ} \hat\Delta_J\right],
\label{scalar action}
\end{equation}
where prime denotes derivative with respect to $\tau$ and the canonical fields $\Delta_I$ are defined as
\begin{equation}
\hat S_I=\mathcal{R}_{IJ}\hat\Delta_J ,
\qquad
\mathcal{R}_{IJ}\equiv
\left(
\begin{array}{ccc}
\displaystyle \frac{1}{a} \;\; & 0 & 0 \\
0 \;\; & \displaystyle \frac{1}{\sqrt{2} \, a} \;\; & 0 \\
0 \;\; & \displaystyle - \frac{m_Q H}{\sqrt{2} \, k} \;\; &
\displaystyle \frac{{\cal B}}{\sqrt{2} \, k \, a}
\end{array}
\right)
\label{scalar-rotation}
\end{equation}
and the matrices $K_{IJ}$ and $\Omega^2_{IJ}$ are given by
\begin{equation}
\begin{aligned}
K_{IJ} & = \frac{a \Lambda}{\sqrt{2}}
\left(
\begin{array}{ccc}
0 & \displaystyle m_Q H \;\; & \displaystyle \frac{a m_Q^2 H^2}{{\cal B}} \\
- \displaystyle m_Q H & 0 \;\; & 0 \\
\displaystyle - \frac{a m_Q^2 H^2}{{\cal B}} & 0 \;\; & 0
\end{array}
\right)
\; ,
\\
\Omega^2_{IJ} & =
\left(
\begin{array}{ccc}
\displaystyle
k^2 + a^2 W_{\chi\chi} - \frac{a''}{a}
+ \frac{a^2 k^2 m_Q^2 \Lambda^2 H^2}{{\cal B}^2} \;\;
& \displaystyle
\frac{a^2 m_Q \Lambda H^2}{\sqrt{2}}
\left( 3 + \frac{2 \, Q'}{a Q H} \right) \;\;
& \displaystyle \Omega^2_{\chi U} \\
\displaystyle
&
k^2 + 2 a^2 m_Q \left( 2 m_Q - \xi \right) H^2 \;\;
&
\Omega^2_{QU} \\
\displaystyle & \;\;
&
\Omega^2_{UU}
\end{array}
\right)
\; , \\ &
\Omega^2_{\chi U} = \frac{\Lambda}{\sqrt{2} \, {\cal B}} \left[ a^3 m_Q^2 H^3 + \frac{2k^4 +3 a^2 k^2 m_Q^2 H^2 + 4 a^4 m_Q^4 H^4}{{\cal B}^2} \, \frac{\partial_\tau ( a Q )}{aQ} \right] \; ,
\\ &
\Omega^2_{QU} = 2 a \left( m_Q - \xi \right) H {\cal B} \; ,
\\&
\Omega^2_{UU} = k^2 + 2 a^2 m_Q^2 H^2 + \frac{2 a^2 k^2 m_Q \left( m_Q - \xi \right) H^2}{{\cal B}^2} + \frac{6 a^2 k^2 m_Q^2 H^2}{{\cal B}^4} \, \frac{\left[ \partial_\tau (a Q) \right]^2}{a^2 Q^2} \; ,
\end{aligned}
\label{scalar-matrices}
\end{equation}
where ${\cal B} \equiv \sqrt{k^2 + 2a^2 m_Q^2 H^2}$, and $\Omega^2_{IJ}$ is a symmetric matrix.
Note that the off-diagonal components of $K_{IJ}$ and $\Omega^2_{IJ}$ denote the mixing terms among the scalar perturbations.
Also note that the index $U$ on $K_{IJ}$ and $\Omega^2_{IJ}$ in fact corresponds to a linear combination of the original variables $U$ and $\delta Q$.
Following the NPS method, we quantize $\hat\Delta_I$ as
\begin{equation}
\hat{\Delta}_I(\tau,\bm k)=\mathscr{S}_{IJ}(\tau,k)\, \hat{a}_J(\bm k)+
\mathscr{S}_{IJ}^*(\tau,k)\, \hat{a}^\dag_J(-\bm k),
\label{DI-decompose}
\end{equation}
where the creation/annihilation operators satisfy the standard commutation relation
\begin{equation}
\left[\hat{a}_I(\bm k), \hat{a}^\dag_J(\bm k')\right]= \delta_{IJ}
\delta(\bm k-\bm k'),
\qquad
({\rm otherwise})=0.
\label{commutation}
\end{equation}
It should be emphasized that the scalar mode functions $\mathscr{S}_{IJ}(\tau,k)$ form a $3\times 3$ matrix,
whose $(I,J)$ component represents the part of the field $I$ that is induced by the vacuum fluctuation of the field $J$.
For instance, $\mathscr{S}_{\chi\chi}$ denotes the amplitude of the intrinsic fluctuation of $\delta\chi$, while $\mathscr{S}_{\chi Q}$ denotes the amplitude of $\delta\chi$ which is sourced by the intrinsic $\delta Q$.
Note that $\mathscr{S}_{IJ}$ accompanies the creation/annihilation operator of the $J$ field because $\mathscr{S}_{IJ}$ originates from the vacuum fluctuation of the $J$ field.
This quantization scheme is essential in order to diagonalize the quadratic Hamiltonian in a rigorous manner under the condition that the mass matrix $\Omega^2$ in an action of the form \eqref{scalar action} cannot be diagonalized while keeping the kinetic term intact, which is the case for our consideration.
We numerically solve the EoM for the scalar mode functions $\mathscr{S}_{IJ}(\tau,k)$
\begin{equation}
\partial_\tau^2\mathscr{S}_{IL}+2K_{IJ}\partial_\tau\mathscr{S}_{JL}+(\Omega^2_{IJ} + \partial_\tau K_{IJ})\mathscr{S}_{JL}=0,
\label{scalarEoM}
\end{equation}
with the Bunch-Davies initial condition
\begin{equation}
\lim_{|k\tau|\to\infty}\mathscr{S}_{IJ}=\frac{1}{\sqrt{2k}}\,\delta_{IJ},
\qquad
\lim_{|k\tau|\to\infty}\partial_\tau \mathscr{S}_{IJ}=-i\sqrt{\frac{k}{2}}\,\delta_{IJ}.
\label{BDvacuum}
\end{equation}
In actual numerical computations, the initial vacuum must be taken at a time $\vert k\tau \vert \gg 1$ when the adiabatic conditions are satisfied and $\Omega^2_{IJ} \simeq k^2 \delta_{IJ}$ is a good approximation.
The auto- and cross-power spectra of the scalar perturbations are defined by (see e.g.~\cite{Gumrukcuoglu:2010yc})
\begin{equation}
\frac{1}{2}\left\langle \hat{S}_I(\tau,\bm k)\hat{S}_J(\tau,\bm k')
+ \hat{S}_J(\tau,\bm k)\hat{S}_I(\tau,\bm k')\right\rangle\equiv \delta(\bm k+\bm k') \, \frac{2\pi^2}{k^3}\mathcal{P}_{S_I S_J}(\tau,k).
\end{equation}
We show the auto-power spectra $\mathcal{P}_{\delta\chi\delta\chi},\mathcal{P}_{\delta Q\delta Q}$ and $\mathcal{P}_{kUkU}$ in the left panel of Fig.~\ref{PSplot} and the cross-power spectra $\mathcal{P}_{\delta\chi\delta Q},\mathcal{P}_{\delta Q kU}$ and $\mathcal{P}_{kU\delta\chi}$ in the right panel.
These non-vanishing correlations between $\delta\chi,\delta Q$ and $U$ at the linear order lead to important contributions to the mixed non-gaussianity as we will see in Sec.~\ref{sec:cubic}.
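In terms of the mode functions, the decomposition \eqref{DI-decompose} implies $\mathcal{P}_{S_I S_J} = \frac{k^3}{2\pi^2}\,{\rm Re}\big[(S S^\dagger)_{IJ}\big]$, where $S_{IJ}\equiv\mathcal{R}_{IK}\mathscr{S}_{KJ}$. A minimal sketch of this conversion is given below; the random complex matrix is a stand-in for an actual solution of \eqref{scalarEoM} evaluated at some time $\tau$.

```python
import numpy as np

# Conversion from a mode-function matrix S_IJ (the combination
# R_IK * script-S_KJ of the text) to the spectra P_{S_I S_J}.
# The random matrix replaces an actual numerical solution.
rng = np.random.default_rng(0)
k = 0.1
S = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# Symmetrized two-point function of eq. (DI-decompose):
# P_{S_I S_J} = k^3/(2 pi^2) Re[(S S^dag)_{IJ}]
P = k**3 / (2.0 * np.pi**2) * np.real(S @ S.conj().T)

print(np.allclose(P, P.T), bool(np.all(np.diag(P) >= 0.0)))
```

By construction the matrix of spectra is real and symmetric, with non-negative auto-spectra on the diagonal, as in Fig.~\ref{PSplot}.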
\begin{figure}[tbp]
\hspace{-2mm}
\includegraphics[width=74mm]{PSI.pdf}
\hspace{5mm}
\includegraphics[width=74mm]{PSISJ.pdf}
\caption
{{\bf (Left panel)} The auto-power spectra of $\delta \chi$ (blue solid),
$\delta Q$ (orange dashed) and $kU$ (black dot-dashed) are shown.
{\bf (Right panel)} The cross-power spectra between $\delta \chi \delta Q$ (blue solid), $\delta Q k U$ (orange dashed) and $\delta \chi kU$ (black dot-dashed) are plotted. These cross-correlations are non-zero at the linear level by virtue of the mixing terms. }
\label{PSplot}
\end{figure}
The tensor perturbations with mixing terms can be quantized and solved in the same way as the scalar ones. Their quadratic action is written in the same form as eq.~\eqref{scalar action} with the replacements
\begin{equation}
\begin{aligned}
\tilde{\mathcal{R}}_{IJ} &\equiv
\left(
\begin{array}{cc}
\displaystyle
\frac{2}{M_{\rm Pl} \, a} & 0 \\
0 & \displaystyle \frac{1}{a}
\end{array}
\right) \; ,
\qquad
\tilde{K}_{IJ}=
\left(
\begin{array}{cc}
0 & \displaystyle \frac{\partial_\tau ( a Q)}{M_{\rm Pl} \, a} \\
\displaystyle - \frac{\partial_\tau ( a Q)}{M_{\rm Pl} \, a} & 0
\end{array}
\right) \; , \\
\tilde{\Omega}^2_{IJ , L/R} & =
\left(
\begin{array}{cc}
\displaystyle k^2 - \frac{a''}{a} + \frac{2 Q^2}{M_{\rm Pl}^2} \left[ a^2 m_Q^2 H^2 - \frac{\left[ \partial_\tau \left( a Q \right) \right]^2}{a^2 Q^2} \right] \;\;\; &
\displaystyle
- \frac{a H Q}{M_{\rm Pl}} \left[ \pm 2 k m_Q + 2 a m_Q \xi H - \frac{\partial_\tau \left( a Q \right)}{a Q} \right]
\\
\displaystyle
- \frac{a H Q}{M_{\rm Pl}} \left[ \pm 2 k m_Q + 2 a m_Q \xi H - \frac{\partial_\tau \left( a Q \right)}{a Q} \right] & \displaystyle k^2 \pm 2 k a \left( m_Q + \xi \right) H + 2 a^2 m_Q \xi H^2
\end{array}
\right) ,
\end{aligned}
\label{tensor-matrices}
\end{equation}
where subscript $L/R$ corresponds to left-/right-handed modes, respectively, and to $\pm$ on the right-hand side in the corresponding order.
While the $L$ and $R$ modes are decoupled from each other at linear order, metric ($\hat{h}$) and gauge-field ($\hat{T}$) tensor modes within each sector have linear mixings.
We decompose each sector in terms of creation and annihilation operators in the same manner as for scalar modes \eqref{DI-decompose}, i.e.
\begin{equation}
\hat\Delta^\sigma_I(\tau , \bm{k}) = \mathscr{T}_{IJ}^\sigma(\tau , k) \, \hat{b}_J^\sigma(\bm{k}) + \mathscr{T}_{IJ}^{\sigma \, *}(\tau , k) \, \hat{b}^{\sigma \, \dagger}_J(-\bm{k}) \; ,
\end{equation}
where $\hat\Delta^\sigma_I \equiv (\tilde{\mathcal{R}}^{-1})_{IJ} \hat{T}^\sigma_{J}$ are canonically normalized fields for the tensor, and $\hat{b}_J^\sigma$ and $\hat{b}^{\sigma \, \dagger}_J$ have commutation relations of the same form as in \eqref{commutation}.
The matrix of tensor mode functions $\mathscr{T}_{IJ}^\sigma$ has $2\times 2$ components for each polarization $\sigma=L,R$ and is given the Bunch-Davies initial conditions as in \eqref{BDvacuum}.
In Fig.~\ref{PTplot}, we show their power spectra defined as $\langle \hat{T}_I^\sigma(\tau,\bm k)\hat{T}_I^\sigma(\tau,\bm k')\rangle\equiv \delta(\bm k+\bm k') 2\pi^2\mathcal{P}_{T_I^\sigma}(\tau,k)/k^3$.
As discussed in previous works, the right-handed $SU(2)$ tensor mode undergoes a tachyonic instability around horizon crossing and is exponentially amplified. The right-handed gravitational waves sourced by the amplified $SU(2)$ tensor are substantially enhanced. On the other hand, the left-handed $SU(2)$ tensor mode does not have this instability and hence does not produce an interesting signature.
Henceforth, we concentrate only on the right-handed modes of the tensor perturbations and disregard the left-handed ones.
\begin{figure}[tbp]
\hspace{-2mm}
\includegraphics[width=74mm]{PTIR.pdf}
\hspace{5mm}
\includegraphics[width=74mm]{PTIL.pdf}
\caption
{The power spectra of the $SU(2)$ tensorial perturbation $T$ (blue solid) and the gravitational waves $h$ multiplied by the reduced Planck mass $M_{\rm Pl}$ (orange dashed). In the left and right panel, the right- and left-handed polarization modes are shown, respectively.
With the present parameters, the right-handed $T$ undergoes a tachyonic instability and gets amplified, while the left-handed $T$ exhibits no amplification. Thus, only the right-handed $h$ is significantly sourced.}
\label{PTplot}
\end{figure}
\section{Cubic Action with In-in Formalism}
\label{sec:cubic}
In this and the following sections we compute $3$-point correlation functions in this model. Our focus in this paper is on cross correlations between scalar ($S$) and tensor ($T$) perturbations. While the possible combinations of $3$-point functions are $STT$ and $SST$, the exponential enhancement due to the transient tachyonic instability occurs only in the tensor sector. This implies that the number of tensor modes involved in a correlation function counts the order of the exponential amplification.%
\footnote{This argument is not exactly valid in the cases of auto-correlations. Since phases of mode functions freeze once the modes become classical, their Green functions do not experience such enhancements \cite{Ferreira:2015omg, Agrawal:2017awz,Agrawal:2018mrg}. Cross correlations we consider in this paper, on the other hand, do not suffer such cancellation, and the naive power counting described here applies.}
Therefore, we are only interested in $STT$ correlations.
In the spatially flat gauge $\delta g_{ij} =a^2 \left( \delta_{ij} + h_{ij} \right)$, where $h_{ij}$ is the transverse-traceless part of the metric perturbations, $\zeta$ is directly related to the inflaton perturbation $\delta\varphi$ through $\zeta = - H \delta\varphi / \dot\phi_0$, where $\phi_0(t)$ is the vacuum expectation value (vev) of the inflaton.%
\footnote{Even if all the energetically subdominant components decay away after or during inflation, inflaton perturbation $\delta\varphi$ which is induced by them survives and contributes to curvature perturbation $\zeta$.
We conservatively concentrate on it in this paper.
Some other contributions are discussed in Appendix~\ref{appen:comparison}.}
The inflaton has no direct coupling to the $SU(2)$ gauge field, and thus it receives the effects of gauge field production only through gravitational interactions. The dominant channel among them is the gravitational coupling between $\delta\varphi$ and other scalar modes \cite{Ferreira:2014zia, Namba:2015gja}.
We focus on the coupling with $\delta\chi$, and leave those with the other scalar modes $\delta Q$ and $U$ to future studies.
The gravitational coupling between $\delta\varphi$ and $\delta\chi$ can in fact be treated perturbatively, and one can show that part of $\delta\hat\varphi(\bm{k})$ due to this coupling, denoted by $\delta_m\hat\varphi(\bm{k})$, is well approximated by $\delta_m\hat\varphi(\bm{k}) = \dot\phi_0 \dot\chi_0 \Delta N_{\chi , k} \delta\hat\chi(\bm{k}) / (M_{\rm Pl}^2 H^2)$ in terms of the Fourier modes, where $\Delta N_{\chi,k}$ is the number of e-folds after a given mode of $\delta\hat\chi$ crosses the Hubble radius until it starts decaying during or at the end of inflation \cite{Namba:2015gja, Dimastrogiovanni:2016fuu}. Since $\delta\hat\chi$ directly couples to the $SU(2)$ gauge field, $\delta_m\hat\varphi$ feels the gauge field production through the gravitational coupling.
Thus, in order to compute the dominant contribution to $\zeta$ sourced by the production, we consider the part contributing to \eqref{zeta-def},
\begin{equation}
\hat\zeta^{(s)} (\bm{k}) = - \frac{H}{\dot\phi_0} \, \delta_m\hat\varphi(\bm{k})
\simeq - \frac{\dot\chi_0}{M_{\rm Pl}^2 H} \, \Delta N_{\chi,k} \, \delta\hat\chi (\bm{k}) \; ,
\label{sourced_zeta}
\end{equation}
where superscript $(s)$ denotes sourced part.
The $STT$ $3$-point correlation function in turn yields
\begin{equation}
\langle \hat\zeta(\bm{k}_1) \, \hat h^{\sigma}(\bm{k}_2) \, \hat h^{\sigma'}(\bm{k}_3) \rangle \simeq
- \frac{\dot\chi_0}{M_{\rm Pl}^2 H} \, \Delta N_{\chi,k_1} \,
\langle \delta\hat\chi (\bm{k}_1) \, \hat h^{\sigma}(\bm{k}_2) \, \hat h^{\sigma'}(\bm{k}_3) \rangle \; ,
\label{zetahh-chihh}
\end{equation}
where $\sigma, \sigma'= L,R$ are tensor polarizations.
As argued in Sec.~\ref{subsec:outline}, therefore, the computation of the $STT$ correlation amounts to that of $\langle \delta\chi hh \rangle$.
We employ the in-in formalism, a perturbation theory in the operator formulation in which correlations of Heisenberg-picture operators are evaluated as expectation values on the ``in'' vacuum \cite{Weinberg:2005vy}.
In computing the $3$-point function as in \eqref{zetahh-chihh}, the leading-order, tree-level, contribution comes from the $STT$ part of cubic interaction Hamiltonian $\delta_3 H_{\rm int}^{STT}$ that correlates with $\delta\chi hh$.
In deriving $\delta_3 H_{\rm int}^{STT}$, some extra care needs to be taken. First, we need to use the constraint equations to solve for the non-dynamical variable $Y$ in favor of the dynamical ones $\delta\chi$, $\delta Q$, $U$ and $T_{ij}$. Since $\delta_3 H_{\rm int}^{STT}$ contains terms proportional to $Y S$ and $Y TT$, where $S = \{ \delta\chi , \delta Q , U \}$, solving the constraint equations as $Y = {\cal O}(TT)$ and $Y = {\cal O} (S)$ respectively leads to $STT$ interaction terms.
Secondly, among the $STT$ interaction terms we ignore the $Shh$ vertex terms.
The dominant part of right-handed gravitational waves is sourced by the gauge field ``tensor'' mode $T^R$ (strictly speaking $\mathscr{T}_{TT}^R$) due to the mixing. Although $T^R$ is directly coupled to the scalar modes $S$, they have only Planck-suppressed interactions with $h$, which can be treated perturbatively as in \eqref{sourced_zeta}.
Thus, part of $\delta_3 H_{\rm int}^{STT}$ of our interest consists of $\delta\chi TT$, $\delta Q TT$ and $UTT$, where $T$ now stands for the gauge field ``tensor'' modes.
Once these considerations are taken into account, the explicit expression of $\delta_3 H_{\rm int}^{STT}$ is found to be, in Fourier space,%
\footnote{This can be obtained as minus of cubic Lagrangian, and it is known to coincide with cubic Hamiltonian~\cite{Huang:2006eha}. We have explicitly checked the equivalence.}
\begin{equation}
\begin{aligned}
\delta_3 H_{\rm int}^{STT} & =
\sum_{\sigma , \sigma'} \int \frac{\mathrm{d}^3k \, \mathrm{d}^3p \, \mathrm{d}^3q}{(2\pi)^{3/2}} \, \delta^{(3)} ( \bm{k} + \bm{p} + \bm{q} )
\, e_{ai}^\sigma (\hat{\bm{p}}) \, e_{aj}^{\sigma'} (\hat{\bm{q}})
\\ & \times
\Bigg[
\frac{\lambda}{f} \,
\epsilon^{ijk} \, i q_k \, \delta\hat\chi_{\bm{k}} \, \frac{\mathrm{d} (a \hat{T}^\sigma_{\bm{p}})}{\mathrm{d}\tau} \, a \hat{T}^{\sigma'}_{\bm{q}}
- \frac{g \lambda}{2 f} \, \delta^{ij} \delta\hat\chi_{\bm{k}} \,
\frac{\mathrm{d}}{\mathrm{d}\tau} \left( a^3 Q \, \hat{T}^\sigma_{\bm{p}} \, \hat{T}^{\sigma'}_{\bm{q}} \right)
\\ & \qquad
- \frac{g \lambda \bar\chi}{2f} \, \delta^{ij} \, \frac{\mathrm{d}}{\mathrm{d}\tau} \left( a^3 \delta \hat{Q}_{\bm{k}} \, \hat{T}^\sigma_{\bm{p}} \hat{T}^{\sigma'}_{\bm{q}} \right)
- a^3 g \, \epsilon^{ijk} \, i q_k \, \delta \hat{Q}_{\bm{k}} \, \hat{T}^\sigma_{\bm{p}} \, \hat{T}^{\sigma'}_{\bm{q}}
\\ & \qquad
- a^3 g \, k_i k_j \, \hat{U}_{\bm{k}} \, \hat{T}^\sigma_{\bm{p}} \, \hat{T}^{\sigma'}_{\bm{q}}
\\ & \qquad
- \frac{g \epsilon^{ijk} \, i k_k}{k^2 + 2 a^2 g^2 Q^2}
\left(
a^2 \, \frac{g \lambda Q^2}{f} \, \delta\hat\chi _{\bm{k}}
- \frac{\mathrm{d}}{\mathrm{d}\tau} \left( a \delta \hat{Q}_{\bm{k}} \right)
+ 2 a g Q \, \frac{\mathrm{d}}{\mathrm{d}\tau} \left( a \hat{U}_{\bm{k}} \right)
- 2 a g \, \partial_\tau(aQ) \, \hat{U}_{\bm{k}} \right)
\\ & \qquad\quad \times
\, \frac{\mathrm{d} (a \hat{T}^\sigma_{\bm{p}})}{\mathrm{d}\tau} \, a \hat{T}^{\sigma'}_{\bm{q}} \Bigg] \; ,
\end{aligned}
\label{HISTT_Fourier}
\end{equation}
where $\hat{T}_{\bm{k}}^\sigma$ are Fourier modes of $SU(2)$ tensor modes with polarization $\sigma$ and momentum $\bm{k}$, and $\epsilon^{ijk}$ is a flat-space totally anti-symmetric symbol.
The last two lines in \eqref{HISTT_Fourier} originate from the terms of the form $YS$ and $YTT$.
Since only right-handed ($R$) modes of $\hat{T}^\sigma$ are enhanced with our choice of parameters, we neglect all the terms that contain left-handed modes. Then the above expression \eqref{HISTT_Fourier} can be rewritten compactly as
\begin{equation}
\delta_3 H_{\rm int}^{STT}(\tau) = \sum_{I = \chi , Q , U} \int \frac{\mathrm{d}^3k \, \mathrm{d}^3p \, \mathrm{d}^3q}{(2\pi)^{3/2}} \, \delta^{(3)} (\bm{k} + \bm{p} + \bm{q}) \, \mathcal{F}^{(I)}(\bm{k} , \bm{p} , \bm{q} , \tau) \, \hat{S}_{I, \bm{k}}(\tau) \, \hat{T}^R_{\bm{p}}(\tau) \, \hat{T}^R_{\bm{q}}(\tau) \; ,
\label{F-def}
\end{equation}
where the operator $\mathcal{F}^{(I)}$ acts on $\hat{S}_I \hat{T}^R \hat{T}^R$. The leading-order contribution to the correlator in \eqref{zetahh-chihh} in the in-in formalism is given by
\begin{equation}
\begin{aligned}
& \langle \delta\hat\chi_{\bm{k}_1} \hat h^{\sigma}_{\bm{k}_2} \hat h^{\sigma'} _{\bm{k}_3} (\tau) \rangle =
i \int^\tau_{-\infty} \mathrm{d}\tau' \, \left\langle \left[ \delta_3 H_{\rm int}^{STT} (\tau') , \, \delta\hat\chi_{\bm{k}_1}(\tau) \, \hat h^{\sigma}_{\bm{k}_2}(\tau) \, \hat h^{\sigma'}_{\bm{k}_3} (\tau) \right] \right\rangle \; .
\end{aligned}
\label{chihh_general}
\end{equation}
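The reduction of \eqref{chihh_general} to products of mode functions relies on the fact that, for free fields, the vacuum expectation value of the commutator factorizes and leaves twice the imaginary part of a product of mode functions, which is the origin of the ${\rm Im}[\cdots]$ structure in \eqref{Bzetahh_general} below. The following toy check verifies this on a truncated Fock space; the complex numbers are arbitrary stand-ins for mode-function values, not actual solutions of this model.

```python
import numpy as np

# Toy check: for three independent free fields X_i = f_i a_i + f_i^* a_i^dag,
# <0| [X1X2X3(tau'), X1X2X3(tau)] |0> = 2i Im[ f1f2f3(tau') (f1f2f3)^*(tau) ].
dim = 4
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # truncated annihilation op
ad = a.conj().T

def field(f):
    return f * a + np.conj(f) * ad

def triple(fs):
    """Operator X1 X2 X3 acting on the three-mode Fock space."""
    X1, X2, X3 = (field(f) for f in fs)
    return np.kron(np.kron(X1, X2), X3)

f_late = [0.3 - 0.7j, 1.1 + 0.2j, -0.4 + 0.9j]    # f_i(tau)
f_early = [0.8 + 0.1j, -0.2 - 1.3j, 0.5 + 0.5j]   # f_i(tau')

X_late, X_early = triple(f_late), triple(f_early)
vac = np.zeros(dim**3); vac[0] = 1.0              # Fock vacuum

lhs = vac @ (X_early @ X_late - X_late @ X_early) @ vac
rhs = 2j * np.imag(np.prod(f_early) * np.conj(np.prod(f_late)))
print(lhs, rhs)
```

The truncation is exact here because the vacuum matrix elements involve at most one excitation per mode.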
Since only right-handed modes are produced in the tensor sector, the correlation occurs only with right-handed modes, which can easily be shown explicitly.
We define bispectrum $B_{\zeta hh}(k_1 , k_2 , k_3)$ by
\begin{equation}
B_{\zeta h h} (k_1 , k_2 , k_3) \, \delta^{(3)} (\bm{k}_1 + \bm{k}_2 + \bm{k}_3) \equiv \langle \hat\zeta_{\bm{k}_1} \hat h^{R}_{\bm{k}_2} \hat h^{R}_{\bm{k}_3} \rangle \; ,
\label{bispec-def}
\end{equation}
whose general expression is obtained by using \eqref{zetahh-chihh} and calculating \eqref{chihh_general},
\begin{equation}
\begin{aligned}
B_{\zeta h h} (k_1 , k_2 , k_3) & = \frac{- 2 \Delta N_{\chi,k_1}}{(2\pi)^{3/2}} \, \frac{\dot\chi_0}{M_{\rm Pl}^2 H}
\sum_{I,J,M,N}
\int^\tau_{-\infty} \mathrm{d}\tau'
\left[ \mathcal{F}^{(I)}(- \bm{k}_1 , -\bm{k}_2 , -\bm{k}_3 , \tau')
+ \mathcal{F}^{(I)}(- \bm{k}_1 , -\bm{k}_3 , -\bm{k}_2 , \tau') \right]
\\ & \quad \times
{\rm Im} \left[ S_{\chi J , k_1}(\tau) \, R_{hM , k_2}(\tau) \, R_{hN , k_3}(\tau) \,
S_{IJ , k_1}^*(\tau') \, R_{TM , k_2}^*(\tau') \, R_{TN , k_3}^*(\tau') \right] \; ,
\end{aligned}
\label{Bzetahh_general}
\end{equation}
and the explicit expressions of $\mathcal{F}^{(I)}$ will be given in the next section. Here, for notational brevity, we denote $S_{IJ , k} (\tau) \equiv \mathcal{R}_{IK}(\tau , k) \mathscr{S}_{KJ}(\tau , k)$ for the scalar and $R_{IJ , k}(\tau) \equiv \tilde{\mathcal{R}}_{IK}(\tau , k) \mathscr{T}^R_{KJ}(\tau , k)$ for the right-handed tensor.
During the computation, we have discarded a disconnected piece. It is manifest from this expression that the result is symmetric under the interchange $\bm{k}_2 \leftrightarrow \bm{k}_3$. Note that $B_{\zeta hh}$ has mass dimension $-6$.
The fact that this bispectrum originates from $\delta\chi$ is captured by the overall factor $\dot\chi_0$, and the fact that it is due to the mixing between $\delta\chi$ and $\delta\varphi$ by $\Delta N_{\chi,k_1}$.
In the next section, we compute $B_{\zeta hh}$ using numerically calculated mode functions $S_{IJ,k}$ and $R_{IJ,k}$, and show our results of its shape and amplitude from the model.
\section{Result of the Mixed Non-gaussianity}
\label{sec:results}
Our goal is to compute the scalar-tensor-tensor correlation $\langle \zeta h h \rangle$ \eqref{zetahh-chihh}, whose bispectrum $B_{\zeta hh}$ takes the general form \eqref{Bzetahh_general}. In general, bispectra in a scale-invariant system can be characterized by two quantities: {\it shape} and {\it amplitude}. Due to background homogeneity, the three wave vectors in bispectra form a triangle, $\bm{k}_1 + \bm{k}_2 + \bm{k}_3 = 0$, as clearly seen in \eqref{bispec-def}, and thus the norms $k_1$, $k_2$ and $k_3$ uniquely determine the {\it shape} of bispectra. Due to background isotropy, the momentum dependence of bispectra can also be reduced to that on $k_1$, $k_2$ and $k_3$.
Moreover, (approximate) scale invariance in the system results in a scaling relation
\begin{equation}
B_{\zeta hh} ( s k_1 , s k_2 , s k_3 ) = s^{-6} B_{\zeta hh} ( k_1 , k_2 , k_3 ) \; ,
\label{Bzhh_scaleinv}
\end{equation}
at the leading order in the slow-roll expansion.
Taking these considerations into account, {\it shape} is often conveniently defined as
\begin{equation}
{\cal S}_{\zeta hh} \equiv {\cal N} k_1^2 k_2^2 k_3^2 B_{\zeta hh} \; ,
\label{shape-def}
\end{equation}
where ${\cal N}$ is an arbitrary normalization factor, and {\it amplitude} by non-linearity parameter
\begin{equation}
f_{\zeta hh}^{\rm NL} \equiv
\frac{10}{3 \left( 2 \pi \right)^{5/2} {\cal P}_\zeta^2} \,
\frac{\prod_i k_i^3}{\sum_i k_i^3} \,
B_{\zeta hh} \; ,
\label{fNL-def}
\end{equation}
where ${\cal P}_\zeta$ is the power spectrum of curvature perturbation.
The numerical factor in \eqref{fNL-def} is chosen so as to coincide with the standard definition of the non-linearity parameter for scalar auto non-gaussianity \cite{Komatsu:2001rj}; see also \cite{Barnaby:2010vf}.
With these definitions \eqref{shape-def} and \eqref{fNL-def}, ${\cal S}_{\zeta hh}$ and $f_{\zeta hh}^{\rm NL}$ are scaling free, i.e.
\begin{equation}
{\cal S}_{\zeta hh} ( s k_1 , s k_2 , s k_3 ) = s^0 {\cal S}_{\zeta hh} ( k_1 , k_2 , k_3 ) \; , \qquad
f_{\zeta hh}^{\rm NL} ( s k_1 , s k_2 , s k_3 ) = s^0 f_{\zeta hh}^{\rm NL} ( k_1 , k_2 , k_3 ) \; .
\end{equation}
Since our bispectrum is invariant under the exchange of the tensor-mode momenta, $\bm{k}_2 \leftrightarrow \bm{k}_3$, but not under exchanges involving $\bm{k}_1$, it is convenient to normalize $k_1$ and $k_2$ by $k_3$, defining
\begin{equation}
x_1 \equiv \frac{k_1}{k_3} \; , \qquad
x_2 \equiv \frac{k_2}{k_3} \; .
\label{x1x2-def}
\end{equation}
Then the $k$-dependences of ${\cal S}_{\zeta hh}$ and $f_{\zeta hh}^{\rm NL}$ are carried only by $x_1$ and $x_2$, ${\cal S}_{\zeta hh} = {\cal S}_{\zeta hh} ( x_1 , x_2)$ and $f_{\zeta hh}^{\rm NL} = f_{\zeta hh}^{\rm NL} ( x_1 , x_2)$.
Imposing the triangular inequalities on $k_1$, $k_2$ and $k_3$, and barring double-counting under $k_2 \leftrightarrow k_3$, the following region of $x_1$ and $x_2$ exhausts all the shapes without redundancy:
\begin{equation}
x_1 + x_2 \ge 1 \; , \qquad
x_2 + 1 \ge x_1 \; , \qquad
x_2 \le 1 \; ,
\end{equation}
where the first two conditions are triangular inequalities, the last removes the double-counting under $k_2 \leftrightarrow k_3$, and the remaining triangular inequality, $x_2 \le 1 + x_1$, is then automatically satisfied and hence redundant.
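As a quick numerical sketch of this region (an illustrative helper, not part of the paper's computation), the three conditions can be encoded directly:

```python
def in_shape_region(x1, x2):
    """Return True if (x1, x2) = (k1/k3, k2/k3) is an allowed, non-redundant
    configuration: two triangle inequalities plus the k2 <= k3 convention."""
    return (x1 + x2 >= 1.0) and (x2 + 1.0 >= x1) and (x2 <= 1.0)

# the folded configuration k1 = 2 k2 = 2 k3 is allowed
print(in_shape_region(2.0, 1.0))    # True
# a "triangle" with k1 + k2 < k3 is forbidden by momentum conservation
print(in_shape_region(0.3, 0.3))    # False
# x2 > 1 is excluded only to avoid double-counting under k2 <-> k3
print(in_shape_region(1.0, 1.5))    # False
```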
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{shape_region.pdf}
\caption{Region of the shape of non-gaussianity in the space of $x_1 \equiv k_1 / k_3$ and $x_2 \equiv k_2 / k_3$. Combining triangular inequalities and the condition to avoid redundancy ($k_2 \le k_3$), the colored region (blue$+$orange) exhausts all the configurations of a triangle formed by $\bm{k}_1$, $\bm{k}_2$ and $\bm{k}_3$. In the case of auto-three-point correlations, one can impose a further redundancy condition $k_1 \le k_2$, and the orange shaded region is sufficient, but in our present case, the region has to be extended to the entire colored region.}
\label{fig:shape_region}
\end{figure}
This region is shown in Fig.~\ref{fig:shape_region}. In the case of three-point auto correlations, since $\bm{k}_1$ would also be symmetric together with $\bm{k}_2$ and $\bm{k}_3$, one could further restrict the region by imposing $x_1 \le x_2$, leading to the orange part in Fig.~\ref{fig:shape_region}. However, in the current case for cross correlations, we need to extend the region to include the blue part of the figure.
In order to see different contributions from different terms in $\delta_3 H_{\rm int}^{STT}$ in \eqref{HISTT_Fourier}, we separately compute the terms of the form $\delta\chi TT$ (second line in \eqref{HISTT_Fourier}), $\delta Q TT$ (third line), $U TT$ (fourth line) and the terms originated from integrating out the non-dynamical mode $Y$ (last two lines). We denote the corresponding operators ${\cal F}^{(I)}$, introduced in \eqref{F-def}, by ${\cal F}^{(\chi)}$, ${\cal F}^{(Q)}$, ${\cal F}^{(U)}$ and ${\cal F}^{(I)}_{\rm rest}$, respectively. Note that ${\cal F}^{(I)}_{\rm rest}$ also has terms that contain $\delta\chi$, $\delta Q$ and $U$, which we denote ${\cal F}^{(\chi)}_{\rm rest}$, ${\cal F}^{(Q)}_{\rm rest}$ and ${\cal F}^{(U)}_{\rm rest}$, respectively. Their explicit expressions read
\begin{equation}
\begin{aligned}
{\cal F}^{(\chi)} (\bm{k} , \bm{p} , \bm{q} , \tau' ) & =
a^3 \, \frac{\lambda}{f} H \, e_{ai}^R (\hat{\bm{p}}) \, e_{ai}^{R} (\hat{\bm{q}}) \left[
\frac{m_Q}{2} \, \frac{\mathrm{d}^{(\bm{k})}}{\mathrm{d}\tau'}
+ \frac{q}{aH} \left( \frac{\mathrm{d}^{(\bm{p})}}{\mathrm{d}\tau'} + aH \right) \right] \; ,
\\
{\cal F}^{(Q)} ( \bm{k} , \bm{p} , \bm{q} , \tau')
& = a^4 g H \, e_{ai}^R (\hat{\bm{p}}) \, e_{ai}^{R} (\hat{\bm{q}}) \left(
\xi - \frac{q}{a H} \right)
\; ,
\\
{\cal F}^{(U)} (\bm{k} , \bm{p} , \bm{q} , \tau' )
&
= - a^3 g \, \frac{q_i p_j}{k} \, e^R_{ai}(\hat{\bm{p}}) \, e^{R}_{aj} (\hat{\bm{q}})
\; ,
\end{aligned}
\label{FI-expression}
\end{equation}
and
\begin{equation}
\begin{aligned}
{\cal F}^{(\chi)}_{\rm rest} ( \bm{k} , \bm{p} , \bm{q} , \tau' ) &
= - a^4 H^2 \, \frac{\lambda}{f} \, m_Q^2 \,
\frac{\left( p - q \right) e_{ai}^R (\hat{\bm{p}}) \, e_{ai}^R (\hat{\bm{q}})}{k^2 + 2 a^2 H^2 m_Q^2}
\left( \frac{\mathrm{d}^{(\bm{p})}}{\mathrm{d}\tau'} + a H \right) \; ,
\\
{\cal F}^{(Q)}_{\rm rest} ( \bm{k} , \bm{p} , \bm{q} , \tau' ) &
= a^3 g \, \frac{\left( p - q \right) e_{ai}^R (\hat{\bm{p}}) \, e_{ai}^R (\hat{\bm{q}})}{k^2 + 2 a^2 H^2 m_Q^2}
\left( \frac{\mathrm{d}^{(\bm{k})}}{\mathrm{d}\tau'} + aH \right)
\left( \frac{\mathrm{d}^{(\bm{p})}}{\mathrm{d}\tau'} + aH \right) \; ,
\\
{\cal F}^{(U)}_{\rm rest} ( \bm{k} , \bm{p} , \bm{q} , \tau' ) &
= 2 a^4 H \, \frac{g m_Q}{k} \, \frac{\left( p - q \right) e_{ai}^R (\hat{\bm{p}}) \, e_{ai}^R (\hat{\bm{q}})}{k^2 + 2 a^2 H^2 m_Q^2}
\left(
- \frac{\mathrm{d}^{(\bm{k})}}{\mathrm{d}\tau'} - a H
+ \frac{\partial_{\tau'} \left( aQ \right)}{a Q}
\right)
\left( \frac{\mathrm{d}^{(\bm{p})}}{\mathrm{d}\tau'} + a H \right) \; ,
\end{aligned}
\label{FrestI-expression}
\end{equation}
where $H = \dot{a} /a$ is the physical Hubble parameter, and time derivatives $\mathrm{d}^{(\bm{p})} / \mathrm{d} \tau'$ act only on quantities $S_{IJ} = {\cal R}_{IK} \mathscr{S}_{KJ}$ and $R_{IJ}= \tilde{\mathcal{R}}_{IK} \mathscr{T}^R_{KJ}$ that depend on both $\tau'$ and $\bm{p}$ (thus do not act on background quantities directly).
In obtaining these expressions, we have also used $\bm{k} + \bm{p} + \bm{q} = 0$ thanks to the delta function in \eqref{F-def} and the properties of polarization tensors (see below \eqref{SITI-def}).
Then using \eqref{Bzetahh_general}, we can write $B_{\zeta hh}$ as
\begin{equation}
B_{\zeta h h} (k_1 , k_2 , k_3) \,
= \frac{- g \dot{\chi} H^2}{2^2 (2\pi)^{3/2} M_{\rm Pl}^4} \,
\frac{\Delta N_{\chi , k_1}}{k_1^2 k_2^2 k_3^2}
\sum_{I= \chi , Q , U} \left[ {\cal J}^{(I)}(k_1 , k_2 , k_3 , \tau) + {\cal J}^{(I)}_{\rm rest}(k_1 , k_2 , k_3 , \tau) \right]
\; ,
\label{B-expression}
\end{equation}
where
\begin{equation}
\begin{aligned}
{\cal J}^{(I)}(k_1 , k_2 , k_3 , \tau) & \equiv \frac{k_1 k_2 k_3}{g H^3}
\sum_{J,M,N}
\int^\tau_{-\infty} \mathrm{d}\tau'
\left[ {\cal F}^{(I)}(-\bm{k}_1 , -\bm{k}_2 , -\bm{k}_3 , \tau') + {\cal F}^{(I)}(-\bm{k}_1 , -\bm{k}_3 , -\bm{k}_2 , \tau') \right]
\\ & \quad \times
{\rm Im} \left[
\frac{S_{\chi J, k_1}(\tau) \, R_{h M, k_2}(\tau) \, R_{h N, k_3}(\tau) \,
S_{IJ, k_1}^*(\tau') \, R_{TM, k_2}^*(\tau') \, R_{TN, k_3}^*(\tau')}{(2^3 k_1 k_2 k_3)^{-1} M_{\rm Pl}^{-2}} \right] \; ,
\end{aligned}
\label{JI-def}
\end{equation}
with no summation on index $I$, and the same for ${\cal J}_{\rm rest}^{(I)}$ only with ${\cal F}^{(I)}$ replaced by ${\cal F}^{(I)}_{\rm rest}$.
Due to the scaling of $B_{\zeta hh}$ in \eqref{Bzhh_scaleinv}, one can see that ${\cal J}^{(I)}$ and ${\cal J}^{(I)}_{\rm rest}$ are scaling free: ${\cal J}^{(I)} (s k_1 , s k_2 , s k_3) = {\cal J}^{(I)} (k_1 , k_2 , k_3)$, and the same for ${\cal J}^{(I)}_{\rm rest}$, in de Sitter.
Thus they can be written in terms of $x_1$ and $x_2$, defined in \eqref{x1x2-def}, and are independent of the size of the triangle. Their explicit expressions are summarized in Appendix \ref{appen:explicit}.
Using ${\cal J}^{(I)}$ and ${\cal J}^{(I)}_{\rm rest}$ in \eqref{JI-explicit} and \eqref{JrestI-explicit}, we compute the shape \eqref{shape-def}
\begin{equation}
{\cal S}_{\zeta hh} ( x_1 , x_2 ) = - {\cal N}' \,
\sum_{I= \chi , Q , U} \left[ {\cal J}^{(I)}(x_1 , x_2) + {\cal J}^{(I)}_{\rm rest}(x_1 , x_2) \right] \; ,
\label{shape-expression}
\end{equation}
where ${\cal N}'$ is an arbitrary normalization factor,%
\footnote{Although $\Delta N_{\chi , k_1}$ depends on $k_1$, we absorb this factor in the definition of $\mathcal{N}'$ and exclude it from the definition of shape for simplicity.}
and the non-linearity parameter \eqref{fNL-def}
\begin{equation}
f_{\zeta hh}^{\rm NL} =
\frac{- 5 \Delta N_{\chi , k_1}}{6 \left( 2 \pi \right)^{4} {\cal P}_\zeta^2} \,
\frac{g \dot{\chi} H^2}{M_{\rm Pl}^4} \,
\frac{x_1 x_2}{x_1^3 + x_2^3 + 1} \,
\sum_{I= \chi , Q , U} \left[ {\cal J}^{(I)}(x_1 , x_2) + {\cal J}^{(I)}_{\rm rest}(x_1 , x_2) \right] \; .
\label{fNL-expression}
\end{equation}
In computing ${\cal J}^{(I)}$ and ${\cal J}^{(I)}_{\rm rest}$, we solve the matrix-form equations of motion for the scalar sector, \eqref{scalarEoM}, and the ones for the tensor. We then use the rotation matrix ${\cal R}_{IJ}$ in \eqref{scalar-rotation} for scalar and $\tilde{\cal R}_{IJ}$ in \eqref{tensor-matrices} for tensor to obtain $S_{IJ , k}$ and $R_{IJ , k}$, which we need to perform the time integrals in \eqref{JI-def}.
We neglect slow-roll corrections and treat all the background quantities as constants except for the scale factor $a = -1 / (H\tau)$.
As fiducial values of parameters, we take the following for the purpose of straightforward comparison with Ref.~\cite{Dimastrogiovanni:2018xnn},
\begin{equation}
m_Q = 3.45 \; , \quad
f = 10^{-2} M_{\rm Pl} \; , \quad
\epsilon_B = 3 \cdot 10^{-5} \; , \quad
\lambda = 1000 \; ,
\label{parametervalues}
\end{equation}
and other parameters are automatically fixed by attractor solutions \eqref{attractor} as $\xi \simeq m_Q + m_Q^{-1} \simeq 3.74$, $\Lambda \simeq 159$ and $W_{\chi\chi} \simeq -41.3 H^2$, with potential form $W(\chi) = \mu^4 \left[ 1 + \cos(\chi /f) \right]$ and initial condition $\chi_0 = 0.9 \cdot \pi /2$.
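The quoted attractor values can be checked numerically; the sketch below (in units $M_{\rm Pl} = 1$, using the attractor relation $\xi \simeq m_Q + m_Q^{-1}$ and the relation $\lambda = m_Q \Lambda f / (M_{\rm Pl} \sqrt{\epsilon_B})$ quoted later in the text) reproduces the stated $\xi$ and $\Lambda$:

```python
import math

# fiducial parameters, in units where M_Pl = 1
m_Q, f, eps_B, lam = 3.45, 1.0e-2, 3.0e-5, 1000.0

xi = m_Q + 1.0 / m_Q                       # attractor relation for xi
Lam = lam * math.sqrt(eps_B) / (m_Q * f)   # inverted lambda relation

print(round(xi, 2))    # 3.74
print(round(Lam))      # 159
```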
We are interested in the effect of the production of tensor modes on the bispectrum $B_{\zeta hh}$, an effect that is localized in time around Hubble crossing for each mode. Focusing on this effect, and to prevent other UV and IR behaviors from taking over, we restrict the time integrals in ${\cal J}^{(I)}$ and ${\cal J}^{(I)}_{\rm rest}$ to a limited interval of $\sim 7$ e-folds before and after Hubble crossing, which suffices to correctly include the production effect. Also, in order to deal with the fast oscillations in the integrands, we employ the Clenshaw--Curtis rule for numerical integration in Mathematica.%
\footnote{To cross-check the validity of this numerical method, we have compared numerical results to a crude analytical estimate, and they match each other up to ${\cal O}(1)$ difference, which we nonetheless expect due to low accuracy of our analytical method.}
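The Clenshaw--Curtis rule itself is straightforward to implement; the sketch below is a minimal NumPy version (not the Mathematica routine used for the actual integrals) applied to a toy oscillatory integrand:

```python
import numpy as np

def clenshaw_curtis(f, a, b, n=64):
    """Integrate f on [a, b] by Clenshaw-Curtis quadrature (n even,
    n + 1 Chebyshev nodes).  Well suited to oscillatory integrands."""
    theta = np.pi * np.arange(n + 1) / n
    j = np.arange(n // 2 + 1)
    coef = 1.0 / (1.0 - 4.0 * j**2)     # moments of cos(2 j theta) on [0, pi]
    coef[0] *= 0.5                      # first and last terms are halved
    coef[-1] *= 0.5
    w = (4.0 / n) * (np.cos(2.0 * np.outer(theta, j)) @ coef)
    w[0] *= 0.5                         # endpoint weights are halved
    w[-1] *= 0.5
    x = 0.5 * (b - a) * np.cos(theta) + 0.5 * (b + a)   # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(x))

# oscillatory test integral: int_0^10 cos(5 x) dx = sin(50) / 5
val = clenshaw_curtis(lambda x: np.cos(5.0 * x), 0.0, 10.0, n=64)
err = abs(val - np.sin(50.0) / 5.0)
print(err < 1e-6)   # True: spectral accuracy on a smooth oscillatory integrand
```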
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{Schi_mQ345.png} \hfill
\includegraphics[width=0.47\textwidth]{SQ_mQ345.png}
\vspace{2mm}
\includegraphics[width=0.47\textwidth]{SU_mQ345.png} \hfill
\includegraphics[width=0.47\textwidth]{Srestchi_mQ345.png}
\vspace{2mm}
\includegraphics[width=0.47\textwidth]{SrestQ_mQ345.png} \hfill
\includegraphics[width=0.47\textwidth]{SrestU_mQ345.png}
\caption{The shape of each contribution to ${\cal S}_{\zeta hh}$ in \eqref{shape-expression}. Each ${\cal S}^{(I)}$ corresponds to the term ${\cal J}^{(I)}$, and similarly for ${\cal S}^{(I)}_{\rm rest}$. The $z$-axis is normalized such that the sum of all the contributions becomes unity at the folded configuration $x_1 = 2 x_2 = 2$, where the overall signal is peaked.
Note that, for the purpose of presentation, the orientation of the $x$- and $y$-axes is rotated by almost $180$ degrees compared to that in Fig.~\ref{fig:shape_region}.
One can see hierarchical relations $\mathcal{S}^{(\chi)}, \mathcal{S}^{(Q)}\gtrsim \mathcal{S}^{(\chi)}_{\rm rest}\gg \mathcal{S}^{(U)}, \mathcal{S}^{(Q)}_{\rm rest}, \mathcal{S}^{(U)}_{\rm rest}$ in terms of their peak magnitude.}
\label{fig:shape-each}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{Stot_mQ345.png}
\caption{The overall shape of the scalar-tensor-tensor non-gaussianity, \eqref{shape-expression}.
The peak is located at the folded configuration. This is obtained by summing all the contributions plotted in Fig.~\ref{fig:shape-each}.
Note that, as in Fig.~\ref{fig:shape-each}, the orientation of the $x$- and $y$-axes is rotated by almost $180$ degrees compared to that in Fig.~\ref{fig:shape_region}.}
\label{fig:shape-total}
\end{figure}
The result of each contribution to ${\cal S}_{\zeta hh}$ in \eqref{shape-expression} is shown in Fig.~\ref{fig:shape-each}, and the overall shape in Fig.~\ref{fig:shape-total}. The normalization factor ${\cal N}'$ is chosen such that ${\cal S}_{\zeta hh}$ becomes unity at the peak configuration, which is in our case the folded shape, namely ${\cal S}_{\zeta hh} ( x_1 = 2 , x_2 = 1 ) = 1$.
As can be seen both from the figures and from the expressions \eqref{JI-explicit} and \eqref{JrestI-explicit} for ${\cal J}^{(I)}$ and ${\cal J}^{(I)}_{\rm rest}$, every contribution vanishes on the line $x_1 + x_2 - 1 = 0$, i.e.~$k_1 + k_2 = k_3$ (and $1 + x_1 - x_2 = 0$, which is redundant), simply because such an interaction process by one scalar and two tensor modes is not allowed while respecting momentum conservation. On top of that, ${\cal J}^{(U)}$ vanishes at $x_2 + 1 = x_1$, i.e.~$k_1 = k_2 + k_3$, since the vertex for $UTT$ interaction is proportional to $k_{3i} e_{ai}(\hat{\bm{k}}_2)$ (see \eqref{FI-expression}), which must vanish for this configuration as $\hat{\bm{k}}_2 = \hat{\bm{k}}_3$ ($= -\hat{\bm{k}}_1$).
On the other hand, each one of ${\cal J}^{(I)}_{\rm rest}$ vanishes at $k_2 = k_3$ ($x_2 = 1$), as well as $k_1 + k_2 = k_3$. This is because the corresponding part of the interaction Hamiltonian (last two lines of \eqref{HISTT_Fourier}) is antisymmetric in exchange of $\bm{p}$ ($= \bm{k}_2$) and $\bm{q}$ ($= \bm{k}_3$). Due to the nature of the $\langle \zeta h h \rangle$ correlation function, however, $\bm{k}_2$ and $\bm{k}_3$ have to be symmetric, leading to vanishing correlation at $k_2 = k_3$.
From Fig.~\ref{fig:shape-each}, we observe that the dominant contribution comes from ${\cal J}^{(\chi)}$ and ${\cal J}^{(Q)}$, which are similar both in shape and in size. Both peak at the folded configuration $x_1 = 2 x_2 = 2$ (i.e.~$k_1 = 2k_2 = 2k_3$). This can be understood as follows: production of tensor modes is localized in time around Hubble crossing as is seen in Fig.~\ref{PTplot}, and thus correlation between tensor modes is maximum if they cross the horizon at the same time, leading to $k_2 = k_3$.
On the other hand, the scalar perturbations are more efficiently sourced by the tensor modes in earlier times, because the Green functions of scalar perturbations are decaying around and after the horizon crossing due to their mass.%
\footnote{We call ${\rm Im} \left[
S_{\chi J, k_1}(\tau) S_{IJ, k_1}^*(\tau') \right]$ in eq.~\eqref{JI-def} the scalar Green functions. The non-oscillating part of the Green functions starts decaying around $k_1 / a \sim 6 H$. This timing is set by the behavior of the eigenvalues of the mass matrix $\Omega^2_{IJ}$ in \eqref{scalar-matrices}.
One of the eigenvalues that corresponds to the lightest eigenmode crosses zero around this time. The timing is independent of the parameters $m_Q$, $\Lambda$ and $W_{\chi\chi}$, as long as $\Lambda \gg m_Q \gg 1$, which is the parameter regime of our interest. The physical interpretation of this specific moment is obscure due to the complicated expressions of the eigenvalues.}
Hence their correlation is maximized when $k_1$ correlates with $k_2$ and $k_3$ modes with hierarchy $k_1 > k_2, k_3$. Given the momentum conservation $\bm k_1+\bm k_2+\bm k_3=0$, this is achieved at the folded configuration $k_1 = 2 k_2 = 2 k_3$, as we see in our result.
In Fig.~\ref{fig:shape-each}, the contributions other than ${\cal J}^{(\chi) , (Q)}$ appear not to follow this argument. In fact, however, the integrals \eqref{JI-explicit} and \eqref{JrestI-explicit} in ${\cal J}^{(I)}$ and ${\cal J}^{(I)}_{\rm rest}$ are by themselves maximal at the folded configuration. Due to the polarization structure discussed in the previous paragraph, these other contributions carry helicity prefactors in \eqref{JI-explicit} and \eqref{JrestI-explicit} that force them to vanish at the folded configuration $k_1 = 2 k_2 = 2 k_3$.
These contributions are nonetheless subdominant, and the dominant part is controlled by ${\cal S}^{(\chi)}$ and ${\cal S}^{(Q)}$, and therefore the overall shape of non-gaussianity is peaked at the folded configuration, as is seen in Fig.~\ref{fig:shape-total}.
In order to quantify $f_{\zeta hh}^{\rm NL}$, it is more convenient to write \eqref{fNL-expression} in terms of our model parameters. In this regard we replace $\dot\chi_0$, $g$ and $\lambda$ using the relations $\dot\chi_0 = 2fH \xi / \lambda$, $g = m_Q^2 H / (M_{\rm Pl} \sqrt{\epsilon_B})$ and $\lambda = m_Q \Lambda f / (M_{\rm Pl} \sqrt{\epsilon_B})$. Moreover, the tensor-to-scalar ratio $r$ in this model can be written as~\cite{Dimastrogiovanni:2016fuu}
\begin{equation}
r = \frac{H^2}{\pi^2 M_{\rm Pl}^2 {\cal P}_\zeta} \left[ 2 + \epsilon_B {\cal F}^2(m_Q) \right] \; ,
\label{r-expression}
\end{equation}
where the first term in the square brackets corresponds to the standard prediction from vacuum fluctuations of the graviton and the second to the contribution from particle production. The explicit expression of ${\cal F}^2(m_Q)$ can be found in Ref.~\cite{Dimastrogiovanni:2016fuu} and is well fitted in the range $m_Q \in [ 3 , 5]$ by
\begin{equation}
{\cal F}^2 \simeq 0.11 \, m_Q^{7.7} \, {\rm e}^{1.94 m_Q} \; , \qquad
3 \le m_Q \le 5 \; .
\end{equation}
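For the fiducial values $m_Q = 3.45$ and $\epsilon_B = 3 \cdot 10^{-5}$, this fit implies that the sourced term dominates the vacuum term $2$ in \eqref{r-expression}; a short numerical check:

```python
import math

def F2(m_Q):
    """Fit to the particle-production factor, valid for 3 <= m_Q <= 5."""
    return 0.11 * m_Q**7.7 * math.exp(1.94 * m_Q)

eps_B = 3.0e-5
val = eps_B * F2(3.45)     # sourced term in r at the fiducial parameters
print(val > 2.0)           # True: sourced gravitational waves dominate r
```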
Using \eqref{r-expression} to replace $H/M_{\rm Pl}$, we can re-express $f_{\zeta hh}^{\rm NL}$ in \eqref{fNL-expression} as
\begin{equation}
f_{\zeta hh}^{\rm NL} =
\frac{- 5 \Delta N_{\chi , k_1}}{2^4 \cdot 3} \,
\frac{m_Q \xi \, r^2}{\Lambda \left( 2 + \epsilon_B {\cal F}^2 \right)^2} \,
\frac{x_1 x_2}{x_1^3 + x_2^3 + 1} \,
\sum_{I= \chi , Q , U} \left[ {\cal J}^{(I)}(x_1 , x_2) + {\cal J}^{(I)}_{\rm rest}(x_1 , x_2) \right] \; ,
\label{fNL-expression2}
\end{equation}
where the dependence on ${\cal P}_\zeta$ conveniently cancels out.
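The overall numerical prefactor of \eqref{fNL-expression2} at the folded configuration can be evaluated directly from the fiducial parameters; the sketch below (in units $M_{\rm Pl} = 1$) computes only this prefactor, since the $\mathcal{J}$ sums require the full mode-function integrals and are not reproduced here:

```python
import math

# fiducial parameters (M_Pl = 1); Delta N and r as quoted in the text
m_Q, f, eps_B, lam = 3.45, 1.0e-2, 3.0e-5, 1000.0
dN, r = 60.0, 0.06

xi = m_Q + 1.0 / m_Q                              # attractor relation
Lam = lam * math.sqrt(eps_B) / (m_Q * f)          # inverted lambda relation
F2 = 0.11 * m_Q**7.7 * math.exp(1.94 * m_Q)       # fit to F^2(m_Q)

x1, x2 = 2.0, 1.0                                 # folded configuration
shape = x1 * x2 / (x1**3 + x2**3 + 1.0)           # = 0.2
pref = -5.0 * dN / (2**4 * 3) * m_Q * xi * r**2 \
       / (Lam * (2.0 + eps_B * F2)**2) * shape

print(f"{pref:.2e}")   # about -2.4e-07
```

Dividing the quoted $f_{\zeta hh}^{\rm NL} \simeq -1.1$ by this prefactor suggests $\sum_I [\mathcal{J}^{(I)} + \mathcal{J}^{(I)}_{\rm rest}]$ of order $10^6$--$10^7$ at the folded point; this is an inference from the quoted numbers, not an independent computation.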
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{fNL_mQ3to5.png}
\caption{The values of $\vert f_{\zeta hh}^{\rm NL} \vert$ at the folded configuration $x_1 = 2 x_2 = 2$ as a function of $m_Q$.
Here the sign of $f_{\zeta hh}^{\rm NL}$ is in fact negative.
In the numerical computation we set other free parameters as in \eqref{parametervalues}, namely $f = 10^{-2} M_{\rm Pl} , \, \epsilon_B = 3 \cdot 10^{-5} , \, \lambda = 1000$, and other parameters are automatically fixed. For $f_{\zeta hh}^{\rm NL}$ values, we take $\Delta N_{\chi , k_1} = 60$ and $r = 0.06$.}
\label{fig:fNL_mQ}
\end{figure}
For the fiducial parameters in \eqref{parametervalues}, the value of $f_{\zeta hh}^{\rm NL}$ at the folded configuration $x_1 = 2 x_2 = 2$ is
\begin{equation}
f_{\zeta hh}^{\rm NL} ( x_1 = 2 , \, x_2 = 1 ) \simeq -1.1 \, \frac{\Delta N_{\chi , k_1}}{60} \, \left( \frac{r}{0.06} \right)^2 \; , \qquad
m_Q = 3.45 \; .
\end{equation}
For other values of $m_Q$, we repeat numerical calculations by varying $m_Q$ and plot $f_{\zeta hh}^{\rm NL}$ at the same configuration (which gives the peak amplitudes) for the region $3 \le m_Q \le 5$ in Fig.~\ref{fig:fNL_mQ}. In this plot we fix the model parameters $f = 10^{-2} M_{\rm Pl}$, $\epsilon_B = 3 \cdot 10^{-5}$ and $\lambda = 1000$ as in \eqref{parametervalues}, and $\Delta N_{\chi , k_1} = 60$ and $r = 0.06$.
We observe that $f_{\zeta hh}^{\rm NL}$ depends almost linearly on $m_Q$; indeed, the exponential factor coming from ${\cal F}^{-4}$ in \eqref{fNL-expression2} is expected to cancel that from ${\cal J}^{(I)}$.
Note that the values of $f_{\zeta hh}^{\rm NL}$ should also depend on other model parameters such as $\Lambda$ and $\epsilon_B$.
Moreover the total signal of non-gaussianity should receive contributions from the gravitational coupling between $\delta\varphi$ and the scalar modes of $\delta A_\mu^a$ as well, as briefly mentioned at the beginning of Sec.~\ref{sec:cubic}.
The main purpose of this study is to report a consistent treatment of mixing effects in cosmological perturbations and the implementation of the in-in formalism in this context. We leave the parameter search and the assessment of detectability against actual observational sensitivities to a future, more comprehensive study.
\section{Summary and Discussion}
\label{sec:summary}
In this paper, we considered the coupled system of a spectator axion field and $SU(2)$ gauge fields during inflation.
Axion dynamics in the primordial universe induces several phenomenologically intriguing signatures in the CMB observables. The motion of the axion during inflation results in copious production of gauge fields in a parity-violating manner. For Abelian gauge fields, production occurs for each Fourier mode but is localized around its Hubble crossing time. For non-Abelian gauge fields, on the other hand, in particular those of an $SU(2)$ (sub-)group, the degrees of freedom that behave as ``scalar'' under the global part of $SO(3) \cong {\rm adj}[SU(2)]$, which leaves the background spatial isotropy intact, can be sustained for a prolonged period during inflation, due to their self-interactions as well as their coupling to the axion. This in turn leads to an attractor behavior for the homogeneous modes, in which the gauge field and the axion support each other against decaying away. Inhomogeneous perturbations of these ``scalar'' degrees of freedom inevitably inherit non-trivial behaviors from the background attractor, and mixings among the ``scalar'' modes are therefore substantial.
Effects of such ``scalar'' mixings are most visible in the mixed scalar-tensor-tensor non-gaussianity $\langle \zeta h h\rangle$ that is sourced by the perturbations of the spectator axion and the $SU(2)$ gauge fields. In contrast to previous works, we treated the mixing of the scalar and tensor fluctuations non-perturbatively with the NPS method. For consistency, we then took into account the contributions of all the relevant interaction vertices, not only $\delta\chi TT$ but also $\delta Q TT$ and $U TT$. We found that the $\delta Q TT$ contribution, which had previously been disregarded as a higher-order correction, is actually significant.
We reported our results of the $\langle \zeta hh \rangle$ non-gaussianity in Sec.~\ref{sec:results}. Fig.~\ref{fig:shape-each} shows that the signal is dominated by the contributions from the $\delta\chi TT$ and $\delta Q TT$ vertices, which are comparable to each other. As clearly seen from Fig.~\ref{fig:shape-total}, the total bispectrum is peaked at the folded configuration, in which the wave number of the scalar mode $\zeta$ exceeds those of the two tensor modes $h$, which are mutually equal, i.e.~$k_1 = 2 k_2 = 2k_3$. This can be understood as follows: the two tensor modes should have the same wave numbers, since the tachyonic enhancement of one of the helicity modes occurs only near Hubble crossing (see Fig.~\ref{PTplot}), and thus their correlation is maximum if they have the same momenta and cross the horizon at the same time. On the other hand, there is no substantial enhancement in the scalar sector; instead, the scalar modes are more efficiently sourced by these tensor modes at earlier times, even before $k \sim aH$.
This is caused by non-trivial mass matrix of the scalar modes $\Omega^2_{IJ}$ in \eqref{scalar-matrices}.
Hence maximal correlation occurs for $k_1 > k_2, k_3$, which, together with the momentum conservation, results in the folded-shape $\langle \zeta hh \rangle$ non-gaussianity. This shape dependence is in large part a result of the scalar mixings and is correctly obtained by the consistent treatment of the mixings. To our knowledge, our result is the first example of folded-shape non-gaussian cross correlations in models of particle production during inflation.
We found that the non-linearity parameter of our $B_{\zeta hh}$ can be ${\cal O}(1)$, as seen in Fig.~\ref{fig:fNL_mQ}. We would like to come back to consideration on the detectability of these parity-violating signals in the CMB observables, such as $TBB$ and $EBB$ correlations, as well as on the comprehensive search for parameter dependence, in our future studies.
Despite the above interesting results, we do not claim that the computation of $B_{\zeta hh}$ is complete.
In fact, in this paper, we did not calculate all the contributions to the scalar-tensor-tensor non-gaussianity $\langle\zeta h h\rangle$ even at the leading order.
The scalar perturbations $\delta Q$ and $U$ are gravitationally coupled to the inflaton $\delta\phi$ and thus they induce the curvature perturbation in the same way as $\delta\chi$ does. In these channels, $\langle\delta Q hh\rangle$ and $\langle U hh\rangle$ are proportional to $\langle \zeta hh \rangle$ in similar ways to eq.~\eqref{zetahh-chihh}.
We do not see an obvious reason that these contributions are negligible compared to $\langle\delta \chi hh\rangle$, and there is a chance that they might non-trivially change the result in this paper.
These contributions can be calculated in essentially the same way as we did in this paper.
We hope to come back to this issue in the near future.
The detectability of the mixed non-gaussianity $\langle \zeta hh\rangle$ has not yet been thoroughly investigated.
Although some earlier studies discuss the CMB observations~\cite{Shiraishi:2012sn,Shiraishi:2013vha,Bartolo:2018elp},
those works assumed different production mechanisms of non-gaussianity, and the resulting shapes are distinct, so their analyses cannot be directly applied to our case. A dedicated study is needed to determine the observability of our result;
we leave this for future work.
The calculation method developed in this paper can be applied to other quantities.
It would be interesting to revisit the tensor non-gaussianity $\langle hhh\rangle$, which has so far been computed without the NPS method, although we naively expect that higher-order corrections from the mixing terms are insignificant in the tensor case, as discussed in Sec.~\ref{subsec:outline}.
It is also important to precisely estimate the $1$-loop contribution to the curvature power spectrum $\mathcal{P}_\zeta$, because we know the observed value of $\mathcal{P}_\zeta$ on the CMB scales to a great precision and hence it potentially puts a tight constraint on this model.
Recently, the $1$-loop $\mathcal{P}_\zeta$ was estimated in the model of our interest in \cite{Dimastrogiovanni:2018xnn} and was more thoroughly calculated in the Chromo-natural inflation model in Ref.~\cite{Papageorgiou:2018rfx} in which the mixing effect was treated perturbatively.
We would like to explore these directions in future work.
\begin{acknowledgments}
R.N. would like to thank Emanuela Dimastrogiovanni, Valerie Domcke, Matteo Fasiello, Elisa M.~G.~Ferreira, A.~Emir G\"{u}mr\"{u}k\c{c}\"{u}o\u{g}lu, Kaloian D.~Lozanov, Azadeh Maleknejad, Kyohei Mukaida and Lorenzo Sorbo for helpful discussions.
The work of T.F. is partially supported by the Grant-in-Aid for JSPS Research Fellow No.~17J09103.
R.N. was in part supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada
and by the Lorne Trottier Chair in Astrophysics and Cosmology at McGill University.
\end{acknowledgments}
\section{Introduction}
The non-strange baryon spectra below $\sim 2500$ MeV
reveal, isospin by isospin, a striking phenomenon:
series of $K$ mass-degenerate pairs of resonances of opposite
spatial parities, with spins ranging from
$\frac{1}{2}^{\pm}$ to $\left( K- \frac{1}{2}\right)^{\pm}$,
terminated by a highest spin--$\left( K+\frac{1}{2}\right)$
resonance that remains unpaired \cite{MK-97}. Such series
(displayed in Fig.~1) perfectly fit into $SU(2)_I\otimes O(4)$
representations of
the type $\left(\frac{K}{2},\frac{K}{2} \right)\otimes
\lbrack \left( \frac{1}{2},0\right) \oplus \left( 0, \frac{1}{2}\right)
\rbrack$, an observation due again to \cite{MK-97}.
\begin{figure}
{(a)}\includegraphics[width=70mm,height=70mm]{Nucleon.eps}
{(b)}\includegraphics[width=70mm,height=70mm]{Delta.eps}
\caption{Experimentally observed baryon resonances (l.h.s.) $N$ and (r.h.s.)
$\Delta$. The dash-dotted lines represent the series mass averages.
Notice that the resonances
with masses above 2000 MeV are of significantly lower
confidence than those with
masses below 2000 MeV, where the degeneracy is very well pronounced. Empty
squares denote predicted (``missing'') states. Typical is the {\bf
systematic lack of a parity partner to the highest-spin states}
$D_{I3}$, $F_{I7}$, and $H_{I, 11}$ (the last two being among the
``missing'' $N$ states).
}
\end{figure}
The appeal of the above classification is twofold.
On the one side, apart from the $\Delta (1600)$ state, which is likely to be
a hybrid, no resonances drop out of the proposed scheme.
The prediction of fewer ``missing'' resonances relative to other
schemes also falls under this issue.
On the other side, due to the local $O(4)\sim O(1,3)$ isomorphism,
the non-relativistic $O(4)$ multiplets have as an exact relativistic
image the covariant high-spin degrees of freedom given by the
totally symmetric rank-$K$ Lorentz tensors with Dirac
spinor components, $\psi_{\mu_{1}...\mu_{K}}$
known as spin-$\left( K+\frac{1}{2}\right)$
Rarita-Schwinger fields \cite{RS}.
In this fashion, one can view the series of mass degenerate
resonances of alternating parities and
spins rising from $\frac{1}{2}^\pm $ to $\left(K+\frac{1}{2}\right)^P $
as rest frame $\psi_{\mu_{1}...\mu_{K}}$ of mass $m$ and look for
possibilities to generate such multiplets as bound states within an
appropriate quark potential.
Although the degeneracy of the non-strange baryons follows the same
patterns as the levels of an electron with spin in the hydrogen atom,
the level splittings are quite different.
The mass formula that fits the $N(\Delta )$ spectrum
has been encountered in Ref.~\cite{MK_2000} as
\begin{eqnarray}
M_{(\sigma ;I)}-M_{(1;I)} = -f_{I}\frac{1}{\sigma ^{2}}+
g_{I}\frac{\sigma ^{2}-1}{4}\, ,
&\quad& \sigma =K+1, \quad I=N, \Delta\, ,
\label{mass-fla}\\
f_{N}=f_{\Delta}=600\,\, \mbox{MeV}, \quad g_N =70\,\, \mbox{MeV}, &\quad &
g_\Delta =40\,\, \mbox{MeV}\, ,
\label{mfla_prms}
\end{eqnarray}
and contains, besides the Balmer-like term $\left( \sim -1/\sigma ^2\right)$,
also its inverse.
In effect, the baryon mass splittings increase with $\sigma $.
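This growth of the splittings can be made explicit; a short numerical sketch of Eq.~(\ref{mass-fla}) for the nucleon parameters:

```python
f_N, g_N = 600.0, 70.0   # MeV, nucleon-series parameters of Eq. (2)

def dM(sigma, f=f_N, g=g_N):
    """Mass above the sigma = 1 (K = 0) level, Eq. (mass-fla)."""
    return -f / sigma**2 + g * (sigma**2 - 1.0) / 4.0

splittings = [dM(s) for s in (2, 3, 4, 5)]
print([round(x, 1) for x in splittings])   # [-97.5, 73.3, 225.0, 396.0]
```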
The degeneracy patterns and the mass formula have found explanation
in Ref.~\cite{KiMoSmi} within
a version of the interacting boson model (IBM) for baryons.
To be specific, to the extent QCD prescribes baryons to be
constituted of three quarks in a color singlet state, one can
exploit for the description of baryonic systems
algebraic models developed for the purposes of triatomic molecules,
a path pursued by Refs.~\cite{U(7)}.
An interesting dynamical limit of the three quark system is
the one where two of the quarks
are ``squeezed'' to one independent entity, a di-quark (qq),
while the third quark (q) remains spectator. In this limit,
which corresponds to
$U(7)\longrightarrow U(3)\times U(4)$, one can exploit the following chain
of reducing $U(4)$ down to $O(2)$
\begin{eqnarray}
&&U(4)\supset O(4)\supset O(3)\supset O(2)\, ,\nonumber\\
&&N\qquad\quad K\qquad\quad l\qquad\quad m\,
\label{chains}\\
&& K=N, N-2, ... 1(0), \quad l=K,K-1, ..., 0, \quad |m|\le l\, ,
\label{brnsh_rules}
\end{eqnarray}
in order to describe the \underline{ro}tational and
\underline{vibr}ational (rovibron) modes of the $(qq)-q$ dumbbell.
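The branching rules of the chain can be enumerated directly; a sketch (using the standard $O(3)\supset O(2)$ range $|m| \le l$):

```python
def rovibron_states(N):
    """Enumerate (K, l, m) labels of the U(4) > O(4) > O(3) > O(2) chain."""
    states = []
    K = N
    while K >= 0:
        for l in range(K + 1):              # l = K, K-1, ..., 0
            for m in range(-l, l + 1):      # |m| <= l
                states.append((K, l, m))
        K -= 2                              # K = N, N-2, ..., 1 or 0
    return states

# each O(4) level (K/2, K/2) contains (K+1)^2 states
for K in (0, 1, 2, 3):
    level = [s for s in rovibron_states(K) if s[0] == K]
    print(K, len(level) == (K + 1)**2)      # all True
```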
In so doing, one reproduces the quantum numbers describing
the degeneracies in the light quark baryon spectra
by means of the following Hamiltonian:
\begin{eqnarray}
{\mathcal H}-{\mathcal H}_0&=&-f_I(4{\mathcal C}_2(so(4))+1)^{-1}+
g_I{\mathcal C}_2(so(4))\, ,\\
{\mathcal C}_2(so(4))\left( \frac{K}{2},\frac{K}{2}\right)&=&
\frac{(K+1)^2 -1}{4}\left( \frac{K}{2},\frac{K}{2}\right)\, .
\label{O(4)_Ham}
\end{eqnarray}
with ${\mathcal C}_2(so(4))$ being the second $so(4)$ Casimir operator.
In the second row of Eq.~(\ref{chains}) we indicate the quantum
numbers associated with the respective group of the chain.
Here, $N$ stands for the principal quantum number of the four dimensional
harmonic oscillator associated with $U(4)$,
$K$ refers to the $O(4)$ four dimensional angular momentum,
while $l$ and $m$ are in turn the ordinary three- and two-dimensional angular momenta.
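As a quick consistency illustration (a Python sketch of our own; we take the inclusive bound $|m|\le l$ for the magnetic quantum number), the branching rules of Eq.~(\ref{brnsh_rules}) reproduce the dimension $(K+1)^2$ of the $O(4)$ level $K$:

```python
def o4_multiplet(K):
    # (l, m) content of the O(4) level K: l = K, K-1, ..., 0 and |m| <= l
    return [(l, m) for l in range(K, -1, -1) for m in range(-l, l + 1)]

# the dimension of the (K/2, K/2) irrep is (K+1)^2
for K in range(6):
    assert len(o4_multiplet(K)) == (K + 1) ** 2
```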
In Ref.~\cite{KiMoSmi} the interested reader can find all the details
of the algebraic description of the nucleon and $\Delta$ resonances
within the rovibron limit.
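The equivalence of the algebraic Hamiltonian in Eq.~(\ref{O(4)_Ham}) with the mass formula of Eq.~(\ref{mass-fla}) is easy to check numerically; a small Python sketch (function names are ours):

```python
def casimir_so4(K):
    # eigenvalue of C_2(so(4)) on the (K/2, K/2) irrep
    return ((K + 1) ** 2 - 1) / 4.0

def h_spectrum(K, f, g):
    # H - H_0 = -f (4 C_2 + 1)^(-1) + g C_2, Eq. (O(4)_Ham)
    c2 = casimir_so4(K)
    return -f / (4.0 * c2 + 1.0) + g * c2

def mass_formula(sigma, f, g):
    # Eq. (mass-fla)
    return -f / sigma**2 + g * (sigma**2 - 1) / 4.0

# with sigma = K + 1 the two expressions coincide term by term
for K in range(5):
    assert abs(h_spectrum(K, 600.0, 70.0) - mass_formula(K + 1, 600.0, 70.0)) < 1e-9
```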
Yet, a principal challenge still remains: finding
a suitable quark potential
that leads to the above scenario.
In the present work we make the case that the exactly soluble
trigonometric Rosen-Morse potential is precisely the potential we
are looking for.
The paper is organized as follows. In the next section we analyze the
shape of the trigonometric Rosen-Morse potential.
In section III we present the exact real orthogonal polynomial
solutions of the corresponding Schr\"odinger equation. The paper
ends with a brief concluding section.
\section{The shape of the trigonometric Rosen-Morse potential }
We adopt the following form
of the trigonometric Rosen-Morse potential \cite{Sukumar},\cite{Khare}
\begin{equation}
v(z)=-2 b \cot z +a(a+1)\csc^2 z\, , \quad a>-1/2,
\label{v-RMt}
\end{equation}
displayed in Fig.~2. Here,
\begin{eqnarray}
z=\frac{r}{d},\quad v(z)=V(z)/(\hbar^{2}/2md^{2}),
\quad \epsilon_n &=&E_n/ (\hbar^{2}/2md^{2})\,,
\end{eqnarray}
the one-dimensional variable is $r=|\mbox{r}|$,
$ d$ is a properly chosen length scale, $V(r)$ is the potential in
ordinary coordinate space, and $E_n$ is the energy of the levels.
Our point here is that $v(z)$ interpolates between a Coulomb-like and
an infinite-wall potential going through an intermediary region of
linear, and quadratic (harmonic-oscillator) dependences in $z$.
To see this (besides inspection of
Fig.~2) it is quite instructive to expand the potential
in a Taylor series which for appropriately small $z$,
takes the form of a Coulomb-like
potential with a centrifugal-barrier like term (if $a$
were to be a positive integer)
provided by the $\csc^2 z$ part,
\begin{eqnarray}
v(z)\approx -\frac{2b}{z} +\frac{a(a+1)}{z^2}\, .
\label{Taylor_Coul}
\end{eqnarray}
In an intermediary range where inverse powers of $z$ may be neglected,
one finds the linear plus a harmonic-oscillator potentials
\begin{equation}
v(z)\approx
\frac{2b}{3}\, z +\frac{a(a+1)}{15}z^2\, .
\label{Taylor_lin_HO}
\end{equation}
Finally, since $\cot\, z \stackrel{z\to \pi}{\longrightarrow}
-\infty$ and $\csc^2\, z \stackrel{z\to \pi}{\longrightarrow}\infty$,
the potential obviously evolves to an infinite wall.
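These limiting regimes are straightforward to confirm numerically; a Python sketch comparing Eq.~(\ref{v-RMt}) with its small-$z$ form, Eq.~(\ref{Taylor_Coul}) (parameter values are for illustration only):

```python
from math import tan, sin

def v(z, a, b):
    # trigonometric Rosen-Morse potential, Eq. (v-RMt)
    return -2.0 * b / tan(z) + a * (a + 1) / sin(z) ** 2

def v_coulomb(z, a, b):
    # small-z expansion, Eq. (Taylor_Coul)
    return -2.0 * b / z + a * (a + 1) / z ** 2

a, b = 1.0, 1.0
for z in (0.05, 0.02, 0.01):
    rel = abs(v(z, a, b) - v_coulomb(z, a, b)) / abs(v(z, a, b))
    assert rel < 1e-2  # agreement improves as z -> 0
assert v(3.14, a, b) > 1e3  # infinite-wall behavior as z -> pi
```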
\begin{figure}
\begin{center}
\includegraphics[width=70mm,height=70mm]{RMt-coulomb.eps}
\caption{
The proximity of the $(\sim \cot z )$- to the $\left( \sim \frac{1}{z}\right) $
potential.}
\end{center}
\end{figure}
The above shape captures surprisingly well the essentials of the QCD
quark-gluon dynamics where the one gluon exchange gives rise to
an effective Coulomb-like potential, while the self-gluon interactions
produce a linear confinement potential as established by lattice calculations
of hadron properties.
Finally, the infinite wall piece of the trigonometric
Rosen-Morse potential provides the regime suited for
trapped but asymptotically free quarks.
By the above considerations one is tempted to conclude that
the potential under consideration may be a good candidate for a
common quark potential of QCD traits.
\noindent
In the next section we present the exact solutions of the
Schr\"odinger equation with the trigonometric Rosen-Morse potential.
\section{Energies and wave functions of the levels within the
trigonometric Rosen-Morse potential}
The three-dimensional Schr\"odinger equation with the trigonometric
Rosen-Morse potential (tRMP) reads:
\begin{equation}
\vec \nabla^{\, 2} \ \psi (\mathbf{z}) +\left(
2 b\cot z-\frac{a(a+1)}{\sin^2 z}+\epsilon\right) \psi ({\mathbf z})=0\, ,
\label{Sch-RMt}
\end{equation}
and is solved in polar coordinates in the standard way by separation
of variables. In effect, the wave function factorizes according to
\begin{equation}
\psi ({\mathbf z})=\frac{R(z)}{z}Y_{l}^{\mu }(\theta,\phi ),
\quad l=0,1,2,..., \quad
|\mu |\le l,
\label{wafu_psi}
\end{equation}
where $Y_l^\mu(\theta ,\phi )$ stand for the standard spherical
harmonics, and $R(z)$ satisfies the one-dimensional equation
\begin{equation}
\frac{d^{\, 2}}{dz^{2}} \ R (z) +\left(
2 b\cot z-\frac{a(a+1)}{\sin^2 z}+\epsilon\right) R ( z)=0\ .
\label{wafu_U(r)}
\end{equation}
This equation (up to inessential notational differences)
has been solved in our previous work ~\cite{CK}.
There, we exploited the following factorization ansatz
\begin{equation}
R (z) =\textrm { e}^{-\alpha z/2} (1+\cot ^2 z)^{\frac{-(1-\beta )}{2}}
C^{(\beta , \alpha )} (\cot z)\, ,
\label{Schroed}
\end{equation}
with $\alpha$ and $\beta$ being constant parameters.
Upon introducing the new variable $x=\cot z$, substituting the above
factorization ansatz into Eq.~(\ref{wafu_U(r)}), and subsequently dividing by
$(1+x^2)^{(1-\beta)/2}$, one finds
\begin{eqnarray}
&&(1+x^2)
\frac{d^{\, 2}\ C^{(\beta, \alpha) } (x)}{d\ x^2}+
2\left({\alpha\over 2}+\beta x\right)
{d\ C^{(\beta ,\alpha ) } (x) \over d\ x }\nonumber\\
+{\Big(}(-\beta(1-\beta)-a(a+1)) &+&{(-\alpha(1-\beta)+2 b)x + \left(
\left({\alpha\over 2}\right)^2-(1-\beta )^2+
\epsilon\right)\over 1+x^2}{\Big)}
C^{(\beta ,\alpha )} (x) = 0\ . \label{Sch-RMt4}
\end{eqnarray}
This equation is suited for comparison with
\begin{equation}
(1+x^2)
\frac{d^{\, 2}\ {\mathcal R}_m^{(\beta, \alpha) }(x)}{d\ x^2}+
2\left({\alpha\over 2}+\beta x\right)
{d\ {\mathcal R}_m^{(\beta ,\alpha ) }(x) \over d\ x }\
-m(2\beta +m-1){\mathcal R}_m^{(\beta ,\alpha )}(x)=0\, ,
\label{new_pol}
\end{equation}
which being of the form of the text-book hypergeometric equation
\cite{handbook},\cite{textbook},\cite{Dennery} can be cast into the
self-adjoint Sturm-Liouville form given by
\begin{eqnarray}
s (x){{d^{\, 2}{\mathcal R}^{(\beta ,\alpha)}_m(x)} \over {d\ x^2}}+
{1\over {w(x)}}\left({{d\ s(x)w(x)}\over {d\ x}}\right)
{d\ {\mathcal R}^{(\beta ,\alpha)}_m(x)
\over d\ x}+\lambda_m \ {\mathcal R}^{(\beta ,\alpha)}_m(x)&=&0\ ,
\label{d2-R2}\\
s(x)=1+x^2, \quad
w^{(\beta,\alpha)}(x)=
s(x)^{\beta -1}e^{-\alpha\cot ^{-1}x},\,\,
\lambda_m=-m(2\beta +m-1),\,\, -\infty <x<\infty . &&
\label{weight_funct}
\end{eqnarray}
However, while the standard textbooks consider exclusively
$s(x)$ functions which are at most second order
polynomials of {\it real roots\/}, in which case
\begin{equation}
w(a)s(a)x^l= w(b)s(b)x^l=0 \ , \quad
\forall l=\mbox{\footnotesize integer,}
\label{rule_tobreak}
\end{equation}
holds valid, the roots of $s(x)$ in Eq.~(\ref{new_pol})
are {\it imaginary}. In the former case it is well known that
\begin{itemize}
\item ${\mathcal R}^{(\beta ,\alpha)}_m(x)$ would be polynomials of
order $m$,
\item $\lambda_m$ would satisfy
\begin{eqnarray}
\lambda_m&=&-m\left(K_1{{d\ {\mathcal R}^{(\beta ,\alpha)}_1(x)}
\over {d\ x}}+
{1\over 2}(m-1) {{d^{\, 2} s(x)}\over
{d\ x^2}}\right)\, ,\nonumber\\
\label{lamb}
\end{eqnarray}
with $K_1$ being the ${\mathcal R}^{(\beta ,\alpha )}_1(x)$
normalization constant,
\item the first order polynomial would be defined as
\begin{equation}
{\mathcal R}^{(\beta ,\alpha )}_1 (x) ={1\over {K_1w(x)}}
\left({{d\ s(x)w(x)}\over {d\ x}}\right)\, ,
\label{F1}
\end{equation}
\item the latter relation would generalize to any $m$ via
the so called Rodrigues formula
\begin{equation}
{\mathcal R}^{(\beta ,\alpha)}_m(x)=
\frac{1}{K_mw(x)}{d^m\over d\ x^m}(w(x)\ s(x)^m) \ ,
\label{Rodrigues-0}
\end{equation}
\item $w(x)$ would be the weight-function with respect to
which the ${\mathcal R}^{(\beta ,\alpha)}_m(x)$ polynomials
would appear orthogonal.
\end{itemize}
Within this context the question arises whether the imaginary roots
of $s(x)=(1+x^2)$ in Eq.~(\ref{new_pol}) would prevent the
${\mathcal R}_m^{(\beta , \alpha )}(x)$ functions from being
real orthogonal polynomials.
The answer to this question is negative. It can be shown that
also in the latter case
\begin{itemize}
\item the ${\mathcal R}_m^{(\beta , \alpha)}(x)$'s
are polynomials of order $m$,
\item they can also be constructed systematically from a Rodrigues formula
in terms of the respecified weight function,
\item but only a finite number of them will be orthogonal
due to the violation of Eq.~(\ref{rule_tobreak}).
\end{itemize}
{}From the historical perspective,
Eq.~(\ref{weight_funct}) has first
been brought to attention
by the celebrated English mathematician Sir
Edward John Routh in Ref.~\cite{Routh}
(modulo the unessential difference in the argument
of the exponential from the present $\cot^{-1}$ to Routh's
$\tan ^{-1}$), the teacher of J.\ J.\ Thomson
and J.\ Larmor, among other famous physicists.
Routh observed that the weight-function of the Jacobi polynomials,
$ P^{(\mu ,\nu )}_m(x)$, takes the form of
Eq.~(\ref{weight_funct}) upon the particular complexification of the
argument and the parameters according to
$\mu \longrightarrow \eta=a+ib, \nu\longrightarrow \eta^*$, and $x\to ix$.
From that he concluded that
$P_m^{( \eta ,\eta^\ast )}(ix)$ is a real polynomial
(up to a global phase factor).
Later on, in 1929, the prominent Russian mathematician
Vsevolod Ivanovich Romanovski, one of the founders of the
Tashkent University, studied a few more
of their properties in \cite{Romanovski}, and it was he who
observed that only a finite number of them appear orthogonal.
While the mathematics literature is familiar with such polynomials
\cite{NikUv}, \cite{Ismail}, \cite{Koepf}, \cite{Neretin} where they are
referred to as finite Romanovski polynomials \cite{Zarzo},
or, Romanovski-Pseudo-Jacobi polynomials \cite{Lesky},
a curious omission from all the standard textbooks on mathematical physics
\cite{handbook},\cite{textbook} takes place.
This might be related to the circumstance that
the physics problems which call for such
polynomials are relatively few. Recently, it has been reported in the
peer physics literature \cite{CK}, \cite{ACK} that the
Schr\"odinger equation with the respective hyperbolic Scarf and
trigonometric Rosen-Morse potentials is resolved exactly precisely in terms
of the Romanovski polynomials. Moreover, the latter are also relevant in
calculation of gap probabilities in finite Cauchy random matrix
ensembles \cite{Witte}.
In the following, we shall adopt the notion of Routh-Romanovski polynomials
for obvious reasons.\\
\subsection{The exact spectrum}
Back to Eq.~(\ref{Sch-RMt4}) we observe that if it is to coincide with
Eq.~(\ref{new_pol}) then the coefficient in front of
the $1/(x^2+1)$ term has to vanish. This restricts
the $C^{(\beta ,\alpha )}(x)$ parameters in the Schr\"odinger
wave function to be
\begin{eqnarray}
-\alpha(1-\beta)+2 b=0\ ,
\quad
\left({\alpha\over 2}\right)^2-(1-\beta)^2+\epsilon=0\label{ep-b2_a2-1}\, .
\end{eqnarray}
With that, Eq.~(\ref{Sch-RMt4}), to which the original
Schr\"odinger equation has been reduced, amounts to
\begin{eqnarray} (1+x^2){d^{\, 2}\ C^{(\beta ,\alpha )}(x) \over d\ x^2}+
2\left({\alpha\over 2}+\beta x\right)
{d\ C^{(\beta ,\alpha )}(x)
\over d\ x} + (-\beta(1-\beta)-a(a+1))C^{(\beta ,\alpha )}(x) = 0\ .
\label{Sch-RMt5}\end{eqnarray}
The final step is to identify the constant term in the latter equation with
the one in Eq.~(\ref{new_pol}) which amounts to a third condition
\begin{eqnarray}
-\beta(1-\beta)-a(a+1) = -m(2\beta+m-1)\ ,\label{b-1}
\end{eqnarray}
which introduces the dependence of the
$C^{(\beta ,\alpha ) }(x)$ functions on the index $m$, i.e.
$C^{ (\beta ,\alpha )}(x)\longrightarrow C_m^{ (\beta ,\alpha ) }(x)$.
Remarkably, Eqs.~(\ref{ep-b2_a2-1}) and (\ref{b-1})
indeed allow for consistent solutions for $\alpha$, $\beta $, and $\epsilon$,
given by (upon renaming $m$ by $n-1$):
\begin{eqnarray}
\beta_n=-(n+a)+1\ ,&\quad& \alpha_n={2 b\over n+a}\ ,\nonumber\\
\epsilon_n = (n+a)^2-{b^2\over (n+a)^2}\ ,
&&\quad n=m+1\, ,
\label{RMt_spectrum}
\end{eqnarray}
now with $n\ge 1$.
In this way Eq.~(\ref{RMt_spectrum}) provides
the exact tRMP spectrum.
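That the parameters in Eq.~(\ref{RMt_spectrum}) indeed satisfy the three conditions of Eqs.~(\ref{ep-b2_a2-1}) and (\ref{b-1}) can be confirmed directly; a Python sketch (function name and the value of $b$ are ours):

```python
def solution(n, a, b):
    # parameters of Eq. (RMt_spectrum), with m = n - 1
    beta = -(n + a) + 1.0
    alpha = 2.0 * b / (n + a)
    eps = (n + a) ** 2 - b**2 / (n + a) ** 2
    return beta, alpha, eps

b = 60.0
for n in range(1, 5):
    for a in range(0, 4):
        beta, alpha, eps = solution(n, a, b)
        m = n - 1
        # the two conditions of Eq. (ep-b2_a2-1) ...
        assert abs(-alpha * (1 - beta) + 2 * b) < 1e-9
        assert abs((alpha / 2) ** 2 - (1 - beta) ** 2 + eps) < 1e-9
        # ... and the third condition, Eq. (b-1)
        assert abs(-beta * (1 - beta) - a * (a + 1) + m * (2 * beta + m - 1)) < 1e-9
```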
In effect, the polynomials that define the exact solution of the
Schr\"odinger equation with the trigonometric Rosen-Morse potential
turn out identical to the Routh-Romanovski polynomials
however with indices that depend on $n$.
As we shall see below, this circumstance will become of crucial
importance in allowing for an {\it infinite \/} number of orthogonal
polynomials (as required by the infinite depth of the potential) and thus
for avoiding the finite orthogonality of the bare Routh-Romanovski
polynomials.
With that all the necessary ingredients for the solution of
Eq.~(\ref{Sch-RMt5}) have been prepared. Now, exploiting the
Rodrigues representation (when making the $n$ dependence explicit),
\begin{eqnarray} C^{(\beta_n,\alpha_n)}_{n}(x)\equiv
{\mathcal R}_n^{(\beta_n,\alpha_n)}(x)=
{1\over K_n\ w^{(\beta_n, \alpha_n)}(x)}
{d^{n-1}\over d\ x^{n-1}}\left(w^{(\beta_n,\alpha_n)}(x)\
s(x)^{n-1}\right) \, , \label{pol-nvo}\end{eqnarray}
allows for the systematic construction of
the solutions of Eq.~(\ref{Sch-RMt5}).
Notice that in terms of $w^{(\beta_n ,\alpha_n)}(x)$ the wave function
is expressed as
\begin{eqnarray}
R_n^{(a,b)}(\cot^{-1} x)=\sqrt{w^{\left(-(n+a)+1, \frac{2b}{n+a} \right)} (x)}
{\mathcal R}_n^{\left( -(n+a)+1 ,\frac{2b}{n+a} \right)}(x)\, .
\label{R_z}
\end{eqnarray}
\noindent
Next one can check the orthogonality of the physical solutions in $z$ space
and obtain it, as it should be, in the form
\begin{eqnarray} \int_0^\pi \ R_{n}\left(z\right)
R_{n'}\left(z\right)d z
=\delta_{n\ n'}\, .
\label{orth_R}\end{eqnarray}
The orthogonality of the wave functions $R_n(z)$
implies in $x$ space orthogonality of the
${\mathcal R}_n^{(\beta_n,\alpha_n)}(x)$ polynomials with respect to
$w^{(\beta_n, \alpha_n)}(x)\frac{dz}{dx}$ due to the variable change.
Since $\frac{d \cot^{-1} x }{dx}=-1/(1+x^2)\equiv -1/s(x)$,
the orthogonality integral for the polynomials takes the form
\begin{eqnarray} \int_{-\infty}^\infty {dx\over s(x)}
\sqrt{w^{(\beta_n,\alpha_n)}(x)}
{\mathcal R}_n^{(\beta_n,\alpha_n)}(x)
\sqrt{w^{(\beta_{n'},\alpha_{n'})}(x)}
{\mathcal R}_{n'}^{(\beta_{n'},\alpha_{n'})}(x)=
\delta_{n\ n'}\ .
\label{orto-1}\end{eqnarray}
The existence of an infinite number of orthogonal Routh-Romanovski polynomials
was made possible at the cost of the $n$ dependence of the parameters, which
emerged while converting the Schr\"odinger equation to the polynomial
one.
\subsection{The degeneracy in the spectra}
Inspection of Eq.~(\ref{RMt_spectrum}) reveals the existence of an
intriguing degeneracy in the tRMP spectrum. In order to
see it let us assume that
the $a$-parameter in Eq.~(\ref{wafu_U(r)}) takes only
integer non-negative $a=0,1,2,...$-values. In such a case, one immediately
reads off from Eq.~(\ref{RMt_spectrum}) that the energy levels
for any $\sigma =(m+1+a)$ with $\sigma =1,2,3,...$ are
$\sigma $-fold degenerate as $a$ can take all the values between
$0$ and $(\sigma -1)$ according to $a=0,1,...(\sigma -1)$ (see Fig.~3).
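This degeneracy is immediate to verify numerically; the following Python sketch (with $b=60$, the value quoted for the $N$ spectrum) confirms that all $(n,a)$ pairs with $n+a=\sigma$ share one energy:

```python
def eps(n, a, b):
    # tRMP spectrum, Eq. (RMt_spectrum)
    return (n + a) ** 2 - b**2 / (n + a) ** 2

b = 60.0
for sigma in range(1, 6):
    # all (n, a) with n >= 1, a >= 0 and n + a = sigma share the same energy
    energies = {eps(n, sigma - n, b) for n in range(1, sigma + 1)}
    assert len(energies) == 1
```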
\begin{figure}
\begin{center}
\includegraphics[width=70mm,height=70mm]{v_l-E_n.eps}
\caption{Degeneracy of energy levels of same $\sigma $ but different
angular momenta in $l$ dependent
trigonometric Rosen-Morse potentials. The curves correspond to
$b=60$, a value fitted to the $N$ spectrum.}
\end{center}
\end{figure}
Comparison of the $a$-degeneracy to the non-strange baryon spectra
in Fig.~1 and Eqs.~(\ref{brnsh_rules})
is suggestive of the idea to interpret the non-negative
integer $a$-values as angular momenta and view the
$\csc^2\left( \frac{r}{d}\right)$ term as
a {\bf non-standard} centrifugal barrier
\begin{equation}
a(a+1)\csc^2 \left( \frac{r}{d}\right)
\longrightarrow \frac{l(l+1)}{\sin^2 \left(\frac{r}{d}\right)}\, ,
\quad a\equiv l =0,1,2,....
\label{nst_cfbr}
\end{equation}
In terms of $\sigma $ the mass formula in Eq.~(\ref{mfla_prms})
translates to
\begin{equation}
\frac{
4
\left(
M_{\sigma ,I}-M_{1,I} +
\frac{1}{4}
g_I
\right)
}{g_I}
\longrightarrow
\epsilon_\sigma =-\frac{b^2}{\sigma^2}+\sigma^2, \quad b^2=\frac{4f_I}{g_I}\, .
\label{fit_pot}
\end{equation}
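The translation of Eq.~(\ref{fit_pot}) can again be checked numerically; a Python sketch with the nucleon parameters of Eq.~(\ref{mfla_prms}):

```python
f, g = 600.0, 70.0
b2 = 4.0 * f / g  # b^2 = 4 f_I / g_I

def mass_splitting(sigma):
    # Eq. (mass-fla): M_(sigma;I) - M_(1;I)
    return -f / sigma**2 + g * (sigma**2 - 1) / 4.0

# left- and right-hand sides of Eq. (fit_pot) agree for every sigma
for sigma in range(1, 7):
    lhs = 4.0 * (mass_splitting(sigma) + g / 4.0) / g
    rhs = -b2 / sigma**2 + sigma**2
    assert abs(lhs - rhs) < 1e-9
```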
Non-standard centrifugal barriers of various types have been frequently
exploited in the calculation of the spectral properties of collective nuclei.
Specifically, in Ref.~\cite{PTG_1} use has been made
of an angular-momentum dependent potential originally suggested
by Ginocchio in Refs.~\cite{PTG}. The non-standard centrifugal
barrier in this potential asymptotically approaches
for certain parameter values the physical centrifugal barrier, $l(l+1)/r^2$
while for another set of parameters it
becomes the P\"oschl-Teller potential.
In our case, for small arguments the $\csc^ 2$ term also
approaches the physical centrifugal term as evident from
Eq.~(\ref{Taylor_Coul}) and visualized by Figs.~4.
Non-standard centrifugal barriers have the property to couple various
multipole modes in nuclei, an example being given more recently in
Ref.~\cite{Svetla}.
From now onward we shall adopt non-negative integer values for the $a$
parameter and view the $\csc^2$ term as a non-standard centrifugal barrier
according to
\begin{equation}
V_l(r)=-2B\cot\left({r\over d}\right)+
{\hbar^2\over 2\mu d^2} l(l+1)\csc^2\left({r\over d}\right)\, .
\label{l_pot}
\end{equation}
In so doing we are defining a new angular momentum dependent potential,
$V_l(r)$, that possesses the dynamical $O(4)$ symmetry.
Notice that this does not contradict
the statement of Ref.~\cite{Zeng} according to which
only pure or screened Coulomb-like potentials are $O(4)$ symmetric as
the theorem of Ref.~\cite{Zeng} refers to potentials with the standard
centrifugal barrier only.
The $b$ parameter in Eq.~(\ref{fit_pot}) now relates to the potential
parameter $B$ as
\begin{equation}
b={2 \mu d^2 B\over \hbar^2}\, .
\label{sh}
\end{equation}
Next we shall bring down the $a$ index, suppress the $b$ index
and change notations according to
\begin{equation}
R^{\left((a\equiv l), b\right)}_n\left(\frac{r}{d}\right)
\longrightarrow R_{\sigma l}\left(\frac{r}{d}\right)\, , \quad \sigma =n+l.
\label{new-R_nl}
\end{equation}
The single-particle wave functions within the angular dependent
trigonometric Rosen-Morse potential are straightforwardly calculated
from Eq.~(\ref{R_z}). Below we list the first three levels for illustrative
purposes:
\begin{itemize}
\item \underline{ground state $\sigma=1$}:
\begin{eqnarray}
\mbox{\bf 1s:}\quad R_{10}\left(
\frac{r}{d}\right)&=&
2\sqrt{b (b^2+1)\over (1-e^{-2 \pi b }) }
e^{-b \left( \frac{r}{d}\right)}\sin\left({
\frac{r}{d}}\right),
\label{gst}
\end{eqnarray}
\item \underline{first excited state, $\sigma =2$}:
\begin{eqnarray}
\mbox{\bf 2s:}\quad R_{20}\left(\frac{r}{d}\right)&=&
\sqrt{2 b \left(\left( \frac{b}{4}\right)^2+1\right)
\over (1-e^{-\pi b})}
e^{-b \left(\frac{r}{2 d}\right)}\sin
{\left(\frac{r}{d}\right)}
\left( b\sin\left(\frac{r}{d}\right)-2
\cos\left(\frac{r}{d}\right)\right),\nonumber\\
\mbox{\bf 2p:}\quad
R_{21}\left(\frac{r}{d}\right)&=&2\sqrt{2\over 3}\sqrt{b \left(
\left(\frac{b }{2}\right)^2+1\right)
\left(\left(\frac{b }{4}\right)^2+1\right)\over (1-e^{-\pi b}) }
e^{-b \frac{r}{ 2 d}}\sin^2
\left(\frac{r}{ d}\right),
\end{eqnarray}
\item \underline{ second excited state $\sigma =3$}:
\begin{eqnarray}
\mbox{\bf 3s:}\quad
R_{30}\left(\frac{r}{d}\right)&=&
\frac{2}{9\sqrt{3}}
\sqrt{b\left(\left(\frac{b}{9}\right)^2+1\right)
\over \left(1-e^{-2\pi\frac{b}{3}}\right)}
e^{-b \frac{r}{3d}}
\sin\left(\frac{r}{d}\right)
\nonumber\\
&&\left(2\left(\left({b \over 3}\right)^2\sin^2\left({r\over d}
\right)-b\sin\left({r\over d}\right)\cos\left({r\over d}\right)
+\cos^2\left({r\over d}\right)\right)-1\right),\nonumber\\
\mbox{\bf 3p:}\quad
R_{31}\left(\frac{r}{d}\right)&=&\left({2\over 3}\right)^{\frac{3}{2}}
\sqrt{
b
\left(
\left(
\frac{b }{3}
\right)^2
+1\right)
\left(\left(
\frac{b }{9}\right)^2+1
\right)
\over (1-e^{-2 \pi \frac{b }{3}}) }
e^{-b \frac{r}{ 3 d}}\sin^2\left({\frac{r}{ d}}\right)\left(b \sin
\left(r\over d\right)-6 \cos\left(r\over d\right)\right),\nonumber\\
\mbox{\bf 3d:}\quad
R_{32}\left(\frac{r}{d}\right)&=&4\sqrt{2\over 15}\sqrt{
b \left(
\left(
\frac{b }{3}\right)^2+1\right)
\left(
\left(
\frac{b }{6}\right)^2+1
\right)
\left(\left(
\frac{b }{9}\right)^2+1
\right)\over (1-e^{-2 \pi \frac{b }{3}}) }
e^{-b \frac{r}{3d} }\sin^3\left({\frac{r}{ d}}\right).
\end{eqnarray}
\end{itemize}
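As a numerical sanity check on the expressions listed above, the normalization and orthogonality of Eq.~(\ref{orth_R}) can be verified by simple quadrature. A Python sketch for the $l=0$ states (trapezoidal rule; the value $b=2$ is an arbitrary illustration):

```python
from math import exp, sin, cos, pi, sqrt

b = 2.0

def R10(z):
    # 1s wave function, Eq. (gst)
    return 2.0 * sqrt(b * (b**2 + 1) / (1 - exp(-2 * pi * b))) * exp(-b * z) * sin(z)

def R20(z):
    # 2s wave function
    norm = sqrt(2 * b * ((b / 4) ** 2 + 1) / (1 - exp(-pi * b)))
    return norm * exp(-b * z / 2) * sin(z) * (b * sin(z) - 2 * cos(z))

def integrate(f, n=20000):
    # trapezoidal rule on [0, pi]
    h = pi / n
    return h * (sum(f(i * h) for i in range(1, n)) + 0.5 * (f(0.0) + f(pi)))

assert abs(integrate(lambda z: R10(z) ** 2) - 1.0) < 1e-4   # normalization of 1s
assert abs(integrate(lambda z: R20(z) ** 2) - 1.0) < 1e-4   # normalization of 2s
assert abs(integrate(lambda z: R10(z) * R20(z))) < 1e-4     # orthogonality, Eq. (orth_R)
```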
In Figs.~5 we display as an illustrative example the
wave functions for the first two $\sigma $ levels.
\begin{figure}
\begin{center}
{(a)}\includegraphics[width=70mm,height=70mm]{v_r-2.eps}
{(b)}\includegraphics[width=70mm,height=70mm]{v_l.eps}
\end{center}
\caption{The $\cot\left( \frac{ r}{d}\right) $
potential with the physical centrifugal barrier (left) and the
non-standard one (right).}
\end{figure}
\begin{figure}
\begin{center}
{(a)}\includegraphics[width=70mm,height=70mm]{psi1l.eps}
{(b)}\includegraphics[width=70mm,height=70mm]{psi2l.eps}
\end{center}
\caption{ Wave functions for $\sigma =1,l=0$ (left)
and $\sigma =2, l=0,1$ (right).}
\end{figure}
\section{Discussion and concluding remarks}
In this work we made the case that the trigonometric Rosen-Morse
potential with the $\csc^2$ term being reinterpreted as a non-standard
centrifugal barrier provides quantum numbers and
level splittings of the same dynamical $O(4)$ patterns
as observed within the $SU(2)_I\times O(4)$ classification scheme
of baryons in the light quark sector. Due to local
$O(4)\sim O(1,3)$ isomorphism, the potential
$\left( \frac{K}{2},\frac{K}{2}\right)\otimes
\lbrack\left(\frac{1}{2},0 \right)\oplus \left( 0,\frac{1}{2}\right) \rbrack $
levels have as a relativistic image the covariant field theoretical
high-spin degrees of freedom, $\psi_{\mu_1...\mu_K}$.
\noindent
We presented exact energies and wave functions of a particle within the above
potential and in so doing put it on equal algebraic footing
with the harmonic oscillator and/or the Coulomb potentials of
wide spread.
In this fashion,
\begin{itemize}
\item an element of covariance was brought
into the otherwise non-covariant potential picture,
\item the algebraic Hamiltonian in Eq.~(\ref{O(4)_Ham})
describing the $O(4)$ degeneracy
in the $N$ and $\Delta $ spectra was translated into a potential
model of the same dynamical symmetry.
\end{itemize}
The $O(4)$ degeneracy of the $N$ and $\Delta$ spectra
seems to speak in favor of quark-diquark as the leading configuration
of resonance structures. Yet, form factors are known to be more
sensitive to configuration mixing effects and may require inclusion of
genuine three quark configurations.
As long as the tRMP shape captures the
essential traits of the quark-gluon dynamics of QCD, we here
consider it as a promising candidate for a realistic common quark potential
that is worth being employed in the calculations
of spectroscopic properties of non-strange resonances.
\section*{Acknowledgments}
It is a pleasure to thank the organizers for their warm hospitality
and the excellent working conditions provided by them during the workshop.
We benefited from extended discussions with Hans-J\"urgen Weber
and Alvaro P\'erez Raposo on various aspects of the
Romanovski polynomials with emphasis on their orthogonality properties.
Work supported by Consejo Nacional de Ciencia y
Tecnolog\'ia (CONACyT) Mexico under grant number C01-39280.
\subsection{Network Architecture}
For the large set of image data, we employed a three-layer, large-scale deep neural network with a reconstruction independent component analysis (RICA) cost function,
\begin{equation}
\label{RICA}
\begin{split}
\min_{W,\alpha,b} & \sum_i \left\| W^T (\alpha W x^{(i)}) + b - x^{(i)} \right\|_2^2 + \lambda \sqrt{ (\alpha W x^{(i)})^2 } \\
& \text{ subject to } \| W^{(k)} \|_2 = 1, \forall k,\nonumber
\end{split}
\end{equation}
where, as in~\cite{coates13}, $W$ is a weighting matrix, $\alpha$ is a scaling value, and $x^{(i)}$ are the data points at the beginning of each layer. In addition, we introduce an offset, $b$, for increased model flexibility. The parameter $\lambda$ controls the relative sparsity, and is set to 0.1 at the first two layers and 0.01 at the final layer. Unlike \cite{coates13}, we do not presently include a pooling layer, as we believe the scale of the network and training data allows a similar translational invariance to be automatically learned. A particular advantage conferred by the RICA construction is that the sparseness term $\lambda \sqrt{ (\alpha W x^{(i)})^2 }$ can be computed in-situ with the rest of the model parameters. This is in contrast to the conventional sparse autoencoder construction that requires a second pass through the data to compute a sparseness-specific gradient contribution.
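To make the objective concrete, the following NumPy sketch evaluates a RICA-style cost of the above form (function and variable names are ours; a small constant is added under the square root, a common smoothing device, to keep the sparsity term differentiable):

```python
import numpy as np

def rica_cost(W, alpha, b, X, lam, eps=1e-8):
    # W: (k, d) filter matrix (rows assumed unit-norm), X: (d, n) data columns,
    # alpha: scalar gain, b: (d,) offset, lam: sparsity weight
    Z = alpha * (W @ X)                      # latent responses
    recon = W.T @ Z + b[:, None] - X         # reconstruction residual
    recon_term = np.sum(recon ** 2)
    sparsity = lam * np.sum(np.sqrt(Z ** 2 + eps))
    return recon_term + sparsity

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 10))
# with an orthonormal square W, zero offset, and lam = 0 the cost vanishes
assert np.isclose(rica_cost(np.eye(4), 1.0, np.zeros(4), X, 0.0), 0.0)
```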
Fig.~\ref{topology} illustrates the structure of our network. The three layers are composed of two untied convolutional layers and a third fully-connected layer. The first convolutional layer utilizes 5184 filters\footnote{Arranged in a $72\times72$ grid.} of input size $16\times16\times3$ with stride $4$ and output size $4\times4\times24$. The second layer takes $16$ spatially contiguous\footnote{Arranged in a $4\times4$ grid.} $4\times4\times24$ outputs of the first layer and connects them fully to a $4\times4\times24$ output. The stride length of the second layer is 4. The third layer is dense, and fully connects the $62\times62\times24$ outputs of the second layer to 4096 top-level neurons. The total number of parameters trained is 15 billion. After each layer, local contrast normalization (LCN) is applied prior to continuing onto the next layer. Though no pooling is applied, the window sizes at the next layer are large enough to incorporate spatial information from neighboring blocks.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.3]{topology.png}
\end{center}
\caption{Network topology of the large-scale trained network, with approximately 15 billion parameters.}
\label{topology}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.3]{mixtraining.jpg}
\end{center}
\caption{Pipeline for semi-parallel training of sparse autoencoders from a single data source}
\label{onlinetrain}
\end{figure}
Training data is arranged into 99,207 data blocks of 960 images. Each data block consists of 5 mini-batches, where each mini-batch contains 192 images. Due to the scale of the data, the proposed algorithm reduces training time by employing a pipeline technique where the next layer begins training before the previous layer has finished. Analogous to the example shown in Fig.~\ref{onlinetrain}, after a layer $L$ has trained an initial set of data blocks (in our case, 1000), the next layer, $L+1$, starts training. To accomplish this, two instances of the layer $L$ are run simultaneously: one which continues training and one that uses up-to-date parameters to forward propagate data from Block 0 to the layer $L+1$. The parameters of the forward-propagating layer $L$ instance are periodically synchronized with the layer $L$ instance that continued training. We observed that our model was not sensitive to the choice of synchronization frequency. As a rule of thumb, we wait to train layer $L+1$ until the objective of layer $L$ stabilizes, which typically occurs after approximately one million images.
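The synchronization logic can be illustrated with a toy Python sketch (hypothetical classes and update rule, not our production HPC code):

```python
class ToyLayer:
    """Stand-in for one network layer; w represents its parameters."""
    def __init__(self):
        self.w = 0.0
    def train_step(self):
        self.w += 0.1           # placeholder for a gradient update
    def forward(self, x):
        return x * self.w       # placeholder for forward propagation

trainer, forwarder = ToyLayer(), ToyLayer()
sync_every = 5                   # synchronization frequency (model is insensitive to it)
for step in range(1, 21):
    trainer.train_step()         # instance of layer L that keeps training
    if step % sync_every == 0:
        forwarder.w = trainer.w  # periodically copy up-to-date parameters
    _ = forwarder.forward(1.0)   # instance of layer L feeding data to layer L+1

assert forwarder.w == trainer.w  # parameters agree right after a sync point
```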
\subsection{HPC Architecture}
\input{hpc}
\section{Introduction}
\input{introduction}
\section{Overview of the YFCC100M Dataset}\label{sec:dataset}
\input{yfccdata}
\section{Analysis with Large Scale Neural Networks}\label{sec:network}
\input{architecture}
\section{Preliminary Results}\label{sec:results}
\input{results}
\input{summary}
\section{Acknowledgments}
We would like to thank Adam Coates, Brody Huval and Andrew Ng for providing their COTS HPC Deep Learning software and helpful advice. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
\newpage
\bibliographystyle{IEEEbib}
\section{Summary and Future Work}\label{sec:summary}
The results discussed in this paper present a snapshot of the work in progress at Lawrence Livermore National Laboratory in scaling up deep neural networks. Such networks offer enormous potential to researchers in both supervised and unsupervised computer vision tasks, from object recognition and classification to unsupervised feature extraction.
To date, we see highly encouraging results from training our large 15 billion parameter three-layer neural network on the YFCC100M dataset in an unsupervised manner. The results suggest that the network is capable of learning highly complex concepts such as cityscapes, aircraft, buildings, and text, all without labels or other guidance. That this structure is visible upon examination is made all the more remarkable due to the noisiness of our test set (taken at random from the YFCC100M dataset itself).
Future work on our networks will focus on two main thrusts: (1) improve the high-level concept learning by increasing the depth of our network, and (2) scaling our network's width in the middle layers. On the first thrust, we aim for improved high-level summarization and scene understanding. Challenges on this front include careful tuning of parameters to combat the ``vanishing gradient" problem and design of the connectivity structure of the higher-level layers to maximize learning. On the second thrust, our challenges are primarily engineering focused. Memory and message passing constraints become a serious concern, even on the large HPC systems fielded by LLNL. As we move beyond our current large neural network, we plan to explore the use of memory hierarchies for staging intermediate/input data to minimize the amount of node-to-node communication, enabling the efficient training and analysis of even larger networks.
\section{Introduction}
Self-consistent field (SCF) methods are the starting point of virtually every quantum chemistry application. Kohn-Sham (KS) density-functional theory\cite{Kohn65} (DFT), Hartree-Fock\cite{Hartree1957} (HF) and even semiempirical methods\cite{Thiel2000} require as a fundamental numerical step the variational optimization of the energy with respect to the orbitals, which are determined in the process. The standard algorithm to solve the optimization problem -- the self-consistent field algorithm itself\cite{Roothan51,Roothaan1960} -- is the iterative solution of a non-linear eigenvalue problem, which is solved using a fixed-point approach. As the problem is strongly non-linear, a simple iterative procedure is often not sufficient to achieve convergence. In the last several decades, various strategies have been developed to improve the reliability and stability of the SCF algorithm\cite{Rabuck1999,Cances2000}, most notably Pulay's direct inversion in the iterative subspace\cite{Pulay1980,Pulay1982,Hamilton1985} (DIIS) and various refinements thereof\cite{Sellers1993,Cances2000,Kudin2002}, damping of the SCF iterations\cite{Karlstrom1979}, level shifting\cite{Saunders1973}, and combinations of the methods. Advanced strategies to compute accurate guesses of the initial density have also greatly improved the overall reliability of the method\cite{Vacek1999,Lethola19}, to the point that closed-shell systems seldom pose convergence problems in real-life applications.
Unfortunately, things are not so easy when one is dealing with open-shell systems, or even with closed-shell systems with small energy gaps between the highest occupied and lowest unoccupied molecular orbital (HOMO, LUMO). Furthermore, even in well-behaved cases, achieving a very tight convergence of the SCF density can be difficult, yet it is mandatory for applications involving high-order response properties or geometrical derivatives, especially if a post-HF method is used. Being frustrated by problematic SCF convergence is therefore a common experience for computational chemists. For these reasons, alternative strategies and robust numerical procedures are still the object of active investigation, despite SCF being possibly the most well-established technique in computational chemistry.
An alternative strategy to solving the SCF problem is to use a standard optimization technique and, in particular, a second-order method\cite{Bacskay1981,Bacskay1982}. In second-order methods, the SCF energy is parametrized as a function of non-redundant orbital rotations and expanded up to second order in a Taylor series, thus obtaining a quadratic energy model. The latter is then optimized to find a step, and the process is iterated until convergence. The straightforward Newton-Raphson (NR) method just described suffers, however, from a small convergence radius, which can cause erratic or even divergent behavior if the optimization is started far from a local minimum. It is possible to constrain the minimization so that the computed step is no larger than a user-defined trust radius. The trust-radius Newton method, known as Levenberg-Marquardt\cite{Fletcher1999} (LM) optimization, can be further coupled with an adaptive choice of the trust radius, based on the agreement of the quadratic model with the actual energy. This global strategy, originally proposed by Fletcher\cite{Fletcher1999} (FLM), guarantees convergence to the closest local minimum for a well-behaved function and can therefore be used to implement a black-box SCF procedure. One of its most attractive features is the quadratic convergence rate, which makes it well suited for applications where a very tight convergence of the SCF orbitals and density is required.
Computational cost is, of course, a fundamental aspect to keep in mind when designing an SCF code. A quadratically convergent SCF (QCSCF) scheme can be implemented in a direct fashion, where all the Hessian-vector products needed to compute the step are performed in the atomic orbital (AO) basis without assembling the full Hessian matrix. The operations required to perform such a matrix-vector product (MVP) are computationally equivalent to the direct construction of a Fock matrix. Therefore, QCSCF and SCF exhibit the same computational scaling and can both benefit from Cauchy-Schwarz screening in an integral-direct implementation, and even from linear-scaling techniques, in exactly the same way. Nevertheless, QCSCF requires, in general, a larger number of Fock matrix builds, and for well-behaved systems it is always going to be more expensive than standard SCF. On the other hand, QCSCF does not require the $\mathcal{O}(N^3)$ diagonalization of the Fock matrix, which makes it in principle advantageous in the asymptotic regime.
Despite the increased computational cost, many second-order SCF implementations are available. An implementation based on the same algorithm as the one described in the present paper is available in the Dalton suite of programs\cite{daltonpaper} for restricted (RHF) and high-spin restricted open-shell (ROHF) references, and a similar trust-region augmented Hessian implementation has recently been presented by Helmich-Paris\cite{Helmich-Paris2021} and implemented in the ORCA package\cite{Neese2020} for restricted and unrestricted (UHF) references. In the Gaussian 16 suite of programs\cite{g16}, an NR second-order method, combined with a linear search when far from the quadratic region, is available for RHF and UHF. Other approaches, either based on a quasi-Newton update\cite{Fischer1992} or on an orbital-Hessian-based preconditioned conjugate gradient\cite{Wong1995}, can be found in MOLCAS\cite{Karlstrom2003,Aquilante2016} and NWChem\cite{Apra2020}, respectively.
The QCSCF program described in this contribution has been implemented in the CFOUR suite of programs\cite{cfour,Matthews2020}. CFOUR is a quantum chemistry package devoted to high-level post-HF calculations, so the computational cost associated with solving the SCF equations is usually not a main concern for the typical application. We therefore do not pursue an integral-direct implementation, even though it would not present any additional difficulty, nor the use of linear-scaling techniques. To achieve some computational efficiency, we offer instead an implementation that can either proceed in a traditional fashion, reading pre-computed two-electron integrals from disk, or use their Cholesky decomposition\cite{Beebe1977,Roeggen1986,Roeggen2008,Koch2003,Weigend2009,Aquilante2011,Folkestad2019} (CD). The latter possibility comes as part of a long-term goal to deploy the CD machinery for subsequent post-HF calculations\cite{Matthews2020}, a goal that has been actively pursued by several developers of the CFOUR suite of programs and that has recently resulted in implementations for complete active space SCF calculations\cite{Nottoli2021} and for the calculation of NMR chemical shielding tensors at second-order M{\o}ller-Plesset perturbation theory (MP2) using gauge-including atomic orbitals\cite{Burger2021}. On the other hand, having a robust, almost black-box SCF implementation is particularly attractive for users of CFOUR who deal with open-shell systems, where the UHF and high-spin restricted open-shell HF (ROHF)\cite{Roothaan1960} SCF equations can be particularly hard to converge. In other words, the main goal of this implementation is to save human time rather than machine time.
The paper is organized as follows. In section \ref{sec:theo}, we briefly recapitulate the norm-extended optimization algorithm and its application to the SCF problem. In section \ref{sec:impl}, we discuss the implementation of the various quantities required for the QCSCF procedure with and without CD of the two-electron integrals. In section \ref{sec:app}, we present a few case studies, which represent prototypical applications of the QCSCF program and that illustrate possible problems and drawbacks that a user can encounter. We finally conclude the paper with a short summary.
\section{Norm-extended optimization of the SCF energy\label{sec:theo}}
In this section, we discuss the main principles of a QCSCF implementation based on the norm-extended optimization (NEO) algorithm. The NEO scheme, originally formulated and implemented by Jensen and coauthors for multiconfigurational SCF wavefunctions\cite{Jensen1983,Jensen1986}, is an elegant and efficient practical realization of the FLM second-order procedure that allows for a direct implementation. In this section, we focus our discussion on the high-spin ROHF optimization problem. In the following, we omit the high-spin specification and refer to the method simply as ROHF. RHF and UHF can be easily derived from the more general ROHF case. The ROHF determinant is parametrized in terms of $N_r$ orbital rotations
\begin{equation}
\label{eq:ROHFWF}
|\Phi\rangle = e^{-\hat{\kappa}}|0\rangle,
\end{equation}
where $|0\rangle$ is a reference determinant,
\begin{equation}
\hat{\kappa} = \sum_{ix} \kappa_{ix} \hat{E}^{-}_{ix} + \sum_{ia} \kappa_{ia} \hat{E}^{-}_{ia} + \sum_{xa}\kappa_{xa}\hat{E}^{-}_{xa}
\end{equation}
is the elementary orbital rotation operator, where the rotations mix internal (i, j, \ldots, doubly occupied) and active (x, y, \ldots, singly occupied) orbitals, internal and external (a, b, \ldots, empty) orbitals, and active and external orbitals, and $\hat{E}^{-}_{pq} = \hat{E}_{pq} - \hat{E}_{qp}$, where $\hat{E}_{pq}$ is a singlet excitation operator (p, q, \ldots, generic orbitals). The parameters of $\hat{\kappa}$ introduce a complete, non-redundant parametrization of the orbital rotations that connect the reference determinant $|0\rangle$ to any determinant not orthogonal to it. This fundamental result, known as Thouless' theorem,\cite{Thouless60} is the basis of direct optimization SCF techniques. We define a quadratic model of the SCF energy by expanding the expectation value of the Hamiltonian up to second order in $\kappa$:
\begin{equation}
\label{eq:QModel}
\mathcal{Q}(\kappa) = E_0 + \sum_{pq} \kappa_{pq} g_{pq} + \frac{1}{2}\sum_{pqrs} G_{pq,rs}\kappa_{pq}\kappa_{rs},
\end{equation}
where $E_0$ is the reference energy. The orbital-rotation gradient $g \in \mathbb{R}^{N_r}$ is
\begin{equation}
\label{eq:gradient}
g_{pq} = \left . \langle 0 | [\hat{E}^{-}_{pq},\mathcal{H}] | 0\rangle \right |_{\kappa=0},
\end{equation}
and the orbital-rotation Hessian $G \in \mathbb{R}^{N_r\times N_r}$ is given by
\begin{equation}
\label{eq:Hessian} G_{pq,rs} = \frac{1}{2}(1+P_{pq,rs})\left . \langle 0 | [\hat{E}^{-}_{pq},[\hat{E}^{-}_{rs},\mathcal{H}]] | 0\rangle \right |_{\kappa=0},
\end{equation}
where $P_{pq,rs}$ permutes the index pairs $pq$ and $rs$.
The FLM procedure is an iterative algorithm that computes an optimization step by minimizing the quadratic model in eq.~\ref{eq:QModel} under the constraint that the norm of the step is not larger than a user-defined trust radius $R_t$. This is achieved by introducing the constraint using a Lagrange multiplier $\nu$:
\begin{equation}
\label{eq:Lag}
\mathcal{L}(\kappa,\nu) = \mathcal{Q}(\kappa) - \frac{1}{2}\nu (\|\kappa\|^2 - R_t^2).
\end{equation}
The resulting Euler-Lagrange equations, also known as the LM equations, are
\begin{equation}
\label{eq:LM}
\left \{
\begin{array}{l}
(G - \nu I)\kappa = -g,\\
\|\kappa\|(\nu) = R_t.
\end{array}
\right .
\end{equation}
The trust radius is updated dynamically during the optimization based on the agreement between the energy and its quadratic model. Let $\Delta E$ be the actual energy variation after a LM step, i.e., $\Delta E = E(\kappa) - E_0$, and let $\Delta Q$ be the variation predicted by the quadratic model, i.e., $\Delta Q = \mathcal{Q}(\kappa) - E_0$, where $\kappa$ is the solution to the LM equations. Let $r = \Delta E/\Delta Q$ be the ratio of the two variations. If the ratio is negative, the energy is rising and the step is rejected: the trust radius is reduced by a factor (0.66 in our implementation) and a new step is computed. If the ratio is positive, the step is accepted. If $0<r\leq 0.25$, the agreement of the quadratic model with the energy is poor and the trust radius is reduced (again, in our implementation, by a factor 0.66). If $0.25 < r \leq 0.75$, the trust radius is left unchanged, while if $r > 0.75$ the trust radius is increased (by a factor 1.2 in our implementation). The algorithm is robust with respect to the choice of the parameters used to adapt the trust radius, and it can be proven that convergence to a local minimum is always achieved\cite{Fletcher1999}.
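The update logic just described can be sketched in a few lines; the function name and dense representation are ours, not CFOUR code, but the shrink and growth factors (0.66 and 1.2) are the ones quoted above:

```python
def update_trust_radius(r_t, dE, dQ, shrink=0.66, grow=1.2):
    """Fletcher-style trust-radius update (factors as in our implementation).

    r_t : current trust radius
    dE  : actual energy change after the step, E(kappa) - E_0
    dQ  : change predicted by the quadratic model, Q(kappa) - E_0
    Returns (new_radius, step_accepted).
    """
    r = dE / dQ                     # agreement ratio between energy and model
    if r < 0.0:                     # energy rose: reject the step and shrink
        return shrink * r_t, False
    if r <= 0.25:                   # poor agreement: accept, but shrink
        return shrink * r_t, True
    if r <= 0.75:                   # fair agreement: keep the radius
        return r_t, True
    return grow * r_t, True         # excellent agreement: expand
```

A rejected step (negative ratio) is recomputed with the smaller radius, without discarding the subspace information already accumulated.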
In principle, solving the LM equations requires knowledge of at least the lowest eigenvalue $\lambda_1$ of the Hessian, as it can be proven that, to reach a minimum, the constraint equation needs to be solved in the range $\nu \in (-\infty, \lambda_1)$. In other words, one would need to compute the lowest eigenvalue of the Hessian first and then solve the LM equations. The NEO algorithm is an efficient, combined realization of the two steps, achieved by introducing a gradient-scaled, augmented Hessian $L(\alpha) \in \mathbb{R}^{(N_r+1)\times(N_r+1)}$, defined as
\begin{equation}
\label{eq:NeoL}
L(\alpha) = \left (
\begin{array}{cc}
G & \alpha g \\
\alpha g^\dagger & 0
\end{array}
\right ).
\end{equation}
Let $P$ be an orthogonal projector such that $PL(\alpha)P = G$. The NEO step is given by
\begin{equation}
\label{eq:NEOStep}
\kappa = \frac{1}{\alpha s} Py,
\end{equation}
where
\begin{equation}
\label{eq:NeoEig}
L(\alpha) y = \nu_1 y,
\end{equation}
i.e., $y$ is the eigenvector of $L(\alpha)$ associated with its lowest eigenvalue $\nu_1$, $s = (1-P)y$, and $\alpha$ is obtained by numerically solving the one-dimensional equation $\|\kappa\| = R_t$. It can be shown that the NEO step solves the LM equations with a level-shift parameter $\nu = \nu_1$. The Hylleraas-Undheim-MacDonald theorem guarantees that, as $G = PL(\alpha)P$, $\nu_1 \leq \lambda_1$. Therefore, the NEO algorithm converges to a local minimum\cite{Jensen1983}.
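The mechanics of eqs.~\ref{eq:NEOStep} and \ref{eq:NeoEig} can be illustrated with a minimal dense NumPy sketch of a single NEO step at fixed $\alpha$. A random symmetric matrix stands in for the orbital Hessian; in the actual implementation the eigenproblem is solved iteratively and $\alpha$ is adjusted until $\|\kappa\| = R_t$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
G = A + A.T + 4.0 * np.eye(n)          # toy symmetric "orbital Hessian"
g = rng.standard_normal(n)             # toy "orbital gradient"

def neo_step(G, g, alpha):
    """One NEO step for a fixed gradient scaling alpha (dense, for clarity)."""
    n = len(g)
    L = np.zeros((n + 1, n + 1))       # augmented Hessian L(alpha)
    L[:n, :n] = G
    L[:n, n] = alpha * g
    L[n, :n] = alpha * g
    w, V = np.linalg.eigh(L)
    nu1, y = w[0], V[:, 0]             # lowest eigenpair of L(alpha)
    u, t = y[:n], y[n]                 # u = Py,  t = (1 - P)y
    kappa = u / (alpha * t)            # the level-shifted step
    return kappa, nu1

kappa, nu1 = neo_step(G, g, alpha=1.0)
```

Extracting the first block row of the eigenvalue equation and dividing by $\alpha t$ shows directly that $(G - \nu_1 I)\kappa = -g$, i.e., the LM equations with $\nu = \nu_1$, and the interlacing theorem guarantees $\nu_1 \leq \lambda_1$.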
The NEO algorithm requires one to compute the lowest eigenvalue of the augmented Hessian $L$, which can be done in a direct fashion using an iterative algorithm such as Davidson diagonalization. It can also be shown that, if the vector $(0,\ldots, 0,1)$ is kept in the subspace, it is possible to compute a new step, in case the previous one is rejected, without having to solve the eigenvalue problem in eq.~\ref{eq:NeoEig} again.
As a final note, we remark that, as soon as the optimization has reached a local region and the quadratic approximation becomes valid, the NEO step becomes fully equivalent to a standard NR step. Therefore, in the final stage of the optimization, we switch from solving the NEO equations to the plain NR ones.
\section{Implementation\label{sec:impl}}
Equations for the energy, the gradient, and the MVP can be conveniently obtained by introducing the generalized Fock matrix, $F$, whose elements are obtained as follows
\begin{align}
\label{eq:genfock}
&F_{ip} = 2(F^{I}_{ip} + F^{A}_{ip}),\\
&F_{xp} = Q_{xp} + F^{I}_{xp}, \\
&F_{ap} = 0,
\end{align}
with $F^{I}_{pq}$, $F^{A}_{pq}$, and $Q_{xp}$ being the inactive Fock matrix, the active Fock matrix, and the Q matrix, respectively. These matrices can be efficiently computed in the AO basis
\begin{align}
&F^{I}_{\mu\nu} = h_{\mu\nu} + \sum_{\rho\sigma}P^{I}_{\rho\sigma}\left[(\mu\nu|\rho\sigma) - \frac{1}{2}(\mu\sigma|\rho\nu)\right],\\
&F^{A}_{\mu\nu} = \sum_{\rho\sigma}P^{A}_{\rho\sigma}\left[(\mu\nu|\rho\sigma) - \frac{1}{2}(\mu\sigma|\rho\nu)\right],\\
&Q_{\mu\nu} = \sum_{\rho\sigma}P^{A}_{\rho\sigma}\left[(\mu\nu|\rho\sigma) - (\mu\sigma|\rho\nu)\right],
\end{align}
where we have used Mulliken notation for the two-electron integrals and Greek indices to refer to the AOs.
Here, $P^{I}_{\mu\nu} = 2\sum_{i}C_{\mu i}C_{\nu i}$ and $P^{A}_{\mu\nu}=\sum_{u}C_{\mu u}C_{\nu u}$ are the inactive and active one-body density matrices written in the AO basis, respectively. As is evident from the equations above, the actual implementation requires only minor modifications to the customary routine that assembles the RHF Fock matrix.
The generalized Fock matrix is transformed into the MO basis and then used to calculate the ROHF energy
\begin{equation}
E_{\rm ROHF} = \sum_{i}\left(h_{ii} + \frac{1}{2}F_{ii}\right) + \frac{1}{2}\sum_{u}\left(h_{uu}+F_{uu}\right).
\end{equation}
Furthermore, its anti-symmetric part is used to compute the gradient as follows
\begin{equation}
g_{pq} = 2(F_{pq}-F_{qp}),
\end{equation}
where the only relevant rotations are the ones mixing orbitals belonging to different classes (i.e., internal, active, external).
The eigenvalue problem in eq. \ref{eq:NeoEig} is solved via Davidson diagonalization, while the NR linear system is solved using an iterative preconditioned conjugate-gradient (PCG) solver. Both algorithms are implemented in a matrix-free spirit: they only require one to perform MVPs, and never to build and store the full matrix in memory.
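For illustration, a matrix-free Davidson solver for the lowest eigenpair with a diagonal preconditioner might look as follows. This is a schematic NumPy sketch with names of our choosing, not the production code, and for brevity it recomputes all matrix-vector products at every iteration instead of caching them:

```python
import numpy as np

def davidson_lowest(matvec, diag, n, tol=1e-8, max_iter=100):
    """Matrix-free Davidson iteration for the lowest eigenpair of a
    symmetric matrix A available only through matvec(v) = A @ v.

    diag is the (possibly approximate) diagonal of A, used as preconditioner.
    """
    V = np.zeros((n, 0))                       # subspace basis
    v = np.random.default_rng(1).standard_normal(n)
    theta, x = 0.0, v
    for _ in range(max_iter):
        V = np.column_stack([V, v])
        V, _ = np.linalg.qr(V)                 # keep the subspace orthonormal
        # A production code would cache A @ V from previous iterations.
        AV = np.column_stack([matvec(V[:, k]) for k in range(V.shape[1])])
        H = V.T @ AV                           # subspace (Rayleigh) matrix
        w, S = np.linalg.eigh(H)
        theta, s = w[0], S[:, 0]               # lowest Ritz pair
        x = V @ s
        r = AV @ s - theta * x                 # residual vector
        if np.linalg.norm(r) < tol:
            break
        denom = diag - theta                   # diagonal (Davidson) preconditioner
        denom[np.abs(denom) < 1e-8] = 1e-8     # guard against division by ~0
        v = r / denom
    return theta, x
```

The same matvec routine serves both solvers: in the NEO case it applies the augmented Hessian $L(\alpha)$, in the NR case the plain Hessian $G$.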
It is important to stress that the overall algorithm works in the MO basis, as this allows us to exploit the diagonally dominant character of the MO rotation Hessian.
In the MO basis, the MVP can be written as
\begin{equation}
\label{eq:MVP}
b_{pq} = 2(\tilde{F}_{pq}-\tilde{F}_{qp}) + \frac{1}{2}\sum_{r}\left(\kappa_{pr}g_{rq} - g_{pr}\kappa_{qr}\right),
\end{equation}
where $\kappa$ is a trial vector in Davidson's algorithm and where we introduce the one-index transformed generalized Fock matrix $\tilde{F}$, which is defined as in eq. \ref{eq:genfock} but using intermediate matrices computed with density matrices dressed with the trial vector. To avoid transforming the two-electron integrals into the MO basis, we compute the MVP in the AO basis. In particular, we define transformed and symmetrized one-body density matrices:
\begin{align}
&\tilde{P}^{I}_{\mu\nu} = 2\sum_{iq}C_{\mu i}\kappa_{iq}C_{\nu q} + 2\sum_{iq}C_{\nu q}\kappa_{qi}C_{\mu i},\\
&\tilde{P}^{A}_{\mu\nu} = \sum_{uq}C_{\mu u}\kappa_{uq}C_{\nu q} + \sum_{uq}C_{\nu q}\kappa_{qu}C_{\mu u},
\end{align}
and use them to assemble the one-index transformed internal and active Fock matrices and Q matrix. These are in turn used to build the one-index transformed generalized Fock matrix, which is then finally transformed back to the MO basis. Therefore, computing the required MVP exhibits a computational cost similar to that of a standard SCF iteration and, more importantly, the same scaling with respect to the system's size. Specifically, the leading operations in both the SCF and QCSCF algorithms scale as $\mathcal{O}(N^{4})$, where $N$ is the number of basis functions. However, each QCSCF iteration requires the solution of either the NEO or the NR equations with an iterative solver, thus increasing the prefactor of a QCSCF calculation with respect to a standard SCF one.
When using the CD, the two-electron integrals matrix -- written in the AO basis -- is approximated as follows
\begin{equation}
(\mu\nu|\rho\sigma) \simeq \sum_{K}^{N_{\rm ch}} L^{K}_{\mu\nu}L^{K}_{\rho\sigma},
\end{equation}
where $L^{K}_{\mu\nu}$ is a Cholesky vector and $N_{\rm ch}$ is the rank of the decomposition.
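The decomposition itself is conveniently obtained with a threshold-based pivoted Cholesky algorithm, which only touches the diagonal and the pivot columns of the integral matrix. The following NumPy sketch (function name ours; a generic symmetric positive semidefinite matrix stands in for the integral matrix with composite indices) illustrates the idea:

```python
import numpy as np

def cholesky_eri(M, tau=1e-4):
    """Threshold-based pivoted (incomplete) Cholesky decomposition.

    M plays the role of the two-electron integral matrix with composite
    row/column indices (mu nu), (rho sigma); tau is the decomposition
    threshold. Returns L such that M ~= L @ L.T to accuracy tau.
    """
    n = M.shape[0]
    d = np.diag(M).copy()              # diagonal of the residual matrix
    L = np.zeros((n, 0))
    while d.max() > tau and L.shape[1] < n:
        p = int(np.argmax(d))          # pivot on the largest residual diagonal
        col = (M[:, p] - L @ L[p, :]) / np.sqrt(d[p])
        L = np.column_stack([L, col])  # one new Cholesky vector per pass
        d -= col**2
        d[d < 0.0] = 0.0               # guard against round-off
    return L
```

Since the residual matrix stays positive semidefinite, its largest off-diagonal element is bounded by the largest residual diagonal, so every integral is reproduced to within the threshold $\tau$.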
In this framework, it is convenient to compute the Coulomb and exchange contributions to the various matrices separately\cite{Aquilante2007,Aquilante2011}.
The Coulomb contribution can be computed by performing a straightforward contraction of a Cholesky vector with the one-body density matrix and then multiplying the resulting factor by a Cholesky vector. On the other hand, in order to compute the exchange contributions, it is convenient to first half-transform the Cholesky vectors into the MO basis, and then assemble the exchange contribution. Considering the inactive Fock matrix as an example we have
\begin{equation}
F^{I}_{\mu\nu} = h_{\mu\nu} + \sum_{K}^{N_{\rm ch}}\left(\sum_{\rho\sigma}P^{I}_{\rho\sigma}L^{K}_{\mu\nu}L^{K}_{\rho\sigma}
- \sum_{i}L^{K}_{\mu i}L^{K}_{\nu i}\right)
\end{equation}
where $L^{K}_{\mu i} = \sum_{\nu}C_{\nu i}L^{K}_{\mu\nu}$.
A similar procedure is also applied to the calculation of the one-index transformed matrices.
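The separate Coulomb and exchange assembly just described can be condensed into a few tensor contractions. The sketch below (closed-shell inactive Fock matrix only; function and variable names are illustrative, not CFOUR routines) shows the two key steps: the Coulomb part contracts the density with each Cholesky vector first, while the exchange part half-transforms the vectors to the occupied MO basis:

```python
import numpy as np

def fock_inactive_cd(h, Lchol, C_occ):
    """Closed-shell inactive Fock matrix from Cholesky vectors.

    h      : core Hamiltonian, shape (n, n)
    Lchol  : Cholesky vectors, shape (nch, n, n), each symmetric in mu,nu
    C_occ  : doubly occupied MO coefficients, shape (n, nocc)
    """
    P = 2.0 * C_occ @ C_occ.T                        # inactive AO density
    # Coulomb: contract each vector with the density, then expand back.
    J = np.einsum('Kmn,K->mn', Lchol,
                  np.einsum('Krs,rs->K', Lchol, P))
    # Exchange: half-transform the vectors to the occupied MO basis first.
    Lmo = np.einsum('Kmn,ni->Kmi', Lchol, C_occ)
    K = np.einsum('Kmi,Kni->mn', Lmo, Lmo)
    return h + J - K
```

The half transformation reduces the cost of the exchange contraction from $\mathcal{O}(N_{\rm ch}N^3)$ per index pair to $\mathcal{O}(N_{\rm ch}N^2 N_{\rm occ})$, which is the reason for treating the two contributions separately.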
The restricted and unrestricted SCF implementations can be trivially obtained as sub-cases of the ROHF one. Second-order RHF is simply derived by neglecting the contributions of the active orbitals, that is, by setting the active density matrix to zero. Under these circumstances, only the inactive Fock matrix has to be considered. Regarding the second-order implementation of UHF, we have to take into account two different bases -- one for the alpha and one for the beta electrons. As a result, we have two different sets of orbital rotation parameters
\begin{equation}
\hat{\kappa} = \sum_{ia}\kappa^{\alpha}_{ai}\hat{E}^{\alpha}_{ai}
+ \sum_{ia}\kappa^{\beta}_{ai}\hat{E}^{\beta}_{ai},
\end{equation}
where $\hat{E}^{\alpha}_{pq}=\hat{a}^{\dagger}_{p\alpha}\hat{a}_{q\alpha}$ and $\hat{E}^{\beta}_{pq}=\hat{a}^{\dagger}_{p\beta}\hat{a}_{q\beta}$. Furthermore, two different density matrices are obtained, namely $^{\alpha}P^{I}_{\mu\nu}=\sum_{i}C^{\alpha}_{\mu i}C^{\alpha}_{\nu i}$ and $^{\beta}P^{I}_{\mu\nu}=\sum_{i}C^{\beta}_{\mu i}C^{\beta}_{\nu i}$. These are used to assemble the alpha and beta Fock matrices
\begin{align}
&^{\alpha}F^{I}_{\mu\nu} = h_{\mu\nu} + \sum_{\rho\sigma} {^{\alpha}}P^{I}_{\rho\sigma}
\left[(\mu\nu|\rho\sigma) - (\mu\sigma|\rho\nu)\right] +
\sum_{\rho\sigma}{^{\beta}}P^{I}_{\rho\sigma}(\mu\nu|\rho\sigma),\\
&^{\beta}F^{I}_{\mu\nu} = h_{\mu\nu} + \sum_{\rho\sigma}{^{\beta}}P^{I}_{\rho\sigma}
\left[(\mu\nu|\rho\sigma) - (\mu\sigma|\rho\nu)\right] +
\sum_{\rho\sigma}{^{\alpha}}P^{I}_{\rho\sigma}(\mu\nu|\rho\sigma).
\end{align}
Accordingly, $^{\alpha}F^{I}$ and $^{\beta}F^{I}$ are used to compute the alpha and beta parts of the gradient respectively.
A summary of the main equations for the RHF and UHF references can be found in Table \ref{tab:HFrefs}.
\begin{table}[h]
\centering
\begin{tabular}{cccc}
\toprule
Reference & Energy & Gradient & MVP \\
\midrule
RHF & $\sum_{i}\left(h_{ii} + F^{I}_{ii}\right)$
& $g_{ai}=-4F^{I}_{ia}$
& $b_{ai} = \tilde{g}_{ai}$\\[5mm]
\multirow{2}{*}{UHF}& \multirow{2}{*}{$\sum_{i}\left(h_{ii} + \tensor[^{\alpha}]{F}{}^{I}_{ii} + \tensor[^{\beta}]{F}{}^{I}_{ii}\right)$} &
$g^{\alpha}_{ai}=-4\tensor[^{\alpha}]{F}{}^{I}_{ia}$ &
$b^{\alpha}_{ai} = \tilde{g}^{\alpha}_{ai}$\\
&
&$g^{\beta}_{ai}=-4\tensor[^{\beta}]{F}{}^{I}_{ia}$&
$b^{\beta}_{ai} = \tilde{g}^{\beta}_{ai}$\\
\bottomrule
\end{tabular}
\caption{Leading equations used in the second-order implementation of RHF and UHF. The shorthand $\tilde{g}_{ai}$ denotes a gradient built from one-index transformed matrices; it is the only contribution to the MVP, since the commutator-like term of eq.~\ref{eq:MVP} vanishes.}
\label{tab:HFrefs}
\end{table}
We conclude this section with an important remark. Both the Davidson diagonalization and the PCG solver require a preconditioner, the most common choice being the inverse diagonal of the matrix. This is a good choice, as the orbital-rotation Hessian is diagonally dominant in the MO basis. However, assembling the exact diagonal of the Hessian can be expensive. This is not a problem for either RHF or UHF, as the leading term of the electronic Hessian's diagonal is given by the difference $F^{I}_{aa}-F^{I}_{ii}$: the two-electron integral contributions to the diagonal can therefore be safely neglected. Unfortunately, the same is not the case for ROHF. In particular, the diagonal elements associated with the inactive-active and active-external rotations are poorly approximated by the diagonal elements of the inactive Fock matrices. A much better approximation can be obtained by noting that some of the two-electron contributions to the aforementioned diagonal elements are given by $Q_{ii}$ and $Q_{aa}$. From the theoretical definition of the $Q$ matrix, such blocks would not exist; nevertheless, we have direct access to them since we compute $Q$ in the AO basis. In this way, a good approximation to the Hessian's diagonal can be computed at no additional cost. The inclusion of such terms is fundamental to achieve good convergence for both the Davidson and PCG algorithms.
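For completeness, the PCG solver used for the NR linear system can be sketched in a matrix-free form with the approximate Hessian diagonal as preconditioner (a schematic NumPy illustration, not the production implementation):

```python
import numpy as np

def pcg(matvec, b, diag, tol=1e-10, max_iter=200):
    """Matrix-free preconditioned conjugate gradient for A x = b.

    matvec : function computing A @ v without forming A
    diag   : approximate diagonal of A, used as the preconditioner
    """
    x = np.zeros_like(b)
    r = b.copy()                     # residual b - A x for x = 0
    z = r / diag                     # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = r / diag
        rz_new = r @ z
        p = z + (rz_new / rz) * p    # new search direction
        rz = rz_new
    return x
```

The quality of `diag` only affects the iteration count, not the converged solution, which is why the approximate ROHF diagonal discussed above is sufficient.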
\section{Numerical results\label{sec:app}}
In this section, we present numerical results obtained with the quadratic SCF implementation described in this paper. First, we apply the strategy to medium-sized, hard-to-converge cases to show the numerical stability of the algorithm. Then, we apply the CD version of the code to larger molecules, to show that, thanks to the compression afforded by the CD technique, calculations can be performed efficiently even for very large systems. Finally, we discuss the numerical issues that can arise even when using a quadratically convergent implementation and give some suggestions to rationalize and troubleshoot them.
The QCSCF algorithm has been implemented in the CFOUR suite of programs\cite{cfour,Matthews2020}, which has been used for all the following QCSCF calculations. In our setup, a few standard SCF iterations are performed to generate a reasonable starting guess for the second-order algorithm. While the second-order optimization could be used for the entire calculation, this is seldom a good idea, as it would require a large number of expensive quadratic steps to reach the local region. On the other hand, a small number of standard SCF iterations is usually enough to reach a good starting point, from which the strong and fast convergence of a second-order scheme can be efficiently exploited.
Specifically, starting from a guess generated by diagonalizing the core Hamiltonian, we perform up to 30 SCF iterations, applying a damping of 0.5 to the first 5 iterations and then using Pulay's DIIS\cite{Pulay1980} to accelerate convergence. We switch to the quadratic algorithm as soon as the root mean square deviation of the density-matrix increment is lower than $10^{-1}$ and its maximum deviation is lower than $1$.
\subsection{Quadratically convergent calculations on small and large molecules}
In order to show the black-box nature of the proposed algorithm, we tested it on two systems that exhibit problematic convergence patterns. The first system is $\rm Ti_{2}O_{4}$ in its $D_{2\textit{h}}$ geometry, where the two titanium atoms lie on one of the three $C_{2}$ axes; the second is the phenoxyl radical. The first system is used as a template to troubleshoot SCF convergence issues\footnote{See \url{https://www.scm.com/doc/ADF/Examples/SCF\_Ti2O4.html}} in the Amsterdam Density Functional (ADF) quantum-chemistry package\cite{ADF}. We compute, using Dunning's aug-cc-pVTZ basis set\cite{Kendall1992}, both a singlet and a triplet wavefunction, the latter using a ROHF reference. The second system, a doublet in its ground state, is usually well behaved, but we chose a particular geometry at which two electronic states nearly intersect. Again, we use a ROHF reference, employing Dunning's cc-pVDZ basis set\cite{Dunning1989}.
We report in Figure \ref{fig:TiPhe} the convergence profiles of the calculations on Ti$_2$O$_4$ (left panel) and on the phenoxyl radical (right panel). Both calculations converge reasonably smoothly in a limited number of iterations. It is interesting to note that we could not get the phenoxyl radical ROHF calculation to converge with the standard SCF code in CFOUR, which uses the algorithm described in Ref. \cite{Binkley1974}, despite various attempts with several combinations of DIIS and damping parameters. In other words, using a QCSCF program can turn a labor-intensive, possibly fruitless activity into a simple, routine application, at the cost of increased computer time.
\begin{figure}
\centering
\includegraphics[width=.9\textwidth]{images/TiPhen.pdf}
\caption{Convergence profile for Ti$_2$O$_4$, both for the RHF (blue) and ROHF (orange) references, and of the phenoxyl radical respectively on the left and right panel. The QC algorithm started at iteration 16 for triplet Ti$_2$O$_4$, 10 for singlet Ti$_2$O$_4$, and 12 for the phenoxyl radical. The final electronic energies for the singlet and triplet Ti$_2$O$_4$ are -1996.144~738~811 and -1996.100~447~937 $E_{h}$ respectively while the one for the phenoxyl radical is -304.953~646~044 $E_{h}$.}
\label{fig:TiPhe}
\end{figure}
As a second example of standard use of a QCSCF implementation, we optimize the RHF wavefunction of a small organic molecule, paranitroaniline (PNA), using Dunning's aug-cc-pVDZ basis set\cite{Dunning1989}. This is not a problematic system, as convergence can easily be achieved with a standard SCF code, but it becomes an issue for applications where a very tight SCF convergence is required. This is the case when one is interested in computing high-order molecular properties using a post-HF method. A typical example is the calculation of accurate cubic and quartic force fields, as required for the perturbative treatment of anharmonicity. In such applications, the cubic and quartic force constants are in general computed by numerically differentiating analytical Hessians, the latter computed for instance using coupled-cluster (CC) theory. To achieve good numerical accuracy, a tight convergence of the SCF and CC amplitude equations, as well as of the various coupled-perturbed equations, is paramount: this can be difficult for a regular SCF code when using extended basis sets. In Figure \ref{fig:pna}, we compare the convergence profile of the standard SCF code in CFOUR with that of QCSCF. The standard SCF code has no issue achieving a reasonable convergence ($10^{-7}$ to $10^{-8}$ in the RMS norm of the density increment), but then stagnates and oscillates. On the other hand, the quadratically convergent code effortlessly achieves the required tight convergence ($10^{-11}$ in the RMS norm of the gradient).
\begin{figure}
\centering
\includegraphics[width=.7\textwidth]{images/PNA.pdf}
\caption{Convergence profile for the PNA molecule for the SCF (orange) and QC-SCF (blue) code using Dunning's aug-cc-pVDZ basis set. The QCSCF code performed 10 regular SCF iterations before switching to the QC algorithm. The final electronic energy is -489.277~111~404 $E_{h}$.}
\label{fig:pna}
\end{figure}
The systems proposed so far are medium-sized, and provide examples of applications that are typical for the users of the CFOUR suite of programs. The standard implementation that relies on precomputed two-electron integrals written on disk has been used in all cases.
\begin{table}[ht]
\centering
\begin{tabular}{lcccc}
\toprule
molecule & $^{1}\rm A_{1\textit{g}}$ & It. & $^{3}\rm A_{\textit{u}}$ & It. \\
\midrule
$\rm Ti_{2}O_{4}$ & -1996.144~738~811 & 6 & -1996.100~447~937 & 5 \\
\bottomrule
\end{tabular}
\caption{Results for the RHF and ROHF calculations on $\rm Ti_{2}O_{4}$ with the aug-cc-pVTZ basis set. The energies are in Hartree; next to each energy is the number of iterations required by the second-order optimization. Both calculations were performed without the CD, exploiting point-group symmetry.}
\label{tab:Ti2O4}
\end{table}
As the next examples, we tested the algorithm on three larger systems. Here, the CD of the two-electron integrals has been exploited, using a decomposition threshold of 10$^{-4}$. The first molecule is an aqua thiolate iron(III) porphyrin complex (HSFe$^{\rm III}$OH$_2$) in its doublet state, used as a model system for the active site of cytochrome P450 in Ref. \cite{Groenhof2005}. The geometry can be found in the Supporting Information of the aforementioned paper. The second calculation was done on the triplet state of a binuclear copper magnet (CUAQUACO2), whose geometry has been taken from Ref. \cite{Feng2019}. Finally, we optimized the singlet state of a chlorophyll molecule where the phytyl tail has been substituted with a hydrogen atom to reduce the computational cost. The geometry was optimized at the B3LYP/6-31G(d)\cite{Becke1993,Hehre1972} level, using the Gaussian 16\cite{g16} suite of programs. In Figure \ref{fig:large}, a representation of the three systems under study is shown. The optimization of the open-shell systems, i.e., HSFe$^{\rm III}$OH$_2$ and CUAQUACO2, was carried out with ROHF. For all the calculations we adopted the cc-pVTZ basis set\cite{Dunning1989}. In Table \ref{tab:large}, we report the number of basis functions, the number of iterations required to converge the second-order algorithm, the time needed to assemble the Fock matrix ($F^{I}$, $F^{A}$, and $Q$ for the ROHF calculations), and the total CPU wall time for the whole calculation (i.e., preliminary SCF iterations and quadratically convergent ones). All the calculations were performed on a computer node equipped with two Xeon Gold 5120 CPUs, for a total of 28 cores, and 128 GB of memory. The time spent performing the MVP is not reported, since it is similar to the time needed to compute the Fock matrix.
\begin{figure}
\centering
\includegraphics[width=.85\textwidth]{images/large_sys.png}
\caption{Molecular representations of the three molecules used to test the CD based QCSCF algorithm. From the left: chlorophyll, CUAQUACO2, and HSFe$^{\rm III}$OH$_2$.}
\label{fig:large}
\end{figure}
\begin{table}[ht]
\centering
\begin{tabular}{lccccc}
\toprule
molecule & N$_b$ & It. & Fock (s) & Total (min)\\
\midrule
HSFe$^{\rm III}$OH$_2$ & 1062 & 13 & 42 & 110.4 \\
CUAQUACO2 & 900 & 8 & 38 & 45.4 \\
chlorophyll & 1624 & 6 & 25 & 26.3 \\
\bottomrule
\end{tabular}
\caption{Large systems results. For each of them, we report the number of basis functions, the number of iterations required by the second-order optimization, the time needed to build a Fock matrix (in seconds), and the total CPU wall time for the whole SCF calculation (in minutes).}
\label{tab:large}
\end{table}
\subsection{What can go wrong and how to deal with that?}
Despite the rigorous and sound convergence properties of a quadratic optimization algorithm, a few issues can still arise in a calculation. First, one must mention that the convergence properties of the FLM algorithm hold in infinite precision. While this is usually not an issue on double-precision machines, the effects of finite arithmetic precision become apparent when trying to achieve very tight convergence of the SCF, especially when using very large basis sets comprising diffuse functions. This can be rationalized in terms of the overall conditioning of the problem. If the basis set presents near linear dependencies, i.e., if the overlap matrix has small eigenvalues, the numerical precision of the computed quantities degrades. As a rule of thumb, one cannot expect to achieve a convergence tighter than the product of the machine precision and the ratio between the largest and smallest eigenvalues of the overlap matrix. In practice, this means that it is possible to converge the SCF down to $10^{-11}$--$10^{-12}$, depending on the basis set, which is usually more than sufficient for high-precision applications.
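The rule of thumb can be stated in a few lines of NumPy (the function name is ours; the estimate is, of course, only an order-of-magnitude guide):

```python
import numpy as np

def attainable_scf_threshold(S):
    """Rule-of-thumb tightest attainable SCF convergence for an overlap
    matrix S: machine epsilon times the condition number of S."""
    w = np.linalg.eigvalsh(S)               # eigenvalues in ascending order
    cond = w[-1] / w[0]                     # largest over smallest eigenvalue
    return np.finfo(float).eps * cond
```

For a diffuse basis with near linear dependencies, say a condition number of $10^{5}$, this gives roughly $2\times10^{-11}$, consistent with the range quoted above.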
That being said, the fact that the QCSCF will converge does not imply that it will always converge to the desired solution. There are two common scenarios that can present themselves. First, the QCSCF code will converge to a state of the same symmetry as its starting point, but not the desired state. For well-behaved systems, the few SCF iterations performed in the beginning are usually enough to establish, using the Aufbau principle, the right occupation. The user should, however, be wary that there is no guarantee that this will work automatically: it is therefore a good practice to specify explicitly the occupation, or the symmetry of the state, to ensure the convergence of the optimization procedure to the desired wavefunction.
A second case that requires some attention from the user is, in general, any UHF calculation. It is well known that the solution to the UHF problem is not unique and that many solutions with different levels of spin contamination can exist. A minimization algorithm will converge to the closest local minimum, so there is no guarantee that the QCSCF solution will be the global minimum. It is the experience of the authors that, compared with a standard SCF code, a QCSCF optimization tends to converge to the solution lowest in energy and with the highest spin contamination. After a UHF calculation, the user therefore needs to check whether the obtained solution is acceptable.
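The standard quantity for such a check is the deviation of $\langle S^2\rangle$ from $S_z(S_z+1)$. The following sketch (toy orbitals in an orthonormal basis; the function and rotation are our own illustration, not CFOUR code) evaluates the usual UHF expression $\langle S^2\rangle = S_z(S_z+1) + N_\beta - \sum_{ij}|\langle\phi_i^\alpha|\phi_j^\beta\rangle|^2$:

```python
import numpy as np

def s2_uhf(C_a, C_b, n_a, n_b):
    """<S^2> for a UHF determinant, MO coefficients in an orthonormal AO basis.
    <S^2> = Sz(Sz+1) + N_beta - sum_ij |<phi_i^alpha|phi_j^beta>|^2."""
    sz = 0.5 * (n_a - n_b)
    ov = C_a[:, :n_a].T @ C_b[:, :n_b]   # alpha-beta occupied MO overlaps
    return sz * (sz + 1.0) + n_b - np.sum(ov**2)

# Restricted-like case: identical alpha and beta orbitals -> no contamination
C = np.eye(4)
s2_pure = s2_uhf(C, C, 2, 2)             # closed-shell singlet, <S^2> = 0

# Spin-polarized case: mix a beta occupied orbital with a virtual one
theta = 0.3
R = np.eye(4)
R[1, 1] = R[2, 2] = np.cos(theta)
R[1, 2] = np.sin(theta)
R[2, 1] = -np.sin(theta)
s2_cont = s2_uhf(C, C @ R, 2, 2)         # <S^2> = sin^2(theta) > 0
```

Any sizeable deviation of `s2_cont` from zero (for a singlet) signals the kind of spin contamination discussed above.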
A third, less common, case that can be encountered is convergence to an unstable solution. This typically happens when performing a calculation on a symmetric system without enforcing point-group symmetry and is somewhat similar to what has already been discussed above about convergence to a state with the wrong occupation. The second-order code should in principle always converge to a minimum; however, there are numerical reasons that may leave the optimization stuck at a saddle point. This is a consequence of the parametrization choice, as Thouless' theorem\cite{Thouless60} specifically states that the determinant one can obtain with a rotation parametrized as in eq. \ref{eq:ROHFWF} cannot be orthogonal to the reference. Therefore, if the QCSCF optimization starts from a determinant orthogonal to the minimum, whether due to imposed or to numerical symmetry constraints, it is prevented from reaching the minimum itself. If the minimum has the same symmetry as the starting point -- including when no point-group symmetry is enforced -- this can easily be avoided by perturbing, at the beginning of the second-order optimization, the MO gradient with random noise. On the other hand, if symmetry is enforced and there exists a broken-symmetry solution, the latter cannot be reached with a second-order method. This case can be diagnosed by performing a stability analysis and resolved, if the broken-symmetry solution is of interest, by enforcing symmetry in a lower subgroup or by removing the symmetry constraints altogether.
We illustrate all the discussed problematic behaviors using the triplet state of \textit{ortho}-benzyne (o-benzyne) as an example. Such a molecule exhibits $C_{2\textit{v}}$ point-group symmetry and has a singlet ground electronic state. This molecule has also been used by Tsuchimochi and Scuseria as a test case for their constrained UHF method\cite{Tsuchimochi2010,Tsuchimochi2011}. We follow their procedure and optimize the geometry using density-functional theory, namely, the B3LYP functional\cite{Becke1993} in conjunction with Pople's 6-31G(d) basis set\cite{Hehre1972}. The geometry optimization has been performed using the Gaussian 16\cite{g16} suite of programs.
On the optimized geometry, we compute the UHF wavefunction with Dunning's cc-pVTZ basis set\cite{Dunning1989} using four different setups, namely i) enforcing point-group symmetry, but without specifying the occupation numbers; ii) enforcing point-group symmetry and specifying the occupation numbers; iii) without symmetry, using the normal setup; iv) without symmetry, adding a small perturbation (0.01 times the gradient's norm times a uniformly distributed random number between -0.5 and 0.5) to the gradient at the beginning of the optimization.
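The perturbation used in setup iv) can be sketched as follows (a standalone illustration with a generic gradient vector; the actual CFOUR routine may differ in details):

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_gradient(grad, scale=0.01):
    """Setup iv of the text: add to the MO gradient, at the start of the QC
    optimization, noise of size scale * ||grad|| with uniformly distributed
    random factors in [-0.5, 0.5]."""
    noise = scale * np.linalg.norm(grad) * rng.uniform(-0.5, 0.5, size=grad.shape)
    return grad + noise

g = np.ones(10)                      # stand-in for an MO gradient
g_pert = perturb_gradient(g)
max_shift = np.max(np.abs(g_pert - g))
```

The noise is tiny relative to the gradient norm, so it does not disturb a well-behaved optimization, but it is enough to break an accidental (numerical) symmetry of the starting point.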
The results, together with a short comment, are reported in table \ref{tab:o-benz}.
\begin{table}[ht]
\centering
\begin{tabular}{lccr}
\toprule
Setup & Energy & Spin contamination & Comment \\
\midrule
i&-229.261~071 & 0.0204 & wrong occupation \\
ii&-229.468~767 & 0.0277 & symmetric solution\\
iii&-229.261~071 & 0.0204 & unstable \\
iv&-229.471~139 & 0.4156 & symmetry broken\\
\bottomrule
\end{tabular}
\caption{Results of UHF calculations for \textit{o}-benzyne with four different setups. Setup i): point-group symmetry enforced, no initial occupation given. Setup ii): point-group symmetry enforced, initial occupation given in input. Setup iii): no symmetry enforced. Setup iv): no symmetry enforced, the gradient is perturbed at the beginning of the second-order optimization. We report the electronic energy (in Hartree), the spin contamination of the wavefunction, and a short comment that describes the solution found.}
\label{tab:o-benz}
\end{table}
In the first setup, the initial SCF iterations are tasked with computing the occupation of the wavefunction using the Aufbau principle. When convergence of the preliminary SCF is reached, the resulting electronic configuration is the following: 10 doubly occupied $a_1$ orbitals, one doubly and one singly occupied $b_1$ orbital, 8 doubly occupied $b_2$ orbitals, and a singly occupied $a_2$ orbital. QCSCF then converges without problems to the given state, which is however not the ground triplet state. In the second setup, we specify in input the correct occupations (9 doubly and one singly occupied $a_1$ orbitals, 2 doubly occupied $b_1$ orbitals, 7 doubly and one singly occupied $b_2$ orbitals, and one doubly occupied $a_2$ orbital), which results in the correct behaviour. We note that, while the states that result from the first two setups both have $B_2$ symmetry, the occupations make them strictly orthogonal. In other words, no symmetry-allowed orbital rotation can link the two states, therefore making it impossible for the QCSCF code to converge from one to the other.
In the third setup, no symmetry is enforced; however, the initial SCF iterations make the QC procedure start very close to an unstable stationary point, at which the optimization gets stuck. It is interesting to note that the unstable solution found with this setup is exactly the same as that found with setup i), i.e., enforcing symmetry but without specifying the occupation. This suggests that the QC optimizer started from a wavefunction that is orthogonal to the actual minimum. In other words, even though symmetry was not enforced, it was still present numerically, which explains the observed behavior.
In the fourth setup, we repeated the calculation without symmetry, but added a small, random perturbation to the gradient at the beginning of the QCSCF process. The small amount of random noise allowed the QCSCF optimizer to connect to the orthogonal subspace where the minimum lies and converge to it without particular effort. It is interesting to note that the optimization found a solution that has a lower energy than the symmetric one, and a much larger spin contamination. This is the same solution found by Tsuchimochi and Scuseria. A comparison between the UHF solutions found with the various setups is particularly interesting to understand the behavior of the QCSCF optimization. Performing a stability analysis, the symmetric solution from the second setup is found to exhibit an instability towards a symmetry-broken UHF solution, which is exactly the one found with the fourth setup. The latter is in turn stable. All these results confirm numerically what was discussed above. In setups i) and iii), QCSCF converges to a solution with the wrong occupation -- which is explicitly enforced in the first case, and is a numerical consequence of the preliminary SCF iterations in the third. In the second setup, the minimum within the symmetry is found. In the fourth setup, as no symmetry is enforced and the symmetry is artificially broken with a small, random perturbation, a true minimum is found, which corresponds to the stable, symmetry-broken solution.
It is worth commenting that the lowest-energy, stable solution exhibits a remarkably large spin contamination, besides being symmetry-broken: whether this is the solution of interest, or whether the symmetric one found with the second setup is preferable, is ultimately a choice left to the user.
We conclude this section with some considerations concerning computational efficiency. The choice of parameters that we adopt by default, namely, the parameters that control the preliminary SCF iterations and the initial trust radius, represents a good compromise between robustness and efficiency. This is, of course, not something that applies in general to every molecular system. It is therefore likely that there exists, for any system, a specific combination of parameters that minimizes the number of overall iterations, and therefore the computational cost. Tuning these parameters is not required to achieve convergence, but may speed up the computation, which is particularly relevant for applications involving large and very large systems. Nevertheless, the spirit of a quadratically convergent SCF is to minimize human effort with respect to computational effort: it is up to the user to decide whether to invest time into the numerical optimization of the procedure for a specific application. Let us remark once again that, even if an optimal setup is found, a QCSCF calculation is inevitably going to be more expensive than a well-behaved regular SCF one, heuristically by a factor of two to three.
\section{Conclusions}
In this contribution, we presented a quadratically convergent self-consistent field program that can achieve convergence of the SCF iterations in a black-box manner for restricted, unrestricted, and (high-spin) restricted-open-shell references. The implementation is based on the Fletcher-Levenberg-Marquardt trust-radius Newton method, in its direct formulation known as norm-extended optimization. All the operations are performed in a direct fashion, without the need to explicitly assemble the MO rotation Hessian or to transform the two-electron integrals into the MO basis. To the best of our knowledge, this is the first QCSCF implementation with guaranteed convergence that can be applied to both ROHF and UHF.
The resulting algorithm is suited to be used in conjunction with integral-direct and even linear-scaling techniques to increase its computational performance. As this implementation is meant for high-accuracy applications on small- to medium-sized molecules, we explore a different strategy to reduce the computational cost, namely, the use of the Cholesky decomposition of the two-electron integrals.
We showed with numerical examples how the QCSCF code can be helpful not only with hard-to-converge cases, but also with regular cases where a very tight convergence is needed. Using the CD of the two-electron integrals, larger systems become easily treatable.
While a QCSCF calculation is in general more expensive than its linear counterpart, we showed that it is possible to achieve the same scaling with respect to the size of the system as in regular SCF.
Furthermore, as regular and QC SCF share as the leading computational operation the construction of Fock matrices, they can exploit the same techniques to accelerate the computation, including integral direct implementations, linear scaling methods and low-rank approximations of the two-electron integrals matrix. As an example, we showed an implementation based on the Cholesky decomposition of the two-electron integrals, as the use of such a technique is part of a widespread effort amongst the developers of the CFOUR suite of programs.
A second-order optimization algorithm has the remarkable property of being completely predictable. Its black-box nature is shown in its ability to always converge to a solution, which, however, may not be the desired one. We discussed and rationalized the main reasons why QCSCF can converge to either a solution with the wrong symmetry or to an unstable solution, and showed how one can overcome such difficulties by either guiding the optimizer to the right symmetry state by specifying an occupation or, when enforcing either no symmetry or a reduced point-group symmetry, by adding a small random perturbation to the gradient.
In conclusion, a second-order SCF code can be a useful tool to converge problematic cases to a very tight threshold in an almost automated way, without the need of tweaking and tuning various parameters to achieve the desired accuracy. We hope that the QCSCF program that will be made available in the next release of CFOUR will provide the community with a useful tool.
\section*{Acknowledgments} This work is dedicated to Prof. John F. Stanton -- quite possibly the biggest fan of the QCSCF code in CFOUR -- on the occasion of his 60th birthday.
\section{Introduction}
In this paper, we consider the problem of pricing perpetual American options written on dividend-paying assets whose price dynamics follow the classical multidimensional Black and Scholes model. In this model, under the risk-neutral measure $P$, the asset prices $X^{s,x,1},\dots,X^{s,x,d}$ on $[s,\infty)$ evolve according to the stochastic differential equation
\begin{equation}
\label{eq1.1}
X^{s,x,i}_{t}=x_i+\int_{s}^{t}(r-\delta_{i})X^{s,x,i}_{\theta}\,d\theta
+\sum_{j=1}^d\int_{s}^{t}\sigma_{ij}X^{s,x,i}_{\theta}\,dW^j_{\theta},
\quad t\ge s.
\end{equation}
In (\ref{eq1.1}), $W$ is a standard $d$-dimensional Wiener process, $x_i$, $i=1,\dots,d$, are the initial prices at time $s$, $r\ge0$ is the risk-free interest rate, $\delta_i\ge0$, $i=1,\dots,d$, are dividend rates and $\sigma=\{\sigma_{ij}\}_{i,j=1,\dots,d}$ is the volatility matrix. We assume that $a=\sigma\cdot\sigma^*$, where $\sigma^*$ is the transpose of $\sigma$, is strictly positive definite.
Let $T>0$ and $\psi:\BR^d\rightarrow\BR$ be a nonnegative continuous function with polynomial growth. Under the measure $P$, the value at time $s$ of the American option with payoff function $\psi$ and expiration time $T$ is given by
\begin{equation}
\label{eq1.2}
V_T(s,x)=\sup_{s\le\tau\le T}Ee^{-r(\tau-s)}\psi(X^{s,x}_{\tau}),
\end{equation}
and the value of the perpetual option with payoff function $\psi$ is
\begin{equation}
\label{eq1.3}
V(s,x)=\sup_{\tau\ge s}Ee^{-r(\tau-s)}\psi(X^{s,x}_{\tau}).
\end{equation}
(see \cite{Ka,KS,S}). In (\ref{eq1.2}), the supremum is taken over the set of all stopping times
with values in $[s,T]$, and in (\ref{eq1.3}), over the set of stopping times in $[s,\infty]$. In the event that $\tau=\infty$, we interpret $e^{-r(\tau-s)}\psi(X^{s,x}_{\tau})$ to be zero.
At present, properties of $V_T$ are quite well investigated. It is known (see \cite{EKPPQ,EPQ,EQ}) that $V_T$ can be represented by a solution of a reflected backward stochastic differential equation (RBSDE). A detailed study of the structure of this RBSDE, which in particular leads to the early exercise premium formula, is given in \cite{KR:AMO} (also see Section \ref{sec3.1}). The value $V_T$ can also be characterized analytically as a solution of some obstacle problem (or, in different terminology, variational inequality) (see \cite{EKPPQ,EPQ,EQ,KR:AMO} and Section \ref{sec3.2}). It is worth pointing out here that the analytical characterization relies heavily on the characterization via solutions of RBSDEs.
In the case of perpetual options less is known, except for put and call options in the case $d=1$, which were thoroughly investigated as early as \cite{McK,M}. For a nice presentation of these results as well as some newer results and historical comments see the books \cite{KS,S}.
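For orientation, recall the textbook closed form for $d=1$ and $\delta=0$: with $\gamma=2r/\sigma^2$, the perpetual put is exercised at the boundary $b=\gamma K/(\gamma+1)$, and $V(x)=(K-b)(x/b)^{-\gamma}$ for $x>b$. The following sketch (illustrative parameter values, not from the paper) evaluates this formula and checks the smooth-pasting condition $V'(b)=-1$ numerically:

```python
import math

def perpetual_put(x, K, r, sigma):
    """Closed-form perpetual American put for d = 1 and no dividends
    (McKean/Merton): gamma = 2r/sigma^2, boundary b = gamma*K/(gamma+1)."""
    gamma = 2.0 * r / sigma**2
    b = gamma * K / (gamma + 1.0)
    if x <= b:
        return K - x                       # exercise region
    return (K - b) * (x / b)**(-gamma)     # continuation region

K, r, sigma = 100.0, 0.05, 0.2
gamma = 2.0 * r / sigma**2                 # = 2.5 for these parameters
b = gamma * K / (gamma + 1.0)              # exercise boundary, ~71.43

# smooth pasting: value and slope match the payoff K - x at the boundary
h = 1e-6
v_plus = perpetual_put(b + h, K, r, sigma)
slope = (perpetual_put(b + h, K, r, sigma) - perpetual_put(b, K, r, sigma)) / h
```

Here `v_plus` agrees with the payoff $K-b$ and `slope` is close to $-1$, as smooth pasting requires.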
Presumably, the main reason that less attention has been paid to $V$ than to $V_T$ is that perpetual options are not traded.
On the other hand, in our opinion, perpetual American options are interesting for historical reasons and from a purely theoretical point of view. This motivated us to ask whether in the multidimensional case, for a wide class of payoff functions, one can represent $V$
in terms of BSDEs or solutions of obstacle problems. Another reason for writing this paper is that the desired representations of $V$ can be derived in a quite elegant way from those for $V_T$. The main idea is as follows. Intuitively, $V$ is the limit of $V_T$ as $T\rightarrow\infty$ (in fact this is true; see Section \ref{sec3.1}). This suggests that the properties of $V$ we are interested in can be derived by studying the behaviour, as $T\rightarrow\infty$, of the solution of the RBSDE with terminal condition at time $T$, which is used to represent $V_T$. By modifying some results from the recent paper \cite{KR:PA}, we show that the idea sketched above is indeed realizable. As a result we show that for convex and Lipschitz continuous $\psi$ the value function $V$ is represented by a solution of some RBSDE with terminal condition 0 at infinity, and we get the exercise premium formula. We also show that $V$ is the unique solution of some obstacle problem. Finally, we estimate the rate of convergence of $V_T$ to $V$. It seems that some of our results (the representation in terms of RBSDEs, the rate of convergence) are new even in the case of the classical call/put option and $d=1$.
\section{Preliminaries}
In our considerations only the distribution of the processes $X^{s,x,i}$ will be important.
Since they depend on $\sigma$ only through $a$, we may and will assume that $\sigma$ is a symmetric square root of $a$. For the same reason (only the distributions are important), as in \cite{KR:AMO}, we will use a form of the price dynamics slightly different from (\ref{eq1.1}). It appears to be more convenient for us than (\ref{eq1.1}).
Let $\Omega=C([0,T];\BR^d)$ and let $X$ be the canonical process
on $\Omega$. For $(s,x)\in[0,T]\times\BR^d$ let $P_{s,x}$ denote the law of
the process $X^{s,x}=(X^{s,x,1},\dots,X^{s,x,d})$ defined by
(\ref{eq1.1}) and let $\{\FF^s_t\}$ denote the completion of
$\sigma(X_{\theta};\theta\in[s,t])$ with respect to the family
$\{P_{s,\mu};\mu$ a finite measure on $\BB(\BR^d)\}$, where
$P_{s,\mu}(\cdot)=\int_{\BR^d}P_{s,x}(\cdot)\,\mu(dx)$. Then for
each $s\in[0,T)$, $\BX=(\Omega,(\FF^s_t)_{t\in[s,T]},X,P_{s,x})$
is a Markov process on $[0,T]$.
Using It\^o's formula and L\'evy's characterization of the Wiener process
one can check (see \cite[Section 2]{KR:AMO} for details) that
\begin{equation}
\label{eq2.1}
X^i_t=X^i_0+\int_{s}^{t}(r-\delta_{i})X^{i}_{\theta}\,d\theta
+\sum_{j=1}^d\int_{s}^{t}\sigma_{ij}X^{i}_{\theta}\,dB^j_{s,\theta},
\quad t\ge s,\quad P_{s,x}\mbox{-a.s.},
\end{equation}
where
$\{B_{s,t}, t\ge s\}$ is under $P_{s,x}$ a standard $d$-dimensional
$\{\FF^s_t\}$-Wiener process on $[s,\infty)$. It is well known that the unique solution of
(\ref{eq2.1}) is of the form
\begin{equation}
\label{eq2.2}
X^{i}_t=X^i_0\exp\big((r-\delta_i-a_{ii}/2)(t-s)
+\sum^d_{j=1}\sigma_{ij}B^j_{s,t}\big),\quad t\ge s,\quad
P_{s,x}\mbox{-a.s.}
\end{equation}
Let $\sigma_i=\sqrt{a_{ii}}$. Since $\tilde B^i:=\sum^d_{j=1}\sigma_{ij}B^j_{s,\cdot}$ is a continuous martingale with the quadratic variation $\langle \tilde B^i_{s,\cdot}\rangle_t=a_{ii}(t-s)$, $t\ge s$, the process $X^i$ has the form
\begin{equation}
\label{eq2.3}
X^i_t=X^i_0e^{(r-\delta_i)(t-s)}N^i_{s,t}, \quad t\ge s,
\end{equation}
where $N^i_{s,t}=\exp(-(t-s)a_{ii}/2+\tilde B^i_{s,t})$, $t\ge s$, is an $(\FF^s_t)$-martingale under $P_{s,x}$.
Let $D=\{x=(x_1,\dots,x_d):x_i>0,i=1,\dots,d\}$. From (\ref{eq2.2}) it follows that if $x\in D$, then $P_{s,x}(X_t\in D,t\ge s)=1$.
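A quick Monte Carlo sanity check of (\ref{eq2.2})--(\ref{eq2.3}) for one coordinate (illustrative parameters of our own choosing): exact simulation of $X^i_t$, the identity $E_xX^i_t=x_ie^{(r-\delta_i)t}$, and positivity of the paths:

```python
import numpy as np

rng = np.random.default_rng(42)

# Exact simulation of one coordinate of (2.2):
# X_t = x * exp((r - delta - a_ii/2) t + sigma_i W_t), with a_ii = sigma_i^2
x, r, delta, sigma_i, t = 1.0, 0.05, 0.02, 0.3, 1.0
n_paths = 200_000

W = np.sqrt(t) * rng.standard_normal(n_paths)   # W_t ~ N(0, t)
X = x * np.exp((r - delta - 0.5 * sigma_i**2) * t + sigma_i * W)

mean_mc = X.mean()
mean_exact = x * np.exp((r - delta) * t)        # E_x X_t = x e^{(r-delta)t}

print(f"Monte Carlo mean: {mean_mc:.5f}, exact: {mean_exact:.5f}")
```

The paths stay strictly positive, in line with the observation that $x\in D$ implies $P_{s,x}(X_t\in D,\,t\ge s)=1$.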
Below we recall some known results on the pricing of American options with finite expiration time $T>0$. They will be needed in the next section.
In this paper, we assume that the payoff function satisfies the following condition:
\begin{enumerate}
\item[(A1)]
$\psi:\BR^d\rightarrow\BR$ is a nonnegative convex function which is Lipschitz continuous, i.e. there is $L>0$ such that $|\psi(x)-\psi(y)|\le L|x-y|$ for all $x,y\in\BR^d$.
\end{enumerate}
In particular, $\psi(x)\le C(1+|x|)$ with $C=\max\{L,\psi(0)\}$. Furthermore, since $\psi$ is convex, for a.e. $x\in\BR^d$ there exist the usual partial derivatives $\nabla_1\psi(x),\dots,\nabla_d\psi(x)$ of $\psi$ at $x$. Moreover, by Alexandrov's theorem (see, e.g.,
\cite[Theorem 7.10]{AA}), $\psi$ has second order derivatives at $x$
for a.e. $x\in\BR^d$, which we denote by $\nabla^2_{ij}\psi(x)$.
Let $\TT_{s,T}$ denote the set of all $(\FF^s_t)$-stopping times with values in $[s,T]$. The fair price (or value) $V_T(s,x)$ of the American option with expiration time $T$ and payoff function $\psi$ is given by
\begin{equation}
\label{eq5.1}
V_T(s,x)=\sup_{\tau\in\TT_{s,T}}E_{s,x}e^{-r(\tau-s)}\psi(X_{\tau}).
\end{equation}
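The supremum in (\ref{eq5.1}) can be approximated numerically by backward induction on a discrete tree. A sketch for $d=1$ and a put payoff, using a standard Cox-Ross-Rubinstein discretization (not part of the paper; parameter values are illustrative):

```python
import numpy as np

def american_put_crr(x0, K, r, delta, sigma, T, n):
    """Cox-Ross-Rubinstein binomial approximation of (5.1) for a put, d = 1."""
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    disc = np.exp(-r * dt)
    p = (np.exp((r - delta) * dt) - d) / (u - d)   # risk-neutral up-probability
    j = np.arange(n + 1)
    S = x0 * u**j * d**(n - j)                     # terminal prices, low to high
    V = np.maximum(K - S, 0.0)                     # payoff at expiration
    for _ in range(n):
        S = S[1:] / u                              # prices one step earlier
        V = disc * (p * V[1:] + (1.0 - p) * V[:-1])
        V = np.maximum(V, K - S)                   # early exercise is allowed
    return float(V[0])

price = american_put_crr(100.0, 100.0, 0.05, 0.0, 0.2, 1.0, 500)
```

Taking the maximum with the payoff at every node is the discrete counterpart of the supremum over stopping times in (\ref{eq5.1}).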
Since $\psi$ is continuous with linear growth, from \cite[Theorem 5.2]{EKPPQ} it follows that for every $(s,x)\in[0,T]\times\BR^d$ there exists a unique solution
$(Y^{T,s,x},K^{T,s,x},Z^{T,s,x})$, on the space $(\Omega,\FF^s_T,P_{s,x})$, of the
RBSDE with coefficient $f(y)=-ry$, $y\in\BR$, terminal condition $\psi(X_T)$ and barrier $\psi(X)$, that is, the
RBSDE of the form
\begin{equation}
\label{eq5.2}
\left\{
\begin{array}{l}
Y^{T,s,x}_t=\psi(X_T)-\int^T_trY^{T,s,x}_{\theta}\,d\theta
+\int^T_tdK^{T,s,x}_{\theta}
-\int^T_tZ^{T,s,x}_{\theta}\,dB_{s,\theta},\,\,
t\in[s,T],\medskip\\
Y^{T,s,x}_t\ge \psi(X_t),\quad t\in[s,T],\medskip \\
K^{T,s,x}_s=0,\ \ K^{T,s,x}\mbox{ is continuous and increasing, and satisfies}\\
\qquad\qquad\mbox{the minimality condition }
\int^T_s(Y^{T,s,x}_t-\psi(X_t))\,dK^{T,s,x}_t=0.
\end{array}
\right.
\end{equation}
For the precise definition of a solution we refer the reader to \cite{EKPPQ}. Here let us only note that $E_{s,x}\int^T_s|Z^{T,s,x}_{\theta}|^2\,d\theta<\infty$, so the process
\[
M^{T,s,x}_t=\int^t_sZ^{T,s,x}_{\theta}\,dB_{s,\theta},\quad t\in[s,T],
\]
is a martingale under $P_{s,x}$.
Let $L_{BS}$ denote the Black-Scholes operator defined by
\[
L_{BS}=\sum^d_{i=1}(r-\delta_i)x_i\partial_{x_i}
+\frac12\sum^d_{i,j=1}a_{ij}x_ix_j\partial^2_{x_ix_j}\,,
\]
where $\partial_{x_i},\partial^2_{x_ix_j}$ denote the partial derivatives in the distribution sense. In \cite[Theorem 8.5]{EKPPQ} it is also proved that for every $(s,x)\in [0,T]\times\BR^d$,
\begin{equation}
\label{eq2.8}
Y^{T,s,x}_t=u_T(t,X_t),\quad t\in[s,T],\quad
P_{s,x}\mbox{-a.s.},
\end{equation}
where $u_T$ is a (unique) viscosity solution to the obstacle problem
\begin{equation}
\label{eq5.3} \left\{
\begin{array}{ll}
\min\{u_T-\psi,-\partial_su_T-L_{BS}u_T+r u_T\}=0 &\mbox{in }[0,T]\times\BR^d,
\medskip\\
u_T(T,\cdot)=\psi & \mbox{on }\BR^d.
\end{array}
\right.
\end{equation}
The process $\bar Y^{T,s,x}$ defined as $\bar Y^{T,s,x}_t=e^{-r(t-s)}Y^{T,s,x}_t$, $t\in[s,T]$, is the first component of the solution of RBSDE with coefficient $f=0$, terminal condition $e^{-rT}\psi(X_T)$ and barrier $e^{-rt}\psi(X_t)$, $t\in[s,T]$.
Therefore from (\ref{eq2.8}) with $t=s$ and \cite[Proposition 2.3]{EKPPQ} (or \cite[Proposition 3.3]{EPQ}) it follows that $V_T=u_T$.
Let
\[
\LL_{BS}=\sum^d_{i=1}(r-\delta_{i})x_{i}\nabla_i +\frac12\sum^d_{i,j=1}
a_{ij}\nabla_{ij}\,.
\]
In \cite[Theorem 2]{KR:AMO} it is proved that under (A1), for every $(s,x)\in[0,T]\times D$,
\[
K^{T,s,x}_t=\int^t_s\Phi(X_{\theta},u_T(\theta,X_{\theta}))\,d\theta,\quad t\in[s,T],\quad P_{s,x}\mbox{-a.s.}
\]
where
\begin{equation}
\label{eq2.9}
\Phi(x,y)=\Psi^{-}(x)\fch_{(-\infty,\psi(x)]}(y),\qquad \Psi(x)=-r\psi(x)+\LL_{BS}\psi(x)
\end{equation}
and $\Psi^-=\max\{-\Psi,0\}$. Since $u_T(s,x)\ge\psi(x)\ge0$, we have
\[
\Phi(x,0)=\Psi^{-}(x),\qquad\Phi(x,u_T(s,x))=\Psi^{-}(x)\fch_{\{u_T(s,x)=\psi(x)\}}, \quad (s,x)\in[0,T]\times D.
\]
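For the one-dimensional put $\psi(x)=(K-x)^+$, a direct computation on $(0,K)$ gives $\Psi(x)=-r(K-x)-(r-\delta)x=-rK+\delta x$, so $\Psi^-(x)=(rK-\delta x)^+$ there, which is the classical early-exercise-premium integrand. A sketch (away from the kink at $x=K$, where $\psi$ is not differentiable; the function name is ours):

```python
def Psi_minus_put(x, K, r, delta):
    """Psi^- of (2.9) for psi(x) = (K - x)^+ in d = 1, off the kink at x = K.
    On (0, K): psi = K - x, L_BS psi = -(r - delta) x, so Psi = -rK + delta*x.
    On (K, infinity): psi = 0 and Psi = 0."""
    if x >= K:
        return 0.0
    psi_val = K - x
    L_bs_psi = -(r - delta) * x      # the second-derivative term vanishes off the kink
    Psi = -r * psi_val + L_bs_psi    # = -rK + delta*x
    return max(-Psi, 0.0)            # Psi^- = (rK - delta*x)^+
```

For instance, with $K=100$, $r=0.05$, $\delta=0.02$ one gets $\Psi^-(50)=rK-\delta\cdot50=4$, while $\Psi^-$ vanishes above the strike.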
\section{Perpetual options}
\label{sec3}
To shorten notation, in this section we set $V(x)=V(0,x)$, $\FF_t=\FF^0_t$,
$P_x=P_{0,x}$, and we denote by $E_x$ the expectation with respect to $P_x$. With this notation, (\ref{eq1.3}) takes the form
\begin{equation}
\label{eq6.7}
V(x)=\sup_{\tau\in\TT}E_xe^{-r\tau}\psi(X_{\tau}),
\end{equation}
where $\TT$ is the set of all $(\FF_t)$-stopping times.
\subsection{Stochastic representation of the value function}
\label{sec3.1}
Assume (A1) and let
\[
Y^T_t=u_T(t,X_t),\qquad K^{T}_t=\int^t_0\Phi(X_{s},u_T(s,X_{s}))\,ds,\quad t\in[0,T].
\]
Then $Y^T$ and $K^T$ are versions, independent of $x$, of $Y^{T,0,x}$ and $K^{T,0,x}$, respectively.
Since $V_T=u_T$, we have
\begin{equation}
\label{eq5.9}
V_T(t,X_t)=Y^{T}_t=u_T(t,X_t),\quad t\in[0,T], \quad
P_{x}\mbox{-a.s.}
\end{equation}
From the first equation in (\ref{eq5.2}) it follows that $M^{T,0,x}$ also has a version independent of $x$, which we denote by $M^T$.
Set
\[
\bar Y^{T}_t=e^{-rt}Y^T_t,\qquad \bar K^{T}_t=\int^t_0e^{-rs}\,dK^T_s,\qquad \bar M^T_t=\int^t_0e^{-rs}\,dM^T_s,\quad t\in[0,T].
\]
Since
\[
Y^{T}_t=\psi(X_T)-\int^T_trY^{T}_{s}\,ds
+\int^T_tdK^{T}_{s}-\int^T_tdM^T_s,\quad t\in[0,T],
\]
integrating by parts we obtain
\begin{equation}
\label{eq6.4}
\bar Y^T_t=e^{-rT}\psi(X_T)+\int^T_td\bar K^T_s
-\int^T_td\bar M^T_s,\quad t\in[0,T].
\end{equation}
We will also need the following condition.
\begin{enumerate}
\item[(A2)]For every $x\in D$,
\begin{equation}
\label{eq3.13}
\mbox{\rm(a)}\,\,\lim_{t\rightarrow\infty}E_xe^{-rt}\psi(X_t)=0,\qquad \mbox{\rm(b)}\,\,E_x\int^{\infty}_0e^{-rt}\Psi^{-}(X_t)\,dt<\infty.
\end{equation}
\end{enumerate}
\begin{remark}
\label{rem3.1}
(i) Condition (\ref{eq3.13}) can be equivalently stated as
\[
\mbox{\rm(a)}\,\,\lim_{t\rightarrow\infty}e^{-rt}P_t\psi(x)=0,\qquad \mbox{\rm(b)}\,\,R_r\Psi^{-}(x)<\infty,
\]
where $(P_t)_{t>0}$ (resp. $(R_{\alpha})_{\alpha>0})$ is the semigroup (resp. resolvent) associated with $X$.
\smallskip\\
(ii) Assume that $r>0$. Clearly (\ref{eq3.13})(a) is satisfied if $\psi$ is bounded.
By (\ref{eq2.3}), $E_xX^{i}_t=x_ie^{(r-\delta_i)t}$, $t\ge0$. Therefore
(\ref{eq3.13})(a) is satisfied for general Lipschitz-continuous $\psi$ if
$\delta_i>0$, $i=1,\dots,d$.
Similarly, (\ref{eq3.13})(b) is satisfied if $\Psi^{-}$ is bounded or $\Psi^{-}$ satisfies the linear growth condition and $\delta_i>0$, $i=1,\dots, d$.
\end{remark}
We are going to show that if (\ref{eq3.13}) is satisfied for some $x\in D$, then $\bar Y^T$ converges as $T\rightarrow\infty$ to a process $\bar Y^x$ being the first component of the solution $(\bar Y^x,\bar K^x,\bar M^x)$ of the reflected BSDE which can informally be
written as
\begin{equation}
\label{eq6.3}
\bar Y^x_t=\int^{\infty}_td\bar K^x_s
-\int^{\infty}_td\bar M^x_s,\quad t\ge0.
\end{equation}
We will also show that $\bar K^x$ has the representation
\begin{equation}
\label{eq6.6}
\bar K^x_t=\int^t_0e^{-rs}\Phi(X_s,e^{rs}\bar Y^x_s)\,ds,\quad t\ge0,
\end{equation}
so in fact $(\bar Y^x,\bar M^x)$ is a solution of the usual BSDE
\begin{equation}
\label{eq3.5}
\bar Y^x_t=\int^{\infty}_te^{-rs}\Phi(X_s,e^{rs}\bar Y^x_s)\,ds
-\int^{\infty}_td\bar M^x_s,\quad t\ge0.
\end{equation}
Before giving the definition of solutions of (\ref{eq6.3}) and (\ref{eq3.5}) let us recall that a continuous $(\FF_t)$-adapted process $Y$ is said to be of class (D)
under the measure $P_x$ if the collection $\{Y_{\tau}:\tau\in\TT\}$ is
uniformly integrable under $P_x$. Let $\LL^1(P_x)$ denote the space of continuous processes
with finite norm $\|Y\|_{x,1}=\sup\{E_x|Y_{\tau}|:\tau\in\TT\}$.
It is known that $\LL^1(P_x)$ is complete (see \cite[Theorem VI.22]{DM}). Moreover, if $Y^n$ are of class (D) and $Y^n\rightarrow Y$ in $\LL^1(P_x)$, then $Y$ is of class (D) (see \cite[Section 3]{KR:PA}).
\begin{definition}
(i) We say that a triple $(\bar Y^x,\bar K^x,\bar M^x)$ of adapted continuous processes is a
solution of the reflected BSDE (\ref{eq6.3}) with lower barrier $\bar L_t=e^{-rt}\psi(X_t)$
if $\bar Y^x$ is of class D, $\bar M^x$ is a local martingale
with $\bar M^x_0=0$, $\bar K^x$ is an increasing process with $\bar K^x_0=0$, and for every $T>0$,
\begin{equation}
\label{eq6.1}
\left\{
\begin{array}{l}
\bar Y^x_t=\bar Y^x_T+\int^T_td\bar K^x_s
-\int^{T}_td\bar M^x_s,\quad t\in[0,T],\medskip\\
\bar Y^{x}_t\ge\bar L_t,\quad t\in[0,T],\quad
\int^{T}_0(\bar Y^{x}_t-\bar L_t)\,d\bar K^{x}_t=0, \medskip\\
\bar Y^x_T\rightarrow0\,\,\,P_x \mbox{-a.s. as }T\rightarrow\infty.
\end{array}
\right.
\end{equation}
(ii) We say that a pair $(\bar Y^x,\bar M^x)$ of adapted continuous processes is a
solution of the BSDE (\ref{eq3.5}) if $\bar Y^x$ is of class D, $\bar M^x$ is a local martingale
with $\bar M^x_0=0$, and for every $T>0$, $\int^T_0e^{-rt}\Phi(X_t,e^{rt}\bar Y^x_t)\,dt<\infty$ $P_x$-a.s. and
\begin{equation}
\label{eq3.6}
\left\{
\begin{array}{l}
\bar Y^x_t=\bar Y^x_T+\int^T_te^{-rs}\Phi(X_s,e^{rs}\bar Y^x_s)\,ds
-\int^{T}_td\bar M^x_s,\quad t\in[0,T], \medskip\\
\bar Y^x_T\rightarrow0\,\,\,P_x \mbox{-a.s. as }T\rightarrow\infty.
\end{array}
\right.
\end{equation}
\end{definition}
\begin{remark}
\label{rem3.2}
Let $(\bar Y^x,\bar M^x,\bar K^x)$ be a solution of (\ref{eq6.1}). Then for every $\tau\in\TT$,
\[
E_x\bar Y^x_0\ge E_x\bar Y^x_{\tau}\ge E_xe^{-r\tau}\psi(X_{\tau}).
\]
To see this, consider a localizing sequence $\{\tau_n\}$ for $\bar M^x$.
Since
\[
\bar Y^x_t=\bar Y^x_0-\int^t_0d\bar K^x_s+\int^t_0d\bar M^x_s,\quad t\ge0,
\]
we have $E_x\bar Y^x_0\ge \liminf_{n\rightarrow\infty}E_x\bar Y^x_{\tau\wedge\tau_n}$. Applying Fatou's lemma yields the desired inequalities.
\end{remark}
\begin{proposition}
\label{prop2.1}
Assume that $\psi$ satisfies \mbox{\rm(A1)} and \mbox{\rm(\ref{eq3.13})} for some $x\in D$. Then there is at most one solution of \mbox{\rm(\ref{eq6.1})}. Similarly, there is at most one solution of \mbox{\rm(\ref{eq3.5})}.
\end{proposition}
\begin{proof}
Suppose that $(\bar Y^i,\bar K^i,\bar M^i)$, $i=1,2$, are solutions of (\ref{eq6.1}). Write $\bar Y=\bar Y^1-\bar Y^2$, $\bar K=\bar K^1-\bar K^2$, $\bar M=\bar M^1-\bar M^2$. Then
\[
\bar Y_t=\bar Y_0-\int^t_0d\bar K_s+\int^t_0d\bar M_s,\quad t\ge0.
\]
By the Meyer-Tanaka formula (see, e.g., \cite[Theorem IV.68]{P}),
\begin{equation}
\label{eq5.5}
\bar Y^{+}_t\le\bar Y^{+}_T+\int^T_t\fch_{\{\bar Y^1_s>\bar Y^2_s\}}\,d\bar K_s
-\int^T_t\fch_{\{\bar Y^1_s>\bar Y^2_s\}}\,d\bar M_s.
\end{equation}
Since $\bar L_t=e^{-rt}\psi(X_t)\le\bar Y^1_t\wedge\bar Y^2_t\le\bar Y^1_t$,
we have
\[
\int^T_t\fch_{\{\bar Y^1_s>\bar Y^2_s\}}\,d\bar K^1_s
=\int^T_t\fch_{\{\bar Y^1_s>\bar Y^2_s\}}(\bar Y^1_s-\bar Y^2_s)^{-1}(\bar Y^1_s- \bar Y^1_s\wedge\bar Y^2_s)\,d\bar K^1_s\le0.
\]
By the above inequality and (\ref{eq5.5}), $E_x\bar Y^{+}_t\le E_x\bar Y^{+}_T$. Since $E_x\bar Y^+_T\rightarrow0$ as $T\rightarrow\infty$,
we see that $\bar Y^+_t=0$, $t\ge0$, $P_x$-a.s. In the same way we show that $(-\bar Y_t)^+=0$, $t\ge0$, $P_x$-a.s. Thus $\bar Y^1=\bar Y^2$. That $\bar M^1=\bar M^2$ and $\bar K^1=\bar K^2$ now follows from uniqueness of the Doob-Meyer decomposition of $\bar Y^1$.
The proof of the second assertion is similar. Suppose that $(\bar Y^1,\bar M^1)$, $(\bar Y^2,\bar M^2)$ are solutions of (\ref{eq3.5}). Let $\bar Y=\bar Y^1-\bar Y^2$, $\bar M=\bar M^1-\bar M^2$.
Applying the Meyer-Tanaka formula yields
\[
\bar Y^{+}_t\le\bar Y^{+}_T+\int^T_t\fch_{\{\bar Y^1_s>\bar Y^2_s\}}e^{-rs}
(\Phi(X_s,e^{rs}\bar Y^1_s)-\Phi(X_s,e^{rs}\bar Y^2_s))\,ds
-\int^T_t\fch_{\{\bar Y^1_s>\bar Y^2_s\}}\,d\bar M_s.
\]
But
\begin{align}
\label{eq3.21}
&(\Phi(x,y_1)-\Phi(x,y_2))(y_1-y_2) \nonumber\\
&\qquad
=\Psi^{-}(x)(\fch_{(-\infty,\psi(x)]}(y_1)-\fch_{(-\infty,\psi(x)]}(y_2))(y_1-y_2)\le0,
\end{align}
so $\bar Y^{+}_t\le\bar Y^{+}_T-\int^T_t\fch_{\{\bar Y^1_s>\bar Y^2_s\}}\,d\bar M_s$.
To prove that $\bar Y^1=\bar Y^2$ and $\bar M^1=\bar M^2$ it suffices now to repeat the argument from the proof of the first assertion.
\end{proof}
By (\ref{eq6.4}) with $T=n$,
\begin{equation}
\label{eq3.1}
\bar Y^n_t=e^{-rn}\psi(X_n)+\int^n_te^{-rs}\Phi(X_s,e^{rs}\bar Y^n_s)\,ds-\int^n_td\bar M^n_s,\quad t\in[0,n].
\end{equation}
We put
\[
\tilde Y^n_t=\bar Y^n_t,\quad \tilde M^n_t=\bar M^n_t,\quad t<n,\qquad\tilde Y^n_t=0,\quad \tilde M^n_t=\bar M^n_n,\quad t\ge n.
\]
The proof of the following theorem is a modification of the proof of \cite[Propositions 4.1, 4.2]{KR:PA}.
\begin{theorem}
\label{th3.6}
Assume that $\psi$ satisfies \mbox{\rm(A1)} and \mbox{\rm(\ref{eq3.13})} for some $x\in D$. Then there exists a unique solution $(\bar Y^x,\bar M^x)$ of \mbox{\rm(\ref{eq3.5})} on $(\Omega,\FF,P_x)$. Moreover,
\begin{equation}
\label{eq3.15}
E_x\int^{\infty}_0e^{-rt}\Phi(X_t,e^{rt}\bar Y^x_t)\,dt
\le 2 E_x\int^{\infty}_0e^{-rt}\Psi^{-}(X_t)\,dt,
\end{equation}
\begin{equation}
\label{eq3.7}
\lim_{n\rightarrow\infty}\|\bar Y^n-\bar Y^x\|_{x,1}=0
\end{equation}
and for every $q\in(0,1)$,
\begin{equation}
\label{eq3.8}
\lim_{n\rightarrow\infty}E_x\sup_{t\ge0}|\bar Y^n_t-\bar Y^x_t|^q=0.
\end{equation}
\end{theorem}
\begin{proof}
Uniqueness follows from Proposition \ref{prop2.1}. We divide the proof of existence and of (\ref{eq3.15})--(\ref{eq3.8})
into two steps. \\
Step 1. We shall prove some a priori estimates for the process $\bar Y^n$ and the difference $\delta\tilde Y:=\tilde Y^m-\tilde Y^n$. Specifically, we shall prove that
\begin{equation}
\label{eq5.6}
\|\delta\tilde Y\|_{x,1} \le E_{x}\Big(e^{-rm}\psi(X_m)
+e^{-rn}\psi(X_n)+\int^{m}_{n}e^{-rt}\Psi^{-}(X_{t})\,dt\Big),
\end{equation}
\begin{equation}
\label{eq5.7}
E_{x}\sup_{t\ge0}|\delta\tilde Y_t|^q
\le\frac{1}{1-q}E_x\Big(e^{-rm}\psi(X_m)+e^{-rn}\psi(X_n)
+\int^{m}_{n}e^{-rt}\Psi^{-}(X_t)\,dt\Big)^q
\end{equation}
for every $q\in(0,1)$, and for every $t\ge0$,
\begin{equation}
\label{eq5.10}
E_x\int_0^{t}e^{-rs}\Phi(X_{s},e^{rs}\bar Y^n_{s})\,ds
\le E_x\Big(\bar Y^n_t +2\int_0^{t}e^{-rs}\Psi^{-}(X_{s})\,ds\Big).
\end{equation}
By (\ref{eq3.1}),
\begin{equation}
\label{eq3.9}
\bar Y^n_t
=\bar Y^n_0-\int^t_0\fch_{[0,n]}(s)e^{-rs}\Phi(X_s,e^{rs}\bar Y^n_s)\,ds +\int^t_0\fch_{[0,n]}(s)\,d\bar M^n_s,\quad t\in[0,n].
\end{equation}
Moreover,
\[
\tilde Y^n_t=\tilde Y^n_0-\int^t_0\fch_{[0,n]}(s)e^{-rs}\Phi(X_s,e^{rs}\tilde Y^n_s)\,ds
+\int^t_0dV^n_s+\int^t_0\fch_{[0,n]}(s)\,d\tilde M^n_s,\quad t\ge0,
\]
where
\[
V^n_t=0,\quad t<n,\qquad V^n_t=-\bar Y^n_n,\quad t\ge n.
\]
Hence
\[
\delta\tilde Y_t=\delta\tilde Y_0+R_t+\int^t_0(\fch_{[0,m]}(s)\,d\tilde M^m_s-\fch_{[0,n]}(s)\,d\tilde M^n_s),\quad t\ge0
\]
with
\begin{align*}
R_t&=-\int^t_0\fch_{[0,n]}(s) e^{-rs}(\Phi(X_{s},e^{rs}\tilde
Y^m_{s})-\Phi(X_{s},e^{rs}\tilde Y^n_{s}))\,ds\\
&\quad-\int^t_0\fch_{(n,m]}(s)e^{-rs}
\Phi(X_{s},e^{rs}\tilde Y^m_{s})\,ds+\int^t_0d(V^m_{s}-V^n_{s}).
\end{align*}
By the Meyer-Tanaka formula, for $t<m$ we have
\[
|\delta\tilde Y_m|-|\delta\tilde Y_t|\ge\int^m_t\mbox{sign}
(\delta\tilde Y_{s-})\,d(\delta\tilde Y)_{s},
\]
where $\mbox{sign}(x)=1$ if $x>0$ and $\mbox{sign}(x)=-1$ if
$x\le0$. Therefore, for $t<m$,
\[
|\delta\tilde Y_t|=E_x(|\delta\tilde Y_t|\,|\FF_t)\le
E_x\Big(|\delta\tilde Y_m| -\int^m_t\mbox{sign}(\delta\tilde
Y_{s-})\,dR_{s}\,\big|\FF_t\Big).
\]
From this it follows that for $t\in[0,m]$,
\begin{align*}
|\delta\tilde Y_t|&\le E_{x}\Big(|\delta\tilde Y_m|
+\int_t^m\fch_{[0,n]}(s)e^{-rs}\mbox{sign}(\delta\tilde
Y_{s})(\Phi(X_{s},e^{rs}\tilde Y^m_{s})-\Phi(X_{s},e^{rs}\tilde Y^n_{s}))\,ds\\
&\qquad\qquad+\int^m_t\fch_{(n,m]}(s)e^{-rs}
\mbox{sign}(\delta\tilde Y_{s})\Phi(X_{s},e^{rs}\tilde Y^m_{s})\,ds
+|V^m_m|+|V^n_n| \,\big|\FF_t\Big).
\end{align*}
By (\ref{eq3.21}),
\begin{equation}
\label{eq3.10} \int_t^m\fch_{[0,n]}(s)e^{-rs}\mbox{sign}(\delta
\tilde Y_{s})(\Phi(X_{s},e^{rs}\tilde Y^m_{s})-\Phi(X_{s},e^{rs}\tilde
Y^n_{s}))\,ds\le0.
\end{equation}
Furthermore, since $\tilde Y^n_t=0$ for $t\ge n$, it follows from (\ref{eq3.21}) that
\begin{align*}
&\int^m_t\fch_{(n,m]}(s)e^{-rs} \mbox{sign}(\delta
\tilde Y_{s})\Phi(X_{s},e^{rs}\tilde Y^m_{s})\,ds \\
&\qquad \le \int^m_t\fch_{(n,m]}(s)e^{-rs}
\mbox{sign}(\delta\tilde Y_{s})\Psi^{-}(X_{s})\,ds
\le \int^{m}_{n}e^{-rs} \Psi^{-}(X_{s})\,ds.
\end{align*}
Furthermore, $\delta\tilde Y_m=0$ and
$|V^m_m|+|V^n_n|= |\bar Y^m_m|+|\bar Y^n_n|=e^{-rm}\psi(X_m)
+e^{-rn}\psi(X_n)$.
Therefore, for $t\in[0,m]$ we have
\begin{align*}
|\delta\tilde Y_t|\le E_{x}\Big(e^{-rm}\psi(X_m)
+e^{-rn}\psi(X_n)+\int^{m}_{n}e^{-rs}\Psi^{-}(X_{s})\,ds
\big|\,\FF_t\Big)=:N_t,
\end{align*}
from which (\ref{eq5.6}) follows. By the above inequality and \cite[Lemma 6.1]{BDHPS},
\[
E_{x}\sup_{0\le t\le m}|\delta\tilde Y_t|^q
\le(1-q)^{-1}(E_{x}N_m)^q,
\]
which shows (\ref{eq5.7}). To prove (\ref{eq5.10}), we first observe that by the
Meyer-Tanaka formula,
\[
E_x|\bar Y^n_t|-E_x|\bar Y^n_0|\ge E_x\int^t_0\mbox{sign}(\bar
Y^n_{s-})\,d\bar Y^n_s.
\]
By the above inequality and (\ref{eq3.9}), for $t<n$ we have
\[
E_x|\bar Y^n_t|-E_x|\bar Y^n_0|
\ge-E_x\int^t_0\fch_{[0,n]}(s) \mbox{sign}(\bar Y^n_{s})
e^{-rs}\Phi(X_{s},e^{rs}\bar Y^n_{s})\,ds.
\]
On the other hand, for every $t\ge0$,
\begin{align*}
\int^t_0e^{-rs}\Phi(X_{s},e^{rs}\bar Y^n_{s})\,ds
&\le\int^t_0e^{-rs}|\Phi(X_{s},e^{rs}\bar Y^n_{s})-\Phi(X_{s},0)|\,ds
+\int^t_0e^{-rs}\Phi(X_{s},0)\,ds \\
&=-\int^t_0{\mbox{sign}}(\bar Y^n_{s})e^{-rs}
(\Phi(X_{s},e^{rs}\bar Y^n_{s})-\Phi(X_{s},0))\,ds\\
&\quad+\int^t_0e^{-rs}\Phi(X_{s},0)\,ds \\
&\le -\int^t_0{\mbox{sign}}(\bar Y^n_{s})e^{-rs} \Phi(X_{s},e^{rs}
\bar Y^n_{s})\,ds +2\int^t_0e^{-rs}\Psi^{-}(X_{s})\,ds,
\end{align*}
which combined with the inequality following (\ref{eq3.9}) proves (\ref{eq5.10}).
\\
Step 2. We will prove the existence of a solution of (\ref{eq3.5}) and (\ref{eq3.7}), (\ref{eq3.8}).
From (\ref{eq3.13}) and (\ref{eq5.6}) it follows
that $\|\bar Y^n-\bar Y^m\|_{x,1}\rightarrow0$ as
$n,m\rightarrow\infty$. Hence there exists a process
$\bar Y^x\in\LL^1(P_x)$ of class D such that (\ref{eq3.7}) is satisfied.
By (\ref{eq3.13}) and (\ref{eq5.6}),
$\lim_{n,m\rightarrow\infty}E_x\sup_{t\ge0}|\bar Y^n_t-\bar Y^m_t|^q
\rightarrow0$. Since the space $\DD^q(P_x)$ is complete, the last
convergence and (\ref{eq3.7}) imply that $\bar Y^x\in\DD^q(P_x)$ and
(\ref{eq3.8}) is satisfied. By (\ref{eq5.1}) and (\ref{eq5.9}), $\bar Y^n_t\le\bar Y^{n+1}_t$, $t\ge0$, $P_x$-a.s. By this and (\ref{eq3.8}),
\[
\lim_{n\rightarrow\infty}\fch_{\{e^{rt}\bar Y^n_t\le\psi(X_t)\}}
=\fch_{\{e^{rt}\bar Y^x_t\le\psi(X_t)\}},\quad t\ge0, \quad P_x\mbox{-a.s.}
\]
Hence
\begin{equation}
\label{eq3.16} \lim_{n\rightarrow\infty}\Phi(X_t,e^{rt}\bar
Y^n_t)=\Phi(X_t,e^{rt}\bar Y^x_t),\quad t\ge0, \quad P_x\mbox{-a.s.},
\end{equation}
so applying Fatou's lemma we conclude from (\ref{eq5.10})
that for every $T>0$,
\begin{equation}
\label{eq3.14}
E_x\int_0^Te^{-rt}\Phi(X_{t},e^{rt}\bar Y^x_{t})\,dt
\le E_x\Big(\bar Y^x_{T}
+2\int_0^{T}e^{-rt}\Psi^{-}(X_{t})\,dt\Big).
\end{equation}
From (\ref{eq3.8}) it follows that $\bar Y^x_{T}\rightarrow0$
in probability $P_x$ as $T\rightarrow\infty$. As a consequence,
since $\bar Y^x$ is of class D, $E_x\bar Y^x_{T}\rightarrow0$.
Letting $T\rightarrow\infty$ in (\ref{eq3.14}), we therefore
get (\ref{eq3.15}). By (\ref{eq3.1}),
\[
\bar Y^n_t=\bar Y^n_T+\int^T_te^{-rs}\Phi(X_s,e^{rs}\bar Y^n_s)\,ds
-\int^T_td\bar M^n_s,\quad t<T\le n.
\]
Since $\bar M^n$ is a martingale, it follows that
\begin{equation}
\label{eq3.17} \bar Y^n_t=E_x\Big(\bar Y^n_T
+\int^T_te^{-rs}\Phi(X_s,e^{rs}\bar Y^n_s)\,ds\big|\FF_t\Big), \quad
t<T\le n.
\end{equation}
By Doob's inequality and (\ref{eq3.7}),
\begin{equation}
\label{eq3.18} \lim_{n\rightarrow\infty}P_x(\sup_{0\le t\le
T} |E_x(\bar Y^n_T-\bar Y^x_T\,|\,\FF_t)|>\varepsilon)\le\varepsilon^{-1}
\lim_{n\rightarrow\infty} E_x|\bar Y^n_T-\bar Y^x_T|=0.
\end{equation}
By (\ref{eq3.13}), (\ref{eq3.16}) and the dominated convergence
theorem,
\begin{equation}
\label{eq3.19}
\lim_{n\rightarrow\infty}E_x\int^T_0e^{-rs}|\Phi(X_s,e^{rs}\bar
Y^n_s)-\Phi(X_s,0)|\,ds=0.
\end{equation}
From (\ref{eq3.17})--(\ref{eq3.19}) we deduce that
\[
\bar Y^x_t=E_x\Big(\bar Y^x_T+ \int^T_te^{-rs}\Phi(X_s,e^{rs}\bar
Y^x_s)\,ds\big|\FF_t\Big).
\]
Letting $T\rightarrow\infty$ and using (\ref{eq3.15}) and the fact
that $\lim_{T\rightarrow\infty}E_x\bar Y^x_T=0$ yields
\[
\bar Y^x_t=E_x\Big(\int^{\infty}_te^{-rs} \Phi(X_s,e^{rs}\bar
Y^x_s)\,ds\big|\FF_t\Big).
\]
Let $\bar M^x$ be a c\`adl\`ag version of the martingale
\begin{equation}
\label{eq6.8}
t\mapsto E_x\Big(\int^{\infty}_0e^{-rs}\Phi(X_s,e^{rs}\bar Y^x_s)\,ds\big|\FF_t\Big)-\bar Y^x_0.
\end{equation}
One can check that $(\bar Y^x,\bar M^x)$ is a solution of (\ref{eq3.5}).
\end{proof}
\begin{remark}
Since $\bar M^x$ is a version of (\ref{eq6.8}), it follows from (\ref{eq3.15}) and (A2)(b) that it is a closed martingale. Hence (see, e.g., \cite[Theorem I.12]{P}), $\bar M^x_{\infty}=\lim_{t\rightarrow\infty}\bar M^x_t$ exists $P_x$-a.s. and $\bar M^x$ is a martingale on $[0,\infty]$. Therefore (\ref{eq6.3}) is satisfied $P_x$-a.s. and $E_x\bar M^x_{\infty}=E_x\bar M^x_0=0$. As a result,
\begin{equation}
\label{eq6.9}
E_x\bar Y^x_0=E_x\int^{\infty}_0e^{-rt}\Phi(X_t,e^{rt}\bar Y^x_t)\,dt.
\end{equation}
\end{remark}
\begin{corollary}
\label{cor3.6}
Let the assumption of Theorem \ref{th3.6} hold.
\begin{enumerate}[\rm(i)]
\item If $(\bar Y^x,\bar M^x)$ is a solution of \mbox{\rm(\ref{eq3.5})}, then $(\bar Y^x,\bar K^x,\bar M^x)$ with $\bar K^x$ defined by \mbox{\rm(\ref{eq6.6})} is a solution of \mbox{\rm(\ref{eq6.3})}.
\item Conversely, if $(\bar Y^x,\bar K^x,\bar M^x)$ is a solution of \mbox{\rm(\ref{eq6.3})}, then $\bar K^x$ admits the representation \mbox{\rm(\ref{eq6.6})}.
\end{enumerate}
\end{corollary}
\begin{proof}
To prove (i), we only have to show that $\bar Y^x,\bar K^x$ have the properties formulated in the second line of (\ref{eq6.1}). By (\ref{eq3.8}), $\bar Y^x_t\ge\bar L_t$, $t\ge0$, since $\bar Y^n_t\ge\bar L_t$, $t\in[0,n]$, for every $n\ge1$.
Clearly $\bar K^x_0=0$ and $\bar K^x$ is continuous and increasing. Since we know that $\bar Y^x_t\ge\bar L_t$, $t\ge0$, directly from the definition of $\Phi$ it follows that $\bar K^x$ satisfies the minimality condition.
Part (ii) follows from (i) and the first part of Proposition \ref{prop2.1}.
\end{proof}
\begin{corollary}
\label{cor3.7}
Assume that \mbox{\rm(A1), (A2)} are satisfied. Then
\begin{enumerate}[\rm(i)]
\item $V(x)=E_x\bar Y^x_0$, $x\in D$. Moreover, $e^{rt}\bar Y^x_t=V(X_t)$, $ t\ge0$, $P_x$-a.s. for every $x\in D$.
\item $\lim_{T\rightarrow\infty}V_T(t,x)=V(x)$ for all $t\ge0$ and $x\in D$. Moreover, for every $x\in D$,
\begin{equation}
\label{eq3.40}
V(x)-V_T(0,x)\le eE_x\Big(e^{-rT}\psi(X_T)+\int^{\infty}_Te^{-rt}\Psi^{-}(X_t)\,dt\Big),\quad T>0.
\end{equation}
\end{enumerate}
\end{corollary}
\begin{proof}
By (\ref{eq5.1}) and (\ref{eq6.7}), $V_n(0,x)\le V(x)$, $n\ge1$, whereas by (\ref{eq5.9}) and Theorem \ref{th3.6}, $V_n(0,x)=E_x\bar Y^n_0\nearrow E_x\bar Y^x_0$. Hence $E_x\bar Y^x_0\le V(x)$. On the other hand, by Remark \ref{rem3.2}, $E_x\bar Y^x_0\ge V(x)$, which proves
the first part of (i).
From (\ref{eq2.2}) and (\ref{eq5.1}) it follows that $V_T(t,x)=V_{T-t}(0,x)$, $t\in[0,T]$, $x\in D$. By (\ref{eq5.9}) and (\ref{eq3.7}),
$\lim_{T\rightarrow\infty}V_{T-t}(0,x)=\lim_{T\rightarrow\infty}E_x\bar Y^{T-t}_0=E_x\bar Y^x_0$, which equals $V(x)$. This proves the first part of (ii). By (\ref{eq3.8}) and (\ref{eq5.7}), for every $q\in(0,1)$,
\[
|V_T(0,x)-V(x)|\le (1-q)^{-1/q} E_x\Big(e^{-rT}\psi(X_T)+\int^{\infty}_Te^{-rt}\Psi^{-}(X_t)\,dt\Big),\quad T>0.
\]
Letting $q\downarrow0$ yields (\ref{eq3.40}).
Finally, by (ii), for every $x\in D$, $e^{rt}\bar Y^T_t=Y^T_t=V_T(t,X_t)\rightarrow V(X_t)$ $P_x$-a.s. as $T\rightarrow\infty$. On the other hand, by (\ref{eq3.7}) again, $e^{rt}\bar Y^T_t\rightarrow e^{rt}\bar Y^x_t$ $P_x$-a.s. as $T\rightarrow\infty$. Hence $e^{rt}\bar Y^x_t=V(X_t)$ $P_x$-a.s. for every $t\ge0$, which proves the second part of (i) because the processes $t\mapsto e^{rt}\bar Y^x_t$ and $V(X)$ are continuous.
\end{proof}
\begin{remark}
\label{rem3.8}
(i) From Corollary \ref{cor3.6}(ii) and Corollary \ref{cor3.7}(i) it follows that the solution $(\bar Y^x,\bar K^x,\bar M^x)$ of (\ref{eq6.3}) has a version $(\bar Y,\bar K,\bar M)$ independent of $x$.
\smallskip\\
(ii) The argument from the proof of
\cite[Proposition 5.6]{KR:MF} shows that if
$\psi(x)>0$ for some $x\in D$, then $\{x\in D:V(x)=\psi(x)\}\subset\{x\in D:\psi(x)>0\}$. Therefore $\bar K$ can be written in the form
\[
\bar K_t=\int^t_0e^{-rs}\Psi^{-}(X_s)\fch_{\{V(X_s)=\psi(X_s),\,\psi(X_s)>0\}}\,ds, \quad t\ge0.
\]
\end{remark}
The value of the ``perpetual European option'' with payoff function $\psi$ is defined as
$V^E(x)=\lim_{T\rightarrow\infty}E_xe^{-rT}\psi(X_T)$. Under assumption (A2) it equals zero. Therefore
the next result can be called the early exercise premium formula for perpetual American options.
This formula extends the corresponding formula for the call option in the one-dimensional model (see \cite[(6.31)]{KS}).
\begin{corollary}
Assume that \mbox{\rm(A1), (A2)} are satisfied. Then for every $x\in D$,
\begin{equation}
\label{eq3.20}
V(x)=E_x\int^{\infty}_0e^{-rt}\Psi^{-}(X_t)\fch_{\{V(X_t)=\psi(X_t),\,\psi(X_t)>0\}}\,dt.
\end{equation}
\end{corollary}
\begin{proof}
This follows immediately from (\ref{eq6.9}), Corollary \ref{cor3.7}(i) and Remark \ref{rem3.8}(ii).
\end{proof}
\begin{lemma}
\label{lem3.8}
Assume \mbox{\rm(A1)}. Then
\begin{enumerate}[\rm(i)]
\item $D\ni x\mapsto V_T(0,x)$, $D\ni x\mapsto V(x)$ are Lipschitz continuous with constant $L$.
\item For all $x\in D$, $T>0$ and $t\in[0,T]$, $V_T(t,x)\le C(1+|x|)$, where $C=\max\{L,\psi(0)\}$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) For $y\in D$ set $\tilde X=(\tilde X^1,\dots,\tilde X^d)$, where $\tilde X^i$, $i=1,\dots,d$, is defined by (\ref{eq2.2}) with $x_i$ replaced by $y_i$. Let $x,y\in D$. By (\ref{eq5.1}),
\[
|V_T(0,x)-V_T(0,y)|\le \sup_{\tau\in\TT_T}E_xe^{-r\tau}|\psi(X_{\tau})-\psi(\tilde X_{\tau})|
\le LE_xe^{-r\tau}|X_{\tau}-\tilde X_{\tau}|.
\]
Define $N^i$ as in (\ref{eq2.3}). Since
$E_xe^{-r\tau}|X^i_{\tau}-\tilde X^i_{\tau}|\le|x_i-y_i|E_xN^i_{0,\tau}=|x_i-y_i|$, it follows that
$|V_T(0,x)-V_T(0,y)|\le L|x-y|$ for all $T>0$. This and Corollary \ref{cor3.7} imply that we also have $|V(x)-V(y)|\le L|x-y|$ for $x,y\in D$. \\
(ii) Since $\psi(x)\le C(1+|x|)$, $x\in D$, for all $T>0$ and $t\in[0,T]$ we have
$V_T(t,x)\le V_T(0,x)
\le C+C\sup_{\tau\in\TT_{0,T}}E_xe^{-r\tau}|X_{\tau}|$.
Since $\delta_i\ge0$, $i=1,\dots,d$, for any $\tau\in\TT_{0,T}$ we also have
$|X_{\tau}|\le
\sum^d_{i=1}X^i_0e^{r\tau}N^i_{0,\tau}$. Since $E_xN^i_{0,\tau}=1$, $i=1,\dots,d$, this proves (ii).
\end{proof}
\subsection{Analytical characterization of the value function}
\label{sec3.2}
Let $\varrho(x)=(1+|x|^2)^{-\gamma}$ with $\gamma>(2+d)/4$. Since $(1+|x|^2)\varrho^2(x)=(1+|x|^2)^{1-2\gamma}$ and $4\gamma-2>d$, we have $\int_{\BR^d}\varrho^2(x)\,dx<\infty$ and $\int_{\BR^d}|x|^2\varrho^2(x)\,dx<\infty$. In particular,
\[
\int_{\BR^d}|\psi(x)|^2\varrho^2(x)\,dx<\infty,\qquad \int_{\BR^d}|\Psi^{-}(x)|^2\varrho^2(x)\,dx<\infty
\]
if $\psi$ satisfies (A1) and
\begin{equation}
\label{eq3.42}
\Psi^{-}(x)\le c(1+|x|),\quad x\in\BR^d,
\end{equation}
for some $c>0$. Define
\[
L^2_{\varrho}(D)=L^2(D;\varrho^2\,dx),\qquad H^{1}_{\varrho}(D)=\{u\in
L^2_{\varrho}(D):\sum^d_{j=1} \sigma_{ij}x_iu_{x_j}\in
L^{2}_{\varrho},\,i=1,\dots,d\},
\]
and for $\phi,\varphi\in C^{\infty}_0(D)$ set
\begin{align*}
B^{BS}_{\varrho}(\phi,\varphi)&=\sum^d_{i=1}\int_{D}
(r-\delta_i)x_i\partial_{x_i}\phi(x)\varphi(x)\varrho^2(x)\,dx\\
&\quad-\frac12\sum^d_{i,j=1}\int_{D}
a_{ij}\partial_{x_i}\phi(x)\partial_{x_j}(x_ix_j\varphi(x)\varrho^2(x))\,dx.
\end{align*}
One can check that $B^{BS}_{\varrho}(\phi,\varphi)\le c\|\phi\|_{H^1_{\varrho}(D)}\|\varphi\|_{H^1_{\varrho}(D)}$ for some $c>0$. Therefore the form $B^{BS}_{\varrho}$ can be extended to a bilinear form
on $H^1_{\varrho}(D)\times H^1_{\varrho}(D)$, which we still denote by $B^{BS}_{\varrho}$.
For an open set $U\subset\BR^d$, we define the spaces $H^1(U)$, $H^2(U)$ in the usual way.
\begin{definition}
We say that $v\in H^1_{\varrho}(D)$ is a variational solution of the semilinear problem
\begin{equation}
\label{eq3.32}
L_{BS}v=rv-\Phi(\cdot,v),\qquad v\ge\psi
\end{equation}
if $v(x)\ge\psi(x)$ for $x\in D$, $\Phi(\cdot,v)\in L^2_{\varrho}(D)$ and the equation in (\ref{eq3.32}) is satisfied in the weak sense, i.e. for every $\varphi\in H^1_{\varrho}(D)$,
\begin{equation}
\label{eq3.33}
B^{BS}_{\varrho}(v,\varphi)=(rv-\Phi(\cdot,v),\varphi)_{L^2_{\varrho}(D)}.
\end{equation}
\end{definition}
\begin{proposition}
\label{prop3.11}
Assume that \mbox{\rm(A1), (A2)} and \mbox{\rm(\ref{eq3.42})} are satisfied. If $v$ is a variational solution of \mbox{\rm(\ref{eq3.32})} then $v\in H^2_{loc}(D)$. In particular,
\begin{equation}
\label{eq3.41}
L_{BS}v(x)=rv(x)-\Phi(x,v(x))\quad \mbox{for a.e. }x\in D.
\end{equation}
\end{proposition}
\begin{proof}
Fix a bounded open set $U$ such that $U\subset\bar U\subset D$. Let $\xi\in C_0^{\infty}(U)$ and $\varphi=\xi/\varrho^2$. Then $\varphi\in H^1_{\varrho}$, so from (\ref{eq3.33})
it follows that
\[
B^{BS}(v,\xi)=(rv-\Phi(\cdot,v),\xi)_{L^2(\BR^d;dx)},
\]
where $B^{BS}$ is defined as $B^{BS}_{\varrho}$ but with $\varrho\equiv1$. Therefore $v$ is a weak solution, in the space $H^1(U)$, of the problem $L_{BS}v=rv-\Phi(\cdot,v)$ in $U$.
Write $e^x=(e^{x_1},\dots,e^{x_d})$ for $x=(x_1,\dots,x_d)\in\BR^d$, and then define
$\tilde v(x)=v(e^x)$, $\tilde \Phi(x)=\Phi(e^x,\tilde v(x))$, $\tilde U=\{x\in\BR^d:e^x\in U\}$ and
\[
\tilde L=\sum^d_{i=1}(r-\delta_i-a_{ii}/2)\partial_{x_i} +\frac12\sum^d_{i,j=1}a_{ij}\partial^2_{x_ix_j}\,.
\]
One can check that $\tilde v\in H^1(\tilde U)$ and $\tilde v$ is a weak solution of the problem
$\tilde L\tilde v=r\tilde v-\tilde\Phi$ in $\tilde U$. By \cite[Theorem 1, Section 6.3]{E}, $\tilde v\in H^2(\tilde U)$, from which it follows that $v\in H^2(U)$. Since $U$ was arbitrary, $v\in H^2_{loc}(D)$. The equality (\ref{eq3.41}) now follows by a standard argument (see Remark (ii) following \cite[Section 6.3, Theorem 1]{E}).
\end{proof}
\begin{theorem}
\label{th3.13}
Assume that \mbox{\rm(A1), (A2)} and \mbox{\rm(\ref{eq3.42})} are satisfied. Then $V$ is a variational solution of \mbox{\rm(\ref{eq3.32})}.
\end{theorem}
\begin{proof}
Let $W_{\varrho}=\{u\in L^2(0,T;H^1_{\varrho}):u_t\in
L^2(0,T;H^{-1}_{\varrho})\}$. In \cite{KR:AMO} it is proved that for every $T>0$, $V_T\in W_{\varrho}$ and $V_T$ is a variational solution of the Cauchy problem
\begin{equation}
\label{eq3.39}
\partial_tV_T+L_{BS}V_T=rV_T-\Phi(\cdot,V_T),\qquad V_T(T,\cdot)=\psi,
\end{equation}
i.e. $V_T\ge\psi$ and (\ref{eq3.39}) is satisfied in the weak sense. In particular, for any $\eta\in C^{\infty}_0((0,T)\times D)$ we have
\[
\int^T_0\langle\partial_tV_T(t),\eta(t)\rangle\,dt +\int^T_0B^{BS}_{\varrho}(V_T(t),\eta(t))\,dt =\int^T_0(rV_T(t)-\Phi(\cdot,V_T(t)),\eta(t))_{L^2_{\varrho}}\,dt,
\]
where $V_T(t)=V_T(t,\cdot)$, $\eta(t)=\eta(t,\cdot)$ and $\langle\cdot,\cdot\rangle$ denotes the duality pairing between $L^2(0,T;H^{-1}_{\varrho})$ and $L^2(0,T;H^1_{\varrho})$. From this one can deduce that for every $\varphi\in C^{\infty}_0(D)$,
\begin{equation}
\label{eq3.34}
\int^1_0\langle\partial_tV_T(t),\varphi\rangle\,dt +\int^1_0B^{BS}_{\varrho}(V_T(t),\varphi)\,dt =\int^1_0(rV_T(t)-\Phi(\cdot,V_T(t)),\varphi)_{L^2_{\varrho}}\,dt.
\end{equation}
By Corollary \ref{cor3.7}(ii), for every $x\in D$, $V_T(0,x)\rightarrow V(x)$ and $V_T(1,x)\rightarrow V(x)$, so
applying the dominated convergence theorem we get
\begin{equation}
\label{eq3.35}
\lim_{T\rightarrow\infty}\int^1_0\langle\partial_tV_T(t),\varphi\rangle\,dt
=\lim_{T\rightarrow\infty}(V_T(1)-V_T(0),\varphi)_{L^2_{\varrho}}=0.
\end{equation}
Suppose that $\mbox{supp}[\varphi]\subset U$ for some relatively compact open set $U\subset D$. By Lemma \ref{lem3.8}, $|\partial_{x_i}V_T|\le L$ a.e. for all $i=1,\dots,d$ and $T>0$, and $V_T$ are bounded on $(0,1)\times U$ uniformly in $T>0$. By this and Corollary \ref{cor3.7}(ii), $V_T\rightarrow V$ weakly in $L^2(0,1;H^1(U))$. Therefore
\begin{equation}
\label{eq3.36}
\lim_{T\rightarrow\infty}\int^1_0B^{BS}_{\varrho}(V_T(t),\varphi)\,dt =B^{BS}_{\varrho}(V,\varphi).
\end{equation}
Since $V_T\le V_{T'}$ if $T\le T'$, in fact $V_T\nearrow V$ as $T\rightarrow\infty$. Therefore $\Phi(\cdot,V_T)\rightarrow\Phi(\cdot,V)$ pointwise, so by (\ref{eq3.42}) and the dominated convergence theorem we get
\begin{equation}
\label{eq3.37}
\lim_{T\rightarrow\infty}\int^1_0(rV_T(t)-\Phi(\cdot,V_T(t)),\varphi)_{L^2_{\varrho}}\,dt
=(rV-\Phi(\cdot,V),\varphi)_{L^2_{\varrho}}.
\end{equation}
From (\ref{eq3.34})--(\ref{eq3.37}) it follows that $V$ satisfies (\ref{eq3.33}) for $\varphi\in C^{\infty}_0(D)$, and hence for $\varphi\in H^1_{\varrho}(D)$ by
an approximation argument. Clearly $V\ge\psi$, so $V$ is a solution of (\ref{eq3.32}).
\end{proof}
Before stating the uniqueness result, we note that under the assumptions on $\psi$ and $\delta_1,\dots, \delta_d$ stated in Remark \ref{rem3.1}(ii), $e^{-rt}P_tV(x)\rightarrow0$ as $t\rightarrow\infty$.
\begin{proposition}
Under the assumptions of Theorem \ref{th3.13} there is at most one variational solution $v$ of \mbox{\rm(\ref{eq3.32})} such that $\lim_{t\rightarrow\infty}e^{-rt}P_tv(x)=0$ for every $x\in D$.
\end{proposition}
\begin{proof}
Let $v^1,v^2$ be two solutions of (\ref{eq3.32}) such that $e^{-rt}P_tv^k(x)\rightarrow0$ as $t\rightarrow\infty$, $x\in D$, $k=1,2$, and let $v=v^1-v^2$. Define $\tilde L$ as in Proposition \ref{prop3.11} and set $\tilde v(x)=v(e^x)$. Then
$v(X)=\tilde v(Z)$, where $Z=(Z^1,\dots,Z^d)$,
$Z^i_t=\ln x_i+(r-\delta_i-a_{ii}/2)t+\sum^d_{j=1}\sigma_{ij}B^j_{0,t}$, $t\ge0$.
Choose an increasing sequence $\{U_n\}$ of bounded open sets such that $\bar U_n\subset U_{n+1}$ and $\bigcup_{n\ge1}U_n=D$ and set $\tau_n=\inf\{t>0:X_t\notin U_n\}=\inf\{t>0:Z_t\notin \tilde U_n\}$, where $\tilde U_n=\{x\in\BR^d:e^x\in U_n\}$. Since $\tilde v\in H^2(\tilde U_n)$, by the extension of It\^o's formula proved by Krylov (see \cite[Chapter II, \S10, Theorem 1]{Kr}) we have
\[
\tilde v(Z_{t\wedge\tau_n})=\tilde v(Z_0)+\sum^d_{i,j=1}\int^{t\wedge\tau_n}_0\partial_{x_i}\tilde v(Z_s)\,\sigma_{ij}\,dB^j_{0,s}
+\int^{t\wedge\tau_n}_0\tilde L\tilde v(Z_s)\,ds,\quad t\ge0.
\]
Define $Y_t=v(X_t)$, $t\ge0$. Since $v(X)=\tilde v(Z)$, it follows that
\begin{equation}
\label{eq3.38}
Y_{t\wedge\tau_n}=Y_0+\int^{t\wedge\tau_n}_0L_{BS}v(X_s)\,ds +R_{t\wedge\tau_n},\quad t\ge0,
\end{equation}
where $R_t=\sum^d_{i,j=1}\int^t_0\sigma_{ij}X^i_s\partial_{x_i}v(X_s)\,dB^j_{0,s}$.
Since $P_x(X_t\in D,t\ge0)=1$, $\tau_n\rightarrow\infty$ $P_x$-a.s. as $n\rightarrow\infty$. Therefore letting $n\rightarrow\infty$ in (\ref{eq3.38}) shows that it holds true with $t\wedge\tau_n$ replaced by $t$. Let $\bar Y_t=e^{-rt}Y_t$. Integrating by parts we obtain
\begin{align*}
\bar Y_t&=\bar Y_0-\int^t_0re^{-rs}Y_s\,ds+\int^t_0e^{-rs}\,dY_s\\
&=\bar Y_0+\int^t_0e^{-rs}(-rv+L_{BS}v)(X_s)\,ds+\int^t_0e^{-rs}\,dR_s\\
&=\bar Y_0-\int^t_0e^{-rs}(\Phi(X_s,v^1(X_s))-\Phi(X_s,v^2(X_s)))\,ds+\int^t_0e^{-rs}\,dR_s.
\end{align*}
Repeating now the argument from the proof of Proposition \ref{prop2.1} we show that
$E_x\bar Y^+_0\le E_x\bar Y^+_t$, $t\ge0$. In much the same way we show that $E_x\bar Y^-_0\le E_x\bar Y^-_t$, $t\ge0$. Hence $E_x|\bar Y_0|\le E_x|\bar Y_t|=e^{-rt}E_x|v(X_t)|=e^{-rt}P_t|v|(x)$, which converges to zero as $t\rightarrow\infty$. Thus $|v(x)|=E_x|\bar Y_0|=0$.
\end{proof}
Note that in the case of the American call and the American put on a single asset, explicit formulas for the solution of (\ref{eq3.32}) are known (see, e.g., \cite{J,KS,McK,S}).
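As an illustration of (\ref{eq3.41}) in the one-dimensional case, the sketch below numerically checks the classical closed-form perpetual put without dividends ($\delta=0$): outside the exercise region the value solves $L_{BS}v=rv$, i.e. $\Phi$ vanishes there. The boundary and value formulas used ($b=2rK/(2r+\sigma^2)$, $v(x)=(K-b)(x/b)^{-2r/\sigma^2}$ for $x>b$) are the standard ones from the cited references; the parameter values $r$, $\sigma$, $K$ are arbitrary choices for the check.

```python
# Numerical check of the classical perpetual American put (delta = 0).
# r, sigma, K are arbitrary sample parameters.
r, sigma, K = 0.05, 0.3, 10.0
lam = 2 * r / sigma ** 2              # decay exponent of the solution x^(-lam)
b = lam / (1 + lam) * K               # exercise boundary b = 2rK/(2r + sigma^2)

def V(x):
    """Perpetual put value: payoff on (0, b], smooth continuation above b."""
    return K - x if x <= b else (K - b) * (x / b) ** (-lam)

h = 1e-4
# Value matching and smooth pasting at the boundary.
assert abs(V(b) - (K - b)) < 1e-12
assert abs((V(b + h) - V(b)) / h + 1.0) < 1e-3
# In the continuation region {x > b}, V solves L_BS V = rV (Phi vanishes,
# consistent with (3.41)): 0.5*sigma^2*x^2*V'' + r*x*V' - r*V = 0.
x = 2 * b
d1 = (V(x + h) - V(x - h)) / (2 * h)
d2 = (V(x + h) - 2 * V(x) + V(x - h)) / h ** 2
assert abs(0.5 * sigma ** 2 * x ** 2 * d2 + r * x * d1 - r * V(x)) < 1e-3
```

The smooth pasting check $V'(b+)=-1$ is exactly the condition that pins down $b$ among the candidate boundaries.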
\section{Examples}
Below we give examples of payoff functions satisfying (A1), (A2) and (\ref{eq3.42}). In all the examples $\Psi^{-}$ is computed in the subset $D\cap\{\psi>0\}$ (see Remark \ref{rem3.8}(ii)).
\begin{example}
Let $d=1$.
\[
\psi(x)=(x-K)^+,\qquad \Psi^{-}(x)=(\delta x-rK)^+\quad
\mbox{(call)}
\]
\[
\psi(x)=(K-x)^+,\qquad \Psi^{-}(x)=(rK-\delta x)^+\quad
\mbox{(put)}
\]
The assumptions (A1) and (A2) are satisfied if $r>0$ in the case of the put option, and if $r>0$ and $\delta>0$ in the case of the call option. By (\ref{eq3.40}), for the put option we have
\[
V(x)-V_T(0,x)\le e\Big(Ke^{-rT}+rK\int^{\infty}_Te^{-rt}\,dt\Big)=2eKe^{-rT},\quad x>K.
\]
For the call option, $V(x)-V_T(0,x)\le 2eKe^{-\delta T}$, $T>0$, $x\in(0,K)$.
\end{example}
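The call-option bound quoted above follows from (\ref{eq3.40}) by the same arithmetic as for the put; a sketch, assuming the expectation form of the bound and using $\psi(X_T)\le X_T$, $\Psi^-(x)\le\delta x$ and $E_xX_t=xe^{(r-\delta)t}$:
\begin{align*}
V(x)-V_T(0,x)
&\le eE_x\Big(e^{-rT}X_T+\int^{\infty}_T e^{-rt}\delta X_t\,dt\Big)
=e\Big(xe^{-\delta T}+\delta x\int^{\infty}_T e^{-\delta t}\,dt\Big)\\
&=2exe^{-\delta T}\le 2eKe^{-\delta T},\quad x\in(0,K).
\end{align*}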
\begin{example}
In the examples below $d\ge 2$. In all the cases where $\psi$ is bounded, (A1) and (A2) are satisfied if $r>0$. In the other cases they are satisfied if $r>0$ and $\delta_i>0$, $i=1,\dots,d$.
\begin{enumerate}
\item Index options and spread options.
\[
\psi(x)=\big(\sum_{i=1}^{d} w_{i}x_i-K\big)^{+},\quad
\Psi^-(x)=\big(\sum_{i=1}^{d} w_{i}\delta_{i}x_i-r K\big)^{+}\quad
\mbox{(call)}
\]
\[
\psi(x)=\big(K-\sum_{i=1}^{d} w_{i}x_i\big)^{+},\quad
\Psi^-(x)=\big(r K-\sum_{i=1}^{d} w_{i}\delta_{i}x_i\big)^{+}\quad
\mbox{(put)}
\]
\item Call on max option.
\[
\psi(x)=(\max\{x_1,\dots,x_d\}-K)^{+},\qquad
\Psi^{-}(x)=\big(\sum_{i=1}^d\delta_{i}\mathbf{1}_{B_{i}}(x)x_i-r
K\big)^{+},
\]
where $B_{i}=\{x\in\BR^d: x_{i}>x_{j},\, j\neq i\}$.
\item Put on min option.
\[
\psi(x)=(K-\min\{x_1,\dots,x_d\})^{+},\qquad
\Psi^{-}(x)=\big(r K
-\sum_{i=1}^d\delta_{i}\mathbf{1}_{C_{i}}(x)x_i\big)^{+},
\]
where $C_{i}=\{x\in\BR^d: x_{i}<x_{j},\, j\neq i\}$.
\item Multiple strike options.
\[
\psi(x)=(\max\{x_1-K_{1},\dots, x_d-K_d\})^{+},
\]
\[
\Psi^{-}(x)=\big(\sum_{i=1}^{d} \mathbf{1}_{B_{i}}(x-K)(\delta_{i}x_i-r
K_{i})\big)^{+}\quad \mbox{ with }K=(K_{1},\dots,K_d).
\]
\end{enumerate}
\end{example}
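As a quick sanity check of the call-on-max formulas in item 2, the sketch below evaluates $\psi$ and $\Psi^-$ at a sample point. The parameter values are arbitrary, and ties $x_i=x_j$ are ignored since the sets $B_i$ are defined by strict inequalities.

```python
# Sanity check of the call-on-max payoff and Psi^- (d = 2).
# K, r and the dividend rates delta are arbitrary illustrative choices.
K, r = 2.0, 0.05
delta = (0.1, 0.2)

def psi(x):
    # payoff (max{x_1,...,x_d} - K)^+
    return max(max(x) - K, 0.0)

def psi_minus(x):
    # Psi^-(x) = (sum_i delta_i 1_{B_i}(x) x_i - rK)^+ with
    # B_i = {x : x_i > x_j for all j != i} (strict maximum; ties ignored)
    i = max(range(len(x)), key=lambda j: x[j])
    return max(delta[i] * x[i] - r * K, 0.0)

x = (3.0, 1.0)                          # x_1 is the strict maximum, so B_1 is active
assert psi(x) == 1.0                    # (3 - 2)^+
assert abs(psi_minus(x) - 0.2) < 1e-12  # (0.1*3 - 0.05*2)^+
```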
\subsection*{Acknowledgements}
{This work was supported by the Polish National Science Centre
under Grant \\ 2016/23/B/ST1/01543.}
\section{Introduction}
When the available data are multiple time series thought to be realizations of nonstationary random processes, estimation of their time varying mean and spectrum offers insight into the behavior of the processes, including whether and how they have changed over time.
For example, an analysis of the spatial distribution of the frequency domain characteristics of
time series of rainfall for sites spanning a wide spatial field can quantify the cyclical variability of the underlying process, while allowing for nonstationarity can suggest ways in which the climate has changed over the observation period.
Extending the work of \citet{priestley1965}, \citet{dahlhaus1997} established an asymptotic framework for locally stationary processes in which the spectral density of the process is allowed to evolve over time. Key to these asymptotics is the notion that better estimates of the local spectrum for a single stochastic process come from observing the stochastic process at finer time intervals, rather than observing it over a longer time period. This is problematic for historic time series, e.g., rainfall, for which no further observations can be made. Joint modeling of multiple time series with similar or identical local spectra is one way to ameliorate this problem, as estimates for different time series can borrow strength from each other. If additional information is available, such as covariates, it should be incorporated into the model to improve estimation. This is the motivation behind the approach taken in this article.
In particular, this article presents methodology for analyzing a panel of possibly nonstationary time series using a covariate-dependent infinite mixture model, with mixture components parameterized by their time varying mean and spectrum. The mixture components are based on AdaptSPEC \citep{rosenetal2012}, which partitions a (centered) time series into an unknown but finite number of segments, estimating the spectral density within each segment by smoothing splines. As part of the proposed method, AdaptSPEC is extended to handle missing values, a common feature of time series which can cause difficulties for nonparametric spectral methods. A second extension is the incorporation of a time varying mean, which avoids having to de-mean (center) the time series as a preliminary step. The covariates, which are assumed to be time-independent, are incorporated via the mixture using the logistic stick breaking process (LSBP) of \citet{rigondurante2020}, where the log odds for each `stick break' are modeled using a thin plate spline Gaussian process over the covariates. The model is formulated in a Bayesian framework, where Markov chain Monte Carlo (MCMC) methods are used for parameter estimation and to deal with missing values. Specifically, as in AdaptSPEC, reversible jump MCMC (RJMCMC) is used to estimate the mixture component parameters, while the LSBP parameters are estimated via the P\'{o}lya-Gamma based latent variable expansion of \citet{rigondurante2020} \citep[see also][for the original latent variable expansion in the finite mixture case]{polsonetal2013}. The model and sampling scheme are capable of handling large panels, such as that of the measles application which has nearly 200,000 observations. In addition to estimating time varying spectra for each time series in the panel, the covariate-dependent mixture structure allows inference about the underlying process at unobserved covariate values, enabling predictive inference. 
For instance, in this work, we use longitude and latitude as covariates when modeling Australian rainfall data, and are able to infer the predictive time varying spectrum of the rainfall process at unobserved locations.
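To fix ideas, the following sketch shows how logistic stick breaking turns component log odds into mixture weights at a fixed covariate value; the log odds inputs stand in for the thin plate spline Gaussian process evaluations, and the function is illustrative rather than taken from the BayesSpec implementation.

```python
import math

def lsbp_weights(log_odds):
    """Map log odds (alpha_1, ..., alpha_{H-1}) for H-1 stick breaks to
    mixture weights (pi_1, ..., pi_H) that sum to one. In the covariate-
    dependent model, each alpha_h would be a function of the covariates."""
    sigmoid = lambda a: 1.0 / (1.0 + math.exp(-a))
    weights, stick = [], 1.0
    for a in log_odds:
        v = sigmoid(a)          # probability of breaking the stick here
        weights.append(stick * v)
        stick *= 1.0 - v        # length of the remaining stick
    weights.append(stick)       # last component takes the remainder
    return weights

# Illustrative log odds values at one covariate location.
w = lsbp_weights([0.0, 1.0, -1.0])
assert abs(sum(w) - 1.0) < 1e-12
```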
Many methods have been proposed for the spectral analysis of time series. As the focus of this paper is multiple time series, we provide background in the form of an overview of methods for nonstationary single time series. We then review methods for multiple time series, stationary or otherwise. This excludes methods for multivariate time series, and we refer readers to \citet{likrafty2018} for a review of past and recent work in this active research area.
Approaches to spectral estimation for a single nonstationary time series include fitting parametric time series models with time varying parameters \citep{kitagawagersch1996,dahlhaus1997,westetal1999,yangetal2016}, smoothing the log periodogram \citep{ombaoetal2001,guoetal2003}, dividing the time series into locally stationary segments \citep{adak1998,davisetal2006,rosenetal2009,rosenetal2012}, and using short time Fourier transforms \citep{yangzhou2020}. For a recent and extensive overview of methods for single nonstationary time series, see \citet{yangetal2016}. Most directly relevant to this paper, \citet{rosenetal2009} estimate the log of the spectral density using a Bayesian mixture of splines. The time series is partitioned into small sections, and it is assumed that the log spectral density within each partition is given by a mixture of smoothing splines. The mixture weights are assumed to be time varying. \citet{rosenetal2012} present the AdaptSPEC method, which avoids the fixed partitions of \citet{rosenetal2009}. AdaptSPEC partitions the time series into one or more variable length segments in an adaptive manner, modeling the log spectral density within each segment via a smoothing spline. This results in better estimates than those obtained from the method of \citet{rosenetal2009}. Furthermore, the method can accommodate both slowly and abruptly varying processes, as well as identify stationary processes. AdaptSPEC forms the basis of our proposed model for multiple time series.
For multiple time series, \citet{kraftyetal2011} construct a covariate-dependent model for multiple stationary series in which the log spectrum has a mixed effects representation, where the effects are functions over the frequency domain. \citet{macaroprado2014} propose a Bayesian model for multiple stationary time series with a covariate-dependent spectral density composed as a sum of spectral densities corresponding to different levels of two or more factors. Following \citet{choudhurietal2004}, they model the spectral density associated with the factors via Bernstein-Dirichlet priors. \citet{kraftyetal2017} present a Bayesian model for stationary multivariate time series based on the work of \citet{rosenstoffer2007}, where multiple (multivariate) time series from different subjects are available, and subjects have an associated single covariate. \citet{bruceetal2018} present a method for multiple nonstationary time series with a single covariate that is referred to as conditional adaptive Bayesian spectrum analysis, which adaptively partitions both time- and covariate-space, modeling the spectrum within each partition by smoothing splines \citep[as in][]{rosenetal2012}. \citet{cadonnaetal2018} model multiple stationary time series using a Bayesian hierarchical model. The log-periodogram of a single stationary time series is modeled as a mixture of Gaussian distributions where the mixture weights and mean functions are frequency-dependent. The hierarchical model for multiple time series is constructed by setting the mean functions to be common to all time series while letting the weights vary between time series.
Adding to this literature, we present methodology, referred to as AdaptSPEC-X, which combines four features: multiple time series, nonstationarity in both mean and spectrum, multiple covariates, and missing data. We demonstrate the method on simulated data, and show how it can be used to estimate the mean and spectra in two application areas: Australian rainfall data, and measles incidence in the United States. Software implementing AdaptSPEC-X is available in the R package BayesSpec\footnote{Available from the authors; latest CRAN version does not contain AdaptSPEC-X.}.
The paper proceeds as follows. Section~\ref{sec:model_single} describes AdaptSPEC, the model for single nonstationary time series forming the basis for the analysis of multiple nonstationary time series. AdaptSPEC-X, a covariate-dependent infinite mixture model, is presented in Section~\ref{sec:model_multiple}. Section~\ref{sec:sampling_scheme} outlines the MCMC scheme used to estimate the model parameters. Section~\ref{sec:simulation_study} presents a study of the performance of AdaptSPEC-X on replicated simulated data from multiple time series. Section~\ref{sec:applications} describes the application areas. Australian rainfall is analyzed in Section~\ref{sec:applications_rainfall}, and measles incidence in the US is discussed in Section~\ref{sec:applications_measles}. Appendix~\ref{sec:conditional_distributions} provides details of the conditional distributions necessary for the sampling scheme, and Appendix~\ref{sec:appendix_lambda_derivation} expands on the covariance structure used to derive the conditional distribution of the missing values.
\section{Model for single time series}
\label{sec:model_single}
\citet{rosenetal2012} (henceforth RWS12) present the AdaptSPEC method for modeling single nonstationary time series, which we summarize in this section. Let $\bm{x} = (x_1, \ldots, x_n)'$ be a time series of length $n$. Assume for ease of notation that $n$ is even, and suppose initially that $\bm{x}$ is a realization from a stationary process $\{ X_t \}$ with constant mean $\mu$ and a bounded positive spectral density $f(\omega)$ for $\omega\in (-\frac{1}{2},\frac{1}{2}]$. \citet{whittle1957} showed that, for large $n$, the likelihood of $\bm{x}$ can be approximated as
\begin{equation}
p(\bm{x} \mid \mu, f)
=
\frac{1}{(2\pi)^{n / 2}}
\frac{1}{\prod_{k = 1}^n f(\omega_k)^{1 / 2}}
\exp\left\{
-\frac{1}{2} \sum_{k = 1}^n \frac{I_k}{f(\omega_k)}
\right\},
\label{eqn:whittle_likelihood}
\end{equation}
where $\omega_k = \frac{k - 1}{n}$ for $k = 1, \ldots, n$ are the Fourier frequencies,
$I_k = |d_k|^2$ is the periodogram at $\omega_k$, and
\begin{equation}
d_k = \frac{1}{\sqrt{n}}\sum_{t = 1}^n (x_t - \mu) e^{-2\pi i \omega_k (t - 1)}
\label{eqn:dft}
\end{equation}
is the discrete Fourier transform (DFT) at $\omega_k$, with $i = \sqrt{-1}$. RWS12 follow \citet{wahba1990} by expressing $\log f$ as
\begin{equation}
\log f(\omega) = \alpha_0 + h(\omega),
\label{eqn:log_spectral_density_spline}
\end{equation}
and placing a smoothing spline prior on $h(\omega)$. Due to the evenness of $f(\omega)$ and the periodogram, $h(\omega)$ is modeled on the domain $\omega \in [0, 0.5]$, corresponding to the first $\frac{n}{2} + 1$ Fourier frequencies. This prior is expressed via a linear combination of $J$ basis functions, where $J < \frac{n}{2} + 1$ is chosen to balance prior flexibility and computational resources. See Appendix~\ref{sec:conditional_distribution_theta} and RWS12 for details.
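As a concrete illustration of \eqref{eqn:whittle_likelihood}, the Whittle log-likelihood can be evaluated with a single FFT. The following Python sketch is ours (not the BayesSpec implementation) and assumes $f$ is supplied at all $n$ Fourier frequencies; for white noise, where $f(\omega) \equiv 1$, the approximation coincides with the exact Gaussian likelihood by Parseval's identity.

```python
import numpy as np

def whittle_log_likelihood(x, mu, f):
    """Whittle log-likelihood of a series x with mean mu, where f
    holds the spectral density at the Fourier frequencies
    omega_k = (k - 1) / n for k = 1, ..., n."""
    n = len(x)
    d = np.fft.fft(x - mu) / np.sqrt(n)  # DFT d_k, eqn (dft)
    I = np.abs(d) ** 2                   # periodogram I_k
    return (-0.5 * n * np.log(2.0 * np.pi)
            - 0.5 * np.sum(np.log(f))
            - 0.5 * np.sum(I / f))

# White noise has flat spectral density f(omega) = 1, so the Whittle
# likelihood equals the exact Gaussian likelihood (Parseval).
rng = np.random.default_rng(0)
x = rng.standard_normal(512)
ll = whittle_log_likelihood(x, 0.0, np.ones(512))
```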
Next we allow the underlying process $\{ X_t \}$ to be nonstationary. Let a time series consist of a number of segments, $m$, and let $\xi_{s, m}$ be the end of the $s$th segment, $s = 1, \ldots, m$, where $\xi_{0, m} = 0$ and $\xi_{m, m} = n$. Then we assume that $\{ X_t \}$ is piecewise stationary, with
\begin{equation}
X_t = \sum_{s = 1}^m X_t^s \delta_{s, m}(t),
\label{eqn:piecewise_process}
\end{equation}
where the processes $\{ X_t^s \}$ are independent and stationary with means $\mu_{s, m}$, spectral densities $f_{s,m}(\omega)$, and $\delta_{s, m}(t) = 1$ iff $t \in (\xi_{s - 1, m}, \xi_{s, m}]$. Consider a realization $\bm{x}$ from \eqref{eqn:piecewise_process}. RWS12 approximate the likelihood of $\bm{x}$ by
\begin{equation}
g(\bm{x} \mid \Theta) = \prod_{s = 1}^m p(\bm{x}_{s, m} \mid \mu_{s, m}, f_{s, m}),
\label{eqn:piecewise_whittle_likelihoods}
\end{equation}
where $\Theta = \{ m, \bm{\xi}_m, \bm{\mu}_m, f_{1, m}, \ldots, f_{m, m} \}$, $\bm{\xi}_m = (\xi_{1, m}, \ldots, \xi_{m, m})'$, $\bm{\mu}_m = (\mu_{1, m}, \ldots, \mu_{m, m})'$, $\bm{x}_{s, m} = \{ x_t : \delta_{s, m}(t) = 1 \}$ are the data for the $s$th segment, and $p(\bm{x}_{s, m} \mid \mu_{s, m}, f_{s, m})$ is the Whittle likelihood \eqref{eqn:whittle_likelihood}. The values of $m$, $\bm{\xi}_m$ and $f_{s, m}$ for $s = 1, \ldots, m$ are considered unknown and are assigned priors. For $f_{s, m}$, the prior in \eqref{eqn:log_spectral_density_spline} is used, while $m$ is assigned a discrete uniform prior on $\{ 1, \ldots, M \}$. (See RWS12 for the details of the prior on $\bm{\xi}_m$). RWS12 consider $\bm{\mu}_m$ to be known and equal to zero, but in this work we consider it unknown and assign to $\mu_{s, m}$ a uniform prior with support $\mu_- < \mu_{s, m} < \mu_+$. A minimum segment length $t_\text{min}$ is set to ensure that there are sufficient time periods within each segment so that the Whittle likelihood approximation is appropriate.
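To make the piecewise stationary model \eqref{eqn:piecewise_process} concrete, a realization can be generated by concatenating independent stationary segments. The following Python sketch is illustrative only (the helper `ar1_segment` is ours) and uses $m = 2$ AR(1) segments with breakpoint $\xi_{1, 2} = 300$ and $\xi_{2, 2} = n = 500$:

```python
import numpy as np

def ar1_segment(length, phi, mu, rng):
    """Stationary AR(1) segment with mean mu and coefficient phi;
    the first value is drawn from the stationary distribution."""
    x = np.empty(length)
    x[0] = mu + rng.standard_normal() / np.sqrt(1.0 - phi ** 2)
    for t in range(1, length):
        x[t] = mu + phi * (x[t - 1] - mu) + rng.standard_normal()
    return x

# m = 2 independent stationary segments: concatenation gives a
# realization of the piecewise process X_t with a change at t = 300.
rng = np.random.default_rng(1)
x = np.concatenate([ar1_segment(300, 0.9, 0.0, rng),
                    ar1_segment(200, -0.5, 2.0, rng)])
```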
RWS12 develop a reversible jump Markov chain Monte Carlo algorithm \citep{green1995} that samples from the posterior distribution of this model. They show that AdaptSPEC can handle both abruptly and slowly varying nonstationary time series, as well as identify whether a time series is stationary. AdaptSPEC forms the basis of our spectral estimation technique for multiple time series.
\section{Model for multiple time series}
\label{sec:model_multiple}
In this section we extend the model of Section~\ref{sec:model_single} to multiple time series. Suppose now that the stochastic process $\{ X_t \}$ has associated covariates $\bm{u} = (u_1, \ldots, u_P)'$. We model $\{ X_t \}$ with a covariate-dependent mixture structure
\begin{equation}
\{ X_t \} \sim \sum_{h = 1}^H \pi_h(\bm{u}) g_h(\{ X_t \} \mid \Theta_h),
\label{eqn:single_mixture}
\end{equation}
where the mixture component distributions $g_h$ are instances of AdaptSPEC (Equation~\eqref{eqn:piecewise_whittle_likelihoods}) with parameters $\Theta_h = \{ m^h, \bm{\xi}^h_m, \bm{\mu}^h_m, f^h_{1, m}, \ldots, f^h_{m, m} \}$, $2 \leq H \leq \infty$, and the mixture weights $\pi_h(\cdot)$ satisfy $0 \leq \pi_h(\cdot) \leq 1$ and $\sum_{h = 1}^H \pi_h(\bm{u}) = 1$. Equation~\eqref{eqn:single_mixture} implies that $\{ X_t \}$'s distribution is determined by its covariates $\bm{u}$, which, importantly, do not vary with time. The purpose of the mixture structure in Equation~\eqref{eqn:single_mixture} is to induce covariate-dependence in a flexible, semi-parametric manner, and we do not use this structure to perform inference about clustering or the number of clusters among multiple time series.
Let $\{ \bm{x}_1, \ldots, \bm{x}_N \}$ be a finite collection of $N$ time series, of length $n$ each, where each time series $\bm{x}_j = (x_{1, j}, \ldots, x_{n, j})'$ has covariates $\bm{u}_j = (u_{1, j}, \ldots, u_{P, j})'$ for $j = 1, \ldots, N$. Assuming independence conditional on $\pi_h(\cdot)$ and $\Theta_h$, it follows from Equation~\eqref{eqn:single_mixture} that the joint distribution of the collection is
\begin{equation}
p(\bm{x}_1, \ldots, \bm{x}_N) = \prod_{j = 1}^N \sum_{h = 1}^H \pi_h(\bm{u}_j) g_h(\bm{x}_j \mid \Theta_h).
\label{eqn:multiple_mixture}
\end{equation}
\subsection{Model for mixture weights}
For the mixture weights $\pi_h(\bm{u})$ in Equation~\eqref{eqn:single_mixture}, we use the logit stick-breaking prior (LSBP) developed by \citet{rigondurante2020}, according to which $\pi_h(\bm{u})$ is given by
\begin{equation}
\pi_h(\bm{u}) = v_h(\bm{u}) \prod_{h' = 1}^{h - 1} (1 - v_{h'}(\bm{u})),
\label{eqn:lsbp_weights}
\end{equation}
where $\logit v_h(\bm{u}) = w_h(\bm{u})$, so that $w_h(\bm{u})$ are the covariate-dependent log odds. This prior allows for $2 \leq H \leq \infty$ mixture components, with $v_H(\bm{u}) = 1$ when $H < \infty$.
As described by \citet{rigondurante2020}, this construction can be interpreted via sequential (continuation-ratio) logits \citep{agresti2018}. Let $z_j \in \{ 1, 2, \ldots, H \}$ be a latent indicator such that $(\bm{x}_j \mid z_j = h) \sim g_h(\bm{x}_j \mid \Theta_h)$. Then the LSBP can be represented in a generative manner as a sequence of decisions, where $p(z_j = 1) = v_1(\bm{u}_j) = (1 + \exp(-w_1(\bm{u}_j)))^{-1}$, $p(z_j = 2 \mid z_j > 1) = v_2(\bm{u}_j) = (1 + \exp(-w_2(\bm{u}_j)))^{-1}$, and so on, such that in general, $p(z_j = h \mid z_j > h - 1) = (1 + \exp(-w_h(\bm{u}_j)))^{-1}$.
The model in equations~\eqref{eqn:multiple_mixture} and \eqref{eqn:lsbp_weights} has a similar structure to the classic mixture of experts model \citep{jacobsetal1991}, in which the weights of a finite mixture depend on covariates through multinomial logits. Our motivation for choosing the LSBP (Equation~\eqref{eqn:lsbp_weights}) over multinomial logits is to obviate the choice of the number of mixture components. The model in equations~\eqref{eqn:multiple_mixture} and \eqref{eqn:lsbp_weights} is in principle an infinite mixture and so the question of the number of components becomes irrelevant. In practice, however, it is common to truncate the infinite representation at a suitably high but finite $H$ \citep{ishawaranjames2001}. The LSBP is analogous to the probit stick-breaking process \citep{chungdunson2009}, where a probit link function is used in place of the logit.
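The stick-breaking map \eqref{eqn:lsbp_weights} from log odds to weights is simple to compute; a minimal Python sketch (the function name `lsbp_weights` is ours), using the truncation $v_H(\bm{u}) = 1$:

```python
import numpy as np

def lsbp_weights(w):
    """Logit stick-breaking: map log odds w_1(u), ..., w_{H-1}(u) to
    mixture weights pi_1(u), ..., pi_H(u), with v_H(u) = 1 in the
    truncated (finite H) case."""
    v = 1.0 / (1.0 + np.exp(-np.asarray(w, dtype=float)))  # v_h = logit^{-1}(w_h)
    v = np.append(v, 1.0)                                  # v_H = 1
    stick = np.cumprod(np.append(1.0, 1.0 - v[:-1]))       # prod_{h' < h} (1 - v_{h'})
    return v * stick

# H = 4 components at a single covariate value; the weights sum to 1
# by the telescoping stick-breaking construction.
pi = lsbp_weights([0.5, -1.0, 2.0])
```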
\subsection{Model for log odds}
\label{sec:model_log_odds}
We model the log odds by a Gaussian process (GP) prior $w_h(\bm{u}) \sim \text{GP}(\beta_{0, h} + \bm{u}' \bm{\beta}_h, \tau_h^2 \Omega(\cdot, \cdot))$, where $\beta_{0, h}$ is an intercept, $\bm{\beta}_h = (\beta_{1, h}, \ldots, \beta_{P, h})'$ is a vector of regression coefficients, $\tau_h ^ 2$ is a smoothing parameter, and $\Omega(\bm{u}, \bm{u}')$ is the covariance kernel constructed via the reproducing kernel Hilbert space defined by a $P$-dimensional thin-plate Gaussian process prior \citep[see][]{wood2013}.
For a finite collection of $N$ time series $\{ \bm{x}_1, \ldots, \bm{x}_N \}$, with associated covariates $\{ \bm{u}_1, \ldots, \bm{u}_N\}$, the log odds vector, $\bm{w}_h = (w_h(\bm{u}_1), \ldots, w_h(\bm{u}_N))'$, has a multivariate normal distribution
\begin{equation}
\bm{w}_h \sim \text{N}(\beta_{0, h} \bm{1}_N + U \bm{\beta}_h, \tau_h^2 \Sigma_w),
\label{eqn:gp_mvt_norm}
\end{equation}
where $\bm{1}_N$ is an $N \times 1$ vector of ones, $U = (\bm{u}_1, \ldots, \bm{u}_N)'$ is an $N \times P$ matrix, and $\Sigma_w$ is an $N \times N$ matrix whose $j_1 j_2$th entry is equal to $\Omega(\bm{u}_{j_1}, \bm{u}_{j_2})$. To facilitate the posterior sampling scheme in Section~\ref{sec:sampling_scheme}, we transform the problem via a basis expansion \citep{wood2013}.
Let $\Sigma_w = QDQ'$ be the eigenvalue decomposition of $\Sigma_w$, where $Q$ is an $N \times N$ orthogonal matrix whose columns are the eigenvectors of $\Sigma_w$, and $D$ is a diagonal matrix containing the eigenvalues of $\Sigma_w$. Define $U^\dagger = (\bm{1}_N, U, QD^{1/2})$ by columnwise concatenation, let $\bm{\beta}^\mathrm{GP}_h$ be an $N \times 1$ vector, and $\bm{\beta}^\dagger_h = (\beta_{0, h}, \bm{\beta}_h', \bm{\beta}^{\mathrm{GP}\prime}_h)'$. The first column of $U^\dagger$ is a vector of ones, the next $P$ columns of $U^\dagger$ (equal to $U$) are the original covariates, while the last $N$ columns (equal to $QD^{1/2}$) are basis functions, with associated coefficients $\bm{\beta}^\mathrm{GP}_h$. For computational convenience and parsimony, we truncate the basis expansion to the first $B < N$ basis functions, so that $U^\dagger$ is $N \times (P + B + 1)$ and $\bm{\beta}^\dagger_h$ is $(P + B + 1) \times 1$. Equation~\eqref{eqn:gp_mvt_norm} now takes the form
\begin{equation}
\renewcommand{\arraystretch}{1.5}
\begin{array}{lll}
\bm{w}_h
& = & U^\dagger \bm{\beta}^\dagger_h, \\
\bm{\beta}^\mathrm{GP}_h
& \sim & \text{N}(\bm{0}_B, \tau^2_h I_B),
\end{array}
\renewcommand{\arraystretch}{1}
\label{eqn:gp_mvt_norm_linear}
\end{equation}
where $\bm{0}_B$ is a $B \times 1$ vector of zeros, and $I_B$ is the $B \times B$ identity matrix. This basis expansion, combined with the interpretation via sequential logits given in the previous section, facilitates the development of the posterior sampling scheme presented in Section~\ref{sec:sampling_scheme}.
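The construction of $U^\dagger$ from the eigendecomposition of $\Sigma_w$ can be sketched in a few lines of Python. This is an illustration only: the function name `design_matrix` is ours, and a squared-exponential kernel stands in for the thin-plate covariance $\Omega$ used in the paper. With $B = N$, the retained basis reproduces $\Sigma_w$ exactly, since $(Q D^{1/2})(Q D^{1/2})' = Q D Q' = \Sigma_w$.

```python
import numpy as np

def design_matrix(U, Sigma_w, B):
    """Build U_dagger = [1_N, U, Q D^{1/2}] from the eigendecomposition
    Sigma_w = Q D Q', keeping the B basis functions with the largest
    eigenvalues."""
    N = U.shape[0]
    evals, Q = np.linalg.eigh(Sigma_w)      # eigenvalues in ascending order
    idx = np.argsort(evals)[::-1][:B]       # indices of the top-B eigenvalues
    basis = Q[:, idx] * np.sqrt(np.maximum(evals[idx], 0.0))  # Q D^{1/2}
    return np.hstack([np.ones((N, 1)), U, basis])

# Toy example: N = 20 subjects with P = 2 covariates; a squared-
# exponential kernel is a stand-in for the thin-plate kernel Omega.
rng = np.random.default_rng(2)
U = rng.uniform(size=(20, 2))
d2 = ((U[:, None, :] - U[None, :, :]) ** 2).sum(axis=-1)
Sigma_w = np.exp(-d2 / 0.5)
U_dagger = design_matrix(U, Sigma_w, B=5)   # 20 x (1 + P + B) = 20 x 8
```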
The expression for $v_h(\bm{u}_j)$ becomes $\logit v_h(\bm{u}_j) = \bm{u}^{\dagger\prime}_j \bm{\beta}^\dagger_h$, where $\bm{u}^{\dagger\prime}_j$ is the $j$th row of $U^\dagger$. The prior placed on $(\beta_{0, h}, \bm{\beta}_h')'$ is $\text{N}(\bm{\mu}_\beta, \Sigma_\beta)$, where $\bm{\mu}_\beta$ is a $(P + 1) \times 1$ vector and $\Sigma_\beta$ is a $(P + 1) \times (P + 1)$ covariance matrix, so that
\[
\bm{\beta}^\dagger_h \sim \text{N}\left(
\begin{pmatrix}
\bm{\mu}_\beta \\
\bm{0}_B
\end{pmatrix},
\begin{pmatrix} \Sigma_\beta & 0 \\ 0 & \tau^2_h I_B \end{pmatrix}
\right).
\]
Finally, to complete the model specification, we assign $\tau_h$ a half-$t$ distribution \citep{gelman2006} with density
\[
p(\tau_h) \propto \left( 1 + \frac{1}{\nu_\tau}\left(\frac{\tau_h}{A_\tau}\right)^2 \right)^{-(\nu_\tau + 1) / 2}, \quad \tau_h > 0,
\]
where $A_\tau$ and $\nu_\tau$ are scale and degrees of freedom parameters, respectively. As described in Appendix~\ref{sec:conditional_distributions}, the half-$t$ distribution can be expressed as
a scale mixture of inverse Gamma distributions, which simplifies the sampling of $\tau_h$. For the results in this paper, we set $\bm{\mu}_\beta = \bm{0}$, $\Sigma_\beta = 100 I_{P + 1}$, $\nu_\tau = 3$ and $A_\tau = 10$.
Figure~\ref{fig:graphical_model} displays a graphical summary of the model in Equations~\eqref{eqn:piecewise_whittle_likelihoods} to \eqref{eqn:gp_mvt_norm_linear}, showing the dependence between the data, covariates, parameters, and hyperparameters.
\begin{figure}
\centering
\begin{tikzpicture}[node distance=0.4cm and 0.7cm]
\node (tau) {$\tau_h$};
\node[right=of tau] (beta) {$\bm{\beta}^\dagger_h$};
\node[right=of beta] (pi) {$\pi_h(\bm{u}_j)$};
\node[above=of pi] (u) {$\bm{u}_j$};
\node[right=of pi] (z) {$z_j$};
\node[right=of z] (x) {$\bm{x}_j \sim g_h(\bm{x}_j \mid \Theta_h)$};
\node[above=of x] (Theta) {$\Theta_h$};
\node[draw,densely dotted,fit=(tau) (beta) (pi) (u) (z)] (lsbp) {};
\node[draw,densely dotted,fit=(Theta) (x)] (adaptspec) {};
\node[above] at (lsbp.north) (lsbp_title) {\footnotesize LSBP};
\node[above] at (adaptspec.north) (adaptspec_title) {\footnotesize AdaptSPEC};
\node[below] at (lsbp.south) (lsbp_hyper) {$B, H$};
\node[below] at (adaptspec.south) (adaptspec_hyper) {$M, J, t_\text{min}, \mu_-, \mu_+$};
\node[draw,densely dotted,fit=(lsbp) (adaptspec) (lsbp_title) (adaptspec_title) (lsbp_hyper) (adaptspec_hyper)] (adaptspec_x) {};
\node[above] at (adaptspec_x.north) (adaptspec_x_title) {\small AdaptSPEC-X};
\path
(tau)
edge [->] (beta)
(beta)
edge [->] (pi)
(u)
edge [->] (pi)
(pi)
edge [->] (z)
(z)
edge [->] (x)
(Theta)
edge [->] (x);
\end{tikzpicture}
\caption{
A graphical representation of AdaptSPEC-X. The bottom row lists the main hyperparameters.
}
\label{fig:graphical_model}
\end{figure}
\subsection{Missing values}
\label{sec:model_missing_values}
The model can accommodate missing values by exploiting the fact that the Whittle likelihood describes a multivariate normal distribution \citep{whittle1953}. As is shown below, this implies that the conditional distribution of the missing values is also multivariate normal, and we use this fact to integrate them out as part of the MCMC scheme in Section~\ref{sec:sampling_scheme}. For ease of exposition, in this section we first return to the case where $\bm{x}$ is a single stationary time series with spectral density $f$, then later describe how this is extended to multiple nonstationary time series. Define the $n \times n$ matrix $V$ with entries $V_{t, k} = \frac{1}{\sqrt{n}} \exp\left( -2\pi i (t - 1)\omega_k \right)$ for $t = 1, \ldots, n$ and $k = 1, \ldots, n$. It then follows that $V'(\bm{x} - \mu) = (d_1, \ldots, d_n)'$, where $d_k$ is given in \eqref{eqn:dft}. Noting that $V$ is a unitary matrix, \eqref{eqn:whittle_likelihood} may be rewritten as
\begin{equation}
p(\bm{x} \mid \mu, f)
=
\frac{1}{(2\pi)^{n / 2}}
|R|^{1 / 2}
\exp\left\{
-\frac{1}{2} (\bm{x} - \mu)' V R V^* (\bm{x} - \mu)
\right\},
\label{eqn:whittle_likelihood_mvtnorm}
\end{equation}
where $R = \diag(\bm{r})$, $\bm{r} = (1 / f(\omega_1), \ldots, 1 / f(\omega_n))'$, and $V^*$ is the conjugate transpose of $V$. The precision matrix $\Lambda = V R V^*$ is symmetric and circulant, and is thus defined by its first column, with entries $\Lambda_{t, 1} = \frac{1}{n} \sum_{k = 1}^n \frac{1}{f(\omega_k)} e^{-2\pi i (t - 1)\omega_k}$ (see Appendix~\ref{sec:appendix_lambda_derivation} for derivation). Suppose some values of $\bm{x}$ are missing, and write $\bm{x} = (\bm{x}_\text{mis}', \bm{x}_\text{obs}')'$, where $\bm{x}_\text{mis}$ and $\bm{x}_\text{obs}$ are the missing and observed values, respectively. From standard multivariate normal conditioning results,
\begin{equation}
(\bm{x}_\text{mis} \mid \bm{x}_\text{obs}, \mu, f) \sim \text{N}(\mu_{\text{mis} \mid \text{obs}}, \Lambda_{\text{mis} \mid \text{obs}}^{-1}),
\label{eqn:missing_values_conditional}
\end{equation}
where
\begin{equation}
\begin{split}
\mu_{\text{mis} \mid \text{obs}}
& = \mu - \Lambda_{\text{mis},\text{mis}}^{-1} \Lambda_{\text{mis},\text{obs}} (\bm{x}_\text{obs} - \mu), \\
\Lambda_{\text{mis} \mid \text{obs}}
& = \Lambda_{\text{mis},\text{mis}},
\end{split}
\label{eqn:missing_values_parameters}
\end{equation}
and the quantities in Equation~\eqref{eqn:missing_values_parameters} can be obtained from expressing $\Lambda$ as
\begin{equation}
\Lambda = \begin{bmatrix}
\Lambda_{\text{mis},\text{mis}} & \Lambda_{\text{mis},\text{obs}} \\
\Lambda_{\text{mis},\text{obs}}' & \Lambda_{\text{obs},\text{obs}}
\end{bmatrix} = \begin{bmatrix}
V_\text{mis} R V_\text{mis}^* & V_\text{mis} R V_\text{obs}^* \\
V_\text{obs} R V_\text{mis}^* & V_\text{obs} R V_\text{obs}^* \\
\end{bmatrix},
\label{eqn:missing_values_lambda_decomposition}
\end{equation}
in which $V_\text{mis}$ and $V_\text{obs}$ are matrices made up of the rows of $V$ corresponding to the missing and observed times, respectively. The quantities in Equation~\eqref{eqn:missing_values_lambda_decomposition} can be computed efficiently by the fast Fourier transform. When simulating from Equation~\eqref{eqn:missing_values_conditional}, the most computationally intensive step is the inversion, $\Lambda_{\text{mis},\text{mis}}^{-1}$, in Equation~\eqref{eqn:missing_values_parameters}. In a general framework for spectral estimation with stationary time series, \citet{guiness2019} describes computationally efficient methods for missing data imputation that could be used to simulate from Equation~\eqref{eqn:missing_values_conditional}. For this article we compute $\Lambda_{\text{mis},\text{mis}}^{-1}$ the usual way, i.e., via its Cholesky decomposition.
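The computations above can be sketched in a few lines of Python (illustrative only; the function names are ours, and $f$ is assumed given at all $n$ Fourier frequencies). One FFT yields the first column of $\Lambda$, the circulant structure fills in the rest, and the conditional mean follows from Equation~\eqref{eqn:missing_values_parameters}:

```python
import numpy as np

def whittle_precision(f):
    """Precision matrix Lambda = V R V* implied by the Whittle
    likelihood. It is circulant, so it is determined by its first
    column, Lambda_{t,1} = (1/n) sum_k f(omega_k)^{-1}
    exp(-2 pi i (t - 1) omega_k), computed here with one FFT."""
    n = len(f)
    col = np.real(np.fft.fft(1.0 / f)) / n          # first column of Lambda
    idx = (np.arange(n)[:, None] - np.arange(n)[None, :]) % n
    return col[idx]                                  # circulant fill-in

def conditional_mean_missing(x, mu, f, mis):
    """Conditional mean of the missing entries x[mis] given the
    observed ones, as in eqn (missing_values_parameters)."""
    Lam = whittle_precision(f)
    obs = np.setdiff1d(np.arange(len(x)), mis)
    A = Lam[np.ix_(mis, mis)]                        # Lambda_{mis,mis}
    C = Lam[np.ix_(mis, obs)]                        # Lambda_{mis,obs}
    return mu - np.linalg.solve(A, C @ (x[obs] - mu))

# Flat spectrum (white noise): Lambda is the identity, so the missing
# values have conditional mean mu regardless of the observed data.
n = 8
x = np.arange(8.0)
m = conditional_mean_missing(x, 0.0, np.ones(n), np.array([2, 5]))
```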
Missing values may be accommodated in AdaptSPEC by partitioning the data in each segment, $\bm{x}_{s, m}$, into missing and observed times as above, and sampling from \eqref{eqn:missing_values_conditional} for each segment as part of the MCMC scheme. As shown in the next section, this is extended to the full AdaptSPEC-X model by conditioning on the latent indicators $\bm{z}$.
The Whittle likelihood is used in AdaptSPEC-X in two ways: first, as a nonparametric technique based on its asymptotic properties \citep[see Section~\ref{sec:model_single} and][]{rosenetal2012}, and second, in the assumption that the missing values follow a multivariate normal distribution. We justify the latter assumption in several ways. First, \citet{guiness2019}, who makes a similar assumption regarding missing values, found through both theory and numerical experiments that the assumption did not have a deleterious effect on spectral estimation. Second, it is hard to avoid making some assumption about the distribution of the missing values, and multivariate normality is arguably the most parsimonious choice, as it requires only the first two central moments to be specified (the minimum necessary to have mean and spectrum consistent with the observed data). Finally, as shown above and by \citet{guiness2019}, the assumption is computationally convenient.
\section{Sampling scheme}
\label{sec:sampling_scheme}
Define $\bm{z} = (z_1, \ldots, z_N)'$, $\bm{x}^\text{all} = (\bm{x}_1', \ldots, \bm{x}_N')'$, $\bm{\beta}^\dagger = \{ \bm{\beta}^\dagger_1, \ldots, \bm{\beta}^\dagger_{H - 1} \}$, $\Theta = \{ \Theta_1, \ldots, \Theta_H \}$, and $\bm{\tau} = (\tau_1, \ldots, \tau_{H - 1})'$. Let $\bm{x}^\text{all} = (\bm{x}^{\text{all}\prime}_\text{mis}, \bm{x}^{\text{all}\prime}_\text{obs})'$ be the decomposition of $\bm{x}^\text{all}$ into missing and observed times, respectively. Values produced in steps 2, 3, 4, and 5 of the MCMC sampling scheme below are indicated by the superscript `$[l + 0.5]$', and are then used in a label swapping move in Step~\ref{enum:sampling_label_swap} to produce the $(l + 1)$th iteration. As described below, this move improves the convergence of the sampling scheme.
\begin{enumerate}[label=Step \arabic*.,ref=\arabic*,leftmargin=*]
\item
$(\bm{x}_\text{mis}^{\text{all} [l + 1]} \mid \bm{\beta}^{\dagger[l]}, \Theta^{[l]}, \bm{z}^{[l]}, \bm{x}^\text{all}_\text{obs})$ as per Section~\ref{sec:model_missing_values}.
\label{enum:sampling_missing}
\item
$(\Theta^{[l + 0.5]}_h \mid \bm{x}_\text{mis}^{\text{all} [l + 1]}, \bm{z}^{[l]}, \bm{x}^\text{all}_\text{obs})$ for each $h = 1, \ldots, H$. This step is potentially transdimensional, and uses the reversible-jump MCMC scheme of RWS12, with two modifications. The first one samples the segment means, $\bm{\mu}^h_m$, as described in Appendix~\ref{sec:sampling_scheme_means}. The second modification incorporates a Riemann manifold Hamiltonian Monte Carlo \citep[RMHMC for short, see][]{girolami2011} step to accelerate convergence, as described in Appendix~\ref{sec:sampling_scheme_rmhmc}.
\label{enum:sampling_theta}
\item
$(\bm{z}^{[l + 0.5]} \mid \bm{\beta}^{\dagger[l]}, \Theta^{[l + 0.5]}, \bm{x}_\text{mis}^{\text{all} [l + 1]}, \bm{x}^\text{all}_\text{obs})$.
\label{enum:sampling_zvec}
\item
$(\bm{\beta}^{\dagger[l + 0.5]}_h \mid \bm{\tau}^{[l]}, \bm{z}^{[l + 0.5]})$ for each $h = 1, \ldots, H$. This uses the Polya-Gamma data augmentation scheme developed by \citet{polsonetal2013}, as applied to the LSBP by \citet{rigondurante2020}.
\label{enum:sampling_beta}
\item
$(\tau^{[l + 0.5]}_h \mid \bm{\beta}^{\dagger[l + 0.5]})$ for each $h = 1, \ldots, H$.
\label{enum:sampling_tau}
\item
$(\Theta^{[l + 1]}, \bm{z}^{[l + 1]}, \bm{\beta}^{\dagger[l + 1]}, \bm{\tau}^{[l + 1]} \mid \bm{x}_\text{mis}^{\text{all} [l + 1]}, \bm{x}^\text{all}_\text{obs})$ using a label swapping step, described below.
\label{enum:sampling_label_swap}
\end{enumerate}
The details of steps~\ref{enum:sampling_theta}, \ref{enum:sampling_zvec}, \ref{enum:sampling_beta}, and \ref{enum:sampling_tau} are presented in Appendix~\ref{sec:conditional_distributions}, while Step~\ref{enum:sampling_missing}, for $\bm{x}_\text{mis}$, is described in Section~\ref{sec:model_missing_values}. For Step~\ref{enum:sampling_label_swap}, we adapt a label swapping move from \citet{hastieetal2015}, who find that it improves convergence in the context of MCMC samplers for Dirichlet process mixture models. The label swapping step is composed of the following substeps:
\begin{enumerate}[label=Step 6\alph*.,ref=6\alph*,leftmargin=*]
\item
Pick uniformly at random components $h_1, h_2 \in \{ 1, \ldots, H \}$, $h_1 < h_2$, to swap.
\item
Construct proposal component indicators $\bm{z}^\text{swap}$ and $\Theta^\text{swap}$ such that
\begin{align*}
z^\text{swap}_j = \begin{cases}
h_2 & \text{if } z^{[l + 0.5]}_j = h_1, \\[0.1cm]
h_1 & \text{if } z^{[l + 0.5]}_j = h_2, \\[0.1cm]
z^{[l + 0.5]}_j & \text{otherwise,}
\end{cases}
&
\qquad \Theta^\text{swap}_h = \begin{cases}
\Theta^{[l + 0.5]}_{h_2} & \text{if } h = h_1, \\[0.1cm]
\Theta^{[l + 0.5]}_{h_1} & \text{if } h = h_2, \\[0.1cm]
\Theta^{[l + 0.5]}_{h} & \text{otherwise.}
\end{cases}
\end{align*}
\label{enum:label_swap_swap}
\item
Construct proposal $\bm{\tau}^\text{swap}$ by setting
\[
\tau^\text{swap}_h = \begin{cases}
\tau^{[l + 0.5]}_{h_2} & \text{if } h = h_1 \\[0.1cm]
\tau^{[l + 0.5]}_{h_1} & \text{if } h = h_2 \\[0.1cm]
\tau^{[l + 0.5]}_{h} & \text{otherwise,}
\end{cases}
\]
and sample proposal $\bm{\beta}^{\dagger\text{swap}}_{h_1}, \bm{\beta}^{\dagger\text{swap}}_{h_2}$ from
\[
q(\bm{\beta}^{\dagger\text{swap}}_h \mid \bm{z}^{[l + 0.5]}, \bm{\tau}^{[l + 0.5]}) \sim \text{N}(
\mu^\text{mode}_h,
\Sigma^\text{mode}_h
),
\]
where $\mu^\text{mode}_h$ and $\Sigma^\text{mode}_h$ are the mode and the negative inverse of the Hessian, respectively, of $\log p(\bm{\beta}^{\dagger\text{swap}}_h \mid \bm{z}^\text{swap}, \bm{\tau}^\text{swap})$.
\label{enum:label_swap_tau_beta}
\item
Accept the swap with probability equal to the Metropolis-Hastings ratio
\begin{align*}
& \min\Bigg\{
1,
\frac{
p(\bm{\beta}^{\dagger\text{swap}}, \bm{z}^\text{swap}, \Theta^\text{swap}, \bm{\tau}^\text{swap} \mid \bm{x}_\text{mis}^{\text{all} [l + 1]}, \bm{x}^\text{all}_\text{obs})
}{
p(\bm{\beta}^{\dagger[l + 0.5]}, \bm{z}^{[l + 0.5]}, \Theta^{[l + 0.5]}, \bm{\tau}^{[l + 0.5]} \mid \bm{x}_\text{mis}^{\text{all} [l + 1]}, \bm{x}^\text{all}_\text{obs})
} \\
& \qquad \qquad
\frac{
q(\bm{\beta}^{\dagger[l + 0.5]}_{h_1} \mid \bm{z}^\text{swap}, \bm{\tau}^\text{swap})
}{
q(\bm{\beta}^{\dagger\text{swap}}_{h_1} \mid \bm{z}^{[l + 0.5]}, \bm{\tau}^{[l + 0.5]})
}
\frac{
q(\bm{\beta}^{\dagger[l + 0.5]}_{h_2} \mid \bm{z}^\text{swap}, \bm{\tau}^\text{swap})
}{
q(\bm{\beta}^{\dagger\text{swap}}_{h_2} \mid \bm{z}^{[l + 0.5]}, \bm{\tau}^{[l + 0.5]})
}
\Bigg\}.
\end{align*}
If accepted, set $\bm{\beta}^{\dagger[l + 1]}, \bm{z}^{[l + 1]}, \Theta^{[l + 1]}$ and $\bm{\tau}^{[l + 1]}$ equal to $\bm{\beta}^{\dagger\text{swap}}, \bm{z}^\text{swap}, \Theta^\text{swap}$ and $\bm{\tau}^\text{swap}$, respectively. Otherwise, $\bm{\beta}^{\dagger[l + 1]}, \bm{z}^{[l + 1]}, \Theta^{[l + 1]}$ and $\bm{\tau}^{[l + 1]}$ are set to $\bm{\beta}^{\dagger[l + 0.5]}, \bm{z}^{[l + 0.5]}, \Theta^{[l + 0.5]}$ and $\bm{\tau}^{[l + 0.5]}$, respectively.
\end{enumerate}
In Step~\ref{enum:label_swap_swap}, the labels of the component indicators for the chosen pair $h_1, h_2$ are swapped, as are the corresponding mixture component parameters $\Theta_h$, leaving the likelihood unchanged. Step~\ref{enum:label_swap_tau_beta} swaps the smoothing parameters $\tau_{h_1}$ and $\tau_{h_2}$ and samples new values of $\bm{\beta}^\dagger_h$ from a normal approximation centered on its conditional mode. The new values are necessary because, due to the sequential nature of the LSBP, merely swapping the values of $\bm{\beta}^\dagger_{h_1}$ and $\bm{\beta}^\dagger_{h_2}$ is unlikely to result in an acceptable proposal.
AdaptSPEC-X is implemented in the latest version of the R package BayesSpec.\footnote{Available from the authors; latest CRAN version does not contain AdaptSPEC-X.} The implementation is in R and C++, and can take advantage of multiple processor cores to reduce the running time of the analysis.
\section{Simulation study}
\label{sec:simulation_study}
We now demonstrate AdaptSPEC-X using replicated simulated data including covariates and multiple time series with known time varying mean and spectrum. We are interested in the model's ability to recover the means and spectra at both observed and unobserved covariate values. Let $U = (\bm{u}_1, \ldots, \bm{u}_{100})'$ be a $100 \times 2$ design matrix corresponding to $N = 100$ subjects, each with two covariates, where the $\bm{u}_j$ are sampled uniformly from $[0, 1] \times [0, 1]$. Each $\bm{u}_j$ is mapped deterministically to $z_j \in \{ 1, 2, 3, 4 \}$, according to the plot shown in Figure~\ref{fig:multiple_simulation_study_true_categories}, which also includes the locations of the $100$ sampled points, denoted by crosses. Four locations are chosen as example time series, marked with green circles and labeled D1 through D4 (corresponding to $z_j = 1$ through $z_j = 4$, respectively). Four more locations, marked with red diamonds and labeled T1 through T4 (again for $z_j = 1$ to $4$), have no corresponding time series and are used as test points to evaluate the predictive inferences. The four regions correspond to four different data generating processes. Each time series $\bm{x}_j$, within region $z_j$, $j = 1, \ldots, 100$, is a realization of length $n = 256$ from
\begin{equation}
(x_{j, t} - \mu_{z_j, t})
=
\phi_{z_j, 1, t} (x_{j, t - 1} - \mu_{z_j, t - 1})
+ \phi_{z_j, 2, t} (x_{j, t - 2} - \mu_{z_j, t - 2})
+ \epsilon_{j, t}, \\
\label{eqn:multiple_simulation_study_model}
\end{equation}
where $\epsilon_{j, t} \sim \text{N}(0, 1)$, and the values of $\mu_{z_j, t}$ and $\phi_{z_j, p, t}$ are given in the following table:
\begin{center}
\begin{tabular}{l|r|rr||r|rr}
& \multicolumn{3}{c||}{$t \leq 128$} & \multicolumn{3}{c}{$t > 128$} \\
& $\mu_{z_j, t}$ & $\phi_{z_j, 1, t}$ & $\phi_{z_j, 2, t}$
& $\mu_{z_j, t}$ & $\phi_{z_j, 1, t}$ & $\phi_{z_j, 2, t}$ \\ \hline
$z_j = 1$ & -1.5 & 1.5 & -0.75
& -2 & -0.8 & 0 \\
$z_j = 2$ & 1 & -0.8 & 0
& -1 & -0.8 & 0 \\
$z_j = 3$ & 0 & 1.5 & -0.75
& 0 & 1.5 & -0.75 \\
$z_j = 4$ & 1 & 0.2 & 0
& 1 & 1.5 & -0.75 \\
\end{tabular}
\end{center}
Thus, time series with $z_j = 1$ have two segments with different means and different spectra, those with $z_j = 2$ have two segments with different means but with the same spectra, time series with $z_j = 3$ have only one stationary segment, and those with $z_j = 4$ have two segments with the same mean but with different spectra. In each time series, 10\% of the times are set as missing. Figure~\ref{fig:multiple_simulation_study_ts} displays example realizations from Process~\eqref{eqn:multiple_simulation_study_model} for $z_j = 1, 2, 3$ and $4$, showing the time series values, underlying time varying mean, and the times at which values are missing.
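The data generating process is straightforward to reproduce; the following Python sketch (the function name `simulate_tvar2` is ours) simulates one series from Process~\eqref{eqn:multiple_simulation_study_model} for the cluster $z_j = 1$, using the parameter values in the table above:

```python
import numpy as np

def simulate_tvar2(mu, phi1, phi2, rng):
    """One realization of the time varying AR(2) process around a time
    varying mean: mu, phi1, phi2 are length-n parameter arrays."""
    n = len(mu)
    x = np.zeros(n)
    for t in range(n):
        dev1 = x[t - 1] - mu[t - 1] if t >= 1 else 0.0
        dev2 = x[t - 2] - mu[t - 2] if t >= 2 else 0.0
        x[t] = mu[t] + phi1[t] * dev1 + phi2[t] * dev2 + rng.standard_normal()
    return x

# Cluster z_j = 1: an AR(2) segment followed by an AR(1) segment,
# with a mean shift from -1.5 to -2 at t = 128.
n, half = 256, 128
first = np.arange(n) < half
mu = np.where(first, -1.5, -2.0)
phi1 = np.where(first, 1.5, -0.8)
phi2 = np.where(first, -0.75, 0.0)
x = simulate_tvar2(mu, phi1, phi2, np.random.default_rng(3))
```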
\begin{figure}
\centering
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics{figures/multiple-simulation-study-true-categories.pdf}
\caption{}
\label{fig:multiple_simulation_study_true_categories}
\end{subfigure}
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics{figures/multiple-simulation-study-ts.pdf}
\caption{}
\label{fig:multiple_simulation_study_ts}
\end{subfigure}
\caption{
(a) Underlying surface mapping $\bm{u}_j$ to $z_j$ for Process~\eqref{eqn:multiple_simulation_study_model}, where the $100$ sampled locations are shown as crosses. The color of a region indicates the corresponding cluster. Four of the crosses are used as examples in the paper and are colored green and labeled D1 to D4. Four test points labeled T1 to T4 are shown with red diamonds.
(b) Example realizations from Process~\eqref{eqn:multiple_simulation_study_model} corresponding to $z_j = 1$ at the top through $z_j = 4$ at the bottom. Black lines give the values of the time series, blue lines the underlying time varying mean $\mu_{z_j, t}$, and red ticks on the bottom axis mark missing values.
}
\end{figure}
We sample 100 replicates from Process~\eqref{eqn:multiple_simulation_study_model}, and fit AdaptSPEC-X to each using the MCMC sampling scheme of Section~\ref{sec:sampling_scheme}. We run 50,000 iterations of the MCMC scheme, discarding the first 10,000 as burn-in. Each mixture component has a maximum number of segments $M = 4$, minimum segment length $t_\text{min} = 40$, $J = 25$ basis functions for the smoothing spline prior on the log spectra, and prior support $(\mu_-, \mu_+) = (-10, 10)$ for $\mu^h_{s, m}$. The LSBP is truncated at $H = 25$ components, and has $B = 10$ basis functions. To assess the quality of the estimated time varying mean, we define the mean squared error (MSE) for the mean as
\begin{equation}
\text{MSE}_\text{mean}(\bm{u}) = \frac{1}{n} \sum_{t = 1}^n \left[ \hat{\mu}(t, \bm{u}) - \mu(t, \bm{u}) \right]^2,
\end{equation}
where $\hat{\mu}(t, \bm{u})$ is the estimate of $\mu(t, \bm{u})$, the true time varying mean at covariates $\bm{u}$. Similarly, we define the MSE for the spectrum as
\begin{equation}
\text{MSE}_\text{spec}(\bm{u}) =
\frac{1}{n} \frac{1}{k_\text{max}}
\sum_{t = 1}^n
\sum_{k = 1}^{k_\text{max}} \left[
\log \hat{f}\left( t, \frac{k - 1}{2k_\text{max} - 2}, \bm{u} \right)
- \log f\left( t, \frac{k - 1}{2k_\text{max} - 2}, \bm{u} \right)
\right] ^ 2,
\end{equation}
where $k_\text{max} = 128$, and $\log \hat{f}(t, \omega, \bm{u})$ is the estimate of $\log f(t, \omega, \bm{u})$, the true time varying log spectral density at location $\bm{u}$. Figure~\ref{fig:multiple_simulation_study_mse_boxplot} presents boxplots of $\text{MSE}_\text{mean}$ (top) and $\text{MSE}_\text{spec}$ (bottom) at the observed locations D1--D4 and the unobserved test locations T1--T4, from left to right, respectively. The median $\text{MSE}_\text{mean}$ is less than 0.02 at all covariate values except for D2 and T2, for which it is 0.09. Similarly, the median $\text{MSE}_\text{spec}$ is less than 0.08 at all locations except for D2 and T2, for which it is 0.34 and 0.32, respectively. Estimates of the time varying mean and spectrum corresponding to the median MSE values are shown in figures~\ref{fig:multiple_simulation_study_tvm} and \ref{fig:multiple_simulation_study_tvs}, respectively. These qualitatively match the $\text{MSE}_\text{mean}$ and $\text{MSE}_\text{spec}$ scores, in that the estimates for points other than D2 and T2 are visually very close to the truth, while for D2 and T2 some differences are visible.
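These two error measures are straightforward to compute once the true and estimated surfaces are evaluated on a common grid. A minimal sketch follows, with toy stand-ins for the true and estimated quantities; the function names are ours, not part of the paper's software.

```python
import numpy as np

def mse_mean(mu_hat, mu_true):
    # (1/n) * sum_t [mu_hat(t) - mu(t)]^2 over the n time points
    return np.mean((mu_hat - mu_true) ** 2)

def mse_spec(log_f_hat, log_f_true):
    # Average squared error of the log spectrum over an n x k_max grid,
    # evaluated at frequencies omega_k = (k - 1) / (2 * k_max - 2)
    return np.mean((log_f_hat - log_f_true) ** 2)

# Toy check: a perfect mean estimate has zero error, and a constant
# offset of 0.1 in the log spectrum gives an MSE of 0.1^2 = 0.01
n, k_max = 256, 128
mu_true = np.sin(2 * np.pi * np.arange(n) / n)
log_f_true = np.zeros((n, k_max))
print(mse_mean(mu_true, mu_true))              # 0.0
print(mse_spec(log_f_true + 0.1, log_f_true))  # ~0.01
```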
There are several explanations for the worse performance for D2 and T2 relative to the other points. One reason is that the spatial model has relatively little information with which to identify the existence of a cluster: these points belong to the category $h = 2$, which is represented by only 8 time series (in contrast to $h = 1, 3$ and $4$, which have 41, 18, and 33 members, respectively), and which has the smallest spatial area in the study. Another reason is that the process mean for $h = 2$ (equal to $1$ in the first half and $-1$ in the second half) is similar to that of the surrounding category, $h = 4$ (constant mean of $1$), making it harder to distinguish between the categories. For both reasons, it is not surprising to see worse performance for T2 and D2, and this in fact reflects good model behavior, in the sense that it represents genuine model uncertainty. This is seen in Figure~\ref{fig:multiple_simulation_study_tvm}, where the estimates $\hat{\mu}(t, \bm{u})$ for T2 and D2 in the second half of the time series are shrunk towards that of $h = 4$, the enclosing category. It is also worth noting that the median $\text{MSE}_\text{mean}$ and $\text{MSE}_\text{spec}$ for T2 and D2 are small relative to the scales of their mean and log spectrum, respectively. This can also be seen qualitatively from the similarity between the true and estimated means and spectra in figures~\ref{fig:multiple_simulation_study_tvm} and \ref{fig:multiple_simulation_study_tvs}.
\begin{figure}
\centering
\includegraphics{figures/multiple-simulation-study-mse-boxplot.pdf}
\caption{MSE across 100 replications for estimates of mean (top) and spectrum (bottom) for Process~\eqref{eqn:multiple_simulation_study_model} at the observed $\bm{u} =$ D1--D4 and the unobserved $\bm{u} =$ T1--T4 from left to right, respectively.}
\label{fig:multiple_simulation_study_mse_boxplot}
\end{figure}
\begin{figure}
\centering
\includegraphics{figures/multiple-simulation-study-tvm.pdf}
\caption{Estimated mean $\hat{\mu}(t, \bm{u})$ corresponding to the median $\text{MSE}_\text{mean}(\bm{u})$ (red) and true mean $\mu(t, \bm{u})$ (blue) for Process~\eqref{eqn:multiple_simulation_study_model}. The first row shows the estimates for $\bm{u} =$ D1--D4 from left to right, respectively, while the second row shows the estimates for test points $\bm{u} =$ T1--T4.}
\label{fig:multiple_simulation_study_tvm}
\end{figure}
\begin{figure}
\centering
\includegraphics{figures/multiple-simulation-study-tvs.png}
\caption{Estimated time varying log spectra $\log \hat{f}(t, \omega, \bm{u})$ corresponding to the median $\text{MSE}_\text{spec}(\bm{u})$ and true time varying log spectra $\log f(t, \omega, \bm{u})$ for Process~\eqref{eqn:multiple_simulation_study_model}. The first row shows $\log f(t, \omega, \bm{u})$ for $\bm{u} =$ D1--D4 from left to right, respectively, while the second row shows the estimates $\log \hat{f}(t, \omega, \bm{u})$. The third and fourth rows display the analogous quantities for the test points $\bm{u} =$ T1--T4.}
\label{fig:multiple_simulation_study_tvs}
\end{figure}
\section{Applications}
\label{sec:applications}
In this section, we describe two applications. Section~\ref{sec:applications_rainfall} considers Australian rainfall, while incidence counts of measles in the United States are analyzed in Section~\ref{sec:applications_measles}.
\subsection{Australian rainfall data}
\label{sec:applications_rainfall}
Rainfall is governed in large part by cyclical processes, in particular the seasonal cycle driven by the Earth's orbit around the sun. It has therefore historically been a natural application area for spectral methods. These have been used to study interannual variation \citep[see, among many others,][]{alter1924,rajagopalanlall1998,anselletal2000}, intraannual or intraseasonal variation \citep{joshipandey2011}, and the connections between rainfall and other climatic processes \citep{rajagopalanlall1998,anselletal2000}. Here we focus on identifying changes in both the mean and spectrum of Australian rainfall. As part of a report on climate change tendered by several Australian government agencies, \citet{caietal2007} found that, since 1950, the Australian north has seen increased annual rainfall, while the southeast and southwest have experienced the opposite. The causes of these trends have been the subject of study and debate \citep[see, among others,][]{hopeetal2006,ummenhoferetal2009,pooketal2012,risbeyetal2013}. Apart from trends in overall rainfall, several authors have reported relative increases in heavy rainfall events, indicating changes in the variability of rainfall \citep{caietal2007,gallantetal2013}. We contribute to this literature by using AdaptSPEC-X to analyze the time varying mean and spectrum of Australian rainfall from sites dispersed over a wide spatial field, addressing simultaneously the question of whether changes have occurred in rainfall levels and rainfall variability.
We use data from \citet{bertolaccietal2019}, who studied the climatology of Australian daily rainfall using measurements from 17,606 sites across the continent. In particular, we use 151 of these sites characterized by having long and nearly contiguous rainfall records, the locations of which are displayed in Figure~\ref{fig:monthly_rainfall_map}.
These sites are among those identified by \citet{laveryetal1992} as having high quality records suitable for monitoring and assessing climate change. The raw time series are daily, and observations are typically made at 9 am local time, recording the total rainfall in millimeters (mm) for the previous 24 hours. Aggregation to monthly data is performed by calculating the average daily rainfall for the month.
To avoid artifacts, we treat as missing any month with fewer than fifteen days of measurements available (that is, fewer than fifteen non-missing daily values).
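This aggregation rule can be sketched as follows, with synthetic daily values standing in for real rainfall records (the gamma draws are purely illustrative).

```python
import numpy as np

def monthly_mean(daily, min_days=15):
    # Average daily rainfall (mm) for one month; months with fewer than
    # min_days of non-missing measurements are themselves treated as missing
    observed = daily[~np.isnan(daily)]
    return observed.mean() if observed.size >= min_days else np.nan

rng = np.random.default_rng(0)
september = rng.gamma(0.5, 4.0, 30)   # 30 daily totals (mm)
october = rng.gamma(0.5, 4.0, 31)
september[5:25] = np.nan              # 20 missing days -> only 10 observed

print(monthly_mean(september))        # nan (fewer than 15 days observed)
print(monthly_mean(october))          # a finite monthly average
```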
\begin{figure}[ht!]
\centering
\begin{subfigure}[t]{\linewidth}
\begin{minipage}[c]{0.10\textwidth}
\caption{}
\label{fig:monthly_rainfall_map}
\end{minipage}
\begin{minipage}[c]{0.825\textwidth}
\centering
\includegraphics{figures/monthly-rainfall-map.pdf}
\end{minipage}\hfill
\end{subfigure}
\begin{subfigure}[t]{\linewidth}
\begin{minipage}[c]{0.10\textwidth}
\caption{}
\label{fig:monthly_rainfall_ts}
\end{minipage}
\begin{minipage}[c]{0.90\textwidth}
\centering
\includegraphics{figures/monthly-rainfall-ts.pdf}
\end{minipage}\hfill
\end{subfigure}
\caption{
(a) Locations of the 151 rainfall sites. Four example sites are marked with green circles, and four test locations are marked with red diamonds. (b) Monthly rainfall records at the four example sites, whose locations are indicated by inset maps. Missing values are marked with red ticks on the bottom axis.
}
\end{figure}
The resulting time series span the 1,078 months from September, 1914 to June, 2004 (inclusive), for a total of 162,778 observations, of which 4,095 are missing. The smallest possible measurement is 0 mm, corresponding to no rainfall for the month; this is true for 9,933 months. Time series for four example sites are displayed in Figure~\ref{fig:monthly_rainfall_ts}, and their locations are marked on inset maps (also marked in green in Figure~\ref{fig:monthly_rainfall_map}). The four time series span a wide range of average rainfall levels from around 1mm at site 47053 to 4mm at site 14042. They also exhibit varying levels of seasonality, where sites 10525 and 14042 have highly seasonal rainfall, while rainfall at sites 47053 and 69018 is less seasonal.
We fit AdaptSPEC-X to these data, setting $\bm{u}_j = (\text{lon}_j, \text{lat}_j)$, the longitude and latitude of the sites. Each mixture component has $t_\text{min} = 60$ (5 years), chosen to ensure several observations of the dominant annual cycle are available. This constrains the maximum number of segments to be $M_\mathrm{max} = 17$. For the number of basis functions for the smoothing spline prior on the log spectra, we choose $J = 60$, giving sufficient flexibility to represent the spike at the dominant annual frequency. The support of the prior on $\mu^h_{s, m}$ is set to $(\mu_-, \mu_+) = (0, 30)$. The lower bound reflects the positivity of rainfall, while the upper bound is three times larger than the largest empirical mean attained in any 5-year period in the data. The LSBP is truncated at $H = 25$ components, which was found to be large enough that higher values made no difference. Finally, the thin-plate GP prior for the LSBP has $B = 20$ basis functions, which captures more than 95\% of the variation implied by the prior. In addition to the locations with measurements, we estimate the predictive time varying mean and spectrum at four locations on the Australian landmass without measurements. These locations are indicated by red diamonds in Figure~\ref{fig:monthly_rainfall_map}.
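The criterion of capturing more than 95\% of the prior variation can be illustrated generically by counting the leading eigenvectors of a prior covariance matrix. The sketch below uses an illustrative squared-exponential kernel on a one-dimensional grid, not the paper's actual thin-plate construction.

```python
import numpy as np

def n_basis_for_variance(K, threshold=0.95):
    # Smallest number of leading eigenvectors of the covariance matrix K
    # whose eigenvalues account for at least `threshold` of the total variance
    eigvals = np.clip(np.linalg.eigvalsh(K)[::-1], 0.0, None)  # descending
    frac = np.cumsum(eigvals) / np.sum(eigvals)
    return int(np.searchsorted(frac, threshold)) + 1

# Illustrative prior covariance on a 1-d grid of 50 locations
s = np.linspace(0.0, 1.0, 50)
K = np.exp(-0.5 * ((s[:, None] - s[None, :]) / 0.2) ** 2)
B = n_basis_for_variance(K)   # far fewer than 50 for a smooth kernel
```

For a smooth kernel the eigenvalues decay quickly, so a small basis suffices; for white noise (identity covariance) every eigenvector is needed.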
Two major droughts occurred during the study period: the World War II drought of 1937--1945, and the Millennium drought that started in 1996 and was still ongoing by the end of the study period in 2004 \citep{ummenhoferetal2009}. Table~\ref{tab:monthly_rainfall_tests} presents estimated posterior probabilities of changes in the mean $\mu(t, \bm{u})$ or variance $\sigma^2(t, \bm{u}) = 2 \int_0^{1/2} f(t, \omega, \bm{u}) d\omega $ around these times. Specifically, it shows that $\hat{P}(\mu_{1940} < \mu_{1950}) > 0.9$ and $\hat{P}(\sigma^2_{1940} < \sigma^2_{1950}) > 0.9$ at all four sites. However, $\hat{P}(\mu_{1940} < \mu_{1930}) < 0.7$ and $\hat{P}(\sigma^2_{1940} < \sigma^2_{1930}) < 0.57$ except for site 69018 for which these probabilities are greater than 0.8. The Millennium drought is associated with drops in $\mu(t, \bm{u})$, $\sigma^2(t, \bm{u})$ or both at sites 14042, 47053 and 69018 as can be seen from the estimated probabilities at these sites: $\hat{P}(\mu_{2004} < \mu_{1990}) > 0.9$ and $\hat{P}(\sigma^2_{2004} < \sigma^2_{1990}) > 0.93$. Site 10525 in the southwest of the continent does not exhibit a drop with probability greater than $0.9$, consistent with the fact that the drought principally affected southeastern Australia \citep{ummenhoferetal2009}.
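Probabilities of this kind can be estimated from MCMC output by evaluating the variance functional on each posterior draw of the spectrum and taking the fraction of draws satisfying the inequality. A numerical sketch, with synthetic draws standing in for actual AdaptSPEC-X output:

```python
import numpy as np

def variance_from_log_spectrum(log_f, omega):
    # sigma^2(t, u) = 2 * integral_0^{1/2} f(t, omega, u) d omega,
    # approximated by the trapezoidal rule on the frequency grid
    f = np.exp(log_f)
    return 2.0 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(omega))

omega = np.linspace(0.0, 0.5, 129)
rng = np.random.default_rng(1)

# Synthetic posterior draws of the log spectrum at two comparison times
draws_1940 = rng.normal(0.0, 0.1, size=(500, omega.size))
draws_1950 = rng.normal(0.5, 0.1, size=(500, omega.size))

var_1940 = np.array([variance_from_log_spectrum(d, omega) for d in draws_1940])
var_1950 = np.array([variance_from_log_spectrum(d, omega) for d in draws_1950])

# Estimated posterior probability that sigma^2 increased between the times
p_hat = np.mean(var_1940 < var_1950)
```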
\citet{caietal2007} report large trends in rainfall since 1950. For southeast Australia, they report reductions in annual rainfall corresponding to 10--15mm/decade for the site 47053 and 50mm/decade for 69018. Consistent with this, Table~\ref{tab:monthly_rainfall_tests} shows that, for these sites, AdaptSPEC-X estimates that both $\mu(t, \bm{u})$ and $\sigma^2(t, \bm{u})$ declined between January, 1950 and January, 2004 with probability greater than $0.94$. The estimated drop in $\mu(t, \bm{u})$ for site 47053 corresponds to a reduction of 1--16mm/decade\footnote{Calculated as $10 \times 365.25 \times \text{difference in daily average}\ /\ (2004 - 1950)$} (10th--90th percentile), consistent with \citeauthor{caietal2007}'s estimate. However, for site 69018, the estimated reduction is 6--42mm/decade, substantially less than 50mm/decade. For the site 14042 in the tropical north, \citeauthor{caietal2007} estimate increases of around 40mm/decade since 1950, while the estimates of Table~\ref{tab:monthly_rainfall_tests} indicate a decline over the same period. Finally, \citeauthor{caietal2007} report a decline of 20--30mm/decade in the region of southwest Australia containing the site 10525, but Table~\ref{tab:monthly_rainfall_tests} does not indicate a significant change in $\mu(t, \bm{u})$ at this location (though the variance does appear to have declined). In contrast to \citeauthor{caietal2007} and \citet{gallantetal2013}, the reduction in $\sigma^2(t, \bm{u})$ since 1950 at all locations suggests that rainfall variability has declined. This could have resulted from the use of different definitions of variability, e.g., counts of extreme events as in \citet{caietal2007}, versus our use of change in variance.
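The footnote's conversion from a change in mean daily rainfall to an annual trend per decade can be checked directly; for example (the 0.1 mm figure is hypothetical), a drop of 0.1 mm in the daily average between 1950 and 2004 corresponds to roughly 6.8 mm/decade.

```python
def mm_per_decade(daily_mean_change, year_start=1950, year_end=2004):
    # Footnote formula: 10 * 365.25 * (difference in daily average)
    #                   / (year_end - year_start)
    return 10 * 365.25 * daily_mean_change / (year_end - year_start)

print(round(mm_per_decade(0.1), 2))  # 6.76
```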
\begin{figure}
\centering
\includegraphics{figures/monthly-rainfall-tvm.pdf}
\caption{
Estimated time varying means $\hat{\mu}(t, \bm{u})$ for four monthly rainfall sites (first two rows), and four locations without observations (last two rows).
}
\label{fig:monthly_rainfall_tvm}
\end{figure}
\begin{figure}
\centering
\includegraphics{figures/monthly-rainfall-tvs.png}
\caption{
Estimated time varying spectra $\log \hat{f}(t, \omega, \bm{u})$ for four monthly rainfall sites (first two rows) and four locations without observations (last two rows). The color indicates the log power at the corresponding time and frequency. The $\omega$-axis is on a square-root scale. The axis on the right-hand side displays the period $(1 / \omega)$.
}
\label{fig:monthly_rainfall_tvs}
\end{figure}
\begin{figure}
\centering
\includegraphics{figures/monthly-rainfall-tvs-unscaled.png}
\caption{
The same estimated time varying spectra $\log \hat{f}(t, \omega, \bm{u})$ as in Figure~\ref{fig:monthly_rainfall_tvs}, except that each location is given its own color scale.
}
\label{fig:monthly_rainfall_tvs_unscaled}
\end{figure}
\begin{table}
\centering
\input{figures/monthly-rainfall-tests-table}
\caption{Estimated posterior probabilities $\hat{p}(\cdot \mid \bm{x})$ of various events for monthly rainfall at the four example sites. The events correspond to the WW2 drought (first four rows), the Millennium drought (next two rows), and long term change (last two rows). For each event and site ($\bm{u}$), the table presents the probabilities that the mean $\mu(t, \bm{u})$ and the variance $\sigma^2(t, \bm{u})$ changed between the times before and after the event.}
\label{tab:monthly_rainfall_tests}
\end{table}
\clearpage
\subsection{Measles incidence in the United States}
\label{sec:applications_measles}
Measles is a highly contagious disease that causes fever, cough, runny nose, and a rash \citep{moreno2018}. Complications of measles can include pneumonia, deafness, or death \citep{moreno2018}. Prior to the licensing of a vaccine in 1963, the incidence rate of measles averaged 318 cases per 100,000 population per year, with outbreaks occurring annually or every other year \citep{vanpanhuisetal2013}. After vaccine licensure, incidence declined dramatically, and endemic measles transmission was declared eliminated from the United States in 2000 \citep{katzhinman2004}. As part of a study on the impact of vaccination for a variety of contagious diseases, \citet{vanpanhuisetal2013} collated a unique data set by digitizing weekly surveillance reports from the United States of several nationally notifiable diseases, including measles, and have made these data available online at the Project Tycho website\footnote{\url{https://www.tycho.pitt.edu/}}. In this section we analyze Project Tycho's measles data using AdaptSPEC-X.
The data comprise weekly time series of measles incidence for each state, reporting the weekly (where the week starts on a Sunday) incidence rate per 100,000 population. In this work we use time series from the continental United States (that is, excluding Hawaii and Alaska), plus the District of Columbia. These span the 3,914 weeks from week one of 1928 (which we write as 1928-01) to week one of 2003 (2003-01). Across all 49 time series there are 191,786 observations. Of these, 50,067 (26\%) are missing, and 30,439 (16\%) have incidence equal to zero. The time series are shown in Figure~\ref{fig:measles_ts}, where each panel presents the series for a state, and
the incidence axis is on a square-root scale (note however that the spectral analysis performed later is applied to the untransformed data). The layout of the panels roughly matches the geographic distribution of the states. The most striking aspect of these plots is the dramatic decline in both the level and volatility of incidence starting in 1963, the year of vaccine licensure.
\begin{figure}
\centering
\includegraphics[angle=90]{figures/measles-ts.pdf}
\caption{
Measles incidence rate per 100,000 population for the continental US (that is, excluding Hawaii and Alaska), plus the District of Columbia. Each panel shows one state, where the layout of the panels roughly matches the geographic distribution of the states. The incidence axis is square-root transformed.
}
\label{fig:measles_ts}
\end{figure}
We set $\bm{u}_j = (\text{lon}_j, \text{lat}_j)$, the longitude and latitude of the centroid of each state, and fit AdaptSPEC-X to the measles time series. Each mixture component has $t_\text{min} = 208$ (4 years) as the minimum segment length. This was chosen to ensure four observations of the annual cycle in each segment. The maximum number of segments was set as $M_\mathrm{max} = 18$, the maximum allowed by the combination of $t_\text{min}$ and the number of weeks in the data. The support of the prior on $\mu^h_{s, m}$ was set to $(\mu_-, \mu_+) = (0, 20)$; the lower bound represents the known positivity of the incidence rates (and is particularly helpful to constrain $\mu^h_{s, m}$ in periods with very small counts), while the upper bound is twice as large as the posterior mean of this parameter. As in the rainfall application, the number of basis functions for the smoothing spline prior on the log spectra was set to $J = 60$. This was found to be high enough to represent the spectrum accurately, particularly the spike corresponding to the annual cycle. The LSBP is truncated at $H = 10$ components, as we found higher values did not change the results. Finally, the thin-plate GP prior for the LSBP has $B = 20$ basis functions, which captures more than 95\% of the variation implied by the prior. The resulting estimated time varying means and spectra are shown in figures~\ref{fig:measles_tvm} and \ref{fig:measles_tvs}, respectively. The top two rows of Figure~\ref{fig:measles_tvs_special} display zoomed-in plots of the time varying spectra for four states: Arizona, Florida, Maine, and Washington.
As in Figure~\ref{fig:measles_ts}, the post-vaccine drop in the mean and power of incidence is the most obvious feature. This drop occurs in steps, starting with a dramatic drop following the licensing of the Edmonston vaccine in 1963, stalling around 1970, then dropping again around 1980. This corresponds to the waxing and waning of government funding and effort targeted at measles elimination, which culminated in an intensified elimination drive \citep{hinmanetal1979,atkinsonetal1992}. An outbreak during the early 1990s is visible as an increase in power in Figure~\ref{fig:measles_tvs}; this outbreak received much attention and resulted in changes to the immunization schedule for children \citep{atkinsonetal1992}.
In the pre-vaccine period, the spectra in Figure~\ref{fig:measles_tvs} have peaks around frequencies $1 / 52$ and 0, indicating annual seasonality and long-term dependence, respectively. After the introduction of the vaccine, the annual peak disappears. \citet{grenfelletal2001} identified a biennial cycle in similar measles data for the UK, but this does not appear to be a feature of the US data. Figures~\ref{fig:measles_tvm} and \ref{fig:measles_tvs} also indicate changes in the mean and spectrum during the pre-vaccine years, where all states exhibit periods of increased mean incidence and power centered around 1940 and 1955. This concords with \citet{vanpanhuisetal2013}, who noted that incidence rates had variable patterns in the pre-vaccine time period, speculating that these may have been due to sanitation, hygiene, or demographic factors.
Because of the extreme nonstationarity introduced by the vaccine, the time varying spectra in Figure~\ref{fig:measles_tvs} span such a wide range of powers that the spectra for all states look almost identical. This is not the case, as shown in Figure~\ref{fig:measles_tvs_before} and in the bottom two rows of Figure~\ref{fig:measles_tvs_special}, which display time varying spectra for the pre-vaccine years only. These spectra highlight the existence of geographic heterogeneity between states, where higher power is more typical of the west and north, compared to the south and east.
Since the elimination of endemic measles in the US in 2000, there have been a number of outbreaks associated with individuals `importing' measles by acquiring the disease while outside the US and spreading it upon their return \citep{parkeretal2006,cdc2019}. \citet{phadkeetal2016} associated several of these outbreaks with individuals unvaccinated for nonmedical reasons, which they term vaccine refusal. Some authors have even declared that a resurgence of measles has occurred \citep{lynfielddaum2014}. Using the AdaptSPEC-X fits, we tested whether the mean $\mu(t, \bm{u})$ or variance $\sigma^2(t, \bm{u})$ of measles incidence increased from 1995-01, a few years after the big outbreak in the early 1990s, to 2003-01, the last time period in the data. We find no evidence of an increase in $\mu(t, \bm{u})$ in any state; the highest posterior probability of an increase is 0.68, in Oklahoma. As for $\sigma^2(t, \bm{u})$, the posterior probabilities of an increase range between 0.81 and 0.87 across states, which we consider to be weak evidence of change. Unfortunately, because the data end in 2003, it is not possible to assess changes to the mean or spectrum of measles incidence in more recent years.
\begin{figure}[!ht]
\centering
\includegraphics[angle=90]{figures/measles-tvm.pdf}
\caption{Estimated time varying mean $\hat{\mu}(t, \bm{u})$ for measles incidence, where each panel shows the estimate for one state.}
\label{fig:measles_tvm}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=90]{figures/measles-tvs.png}
\caption{
Estimated time varying log spectra, $\log \hat{f}(t, \omega, \bm{u})$ for measles incidence, where each panel shows the estimate for one state. Colors indicate the log power at the corresponding date and $\omega$. The $\omega$-axis is on a square-root scale. The top axis displays the period ($1 /\omega$).
}
\label{fig:measles_tvs}
\end{figure}
\begin{figure}
\centering
\includegraphics{figures/measles-tvs-special.png}
\includegraphics{figures/measles-tvs-special-before.png}
\caption{
Estimated time varying log spectra, $\log \hat{f}(t, \omega, \bm{u})$, for four states. Estimates for the full study period (as in Figure~\ref{fig:measles_tvs}) are shown in the top two rows, while the bottom two rows display estimates for the pre-vaccine period (as in Figure~\ref{fig:measles_tvs_before} below).
}
\label{fig:measles_tvs_special}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=90]{figures/measles-tvs-before.png}
\caption{
Estimated time varying log spectra, $\log \hat{f}(t, \omega, \bm{u})$ for measles incidence for the pre-vaccine period ($<$1963). Figure~\ref{fig:measles_tvs} shows the estimates for the full study period.
}
\label{fig:measles_tvs_before}
\end{figure}
\clearpage
\section{Discussion}
This article has presented AdaptSPEC-X, a Bayesian method for analyzing a panel of possibly nonstationary time series using a covariate-dependent infinite mixture model, with mixture components parameterized by their time varying mean and spectrum. AdaptSPEC-X extends AdaptSPEC to accommodate multiple time series, each with its own covariate values. Specifically, the covariates, which are assumed to be time-independent, are incorporated via the mixture weights using the logistic stick breaking process. The mixture components are based on AdaptSPEC, which handles a single nonstationary time series. In particular, it partitions a time series into an unknown but finite number of segments, and estimates the spectral density within each segment by smoothing splines. New features which have been added to the AdaptSPEC components include estimation of time varying means and handling of missing observations. The model and sampling scheme can accommodate large panels, such as that of the measles application. In addition to estimating time varying spectra for each time series in the panel, AdaptSPEC-X allows inference about the underlying process at unobserved covariate values, enabling predictive inference. Efficient software implementing AdaptSPEC-X is available in the R package BayesSpec.
In Section~\ref{sec:model_log_odds}, the log odds of the LSBP, which determine the mixture weights, are modeled using a thin-plate GP prior. While this prior is flexible, it is also smooth and stationary. This property may be inappropriate in settings where changes in the mean or spectrum of the individual time series occur abruptly over the covariate space. An extension to a nonstationary prior for the log odds, or a piecewise prior as in \citet{bruceetal2018}, may be of interest in these cases.
AdaptSPEC (and therefore AdaptSPEC-X) relies on Whittle's approximation to the likelihood (Equation~\eqref{eqn:whittle_likelihood}). The Whittle likelihood is asymptotically correct for both Gaussian and non-Gaussian time series \citep{hannan1973}, but is known to be inefficient for small sample sizes \citep{contrerasetal2006}. Several methods exist in the literature to ameliorate this problem \citep[see, for example,][]{sykulskietal2019}, and these methods might produce useful extensions to AdaptSPEC for settings with short time series or small segment lengths (i.e., small $t_\text{min}$).
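For reference, a generic single-segment version of the Whittle approximation can be written down in a few lines; this is a sketch of the general idea, not the exact form used by AdaptSPEC. For white noise with unit variance, the correctly specified flat spectrum attains a higher value than a misspecified one.

```python
import numpy as np

def whittle_log_likelihood(x, log_f):
    # Whittle approximation for a zero-mean series x of length n:
    # -sum_k [ log f(omega_k) + I(omega_k) / f(omega_k) ], where I is the
    # periodogram and log_f holds log f at the Fourier frequencies k/n
    n = len(x)
    periodogram = np.abs(np.fft.fft(x)) ** 2 / n
    return -np.sum(log_f + periodogram / np.exp(log_f))

rng = np.random.default_rng(2)
x = rng.standard_normal(1024)                          # white noise, f = 1
ll_flat = whittle_log_likelihood(x, np.zeros(1024))    # true flat spectrum
ll_misspecified = whittle_log_likelihood(x, np.full(1024, 2.0))
```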
Neither AdaptSPEC nor AdaptSPEC-X accounts explicitly for measurement error. For i.i.d.\@ Gaussian measurement error, this should not cause a problem, as the added variance would appear as an added constant (that is, white noise) in the spectrum. On the other hand, if the measurement error changes over time, this may be estimated as spurious nonstationarity, in the sense that the underlying process is not actually changing. Adding an explicit layer for measurement error to AdaptSPEC-X would be a useful extension for these cases. More specialized forms of measurement error may be useful in other settings. For example, in the measles application of Section~\ref{sec:applications_measles}, the data are incidence rates per 100,000 population. While rates are continuous quantities, they are constructed from counts, so the measurement process is actually discrete. A hierarchical extension accounting for this would be an interesting addition to AdaptSPEC-X.
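The claim that i.i.d.\@ Gaussian measurement error adds a constant to the spectrum is easy to check numerically: the mean periodogram of signal plus noise is approximately the signal spectrum plus the noise variance. A quick sketch with a white signal (so its spectrum is flat at 1):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4096
signal = rng.standard_normal(n)       # white process with spectrum f = 1
error = 0.5 * rng.standard_normal(n)  # i.i.d. measurement error, variance 0.25

# Periodogram of the observed (noisy) series
periodogram = np.abs(np.fft.fft(signal + error)) ** 2 / n
print(periodogram.mean())  # ~1.25, i.e. f + sigma_error^2
```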
AdaptSPEC-X allows for covariates that do not vary with time, and an extension to a more general framework accommodating time varying covariates would facilitate new types of inference. For example, external time varying climate indicators such as the Southern Oscillation Index are known to influence rainfall patterns \citep{bertolaccietal2019}. A time varying covariate influencing the mean or the spectrum might introduce useful shrinkage, improving predictive performance. One challenge with such an approach would be to determine how a time varying covariate could interact with the segmentation approach used by AdaptSPEC to handle nonstationarity. It would also be interesting to allow the mean to be time varying and covariate-dependent within segments, not only between segments. Future research will focus on these extensions.
\ifdefined\isblinded
\else
\section*{Reproducibility}
Data and code reproducing the figures and tables in this manuscript are available online at \url{https://github.com/mbertolacci/adaptspecx}.
\section*{Funding}
E. Cripps and M. Bertolacci were supported by the ARC Industrial Transformation Research Hub for Offshore Floating Facilities which is funded by the Australian Research Council, Woodside Energy, Shell, Bureau Veritas and Lloyds Register (Grant No. 140100012). S. Cripps is the recipient of an Australian Research Council Australian Future Fellowship (140101266) funded by the Australian Government. O. Rosen was supported in part by grants NSF DMS-1512188 and NIH 2R01GM113243-05.
\fi
\bibliographystyle{asa}
\section{Introduction}
In superconductors, Cooper pairs provide a natural source of entanglement\cite{epr} in both spin and momentum space. If for some reason the constituent electrons of this pair are separated and allowed to propagate into two different metallic leads or nanodevices such as quantum dots, one expects the entanglement to be preserved because the tunneling processes are spin preserving. This process is called cross Andreev reflection (CAR), and has been the focus of both theoretical and experimental investigations in the last two decades. The initial manifestations of CAR were proposed theoretically for the nonlocal current,\cite{byers_flatte,deutscher_feinberg,melin_current,feinberg_dirty,sauret_entangler,zaikin} as well as for non-equilibrium noise cross-correlations (current-current fluctuations between the superconductor and the two outgoing devices).\cite{anantram_datta,martin_pla,torres_martin,lesovik_martin_blatter,chevallier_splitter,rech_splitter,bena,bouchiat} Indeed, the scattering formalism of electron and hole transport showed that noise cross-correlations could be positive. Strictly speaking, a positive cross-correlation signal does not constitute rigorous proof of electron entanglement originating from superconductors, but it certainly provides evidence of Cooper pair splitting. Alternative approaches exploiting electron energy filtering via Coulomb blockade confirmed at the same time, using T matrix calculations, the nonlocal spin singlet nature of electron pairs in opposite quantum dots placed in the vicinity of a superconductor.\cite{recher_sukhorukov_loss,sauret_entangler}
The possibility of generating nonlocal entangled pairs of electrons in condensed matter settings bears fundamental applications in the context of quantum information theory. Tests of quantum entanglement followed these works, based on the idea that Bell inequality violation measurements could be implemented via noise cross-correlations with the so-called superconducting Cooper pair splitter. \cite{chtchelcatchev,sauret_bell}
Moreover, these ideas have been applied to the paradigm of quantum teleportation, \cite{sauret_epjb,long_telep} as well as in other applications.\cite{bennet,ursin}
On the experimental side, attention has mainly focused on nonlocal current measurement on the Cooper beam splitter,\cite{beckmann} a device where typically the superconducting source of electrons is connected to two leads, sometimes via embedded quantum dots.\cite{hofstetter_nature,herrmann,schindele,hofstetter_prl107} Under specific gate voltages imposed on such dots, it is possible to trigger electronic transport in the two outgoing conductors. Only a single experiment managed to measure positive noise cross-correlations when the source of electrons was rendered superconducting.\cite{heiblum}
The main challenge with these proposals is that they rely on non-equilibrium measurements. Strictly speaking, such measurements, remarkable as they are, constitute only indirect evidence of Cooper pair splitting, while noise cross-correlation measurements represent a considerable ordeal due to the poor signal-to-noise ratio; no attempt to reproduce them has been made so far.
A seminal theoretical work suggested early on a way to circumvent these difficulties, by proposing a Josephson equilibrium current geometry to test Cooper pair splitting.\cite{choi_bruder_loss} It describes two superconductors (with an applied phase difference) separated by two quantum dots placed in parallel. When a Cooper pair is transmitted from one superconductor to the other, the two electrons can either both pass through a given dot, or they can transit through different dots (cf. Fig.~\ref{processes}). This indeed realizes an Aharonov-Bohm (AB) experiment with superconductors as source and drain, driven by an applied phase difference. The critical current as a function of the AB flux should be $\pi$ periodic if electrons are not split between the two dots, and $2\pi$ periodic if Cooper pair splitting is effective. The originality of this proposal resides in the fact that, unlike non-equilibrium noise setups, Cooper pair splitting is here uncovered using a current measurement at equilibrium, albeit in a Josephson geometry.
The calculation was performed perturbatively in the tunneling Hamiltonian, with infinite repulsion on the dots. Dot gate voltages ensured that, on average, each dot was occupied by a single electron. A complementary study of the same setup appeared a decade later,\cite{wang_hu} presenting perturbative results for dot levels assumed to lie above the superconducting chemical potential and for a finite Coulomb repulsion.
\begin{figure}
\centering
\includegraphics[scale=0.15]{process1.pdf}\hspace{25pt}
\includegraphics[scale=0.15]{process2.pdf}
\bigskip
\bigskip
\includegraphics[scale=0.15]{processcar.pdf}
\caption{Illustration of the three different possibilities for the transmission of a Cooper pair from
the left superconductor to the right one. The two electrons of the pair can either be both transmitted through the upper dot (top left), through the lower dot (top right), or the Cooper pair can be split with one electron transmitted through each dot (bottom).}\label{processes}
\end{figure}
First, so far no analysis of this superconducting Aharonov-Bohm effect has gone beyond lowest order perturbation theory in the tunneling Hamiltonian. Advances in superconducting device fabrication\cite{cleuziou} seem to indicate that, by burying nanowires underneath superconductors, large transmission can be achieved between the resulting quantum dot and the lead. Treating the tunnel coupling to all orders of perturbation theory thus constitutes a first motivation of our study.
Secondly, it is established that when a quantum dot is embedded in a Josephson junction, away from the Kondo regime, the strength of the on-site repulsion, the coupling to the leads and the level position of the dot determine whether it constitutes a $0$ or a $\pi$ junction (positive or negative Josephson amplitude). This has been analyzed perturbatively,\cite{spivak_kivelson} as well as with a path integral formalism followed by a saddle point approximation.\cite{rozkhov_arovas} The latter work allows one to distinguish between three phases of the Josephson junction: a) the $\pi$ phase (where the dot is singly occupied); b) the $0^{(0)}$ phase (where it is unoccupied); c) the $0^{(2)}$ phase (with double occupation). These predictions were verified experimentally a decade ago.\cite{van_dam}
In the Josephson setup with two dots in parallel studied here, each dot can be in the $0^{(0)}$, the $0^{(2)}$ or the $\pi$ phase. In the work of Ref. \onlinecite{choi_bruder_loss} both dots are in the $\pi$-junction configuration, while the work of Ref. \onlinecite{wang_hu} has also considered the case where both dots are in the $0^{(0)}$ phase. However, there is to this date no systematic or comparative study specifying which combination of the dot phases may enhance or reduce the AB signal for optimal observation, even less so for arbitrarily large transparencies.
The two above points constitute the main motivations of the present paper. In this work, we employ the path integral formalism to model the AB setup without any restrictions on the transmission properties of the sample. Indeed, we provide results both in the tunneling and the high transparency regimes, and we propose a way to measure the efficiency of CAR processes in both cases, by analyzing the AB signal. We find that the CAR processes are optimized when the two quantum dots are in the same phase.
The paper is organized as follows.
In Sec.~\ref{model} we provide a description of the device and of the model with which we describe it, and we give an expression of the partition function in terms of Grassmann variables.
The free energy used to derive the Josephson current of this nanoSQUID in a non-perturbative manner is presented in Sec.~\ref{free}. In Sec.~\ref{pi_shift}, we discuss the possible occupancy states of the dots. We propose the definition of a Cooper pair splitting efficiency in Sec.~\ref{split} which can be computed for arbitrary transparencies of the studied junctions. The AB signals are calculated first in the tunneling regime in Sec.~\ref{tunnel} and then in the non-perturbative regime in Sec.~\ref{non_perturb} and all possible phase associations for the dots are considered. We discuss our results in Sec.~\ref{conclusion}.
\section{Model and partition function}
\label{model}
The device is illustrated in Fig.~\ref{phases}. For simplicity, the two leads consist of the same superconducting material with a controllable phase difference
which can be imposed either by closing the device in a loop geometry or by embedding it in a macroscopic SQUID.\cite{choi_bruder_loss,wang_hu} Two quantum dots are placed in parallel in the nanogap between the two electrodes, and a magnetic flux (in principle independent of the one imposed to trigger a DC Josephson signal between the electrodes) threads the area between the two dots. Electrons can tunnel from the source electrode to the upper or lower dot, but on-site Coulomb repulsion favors zero or single occupancy on the latter. In the presence of a magnetic flux between the two dots, because of the AB effect, different phase shifts are expected between the two paths that an electron can follow to reach the drain electrode. If the separation between injection points is larger than the coherence length, Cooper pairs pass as a whole through either the upper or the lower path, as in the work of Ref. \onlinecite{cleuziou}. If the injection points are closer together than the superconducting coherence length, however, the two electrons of the pair can travel through opposite dots, realizing a CAR process, which gives a third contribution to the Josephson current. By adjusting the quantum dot energy levels and Coulomb interaction, we expect to filter the electrons and eventually favor the CAR process. Granted, we cannot claim to directly measure the degree of entanglement of this CAR contribution (as in a Bell inequality measurement); nevertheless, the recombination of these two electrons in the destination superconductor would not be possible if the outgoing state did not form a Cooper pair, i.e. an entangled state.
\begin{figure}
\centering
\includegraphics[scale=0.3]{phases.pdf}
\caption{Path-dependent phase shifts acquired by tunneling electrons.}\label{phases}
\end{figure}
We denote by $\hat{d}_{a\sigma}^\dag$ the creation operator for an electron with spin $\sigma=\uparrow,\downarrow$ on the quantum dot $a=U,D$ and by $\hat{\psi}_{jk\sigma}^\dag$ the creation operator for an electron with momentum $k$ and spin $\sigma=\uparrow,\downarrow$ in the superconductor $j=L,R$. It is convenient to introduce the Nambu spinors
\begin{equation}
\hat{d}_a = \left(
\begin{array}{c}
\hat{d}_{a\uparrow} \\ \hat{d}^\dagger_{a\downarrow}
\end{array} \right)\quad\text{and}\quad
\hat{\psi}_{jk} = \left(
\begin{array}{c}
\hat{\psi}_{jk, \uparrow} \\
\hat{\psi}^\dagger_{j(-k), \downarrow}
\end{array} \right).\label{nambu}
\end{equation}
$\sigma_i$ ($i=x,y,z$) are the Pauli matrices that act in Nambu space. The Hamiltonian of the double Josephson junction reads
\begin{equation}
{\cal H}=\sum_{a=U,D}H_{a}+\sum_{j=L,R}H_{j}+H_t.
\end{equation}
$H_{a}$ is the Hamiltonian of the quantum dot $a=U,D$, characterized by its energy level $\varepsilon_a$ and its on-site Coulomb repulsion $U_a$:
\begin{equation}
H_{a} = \varepsilon_a \, \sum_{\sigma = \uparrow, \downarrow} \hat{d}^\dagger_{a\sigma} \hat{d}_{a\sigma}
+ U_a \hat{n}_{a\uparrow} \hat{n}_{a\downarrow}~,
\end{equation}
with $\hat{n}_{a\sigma}=\hat{d}^\dagger_{a\sigma} \hat{d}_{a\sigma}$ the dot occupation operator per spin.
$H_{j}$ is the Hamiltonian of the superconductor $j=L,R$, with gap $\Delta$ and chemical potential $\mu$:
\begin{equation}
H_j = \sum_k \hat{\psi}^\dagger_{jk} \left(
\xi_k \, \sigma_z + \Delta \, \sigma_x \right) \hat{\psi}_{jk},\quad\xi_k = \frac{k^2}{2m} - \mu.
\label{ham_sc}
\end{equation}
Here $\Delta$ is real: the phase difference between the electrodes has been gauged out and appears instead in the tunneling Hamiltonian. If we denote by $\mathbf{r_{ja}}$ the location of the injection point from lead $j$ to dot $a$, the tunneling Hamiltonian $H_t$ reads
\begin{equation}
H_t = \sum_{jka}\text{e}^{i\mathbf{k}.\mathbf{r_{ja}}} \,
\hat{\psi}^\dagger_{jk} \, {\cal T}_{ja} \, \hat{d}_a + {\rm h.c.}
\label{ham_tun}
\end{equation}
The tunneling matrices involved in Eq.~\eqref{ham_tun} read
\begin{subequations}
\begin{gather}
{\cal T}_{LU}=t_L\,\sigma_z\,\text{e}^{+i\frac{\phi-\alpha}{4}\sigma_z},\quad
{\cal T}_{RU}=t_R\,\sigma_z\,\text{e}^{-i\frac{\phi-\alpha}{4}\sigma_z},\\
{\cal T}_{LD}=t_L\,\sigma_z\,\text{e}^{+i\frac{\phi+\alpha}{4}\sigma_z},\quad
{\cal T}_{RD}=t_R\,\sigma_z\,\text{e}^{-i\frac{\phi+\alpha}{4}\sigma_z}.
\end{gather}
\end{subequations}
$\phi$ is the phase difference between the superconductors while $\alpha$ is related to the magnetic flux $\Phi$ inside the SQUID loop: $\alpha=2\pi\frac{\Phi}{\Phi_0}$ where $\Phi_0=h/e$ is the flux quantum. For clarity, the phase shifts acquired by tunneling electrons are indicated in Fig.~\ref{phases}.
We employ a path integral approach in the Matsubara formalism in order to compute the partition function of the device. We then introduce the eigenvalues of the annihilation operators $\hat{\psi}_{jk\sigma}$ and $\hat{d}_{a\sigma}$, written as $\psi_{jk\sigma}$ and $d_{a\sigma}$ respectively. These are Grassmann variables, and we also consider their conjugates $\overline{\psi}_{jk\sigma}$ and $\overline{d}_{a\sigma}$, as well as their collection into the Nambu spinors $d_a$ and $\psi_{jk}$, defined in the same way as in Eq.~\eqref{nambu}.
The partition function is given by a functional integration over paths that are $\beta$ antiperiodic:
\begin{equation}
Z=\hspace{-20pt}\int\limits_{\substack{d_a(\beta)=-d_a(0) \\ \psi_{jk}(\beta)=-\psi_{jk}(0)}}\hspace{-20pt}{\cal D}\left[\overline{d},d,\overline{\psi},\psi\right]\exp\left[-S_E\left(\overline{d},d,\overline{\psi},\psi\right)\right].
\end{equation}
\noindent The Euclidean action $S_E$ reads
\begin{align}
S_E\left(\overline{d},d,\overline{\psi},\psi\right)&=\int\limits_0^{\beta}\text{d}\tau\,\bigg\{{\cal H}\left(\overline{d},d,\overline{\psi},\psi\right)\notag\\
&\hspace{10pt}+\sum\limits_a\overline{d}_a\partial_\tau d_a+\sum\limits_{jk}\overline{\psi}_{jk}\,\partial_\tau\psi_{jk}\bigg\}
\end{align}
where the matrix elements of the Hamiltonian can be written as
\begin{align}
\noalign{\vspace{3pt}}
{\cal H}\left(\overline{d},d,\overline{\psi},\psi\right)=&\sum_aH_{a}\left(\overline{d}_a,d_a\right)+\sum_{jk} H_{jk}\left(\overline{\psi}_{jk},\psi_{jk}\right)\notag\\
&\hspace{10pt}+\sum_{jka} H_{t,jka}\left(\overline{d}_a,d_a,\overline{\psi}_{jk},\psi_{jk}\right).
\end{align}
The expressions of $H_{jk}$ and $H_{t,jka}$ are readily obtained from Eqs.~\eqref{ham_sc}-\eqref{ham_tun} by substituting the annihilation operators $\hat{a}$ with their eigenvalues $a$ and the corresponding creation operators $\hat{a}^\dag$ with the conjugate Grassmann variables $\overline{a}$. For the quantum dots, we can also find an expression in terms of Nambu spinors as follows
\begin{equation}
H_{a}\left(\overline{d}_a,d_a\right)=\tilde{\varepsilon}_a+\tilde{\varepsilon}_a\,\overline{d}_a\,\sigma_z\,d_a-\frac{U_a}{2}\left(\overline{d}_ad_a\right)^2,
\end{equation}
with $\tilde{\varepsilon}_a=\varepsilon_a+\frac{U_a}{2}$.
\section{Free energy and Josephson current}
\label{free}
As the lead degrees of freedom are quadratic in the Hamiltonian, they can be easily integrated out. The partition function is then expressed as a functional integral over the dot Grassmann variables:
\begin{equation}
Z= c_1\hspace{-10pt}\int\limits_{d_a(\beta)=-d_a(0)}\prod_a{\cal D}\left[\overline{d}_a,d_a\right]\exp\left[-S_{\text{eff}}\left(\overline{d},d\right)\right]~,
\label{partition1}
\end{equation}
where $c_1$ is the determinant arising from the integration of the lead variables, which is independent of $\phi$ and $\alpha$. The effective action in Eq.~\eqref{partition1} reads
\begin{widetext}
\begin{align}
S_{\text{eff}}\left(\overline{d},d\right)=&\sum_a\int\limits_0^{\beta}\text{d}\tau\left\{\tilde{\varepsilon}_a+\overline{d}_a(\tau)\left[\partial_\tau \mathbbm{1}_2+\tilde{\varepsilon}_a\,\sigma_z\right]d_a(\tau)-\frac{U_a}{2}\Big(\overline{d}_a(\tau)\,d_a(\tau)\Big)^2\right\}-\sum_{a,b}\int\limits_0^{\beta}\text{d}\tau\int\limits_0^{\beta}\text{d}\tau'\,\,\overline{d}_a(\tau)\,\Sigma^{ab}(\tau-\tau')\,d_b(\tau')\label{eff_action}
\end{align}
\end{widetext}
where the self-energy term
\begin{align}
\Sigma^{ab}(\tau)=\sum_{jk}\text{e}^{i\mathbf{k}.\left(\mathbf{r_{jb}}-\mathbf{r_{ja}}\right)}{\cal T}^\dag_{ja}G_k(\tau){\cal T}_{jb}
\end{align}
involves the Green function of the leads $G_k(\tau)$ which verifies
\begin{equation}
\left[\partial_\tau \mathbbm{1}_2 +\xi_k\sigma_z+\Delta\sigma_x\right]G_k(\tau)=\delta(\tau) \mathbbm{1}_2.
\end{equation}
The quartic terms $(\overline{d}_ad_a)^2$ in Eq.~\eqref{eff_action} prohibit an exact computation of the partition function.
As in Ref. \onlinecite{rozkhov_arovas}, we use a Hubbard-Stratonovich transformation to treat these terms and we neglect the temporal fluctuations of the two auxiliary fields $X_U$ and $X_D$ which are introduced:
\begin{equation}
\text{e}^{\frac{U_a}{2}\int\limits_0^{\beta}\text{d}\tau\left(\overline{d}_ad_a\right)^2}\approx\sqrt{\frac{\beta}{2\pi U_a}}\int_{-\infty}^{+\infty}\text{d}X_a\,\text{e}^{-\frac{\beta}{2U_a}X_a^2+X_a\int\limits_0^\beta\text{d}\tau\,\overline{d}_ad_a}.\label{quartictoX}
\end{equation}
Because both the Green functions $G_k$ and the Nambu spinor components $d_{a\sigma}$ are $\beta$ antiperiodic, we use a Matsubara series expansion $F(\tau)=\sum_{p\in\mathbb{Z}}\text{e}^{-i\omega_p\tau}F(\omega_p)$ over the frequencies $\omega_p=\left(p+\frac{1}{2}\right)2\pi/\beta$.
\begin{widetext}
Rather than keeping track of a cumbersome device-specific position dependence of the self-energy, we choose to introduce a phenomenological parameter $\eta$. It interpolates between the two most relevant cases: $\eta=0$ for infinitely distant injection points (separated by much more than the superconducting coherence length) and $\eta=1$ for coinciding injection points (much closer than the superconducting coherence length). The dots are now integrated out and the partition function becomes
\begin{equation}
Z^\eta(\phi,\alpha)=c_1c_2\int_{-\infty}^{+\infty}\text{d}X_U\int_{-\infty}^{+\infty}\text{d}X_D\exp\left[-S^{\text{HS},\eta}_{\text{eff}}\left(X_U,X_D,\phi,\alpha\right)\right]~\label{part_func}
\end{equation}
where $c_2$ is a fermionic determinant arising from the dot variables integration.
The effective action reads
\begin{equation}
S^{\text{HS},\eta}_{\text{eff}}\left(X_U,X_D,\phi,\alpha\right)=\sum_{a}\left(\beta\,\tilde{\varepsilon}_a+\frac{\beta}{2U_a}X_a^2\right)-2\sum_{p\in\mathbb{N}}\text{ln}\Big(\beta^4\Big|\text{det}\big[{\cal M}^\eta_p\left(X_U,X_D,\phi,\alpha\right)\big]\Big|\Big),
\end{equation}
\begin{equation}
{\cal M}^\eta_p\left(X_U,X_D,\phi,\alpha\right)=\left[\begin{matrix}
-\left(i\omega_p+X_U\right)\mathbbm{1}_2+\tilde{\varepsilon}_U\,\sigma_z-{\cal A}_p(\phi-\alpha)&-{\cal B}^\eta_p(\phi,+\alpha)\\
-{\cal B}^\eta_p(\phi,-\alpha)&-\left(i\omega_p+X_D\right)\mathbbm{1}_2+\tilde{\varepsilon}_D\,\sigma_z-{\cal A}_p(\phi+\alpha)\end{matrix}\right],\label{matrixm}
\end{equation}
\begin{align}
{\cal A}_p(\phi)&=\frac{\Gamma}{\sqrt{\Delta^2+\omega_p^2}}\Bigg[i\omega_p\,\mathbbm{1}_2-\Delta\left(\cos\frac{\phi}{2}\,\sigma_x+\gamma\sin\frac{\phi}{2}\,\sigma_y\right)\Bigg],\\
{\cal B}^\eta_p(\phi,\alpha)&=\eta\frac{\Gamma}{\sqrt{\Delta^2+\omega_p^2}}\Bigg[i\omega_p\left(\cos\frac{\alpha}{2}\,\mathbbm{1}_2+i\gamma\sin\frac{\alpha}{2}\,\sigma_z\right)-\Delta\left(\cos\frac{\phi}{2}\,\sigma_x+\gamma\sin\frac{\phi}{2}\,\sigma_y\right)\Bigg].\label{outdiago}
\end{align}
The decay rate is defined as $\Gamma=\pi\,\nu(0)(t_L^2+t_R^2)$ where $\nu(0)$ is the density of states of the leads at the Fermi level. The contact asymmetry is given by $\gamma=(t_L^2-t_R^2)/(t_L^2+t_R^2)$.
\end{widetext}
The CAR processes are due to the off-diagonal terms ${\cal B}_p$ of the matrices ${\cal M}_p$, whose determinants we have to compute as a result of the Gaussian integrals over the quantum dot degrees of freedom. In practice, these off-diagonal terms depend on the separation between injection points $R\equiv|\mathbf{r_{jU}}-\mathbf{r_{jD}}|$. They decay exponentially on the scale of the superconducting coherence length. With the microscopic tunneling Hamiltonian formulation of Eq.~\eqref{ham_tun}, they also bear fast oscillations on the Fermi wavelength, and possibly a power law decay depending on the dimensionality [$\eta(R)=(\sin k_FR)/(k_FR)$ for 3D superconductors \cite{choi_bruder_loss,popoff_these} and no power law decay for quasi-one-dimensional superconductors \cite{rech_quartets}]. Existing CAR experiments\cite{hofstetter_nature,herrmann,schindele,hofstetter_prl107} based on nanowire quantum dots embedded in the superconducting leads are typically performed for an injection separation smaller than the superconducting coherence length. Yet these experiments find no evidence of either Fermi wavelength oscillations or power law decay: the nonlocal signal is strong and can be optimized by tuning the dot gate voltages. This can be attributed to the proximity effect from the bulk superconductors, which acts on the nanowires used in the experiments. To keep our discussion more general, and to avoid such device-specific complications, we use $\eta$ in this work as a phenomenological parameter, as in Refs. \onlinecite{wang_hu,sadovsky}. Most of the results will be displayed for the extreme values $\eta=0$ and $\eta=1$, but in order to show the evolution of the AB signals, we sometimes allow it to vary smoothly between these two values.
To evaluate the partition function of Eq.~\eqref{part_func}, we use a saddle-point method.\cite{rozkhov_arovas} The effective action $S^{\text{HS},\eta}_{\text{eff}}\left(X_U,X_D,\phi,\alpha\right)$ is computed numerically by summing over Matsubara frequencies (up to a cut-off much larger than the superconducting gap). Its minimum, located at $[X_U^\ast(\phi,\alpha),X_D^\ast(\phi,\alpha)]$, is obtained with a gradient descent method for fixed $\phi$ and $\alpha$ (the number of starting points of the algorithm depends on the symmetry of the function to be minimized). The free energy is then defined from this minimum value as $S_{\text{eff}}^{\text{HS},\eta}\left(X_U^\ast(\phi,\alpha),X_D^\ast(\phi,\alpha),\phi,\alpha\right)\equiv\beta F^\eta(\phi,\alpha)$. The current is finally obtained by differentiating the free energy with respect to the phase difference $\phi$:
\begin{equation}
J^\eta(\phi,\alpha)=2\,\partial_\phi F^\eta(\phi,\alpha).
\end{equation}
The critical current, as a function of the flux $\alpha$, is defined as
\begin{equation}
J_c^\eta(\alpha)=\text{max}_\phi\left|J^\eta(\phi,\alpha)\right|.
\end{equation}
For $\eta=0$, it is $\pi$ periodic. Indeed, in this case there are no CAR processes and, for a magnetic flux $\alpha$, the current characteristic (current as a function of the phase difference $\phi$) of the double junction is simply the sum of the current characteristics of the independent single junctions, one shifted by $-\alpha$, the other by $+\alpha$. Shifting $\alpha$ by $\pi$ amounts to adding a phase shift $-\pi$ for one of the two junctions [${\cal A}_p(\phi-\alpha)\to{\cal A}_p(\phi-\alpha-\pi)$ in the top left block of Eq.~\eqref{matrixm}] and a phase shift $+\pi$ for the other one [${\cal A}_p(\phi+\alpha)\to{\cal A}_p(\phi+\alpha+\pi)$ in the bottom right block of Eq.~\eqref{matrixm}]. Using the $2\pi$ periodicity in $\phi$ of the current through a single junction, and taking the maximum over $\phi$, shows immediately that the critical current is unchanged when $\alpha \to \alpha + \pi$. This is in contrast to the case $\eta=1$, \textit{i.e.} in the presence of CAR processes, where the off-diagonal terms of Eq.~\eqref{matrixm} imply that only the $2\pi$ periodicity of the critical current survives.
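To make this periodicity check concrete, the numerical procedure described above (Matsubara sum, saddle-point minimization over the auxiliary fields, numerical $\phi$ derivative, maximization over $\phi$) can be sketched in a few lines. This is a minimal illustration and not the code used for the figures: the parameter values ($\Delta=1$, $\Gamma=0.5$, $\beta=4$, two identical dots with $\varepsilon=-0.3$, $U=0.6$), the Matsubara cutoff, the starting points of the minimizer (a Nelder-Mead simplex standing in for the gradient descent of the text) and the coarse $\phi$ grid are all illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

# Pauli matrices and identity in Nambu space
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Illustrative parameters, energies in units of the gap Delta
Delta, Gamma, gam = 1.0, 0.5, 0.0          # gap, decay rate, contact asymmetry
beta, pmax = 4.0, 200                      # inverse temperature, Matsubara cutoff
U = 0.6                                    # on-site repulsion, same on both dots
etU = etD = -0.3 + U/2                     # \tilde\varepsilon_a = \varepsilon_a + U_a/2

wp = (np.arange(pmax) + 0.5)*2*np.pi/beta  # positive Matsubara frequencies
pref = Gamma/np.sqrt(Delta**2 + wp**2)

def Amat(phi):
    """Diagonal self-energy block A_p(phi), shape (pmax, 2, 2)."""
    stat = Delta*(np.cos(phi/2)*sx + gam*np.sin(phi/2)*sy)
    return pref[:, None, None]*(1j*wp[:, None, None]*I2 - stat)

def Bmat(phi, alpha, eta):
    """Off-diagonal (CAR) block B_p^eta(phi, alpha)."""
    dyn = np.cos(alpha/2)*I2 + 1j*gam*np.sin(alpha/2)*sz
    stat = Delta*(np.cos(phi/2)*sx + gam*np.sin(phi/2)*sy)
    return eta*pref[:, None, None]*(1j*wp[:, None, None]*dyn - stat)

def action(X, phi, alpha, eta):
    """Effective action S^{HS,eta}_eff(X_U, X_D, phi, alpha)."""
    XU, XD = X
    M = np.zeros((pmax, 4, 4), dtype=complex)
    M[:, :2, :2] = -(1j*wp[:, None, None] + XU)*I2 + etU*sz - Amat(phi - alpha)
    M[:, 2:, 2:] = -(1j*wp[:, None, None] + XD)*I2 + etD*sz - Amat(phi + alpha)
    M[:, :2, 2:] = -Bmat(phi, +alpha, eta)
    M[:, 2:, :2] = -Bmat(phi, -alpha, eta)
    S = beta*(etU + etD + (XU**2 + XD**2)/(2*U))
    return S - 2*np.sum(np.log(beta**4*np.abs(np.linalg.det(M))))

def free_energy(phi, alpha, eta):
    """Saddle-point free energy: minimum of the action over (X_U, X_D)."""
    best = np.inf
    for x0 in ([0.3, 0.3], [-0.3, -0.3], [0.3, -0.3], [-0.3, 0.3]):
        res = minimize(action, x0, args=(phi, alpha, eta), method='Nelder-Mead',
                       options={'xatol': 1e-8, 'fatol': 1e-11})
        best = min(best, res.fun)
    return best/beta

def critical_current(alpha, eta, nphi=5, dphi=1e-3):
    """J_c(alpha) = max_phi |2 dF/dphi|, evaluated on a coarse phi grid."""
    jc = 0.0
    for phi in np.linspace(0.1, np.pi - 0.1, nphi):
        J = (free_energy(phi + dphi, alpha, eta)
             - free_energy(phi - dphi, alpha, eta))/dphi   # = 2 dF/dphi
        jc = max(jc, abs(J))
    return jc
```

For $\eta=0$ this reproduces the $\pi$ periodicity of $J_c(\alpha)$ discussed above, while switching on $\eta$ restores only the $2\pi$ periodicity.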
\section{$0-${\Large $\pi$} transition in a single Josephson junction}\label{pi_shift}
As a first step, let us recall some known results concerning a single dot embedded in a Josephson junction.
In this section, we summarize the properties of such a setup, and we determine under which conditions the junction is in the $0^{(0)}$, the $\pi$ or the $0^{(2)}$ phase. The 0 phase is characterized by a positive Josephson current for $\phi\in[0,\pi]$. It can be further divided into a $0^{(0)}$ phase, where the dot is almost empty, and a $0^{(2)}$ phase, where the mean occupation number of the dot is almost 2. The $\pi$ phase is associated with a negative Josephson current for $\phi\in[0,\pi]$, and corresponds to a singly occupied dot.
In perturbative calculations, the Josephson current flowing through the quantum dot results from the tunneling of Cooper pairs, which requires a fourth order perturbative expansion in the tunneling amplitudes between a superconductor and the quantum dot. At this order, the current can be written as $J=J_0\sin\phi$, where $\phi$ is the phase difference between the superconductors, and the sign of $J_0$ determines the 0 or $\pi$ character of the junction. Provided the quantum dot is singly occupied ($0 < -\varepsilon \ll U$), $J_0$ can be negative,\cite{spivak_kivelson} unlike the case of an empty quantum dot.
This phenomenon has been investigated numerically\cite{rozkhov_arovas} for arbitrary transmissions between the dot and the superconducting leads. For a fixed quantum dot energy level $\varepsilon<0$ and a fixed decay rate $\Gamma$, the current as a function of the phase difference $\phi$ between the superconductors undergoes a discontinuity as one tunes the Coulomb interaction $U$ across a critical value. Computing the mean occupation number on the quantum dot in the $(-\varepsilon,U)$ plane reveals the presence of all three phases, as shown in Fig.~\ref{occupancy_Gamma}, where the $\pi$ phase, which lies around the line $2\varepsilon+U=0$, separates the $0^{(0)}$ and $0^{(2)}$ phases.
\begin{figure}
\centering
\includegraphics[scale=0.65]{diag_occ.pdf}
\caption{Mean occupation number diagram of the quantum dot of a single Josephson junction for symmetric couplings ($t_L=t_R$) and for $\Gamma=\Delta$, $\beta=50\Delta^{-1}$.}\label{occupancy_Gamma}
\end{figure}
\section{Splitting efficiency}\label{split}
In order to identify the optimal regime for observing signatures of CAR processes in the AB signal, we need to define a splitting efficiency. We start by discussing the case of low electron transmission, where intuition can be gained from simple perturbation theory. We then compare the results of our general approach, valid for arbitrary transmission, to these perturbative results for the AB signal.
The fourth order perturbation expansion~\cite{choi_bruder_loss,wang_hu} in the tunneling amplitudes $t_j$ allows us to write the Josephson current as three contributions, associated with the three different processes illustrated in Fig.~\ref{processes}, so that
\begin{equation}
J(\phi,\alpha)=I_D\sin\left(\phi+\alpha\right)+I_U\sin\left(\phi-\alpha\right)+I_{\text{CAR}}\sin\phi
\label{curr_perturb_expr}.
\end{equation}
Indeed, in the presence of a magnetic flux $\alpha\neq0$, an additional phase shift is acquired, depending on the path that each electron of the Cooper pair followed between the two superconducting leads.
When the Cooper pair tunnels through the $U$ (resp. $D$) quantum dot, it accumulates a phase shift $-\alpha$ (resp. $+\alpha$) in addition to the superconducting phase difference, and contributes to the Josephson current with an amplitude $I_U$ (resp. $I_D$). However, when the Cooper pair is delocalized over the two quantum dots (via the CAR process, thus contributing with an amplitude $I_\text{CAR}$), one electron acquires a phase shift $+\frac{\alpha}{2}$ while the other acquires a phase shift $-\frac{\alpha}{2}$, so that the pair accumulates no additional phase shift.
The critical current associated with the Josephson current Eq.~\eqref{curr_perturb_expr} then reads
\begin{equation}
J_c(\alpha)=I_0\sqrt{1+a\,\cos\alpha+b\,\cos2\alpha}\label{crit_curr}
\end{equation}
with $I_0=\sqrt{I_U^2+I_D^2+I_{\text{CAR}}^2}$, $I_0^2\,a=2I_{\text{CAR}}\left(I_U+I_D\right)$ and $I_0^2\,b=2I_DI_U$. For low enough values of $\Gamma$, we are in the tunneling regime and the approximation Eq.~\eqref{curr_perturb_expr} for the Josephson current is justified.
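The closed form Eq.~\eqref{crit_curr} can be checked in two lines by collecting the three harmonics of Eq.~\eqref{curr_perturb_expr} into a single one:

```latex
% Expanding sin(phi +/- alpha) and grouping the sin(phi) and cos(phi) terms:
J(\phi,\alpha) = A(\alpha)\sin\phi + B(\alpha)\cos\phi,
\qquad A = (I_U + I_D)\cos\alpha + I_{\text{CAR}},
\qquad B = (I_D - I_U)\sin\alpha.
% Maximizing over phi gives J_c = sqrt(A^2 + B^2); with cos^2 - sin^2 = cos(2 alpha):
J_c(\alpha) = \sqrt{I_U^2 + I_D^2 + I_{\text{CAR}}^2
            + 2 I_{\text{CAR}}(I_U + I_D)\cos\alpha
            + 2 I_U I_D \cos 2\alpha}\,,
% which is Eq. (crit_curr) once I_0^2 is factored out of the square root.
```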
If we are able to extract the parameters $I_U$, $I_D$ and $I_{\text{CAR}}$, e.g. from a fit of the AB signal, we can calculate the quantity
\begin{equation}
r_t=\frac{I_{\text{CAR}}^2}{I_U^2+I_D^2+I_{\text{CAR}}^2}\label{spliteff}
\end{equation}
which encodes the splitting efficiency of the double Josephson junction. It varies between 0 and 1. $r_t=0$ corresponds to a low efficiency of the CAR process while $r_t=1$ is obtained when this process of spatial delocalization of the two electrons of a Cooper pair is much more important than the tunneling processes of a whole Cooper pair through a single quantum dot.
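In practice, the amplitudes can be extracted from a measured (or computed) AB signal by a least-squares fit of Eq.~\eqref{crit_curr}. The following sketch, with arbitrary illustrative amplitudes, fits a synthetic critical-current curve and recovers $r_t$; note that the global sign and $U\leftrightarrow D$ exchange ambiguities of such a fit leave $r_t$ unchanged.

```python
import numpy as np
from scipy.optimize import curve_fit

def jc_model(alpha, iu, idn, icar):
    """Critical current of Eq. (crit_curr), parametrized by I_U, I_D, I_CAR."""
    i0sq = iu**2 + idn**2 + icar**2
    a = 2*icar*(iu + idn)/i0sq
    b = 2*iu*idn/i0sq
    val = 1 + a*np.cos(alpha) + b*np.cos(2*alpha)
    return np.sqrt(i0sq*np.maximum(val, 0.0))  # guard against negative roundoff

# Synthetic "data" with illustrative amplitudes I_U = 0.8, I_D = 0.5, I_CAR = 0.6
alpha = np.linspace(0.0, 2*np.pi, 81)
data = jc_model(alpha, 0.8, 0.5, 0.6)

# Fit the AB signal and compute the splitting efficiency r_t of Eq. (spliteff)
popt, _ = curve_fit(jc_model, alpha, data, p0=[0.9, 0.5, 0.5])
iu, idn, icar = popt
r_t = icar**2/(iu**2 + idn**2 + icar**2)
```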
As already stressed, the formalism developed in Secs.~\ref{model}-\ref{free} is valid regardless of the strength of the coupling between the superconductors and the quantum dots. However, the above definition of the splitting efficiency relies on the expression Eq.~\eqref{crit_curr} for the critical current, which is no longer valid in the non-perturbative regime. There, one needs an alternative diagnostic for the detection of the CAR process. As it turns out, a relevant quantity can be extracted from the mean powers of the critical current obtained for infinitely distant injection points ($\eta=0$) and for coinciding ones ($\eta=1$). Indeed, defining ${\cal P}_\eta=\int_0^{2\pi}\text{d}\alpha\left[J_c^\eta(\alpha)\right]^2$, we compute the quantity
\begin{equation}
r=\frac{\left|{\cal P}_{\eta=1}-{\cal P}_{\eta=0}\right|}{{\cal P}_{\eta=1}}\label{spliteff2}.
\end{equation}
This generalizes the concept of splitting efficiency to the case of arbitrary transmission, and coincides with the definition of Eq.~\eqref{spliteff} in the tunneling regime.
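This coincidence in the tunneling regime can be verified directly from Eq.~\eqref{crit_curr}, assuming (as holds at this order of perturbation theory) that the amplitudes $I_U$ and $I_D$ are unaffected by $\eta$:

```latex
% The cos(alpha) and cos(2 alpha) terms of Eq. (crit_curr) average to zero:
{\cal P}_{\eta=1} = \int_0^{2\pi}\text{d}\alpha\, I_0^2
  \left(1 + a\cos\alpha + b\cos 2\alpha\right)
  = 2\pi\left(I_U^2 + I_D^2 + I_{\text{CAR}}^2\right),
% while for eta = 0 the CAR amplitude vanishes, I_CAR = 0:
{\cal P}_{\eta=0} = 2\pi\left(I_U^2 + I_D^2\right),
% so that
r = \frac{{\cal P}_{\eta=1} - {\cal P}_{\eta=0}}{{\cal P}_{\eta=1}}
  = \frac{I_{\text{CAR}}^2}{I_U^2 + I_D^2 + I_{\text{CAR}}^2} = r_t.
```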
\section{Nanosquid in the tunneling regime}\label{tunnel}
We first focus on the tunneling regime, taking a low value of the decay rate $\Gamma=0.01\Delta$.
We choose to explore all the possible combinations for the phases of the two quantum dots (see Figs.~\ref{crit_perturb}-\ref{crit_perturb2}). The results were obtained for symmetric couplings $\gamma=0$, at temperature $\beta^{-1}=0.002\Delta$. The energies of the quantum dots are: $\varepsilon=-0.3\Delta$ for the $\pi$ phase, $\varepsilon=0.3\Delta$ for the $0^{(0)}$ phase and $\varepsilon=-0.9\Delta$ for the $0^{(2)}$ phase. The Coulomb interaction $U$ is chosen to be the same for the two quantum dots, and within a specific range, since staying in a given phase at fixed energy restricts the possible values of $U$.
\begin{figure}[b]
\centering
\includegraphics[scale=0.65]{crit_perturb_v2.pdf}
\caption{Critical current (in units of $10^4e\Delta/\hbar$) curves for symmetric associations of dots in the tunneling regime: $\pi-\pi$ (top panel), $0^{(0)}-0^{(0)}$ (middle panel), $0^{(2)}-0^{(2)}$ (bottom panel).}\label{crit_perturb}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.65]{crit_perturb2_v2.pdf}
\caption{Critical current (in units of $10^4e\Delta/\hbar$) curves for asymmetric associations of dots in the tunneling regime: $\pi-0^{(0)}$ (top panel), $\pi-0^{(2)}$ (middle panel), $0^{(0)}-0^{(2)}$ (bottom panel).}\label{crit_perturb2}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics[scale=0.65]{var_eta_perturb_v2.pdf}
\caption{Influence of the parameter $\eta$ on the critical current (in units of $10^4e\Delta/\hbar$) in the tunneling regime ($U=0.6\Delta$).}\label{var_eta}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics[scale=0.65]{perturb_eff.pdf}
\caption{Splitting efficiency $r$ given by Eq.~\eqref{spliteff2} extracted from the curves of Figs.~\ref{crit_perturb} and \ref{crit_perturb2}.}\label{rnum}
\end{figure}
The particle-hole symmetry ensures that the current is invariant under the change $(\tilde{\varepsilon}_U,\tilde{\varepsilon}_D)\to(-\tilde{\varepsilon}_U,-\tilde{\varepsilon}_D)$. For the values of energy $\varepsilon$ mentioned above, this implies that the critical current is identical for $0^{(0)}-0^{(0)}$ and $0^{(2)}-0^{(2)}$ phase associations with $U=0.6\Delta$, for $\pi-0^{(0)}$ and $\pi-0^{(2)}$ phase associations again with $U=0.6\Delta$, and finally for $0^{(0)}-0^{(0)}$ at $U=0.7\Delta$ and $0^{(2)}-0^{(2)}$ at $U=0.5\Delta$.
We first consider the critical current curves of a SQUID made of two independent single Josephson junctions, \textit{i.e.} for $\eta=0$. In this particular case, $I_\text{CAR}=0$ and consequently $a=0$ in Eq.~\eqref{crit_curr}, so that the extrema of the critical current are the zeros of $\sin2\alpha$, and the total Josephson current is given by Eq.~\eqref{curr_perturb_expr} with $I_\text{CAR}=0$.
From the results of Figs.~\ref{crit_perturb} and \ref{crit_perturb2}, it clearly appears that there is a $\pi/2$ phase shift in the critical current between the situation where the two dots are in the same phase ($0-0$ or $\pi-\pi$) and the one where they are in different phases ($\pi-0$).
More specifically, for two quantum dots in the same phase ($I_D I_U>0$), there is no phase shift between the currents of the two single Josephson junctions for $\alpha=0$, so that the maxima are added ($|I_D+I_U|=|I_D|+|I_U|$) and, as a result, the critical current is maximal for $\alpha=0$. However, for two quantum dots in different phases ($I_D I_U<0$), the phase shift of $\pi$ between the currents of the two single Josephson junctions for $\alpha=0$ is compensated by a phase shift $\alpha=+\pi/2$ for one of the currents and a phase shift $-\alpha=-\pi/2$ for the other one ($|I_D-I_U|=|I_D|+|I_U|$) resulting in a maximum of the critical current for $\alpha=\pi/2$. Such a behavior has been observed experimentally.\cite{cleuziou}
Comparing the left panels of Figs.~\ref{crit_perturb}-\ref{crit_perturb2} to the right ones (i.e. the case $\eta=0$ to $\eta=1$), we immediately obtain evidence of the cross-talk between the two single Josephson junctions: the period of the critical current doubles. This is a signature of the emergence of the CAR process. Note that the symmetric associations (cf. Fig.~\ref{crit_perturb}) differ completely between $\eta=0$ and $\eta=1$: for the $\pi-\pi$ association, the maximum at $\alpha=0$ for $\eta=0$ becomes a minimum for $\eta=1$, and for the $0^{(0)}-0^{(0)}$ and $0^{(2)}-0^{(2)}$ associations, the maximum at $\alpha=\pi$ for $\eta=0$ becomes a zero for $\eta=1$.
Concerning the asymmetric associations (cf. Fig.~\ref{crit_perturb2}), the critical current curves for $\eta=1$ broadly resemble those obtained for $\eta=0$: the positions of the maxima and minima are mostly preserved for all phase associations, only their local or global character changes when tuning $\eta$.
Increasing $U$ results in a more pronounced filtering as the processes where the quantum dots are doubly occupied are less favored. This explains the observed decrease in critical current for the $0^{(0)}-0^{(0)}$ and $\pi-\pi$ associations (Fig.~\ref{crit_perturb}). The opposite behavior happens when the quantum dots are doubly occupied, \textit{i.e.} for the $0^{(2)}-0^{(2)}$ association. There, increasing $U$ favors processes where the occupation of the dots is lowered, since approaching the $\pi$ transition results in the decrease of the mean occupation number on the quantum dot. Such opposite behaviors while tuning $U$ for $0^{(0)}-0^{(0)}$ and $0^{(2)}-0^{(2)}$ phase associations can be seen as a consequence of the already discussed particle-hole symmetry. Similarly, the $\pi-0^{(0)}$ and $\pi-0^{(2)}$ combinations have opposite interaction dependence (see Fig.~\ref{crit_perturb2}). Interestingly, the interaction has no noticeable effect in the case of the $0^{(0)}-0^{(2)}$ phase association.
In order to monitor the emergence of CAR processes, we investigate in Fig.~\ref{var_eta} intermediate regimes for the $\pi-\pi$, $0^{(2)}-0^{(2)}$, $\pi-0^{(2)}$ and $0^{(0)}-0^{(2)}$ phase associations by turning on $\eta$ progressively from $0$ to $1$. This is a phenomenological way to introduce more and more CAR processes, as the injection points in the superconductors are brought together, from $\eta=0$ (no CAR effect) to $\eta=1$ (maximal CAR effect). We observe a dramatic increase of the critical current in the symmetric configurations ($\pi-\pi$ and $0^{(2)}-0^{(2)}$ phase associations), while in the asymmetric configurations ($\pi-0^{(2)}$ and $0^{(0)}-0^{(2)}$ phase associations), the modification of the AB signal is noticeable, but not substantial.
We display the splitting efficiency $r$ given by Eq.~\eqref{spliteff2} for all phase associations in Fig.~\ref{rnum}. In the studied domain for Coulomb repulsion parameter $U$, we do not observe noticeable evolutions of $r$ except for the $0^{(0)}-\pi$ association for which we see a clear decrease.
The symmetric associations of phases (for which the critical current differs the most between $\eta=0$ and $\eta=1$) lead to the highest values of $r$. The highest splitting efficiency is obtained for two quantum dots in the $\pi$ phase: ensuring a mean occupation number around 1 on each quantum dot favors the CAR process.
One can get some insight concerning the value of $r$ for the associations $0^{(0)}-0^{(0)}$ and $0^{(2)}-0^{(2)}$ from the perturbative calculation presented in the previous section. Indeed, in the tunneling regime the splitting efficiency $r$ matches the definition $r_t$ of Eq.~\eqref{spliteff} in terms of the amplitudes $I_D$, $I_U$ and $I_\text{CAR}$. For symmetric associations, we must add the constraint $I_D=I_U\equiv I/2$ to the fit procedure, so that the critical current reads
\begin{equation}
\frac{J_c(\alpha)}{|I|}=\left|\frac{I_{\text{CAR}}}{I}+\cos\alpha\right|
\end{equation}
and since the critical current in symmetric associations of 0 phases vanishes for $\alpha=\pi$, we get $I_{\text{CAR}}/I=1$ and $r_t=2/3$.
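The associated period doubling can be checked numerically. A minimal sketch, using the tunneling-limit flux dependences discussed above for a symmetric association: $J_c/|I|=|\cos\alpha|$ for $\eta=0$ and $J_c/|I|=|1+\cos\alpha|$ for $I_\text{CAR}/I=1$.

```python
import numpy as np

# alpha grid with step pi/1000, long enough to detect a 2*pi period
alpha = np.linspace(0.0, 8.0 * np.pi, 8001)
step = alpha[1] - alpha[0]

jc_no_car = np.abs(np.cos(alpha))          # eta = 0: J_c/|I| = |cos(alpha)|
jc_full_car = np.abs(1.0 + np.cos(alpha))  # eta = 1, I_CAR/I = 1

def period(signal, step, max_shift):
    """Smallest grid shift after which the signal repeats."""
    m = len(signal) - max_shift
    for shift in range(1, max_shift + 1):
        if np.allclose(signal[:m], signal[shift:shift + m], atol=1e-9):
            return shift * step
    return None

p0 = period(jc_no_car, step, 2500)
p1 = period(jc_full_car, step, 2500)
print(p0 / np.pi, p1 / np.pi)  # about 1 and 2: the period doubles with CAR
```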
\section{NanoSQUID in the high transparency regime}\label{non_perturb}
The advantage of the formalism developed in Secs.~\ref{model}-\ref{free} lies in the possibility of addressing high transparency regimes of the double Josephson junction under study. The results presented in this section are obtained for symmetric couplings $\gamma=0$, at temperature $\beta^{-1}=0.02\Delta$ and for a decay rate $\Gamma=2\Delta$. The energies of the quantum dots are: $\varepsilon=-4\Delta$ for the $\pi$ phase, $\varepsilon=4\Delta$ for the $0^{(0)}$ phase and $\varepsilon=-12\Delta$ for the $0^{(2)}$ phase. Particle-hole symmetry implies that the critical current is identical for $0^{(0)}-0^{(0)}$ and $0^{(2)}-0^{(2)}$ phase associations with $U=8\Delta$, for $\pi-0^{(0)}$ and $\pi-0^{(2)}$ phase associations again with $U=8\Delta$, and finally for $0^{(0)}-0^{(0)}$ at $U=9\Delta$ and $0^{(2)}-0^{(2)}$ at $U=7\Delta$.
The strategy is again to investigate all the possible combinations of phases for the quantum dots (which correspond to different mean occupation numbers), in order to reproduce the qualitative study of Sec.~\ref{tunnel}. Our goal is to determine what features are preserved and what has changed, and to compute the splitting efficiency $r$ defined by Eq.~\eqref{spliteff2} in order to determine which associations of phases favor nonlocal phenomena the most. The critical current curves are given in Figs.~\ref{crit_nonperturb}-\ref{crit_nonperturb2}.
From the results of the $\pi-\pi$ association for $\eta=1$, it is clear that we are no longer in the tunneling regime, as the flux dependence cannot be fitted by Eq.~\eqref{crit_curr} together with the constraint that $I_D=I_U$. We can again notice, for $\eta=0$, the $\pi/2$ phase shift in the critical current between dots in the $\pi-0$ phases (top and middle panels of Fig.~\ref{crit_nonperturb2}) and dots in the $0-0$ or $\pi-\pi$ phases (Fig.~\ref{crit_nonperturb} and bottom panel of Fig.~\ref{crit_nonperturb2}).
\begin{figure}[b]
\centering
\includegraphics[scale=0.65]{crit_nonperturb_v2.pdf}
\caption{Critical current (in units of $e\Delta/\hbar$) curves for symmetric associations of dots in the high transparency regime: $\pi-\pi$ (top panel), $0^{(0)}-0^{(0)}$ (middle panel), $0^{(2)}-0^{(2)}$ (bottom panel).}\label{crit_nonperturb}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.65]{crit_nonperturb2_v2.pdf}
\caption{Critical current (in units of $e\Delta/\hbar$) curves for asymmetric associations of dots in the high transparency regime: $\pi-0^{(0)}$ (top panel), $\pi-0^{(2)}$ (middle panel), $0^{(0)}-0^{(2)}$ (bottom panel).}\label{crit_nonperturb2}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics[scale=0.65]{var_eta_nonperturb_v2.pdf}
\caption{Influence of the parameter $\eta$ on the critical current (in units of $e\Delta/\hbar$) in the high transparency regime ($U=8\Delta$).}\label{var_eta_nonperturb}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics[scale=0.65]{nonperturb_eff.pdf}
\caption{Splitting efficiency $r$ given by Eq.~\eqref{spliteff2} extracted from the curves of Figs.~\ref{crit_nonperturb} and \ref{crit_nonperturb2}.}\label{eff_nonperturb}
\end{figure}
The critical current in Figs.~\ref{crit_nonperturb}-\ref{crit_nonperturb2} again presents a doubling of its period when switching $\eta$ from 0 to 1. This is due to the emergence of nonlocal processes where both quantum dots are involved. For the $\pi-\pi$ association, during this switching, $\alpha=0$ remains a local maximum (contrary to the tunneling regime). For $0^{(0)}-0^{(0)}$ and $0^{(2)}-0^{(2)}$ phase associations, the maximum at $\alpha=\pi$ for $\eta=0$ becomes a zero for $\eta=1$ (as in the tunneling regime). There is little influence (less than in the tunneling regime) of $\eta$ on the $\pi-0^{(2)}$ and $\pi-0^{(0)}$ associations whereas, for the $0^{(0)}-0^{(2)}$ association, the maximum at $\alpha=0$ is considerably lowered (more than in the tunneling regime) from $\eta=0$ to $\eta=1$.
For symmetric associations of phases, the evolution of the critical current with increasing $U$ (Fig.~\ref{crit_nonperturb}) can be explained following the same arguments as in the tunneling regime.
While the filtering of electrons tunneling through $0^{(0)}$ or $\pi$ quantum dots is responsible for a decrease of the signal, favoring one-electron processes through a $0^{(2)}$ quantum dot results in an increase of the signal. The opposite $U$-dependence for the $0^{(0)}-0^{(0)}$ and $0^{(2)}-0^{(2)}$ phase associations is a consequence of particle-hole symmetry. While there is still no noticeable effect of $U$ on the $0^{(0)}-0^{(2)}$ association as in the tunneling regime, the $\pi-0^{(2)}$ association also shows little $U$-dependence. As a consequence, we do not observe the opposite behavior as a function of $U$ for the $\pi-0^{(0)}$ and $\pi-0^{(2)}$ associations (Fig.~\ref{crit_nonperturb2}).
We introduce progressively nonlocal effects in Fig.~\ref{var_eta_nonperturb} by switching $\eta$ from 0 to 1.
For the symmetric associations ($\pi-\pi$ and $0^{(0)}-0^{(0)}$) as well as for the $0^{(0)}-0^{(2)}$ configuration (contrary to the tunneling regime), the AB signal is dramatically increased when CAR processes are switched on. On the contrary, the $\pi-0^{(2)}$ phase association is hardly influenced by the presence of nonlocal processes (even less than in the tunneling regime).
We can quantify the importance of CAR processes compared to the direct tunneling through a single quantum dot by calculating the splitting efficiency $r$ given by Eq.~\eqref{spliteff2}, which we display in Fig.~\ref{eff_nonperturb} for the different associations of phases. The splitting efficiencies found are essentially constant over the range of $U$ considered.
As it turns out, although we work with specific phases for the individual quantum dots, implying specific populations, the quantization of the electron charge on these dots is ineffective at high transparencies because they constitute ``open quantum dots'', with large fluctuations around their average population. This is consistent with the observation that the splitting efficiency varies little with $U$. As mentioned above, there is little influence of $\eta$ on the $\pi-0^{(2)}$ and $\pi-0^{(0)}$ associations, which is why these are the lowest splitting efficiencies we found. The splitting efficiencies of the associations of 0 phases are around 0.55 and the highest splitting efficiency is still obtained for the $\pi-\pi$ association.
Finally, we note that the energy levels of the dots can be easily varied experimentally using electrostatic gates. Varying the position of the energy level $\varepsilon$ of a quantum dot at fixed decay rate $\Gamma$ and fixed Coulomb on-site repulsion $U$ changes the effective transparency defined as
\begin{equation}
D(\varepsilon)=\frac{\Gamma^2}{\Gamma^2+\left(\varepsilon+\frac{U}{2}\right)^2} .\label{trans_eff}
\end{equation}
Thus, by varying the energy levels of the two dots, we can optimize the splitting efficiency by reaching the $\pi-\pi$ phase at high effective transparency.
We show in the top panel of Fig.~\ref{crossover_eff} the splitting efficiency as a function of the energy level of the two dots, taken to be identical. The abrupt change of the efficiency near $D(\varepsilon) = 0.4$ shows the crossover from the $0^{(0)}-0^{(0)}$ to the $\pi-\pi$ junction. Similarly, when tuning only one of the two energy levels, while maintaining the other dot in the $\pi$ (middle panel) or the $0^{(0)}$ phase (bottom panel), the splitting efficiency shows a marked transition as a function of the effective transparency.
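For reference, evaluating Eq.~\eqref{trans_eff} at the dot levels used above ($\Gamma=2\Delta$, $U=8\Delta$) shows that the $\pi$-phase level sits at unit effective transparency, while the two $0$-phase levels share the same low value, consistent with particle-hole symmetry. A minimal sketch (energies in units of $\Delta$):

```python
def effective_transparency(eps, gamma=2.0, u=8.0):
    """D(eps) = Gamma^2 / (Gamma^2 + (eps + U/2)^2), Eq. (trans_eff);
    all energies in units of the gap Delta."""
    return gamma**2 / (gamma**2 + (eps + u / 2.0)**2)

for label, eps in [("pi", -4.0), ("0^(0)", 4.0), ("0^(2)", -12.0)]:
    print(label, effective_transparency(eps))
# pi      -> 1.0 (resonant level at eps + U/2 = 0)
# 0^(0)   -> 4/68, about 0.059
# 0^(2)   -> 4/68, the particle-hole mirror of 0^(0)
```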
\begin{figure}
\includegraphics[scale=0.65]{eff_crossovers.pdf}
\caption{Evolution of the splitting efficiency as a function of the effective transparency $D(\varepsilon)$ [Eq.~\eqref{trans_eff}], for fixed $\Gamma=2\Delta$ and $U=8\Delta$.
\textit{Top:} The energy levels of both dots are taken to be identical and varied simultaneously ($\varepsilon_U=\varepsilon_D\equiv\varepsilon$). \textit{Middle:} the $D$ quantum dot is taken in the $\pi$ phase ($\varepsilon_D=-4\Delta$) while the energy of the $U$ dot is varied ($\varepsilon_U\equiv\varepsilon$). \textit{Bottom:} the $D$ quantum dot is taken in the $0^{(0)}$ phase ($\varepsilon_D=4\Delta$) while the energy of the $U$ dot is varied ($\varepsilon_U\equiv\varepsilon$).
}\label{crossover_eff}
\end{figure}
\section{Conclusion}\label{conclusion}
We have studied a double Josephson junction consisting of two quantum dots connected to two superconductors as a tool to probe Cooper pair splitting. When the two single Josephson junctions which constitute our device are coupled to each other via CAR processes, the doubling of the period of the critical current measured in a nanoSQUID experiment is evidence for the emergence of nonlocal phenomena in the electronic transport of the double junction. This type of diagnosis may prove more convenient than non-equilibrium scenarios involving a superconducting source of electrons and two normal leads/dots where either nonlocal conductance signal or noise cross-correlations are measured.
While this device had been studied in the context of perturbation theory in the tunneling Hamiltonian coupling the dot to the leads,\cite{choi_bruder_loss,wang_hu} no generalization to arbitrary transmission had been proposed, and no systematic study for optimizing the CAR signal with respect to the phases
($0^{(0)}$, $\pi$, $0^{(2)}$) of the quantum dots had been attempted so far. The path integral approach of Ref.~\onlinecite{rozkhov_arovas}, within reasonable approximations (saddle point treatment), allows us to meet precisely these goals. One of the key results of the present work resides in defining the degree of efficiency of Cooper pair splitting, and evaluating it for the different dot configurations.
We first studied the tunneling regime where the usual perturbative expression for the Josephson signal allows us to fit our numerical results and to introduce a natural definition of the splitting efficiency. We were also able to generalize this quantity to arbitrary transparency, providing a criterion for the efficiency of CAR processes which is based on an analysis of the AB signal of the Josephson critical current. We thus studied the prominence of nonlocal phenomena depending on the phases of the quantum dots and found that the $\pi-\pi$ association optimizes the splitting of the Cooper pairs that are emitted from one superconductor and recombined on the other one. Yet, our analysis shows that the $0^{(0)}-0^{(0)}$ and $0^{(2)}-0^{(2)}$ combinations also provide robust Cooper pair splitting signals. Within each of these combinations of phases, we see for the most part that variations of the Coulomb repulsion parameter have little influence on the Cooper pair splitting efficiency.
The present treatment should only be valid if the superconductor--dots system is above the Kondo regime.\cite{Kondo04,Siano04} For a single dot embedded between two superconductors, the Kondo effect manifests itself when the Kondo temperature is larger than the superconducting gap, because quasiparticle excitations are
required to trigger the spin flip between the leads and the dot. The expression of the Kondo temperature for a Hubbard type Coulomb repulsion is given~\cite{Siano04,van_der_wiel} by
$T_K =\sqrt{\Gamma U}/2\,\text{e}^{\pi\varepsilon_0 (\varepsilon_0 + U)/(\Gamma U)}$. Within the range of parameters chosen in our numerical study, we find that the Kondo temperature
is at most $0.13 \Delta$, which gives us confidence in our working assumptions.
At any rate, the Kondo regime could be avoided by working with the $0^{(0)}-0^{(0)}$ combination of phases, which according to Fig.~\ref{eff_nonperturb} still has a sizable splitting efficiency.
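The quoted bound can be reproduced by evaluating the Kondo temperature formula for the $\pi$-phase level $\varepsilon_0=-4\Delta$ (the singly occupied configuration, where Kondo physics is relevant) with $\Gamma=2\Delta$ over the studied range $U=7\Delta$ to $9\Delta$; the maximum, reached at $U=7\Delta$, is about $0.13\Delta$. A minimal sketch:

```python
import math

def kondo_temperature(eps0, gamma, u):
    """T_K = sqrt(Gamma U)/2 * exp(pi eps0 (eps0 + U) / (Gamma U)),
    all quantities in units of the gap Delta."""
    return math.sqrt(gamma * u) / 2.0 * math.exp(
        math.pi * eps0 * (eps0 + u) / (gamma * u))

# pi-phase dot level eps0 = -4 Delta, Gamma = 2 Delta, U = 7..9 Delta
tk = [kondo_temperature(-4.0, 2.0, u) for u in (7.0, 8.0, 9.0)]
print([round(t, 3) for t in tk])  # about [0.127, 0.086, 0.065]
```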
Furthermore, we treated the CAR coupling parameter $\eta$ in a phenomenological manner (varying it between $0$ and $1$). This is justified because, so far, no out-of-equilibrium experimental investigation of Cooper pair splitting in superconducting--normal metal ``forks'' seems to find the power law suppression attributed to bulk 3D superconductors. This may be due to the fact that microscopic models have to be revisited, taking into account that electron emission/absorption in the superconductors is not point-like, but should be averaged over some finite volume, reducing the effect of the oscillations of the CAR parameter over the Fermi wavelength.
Extensions of this work could be envisioned by going beyond the saddle point approximation of the Hubbard--Stratonovich transformation.
This transformation is exact in its functional integral form; neglecting the fluctuations of the auxiliary field and then using a method of steepest descent are approximations which were sufficient to exhibit the $\pi$-shift in a single junction~\cite{rozkhov_arovas}. However, the possibility to go beyond this mean field type of approach could be considered by looking at Gaussian fluctuations around the stationary value of the auxiliary field. Alternatively, a numerical renormalization group method~\cite{Kondo04} could be employed in principle, but the proliferation of couplings and parameters is likely to render it cumbersome.
A limitation of the present Cooper pair diagnosis resides in the fact that the nanoSQUID requires reduced dimensions so that the injection points to the dots on both superconductors are separated within a distance smaller than the coherence length. This imposes constraints on the separation of the quantum dots, and as a consequence, the area where the AB flux is imposed becomes reduced. Recall that in order to perform the AB diagnosis, a few flux quanta need to be introduced in this area,
but if the imposed magnetic field needed to apply several flux quanta becomes larger than the critical field of the superconductors, the whole diagnosis breaks down. In order to optimize the surface area encompassed between the dots, we believe the best choice would be to work with nanowire/nanotube quantum dots, which have a large aspect ratio, and which nevertheless can achieve large charging energies even though their length may exceed several $\mu$m.\cite{tans} This possibility is illustrated in Fig.~\ref{nanowire}. To work out the numbers, we find that imposing 2 magnetic flux quanta within a 1 $\mu$m$^2$ area requires a magnetic field of $8\times10^{-3}$~T, which is still smaller than the critical field of superconducting materials such as Aluminum ($10^{-2}$~T) or Niobium ($0.2$~T). Note that Aluminum could be quite suitable due to its long coherence length (1.6 $\mu$m).
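The field estimate follows from $B = n\Phi_0/A$. The quoted $8\times10^{-3}$~T corresponds to the normal-metal flux quantum $\Phi_0=h/e$ (consistent with the doubled AB period); with the superconducting quantum $h/2e$ the required field is half. A minimal sketch:

```python
H = 6.62607015e-34   # Planck constant, J s
E = 1.602176634e-19  # elementary charge, C

def field_for_flux_quanta(n_quanta, area_m2, phi0):
    """Magnetic field needed to thread n_quanta flux quanta through area_m2."""
    return n_quanta * phi0 / area_m2

area = 1e-12  # 1 micrometre^2 in m^2
b_he = field_for_flux_quanta(2, area, H / E)         # h/e quantum: ~8.3e-3 T
b_h2e = field_for_flux_quanta(2, area, H / (2 * E))  # h/2e quantum: ~4.1e-3 T
print(b_he, b_h2e)
```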
\begin{figure}
\centering
\includegraphics[scale=0.35]{nanofils3.pdf}
\caption{Setup with nanowire/nanotube quantum dots.}\label{nanowire}
\end{figure}
While this work came to completion, we became aware of a current biased measurement\cite{tarucha} which studies precisely the behavior of the double dot Josephson junction. There, a (remarkable) comparative study of the switching current (the current required to transit to the dissipative regime with voltage bias) was performed for different dot phase configurations. This experiment seems to provide realistic evidence of Cooper pair splitting, albeit indirectly, because the self-organized quantum dots embedded in the Josephson junction are too close together to impose the magnetic flux necessary to observe the AB oscillations.
\acknowledgements
We acknowledge the support of the French National
Research Agency, through the project ANR NanoQuartets
(ANR-12-BS1000701). Part of this work
has been carried out in the framework of the Labex
Archim\`ede ANR-11-LABX-0033 and of the A*MIDEX
project ANR-11-IDEX-0001-02.
The authors acknowledge discussions with S. Tarucha and A. Oiwa about the nanoSQUID device. Discussions with E. Paladino and G. Falci are also gratefully acknowledged.
\bibliographystyle{unsrtnat}
\section{\textbf{Introduction}}
The last decade has produced an increasing volume of methods and
algorithms to analyze community structure in social and other networks, as
witnessed by an abundance of recent reviews
\emph{e.g.}
\citep{Girvan2002,Newman2004,Balasundaram2005,Palla2005,Reichhardt2006,Blondel2008,Leskovec2008,Porter2009,Fortunato2010,Xie2013}.
In this paper we study the structure of close communication, contacts and
association in networks, as represented by simple graphs. \emph{Close
communication} is defined here as contact between nodes at distances of at
most 2, that is by direct contact or by at least one common neighboring
node. Such communication is associated with closely-knit groups like
cliques, coteries, peer groups, primary groups and face-to-face communities,
such as small villages and artist colonies. Considered as dense social
networks, they can form powerful sources of social capital and support for their
members and serve both quick internal diffusion of social innovation
and speedy epidemiological contamination from outside sources.
The parts of a network where close communication can take place are marked by
overlapping subsets of nodes, which all are neighbors of each other or have a
common neighbor in the same subset. These correspond to graphs with a diameter of at most two.
In the following sections we shall characterize this structure and indicate
ways to detect these in social networks.
Mokken \citep{Mokken1979,Mokken2008} introduced the concept of \emph{k-clubs} of a graph as \emph{maximal} induced subgraphs of diameter at most $k$ of a simple connected graph $G$: 'maximal' in the sense that there is no larger induced subgraph of diameter $k$ which includes them. He also showed that close community networks, in the form of simple graphs of diameter at most two (\textit{2-clubs}), come in three distinct types: \emph{coteries, social circles} and \emph{hamlets}, respectively.\citep{Mokken1980}\
Accordingly, the \emph{2-clubs} of a simple graph or network $G$ cover the areas of close communication in that network consisting of non-inclusive, possibly mutually overlapping
\emph{coteries}, \emph{social circles} and \emph{hamlets}.
In the following sections this system of close communication is studied further and we show that it consists of a set of disjoint containers of nonseparable \emph{2}-clubs, \textit{i.e.} subgraphs that we call \emph{boroughs}, each of
which is formed by a set of edge-chained \emph{2}-clubs (hamlets, social circles
and coteries) of the network $G$. Each (nonseparable) \emph{2}-club of $G$ is included in
exactly one borough of $G$ and each borough consists of a nonseparable union
of overlapping \emph{2}-clubs of $G$. Consequently this system of close
communication of a network can be analyzed by studying its boroughs and the \emph{2}-clubs within each borough, or within selected ones.
The final sections show applications with some real networks and conclude with a discussion.
\section{\textbf{Concepts and notation}}
As the representation and analysis of networks will be in terms of simple
graphs, we will summarize the necessary concepts and notation here. (For
standard graph-theoretic background see \emph{e.g.}~\citep{Harary1969,Harary1994,Wasserman1994,Diestel2005}.)
A social network will be represented by a \emph{simple}
graph, \emph{i.e.} an undirected graph $G=G\left( V,L\right) $, without
loops or multiple edges, where $V=V\left( G\right) $ is its set of nodes and
$L=L\left( G\right) $ is its set of edges $\left( u,v\right) ;u,v\in
V\left( G\right) ,$ joining nodes $u$ and $v$ in $G.$ Two nodes $u$ and $v$
are \emph{adjacent} if the edge $\left( u,v\right) \in L\left( G\right) ;$
notation $uv$. An edge $\left( u,v\right) $ is \emph{incident} with its
endnodes $u$ and $v$. Let $\left\vert V\right\vert $ denote the size of $G$,
\emph{i.e.} the number of its nodes and $\left\vert L\right\vert $ its number
of edges. Unless specified otherwise we shall assume $\left\vert V\right\vert
=n$ and $\left\vert L\right\vert =m$.\medskip
A subgraph $H=G(V',L')$ of a graph $G$ is a graph such that all its nodes and its edges
are in $G$:\
\[
V' \subseteq V\left( G\right) \text{ and }L' \subseteq L\left(G\right) \text{.}
\]
If $H$~is a subgraph of $G$ then $G$ is called the \emph{supergraph} of
$H$.\bigskip
If a subgraph \textit{G(V')} of $G$, with $ V'\subseteq V $, contains \textit{all} edges $ (u,v) \in L $ with $ u,v \in V' $, then \textit{G(V')} is an \textit{induced subgraph} of \textit{G}. Unless stated otherwise, we shall use the term subgraph to denote an induced subgraph and consider only subgraphs \textit{G(V')} with at least three nodes and three edges.\\
A \textit{path} $ P_{l}$ is a sequence of distinct adjacent nodes of G, $ \left\lbrace u,x_{1},x_{2},..,x_{l-1},v\right\rbrace $, and consecutive incident edges $ \left\lbrace (u,x_{1}), (x_{1},x_{2}),..,(x_{l-1},v)\right\rbrace $ joining two nodes $ u $ and $ v $ in $ G $.\
Its \emph{length} is the number $ l $ of its
edges. A \emph{chordless path} $P_{l}$ is a path in which no two
non-consecutive nodes $x_{i}$ and $x_{j}$ ($\left\vert i-j\right\vert >1$) are adjacent. Two
nodes are connected in $G$ if there is a path joining them. The \emph{distance
}$d_{G}\left( u,v\right) =d\left( u,v\right) $ \ between two \ nodes $u$
and $v$ of $G$ is the length of a shortest path joining $u$ and $v$ in $ G $. If the
nodes are not connected then $d\left( u,v\right) $ is defined as
$\infty$.
The diameter $dm\left( G\right) $ of $G$ is the largest distance
between nodes in $G$.\\
A \textit{k-club} in $ G $ is an induced subgraph of \textit{G} of diameter at most $ k $ \citep{Mokken1979,Mokken2008}. It is a \textit{maximal} \textit{k-club} of $ G $ if there is no larger \textit{k-club} in $ G $ which includes it. A \textit{maximum} \textit{k-club} is one with the largest size in \textit{G}.\\ Unless stated otherwise in this paper, \textit{k-club}, respectively \textit{2-club}, \textit{of G} will denote a \textit{maximal} \textit{k}-club, or \textit{2}-club, because \textit{k}-clubs in \textit{G}, which are included in larger \textit{k}-clubs, are not of primary interest here. We shall refer to graphs of diameter at most $ 2 $ as \textit{2-clubs}.
A cycle of $G$ is a closed path in $G$: its first and last nodes coincide and no other node occurs more than once. Its length $\left(l\right) $ is the number of edges (or nodes) of it. The smallest cycle $\left(C_{3}\right) $, a triangle, has length three. A graph with cycles is \emph{cyclic}. A cycle which is an induced subgraph of $ G $ is called a chordless cycle or (for $ l > 3 $) a hole of \textit{G} \citep{Nikolopoulos2007}.\\ Unless stated otherwise 'cycle' will denote a $ C_{3} $ or a hole of $ G $.
An edge $\left( u,v\right) $ of a cyclic graph can be part of multiple cycles, which we shall refer to as its cycles. Its removal from $ G $ can increase some distances between nodes in $
G $. For instance, the distance $ d(u,v) $ then increases from $ 1 $ to $ l-1 $ if its shortest cycle is a $ C_{l} $.\footnote{For related points see \citep{Granovetter1973,Everett1982}}
The \emph{degree} $ d_{G}(u) =d\left( u\right) $ of a node $u$
is the number of edges incident with $u$, which in a simple graph is equal to
its number of neighbors. An isolated node has degree $0$. A \emph{pendant} is a
node with a single neighbor and has degree $1.$ The \emph{average degree} of a
graph is $\bar{d}_{G}=\frac{2m}{n}$.\
For a connected graph $G$ the degree $ d_{G}\left( u\right) $ of a node $ u $ takes values in the interval \[1\leq\delta\leq d_{G}(u)\leq\Delta\leq |V\left( G\right)| -1= n-1\text{.}\]
where $\delta$ and $\Delta$ are the minimum and maximum degrees of $G$.
A component of a graph is a maximal connected subgraph. A cutpoint is a node, the removal of which increases the number of components, and a bridge is an
edge with the same property. A graph with cutpoints is called \emph{separable}. Connected graphs without cutpoints are called \emph{nonseparable (n-s)} or, alternatively, \emph{2}-connected or bi-connected, and have minimum degree $\delta$ $\geq2.$ Hence they have no pendants. A \textit{bicomponent} of a graph is a maximal biconnected subgraph and is part of a component of that graph.
Such a (sub)graph is also called a \emph{block}.\
Unless specified otherwise we shall assume the simple graph and network to be
connected, thus consisting of a single component.
A connected graph with no cycles (\emph{acyclic}) is called a \emph{tree}.
A \emph{spanning tree} of a graph $G$ is an acyclic connected subgraph containing all nodes of $G$; every connected graph has at least one spanning tree. A \emph{shortest spanning tree} (s.s.t.) of $G$ is a spanning tree with the smallest diameter.
In a \textit{complete} graph $K_{l}$ all $l$ nodes are mutually adjacent and its diameter $dm\left( K_{l}\right) =1$. A \textit{clique} of a graph $ G $ is an induced subgraph of $ G $ which is a complete graph. It is a \textit{maximal} clique of $ G $ if there is no larger clique in $ G $ containing it. A \textit{maximum} clique of $ G $ is one with the largest size.
\\
For a node $u$ $\in V\left( G\right) $ we distinguish:
\begin{itemize}
\item the $k$-\emph{neighbors} of $u$:\emph{\ }$V_{k}(u)$ is the set of all nodes
$v\in V(G)$ with\\ $d(u,v)=k, k=1,...,$ $dm(G)$. Note that $u\notin V_{k}(u)$.
\item the $k$-\emph{neighborhood} of\emph{\ }$u$:\emph{\ }$N_{k}(u)=\bigcup
\limits_{i=1}^{k}$ $V_{i}(u)$, the set of all nodes $v\in V(G)$ with
$d(u,v)=1,2,...,k\leq$ $dm(G)$. Note that $u\notin N_{k}(u)$. The \emph{closed
}$k$-\emph{neighborhood} of\emph{\ }$u$ is defined as $\bar{N}_{k}(u)=N_{k}(u)\cup\left\{ u\right\} $.
\end{itemize}
\medskip
The $k$-\emph{degree} $d_{G}^{\left( k\right) }\left( u\right) = d_{k}\left( u\right) =\left\vert N_{k}(u)\right\vert $ is the size of the
$k$-neighborhood of $u$.
We shall in particular consider the $2$-\emph{neighborhoods}
of nodes \textit{u} of $G$: $N_{2}(u)$ and $\bar{N}_{2}(u)$. The $2$-degrees of the nodes of $G$, $( d_{2}\left( u\right): u \,\in \,V)$, are bounded by the minimum and maximum $2$-degrees
given by $\delta_{2}\left( G\right) =$ $\delta_{2}$ and $\Delta_{2}\left(
G\right) =$ $\Delta_{2}$ in the interval \[2\leq\delta_{2}\leq\Delta_{2}\leq
|V\left( G\right)| -1=n-1.\]
The $k$-\emph{ego-network} of $u$ in $G$ is the subgraph $G(\bar{N}_{k}(u))$ induced by the nodes of its closed $k$-neighborhood, to be denoted as $\mathcal{E}_{k}^{G}(u)$ (or $ \mathcal{E}_{k}(u) $ if the context is clear).
Note that $u$ thus is
part of its ego-network and is its central ego. We shall in particular consider
the ego-networks $\mathcal{E}(u)=\mathcal{E}_{1}(u)$ and $\mathcal{E}_{2}(u)$ of nodes, with sizes
$\left\vert \bar{N}_{1}(u)\right\vert $ and $\left\vert \bar{N}_{2}
(u)\right\vert $.
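These neighborhoods are straightforward to compute by breadth-first search. A minimal sketch on a small hypothetical acquaintance graph (the graph and node labels are illustrative, not taken from the text); inducing the subgraph on $\bar{N}_{k}(u)$ would then give the ego-network $\mathcal{E}_{k}(u)$:

```python
from collections import deque

def k_neighborhood(adj, u, k):
    """N_k(u): all nodes at distance 1..k from u (u itself excluded), via BFS."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        x = queue.popleft()
        if dist[x] == k:
            continue  # do not expand beyond distance k
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return {v for v, d in dist.items() if 1 <= d <= k}

# Toy acquaintance graph (hypothetical): a path with a triangle attached.
adj = {1: {2}, 2: {1, 3, 6}, 3: {2, 4, 6}, 4: {3, 5}, 5: {4}, 6: {2, 3}}

n1 = k_neighborhood(adj, 3, 1)  # V_1(3) = {2, 4, 6}
n2 = k_neighborhood(adj, 3, 2)  # N_2(3) = {1, 2, 4, 5, 6}
d2 = len(n2)                    # 2-degree d_2(3) = |N_2(3)| = 5
print(sorted(n1), sorted(n2), d2)
```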
\emph{Twinned} ego-networks occur when the ego-networks of two or more nodes
$u_{0},u_{1},...$ coincide:
\[
\mathcal{E}(u_{0})=\mathcal{E}(u_{1})=...
\]
thus forming a single ego-network with multiple egos $u_{0},u_{1},..$. Its ego
nodes are called \emph{twinned nodes} or just \emph{twins}.
The set of
central egos $\left\{ u_{0},u_{1},...\right\} $ of a twinned ego-network forms a \textit{clique} and is called
its \emph{center}. Each center $\left\{ u_{0}
,u_{1},...\right\} $ can be represented by one of its ego-nodes $u_{0}$. We can
accordingly define a reduced node set $V^{\left( c\right) }\left( G\right)
$ as the set of ego-nodes of $G$, including just a single ego node $u_{0}$ from
each twinned ego-network.
Observe that if $\mathcal{E}(u_{0})=\mathcal{E}(u_{1})=...$ then $\mathcal{E}_{k}(u_{0})=\mathcal{E}_{k}(u_{1})=...$
for $k \geq 2$, so that for twinned ego nodes all their $k$-ego-networks are twinned ego-networks.
Moreover, as nodes can belong to various (sub)graphs, it should be stressed that the ego-network $ \mathcal{E}_{k}^{H}(u) $ of a node $ u, u\in $ $V(H)\subseteq\, V(G), $ is determined by the particular (sub)graph $ H $ of $ G $ in which it is induced.
In this paper close communities, such as acquaintance networks, are studied in
the form of simple (sub)graphs of diameter at most 2.\footnote {\label{not:k-Bor} The theorems and corollaries in this paper can be extended and proven for the general case of diameter \textit{k}, \textit{e.g.} \textit{k}-clubs, \textit{k}-boroughs, etc. Given the focus
of this paper on diameter 2, and to simplify presentation and analysis accordingly, we shall formulate our results mainly for this special case.} In a close community of that type the closed 2-neighborhood of each of its members covers its complete population: the 2-ego-network ($ \mathcal{E}_{2}(u) $) of each node $ u $ coincides with the network of that community.
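The defining property translates directly into a test: a connected graph has diameter at most 2 exactly when every two nodes are adjacent or share a common neighbor. A minimal sketch with adjacency sets (the example graphs are illustrative):

```python
from itertools import combinations

def is_2club(adj):
    """Diameter <= 2: every pair of nodes is adjacent or has a common neighbor.
    A pair in different components fails the test, so disconnected graphs
    are rejected as well."""
    return all(v in adj[u] or adj[u] & adj[v]
               for u, v in combinations(list(adj), 2))

star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}  # spanning star
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}    # 5-cycle: diameter 2
p4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}               # 4-node path: diameter 3

print(is_2club(star), is_2club(c5), is_2club(p4))  # True True False
```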
\section{Close communities as \textit{2}-clubs of a network}
Close communities are closely knit in the sense that every pair of members is adjacent or has at
least one common neighbor, where the neighboring relationship represents a
durable or stable acquaintance, contact, or association relation. They are modeled by \emph{2-clubs}: graphs of diameter at most 2. Mokken \citep{Mokken1980}
characterized such graphs in terms of the diameter $ 2 $, $ 3 $, or $ 4 $ of a shortest spanning tree
(s.s.t.) \emph{i.e.} a spanning tree with smallest possible diameter (assuming $ |V(G)|\geq 3 $), as a measure of
their compactness.
\subsection{Close communities: hamlets, social circles, coteries}
\textit{2}-Clubs can only have s.s.t.'s with diameter 2, 3, or 4,
corresponding to the following three types:\bigskip
1: \emph{Coteries}. A \emph{coterie} is a \textit{2}-club with a
shortest spanning tree of diameter 2, corresponding to a spanning
star, formed by one central node $u_{0}$, which is adjacent to all other
nodes. Hence a coterie is the ego-network $\mathcal{E}\left( u_{0}\right) $ of its
central ego $u_{0}$. When a coterie has several s.s.t.'s, each with central
nodes $u_{0},u_{1},...,$ it is a twinned ego-network with twinned ego nodes
$u_{0},u_{1},...$, with the extreme case of a clique (diameter 1), where
each node is the center of a spanning star. Thus a clique is a special case of
a coterie. The smallest separable coterie is a tree of three points.
The smallest nonseparable coterie is $C_{3}$, a triangle (diameter 1).\\
2: \emph{Social circles.} A \emph{social circle} is a \textit{2}-club with
an s.s.t. of diameter 3. Because every spanning tree with odd diameter has a
center consisting of two adjacent nodes \citep{Harary1969,Harary1994}, a social circle
has at least one \emph{central pair} of neighbours (adjacent nodes) $u_{0}v_{0}$, which
together are adjacent to all the other nodes (a \emph{coupled star}; see Fig. 4 in \citep{Mokken1980}). Hence a social circle is a \textit{2}-club such that there is at least one (central) edge $\left( u,v\right) $ with $V_1(u)\cup V_1(v)=V(G)$.
The smallest social
circle is $ C_{4}$, a rectangle (diameter 2).\medskip
3: \emph{Hamlets}. A \emph{hamlet} is a \textit{2}-club with an s.s.t. of
diameter 4. Such an s.s.t. (a \emph{double, 2-step star}; Fig. 5 in \citep{Mokken1980})
can be obtained in two steps from any node of the graph as its center.
Hence a hamlet has neither a central node with a spanning star, nor a central
adjacent pair of nodes on a coupled star. Each node can be used as the starting node and center of an
s.s.t. The smallest
hamlet is $C_{5}$, a pentagon (diameter 2).\\
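The three types can be recognized directly from the characterizations above: a coterie has a node adjacent to all others (spanning star), a social circle has a central edge covering all nodes (coupled star), and a hamlet has neither. A minimal illustrative sketch (the function name is ours), again with a neighbour-set dict, and assuming the input graph is already known to have diameter at most 2:

```python
def classify_2club(adj):
    """Classify a 2-club (graph of diameter <= 2) by its s.s.t. diameter.

    `adj` maps each node to its set of neighbours. Illustrative sketch:
    assumes the graph really is a 2-club.
    """
    nodes = set(adj)
    # coterie: some central ego adjacent to all other nodes (spanning star)
    if any(adj[u] | {u} == nodes for u in nodes):
        return "coterie"
    # social circle: some central edge (u, v) whose closed neighbourhoods
    # together cover all nodes (coupled star)
    for u in nodes:
        for v in adj[u]:
            if adj[u] | adj[v] | {u, v} == nodes:
                return "social circle"
    # otherwise: hamlet (s.s.t. of diameter 4)
    return "hamlet"
```

Applied to the smallest nonseparable examples, the triangle $C_{3}$, the rectangle $C_{4}$ and the pentagon $C_{5}$ are classified as coterie, social circle and hamlet respectively.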
\input{figure1.tex}
We summarize the above observations in the following theorem.
\begin{theorem}\label{thm:types2c} Three types of \textit{2}-clubs.
\item (i) A \textit{2}-club is either a coterie, a social circle or a hamlet.
\item (ii) Social circles and hamlets are nonseparable.
\item (iii) Coteries can be separable or nonseparable.
\end{theorem}
Examples of these types are given in Figure \ref{fig:2Ctypes}. If a \textit{2}-club is separable, then it must have a single spanning star and a corresponding single central node, which is its cutpoint \citep[p.~6]{Mokken1980}. Hence it is a separable ego-network and coterie. Thus only \textit{coteries} can be separable, with their single central node as cutpoint (see Figure \ref{fig:2Ctypes}(d)), while twinned coteries are always nonseparable (cf. Figure \ref{fig:2Ctypes}(c)).\\
The smallest nonseparable examples of each type are the cycles $C_{3}$ (coterie), $C_{4}$ (social circle), and $C_{5}$ (hamlet).\footnote{ Note that $ C_{3} $ has diameter 1 and is a 1-club also.}
\\
The next theorem shows how the types of nonseparable \textit{2}-clubs are formed by these cycles.
\medskip
\begin{theorem}
\label{thm:k-edges}
Let $ G $ be a nonseparable \textit{2}-club, then for each edge of $ G $ its shortest cycle is $ C_{3}$, $ C_{4} $, or $ C_{5} $, and
\item (\textit{i}): if $ G $ is a coterie: for each edge this is a triangle $ C_{3} $;
\item (\textit{ii}): if $ G $ is a social circle: for each edge this is a triangle $ C_{3} $ or a rectangle $ C_{4} $;
\item (\textit{iii}): if $ G $ is a hamlet: for each edge this is a triangle $ C_{3} $, a rectangle $ C_{4} $, or a pentagon $ C_{5} $.
\end{theorem}
\begin{proof}
No edge of $ G $ can be on a shortest cycle $ C_{k} $ for $ k > 5 $, because distances along a shortest cycle are realized in $ G $, so $ G $ would then contain two nodes at distance greater than 2.\\
(\textit{i}) \textit{coterie}: $ G $ has a shortest spanning tree (s.s.t.) consisting of a central node adjacent to all other nodes of $ G $. Hence all other edges joining nodes of $ G $ are on at least one triangle $ C_{3} $ of $ G $;\\
(\textit{ii}) \textit{social circle}: $ G $ has an s.s.t. consisting of a central edge $\left( u,v\right) $ with $V_1(u)\cup V_1(v)=V(G)$. Hence any other edge joining nodes of $ G $ forms either a triangle with node \textit{u}, \textit{v} or edge $ (u,v), $ or a square on the edge $ \left( u, v\right) $;\\
(\textit{iii}) \textit{hamlet}: $ G $ has an s.s.t. such that all other edges joining nodes of $ G $ are on triangles, squares, or pentagons.
\end{proof}\\
We shall call these cycles \textit{basic cycles}. More generally, we shall call the set of cycles of type $ \lbrace C_{3}, C_{4}, C_{5} \rbrace $ of a graph $ G $ its \textit{set of basic cycles}, or \textit{basic set} $ \mathcal{C}_{\left[ 3,5\right] }$; as we shall see below, these cycles form its nonseparable \textit{2}-clubs. Moreover, we shall call any edge $ (u,v) $ of \textit{G} which is on at least one basic cycle of \textit{G} a \textit{basic edge} of \textit{G}.
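A basic edge can be recognized by noting that the shortest cycle through an edge $(u,v)$ has length one plus the distance between $u$ and $v$ after removing that edge. A small sketch (helper names are ours), assuming the neighbour-set dict representation and comparable node labels:

```python
from collections import deque

def shortest_cycle_on_edge(adj, u, v):
    """Length of a shortest cycle through edge (u, v): one plus the
    distance from u to v in the graph with that edge removed.
    Returns None if (u, v) is a bridge. Illustrative helper.
    """
    dist = {u: 0}
    queue = deque([u])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if (x, y) in {(u, v), (v, u)}:   # skip the removed edge
                continue
            if y not in dist:
                dist[y] = dist[x] + 1
                if y == v:
                    return dist[y] + 1
                queue.append(y)
    return None

def basic_edges(adj):
    """Edges whose shortest cycle is C3, C4 or C5 (the basic set)."""
    return {(u, v) for u in adj for v in adj[u]
            if u < v and (shortest_cycle_on_edge(adj, u, v) or 9) <= 5}
```

Every edge of the pentagon $C_{5}$ is basic, while no edge of the hexagon $C_{6}$ is, since its shortest cycle has length 6.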
\subsection{The \emph{2}-clubs of a graph or network}
A \emph{k-club} of a simple graph $G$ is a maximal induced subgraph of diameter at most \emph{k} \citep{Mokken1979,Mokken2008}. It was introduced as a generalized clique concept to distinguish it
from $k$-cliques of a graph, which were defined as maximal clusters of nodes of a graph
within \emph{distance} $k$ in the distance matrix of that graph \citep{Luce1950}.
However, considered as subgraphs, $k$-cliques need not be connected, whereas
$k$-clubs, due to the diameter restriction, are guaranteed to be connected
subgraphs. The $k$-clubs of a network correspond to locally autonomous
subnetworks in the sense that their nodes can communicate or reach
each other within distance \textit{k} along paths involving only nodes of the
$k$-club, and not outside nodes in the larger supernetwork, as would be the
case for $k$-cliques. Occasionally a $k$-club happens to be a $k$-clique as
well, in which special case it is called a \emph{k-clan}.\footnote{ This case was called a sociometric clique by Alba \citep{Alba1973}.}
Recently \emph{k}-clubs have found interest and applications in many network-oriented disciplines, such as telecommunication \citep{Balasundaram2006}, biology \citep{Balasundaram2005}, genetics \citep{Pasupuleti2008}, forensic data mining \citep{Memon2006}, web search \citep{Terveen1999}, graph mining \citep{Cavique2009}, and language processing \citep{Miao2004,Gutierrez2011}.\\
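The contrast with $k$-cliques noted above can be verified on a minimal example: in the star $K_{1,5}$ the five leaves form a 2-clique, since every pair is within distance 2 via the hub, yet they induce an edgeless, disconnected subgraph and hence no 2-club. A small sketch (helper name is ours), using a neighbour-set dict:

```python
from collections import deque

def dist(adj, s, t):
    """BFS distance between s and t in the whole graph (None if unreachable)."""
    seen = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return seen[u]
        for v in adj[u]:
            if v not in seen:
                seen[v] = seen[u] + 1
                queue.append(v)
    return None

# Star K_{1,5}: hub 0 adjacent to leaves 1..5.
star = {0: {1, 2, 3, 4, 5}, **{i: {0} for i in range(1, 6)}}
leaves = {1, 2, 3, 4, 5}

# The leaves form a 2-clique: every pair is within distance 2 in the graph...
assert all(dist(star, u, v) <= 2 for u in leaves for v in leaves)
# ...but the subgraph they induce has no edges at all, so it is disconnected
# and certainly not a 2-club.
assert all(star[u] & leaves == set() for u in leaves)
```

The distances realizing the 2-clique pass through the hub, a node outside the cluster, which is exactly the local autonomy that $k$-clubs rule out.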
Above we defined close communication, contact or association in connected networks and graphs as connectedness along paths of length at most 2 and, accordingly, \emph{close communities} as \emph{2-clubs} of a graph or network.\\
As such the concept of \textit{2}-clubs of a network is of central importance for the analysis of close communities and close communication structures
in networks. In that analysis, however, the first type of \textit{2}-club (ego-network or coterie) is of subordinate interest compared to the other two, the social circles and hamlets.\\
\paragraph{\textit{Three types, three levels of close communication}} These three types of \textit{2}-club imply different perspectives or levels of local communication:\
\begin{itemize}
\item \textit{Level 1}, the most local (ego-network or \textit{coterie}). Coteries in a graph are rather restricted forms of close communities, as they correspond to the (maximal) ego-networks of a graph, which define and span that graph. There is a central ego node, and all close communication is possible via that ego within its ego-network: tightly meshed, involving triangles only.
Thus every ego-network $\mathcal{E}_{1}(u) $ of $ G $ is a rather trivial coterie in \textit{G}
with its ego node(s) $u$ as the center of a spanning star joining all its neighbors.
However, only when it is not included in a larger \textit{2}-club of $G$, and is therefore maximal, is it a coterie \textit{of} $G$.
\item \textit{Level 2}: intermediate (\textit{social circle}). There is no central node but instead at least one central pair of nodes: two adjacent neighbors, forming a central edge, which together are adjacent to the other nodes of the social circle. All close communication is possible via two central neighbours within (parts of) their ego-networks: more loosely meshed, along triangles and rectangles.
\item \textit{Level 3}, the widest (\textit{hamlet}). In hamlets there are no central nodes or central pairs of nodes, and all close communication is between (parts of) ego-networks, widely meshed along triangles, rectangles and pentagons.
\end{itemize}
Thus \emph{coteries} are limited forms of close communities, as they correspond to (maximal) ego-networks of $ G $, varying from stars to cliques. \\
Moreover, any ego-network which, as an induced subgraph of \textit{G}, is separable, \textit{i.e.} has a cutpoint, is a coterie of \textit{G}, because any subgraph of $ G $ containing that ego-network will have diameter larger than 2, as can be verified easily.\
In particular, every pendant of $ G $ promotes the ego-network of its single neighbor to a coterie
of $ G $ (cf. Figure \ref{fig:2Ctypes}(d)). Similarly, long isolated paths $ P_{l} $ in $ G $ are formed by overlapping path segments $ P_{3} $, which are overlapping separable coteries of $ G $, consisting of one central node and two pendant neighbors.
As a consequence any graph or network will also show a multitude of rather trivial (separable) coteries.
Hence, from a perspective of close communication, the coteries of a network $G$ are relatively elementary, if not trivial, \textit{2}-clubs as such, when compared with the social circles and hamlets of $G$.
They are confined to the level of local communication \textit{within} their ego-network, whereas the hamlets and social circles involve the wider levels of close communication \textit{between} and across (parts of) different ego-networks of $G$.\\
Our main focus will be on the proper types of \textit{2}-clubs: social circles and hamlets. Moreover, we will consider only \textit{2}-clubs with at least three nodes and three edges.\\
\subsection{The boroughs of a graph or network}
The nonseparable \emph{2}-clubs of a network or graph $G$ are contained in the bicomponents of $G$. We shall now introduce a new type of maximal subgraph of $G$, always contained in a single bicomponent of $G$, which we call a \emph{borough} of $G$.\\
The main result of this section is that each borough contains nonseparable \emph{2}-clubs of $G$, that each nonseparable \emph{2}-club of $G$ is a nonseparable \textit{2}-club of exactly one borough of $G$, and that both nonseparable \textit{2}-clubs and boroughs consist of \textit{edge chained} basic cycles: $ C_{3} $ (triangles), $ C_{4} $ (squares) and $ C_{5} $ (pentagons).\\
Two cycles of a graph $ G $ are said to be \emph{edge connected} when they share at least one common edge.
More generally, a pair of basic cycles $C^{\alpha}, C^{\beta} \in\mathcal{C}_{\left[3, 5\right] }$ of \textit{G} is \emph{edge chained} in \textit{G} if they are edge connected, or if there is a sequence of edge connected basic cycles $C^{1},...,C^{i},...,C^{c}\in\mathcal{C}_{\left[3, 5\right] }$ of \textit{G} such that $C^{\alpha}$ is edge connected with $C^{1}$, $C^{c}$ with $ C^{\beta} $, and each consecutive pair of basic cycles $C^{i}$,
$C^{i+1}\in\mathcal{C}_{\left[3, 5\right] }$ is edge connected as well.\\
Using these concepts we can now define a \textit{borough} in a graph.\footnote{ \citep{Batagelj2007} introduced \textit{k}-gonal connectedness of cycles. Our edge chained connection of basic cycles corresponds to their 5-gonal connectedness.}
\begin{definition}\label{Def:Borgh}
A borough in a graph \textit{G} is an induced subgraph of \textit{G} such that each of its edges is on a shortest cycle $ C_{s}\,\in\,\mathcal{C}_{[3,5]}$, the basic set of \textit{G}, and all pairs of its basic cycles are edge chained in \textit{G}.
\end{definition}
A borough of \textit{G} is \textit{maximal}, denoted as a \textit{borough} (\textit{B}) \textit{of} \textit{G}, if it is not contained in a larger borough of \textit{G}. Unless specified otherwise we shall consider only maximal boroughs of $ G $.\\
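Definition \ref{Def:Borgh} suggests a direct, if naive, computation: enumerate the cycles of length 3 to 5 and merge those that share an edge. The sketch below (brute-force, suitable only for small illustrative graphs; all helper names are ours) uses a union-find over the cycles and assumes comparable node labels in a neighbour-set dict:

```python
from itertools import combinations

def small_cycles(adj, max_len=5):
    """All simple cycles of length 3..max_len, each as a frozenset of
    (frozenset) edges. DFS from each cycle's minimal node; brute force."""
    cycles = set()
    def extend(path):
        u = path[-1]
        for w in adj[u]:
            if w == path[0] and len(path) >= 3:
                edges = zip(path, path[1:] + [path[0]])
                cycles.add(frozenset(frozenset(e) for e in edges))
            elif w not in path and w > path[0] and len(path) < max_len:
                extend(path + [w])
    for s in sorted(adj):
        extend([s])
    return cycles

def boroughs(adj):
    """Maximal sets of edge-chained basic cycles, as edge sets.

    Union-find over the basic cycles, merging cycles sharing an edge.
    Illustrative sketch of the definition, not an efficient algorithm.
    """
    cycles = list(small_cycles(adj))
    parent = list(range(len(cycles)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i, j in combinations(range(len(cycles)), 2):
        if cycles[i] & cycles[j]:           # edge connected cycles
            parent[find(i)] = find(j)
    groups = {}
    for i, c in enumerate(cycles):
        groups.setdefault(find(i), set()).update(c)
    return list(groups.values())
```

On a "bowtie" of two triangles sharing only a node, the two triangles are not edge connected, so two boroughs result, with the shared node as a touch point.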
A nonseparable \textit{2}-club is a special case of a borough as stated in the next proposition:
\begin{proposition}\label{thm:2c}
A nonseparable \textit{2}-club is a borough of diameter at most 2.
\end{proposition}
\begin{proof} Let \textit{G} be a nonseparable \textit{2}-club. By Theorem \ref{thm:k-edges} every edge of \textit{G} lies on a basic cycle. Let $ C^{i} $ and $ C^{j} $ be two basic cycles of \textit{G} which are not edge connected in \textit{G}, and let $e_{i} $ and $ e_{j} $ be two edges of $ C^{i} $ and $ C^{j} $ respectively. As \textit{G} has diameter 2 and is nonseparable, the endnodes of both edges must have common neighbours in \textit{G}, on intermediate edge connected basic cycles which are edge connected with $ C^{i} $ and $ C^{j} $, thus establishing the edge chained connection between them. Hence $ G $ is a borough.\end{proof}\\
Analogously, two nonseparable \emph{2}-clubs are
said to be \emph{edge connected} when they have at least one common edge.
Two nonseparable \emph{2}-clubs $H_{0}$ and $H_{k}$ are
\emph{edge chained} if they are edge connected or there is a sequence of edge connected $2$-clubs
$H_{0},H_{1},...,H_{i},...,H_{k}$, such that each consecutive pair
$H_{i}$, $H_{i+1}$ is edge connected.\\
Thus we can state the following proposition without proof.
\begin{proposition}
\label{thm:bor}
A set of pairwise edge chained nonseparable $2$-clubs is a borough.
\end{proposition}
An example of a borough, formed by the three edge chained \textit{2}-clubs of Figure \ref{fig:2Ctypes} is given in Figure \ref{fig:FigBorgh}.
\input{figure2.tex}
Thus boroughs and nonseparable \textit{2}-clubs of $ G $ are formed by cycles in its basic set $ \mathcal{C}_{\left[3, 5\right] } = \left\lbrace C_{3}, C_{4}, C_{5}\right\rbrace $.
\paragraph{Properties of boroughs of $G$}
Taking into account the maximality of boroughs of \textit{G}, a number of properties follow from these definitions and previous results. These are listed below as corollaries. They show that, from a perspective of close communication, boroughs can be seen as larger supercommunities packing or hosting close communities in a social network.\
\begin{corollary} Given the maximality and nonseparability of boroughs of \textit{G}:
\item \textit{(i)} Every borough of a network $G$ is contained in exactly one bicomponent of $G$;
\item \textit{(ii)} Two boroughs of \textit{G} cannot share a basic cycle of \textit{G}, and each basic cycle of \textit{G} is part of only one borough of \textit{G}.
\item \textit{(iii)} Each edge of a borough of \textit{G} is on a basic cycle of \textit{G} and all its basic cycles are part of that borough of \textit{G} only. Hence a basic edge of \textit{G} is a basic edge of one and only one borough of \textit{G}.
\item \textit{(iv)} Any edge of $ G $ which is not part of any borough of $ G $ is not a basic edge of \textit{G}, but either a bridge or on a shortest cycle $ C_{l}$ with $ l \geq 6$; such non-basic edges form the \textit{outback} of \textit{G}.
\item \textit{(v)} Thus the boroughs of \textit{G} are edge-induced subgraphs of \textit{G}: they are induced by the basic edges of \textit{G}, while the outback of \textit{G} is induced by its non-basic edges.
\end{corollary}\
The maximality and non-separability of boroughs also imply:
\begin{corollary} \label{ns-2C_G-B}
Every nonseparable \textit{2}-club of a network $G$ is a \textit{2}-club of exactly one borough of $G$.
\end{corollary}
Note that a nonseparable \textit{2}-club of a graph \textit{G} is either itself a borough of \textit{G}, or part of just one borough of \textit{G}. If it were part of two boroughs of \textit{G}, these would share a common edge and could be merged into a larger borough, which contradicts their required maximality.\\
However, the reverse of Corollary \ref{ns-2C_G-B} is true only for social circles and hamlets and not for nonseparable coteries, as stated in the following theorem:
\begin{theorem}\label{th:HSC_B_G}
Let \textit{B} denote a borough of \textit{G}.\
\item \textit{(i)} A subgraph of G is a hamlet or social circle of G if and only if it is a hamlet or social circle of the corresponding borough B of G.\
\item \textit{(ii)} A coterie of a borough \textit{B} of \textit{G} is either itself a coterie of \textit{G} or included in a larger coterie of \textit{G}.
\end{theorem}
\begin{proof}
\textit{(i): If}: let $ B $ be a borough of $ G $ and assume that a hamlet (or social circle) of $ B $ is not a \textit{2}-club of $ G $. Then, as it is a \textit{2}-club in $ G $, it must be contained in a larger \textit{2}-club of $ G $.\\
\textit{(a)}: that \textit{2}-club cannot be a nonseparable \textit{2}-club of $ G $, because then $ B $ would not be maximal in $ G $;\\
\textit{(b)}: if that \textit{2}-club is separable, then it must be a coterie of $ G $ with a unique central node, adjacent to all the other nodes of the \textit{2}-club (see section 3.1). But then that node must also be a node of $ B $ and the hamlet or social circle would be a coterie of $ B $ instead, contrary to assumption.\
\textit{(i): Only if}: follows directly from Corollary \ref{ns-2C_G-B}.\\
\textit{(ii)} If a coterie of a borough \textit{B} of \textit{G} is not also a coterie of \textit{G}, then it must be included in a larger 2-club of \textit{G}. That must be a coterie of \textit{G}, as by \textit{(i)} it cannot be a hamlet or social circle of \textit{G}, because then it would be included in the same social circle or hamlet of \textit{B} as well, contradicting its maximality in \textit{B}.
Thus a non-separable coterie of \textit{B} can be included in a larger, separable coterie of \textit{G} and therefore, though maximal in \textit{B} and sharing the central node, is not a coterie of \textit{G}.\end{proof}\\
A fictitious and elementary illustration is given in Figure \ref{fig:Graph_2-Boroughs}(a), which shows a simple graph \textit{G} with 29 nodes.\\
Considering only \textit{2}-clubs of at least three points and three lines, a straightforward count of the \textit{2}-clubs of the simple graph \textit{G} of Figure \ref{fig:Graph_2-Boroughs}(a) results in one hamlet, five social circles and 13 coteries of \textit{G}.\
The hamlet is the pentagon ($C_{5}$) formed by nodes \{\textit{13,14,16,17,18}\}. The social circles are identified by:
\begin{enumerate}
\item central pairs: (\textit{w,1}), (\textit{w,6}); size: 6;
\item central pairs: (\textit{6,7}), (\textit{6,v}), (\textit{7,u}); size: 5;
\item central pairs: (\textit{20,19}), (\textit{20,17}), (\textit{19,18}); size: 5;
\item central pairs: (\textit{12,15}), (\textit{15,14}), (\textit{12,13}); size: 5;
\item central pairs: (\textit{12,11}), (\textit{11,10}), (\textit{12,v}); size: 5.
\end{enumerate}
Graph \textit{G} has 13 coteries, of which one is the nonseparable ego-network of node \textit{3}, which is maximal as a \textit{2}-club.\
The other 12 coteries of \textit{G} are the separable ego-networks of the nodes: \textit{1, 7, 6, u, v, 17, 18, 13, 14, 12, 21} and \textit{24}.\
Note that the ego-network of node \textit{w} is nonseparable but not a 2-club of \textit{G}, because it is included in the social circle \textit{1} of \textit{G}.\\
\input{Graph_2-Boroughs.tex}
As shown in Figure \ref{fig:Graph_2-Boroughs}(b) graph \textit{G} has two boroughs: one indicated with red-bold edges and the other with blue-solid edges. Its \textit{outback} is given with black-dashed edges.\
If we consider only the close community area covered by these two boroughs of \textit{G}, we note that all nonseparable \textit{2}-clubs of \textit{G} are contained in the two boroughs:
\begin{itemize}
\item the top red-bold borough has two social circles, which are the social circles 1 and 2 of \textit{G}, and its nonseparable coterie of node \textit{3};
\item the bottom blue-solid borough has three social circles: the social circles 3, 4 and 5 of \textit{G}, as well as its hamlet.
\end{itemize}
Boroughs of a graph \textit{G} can have one or more common points, to be called \textit{touch points}, as illustrated in Figure \ref{fig:Graph_2-Boroughs}(b) by the neighbour nodes (\textit{u,v}). The extra bold red edge \textit{(u,v)} belongs to the top red-bold borough only, as it is part of its basic cycle \textit{u-v-6-7-u}. Its other cycle, \textit{u-v-12-13-18-19-u}, is a hole of \textit{G} of 6 nodes and edges, and not a basic cycle of the lower blue-solid borough. Had that hole been a basic cycle instead, the two boroughs would have been edge chained and, due to the required maximality of boroughs of \textit{G} (see Definition \ref*{Def:Borgh}), would have formed a single borough of \textit{G}. Thus edge (\textit{u,v}) is a basic edge for the top red-bold borough only.\\
More generally: if touch points of a graph \textit{G} are on common basic cycles of \textit{G}, then, due to the maximality of boroughs of \textit{G}, all these basic cycles belong to the same borough of \textit{G}.\\
Referring to the particular nature of ego-networks as coteries (see the conclusion of Subsection 3.2), we see in Figure \ref*{fig:Graph_2-Boroughs} that the separable coteries of \textit{G} are not fully covered by its boroughs. For instance, the ego-networks of touch points between boroughs, or between boroughs and the outback of a graph, are separable and therefore coteries of that graph. This is illustrated above for the ego-networks of nodes \textit{u} and \textit{v}. Though both are coteries of \textit{G}, they are dissolved as such, because their edges are distributed over the two boroughs and, for node \textit{v}, the outback of \textit{G}. Similarly, the ego-networks of nodes \textit{1} and \textit{14}, reduced by missing outback edges, are also (separable) coteries of their boroughs of \textit{G}, but not of graph \textit{G} itself, because they are included in the corresponding unreduced (separable) coterie of \textit{G}.
Moreover, the separable coteries in the outback of a graph, mainly stars, such as the ego-networks of nodes \textit{21} and \textit{24} in Figure \ref*{fig:Graph_2-Boroughs} \textit{(b)}, will be ignored by a focus on the boroughs only.\\
However, all of the coteries of the boroughs are either also coteries of \textit{G}, or included in a separable coterie of \textit{G} sharing its ego node:
- for the red-bold borough: its three separable coteries are either also coteries of \textit{G} (see nodes \textit{6} and \textit{7}) or included in a coterie of \textit{G}, \textit{i.e.} that of node \textit{1};
- for the blue-solid borough: its five separable coteries are either also coteries of \textit{G} (those of nodes \textit{17}, \textit{18}, \textit{13} and \textit{12}) or included in a coterie of \textit{G} with the same ego node (\textit{e.g.} node \textit{14}); its only nonseparable coterie, that of node \textit{3}, coincides with the corresponding nonseparable coterie of \textit{G}.\\
Consequently a graph or network \textit{G} can be partitioned into its boroughs and its outback, where its basic edges and their basic cycles induce the borough structure of \textit{G}, which contains its areas of \textit{close communication} and its \textit{2}-clubs as close communities, while its non-basic edges determine its \textit{outback} of more remote communication.
So, when the main focus of the analysis of a graph or network is on its close community structure, one might as well ignore its outback part and focus on its boroughs and the \textit{2}-clubs contained by them.\\
Lastly, it is well known that removal of an edge from a nonseparable graph
can increase its diameter. The next corollary shows that for boroughs this increase is limited to at most three.
\begin{corollary}
Let \textit{B} be a borough of \textit{G} and let $ B-(u,v) $, denote the subgraph obtained by removing an edge $ (u,v) $ from \textit{B}, then
\begin{equation*}
dm(B)\leq dm(B-(u,v)) \leq dm(B)+3.
\end{equation*}
\end{corollary}
\begin{proof}
Consider the set of all shortest paths containing $ (u,v) $ defining distances between pairs of nodes of \textit{B}. Note that such pairs can also be joined by alternative shortest paths in \textit{B} not containing \textit{(u,v)}.\\
As \textit{(u,v)} is on a basic cycle, its removal extends the distance between the nodes $ u $ and $ v $ by 1, 2, or 3 along the remaining part of the basic cycle(s) on \textit{(u,v)}. Thus, all distances between pairs of nodes of the set, and the diameter of $ B-(u,v) $, increase by at most 3.
\end{proof}\\
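The bound can be checked on the smallest hamlet: removing an edge from the pentagon $C_{5}$ raises the diameter from 2 to 4, within the stated limit of 3. A small sketch (helper name is ours), again with a neighbour-set dict:

```python
from collections import deque

def diameter(adj):
    """Diameter of a connected graph: BFS from every node."""
    best = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        best = max(best, max(dist.values()))
    return best

# C5: removing any edge turns the pentagon into the path P5.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
p5 = {u: {v for v in nbrs if {u, v} != {0, 1}} for u, nbrs in c5.items()}
assert diameter(c5) == 2
assert diameter(p5) == 4   # an increase of 2, within the bound of 3
```

An increase of exactly 3 requires a larger hamlet, as in the figure below.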
An increase of the diameter by exactly 3 implies that the removed edge is on a basic $ C_{5} $ of a hamlet of \textit{B}, as illustrated by Figure \ref{fig:min3} for the removal of the bold-lined edge.\\
\input{figure3.tex}
\textit{Summary}. We conclude that the set $\mathcal{B}\left(G\right)$ of boroughs of $ G $ contains the proper \emph{2}-clubs of $G$, as distributed across and within its boroughs.
Thus, where we defined \emph{2-clubs} as the basic type of \emph{close
community} in a network, we can see the \emph{boroughs} to which they belong as a
\emph{supercommunity} in that network, enveloping chained sets of such close communities.\
Moreover, in accordance with footnote~\ref{not:k-Bor}, these concepts and results can be extended to the general case of diameter $ k $. The corresponding $ k $-clubs (diameter at most $ k $) and $ k $-boroughs are then both formed from the basic set $ \mathcal{C}_{\left[3, 2k+1\right] } = \left\lbrace C_{3},...,C_{2k+1}\right\rbrace $ of basic cycles of length $ 3 $ to $ 2k+1 $.
For instance, since the early days of social network analysis (SNA) triangles and triad censuses have been of central importance in the analysis of
social networks (\emph{e.g.} \citep{Holland1970,Holland1971,Davis1972,Johnsen1985,Frank1988,Watts1998}). Such \lq very local structures\rq\ \citep{Faust2007} of direct communication between neighbours in triads correspond to \textit{1}-clubs and \textit{1}-boroughs, as formed by triangles only (\emph{e.g.} $ 3 $-cliques $\left( K_{3}\right) $ as in \citep{Palla2005}). It is not difficult to
see that the $1$-clubs and $1$-boroughs of a network are nested in the $2$-clubs and $2$-boroughs which are the subject of this paper.
\\
In other, \textit{e.g.} topological, contexts $ 3 $-clubs (diameter at most 3) and corresponding $ 3 $-boroughs will require for their formation the smallest \textit{3}-clubs, the hexagon ($ C_{6} $) and the heptagon ($ C_{7} $), in the corresponding basic set $\left\lbrace C_{3},...,C_{7}\right\rbrace $. A rather special case is that of a 'football' type of graph, a single borough consisting of pentagons and hexagons only, so that all its $ 3 $-clubs are formed by just two types from the basic set: the pentagon $\left( C_{5}\right) $ and the hexagon $ \left( C_{6}\right) $.
\section{Some applications}
With the introduction of $k$-clubs (\cite{Mokken1979}) it was pointed out that in practice their search, detection, and identification in all but small networks would be a hard, if not prohibitive, computing task, at the time beyond available hardware and algorithmic capabilities, such as the clique algorithm of Bron and Kerbosch (\cite{Bron1973}).
Later results in computational complexity theory demonstrated that for $ k\geq2 $ several variants of $ k $-club detection are NP-hard (\textit{e.g.} \cite{Bourjolly2002}; \cite{Balasundaram2005}), such as, for instance, finding a $ k $-club of size larger than $\Delta(G)+1 $ (\cite{Butenko2007}) or, more generally, for a given \textit{k}-club, finding a larger \textit{k}-club containing it (\cite{Pajouh2012}).
\subsection{Finding boroughs and \textit{2}-clubs}
Despite these theoretical limits, in the last decade resources and algorithm theory have made significant progress toward workaround, heuristic and practical algorithms to detect $k$-clubs \citep{Bourjolly2000,Bourjolly2002,Pasupuleti2008,Asahiro2010,Yang2010,Carvalho2011,Schafer2012,Chang2012,Veremyev2012,Hartung2012,Hartung2013,Pajouh2012,Shahinpour2012}. Most of these algorithms find either $ k $-clubs of at least a given minimum size in a given graph $G$, or the largest (in number of nodes) $ k $-club of $G$.\medskip
These sources inspired us to develop some specific software modules enabling us to detect both boroughs and $ 2 $-clubs in a simple graph.
From a perspective of community detection and network analysis it is more interesting to find (inclusion-wise maximal) $ 2 $-clubs than just the largest $ 2 $-clubs
(\textit{i.e.} maximum cardinality). Hence our approach of detecting \textit{2}-clubs in a graph was designed to achieve or approximate that purpose within the limits of available computational capabilities.\
In doing so we made use of the crucial intermediate position of the boroughs, as separate components of a graph or network, hosting its edge-chained $ 2 $-clubs: its hamlets, social circles and (part of) its coteries (see Theorem \ref*{th:HSC_B_G}). This suggested a two-step approach to finding all \textit{2}-clubs: first find all boroughs, then find all \textit{2}-clubs inside boroughs, taking into account the proviso at the end of Theorem \ref*{th:HSC_B_G} concerning the special nature of coteries of a network and of its boroughs.
We thus developed algorithms, conforming to Definition \ref*{Def:Borgh}, to detect all boroughs of a graph by joining and chaining cycles from its set of basic cycles (triangles, squares and pentagons), using available methods for finding all cycles (\textit{e.g.} \cite{Tiernan1970,Weinblatt1972,Fosdick1973,Johnson1975}) or, more specifically, for finding only cycles of given length (\cite{Alon1994,Yuster2011}).\medskip
Another set of algorithms was developed for finding all \textit{2}-clubs of a graph, sorted by type (coterie, social circle or hamlet).
Thus we could also detect the \textit{2}-clubs in separate selected (e.g. the largest) boroughs of a network.
The (usually many and overlapping) \textit{2}-clubs that are found are stored in a database, classified per borough according to the three possible types/levels of close communication (coteries, social circles, hamlets) and per type sorted according to size. They can then be inspected and analyzed by a Viewer interface. Details are given in \blind{\cite{Laan2012}} and in the available open source licensed package by \blind{\cite{Laan2014}}.
\subsection{Some real network results}
In this section we illustrate the concepts introduced above with some datasets, chosen to cover different data domains as well as to provide some analytic perspectives. The different data domains and associated network data are:
\begin{itemize}
\item the well-known small network of \textit{Zachary's karate club};
\item \emph{corporate board networks} as given by the interlocking directorate networks for the top 300 European firms for the year 2010;
\item \emph{co-authorship data} taken from the large DBLP dataset.
\end{itemize}
The Zachary data will illustrate the perspectives of \textit{2}-club analysis at the micro scale of small face-to-face networks.\
The European corporate network concerns a much larger network and the ensuing multitude of \textit{2}-clubs thus changes the analytic perspectives.
Finally we investigated the distribution of boroughs and their sizes in much larger datasets, as illustrated for the DBLP co-authorship data set.
\subsubsection{Zachary's karate club}
Zachary's (\cite{Zachary1977}) well known data set concerned a voluntary
association, a university student karate club, with a total membership
of about 60 persons/nodes. Zachary analyzed a valued network, where edges
denoted the observed number of 8 types of mutual interaction outside
karate lessons. He restricted his analysis to the main component of
34 interacting members, thus disregarding 26 other non-connected or
isolated members. After conflicting views two factions polarized around
two main opponents - node 1 (labeled 'Mr. Hi', the karate teacher)
and node 34 ('John A.', president and main officer) - and subsequently
split accordingly. Zachary predicted the composition of the splits
using a max-flow-min-cut algorithm.
For this paper we reanalyzed his data in the form of a simple undirected graph with an edge indicating at least one of the 8 types of interaction.
First we used a standard SNA package (\cite{Borgatti2002}) and then applied our borough and \textit{2}-club detection algorithms to the
relevant components. This simple connected network of 34 nodes has
diameter 5 and consists of two bicomponents (size 27 and 7), separated
by a common cut point (node 1: Mr. Hi), and a pendant (node 12) attached
to node 1 (Mr. Hi) as well. Both bicomponents prove to be boroughs
and node 1 (Mr. Hi) is a member, and touch point, of each of these.
Thus the ego-network of cutpoint Mr. Hi is a coterie in the larger graph and distributed over the two boroughs and the pendant node 12.
Given the face-to-face nature of this (subset of a) student association, it is not surprising that all its edges, except the pendant (1,12), are part of at least one \textit{2}-club in just one of the two boroughs.\\
The smallest borough of size 6 contains, in addition to touch node 1 (Mr. Hi), nodes 5, 6, 7, 11, and 17, which were not further considered by Zachary. It has diameter 2 and thus is a \textit{2}-club (a social circle) and a trivial borough as such.\\
The second, largest, borough of size 27 corresponds to the network analyzed in Zachary's paper. This borough has diameter 4 and contains 13 \textit{2}-clubs including 4 coteries (all separable),
8 social circles, and one hamlet. They are listed in Table~\ref{tab:kc}.
The two opposing leaders, Mr. Hi (node 1) and John A. (node 34), are
both part of just three \textit{2}-clubs:
\begin{itemize}
\item the 7-node ego-network (coterie) of node 32;
\item a social circle of size 14 (with central pair 34-14); and
\item the hamlet of size 8.
\end{itemize}
The latter two \textit{2}-clubs are depicted in Figure~\ref{fig:HiClubs}.\\
Moreover, each of the two opposing nodes (1 and 34) is part of five \textit{2}-clubs
excluding the other opponent. Hence, all thirteen \textit{2}-clubs contain at least
one of the two opponents: node 1 or node 34.
In particular the 14 node social circle and 8 node hamlet look like
negotiation forums of the two opposing sides. For instance, the hamlet
of 8 nodes connects the central egos (nodes 1, 34, 3, and 32) of
the 4 coteries.
\input{table1.tex}
\input{figure4.tex}
These four coteries represent the ego-networks of the two opposing nodes 1 (Mr. Hi: size 12) and 34 (John
A.: size 18), and the nodes 32 (size 7) and 3 (size 11), where node 3 appears to be a supporting 'lieutenant' node for Mr. Hi (node 1) and node 32 for his opponent John A. In terms of their \textit{2}-club memberships both nodes show extensive liaison connections with the opposing side.\\
Membership of particular \textit{2}-clubs appears to be a good predictor for
faction membership after the split. To keep within the bounds of this
paper, we illustrate this with the problematic node 9, the only node
mentioned explicitly by Zachary in his paper apart
from Mr. Hi (1) and John A. (34). Node 9 was problematic in the sense
that he was classed as a (mild) supporter of the side of 34 (John A.), but in the end showed up as a member of the opposing faction of Mr. Hi after the split. However, this move can be understood from an analysis of his \textit{2}-club memberships.\\
Node 9 was a member of eight \textit{2}-clubs, each of which included at least
one of the two opponents 1 or 34, and distributed as follows:
\begin{itemize}
\item five \textit{2}-clubs with only node 1 (Mr. Hi);
\item two \textit{2}-clubs with only node 34 (John A.); and
\item one \textit{2}-club with node 1 and node 34.
\end{itemize}
Moreover, in seven of its eight \textit{2}-clubs node 9 is accompanied by node
3, its neighbor and a firm Mr. Hi supporter, as we noted above.
So, on the basis of its \textit{2}-club affiliations alone, one would have
predicted node 9 to move (or stay) with the faction of Mr. Hi after
the split, as in fact he did. The \textit{2}-club analysis also revealed liaison roles of
two nodes, not mentioned as such in the Zachary paper: node 3 for
Mr. Hi (node 1) and node 32 for John A. (node 34).
\subsubsection{European corporate network 2010}
This network was constructed from the interlocking directorates of the largest 286 stock listed companies, as studied by Heemskerk \citep{Heemskerk2013}.\footnote{The European data for 2010 were kindly made available to us by Eelke Heemskerk.}
Its nodes designate the boards of individual companies and its edges indicate that the companies they connect share at least one common director in their boards. Hence the network provides channels of interpersonal contact and communication between companies at the level of their boards.\\
The source network for these 286 companies had one giant component of 259, which we chose for further analysis.
Apart from three trivial 'boroughs' of sizes 4, 3 and 3, we found one single giant borough of 225 companies.\\
This single borough, covering 87\% of the dominant component and 79\% of all firms, formed a compact sub-network in the corporate European network of 2010, consisting of 2128 overlapping or edge-chained \textit{2}-clubs of corporations, with a median \textit{2}-club size of 10 corporations in a size range of 4-27. This result confirms Heemskerk's (2013) original conclusion that by 2010 this European network appeared to be well integrated. However, with a diameter of 7 the borough was rather stretched.\\
\input{table2.tex}
The multitude of \textit{2}-clubs, to be expected for large networks, can be analyzed by means of views and selections from their database. \textit{Table} \ref{Bor2Comps} shows some results.\\
The first \textit{upper} part of \textit{Table} \ref{Bor2Comps} gives the distribution of these 2128 \textit{2}-clubs of the European borough over the three types and levels of close communication.\\
The \textit{first}, most local level of communication, the \textit{coteries}, formed 6.5\% of the \textit{2}-clubs of the borough. They were the ego-networks of 138 central companies: 62\% of the 225 companies in the borough. Together the nodes of these coteries covered practically all (coverage: 99.9\%) of the companies (nodes) of the borough.\
The ego-networks of the other 85 companies were not coteries of the borough but were included in or split over larger \textit{2}-clubs. That was, for example, the case for the German automotive company \textit{Volkswagen AG} and two large banks: the German \textit{Deutsche Bank AG} and the Spanish \textit{Banco Santander SA}.\
Thus this set of 138 coteries formed the local backbone of the borough, as the two next levels of more extended close communication, social circles and hamlets, are formed from parts of the ego-networks of their central companies.
Their composition strongly suggests a predominance of French \textit{2}-clubs in the borough: the 20 largest coteries consist of the ego-networks of 12 French, 3 German, 2 British firms, and a Swedish, a Belgian and a Swiss company.\\
The \textit{second} intermediate level of\textit{ social circles} was formed by one third (717: 33.7\%) of the \textit{2}-clubs of the borough, with a median size of 14 companies in a size range of 5-25. Together the social circles cover 89.4\% of all nodes of the borough. \\
The composition of the largest social circles again confirms the predominance of the largest French companies in the network. They were formed around one or more central pairs of major French companies (\textit{i.e.} pairs of central neighbors adjacent to all others in the \textit{2}-club).\
In \textit{Figure} \ref{fig:SCTotSuezCieNat} a detail is given of one large social circle of 25 companies, mainly French, with 19 French and one Franco-Belgian, 2 British, 2 Dutch firms and a Luxembourg company. It shows its densest part around its two central pairs, one formed by the French company \textit{Total SA} with the French firm \textit{GDF Suez SA} and the other by \textit{Total SA} and the Belgian company \textit{Compagnie Nationale \`{a} Portefeuille SA}.\\
The \textit{third} and widest level of close communication, the \textit{hamlets}, occupied a major part (1273: 59.8\%) of the \textit{2}-clubs of the European borough, with a median size of 16 in a size range of 5-24. Altogether the hamlets of the borough cover nearly all \textit{i.e.} 92.5\% of its 225 nodes. An example is given by \textit{Figure} \ref{fig:HamlABB} to which we will return later.\\
Heemskerk (2013, p. 91) cites \textit{Compagnie Nationale \`{a} Portefeuille SA}, a Belgian investment holding of the Fr\`{e}re family, as most involved in European interlocks, with 17 European and 2 national (Belgian) interlocks.
We investigate this conclusion further in terms of its participation in major \textit{2}-clubs of the European borough of corporate interlocks.\\
\input{figure5.tex}\
The second, \textit{lower} part of \textit{Table} \ref{Bor2Comps} summarizes this analysis. \textit{Compagnie Nationale \`{a} Portefeuille} was a member of almost half (990: 46.5\%) of the \textit{2}-clubs of the borough.\
At the most local level it was a member of 15 (10.9\%) of the coteries of the borough. Of these 15 ego-networks, identified by their central (ego) company and ordered by size, it was itself the center of the sixth-largest coterie (size 22). The central companies of the other 14 coteries were ten large French firms, two Luxembourg-based companies, another Belgian company and a German company.
This predominant francophone orientation suggests that it was more part of the French regional network than of a cross-European one. Widening the level of close communication to its membership of social circles and hamlets of the borough confirmed this impression: it was part of 284 social circles (39.6\%) and 691 hamlets (54.3\%). Among the largest social circles it formed part of one or more central pairs with the five largest French companies, as illustrated by Figure \ref{fig:SCTotSuezCieNat}, where it forms one of the two central pairs with the French company \textit{Total SA}.\\
We therefore studied its \textit{2}-club memberships together with those of the largest French bank, \textit{BNP Paribas SA}, as summarized in the lower part of \textit{Table} \ref{Bor2Comps}. \textit{BNP Paribas SA} itself was included in 467 (21.9\%) of the \textit{2}-clubs of the borough.
The combined membership of \textit{Compagnie Nationale \`{a} Portefeuille} or \textit{BNP Paribas} accounted for 1109 (52.1\%), or more than half, of the 2128
\textit{2}-clubs in the European borough. In 348 of those they participated together.
Consequently \textit{Compagnie Nationale \`{a} Portefeuille} participated in almost three quarters (74.5\%) of the \textit{2}-clubs to which \textit{BNP Paribas} belonged.\
Hence in terms of \textit{2}-club memberships the Belgian \textit{Compagnie Nationale \`{a} Portefeuille} was clearly a part of the center of the French corporate sub-network in 2010.
\input{figure6.tex}
Subsequent developments appear to support this conclusion. The controlling Belgian holding \textit{ERBE}, owned for 53\% by the Belgian Fr\`{e}re family and for 47\% by \textit{BNP Paribas}, removed \textit{Compagnie Nationale \`{a} Portefeuille} from the Belgian stock exchange on 2 May 2011, after a successful bid for the outstanding stock. This appeared to be part of a familial succession strategy and an agreement allowing \textit{BNP Paribas} to withdraw from \textit{ERBE}. In a press release of December 10, 2013, \textit{BNP Paribas} announced the completion of this arrangement through the purchase by the \textit{Fr\`{e}re Group} of the entire \textit{BNP Paribas} shareholding in \textit{ERBE}.\\
As the second firm most involved in European interlocks, with a reported 14 European and 2 national interlocks, Heemskerk (\textit{l.c.}) cites \textit{ABB Ltd}, a Swiss-based multinational corporation operating mainly in power and automation technology, such as robotics.
In this case his conclusion appears to be fully supported by investigating its \textit{2}-club memberships in the corporate European borough for 2010.
Not surprisingly it was central ego of a coterie (size 17) of the borough, consisting of companies from six European nations: 3 German, 2 French, 3 Swiss, 5 Swedish, 3 Dutch and 1 Finnish.
\textit{ABB Ltd} participated in 64 social circles (size 7-21): the five largest were solidly German, with central pairs from the largest German companies, followed by a number of mainly Swedish social circles and a number of social circles of mixed nationality.
At the widest level of close communication \textit{ABB Ltd} participated in 75 hamlets (size 5-19) of different nationalities. An example is given by the hamlet of Figure \ref{fig:HamlABB}, containing thirteen firms, among them three French, four British, one Swedish and one Swiss firm, \textit{ABB Ltd} itself.\\
For a more elaborate analysis, beyond the scope of this paper, we refer to \cite{Mokken2015b}.
\subsubsection{Boroughs in DBLP co-authorship networks}
We use the DBLP database of Computer Science publications to obtain some insights into the occurrence of boroughs in large real-life networks\footnote{Downloaded from \url{http://dblp.uni-trier.de/xml} at 2012-02-21.}. DBLP can be seen as a bipartite network consisting of authors and publications as nodes, connected by the \lq is author of\rq \space relation.
For a given integer threshold $t$, we induce an undirected co-authorship network between authors from the DBLP network by relating two author nodes when they have coauthored at least $t$ publications.
For $t$ between 5 and 10, Table~\ref{DBLP} contains basic statistics about the number of boroughs, and their distribution according to their size. Thus, for a threshold $t$, the number of nodes in the $t$-co-authorship network is the number of authors who have at least $t$ joint publications with one other author. The density of the resulting networks is fairly stable and slowly increases from $2\cdot 10^{-5}$ for $t=5$ to $4\cdot 10^{-5}$ for $t=10$.\\
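The induction of the $t$-co-authorship network can be sketched as follows; this is a minimal illustration with hypothetical author names, not the actual DBLP processing pipeline.

```python
from itertools import combinations
from collections import Counter

def coauthor_network(papers, t):
    """Project a bipartite author-publication structure onto authors:
    two authors become adjacent iff they co-authored at least t
    publications. `papers` is an iterable of author lists."""
    joint = Counter()
    for authors in papers:
        # count each unordered author pair once per publication
        for u, v in combinations(sorted(set(authors)), 2):
            joint[u, v] += 1
    adj = {}
    for (u, v), k in joint.items():
        if k >= t:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    return adj

# hypothetical authors: Ada-Bob co-wrote 3 papers, Bob-Cid 2, Ada-Cid 1
papers = [["Ada", "Bob"], ["Ada", "Bob"], ["Ada", "Bob", "Cid"], ["Bob", "Cid"]]
net = coauthor_network(papers, 2)
```

Only authors who reach the threshold with at least one colleague appear as nodes, which is exactly why the node counts in Table~\ref{DBLP} shrink as $t$ grows.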
We can draw several conclusions from this experiment:
\input{table3.tex}
\begin{itemize}
\item Large sparse networks contain relatively many boroughs; their number is roughly an order of magnitude smaller than the number of nodes in the network.
\item Large networks contain one \lq giant\rq \space borough, whose number of nodes is roughly an order of magnitude smaller than the number of nodes in the network.
\item The number of nodes of all boroughs except the \lq giant\rq \space one is small.
\item The sizes of the boroughs of the DBLP co-authorship networks are distributed according to a power-law.
\end{itemize}
Given the complexity of finding basic cycles in such large networks and the available computing capacity, we computed the boroughs only for $t$ from 5 to 10.
\section{Discussion}
Our reanalysis of the Zachary karate club network demonstrated the usefulness of \textit{2}-club analysis for relatively small 'very local' networks \citep{Faust2007}. The next example, concerning the networks of corporate interlocking directorates for Europe in 2010, illustrated the huge numbers of distinct but overlapping \textit{2}-clubs of the three types to be expected for larger, possibly dense networks. However, once the boroughs and their \textit{2}-clubs are detected, identified and stored, the challenge of their analysis can be met with the plethora of currently available statistical methods for searching, mining and matching massive databases.\\
Our exercise with the large DBLP data set shows that a much larger challenge will be how to combine the micro, \textit{i.e.} very local, in-depth focus on close communication by boroughs and \textit{2}-clubs with the global analysis of the massive Big Data networks that currently confront community detection. Promising techniques can be based on the analysis of appropriate segments of such networks, using their hierarchical modularities with techniques such as proposed by \cite{Blondel2008}, or by focusing on selected 2-neighborhoods.\\
Finally, some researchers (\textit{e.g.} \cite{Hartung2012}) have noted that the largest \textit{2}-clubs they found in real-world networks just coincided with the ego-network of a node with maximum degree ($\Delta\left(G\right)+1$). As the size of a coterie cannot be larger than that limit, any \textit{2}-club larger than the maximum degree plus one must be a hamlet or a social circle.
In our analyses of various real-world networks we likewise did not find a social circle or hamlet exceeding that limit of the largest coterie.\
As it is not difficult to construct examples of networks with hamlets or social circles that exceed that limit, a question for further research is to hunt for empirical, real-world datasets where that is indeed the case. Networks with no or limited preferential attachment seem likely candidates.\\
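One such construction, offered here as an illustrative sketch, is the complete bipartite graph $K_{3,3}$: it has diameter 2 (so it is a \textit{2}-club of size 6), maximum degree $\Delta(G)=3$, and any adjacent pair is a central pair, making it a social circle that exceeds the coterie bound $\Delta(G)+1=4$.

```python
def complete_bipartite(m, n):
    """K_{m,n} as an adjacency dict; the tuple node names are illustrative."""
    A = [("a", i) for i in range(m)]
    B = [("b", j) for j in range(n)]
    adj = {v: set(B) for v in A}
    adj.update({v: set(A) for v in B})
    return adj

def diameter_at_most_2(adj):
    # every pair is equal, adjacent, or shares a common neighbour
    return all(u == v or v in adj[u] or adj[u] & adj[v]
               for u in adj for v in adj)

k33 = complete_bipartite(3, 3)
max_degree = max(len(nb) for nb in k33.values())        # Delta(G) = 3
u, v = ("a", 0), ("b", 0)                               # an adjacent pair
central_pair = (k33[u] | k33[v]) >= set(k33) - {u, v}   # jointly dominating
```

Since no single node of $K_{3,3}$ dominates all five others, this \textit{2}-club is not a coterie, while the central pair certifies it as a social circle of size $6 > \Delta(G)+1$.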
\blind{\section*{Acknowledgment}
We thank Johan van Doornik for several suggestions in the initial stages of our research and are grateful to some referees for suggestions for improvement.}
\bibliographystyle{elsarticle-harv}
\section{Introduction}\label{s:intro}
The fundamental question of the final fate of a massive star,
when it exhausts its internal nuclear fuel and collapses continually
under the force of its own gravity, was highlighted by Chandrasekhar
way back in 1934 (Chandrasekhar 1934), who pointed out:
``Finally, it is necessary to emphasize one
major result of the whole investigation, namely, that the
life-history of a star of small mass must be essentially different from
the life-history of a star of large mass. For a star of small mass
the natural white-dwarf stage is an initial step towards complete
extinction. A star of large mass ($ > M_c$) cannot pass
into the white-dwarf stage, and one is left speculating on
other possibilities.''
We can see the seeds of modern black hole physics
already present in the inquiry made above on the final fate
of massive stars. The issue of endstate of large mass stars has,
however, remained unresolved and elusive for a long time
of many decades after that. In fact, a review of the status
of the subject many decades later notes, ``Any stellar core
with a mass exceeding the upper limit that undergoes
gravitational collapse must collapse to indefinitely high
central density... to form a (spacetime) singularity''
(Report of the Physics Survey Committee 1986).
The reference above is to the prediction by general
relativity, that under reasonable physical conditions,
the gravitationally collapsing massive star must terminate
in a spacetime singularity (Hawking \& Ellis 1973).
The densities, spacetime curvatures,
and all physical quantities must typically go to
arbitrarily large values close to such a singularity.
The above theoretical result on the
existence of singularities is, however, of a rather general
nature, and provides no information on the nature and
structure of such singularities. In particular, it
gives us no information as to whether such singularities,
when they form, will be covered within horizons of gravity
and hidden from us, or whether they could alternatively be visible
to external observers in the Universe.
One of the key unresolved questions in black hole physics
and gravitation theory today therefore is: are such singularities
resulting from collapse, which are super-ultra-dense regions
forming in spacetime, visible to external observers in the
Universe? Theorists generally believed that in such
circumstances, a black hole will always form covering the
singularity, which will then be always hidden from external
observers. Such a black hole is a region of spacetime from
which no light or particles can escape. The assumption
that spacetime singularities resulting from collapse would be
always covered by black holes is called the
Cosmic Censorship Conjecture (CCC; Penrose 1969).
As of today, we do not have any proof or any specific
mathematical formulation of the CCC available within the
framework of gravitation theory.
If the singularities were always covered in horizons and if
CCC were true, that would provide a much needed basis for the
theory and astrophysical applications of black holes. On the
other hand, if the spacetime singularities which result from a
continual collapse of a massive star were visible to external
observers in the Universe, we would then have the opportunity to
observe and investigate the super-ultra-dense regions in the
Universe, which form due to gravitational collapse and where
extreme high energy physics and also quantum gravity
effects will be at work.
My purpose here is to review the above and some of
the related key issues in gravitation theory and black hole
physics today. This will be of course from a perspective
of what I think are the important problems, and no claim
to completeness is made. In Section 2, we point out
that in view of the lack of any theoretical progress on CCC,
the important way to make any progress on this problem
is to make a detailed and extensive study of gravitational
collapse in general relativity. Some recent progress
in this direction is summarized. While we now seem to have
a good understanding of the black hole and naked singularity
formations as final fate of collapse in many
gravitational collapse models,
the key point now is to understand the genericity and stability
of these outcomes, as viewed in a suitable framework.
Section 3 discusses these issues in some detail. Recent
developments on throwing matter into a black hole and the effect
it may have on its horizon are pointed out in Section 4,
and certain quantum aspects are also discussed. The issue
of predictability or its breakdown in gravitational collapse
is discussed in Section 5. We conclude by giving a brief
idea of the future outlook and possibilities in the
final section.
\section{What is the final fate of a massive star?}\label{s:final fate}
While Chandra's work pointed out the stable configuration limit
for the formation of a white dwarf, the issue of the final fate of a star
which is much more massive (e.g. tens of solar masses) remains
very much open even today. Such a star cannot settle either as
a white dwarf or as a neutron star.
The issue is clearly important both in high energy astrophysics
and in cosmology. For example, our observations today on the
existence of dark energy in the Universe and its acceleration
are intimately connected to the observations of Type Ia supernovae
in the Universe. The observational evidence coming from
these supernovae, which are
exploding stars in the faraway Universe, tells us
how the Universe may be accelerating and the rate
at which such an acceleration is taking place. While Type Ia
supernovae result from the explosion of a white dwarf star,
at the heart
of a Type II supernova lies the phenomenon of a
catastrophic gravitational collapse of a massive star,
wherein a powerful shock wave is generated, blowing
off the outer layers of the star.
If such a star is able to throw away enough matter in
such an explosion, it might eventually settle as a neutron star.
But otherwise, or if further matter is accreted onto
the neutron star, there will be a further continual collapse,
and we shall have to then explore and investigate
the question of the final fate of such a massive
collapsing star. But other stars, which are more
massive and well above the normal supernova mass limits,
must straightaway enter a continual collapse mode
at the end of their life cycle,
without an intermediate neutron star stage. The final
fate of the star in this case must be decided by
general relativity alone.
The point here is, more massive stars which are tens of
times the mass of the Sun burn much faster and are far more
luminous. Such stars then cannot survive more than about ten
to twenty million years, which is a much shorter life span
compared to stars like the Sun, which live billions of years.
Therefore, the question of the final fate of such short-lived
massive stars is of central importance in astronomy
and astrophysics.
What happens then, in terms of the final outcome, when
such a massive star dies after exhausting its internal nuclear fuel?
As we indicated above, the general theory of relativity
predicts that the collapsing massive star must
terminate in a spacetime singularity, where the matter
energy densities, spacetime curvatures and other physical
quantities blow up.
It then becomes crucial to know whether such super-ultra-dense
regions, forming in stellar collapse,
are visible to an external
observer in the Universe, or whether they will be
always hidden within a black hole and an event horizon
that could form as the star collapses.
This is one of the most important issues in black hole
physics today.
The issue has to be probed necessarily within the framework of
a suitable theory of gravity, because the strong gravity effects
will be necessarily important in such a scenario. This was
done for the first time in the late 1930s, by the works of
Oppenheimer and Snyder, and Datt
(Oppenheimer \& Snyder 1939; Datt 1938).
They used the general theory
of relativity to examine the final
fate of an idealized massive matter cloud, which was taken to be
a spatially homogeneous ball which had no rotation or internal
pressure, and was assumed to be spherically symmetric.
The dynamical collapse studied here resulted in the formation
of a spacetime singularity, which was preceded by the development
of an event horizon, which created a black hole in the spacetime.
The singularity was hidden inside such a black hole, and the
collapse eventually settled into a final state which was
the Schwarzschild geometry (see Fig. 1).
\begin{figure}
\centerline{\includegraphics[width=9cm]{psj1.eps}}
\caption{Dynamical evolution of a homogeneous
spherical dust cloud collapse, as described by
the Oppenheimer-Snyder-Datt solution. \label{f:one}}
\end{figure}
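As a standard textbook summary of this model (in units $c=1$; the notation here is ours, not that of the original OSD paper), the marginally bound homogeneous dust cloud obeys a single Friedmann-like equation for the area radius $R(t)$ of its boundary:

```latex
\begin{equation}
\dot{R}^{2} = \frac{2GM}{R}, \qquad
R(t) = R_{0}\left(1 - \frac{t}{t_{s}}\right)^{2/3}, \qquad
t_{s} = \frac{2}{3}\sqrt{\frac{R_{0}^{3}}{2GM}},
\end{equation}
```

so all the collapsing shells reach the singularity $R=0$ simultaneously at the comoving time $t=t_{s}$, while the boundary crosses $R=2GM$, forming the event horizon, strictly before $t_{s}$; the singularity therefore forms necessarily covered.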
There was, however, not much attention paid to this model
at that time, and it was widely thought by gravitation theorists
as well as astronomers that it would be absurd for a star
to reach such a final ultra-dense state of its evolution.
It was in fact only as late as the 1960s that a resurgence of
interest in the topic took place, when important
observational developments in astronomy
and astrophysics revealed several very high energy
phenomena in the Universe, such as quasars and radio galaxies,
which no known physics was able to explain.
Attention was then drawn to dynamical
gravitational collapse and its final fate, and in fact
the term `black hole' was also popularized just around the
same time in 1969, by John Wheeler.
The CCC also came into existence in 1969. It suggested
and assumed that what happens in the Oppenheimer-Snyder-Datt (OSD)
picture of gravitational collapse, as discussed above, would be
the generic final fate of a realistic collapsing massive
star in general. In other words, it was assumed that
the collapse of a realistic massive star will terminate
in a black hole, which hides the singularity,
and thus no visible or naked singularities will develop
in gravitational collapse.
Many important developments then took
place in black hole physics
which started in earnest, and several important theoretical
aspects as well as astrophysical applications of black holes
started developing. The classical as well as quantum aspects
of black holes were then explored and interesting
thermodynamic analogies for black holes were also developed.
Many astrophysical applications for the real Universe
then started developing for black holes, such as making
models using black holes for phenomena such as jets from
the centres of galaxies and the extremely energetic gamma
rays bursts.
The key issue raised by the CCC, however, still
remained very much open, namely whether a real star would
necessarily go the OSD way for its final state of collapse,
and whether the final singularity would always be
covered within an event horizon. This is
because real stars are inhomogeneous, have internal pressure
forces and so on, as opposed to the idealized OSD assumptions.
This remains an unanswered question, which is one
of the most important issues in gravitation physics and
black hole physics today.
A spacetime singularity that is visible to faraway
observers in the Universe is called a naked singularity
(see Fig. 2).
The point here is, while general relativity predicts the
existence of singularity as the endstate for collapse, it gives
no information at all on the nature or structure of such
singularities, and whether they will be covered by
event horizons, or would be visible to external
observers in the Universe.
\begin{figure}
\centerline{\includegraphics[width=9cm]{psj2.eps}}
\caption{A spacetime singularity of gravitational collapse
which is visible to external observers in the Universe,
in violation of the cosmic censorship conjecture. \label{f:two}}
\end{figure}
There is no proof, or even any mathematically
rigorous statement available for CCC after many decades
of serious effort. What is really needed to resolve the
issue is gravitational collapse models for a
realistic collapse configuration, with inhomogeneities
and pressures included. The effects need to be worked out and
studied in detail within
the framework of Einstein gravity. Only such considerations
will allow us to determine
the final fate of collapse in terms of either
a black hole or a naked singularity final state.
Over the past couple of decades, many such
collapse models have been worked out and studied in
detail. The generic conclusion that has emerged
out of these studies is that both the black holes
and naked singularity final states do develop
as collapse endstates, for a realistic gravitational
collapse that incorporates inhomogeneities as well as
non-zero pressures within the interior of the collapsing
matter cloud. Subject to various regularity and
energy conditions to ensure the physical reasonableness
of the model, it is the initial data, in terms of the
initial density, pressures, and velocity profiles for
the collapsing shells, that determine the final fate
of collapse as either a naked singularity or a
black hole (for further detail and references
see e.g. Joshi 2008).
\section{The genericity and stability of collapse outcomes}
While general relativity may predict the
existence of both black holes and naked singularities
as collapse outcomes, an important question then is
how a realistic continual gravitational collapse
of a massive star in nature would end up.
Thus the key issue under active debate now
is the following:
Even if naked singularities did develop as collapse
end states, would they be generic or stable in some
suitably well-defined sense, as permitted by the gravitation
theory? The point here is, if naked singularity formation
in collapse was necessarily `non-generic' in some
appropriately well-defined sense,
then for all practical purposes, a realistic physical
collapse in nature might always end up in a black hole,
whenever a massive star ended its life.
In fact, such a genericity requirement has always been
discussed and desired for any possible mathematical
formulation of CCC, right from its inception. However,
the main difficulty here has again been that, there
is no well-defined or precise notion of genericity
available in gravitation theory and the general theory
of relativity. Again, it is only various gravitational
collapse studies that can provide us with
more insight into this genericity aspect also.
A result that is relevant here is the following
(Joshi \& Dwivedi, 1999; Goswami \& Joshi, 2007).
For a spherical gravitational collapse of a
rather general (type I) matter field, satisfying the
energy and regularity conditions, given any regular
density and pressure profiles at the initial epoch,
there always exist classes of velocity profiles for the
collapsing shells and dynamical evolutions as determined
by the Einstein equations, that, depending on the
choice made, take the collapse to either a black hole
or naked singularity final state
(see e.g. Fig. 3 for a schematic illustration of such a scenario).
\begin{figure}
\centerline{\includegraphics[width=9cm]{psj3.eps}}
\caption{Evolution of spherical collapse for a
general matter field with inhomogeneities and
non-zero pressures included. \label{f:three}}
\end{figure}
Such a distribution of final states of collapse
in terms of the black holes and naked singularities
can be seen much more transparently when we consider
a general inhomogeneous dust collapse,
for example, as discussed by Mena, Tavakol \& Joshi (2000)
(see Fig. 4).
\begin{figure}
\centerline{\includegraphics[width=9cm]{psj4.eps}}
\caption{Collapse final states for inhomogeneous
dust in terms of initial mass and velocity profiles
for the collapsing shells. \label{f:four}}
\end{figure}
What determines fully the final fate of collapse
here are the initial density and velocity profiles
for the collapsing shells of matter.
One can see here clearly how the different choices
of these profiles for the collapsing
cloud distinguish between the two
final states of collapse, and how each of the
black hole and naked singularity states
appears to be `generic' in terms of their being
distributed in the space of final states.
Typically, the result we have here is, given any
regular initial density profile for the collapsing dust
cloud, there are velocity profiles that take the collapse
to a black hole final state, and there are other
velocity profiles that take it to a naked singularity
final state. In other words, the overall available
velocity profiles are divided into two distinct
classes, namely the ones which take the given density
profile into black holes, and the other ones that take
the collapse evolution to a naked singularity
final state. The same holds conversely also, namely
if we choose a specific velocity profile, then the
overall density profile space is divided into two
segments, one taking the collapse to black hole final
states and the other taking it to naked singularity
final states. The clarity of results here gives us
much understanding on the final fate of a collapsing
matter cloud.
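The marginally bound ($k=0$) dust case makes this division explicit: each shell $r$ reaches the singularity at $t_s(r)=2/(3\sqrt{M(r)})$, where the mass function is $F=r^3M(r)$. The following minimal Python sketch (the mass profiles are illustrative choices, not taken from any particular model in the text) shows that a homogeneous cloud collapses to a simultaneous singularity, while a centrally peaked density profile gives a singularity curve that increases outwards:

```python
import math

def t_s(M_r):
    """Singularity time of a marginally bound (k = 0) dust shell,
    t_s = 2 / (3 sqrt(M)), with mass function F(r) = r^3 M(r)."""
    return 2.0 / (3.0 * math.sqrt(M_r))

M0 = 1.0  # illustrative central density parameter

# Homogeneous (OSD-like) cloud: M(r) = M0 for all shells.
homogeneous = [t_s(M0) for r in (0.0, 0.25, 0.5)]

# Centrally peaked cloud: density falls off outwards (illustrative profile).
def M_inhom(r):
    return M0 * (1.0 - 0.5 * r**2)

inhomogeneous = [t_s(M_inhom(r)) for r in (0.0, 0.25, 0.5)]

print(homogeneous)    # all shells hit the singularity simultaneously
print(inhomogeneous)  # t_s(r) increases outwards: the centre collapses first
```

The simultaneous singularity is necessarily covered, while an increasing singularity curve is the situation in which the central singularity can, for suitable profiles, be visible.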
Typically, all stars have a higher density at
the centre, which slowly decreases as one moves away.
So it is very useful to incorporate inhomogeneity into
dynamical collapse considerations. However, much more
interesting is the collapse with non-zero pressures
which are very important physical forces within a
collapsing star. We briefly consider below a typical
scenario of collapse with a non-zero pressure component,
and for further details we refer to
Joshi \& Malafarina (2011).
For a possible insight into genericity of naked
singularity formation in collapse, we investigated
the effect of introducing small tangential pressure
perturbations in the collapse dynamics of the classic
Oppenheimer-Snyder-Datt gravitational collapse, which is
an idealized model assuming zero pressure, and which
terminates in a black hole final fate as discussed above.
Thus we study the stability of the OSD black hole under
introduction of small tangential stresses.
It is seen explicitly that there exist
classes of stress perturbations such that the
introduction of an arbitrarily small tangential pressure within
the collapsing OSD cloud changes the endstate of collapse
to the formation of a naked singularity, rather than a black hole.
What follows is that small stress perturbations
within the collapsing cloud change the final fate of
the collapse from being a black hole to a naked singularity.
This can also be viewed as perturbing the spacetime metric
of the cloud in a small way. Thus we can understand here
the role played by tangential pressures in a well-known
gravitational collapse scenario. A specific and physically
reasonable but generic enough class of perturbations is
considered so as to provide a good insight into the
genericity of naked singularity formation in collapse
when the OSD collapse model is perturbed by introduction
of a small pressure. Thus we have an important insight
into the structure of the censorship principle which
as yet remains to be properly understood.
The general spherically symmetric metric
describing the collapsing matter cloud can be written
as,
\begin{equation}\label{metric}
ds^2=-e^{2\sigma(t, r)}dt^2+e^{2\psi(t, r)}dr^2+R(t, r)^2d\Omega^2,
\end{equation}
with the stress-energy tensor for a generic matter source
being given by,
$T_t^t=-\rho, \; T_r^r=p_r, \; T_\theta^\theta=T_\phi^\phi=p_\theta$.
The above is a general scenario, in that it involves no assumptions
about the form of the matter or the equation of state.
As a step towards deciding the stability or otherwise
of the OSD collapse model under the injection of small
tangential stress perturbations, we consider the dynamical
development of the collapsing cloud, as governed by the
Einstein equations. The visibility or otherwise of the
final singularity that develops in collapse is determined
by the behaviour of the apparent horizon in the spacetime,
which is the boundary of the trapped surface region
that develops as the collapse progresses. First, we define a
scaling function $v(r,t)$ by the relation $R=rv$.
The Einstein equations for the above spacetime geometry
can then be written as,
\begin{eqnarray}\label{p2}
p_r&=&-\frac{\dot{F}}{R^2\dot{R}}, \; \rho = \frac{F'}{R^2R'} \; ,\\ \label{sigma2}
\sigma'&=&2\frac{p_\theta-p_r}{\rho+p_r}\frac{R'}{R}-\frac{p_r'}
{\rho+p_r} \; ,\\ \label{G2}
2\dot{R}'&=&R'\frac{\dot{G}}{G}+\dot{R}\frac{H'}{H} \; ,\\
\label{F2}
F&=&R(1-G+H) \; .
\end{eqnarray}
The functions $H$ and $G$ in the above
are defined as,
$H =e^{-2\sigma(r, v)}\dot{R}^2 , \; G=e^{-2\psi(r, v)}R'^2$.
The above are five equations in
seven unknowns, namely $\rho,\; p_r, \; p_{\theta}, \; R,\; F,\; G,\; H$.
Here $\rho$ is the mass-energy density, $p_r$ and
$p_\theta$ are the radial and tangential stresses respectively,
$R$ is the physical radius for the matter cloud, and ${F}$
is the Misner-Sharp mass function.
It is possible now, with the above definitions
of $v, H$ and $G$, to substitute the unknowns $R, H$
with $v$, $\sigma$. Then, without loss of generality, the
scaling function $v$ can be written as $v(t_i, r)=1$
at the initial time $t_i=0$, when the collapse begins.
It then goes to zero at the spacetime singularity $t_s$, which
corresponds to $R=0$, and thus we have $v(t_s, r)=0$.
This amounts to the scaling $R=r$ at the initial
epoch of the collapse, which is an allowed freedom.
The collapse condition here is $\dot R<0$ throughout
the evolution, and this is equivalent to $\dot{v}<0$.
One can integrate the Einstein equations, at least
up to one order, to reduce them to a first order system,
to obtain the function $v(t,r)$. This function, which is
monotonically decreasing in $t$, can be inverted to obtain
the time needed by a matter shell at any radial value
$r$ to reach the event with a particular value $v$.
We can then write the function $t(r, v)$ as,
\begin{equation}
t(r, v)= \int^1_{v}\frac{e^{-\sigma}}{\sqrt{\frac{F}{r^3\tilde{v}}
+\frac{be^{2rA}-1}{r^2}}}d\tilde{v} \; .
\end{equation}
The function $A(r,v)$ in the above depends on the
nature of the tangential stress perturbations chosen.
The time taken by the shell at $r$ to reach the spacetime
singularity at $v=0$ is then $t_s(r)=t(r, 0)$.
Since $t(r, v)$ is in general at least $C^2$ everywhere in the spacetime
(because of the regularity of the functions involved), and is continuous at the
centre, we can write it as,
\begin{equation}\label{t}
t(r, v)= t(0, v)+r\chi(v)+O(r^2) \;
\end{equation}
Then, by continuity, the time for a shell located at any $r$ close to
the centre to reach the singularity is given as,
\begin{equation}
t_s(r)= t_s(0)+r\chi(0)+O(r^2)
\end{equation}
Basically, this means that the singularity curve should have a well-defined
tangent at the center. Regularity at the center also implies that
the metric function $\sigma$ cannot have constant or linear terms in
$r$ in a close neighborhood of $r=0$, and it must go as $\sigma\sim r^2$
near the center. Therefore the most general choice of the free function
$\sigma$ is,
\begin{equation}
\sigma(r,v)=r^2g(r,v) \;
\end{equation}
Since $g(r, v)$ is a regular function (at least $C^2$), it can be
written near $r=0$ as,
\begin{equation}\label{expand-g}
g(r, v)=g_0(v)+g_1(v)r+g_2(v)r^2+...
\end{equation}
We can now investigate how the OSD gravitational
collapse scenario, which gives rise to a black hole as
the final state, gets altered when small stress perturbations
are introduced in the dynamical evolution of collapse.
For that we first note that the dust model is obtained if
$p_r=p_{\theta}=0$ in the above. In that case,
$\sigma'=0$, which together with the condition $\sigma(0)=0$ gives
$\sigma=0$ identically.
In the OSD homogeneous collapse to a black hole,
the trapped surfaces
and the apparent horizon develop well before
the formation of the final singularity.
But when density inhomogeneities are allowed in the
initial density profile, such as a higher density at the
centre of the star, then the trapped surface formation is
delayed in a natural manner within the collapsing cloud.
Then the final singularity becomes visible to
faraway observers in the Universe
(e.g. Joshi, Dadhich \& Maartens 2002).
The OSD case is obtained from the inhomogeneous dust
case, when we assume further that the collapsing dust
is necessarily homogeneous at all epochs of collapse. This
is of course an idealized scenario because realistic
stars would have typically higher densities at the centre, and
they also would have non-zero internal stresses.
The conditions that must be imposed to obtain the OSD case
from the above are given by $M=M_0,\; v=v(t),\; b_0(r)=k$.
Then we have $F'=3M_0r^2$, $R'=v$, the energy density
is homogeneous throughout the collapse, and the density
is given by $\rho=\rho (t)= {3M_0}/{v^3}$.
The spacetime geometry then becomes the Oppenheimer-Snyder
metric, which is given by,
\begin{equation}
ds^2=-dt^2+\frac{v^2}{1+kr^2}dr^2+r^2v^2d\Omega^2, \;
\end{equation}
where the function $v(t)$ is a solution of the equation
of motion, $ \frac{dv}{dt}=-\sqrt{(M_0/v)+k}$ (the sign
corresponding to the collapse condition $\dot{v}<0$),
obtained from the Einstein equations.
In this case we get $\chi(0)=0$ identically. All the
matter shells then collapse into a simultaneous singularity,
which is necessarily covered by the event horizon that
developed in the spacetime at an earlier time.
Therefore the final fate of collapse is
a black hole.
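As a cross-check on this equation of motion, the marginally bound case $k=0$ integrates in closed form to $v(t)=(1-\tfrac{3}{2}\sqrt{M_0}\,t)^{2/3}$, with the singularity reached at $t_s=2/(3\sqrt{M_0})$. A short Python sketch (with an illustrative $M_0=1$; the RK4 integrator is our own device, not from the text) confirms that a direct numerical integration of $\dot v=-\sqrt{M_0/v}$ reproduces the closed form:

```python
import math

M0 = 1.0  # illustrative mass parameter, marginally bound case k = 0

def v_exact(t):
    """Closed-form solution of dv/dt = -sqrt(M0/v):
    v(t) = (1 - (3/2) sqrt(M0) t)^(2/3), with v(0) = 1."""
    return (1.0 - 1.5 * math.sqrt(M0) * t) ** (2.0 / 3.0)

def v_numeric(t_end, n=20000):
    """RK4 integration of the same equation of motion from v(0) = 1."""
    f = lambda v: -math.sqrt(M0 / v)
    v, h = 1.0, t_end / n
    for _ in range(n):
        k1 = f(v)
        k2 = f(v + 0.5 * h * k1)
        k3 = f(v + 0.5 * h * k2)
        k4 = f(v + h * k3)
        v += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return v

t_sing = 2.0 / (3.0 * math.sqrt(M0))  # collapse time, where v reaches 0
t = 0.9 * t_sing                      # stop just short of the singularity
print(v_exact(t), v_numeric(t))
```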
To explore the effect of introducing small pressure
perturbations in the above OSD scenario and to study the
models thus obtained, which are close to the Oppenheimer-Snyder model,
we can relax one or more of the above conditions.
If the collapse outcome is not a black hole, the final
collapse to singularity cannot be simultaneous.
We can thus relax the condition $v=v(t)$ above,
allowing for $v = v(t,r)$. We keep the other conditions
of the OSD model unchanged, so as not to depart too much
from the OSD model, and this should bring out more
clearly the role played by the stress perturbations
in the model.
We know that the metric function $\sigma(t,r)$ must
identically vanish for the dust case. On the other hand,
the above amounts to allowing for small perturbations in $\sigma$,
and allowing it to be non-zero now. This is equivalent
to introducing small stress perturbations in the model,
and it is seen that this affects the apparent horizon
developing in the collapsing cloud.
We note that taking $M=M_0$ leads to $ F=r^3M_0$.
In this case, in the small $r$ limit we
obtain $G(r,t)=b(r)e^{2\sigma(r, v)}$.
The radial stress $p_r$ vanishes here as $\dot F=0$,
and the tangential pressure turns out to have
the form, $p_\theta= p_1 r^2 + p_2 r^3 +...$, where
$p_1, p_2$ are evaluated in terms of coefficients of
$m, g$, and $R$ and its derivatives, and we get,
\begin{equation}\label{pt}
p_\theta=3\frac{M_0g_0}{vR'^2}r^2+\frac{9}{2}\frac{M_0g_1}{vR'^2}r^3+...
\end{equation}
As seen above, the choice of the sign of
the functions $g_0$ and $g_1$ is enough to ensure
positivity or negativity of the pressure $p_\theta$.
The first order coefficient $\chi$ in
the equation of the time curve of the singularity
$t_s(r)$ is now obtained as,
\begin{equation}\label{chi}
\chi(0)=-\int^1_0\frac{v^{\frac{3}{2}}g_1(v)}{(M_0+vk+2vg_0(v))^{\frac{3}{2}}}dv \; .
\end{equation}
As we have noted above, it is the quantity
$\chi(0)$ that governs the nature of the singularity curve,
and whether it is increasing or decreasing away from
the center. It can be seen from above that it is the matter
initial data in terms of the density and stress profiles,
the velocity of the collapsing shells, and the allowed
dynamical evolutions that govern and fix
the value of $\chi(0)$.
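The sign of $\chi(0)$ can be checked numerically for simple perturbation profiles. The sketch below (a composite Simpson quadrature; $M_0=1$, $k=0$, $g_0=0$ are illustrative choices) evaluates the integral above for constant $g_1$: for $g_1=-1$ the integral reduces to $\chi(0)=\int_0^1 v^{3/2}\,dv=2/5>0$, while $g_1>0$ gives $\chi(0)<0$:

```python
def chi0(g1, g0=lambda v: 0.0, M0=1.0, k=0.0, n=2000):
    """Composite-Simpson evaluation of
    chi(0) = - int_0^1 v^{3/2} g1(v) / (M0 + v k + 2 v g0(v))^{3/2} dv."""
    h = 1.0 / n
    def f(v):
        return -(v ** 1.5) * g1(v) / (M0 + v * k + 2.0 * v * g0(v)) ** 1.5
    s = f(0.0) + f(1.0)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(i * h)
    return s * h / 3.0

# g1 < 0 makes chi(0) > 0: the horizon is delayed and the singularity naked.
print(chi0(lambda v: -1.0))  # exact value is 2/5 for M0 = 1, k = 0, g0 = 0
# g1 > 0 makes chi(0) < 0: the collapse ends in a black hole.
print(chi0(lambda v: 1.0))
```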
The apparent horizon in the spacetime and the
trapped surface formation as the collapse evolves is
also governed by the quantity $\chi(0)$, which in turn
governs the nakedness or otherwise of the singularity.
The equation for the apparent horizon is given by ${F}/{R}=1$.
This is analogous to that of the dust case
since ${F}/{R}={rM}/{v}$ in both these cases.
So the apparent horizon curve $r_{ah}(t)$ is given by
\begin{equation}\label{ah}
r_{ah}^2=\frac{v_{ah}}{M_0},
\end{equation}
with $v_{ah}=v(r_{ah}(t), t)$, which can also be inverted as a
time curve for the apparent horizon $t_{ah}(r)$.
The visibility of the singularity at the center of
the collapsing cloud to faraway observers is determined by the
nature of this apparent horizon curve which is given by,
\begin{equation}\label{t-ah}
t_{ah}(r)=t_s(r)-\int_0^{v_{ah}}\frac{e^{-\sigma}}
{\sqrt{\frac{M_0}{v}+\frac{be^{2\sigma}-1}{r^2}}}dv
\end{equation}
where the $t_s(r)$ is the singularity time curve, and its
initial point is $t_0=t_s(0)$. Near $r=0$
we then get,
\begin{equation}
t_{ah}(r) =t_0+\chi(0)r+O(r^2) \; .
\end{equation}
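For the unperturbed homogeneous dust cloud ($\sigma=0$, $b=1$, and marginally bound $k=0$, taken here as an illustrative special case), the apparent horizon time curve evaluates in closed form to $t_{ah}(r)=t_s-\tfrac{2}{3}M_0 r^3$, so the horizon forms strictly before the singularity for every $r>0$ and the collapse ends in a black hole. A numerical sketch of this:

```python
import math

M0 = 1.0                                # illustrative mass parameter
t_sing = 2.0 / (3.0 * math.sqrt(M0))    # simultaneous singularity time

def t_ah(r, n=4000):
    """Apparent horizon time t_ah(r) = t_s - int_0^{v_ah} sqrt(v/M0) dv,
    with v_ah = M0 r^2 (homogeneous dust, sigma = 0, b = 1, k = 0).
    The trapezoid rule reproduces the closed form t_s - (2/3) M0 r^3."""
    v_ah = M0 * r * r
    h = v_ah / n
    s = 0.5 * math.sqrt(v_ah / M0)      # integrand vanishes at v = 0
    for i in range(1, n):
        s += math.sqrt(i * h / M0)
    return t_sing - s * h

for r in (0.0, 0.2, 0.4):
    print(r, t_ah(r))   # t_ah < t_s for every r > 0: the horizon forms first
```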
From these considerations, it is possible to see
how the stress perturbations affect the time of formation
of the apparent horizon, and therefore the formation of
a black hole or naked singularity. A naked singularity
would typically occur as a collapse endstate when a
comoving observer at a fixed $r$ value does not encounter
any trapped surfaces before the time of singularity formation.
For a black hole to form, trapped surfaces must develop
before the singularity. Therefore it is required
that,
\begin{equation}
t_{ah}(r) \le t_0 ~~\mbox{for}~~ r>0, ~~\mbox{near}~~ r=0 \; .
\end{equation}
As can be seen from above, for all functions $g_1(v)$
for which $\chi(0)$ is positive, this
condition is violated and in that case the apparent
horizon is forced to appear after the formation of the
central singularity. In that case, the apparent horizon
curve begins at the central singularity $r=0$ at $t=t_0$
and increases with increasing $r$, moving to the future.
Then we have $t_{ah} > t_0$ for $r > 0 $ near the center.
The behaviour of outgoing families of null geodesics
has been analyzed in detail in the case
when $\chi(0)>0$ and we can see that the geodesics
terminate at the singularity in the past. Thus timelike
and null geodesics come out from the singularity,
making it visible to external observers
(Joshi \& Dwivedi 1999).
One thus sees that it is the term $g_1$ in the
stresses $p_\theta$ which decides either the black hole or
naked singularity as the final fate for the collapse. We can choose
it to be arbitrarily small, and it is then possible to
see how introducing a generic tangential stress perturbation
in the model would change drastically the final outcome
of the collapse. For example, for all non-vanishing
tangential stresses with $g_0=0$ and $g_1<0$, even
the slightest perturbation of the Oppenheimer-Snyder-Datt
scenario, injecting a small tangential stress would result
in a naked singularity. Any function $g_1$
that makes $\chi(0)$ positive, a class which includes all the
strictly negative functions $g_1$, causes the collapse
to end in a naked singularity. While this is an explicit
example, it is by no means the only class. The important
feature of this class is that it corresponds to a
collapse model for a simple and straightforward perturbation
of the Oppenheimer-Snyder-Datt spacetime metric.
In this case, the geometry near the centre can
be written as,
\begin{equation}\label{pert}
ds^2=-(1-2g_1 r^3)dt^2+\frac{(v+rv')^2}{1+kr^2-2g_1 r^3}dr^2+r^2v^2d\Omega^2 \; .
\end{equation}
The metric above satisfies the Einstein equations
in the neighborhood of the center of the cloud when the
function $g_1(v)$ is small and bounded. We can take
$0<|g_1(v)|<\epsilon$, so that the smaller we take the
parameter $\epsilon$, the larger the radius
within which the approximation is valid.
We can consider here the requirement that a realistic
matter model should satisfy some energy conditions ensuring
the positivity of mass and energy density.
The weak energy condition would imply restrictions
on the density and pressure profiles. The energy density
as given by the Einstein equation must
be positive. Since $R$ is positive, to ensure positivity
of $\rho$ we require $F>0$ and $R'>0$. The choice of
positive $M(r)$, which clearly holds for $M_0>0$,
and is physically reasonable, ensures positivity of the
mass function. Here $R'>0$ is a sufficient condition
for the avoidance of shell crossing singularities.
The tangential stress can now be written, with
$p_r=0$, and is given by
\begin{equation}
p_\theta=\frac{1}{2}\frac{R}{R'}\rho\sigma' \; .
\end{equation}
So the sign of the function $\sigma'$ would determine
the sign of $p_\theta$. Positivity of $\rho+p_\theta$ is
then ensured for small values of $r$ throughout collapse
for any form of $p_\theta$. In fact, regardless of the values
taken by $M$ and $g$, there will always be a neighbourhood
of the center $r=0$ for which $|p_\theta|<\rho$ and
therefore $\rho+p_\theta\geq0$.
What we see here is that, in the space of initial data
in terms of the initial matter densities and velocity
profiles, any arbitrarily small neighborhood of the OSD
collapse model, which is going to end as a black hole,
contains collapse evolutions that go to naked singularities.
Such an existence of subspaces of collapse solutions,
that go to a naked singularity rather than a black
hole, in the arbitrary vicinity of the OSD black hole,
presents an intriguing situation. It gives an idea of the
richness of structure present in the gravitation theory, and
indicates the complex solution space of the Einstein equations
which are a complicated set of highly non-linear partial
differential equations. What we see here is the existence of
classes of stress perturbations such that an arbitrarily
small change from the OSD model is a solution going to a naked
singularity.
This then provides an intriguing insight into
the nature of cosmic censorship, namely that the collapse
must necessarily be properly fine-tuned if it is to produce
a black hole only as the final endstate. Traditionally it
was believed that the presence of stresses or pressures
in the collapsing matter cloud would increase the chance of
black hole formation, thereby ruling out dust models that
were found to lead to a naked singularity as the collapse endstate.
It now becomes clear that this is actually not the case.
The model described here not only provides a new class
of collapses ending in naked singularities, but more importantly,
shows how the bifurcation line that separates the phase space
of `black hole formation' from that of `naked singularity
formation' runs directly over the simplest and most studied
of black hole scenarios such as the OSD model.
It has to be noted of course that the general issue
of stability and genericity of collapse outcomes has been
a deep problem in gravitation theory, and requires
mathematical breakthroughs as well as evolving further
physical understanding of the collapse phenomenon.
As noted above, this is again basically connected
with the main difficulty of cosmic censorship itself,
which is the issue of how to define censorship.
However, it is also clear from the
discussion above, that consideration of various collapse
models along the lines as discussed here does yield
considerable insight and inputs in understanding
gravitational collapse and its final outcomes.
\section{Spinning up a black hole and quantum aspects}
It is clear that the black hole and naked singularity
outcomes of a complete gravitational collapse for a massive
star are very different from each other physically, and
would have quite different observational signatures.
In the naked singularity case, if it occurs in nature,
we have the possibility of observing the physical effects
happening in the vicinity of the ultra-dense regions that form
in the very final stages of collapse. However, in a black
hole scenario, such regions are necessarily hidden
within the event horizon of gravity. The fact that a
slightest stress perturbation of the OSD collapse could
change the collapse final outcome drastically, as we
noted in the previous section, changing it from black
hole formation to a naked singularity, means that the
naked singularity final state for a collapsing star
must be studied very carefully to deduce its physical
consequences, which are not well understood so far.
It is, however, widely believed that when we have
a reasonable and complete quantum theory of gravity
available, all spacetime singularities, whether naked
or those hidden inside black holes, will be resolved away.
As of now, it remains an open question whether quantum
gravity will remove naked singularities.
After all, the occurrence of spacetime singularities
could be a purely classical phenomenon, and
whether they are naked or covered should not be relevant,
because quantum gravity will possibly remove them
all any way. But one may argue that looking at the
problem this way is missing the real issue.
It is possible that in a suitable quantum gravity theory
the singularities will be smeared out, though this has
not been realized so far. There are also indications that
even in quantum gravity the singularities may not
go away after all.
In any case, the important and real issue is,
whether the extreme strong gravity regions formed due
to gravitational collapse are visible to faraway observers
or not. It is quite clear that gravitational collapse
would certainly proceed classically, at least till
quantum gravity starts governing and dominating the
dynamical evolution at scales of the order
of the Planck length, {\it i.e.} till extreme gravity
configurations have already developed due to
collapse. The key point is the visibility or
otherwise of such ultra-dense regions
whether they are classical or quantum (see Fig. 5).
\begin{figure}
\centerline{\includegraphics[width=9cm]{psj5.eps}}
\caption{Even if the naked singularity is resolved by
the quantum gravity effects, the ultra-strong
gravity region that developed in gravitational collapse
will still be visible to external observers
in the Universe. \label{f:five}}
\end{figure}
What is important is, classical gravity implies
necessarily the existence of ultra-strong gravity
regions, where
both classical and quantum gravity come into their own.
In fact, if naked singularities do develop in gravitational
collapse, then in a literal sense we come face-to-face
with the laws of quantum gravity, whenever such an
event occurs in the Universe (Wald 1997).
In this way, the gravitational collapse phenomenon
has the potential to provide us with a possibility of
actually testing the laws of quantum gravity.
In the case of a black hole developing in the
collapse of a finite sized object such as a massive star,
such strong gravity regions are necessarily hidden
behind an event horizon of gravity, and this would happen
well before the physical conditions became extreme
near the spacetime singularity.
In that case, the quantum effects, even if they caused
qualitative changes closer to singularity, will be
of no physical consequence. This is because no causal
communications are then allowed from such regions. On
the other hand, if the causal structure were that
of a naked singularity, then communications from
such a quantum gravity dominated extreme curvature
ball would be visible in principle. This will be so
either through direct physical processes near a
strong curvature naked singularity, or via the
secondary effects, such as the shocks produced in
the surrounding medium.
Independently of such possibilities connected with
gravitational collapse as above, let us suppose that
the collapse terminated in a black hole. It is generally
believed that such a black hole will be described by
the Kerr metric. A black hole,
however, by its very nature accretes matter from the
surrounding medium or from a companion star. In that case,
it is worth noting here that there has been an active
debate in recent years about whether
a black hole can survive as it is, when it accretes
charge and angular momentum from the surrounding medium.
The point is, there is a constraint in this case
for the horizon to remain undisturbed, namely that
the black hole must not contain too much charge and
it should not spin too fast. Otherwise,
the horizon cannot be sustained; it will break down
and the singularity within will become visible.
The black hole may have formed with small enough
charge and angular momentum to begin with; however,
we have the key astrophysical process of accretion from
its surroundings, of the
debris and outer layers of the collapsing star.
This matter around the black hole will fall
into it with great velocity, whether classical
or quantized, carrying either charge or angular momentum
or perhaps both. Such in-falling particles could
`charge-up' or `over-spin' the black hole, thus eliminating
the event horizon. Thus, the very fundamental
characteristic of a black hole, namely its trait of
gobbling up the matter all around it and continuing to grow
becomes its own nemesis and a cause of
its own destruction.
Thus, even if a massive star collapsed into a
black hole rather than a naked singularity, important issues
remain such as the stability against accretion of
particles with charge or large angular momentum, and whether
that can convert the black hole into a naked singularity by
eliminating its event horizon. Many researchers have claimed
this is possible, and have given models to create naked
singularities this way. But there are others who claim
there are physical effects which would save the black hole
from over-spinning this way and destroying itself, and the issue is
very much open as yet. The point is, in general, the
stability of the event horizon and the black hole
continues to be an important issue for black
holes that developed in gravitational collapse.
For a recent discussion on some of these issues,
we refer to Matsas \& da Silva (2007), Matsas et al. (2009),
Hubeny (1999), Hod (2008), Richartz \& Saa (2008),
Jacobson \& Sotiriou (2009, 2010a,b), Barausse, Cardoso \& Khanna
(2010), and references therein.
The primary concern of the cosmic censorship
hypothesis is the formation of black holes
as collapse endstates. Their stability, as discussed
above, is only a secondary issue. Therefore, what this
means for cosmic censorship is that the collapsing massive star
should not retain or carry too much charge or spin;
otherwise it could end up as a naked
singularity, rather than a black hole.
\section{Predictability, Cauchy horizons and all that}
A concern sometimes expressed is that if naked
singularities occurred as the final fate of gravitational
collapse, that would break the predictability in the
spacetime. A naked singularity is characterized by the
existence of light rays and particles that emerge from the
same. Typically, in all the collapse models discussed
above, there is a family of future directed non-spacelike
curves that reach external observers, and when extended
in the past these meet the singularity.
The first light ray that comes out from the singularity
marks the boundary of the region that can be predicted
from a regular initial Cauchy surface in the spacetime,
and that is called the Cauchy horizon for the spacetime.
The causal structure of the spacetime would differ
significantly in the two cases, when there is a Cauchy
horizon and when there is none. A typical
gravitational collapse to a naked singularity, with
the Cauchy horizon forming is shown in Fig. 6.
\begin{figure}
\centerline{\includegraphics[width=9cm]{psj6.eps}}
\caption{The existence of a naked singularity is
typically characterized by existence of a Cauchy horizon
in the spacetime. Very high energy particle collisions
can occur close to such a Cauchy horizon. \label{f:six}}
\end{figure}
The point here is, given a regular initial data on a
spacelike hypersurface, one would like to predict the future
and past evolutions in the spacetime for all times
(see for example, Hawking \& Ellis (1973)
for a discussion).
Such a requirement is termed the global hyperbolicity
of the spacetime. A globally hyperbolic spacetime is a fully
predictable universe. It admits a Cauchy surface, the data
on which can be evolved for all times in the past as well
as in future. Simple enough spacetimes such as the Minkowski
or Schwarzschild are globally hyperbolic, but the Reissner-Nordstrom
or Kerr geometries are not globally hyperbolic. For further
details on these issues, we refer to
Hawking \& Ellis (1973) or Joshi (2008).
Here we would like to mention certain recent intriguing
results in connection to the existence of a Cauchy horizon
in a spacetime when there is a naked singularity resulting as
final fate of a collapse. Let us suppose the collapse resulted
in a naked singularity. In that case, there are classes
of models where there will be an outflow of energy and
radiations of high velocity particles close to the Cauchy
horizon, which is a null hypersurface in the spacetime.
Such particles, when they collide with incoming particles,
would give rise to a very high center of mass energy
of collisions.
The closer we are to the Cauchy horizon, the higher the
center of mass energy of collisions. In the limit of
approach to the Cauchy horizon, these energies approach
arbitrarily high values and could reach the
Planck scale energies (see for example, Patil \& Joshi 2010, 2011a,b;
Patil, Joshi \& Malafarina 2011).
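The divergence can be illustrated with the special-relativistic invariant $E_{cm}^2=m_1^2+m_2^2+2m_1m_2\gamma_{rel}$ (units $c=1$): as the relative Lorentz factor between the outgoing and ingoing streams grows near the Cauchy horizon, the centre of mass energy grows without bound. A minimal sketch, with purely illustrative $\gamma$ values standing in for the near-horizon kinematics:

```python
import math

def e_cm(m1, m2, gamma_rel):
    """Invariant centre-of-mass energy of a two-particle collision,
    E_cm^2 = m1^2 + m2^2 + 2 m1 m2 gamma_rel (units c = 1)."""
    return math.sqrt(m1 ** 2 + m2 ** 2 + 2.0 * m1 * m2 * gamma_rel)

# Near the Cauchy horizon the relative Lorentz factor between outgoing
# and ingoing particle streams grows without bound; E_cm grows with it.
# The gamma values below are purely illustrative.
for gamma in (1.0, 10.0, 1e4, 1e8):
    print(gamma, e_cm(1.0, 1.0, gamma))
```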
It has been observed recently that in the vicinity
of the event horizon for an extreme Kerr black hole,
if the test particles arrive with fine-tuned velocities,
they could undergo very high energy collisions with other
incoming particles. In that case, the possibility
arises that one could see Planck scale physics
or ultra-high energy physics effects
near the event horizon, given suitable circumstances
(Banados, Silk \& West 2009; Berti et al. 2009;
Jacobson \& Sotiriou 2010a,b; Wei et al. 2010;
Grib \& Pavlov 2010; Zaslavskii, 2010).
What we mentioned above related to the particle
collisions near Cauchy horizon is similar to the situation
where the background geometry is that of a naked
singularity. These results could mean that in strong
gravity regimes, such as those of black holes or naked
singularities developing in gravitational collapse,
there may be a possibility to observe
ultra-high energy physics effects, which would
be very difficult to see in the near future
in terrestrial laboratories.
While these phenomena give rise to the prospect of
observing Planck scale physics near the Cauchy horizon
in the gravitational collapse
framework, they also raise the following intriguing question.
If extremely high energy collisions do take place very close
to the null surface which is the Cauchy horizon, then
in a certain sense it is essentially equivalent to creating
a singularity at the Cauchy horizon. In that case, all or
at least some of the Cauchy horizon would be converted into
a spacetime singularity, which would effectively mark
the end of the spacetime itself. The spacetime
manifold would then terminate
at the Cauchy horizon, whenever a naked singularity is
created in gravitational collapse. Since the Cauchy horizon
marks in this case the boundary of the spacetime itself,
predictability is then restored for the spacetime,
because the rest of the spacetime below and in the
past of the horizon
was predictable before the Cauchy horizon formed.
\section{Future perspectives}
We have pointed out in the considerations here
that the final fate of a
massive star continues to be a rather exciting research
frontier in black hole physics and gravitation
theory today. The outcomes here will be fundamentally
important for the basic theory and astrophysical
applications of black hole physics, and for modern
gravitation physics. We highlighted certain key
challenges in the field, and also several recent
interesting developments were reviewed. Of course,
the issues and the list given here are by no means complete
or exhaustive in any manner, and there are several
other interesting problems in the field as well.
In closing, as a summary, we would like to mention
here a few points which we think require the most immediate
attention, and which will have possibly maximum
impact on future development in the field.
1. The genericity of the collapse outcomes, for
black holes and naked singularities, needs to
be understood very carefully and in further detail.
It is by and large well-accepted now, that the general
theory of relativity does allow and gives rise to
both black holes and naked singularities as the final
outcome of continual gravitational collapse, evolving
from a regular initial data, and under reasonable
physical conditions.
What is not fully clear yet is the distribution
of these outcomes in the space of all allowed outcomes
of collapse. The collapse models discussed above
and considerations we gave here would be of some
help in this direction, and may throw some light on
the distribution of black holes and naked singularity
solutions in the initial data space.
2. Many of the models of gravitational collapse
analyzed so far are mainly spherically symmetric.
Therefore, the non-spherical collapse needs to be
understood in a much better manner. While there are
some models which illustrate what the departures
from spherical symmetry could do (see e.g.
Joshi \& Krolak 1996),
on the whole, not very much is known for non-spherical
collapse. Probably numerical relativity could be
of help in this direction
(see for example
Baiotti \& Rezzolla 2006,
for a discussion on the evolving developments
related to applications of numerical methods
to gravitational collapse issues).
Also, another alternative would
be to use global methods to deal with the spacetime
geometry involved, as used in the case of singularity
theorems in general relativity.
3. At the very least, the collapse models
studied so far do help us gain much insight into the
structure of the cosmic censorship, whatever
final form it may have.
But on the other hand, there have also been attempts
where researchers have explored the physical applications
and implications of the naked singularities investigated
so far (see e.g.
Harada, Iguchi \& Nakao 2000, 2002;
Harada et al. 2001, and references therein).
If we could find astrophysical applications of
the models that predict naked singularities,
and possibly try to test the same through
observational methods and the signatures predicted,
that could offer a very interesting avenue to get
further insights into this problem as a whole.
4. An attractive recent possibility in that
regard is to explore the naked singularities as
possible particle accelerators as we pointed
out above.
Also, the accretion discs around a naked
singularity, wherein the matter particles are
attracted towards or
repelled away from the singularity with great
velocities could provide an excellent venue to test
such effects and may lead to predictions of
important observational signatures to distinguish
the black holes and naked singularities in
astrophysical phenomena
(see Kovacs \& Harko 2010; Pugliese, Quevedo \& Ruffini 2011).
5. Finally, further considerations of quantum
gravity effects in the vicinity of naked singularities,
which are super-ultra-strong gravity regions,
could yield intriguing theoretical insights into
the phenomenon of gravitational collapse
(Goswami, Joshi \& Singh 2006).
\section*{Acknowledgments}
Over the past years, discussions with many colleagues and
friends have contributed greatly to shape my understanding
of the questions discussed here. In particular, I would like to
thank I. H. Dwivedi, N. Dadhich, R. Goswami, T. Harada,
S. Jhingan, R. Maartens, K. Nakao, R. Saraykar, T. P. Singh,
R. Tavakol, and also many other friends with whom I
have extensively discussed these issues.
Fig. 4 here is from Mena, Tavakol \& Joshi (2000),
and Fig. 6 is from Patil, Joshi \& Malafarina (2011). The rest of the figures
are from Joshi (2008).
\section{Introduction}\label{Intro}
For decades, the distribution of the Local Group's satellite galaxy
population has presented a puzzle. This began with the discovery of the
alignment of a number of Milky Way (MW) satellite galaxies with the
orbital plane of the Magellanic Clouds \citep{Lynden-Bell1976},
identifying a vast polar structure in the Milky Way, with a second similar grouping
noted by \citet{Lynden-Bell1982}, containing Fornax, Leo I, Leo II and
Sculptor.
More recently,
\citet{Kroupa2005} showed that the 11 brightest satellites of the MW
are on a plane with a thickness of 20 {\rm\,kpc}\ \citep{Pawlowski2012b,Pawlowski2014}, and aligned with the pole of the Galaxy. The
recent analysis of the Pan-Andromeda Archaeological Survey (PAndAS)
resulted in accurate distance measurements to the known population of M31
satellites using a sensitive homogeneous method \citep{Conn2012}, revealing
their three-dimensional distribution.
As shown in \citet{Ibata2013}, about 15 of the 30 known satellites that orbit M31 form an extremely thin plane, with a thickness of 12.6$\pm$0.6 {\rm\,kpc}\ but an overall extent of $\sim$200 {\rm\,kpc}. The significance of this plane was boosted by the discovery of kinematically coherent orbital motion for its members, with the southernmost satellites moving towards us, while those in the north are moving away (correcting for the motion of M31). A chance alignment appears to be highly improbable, with a resulting significance for the planar structure of approximately 99.998\%\ \citep{Conn2013,Ibata2013}.
From our vantage point, the Andromeda plane is seen edge-on, and is approximately aligned with the pole of our own Galaxy, being significantly tilted with respect to Andromeda's optical disk \citep{Ibata2013,Shaya2013}.
The existence of additional planes in the Local Group was noted by \citet{Tully2015}, who
find a second plane in the M31 system; they also find two parallel planes in the Centaurus A group.
There are various open questions in our understanding of planes of satellite galaxies, in particular,
how such a structure forms and how stable it is over a cosmologically significant period of time; these are the questions we address in this paper.
We begin in Section~\ref{BGround} by giving an
overview of the recent discussions and
debates on the issues of satellite planes, while
Section~\ref{NumMod} introduces the numerical model used to describe
both the MW and M31 potentials.
In Sections~\ref{MW_pl} and~\ref{Stb_VelMass}, we describe the stability tests for orbiting satellite
galaxies in a plane, and examine how the MW and satellite mass and velocity
can influence a plane of satellites.
In Section~\ref{Stb_NSH}, we study the influence of the properties of the host galaxy's dark halo on
a planar structure, focusing in particular upon the orientation of the plane with regard to the flattening of the potential. Section~\ref{Discn} presents our comparison of simulations with observations and our conclusions on the longevity of planes of satellites.
\section{Background}\label{BGround}
In the decades since the discovery of the MW's generally asymmetric distribution of satellite galaxies, a significant effort has been put into understanding its existence in light of the standard $\Lambda$ Cold Dark Matter ($\Lambda$CDM) cosmology. A search for coherent orbits within numerical simulations of structure formation reveals an overall isotropic distribution for satellite galaxies, with plane-like structures occurring in about 2\% of MW/M31-like halos \citep{Boylan-Kolchin2009, Bahl2014}. The chance of such occurrences becomes even lower when looking for structures with properties similar to the M31 system \citep{Ibata2014}, suggesting that strongly anisotropic distributions of satellites are not a natural phenomenon within simulations of $\Lambda$CDM cosmology. However, the discrepancies between dark matter simulations and the observations in the Local Group might be resolved by introducing baryonic physics. `Zoom in' simulations of the Local Group with more detailed hydrodynamical treatment, including supernova feedback, show anisotropic distributions of satellite galaxies around Milky Way-like halos, not dissimilar to those around the MW \citep{Sawala2014}. These results seem to be in agreement with those presented in \citet{Ibata2014b}, where the authors investigate the incidence of planar structures in a larger galaxy population using the Sloan Digital Sky Survey (SDSS). Their analysis suggests that corotating dwarf galaxies might be common, although this result has been challenged by recent papers \citep[e.g.][]{Phillips2015}.
Repeating the analysis presented in \citet{Ibata2014b}, \citet{Cautun2015a} conclude that while co-rotating satellite pairs are seen, they do not imply the presence of co-rotating planes of satellites. In the same paper, the authors compare the occurrence of co-rotating galaxy pairs in the SDSS to those in the Millennium II cosmological simulations, finding a general agreement between the two. \citet{Cautun2015b} explore the Millennium II (MII) and COCO \citep{Hellwing21042016} dark matter only simulations, extending the search for planes of satellites. Around 10\% of MW/M31-like hosts were found to contain satellite planes, albeit not always as rich in number and as thin as those observed in M31 (and the MW). This suggests that satellite planes are predicted in $\Lambda$CDM, but with properties (e.g.\ varying thickness, fraction of the total satellite population on the plane, and radial distribution) that can differ from those observed in M31/MW. Due to this variety, a search matching only the two prominent planes seen in our Local Group in the simulations may yield results that classify the planes as an extreme rarity.
While corroborating findings on the diversity of planes in simulations, \citet{Buck2015} explore how the dark halo environment may help or hinder the presence of planes. Their zoom-in dark matter simulations show that high-concentration halos are more likely to host planes of satellites, having accreted their satellites and mass at early times. However, the general conclusion is that planes of satellites are a transitory phenomenon. Similar conclusions are drawn by the \citet{Gillet2015} analysis of CLUES (Constrained Local UniversE Simulations). The planes they find contained 11 satellite galaxies at most, indicating the likely possibility that M31's plane of 15 satellites is an extreme occurrence. About a third of the satellites have high velocities that are perpendicular to the planar formation, making their presence in the plane purely coincidental. \citet{Gillet2015} conclude that the observed satellite planes appear to be in agreement with $\Lambda$CDM, but as a transitory feature, created with a few planar satellites and several timely interlopers. Such coincidental placement of galaxies may reduce the number of observed kinematically coherent planes. If this holds true, we are living in a `special' time when these structures can be observed in both M31 and the MW.
The question of the formation and evolution of planar populations of dwarfs also remains unanswered. There have been several suggested mechanisms, with the feeding of dwarf galaxies through dark matter filaments taking a prominent position \citep{Libeskind2005, Lovell2011}. \citet{Buck2015} claim that M31-like planes emerged in very early forming host halos, where the satellites are accreted around $z \geq 3$ via filaments that were thinner at early times (see \citet{Pawlowski2012b, Pawlowski2014} for different interpretations). More radical solutions have been proposed as well, e.g.\ \citet{Kroupa2005} suggest that the MW plane, having origins in tidal dwarf galaxies, formed as a result of larger galaxy interactions \citep{Hammer2013}. However, \citet{Collins2015} find that satellites on and off the plane exhibit no segregating features other than their spatial alignment. This would contradict a merger with a similar mass galaxy as a formation scenario for M31; distinct histories would result in different properties for satellites on and off the M31 plane. Moreover, a recent accretion event of such magnitude is found to be improbable \citep{Angus2011}. \citet{Smith2016} explore the behavior of satellite planes after similar mass mergers and find that about 65--70\% survive to the simulation's end, while the rest are destroyed or merged. Satellites positioned in or close to dark matter filaments have a slightly higher survival rate and time through their simulations. Although nothing notable is affected by variations in the applied cosmology, prograde satellites were found to be more likely to stay as satellites than those on retrograde orbits.
The longevity of a plane of dwarfs in a galactic halo has been found to depend on the orientation of its orbit. \citet{Bowden2013} argue that a thin disc of satellite galaxies can persist over cosmological times if and only if it lies in a plane that is aligned with one of the semi-major or semi-minor axes of a triaxial halo, or in the equatorial or polar planes of a spheroidal halo. In the latter case, perturbations from the disk can act to disperse a satellite plane. In any other orientation, the disc thickness would double on 10 Gyr timescales and so, to get planes as thin as the one observed in M31, these must have been born with what they describe as an implausibly small perpendicular scale height.
Despite the intricacies of formation processes of the plane of satellites, their longevity will depend on environmental properties (such as host halo shape and the particular orbits the satellite lie on). In this paper, we explore the effect that halo shape and initial properties of the satellites will have on the stability of planar structures.
\section{Numerical Model}\label{NumMod}
Our numerical model is based upon two static potentials to represent the M31-MW system. Even though the
focus of this paper is considering the orbits of dwarfs in the vicinity of M31, we chose not to neglect the perturbing influence of an external large galaxy like the MW. The potential of M31 consists of three components representing the
halo, disk and bulge in separate equations, as first proposed in
\citet{2006MNRAS.366..996G} and used in \citet{Arias2016}. The dark
matter halo is described as a Navarro, Frenk and White (NFW) potential
\citep{1997ApJ...490..493N} given by
\begin{equation}
\label{EqNFW}
\Phi_{\rm{halo}}(r)=-4{\pi}G\delta_{\rm{c}}\rho_{\rm{c}}{r_{\rm{h}}}^2\left(\frac{r_{\rm{h}}}{r}\right)\ln \left(\frac{r+r_{\rm{h}}}{r_{\rm{h}}}\right)
\end{equation}
where $r_{\rm{h}}$ is the scale radius, the present-day critical
density is $\rho_{\rm{c}}=277.7\,h^2{\rm\,M_\odot }\,{\rm{kpc^{-3}}}$, with $h=0.71$ the
Hubble constant in units of $100{\rm\,km\,s^{-1} }\,\rm{Mpc^{-1}}$ \citep{2006MNRAS.366..996G}, and
$\delta_{\rm{c}}$ is a dimensionless density parameter.
The disk component of the potential is given by a Miyamoto-Nagai potential \citep{1975PASJ...27..533M}
\begin{equation}
\label{EqMiyaN}
\Phi_{\rm{disk}}(R,z)=-\frac{\rm{GM_{disk}}}{\left(R^2+\left(r_{\rm{disk}}+{\sqrt{(z^2+b^2)}}\right)^2\right)^{1/2}}
\end{equation}
and the bulge component follows a \citet{1990ApJ...356..359H} profile.
\begin{equation}
\label{EqHernquist}
\Phi_{\rm{bulge}}(r)=-\frac{\rm{GM_{{bulge}}}}{r_{\rm{bulge}}+r}\mbox{.}
\end{equation}
The Milky Way potential contains similar components, a NFW halo
(Eq.~\ref{EqNFW}), a Hernquist bulge (Eq.~\ref{EqHernquist}) and
stellar disk defined as a Miyamoto-Nagai potential (Eq.~\ref{EqMiyaN}). All the parameters we use are listed in Table \ref{tabPot} and are
similar to values used in \citet{Arias2016}.
\begin{table*}
\centering
\begin{tabular}{l l p{55mm} p{15mm} l l}
\hline
& & & & \textbf{M31} & \textbf{MW}\\
\hline
\hline
& & & & &\\
$\Phi_{\rm{halo}}(r)$ & = & \parbox{58mm}{\begin{flushright}\begin{equation*}
-4{\pi}G\delta_{\rm{c}}\rho_{\rm{c}}{r_{\rm{h}}}^2\left(\frac{r_{\rm{h}}}{r}\right)\ln \left(\frac{r+r_{\rm{h}}}{r_{\rm{h}}}\right)
\end{equation*}\end{flushright} } & $\rm{r}_{h}$ & $13.5\,\rm{kpc}$ & $24.54\,\rm{kpc}$ \\
& & & $\rm{M}_{halo}$ & $1.037\times{10^{12}}\,\rm{M}_{\odot}$ & $0.9136\times{10^{12}}\,\rm{M}_{\odot}$ \\
$\Phi_{\rm{bulge}}(r)$ & = & \parbox{58mm}{\begin{flushright} \begin{equation*}
-\frac{\rm{GM_{{bulge}}}}{r_{\rm{bulge}}+r}
\end{equation*} \end{flushright}} & $\rm{M}_{bulge}$ & $2.86\times{10^{10}}\,\rm{M}_{\odot}$ & $3.4\times{10^{10}}\,\rm{M}_{\odot}$ \\
& & & $\rm{r}_{bulge}$ & $0.61\,\rm{kpc}$ & $0.7\,\rm{kpc}$\\
$\Phi_{\rm{disk}}(R,z)$ & = &\parbox{58mm}{\begin{flushright} \begin{equation*}
-\frac{\rm{GM_{disk}}}{\left(R^2+\left(r_{\rm{disk}}+{\sqrt{(z^2+b^2)}}\right)^2\right)^{1/2}}
\end{equation*} \end{flushright}} & $\rm{M}_{disk}$ & $2.86\times{10^{10}}\,\rm{M}_{\odot}$ & $10.0\times{10^{10}}\,\rm{M}_{\odot}$\\
& & & $\rm{r}_{disk}$ & $5.4\,\rm{kpc}$ & $6.65\,\rm{kpc}$\\ \\
& & & $\rm{b}$ & $0.3\,\rm{kpc}$ & $0.26\,\rm{kpc}$\\
& & & & &\\
\hline
\hline
\end{tabular}
\caption{Parameters used for the M31 and Milky Way potential. The M31
parameters are consistent with \protect\cite{2006MNRAS.366..996G} and \protect\cite{Widrow2005} and
the Milky Way disk and bulge parameters are taken from
\protect\cite{2005ApJ...635..931B}.}
\label{tabPot}
\end{table*}
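The three potential components above, with the Table~\ref{tabPot} parameters, can be evaluated directly. The following Python fragment is a minimal sketch (not the authors' code; the dimensionless $\delta_{\rm{c}}$ is kept as an explicit input, since Table~\ref{tabPot} does not list it) implementing Eqs.~\ref{EqNFW}--\ref{EqHernquist}, with $G$ in units of ${\rm kpc\,(km\,s^{-1})^2\,M_\odot^{-1}}$:

```python
import numpy as np

G = 4.30091e-6           # gravitational constant in kpc (km/s)^2 / Msun
RHO_C = 277.7 * 0.71**2  # present-day critical density in Msun / kpc^3

def phi_halo(r, r_h, delta_c):
    """NFW halo potential, Eq. (1); delta_c is the dimensionless density parameter."""
    return -4.0 * np.pi * G * delta_c * RHO_C * r_h**3 / r * np.log(1.0 + r / r_h)

def phi_disk(R, z, M_disk, r_disk, b):
    """Miyamoto-Nagai disk potential, Eq. (2), in cylindrical coordinates (kpc)."""
    return -G * M_disk / np.sqrt(R**2 + (r_disk + np.sqrt(z**2 + b**2))**2)

def phi_bulge(r, M_bulge, r_bulge):
    """Hernquist bulge potential, Eq. (3)."""
    return -G * M_bulge / (r_bulge + r)

# Example: M31 disk and bulge values from Table 1, evaluated at 10 kpc
phi_d = phi_disk(R=10.0, z=0.0, M_disk=2.86e10, r_disk=5.4, b=0.3)
phi_b = phi_bulge(r=10.0, M_bulge=2.86e10, r_bulge=0.61)
```

All three components are negative (attractive) and approach zero at large radii, which provides a quick sanity check on the parameter values.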
To assess the stability of satellite galaxy planes through time, the
numerical model focuses on a simulated set of satellite galaxies
around M31. We create 30 satellite galaxies as point masses initially
set on a plane aligned with the disk of M31 ($z=0$), which will be referred to
as an \textit{equatorial} plane or as the $\theta=0\degree$ plane. We construct the
initial $\theta=0\degree$ plane by giving the satellites a random value of
their $z$-position in the range $-5$ to $+5$ {\rm\,kpc}, drawn from a uniform distribution.
Additionally, their
radial distances are randomly scattered between 50 and 250 {\rm\,kpc}\ of the
centre of the M31 potential. Velocities of the satellites are determined by
two components - the velocity on the plane (planar velocity) and
perpendicular to the plane (perpendicular velocity). The planar
velocity is calculated for right-handed co-rotation and restricted to
keep the satellites on elliptical orbits bound to M31. Orbital ellipticities
are drawn from the range $\epsilon = [0.5, 1.0]$ to create nearly circular orbits. Velocities perpendicular to the
plane are set to 0.0 ${\rm\,km\,s^{-1} }$ in order to create a motion that is completely
confined to the plane. This allows us to observe the effects of various
parameter changes more clearly. By rotating this initial plane and its
velocities about a given axis, we later manipulated the plane's
alignment with the M31 galactic disc and dark matter halo equator.
The numerical models are integrated for 5 Gyrs, with snapshots
at intervals of 0.1 Gyrs, using a Leapfrog algorithm \citep{Springel2005}. We begin by looking for a collection of 10
or more satellites in the plane that was set at $t = 0 $ (the initial
plane), that includes the centre of M31. The root-mean-square distance
($D_{\rm{rms}}$) of a satellite galaxy to the plane is taken to be $\leq$15
{\rm\,kpc}\ for it to be considered a part of the planar formation. The
plane considered here is more than twice the thickness of the M31 Vast Thin Plane of Satellites (VTPoS) and about the thickness of the MW Great Polar Plane. The
number of satellites required to define a plane is smaller than the number of
satellites in both the M31 and MW planes. If we cannot find 10 or more
satellites within 15{\rm\,kpc}\ at a given time-step, it is regarded as a
snapshot where a plane cannot be observed. We search for satellites
that are in the above mentioned range of the initially set plane at
each time-step. We count the number of satellites on the plane and
divide by the total number of satellites to calculate the probability of
seeing a satellite on a plane at a given time step, $P$. The
probability is obtained by averaging values from 750 orbital
integrations for each variation of each parameter tested.
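The initialisation and plane-counting procedure described above can be summarised in a short Python sketch (illustrative only, with function names of our own choosing; the acceleration function standing in for the full M31$+$MW potential is left abstract):

```python
import numpy as np

rng = np.random.default_rng(1)
N_SAT, D_MAX, N_MIN = 30, 15.0, 10  # satellites, plane half-thickness (kpc), minimum members

def initial_positions(n=N_SAT):
    """Satellites near the z = 0 plane: |z| <= 5 kpc, 50 < r < 250 kpc from M31."""
    z = rng.uniform(-5.0, 5.0, n)
    r = rng.uniform(50.0, 250.0, n)
    az = rng.uniform(0.0, 2.0 * np.pi, n)
    rho = np.sqrt(r**2 - z**2)  # cylindrical radius consistent with total distance r
    return np.column_stack([rho * np.cos(az), rho * np.sin(az), z])

def plane_probability(pos, normal=np.array([0.0, 0.0, 1.0])):
    """P = (members within D_MAX of the plane through the origin) / N;
    zero if fewer than N_MIN members, i.e. no plane is observed."""
    d = np.abs(pos @ normal) / np.linalg.norm(normal)
    members = np.count_nonzero(d <= D_MAX)
    return members / len(pos) if members >= N_MIN else 0.0

def leapfrog_step(pos, vel, accel, dt):
    """One kick-drift-kick leapfrog step; accel(pos) returns accelerations."""
    vel_half = vel + 0.5 * dt * accel(pos)
    pos_new = pos + dt * vel_half
    vel_new = vel_half + 0.5 * dt * accel(pos_new)
    return pos_new, vel_new
```

At $t=0$ all 30 satellites lie within 5 {\rm\,kpc}\ of the plane, so the membership count is 30 and $P=1$; once fewer than 10 members remain within 15 {\rm\,kpc}, the snapshot is counted as one where no plane is observed.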
\section{Effect of the Milky Way on the Plane of Satellites}\label{MW_pl}
Before exploring the effects of properties such as perpendicular velocity, we investigate the role of the MW on the probability of seeing satellite planes. Throughout the paper, tests are conducted with a static potential representing the MW. We also conduct our tests in an M31 potential with a smooth dark matter halo, in an environment void of any dark subhalos. Our numerical model (Sec.~\ref{NumMod}) keeps the MW at a distance of 779 kpc and conducts tests for an integration time of 5 Gyrs. The M31 plane is observed to be inclined at $\sim$50$\degree$ to the disk of the galaxy \citep{Ibata2013}. Therefore, we also introduce an incline to our satellite plane formations, where the plane orientation and the satellites' total velocities are rotated by an angle $\theta$ about the $x$ axis
from 0 to 90$\degree$ in 15$\degree$ intervals.
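The inclination is applied by rotating both the satellite positions and velocities about the $x$ axis; a minimal sketch of this step (our own helper, not the authors' code):

```python
import numpy as np

def rotate_about_x(vectors, theta_deg):
    """Rotate an (n, 3) array of position or velocity vectors by theta about the x axis."""
    t = np.radians(theta_deg)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, np.cos(t), -np.sin(t)],
                  [0.0, np.sin(t),  np.cos(t)]])
    return vectors @ R.T

# theta = 0 leaves the equatorial plane unchanged; theta = 90 gives a polar plane
polar = rotate_about_x(np.array([[0.0, 1.0, 0.0]]), 90.0)  # maps +y onto +z
```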
The resulting probabilities for a plane lasting through the 5 Gyrs without the destructive forces of additional dark matter halos remain high ($P \geq 0.9$) for planes of all orientations. Extending the integration time to 10 Gyrs, we noticed a striking difference in the probabilities for the existence of the satellite plane (Fig.~\ref{Fig.Gyrs10}). The equatorial planes show a dramatic drop in $P$ after the 5 Gyr mark: we find a probability of a little less than $P = 0.6$ for detecting an equatorial plane by the end of 7 Gyrs. Finer examination of the orbits shows that the critical factor in dispersing the equatorial planes is the long-term effect of the MW, positioned at an angle of $\sim$10$\degree$ off the M31 galactic disk. Another important observation is that polar planes are not affected by the MW position in 10 Gyrs of orbital time. With the MW nearly tangent to polar planes of M31, and at a larger distance than for equatorial orientations, its pull on a plane of satellites is minimal. This dispersion of equatorial planes is not seen in simulations without the MW potential. The first 5 Gyr integration period shows no pronounced effects from the MW potential on the plane of M31 satellites. As the MW maintains a distance larger than its current distance to M31 for a large portion of the integration time, a plane's probability of survival over an initial 5 Gyr integration period is not greatly affected by the MW.
We can conclude that the MW's influence on the long-term survival of a plane in M31 varies with position and movement. The static Milky Way's influence on a satellite plane in M31 appears after the 5 Gyr period that is examined by the tests and models explored next (Sections~\ref{Stb_VelMass} and~\ref{Stb_NSH}). The proper motion of the Milky Way places it at a larger distance throughout the explored 5 Gyr period into the past. However, for integration times $\leq$ 10 Gyrs the influence of the nearest large neighbour becomes non-negligible, and the distance to the Milky Way becomes crucial to maintaining the plane of satellites.
Members of VTPoS share their neighbourhood with larger satellites such as M33, that may act as perturbers of the plane and influence a plane's longevity. While the more massive satellites of M31 (and the higher mass VTPoS members like M32) are likely to have a significant effect on our simulated plane, including such perturbers in our simulation may cloud effects from the more intrinsic properties that we explore in this paper.
\begin{figure}
\includegraphics[width = 1.1\columnwidth]{GPL10_MW.pdf}
\caption{Probability ($P$) of finding a satellite (of mass $10^{9} {\rm\,M_\odot }$) on planes at varying angles to the $z=0$ plane in the $x$ direction, in a spherical dark-matter halo during 10 Gyrs.}
\label{Fig.Gyrs10}
\end{figure}
\section{Effects of Velocity and Mass on a Plane of Satellites around M31}\label{Stb_VelMass}
We expect that the velocity and mass of satellite galaxies will play crucial roles in
maintaining a stable planar formation as they are major factors in determining the dispersion and
self-interaction of satellites. In the following section, we will consider the impact
of these on the longevity of satellite planes, by using the numerical model established in Sec~\ref{NumMod}.
\subsection{Variation of the Velocity Perpendicular to the Plane}\label{StbVel}
This section considers the addition of velocity components perpendicular to the planar orbits of the satellites. Velocities are drawn from a Gaussian distribution with a mean, $\mu$ =
0 ${\rm\,km\,s^{-1} }$, and standard deviations, $\sigma$, of 0, 5, 10, 20, 30, 50
and 75 ${\rm\,km\,s^{-1} }$, all calculated in the reference frame centred on M31. The mass of the satellites is
chosen to be $10^9 {\rm\,M_\odot }$, a value representative of the halo population of M31. The satellites are placed initially on the $y=0$ plane, in a polar formation with respect to the disk of the M31 potential (Sec.~\ref{NumMod}), to discount any effect on the plane from the disk of the host galaxy.
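Drawing the perpendicular velocity components for a given dispersion can be sketched as follows (illustrative only; the function name is ours):

```python
import numpy as np

rng = np.random.default_rng(7)
SIGMAS = (0, 5, 10, 20, 30, 50, 75)  # km/s, the dispersions listed above

def perpendicular_velocities(sigma, n=30):
    """Out-of-plane velocity components drawn from N(mu=0, sigma), in km/s."""
    return rng.normal(loc=0.0, scale=sigma, size=n)

# one realisation of the perpendicular velocities for each tested dispersion
draws = {s: perpendicular_velocities(s) for s in SIGMAS}
```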
Fig.~\ref{Fig.1} shows the effects of velocity on a plane; recall that an initial population of 30 dwarf galaxies was considered and a plane is defined as consisting of 10 satellite systems. Clearly, if there were no off-plane velocity and no gravitational interactions between the dwarfs, all members would retain their planar configuration. This is reflected in Fig.~\ref{Fig.1}, with $P = 1.0$ initially; after a couple of Gyrs, the probability for detecting a plane of 10 satellites from a population of 30 steadily falls, although it remains $> 90\%$ after 5 Gyr. As the off-plane velocity is increased, the probability for detecting a plane of 10 satellites drops relatively rapidly.
Examining Fig.~\ref{Fig.1} in more detail, it is clear that dispersions in the range 5--20 ${\rm\,km\,s^{-1} }$ result in a steady decline in the observation of a remaining plane from the initial population.
Increasing the velocity dispersion significantly reduces the longevity of the planar structure. At 30 ${\rm\,km\,s^{-1} }$
the effect is to reduce the observability of satellite planes to 0.5 on the cosmologically short timescale of 1 Gyr, although it reaches a minimum of 0.4 at 0.5 Gyrs. This drop and subsequent rise suggest that detections of planar structures after 0.5 Gyrs are due to a combination of a smaller number of remaining members on the plane plus interlopers flying through.
Increasing the velocity dispersion results in a more rapid demise of planar structures. At the extreme considered here
the plane disperses almost immediately. This has significant implications for the potential accretion origin of the observed planar
structures, something we will return to in Sec.~\ref{Discn}.
\subsection{Variation of Satellite Galaxy Masses}\label{StbMass}
Simulations of cosmic structure growth produce a broad range of masses for baryonic and dark satellites, as shown in \citet{Sawala2014}, where hundreds of dwarf galaxies in the $10^6$--$10^{12}$ ${\rm\,M_\odot }$ range are found within a radius $\leq 300$ ${\rm\,kpc}$ of a MW-like host. Observationally, the stellar masses for the known satellites of M31 are found in the range $10^6$--$10^{9}$ ${\rm\,M_\odot }$ \citep{Shaya2013}, and dwarf galaxies (especially dwarf spheroidals) are known to exhibit high mass-to-light ratios \citep{Mateo2011}; hence this stellar component is thought to reside in a large dark matter component. To account for this and to test the influence of different satellite masses on the stability of planes of satellites, we give the dwarfs individual masses of $10^7$, $10^8$, $10^9$, $1.5\times10^9$ and $10^{10}$ ${\rm\,M_\odot }$, encompassing the observed broad range of satellites.
As we have already seen in Fig.~\ref{Fig.1}, the tangential velocity vector heavily influences the longevity of satellite planes, so here we consider the extreme case with all of the satellites initially on the $z=0$ plane with zero perpendicular velocity. The results of the orbital integration in this scenario are presented in Fig.~\ref{Fig.2}. As expected, the lower mass satellites undergo the weakest self-interactions, with a significant probability for the identification of a plane of satellites after 5 Gyrs. This self-interaction increases as the satellite masses are increased, leading to a more rapid destruction of the initial satellite plane. The orange line represents the probability evolution for planes where satellite galaxies are given a range of masses from $10^{7}$ to $10^{10}$ ${\rm\,M_\odot }$. Satellites with larger masses perturb the smaller satellites, disrupting their orbits on the plane.
It is important to remember that the results presented in Fig.~\ref{Fig.2} are for the most ideal initial conditions with no perpendicular velocity. So comparing Figs~\ref{Fig.1} and~\ref{Fig.2} we can draw interesting conclusions on both the effects of perpendicular velocities and
masses on the longevity of a satellite plane; considering realistic masses for the dwarf satellite population, even in the most idealistic and
unphysical situation, an extremely cold accretion with no velocity dispersion out of the plane, any plane of satellite galaxies will disperse in
a few Gyrs.
\begin{figure}
\includegraphics[width = 1.1\columnwidth]{VelT_init_pls15.pdf}
\caption{Probability of finding a satellite on the $z=0$ plane ($P$) for varying velocities (${\rm\,km\,s^{-1} }$) perpendicular to the plane, for satellites of mass $10^{9} {\rm\,M_\odot }$.}
\label{Fig.1}
\end{figure}
\begin{figure}
\includegraphics[width = 1.1\columnwidth]{MassT2_init_pls15_mvar.pdf}
\caption{Probability of finding a satellite on the $z=0$ plane ($P$) for varying satellite masses (${\rm\,M_\odot }$), with plane-perpendicular velocities of $0 {\rm\,km\,s^{-1} }$.}
\label{Fig.2}
\end{figure}
\section{Influence of Non-Spherical Halos and Relative Orientation of the Plane }\label{Stb_NSH}
So far, we have considered a simple spherical potential to represent dark matter distribution of both M31 and the MW.
However, cosmological simulations suggest that realistic dark matter halos should be flattened or even triaxial; it is well known
that orbits in non-spherical halos precess, and we would expect that such precession will significantly impact the
longevity of planar structures. \citet{Bowden2013} consider planes in triaxial halos, to show that, through precession, they double their thickness by the end of 10 Gyrs when the plane is not aligned with the semi-major or semi-minor
axis of the triaxial host halo they inhabit. In this subsection we undertake a broader examination of the
effects of non-spherical halos on a plane of galaxies, by `flattening' the otherwise spherical NFW halo profile.
To study the influence of flattened halo potentials, we introduced a flattening parameter, $q$,
to the $z$ axis of the standard NFW equation (Eq.~\ref{EqNFW}).
For this, we
replace $r^2$ of $\Phi_{\rm{halo}}(r)$ with
\begin{equation}
r^2 = x^2 + y^2 + (qz)^2 .
\end{equation}
We consider flattening parameters $q$ = 1.67, 1.43, 1.25, 1.11, 1.0, 0.975, 0.95, 0.9, 0.8, 0.7 and 0.6; $q < 1.0$ produces prolate halos while $q > 1.0$ produces oblate halos (with $q = 1.11$ being equivalent to $q = 1/0.9$, etc.).
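For concreteness, the flattened potential can be sketched in Python (an illustrative implementation only; the gravitational constant is in ${\rm\,kpc}$ (km/s)$^2$/${\rm\,M_\odot }$ units, and the halo mass, scale radius and concentration are placeholder values, not the M31 parameters adopted in this paper):

```python
import numpy as np

G = 4.30091e-6  # gravitational constant, kpc (km/s)^2 / Msun

def nfw_potential_flattened(x, y, z, q=1.0, m_vir=1.2e12, r_s=20.0, c=12.0):
    """NFW potential with the z-axis flattening r^2 -> x^2 + y^2 + (q z)^2.
    q > 1 gives oblate halos and q < 1 prolate halos (the convention in the text)."""
    r = np.sqrt(x**2 + y**2 + (q * z)**2)
    m_char = m_vir / (np.log(1.0 + c) - c / (1.0 + c))  # NFW mass normalisation
    return -G * m_char * np.log(1.0 + r / r_s) / r
```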
Again, we restrict the satellites to the equatorial plane $z=0$, each given a
perpendicular velocity of 0 ${\rm\,km\,s^{-1} }$ and a mass of $10^{9}$ ${\rm\,M_\odot }$; Fig.~\ref{Fig.2}
shows that satellites at this mass scale and below exhibit little notable gravitational self-interaction.
\subsection{Inclined Planes in Non-Spherical Halos}\label{Incl_Pl}
As described in the results of Sec.~\ref{MW_pl}, we found that the satellite plane's orientation with respect to the host halo plays a key role in the plane's stability. In the case of the Milky Way, the observed plane of satellites is perpendicular to the stellar disk \citep{Kroupa2005}, whereas in the case of M31 this is not the case. Most MW--M31 size galaxies in cosmological simulations show their disk to be aligned with their dark matter halo \citep{Vera-Ciro2011}. It is therefore likely that the M31 VTPoS is misaligned with the axes of its host's dark matter halo.
Here, we introduce various inclinations of the plane of satellites
with respect to flattened halos with different values of $q$. Initially the satellite galaxies are on the $z=0$ equatorial plane (with
velocities restricted to the plane so the off plane dispersion is zero). Following the method of Sec.\ref{MW_pl}, the plane of satellites and their
velocities are rotated at an angle of $\theta$ about the $x$ axis
from 0 to 90$\degree$ in 15$\degree$ intervals. The orbits are
integrated for 5 Gyrs and the values of the probability are shown in Fig.~\ref{Fig.3}.
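The inclination procedure can be sketched as a rotation of both positions and velocities about the $x$ axis (a minimal illustration; the satellite coordinates are hypothetical):

```python
import numpy as np

def rotate_about_x(vectors, theta_deg):
    """Rotate an (N, 3) array of position or velocity vectors by
    theta_deg about the x axis (right-handed convention)."""
    t = np.radians(theta_deg)
    rot = np.array([[1.0, 0.0, 0.0],
                    [0.0, np.cos(t), -np.sin(t)],
                    [0.0, np.sin(t),  np.cos(t)]])
    return vectors @ rot.T

# tilt an initially equatorial (z = 0) plane in 15-degree steps
angles = list(range(0, 91, 15))                                    # 0, 15, ..., 90
tilted = rotate_about_x(np.array([[0.0, 100.0, 0.0]]), angles[3])  # 45 degrees
```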
\begin{figure*}
\centering
\includegraphics[width = 0.8\textwidth ]{QPlot_All2.pdf}
\caption{Probability of finding a satellite on an inclined plane ($P$) of 10 or more galaxies within $D_{\rm{rms}} \leq 15 {\rm\,kpc}$, and of mass $10^9$ ${\rm\,M_\odot }$, for varying `flatness' ($q$) of the M31 halo}
\label{Fig.3}
\end{figure*}
\begin{table*}
\centering
\begin{minipage}{140mm}
\caption{Probability of seeing satellites on planes ($P$) for a system of
30 satellites with 0 ${\rm\,km\,s^{-1} }$ perpendicular velocity, satellite mass =
$10^9 {\rm\,M_\odot }$, minimum number of satellites on a plane = 10, and
$D_{\rm{rms}}$ = $\pm$ 15 ${\rm\,kpc}$ } \label{Table.2}
\begin{tabular}{|p {17mm} | p {8mm}| p {8mm}| p {8mm}| p {8mm}| p {8mm}| p {8mm} |p {8mm} |p {8mm} |p {8mm}|p {8mm}|p {8mm}|}
\hline
\multicolumn{12}{|c|}{\textbf{Part (a).} Average probability of seeing a satellite on initial planes during 5 Gyrs}\\
\hline
\backslashbox{Angle}{q}&1.67&1.43&1.25&1.11&1.0&0.975&0.95&0.9&0.8&0.7&0.6\\
\hline
0$\degree$&0.99&0.99&0.98&0.98&0.93&0.93&0.93&0.92&0.91&0.90&0.89\\
15$\degree$&0.37&0.49&0.69&0.91&0.94&0.93&0.92&0.86&0.68&0.54&0.43\\
30$\degree$&0.23&0.31&0.46&0.76&0.95&0.94&0.89&0.73&0.48&0.32&0.25\\
45$\degree$&0.23&0.29&0.42&0.71&0.96&0.94&0.88&0.67&0.40&0.27&0.21\\
60$\degree$&0.28&0.35&0.49&0.77&0.97&0.96&0.91&0.72&0.43&0.28&0.21\\
75$\degree$&0.49&0.59&0.74&0.92&0.98&0.98&0.96&0.88&0.62&0.41&0.30\\
90$\degree$&0.97&0.97&0.98&0.98&0.98&0.98&0.98&0.98&0.98&0.98&0.98\\
\hline
\multicolumn{12}{|c|}{\textbf{Part (b).} Average probability of seeing a satellite on best- fit planes during 5 Gyrs }\\
\hline
\backslashbox{Angle}{q}&1.67&1.43&1.25&1.11&1.0&0.975&0.95&0.9&0.8&0.7&0.6\\
\hline
0$\degree$&0.99&0.99&0.99&0.96&0.96&0.95&0.95&0.94&0.93&0.98&0.98\\
15$\degree$&0.71&0.75&0.82&0.94&0.96&0.96&0.94&0.90&0.79&0.70&0.62\\
30$\degree$&0.55&0.62&0.71&0.84&0.97&0.96&0.92&0.82&0.67&0.55&0.42\\
45$\degree$&0.48&0.57&0.67&0.81&0.97&0.96&0.91&0.80&0.62&0.48&0.38\\
60$\degree$&0.49&0.59&0.69&0.84&0.98&0.97&0.93&0.81&0.63&0.45&0.34\\
75$\degree$&0.62&0.69&0.80&0.93&0.98&0.98&0.97&0.90&0.70&0.48&0.33\\
90$\degree$&0.97&0.97&0.98&0.98&0.98&0.98&0.98&0.98&0.98&0.98&0.98\\
\hline
\end{tabular}
\end{minipage}
\end{table*}
Fig.~\ref{Fig.3} presents the survivability of planes (of satellites) at the differing inclinations and
flattening values, $q$.
Considering equatorial planes ($\theta$ = $0\degree$), we can see that the non-spherical nature of the halo
has a mild influence on the probability of finding a satellite on a plane.
As the flattening of the potential is increased/decreased from 1.0, the probability for finding a planar distribution experiences little variation with respect to that seen for a fully spherical halo.
Such planes, therefore, possess long-term dynamic stability,
with a high probability of seeing a plane at any given time within 5
Gyrs. As the angle is increased from 45$\degree$ to 90$\degree$, the plane
gets closer to the $z$ axis and $P$ notably increases.
The average probability of finding a plane (minimum of 10 satellites)
over 5 Gyrs is given in part (a) of Table~\ref{Table.2}, with columns varying halo flatness and
rows varying plane inclination. The patterns reflect the overall trends
of Fig.~\ref{Fig.3}: the average probability changes with inclination and $q$
value. It is evident that in extremely prolate ($q$ = 0.6) and oblate ($q$ = 1.67) halos, even if
satellites start on a plane, each satellite has only a probability of
0.21 of being seen on that plane through 5 Gyrs. The potential change
created by a flattened halo does not greatly affect satellites set at $z=0$, allowing those planes to last longer than inclined
planes.
Planes rotated about the $x$ axis show different probabilities for all
non-spherical halos starting from $q = 0.95$ (Table~\ref{Table.2}). Though
it appears as a very small change from $q=1.0$, it has a visible
effect on the number of satellites that remain on the initial planar
region for 5 Gyrs. As the rotated planes get closer
to another axis ($z$ in this case), the probability values get higher.
Reducing $q$ to 0.95 will give satellites on any inclined plane a
final probability $\leq$ 0.5 (Fig.~\ref{Fig.3}).
However, starting from $q$ = 0.9 (Fig.~\ref{Fig.3}) the $P$ values decline at a very rapid rate, most clearly seen at the higher angles of inclination. For a plane with a higher inclination about the $x$ axis, the probability of seeing a satellite on the initial plane region
decreases. For a 45$\degree$ inclination, the
probability $P$ decreases by $\sim$40\% and the planes disintegrate
faster as the initial plane incline is increased. This trend continues
for other $q \neq 1.0$ values. In fact, the effect is greatly
amplified as $q$ values decrease to form prolate halos or increase to form oblate halos.
In particular for a plane with a 45$\degree$ inclination, the probability
$P$ is similar to that of a random distribution after 5 Gyrs. As we
further reduce $q$, all inclined planes will disintegrate rapidly and reach a random distribution after around 2.5 Gyrs
(Fig.~\ref{Fig.3}). For $q$ = 0.9, planes with an inclination larger than
30$\degree$ to a principal axis are reduced to a probability less than
0.5 in 5 Gyrs. The time scale to reach similar probability for $q$ =
0.8 is effectively halved from 5 Gyrs to 2--3 Gyrs for all
inclined planes. Further decreasing $q$ will cause the probabilities
to decrease to $P$=0.2 an entire Gyr earlier for all inclined planes,
reducing the lifetime of a visible satellite plane to less than 3
Gyrs. The trends are similar when increasing the oblate nature of the host halo ($q > 1.0$), with the changes to the probability being approximately symmetric between $q$ and $1/q$.
The average time a satellite spends on a plane is calculated and given
in part (a) of Table~\ref{Table.3}. This is not the average
continuous time; rather, it accumulates all the time-steps at which a given
satellite is within its initial plane range, giving the total time
for which it can be spotted in its starting planar formation. Planes set on
the $x$ and $z$ axes are likely to stay for more than 4.4 out of the 5
Gyrs on the plane despite changes of $q$. Inclined planes
clearly reflect the trends of Table~\ref{Table.2} and Fig.~\ref{Fig.3}. For halos with a flatness of
$q=0.8$, planes at 45$\degree$ will be observable for 2 Gyrs on average.
But for more prolate halos ($q=0.6$) and more oblate shapes ($q=1.67$), the overall lifetime of all
inclined planes is smaller than 2 Gyrs. Dark matter only simulations
calculate triaxial axis ratios to be around $b/a \sim$ 0.6 and $b/c
\sim $0.4 \citep{Jing2002, Allgood2006}. Therefore, although $q=0.6 $/$1.67$ represents a smaller flattening than the average values for host halos, planes with inclination as small as 15$\degree$ to major axes have an average lifetime of only 2 Gyrs or less.
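The cumulative time statistic of Table~\ref{Table.3} can be sketched as follows (a toy illustration; the boolean membership array is hypothetical, and the 0.1 Gyr spacing is an assumed snapshot interval):

```python
import numpy as np

def cumulative_time_on_plane(on_plane, dt=0.1):
    """on_plane: (T, N) boolean array, True when satellite n lies within the
    initial plane region at snapshot t; dt: snapshot spacing in Gyr.
    Returns the accumulated time (Gyr) each satellite is seen on the plane,
    summed over all snapshots rather than requiring a continuous stretch."""
    return np.asarray(on_plane).sum(axis=0) * dt
```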
\subsection{Fitted Planes and Plane Precession}\label{Fit_Prec}
In the previous sections, we have calculated the probability of finding a satellite
galaxy on the `original' plane it was set on. Flattened halos show an interesting property in simulations: the orbits
of satellites precess over time.
\citet{Ibata2001} demonstrate how the Milky Way is unlikely to have a halo
flatter than $q$ =0.7 with their analysis and simulations of the
Sagittarius dwarf galaxy and its tidal stream. \citet{Bowden2013}\
suggest that precessing planes could not be found for the 30 most massive
satellites examined in their zoom-in dark matter
simulations run for 5 Gyrs. For their test halos, starting from infall
times, the plane found at each timestep was not a slight precession
from the previous plane but an altogether different plane.
What if satellite planes change their orientation and precess with
time? As a plane disperses we still may see planar formations at
different angles of inclination. A plane-fitting method should detect
these precessing planes of satellites as the `best-fit plane' for each
time step. For precessing planes, the probability of finding a
satellite on a fitted plane should be larger than the probability of
finding a satellite on the initial plane (calculated in part (b) of
Table~\ref{Table.2}). To test this hypothesis, we used least squares fitting
to find the best fit plane that also contains the center of M31. Here,
a plane retains the same definition of `10 or more satellites
within a distance of $D_{\rm{rms}}$ = 15 ${\rm\,kpc}$ to the best fit
plane'. Plane fitting is applied to each snapshot of the 5 Gyr
integration which are taken at 0.1 Gyr intervals.
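The best-fit plane through the host centre can be obtained with a singular value decomposition, which is equivalent to the least-squares fit described above (a sketch, not the paper's code; positions are assumed to be given relative to M31's centre):

```python
import numpy as np

def best_fit_plane(pos, d_rms_max=15.0, min_members=10):
    """pos: (N, 3) satellite positions relative to the host centre (kpc).
    Fits the least-squares plane through the origin (normal = direction of
    least variance) and applies the plane definition used in the text:
    10 or more satellites within d_rms_max of the best-fit plane."""
    _, _, vt = np.linalg.svd(pos, full_matrices=False)
    normal = vt[-1]                          # singular vector of least variance
    dist = pos @ normal                      # signed perpendicular distances
    members = np.abs(dist) <= d_rms_max
    d_rms = np.sqrt(np.mean(dist[members]**2)) if members.any() else np.inf
    is_plane = members.sum() >= min_members and d_rms <= d_rms_max
    return normal, d_rms, is_plane
```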
The average probability of a satellite being on the best-fit plane at
any given time through 5 Gyrs is given in part (b) of
Table~\ref{Table.2}. The trends of increasing probability as planes approach the axes from 45$\degree$, and of decreasing probability
for flatter halos, are already apparent in the averaged values for an
initial plane (Table~\ref{Table.2} part (a)) and continue here too.
Halos with $ 0.8 \leq q \leq 1.0$ show around 10\% increase of $P$
from the initial plane probability to best-fit probability and more
prolate/oblate shapes have $P$ increased by $\sim$20\%. Notable changes are
only seen for planes not aligned with the axes. For satellites on
equatorial and polar planes in prolate halos with $q=0.6$, we can
see only a 10\% increase between the probability of finding it on its
original plane to being aligned with the best-fit plane of the
snapshot.
Part (b) of Table~\ref{Table.3} calculates the average time we can see
a satellite on the best-fit planar formation during 5 Gyrs. We see
that the polar and equatorial satellites spend nearly all the 5 Gyrs
in a planar formation that is larger than 10 satellites and has a
thickness of $\sim$30 ${\rm\,kpc}$. Inclined planes in halos with $ 0.8 \leq q \leq 1.0$ show that satellites stay in the planes for at most 4 Gyrs and at least around 2 Gyrs. We can see that a group of satellites that
start out on a plane stay in a planar formation for at least 2 out of
5 Gyrs from its time of assembly, even when orbiting dark matter halos
as prolate as $q= 0.6$ and as oblate as $q=1.67$. However, given that our satellites are confined to the plane due to their initial velocities, we can expect the
satellites with non-zero perpendicular velocities to show a lower
average time on their initial planes. It is important to note here
that we are focusing on planes containing at least 30\% of the entire
satellite galaxy population - it is likely that planar formations with
a smaller number of satellites have a higher probability of being
found, but their significance with respect to the rest of the
population is smaller than what we can see in both observational
examples of M31 and MW.
\begin{table*}
\centering
\begin{minipage}{140mm}
\caption{Average time spent on planes for a system of 30 satellites with
0 ${\rm\,km\,s^{-1} }$ perpendicular velocity, satellite mass = $10^9 {\rm\,M_\odot }$,
minimum number of satellites on a plane = 10, and $D_{\rm{rms}}$
= $\pm$15 ${\rm\,kpc}$ } \label{Table.3}
\begin{tabular}{|p {17mm} | p {8mm}| p {8mm}| p {8mm}| p {8mm}| p {8mm}| p {8mm} |p {8mm} |p {8mm} |p {8mm}|p {8mm}|p {8mm}|}
\hline
\multicolumn{12}{|c|}{\textbf{Part (a).} Average time (Gyrs) a satellite spends on its initial plane in 5 Gyrs }\\
\hline
\backslashbox{Angle~}{q~}&1.67&1.43&1.25&1.11& 1.0 & 0.975& 0.95 & 0.9 & 0.8 & 0.7 & 0.6 \\
\hline
0$\degree$ &4.9&4.9&4.9&4.9& 4.7 & 4.7 & 4.6 & 4.6 & 4.6 & 4.5 & 4.4 \\
15$\degree$ &1.9&2.5&3.4&4.6& 4.7 & 4.6 & 4.6 & 4.3 & 3.4 & 2.7 & 2.0 \\
30$\degree$ &1.2&1.5&2.3&3.8& 4.7 & 4.7 & 4.4 & 3.6 & 2.4 & 1.6 & 1.2 \\
45$\degree$ &1.2&1.4&2.1&3.6& 4.8 & 4.7 & 4.4 & 3.3 & 2.0 & 1.3 & 1.0 \\
60$\degree$ &1.4&1.7&2.5&3.9& 4.8 & 4.8 & 4.5 & 3.6 & 2.1 & 1.4 & 1.0 \\
75$\degree$ &2.4&2.9&3.7&4.6& 4.9 & 4.9 & 4.8 & 4.4 & 3.1 & 2.0 & 1.5 \\
90$\degree$ &4.8&4.9&4.9&4.9& 4.9 & 4.9 & 4.9 & 4.9 & 4.9 & 4.9 & 4.8\\
\hline
\multicolumn{12}{|c|}{\textbf{Part (b).} Average time (Gyrs) a satellite spends on the best-fit plane in 5 Gyrs }\\
\hline
\backslashbox{Angle}{q}& 1.67&1.43&1.25&1.11& 1.0 & 0.975& 0.95 & 0.9 & 0.8 & 0.7 & 0.6 \\
\hline
0$\degree$ &4.9&4.9&4.9&4.9& 4.8 & 4.8 & 4.8 & 4.7 & 4.7 & 4.6 & 4.6 \\
15$\degree$ &3.6&3.8&4.1&4.7& 4.8 & 4.8 & 4.7 & 4.5 & 3.9 & 3.5 & 3.1 \\
30$\degree$ &2.8&3.1&3.5&4.2& 4.8 & 4.8 & 4.6 & 4.1 & 3.3 & 2.7 & 2.2 \\
45$\degree$ &2.4&2.8&3.4&4.0& 4.8 & 4.8 & 4.5 & 3.9 & 3.1 & 2.4 & 1.9 \\
60$\degree$ &2.5&2.9&3.5&4.2& 4.9 & 4.9 & 4.6 & 4.1 & 3.1 & 2.2& 1.7 \\
75$\degree$ &3.1&3.5&4.0&4.7& 4.9 & 4.9 & 4.8 & 4.5 & 3.5 & 2.4& 1.6 \\
90$\degree$ &4.9&4.9&4.9&4.9& 4.9 & 4.9 & 4.9 & 4.9 & 4.9 & 4.9 & 4.9\\
\hline
\end{tabular}
\end{minipage}
\end{table*}
Normal vectors of best-fit planes provide a more descriptive view of
the orientation and time evolution of the best-fit planes.
Figures~\ref{Fig.4} and \ref{Fig.5} show the normal direction average
of the best-fit plane for each snapshot in time for $q=0.8$ and
$q=0.6$ in Aitoff projections. A colormap for normal vectors begins at
0 Gyrs in red and proceeds on to blue as time goes to 5 Gyrs.
Figs~\ref{Fig.4.a}--\ref{Fig.4.e} show little or no deviation
of the normal vector from the initial position. This shows that a
plane created at any inclination will only show slight deviation from its
initial position through 5 Gyrs in a near-spherical halo structure. The
next set of figures, representing a flatter halo ($q=0.6$), in
Figs~\ref{Fig.5.a}--\ref{Fig.5.e}, shows that the normal vectors of the
best fit planes have a much bigger spread in their directions,
especially in the last 3 Gyrs. The placements of normal vectors in the
first 1-2 Gyrs show a unidirectional time evolution that can visually
seem like a `precessing plane', but this can be attributed to the
increasing thickness of the initial plane. Also, as best-fit models
will choose any satellite within the fitted plane range, it is
unlikely that we are seeing the precession of a single plane. For
every $q \leq 0.8$ halo the largest spread of normals over the Aitoff
projection occurs at 45$\degree$. The planes are oriented in a more
random manner as the flatness of the halo increases. Therefore, we
cannot draw a conclusion on the precession of satellite planes in
prolate/oblate halos.
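The Aitoff placement of the plane normals can be sketched by converting each unit normal to a longitude and latitude (illustrative only; since a normal and its antipode describe the same plane, they are folded into one hemisphere):

```python
import numpy as np

def normal_to_lonlat(n):
    """Unit normal (3,) -> (longitude, latitude) in degrees for an Aitoff map.
    n and -n define the same plane, so normals are folded into z >= 0."""
    n = np.asarray(n, dtype=float)
    if n[2] < 0:                              # fold antipodal normals together
        n = -n
    lon = np.degrees(np.arctan2(n[1], n[0]))
    lat = np.degrees(np.arcsin(np.clip(n[2], -1.0, 1.0)))
    return lon, lat
```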
\begin{figure*}
\centering
\subfigure[$\theta$=15$\degree$]{\includegraphics[width=0.45\textwidth]{NV5_q2.pdf}\label{Fig.4.a}}
\subfigure[$\theta$=30$\degree$]{\includegraphics[width=0.45\textwidth]{NV5_q3.pdf}\label{Fig.4.b}}
\subfigure[$\theta$=45$\degree$]{\includegraphics[width=0.45\textwidth]{NV5_q4.pdf}\label{Fig.4.c}}
\subfigure[$\theta$=60$\degree$]{\includegraphics[width=0.45\textwidth]{NV5_q5.pdf}\label{Fig.4.d}}
\subfigure[$\theta$=75$\degree$]{\includegraphics[width=0.45\textwidth]{NV5_q6.pdf}\label{Fig.4.e}}
\caption{Variation of the normals of best-fit planes in an Aitoff
projection, for varying inclinations, for $q=0.8$ of the M31 halo, for
satellites with $10^{9} {\rm\,M_\odot } $ mass and $0 {\rm\,km\,s^{-1} }$ perpendicular
velocity}
\label{Fig.4}
\end{figure*}
\begin{figure*}
\centering
\subfigure[$\theta$=15$\degree$]{\includegraphics[width=0.45\textwidth]{NV7_q2.pdf}\label{Fig.5.a}}
\subfigure[$\theta$=30$\degree$]{\includegraphics[width=0.45\textwidth]{NV7_q3.pdf}\label{Fig.5.b}}
\subfigure[$\theta$=45$\degree$]{\includegraphics[width=0.45\textwidth]{NV7_q4.pdf}\label{Fig.5.c}}
\subfigure[$\theta$=60$\degree$]{\includegraphics[width=0.45\textwidth]{NV7_q5.pdf}\label{Fig.5.d}}
\subfigure[$\theta$=75$\degree$]{\includegraphics[width=0.45\textwidth]{NV7_q6.pdf}\label{Fig.5.e}}
\caption{Variation of the normals of best-fit planes in an Aitoff
projection, for varying inclinations, for $q=0.6$ of the M31 halo, for
satellites with $10^{9} {\rm\,M_\odot } $ mass and $0 {\rm\,km\,s^{-1} }$ perpendicular
velocity}\label{Fig.5}
\end{figure*}
\section{Discussion}\label{Discn}
The above tests give us a view of how the stability of a satellite
galaxy system in an M31-like potential is affected by a few
fundamental factors. Apart from looking at the probability of seeing a
satellite galaxy on a plane at a given time, we have calculated the
average time a satellite galaxy spends as a part of a planar formation
(Tables~\ref{Table.2},~\ref{Table.3}; using the same definition as previous tests).
\subsection{Properties of Satellite Planes and Their Influences on Stability}
As far as the properties of the satellite galaxies are concerned,
velocities perpendicular to the plane exert the most significant
influence on the stability and longevity of the plane. Even our toy
model of 30 satellite galaxies requires a perpendicular velocity
spread $\sigma \leq$ 30 ${\rm\,km\,s^{-1} }$ (with $\mu = 0 {\rm\,km\,s^{-1} }$)
to produce a plane with at least 10 satellites, that survives a 5 Gyr orbital integration run. Without such a restricted velocity spread,
planes tend to have a lifetime smaller than 2 Gyrs.
Exploration in \citet{Buck2015} of the kinematics of planes in `zoomed
in' cosmological simulations states that around 25\%\ of satellite
galaxies had very high velocity components perpendicular to the planar
formation at the time of accretion. Based on the results presented in this paper (particularly Fig.~\ref{Fig.1}), such high perpendicular velocities of the plane satellites are likely to produce short-lived planes.
All of our satellites are given the same mass. Varying this value shows (Fig.~\ref{Fig.2}) that the
self-interaction of equal-mass satellites (in a spherical halo) alone is not enough
to completely disperse a plane of satellites. For this to be apparent, all satellites need to be $\sim$$10^{10}$ ${\rm\,M_\odot }$. But M31 satellites span a wide spectrum in mass, and only a few galaxies have a calculated total mass close to $10^{10} {\rm\,M_\odot }$. These larger members may play a significant role in disrupting the plane, while smaller satellites ($10^{6}, 10^{7}, 10^{8}$ ${\rm\,M_\odot }$) are likely to stay on an initially set plane for 5 Gyrs. To consider this variation, we ran the same tests with a spectrum of masses for the satellites (from $10^{6}$ to $10^{10}$ ${\rm\,M_\odot }$). We observe that the inclusion of satellites with masses $\geq 5\times10^{9}$ ${\rm\,M_\odot }$ causes interactions that disrupt the orbits of the smaller or similar mass satellites. This results in planes with increased thickness and gives satellites a shorter average time on the planes. However, a plane of lower mass satellites (mass $\leq 10^{9} {\rm\,M_\odot }$) will disperse at a lower perpendicular velocity limit than satellites with larger masses. Therefore, while plane-perpendicular velocities
influence the longevity of the plane, the satellite mass has an
effect in altering the maximum perpendicular velocities required for
stable formations.
Additionally, our results suggest that even satellite planes
containing galaxies of total mass $10^{9}$ ${\rm\,M_\odot }$ with a $\sigma = 0{\rm\,km\,s^{-1} }$,
need to be aligned with the host dark halo principal axes or to reside
in halos with a flatness $\leq$ 0.9 to remain stable through 5 Gyrs.
For halos that show an extreme change in their spherical nature ($q$ = 0.8 and 1.67 for prolate and oblate), satellites in planes that are inclined at $45\degree$ to the $x$ axis spend as
little as 1 Gyr within the $D_{\rm{rms}}$ = 15 {\rm\,kpc}\ range of the
initially set plane. When considering a purely observational
arrangement by least-square fitting, where the satellites are seen in
a plane by chance, an average satellite can be seen spending up to 2.5
Gyrs of the total 5 Gyrs on the best-fit plane at each snapshot. This
applies even for the most prolate/oblate halos considered. Our results agree
with \citet{Bowden2013} and their work on the quadrupole moment of the M31
dark matter halos. Their tests suggest that the torque on their simulated plane of satellite galaxies results in
a thicker distribution at the end of their 7 Gyr simulation. The
final `disc height' of satellites increased up to 25 {\rm\,kpc}\ when the
halo axes are misaligned with the stellar disc of M31 and the `disc of
satellites'.
To place our simulations in context
with observations, we changed our definition of a `satellite plane',
increasing the threshold number from 10 to 15 out of 30. The trends
that appear in Fig.~\ref{Fig.3} are not changed, but exaggerated.
The most notable change is that most initial formations (excluding
those on $\theta=0\degree, 90\degree$ planes) reach very low
probability of existing as planes of 15 or more satellites
($P\sim0.3$), 0.5 -1.0 Gyrs earlier than planes of 10 or more
satellites. This gives a shorter lifetime for M31-like planes.
Additionally, if we modify the plane-fitting function to fit the
thickness of the M31 plane restrictions (a 14 ${\rm\,kpc}$ thickness instead
of 30 ${\rm\,kpc}$), the longevity of the non-aligned planes in non-spherical
halos will decline further by $\sim$0.5 Gyrs for steeper inclines.
Because keeping a larger (than 1/3) fraction of the entire satellite
population on a plane for a 5 Gyr period is more probable in spherical halos or
for axis-aligned planes, these results further support the possibility of
the plane in M31 being closely aligned with an M31 dark halo axis.
Formation theories using Tidal Dwarf Galaxies \citep{1992Natur.360..715B} as building blocks for these structures, a scenario explored by many \citep[e.g.][]{Hammer2013, 2013sf2a.conf..227F, 2014MNRAS.442.2419Y}, also claim that older, higher concentration halos are more likely to host satellite planes, with the structures themselves formed quite
early ($\geq$ 5 Gyrs ago). This allows for giant tidal streams to
coalesce into the satellites that are seen today. Considering our
results, given the non-alignment of the M31 stellar disc to the
satellite plane and a predicted triaxial halo for M31, a plane surviving longer than 5 Gyrs will require its satellite galaxies to maintain perpendicular velocities extremely restricted to the plane of galaxies. The question to ask
is if the dynamic environments of tidal tails and accretions are
capable of producing such restricted velocities. It is also difficult to make claims on tidal galaxies as origins of these planes from our numerical model. A shared origin would influence intrinsic properties such as velocity in tidal satellite galaxies, whereas our model's positions are chosen randomly and circular velocities are assigned to keep galaxies on their initially set planes. Our model also refers to planes that are already formed, and therefore cannot sufficiently speculate on formation of planes. \citet{Bowden2013}
also consider misaligned planes as good tracers of the underlying
dark matter structure. This, taken together with the fact that the M31 plane forms a $\sim 50\degree$\ angle with the stellar disc of M31, creates another set of questions on the dark halo--stellar disk alignment of M31 and the stability of M31's VTPoS.
\section{Conclusions}\label{Concln}
Explanations for the M31 plane of satellite galaxies have taken
various avenues. By considering a numerical model of an M31 and MW
potential, we conducted orbital integration on 30 satellite galaxies
with varying environments and properties. We set satellite galaxies on
an initial plane to calculate the probabilities for stability and
longevity in a simulation time of 5 Gyrs. The following properties
were varied both individually and in combination: satellite velocities
perpendicular to the plane, mass of the satellites, inclination of
satellite plane to the triaxial host halo, and the non-spherical
nature of the host halo. From our findings so far, the following
conclusions can be inferred by the statistics obtained from each set
of simulations.\\
[1] A long-lived plane of satellite galaxies must be dynamically cold
- with the magnitude of the velocity vector perpendicular to the plane
being smaller than $20 {\rm\,km\,s^{-1} }$. A satellite plane with a perpendicular
velocity distribution with $\sigma$ = 50${\rm\,km\,s^{-1} }$ disperses to contain
about half the initial count of satellite galaxies in around 2 Gyrs.
\\
[2] The shape of the host halos and the inclination of an initial
plane of satellites, both impart great influence on
the stability of the plane. Unless a host halo is nearly
spherical, it is highly unlikely that a plane that is off a major or
minor axis by an angle larger than 30$\degree$ has existed for a
period longer than 1 Gyr. \\
[3] There are no signs of precession of one single plane continuously
over 1 Gyr. Best-fit planes show that expanding planes might show
precession-like movement, but no long term order is found in even the
simplest scenarios tested. \\
These deductions can provide us with insights into an M31-like system
of satellites. It is unlikely that the plane of
galaxies currently seen is very old ($\geq$ 4 Gyrs) unless the
system has velocities restricted to the plane. If the proper motions
of the satellites show a system with a large variation of velocities,
we can conclude that the plane is a rather young formation that is
dispersing or will do so in a short (galactic) time span. The unusual
thinness of the M31 plane either requires an extraordinarily well-aligned initial plane of galaxies in order to be older than a few Gyrs, or
dictates that this is a rather recent formation.
The surrounding environment of dark matter subhalos and interactions with them and perturbations by larger satellite galaxies of M31 (e.g. M33) will be addressed subsequently.
\section*{Acknowledgments}
N. F. acknowledges the Dean's International Postgraduate Scholarship of the Faculty of Science, University of Sydney.
\bibliographystyle{mn2e}
\section{Case Analysis}
We further demonstrate how set-RNN works with two examples.
In the first example from the RCV1-v2 dataset, the most probable set predicted by set-RNN (which is also the correct set in this example) does not come from the most probable sequence. Top sequences in decreasing probability order are listed in Table~\ref{tab:pred_case_study}. The correct label set \{forex, markets, equity, money markets, metals trading, commodity\} has the maximum total probability of 0.161, but does not match the top sequence.
\begin{table}[h]
\resizebox{1.01\columnwidth}{!}{
\begin{tabular}{l|l}
PROB & SEQUENCE\\
\hline
0.0236 & equity, markets, money markets, forex\\
0.0196 & {\bf forex, markets, equity, money markets, metals trading, commodity}\\
0.0194 & {\bf equity, markets, forex, money markets, metals trading, commodity}\\
0.0159 & {\bf markets, equity, forex, money markets, metals trading, commodity}\\
0.0157 & {\bf forex, money markets, equity, metals trading, markets, commodity}\\
0.0153 & {\bf forex, money markets, markets, equity, metals trading, commodity}\\
0.0148 & markets, equity, money markets, forex\\
0.0143 & {\bf money markets, equity, metals trading, commodity, forex, markets}\\
0.0123 & {\bf markets, money markets, equity, metals trading, commodity, forex}\\
0.0110 & {\bf markets, equity, forex, money markets, commodity, metals trading}\\
0.0107 & {\bf forex, markets, equity, money markets, commodity, metals trading}\\
0.0094 & {\bf forex, money markets, equity, markets, metals trading, commodity}\\
\hline
\end{tabular}
}
\caption{The set-RNN predicted set (also the correct set) \{forex, markets, equity, money markets, metals trading, commodity\} has the max total probability of 0.161, but does not match the top sequence. Sequences for the correct set are in bold.}\label{tab:pred_case_study}
\end{table}
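The set probability illustrated by Table~\ref{tab:pred_case_study} is the sum of the probabilities of all sequence permutations of the label set. A toy sketch (the sequence-probability function here is a hypothetical stand-in for the RNN decoder, with made-up probabilities):

```python
from itertools import permutations

def set_probability(label_set, seq_prob):
    """p(set) = sum of p(sequence) over all orderings of the label set."""
    return sum(seq_prob(seq) for seq in permutations(label_set))

# stand-in sequence model: a lookup table instead of a trained RNN
seq_probs = {('a', 'b'): 0.20, ('b', 'a'): 0.15, ('c',): 0.30}
p_ab = set_probability({'a', 'b'}, lambda s: seq_probs.get(s, 0.0))  # 0.20 + 0.15
```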
Next we demonstrate the issue with prescribing the sequence order in seq2seq-RNN with a TheGuardian example\footnote{\scriptsize This document can be viewed at \url{http://www.guardian.co.uk/artanddesign/jonathanjonesblog/2009/apr/08/altermodernism-nicolas-bourriaud}}. Figure~\ref{fig:models} shows the predictions made by seq2seq-RNN and our method. In this particular example the top sequence agrees with the top set in our method's prediction, so we can just analyze the top sequence. seq2seq-RNN predicts \texttt{Tate Modern} (incorrect but more popular label) while we predict \texttt{Tate Britain} (correct but less popular label). The seq2seq predicted sequence is in decreasing label frequency order while our predicted sequence is not. In the training data, \texttt{Exhibition} is more frequent than \texttt{Tate Britain} and \texttt{Tate Modern}. If we arrange labels by decreasing frequency, \texttt{Exhibition} is immediately followed by \texttt{Tate Modern} 19 times, and by \texttt{Tate Britain} only 3 times. So it is far more likely to have \texttt{Tate Modern} than \texttt{Tate Britain} after \texttt{Exhibition}. However, at the set level, \texttt{Exhibition} and \texttt{Tate Modern} co-occur 22 times while \texttt{Exhibition} and \texttt{Tate Britain} co-occur 12 times, so the difference is not so dramatic. In this case, imposing the sequence order biases the probability estimation and leads to incorrect predictions.
\begin{figure}[t]
\includegraphics[width=1.0\columnwidth]{figs/train_sequence.png}
\includegraphics[width=1.0\columnwidth]{figs/train_set.png}
\caption{\fontsize{10}{12}\selectfont Top: best sequence by seq2seq-RNN; bottom: best sequence by set-RNN. Above models, at each time, we list the top unigrams selected by attention.\vspace{2ex}}
\label{fig:models}
\end{figure}
\section{Conclusion}
In this work, we present an adaptation of RNN sequence models to the problem of multi-label classification for text. RNNs only directly define probabilities for sequences, not for sets. Different from previous approaches, which either transform a set to a sequence in some pre-specified order, or relate the sequence probability to the set probability in some ad hoc way, our formulation is derived from a principled notion of set probability. We define the set probability as the sum of the probabilities of all corresponding sequence permutations. We derive a new training objective that maximizes the set probability and a new prediction objective that finds the most probable set. These new objectives are theoretically more appealing than existing ones, because they give the RNN model more freedom to automatically discover and utilize the best label orders.
\label{sec:conclusion}
\section{Introduction}
\label{sec:Introduction}
Multi-label text classification is an important machine learning task wherein one must predict a set of labels to associate with a given document; for example, a news article might be tagged with labels \texttt{sport}, \texttt{football}, \texttt{2018 world cup}, and \texttt{Russia}.
Formally, we are given a set
of label candidates $\mathcal{L}=\{1,2,...,L\}$, and we aim to build a classifier
which maps a document $x$ to a set of labels $\mathbf{y}\subset \mathcal{L}$. The label set $\mathbf{y}$ is typically written as a binary vector $\mathbf{y}\in \{0,1\}^L$, with each bit $y_{\ell}$
indicating the presence or absence of a label.
Naively, one could predict each label independently without considering label dependencies. This approach is called Binary Relevance \cite{DBLP:journals/pr/BoutellLSB04,tsoumakas2007multi}, and is widely used due to its simplicity, but it often does not deliver good performance. Intuitively, knowing some labels---such as \texttt{sport} and \texttt{football}---should make it easier to predict \texttt{2018 world cup} and then \texttt{Russia}. There are several methods that try to capture label dependencies by building a joint probability estimation over all labels $p(\mathbf{y}=(y_1,y_2,...,y_L)|x)$ \cite{ghamrawi2005collective,read2009classifier,DBLP:conf/icml/DembczynskiCH10,li2016conditional}.
The most popular approach, Probabilistic Classifier Chain (PCC) \cite{DBLP:conf/icml/DembczynskiCH10}, learns labels one-by-one in a predefined fixed order: for each label, one classifier estimates the probability of that label given all previous label predictions, $p(y_l|y_1,...,y_{l-1},x)$. PCC's well-known drawback is that errors in early probability estimations tend to affect subsequent predictions, and can become severe when the total number of label candidates $L$ is large.
Recurrent neural networks (RNNs) were originally designed to output sequential structures, such as sentences \cite{DBLP:conf/emnlp/ChoMGBBSB14}. Recently, RNNs have also been applied to multi-label classification by mapping the label set to a sequence \cite{DBLP:conf/cvpr/WangYMHHX16,DBLP:journals/corr/ZhangWSZL16,DBLP:conf/icpr/JinN16,DBLP:conf/iccv/WangCLXL17,DBLP:journals/corr/abs-1709-08553,DBLP:conf/aaai/ChenCYW18,DBLP:journals/corr/abs-1806-04822}. In contrast to PCC, where a binary decision is made for every label sequentially, an RNN only predicts the positive labels explicitly, so its decision chain length equals the number of positive labels rather than the number of all labels. This makes RNN suffer less from early estimation errors than PCC.
Both PCC and RNN rely heavily on label orders in training and prediction. In multi-label data, the labels are given as sets, not necessarily with natural orders. RNN defines a sequence probability, while PCC defines a set probability. Various ways of arranging sets as sequences have been explored: ordering alphabetically, by frequency, based on a label hierarchy, or according to some label ranking algorithm \cite{liu2015optimality}. Previous experimental results show that the choice of order can have a significant impact on learning and prediction \cite{vinyals2015order,DBLP:conf/nips/NamMKF17,DBLP:conf/aaai/ChenCYW18}. In the above example, starting the label prediction sequence with \texttt{Russia}, while correct, would make the remaining predictions very difficult.
Previous work has shown that it is possible to train an RNN on multi-label data without specifying the label order in advance. With special training objectives, RNN can explore different label orders and converge to some order automatically \cite{vinyals2015order}. In this paper we follow the same line of study: We consider how to adapt RNN sequence model to multi-label set prediction without specifying the label order. Specifically, we make the following contributions:
\begin{enumerate}
\item We analyze existing RNN models proposed for multi-label prediction, and show that existing training and prediction objectives are not well justified mathematically and have undesired consequences in practice.
\item We propose new training and prediction objectives based on a principled notion of set probability, and develop efficient approximate training and prediction methods. Our new formulation avoids the drawbacks of existing ones and gives the RNN model the freedom to discover the best label order.
\item We crawl two new datasets for multi-label prediction task, and apply our method to them. We also test our method on two existing multi-label datasets. The experimental results show that our method outperforms state-of-the-art methods on all datasets. We release the datasets at \url{http://www.ccis.neu.edu/home/kechenqin}.
\end{enumerate}
\section{Mapping Sequences to Sets}
In this section, we describe how existing approaches map sequences to sets, writing down their objective functions in consistent notation. To review RNNs designed for sequences, let $\mathbf{s}=(s_1,s_2,...,s_T)$ be a sequence of outcomes, in a particular order, where $s_t \in \{1,2,...,L\}$; the order is often an essential part of the datapoint. An RNN model defines a probability distribution over all possible output sequences given the input in the form $p(\mathbf{s}=(s_1,s_2,...,s_T)|x)=\prod_{t=1}^T p(s_t|x,s_1,s_2,...,s_{t-1})$. To train the RNN model, one maximizes the likelihood of the ground truth sequence.
At prediction time, one seeks the sequence with the highest probability $\mathbf{s}^*=\arg\max_\mathbf{s} p(\mathbf{s}|x)$, usually found approximately with a beam search procedure \cite{lowerre1976harpy} (we present a modified version in Algorithm \ref{alg:beam_combine}). The sequence history is encoded with an internal memory vector $h_t$ which is updated over time. The RNN is also often equipped with an attention mechanism \cite{DBLP:journals/corr/BahdanauCB14}, which at each timestep $t$ puts different weights on different words (features) and thus effectively attends to a list of important words. The context vector $c_t$ is computed as the weighted average over the dense representations of important words, capturing information from the document. The context $c_t$, the RNN memory $h_t$ at timestep $t$, and the encoding of the previous label ${s_{t-1}}$ are concatenated and used to model the label probability distribution at time $t$ as $p(s_t|x,s_1,s_2,...,s_{t-1}) \sim \textit{softmax}(\phi(c_t,h_t,s_{t-1}))$, where $\phi$ is a non-linear function and $\textit{softmax}$ is the normalized exponential function.
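To make the chain-rule factorization concrete, here is a minimal sketch, not the paper's implementation: the conditional model below is a hypothetical stand-in for the RNN's softmax output at each timestep.

```python
import math

def sequence_log_prob(seq, cond_log_prob, x):
    """Chain rule: log p(s|x) = sum_t log p(s_t | x, s_1, ..., s_{t-1})."""
    total = 0.0
    for t in range(len(seq)):
        # Condition on the input and the prefix generated so far.
        total += cond_log_prob(x, tuple(seq[:t]), seq[t])
    return total

# Hypothetical conditional that is uniform over 3 labels (illustration only).
def uniform_cond(x, prefix, label):
    return math.log(1.0 / 3.0)

lp = sequence_log_prob([0, 1, 2], uniform_cond, x=None)
assert abs(math.exp(lp) - (1.0 / 3.0) ** 3) < 1e-12
```

Any model producing per-step conditional probabilities (RNN with or without attention) fits this interface; the factorization itself is what matters here.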
\begin{table*}[t]
\begin{center}
\resizebox{1.0\textwidth}{!}{%
\hspace{-2ex}
\begin{tabular}{|c|c|c|}
\hline
Methods &Training objectives & Prediction objectives\\
\hline
seq2seq-RNN & $\textit{maximize} \sum_{n=1}^N \log p(\mathbf{s}^{(n)}|x^{(n)})$ & $\hat{\mathbf{y}}=set(\mathbf{s}^*)$, $\mathbf{s}^*=\arg\max_{\mathbf{s}} p(\mathbf{s}|x)$\\
\hline
Vinyals-RNN-max & $\textit{maximize} \sum_{n=1}^N \max_{\mathbf{s}\in \pi(\mathbf{y}^{(n)})}\log p(\mathbf{s}|x^{(n)})$ & $\hat{\mathbf{y}}=set(\mathbf{s}^*)$, $\mathbf{s}^*=\arg\max_{\mathbf{s}} p(\mathbf{s}|x)$\\
\hline
Vinyals-RNN-uniform & $\textit{maximize} \sum_{n=1}^N \sum_{\mathbf{s}\in \pi(\mathbf{y}^{(n)})}\log p(\mathbf{s}|x^{(n)})$ & $\hat{\mathbf{y}}=set(\mathbf{s}^*)$, $\mathbf{s}^*=\arg\max_{\mathbf{s}} p(\mathbf{s}|x)$\\
\hline
Vinyals-RNN-sample & $\textit{maximize} \sum_{n=1}^N \sum_{\mathbf{s}\in \pi(\mathbf{y}^{(n)})}p(\mathbf{s}|x^{(n)})\log p(\mathbf{s}|x^{(n)})$ & $\hat{\mathbf{y}}=set(\mathbf{s}^*)$, $\mathbf{s}^*=\arg\max_{\mathbf{s}} p(\mathbf{s}|x)$\\
\hline
set-RNN (ours) & $\textit{maximize} \sum_{n=1}^N \log \sum_{\mathbf{s}\in \pi(\mathbf{y}^{(n)})} p(\mathbf{s}|x^{(n)})$& $\hat{\mathbf{y}}=\arg\max_{\mathbf{y}} p(\mathbf{y}|x)$\\
\hline
\end{tabular}
}
\caption{Comparison between previous and our \emph{set-RNN} training and prediction objectives.} \label{tab_objs}
\end{center}
\end{table*}
To apply RNN to multi-label problems, one approach is to map the label set $\mathbf{y}$ of each training document to a sequence $\mathbf{s}=(s_1,s_2,...,s_T)$. This is usually done by writing the label set in a globally fixed order (e.g., by decreasing label frequency), as in PCC.
Once the mapping is done, RNN is trained with the standard maximum likelihood objective \cite{DBLP:conf/nips/NamMKF17}:
\begin{align}
\textit{maximize} \sum_{n=1}^N \log p(\mathbf{s}^{(n)}|x^{(n)})
\label{eq:standard_rnn}
\end{align}
where $x^{(n)}$ is the $n$-th document and $N$ is the total number of documents in the corpus.
\newcite{vinyals2015order} proposes to choose dynamically during training the sequence order deemed most probable by the current RNN model:
\begin{align}
\textit{maximize} \sum_{n=1}^N \max_{\mathbf{s}\in \pi(\mathbf{y}^{(n)})}\log p(\mathbf{s}|x^{(n)})
\label{eq:max_obj}
\end{align}
where $\pi(\mathbf{y}^{(n)})$ denotes the set of all permutations of the label set $\mathbf{y}^{(n)}$. This eliminates the need to manually specify the label order.
However, as noticed by the authors, this objective cannot be used in the early training stages: the early order choice (often random) is reinforced by this objective, and the model can get stuck in it permanently. To address this issue, \newcite{vinyals2015order}~also proposes two smoother alternative objectives to initialize the model training:
The authors suggest that one first consider many random orders for each label set in order to explore the space:
\begin{align}
\textit{maximize} \sum_{n=1}^N \sum_{\mathbf{s}\in \pi(\mathbf{y}^{(n)})}\log p(\mathbf{s}|x^{(n)})
\label{eq:wrong_obj}
\end{align}
After that, one can sample sequences from the model's predictive distribution instead of the uniform distribution:
\begin{align}
\textit{maximize} \sum_{n=1}^N \sum_{\mathbf{s}\in \pi(\mathbf{y}^{(n)})}p(\mathbf{s}|x^{(n)})\log p(\mathbf{s}|x^{(n)})
\label{eq:sample_obj}
\end{align}
In training, one needs to schedule the transition among these objectives, a rather tricky endeavor. At prediction time, one needs to find the most probable set. This is done by (approximately) finding the most probable sequence $\mathbf{s}^*=\arg\max_\mathbf{s} p(\mathbf{s}|x)$ and treating it as a set $\hat{\mathbf{y}}=set(\mathbf{s}^*)$. With a large number of possible sequences, the most probable sequence may still carry only a small fraction of the total probability mass, so important information can be lost when all sequences other than the top one are ignored.
\section{Related Work}
\input{Model.tex}
\input{results.tex}
\input{case_analysis.tex}
\input{conclusion.tex}
\section*{Acknowledgements}
We thank reviewers and Krzysztof Dembczyński for their helpful comments,
Xiaofeng Yang for her help on writing, and Bingyu Wang for his help on proofreading. This work has been generously supported through a grant from the Massachusetts General Physicians Organization.
\section{Adapting RNN Sequence Prediction Model to Multi-label Set Prediction}
\label{sec:Model}
We propose a new way of adapting RNN to multi-label set prediction, which we call \emph{set-RNN}. We keep the RNN model structure \cite{rumelhart1988learning}, which directly defines a probability distribution over all possible sequences, and introduce training and prediction objectives tailored for sets that take advantage of it, while making a clear distinction between the sequence probability $p(\mathbf{s}|x)$ and the set probability $p(\mathbf{y}|x)$.
We define the set probability as the sum of sequences probabilities for all sequence permutations of the set, namely $p(\mathbf{y}|x)=\sum_{\mathbf{s}\in \pi(\mathbf{y})} p(\mathbf{s}|x)$. Based on this formulation, an RNN also defines a probability distribution over all possible sets indirectly since $\sum_{\mathbf{y}} p(\mathbf{y}|x)=\sum_{\mathbf{y}}\sum_{\mathbf{s}\in \pi(\mathbf{y})} p(\mathbf{s}|x)=\sum_{\mathbf{s}} p(\mathbf{s}|x)=1$. (For this equation to hold, in theory, we should also consider permutations $\mathbf{s}$ with repeated labels, such as $(1,2,3,1)$. But in practice, we find it very rare for RNN to actually generate sequences with repeated labels in our setup, and whether allowing repetition or not does not make much difference.)
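As a toy illustration of this definition (the sequence distribution below is hypothetical, standing in for an RNN's output), the set probability is obtained by summing sequence probabilities over permutations, and normalization carries over from sequences to sets:

```python
import itertools

# Hypothetical sequence probabilities produced by an RNN (illustration only).
seq_probs = {("a", "b"): 0.5, ("b", "a"): 0.1, ("c",): 0.4}

def set_prob(label_set):
    """p(y|x) = sum of p(s|x) over all permutations s of y."""
    return sum(seq_probs.get(perm, 0.0)
               for perm in itertools.permutations(label_set))

assert abs(set_prob({"a", "b"}) - 0.6) < 1e-12
# Set probabilities inherit normalization from sequence probabilities.
assert abs(set_prob({"a", "b"}) + set_prob({"c"}) - 1.0) < 1e-12
```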
In standard maximum likelihood training, one wishes to maximize the likelihood of given label sets, namely, $\prod_{n=1}^N p(\mathbf{y}^{(n)}|x^{(n)})=\prod_{n=1}^N \sum_{\mathbf{s}\in \pi(\mathbf{y}^{(n)})} p(\mathbf{s}|x^{(n)})$, or equivalently,
\begin{align}
\textit{maximize} \sum_{n=1}^N \log \sum_{\mathbf{s}\in \pi(\mathbf{y}^{(n)})} p(\mathbf{s}|x^{(n)})
\label{eq:right_obj}
\end{align}
\subsection{How is our new formulation different?}
This training objective (\ref{eq:right_obj}) looks similar to the objective (\ref{eq:wrong_obj}) considered in previous work \cite{vinyals2015order}, but in fact they correspond to very different transformations. Under the maximum likelihood framework, our objective (\ref{eq:right_obj}) corresponds to the transformation $p(\mathbf{y}|x)=\sum_{\mathbf{s}\in \pi(\mathbf{y})} p(\mathbf{s}|x)$, while objective (\ref{eq:wrong_obj}) corresponds to the transformation $p(\mathbf{y}|x)=\prod_{\mathbf{s}\in \pi(\mathbf{y})} p(\mathbf{s}|x)$. The latter transformation does not define a valid probability distribution over $\mathbf{y}$ (i.e., $\sum_{\mathbf{y}} p(\mathbf{y}|x)\neq 1$), and it has an undesired consequence in practical model training: because of the multiplication operation, the RNN model has to assign equally high probabilities to all sequence permutations of the given label set in order to maximize the set probability. If only some sequence permutations receive high probabilities while others receive low probabilities, the set probability computed as the product of sequence probabilities will still be low. In other words, if for each document, RNN finds one good way of ordering relevant labels (such as hierarchically) and allocates most of the probability mass to the sequence in that order, the model still assigns low probabilities to the ground truth label sets and will be penalized heavily. As a consequence the model has little freedom in discovering and concentrating on some natural label order. In contrast, with our proposed training objective, in which the multiplication operation is replaced by the summation operation, it suffices to find only one reasonable permutation of the labels for each document. It is worth noting that different documents can have different label orders; thus our proposed training objective gives the RNN model far more freedom on label order. 
The other two objectives (\ref{eq:max_obj}) and (\ref{eq:sample_obj}) proposed in \cite{vinyals2015order} are less restrictive than (\ref{eq:wrong_obj}), but they have to work in conjunction with (\ref{eq:wrong_obj}) because of the self-reinforcement issue. Our proposed training objective has a natural probabilistic interpretation and does not suffer from the self-reinforcement issue, so it can serve as a stand-alone training objective. Also, using Jensen's inequality, one can show that objective (\ref{eq:wrong_obj}) maximizes a lower bound on the log-likelihood, while objective (\ref{eq:right_obj}) maximizes it directly.
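The practical difference between the product-style transformation behind objective (\ref{eq:wrong_obj}) and our sum-style transformation (\ref{eq:right_obj}) can be checked numerically; the two permutation distributions below are hypothetical, representing a model that concentrates mass on one label order versus one that spreads it evenly:

```python
import math

# Probabilities assigned by two hypothetical models to the 2 permutations
# of the same label set.
concentrated = [0.6, 0.0001]  # nearly all mass on one natural order
spread = [0.3, 0.3]           # mass split evenly across orders

def sum_log(ps):   # objective (3): sum of log-probs = log of the product
    return sum(math.log(p) for p in ps)

def log_sum(ps):   # objective (5): log of the summed set probability
    return math.log(sum(ps))

# The product-style objective heavily penalizes concentrating on one order...
assert sum_log(concentrated) < sum_log(spread)
# ...while the set-probability objective slightly prefers it (0.6001 > 0.6).
assert log_sum(concentrated) > log_sum(spread)
```

This is exactly the freedom discussed above: under the summation, finding one good permutation per document suffices.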
\begin{algorithm}[ht]
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{Instance $x$ \\
Subset of labels considered $G\subset \mathcal{L}$\\
Boolean flag $ALL$: 1 if sequences must contain all labels in $G$; 0 if partial sequences are allowed }
\Output{A list of top sequences and the associated probabilities}
Let $\mathbf{s}_1$,$\mathbf{s}_2$,...,$\mathbf{s}_K$ be the top $K$ sequences found so far. Initially, all $K$ sequences are empty. $\oplus$ means concatenation. \\
\While {true}{
// Step 1: generate candidate sequences by extending each existing sequence $\mathbf{s}_k$ with every possible new label $l \in G$:\\
Expand all non-stopped sequences:\\
$C = \{ \mathbf{s}_k\oplus l | l\in G, STOP \notin \mathbf{s}_k \}$\\
Include stopped sequences:\\
$C = C \cup \{ \mathbf{s}_k | STOP \in \mathbf{s}_k \}$\\
If partial sequences are allowed, terminate non-stopped sequences:\\
\If{$ALL==0$}{
$C = C \cup \{ \mathbf{s}_k\oplus STOP | STOP \notin \mathbf{s}_k \}$
}
// Step 2: select the $K$ highest probability sequences from the candidate set $C$\\
$\mathbf{s}_1,\mathbf{s}_2,...,\mathbf{s}_K$ = topK-argmax$_{\mathbf{s}\in C}\, p(\mathbf{s}|x)$\\
\If {all top $K$ sequences end with $STOP$ or contain all labels in $G$}{
Terminate the algorithm}
}
\Return{sequence list $\mathbf{s}_1$,$\mathbf{s}_2$,...,$\mathbf{s}_K$ and the associated probabilities}
\caption{Beam\_Search}
\label{alg:beam_combine}
\end{algorithm}
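A Python sketch of Algorithm~\ref{alg:beam_combine} follows (a simplification for illustration, not the authors' code); the scoring function \texttt{seq\_log\_prob} is a user-supplied stand-in for the RNN sequence log-probability, and the \texttt{STOP} token is an assumption of this sketch:

```python
STOP = "<STOP>"

def beam_search(G, seq_log_prob, K, all_labels_required):
    """Keep the K highest-scoring sequences; expand until all are stopped."""
    beams = [()]
    while True:
        expanded = []
        for s in beams:
            if STOP in s:
                expanded.append(s)            # stopped sequences carry over
                continue
            remaining = [l for l in G if l not in s]
            for l in remaining:               # extend with an unused label
                expanded.append(s + (l,))
            if not remaining or not all_labels_required:
                expanded.append(s + (STOP,))  # terminate this sequence
        beams = sorted(expanded, key=seq_log_prob, reverse=True)[:K]
        if all(STOP in s for s in beams):
            return [(s, seq_log_prob(s)) for s in beams]
```

With \texttt{all\_labels\_required=True} the search enumerates only permutations of $G$, which is how the set probability is approximated; with \texttt{all\_labels\_required=False} it behaves like standard beam search over label sequences.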
\subsection{Training by Maximizing Set Probability}
Training an RNN model with the proposed objective (\ref{eq:right_obj}) requires summing up the probabilities of all $|\mathbf{y}|!$ sequence permutations of a set $\mathbf{y}$, where $|\mathbf{y}|$ is the cardinality of the set. Evaluating this objective exactly is therefore intractable for all but the smallest sets. We approximate this sum by considering only the top $K$ highest probability sequences produced by the RNN model. We introduce a variant of beam search for sets with width $K$, in which the search candidates in each step are restricted to labels in the set (see Algorithm~\ref{alg:beam_combine} with $ALL=1$). This approximate inference procedure is carried out before each batch training step, in order to find the highest probability sequences for all training instances in that batch. The overall training procedure is summarized in Algorithm \ref{alg:train_beam}.
\begin{algorithm}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{Multi-label dataset $(x^{(n)},\mathbf{y}^{(n)}),n=1,2,...,N$ }
\Output{Trained RNN model parameters}
\ForEach {batch}{
\ForEach{$(x^{n},\mathbf{y}^{n})$ in the batch}{
Get the top $K$ sequences:\\
$\{\mathbf{s}^n_{1},...,\mathbf{s}^n_{K}, p(\mathbf{s}^n_{1}|x^n),...,p(\mathbf{s}^n_{K}|x^n)\}$\\ \hspace{4ex} = Beam\_Search$(x^{n},\mathbf{y}^{n}, ALL=1)$\\
}
Update model parameters by maximizing $\sum\limits_{(x^{n},\mathbf{y}^{n}) \in \text{batch}} \log \sum\limits_{\mathbf{s}\in\{\mathbf{s}^n_{1},...,\mathbf{s}^n_{K}\}} p(\mathbf{s}|x^{n})$
}
\caption{Training method for set-RNN}
\label{alg:train_beam}
\end{algorithm}
\subsection{Predicting the Most Probable Set}
The transformation $p(\mathbf{y}|x)=\sum_{\mathbf{s}\in \pi(\mathbf{y})} p(\mathbf{s}|x)$ also naturally leads to a prediction procedure that differs from the previous standard of directly using the most probable sequence as a set. We instead aim to find the most probable set $\hat{\mathbf{y}}=\arg\max_{\mathbf{y}} p(\mathbf{y}|x)$, which involves summing up the probabilities of all of its permutations. To make this tractable, we propose a two-level beam search procedure. First we run standard RNN beam search (Algorithm \ref{alg:beam_combine} with $ALL=0$) to generate a list of highest probability sequences. We then consider the label set associated with each sequence. For each set, we evaluate its probability using the same approximate summation procedure as in training (Algorithm~\ref{alg:beam_combine} with $ALL=1$): we run our modified beam search to find the top few highest probability sequences associated with the set and sum up their probabilities. Among the sets evaluated, we choose the one with the highest probability as the prediction. The overall prediction procedure is summarized in Algorithm~\ref{alg:test}. As we show in the case study, the most probable set may not correspond to the most probable sequence; these are precisely the cases where our method has an advantage.
Both our method and the state-of-the-art competitors (the Vinyals-RNNs) are at most $K$ times slower than a vanilla RNN, due to the time spent handling up to $K$ permutations per datapoint. Our proposed method is about as fast as the Vinyals-RNN variants, except that Vinyals-RNN-uniform is somewhat faster (by a factor of about 1.5) because its epochs do not require the additional forward pass.
\begin{algorithm}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{Instance $x$}
\Output{Predicted label set $\hat{\mathbf{y}}$}
Obtain $K$ highest probability sequences :\\
$\{\mathbf{s}_1,...,\mathbf{s}_{K}\}$ = Beam\_Search(x,$\mathcal{L}, ALL=0$)\\
Map each sequence $\mathbf{s}_k$ to the corresponding set $\mathbf{y}_k$ and remove duplicate sets (if any)\\
\ForEach {$\mathbf{y}_k$}{
Get the $K$ most probable sequences associated with $\mathbf{y}_k$ and their probabilities:\\
$\{\mathbf{s'}_{1},...,\mathbf{s'}_{K}, p(\mathbf{s}'_{1}|x),...,p(\mathbf{s}'_{K}|x)\}$ \\ \hspace{6ex} = Beam\_Search(x,$\mathbf{y}_k, ALL=1$)\\
Approximate the set probability by summing:\\
$p({\mathbf{y}_k}|x) \approx \sum\limits_{\mathbf{s}\in \{\mathbf{s}'_{1},...,\mathbf{s}'_K\}} p(\mathbf{s}|x)$
}
$\hat{\mathbf{y}} = \arg\max_{{\mathbf{y}_k}} p({\mathbf{y}_k}|x)$
\caption{Prediction Method for set-RNN}
\label{alg:test}
\end{algorithm}
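Algorithm~\ref{alg:test} can be illustrated end-to-end on a toy sequence distribution (hypothetical numbers; exhaustive sorting stands in for the level-1 beam search):

```python
import itertools

# Toy sequence distribution standing in for an RNN (hypothetical numbers).
seq_probs = {("a", "b"): 0.25, ("b", "a"): 0.20, ("c",): 0.30,
             ("a",): 0.15, ("b",): 0.10}

def predict_set(top_k):
    # Level 1: the top-k sequences yield the candidate sets.
    top_seqs = sorted(seq_probs, key=seq_probs.get, reverse=True)[:top_k]
    candidate_sets = {frozenset(s) for s in top_seqs}
    # Level 2: score each candidate set by summing permutation probabilities.
    def set_prob(y):
        return sum(seq_probs.get(p, 0.0) for p in itertools.permutations(y))
    return max(candidate_sets, key=set_prob)

# The top sequence ("c",) does not yield the top set {a, b}:
assert predict_set(top_k=1) == frozenset({"c"})
assert predict_set(top_k=3) == frozenset({"a", "b"})
```

Here $p(\{a,b\}|x) = 0.25 + 0.20 = 0.45 > 0.30 = p(\{c\}|x)$, so the set-level prediction differs from the sequence-level one, mirroring the advantage discussed above.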
\section{Results and Analysis}
\label{sec:results}
\subsection{Experimental Setup}
We test our proposed set-RNN method on 4 real-world datasets, RCV1-v2, Slashdot, TheGuardian, and Arxiv Academic Paper Dataset (AAPD) \cite{DBLP:journals/corr/abs-1806-04822}. We take the public RCV1-v2 release\footnote{\scriptsize\url{http://www.ai.mit.edu/projects/jmlr/papers/volume5/lewis04a/lyrl2004_rcv1v2_README.htm}} and randomly sample 50,000 documents. We crawl Slashdot and TheGuardian documents from their websites\footnote{\scriptsize Slashdot: \url{https://slashdot.org/} Note that there is another public Slashdot multi-label dataset \cite{read2009classifier} but we do not use that one because it is quite small. TheGuardian: \url{https://www.theguardian.com}
} and treat the official editor tags as ground truth. We also gather a list of user tags\footnote{\scriptsize \url{www.zubiaga.org/datasets/socialbm0311/}} for each document and treat them as additional features. For the AAPD dataset, we follow the same train/test split as in \cite{DBLP:journals/corr/abs-1806-04822}. Table \ref{tab:stats} contains statistics of these four datasets. Links to documents, official editor tags, and user tags are available at \url{http://www.ccis.neu.edu/home/kechenqin}.
\begin{table}[h]
\resizebox{1.01\columnwidth}{!}{%
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Data & \#Train & \#Test & Cardinality & \#Labels & Doc length \\
\hline
Slashdot & 19,258 & 4,814& 4.15& 291&64 \\
RCV1-v2 & 40,000& 10,000& 3.17& 101&121 \\
TheGuardian & 37,638 & 9,409 & 7.41& 1,527&505 \\
AAPD & 53,840 & 1,000 & 2.41& 54 & 163 \\
\hline
\end{tabular}
}
\caption{\fontsize{10}{12}\selectfont Statistics of the datasets.}\label{tab:stats}
\end{table}
\begin{table*}[ht]
\begin{center}
\resizebox{2.1\columnwidth}{!}{%
\begin{tabular}{|l|cc|cc|cc|cccc|}
\hline
\multirow{2}{*}{Methods} &
\multicolumn{2}{c|}{Slashdot}&\multicolumn{2}{c|}{RCV1-v2}&\multicolumn{2}{c|}{TheGuardian}&\multicolumn{4}{c|}{AAPD} \\
\cline{2-11}
& label-F1 & instance-F1 & label-F1 & instance-F1 & label-F1 & instance-F1 & label-F1 & instance-F1 & hamming-loss & micro-F1 \\
\hline
BR
& .271 & .484 & .486 & .802 &.292 & .572 & .529 & .654 & .0230 & .685 \\
BR-support
& .247 & .516 & .486 & .805 &.296 & .594 & .545 & .689 & \textbf{.0228} & .696\\
PCC
& .279 & .480 & .595 & .818 &- & - & .541 & .688 & .0255 & .682 \\
seq2seq-RNN
& .270 & .528 & .561 & .824 &.331 & .603 & .510 & .708 & .0254 & .701 \\
Vinyals-RNN-uniform
& .279 & .527 & .578 & .826 & .313 & .567 & .532 & .721 & .0241 & .711\\
Vinyals-RNN-sample
& .300 & .531 & .590 & .828 & .339 & .597 & .527 & .706 & .0259 & .697 \\
Vinyals-RNN-max
& .293 & .530 & .588 & .829 & .343 & .599 & .535 & .709 & .0256 & .700 \\
Vinyals-RNN-max-direct
& .226 & .518 & .539 & .808 & .313 & .583 & .490 & .702 & .0257 & .694\\
SGM
&-&-&-&-&-&-& - & - & .0245 & .710 \\
set-RNN
& \textbf{.310} & \textbf{.538} & \textbf{.607} & \textbf{.838} &\textbf{.361} & \textbf{.607} & \textbf{.548} & \textbf{.731} & .0241 & \textbf{.720}\\
\hline
\end{tabular}
}
\end{center}
\caption{Comparison of different approaches. ``-'' means the result is not available. For \emph{hamming-loss}, lower values are better; for all other measures, higher values are better.
}\label{tab:main}
\end{table*}
\begin{table*}[ht]
\begin{center}
\resizebox{2.1\columnwidth}{!}{%
\begin{tabular}{|l|cc|cc|cc|cc|}
\hline
\multirow{2}{*}{Methods} &
\multicolumn{2}{c|}{Slashdot}&\multicolumn{2}{c|}{RCV1-v2}&\multicolumn{2}{c|}{TheGuardian}&\multicolumn{2}{c|}{AAPD} \\
\cline{2-9}
& label-F1 & instance-F1 & label-F1 & instance-F1 & label-F1 & instance-F1 & label-F1 & instance-F1\\
\hline
seq2seq-RNN
& .270$\to$.269 & .528$\to$.528& .561$\to$.561 & .824$\to$.824 &.331$\to$.336& .603$\to$.603 & .510$\to$.511 &.708$\to$.709 \\
Vinyals-RNN-uniform
& .279$\to$.288& \textbf{.527}$\to$\textbf{.537}& .578$\to$.587& .826$\to$.833& \textbf{.313}$\to$\textbf{.336}& \textbf{.567}$\to$\textbf{.585} & \textbf{.532}$\to$\textbf{.542} &.721$\to$.724 \\
Vinyals-RNN-sample
& .300$\to$.303 & .531$\to$.537 &.590$\to$.597 & .828$\to$.833 & \textbf{.339}$\to$\textbf{.351} & .597$\to$.602 & .527$\to$.530 & .706$\to$.708 \\
Vinyals-RNN-max
& .293$\to$.301 & .530$\to$.535 & .588$\to$.585 & .829$\to$.830 & .343$\to$.352 &.599$\to$.604 & .535$\to$.537 &.709$\to$.712 \\
Vinyals-RNN-max-direct
& .226$\to$.228& .518$\to$.519& .539$\to$.538& .808$\to$.808& .313$\to$.316&.583$\to$.584 & .490$\to$.490 &.702$\to$.701 \\
set-RNN
& \textbf{.297}$\to$\textbf{.310} & \textbf{.528}$\to$\textbf{.538} & \textbf{.593}$\to$\textbf{.607} & .831$\to$.838 &\textbf{.349}$\to$\textbf{.361} & \textbf{.595}$\to$\textbf{.607} & .548$\to$.548 &.728$\to$.731 \\\hline
\end{tabular}
}
\end{center}
\caption{Predicting the most probable sequence vs. predicting the most probable set. Numbers before the arrow: predicting the most probable sequence. Numbers after the arrow: predicting the most probable set. We highlight scores which get significantly improved in \textbf{bold} (improvement is larger than 0.01).
}\label{table_prediction}
\end{table*}
To process documents, we filter out stopwords and punctuation. Each document is truncated to at most 500 words for TheGuardian and AAPD, and 120 words for Slashdot and RCV1-v2. Zero padding is used if a document contains fewer words than the maximum. Numbers and out-of-vocabulary words are replaced with special tokens. Words, user tags, and labels are all encoded as 300-dimensional vectors using \textsc{word2vec} \cite{DBLP:journals/corr/abs-1301-3781}.
We implement RNNs with attention using \textsc{tensorflow-1.4.0} \cite{DBLP:conf/osdi/AbadiBCCDDDGIIK16}. The recurrent function is chosen to be gated recurrent units (GRUs) with 2 layers and at most 50 units in the decoder. The size of each GRU unit is 300. We set the dropout rate to 0.3 and train the model with the Adam optimizer \cite{DBLP:journals/corr/KingmaB14} with learning rate $0.0005$. The beam size is set to 12 at both training and inference stages. We adopt \emph{label-F1} (average F1 over labels) and \emph{instance-F1} (average F1 over instances) as our main evaluation metrics, defined below:
\begin{align*} \text{label-F1} = \frac{1}{L}\sum_{\ell=1}^L\frac{2\sum_{n=1}^N y^{(n)}_\ell \hat{y}^{(n)}_\ell}{\sum_{n=1}^N y^{(n)}_\ell+\sum_{n=1}^N \hat{y}^{(n)}_\ell}\\
\text{instance-F1} = \frac{1}{N}\sum_{n=1}^N\frac{2\sum_{\ell=1}^L y^{(n)}_\ell \hat{y}^{(n)}_\ell}{\sum_{\ell=1}^L y^{(n)}_\ell+\sum_{\ell=1}^L \hat{y}^{(n)}_\ell}
\end{align*}
where for each instance $n$, $y_\ell^{(n)}=1$ if label $\ell$ is a given label in ground truth; $\hat{y}_\ell^{(n)}=1$ if label $\ell$ is a predicted label.
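The two metrics can be computed directly from the binary matrices; a straightforward reference sketch follows (note: it guards against empty denominators, a detail the formulas above leave implicit):

```python
def f1(tp, denom):
    # 2 * TP / (support + predictions), guarding the empty case.
    return 2 * tp / denom if denom else 0.0

def label_f1(Y, Yhat):
    """Average F1 over labels; Y and Yhat are N x L binary matrices (lists)."""
    L = len(Y[0])
    scores = []
    for l in range(L):
        tp = sum(y[l] * yh[l] for y, yh in zip(Y, Yhat))
        denom = sum(y[l] for y in Y) + sum(yh[l] for yh in Yhat)
        scores.append(f1(tp, denom))
    return sum(scores) / L

def instance_f1(Y, Yhat):
    """Average F1 over instances."""
    scores = []
    for y, yh in zip(Y, Yhat):
        tp = sum(a * b for a, b in zip(y, yh))
        scores.append(f1(tp, sum(y) + sum(yh)))
    return sum(scores) / len(Y)
```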
We compare our method with the following methods:
\begin{itemize}
\item \textbf{Binary Relevance (BR)} \cite{tsoumakas2007multi} with both independent training and prediction;
\item \textbf{Binary Relevance with support inference (BR-support)} \cite{wang2018pipeline} which trains binary classifiers independently but imposes label constraints at prediction time by only considering label sets observed during training, namely $\hat{\mathbf{y}}=\arg\max_{\text{observed~}\mathbf{y}}\prod_{\ell=1}^L p(y_{\ell}|x)$;
\item \textbf{Probabilistic Classifier Chain (PCC)} \cite{DBLP:conf/icml/DembczynskiCH10} which transforms the multi-label classification task into a chain of binary classification problems. Predictions are made with Beam Search.
\item \textbf{Sequence to Sequence RNN (seq2seq-RNN)} \cite{DBLP:conf/nips/NamMKF17} which maps each set to a sequence by decreasing label frequency and solves the multi-label task with an RNN designed for sequence prediction (see Table \ref{tab_objs}).
\item \textbf{Vinyals-RNN-uniform, Vinyals-RNN-sample, and Vinyals-RNN-max} are three variants of RNNs proposed by \cite{vinyals2015order}. They are trained with different objectives that correspond to different transformations between sets and sequences. See Table~\ref{tab_objs} for a summary of their training objectives. Following the approach taken by \cite{vinyals2015order}, Vinyals-RNN-sample and Vinyals-RNN-max are initialized by Vinyals-RNN-uniform. We have also tested training Vinyals-RNN-max directly without having Vinyals-RNN-uniform as an initialization, and we name it as \textbf{Vinyals-RNN-max-direct}.
\item \textbf{Sequence Generation Model (SGM)} \cite{DBLP:journals/corr/abs-1806-04822} which trains the RNN model similar to seq2seq-RNN but uses a new decoder structure that computes a weighted global embedding based on all labels as opposed to just the top one at each timestep.
\end{itemize}
In BR and PCC, logistic regressions with L1 and L2 regularizations are used as the underlying binary classifiers. seq2seq-RNN, PCC, and SGM rely on a particular label order. We adopt the decreasing label frequency order, which is the most popular choice.
\subsection{Experimental Results}
Table \ref{tab:main} shows the performance of different methods in terms of \emph{label-F1} and \emph{instance-F1}. The SGM results are taken directly from \cite{DBLP:journals/corr/abs-1806-04822}, and are originally reported only on AAPD dataset in terms of \emph{hamming-loss} and \emph{micro-F1}. Definitions of these two metrics can be found in \cite{koyejo2015consistent}.
Our method performs best on all metrics on all datasets (except hamming loss on AAPD; see Table \ref{tab:main}). In general, RNN-based methods perform better than the traditional methods BR, BR-support, and PCC. Among the Vinyals-RNN variants, Vinyals-RNN-max and Vinyals-RNN-sample work best and have similar performance. However, they have to be initialized by Vinyals-RNN-uniform; otherwise, the training gets stuck in an early stage and the performance degrades significantly. One can see the clear degradation by comparing the Vinyals-RNN-max row (with initialization) with the Vinyals-RNN-max-direct row (without initialization). By contrast, our training objective in set-RNN does not suffer from this issue and serves as a stable stand-alone training objective.
\begin{figure}[t]
\includegraphics[width=1.0\columnwidth]{figs/labelf1_v2.png}
\caption{Average F1 over rare labels with the same frequency on TheGuardian dataset. Blue($\Delta$)=set-RNN, Red($\cdot$)=seq2seq-RNN.}
\label{fig:labelf1}
\end{figure}
On TheGuardian dataset, set-RNN performs slightly better than seq2seq-RNN in terms of instance-F1, but much better in terms of label-F1. Instance-F1 is largely determined by performance on popular labels, while label-F1 is also sensitive to performance on rare labels. Figure~\ref{fig:labelf1} shows that set-RNN predicts rare labels better than seq2seq-RNN.
Next we analyze how much benefit our new set prediction strategy brings. For each RNN-based method, we test two prediction strategies: 1) finding the sequence with the highest probability and outputting the corresponding set (the default prediction strategy for all models except set-RNN); 2) outputting the set with the highest probability (the default prediction strategy for set-RNN). Table~\ref{table_prediction} shows how each method performs with these two prediction strategies. One can see that Vinyals-RNN-uniform and set-RNN benefit most from predicting the top set; Vinyals-RNN-sample, Vinyals-RNN-max, and Vinyals-RNN-max-direct benefit less; and seq2seq-RNN does not benefit at all. Intuitively, for the top-set prediction to differ from the top-sequence prediction, the model has to spread probability mass across different sequence permutations of the same set.
\subsection{Analysis: Sequence Probability Distribution}
Results in Table~\ref{table_prediction} motivate us to check how sharply (or uniformly) distributed the probabilities are over different sequence permutations of the predicted set. We first normalize the sequence probabilities associated with the predicted set and then compute the entropy. To make predictions with different set sizes (and hence different numbers of sequence permutations) comparable, we further divide the entropy by the logarithm of the number of sequences. Smaller entropy values indicate a sharper distribution. The results are shown in Figure~\ref{fig:entropy}.
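The normalized entropy can be computed as below — a minimal sketch assuming the sequence probabilities associated with the predicted set have already been extracted from the model.

```python
import math

def normalized_entropy(seq_probs):
    # Renormalize the sequence probabilities of the predicted set, compute the
    # entropy, and divide by log(#sequences) so that predicted sets of
    # different sizes are comparable; the result lies in [0, 1].
    total = sum(seq_probs)
    p = [q / total for q in seq_probs if q > 0]
    if len(p) <= 1 or len(seq_probs) <= 1:
        return 0.0
    h = -sum(q * math.log(q) for q in p)
    return h / math.log(len(seq_probs))

print(round(normalized_entropy([0.25, 0.25, 0.25, 0.25]), 6))    # 1.0 (uniform)
print(round(normalized_entropy([0.97, 0.01, 0.01, 0.01]), 2))    # 0.12 (sharp)
```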
seq2seq-RNN trained with fixed label order and standard RNN objective (\ref{eq:standard_rnn}) generates very sharp sequence distributions. It basically only assigns probability to one sequence in the given order. The entropy is close to 0. In this case, predicting the set is no different than predicting the top sequence (see Table~\ref{table_prediction}). On the other extreme is Vinyals-RNN-uniform, trained with objective (\ref{eq:wrong_obj}), which spreads probabilities across many sequences, and leads to the highest entropy among all models tested (the uniform distribution has the max entropy of 1). From Table~\ref{table_prediction}, we see that by summing up sequence probabilities and predicting the most probable set, Vinyals-RNN-uniform's performance improves. But as discussed earlier, training with the objective (\ref{eq:wrong_obj}) makes it impossible for the model to discover and concentrate on a particular natural label order (represented by a sequence). Overall Vinyals-RNN-uniform is not competitive even with the set-prediction enhancement. Between the above two extremes are Vinyals-RNN-max and set-RNN (we have omitted Vinyals-RNN-sample and Vinyals-RNN-max-direct here as they are similar to Vinyals-RNN-max). Both models are allowed to assign probability mass to a subset of sequences. Vinyals-RNN-max produces sharper sequence distributions than set-RNN, because Vinyals-RNN-max has the incentive to allocate most of the probability mass to the most probable sequence due to the max operator in its training objective (\ref{eq:max_obj}). From Table~\ref{table_prediction}, one can see that set-RNN clearly benefits from summing up sequence probabilities and predicting the most probable set while Vinyals-RNN-max does not benefit much. Therefore, the sequence probability summation is best used in both training and prediction, as in our proposed method.
Comparing the 4 datasets in Table~\ref{table_prediction}, we also see that Slashdot and TheGuardian, which have larger label cardinalities (and therefore potentially more permutations per set), benefit more from predicting the most probable set than RCV1 and AAPD, which have smaller label cardinalities.
\begin{figure}[t]
\includegraphics[width=1.00\columnwidth]{figs/entropy.png}
\caption{Entropy of sequence probability distribution for each model. Blue(\textbackslash)=Vinyals-RNN-uniform, Orange(+)=set-RNN, Green($\times$)=Vinyals-RNN-max, Red($\cdot$)=seq2seq-RNN.}
\label{fig:entropy}
\end{figure}
\section{Introduction}\label{sec:intro}
Intermittency
is an important feature
in the theory of fluid and plasma turbulence \citep{sreenivasan1997AnRFM,matthaeus2015ptrs}, and
has gained increasing attention
in the
study of space plasmas, including the
corona, the magnetosphere, and the solar wind \citep{ abramenko2008,chhiber2018MMS,bruno2019ESS}. In each of these venues the emergence
of localized \replaced{strong gradients}{sporadic features of the primitive fields} is a consequence of the turbulent cascade of energy.
The resulting coherent structures, of
electric current density, vorticity, or density,
are likely sites of
enhanced
kinetic dissipation,
and heating \citep[e.g.,][]{osman2012PRL}.
Therefore intermittency
is crucial
in terminating the
cascade and
heating the plasma.
These
coherent structures
also compartmentalize the plasma, \added{forming boundaries associated with} distinctive flux tube ``texture'' that organizes quantities such as temperature, density, magnetic intensity,
and energetic particles \citep{borovsky2008JGR,tessein2013ApJ}.
Coherent structure forms
in similar ways in hydrodynamics \citep{sreenivasan1997AnRFM}, magnetohydrodynamics \citep{wan2009PoP}, and plasmas \citep{burlaga1991JGR,sorriso-valvo1999GRL,matthaeus2015ptrs}, with
important differences, particularly approaching kinetic scales. Statistics in the
plasma kinetic range provide
insight regarding
physical mechanisms responsible for
dissipation \citep[e.g.,][]{goldstein2015RSPTA,chen2016jpp,matthaeus2020},
thus addressing fundamental questions related to coronal heating and acceleration of the solar wind \citep{fox2016SSR}.
\added{Qualitatively speaking, monofractality is associated with structure that is non space-filling but lacking a preferred scale over some range (i.e., scale-invariance). In contrast, multifractality also implies non space-filling structure but with at least one preferred scale within the relevant range \citep{frisch1995book}. Such a distinction has likely implications for the preference of a system for specific classes of dissipative mechanisms. In the solar wind, and in collisionless plasmas in general,} it remains unclear whether statistics at subproton scales remain strongly intermittent and multifractal, or become monofractal \citep{kiyani2009PRL,leonardis2013prl,leonardis2016pop,wan2016PoP}, or even return to
Gaussianity \citep{koga2007pre,wan2012ApJ,wu2013ApJ,chhiber2018MMS,roberts2020PRR}.
These questions persist in part because of the scarcity of high time-resolution data at locations well separated from the terrestrial bow shock. Here we address these issues by employing high-resolution measurements of the magnetic field made by the \textit{Parker Solar Probe} (\textit{PSP} ) in near-Sun solar wind \citep{fox2016SSR}.
\section{Theoretical Background}\label{sec:theory}
In turbulence, considerable information
is contained in the
statistics of fluctuations and increments of the primitive variables. These
are velocity in hydrodynamics, velocity and magnetic fluctuations in magnetohydrodynamics (MHD), density for compressible flows, and additional variables for complex fluids and plasmas. The basic second-order statistics
include
two-point correlations,
their Fourier transforms, i.e., wavenumber ($k$) spectra \citep{matthaeus1982JGR}, and the second-order structure functions
\citep{burlaga1991JGR}.
These and other relevant statistics are moments of the underlying joint probability distribution functions \citep[PDFs; e.g.,][]{Monin1971book}.
Second-order moments describe the distribution of energy over spatial scales $\ell\sim 1/k$. To describe the spatial concentration of energy in {\it intermittent} structures, we go beyond second-order statistics and consider {\it higher-order} moments of PDFs \citep[e.g.,][]{frisch1995book}.
The original K41 similarity hypothesis \citep{kolmogorov1941DoSSR} postulates the statistical behavior of longitudinal velocity increments at spatial lag \(\bm{\ell}\), namely
$ \delta u_\ell
= \bm{\hat{\ell}} \cdot
[ \bm{u} (\bm{x} + \bm{\ell}) - \bm{u}(\bm{x}) ]$.
K41 asserts that
$ \delta u_\ell \sim
\epsilon^{1/3} \ell^{1/3}$ where $\epsilon$ is the total
dissipation rate and isotropy is assumed. Thus, for an appropriate averaging operator \(\langle \dots \rangle\), all
increment moments are determined as the structure functions
$S^{(p)} = \langle \delta u_\ell^p \rangle = C_p \epsilon^{p/3} \ell^{p/3}$,
a form that includes the second-order law $S^{(2)} = C_2 \epsilon^{2/3} \ell^{2/3}$ and (formally) the exact third-order law $S^{(3)}= -(4/3) \epsilon \ell$ as special cases.
The refined similarity hypothesis \citep[][K62]{kolmogorov1962JFM}
takes into account intermittency, averaging the
local dissipation rate $\epsilon_\ell$
over a volume of linear dimension $\ell$ and introducing this as an additional
random variable.
Incorporating the suggestion \citep{oboukhov1962JFM}
that such irregularity of dissipation changes scalings of increments with $\ell$,
the refined
K62 hypothesis becomes
$\delta u_\ell = A(*)
\epsilon_\ell ^{1/3} \ell^{1/3}$.
Here
$A(*)$ is a random function that depends on
local Reynolds number, but not on $\epsilon_\ell$ or $\ell$ separately,
and takes on a unique form at infinite Reynolds number.
For
moments $S^{(p)} =
\langle \delta u_\ell^p \rangle$, the
hypothesis implies \begin{equation}
S^{(p)} = C_p \langle \epsilon_\ell^{p/3}\rangle \ell^{p/3} = C_p \epsilon^{p/3} \ell^{p/3 + \mu(p)}
\label{eq:K62}
\end{equation}
where $\mu(p)$ is a measure of the
intermittency. We define the scaling exponent \(\zeta (p) = p/3 + \mu(p) \).
\section{Outstanding Observational Questions}\label{sec:back_observe}
The solar wind magnetic field spectrum in the inertial range admits a power law over several decades
\citep{Coleman1966prl,matthaeus1982JGR}, although
discussion persists concerning exact spectral indices and
anisotropy \citep[e.g.,][]{tessein2009ApJ}. PDFs of magnetic increments exhibit non-Gaussian features that are increasingly prominent at smaller lags within the inertial range \citep{sorriso-valvo1999GRL}.
As such, it is understood that the scale-dependent kurtosis $\kappa(\ell)$ (SDK; see Section \ref{sec:intermit})
increases with decreasing $\ell$
in the inertial range, while
higher-order exponents exhibit
multifractal scaling \citep{frisch1995book}.
Overall, this picture is
consistent with expectations from
MHD \citep{carbone1995prl,politano1998EuroPhysLet}
which in turn are consistent with
hydrodynamic scaling \citep{sreenivasan1997AnRFM}.
The situation is less clear
when comparing solar wind statistics in the kinetic range
with either MHD or plasma
studies. A major issue is the evidence that solar wind subproton-scale kurtosis {\it decreases} in the kinetic range \citep{koga2007pre,wan2012ApJ,chhiber2018MMS}. This is partially at odds with kinetic \citep{leonardis2013prl} and MHD simulation \citep{wan2012ApJ} as well as
observations in the terrestrial magnetosheath \citep{chhiber2018MMS}. A putative decrease may be due to interference by incoherent waves from foreshock activity, or noise of instrumental or numerical origin
\citep{chian2009AnGeo,wu2013ApJ}, while a constant SDK may signify a physically relevant transition to monofractal scaling
\citep{kiyani2009PRL,leonardis2016pop}. If incoherent plasma waves are the culprit, then proximity to the terrestrial bow shock may play a role, and there are some suggestions to this effect in contrasting \textit{ACE} and \textit{Cluster} observations \citep{wan2012ApJ}. These issues are resolved below in the \textit{PSP}\ observations that we present.
\section{\textit{PSP}\ Observations in near-Sun solar wind}\label{sec:data}
\begin{table*}[ht]
\centering
\begin{tabular}{| c | c | c | c | c | c | c | c | c | c |}
\hline
Time on 2018-11-06 & \(\langle V\rangle\) & \(\langle v\rangle\) & \(\langle T_\text{i}\rangle\) & \(\langle n_\text{i}\rangle\) & \(d_\text{i}\) & \(\langle B\rangle\) & \(\langle b\rangle\) & \(\langle V_\text{A}\rangle\) & \(\beta_\text{i}\) \\ \hline
UTC 02:00:00 - 03:00:00 & 343 km/s & 52 km/s & \(3.8\times 10^5\) K & 304 \(\text{cm}^{-3}\) & 13 km & 99 nT & 63 nT & 124 km/s & 0.4 \\ \hline
\end{tabular}
\caption{Bulk plasma parameters. Shown are the average values of proton speed \(\langle V\rangle \equiv \langle \sqrt{V_R^2 + V_T^2 +V_N^2}\rangle\), rms velocity fluctuation \(\langle v\rangle \equiv \sqrt{\langle |\bm{V} - \langle \bm{V}\rangle|^2 \rangle}\), ion temperature \(\langle T_\text{i}\rangle\), ion density \(\langle n_\text{i} \rangle\), ion inertial scale \(d_\text{i}\), magnetic field magnitude \(\langle B\rangle \equiv \langle \sqrt{B_R^2 + B_T^2 + B_N^2}\rangle\), rms magnetic fluctuation \(\langle b\rangle\equiv \sqrt{\langle |\bm{B} - \langle \bm{B}\rangle|^2 \rangle}\), Alfv\'en speed \(\langle V_\text{A}\rangle \equiv \langle B\rangle/\sqrt{4\pi m_\text{i} \langle n_\text{i}\rangle } \), and ion beta.
Averaging is performed over the entire interval.}\label{tab:bulk}
\end{table*}
We examine higher-order inertial and kinetic scale statistics in a region of young solar wind explored for the first time recently by \textit{PSP}\ \citep{fox2016SSR}, using measurements of the magnetic field from the FIELDS instrument \citep{bale2016SSR}. We focus on a 1-hour interval near first perihelion from UTC 2018-11-06T02:00:00 to 2018-11-06T03:00:00, when \textit{PSP}\ was at \(\sim 35.6~\text{R}_\odot\). We use the SCaM data product, which merges fluxgate and search-coil magnetometer (SCM) measurements by making use of frequency-dependent merging coefficients, thus enabling magnetic field observations from DC to 1 MHz with an optimal signal-to-noise ratio \citep{bowen2020JGR}. Solar Probe Cup (SPC) data from the SWEAP instrument \citep{kasper2016SSR,case2020ApJS} provide estimates of bulk plasma properties.
For the interval used here, the SCaM data-set is resampled to 0.0034 s time cadence. Time series of heliocentric \(RTN\) components \citep{franz2002pss} of the magnetic field are shown in Figure \ref{fig:tser}. SPC measurements of ion density, velocity, and thermal speed are resampled to 1 s resolution and cleaned using a time-domain Hampel filter \citep[e.g.,][]{pearson2002hampel}. The general properties of the plasma during the interval are listed in Table \ref{tab:bulk}. The radial velocity \(V_R\) during this interval indicates a slow wind, with \(V_R \lesssim 450~\kmps\). Three prominent reversals, or switchbacks \citep{DudokDeWit2020ApJS}, of the radial magnetic field are present (see Footnote \ref{ftnt:stationarity}). A high degree of correlation, or Alfv\'enicity, of velocity and magnetic field is observed \citep[][]{kasper2019Nat,chen2020ApJS}.
\begin{figure}
\centering
\includegraphics[width=.47\textwidth]{tser}
\includegraphics[width=.47\textwidth]{spect_compensate}
\caption{\textit{Top}: Time series of heliocentric RTN components of magnetic field, and radial ion velocity. \textit{Bottom}: Trace magnetic field power spectral density (PSD) \(\times\ 1/3\) (dark blue). Instrumental noise floor of SCM (brown). Inset shows compensated PSD.
Vertical lines mark ion gyrofrequency \(f_{ci}\), and inverse of ion and electron inertial lengths (\(1/d_\text{i}\) and \(1/d_\text{e}\)) on wavenumber axis. Equal ion and electron densities are assumed to compute \(d_\text{e}\).}
\label{fig:tser}
\end{figure}
The correlation time \citep{matthaeus1982JGR} is \(\sim 450\) s, corresponding
to a correlation length of \(\sim 1.5\times 10^5\) km, using Taylor's frozen-in approximation \citep{taylor1938ProcRSL} with a mean speed of \(340\) km/s. Taylor's hypothesis has reasonable validity during the first \textit{PSP}\ orbit \citep[][]{chhiber2019psp2,chen2020ApJS}; here it is reaffirmed by noting from Table \ref{tab:bulk} that \(\langle v\rangle /\langle V\rangle\sim 0.15\) and \(\langle V_\text{A}\rangle /\langle V\rangle\sim 0.36\).
Figure \ref{fig:tser} also shows the average power spectral density of the \(RTN\) magnetic field components. Similar spectra from \textit{PSP}\ have been reported previously \citep[e.g.,][]{chen2020ApJS}. We find an inertial range that extends more than two decades in wavenumber; above the ion gyrofrequency the spectrum steepens to a \(\sim -8/3\) power law \citep[e.g.,][]{goldstein2015RSPTA}. Crucial for this work is the signal-to-noise ratio \((S/N)\) at high (kinetic range) frequencies, where the relevant instrumental noise floor is that of the SCM \citep{bowen2020JGR}, shown in Figure \ref{fig:tser}. Clearly \(S/N \ge 100\) up to 100 Hz, and remains \(\ge 5\)
up to the highest available frequency, providing
a measure of confidence that these
measurements are unaffected by instrumental noise.
\section{Intermittency Observed by \textit{PSP}}\label{sec:intermit}
\begin{figure}
\centering
\includegraphics[width=.44\textwidth]{SB}
\includegraphics[width=.44\textwidth]{slopes1}
\includegraphics[width=.44\textwidth]{slopes2}
\caption{\textit{Top}: Structure functions for \(\delta B\)
(Equation \eqref{eq:struc})
{\it vs.}
temporal (\(\tau\)) and spatial (\(\ell\)) lags. A reference \(\ell^{2/3}\) curve (dashed, purple) is shown.
Shaded region (cream) \(\ell = 10 - 10^3~d_\text{i}\) demarcates inertial-range. Shaded region (blue) \(\ell = 0.1-2~d_\text{i}\) demarcates kinetic-range. \textit{Middle}:
Scaling exponents \(\zeta (p)\)
{\it vs.} $p$ for inertial,
kinetic, and intermediate ranges. Dashed line: K41 prediction \(\zeta (p) = p/3\). Moments not determined with reliable accuracy: grey-shaded region. \(1 \sigma\) uncertainty estimates for straight line fits to determine \(\zeta (p)\) are shown, but are generally smaller than the symbols.
\textit{Bottom}: Same as middle, but using ESS; scaling exponents for each range of lags are divided by \(\zeta (3)\) for the respective range.
Kinetic-range curve (blue triangles) overlaps the K41 curve.}
\label{fig:struct}
\end{figure}
We define increments of magnetic-field components at time \(t\) as
\begin{equation}
\delta B_i(t,\tau) = B_i(t+\tau) - B_i(t),\label{eq:inc2}
\end{equation}
where \(i \in \{R,T,N\}\) and \(\tau\) is a temporal lag. To convert temporal lags to spatial lags we use the Taylor approximation, wherein the spatial lag corresponding to \(\tau\) is \(\ell = \langle V_R\rangle \tau\) \citep[see][]{chhiber2020ApJS}, with mean radial solar-wind speed \(\langle V_R\rangle \sim 335~\kmps\) here. In this way we obtain spatial increments \(\delta B_i (t,\ell)\) using Equation \eqref{eq:inc2}. The magnitude of the vector magnetic increment is then \(\delta B(t,\ell) \equiv (\delta B_R^2 + \delta B_T^2 + \delta B_N^2)^{1/2}\).
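A minimal sketch of this construction (the array layout is an assumption; the cadence, mean radial speed, and \(d_\text{i}\) are the values quoted in the text):

```python
import numpy as np

def increments(B, lag):
    # Vector magnetic increment at an integer sample lag:
    # delta B_i(t, tau) = B_i(t + tau) - B_i(t), then the vector magnitude.
    dB = B[lag:] - B[:-lag]          # B assumed shaped (N_samples, 3) for R, T, N
    return np.linalg.norm(dB, axis=1)

# Taylor's frozen-in approximation maps the temporal lag to a spatial one.
dt, V_R, d_i = 0.0034, 335.0, 13.0   # cadence [s], <V_R> [km/s], d_i [km]
tau = 10                             # lag in samples
ell_km = V_R * tau * dt              # ell = <V_R> * tau, in km
print(ell_km / d_i)                  # spatial lag in units of d_i (~0.9 here)
```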
The $p$-th order structure functions of \(\delta B\) are
\begin{equation}
S^{(p)}_B (\ell) = \langle [\delta B(t,\ell)]^p \rangle_T,
\label{eq:struc}
\end{equation}
where the \(\langle \dots \rangle\) refers to averaging over the time interval \(T \gg \tau \). Similarly,
for each component \({B}_i\),
\begin{equation}
S^{(p)}_{B_i} (\ell) = \langle [\delta B_i(t,\ell)]^p \rangle_T.
\label{eq:struc_comp}
\end{equation}
The accuracy of computed
higher-order moments is affected by
sample size; a rule of thumb
is that the highest order that can be computed reliably is
\(p_\text{max}=\log N-1\), where \(N\) is the number of samples \citep{dudokdewit2013SSR}. With \(N\sim 1.1\times 10^6\) for the present interval we get \(p_\text{max} = 5\). Statistics of higher order than this are interpreted with some reservation.
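Equation \eqref{eq:struc} and the sample-size rule of thumb are straightforward to sketch (interpreting the logarithm as base 10, which reproduces \(p_\text{max}=5\) for \(N\sim 1.1\times 10^6\)):

```python
import numpy as np

def structure_function(dB, p):
    # S^(p)(ell) = < |delta B(t, ell)|^p > averaged over the interval,
    # for increments dB computed at a single lag ell.
    return np.mean(np.abs(dB) ** p)

def p_max(n_samples):
    # Rule of thumb: highest moment order that can be computed reliably.
    return int(np.log10(n_samples)) - 1

print(p_max(1_100_000))                          # 5
print(structure_function(np.full(10, 2.0), 3))   # 8.0
```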
The top panel of Figure \ref{fig:struct} shows \(S^{(p)}_B (\ell)\) for \(p\) ranging from 1 to 8, and spatial lags \(\ell\) ranging from \(\sim 0.1~d_\text{i}\), deep within the kinetic range, to \(10^4~d_\text{i}\),
close to the energy-containing scales (the correlation length is \(\sim 1.1\times 10^4~d_\text{i}\)).
The slopes of \(S^{(p)}\) {\it vs.} \( \ell \) are larger at kinetic scales, indicating the presence of relatively stronger gradients.
Structure functions for individual components (Equation \eqref{eq:struc_comp}) are very similar (not shown).
Next we investigate the slopes of the structure functions in greater detail. For Gaussian and non-intermittent statistics
consistent with K41 (see \S \ref{sec:theory})
one expects
\(S^{(p)}(\ell) \propto \ell^{\zeta (p)}\) with \(\zeta (p) = p/3\).
Figure \ref{fig:struct} (middle panel) shows the scaling exponents \(\zeta (p)\) vs \(p\), computed separately for the inertial \((10 - 10^3~d_\text{i} )\) and kinetic \((0.1 - 2~d_\text{i} )\) ranges, as well as an intermediate range \((2 - 10~d_\text{i})\). The exponents are computed by using chi-squared error minimization to fit straight lines to \(\ln{S^{(p)}}\) vs \(\ln{\ell}\). Inertial range exponents (red circles) begin to diverge from the K41 curve beyond \(p=3\), with higher orders showing larger departures, indicating strong intermittency with multifractal statistics (see Equation \eqref{eq:K62}). The kinetic-range curve (blue diamonds) also lies far from the K41 prediction, but is rather close to a straight line, suggesting monofractal and scale-similar but non-Gaussian statistics. These results are consistent with analyses of near-Earth solar wind turbulence based on \textit{Cluster} measurements \citep{kiyani2009PRL,alberti2019Entropy}. Exponents for the intermediate range show a transition from inertial to kinetic range behavior.
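The exponent extraction amounts to a straight-line fit in log-log coordinates; the sketch below uses `np.polyfit` as a stand-in for the chi-squared minimization, checked against a synthetic power law with the K41 second-order value \(\zeta = 2/3\).

```python
import numpy as np

def scaling_exponent(ells, S_p):
    # Fit ln S^(p) = zeta * ln(ell) + const; return the slope zeta and a
    # 1-sigma uncertainty from the fit covariance.
    coeffs, cov = np.polyfit(np.log(ells), np.log(S_p), 1, cov=True)
    return coeffs[0], np.sqrt(cov[0, 0])

ells = np.logspace(1, 3, 20)               # synthetic lags
zeta, err = scaling_exponent(ells, ells ** (2.0 / 3.0))
print(round(zeta, 3))                      # 0.667
```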
The bottom panel of Figure \ref{fig:struct} employs the Extended Self Similarity (ESS) hypothesis \citep{benzi1993PRE}, which posits that scalings of structure functions at each order are related to that of other orders. In particular the scaling of \(S^{(p)}(\ell)\) with order $p>3$ may relate better to the behavior of \(S^{(3)}(\ell )\) than to the lag $\ell$ itself. Accordingly we proceed by dividing \(\zeta (p)\) for the different lag ranges by \(\zeta (3)\) for the respective range. This rescaling does not affect the inertial range result significantly. Remarkably, the kinetic-scale exponents collapse almost perfectly to the K41 line. As far as we are aware, this has not been previously demonstrated for magnetic fluctuations in the solar wind. Similar use of ESS has been applied in kinetic simulations \citep{wan2016PoP,leonardis2016pop} and to solar wind density fluctuations at subproton scales \citep{chen2014ApJ}. The intermediate range once again exhibits transitional behavior.\footnote{The present application of ESS is somewhat at variance with the original usage \citep{benzi1993PRE}, where the significance of \(\zeta(3)\) derives from its correspondence to the third-order energy transfer law. This connection is lost in a plasma because the energy flux involves \textit{mixed} third-order structure functions of both magnetic and velocity fields \citep{politano1998PRE}; in this sense ESS should properly be based on mixed correlators \citep{politano1998EuroPhysLet}. However, including velocity statistics here is not an option since \textit{PSP}\ plasma data are not available at sufficiently high cadence to probe the kinetic range \citep{case2020ApJS}. Nevertheless, the use of ESS as we have implemented it clearly organizes the data in a revealing way.}
To further investigate near-Sun kinetic scale intermittency we examine PDFs of increments of \(B_R\) at lags ranging from near energy-containing scales, through the inertial range, down to subproton scales. We first normalize increments (Equation \eqref{eq:inc2}) at each lag by the corresponding standard deviation, and then compute PDFs by calculating the relative frequency of occurrence of increments within designated bins and dividing these frequencies by the bin width to obtain probability densities. The resulting PDFs (Figure \ref{fig:pdf}) are compared
with a Gaussian PDF for reference. Increments at \(\ell = 5000~d_\text{i}\) measure structures at scales of about half a correlation length, and these non-uniform, ``system-size'' structures exhibit a highly irregular PDF, which nevertheless has the narrowest tails of all. PDFs for the two inertial range lags (100 and \(10~d_\text{i})\) show wide, super-Gaussian tails, signifying the presence of outlying ``extreme'' events and intermittency. The \(10~d_\text{i}\) lag has slightly wider tails, consistent with the well known property of stronger intermittency at smaller inertial-range scales \citep[e.g.,][]{sorriso-valvo1999GRL,chhiber2018MMS}.
\begin{figure}
\centering
\includegraphics[width=.47\textwidth]{pdf_r2}
\caption{PDFs of \(\delta B_R\) normalized by their standard deviation
$\sigma$. Gaussian PDF shown for reference (dashed line).
\textit{Inset}: PDFs of unnormalized \(\delta B_R\); lags from \(1~d_\text{i}\) (outermost curve, green) to \(0.1~d_\text{i}\) (innermost, red). Collapse of green and red curves after rescaling is evident in the main graphic. All PDFs include bins with population $\ge 5$.}
\label{fig:pdf}
\end{figure}
Moving on to kinetic-range lags (1 and \(0.1~d_\text{i}\)), we see super-Gaussian tails in PDFs, indicating the continued presence of intermittent structures at these scales. However, the widths of these tails are comparable to (perhaps even slightly narrower than) the \(10~d_\text{i}\) case, suggesting a saturation of the level of intermittency at proton scales (see also Figure \ref{fig:kurt}, below). Furthermore, the scale similarity suggested by the investigation of scaling exponents in the kinetic range (Figure \ref{fig:struct}) is reaffirmed by the fact that PDFs of the 1 and \(0.1~d_\text{i}\) lags overlap to a large degree. To emphasize this ``monoscaling'', the inset in Figure \ref{fig:pdf} shows PDFs of increments of \(B_R\) for \(\ell = \{0.1,0.3,0.5,0.8,1\}~d_\text{i}\), \textit{not} normalized by the respective standard deviations as in the main graphic. The outermost (green) curve is for \(\ell=1~d_\text{i}\) and the innermost (red) curve is for \(\ell = 0.1~d_\text{i}\). The scale-similar monoscaling of the PDFs is demonstrated by the fact that these PDFs collapse onto each other after being rescaled by their standard deviations \citep[cf.][]{kiyani2009PRL,osman2015ApJL}. PDFs of \(\delta B_T\) and \(\delta B_N\) behave similarly.
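The PDF construction and the rescaling collapse can be illustrated with synthetic data (Gaussian stand-ins rather than PSP measurements; the 0.05 agreement threshold is an arbitrary illustrative choice):

```python
import numpy as np

def increment_pdf(dB, bins):
    # PDF of sigma-normalized increments: relative frequency per bin divided
    # by the bin width (density=True performs both normalizations).
    x = dB / np.std(dB)
    density, _ = np.histogram(x, bins=bins, density=True)
    return density

rng = np.random.default_rng(0)
bins = np.linspace(-5.0, 5.0, 51)
p1 = increment_pdf(rng.standard_normal(200_000), bins)        # stand-in, one "lag"
p2 = increment_pdf(3.0 * rng.standard_normal(200_000), bins)  # same shape, larger amplitude
# Monoscaling: after rescaling by the standard deviation the PDFs collapse.
print(np.max(np.abs(p1 - p2)) < 0.05)  # True
```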
The final diagnostic of intermittency we examine is the SDK,
a normalized fourth-order moment
that emphasizes the tails of PDFs presented previously:
\begin{equation}
\kappa (\ell) = \frac{S^{(4)} (\ell)}{\left [S^{(2)} (\ell)\right]^2},\label{eq:kurt}
\end{equation}
where \(S^{(p)}\) can be defined using either Equation \eqref{eq:struc} or \eqref{eq:struc_comp}. \(\kappa (\ell )\) may be thought of as the inverse of the filling fraction for structures at scale \(\ell\); i.e., if \(\kappa (\ell)\) increases with decreasing \(\ell\) then the fraction of volume occupied by structures at scale \(\ell\) decreases with decreasing \(\ell\). The scalar Gaussian distribution has \(\kappa = 3\); a value \(\kappa > 3\) is a manifestation of wider tails relative to the Gaussian \citep[e.g.,][]{decarlo1997kurtosis}.
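Equation \eqref{eq:kurt} in code, with a sanity check on synthetic data (a zero-mean Gaussian component gives \(\kappa \approx 3\)):

```python
import numpy as np

def sdk(dB):
    # kappa(ell) = S^(4)(ell) / [S^(2)(ell)]^2, computed from increments dB
    # at a single lag ell.
    return np.mean(dB ** 4) / np.mean(dB ** 2) ** 2

rng = np.random.default_rng(1)
print(round(sdk(rng.standard_normal(1_000_000)), 1))  # 3.0 (Gaussian)
```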
\begin{figure}
\centering
\includegraphics[width=.44\textwidth]{kurt}
\caption{Scale-dependent kurtosis of magnetic field.}
\label{fig:kurt}
\end{figure}
Figure \ref{fig:kurt} shows \(\kappa (\ell)\) for individual components of \(\delta \bm{B}\) as well as its magnitude. All four cases behave similarly -- the kurtosis is near Gaussian at the largest lags, increases to values between 10 and 25 as the lag is decreased across the inertial range to \(\sim 10~d_\text{i}\), and then stays roughly constant down to \(0.1~d_\text{i}\). Once again, this indicates a saturation of the intermittency and scale-similar, monofractal behavior at kinetic scales.\footnote{\label{ftnt:stationarity} To test the robustness of our results and their sensitivity to interval selection (stationarity), the analysis was repeated separately for the first and second halves of the interval, as well as the ``quiet'' period between 02:20 and 02:50 (see Figure \ref{fig:tser}). The results were essentially unchanged, although SDK in the quiet period is relatively smaller and flattens at relatively larger scales (tens of \(d_\text{i}\)), suggesting weaker multifractality in the inertial range.} This result is consistent with kinetic simulations and \textit{Cluster} observations in the solar wind presented by \cite{wu2013ApJ}. A likely candidate for producing monofractal kinetic-scale kurtosis is a scale-independent fragmentation of current structures between ion and electron scales, as suggested by some kinetic simulations \citep{karimabadi2013coherent}. Note the marked contrast to Figure 8 of \cite{chhiber2018MMS}, where SDK is re-Gaussianized at kinetic scales presumably due to terrestrial foreshock activity and/or instrumental noise in \textit{MMS} measurements.
\section{Discussion and Conclusions}\label{sec:Disc}
In this paper we investigated intermittency in near-Sun solar wind observations of inertial and kinetic range magnetic turbulence, using standard measures including SDK and scaling of higher-order moments up to eighth. Use of a unique \textit{PSP}\ FIELDS dataset, merged from fluxgate and search-coil magnetometer measurements \citep{bowen2020JGR}, enables study of high frequencies well into the subproton scales, taking Taylor's hypothesis into account.
Our main results extended several prior studies and clarified outstanding questions concerning solar wind intermittency. First, we observed clearly a monofractal, non-Gaussian, subproton kinetic range, consistent with near-Earth observations \citep{kiyani2009PRL,alberti2019Entropy}. In particular, with \textit{PSP}\ data close to the sun at 36 $\text{R}_\odot$, far from any foreshock activity, and measurements unaffected by noise \citep{koga2007pre,chian2009AnGeo,wan2012ApJ,wu2013ApJ,chhiber2018MMS}, it is possible to establish clearly that the kurtosis does not re-Gaussianize at sub-ion scales and the statistics remain intermittent. Another major result of interest from the perspective of turbulence theory \citep{benzi1993PRE,benzi1993EPL} is that implementation of ESS
for sub-ion scales causes a collapse to linear, Kolmogorov-like
behavior for the scaling exponents -- consistent with results reported for kinetic simulation \citep{wan2016PoP}. As far as we are aware this has not been previously demonstrated for magnetic fluctuations in the solar wind. Finally, we report evidence that the magnetic field in near-Sun solar wind exhibits multifractal scaling in the inertial range \citep[c.f.,][]{zhao2020ApJ,alberti2020psp}, which is consistent with near-Earth observations \citep[][and references therein]{bruno2019ESS}, as well as kinetic simulations of turbulence
\citep{leonardis2016pop,wan2016PoP}.
Multifractal inertial-range scaling of higher-order moments is a familiar result in large turbulent MHD systems \citep{politano1998EuroPhysLet,wan2012ApJ}.
We emphasize that the present results are enabled by the unique orbital position of \textit{PSP}, along with the high-cadence low-noise character of the
FIELDS/SCaM magnetic field dataset. Even with the clarifications this analysis provides, there remain unanswered questions. One major outstanding
issue is why the subproton range becomes monofractal. This implies self-similarity (or ``rescaling'') of the underlying PDF over that range \citep[e.g.,][]{kiyani2009PRL}.
One possible interpretation is that the range between proton and electron scales is populated by scale-invariant sheet-like concentrations of electric current density. In fact, large numbers of highly dynamic subproton-scale current sheets are seen in kinetic simulations \citep[e.g.,][]{karimabadi2013coherent} and have been inferred in observations \citep[e.g.,][]{retino2007}. This may be distinguished from an effect of incoherent
linear (noninteracting) waves, which may be expected to produce a return to Gaussianity \citep{koga2007pre,chhiber2018MMS}, and not an onset of monofractal scaling. However, a rigorous connection of structure with monofractality remains to be established and is deferred to future research.
\acknowledgments
We thank A. Chasapis for useful discussions. This research was supported in part by the \textit{PSP}\ mission under the IS\(\odot\)IS\ project (contract NNN06AA01C) and a subcontract to University of Delaware from Princeton (SUB0000165), and NASA HSR grant 80NSSC18K1648. We acknowledge the \textit{PSP}\ mission for use of the data, which are publicly available at the \href{https://spdf.gsfc.nasa.gov/}{NASA Space Physics Data Facility}. FIELDS data are publicly available at \url{https://fields.ssl.berkeley.edu/data/}.
\section{Introduction}
Time-series classification (TSC) problems involve training a classifier on a set of cases, where each case contains an ordered set of real valued attributes and a class label. Time-series classification problems arise in a wide range of fields including, but not limited to, data mining, statistics, machine learning, signal processing, environmental sciences, computational biology, image processing and chemometrics. A wide range of algorithms have been proposed for solving TSC problems (see, for example~\cite{rodriguez05svm,ding08querying,corduas08timeseries,ye09shapelets,abraham10integrated,batista11complexity,jeong11weighted,bagnall12ensemble,douzal12trees,hills13shapelet,deng13forest}).
In~\cite{bagnall12ensemble}, Bagnall {\em et al.} argue that the easiest way to gain improvement in accuracy on TSC problems is to transform into an alternative data space where the discriminatory features are more easily detected. They constructed classifiers on data in the time, frequency, autocorrelation and principal component domains and combined predictions through alternative ensemble schemes. The main conclusion in~\cite{bagnall12ensemble} is that for problems where the discriminatory features are based on similarity in change and similarity in shape, operating in a different data space yields a greater improvement in performance than designing a more complex classifier for the time domain. However, the issue of which technique is best within a single data domain is not addressed in~\cite{bagnall12ensemble}. Our aim is to experimentally determine the best method for constructing classifiers in the time domain, an area that has drawn most of the attention of TSC data mining researchers~\cite{batista11complexity,jeong11weighted,rodriguez05svm,douzal12trees,deng13forest}. The general consensus amongst data mining researchers is that {\em ``simple nearest neighbor classification is very difficult to beat"}~\cite{batista11complexity}. For problems with few training cases, an elastic distance measure such as dynamic time warping (DTW) or longest common subsequence (LCSS) is often superior to Euclidean distance, but as the number of series increases {\em``the accuracy of elastic measures converge with that of Euclidean distance"}~\cite{ding08querying}.
Our objective is to empirically test various aspects of these commonly made assertions. We examine whether one nearest neighbour (1-NN) is in fact not significantly worse than other classifiers and whether setting the parameter $k$ for nearest neighbour through cross validation on the training data improves performance. For DTW, the key parameter is the warping window size. This dictates the largest allowable displacement between two points in the warping path.
We evaluate whether setting the warping window through cross validation makes the classifier more accurate. Several alternative forms of DTW have been proposed in the literature. A version of DTW that weights against large warpings (WDTW) is described in~\cite{jeong11weighted}. The weighting scheme can be used in conjunction with both standard DTW and an alternative version based on first-order differences (DDTW), described in~\cite{keogh01derivative}. We extend the experiments described in~\cite{jeong11weighted} to test whether their conclusions hold over a large number of data sets with parameter optimisations for all of the algorithms considered. All datasets and code to reproduce experiments and results are available online~\cite{TSC_Web}.
To summarise, we have conducted extensive experiments to answer the following questions:
\begin{enumerate}
\item Is Euclidean/DTW nearest neighbour (1-NN) really better than other commonly used algorithms such as tree-based or probabilistic classifiers?
\item Does the accuracy of 1-NN Euclidean approach that of 1-NN DTW as training set size increases?
\item Is it better to use $k$ nearest neighbours ($k$-NN), with $k$ set through cross validation, rather than 1-NN?
\item Is it worthwhile setting the warping window for DTW through cross validation?
\end{enumerate}
We have answered these questions through over three million experiments on 77 TSC problems. 43 datasets come from the UCR repository~\cite{UCRWeb}, 29 problems are from other published research, including~\cite{ye09shapelets,ShapeletWeb}, and 5 are new data sets on electricity device classification problems described in~\cite{lines14elastic}.
The structure of this paper is as follows. In Section~\ref{background} we provide background into TSC and review related research on DTW. In Section~\ref{data} we detail the 77 data sets we used in experiments. In Section~\ref{results} we present our results and in Section~\ref{conc} we summarise our conclusions.
\section{Background and Related Work}
\label{background}
\subsection{Time Series Classification (TSC)}
We define time series classification as the problem of building a classifier from a collection of labelled training time series. We limit our attention to problems where each time series has the same number of observations. We define a time series $\bf{x_i}$ as a set of ordered observations
$${\bf x_i} = <x_{i1},\ldots,x_{im}>$$ and an associated class label $y_i$. The training set is a set of $n$ labelled pairs $$D=\{({\bf x_1}, y_1), \ldots, ({\bf x_n}, y_n)\}.$$ For traditional classification problems, the order of the attributes is unimportant and the interaction between variables is considered independent of their relative positions. For time series data, the ordering of the variables is often crucial in finding the best discriminating features. There are three broad categories of TSC discriminating features which are described by three general approaches to measuring similarity between time series: similarity in shape, similarity in change and similarity in time.
Similarity in shape describes the scenario where class membership is characterised by a common shape but the discriminatory shape is phase independent. If the common shape involves the whole series, but is phase shifted between instances of the same class, then transformation into the frequency domain is often the best approach (for example, see~\cite{agrawal93efficient}). If the common shape is local and embedded in confounding noise, then subsequence techniques such as Shapelets can be employed~\cite{ye09shapelets,hills13shapelet}.
Similarity in change refers to the situation where the relevant discriminatory features are related to the autocorrelation function of each series. The most common approach in this situation is to fit an ARMA model, then base similarity on differences in model parameters~\cite{corduas08timeseries}. The common element to similarity in shape and change is that similarity between series is not measured in the time domain.
However, the majority of the data mining research into TSC has concentrated in similarity in time. This can be quantified by measures such as Euclidean distance or correlation~\cite{douzal12trees,abraham10integrated}. Similarity in time is characterised by the situation where the series from each class are observations of an underlying common curve in the time dimension. Variation around this underlying common shape is caused by noise in observation, and also by possible noise in indexing which may cause a slight phase shift. A classic example of this type of similarity is the Cylinder-Bell-Funnel artificial data set, where there is noise around the underlying shape, but also noise in the index of where the underlying shape transitions~\cite{douzal12trees}.
The commonly used benchmark classification algorithm for problems with small phase shift is 1-NN with an elastic measure such as DTW or LCSS to allow for small shifts in the time axis. In a comprehensive study~\cite{ding08querying}, DTW was found to be as least as good as other elastic measures based on edit distance, and constraining the warping window was found to speed up computation, {\em ``while yielding the same or even better accuracy"}~\cite{ding08querying}. The experimentation in~\cite{ding08querying} addresses the issue of what distance measure to use and is the starting point for our research. We begin by testing their assumptions about classifier selection and parameter setting before investigating alternative variants of DTW and combination schemes for the classifiers.
\subsection{Classification Algorithms}
The nearest neighbour classifier is a lazy classifier (i.e. requires no training) that classifies new cases by finding the closest case in the training set with a distance function, then using the class of the closest case as the predicted class for the new case. Given the focus on distance functions in time series data mining research, it is perhaps unsurprising that the majority of classification has used 1-NN. Whilst often highly effective, 1-NN is known to be susceptible to problems such as outliers in the training set and large numbers of redundant features. Outliers can be compensated for by using $k$ nearest neighbours and a voting scheme. Redundant features may be dealt with by filtering or by employing one of the plethora of alternative classifiers. Filtering was found to be not effective in~\cite{bagnall12ensemble}. We compare nearest neighbour classifiers against C4.5~\cite{quinlan93c4}, Random Forest~\cite{breiman01random}, Rotation Forest~\cite{rodriguez06rotation}, Naive Bayes~\cite{lewis98naive} and Bayesian networks~\cite{pearl88bayesnet} and Support Vector Machines with linear and quadratic kernels~\cite{cortes95support}.
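To make the baseline concrete, a 1-NN classifier can be sketched as follows. This is an illustrative sketch, not the Weka-based implementation used in the experiments; the distance function is pluggable, so any of the elastic measures described below can be substituted for Euclidean distance.

```python
def one_nn_classify(x, train, distance):
    """Predict the label of x as the label of the closest training case."""
    best_dist, best_label = min(
        ((distance(x, xi), yi) for xi, yi in train),
        key=lambda pair: pair[0])  # compare on distance only
    return best_label

def squared_euclidean(u, v):
    """Squared Euclidean distance between two equal-length series."""
    return sum((ui - vi) ** 2 for ui, vi in zip(u, v))
```

For $k$-NN the single minimum is replaced by a majority vote over the $k$ smallest distances.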
\subsection{Dynamic Time Warping}
For similarity in shape, Dynamic Time Warping (DTW) is commonly used to mitigate against distortions in the time
axis~\cite{ratanamahatana05threemyths}. Suppose we want to measure the distance between two series, $\mathbf{a}=\{a_1,a_2,\ldots,a_m\}$ and $\mathbf{b}=\{b_1,b_2,\ldots,b_m\}$. Let $M(\mathbf{a},\mathbf{b})$ be the $m \times
m$ pointwise distance matrix between $\mathbf{a}$ and $\mathbf{b}$, where
$M_{i,j}= (a_i-b_j)^2$.
A warping path $P=<(e_1,f_1),(e_2,f_2),\ldots,(e_s,f_s)>$ is a set of points (i.e. pairs of indexes) that
define a traversal of matrix $M$. So, for example, the Euclidean distance $d_E(\mathbf{a,b})=\sum_{i=1}^m (a_i-b_i)^2$ corresponds to the path along the diagonal of $M$, i.e.
$P_e=<(1,1),(2,2),\ldots,(m,m)>$.
A valid warping path must satisfy the conditions $(e_1,f_1)=(1,1)$ and $(e_s,f_s)=(m,m)$ and
that $0 \leq e_{i+1}-e_{i} \leq 1$ and $0 \leq f_{i+1}- f_i \leq 1$
for all $i < s$.
The DTW distance between two series is defined by the path through $M$ that minimizes the total accumulated distance, subject to constraints on
the amount of warping allowed. Let $p_i=M_{e_i,f_i}$ be the distance
between element at position $e_i$ of $\mathbf{a}$ and at position $f_i$ of $\mathbf{b}$ for the $i^{th}$
pair of points in a proposed warping path $P$. The distance for any path $P$ is
\[ D_P(\mathbf{a},\mathbf{b}) =\sum_{i=1}^s p_i.\]
If $\mathcal{P}$ is the space of all possible paths, the DTW path $P^*$ is the path that has the minimum distance, i.e.
$$P^* = \arg\min_{P \in \mathcal{P}}(D_P(\mathbf{a},\mathbf{b})),$$
and hence the DTW distance between series is
$$D_{P^*}(\mathbf{a},\mathbf{b}) = \sum_{i=1}^s p_i.$$
The optimal path $P^*$ can be found exactly through a dynamic programming formulation. This can be a time consuming operation, and it is common to put a restriction on the amount of warping allowed. This restriction is equivalent to putting a maximum allowable distance between any pairs of indexes in a proposed path. If the warping window, $r$, is the proportion of warping allowed, then the optimal path is constrained so that $$|e_i-f_i| \leq r\cdot m \;\;\; \forall (e_i,f_i) \in P^*.$$
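The dynamic programming formulation with a warping window can be sketched as follows. This is an illustrative implementation of the formulation above, not the code used in the experiments; it returns the accumulated squared distance $D_{P^*}$.

```python
def dtw_distance(a, b, r=1.0):
    """Squared DTW distance between equal-length series a and b.

    r is the warping window as a proportion of the series length m,
    enforcing |e_i - f_i| <= r*m along the optimal path.
    """
    m = len(a)
    w = max(1, int(r * m))  # window width in index units
    INF = float("inf")
    # D[i][j] holds the minimum accumulated cost of aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(m + 1)]
    D[0][0] = 0.0
    for i in range(1, m + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i][j] = cost + min(D[i - 1][j],      # warp a
                                 D[i][j - 1],      # warp b
                                 D[i - 1][j - 1])  # match
    return D[m][m]
```

Small values of r constrain the path towards the diagonal (Euclidean-like behaviour), while r = 1 gives the full-window DTW referred to as DTWR1 in the experiments.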
\subsection{Longest Common Subsequence}
The Longest Common Subsequence Distance (LCSS) is based on the solution to the longest common subsequence problem in pattern matching. The typical problem is to find the longest subsequence that is common to two discrete series based on the edit distance. This approach can be extended to consider real-valued time series by using a distance threshold $\epsilon$, which defines the maximum difference between a pair of values that is allowed for them to be considered a match~\cite{kuzmanic07handshape}. LCSS finds the optimal alignment between two series by inserting gaps to find the greatest number of matching pairs. For example, consider the discrete case where we have two strings S = ``ABCADACDAB" and T = ``BCDADBCACB". If we simply observe the point-wise matches between the two sequences, we can extract matching pairs for the substring ``ADCB" as shown in Figure~\ref{img:lcssPointwise}.
\begin{figure}[htbp]
\centering
\includegraphics[width=4cm]{lcssPointwise}
\caption{The point-wise matches for ``ABCADACDAB" and ``BCDADBCACB"}
\label{img:lcssPointwise}
\end{figure}
However, using LCSS we can find a longer matching subsequence by inserting spaces into the two strings. In this case, the elasticity of the measure means we find the subsequence ``BCDACAB", as depicted in Figure~\ref{img:lcssMatchedExample}.
\begin{figure}[htbp]
\centering
\includegraphics[width=4cm]{lcssMatchedExample}
\caption{The LCSS of ``ABCADACDAB" and ``BCDADBCACB"}
\label{img:lcssMatchedExample}
\end{figure}
The LCSS between two series ${\bf a}$ and ${\bf b}$ can be found using Algorithm~\ref{algo1}.
\begin{algorithm}[htbp]
\caption{LCSS (${\bf a},{\bf b}$)}
\label{algo1}
\begin{algorithmic}[1]
\STATE Let $L$ be an $(m+1)\times(m+1)$ matrix initialised to zero.
\FOR{$i \leftarrow m$ to $1$}
\FOR{$j \leftarrow m$ to $1$}
\STATE $L_{i,j} \leftarrow L_{i+1,j+1}$
\IF{$a_i = b_j$}
\STATE $L_{i,j} \leftarrow L_{i,j}+1$
\ELSIF{$L_{i,j+1} > L_{i,j}$}
\STATE $L_{i,j} \leftarrow L_{i,j+1}$
\ELSIF {$L_{i+1,j} > L_{i,j}$}
\STATE $L_{i,j} \leftarrow L_{i+1,j}$
\ENDIF
\ENDFOR
\ENDFOR
\RETURN $L_{1,1}$
\end{algorithmic}
\end{algorithm}
The LCSS distance between ${\bf a}$ and ${\bf b}$ is
\[d_{LCSS}({\bf a,b}) = 1- \frac{LCSS({\bf a,b})}{m}.\]
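Algorithm~\ref{algo1} translates directly into the following sketch, extended with the threshold $\epsilon$ for real-valued series as described above (illustrative code, not the implementation used in the experiments):

```python
def lcss_distance(a, b, epsilon=0.0):
    """LCSS distance between equal-length series a and b.

    A pair (a_i, b_j) counts as a match when |a_i - b_j| <= epsilon;
    epsilon = 0 recovers exact matching, as in the discrete example.
    """
    m = len(a)
    # L[i][j] holds the LCSS length of the suffixes a[i:] and b[j:]
    L = [[0] * (m + 1) for _ in range(m + 1)]
    for i in range(m - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            if abs(a[i] - b[j]) <= epsilon:
                L[i][j] = L[i + 1][j + 1] + 1
            else:
                L[i][j] = max(L[i][j + 1], L[i + 1][j])
    return 1.0 - L[0][0] / m
```

Encoding the letters A--D as 1--4, the two strings from the example above give an LCSS length of 7 and hence a distance of $1 - 7/10 = 0.3$.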
\subsection{Derivative Dynamic Time Warping}
Keogh and Pazzani proposed a modification of DTW called Derivative Dynamic Time Warping (DDTW)~\cite{keogh01derivative} that first transforms the series into a series of first differences. Given a series $\mathbf{a}=\{a_1,a_2,\ldots,a_m\}$, the difference series is $\mathbf{a'}=\{a'_2,a'_3,\ldots,a'_{m-1}\}$ where $a'_i$ is defined as the average of the slopes between $a_{i-1}$ and $a_i$ and between $a_i$ and $a_{i+1}$, i.e.
$$a'_i = \frac{(a_i-a_{i-1})+(a_{i+1}-a_{i-1})/2}{2},$$
for $1<i<m$. DDTW is designed to mitigate against noise in the series that can adversely affect DTW.
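The difference transform itself is short; DDTW is then ordinary DTW applied to the transformed series. A sketch of the transform defined above:

```python
def derivative_series(a):
    """Keogh-Pazzani first-difference transform: each interior point
    becomes the average of the slope from its left neighbour and the
    half-slope across both neighbours."""
    return [((a[i] - a[i - 1]) + (a[i + 1] - a[i - 1]) / 2) / 2
            for i in range(1, len(a) - 1)]
```

Note that a linear ramp maps to a constant series, so vertical offsets between otherwise similar series are removed before warping.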
\subsection{Weighted Dynamic Time Warping}
A weighted form of DTW (WDTW) was proposed by Jeong {\em et al.}~\cite{jeong11weighted}. WDTW adds a multiplicative weight penalty based on the warping distance between points in the warping path. It favours reduced warping, and is a smooth alternative to the cutoff point approach of using a warping window. When creating the distance matrix $M$, a weight penalty $w_{|i-j|}$ for a warping distance of $|i-j|$ is applied, so that
$$M_{i,j}= w_{|i-j|} (a_i-b_j)^2.$$
A logistic weight function is proposed in~\cite{jeong11weighted}, so that a warping of $d$ places imposes a weight of
$$w(d)=\frac{w_{max}}{1+e^{-g\cdot(d-m/2)}},$$
where $w_{max}$ is an upper bound on the weight (set to 1), $m$ is the series length and $g$ is a parameter that controls the penalty level for large warpings. The larger $g$ is, the greater the penalty for warping.
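The weight function and the weighted cost matrix can be sketched as follows. This is illustrative only; the default $g = 0.05$ here is an arbitrary choice, whereas in the experiments $g$ is set by cross validation.

```python
import math

def wdtw_weight(d, m, g=0.05, w_max=1.0):
    """Logistic penalty for a warping of d places on series of length m."""
    return w_max / (1.0 + math.exp(-g * (d - m / 2)))

def weighted_cost_matrix(a, b, g=0.05):
    """Pointwise cost matrix M with the WDTW weight applied."""
    m = len(a)
    return [[wdtw_weight(abs(i - j), m, g) * (a[i] - b[j]) ** 2
             for j in range(m)] for i in range(m)]
```

The weight rises monotonically towards $w_{max}$ as the warping grows, in contrast to the hard cutoff imposed by a warping window.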
Jeong {\em et al.} compared WDTW to Euclidean distance, full window DTW and LCSS on 20 UCR data sets using a 1-NN classifier. Half of the test data was used as a validation set for setting the value of $g$ and the other half was used to measure accuracy. They state their results demonstrate {\em ``WDTW and WDDTW clearly outperform standard DTW, DDTW and LCSS measures."} We test this assertion in Section~\ref{results}.
\section{Data Sets}
\label{data}
We have collected 77 data sets, the names of which are shown in Table~\ref{tab1}. 43 of these are available from the UCR repository~\cite{UCRWeb}, 29 were used in other published work~\cite{hills13shapelet,ye09shapelets,ShapeletWeb} and 5 are new data sets we present for the first time. Further information and the data sets we have permission to circulate are available from~\cite{TSC_Web}.
\begin{table*}
\begin{center}
\caption{Data sets grouped by problem type. The actual file names are in a string array in class TimeSeriesClassification.fileNames}
\label{tab1}
\small
\begin{tabular}{|cccccc|} \hline
\multicolumn{6}{|c|}{\bf Image Outline Classification}\\ \hline
DistPhalanxAge &DistPhalanxOutline &DistPhalanxTW &FaceAll &FaceFour &WordSynonyms \\
MidPhalanxAge &MidPhalanxOutline &MidPhalanxTW &OSULeaf &Phalanges &yoga \\
ProxPhalanxAge &ProxPhalanxOutline &ProxPhalanxTW &ShapesAll &SwedishLeaf&MedicalImages \\
Symbols & Adiac &ArrowHead &BeetleFly &BirdChicken &DiatomSize\\
FacesUCR&fiftywords &fish&HandOutlines &Herring & \\\hline
\multicolumn{6}{|c|}{\bf Motion Classification}\\ \hline
CricketX&CricketY&CricketZ&UWaveX&UWaveY&UWaveZ\\
UWaveAll&GunPoint&Haptics&InlineSkate & ToeSeg1&ToeSeg2\\
\hline
\multicolumn{6}{|c|}{\bf Sensor Reading Classification}\\ \hline
Beef&Car&Chlorine&CinCECG&Coffee&Computers\\
FordA&FordB&ItalyPower&LargeKitchen&Lighting2&Lighting7\\
StarLightCurves&Trace&TwoLeadECG&wafer & RefrigerationDevices& MoteStrain\\
Earthquakes&ECG200&ECGFiveDays&ElectricDevices&SonyRobot1&SonyRobot2\\
OliveOil&Plane&ScreenType&SmallKitchenAppliances& MALLAT&\\
&&ECGThorax1&ECGThorax2&& \\ \hline
\multicolumn{6}{|c|}{\bf Simulated Classification Problems}\\ \hline
ARSim&CBF&SyntheticControl&ShapeletSim&TwoPatterns&\\ \hline
\end{tabular}
\end{center}
\end{table*}
We have grouped the problems into categories to help aid interpretation. The group of Sensor readings forms the largest category with 31 data sets. If we had more data sets it would be sensible to split the sensor categories into subtypes, such as human sensors and spectrographs. However, at this point such sub-typing would lead to groups that are too small. Image outline classification is the second largest category, with 29 problem sets. Many of the image outline problems, such as {\em BeetleFly}, are not rotationally aligned, and the expectation would be that classifiers in the time domain will not necessarily perform well with these data. The group of 12 Motion problems contains data taken from motion capture devices attached to human subjects. The final category is the 5 simulated data sets.
\section{Results}
\label{results}
We conducted our classification experiments with WEKA~\cite{hall09weka} source code adapted for time series classification, using the 77 data sets described in Section~\ref{data}.
All datasets are split into a training and testing set, and all parameter optimisation is conducted on the training set only. We use a train/test split for several reasons. Firstly, it is common practice to do so with the UCR datasets. Secondly, some of the data sets are designed so that the train/test split removes bias; combining the train and test sets to perform cross validation would reintroduce that bias. Finally, it is not computationally feasible to cross validate everything. We ran over 3 million experiments on a 4148 core High Performance Cluster (with a theoretical peak performance of 65TFlops) over a period of a month. Adding another level of cross validation would have increased the time required, the number of experiments and the complexity of the code by an order of magnitude.
For hypothesis testing purposes, we are assuming that these data sets are a random sample from the set of all possible TSC problems. This is true in the sense that we have not collected these data with any agenda in mind. We are not attempting to find data sets more suitable for one particular algorithm. However, as Table~\ref{tab1} demonstrates, there is a bias in areas of application we are considering. There are, for example, no problems from econometrics or finance. Because of this, we also present results split by problem type where appropriate. All the results are available on an Excel spreadsheet from~\cite{TSC_Web}.
\subsection{Are 1-NN classifiers hard to beat?}
Our null hypothesis is that the average accuracy of 1-NN Euclidean and 1-NN DTW is the same as that of other classifiers in the time domain. If we can reject this hypothesis in favour of the one sided alternative that the average accuracy is significantly worse than that of at least one other classifier, then we have evidence that the assertion {\em ``1-NN classifiers are hard to beat"} is incorrect.
Figure~\ref{nnComp} presents the critical difference diagram for nine different classifiers, all trained with the default Weka parameters. It shows that the bottom group of C4.5, Naive Bayes and the Bayesian Network are significantly worse than the other classifiers. It also shows that there is no significant difference between the top three classifiers (1-NN with DTW, quadratic support vector machine and rotation forest). The middle group of linear SVM, 1-NN Euclidean and Random Forest are tightly grouped, and are all significantly worse than the top performing classifier, rotation forest.
\begin{figure}[htbp]
\centering
\includegraphics[width=9cm]{classifierCompare_scissored.pdf}
\caption{The average ranks for nine different classifiers on 77 data sets. Critical difference is 1.3691}
\label{nnComp}
\end{figure}
Clearly, there is no evidence to refute the hypothesis concerning 1-NN DTW. Figure~\ref{nnComp} would suggest that 1-NN with Euclidean distance is significantly worse than the rotation forest. Paired tests of difference in average for 1-NN Euclidean vs SVMQ, Rotation Forest and 1-NN DTW all indicate that we should reject the null hypothesis that the difference in means and medians is zero in each case. We used a paired T test for the mean and a Wilcoxon signed rank test for the median; full results are available from~\cite{TSC_Web}. Our conclusion from these experiments is that, whilst it is true that 1-NN DTW is hard to beat, 1-NN with Euclidean distance is beaten by two off-the-shelf classifiers constructed with no parameter optimisation.
Table~\ref{1NNsplit} shows the average ranks of the classifiers split by problem type. This indicates that the NN classifiers perform much better on the motion data sets than the image outline problems. The poor performance on outlines is caused in part by rotation in the image data sets.
\begin{table}[htbp]
\centering
\caption{Average ranks of 6 of the 9 classifiers split by problem type. Naive Bayes, Bayes Net and C4.5 have been removed for clarity}
\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
Type &NNEuclid&NNDTW&RandF&RotF&SVML&SVMQ\\ \hline
IMAGE &5.27 &4.52 &4.91&3.18&3.93&3.23\\
MOTION &3.08 &3.00 &3.71&2.21&6.04&4.13\\
SENSOR &5.16 &4.23 &4.65&3.08&5.08&3.55\\
SIM &5.20 &1.40 &6.20&3.90&5.70&4.50\\ \hline
\end{tabular}
\label{1NNsplit}
\end{table}
We also found no evidence that 1-NN Euclidean converges to 1-NN DTW based on training size, although this is probably because of a lack of problems with large train set sizes. Figure~\ref{fig2} shows the plot of the difference in accuracy of 1-NN Euclidean and 1-NN DTW against the number of training cases.
\begin{figure}[htbp]
\centering
\includegraphics[width=9cm]{EuclidvsDTW_scissored3.pdf}
\caption{(Euclidean accuracy - DTW accuracy) plotted against the number of training cases and the least squares regression line. The slope of the regression line is not significantly different to zero.}
\label{fig2}
\end{figure}
Our conclusion from these experiments is that 1-NN with Euclidean distance should no longer be used as a standard benchmark against which to compare new algorithms for time series classification. We would recommend that nearest neighbour DTW classifiers should be the new default for distance measure based classifiers, and that the results for SVMQ and rotation forest should also be reported. We address the exact nature of how to use DTW nearest neighbour classifiers in the remainder of this section.
\subsection{Is it worth setting $k$ through cross validation?}
One commonly used method of improving nearest neighbour classifiers is to set $k$ through cross validation on the training data. This requires the calculation of the distance matrix for the training set, and so adds some time overhead. This is particularly time consuming when cross validating against window size for DTW, since every new window size may create a different distance matrix. We are using 7 variants of distance measure with nearest neighbour classifiers: Euclidean (Euclid), DTW with full warping window (DTWR1), DTW with warping window set through cross validation (DTWRN), Derivative DTW with full and CV set warping windows (DDTWR1 and DDTWRN) and weighted DTW and DDTW (WDTW and WDDTW).
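Setting $k$ on the training data can be sketched with leave-one-out cross validation over a precomputed distance matrix. This is illustrative only; the candidate values of $k$ and the tie-breaking rule here are assumptions, not the exact procedure used in the experiments.

```python
from collections import Counter

def best_k(train, distance, k_values=(1, 3, 5)):
    """Choose k by leave-one-out cross validation on the training set."""
    n = len(train)
    # The training distance matrix is computed once and reused for every k.
    d = [[distance(train[i][0], train[j][0]) for j in range(n)]
         for i in range(n)]

    def loocv_accuracy(k):
        correct = 0
        for i in range(n):
            neighbours = sorted((j for j in range(n) if j != i),
                                key=lambda j: d[i][j])[:k]
            votes = Counter(train[j][1] for j in neighbours)
            if votes.most_common(1)[0][0] == train[i][1]:
                correct += 1
        return correct / n

    # max returns the first maximal candidate, so ties favour smaller k
    return max(k_values, key=loocv_accuracy)
```

The same scheme extends to cross validating the DTW window size, at the cost of rebuilding the distance matrix for each candidate window.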
\begin{table}[htbp]
\centering
\caption{Summary statistics on the difference in accuracy between 1-NN and $k$-NN classifiers. The final two columns give the p value for a paired two sample T test for difference in means and
a Wilcoxon signed rank test for difference in medians}
\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
Distance & Mean &1NN &Equal &kNN &T Test & Rank \\
Measure & Difference & Better & & Better &P value & P Value \\ \hline
Euclidean &0.16\%&11&48&18&0.3590&0.1748\\
DTWR1 &0.28\%&11&38&28&0.2454&0.0362\\
DTWRN &1.26\%&18&37&22&0.0004&0.3799\\
DDTWR1 &0.73\%&8&36&33&0.0360&0.0038\\
DDTWRN &0.17\%&12&42&23&0.3195&0.1044\\
WDTW &6.67\%&16&36&25&0.0000&0.1989\\
WDDTW &0.80\%&12&38&27&0.0058&0.0100\\ \hline
\end{tabular}
\label{warpingwindow}
\end{table}
Table~\ref{warpingwindow} shows the summary of the improvement in test accuracy of finding $k$ through cross validation rather than setting $k=1$. The largest average improvement is with WDTW, which improved 6.67\%. However, this
was skewed by two data sets with very large improvements, and the non-parametric Wilcoxon signed rank test could not detect a significant difference. The algorithm that showed significant improvement through setting $k$ was Derivative DTW. We conclude that if it is feasible to do so, then it is worthwhile setting $k$ through cross validation, but not doing so is unlikely to have a significant effect on accuracy.
\subsection{Is it worth finding the DTW window size through cross validation?}
The short answer to this question is yes. Figure~\ref{dtwcv} summarises the improvement in test accuracy over all 77 data sets. The mean improvement is 1.8\% and the median is 0.3\% (both significant at the 5\% level).
\begin{figure*}[htbp]
\centering
\includegraphics[width=11cm]{DTWimprovement_scissored2.pdf}
\caption{Histogram of the percentage improvement in accuracy when the DTW window size is set through cross validation. }
\label{dtwcv}
\end{figure*}
\subsection{Which is better for DTW, setting window size or setting weights?}
The results published for the weighting algorithm proposed in~\cite{jeong11weighted} are summarised in the critical difference diagram in Figure~\ref{cd1}.
\begin{figure}[htbp]
\centering
\includegraphics[width=9cm]{wdtw_cd_scissored.pdf}
\caption{The average ranks for 1-NN classifiers for Euclidean Distance (ED), Longest Common Subsequence (LCSS), Dynamic Time Warping (DTW), Derivative Dynamic Time Warping (DDTW), Dynamic Time Warping with window set through cross validation (DTWCV) and weighted versions (WDTW and WDDTW) for 20 data sets. Data taken directly from~\cite{jeong11weighted}. }
\label{cd1}
\end{figure}
Their claim that the weighting leads to clear improvement is not backed up by these results. Figure~\ref{cd1} indicates that although weighted versions have the highest rank, the result cannot be claimed to be significant.
The only significant difference is between the weighted versions of DTW and Euclidean distance. This demonstrates the need for testing on a large number of data sets. Furthermore, the comparisons they make are biased. Firstly, they compare full window DTW to the weighting scheme, when it would be more appropriate to compare against DTW with window size set through cross validation. Secondly, they use half the testing set data to validate the weighting parameter, but do not allow the other classifiers access to this data set.
The average rank of each classifier is presented, along with the groups within which there is no significant difference in rank. The average difference in accuracy between DTW and WDTW is 0.02145, whereas the difference between DTWCV and WDTW is only 0.0056. We claim that DTWCV vs WDTW is a fairer comparison because WDTW has had the parameter $g$ set through cross validation.
\begin{figure}[htbp]
\centering
\includegraphics[width=9cm]{wdtw_cd_full_scissored.pdf}
\caption{The average ranks for 1-NN classifiers on the 77 data sets described in Section~\ref{data}. The critical difference is 1.0264}
\label{cd2}
\end{figure}
We have implemented their weighting algorithm and evaluated it on the 77 data sets described in Section~\ref{data}. In our experiments all parameter optimisation is conducted on the training set through cross validation. Figure~\ref{cd2} shows that although weighted DTW still has the highest rank, the difference between WDTW and DTWCV is very small and there is no significant difference between the top four classifiers. There is a significant difference between the full window classifiers and those that restrict the window, but there is no evidence to suggest the weighting algorithm is better than the window algorithm. LCSS performs surprisingly well. Table~\ref{1NN} shows the average ranks split by problem type. LCSS performs well on the motion data and image, but ranks poorly on sensor. Conversely, weighted and windowed DTW rank highly on motion and sensor but relatively poorly on image outlines. LCSS is in many ways closer to a Shapelet approach~\cite{ye09shapelets} than DTW, and this result suggests that subsequence matching techniques such as LCSS and Shapelets may be better for image outline classification.
\begin{table}[htbp]
\centering
\caption{Average ranks of 6 1-NN elastic measure classifiers split by problem type.}
\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
TYPE &DTWCV &DTW&WDTW&DDTW&WDDTW&LCSS\\ \hline
IMAGE &3.72 &4.60&3.86&4.43&3.19&3.12\\
MOTION &2.42 &4.25&2.21&6.17&4.79&2.96\\
SENSOR &3.31 &4.13&2.97&4.84&3.48&4.45\\
SIM &3.80 &2.50&2.30&5.10&5.50&3.00\\ \hline
Overall &3.36 &4.22&3.14&4.91&3.71&3.62\\ \hline
\end{tabular}
\label{1NN}
\end{table}
\section{Conclusions}
\label{conc}
We have conducted extensive experiments on the largest set of time series classification problems ever used in the literature (to the best of our knowledge). We have performed these tests to validate commonly held assumptions, evaluate a recently published algorithm and assess methods for ensembling.
Firstly, we conclude that comparisons against 1-NN with Euclidean distance for new TSC algorithms are not particularly informative, since three standard classifiers applied to the raw data perform significantly better with no parameter tuning at all. We think that a new algorithm is only of interest in terms of accuracy if it can significantly outperform 1-NN DTW with a full warping window. Secondly, when using a NN classifier with DTW on a new problem, we would advise that it is not particularly important to set $k$ through cross validation, but that setting the warping window size is worthwhile.
Thirdly, we conclude that the weighting algorithm for DTW described in~\cite{jeong11weighted} is significantly better than DTW with a full warping window, but not significantly different to DTW with the window set through cross validation.
\bibliographystyle{IEEEtran}
\section*{Acknowledgments}
\label{sec:acknowledgments}
This report is an outcome of a workshop in August 2014 on {\em Future
Directions in CSE Education and Research}, sponsored by the Society
for Industrial and Applied Mathematics~\cite{siam:homepage} and
the European Exascale Software Initiative~\cite{EESI-generic}.
We gratefully acknowledge La Joyce Clark and Cheryl Zidel of Argonne
National Laboratory for workshop support.
We also thank Jenny Radeck of Technische Universit\"{a}t M\"{u}nchen for
assistance with graphics and Gail Pieper of Argonne National Laboratory for
editing this document.
The first author gratefully acknowledges support by the Institute of Mathematical Sciences
at the National University of Singapore during the work on this report.
The work of the third author was supported by the U.S. Department of
Energy, Office of Science, under contract number DE-AC02-06CH11357.
\subsection{Advances in CSE through Algorithms}
\label{sec:core-cse}
Algorithms occupy a central role in CSE.
They all transform inputs to outputs,
but they may differ in their generality, in their robustness and stability, and in their complexity---that is, in the way their costs in operations, memory, and data motion scale with the size of the input.
The types of algorithms employed in CSE are diverse.
They include geometric modeling, mesh generation and refinement,
discretization, partitioning, load balancing,
solution of ordinary differential equations (ODEs) and differential algebraic equations,
solution of partial differential equations (PDEs),
solution of linear and nonlinear systems, eigenvalue computations, sensitivity analysis, error estimation and adaptivity,
solution of integral equations,
surrogate and reduced modeling, random number generation,
upscaling and downscaling between models, multiphysics coupling,
uncertainty quantification, numerical optimization, parameter identification, inverse problems,
graph algorithms, discrete and combinatorial algorithms, graphical models,
data compression, data mining, data visualization, and data analytics.
\subsubsection{Impact of Algorithms in CSE}
Compelling CSE success stories stem from breakthroughs in applied mathematics and computer science
that have dramatically advanced simulation capabilities through better algorithms, as
encapsulated in robust and reliable software.
The growing importance of CSE in increasingly many application areas has paralleled the exponential growth in computing power according to ``Moore's law''---the observation that over the past five decades the density of transistors on a
chip has doubled approximately every 18 months as a result of advances in lithography allowing miniaturization.
Less appreciated but crucial for the success of
CSE is the progress in algorithms in this time span.
The advances in computing power have been matched or even exceeded by
equivalent advances
of the mathematics-based computational algorithms that lie at the heart of CSE.
Indeed, the development of efficient new algorithms
is crucial to the effective use of advanced computing
capabilities to address the pressing problems of humankind.
And as the pace of advancement in Moore's law
slows,\footnote{As documented by the TOP500 list, \url{https://www.top500.org}.} advances in algorithms and software will become even more important.
Single-processor clock speeds have stagnated, and further increase in computational power must come from increases in parallelism. CSE now faces the challenge of developing efficient methods and implementations in the context of ubiquitous parallelism (as discussed in Section \ref{sec:hpc-cse}).
As problems scale in size and memory to address increasing needs for fidelity and resolution in grand-challenge simulations,
the
computational complexity must scale as close to linearly in the problem size as possible.
Without this near-linear scaling, increasing memory and processing power in proportion---the way parallel computers are architected---will yield computations that {\em slow down\/} in wall-clock time as they are scaled up.
In practice, such {\em optimal algorithms\/} are allowed to have a complexity
of $O(N(\log N)^p)$, where $N$ is the problem size and $p$ is some small power, like 1 or 2.
Figure~\ref{Fig:Algorithmic-Moore} illustrates the importance of algorithmic innovation since the beginnings of CSE.
We contrast here the importance of algorithmic research as compared with technological progress in computers by using the historical example of linear solvers for elliptic PDEs.
Consider the problem of the
Poisson equation on a cubical domain, discretized into $n$ cells on a side, with total problem size $N=n^3$. The total memory occupied is $O(n^3)$, and the time to read in the problem or to write out the solution is also $O(n^3)$.
Based on the natural ordering,
banded Gaussian elimination applied to this problem requires
$O(n^7)$ arithmetic operations.
Perhaps worse, the memory to store intermediate results bloats to $O(n^5)$---highly nonoptimal, so that if we initially fill up the memory with the largest problem that fits, it overflows.
Over a quarter of a century, from a paper by von Neumann and Goldstine in 1947 to a paper by Brandt in 1974 describing optimal forms of multigrid, the complexity of both operations and storage was reduced, in a series of algorithmic breakthroughs, to an optimal $O(n^3)$ each. These advances are depicted graphically in a log-linear plot of effective speedup over time in the left-hand side of Figure~\ref{Fig:Algorithmic-Moore}. During the same period, Moore's law accounted for approximately 24 doublings, or a factor of $2^{24} \approx 16$ million in arithmetic processing power per unit square centimeter of silicon, with approximately constant electrical power consumption. This same factor of
16 million was achieved by mathematical research on algorithms
for $n=2^6$ in the example above.
For grids finer than $64 \times 64 \times 64$, as we routinely use today,
the progress of optimal algorithms overtakes the progress stemming from Moore's law by an arbitrarily large factor.
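The arithmetic behind this comparison can be checked in a few lines (an illustrative sketch, not part of the original analysis): banded elimination at $O(n^7)$ versus an optimal $O(n^3)$ method gives a speedup of $n^4$, which for $n=2^6$ equals the $2^{24}$ factor attributed to Moore's law.

```python
# Asymptotic operation counts for solving the 3D Poisson problem with
# n cells per side: banded Gaussian elimination costs O(n^7) operations,
# while an optimal multigrid solver costs O(n^3).
def banded_elimination_ops(n):
    return n ** 7

def multigrid_ops(n):
    return n ** 3

n = 2 ** 6  # a 64 x 64 x 64 grid, as in the text
speedup = banded_elimination_ops(n) // multigrid_ops(n)

# The algorithmic speedup n^4 matches the ~16 million factor from
# 24 doublings of Moore's law.
print(speedup)   # 16777216
print(2 ** 24)   # 16777216
```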
Remarkable progress in multigrid has been made since
this graph
was first drawn. Progress can be
enumerated along two directions: extension of optimality to problems
with challenging features not present in the original Poisson problem
and extension of optimal algorithms to the challenging environments of
distributed memory, shared memory, and hybrid parallelism, pushing
toward extreme scale.
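To make the flavor of such an optimal algorithm concrete, here is a minimal geometric multigrid V-cycle for the 1D model problem $-u''=f$ with homogeneous Dirichlet boundaries (a toy sketch for exposition only; the function names and parameters are our own, and production multigrid codes are far more general). Each V-cycle costs $O(n)$ work, and a handful of cycles reduce the algebraic error below the discretization error.

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, w=2.0 / 3.0):
    # weighted-Jacobi smoother for -u'' = f (Dirichlet boundaries)
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    n = len(u) - 1
    if n <= 2:  # coarsest grid: solve the tiny remaining system directly
        A = (2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h**2
        u[1:-1] = np.linalg.solve(A, f[1:-1])
        return u
    u = jacobi(u, f, h)                        # pre-smooth
    r = residual(u, f, h)
    rc = np.zeros(n // 2 + 1)                  # full-weighting restriction
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h) # coarse-grid correction
    e = np.zeros_like(u)                       # linear-interpolation prolongation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return jacobi(u + e, f, h)                 # correct, then post-smooth

n = 64
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)               # exact solution: sin(pi x)
u = np.zeros(n + 1)
for _ in range(10):                            # each V-cycle is O(n) work
    u = v_cycle(u, f, 1.0 / n)
print(np.max(np.abs(u - np.sin(np.pi * x))))   # ~ discretization error, O(h^2)
```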
\begin{figure}[h!]
\centering
\includegraphics[width=0.48\textwidth]{./figures/Algorithmic_Moores_Law.png}
\includegraphics[width=0.48\textwidth]{./figures/complexity-diagram.png}
\caption{\label{Fig:Algorithmic-Moore}
{\em Left:} Moore's law for algorithms to solve the 3D Poisson equation (black) plotted with Moore's law for transistor density (red), each showing 24 doublings (factor of approximately 16 million) in performance over an equivalent period. For algorithms, the factor can be made arbitrarily large by increasing the problem size $N=n^3$. Here $n=64$, which is currently a modest resolution even on a single processor. {\em Right:} Increase in CSE model complexity and approximate computational cost over time, where the y-axis indicates a qualitative notion of complexity in the combination of models, algorithms, and data structures. Simulations have advanced from modestly sized forward simulations in simple geometries to incorporate complex domains, adaptivity, and feedback loops. The stage is set for new frontiers of work on advanced coupling, numerical optimization, stochastic models, and many other areas that will lead to truly predictive scientific simulations.}
\end{figure}
Algorithmic advances of similar dramatic magnitudes across many areas
continue to be the core of CSE research.
Each decade since Moore stated his law in 1965,
computational mathematicians have produced new algorithms
that have revolutionized computing.
The impact of these algorithms in science and engineering,
together with the technological advances following Moore's law,
has led to the creation of CSE as a discipline and the ability to
tackle problems with increasing realism and complexity,
as shown in the right-hand side of Figure~\ref{Fig:Algorithmic-Moore}.
\medskip
\begin{tcolorbox}[title=CSE Success Story: Lightning-Fast Solvers for the Computer Animation Industry]
\begin{wrapfigure}{R}{0.7\textwidth}
\vspace{-0.15in}
\includegraphics[width=0.7\textwidth]{./figures/Disney1.png}
\end{wrapfigure}
CSE researchers have teamed up with computer animators at Walt Disney Animation Studios Research to dramatically improve the efficiency in linear system solvers that lie at the heart of many computer animation codes. Building on advanced multilevel methods originally developed for engineering simulations of elastic structures and electromagnetic systems,
it was shown that movie animations with cloth simulation on a fully dressed character discretized on an unstructured computational grid with 371,000 vertices could be accelerated by a factor of 6 to 8 over existing solution techniques.\protect\footnotemark ~These improvements in computational speed enable greater productivity and faster turnaround times for feature film production with realistic resolutions. Another application is real-time virtual try-on of garments in e-commerce.
\end{tcolorbox}
\footnotetext{See video at \url{https://youtu.be/_mkFBaqZULU} and paper by R. Tamstorf, T. Jones, and S. McCormick, ACM SIGGRAPH Asia 2015, at \url{https://www.disneyresearch.com/publication/smoothed-aggregation-multigrid/}.}
\subsubsection{Challenges and Opportunities in CSE Methods and Algorithms}
Without attempting to be exhaustive, the following sections highlight several areas of current interest where novel CSE algorithms have produced important advances. Also included is a discussion of challenges and opportunities for the next decade in these fields.
\input{solvers}
\input{uq}
\input{optimization}
\input{high-order-AMR}
\input{model-order-reduction}
\input{randomized-algorithms}
\input{multibody}
\input{multiphysics}
\section{Conclusions and Recommendations}
\label{sec:conclusions}
\pagebudget{1}
\team{All}
\subsection{Summary}
Over the past two decades, computational science and engineering has become tremendously successful and influential at driving progress and innovation in the sciences and technology.
CSE is intrinsically interdisciplinary, and as such it often suffers from the entrapments created by disciplinary boundaries.
While CSE and its paradigm of quantitative computational analysis and discovery are permeating increasingly many areas of science, engineering, and beyond, CSE has been most successful when realized as a clearly articulated focus within its own well-defined academic structures and its own targeted funding programs and aided by its own focused educational programs.
The past decade has seen a comprehensive broadening of the application fields and methodologies of CSE. For example, mathematics-based computing is an important factor in the quantitative revolution that is sweeping through the life sciences and medicine, and powerful new methods for uncertainty quantification are being developed that build on advanced statistical techniques.
The impact of CSE on our society has been enormous, but focused and sustained CSE research funding programs are rare: CSE research is poorly recognized by funding agencies and is not currently funded in proportion to its immense potential in leading to breakthrough advances in increasingly many sectors of science and technology.
Quantitative and computational thinking is becoming ever more important in almost all areas of scientific endeavor. Hence, CSE skills and expertise must be included in curricula across the sciences, including the biomedical and social sciences.
A well-balanced system of educational offerings is the basis for shaping the CSE ecosystem. %
Creating a unique identity for CSE education is essential.
Dedicated CSE programs have been created up to now only in a relatively small number of universities, mostly in the United States and Europe. More such undergraduate and graduate-level (master's and Ph.D.) programs in CSE are necessary in order to train and prepare the future generation of
CSE scientists to make new scientific and engineering discoveries. This core CSE education will require designing dedicated curricula; where such curricula already exist, continuous adaptation is needed to address the needs of the rapidly changing landscape of CSE.
\bigskip
\subsection{Central Findings}
\begin{itemize}
\item \textbf{F1:}
\textbf{CSE as a discipline.}
CSE has matured to be a discipline in its own right.
It has its own unique research agenda, namely, to invent, analyze, and implement
broadly applicable computational methods and algorithms
that drive progress in science, engineering and technology.
A major current focus of CSE is to create truly predictive capability in science.
Such CSE-based scientific predictions will
increasingly become the foundation of
technical, economic, societal, and public policy advances and decisions
in the coming decades.
\item \textbf{F2:}
\textbf{Algorithms and software as research artifacts.}
Computational algorithms lie at the core of CSE advances.
Scientific software, which %
codifies and organizes algorithmic models of reality,
is the primary means of encapsulating CSE research
to enable advances in scientific and engineering understanding.
CSE algorithms and software can be created, understood, and properly employed
only by using a unique synergy of knowledge that combines an
understanding of mathematics, computer science, and target problem areas.
\item \textbf{F3:}
\textbf{CSE and the data revolution.}
CSE methods and techniques are essential in order to capitalize
on the rapidly growing ubiquitous availability of scientific and technological data.
In order to achieve deeper scientific benefit, data analytics must proceed beyond the exposition of correlations.
CSE develops new statistical computing techniques that are efficient at scale,
and it incorporates physical models informed by first principles to
extract from the data insights that go far beyond what can be recovered
by statistical modeling alone.
\end{itemize}
\subsection{General Recommendations}
\begin{itemize}
\item \textbf{R1:}
Universities and research institutions should \textbf{remove disciplinary barriers} to allow CSE to realize its broad potential for driving scientific and technological progress in the 21st century. New multidisciplinary research and education structures where CSE is a clearly articulated focus should be increasingly encouraged.
\item \textbf{R2:}
Funding agencies should develop \textbf{focused and sustained funding programs} that address the specific needs of research in CSE. These programs should acknowledge the multidisciplinary nature of CSE and account for specific research agendas of CSE, including CSE algorithms and software ecosystems as critical instruments of a novel kind of predictive science and access to leading high-performance computing facilities.
\item \textbf{R3:}
CSE researchers should continue to \textbf{engage with new application areas and new methodologies} in order to realize the full potential of predictive simulation and data analytics as catalysts for innovation. Now-ubiquitous large data sets are key to developing new techniques and insights, but even greater opportunities are promised by combining them with large-scale simulation models.
\end{itemize}
\subsection{Specific Recommendations for CSE Education}
\begin{itemize}
\item \textbf{E1:}
Universities should \textbf{strengthen and broaden computational thinking in all relevant academic areas and on all levels}. This effort is vital for driving scientific, technological, and societal progress and needs to be addressed systematically at the university level as a crucial factor in workforce development for the 21st century.
\item \textbf{E2:}
\textbf{Dedicated CSE programs at all university degree levels} should be created to educate future core CSE researchers for jobs in the private and government sectors, in research laboratories, and in academia. New CSE-centric teaching materials are required to support such programs.
\item \textbf{E3:}
The \textbf{common core of CSE and data science}, as well as their synergy, should be exploited in educational programs that will prepare the computational and data scientists of the future. Aided by scientific visualization and interactive computational experiments, CSE is a powerful motivator for study in the STEM disciplines at pre-university levels. Outreach materials are required.
\end{itemize}
\newpage
\subsection{CSE and the Data Revolution: The Synergy between Computational Science and Data Science}
\label{sec:data-cse}
\pagebudget{3}
The world is experiencing an explosion of digital data.
Indeed, since 2003, the amount of new data generated each year has exceeded the amount contained in all previously created documents.
The coming of extreme-scale computing and data acquisition from high-bandwidth experiments across the sciences is creating a phase change.
The rapid development of networks of sensors and
the increasing reach of the Internet and other digital networks in our connected society
create entirely new data-centric analysis applications in broad areas of science,
commerce, and technology \cite{baker2010data,oden2011grand,jahanian2013testimony}.
These massive
amounts of data offer tremendous potential for generating new
quantitative insight, not only in the natural sciences and
engineering, where they enable new approaches such as data-driven
scientific discovery and data-enabled uncertainty quantification, but also
in almost all other areas of human activity.
For example, biology and
medicine have increasingly become quantitative sciences over the past
two or three decades, aided by the generation of large data sets. Data-driven
approaches are also starting to change the social sciences,
which are becoming more quantitative \cite{king2014restructuring}.
\subsubsection{CSE and the Data Revolution: The Paradigms of Scientific Discovery}
CSE has its roots in the third paradigm of scientific discovery:
CSE drives scientific and technological progress
through computational modeling (the third paradigm) in conjunction with theory and experiment (the first two paradigms), making use of first-principles models that reflect the laws of nature. These models may, for example, include the PDEs of fluid mechanics and quantum physics
or the laws governing particles in molecular dynamics.
The advent of big data is sometimes seen as
enabling a fourth paradigm of scientific discovery \cite{hey2009fourth},
in which the sheer amount of data combined with statistical models
leads to new %
analysis methods in areas where first-principles models do not exist (yet) or are inadequate.
Massive amounts of data are indeed creating a sea change in scientific discovery.
In third-paradigm CSE applications (that are based on first-principles models) big data leads to
tremendous advances: it enables revolutionary methods of data-driven discovery,
uncertainty quantification, data assimilation, optimal design and control, and, ultimately,
truly predictive CSE. At the same time, in fourth-paradigm approaches big data
makes the scientific method of quantitative, evidence-based analysis applicable to
entirely new areas where until recently quantitative data and models were mostly
nonexistent.
The fourth paradigm also enables new approaches in the physical sciences
and engineering, for example, for pattern finding in large amounts of observational data.
Clearly, CSE methods and techniques have an essential role to play
in all these quantitative endeavors enabled by big data.
\subsubsection{Role of Big Data in CSE Applications}
In core application areas of CSE~\cite{oden2011grand}, our ability to produce data is rapidly outstripping our ability to use it.
With exascale data sets, we will be creating far more data than we can explore in a lifetime with current tools.
Yet, exploring these data sets is the essence of new paradigms of scientific discovery.
Thus, one of the greatest challenges is
to create new theories, techniques, and software that can be used to
understand and make use of this rapidly growing data
for new discoveries and advances in science and engineering.
For example, the CSE focus area of
uncertainty quantification aims at characterizing and managing the uncertainties
inherent in the use of CSE models and data. To this end, new methods are
being developed that build on statistical techniques such as Monte Carlo methods, Bayesian inference,
and Markov decision
processes. While these underlying techniques have broad applications
in many areas of data science, CSE efforts tend to have a special
focus on developing efficient structure-exploiting computational techniques at scale, with
potential for broad applicability in other areas of data analytics
and data science.
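As a toy illustration of how such statistical techniques propagate uncertainty through a forward model, consider the following Monte Carlo sketch (the model and parameter values here are hypothetical, chosen only for exposition):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(k):
    # Hypothetical forward model: temperature drop q*L/k across a slab of
    # thickness L carrying heat flux q; a stand-in for an expensive solver.
    q, L = 100.0, 0.1
    return q * L / k

# Uncertain input: thermal conductivity with ~10% lognormal spread about 1.0
k = rng.lognormal(mean=0.0, sigma=0.1, size=100_000)
y = forward_model(k)

# Push the input uncertainty through the model and summarize the output
mean = y.mean()
lo, hi = np.percentile(y, [2.5, 97.5])
print(f"mean {mean:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```

Structure-exploiting CSE methods aim to obtain such statistics with far fewer model evaluations than this brute-force sampling requires.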
Data assimilation methods have over several decades evolved into crucial techniques
that ingest large amounts of measured data into large-scale computational models for
diverse geophysical applications such as weather prediction and hydrological forecasting.
Large amounts of data are also a crucial component in other
CSE focus areas, such as validation and verification, reduced-order
modeling, and analysis of graphs and networks. Also, enormous potential
lies in the emerging model-based interpretation of patient-specific data from
medical imaging for diagnosis and therapy planning.
CSE techniques to
address the challenges of working with massive data sets include
large-scale optimization, linear and nonlinear solvers, inverse problems, stochastic methods, scalable
techniques for scientific visualization, and high-performance parallel
implementation.
Exploiting large amounts of data is having a profound influence in many areas of CSE
applications. The following paragraphs describe some striking examples.
Many {\bf geoscience systems} are characterized by complex behavior
coupling multiple physical, chemical, and/or biological processes over
a wide range of length and time scales. Examples include earthquake
rupture dynamics, climate change, multiphase reactive subsurface
flows, long-term crustal deformation, severe weather, and mantle
convection. The uncertainties prevalent in the mathematical and
computational models characterizing these systems have made
high-reliability predictive modeling a challenge. However, the
geosciences are at the cusp of a transformation from a largely
descriptive to a largely predictive science. This is driven by
continuing trends: the rapid expansion of our ability to instrument
and observe the Earth system at high resolution, sustained
improvements in computational models and algorithms for complex
geoscience systems, and the tremendous growth in computing power.
The problem of how to estimate unknown parameters (e.g., initial
conditions, boundary conditions, coefficients, sources) in complex
geoscience models from large volumes of observational data is
fundamentally an inverse problem. Great strides have been made in the
past two decades in our ability to solve very large-scale geoscience
inverse problems, and many efforts are under way to parlay these
successes for deterministic inverse problems into algorithms for
solution of Bayesian inverse problems, in which one combines possibly
uncertain data and models to infer model parameters and their
associated uncertainty. When the parameter space is large and the
models are expensive to solve (as is the usual case in geoscience
inverse problems), computing the Bayesian solution becomes prohibitively expensive.
However, advances in large-scale UQ algorithms in recent years
\cite{AdamsHigdonEtAl12} are beginning to make feasible the use
of Bayesian inversion to infer parameters and their uncertainty in
large-scale complex geoscience systems from large-scale satellite
observational data. Two examples are global ocean modeling
and continental ice sheet modeling.
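The ingredients of Bayesian inversion can be sketched on a deliberately tiny scalar problem (the forward model below is hypothetical; real geoscience inversions replace it with expensive PDE solves over high-dimensional parameter fields):

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(theta):
    # Hypothetical forward model G(theta); in geoscience applications this
    # would be an expensive PDE solve.
    return np.sin(theta) + 0.5 * theta

# Synthetic noisy observation generated from a "true" parameter value
theta_true, sigma = 0.8, 0.05
y_obs = forward(theta_true) + rng.normal(0.0, sigma)

# Grid-based Bayes: posterior density ~ likelihood * prior
grid = np.linspace(0.0, 2.0, 2001)
prior = np.exp(-0.5 * ((grid - 1.0) / 0.5) ** 2)             # N(1, 0.5^2) prior
likelihood = np.exp(-0.5 * ((y_obs - forward(grid)) / sigma) ** 2)
posterior = prior * likelihood
posterior /= posterior.sum() * (grid[1] - grid[0])           # normalize density

theta_map = grid[np.argmax(posterior)]  # point estimate; the spread of the
print(theta_map)                        # posterior quantifies uncertainty
```

Grid evaluation is feasible only for one or two parameters; the large-scale UQ algorithms cited above exist precisely because this approach breaks down in high dimensions.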
Continued advances in UQ algorithms,
Earth observational systems, computational modeling, and HPC systems
over the coming decades will lead to more sophisticated geoscience
models capable of much greater fidelity. These
in turn will lead to a better understanding of Earth dynamics as well
as improved tools for simulation-based decision making for critical
Earth systems.
Big data methods are revolutionizing
the related fields of {\bf chemistry} and {\bf materials science}, in a
transformation that is illustrative of those sweeping all of science,
leading to successful transition of basic science into practical tools
for applied research and early engineering design.
Chemistry and materials science are both
mature computational disciplines that through advances in theory,
algorithms, and computer technology are now capable of increasingly
accurate predictions of the physical, chemical, and electronic
properties of materials and systems. The equations of quantum mechanics
(including Schr\"odinger's, Dirac's, and density functional representations)
describe the electronic structure of solids and molecules that controls many
properties of interest, and statistical mechanics %
must be employed to
incorporate the effects of finite temperature and entropy. These are
forward methods---given a chemical composition and approximate
structure, one can determine a nearby stable structure and
compute its properties. To design new materials or chemical
systems, however, one must solve the inverse problem---what is the system
that has specific or optimal properties? Moreover, the system must be
readily synthesized, inexpensive, and thermally and chemically stable
under expected operating conditions. Breakthrough progress has recently been
made in developing effective constrained search and optimization
algorithms for precisely this purpose
\cite{ceder2013stuff},
with this process
recognized in large funding initiatives such as the multiagency U.S. Materials
Genome Initiative \cite{materialsGenome}. This success has
radically changed the nature of computation in the field. Less than
ten years ago most computations were generated and analyzed by a human,
whereas now 99.9\% of computations are
machine generated and processed as part of automated searches that are
generating vast databases with results of millions of
calculations to correlate structure and function
\cite{materialsProject,molecularSpace}.
In addition to opening important new challenges in robust and
reliable computation, the tools and workflows of big data are now
crucial to further progress.
\medskip
\begin{tcolorbox}[title=CSE Success Story: Visual Analytics Brings Insight to Terabytes of Simulation Data]
\begin{wrapfigure}{R}{0.7\textwidth}
\vspace{-0.15in}
\includegraphics[width=0.7\textwidth]{./figures/Combustion-Topo.jpeg}
\end{wrapfigure}
New techniques are being developed that allow scientists to sift through terabytes of simulation data in order to discover important new insights deriving from science and engineering simulations on the world's largest supercomputers. The figure shows a visualization of a topological analysis and volume rendering of one timestep in a large-scale, multi-terabyte combustion simulation. The topological analysis identifies important physical features (ignition and extinction events) within the simulation, while the volume rendering allows viewing the features within the spatial context of the combustion simulation.\protect\footnotemark
\end{tcolorbox}
\footnotetext{Simulation by J. Chen, Sandia National Laboratories; visualization by the Scientific Computing and Imaging Institute, University of Utah.}
In {\bf scientific visualization}, new techniques are being developed to give visual insight
into the deluge of data that is transforming scientific research. %
Data analysis and visualization are key technologies for enabling future advances in simulation and data-intensive-based science, as well as in several domains beyond the sciences. Specific big data visual analysis challenges and opportunities include in situ interactive analysis, user-driven data reduction, scalable and multilevel hierarchical algorithms, representation of evidence and uncertainty, heterogeneous data fusion, data summarization and triage for interactive queries, and analysis of temporally evolved features \cite{johnson2006nih,johnson2007visualization,wong2012top}.
Computation and big data also meet in {\bf characterization of physical
material samples} using techniques such as X-ray diffraction and absorption,
neutron scattering, ptychography, and transmission electron and atomic-force
microscopy. Only for essentially perfect crystals or simple
systems can one directly invert the experimental data and determine
the structure from measurements.
Most real systems, typically with nanoscale features and no long-range
order, are highly underdetermined \cite{billinge2010nano}. Reliable
structure determination requires fusion of multiple experimental data
sources (now reaching multiple terabytes in size) and computational
approaches. Computation provides a forward simulation
(e.g., given a structure, determine what spectrum or diffraction pattern results),
and techniques from uncertainty quantification are among those proving
successful in making progress.
\subsubsection{Synergy between Computational Science and Data Science}
Big data is transforming the fabric of society, in areas that go
beyond research in the physical sciences and engineering
\cite{jahanian2013testimony}. Data analytics aims at extracting information
from large amounts of data in areas as diverse as business intelligence, cybersecurity, social
network recommendation, and government policy. Analysis of the data is
often based on statistical and machine learning methods from data
science. Similar to CSE, data science is built on fundamentals from
mathematics and statistics, computer science, and domain knowledge;
and hence it possesses an important synergy with CSE.
The paradigm of scalable algorithms and implementations
that is central to CSE and HPC is also relevant to
emerging trends in data analytics and data science.
Data analytics is quickly moving in the direction of mathematically more sophisticated analysis algorithms and parallel implementations. CSE will play an
important role in developing the next generation of parallel
high-performance data analytics approaches that employ
descriptions of the data based on physical or phenomenological models
informed by first principles,
with the promise of extracting valuable insight from the data that crucially goes beyond
what can be recovered by statistical modeling alone.
HPC supercomputers and cloud data centers serve different needs and are
optimized for applications that have fundamentally different characteristics.
Nevertheless, they face challenges that have many
commonalities in terms of extreme scalability,
fault tolerance, cost of data movement, and power management.
The advent of big data has spearheaded new large-scale distributed
computing technologies and parallel programming models
such as MapReduce, Hadoop, Spark, and Pregel,
which offer innovative approaches to scalable high-throughput
computing, with a focus on data locality and fault tolerance.
These frameworks are finding applications in CSE problems, for
example, in network science; and large-scale CSE methods,
such as advanced distributed optimization algorithms, are
increasingly being developed and implemented in these
environments.
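The map-reduce programming model itself can be illustrated in a few lines of plain Python (a word-count toy; real frameworks such as Hadoop or Spark add distribution, scheduling, and fault tolerance on top of this pattern):

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # Mapper: emit a (key, value) pair for every word occurrence
    return [(word.lower(), 1) for word in doc.split()]

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: combine the values for each key (here, sum the counts)
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big models", "data locality matters"]
counts = reduce_phase(shuffle(chain.from_iterable(map(map_phase, docs))))
print(counts)  # {'big': 2, 'data': 2, 'models': 1, 'locality': 1, 'matters': 1}
```

Because mappers and reducers are independent per document and per key, the framework can run them on many machines with the shuffle as the only global communication step.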
Extensive potential exists for cross-fertilization of
ideas and approaches between extreme-scale HPC
and large-scale computing for data analysis.
Economy-of-scale pressures will contribute
to a convergence of technologies for computing
at large scale.
Overall, the analysis of big data
requires efficient and scalable mathematics-based algorithms executed
on high-end computing infrastructure, which are core CSE competencies
that translate directly to big data applications. CSE
education and research must foster the important synergies with
data analytics and data science that are apparent in a variety of
emerging application areas.
\section{CSE Education and Workforce Development}
\label{sec:education}
With the many current and expanding opportunities for the CSE field, there is a growing demand for CSE graduates and a need to expand CSE educational offerings. This need includes CSE programs at both the undergraduate and graduate levels, as well as continuing education and professional development programs. In addition, the increased presence of digital educational technologies provides an exciting opportunity to rethink CSE pedagogy and modes of educational delivery.
\subsection{Growing Demand for CSE Graduates in Industry, National Labs, and Academic Research}
Industry, national laboratories, government, and broad areas of academic research are making more use than ever before of simulations, high-end computing, and simulation-based decision-making. This trend is apparent broadly across domains---for example, energy, manufacturing, finance, and transportation are all areas in which CSE is playing an increasingly significant role, with many more examples across science, engineering, business, and government. Research and innovation, both in academia and in the private sector, are increasingly driven by large-scale computational approaches.
A National Council on Competitiveness report points out that high-end computing plays a ``vital role in driving private-sector competitiveness ... all businesses that adopt HPC consider it indispensable for their ability to compete and survive'' \cite{ConC2008}.
With this significant and increased use comes a demand for a workforce versed in technologies necessary for effective and efficient mathematics-based computational modeling and simulation. There is high demand for graduates with the interdisciplinary expertise needed to develop and/or utilize computational techniques and methods in order to advance the understanding of physical phenomena in a particular scientific, engineering, or business field and to support better decision-making \cite{GlotzerKimEtAl2009}.
As stated in a recent report on workforce development by the U.S. Department of Energy Advanced
Scientific Computing Advisory Committee \cite{ASCAC2014}, ``All large
DOE national laboratories face workforce recruitment and retention
challenges in the fields within Computing Sciences that are relevant
to their mission. ...
There is a growing national demand for graduates in Advanced Scientific Computing Research-related Computing Sciences
that far exceeds the supply from academic institutions. Future
projections indicate an increasing workforce gap.''
This finding was based on a number of reports,
including one from the High End Computing Interagency Working Group \cite{HEC-IWG2013}
stating: ``High end computing (HEC) plays an important role
in the development and advanced capabilities of many of the products,
services, and technologies that are part of our everyday life. The
impact of HEC on the agencies of the federal government, on the quality
of academic research, and on industrial competitiveness is substantial
and well documented. However, adoption of HEC is not uniform, and to
fully realize its potential benefits we must address one of the most
often cited barriers: lack of HEC skills in the workforce.''
Additional workforce and education issues are discussed in \cite{HerouxAllenEtAl2016}.
The U.S. Department of Energy has for 25 years been investing in the
Computational Science Graduate Fellowship~\cite{CSGF:website} program
to prepare approximately 20 Ph.D.\ candidates per year for
interdisciplinary roles in its laboratories and beyond. Fellows take
at least two graduate courses in each of computer science, applied
mathematics, and an application from science or engineering requiring
large-scale computation, in addition to completing their degree
requirements for a particular department. They also spend at least
one summer at a DOE laboratory in a CSE internship and attend an
annual meeting to network with their peers across other institutions.
This program has been effective in creating a sense of community for
CSE students that is often lacking on any individual traditionally
organized academic campus.
In order to take advantage of the transformation that high-performance
and data-centric computing offers to industry, the critical factor is a
workforce versed in CSE and capable of developing the algorithms,
exploiting the compute platforms, and designing the analytics that
turn data with its associated information into knowledge to act.
This is the case for large companies that have traditionally had in-house simulation capabilities and may have dedicated CSE-focused groups to support a wide range of products; it is also increasingly the case for small- and medium-sized companies with more specialized products and a critical need for CSE to support their advances in research and development.
In either case, exploiting emerging computational tools requires the critical thinking and
the interdisciplinary background that is prevalent in CSE training~\cite{oden2011grand}.
The CSE practitioner has both the expertise to apply computing tools
and the analytical skills to tease out the problems that often are
encountered when commercial enterprises seek to design new products,
develop new services, and create novel approaches from the wealth of data
available. The CSE practitioner knows how to use
computational tools and analytics in uncharted areas, often applying
previous domain-specific understanding to these new areas. The CSE
practitioner, while often a member of a team of others from varying
disciplines, is the catalyst driving the change that industry seeks in order
not only to remain competitive but also to be first to market, providing the
necessary advantage to thrive in a rapidly evolving technological
ecosystem.
\subsection{Future Landscape of CSE Educational Programs}
CSE educational programs are needed in order to create
professionals who meet this growing demand and who support the
growing CSE research field. These include CSE programs at both the
undergraduate and graduate levels, as well as continuing education and
professional development programs. These also include programs that are
``CSE focused'' and those that follow more of a ``CSE infusion''
model. The former includes programs that have CSE as their primary
focus (e.g., B.S., M.S., or Ph.D. in computational science and engineering),
while the latter includes
programs that embed CSE training within another degree structure
(e.g., a minor, emphasis, or concentration in CSE complementing a major
in mathematics, science, or engineering or a degree in a specific
computational discipline such as computational finance or
computational geosciences). In fact, interdisciplinary quantitative and
computer modeling skills are quickly becoming indispensable for any
university graduate, not only in the physical and life sciences, but also in the social sciences.
Universities must equip their graduates with these skills using mechanisms
such as the CSE infusion model.
Information about a variety of CSE educational programs can be found online~\cite{SIAG-CSE:wiki,CSE-GraduateProgramSurvey2012}.
%
At the undergraduate level, the breadth and depth of topics covered in CSE degrees
will depend on the specific degree focus. However, the following high-level topics are
important content for an undergraduate program:
\begin{enumerate}
\item Foundations in mathematics and statistics, including calculus, linear algebra, mathematical analysis, ordinary and partial differential equations, applied probability, and discrete mathematics
\item Simulation and modeling, including conceptual, data-based, and physics-based models, use of simulation tools, and assessment of computational models
\item Computational methods and numerical analysis, including errors, solutions of systems of linear and nonlinear equations, Fourier analysis, interpolation, regression, curve fitting, optimization, numerical differentiation and integration, Monte Carlo methods, numerical methods for ODEs, and numerical methods for PDEs
\item Computing skills, including compiled high-level languages, algorithms (numerical and nonnumerical), elementary data structures, analysis of algorithms and their implementation, parallel programming, scientific visualization, awareness of computational complexity and cost, and use of good software engineering practices including version control
\end{enumerate}
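As a small illustration of item 3 above (an illustrative sketch, not part of the report itself), the following Python fragment estimates $\pi$ by Monte Carlo sampling, a typical first exercise that combines probability, numerical methods, and basic programming skills; the function name \texttt{estimate\_pi} is ours:

```python
import random


def estimate_pi(n_samples, seed=0):
    """Estimate pi by sampling points uniformly in the unit square
    and counting the fraction that fall inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # The quarter-circle area is pi/4, so scale the hit fraction by 4.
    return 4.0 * inside / n_samples


if __name__ == "__main__":
    print(estimate_pi(100_000))
```

The statistical error decays only as $1/\sqrt{n}$, which itself makes a useful classroom point about computational cost versus accuracy.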
Feedback from the community has noted an increasing demand for CSE graduates trained at the bachelor's level, with particular note of the increased opportunities at small- and medium-sized companies.
At the graduate level, again the breadth and depth of topics covered will depend on the specific degree focus. In the next section, we make specific recommendations in terms of a set of learning outcomes desired for a CSE graduate program.
We also note the growing importance of and demand for terminal master's degrees, which can play a large role in fulfilling the industry and national laboratory demand for graduates with advanced CSE skills.
All CSE graduates should possess the attributes of having a solid foundation in mathematics; an understanding of probability and statistics; a grasp of modern computing, computer science, programming languages, principles of software engineering, and high-performance computing; and an understanding of foundations of modern science and engineering, including biology. These foundations should be complemented by deep knowledge in a specific area of science, engineering, mathematics and statistics, or computer science. CSE graduates should also possess skills in teamwork, multidisciplinary collaboration, and leadership.
A valuable community project would be to collect resources to assist early-career researchers in advancing skills to support CSE collaboration and leadership.
A third area of educational programs is that of continuing and professional education. Opportunities exist for SIAM or other institutions
to engage with industry to create and offer short courses, including those that target general CSE skills for the non-CSE specialist as well as those that target more advanced skills in timely opportunity areas (such as parallel and extreme-scale computing,
CSE-oriented software engineering, and computing with massive data).
Often one assumes that much of the CSE workforce for industry will
be trained at the postgraduate level; increasingly, however, industry needs
people who have an understanding of CSE even at the undergraduate
level in order to realize the full growth potential of a rapidly
expanding technological workplace.
Future managers and leaders in
business and industry must be able to appreciate the skill
requirements for the CSE professional
and the benefits that accrue from CSE. Continuing
education can play a role in fulfilling this need. The demand for
training in CSE-related topics exists broadly among graduate
students and researchers in academic institutions and national
laboratories, as evidenced by the growing number of summer schools
worldwide, as well as short courses aimed at the research community.
For example, the Argonne Training Program for Extreme-Scale
Computing~\cite{atpesc:website} covers key topics that CSE researchers
must master in order to develop and use leading-edge applications on
extreme-scale computers. The program targets early-career researchers
to fill a gap in the training that most computational scientists
receive and provides a more comprehensive program than do typical short
courses.\footnote{Videos and slides of lectures are available via the ATPESC
website \url{http://extremecomputingtraining.anl.gov}.}
The recent creation of the SIAM Activity Group on Applied Mathematics Education
represents another opportunity for collaboration to pursue some of these ideas in continuing and professional education.
\medskip
\begin{tcolorbox}[title=CSE Success Story: Computer-Aided Engineering in the Automotive Industry]
\begin{wrapfigure}{R}{0.7\textwidth}
\includegraphics[width=0.7\textwidth]{./figures/audi-revised.png}
\end{wrapfigure}
CSE-based simulation using
computer-aided engineering (CAE) methods
and tools has become an indispensable component of developing
advanced products in industry. Based on mathematical
models (e.g., differential equations and variational principles), CAE
methods such as multibody simulation, finite elements,
and computational fluid dynamics are essential for
assessing the functional behavior of products
early in the design cycle
when physical prototypes are not yet available.
The many advantages of virtual testing
compared with physical testing include flexibility, speed, and cost.
This figure\protect\footnotemark ~shows selected
application areas of CAE in the automotive industry.
CSE provides widely applicable methods and tools.
For example, drop tests of
mobile phones are investigated by applying simulation methods that are
also used in automotive crash analysis.
\end{tcolorbox}
\footnotetext{Figure courtesy of AUDI AG.}
\subsection{Graduate Program Learning Outcomes}
\input{outcomes-education}
A learning outcome is defined as what a student is expected to be able
to do as a result of a learning activity. In this section, we describe
a set of learning outcomes desired of a student graduating from a CSE
Ph.D. program. We focus on outcomes because they describe the set of
desirable competencies without attempting to prescribe any specific
degree structure. These outcomes can be used as a guide to define a
Ph.D. program that meets the needs of the modern CSE graduate; they can
also play an important role in defining and distinguishing the CSE
identity and in helping employers understand the skills and potential
of CSE graduates.
In Table~\ref{tab:LO}, we focus on the ``CSE Core Researchers and Developers'' category in
Figure \ref{Fig:CSE-community}. We distinguish between a CSE Ph.D. with a broadly applicable
CSE focus and a CSE Ph.D. with a domain-driven focus. An example of the
former is a ``Ph.D. in computational science and engineering,'' while
an example of the latter is a ``Ph.D. in computational
geosciences.'' The listed outcomes relate primarily to those
CSE-specific competencies that will be acquired through classes. Of
course, the full competencies of the Ph.D. graduate must also include
the more general Ph.D.-level skills, such as engaging deeply in a
research question, demonstrating awareness of research context and
related work, and producing novel research contributions, many of
which will be acquired through the doctoral dissertation.
We also note that it would be desirable for graduates of a CSE master's degree program to also achieve most (if not all) of the outcomes in Table~\ref{tab:LO}.
In particular, in educational systems with no substantial classwork component for the Ph.D., the learning outcomes of
Table~\ref{tab:LO} would also apply to the master's or honors degree that may precede the Ph.D.
In the next two subsections, we elaborate more on the interaction between CSE education and two areas that have seen considerable change since the design of many existing CSE programs: extreme-scale computing and computing with massive data.
\subsection{Education in Parallel Computing and Extreme-Scale Computing}
Extreme-scale computing poses new challenges to education, but there
is also the broader and fundamental need to educate a wide spectrum
of engineers and scientists to be
better prepared for the age of ubiquitous parallelism, as addressed in Section~\ref{sec:hpc-cse}; see also \cite{HEC-IWG2013,EESI-generic,ASCAC2014,wissenschaftsrat-simulation}.
Parallelism has become the basis for all
computing technology, which necessitates a shift
in teaching even the basic concepts.
Simulation algorithms and their
properties have long been at the core of CSE education, but now we must
emphasize parallel algorithms.
The focus used to be on abstract notions of accuracy of methods
and the complexity of algorithms;
today it must be shifted to the complexity of parallel
algorithms and the real-life cost of solving a computational
problem---a completely different notion.
Additionally,
asymptotic complexity, and thus algorithmic scalability, becomes more
important as machines grow larger.
At the same time, the traditional complexity metrics increasingly fail to give guidance
about which methods, algorithms, and implementations are truly efficient.
As elaborated in Sections~\ref{sec:hpc-cse} and \ref{sec:software-cse}, designing
simulation software has become an extremely complex, multifaceted art.
The education of future computational scientists must
address these topics that arise from
the disruptive technology that is dramatically
changing the landscape of computing.
Today's extreme scale is tomorrow's desktop. An analogous statement
holds for the size of the data that must be processed and that is
generated through simulations. In education we need to distinguish
between those whose research aims to simulate computationally
demanding problems (see Section~\ref{sec:hpc-cse} and Figure~\ref{fig:emergent-cse})
and the wider class of people who are less driven
by performance considerations. For example, many
computational engineering problems exist in which the models are
not extremely demanding computationally
or in which model reduction
techniques are used to create relatively cheap models.
Education in programming techniques needs
to be augmented with parallel programming elements
and a distinctive awareness of performance and computational cost.
Additionally, current trends are characterized by growing complexity in the design
of computer architectures, which are becoming hierarchical and heterogeneous.
These architectures are reflected by
complex and evolving programming models
that should be addressed in a modern CSE education.
In defining the educational needs in parallel and high-performance computing
for CSE, we must distinguish
between different intensities. Any broad education in CSE will benefit
from an understanding of parallel computing, simply because sequential
computers have ceased to exist. All students must be trained to
understand concepts such as concurrency, algorithmic complexity, and
its relation to scalability, elementary performance metrics, and systematic
benchmarking methodologies.
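One such elementary performance metric can be made concrete with Amdahl's law (an illustrative sketch, not from the report; the function name \texttt{amdahl\_speedup} is ours), which shows how even a small serial fraction limits the speedup attainable on any number of processors:

```python
def amdahl_speedup(serial_fraction, n_processors):
    """Amdahl's law: predicted speedup of a program in which a fixed
    fraction of the work is inherently serial and the remainder
    parallelizes perfectly across n_processors."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)


# Even a 5% serial fraction caps the achievable speedup at 20x,
# no matter how many processors are added.
for p in (10, 100, 1000):
    print(p, round(amdahl_speedup(0.05, p), 2))
```

Working through such a model in class ties the abstract notion of scalability directly to the real-life cost of solving a computational problem.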
In more demanding applications, parallel computing expertise and
performance awareness are necessary and must go significantly beyond the
content of most current curricula. This requirement is equally true in those
applications that may be only of moderate scale but that nevertheless
have high-performance requirements, such as those in real-time
applications or that require interactivity; see Figure~\ref{fig:emergent-cse}.
Here, CSE education must include
a fundamental understanding of computer architectures and the programming
models that are necessary to exploit these architectures.
Besides classification according to scientific content
and HPC intensity,
educational structures in CSE must also address the wide spectrum of
the CSE community that was described and analyzed in Section~\ref{sec:cse-community}
(see also Figure \ref{Fig:CSE-community}).
\smallskip
\noindent
{\bf CSE Domain Scientists and Engineers -- Method Users.}
Users of CSE technology will typically use dedicated supercomputer
systems and
specific software on these computers; they will
usually not program HPC systems from scratch.
Nevertheless, they need to understand the systems and the software
they use, in order to achieve leading-edge scientific results.
If needed, they must be capable of extending the existing applications,
possibly in collaboration with CSE and HPC specialists.
An appropriate educational program for CSE users in HPC
can be organized in courses and tutorials on specific topics, such as
those regularly offered by computing centers and other institutions.
These courses are often taught in compact format (ranging from a few hours to a week)
and are aimed at enabling participants to use specific methods and software or specific systems and tools. They naturally are of limited depth, but a wide spectrum of such courses
is essential in order to widen the scope of CSE and HPC technology and to let it bear fruit as
widely as possible.
\smallskip
\noindent
{\bf CSE Domain Scientists and Engineers -- Method Developers.}
Developers of CSE technology are often domain scientists or engineers who have specialized in using computational techniques in their original field. They often have decades of experience in computing
and using HPC,
and thus historically they are mostly self-taught.
Regarding the next generation of scientists, students
of the classical fields
(such as physics, chemistry, or engineering) will increasingly want to put stronger
emphasis on computing within their fields.
The more fundamental knowledge that will be needed to competently use the next generation of HPC systems thus cannot be adequately addressed by compact courses as described above. A better integration of these topics into the university curriculum is necessary,
by teaching the use of computational methods as part of existing courses or by offering dedicated HPC- and simulation-oriented courses (as electives) in the curriculum.
An emphasis on CSE and HPC within a classical discipline may be taught
in the form of a selection of courses that are offered as electives
by CSE or HPC specialists, or---a potentially especially attractive option---by a CSE specialist co-teaching jointly with a domain scientist.
\smallskip
\noindent
{\bf CSE Core Researchers and Developers.}
Scientists who work at the core of CSE are classified in two groups according to Figure~\ref{Fig:CSE-community}. {\em Domain-driven} CSE students as well as those focusing on {\em broadly applicable methods} should be expected to spend a significant amount of time learning about HPC and parallel computing topics. These elements must be well integrated into the CSE curriculum. Core courses from computer science (such as parallel programming, software engineering, and computer architecture) may present knowledge that is also needed in CSE, and they can be integrated into a CSE curriculum. Often, however, dedicated courses that are
especially designed for students in CSE will be significantly
more effective, since they can be adapted to the special prerequisites of the student group
and can better focus on the issues that are
relevant for CSE. Here again, co-teaching such courses, labs, or projects may be fruitful, especially when such courses cover several stages of the CSE pipeline (see
Figure~\ref{Fig:CSE-pipeline}).
These three levels of CSE education are naturally interdependent,
but we emphasize that all three levels are relevant and important.
In particular, the problem
of educating the future generation of scientists
in the competent
use of computational techniques cannot be addressed solely by
offering one-day courses on how to use the latest machine in the computing center.
\medskip
\begin{tcolorbox}[title=CSE Success Story: Simulation-Based Optimization of 3D Printing]
\begin{wrapfigure}{R}{0.66\textwidth}
\vspace{-0.15in}
\includegraphics[width=0.33\textwidth]{./figures/3D-printing-example/build-chamber-vb.pdf}
\hfill
\begin{minipage}[b]{0.33\textwidth}
\includegraphics[width=.9\textwidth]{./figures/3D-printing-example/CamAuto002.png}
\\[1ex]
\includegraphics[width=.9\textwidth]{./figures/3D-printing-example/CamAuto059.png}
\end{minipage}
\end{wrapfigure}
CSE researchers have developed advanced models of 3D printing processes in which
thin layers of metal powder are melted by a
high-energy electron beam that welds the
powder selectively to create
complex 3D metal structures
with almost arbitrary geometry by repeating the process layer by layer.
The two snapshots on the right visualize the effects of
a simulated electron beam that scans over a powder bed in a sequence of
parallel lines. Simulation can be used for designing the electron beam gun, developing
the control system, and generating the powder layer, thereby
accelerating the printing process in commercial manufacturing, for
example, of patient-specific medical implants.
The greatest simulation challenge is to develop numerical models for the
complex 3D multiphysics welding process.
A realistic simulation with physical resolution of a few microns
requires millions of mesh cells and several hundreds of thousands of
timesteps---computational complexity that can be tackled only with
parallel supercomputers and sophisticated software.\protect\footnotemark
\end{tcolorbox}
\footnotetext{
Simulation results from M. Markl, R. Ammer, U. R\"{u}de, and C. K\"{o}rner,
International Journal of Advanced Manufacturing Technology, 78, 239-247, 2015.}
\subsection{CSE Education in Uncertainty Quantification and Data Science}
The rising importance of massive data sets in application areas of
science, engineering, and beyond has broadened the skillset that
CSE graduates may require. For example, data-driven uncertainty
quantification requires statistical approaches that may include tools
such as Markov chain Monte Carlo methods and Bayesian
inference. Analysis of large networks requires skills in discrete
mathematics, graph theory, and combinatorial scientific
computing. Similarly, many data-intensive problems require approaches
from inverse problems, large-scale optimization, machine learning, and
data stream and randomized algorithms.
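As one concrete example of these statistical tools (an illustrative sketch, not taken from the report; the function name \texttt{metropolis} and its parameters are ours), a random-walk Metropolis sampler, the simplest Markov chain Monte Carlo method, can be written in a few lines:

```python
import math
import random


def metropolis(log_target, x0, n_steps, step_size=1.0, seed=0):
    """Random-walk Metropolis: propose Gaussian steps and accept each
    with probability min(1, target_ratio), so that the long-run
    distribution of the chain follows the (unnormalized) target."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + step_size * rng.gauss(0.0, 1.0)
        delta = log_target(proposal) - log_target(x)
        # Work with log densities to avoid overflow/underflow.
        if delta >= 0 or rng.random() < math.exp(delta):
            x = proposal
        samples.append(x)
    return samples


# Draw from a standard normal, whose log density is -x^2/2 up to a constant.
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=20_000)
```

Even this toy version surfaces the practical issues (burn-in, step-size tuning, autocorrelation) that a CSE graduate applying Bayesian inference must understand.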
The broad synergies between computational science and data science
offer opportunities for educational programs. Many CSE competencies
translate directly to the analysis of massive data sets at scale using
high-end computing infrastructure. Computational science and data
science are both rooted in solid foundations of mathematics and
statistics, computer science, and domain knowledge; this common
core may be exploited in educational programs that prepare the
computational and data scientists of the future.
We are already beginning to see the emergence of such programs. For example, the new undergraduate major in ``Computational Modeling and Data Analytics'' at Virginia Tech\footnote{\url{http://www.science.vt.edu/ais/cmda/}} includes deep integration among applied mathematics, statistics, computing, and science and engineering applications. This new degree program is intentionally designed \emph{not} to be just a compilation of existing classes from each of the foundational areas; rather, it comprises mostly new classes with new perspectives emerging from the intersection of fields and team taught by faculty across departments.
Another relevant example is the Data Engineering and Science Initiative at Georgia
Tech.\footnote{\url{http://bigdata.gatech.edu/}, \url{http://www.analytics.gatech.edu/}} Degree programs offered include a one-year M.S. in analytics, and M.S. and Ph.D. programs with a data focus in CSE and biotech fields. These programs are jointly offered by academic units drawn from the colleges of computing, engineering, and business. About a quarter of the courses are offered by the School of CSE, with the focus on computational algorithms and high-performance analytics.
A similar picture is emerging around the world, with interdisciplinary programs that combine data analytics and mathematics-based high-end computing.\footnote{See, e.g., several of the programs listed
at \url{http://www.kdnuggets.com/education/index.html}}
\subsection{Software Sustainability, Data Management, and Reproducibility}
As discussed in Section~\ref{sec:software-cse},
simulation software is becoming increasingly complex and often
involves many developers, who may be geographically distributed
and who may enter or leave the project at different times.
Education is needed on issues in software productivity and sustainability,
including software engineering for CSE and tools for software project
management.
For example, code repositories that support version control are increasingly used as
a management tool for projects of all sizes. Teaching students at
all levels to routinely use version control will increase their
productivity and allow them to participate in open-source software
projects in addition to better preparing them for many jobs in
large-scale CSE.
Researchers in CSE fields (as well as
governments, funding agencies, and the public) have also experienced growing concern about the lack of
reproducibility of many scientific results that are based on code and data that are
not publicly available and are often not properly archived
in a manner that allows future confirmation of the results.
Many agencies and journals are beginning to require open sharing of data and/or
code. CSE education should include training in the techniques
that support this trend, including data management and provenance,
licensing of code and data,
the practice of full specification of models and algorithms within publications,
and archiving code and data in repositories
that issue permanent identifiers such as DOIs.
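As a minimal concrete practice in this direction (an illustrative sketch, not prescribed by the report; the function name \texttt{file\_fingerprint} is ours), recording a cryptographic checksum of each archived data file lets a later confirmation run verify that it uses bit-identical inputs:

```python
import hashlib


def file_fingerprint(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in chunks so that
    arbitrarily large data files can be fingerprinted."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Publishing such digests alongside a DOI-archived dataset makes silent changes to the inputs detectable years later.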
\subsection{Changing Educational Infrastructure}
As we think about CSE educational programs, we must also consider the changing external context of education, particularly with regard to the advent of digital educational technologies and their associated impact on the delivery of education programs.
One clear impact is an increased presence of online digital materials, including digital textbooks, open educational resources, and massive open online courses (MOOCs). Recent years have already seen the development of online digital CSE resources, as well as widespread availability of material in fields relevant to CSE, such as HPC, machine learning, and mathematical methods.
An opportunity exists to make better community use of current materials, as well as to create new materials. There is also an opportunity to leverage other resources, such as Computational Science Graduate Fellowship essay contest winners\footnote{\url{https://www.krellinst.org/csgf/outreach/cyse-contest}} and archived SIAM plenaries and other high-profile lectures.
The time is right for establishing a SIAM working group that creates and curates a central repository linking to CSE digital materials and coordinates community development of new CSE online modules.
This effort could also be coordinated with an effort to pursue opportunities in continuing education.
Digital educational technologies are also having an impact on the way university courses are structured and offered. For example, many universities are taking advantage of digital technologies and blended learning models to create ``flipped classrooms,'' where students watch video lectures or read interactive online lecture notes individually and then spend their face-to-face class time engaged in active learning activities and problem solving. Digital technologies are also offering opportunities to unbundle a traditional educational model---introducing more flexibility and more modularity to degree structures. Many of these opportunities are well suited for tackling the challenges of building educational programs for the highly interdisciplinary field of CSE.
\subsection{CSE and High-Performance Computing -- Ubiquitous Parallelism}
\label{sec:hpc-cse}
The development of CSE and high-performance computing are closely interlinked.
The rapid growth of available compute power drives CSE research toward ever more complex
simulations in ever more disciplines.
In turn, new paradigms in HPC present
challenges and opportunities that drive future CSE research and education.
\subsubsection{Symbiotic Relationship between CSE and HPC}
HPC and CSE are intertwined in a symbiotic relationship:
HPC technology enables breakthroughs in CSE
research, and leading-edge CSE applications are the main drivers
for the evolution of supercomputer
systems~\cite{scales03,EESI-generic,wissenschaftsrat-hpc,wissenschaftsrat-simulation,Rosner10,ASCACTopTen2014}.
Grand challenge applications exercise computational technology at its limits and beyond.
The emergence of CSE as a fundamental pillar
of science has become possible because
computer technology is beginning to deliver
sufficient compute power to create effective computational models.
Combined with the tremendous algorithmic advances
(see Figure~\ref{Fig:Algorithmic-Moore}),
computational models can deliver predictive power and serve
as a basis for important decisions.
On the other hand, the computational needs of CSE applications
are a main driver for HPC research.
CSE applications often require closely interlinked systems,
where not only the aggregate instruction throughput is essential
but also the tight interconnection between components:
CSE often requires high-bandwidth and low-latency interconnects.
These requirements differentiate scientific computing in CSE
from other uses of information-processing systems.
In particular, many CSE applications cannot be served efficiently
by weakly coupled networks as in grid computing or
generic cloud computing services.
\subsubsection{Ubiquitous Parallelism: A Phase Change for CSE Research and Education}
Parallelism for CSE
is fundamental for extreme-scale computing,
but the significance of parallelism goes well beyond the topics arising in supercomputing.
All modern computer architectures are parallel, even those of
moderate-size systems or on the desktop.
Since single-processor clock speeds have stagnated,
any further increase of computational power can
be achieved only by a further increase in parallelism.
High-performance computing architectures will involve an ever-increasing number of parallel threads,
possibly reaching a billion by 2020.
These further increases in computational power can be delivered only with ever more
complex hierarchical and heterogeneous system designs.
The technological challenges in miniaturization, clock rate, bandwidth limitations,
and power consumption will require deep and disruptive innovation.
In CSE this will be reflected in dramatically increasing complexity in software development
and will require the design of
efficient and sustainable realizations of numerical libraries and CSE frameworks.
Future mainstream computers for science and engineering will not be accelerated
versions of current architectures, but smaller versions of extreme-scale machines.
In particular, they will inherit the node and core architecture from these systems; and thus
the programming methodology must be adapted for these systems,
just as it must be adapted for the extreme scale.
The inevitable trend to parallel computing is complicated by the
hierarchical and heterogeneous architectures of modern parallel systems.
Although computer science research is making progress in developing
techniques to make these architectural features
transparent to the application developer, we have reached a state where,
for high-end applications, the specifics of an architecture must be explicitly exploited
in the algorithm design and the software development.
Inevitably, all CSE methods and algorithms will have to be
considered in the context of such parallel settings.
Consequently, parallel computing in its full breadth
has become a central and critical issue for
CSE research and education.
\subsubsection{Emergent Topics in HPC-Related Research and Education}
{\bf Extending the scope of CSE through HPC technology.}
Low-cost %
computational power, becoming available through
accelerator hardware, such as graphic processing units (GPUs), will
increasingly enable nontraditional uses of HPC technology for CSE.
One significant opportunity arises in real-time and embedded supercomputing.
Figure~\ref{fig:emergent-cse} illustrates a selection of possible
future development paths, many of which involve
advanced interactive computational steering
and/or real-time simulation.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{./figures/Figs-AdobeIllustrator/extended-cse-applications_05.pdf}
\caption{Some emerging developments based on real-time or embedded HPC methodology for CSE applications.}
\label{fig:emergent-cse}
\end{figure}
Once simulation software can be used in real time,
it can also be used for training and education.
A classical application is the flight simulator, but the
methodology can be extended to many other situations where humans operate complex technical objects
and where systematic training on a simulator may save time and money as well as increase preparedness for emergency situations.
Further uses of fast, embedded CSE systems include the development of
simulators for the modeling of predictive control systems and for patient-specific biomedical diagnosis.
The development of
emerging CSE applications, shown
in Figure~\ref{fig:emergent-cse}, will require a focused investment
in parallel computing research and education.
As another example, extreme-scale computing will enable mesoscale simulation
to model the collection of cells that make up a human organ or a large collection of particles
{\em directly}, without resorting to averaging approaches.
The simulation of granular materials has tremendous practical
importance, with examples ranging from the transport and processing of
bulk materials and powders in industry
to the simulation of avalanches and landslides.
The potential that arises with the advent of powerful supercomputers
can be seen by realizing that exascale means $10^{18}$, whereas
a human has ``only'' around $10^{11}$
neurons and $10^{13}$ red blood cells, and
the 3D printing of a medical implant may require processing
$10^8$ individual grains of titanium alloy.
Thus, extreme-scale computation may open the route to modeling techniques in which each cell or grain is represented individually. This gives rise to
research directions that are out of reach on conventional computer systems
but that promise to exceed the predictive power of continuum models
for such simulation scenarios.
In order to exploit these opportunities, new simulation methods must be devised,
new algorithms invented, and new modeling paradigms formulated.
New techniques for validation and verification are needed.
Fascinating opportunities in fundamental research arise that go
far beyond just developing new material laws and
increasing the mesh resolution in continuum models.
\medskip
\noindent
{\bf Quantitative performance analysis of algorithms and software.}
The advent of exascale and other performance-critical applications
requires that CSE research and education address the widening gap
between the traditional mathematical assessment of
computational cost and the actual performance of
concrete algorithms implemented on current computer systems.
The traditional cost metrics based on counting
floating-point operations increasingly fail to correlate with the truly relevant
cost factors, such as time to solution or energy consumption.
Research will be necessary in order to quantify
more complex algorithmic characteristics, such as memory footprint and
memory access structure (e.g., cache reuse, uniformity of access,
utilization of block transfers), processor utilization,
and communication and synchronization requirements. These effects must be
built into better cost and complexity models---models that
capture the true nature of
computational costs better than flop counts alone.
Furthermore, the traditional approach to
theory in numerical analysis
provides only an insufficient basis to quantify the efficiency of
algorithms and software, since many theorems are only qualitative and
leave the constants unspecified.
Such mathematical results, although themselves rigorous,
permit only heuristic---and thus often misleading---predictions of real computational
performance.
Thus much of current numerical analysis research must be
fundamentally extended to become better guiding principles for
the design of efficient simulation methods in practice.
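As a minimal illustration of a cost model that goes beyond flop counting, the following sketch compares a flop-count runtime prediction with a roofline-style bound that also accounts for memory traffic. The machine parameters and the example kernel are purely hypothetical, chosen only to make the point:

```python
# A roofline-style cost model: runtime is bounded below by BOTH the
# compute time and the memory-traffic time, not by flop counts alone.
# Machine parameters are hypothetical, chosen only for illustration.

PEAK_FLOPS = 1e12    # assumed peak throughput: 1 Tflop/s
BANDWIDTH = 1e11     # assumed memory bandwidth: 100 GB/s

def roofline_time(flops, bytes_moved):
    """Lower bound on runtime from the compute and bandwidth limits."""
    return max(flops / PEAK_FLOPS, bytes_moved / BANDWIDTH)

# Example: the vector update y <- y + a*x with n double-precision
# entries performs 2n flops but moves 3n*8 bytes (read x, read y,
# write y).
n = 10**8
t_flops_only = (2 * n) / PEAK_FLOPS              # flop-count prediction
t_roofline = roofline_time(2 * n, 3 * n * 8)     # bandwidth-aware bound

# Here the memory term dominates: counting flops alone underestimates
# the attainable runtime by two orders of magnitude.
```

For this memory-bound kernel the flop-count prediction is off by a factor of more than one hundred, which is exactly the kind of discrepancy better cost models must capture.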
\medskip
\noindent
{\bf Performance engineering and co-design.}
In current CSE practice, performance models are used to analyze existing
applications for current and future computer systems;
but the potential of performance analysis techniques is rarely used %
as a systematic tool for designing, developing, and implementing
CSE applications.
In many cases an a priori analysis
can be used to determine the computational resources that
are required for executing a specific algorithm.
Where available, such requirements (e.g., flops, memory, memory bandwidth, network bandwidth)
should be treated as nonfunctional goals
for the software design.
What is required here is a fundamental shift from the current practice
of treating performance as an a posteriori diagnostic assessment to recognizing it as
an a priori design goal.
This step is essential because when performance criteria are considered
too late in the design process,
fundamental decisions (about data structures and algorithms) cannot be revised,
and improvements are then often limited to unsatisfactory
code tuning and tweaking.
The idea of an a priori treatment of performance goals in scientific software engineering
is related to the {\em co-design} paradigm
and has become a new trend for developing next-generation
algorithms and application software systems.
The nations of the G-8 have instituted regular meetings to strategize
about the formidable task of porting to emerging exascale
architectures the vast quantity of software on which computational
science and engineering depend. These co-design efforts were codified
in the 2011 International Exascale Software Project Roadmap~\cite{dongarra_ijhpca2011}.
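As a toy illustration of such an a priori analysis (all numbers hypothetical), even a few lines of arithmetic can turn the memory footprint of a planned simulation into a nonfunctional design goal before any code is written:

```python
# Hypothetical a priori resource estimate for a planned mesh-based
# simulation; the resulting memory footprint and node count are treated
# as nonfunctional design goals rather than as an afterthought.
CELLS_PER_DIM = 4096           # assumed 4096^3 structured mesh
UNKNOWNS_PER_CELL = 5          # e.g., density, momentum, energy
BYTES_PER_VALUE = 8            # double precision
NODE_MEMORY = 128 * 2**30      # assumed 128 GiB of memory per node

mem_bytes = CELLS_PER_DIM**3 * UNKNOWNS_PER_CELL * BYTES_PER_VALUE
nodes_needed = -(-mem_bytes // NODE_MEMORY)   # ceiling division
```

Under these assumptions the simulation needs 2.5 TiB of state and therefore at least 20 nodes; a design that targets fewer nodes must change data structures or resolution, not just tune code afterwards.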
\medskip
\noindent
{\bf Ultrascalability and asynchronous algorithms.}
For the foreseeable future all growth in compute power
will be delivered through increased parallelism.
Thus, we must expect that high-end applications will reach
degrees of parallelism of up to $10^9$ within a decade.
This situation poses a formidable challenge to the design
and implementation of algorithms and software.
Traditional paradigms of bulk-synchronous operation are likely to face significant performance obstacles.
Many algorithms permit more asynchronous execution, enabling processing to continue even if a small number of processors lag behind; but this is a wide-open area of research because it requires a new look at data dependencies and possibly
also nondeterministic execution schedules.
Additionally, system software must be extended to permit the efficient
and robust implementation of such asynchronous algorithms.
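A deliberately tiny sketch (not an example from the report itself) conveys the flavor of such asynchronous, or "chaotic," iterations: the unknowns of a strictly diagonally dominant system are updated in random order, each update using whatever possibly stale values are currently available, mimicking processors that have fallen out of lockstep:

```python
import random

# A toy "chaotic" (asynchronous) iteration, illustrative only: the two
# unknowns of a strictly diagonally dominant system are updated in a
# random order, each update using whatever possibly stale values are
# currently available, as if processors ran out of lockstep.

A = [[4.0, 1.0],
     [1.0, 3.0]]        # strictly diagonally dominant matrix
b = [9.0, 8.0]          # exact solution: x = (19/11, 23/11)

x = [0.0, 0.0]
random.seed(0)          # deterministic here, but the order is arbitrary
for _ in range(200):
    i = random.randrange(2)   # some processor finishes and updates x[i]
    s = sum(A[i][j] * x[j] for j in range(2) if j != i)
    x[i] = (b[i] - s) / A[i][i]

# Despite the nondeterministic schedule, x converges to the solution.
```

For this class of systems, convergence holds for any update order in which every component is updated infinitely often, which is precisely why such methods tolerate lagging processors.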
\medskip
\noindent
{\bf Power wall.}
The increasing aggregate compute capacity in highly parallel computers both on the desktop and on the supercomputer promises to enable the simulation of
problems with unprecedented size and resolution.
However, electric power consumption per data element is
not expected to drop at the same rate. This creates a {\it power wall},
and power consumption is emerging as one of the fundamental
bottlenecks of large-scale computing.
How dramatic this will be for computational science can be seen from the following simple example.
Assume that the movement of a single word of data costs about
$1$ nanojoule ($=10^{-9}$ joule) of energy~\cite{SachsYelickEtAl2011}.
If we now assume that a specific computation deals with
$N=10^{9}$ entities (such as mesh nodes or particles),
then using an $O(N^2)$ algorithm to transfer $N \times N$ data items for an all-to-all interaction,
such as computing a pairwise distance,
will cause an energy dissipation of $10^{18}$ nanojoules $\approx 277$ kWh.
Assuming a petaflop computer (which may become available on the desktop in the coming decade),
we could in theory execute %
the $N^2$ operations in 1000~seconds.
However, a cost of 277 kWh for the naively implemented data movement
will require more than 1 MW of sustained power intake.
Clearly such power levels are neither feasible nor affordable in a standard environment.
The situation gets even more dramatic
when we transition to {\em tera}scale problems with $N=10^{12}$ on supercomputers.
Then the same all-to-all data exchange will dissipate an enormous 277 GWh,
which is equivalent to the energy
output of a medium-size nuclear fusion explosion.
Clearly, in application scenarios of such scale, a global data movement with
$O(N^2)$ complexity must be
classified as practically impossible.
Only with suitable hierarchical algorithms
that dramatically reduce the complexity
can we hope to
tackle computational problems of such size.
This kind of bottleneck must be addressed in both research and education.
In the context of CSE, the power wall becomes
primarily a question of designing algorithms that are efficient in terms of both
the operations performed {\em and} the data moved.
Additionally, more energy-efficient hardware systems need to be
developed.
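The back-of-the-envelope numbers above are easy to reproduce; the following sketch uses only the one-nanojoule-per-word assumption quoted in the text:

```python
# Reproducing the power-wall arithmetic, assuming (as in the text)
# that moving one word of data costs 1 nanojoule of energy.
E_WORD = 1e-9          # joules per transferred word (assumption)
KWH = 3.6e6            # joules per kilowatt-hour

N = 10**9
energy_j = N * N * E_WORD        # all-to-all: N^2 word transfers -> 1e9 J
energy_kwh = energy_j / KWH      # about 277.8 kWh

t = N * N / 1e15                 # petaflop machine: 10^18 ops in 1000 s
power_w = energy_j / t           # sustained power: 1e6 W = 1 MW

N_big = 10**12                   # the terascale-problem variant
energy_gwh = N_big**2 * E_WORD / (KWH * 1e6)   # about 277.8 GWh
```

Note how the scaling works: increasing $N$ by a factor of $10^3$ multiplies the $O(N^2)$ transfer cost by $10^6$, which is the step from kilowatt-hours to gigawatt-hours.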
\medskip
\noindent
{\bf Fault tolerance and resilience.}
With increasing numbers of functional units and cores and with continued miniaturization,
the potential for hardware failures rises.
Fault tolerance on the level of a system can be reached only by redundancy,
which drives up energy and investment costs. At this time
many algorithms used in CSE are believed to have good potential for so-called algorithm-based fault tolerance. That is, the
algorithm either is naturally tolerant of certain faults
(e.g., it still converges to the correct answer, but perhaps more slowly)
or can be augmented to compensate for different types of failure (by exploiting specific features of the data structures and the algorithms, for example).
Whenever an algorithm has a hierarchical structure, its different levels and intermediate solutions
can be used for error detection and correction.
At present, many open research questions arise from these considerations,
especially when the systems, algorithms, and applications are studied in combination.
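One concrete, deliberately tiny illustration of algorithm-based fault tolerance, in the spirit of classical checksum schemes (e.g., Huang and Abraham's), augments a matrix-vector product with a checksum row so that a corrupted output entry violates an easily checked identity. The code is a sketch, not an example from the report:

```python
# Algorithm-based fault tolerance, checksum style (illustrative only):
# a matrix-vector product carries an extra checksum so that a silently
# corrupted output entry is detected by exploiting a property of the
# data structure, not by duplicating hardware.

def matvec_with_checksum(A, x):
    checksum_row = [sum(col) for col in zip(*A)]   # column sums of A
    y = [sum(a * v for a, v in zip(row, x)) for row in A]
    y_check = sum(c * v for c, v in zip(checksum_row, x))
    return y, y_check

def is_consistent(y, y_check, tol=1e-9):
    # For a fault-free product, sum(y) must equal y_check.
    return abs(sum(y) - y_check) < tol

A = [[2.0, 1.0],
     [0.0, 3.0]]
x = [1.0, 2.0]
y, y_check = matvec_with_checksum(A, x)
ok_before = is_consistent(y, y_check)   # fault-free: identity holds

y[0] += 0.5                             # inject a silent fault
ok_after = is_consistent(y, y_check)    # the fault is now detected
```

The checksum costs one extra inner product but detects any single corrupted entry of the result, which is the kind of algorithm-level redundancy the paragraph above alludes to.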
\section{CSE: Driving Scientific and Engineering Progress}
\label{sec:intro}
\pagebudget{5}
\subsection{Definition of CSE}
Computational science and engineering (CSE) is a multidisciplinary
field of research and education lying at the intersection of applied
mathematics, statistics, computer science, and core disciplines of
science and engineering (Figure~\ref{Fig:CSE-diagram}).
While CSE builds on these disciplinary areas, its focus is on the
integration of knowledge and methodologies from all of them and the
development of new ideas at their interfaces. As such, CSE is a field
in its own right, distinct from any of the core disciplines.
CSE is devoted to the development and use of computational methods for
scientific discovery in all branches of the sciences, for the
advancement of innovation in engineering and technology, and for the
support of decision-making across a spectrum of societally important
application areas.
CSE is a broad and vitally
important field encompassing methods of high-performance computing
(HPC) and playing a central role in the data revolution.
\begin{figure}[hb]
\centering
\includegraphics[width=0.6\textwidth]{./figures/Figs-AdobeIllustrator/cse_diagram.pdf}
\caption{CSE at the intersection of mathematics and statistics, computer science, and core disciplines from the sciences and engineering. This combination gives rise to a new field whose character is different from its original constituents.
\label{Fig:CSE-diagram}
}
\end{figure}
While CSE is rooted in the mathematical and statistical sciences,
computer science, the physical sciences, and engineering, today it
increasingly pursues its own unique research agenda. CSE is now widely
recognized as an essential cornerstone that drives scientific and
technological progress in conjunction with theory and experiment.
Over the past two decades CSE has grown beyond its classical roots
in mathematics and the physical sciences
and has started to revolutionize
the life sciences and medicine.
In the 21st century its pivotal
role continues to expand to broader areas that now include the social
sciences, humanities, business, finance, and government policy.
\subsection{Goal of This Document}
The goal of this document is twofold: (1)
to examine and assess the rapidly expanding role of CSE in the
21st-century landscape of research and education and (2) to discuss new
directions for CSE
in the coming decade.
The document explores challenges and opportunities across CSE methods,
algorithms, and software, while examining the impact of disruptive
developments resulting from emerging extreme-scale computing systems,
data-driven discovery, and comprehensive broadening of the
application fields of CSE. Among the many exciting challenges and
opportunities that arise for CSE, this report
discusses particular advances in CSE methods and algorithms,
the ubiquitous parallelism of
all future computing, the sea change provoked by the data revolution,
the importance of software as a foundation for sustained CSE collaboration,
and the resulting challenges for CSE education
and workforce development. This report follows in the footsteps of
the 2001 report on graduate education in CSE by the SIAM Working Group
on CSE Education \cite{education2001graduate}, which was instrumental in
setting directions for the nascent CSE field,
and a more recent report on undergraduate CSE education
\cite{education2011undergraduate}.
\subsection{Importance of CSE } %
CSE has had and will continue to have profound implications for the
health, economic well-being, quality of life, safety, and security of
humans and our planet as a whole. The impact of CSE on our society
has been so enormous---and the role of modeling and simulation so
ubiquitous---that it is nearly impossible to measure CSE's impact and
too easy to take it for granted. It is hard to imagine the design or
control of a system or process that has not been thoroughly
transformed by predictive modeling and simulation. Advances in CSE
have led to more efficient aircraft, safer cars, higher-density
transistors, more compact electronic devices, more powerful chemical
and biological process systems, cleaner power plants,
higher-resolution medical imaging devices, and more
accurate geophysical exploration technologies---to name just a
few.
A rich variety of fundamental advances have been enabled by CSE in areas such as
astrophysics,
biology,
climate modeling,
fusion-energy science,
hazard analysis,
human sciences and policy,
materials science,
management of greenhouse gases,
nuclear energy,
particle accelerator design, and
virtual product design
\cite{scales03,pitac05,OdenEtAl06,Brown08,Breakthroughs08,GlotzerKimEtAl2009,BrownMessina10,oden2011grand,KeyesMcInnesWoodwardEtAl13}.
\smallskip
\noindent
{\bf CSE as a complement to theory and experiment.}
CSE closes the centuries-old gap between theory and experiment by
providing an unparalleled technology that converts theoretical models into
predictive simulations. It creates a systematic method to
integrate experimental data with algorithmic models.
CSE has become the essential driver for progress in
science when classical experiments or conventional theory
reach their limits,
and in applications where experimental approaches are too
costly, slow, dangerous, or impossible. Examples include automobile
crash tests, nuclear test explosions, emergency flight maneuvers,
and operator emergency response training.
Experiments in fundamental
science may be impossible when the systems under study span
microscopic or macroscopic scales in space or time that are beyond
reach. Although traditional theoretical analysis would not suffer
from these limitations,
theory alone is insufficient to create predictive capabilities. For
example, while the well-established mathematical models for fluid
dynamics provide an accurate theoretical description of the
atmosphere, the equations elude analytical solutions for problems of
interest because of their nonlinearity. Only when combined with the power
of numerical simulation and techniques to assimilate vast amounts of measured data
do they become useful for predicting tomorrow's weather or for
designing more energy-efficient aircraft wings.
Another example is the use of simulation models to
conduct systematic virtual experiments of exploding supernovae: CSE
technology serves as a virtual telescope reaching farther than any
real telescope, expanding human reach into outer space. And
computational techniques can equally well serve as a virtual
microscope, being used to understand quantum phenomena at scales so
small that no physical microscope could resolve them.
\smallskip
\noindent
{\bf CSE and the data revolution.}
The emergence and growing importance of massive data sets in many
areas of science, technology, and society, in conjunction with the
availability of ever-increasing parallel computing power, are
transforming the world. Data-driven approaches
enable novel ways of scientific discovery.
Using massive amounts of data and mathematical techniques to assimilate the
data in computational models offers new ways of quantifying
uncertainties in science and engineering and thus helps
make CSE truly predictive.
At the same time, relying on new forms of massive data, we can now
use the scientific approach of quantitative, evidence-based analysis
to drive progress in many areas of society
where qualitative forms of analysis, understanding, and decision-making
were the norm until recently.
Here the CSE paradigm contributes as a keystone technology to the
data revolution.
In these and many other ways, CSE is becoming
essential for increasingly broad areas of science, engineering, and
technology, expanding human capability beyond its classical
limitations.
\subsection{CSE Success Stories}
CSE has become indispensable for leading-edge
scientific investigation and engineering design that increasingly rely on advanced modeling
and simulation as a complement to theory and experiment.
For example, CSE addresses questions such as the following:
\begin{titemize}
\item What is the best therapy for a patient with a specific disease
that minimizes risk and maximizes success probability?
\item What are the intricate functions of the human brain, the human
nervous system, and the cardiovascular system, and how can we better
understand them so as to prolong or improve our quality of life?
\item What are the likely results of hurricanes, tornados, or storm
surges on coastal regions, and what plans can be implemented to
minimize loss of human life and property?
\item How will our climate evolve, and how can we predict the outcomes
of climate change?
\item How can a region be made less susceptible to failures in the power grid, and
how can we devise strategies to recover as quickly as possible
when part of the power grid becomes inoperable?
\end{titemize}
Throughout this document, we highlight a few examples of CSE success stories
in call-out boxes to illustrate how combined advances in CSE
theory, analysis, algorithms, and software have made CSE technology
indispensable for applications throughout science and industry.
\medskip
\begin{tcolorbox}[title=CSE Success Story: Computational Medicine]
\begin{wrapfigure}{R}{0.61\textwidth}
\vspace{-0.15in}
\includegraphics[width=0.61\textwidth]{./figures/cardiac-krause/electropic_2000.png}
\end{wrapfigure}
Computational medicine has always been at the frontier of CSE: the virtual design
and testing of new drugs and therapies accelerate medical progress and
reduce cost for development and treatment.
For example, CSE researchers have developed elaborate models of the electromechanical
activity of the human heart.\protect\footnotemark
~Such complex processes within the human body lead to elaborate multiscale models.
Cardiac function builds on a complicated interplay between different temporal
and spatial scales (i.e., body, organ, cellular, and molecular
levels), as well as different physical models (i.e., mechanics,
electrophysiology, fluid mechanics, and their interaction).
CSE advances in computational medicine are helping, for example, in placing electrodes
for pacemakers and studying diseases such as atrial fibrillation.
Opportunities abound for next-generation CSE advances:
The solution of inverse problems can help
identify suitable values for material parameters, for example, to detect scars
or infarctions. Using uncertainty quantification, researchers can estimate the
influence of varying these parameters or varying geometry.
\end{tcolorbox}
\footnotetext{Parallel and adaptive simulation method described in T. Dickopf, T. Krause, R. Kraus, and M. Potse, SIAM J Sci Comput 36(2), C163-C189, 2014.}
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\textwidth]{./figures/Figs-AdobeIllustrator/cse_loop.pdf}
\caption{CSE pipeline, from physical problem to model and algorithms
to efficient implementation in simulation software with verification
and validation driven by data. The pipeline is actually a loop that
requires multiple feedbacks.
\label{Fig:CSE-pipeline}}
\end{figure}
\subsection{CSE: A New Academic Endeavor}
CSE is unique in that it enables progress in virtually all other
disciplines by providing windows of discovery when traditional means
of research and development reach their limits.
Many CSE problems can
be characterized by a {\em pipeline} that includes mathematical
modeling techniques (based on physical or other principles),
simulation techniques (discretizations of equations, solution
algorithms, data structures, software libraries and frameworks),
and analysis techniques (data mining, data management,
and visualization, as well as the analysis of error, sensitivity,
stability, and uncertainty). In practice the CSE pipeline is a loop
connected through multiple feedbacks, as illustrated in
Figure~\ref{Fig:CSE-pipeline}. Models are revised and updated with new
data.
When they reach a sufficient
level of predictive fidelity, they can be used for design and control,
which are often posed formally as optimization problems.
\smallskip
\noindent
{\bf Universality.} CSE research often creates broad impact, since CSE
methods and findings tend to have wide applicability beyond any single discipline.
The abstract concepts of algorithm and method development form a central
part of CSE (Figure~\ref{Fig:CSE-components}).
They generally apply to a wide range of
disciplinary problems where they can then lead to breakthroughs
that could not be achieved otherwise.
For example, a novel method to simulate large
ensembles of interacting particles may become
a central breakthrough achievement for an astrophysicist
studying galaxy formation as well as for a nanotechnology researcher
exploring molecular dynamics. Ultimately, therefore, CSE aims to develop a
universal set of simulation methods and tools for scientists and
engineers. The universality of CSE is difficult to leverage
in the classical disciplinary organization of science, however,
and must be reflected in education, in institutional structures,
and in funding programs.
This point becomes even more important because the universality of CSE can
also become a weakness in the competition for resources in research
and education.
\smallskip
\noindent
{\bf Institutional structure.}
Because of CSE's intrinsically
interdisciplinary nature and its research agenda reaching beyond the
traditional disciplines, the development of CSE
is often impeded by traditional institutional boundaries.
CSE research and education have found great success over the past
decade in those settings where CSE became a clearly articulated focus of
entire university departments,\footnote{For example, the School of
Computational Science \& Engineering at the Georgia Institute of
Technology and the Department of Scientific Computing at Florida State
University.
%
} faculties,\footnote{For example, the
Division of Computer, Electrical, and Mathematical Sciences and
Engineering at the King Abdullah University of Science and
Technology (KAUST).
%
}
or large
interdisciplinary centers.\footnote{For example, the Institute for
Computational Engineering and Sciences at the University of Texas at
Austin, the Scientific Computing and Imaging Institute at the
University of Utah, the Cluster of Excellence in Simulation
Technology at the University of Stuttgart,
and CERFACS (Centre Européen de Recherche et de
Formation Avancée en Calcul Scientifique) in Toulouse.}
In
many of the newer universities in the world, institutional structures
often develop naturally in line with the CSE paradigm.\footnote{KAUST
is, again, a good example.}
In other cases, institutional traditions and realities make it more
natural for successful CSE programs to develop within existing
departments\footnote{For example, the master's program in CSE at the
Technische Universit{\"a}t M\"{u}nchen.} or in
cross-departmental\footnote{For example, CSE graduate programs in
engineering faculties at the University of Illinois at
Urbana-Champaign, at the Massachusetts Institute of Technology,
and at the Technische Universit{\"a}t Darmstadt.}
or cross-faculty\footnote{For example, the
School of Computational Science and Engineering at McMaster
University, the Institute for Computational and Mathematical Engineering at Stanford University,
and the master's program in CSE at the Ecole
Polytechnique Federale de Lausanne.} initiatives.
Information about a variety of CSE educational programs can be found online~\cite{SIAG-CSE:wiki,CSE-GraduateProgramSurvey2012}.
In any case, universities and research institutes
will need to implement new and
effective multidisciplinary structures that enable
more effective CSE research and education.
To fully realize its potential, the CSE endeavor requires its own
academic structures, funding programs, and educational programs.
\medskip
\begin{tcolorbox}[title=CSE Success Story: SIAM Working Group Inspires Community to Create CSE Education Programs]
\begin{wrapfigure}{R}{0.68\textwidth}
\vspace{-0.15in}
\includegraphics[width=0.68\textwidth]{./figures/petzold-report.pdf}
\end{wrapfigure}
The landmark 2001 report on ``Graduate Education in Computational Science and Engineering'' by L.~Petzold et al.~\cite{education2001graduate} played a critical role in helping define the then-nascent field of CSE. The report proposed a concrete definition of CSE's core areas and scope, and it laid out a vision for CSE graduate education. In doing so, it contributed a great deal to establishing CSE's identity, to identifying CSE as a priority interdisciplinary area for funding agencies, to expanding and strengthening the global offerings of CSE graduate education, and ultimately to creating the current generation of early-career CSE researchers.
\smallskip
Much of the 2001 report remains relevant today; yet much has changed. Fifteen years later, there is a sustained significant demand for a workforce versed in mathematics-based computational modeling and simulation, as well as a high demand for graduates with the interdisciplinary expertise needed to develop and/or utilize computational techniques and methods in many fields across science, engineering, business, and society. This demand necessitates that we continue to strengthen existing programs as well as leverage new opportunities to create innovative programs.
\end{tcolorbox}
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\textwidth]{./figures/Figs-AdobeIllustrator/cse_components3.pdf}
\caption{Interaction among different CSE components. The development
of new algorithms and software is at the core of this view of CSE.
}
\label{Fig:CSE-components}
\end{figure}
\subsection{Challenges and Opportunities for the Next Decade}
While the past several decades have witnessed tremendous progress in
the development of CSE methods and their application within a broad
spectrum of science and engineering problems, a number of challenges
and opportunities are arising that define important research directions
for CSE in the coming decade.
In science and engineering simulations, large differences in temporal and
spatial scales must be
resolved together with handling uncertainty in parameters and
data, and often different models must be coupled together
to become complex multiphysics simulations.
This integration is necessary in order to tackle \textbf{applications in a multitude of new fields} such as
the biomedical sciences. High-fidelity predictive simulations require
feedback loops that involve inverse problems, data assimilation,
and optimal design and control.
Algorithmic advances in these areas
are at the core of CSE research; and in order to deal with the requirements of
ever more complex science and engineering applications, new
\textbf{fundamental algorithmic developments} are required.
Several recent disruptive developments yield the promise of further fundamental
progress if new obstacles can be overcome.
Since single-processor clock speeds have stagnated,
any further increase of computational power must result from a
further increase in parallelism.
New mathematical and computer science techniques need to be explored that can
guide development of modern algorithms that are effective in the
new era of \textbf{ubiquitous parallelism} and extreme-scale computing.
In addition, the sea change provoked by the data revolution requires new methods for
\textbf{data-driven scientific discovery} and new algorithms for data analytics
that are effective at very large scale, as part of the comprehensive
broadening of the application fields of CSE to almost every field of
science, technology, and society.
Moreover, software itself is now broadly recognized as a key crosscutting technology that connects advances
in mathematics, computer science, and domain-specific science and engineering
to achieve robust and efficient simulations on advanced computing systems.
In order to deliver the comprehensive CSE promise, the role of
{\bf CSE software ecosystems} must be
redefined---encapsulating advances in algorithms, methods, and
implementations and thereby providing critical instruments to enable
scientific progress.
These exciting research challenges and opportunities will be
elaborated on in Section \ref{sec:research} of this document.
\subsection{The Broad CSE Community}
\label{sec:cse-community}
The past two decades have seen tremendous growth in the CSE community,
including a dramatic increase in both the size and breadth of intellectual
perspectives and interests. The growth in community size can be seen,
for example, through the membership of the SIAM Activity Group on CSE,
which has steadily increased from approximately 1,000 members in 2005 to about 2,300
in 2015.
The biennial SIAM CSE Conference~\cite{siam-cse-conference-series}
is now SIAM's largest conference, with growth
from about 400 attendees in 2000 to 1,700 attendees in 2015.
The increased breadth
of the community is evidenced in many ways: by the diversity of
minisymposium topics at SIAM CSE conferences; through a new broader
structure for the SIAM Journal on Scientific Computing, including a new
journal section that focuses on computational methods in specific
problems across science and engineering; and by the sharply increased
use of CSE approaches in industry~\cite{oden2011grand,HEC-IWG2013}.
As we envision the future of CSE, and in particular as we consider
educational programs, we must keep in mind that such a large and broad
intellectual community has a correspondingly broad set of
needs. Figure~\ref{Fig:CSE-community} presents one way to view the
different aspects of the broad CSE community: (1) \textit{CSE Core
Researchers and Developers}---those engaged in the conception,
analysis, development, and testing of CSE algorithms and software and
(2) \textit{CSE Domain Scientists and Engineers}---those primarily engaged
in developing and exploiting CSE methods for progress in
particular science and engineering campaigns.
The latter community can usefully be further
subcategorized into those who interact with the core technologies at a
developer level within their own applications, creating their own
implementations and contributing to methodological/algorithmic
improvements, and those who use state-of-the-art CSE technologies as
products, combining them with their expert knowledge of an application
area to push the boundaries of a particular application. Within the
\textit{CSE Core Researchers and Developers} group in
Figure~\ref{Fig:CSE-community}, we further identify two groups: those
focused on broadly applicable methods and algorithms and those focused
on methods and algorithms motivated by a specific domain of
application. This distinction is a useful way to
cast differences in desired outcomes for different types of CSE
educational programs, as will be discussed in Section
\ref{sec:education}.
As with any categorization, the dividing lines
in Figure~\ref{Fig:CSE-community} are fuzzy, and in fact any single
researcher might span multiple categories.
\begin{figure}[htb]
\centering
\includegraphics[width=0.65\textwidth]{./figures/Figs-AdobeIllustrator/cse_community.pdf}
\caption{One view of the different aspects of the broad CSE community. The part of the CSE community that focuses on developing new methods and algorithms is labeled {\em CSE Core Researchers and Developers}. This group may be driven by generally applicable methods or by methods developed for a specific application domain. {\em CSE Domain Scientists and Engineers} focus their work primarily in their scientific or engineering domain and make extensive use of CSE methods in their research or development work.
}
\label{Fig:CSE-community}
\end{figure}
\subsection{Organization of This Document}
The remainder of this document is organized as follows.
Section~\ref{sec:research} presents challenges and opportunities in
CSE research, organized into four main areas. First we discuss
key advances in core CSE methods and algorithms, and the
ever-increasing parallelism in computing hardware culminating
in the drive toward extreme-scale applications.
Next we describe how the ongoing data revolution offers tremendous
opportunities for breakthrough advances in science and engineering by
exploiting new techniques and approaches in synergy with data science,
and we discuss the challenges in advancing CSE software
given its key role as a crosscutting CSE technology.
Section~\ref{sec:education} discusses how major changes in
the CSE landscape are affecting the needs and goals of CSE education
and workforce development. Section~\ref{sec:conclusions} summarizes findings and formulates
recommendations for CSE over the next
decade.
\subsection{Emergence of Predictive Science}
\label{sec:predictive-cse}
The advances in CSE modeling, algorithms, simulation, big data analytics, HPC, and
scientific software summarized in this document
all aim toward the overarching goal of achieving truly predictive science capabilities.
Scientific experimentation and theory, the classical paradigms of the scientific method,
both strive to describe the physical world.
However, high-fidelity predictive capabilities can
be achieved only by numerical computation.
Predictive science now lies at the core of the new CSE discipline,
and this is why CSE will lead to fundamental changes for all of science.
CSE draws its predictive power from mathematics, statistics, and the natural sciences
as they underlie model selection, model calibration, model validation,
and model and code verification, all in the presence of uncertainties.
Ultimately CSE must also include the propagation of uncertainties through the forward
and inverse problems, in order to quantify the uncertainties of the outputs that are the target goals of the simulation.
When actual computer predictions are used for critical decisions,
all of these sources of uncertainty must be taken into account.
Ironically, at this time, when we witness the emergence of the new predictive science paradigm,
the algorithms and methods called upon to cope with these issues
have their roots in the mathematics and statistics of the past century and earlier.
Thus,
the new models, algorithms, and methodologies that will move this
area forward will require significant breakthroughs and substantial research efforts.
New algorithms that cope with the complexities
of predictive modeling are a challenging and fundamentally
important goal for CSE research. Their development, analysis,
and implementation constitute the new research agenda for CSE.
What predictive science and therefore what CSE will eventually be
is not yet fully understood.
We may see the coastline of
the ``continent of predictive science'' ahead of us, but we still have to explore
the whole mass of land that lies behind this coastline.
Even now, however, we can clearly see
that CSE and the transition to predictive science
will have a profound impact on education,
on how scientific software is developed,
on research methodologies,
and on the design of tomorrow's computers.
\section{Challenges and Opportunities in CSE Research}
\label{sec:research}
The field of CSE faces important challenges and opportunities for the next decade, following disruptive developments in extreme-scale parallel computing and ubiquitous parallelism, the emergence of big data and data-driven discovery, and a comprehensive broadening of the application fields of CSE. This section highlights important emerging developments in CSE methods and algorithms, in HPC, in data-driven CSE, and in software.
\input{advances-cse-core}
\input{hpc-cse}
\input{data-cse}
\input{software-cse}
\input{predictive-cse}
\section*{} % unnumbered, so we need to explicitly specify \markboth (see acknowledgments.tex)
\rhead{}
\lhead{\leftmark}
\cfoot{\thepage}
\input{abstract}
\mbox{} \clearpage
\input{dedication}
\mbox{} \clearpage
\newpage
\tableofcontents
\clearpage
\newpage
\pagenumbering{arabic}
\setcounter{page}{1}
\input{intro}
\input{research}
\input{education}
\input{conclusions}
\input{acknowledgments}
\addcontentsline{toc}{section}{Acknowledgments}
\newpage
\phantomsection
\addcontentsline{toc}{section}{References}
\bibliographystyle{siam}
\subsection{CSE Software}
\label{sec:software-cse}
\pagebudget{3}
CSE software ecosystems provide fundamental and pervasive
technologies that connect advances in applied mathematics, computer
science, and core disciplines of science and engineering
for advanced modeling, simulation, discovery, and analysis in CSE.
We discuss the importance and scope of CSE software, the increasing challenges in CSE software development and sustainability, and the emergence of software as a research agenda for CSE.
\subsubsection{Importance and Scope of CSE Software}
Software is an essential
product of CSE research when complex models of reality are cast into algorithms; moreover,
as shown in Figure~\ref{Fig:CSE-components}, the development of efficient, robust, and
sustainable software is at the core of CSE.
The CSE agenda for research includes the systematic design and
analysis of (parallel) software, its accuracy, and its computational
complexity (see Section~\ref{sec:hpc-cse}).
Beyond this, CSE research must deal with the
assessment of computational cost on the relevant hardware platforms,
as well as with soft criteria such as flexibility, usability,
extensibility, and interoperability.
Software that contributes to modeling, solution, and analysis is only
part of the software required in CSE. Equally important are operating
systems, programming models, programming languages, compilers,
debuggers, profilers, source-to-source translators, build systems,
dynamic resource managers, messaging systems, I/O systems, workflow
controllers, and other types of system software that support
productive human-machine interaction.
Software in this wider sense
also includes the infrastructure necessary to support a CSE research
ecosystem, such as version control, automatic tests for correctness
and consistency, documentation, handbooks, and tutorials. All of this
software is essential for CSE to continue to migrate up computational
scales, and it requires an interdisciplinary community to produce it
and to ensure that it coheres.
While the volume and complexity of scientific software have grown
substantially in recent decades~\cite{GroppHarrisonEtAl2016},
scientific software traditionally has not received the focused
attention it so desperately needs in order to fulfill this key
role as a cornerstone of long-term CSE
collaboration and scientific
progress~\cite{SoftwareProductivityWorkshop14,HerouxAllenEtAl2016,Hettrick2016}.
Rather, ``software has evolved organically and
inconsistently, with its development and adoption coming largely as
by-products of community responses to other targeted
initiatives'' \cite{KeyesTaylor2011}.
\subsubsection{Challenges of CSE Software}
The community faces increasing challenges in CSE software design, development,
and sustainability due to the confluence of disruptive changes in
computing architectures and demands for more complex simulations. New
architectures require fundamental algorithm and software
refactoring, while at the same time enabling new ranges of
modeling, simulation, and analysis.
\smallskip
\noindent
{\bf New science frontiers: increasing software demands.} CSE's
continual push toward new capabilities that enable truly predictive science
dramatically affects how codes are designed, developed, and used.
Software that incorporates multiphysics and multiscale modeling, capabilities
beyond interpretive simulations (such as UQ
and design optimization), and coupled data
analytics presents a host of difficulties not faced in
traditional contexts, because of the compounded complexities of code
interactions~\cite{KeyesMcInnesWoodwardEtAl13,HerouxAllenEtAl2016,SoftwareProductivityWorkshop14}.
A key challenge is
enabling the introduction of new models, algorithms, and data
structures over time---that is, balancing competing goals of interface
stability and software reuse with the ability to innovate
algorithmically and develop new approaches.
\smallskip
\noindent
{\bf Programmability of heterogeneous architectures.}
Designing and developing sustainable CSE software
is a challenging software engineering task,
not only at the extreme scale but also for conventional applications
that run on standard hardware.
The best software architecture is often
determined by performance considerations, and it is a high
art to identify kernel routines that can be used
as an internal interface for a software performance layer that
can be optimized for various architectures.
All modern computers are hierarchically structured. This structure, in turn, creates the need to
develop software with the hierarchy and the architecture in mind,
often using a hybrid combination of different languages and tools.
For example, a given application may utilize MPI on the system level,
OpenMP on the node level, and special libraries or low-level intrinsics to exploit core-level vectorization.
Newer techniques from computer science, such as automatic program generation, annotations,
and domain-specific languages, may eventually help reduce the gap between
real-life hardware structures and model complexity.
This complexity will need to be managed, and to some extent alleviated, in the future.
For example, the development of new and improved unifying languages, combined with the tools to select appropriate algorithms for target architectures and to implement these algorithms automatically,
may ease the burden on CSE software developers.
Such tools are topics
of current research and are therefore far from reaching the level of
maturity required to support large-scale development.
Consequently, CSE developers must currently rely on hardware-optimized libraries, or they must
master the complexity---typically in larger teams where members can specialize---by undertaking explicitly hardware-aware development.
This task is even more complex when accelerators,
such as GPUs, are to be used.
\smallskip
\noindent
{\bf Composability, interoperability, extensibility, portability.}
As CSE applications increase in
sophistication, no single person or team possesses the expertise and
resources to address all aspects of a simulation. Interdisciplinary
collaboration using software developed by independent groups becomes
essential. CSE researchers face daunting challenges in developing,
deploying, maintaining, extending, and effectively using libraries,
frameworks, tools, and application-specific infrastructure.
Practical difficulties in collaborative research software stem from
the need for composable and interoperable code with support for
managing complexity and change as architectures, programming models,
and applications continue to advance. Challenges include
coordinating the interfaces between components that need to
interoperate and ensuring that multiple components can be
used side by side without conflicts between programming models and
resources. Even more difficult challenges arise with the need to
exchange or control data between components, where many issues center
on ownership and structure of the data on which components act.
Moreover, good software must be extensible, to meet not only requirements known at
the time of its design but also unanticipated needs that change over time.
Software must also be portable across target architectures, including
laptops, workstations, and moderately sized clusters for much of the
CSE community. Even researchers who employ the full resources of
emerging extreme-scale machines typically develop and test their code
first on laptops and clusters, so that portability across this
entire spectrum is essential.
\medskip
\begin{tcolorbox}[title=CSE Success Story:
Numerical Libraries Provide Computational Engines for Advanced CSE Applications]
\begin{wrapfigure}{R}{0.58\textwidth}
\vspace{-0.15in}
\includegraphics[width=0.58\textwidth]{figures/xsdk-diagram-for-siam-report-compressed.pdf}
\end{wrapfigure}
Community collaboration and support are essential in
driving and transforming how large-scale, open-source software
is developed, maintained, and used. Work is under way in
developing an Extreme-scale Scientific Software Development Kit
(xSDK),\protect\footnotemark\ which is improving the interoperability,
portability, and sustainability of CSE libraries and application
components. The vision of the xSDK is to provide the foundation of
an extensible scientific software ecosystem
developed by diverse, independent teams throughout the
community, in order to improve the quality, reduce the cost, and
accelerate the development of CSE applications.
The xSDK incorporates, for example, the high-performance numerical
libraries hypre, PETSc, SUNDIALS, SuperLU, and Trilinos, which are
supported by the U.S. Department of Energy and
encapsulate cutting-edge algorithmic advances
to achieve robust, efficient, and scalable performance on
high-performance architectures.
These packages provide the
computational engines for thousands of
advanced CSE applications, such as
hydrology and biogeochemical cycling simulations using PFLOTRAN and
coupled 3D microscopic-macroscopic steel simulations (far left and far right images, respectively, in the ``CSE Applications'' box in the above diagram).
For example, this multiphase steel application, which
uses nonlinear and linear FETI-DP domain decomposition methods (in PETSc) and algebraic
multigrid (in hypre),
demonstrates excellent performance on the entire Blue Gene/Q at the
J\"{u}lich Supercomputing Centre (JUQUEEN, 458K cores) and the
Argonne Leadership Computing Facility (Mira, 786K cores).
\end{tcolorbox}
\footnotetext{
Information on xSDK available via \url{https://xsdk.info}.
PFLOTRAN simulations by G. Hammond (Sandia National Laboratories).
Multiphase steel simulations described in A.~Klawonn, M. Lanser, and O. Rheinbach,
SIAM J. Sci. Comput. 37(6), C667--C696, 2015; image courtesy of
J\"org Schr\"oder, Universit\"at Duisburg-Essen.}
\subsubsection{Software as a Research Agenda for CSE}
CSE software ecosystems, which support scientific research in
much the same way that a light source or telescope does, require a
substantial investment of human and capital resources, as well as
basic research on scientific software productivity so that the resulting software
products are fully up to the task of predictive simulations and
decision support.
Scientific software often has a much longer lifetime than does hardware;
in fact, software frequently outlives the teams that originally create it.
Traditionally, however, support for software has generally been
indirect, from funding sources focused on science or engineering
outcomes, and not the software itself. This circumstance---sporadic,
domain-specific funding that considers software only secondary to the
actual science it helps achieve---has caused huge difficulties,
not only for sustainable CSE software artifacts, but also for
sustainable CSE software careers.
This in turn has increasingly led to a mismanagement of
research investment, since scientific software as an
important CSE research outcome is rarely leveraged to its full potential.
Recent community reports express the imperative to firmly embrace
the fundamental role of open-source CSE software as a valuable research product
and cornerstone of CSE collaboration and thus to increase direct investment in
the software itself, not just as a byproduct of other
research~\cite{pitac05,GroppHarrisonEtAl2016,HerouxAllenEtAl2016,Hettrick2016,SoftwareProductivityWorkshop14,KeyesTaylor2011}.
The past decade has seen the development of many successful community-based open-source CSE software projects
and community science codes.
Aided by advances in supporting technology such as version control, bug tracking, and online collaboration, these projects leverage broad communities to develop free software with features at the leading edge of algorithmic research.
Examples in the area of finite-element methods include the deal.II, Dune, and FEniCS projects; and similar efforts have made tremendous contributions in many other areas of CSE.
\smallskip
\noindent
{\bf Reproducibility and sustainability.}
CSE software often captures the essence of research results.
It must therefore be considered equivalent to other scientific
outcomes and must be subjected to equivalent quality assurance procedures.
This requirement in turn means that criteria such as the reproducibility
of results~\cite{StoddenEtAl2013} must be given higher priority and that CSE software must be
more rigorously subjected to critical evaluation
by the scientific community.
Whenever a research team is faced with increased expectations for
independent review of computational results, the team's interest in
improved software methodologies increases commensurately. In fact, it
is not too strong to say that the affordability and feasibility of
reproducible scientific research are directly proportional to the
quality and sustainability of software methodologies and ecosystems.
Community efforts are beginning to address issues in
software sustainability, or the ability to maintain the scientifically useful
capability of a software product over its intended life span,
including understanding and modifying a software product's behavior to reflect
new and changing needs and technology~\cite{wssspe:home,HerouxAllenEtAl2016,Hettrick2016}.
Work is needed to determine value metrics for CSE software that fully
acknowledge its key role in scientific progress; to increase rewards
for developers of open-access, reliable, extensible, and sustainable
software; and to expand career paths for expert CSE software
developers.
\smallskip
\noindent
{\bf Software engineering and productivity.}
The role of software ecosystems as foundations for CSE discoveries
(see Figure \ref{Fig:CSE-components}) brings to the forefront issues of
software engineering and productivity,
which help address reproducibility and sustainability.
Software productivity expresses the effort, time, and cost of
developing, deploying, and maintaining a product having needed software
capabilities in a targeted scientific computing
environment~\cite{HerouxAllenEtAl2016,SoftwareProductivityWorkshop14}.
Work on software productivity focuses on improving the quality, decreasing
the cost, and accelerating the delivery of scientific applications,
as a key aspect of improving overall scientific productivity.
Software engineering, which can be defined as ``the application of
a systematic, disciplined, quantifiable approach to the development,
operation, and maintenance of software'' \cite{IEEE-glossary}, is central to any effort
to increase CSE software productivity.
While the
scientific community has much to learn from the mainstream software
engineering community,
CSE needs and environments are, in combination, sufficiently distinctive to
require fundamental research specifically for scientific software.
In particular, scientific software domains require extensive academic
background in order to understand how software can be designed,
written, and used for CSE investigations. Also, scientific
software is used for discovery and insight,
and hence requirements (and therefore all other phases of the
software lifecycle) are frequently changing.
Consequently, CSE software ecosystems and processes urgently require
focused research and substantial investment.
Another pressing need is education on
software engineering and productivity methodologies that are
specifically tailored to address the unique aspects of CSE, in the
contexts of both academic training and ongoing professional
development (see Section \ref{sec:education}).
With respect to many of these issues, CSE software research is still nascent,
since these themes have
been largely neglected in the evolution of the field to date.
As stated in a recent NITRD report~\cite{HerouxAllenEtAl2016},
``The time is upon us to address the growing challenge of
software productivity, quality, and sustainability that imperils the whole endeavor of
computation-enabled science and engineering.''
\section{Introduction}
Approximation theory in module categories was studied in the setting of finite dimensional algebras by Auslander, Reiten, and Smal\o \ and independently by Enochs and Xu for modules over arbitrary rings using the terminology of preenvelopes and precovers.
An important problem in approximation theory is when minimal approximations, that is, covers or envelopes, exist. In other words, for a given class $\mathcal{C}$, the aim is to characterise the rings over which every module has a minimal approximation provided by $\mathcal{C}$, and furthermore to characterise the class $\mathcal{C}$ itself.
Bass proved in~\cite{Bass} that projective covers rarely exist: he introduced and characterised the class of perfect rings, which are exactly the rings over which every module admits a projective cover. This motivated the study of minimal approximations for an arbitrary class $\mathcal{C}$.
Among the many characterisations of perfect rings, the most important from the homological point of view is the closure under direct limits of the class of projective modules. A famous theorem of Enochs says that for a class $\mathcal{C}$ in $\mathrm{Mod}\textrm{-}R$, if $\mathcal{C}$ is closed under direct limits, then any module that has a $\mathcal{C}$-precover has a $\mathcal{C}$-cover \cite{Eno}, \cite[Theorem 2.2.6 and 2.2.8]{Xu}.
The converse problem, that is, whether $\mathcal{C}$ being covering implies that $\mathcal{C}$ is closed under direct limits, is still open and is known as Enochs' Conjecture.
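A classical illustration of Enochs' theorem, recalled here as background (a standard fact, not a result of this paper), is the flat cover theorem: writing $\mathcal{P}_0(R)$ and $\mathcal{F}_0(R)$ for the classes of projective and flat right $R$-modules, Lazard's theorem gives

```latex
\[
\mathcal{F}_0(R) \;=\; \varinjlim \mathcal{P}_0(R),
\]
```

so the class of flat modules is closed under direct limits; since flat precovers exist over every ring, Enochs' theorem upgrades them to flat covers, recovering the theorem of Bican, El Bashir, and Enochs that every module has a flat cover.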
In 2018, Angeleri H\"ugel-\v Saroch-Trlifaj in \cite{AST17} proved that Enochs' Conjecture holds for a large collection of left-hand classes of cotorsion pairs. Explicitly, they proved that for a cotorsion pair $(\mathcal{A}, \mathcal{B})$ such that $\mathcal{B}$ is closed under direct limits, $\mathcal{A}$ is covering if and only if it is closed under direct limits. To prove this, Angeleri H\"ugel-\v Saroch-Trlifaj used methods developed in \v{S}aroch's paper \cite{S17}, which uses sophisticated set-theoretical methods in homological algebra.
In this paper we are interested in Enochs' Conjecture for the class $\mathcal{P}_1(R)$. The question naturally splits into two cases: the case that the cotorsion pair $(\mathcal{P}_1(R), \mathcal{P}_1(R)^\perp)$ is of finite type (which occurs if and only if $\mathcal{P}_1(R)^\perp$ is closed under direct sums, or equivalently when $(\mathcal{P}_1(R), \mathcal{P}_1(R)^\perp)$ is a $1$-tilting cotorsion pair), and the case when it is not of finite type.
In a forthcoming paper \cite{BLG20} we consider $1$-tilting cotorsion pairs $(\mathcal{A}, \mathcal{B})$ over commutative rings $R$ and characterise the rings over which $\mathcal{A}$ is covering using a purely algebraic approach.
In this paper we consider the case that the cotorsion pair $(\mathcal{P}_1(R), \mathcal{P}_1(R)^\perp)$ is not necessarily of finite type (see Proposition~\ref{P:not-finite-type}). To the best of our knowledge, up until now there have been no positive results for this question. Thus this paper provides a first positive result in the case of non-finite type.
In the investigation of when $\mathcal{P}_1(R)$ is covering, the class $\varinjlim \mathcal{P}_1(R)$ plays an important role, although it is not always well understood. Unlike the case of the projective modules, where their direct limit closure is the class of flat modules, it is not necessarily true that the direct limit closure of $ \mathcal{P}_1(R)$ coincides with the class $\mathcal{F}_1(R)$, of the modules of weak dimension at most $1$. The inclusion $\varinjlim \mathcal{P}_1(R) \subseteq \mathcal{F}_1(R)$ always holds, however an example of rings where $\varinjlim \mathcal{P}_1(R) \subsetneq \mathcal{F}_1(R)$ can be found in \cite[Example 9.12]{GT12}. For certain nice rings, such as commutative domains, the two classes $\varinjlim \mathcal{P}_1(R)$ and $\mathcal{F}_1(R)$ coincide (\cite[Theorem 9.10]{GT12}).
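The inclusion $\varinjlim \mathcal{P}_1(R) \subseteq \mathcal{F}_1(R)$ follows from the standard fact that $\operatorname{Tor}$ commutes with direct limits (spelled out here for convenience): if $M = \varinjlim_i M_i$ with each $M_i \in \mathcal{P}_1(R)$, then

```latex
\[
\operatorname{Tor}^R_2(M, N)
  \;\cong\; \varinjlim_i \operatorname{Tor}^R_2(M_i, N)
  \;=\; 0
  \qquad \text{for every } N \in R\textrm{-}\mathrm{Mod},
\]
```

since $\operatorname{p.dim} M_i \leq 1$ forces $\operatorname{Tor}^R_2(M_i, -) = 0$; hence $\operatorname{w.dim} M \leq 1$.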
This paper is structured as follows.
We begin in Section~\ref{S:prelim} with some preliminaries.
The aim of Section~\ref{S:limP_1} is to give a characterisation of the class $\varinjlim \mathcal{P}_1(R)$ for a not-necessarily commutative ring $R$ which has a classical ring of quotients $Q$. This generalises a result from \cite{BH09} which was proved under the additional assumption that the little finitistic dimension of $Q$ is zero. A main result of Section~\ref{S:limP_1} is Proposition~\ref{P:P1-F1}, which states that $\varinjlim \mathcal{P}_1(R)$ is exactly the intersection of $\mathcal{F}_1(R)$ with the left $\operatorname{Tor}^R_1$-orthogonal of the minimal cotilting class of cofinite type of $Q\textrm{-}\mathrm{Mod}$ (see the definitions in Section~\ref{S:limP_1}).
After an overview of some useful results for commutative rings in Section~\ref{S:properties}, in Section~\ref{S:sh-P1-cov} we
assume that $\mathcal{P}_1(R)$ is a covering class, and state some consequences of this assumption for the total ring of quotients $Q$ of $R$ and for localisations of $R$.
Finally in Section~\ref{S:sh-comm} we restrict to looking at only commutative semihereditary rings.
The main result of this paper is a positive solution of Enochs' Conjecture for the class $\mathcal{P}_1(R)$ over a commutative semihereditary ring $R$. In Theorem~\ref{T:sh-P1-cov-lim} we show that in this case $\mathcal{P}_1(R)$ is covering if and only if the ring is hereditary, which clearly implies that $\mathcal{P}_1(R)$ is closed under direct limits.
This provides us with an example of a class of rings for which $\mathcal{P}_1(R)$ satisfies Enochs' Conjecture even though $(\mathcal{P}_1(R), \mathcal{P}_1(R)^\perp)$ may not be of finite type.
\section{Preliminaries}\label{S:prelim}
$R$ will always denote an associative ring with unit and $\Modr R$ ($R\textrm{-}\mathrm{Mod}$) the category of right (left) $R$-modules.
For a ring $R$, $\modr R$ will denote the class of right $R$-modules admitting a projective resolution consisting of finitely generated projective modules.
Let $\mathcal{C} $ be a class of right $R$-modules. The right $\operatorname{Ext}^1_R$-orthogonal and right $\operatorname{Ext}^\infty_R$-orthogonal classes of $\mathcal{C}$ are defined as follows.
\[
\mathcal{C} ^{\perp_1} =\{M\in \Modr R \ | \ \operatorname{Ext}_R^1(C,M)=0 \ {\rm for \
all\ } C\in \mathcal{C}\}\]
\[\mathcal{C}^\perp = \{M\in \Modr R \ | \ \operatorname{Ext}_R^i(C,M)=0 \ {\rm for \
all\ } C\in \mathcal{C}, \ {\rm for \
all\ } i\geq 1 \}\]
The left Ext-orthogonal classes ${}^{\perp_1} \mathcal{C}$ and ${}^\perp \mathcal{C}$ are defined symmetrically.
For $\mathcal{C}$ a class in $\mathrm{Mod}\textrm{-}R$, the right $\operatorname{Tor}^R_1$-orthogonal and right $\operatorname{Tor}^R_\infty$-orthogonal classes are classes in $R\textrm{-}\mathrm{Mod}$ defined as follows.
\[\mathcal{C}^{\intercal_1}=\{M\in R\textrm{-}\mathrm{Mod} \ | \ \operatorname{Tor}^R_1(C,M)=0, \ {\rm for \
all\ } C\in \mathcal{C} \}
\]
\[\mathcal{C}^{\intercal}=\{M\in R\textrm{-}\mathrm{Mod} \ | \ \operatorname{Tor}^R_i(C,M)=0 \ {\rm for \
all\ } C\in \mathcal{C}, \ {\rm for \ all\ } i \geq1\}\]
The left $\operatorname{Tor}^R_1$-orthogonal and left $\operatorname{Tor}^R_\infty$-orthogonal classes ${}^{\intercal_1}\mathcal{C}$, ${}^{\intercal}\mathcal{C}$ are classes in $\mathrm{Mod}\textrm{-}R$ which are defined symmetrically for a class $\mathcal{C}$ in $R\textrm{-}\mathrm{Mod}$.
If the class $\mathcal{C}$ has only one element, say $\mathcal{C} = \{X\}$, we write $X^{\perp_1}$ instead of $\{X\}^{\perp_1}$, and similarly for the other $\operatorname{Ext}$-orthogonal and $\operatorname{Tor}$-orthogonal classes.
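To illustrate the notation with a standard example (not needed in the sequel): take $R = \mathbb{Z}$ and $X = \mathbb{Z}/p\mathbb{Z}$ for a prime $p$. Since $\operatorname{Ext}^1_{\mathbb{Z}}(\mathbb{Z}/p\mathbb{Z}, M) \cong M/pM$ and $\operatorname{Tor}^{\mathbb{Z}}_1(\mathbb{Z}/p\mathbb{Z}, M) \cong \{\, m \in M \mid pm = 0 \,\}$, one obtains
\[ X^{\perp_1} = \{\, M \mid M = pM \,\}, \qquad X^{\intercal_1} = \{\, M \mid pm = 0 \text{ implies } m = 0 \,\},\]
the classes of $p$-divisible and of $p$-torsion-free abelian groups, respectively.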
\\
We denote by $\mathcal{P}_n(R)$, ($\mathcal{F}_n(R)$) the class of right $R$-modules of projective (flat) dimension at most $n$ and by $\clP_1(\mathrm{mod}\textrm{-}R)$ the class $\mathcal{P}_1(R)\cap \mathrm{mod}\textrm{-}R$, that is the class of finitely presented right $R$-modules of projective dimension at most $1$.
The projective dimension (weak or flat dimension) of a right $R$-module $M$ is denoted $\operatorname{p.dim}_R M$ ($\operatorname{w.dim}_R M$).
We will omit the $R$ when the ring is clear from context.
Given a ring $R$, the {\sl right big finitistic dimension}, $\operatorname{F.dim} R$, is the supremum of the projective dimensions of the right $R$-modules of finite projective dimension, and the {\sl right big weak finitistic dimension}, $\operatorname{F.w.dim} R$, is the supremum of the flat dimensions of the right $R$-modules of finite flat dimension. The {\sl right little finitistic dimension}, $\operatorname{f.dim} R$, is the supremum of the projective dimensions of the modules in $\modr R$ of finite projective dimension.
\\
For any class $\mathcal{C}$ of modules we recall the notion of
a $\mathcal{C}$-{\sl precover}, a {\sl special} $\mathcal{C}$-{\sl precover} and of a $\mathcal{C}$-{\sl cover} (see \cite{Xu}).
\begin{defn} Let $\mathcal{C}$ be a class of modules, $M$ a right $R$-module and $C\in \mathcal{C}$. A homomorphism $\phi\in \operatorname{Hom}_R(C, M)$ is called a
$\mathcal{C}$-{\sl precover} (or right approximation) of $M$ if for every homomorphism $f'\in \operatorname{Hom}_R(C', M)$ with $C'\in \mathcal{C}$ there exists a homomorphism
$f\colon C'\to C$ such that $f'= \phi f$.
A $\mathcal{C}$-precover $\phi\in \operatorname{Hom}_R(C, M)$ is called a $\mathcal{C}$-{\sl cover} (or a minimal right approximation) of $M$ if every endomorphism $f$ of $C$ such that $\phi=
\phi f$
is an automorphism of $C$. Thus a $\mathcal{C}$-cover is a minimal version of a $\mathcal{C}$-precover.
A $\mathcal{C}$-precover $\phi$ of $M$ is said to be {\sl special} if $\phi$ is an epimorphism and $\operatorname{Ker} \phi\in \mathcal{C}^{\perp_1}$.
\end{defn}
The notions of $\mathcal{C}$-{\sl preenvelope} (left approximations), {\sl special} $\mathcal{C}$-{\sl preenvelope} and $\mathcal{C}$-{\sl envelope} (minimal left approximations) are defined dually.
The relation between $\mathcal{C}$-precovers and $\mathcal{C}$-covers is provided by the following results due to Xu.
\begin{prop}\label{P:Xu} \cite[Corollary 1.2.8]{Xu} Let $\mathcal{C}$ be a class of modules and assume that a module $M$ admits a $\mathcal{C}$-cover. Then a $\mathcal{C}$-precover
$\phi\colon C\to M$ is a
$\mathcal{C}$-cover if and only if $\operatorname{Ker} \phi$ does not contain any non-zero direct summand of $C$.
\end{prop}
A class $\mathcal{C}$ of $R$-modules is called {\sl covering} ({\sl precovering, special precovering}) if every module admits a $\mathcal{C}$-cover ($\mathcal{C}$-precover, special $\mathcal{C}$-precover).
In the approximation theory of modules one is interested in determining when certain classes provide minimal approximations. Enochs and Xu~\cite[Theorem 2.2.8]{Xu} proved that a precovering class closed under direct limits is covering.
Enochs asked whether closure under direct limits of a class $\mathcal{C}$ is a necessary condition for the existence of $\mathcal{C}$-covers.
Our aim is to investigate this problem for the class $\mathcal{P}_1(R)$.\\%
We consider precovers and preenvelopes for particular classes of modules, namely for classes which form a cotorsion pair.
A pair of classes of modules $(\mathcal{A}, \mathcal{B})$ is a {\sl%
cotorsion pair} provided that $\mathcal{A} = {}^{\perp_1}
\mathcal{B}$ and $
\mathcal{B} =\mathcal{A}^{\perp_1}$. A cotorsion pair $(\mathcal{A}, \mathcal{B})$ is called {\sl hereditary} if $\mathcal{A} = {}^{\perp} \mathcal{B}$ and $\mathcal{B} = \mathcal{A}^{\perp}$.
A cotorsion pair $(\mathcal{A}, \mathcal{B})$ is {\sl complete} provided that every $R$-module $M$ admits a {special $\mathcal{B}$-preenvelope} or equivalently, every $R$-module $M$ admits a {special $\mathcal{A}$-precover} (\cite{Sal}).
A hereditary cotorsion pair $(\mathcal{A}, \mathcal{B})$ is of {\sl finite type} if there is a set $\mathcal{S}\subseteq\mathrm{mod}\textrm{-}R$ such that $\mathcal{S}^\perp=\mathcal{B}.$
Examples of complete cotorsion pairs in $\mathrm{Mod}\textrm{-}R$ include $(\mathcal{P}_0, \mathrm{Mod}\textrm{-}R)$, $(\mathcal{F}_0, \mathcal{F}_0^\perp)$, $(\mathcal{P}_1(R), \mathcal{P}_1(R)^\perp)$ (see \cite[Theorem 8.10]{GT12}).\\
A useful result for covers is given by the following.
\begin{prop}\label{P:A-covers}
Let $(\mathcal{A}, \mathcal{B})$ be a complete cotorsion pair in $\mathrm{Mod}\textrm{-}R$. Assume that
$ A\overset{\phi}\to M\to 0$ is an $\mathcal{A}$-cover of the $R$-module $M$. Let $\alpha$ be an automorphism of $M$ and let $\beta $ be an endomorphism of $A$ such that $\phi\beta=\alpha\phi$. Then $\beta$ is an automorphism of $A$.
\end{prop}
\begin{proof}
By the Wakamatsu Lemma (see \cite[Lemma 2.1.1]{Xu}) $\phi$ gives rise to an exact sequence
$0\to B\overset{\mu}\to A\overset{\phi}\to M\to 0$ with $B\in \mathcal{B}$. Since $\alpha$ is an automorphism of $M$, it is immediate to see that $\operatorname{Ker} \alpha \phi \cong B \in \mathcal{B}$ and
that $0\to B\to A\overset{\alpha\phi}\to M\to 0$ is an $\mathcal{A}$-cover of $M$.
Let $\beta$ be as assumed and consider an endomorphism $g$ of $A$ such that $\alpha\phi g=\phi$. Then $\phi\beta g=\phi$ and thus $\beta g$ is an automorphism of $A$, since $\phi$ is an $\mathcal{A}$ cover of $M$. This implies that $\beta$ is an epimorphism.
To see that $\beta$ is a monomorphism, note that $\alpha \phi g \beta = \phi \beta g \beta = \phi \beta = \alpha \phi$; hence, by the cover property of $\alpha \phi$, the endomorphism $g \beta$ is an automorphism, so $\beta$ is a monomorphism and therefore an automorphism, as required.
\end{proof}
\begin{cor}\label{C:cov-of-localisation}
Let $R$ be a commutative ring, $R[S^{-1}]$ be the localisation of $R$ at a multiplicative subset $S$. Let $(\mathcal{A}, \mathcal{B})$ be a cotorsion pair in $\mathrm{Mod}\textrm{-}R$. Suppose
\[
(\ast)\quad 0 \to B \to A \overset{\phi} \to M\to 0
\]
is an $\mathcal{A}$-cover of an $R[S^{-1}]$-module $M$.\\Then $(\ast)$ is a short exact sequence in $\mathrm{Mod}\textrm{-}R[S^{-1}]$.
\end{cor}
\begin{proof}
Let $M$ and $\phi$ be as assumed. For every $s \in S$, multiplication by $s$ is an automorphism $\alpha$ of $M$, and multiplication by $s$ on $A$ is an endomorphism $\beta$ of $A$ satisfying $\phi\beta=\alpha\phi$. Therefore, by Proposition~\ref{P:A-covers}, multiplication by every $s \in S$ is an automorphism of $A$, that is, $A$ is an $R[S^{-1}]$-module. Thus the sequence $(\ast)$ is an exact sequence in $\mathrm{Mod}\textrm{-}R[S^{-1}]$, since $R \to R[S^{-1}]$ is a ring epimorphism, hence the embedding $\mathrm{Mod}\textrm{-}R[S^{-1}] \to \mathrm{Mod}\textrm{-}R$ is fully faithful.
\end{proof}
\section{The direct limit closure of $\mathcal{P}_1(R)$}\label{S:limP_1}
From now on, $R$ will always be a ring such that $\Sigma$, the set of regular elements of $R$, satisfies both the left and right Ore conditions. The {\sl classical ring of quotients} of $R$, denoted $Q = Q(R)$ is the ring $R[\Sigma^{-1}] = [\Sigma^{-1}] R$ which is flat both as a right and a left $R$-module.
Additionally, we recall that an ideal $I$ of $R$ is called {\sl regular} if $I$ contains a regular element of $R$, that is $I \cap \Sigma \neq \emptyset$.
Recall that $\mathcal{P}_1(R)$ denotes the class of right $R$-modules with projective dimension at most $1$ and $\clP_1(\mathrm{mod}\textrm{-}R)$ is the set $\mathcal{P}_1(R)\cap \mathrm{mod}\textrm{-}R$.
A {\sl $1$-cotilting class of cofinite type} is the $\operatorname{Tor}^R_1$-orthogonal of a set of modules in $\clP_1(\mathrm{mod}\textrm{-}R)$. Thus the {\sl minimal} $1$-cotilting class of cofinite type is $\clP_1(\mathrm{mod}\textrm{-}R)^\intercal$, which we will denote by $\mathcal{C}(R)$. \\
The purpose of this section is to describe the class $\varinjlim \mathcal{P}_1(R)$ generalising a result in \cite[Theorem 6.7 (vi)]{BH09} which was proved under the assumption $\operatorname{f.dim} Q =0$.
We begin by recalling the following result, which states that it suffices to consider the finitely presented modules in $\mathcal{P}_1(R)$ in order to compute its direct limit closure.
\begin{thm}\cite[Corollary 9.8]{GT12}\label{T:lim-P1-P1modR}
Let $R$ be a ring. Then $\varinjlim \mathcal{P}_1(R) = \varinjlim \clP_1(\mathrm{mod}\textrm{-}R)= {}^\intercal(\clP_1(\mathrm{mod}\textrm{-}R)^\intercal)$ and $\clP_1(\mathrm{mod}\textrm{-}R)^\intercal = \mathcal{P}_1(R)^\intercal=\big(\varinjlim \mathcal{P}_1(R)\big)^\intercal.$
\end{thm}
Following the nomenclature of \cite{BH09}, $\mathcal{D}$ will denote the class $\{D\in{}\mathrm{Mod}\textrm{-}R \mid \operatorname{Ext}^1_R(R/rR, D)=0,\, r \in{}\Sigma \}$ of {\sl divisible} right $R$-modules and $\mathcal{T}\mathcal{F}$ will denote the class $\{ N\in R\textrm{-}\mathrm{Mod} \mid \operatorname{Tor}^R_1(R/rR,\, N)=0,\, r \in \Sigma \}$ of {\sl torsion-free} left $R$-modules. Divisible left $R$-modules and torsion-free right $R$-modules are defined analogously.
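For instance, over $R=\mathbb{Z}$ the regular elements are the non-zero integers, and since $\operatorname{Ext}^1_{\mathbb{Z}}(\mathbb{Z}/r\mathbb{Z}, D) \cong D/rD$ and $\operatorname{Tor}^{\mathbb{Z}}_1(\mathbb{Z}/r\mathbb{Z}, N) \cong \{\, n \in N \mid rn = 0 \,\}$, the classes $\mathcal{D}$ and $\mathcal{T}\mathcal{F}$ are the usual classes of divisible and of torsion-free abelian groups; for example $\mathbb{Q}$ belongs to both, while $\mathbb{Z}$ is torsion-free but not divisible.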
By \cite[Lemma 5.3]{BH09}, for a ring $R$ with a classical ring of quotients $Q$ and every torsion-free left $R$-module ${}_RN$, $\operatorname{Tor}^R_1(Q/R, N)=0$. Analogously, for every torsion-free right $R$-module $N_R$, $\operatorname{Tor}^R_1(N, Q/R)=0$.
Additionally, \cite[Lemma 6.2]{BH09} establishes that for a ring $R$ with classical ring of quotients $Q$, a right $Q$-module $V$ is in $\mathcal{P}_1(Q)$ if and only if there is a right $R$-module $M$ in $\mathcal{P}_1(R)$ such that $V=M\otimes_RQ$.
The following lemma is a sort of analogue to \cite[Lemma 6.2]{BH09} for the finitely presented case.
\begin{lem}\label{L:fp-P1-Q}
Let $R$ be a ring with classical ring of quotients $Q$. If $C_Q \in \clP_1(\mathrm{mod}\textrm{-}Q)$, then there exist $P_Q \in \mathcal{P}_0(\mathrm{mod}\textrm{-}Q)$ and $N_R \in \clP_1(\mathrm{mod}\textrm{-}R)$ such that $C_Q \oplus P_Q \cong N \otimes_R Q$.
\end{lem}
\begin{proof}
The argument follows from the proofs of ~\cite[Lemma 6.4]{BH09} and \cite[Lemma 6.2]{BH09}, which we reiterate here for completeness.
Take $C_Q \in \clP_1(\mathrm{mod}\textrm{-}Q)$. Then by \cite[Lemma 6.4]{BH09}, there exists a $P_Q \in \mathcal{P}_0(\mathrm{mod}\textrm{-}Q)$ and a short exact sequence
\[
0 \to Q^m \overset{\mu}\to Q^n \to C_Q \oplus P_Q \to 0
\]
Let $(d_1, \dots, d_m)$ be the canonical basis of the right
$Q$-free module $Q^{m}$. The monomorphism $\mu$ is represented
by a column-finite matrix $A'$ with entries in $Q=R[\Sigma^{-1}]$
acting as left multiplication on the basis elements $d_i$.
Change the basis $(d_1, \dots, d_m)$ to the basis $(r_id_i\colon 1\leq i\leq m)$, where, for each $i$, $r_i\in \Sigma$ is a common denominator of the entries of the $i^{\rm{th}}$ column of $A'$, so that the morphism $\mu$ can be represented
by a column-finite matrix $A$ with entries in $R$. As $R$ is contained in
$Q$, we get the short exact sequence
\[0\to R^m\overset{\nu}\to R^n\to\operatorname{Coker} \nu\to 0,\]
where the map $\nu$ is represented by the matrix $A$. Then $\nu \otimes_R \operatorname{id}_Q = \mu$; in particular $\nu$ is a monomorphism and $\operatorname{Coker} \nu \otimes_R Q \cong C_Q \oplus P_Q$, so $\operatorname{Coker} \nu \in \clP_1(\mathrm{mod}\textrm{-}R)$ is the desired $N$.
\end{proof}
Recall that $\mathcal{C}(R)=\clP_1(\mathrm{mod}\textrm{-}R)^\intercal=\mathcal{P}_1(R)^\intercal=\big(\varinjlim \mathcal{P}_1(R)\big)^\intercal$.
\begin{lem}\label{L:min-cotilt-Q}
Let $R$ be a ring with classical ring of quotients $Q$. Then the following hold.
\begin{enumerate}
\item[(i)] $\mathcal{C}(R) \cap Q\textrm{-}\mathrm{Mod} = \mathcal{C}(Q)$.
\item[(ii)] If $Z \in \mathcal{C}(R)$, then $Q \otimes_R Z \in \mathcal{C}(Q)$.
\end{enumerate}
\end{lem}
\begin{proof}
(i) By well-known homological formulas, the flatness of $Q$ implies that for each $M \in \mathrm{Mod}\textrm{-}R$ and $N \in Q\textrm{-}\mathrm{Mod}$, there is the following isomorphism.
\[\operatorname{Tor}^R_1(M, N) \cong \operatorname{Tor}^Q_1(M \otimes_R Q, N)\]
Let $N \in Q\textrm{-}\mathrm{Mod}$. Then $N \in \mathcal{C}(R)$ if and only if the left-hand side in the above isomorphism vanishes for every $M \in \mathcal{P}_1(R)$. On the other hand, $N \in \mathcal{C}(Q)$ if and only if $N \in \mathcal{P}_1(Q)^\intercal$, which in view of \cite[Lemma 6.2]{BH09} amounts to the right-hand side in the above isomorphism vanishing for every $M \in \mathcal{P}_1(R)$. Therefore $N \in \mathcal{C}(R) \cap Q\textrm{-}\mathrm{Mod}$ if and only if $N \in \mathcal{C}(Q)$, which proves $\mathcal{C}(R) \cap Q\textrm{-}\mathrm{Mod} = \mathcal{C}(Q)$.
For (ii), we first note that $\mathcal{C}(R)$ is closed under direct limits as $\operatorname{Tor}$ commutes with direct limits.
As $Q$ is flat as a right $R$-module, by Lazard's theorem one can write $Q$ as a direct limit of finitely generated free right $R$-modules, $Q = \varinjlim_\alpha R^{n_\alpha}$. Fix a $Z \in \mathcal{C}(R)$. Then $Q \otimes_R Z \cong \varinjlim_\alpha Z^{n_\alpha}$, which is in $\mathcal{C}(R)$, since $\mathcal{C}(R)$ is closed under finite direct sums and direct limits. Moreover, $Q \otimes_R Z \in Q\textrm{-}\mathrm{Mod}$, hence $Q \otimes_R Z \in \mathcal{C}(Q)$ by part (i).
\end{proof}
\begin{prop}\label{P:P1-F1}
Let $R$ be a ring with classical ring of quotients $Q$. Then
\[ \varinjlim \mathcal{P}_1(R) = \mathcal{F}_1(R) \cap {}^{\intercal} \mathcal{C} (Q),\]
where ${}^{\intercal} \mathcal{C} (Q)$ represents the left $\operatorname{Tor}^R_1$-orthogonal of $\mathcal{C}(Q)$ in $\mathrm{Mod}\textrm{-}R$.
In particular, if $\operatorname{f.dim} Q =0$, $\varinjlim \mathcal{P}_1(R) = \mathcal{F}_1(R) \cap {}^{\intercal} Q\textrm{-}\mathrm{Mod}$.
\end{prop}
\begin{proof}
First we suppose that $M \in \varinjlim \mathcal{P}_1(R)$ and show that $M \in \mathcal{F}_1(R) \cap {}^{\intercal} \mathcal{C} (Q)$. The inclusion $\varinjlim \mathcal{P}_1(R) \subseteq \mathcal{F}_1(R)$ always holds so it remains to show that $M \in {}^{\intercal} \mathcal{C} (Q)$. By Theorem~\ref{T:lim-P1-P1modR}, if $M \in \varinjlim \mathcal{P}_1(R)$, then $M \in {}^{\intercal} \mathcal{C} (R)$. As $\mathcal{C}(Q) \subseteq \mathcal{C}(R)$ by Lemma~\ref{L:min-cotilt-Q}(i), it follows that ${}^{\intercal} \mathcal{C} (R) \subseteq {}^{\intercal} \mathcal{C} (Q)$, so $M \in {}^{\intercal} \mathcal{C} (Q)$ as required.
For the converse, fix $M \in \mathcal{F}_1(R) \cap {}^{\intercal} \mathcal{C} (Q)$. We will show $M \in {}^{\intercal} \mathcal{C} (R)$, thus the conclusion follows as $\varinjlim \mathcal{P}_1(R) = {}^{\intercal} \mathcal{C} (R)$ by Theorem~\ref{T:lim-P1-P1modR}.
The class $\mathcal{C}(R)$ is contained in $\{ R/sR \mid s \in \Sigma\}^\intercal$, so it consists of torsion-free left $R$-modules; thus by \cite[Lemma 5.3]{BH09}, $\operatorname{Tor}^R_1(Q/R, N)=0$ for every $N \in \mathcal{C}(R)$.
Therefore for every $N \in \mathcal{C}(R)$ there is a short exact sequence in $R\textrm{-}\mathrm{Mod}$,
\[ 0 \to N \to Q \otimes_R N \to Q/R \otimes_R N \to 0.\]
Apply $M \otimes_R -$ to this sequence to get the following exact sequence
\[\operatorname{Tor}^R_2(M, Q/R \otimes_R N) \to \operatorname{Tor}^R_1(M, N) \to \operatorname{Tor}^R_1(M, Q \otimes_R N).\]
The left-most term vanishes as $M \in \mathcal{F}_1(R)$ and the right-most term vanishes as $M \in {}^{\intercal} \mathcal{C} (Q)$ and $N \in \mathcal{C}(R)$ implies $Q \otimes_R N \in \mathcal{C}(Q)$ by Lemma~\ref{L:min-cotilt-Q}(ii). Therefore the central term vanishes for every $N \in \mathcal{C}(R)$, that is $M \in {}^{\intercal}\mathcal{C}(R)$.
The final statement follows since, if $\operatorname{f.dim} Q=0$, then $\clP_1(\mathrm{mod}\textrm{-}Q) = \mathcal{P}_0(\mathrm{mod}\textrm{-}Q)$. Therefore $\mathcal{C}(Q)= \mathcal{P}_0(\mathrm{mod}\textrm{-}Q)^\intercal= Q\textrm{-}\mathrm{Mod}$.
\end{proof}
We denote by $\mathcal{B}(R)$ the right $\operatorname{Ext}_R^1$-orthogonal class $\mathcal{P}_1(R)^\perp$ in $\mathrm{Mod}\textrm{-}R$. This notation is particularly useful because we often change between the ring $R$ and its localisation $Q$ and their module categories.
Recall that by ~\cite[Proposition 4.1]{BH09}, the cotorsion pair $(\mathcal{P}_1(R),\mathcal{B}(R))$ is of finite type if and only if $\mathcal{B}(R)$ is closed under direct sums.
\\
{\sl From now on all rings will be commutative}.
When $R$ is commutative, its classical ring of quotients is also called the total ring of quotients.
We begin with a lemma.
\begin{lem}\label{L:P1-perp}
Let $R$ be a commutative ring. Then $\mathcal{B}(R) \cap \mathrm{Mod}\textrm{-}Q = \mathcal{B}(Q)$.
\end{lem}
\begin{proof}
Fix some $P \in \mathcal{P}_1(R)$ and $B \in \mathrm{Mod}\textrm{-}Q$ and consider the natural isomorphism
\[
\operatorname{Ext}^1_R(P, B) \cong \operatorname{Ext}^1_Q(P \otimes_R Q, B)
.\]
If $B \in \mathcal{B}(R)$, then the left-hand side in the isomorphism vanishes, thus $B \in \mathcal{B}(Q)$, as every module in $\mathcal{P}_1(Q)$ is of the form $P \otimes_R Q$ for some $P \in \mathcal{P}_1(R)$ by \cite[Lemma 6.2]{BH09}. Conversely, if $B \in \mathcal{B}(Q)$, then the right-hand side vanishes, so we conclude that $B \in \mathcal{B}(R)$.
\end{proof}
Recall that a commutative ring $R$ is {\sl perfect} if and only if $\operatorname{F.dim} R=0$.
In \cite{BSa} and \cite{FS}, a commutative ring $R$ is called {\sl almost perfect} if its total ring of quotients $Q$ is a perfect ring and for every regular non-unit element $r$ in $R$, $R/rR$ is a perfect ring.
If $R$ is a ring and $\{ a_1, a_2, \dots, a_n, \dots\}$ is a sequence of elements of $R$, a {\sl Bass right $R$-module} is a flat module of the form
\[F=\varinjlim(R\overset{a_1}\to R\overset{a_2}\to R\overset{a_3}\to\cdots).\]
All Bass $R$-modules have projective dimension at most one. Thus the class of Bass $R$-modules is contained in $\mathcal{F}_0(R) \cap \mathcal{P}_1(R)$.
In \cite{Bass}, Bass noticed that a (not-necessarily commutative) ring $R$ is right perfect if and only if every Bass right $R$-module is projective.
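As a standard example, for $R = \mathbb{Z}$ and the constant sequence $a_i = p$, $p$ a prime, the corresponding Bass module is
\[F=\varinjlim(\mathbb{Z}\overset{p}\to \mathbb{Z}\overset{p}\to \mathbb{Z}\overset{p}\to\cdots)\cong \mathbb{Z}[1/p],\]
a flat module of projective dimension exactly $1$ (it is countably generated and not free, hence not projective); in accordance with Bass' characterisation, it witnesses that $\mathbb{Z}$ is not a perfect ring.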
\begin{prop}\label{P:conj-perfect} Let $R$ be a commutative ring with total ring of quotients $Q$. Consider the following conditions:
\begin{enumerate}
\item[(i)] $\mathcal{P}_1(R)=\mathcal{F}_1(R)$.
\item[(ii)] $\mathcal{P}_1(R)$ is closed under direct limits.
\item[(iii)] For every regular non-unit element $r$ of $R$, $R/rR$ is a perfect ring.
\item[(iv)] $R$ is an almost perfect ring.
\end{enumerate}
Then (i) $\Rightarrow$ (ii) $\Rightarrow$ (iii) and (iv) $\Rightarrow$ (iii).
If $\operatorname{F.w.dim} Q=0$, then (i) and (ii) are equivalent; if moreover $Q$ is a perfect ring, then all four conditions are equivalent.
\end{prop}
\begin{proof}
The implications (i) $\Rightarrow$ (ii) and (iv) $\Rightarrow$ (iii) are straightforward.
The implication (ii) $\Rightarrow$ (iii) is a slight generalisation of \cite[Proposition 8.5]{BP1}, where (i) $\Rightarrow$ (iii) is proved. Suppose that $\mathcal{P}_1(R)$ is closed under direct limits, and fix a regular non-unit $r \in \Sigma$ and a Bass $R/rR$-module $N$. We will show that $N$ is projective as an $R/rR$-module, so that we may conclude that $R/rR$ is a perfect ring. As $r$ is regular, $R/rR \in \mathcal{P}_1(R)$, hence all projective $R/rR$-modules are in $\mathcal{P}_1(R)$. Since $N$ is a flat $R/rR$-module, $N \in \varinjlim \mathcal{P}_0(R/rR) \subseteq \varinjlim \mathcal{P}_1(R) = \mathcal{P}_1(R)$, where the last equality holds by assumption. Moreover, $N \in \mathcal{P}_1(R/rR)$, so we can apply the Change of Rings Theorem, obtaining
\[
\operatorname{p.dim}_R N = \operatorname{p.dim}_{R/rR} N +1
.\] It follows that $\operatorname{p.dim}_{R/rR} N=0$, as required.
If $\operatorname{F.w.dim} Q=0$, then (i) and (ii) are equivalent by \cite[Corollary 6.8]{BH09}.
If $Q$ is a perfect ring, the equivalence of the four conditions is proved in \cite[Theorem 6.1]{FS} or \cite[Proposition 8.7]{BP1}.
\end{proof}
The example below shows that the condition $\mathcal{P}_1(R)=\mathcal{F}_1(R)$ does not imply that the total ring of quotients $Q$ is a perfect ring.
\begin{expl}\label{Ex:berg}
In \cite[5.1]{Ber} it is shown that there is a totally disconnected topological space $X$ whose ring $R$ of continuous functions is Von Neumann regular and hereditary. Moreover, every regular element of $R$ is invertible. Hence $R$ coincides with its own total ring of quotients and $\mathcal{P}_1(R)=\mathcal{F}_0(R)=\Modr R$, but $R$ is not perfect, since it is not semisimple.
Moreover, since $R$ is hereditary, the class $\mathcal{P}_1(R)^\perp$ coincides with the class of injective $R$-modules which is not closed under direct sums, since $R$ is not noetherian.\end{expl}
\section{Properties of some classes of commutative rings}\label{S:properties}
We recall now the characterisations of some classes of commutative rings.
Recall that a commutative ring $R$ is {\sl semihereditary} if every finitely generated ideal is projective.
By \cite[Corollary 4.2.19]{Glaz89} $R$ is semihereditary if and only if $Q(R)$ is Von Neumann regular and for every prime ideal $\mathfrak{p}$, $R_\mathfrak{p}$ is a valuation domain. In particular, by \cite[Theorem 4.2.2]{Glaz89}, $R$ is reduced, that is $R$ contains no nilpotent elements.
The following proposition is modelled on Cohen's Theorem, which states that if all prime ideals are finitely generated, then all ideals are finitely generated, see for example \cite[Theorem 8]{Kap} or \cite[Theorem 3.4]{Na}. In the following we consider only the regular ideals.
%
\begin{prop}\label{P:prime-reg-fg}
Let $R$ be a commutative ring. If every regular prime ideal is finitely generated, then every regular ideal is finitely generated.
\end{prop}
\begin{proof} The arguments we use are exactly as in Cohen's Theorem, but we repeat the proof to outline the steps in which regularity is used.
Let $\Theta$ be the collection of regular ideals which are not finitely generated with a partial order by inclusion. Assume, by way of contradiction that $\Theta$ is not empty.
Let $\Phi$ be a totally ordered subset of $\Theta$ and let $I:= \bigcup_{J \in \Phi}J$. We claim that $I$ is in $\Theta$, so that it is an upper bound of $\Phi$ in $\Theta$. Clearly $I$ contains a regular element, so it remains to show that $I$ is not finitely generated. Suppose for contradiction that $I$ has a finite set of generators $\{ a_1, \dots, a_n \}$. Then there exists a $J_0 \in \Phi$ such that $I=\langle a_1, \dots, a_n \rangle \subseteq J_0 \subseteq I $, therefore $J_0$ is finitely generated, a contradiction.
Thus by Zorn's Lemma, $\Theta$ has a maximal element. We will show that such a maximal element is prime, obtaining a contradiction. Fix a maximal element $L$ of $\Theta$, and suppose it is not prime, that is there exist two elements $a, b \in R \setminus L$ such that $ab \in L$. Then both $L + aR$ and $L + bR$ strictly contain $L$, so they are both finitely generated. Therefore, there exist $x_1, \dots, x_n \in L$ and $y_1, \dots, y_n \in R$ so that $
\{ x_1 +ay_1, \dots, x_n +ay_n\}
$ is a generating set of $L+aR$.
Consider the ideal $H := (L: a)= \{ r\in R \mid ra \in L\}$. Then $L \subsetneq L+bR \subseteq H$, therefore $H$ is also finitely generated, and hence so is $aH$. We will now show that $L = \langle x_1, \dots, x_n\rangle+aH$, so that $L$ is finitely generated. An element $r \in L \subseteq L+aR$ can be written as follows.
\[
r = s_1(x_1 + a y_1) + \cdots + s_n(x_n + a y_n) = \sum_i s_i x_i + a \big( \sum_i s_i y_i \big)
\]
Moreover, $\sum_i s_i y_i \in H$, since $a \big( \sum_i s_i y_i \big) = r - \sum_i s_i x_i \in L$.\\ Therefore $L \subseteq \langle x_1, \dots, x_n\rangle+aH$. The converse inclusion is clear, so $L = \langle x_1, \dots, x_n\rangle+aH$, which implies that $L$ is finitely generated as $H$ is, a contradiction. Therefore $L$ is a regular prime ideal, hence finitely generated by assumption, contradicting $L \in \Theta$. We conclude that $\Theta$ must be empty.
\end{proof}
\begin{lem}\label{L:sh-reg-ideals}
Let $R$ be a reduced commutative ring with total ring of quotients $Q$ of Krull dimension $0$ (for example when $R$ is semihereditary). An ideal $I$ of $R$ is contained in a minimal prime ideal of $R$ if and only if $I$ is not regular.
\end{lem}
\begin{proof}
First suppose that $I \subseteq \mathfrak{p}$, where $\mathfrak{p}$ is a minimal prime ideal of $R$. Since $R$ is reduced, it is an easy exercise to show that the set of zero divisors of $R$ coincides with the union of the minimal prime ideals (see \cite[Exercise 2.2.13, page 63]{Kap}). Hence every element of $I$ is a zero divisor, that is, $I$ is not regular.
For the converse, suppose that $I$ is not regular. Let $L$ be an ideal maximal with respect to the properties $I \subseteq L$, $L \cap \Sigma = \emptyset$. Then as in \cite[Theorem 1.1]{Kap}, $L$ is a prime ideal.
Assume that there is a prime ideal $\mathfrak{p} \subseteq L$. Since $\mathfrak{p}$ and $L$ are not regular, they are disjoint from $\Sigma$, hence $\mathfrak{p} Q\subseteq LQ$ are prime ideals of $Q$.
By assumption $Q$ has Krull dimension $0$, hence $\mathfrak{p} Q=LQ$, which implies $\mathfrak{p} =L$, that is, $L$ is a minimal prime.
%
\end{proof}
We also have the following proposition from a paper of Vasconcelos.
\begin{prop}\cite[Proposition 1.1]{Vas}\label{P:proj-ideals}
Let $R$ be a commutative ring with a projective ideal $I$. If $I$ is not contained in any minimal prime ideal, then it is finitely generated.
\end{prop}
\section{When $\mathcal{P}_1(R)$ is covering for commutative rings}\label{S:sh-P1-cov}
In this section we collect some facts about when $\mathcal{P}_1(R)$ is a covering class, and in particular we state some consequences for the total ring of quotients $Q$ of $R$ or for localisations of $R$.
\begin{lem}\label{L:local-P1-cov}
Let $R$ be a commutative ring and suppose $\mathcal{P}_1(R)$ is covering in $\mathrm{Mod}\textrm{-}R$. Then the following hold.
\begin{enumerate}
\item[(i)] $\mathcal{P}_1(R) \cap \mathrm{Mod}\textrm{-}Q = \mathcal{P}_1(Q)$
\item[(ii)] $\mathcal{P}_1(Q)$ is covering in $\mathrm{Mod}\textrm{-}Q$.
\end{enumerate}
\end{lem}
\begin{proof}
(i) The inclusion $\mathcal{P}_1(R) \cap \mathrm{Mod}\textrm{-}Q \subseteq \mathcal{P}_1(Q)$ is clear. For the converse, take $M \in \mathcal{P}_1(Q)$ and consider a $\mathcal{P}_1(R)$-cover of $M$,
$0 \to B \to A \overset{\phi}\to M \to 0.$
Then $B\in \mathcal{B}(R)$ and by Corollary~\ref{C:cov-of-localisation}, $A, B \in \mathrm{Mod}\textrm{-}Q$. From Lemma~\ref{L:P1-perp}, $B \in \mathcal{B}(Q)$, so $\phi$ splits and $M$ must be in $\mathcal{P}_1(R)$.
(ii) Fix a $Q$-module $M$ and let $0 \to B \to A \overset{\phi}\to M \to 0 $ be a $\mathcal{P}_1(R)$-cover of $M$. Then again by Corollary~\ref{C:cov-of-localisation}, by (i) and by Lemma~\ref{L:P1-perp}, $A \in \mathcal{P}_1(Q)$ and $B \in \mathcal{B}(Q)$, so $\phi$ must be a $\mathcal{P}_1(Q)$-precover of $M$. To see that it is a cover, note that any endomorphism $f$ of $A$ in $\mathrm{Mod}\textrm{-}Q$ is also a homomorphism in $\mathrm{Mod}\textrm{-}R$; therefore, by the minimality property of $\phi$ as a $\mathcal{P}_1(R)$-cover, $\phi$ is also a $\mathcal{P}_1(Q)$-cover.
\end{proof}
We now find some consequences of when $\mathcal{P}_1(R)$ is covering for localisations of $R$.
\begin{lem}\label{L:Matlis} Let $R$ be a commutative ring and let $S$ be a multiplicative subset of $R$. Then the following hold.
\begin{enumerate}
\item[(i)] If $R[S^{-1}]$ has a $\mathcal{P}_1(R)$-cover, then $\operatorname{p.dim}_RR[S^{-1}] \leq 1$.
\item[(ii)] If $M\in \mathcal{P}_1(R)$ and $M\otimes_RR[S^{-1}]$ admits a $\mathcal{P}_1(R)$-cover, then $\operatorname{p.dim}_R(M\otimes_RR[S^{-1}])\leq 1$.
\item[(iii)] Suppose $\mathcal{P}_1(R)$ is covering. Let $S$, $T$ be multiplicative systems of $R$ with $S\subseteq \Sigma$.
Then $\operatorname{p.dim}_R\dfrac{R[S^{-1}]\otimes_R R[T^{-1}]}{R[T^{-1}]}\leq 1$.
\end{enumerate}
\end{lem}
\begin{proof}
(i) Let the following be a $\mathcal{P}_1(R)$-cover of $R[S^{-1}]$.
\begin{equation}\label{eq:book}
0\to Y\to A\to R[S^{-1}]\to 0
\end{equation}
By Corollary~\ref{C:cov-of-localisation}, both $A$ and $Y$ are $R[S^{-1}]$-modules as well. Thus (\ref{eq:book}) is an exact sequence of $R[S^{-1}]$-modules, hence it splits. We conclude that $R[S^{-1}]$ is a direct summand of $A$ as an $R$-module, hence $\operatorname{p.dim}_R R[S^{-1}]\leq1$.
(ii) Suppose $M\in \mathcal{P}_1(R)$ and let the following be a $\mathcal{P}_1(R)$-cover of $M\otimes_RR[S^{-1}]$.
\begin{equation}\label{eq:book2}
0\to Y\to A\to M\otimes_RR[S^{-1}]\to 0
\end{equation}
As in (i), we conclude by Corollary~\ref{C:cov-of-localisation} that the sequence (\ref{eq:book2}) is in $\mathrm{Mod}\textrm{-}R[S^{-1}]$. Thus, $\operatorname{Ext}^1_{R[S^{-1}]}(M\otimes_RR[S^{-1}], Y)\cong \operatorname{Ext}^1_R(M, Y)=0$ since $Y\in \mathcal{B}(R)$ and $M\in \mathcal{P}_1(R)$. Therefore $M\otimes_RR[S^{-1}]$ is a summand of $A$, hence it has projective dimension at most one.
(iii) Suppose $\mathcal{P}_1(R)$ is covering in $\mathrm{Mod}\textrm{-}R$. Since $S \subseteq \Sigma$, the map $R \to R[S^{-1}]$ is a monomorphism, and by (i) $\operatorname{p.dim}_RR[S^{-1}]\leq 1$, hence also $\operatorname{p.dim}_RR[S^{-1}]/R\leq 1$. Now (iii) follows by applying (ii) to the module $M=R[S^{-1}]/R \in \mathcal{P}_1(R)$ and the multiplicative set $T$.
\end{proof}
\begin{rem}\label{R:ft-comm-dom} If the class $\mathcal{P}_1(R)^\perp$ is closed under direct sums, then by \cite[Theorem 1.3.6]{AST17}, or by \cite[Remark 7.4]{BPS}, $\mathcal{P}_1(R)$ is covering if and only if it is closed under direct limits.
When $R$ is a commutative domain, then by ~\cite[Corollary 8.1]{BH09}, the class $\mathcal{P}_1(R)^\perp$ is closed under direct sums (and it coincides with the class of divisible modules) and the four conditions in Proposition~\ref{P:conj-perfect} are equivalent.
\end{rem}
\section{When $\mathcal{P}_1(R)$ is covering for commutative semihereditary rings}\label{S:sh-comm}
In this section we restrict our attention to commutative semihereditary rings. First we characterise the semihereditary commutative rings such that the cotorsion pair $(\mathcal{P}_1(R), \mathcal{B}(R))$ is of finite type, or equivalently, such that $\mathcal{B}(R)$ is closed under direct sums, or equivalently, such that $\mathcal{B}(R)= \mathcal{P}_1(\mathrm{mod}\textrm{-}R)^\perp$ (see \cite[Lemma 4.1]{BH09}).
\begin{prop}\label{P:not-finite-type} Let $R$ be a commutative semihereditary ring. The following are equivalent.
\begin{enumerate}
\item[(i)] The cotorsion pair $(\mathcal{P}_1(R), \mathcal{B}(R))$ is of finite type.
\item[(ii)] The total quotient ring $Q$ of $R$ is semisimple.
\item[(iii)] $R$ is a finite direct product of Pr\"ufer domains.
\end{enumerate}
\end{prop}
\begin{proof} (i) $\Rightarrow$ (ii). By assumption, $\mathcal{B}(R)$ is closed under direct sums, thus also $\mathcal{B}(Q)$ is closed under direct sums by Lemma~\ref{L:P1-perp}. Thus $\mathcal{B}(Q)= \mathcal{P}_1(\mathrm{mod}\textrm{-}Q)^\perp$.
Since $R$ is semihereditary, $Q$ is Von Neumann regular, hence $\mathcal{P}_1(\mathrm{mod}\textrm{-}Q)=\mathcal{P}_0(\mathrm{mod}\textrm{-}Q)$, as every finitely presented $Q$-module is projective. Therefore $\mathcal{B}(Q)=\mathrm{Mod}\textrm{-}Q$, so every module in $\mathcal{P}_1(Q)$ is projective, that is $\mathcal{P}_1(Q)=\mathcal{P}_0(Q)$. In particular every Bass $Q$-module is projective, so $Q$ is a perfect ring. Thus, $\mathrm{Mod}\textrm{-}Q=\mathcal{F}_0(Q)=\mathcal{P}_0(Q)$, which means that $Q$ is semisimple.
(ii) $\Rightarrow$ (iii). Follows by \cite[Corollary, page 117]{Endo} as a semihereditary ring $R$ has weak global dimension at most one.
(iii) $\Rightarrow$ (i). Obvious by Remark~\ref{R:ft-comm-dom}.
\end{proof}
From the above proposition we conclude that the class of semihereditary commutative rings for which the cotorsion pair $(\mathcal{P}_1(R), \mathcal{B}(R))$ is not of finite type is rather large.
We now begin our investigation of the cotorsion pair $(\mathcal{P}_1(R), \mathcal{B}(R))$ for any commutative semihereditary ring $R$.
The following holds also for rings that are not necessarily commutative.
\begin{lem}\label{L:sh-limP1}
Suppose $R$ is a semihereditary ring. Then $\mathcal{P}_1(R)$ is closed under direct limits if and only if $R$ is hereditary.
\end{lem}
\begin{proof}
Sufficiency is clear, since $R$ is hereditary if and only if $\mathcal{P}_1(R)=\mathrm{Mod}\textrm{-}R$.
The converse follows immediately since every $R$-module is a direct limit of finitely presented modules and if $R$ is semihereditary, every finitely presented module is in $\mathcal{P}_1(R)$.
\end{proof}
We first consider the case of a von Neumann regular commutative ring.
\begin{prop}\label{P:vnr-P1} Let $R$ be a von Neumann regular commutative ring. Then $\mathcal{P}_1(R)$ is covering if and only if $R$ is a hereditary ring.
\end{prop}
\begin{proof} If $R$ is hereditary, then $\mathcal{P}_1(R)=\mathrm{Mod}\textrm{-}R$ is trivially covering. Conversely, assume that $\mathcal{P}_1(R)$ is covering. Since $R$ is semihereditary, every finitely generated ideal of $R$ is projective, so it remains to show that every infinitely generated ideal $I$ of $R$ is projective. Consider a $\mathcal{P}_1(R)$-cover of $R/I$
\[(\ast)\quad 0\to B\to A\to R/I\to 0.\]
The ideal $I$ is the sum of its finitely generated ideals which are all of the form $eR$, for some idempotent element $e\in R$. For every idempotent element $e\in I$, we have $Ae\subseteq B$, hence by Proposition~\ref{P:Xu} $Ae=0$. We conclude that $AI=0$.
On the other hand, $A=B+xR$ for some element $x\in A$ such that $xR\cap B=xI$. Thus $B\cap xR=0$, since $AI=0$ and we infer that the sequence $(\ast)$
splits, thus $\operatorname{p.dim} R/I\leq 1$ and $I$ is projective.
\end{proof}
We pass now to the case of semihereditary commutative rings.
\begin{lem}\label{L:Rp-P1-covering}
Let $R$ be a commutative semihereditary ring and let $\mathfrak{p}$ be a prime ideal of $R$. Then the following statements hold.
\begin{enumerate}
\item[(i)] For every $M \in \mathcal{P}_1(R)$, $M \otimes_R R_\mathfrak{p} \in \mathcal{P}_1(R_\mathfrak{p})$.
\item[(ii)] For every $N \in \mathcal{B}(R)$, $N \otimes_R R_\mathfrak{p} \in \mathcal{B}(R_\mathfrak{p})$.
\item[(iii)] If $\mathcal{P}_1(R)$ is covering then $\mathcal{P}_1(R_\mathfrak{p})$ is covering.
\end{enumerate}
\end{lem}
\begin{proof}
(i) Clear as $R_\mathfrak{p}$ is flat. \\
(ii) As $R_\mathfrak{p}$ is a commutative domain, the cotorsion pair $(\mathcal{P}_1(R_\mathfrak{p}), \mathcal{B}(R_\mathfrak{p}))$ is of finite type, and $\mathcal{B}(R_\mathfrak{p})$ coincides with the class of divisible modules by \cite[Theorem 7.2]{BH09}. Thus it is sufficient to show that for every $N \in \mathcal{B}(R)$, $\operatorname{Ext}^1_{R_\mathfrak{p}}(R_\mathfrak{p}/aR_\mathfrak{p}, N \otimes_R R_\mathfrak{p})=0$ for each $a \in R_\mathfrak{p}$. Without loss of generality, we can assume that $a \in R$. As $R$ is commutative and $R/aR \in \mathcal{P}_1(\mathrm{mod}\textrm{-}R)$ since $R$ is semihereditary, there is the following isomorphism.
\[
\operatorname{Ext}^1_R(R/aR, N)_\mathfrak{p} \cong \operatorname{Ext}^1_{R_\mathfrak{p}}(R_\mathfrak{p}/aR_\mathfrak{p}, N_\mathfrak{p})
\]
As $R/aR \in \mathcal{P}_1(R)$, the left-hand side vanishes as required. \\
(iii) Let $(\ast\ast)\ 0\to B\to A \overset{\phi}\to M \to 0
$ be a $\mathcal{P}_1(R)$-cover of $M \in \mathrm{Mod}\textrm{-}R_\mathfrak{p}$.
Then $A, B \in \mathrm{Mod}\textrm{-}R_\mathfrak{p}$ by Corollary~\ref{C:cov-of-localisation}, and by (ii), $B \in \mathcal{B}(R_\mathfrak{p})$. Therefore, $(\ast\ast)$ is also a $\mathcal{P}_1(R_\mathfrak{p})$-precover of $M$ in $\mathrm{Mod}\textrm{-}R_\mathfrak{p}$. Moreover, since any $R_\mathfrak{p}$-module homomorphism is also an $R$-module homomorphism, $(\ast\ast)$ is in fact a $\mathcal{P}_1(R_\mathfrak{p})$-cover of $M$.
\end{proof}
\begin{lem}\label{L:locals-dvrs}
Let $R$ be a commutative semihereditary ring such that $\mathcal{P}_1(R)$ is covering. Then for each prime $\mathfrak{p}$, the ring $R_\mathfrak{p}$ is a discrete valuation domain.
Moreover, $\mathrm{Mod}\textrm{-}R_\mathfrak{p} \subseteq \mathcal{P}_1(R)$.
As a consequence, every maximal ideal $\mathfrak{m}$ in $R$ is projective.
\end{lem}
\begin{proof}
First note that as $R_\mathfrak{p}$ is a valuation domain, it is also semihereditary. By Lemma~\ref{L:Rp-P1-covering}(iii), $\mathcal{P}_1(R_\mathfrak{p})$ is covering. Therefore by Remark~\ref{R:ft-comm-dom}, $\mathcal{P}_1(R_\mathfrak{p})$ is closed under direct limits, so
by Lemma~\ref{L:sh-limP1} we conclude that $R_\mathfrak{p}$ is hereditary, hence a discrete valuation domain.
To see that $\mathrm{Mod}\textrm{-}R_\mathfrak{p} \subseteq \mathcal{P}_1(R)$, let $ 0\to B\to A \overset{\phi}\to M \to 0$ be a $\mathcal{P}_1(R)$-cover of an $R_\mathfrak{p}$-module $M$.
As in the proof of Lemma~\ref{L:Rp-P1-covering}(iii), the sequence is
also a $\mathcal{P}_1(R_\mathfrak{p})$-cover of $M$ in $\mathrm{Mod}\textrm{-}R_\mathfrak{p}$. We have just shown that $R_\mathfrak{p}$ is hereditary, so $M \in \mathcal{P}_1(R_\mathfrak{p})$, hence $\phi$ is an isomorphism. Since $A \in \mathcal{P}_1(R)$, also $M \in \mathcal{P}_1(R)$ for any $R_\mathfrak{p}$-module $M$.
For the last statement, let $\mathfrak{m}$ be a maximal ideal of $R$. Since $R/\mathfrak{m}$ is an $R_\mathfrak{m}$-module, we have $R/\mathfrak{m} \in \mathcal{P}_1(R)$ by what is proved above, hence $\operatorname{p.dim} R/\mathfrak{m} \leq 1$ and $\mathfrak{m}$ is projective.
\end{proof}
\begin{lem}\label{L:reg-prime-are-max}
Let $R$ be a commutative semihereditary ring such that $\mathcal{P}_1(R)$ is covering. Then every regular prime ideal is maximal.
\end{lem}
\begin{proof}
Take $\mathfrak{p}$ to be a regular prime ideal of $R$. Then by Lemma~\ref{L:sh-reg-ideals}, $\mathfrak{p}$ cannot be minimal. Fix a maximal ideal $\mathfrak{m}$ such that $\mathfrak{p} \subseteq \mathfrak{m}$. By Lemma~\ref{L:locals-dvrs}, the localisation $R_\mathfrak{m}$ is a discrete valuation domain, so it has exactly two prime ideals, $0$ and $\mathfrak{m}_\mathfrak{m}$, and these are in bijective correspondence with the prime ideals of $R$ contained in $\mathfrak{m}$. As $\mathfrak{p}$ cannot be minimal, one concludes that $\mathfrak{p} = \mathfrak{m}$; therefore $\mathfrak{p}$ is maximal.
\end{proof}
%
The following corollary follows easily.
\begin{cor}\label{C:reg-max-fg}
Let $R$ be a commutative semihereditary ring such that $\mathcal{P}_1(R)$ is covering. Then every regular prime (hence maximal) ideal is finitely generated.
\end{cor}
\begin{proof}
Let $\mathfrak{p}$ be a regular prime ideal of $R$. By Lemma~\ref{L:reg-prime-are-max} and Lemma~\ref{L:locals-dvrs}, $\mathfrak{p}$ is a projective ideal. Hence, by Lemma~\ref{L:sh-reg-ideals} and Proposition~\ref{P:proj-ideals}, $\mathfrak{p}$ is finitely generated.
\end{proof}
We will use the following characterisation of hereditary rings.
\begin{thm}\cite[Corollary 4.2.20]{Glaz89},\cite[Theorem 1.2]{Vas}\label{T:hered-rings}
Let $R$ be a commutative ring. Then $R$ is hereditary if and only if $Q(R)$ is hereditary and any ideal of $R$ that is not contained in any minimal prime ideal of $R$ is projective.
\end{thm}
We can now state the main result of this paper.
\begin{thm}\label{T:sh-P1-cov-lim}
Let $R$ be a commutative semihereditary ring such that $\mathcal{P}_1(R)$ is covering. Then $R$ is hereditary. Therefore $\mathcal{P}_1(R)$ is closed under direct limits.
\end{thm}
\begin{proof}
We use Theorem~\ref{T:hered-rings} to show that $R$ must be hereditary. First we show that the classical ring of quotients $Q$ is hereditary. From Lemma~\ref{L:local-P1-cov} and the assumption that $\mathcal{P}_1(R)$ is covering, we know that $\mathcal{P}_1(Q)$ is covering. Additionally, by \cite[Corollary 4.2.19]{Glaz89}, $Q$ is von Neumann regular. Therefore, $Q$ must be hereditary by Proposition~\ref{P:vnr-P1}.
Now we show that any ideal not contained in a minimal prime ideal is projective. By Lemma~\ref{L:sh-reg-ideals}, it is enough to show that any regular ideal is projective, which follows if any regular ideal is finitely generated as $R$ is semihereditary. By Proposition~\ref{P:prime-reg-fg}, it is sufficient to show that the regular prime ideals are finitely generated, which follows from Corollary~\ref{C:reg-max-fg}. We conclude that all ideals not contained in a minimal prime ideal are finitely generated, and hence are projective as $R$ is semihereditary.
\end{proof}
\bibliographystyle{alpha}
\section{Introduction}
\label{sec:Introduction}
The study of neutrino properties is known to be a powerful tool for searching for physics beyond the Standard Model. The observation of flavour oscillations in experiments with solar, atmospheric, reactor and accelerator neutrinos implies that neutrinos have nonzero mass; this, in particular, means that they should also have magnetic dipole moments. As neutrinos are electrically neutral, they have no direct coupling to electromagnetic fields, and their electromagnetic interactions
should arise entirely through quantum loop effects.
In the simplest extensions of the Standard Model capable of producing nonvanishing neutrino mass, the predicted neutrino magnetic dipole moments%
\footnote{Actually, neutrinos may have magnetic and/or electric dipole moments. The former are described by the real part of the matrix of neutrino electromagnetic dipole moments $\mu$, whereas the latter, by its imaginary part. Both can cause the physical processes we consider in this paper. For brevity
we refer to $\mu$ as simply the magnetic dipole moment.
}
are too small to be probed in the foreseeable future. However, a number of models with new physics at the TeV scale predict neutrino magnetic moments that may be close to the current experimental upper bounds (for a recent discussion, see e.g.\ \cite{Babu:2020ivd} and references therein).
Photon exchange processes induced by neutrino magnetic moments can contribute to cross sections of neutrino-electron and neutrino-nucleus scattering, and in particular can affect the results of coherent elastic neutrino-nucleus scattering experiments.
In the light of the recent results of the XENON1T experiment \cite{XENON:2020rca}, these topics have gained renewed attention, as sufficiently large neutrino magnetic moments can provide an explanation for the observed electron recoil excess
\cite{XENON:2020rca,Miranda:2020kwy,Babu:2020ivd,Miranda:2021kre}. This possibility can be further explored in multiton xenon detectors \cite{Huang:2018nxj,Hsieh:2019hug}.
Processes induced by neutrino magnetic moments may also play an important role in astrophysical environments. They can influence stellar evolution and can affect neutrino emission by core-collapse supernovae. Constraints on neutrino electromagnetic properties coming from non-observation of these and other processes, including constraints from the cosmic microwave background and big bang nucleosynthesis, can be found in the literature \cite{ParticleDataGroup:2020ssz}.
If neutrinos are Majorana particles, the interaction of their transition (flavour off-diagonal) magnetic moments with the solar magnetic field can result in the conversion of a fraction of left-handed $\nu_{e}$ produced in the Sun into right-handed antineutrinos $\bar{\nu}_{\mu}$ and $\bar{\nu}_{\tau}$. This spin-flavour precession (SFP) process can be resonantly enhanced by solar matter \cite{Lim:1987tk,Akhmedov:1988uk}, similarly to the resonance amplification of neutrino flavour conversion in matter (the MSW effect \cite{Wolfenstein:1977ue,Mikheyev:1985zog}). Although it is currently firmly established that the observed deficit of solar $\nu_e$ is due to the MSW effect, SFP could still be present at a subdominant level.
The combined action of neutrino SFP and flavour oscillations would then produce a small but potentially observable flux of solar electron antineutrinos $\bar{\nu}_e$ at the Earth (see, e.g., \cite{Akhmedov:2002mf,Guzzo:2012rf} and references therein). The detection of such a flux would therefore be a clear signature of both nonzero magnetic moment and Majorana nature of neutrinos.
Electron antineutrinos from the Sun have been searched for experimentally by KamLAND \cite{KamLAND:2003gfh,Hatakeyama:2004gm,Perevozchikov:2009bla,KamLAND:2021gvi}, Borexino \cite{Borexino:2010zht,Borexino:2019wln} and Super-Kamiokande \cite{Super-Kamiokande:2020frs} collaborations.
No excess over the expected backgrounds was found, which allowed the collaborations to establish upper bounds on the product of the transition neutrino magnetic moment and the solar magnetic field strength. The analyses of the data presented in these papers relied on the results of the theoretical study \cite{Akhmedov:2002mf}, which was carried out within a simplified 2-flavour neutrino framework and employed a standard solar model that is now outdated.
In the present paper we extend the theoretical analysis of \cite{Akhmedov:2002mf} to the full 3-flavour neutrino framework and also use more recent standard solar models. We develop a simple analytical approach for calculating the expected flux of $\bar{\nu}_e$ from the Sun and also solve the full system of neutrino evolution equations numerically without any simplifying approximations. Good general agreement between the results of these two approaches is found.
All the calculations are performed for two standard solar models (low-metallicity and high-metallicity) and a number of model solar magnetic field profiles. We also study the role of various transition neutrino magnetic moments in the production of the $\bar{\nu}_e$ flux. To facilitate the extraction of constraints on the neutrino magnetic moments and solar magnetic fields from the experimental data, we present our results in the form of simple analytical formulas as well as ready-to-use tables of numerically calculated appearance probabilities and fluxes of solar $\bar{\nu}_e$. We then re-analyze the results of refs.~\cite{KamLAND:2003gfh,Hatakeyama:2004gm,Perevozchikov:2009bla,KamLAND:2021gvi,Borexino:2010zht,Borexino:2019wln,Super-Kamiokande:2020frs}
using our formalism.
For reference, we also present a compilation of bounds on neutrino magnetic moments obtained from other experiments and astrophysical observations and express the effective neutrino magnetic moments to which they are sensitive through the fundamental neutrino magnetic moments and leptonic mixing parameters.
\section{Neutrino evolution in the Sun \label{sec:InSun}}
In the absence of magnetic fields, neutrino transformations in matter are described, in the 3-neutrino framework, by the flavour evolution equation
\begin{equation}
i \frac{d}{dr} \nu_{fl L} = \left[U\ensuremath{{\rm diag}}(0, 2\delta, 2 \Delta)U^\dagger
+ \ensuremath{{\rm diag}}(V_e + V_n,V_n,V_n)\right]\nu_{fl L} \,.
\label{eq:evol1}
\end{equation}
Here $\nu_{fl L} = (\nu_{eL} \; \nu_{\mu L} \; \nu_{\tau L})^T$ is the vector of neutrino amplitudes in flavour space and $U$ is the 3-flavour leptonic mixing matrix, for which we use the standard parametrisation
\begin{equation}
U = O_{23}\Gamma_{\delta}O_{13}\Gamma^{\dagger}_{\delta}O_{12} \,.
\end{equation}
Here $O_{ij}$ are the orthogonal matrices of rotation with the angle $\theta_{ij}$ in the $i-j$ plane and $\Gamma_{\delta}=\ensuremath{{\rm diag}}(1,1,e^{i\delta_{\rm CP}})$, $\delta_{\rm CP}$ being the Dirac-type CP-violating phase. In the case of Majorana neutrinos, the leptonic mixing matrix $U_M$ depends on two additional phases: $U_M = UK$, where $K = \ensuremath{{\rm diag}}(1, e^{i\lambda_{2}}, e^ {i\lambda_{3}})$.
However, as can be seen from eq.~(\ref{eq:evol1}), these phases play no role in neutrino oscillations. We use the notation
\begin{equation}
\delta = \frac{\Delta m^2_{21}}{4E}\,, \qquad
\Delta = \frac{\Delta m^2_{31}}{4E}\,,
\end{equation}
where $\Delta m_{ij}^2=m_i^2-m_j^2$ are the neutrino mass squared differences, and also denote the effective potentials due to coherent forward neutrino scattering on matter constituents by
\begin{equation}
V_e = \sqrt{2} G_F N_e(r) \quad \text{and} \quad V_n =
- \sqrt{2}G_F N_n(r)/2\, .
\label{eq:pot}
\end{equation}
In eq.~(\ref{eq:pot}), $G_F$ is the Fermi constant and $N_e(r)$, $N_n(r)$ are the number densities of electrons and neutrons in matter, respectively.
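To get a feel for the scales involved, the quantities $\delta$, $\Delta$ and $V_e$ can be evaluated numerically. The sketch below assumes representative oscillation parameters ($\Delta m^2_{21}\approx 7.5\times 10^{-5}$\,eV$^2$, $\Delta m^2_{31}\approx 2.5\times 10^{-3}$\,eV$^2$), a neutrino energy of 10 MeV and a solar-core electron density of $\sim 6\times 10^{25}$\,cm$^{-3}$; these values are illustrative and are not specified at this point in the text.

```python
import math

# Illustrative input values (assumed, not taken from the text):
dm2_21 = 7.5e-5        # eV^2
dm2_31 = 2.5e-3        # eV^2
E = 10e6               # neutrino energy in eV (10 MeV)

delta = dm2_21 / (4 * E)   # eV
Delta = dm2_31 / (4 * E)   # eV

# V_e = sqrt(2) G_F N_e, converted to eV using (hbar*c)^3:
G_F = 1.1663787e-5         # Fermi constant, GeV^-2
hbar_c = 1.973269804e-14   # GeV cm
N_e = 6.0e25               # electrons per cm^3, typical of the solar core
V_e = math.sqrt(2) * G_F * N_e * hbar_c**3 * 1e9   # GeV -> eV

print(f"delta = {delta:.2e} eV")
print(f"2*Delta = {2 * Delta:.2e} eV")
print(f"V_e = {V_e:.2e} eV")
```

The resulting $V_e \sim 7.6\times10^{-12}$ eV is consistent with the value of $\sim 7\times10^{-12}$ eV quoted later in the text for the neutrino production region, and $2\Delta \sim 10^{-10}$ eV matches the estimate used in the stationary-phase discussion.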
A convenient basis for considering flavour transitions in the Sun is defined through the relation
\begin{equation}
\nu_{fl L} = O_{23}\Gamma_{\delta}O_{13}\nu_L'\,
\label{eqn:basis}
\end{equation}
with $\nu_L' \equiv
(\nu'_{eL} \; \nu'_{\mu L} \; \nu'_{\tau L})^T$. The neutrino evolution equation in the primed basis is then
\begin{equation}
i \frac{d}{dr}
\begin{pmatrix}
\nu'_{eL} \\ \nu'_{\mu L} \\ \nu'_{\tau L}
\end{pmatrix} = \begin{pmatrix}
2\delta s^2_{12} + c^2_{13}V_e +V_n & 2\delta s_{12}c_{12} &
s_{13}c_{13}V_e \\
2\delta s_{12}c_{12} & 2\delta c^2_{12}+ V_n & 0 \\
s_{13}c_{13}V_e & 0 & 2\Delta + s^2_{13}V_e +V_n
\end{pmatrix}
\!
\begin{pmatrix}
\nu'_{eL} \\ \nu'_{\mu L} \\ \nu'_{\tau L}
\end{pmatrix}\! \equiv\! H \begin{pmatrix}
\nu'_{eL} \\ \nu'_{\mu L} \\ \nu'_{\tau L}
\end{pmatrix},
\label{eq:evol2}
\end{equation}
where we used the short-hand notation $c_{ij}\equiv\cos \theta_{ij}$, $s_{ij}\equiv \sin \theta_{ij}$.
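The matrix in eq.~(\ref{eq:evol2}) can be cross-checked numerically: conjugating the flavour-basis Hamiltonian of eq.~(\ref{eq:evol1}) by the basis change of eq.~(\ref{eqn:basis}) must reproduce it entry by entry. A minimal sketch (all parameter values are purely illustrative):

```python
import numpy as np

# Illustrative parameter values (assumed, not from the text):
th12, th13, th23, dcp = 0.59, 0.15, 0.84, 1.2    # radians
delta, Delta = 1.9e-12, 6.3e-11                  # eV
Ve, Vn = 5.0e-12, -1.5e-12                       # eV

def O(i, j, th):
    """Rotation by angle th in the i-j plane (3x3, complex dtype)."""
    m = np.eye(3, dtype=complex)
    m[i, i] = m[j, j] = np.cos(th)
    m[i, j] = np.sin(th)
    m[j, i] = -np.sin(th)
    return m

O12, O13, O23 = O(0, 1, th12), O(0, 2, th13), O(1, 2, th23)
Gd = np.diag([1, 1, np.exp(1j * dcp)])
U = O23 @ Gd @ O13 @ Gd.conj().T @ O12           # standard parametrisation

# Flavour-basis Hamiltonian of eq. (evol1):
H_fl = U @ np.diag([0, 2 * delta, 2 * Delta]) @ U.conj().T \
    + np.diag([Ve + Vn, Vn, Vn])

# Basis change nu_fl = O23 Gd O13 nu':
W = O23 @ Gd @ O13
H_primed = W.conj().T @ H_fl @ W

s12, c12, s13, c13 = np.sin(th12), np.cos(th12), np.sin(th13), np.cos(th13)
H_expected = np.array([
    [2*delta*s12**2 + c13**2*Ve + Vn, 2*delta*s12*c12,     s13*c13*Ve],
    [2*delta*s12*c12,                 2*delta*c12**2 + Vn, 0],
    [s13*c13*Ve,                      0,                   2*Delta + s13**2*Ve + Vn]],
    dtype=complex)

print(np.allclose(H_primed, H_expected, rtol=1e-9, atol=1e-23))  # True
```

Note that $\delta_{\rm CP}$ drops out of the primed-basis Hamiltonian, as the comparison shows.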
Next, we include the effects of SFP due to the interaction of the neutrino magnetic moments with external magnetic fields. The evolution equation is then \cite{Lim:1987tk,Akhmedov:1989df,Akhmedov:1993sh}
\begin{equation}
i \frac{d}{dr} \begin{pmatrix}
\nu'_{L} \\ \bar{\nu}'_{ R}
\end{pmatrix} = \begin{pmatrix}
H & \mathcal{B} \\ \mathcal{B}^ {\dagger} & \bar{H}
\end{pmatrix}\begin{pmatrix}
\nu'_{L} \\ \bar{\nu}'_{R}
\end{pmatrix}
\label{eq:evol}
\end{equation}
where $\bar{\nu}_R'=(\bar{\nu}'_{eR}\;\bar{\nu}'_{\mu R}\;
\bar{\nu}'_{\tau R})^T$ is the vector of the right-handed antineutrino amplitudes in the primed basis, $H$ is the Hamiltonian defined in the evolution equation \eqref{eq:evol2} and $\bar{H}$ is obtained from $H$ by substituting $\delta_{\rm CP} \rightarrow -\delta_{\rm CP}$, $V_e \rightarrow - V_e$ and $V_n\rightarrow-V_n$. For Majorana neutrinos, to which we restrict ourselves in the present paper, the matrix $\mathcal{B}$, describing neutrino interactions with the external magnetic field, can be written as
\begin{equation}
\mathcal{B} = \begin{pmatrix}
\mathcal{B}_{e'e'} & \mathcal{B}_{e'\mu'} & \mathcal{B}_{e'\tau'}\\
\mathcal{B}_{\mu'e'} & \mathcal{B}_{\mu'\mu'} & \mathcal{B}_{\mu' \tau'} \\
\mathcal{B}_{\tau'e'} & \mathcal{B}_{\tau'\mu'} & \mathcal{B}_{\tau'\tau'}
\end{pmatrix} =\begin{pmatrix}
0 & \mu_{e'\mu'} & \mu_{e'\tau'}\\
-\mu_{e'\mu'} & 0 & \mu_{\mu' \tau'} \\
- \mu_{e'\tau'} & -\mu_{\mu'\tau'} & 0
\end{pmatrix} B_{\perp} (r) e^{i\phi(r)}
\equiv \upmu' \cdot
B_{\perp} (r) e^{i\phi(r)}
\label{eq:B} \, .
\end{equation}
Here $\upmu'$ is the matrix of transition magnetic moments in the primed basis. To simplify notation, we write its matrix elements simply as $\mu_{\alpha'\beta'}$, omitting the overall prime on $\upmu$. The external magnetic field in the plane transverse to the neutrino momentum is described by the factor
$B_{\perp}(r)e^{i\phi(r)}$, where $B_{\perp}(r)>0$ and the azimuthal angle $\phi(r)$ defines the direction of the magnetic field in this plane.%
\footnote{We ignore the contributions of the longitudinal component of the magnetic field as they
are inversely proportional to the neutrino Lorentz factor and are thus negligible in all situations of practical interest.}
It is useful to relate the magnetic moments in the primed basis with the magnetic moments $\upmu_m$ in the neutrino mass eigenstate basis, which are of more fundamental nature:
\begin{equation}
\upmu'=\Gamma_\delta O_{12} K^* \upmu_{m} K^* O^T_{12} \Gamma_\delta \,.
\end{equation}
For the nonzero matrix elements of $\upmu'$ we find
\begin{align}
\mu_{e'\mu'} & = \mu_{12}e^ {-i\lambda_{2}}\,,
\label{eq:emu}
\\
\mu_{e'\tau'} & = \left(\mu_{13}c_{12} + \mu_{23}s_{12}
e^{-i \lambda_{2} }\right) e^{-i(\lambda_{3}-\delta_{\rm CP})}\,,
\label{eq:etau}\\
\mu_{\mu' \tau'} & = \left(\mu_{23}c_{12}e^{-i\lambda_{2}} -
\mu_{13}s_{12}\right)e^{-i(\lambda_{3}-\delta_{\rm CP})} \, .
\label{eq:mutau}
\end{align}
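Eqs.~(\ref{eq:emu})--(\ref{eq:mutau}) can be verified by direct matrix multiplication. A short numerical check (the moments and phases below are arbitrary illustrative values):

```python
import numpy as np

# Arbitrary illustrative values (not from the text):
mu12, mu13, mu23 = 0.7, 0.4, 0.9       # transition moments, arbitrary units
lam2, lam3, dcp = 0.3, 1.1, 1.2        # Majorana and Dirac phases
th12 = 0.59
s12, c12 = np.sin(th12), np.cos(th12)

# Antisymmetric matrix of Majorana transition moments in the mass basis:
mu_m = np.array([[0,     mu12,  mu13],
                 [-mu12, 0,     mu23],
                 [-mu13, -mu23, 0]], dtype=complex)
K = np.diag([1, np.exp(1j * lam2), np.exp(1j * lam3)])
Gd = np.diag([1, 1, np.exp(1j * dcp)])
O12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], dtype=complex)

# mu' = Gamma_delta O12 K* mu_m K* O12^T Gamma_delta:
mu_p = Gd @ O12 @ K.conj() @ mu_m @ K.conj() @ O12.T @ Gd

# Closed-form expressions, eqs. (emu)-(mutau):
mu_emu = mu12 * np.exp(-1j * lam2)
mu_etau = (mu13*c12 + mu23*s12*np.exp(-1j*lam2)) * np.exp(-1j*(lam3 - dcp))
mu_mutau = (mu23*c12*np.exp(-1j*lam2) - mu13*s12) * np.exp(-1j*(lam3 - dcp))

print(np.allclose([mu_p[0, 1], mu_p[0, 2], mu_p[1, 2]],
                  [mu_emu, mu_etau, mu_mutau]))  # True
```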
The evolution equation (\ref{eq:evol}) can now be written in more detailed
form as
\begin{subequations}
\begin{align}
i\frac{d}{dr}\nu'_{eL} &= H_{e'e'}\nu'_{eL} + H_{e'\mu'}\nu'_{\mu L} +
H_{e'\tau'}\nu'_{\tau L} + \mathcal{B}_{e'\mu'}\bar{\nu}'_{\mu R} +
\mathcal{B}_{e'\tau'}\bar{\nu}'_{\tau R} \, ,
\\
i\frac{d}{dr}\nu'_{\mu L} &= H_{\mu'e'}\nu'_{eL} + H_{\mu'\mu'}
\nu'_{\mu L} + \mathcal{B}_{\mu'e'}\bar{\nu}'_{e R} +
\mathcal{B}_{\mu'\tau'}\bar{\nu}'_{\tau R} \, ,
\\
i\frac{d}{dr}\nu'_{\tau L} &= H_{\tau'e'}\nu'_{eL} + H_{\tau'\tau'}
\nu'_{\tau' L} + \mathcal{B}_{\tau'e'}\bar{\nu}'_{eR} +
\mathcal{B}_{\tau'\mu'}\bar{\nu}'_{\mu R}\, ,
\\
i\frac{d}{dr}\bar{\nu}'_{e R} &= \bar{H}_{e'e'}\bar{\nu}'_{eR} +
\bar{H}_{e'\mu'}\bar{\nu}'_{\mu R} +\bar{H}_{e'\tau'}\bar{\nu}'_{\tau R}
- \mathcal{B}^*_{e'\mu'} \nu'_{\mu L} -\mathcal{B}^*_{e'\tau '}
\nu'_{\tau L}\, ,
\\
i\frac{d}{dr}\bar{\nu}'_{\mu R} &= \bar{H}_{\mu'e'}\bar{\nu}'_{eR} +
\bar{H}_{\mu'\mu'}\bar{\nu}'_{\mu R} - \mathcal{B}^*_{\mu'e'} \nu'_{e L}
- \mathcal{B}^*_{\mu'\tau '}\nu'_{\tau L}\, ,
\\
i\frac{d}{dr}\bar{\nu}'_{\tau R} &= \bar{H}_{\tau'e'}\bar{\nu}'_{eR}
+\bar{H}_{\tau'\tau'}\bar{\nu}'_{\tau R} -\mathcal{B}^*_{\tau'e'}
\nu'_{e L} - \mathcal{B}^*_{\tau'\mu '}\nu'_{\mu L} \, .
\end{align}
\label{eq:evol3}
\end{subequations}
Here we have taken into account that the diagonal elements of the matrix $\mathcal{B}$ vanish and also that $H_{\mu'\tau'}=H_{\tau'\mu'} =\bar{H}_{\mu'\tau'} = \bar{H}_{\tau'\mu'}=0$.
\subsection{\label{sec:Approx} Approximate analytical solution of the evolution equation}
Because the diagonal magnetic moments of Majorana neutrinos vanish, direct conversion of the left-handed electron neutrinos produced in the Sun into $\bar{\nu}_{eR}$ is not possible. Still, $\nu_{eL}\to\bar{\nu}_{eR}$ transitions can proceed via two-step processes,
\begin{subequations}
\begin{align}
\nu_{eL} \overset{\rm osc.}{\longrightarrow} \nu_{\mu L}
\overset{\rm SFP}{\longrightarrow} \bar{\nu}_{eR}\,,
\label{eq:trans1}
\\
\nu_{eL} \overset{\rm SFP}{\longrightarrow} \bar{\nu}_{\mu R}
\overset{\rm osc.}{\longrightarrow} \bar{\nu}_{eR} \, ,
\label{eq:trans2}
\end{align}
\end{subequations}
and similarly for transitions through the $\nu_{\tau L}$ and
$\bar\nu_{\tau R}$ intermediate states. However, inside the Sun such conversions should be heavily suppressed because the amplitudes of the processes (\ref{eq:trans1}) and (\ref{eq:trans2}) are of opposite sign and
nearly cancel each other \cite{Akhmedov:1989df,Akhmedov:1993sh}.
For the same reasons, the transitions $\nu_{eL}'\to\bar{\nu}_{eR}'$ between the primed states are also suppressed.
The transitions $\nu_{eL}\to \bar{\nu}_{eR}$ through the processes (\ref{eq:trans1}) and (\ref{eq:trans2}) (and similar transitions with $\nu_{\tau L}$ and $\bar\nu_{\tau R}$ intermediate states) will, however, not be suppressed if the flavour conversions and SFP occur in spatially separated regions.
Because magnetic fields outside the Sun are very weak, we are left with the possibility of the transition chain (\ref{eq:trans2}), with SFP taking place inside the Sun and the subsequent flavour conversions occurring on the flight between the Sun and the Earth. To calculate the flux of solar $\bar{\nu}_{eR}$ on the Earth we therefore first need to find the fluxes of $\bar{\nu}_{\mu R}'$ and $\bar{\nu}_{\tau R}'$ at the surface of the Sun.
We shall now develop an approximate analytical approach to this problem. First, based on the above arguments, we neglect $\nu_{eL}'\to \bar{\nu}_{eR}'$ conversions inside the Sun.
We therefore omit the evolution equation for $\bar{\nu}_{eR}'$ as well as any terms containing the $\bar{\nu}_{eR}'$ amplitude from the equation system (\ref{eq:evol3}).
Next, we neglect the terms containing $H_{e'\tau'}=H_{\tau' e'}$ (and $\bar{H}_{e'\tau'} = \bar{H}_{\tau'e'}$), since they are much smaller than the diagonal elements $H_{\tau'\tau'}$ and $\bar{H}_{\tau'\tau'}$; the flavour transitions caused by these off-diagonal terms are therefore strongly suppressed.
Finally, we take into account that the effects of SFP of solar neutrinos are expected to be small and restrict ourselves to leading order in perturbation theory in $\mathcal{B}$. As we are interested in calculating the amplitudes $\bar{\nu}_{\mu R}'$ and $\bar{\nu}_{\tau R}'$, whose evolution equations
contain the amplitudes $\nu_{eL}'$, $\nu_{\mu L}'$, and $\nu_{\tau L}'$ multiplied by the elements of the $\mathcal{B}$
matrix, the amplitudes of these left-handed states should be found to zeroth order in $\mathcal{B}$. Applying these approximations to eq.~(\ref{eq:evol3}), we find the simplified evolution equations
\begin{subequations}
\begin{align}
i\frac{d}{dr}\nu'_{eL} &= \left(2\delta s^2_{12} + c^2_{13}V_e +V_n\right)
\nu'_{eL} + 2\delta s_{12}c_{12}\nu'_{\mu L}\,,
\label{eqn:nuel}
\\
i\frac{d}{dr}\nu'_{\mu L} &= 2\delta s_{12}c_{12} \nu'_{eL} +
\left(2\delta c^2_{12}+V_n\right)\nu'_{\mu L}\,,
\label{eqn:numul}
\\
i\frac{d}{dr}\nu'_{\tau L} &= \left(2\Delta + s^2_{13}V_e + V_n\right)
\nu'_{\tau L}\,,
\label{eqn:nutaul}
\\
i\frac{d}{dr}\bar{\nu}'_{\mu R} &= \left(2\delta c^2_{12} -V_n\right)
\bar{\nu}'_{\mu R} + \mu^*_{e'\mu'} B_{\perp}e^{-i\phi}\nu'_{eL} -
\mu^*_{\mu'\tau'}B_{\perp}e^{-i\phi}\nu'_{\tau L}\,,
\label{eqn:numur}
\\
i\frac{d}{dr}\bar{\nu}'_{\tau R} &= \left(2\Delta -s^2_{13}V_e -
V_n\right)\bar{\nu}'_{\tau R} + \mu^*_{e'\tau'}B_{\perp}e^{-i\phi}
\nu'_{eL} + \mu^*_{\mu'\tau'}B_{\perp}e^{-i\phi}\nu'_{\mu L}\,.
\label{eqn:nutaur}
\end{align}
\end{subequations}
We first note that the first two of these equations, describing the evolution of the amplitudes $\nu_{eL}'$ and $\nu_{\mu L}'$, decouple from the rest of the system and can be solved independently. This essentially reduces to solving the MSW problem for solar neutrinos. We therefore employ the adiabatic approximation, which is known to work very well in this case, and obtain
\begin{equation}
\nu'_{eL}(r) = c_{13}\left[\cos \Tilde{\theta}(r_0)\cos \Tilde{\theta}(r)
e^{-i\int_{r_0}^ r E_1 dr'} + \sin \Tilde{\theta}(r_0)\sin
\Tilde{\theta}(r) e^{-i\int_{r_0}^ r E_2 dr'}\right],
\end{equation}
\begin{equation}
\nu'_{\mu L}(r) = c_{13}\left[-\cos \Tilde{\theta}(r_0)\sin
\Tilde{\theta}(r) e^{-i\int_{r_0}^ r E_1 dr'} + \sin \Tilde{\theta}(r_0)
\cos \Tilde{\theta}(r) e^{-i\int_{r_0}^ r E_2 dr'}\right] \,.
\end{equation}
Here $r_0$ is the coordinate of the neutrino production point,
$\Tilde{\theta}(r)$ is the effective mixing angle in matter which can be found from the relation
\begin{equation}
\cos 2\Tilde{\theta}(r)=\frac{\cos2\theta_{12} - c^2_{13}V_e/2\delta}
{\sqrt{\left(\cos2\theta_{12}- c^2_{13}V_e/2\delta\right)^ 2
+ \sin^2 2\theta_{12}}}\,,
\label{eq:tildetheta}
\end{equation}
and we have defined
\begin{equation}
E_{1,2} \equiv \delta + c^2_{13}\frac{V_e}{2} +V_n \mp \sqrt{\left(\delta
\cos2\theta_{12}- c^2_{13}V_e/2\right)^ 2 + \delta^ 2\sin^2 2\theta_{12}}\,.
\label{eq:E12}
\end{equation}
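Two familiar limits of eqs.~(\ref{eq:tildetheta}) and (\ref{eq:E12}) provide a quick consistency check: in vacuum the effective angle reduces to $\theta_{12}$ and the eigenvalue splitting to $2\delta$, while at high density $\cos 2\Tilde{\theta}\to -1$. A numerical sketch with illustrative parameter values:

```python
import numpy as np

# Illustrative values (assumed, not from the text):
th12, th13 = 0.59, 0.15
delta = 1.9e-12                     # eV
c13sq = np.cos(th13) ** 2

def cos2theta_eff(Ve):
    """cos(2*theta_tilde) as in eq. (tildetheta)."""
    x = np.cos(2 * th12) - c13sq * Ve / (2 * delta)
    return x / np.hypot(x, np.sin(2 * th12))

def E12(Ve, Vn):
    """Eigenvalues E_1, E_2 as in eq. (E12)."""
    root = np.hypot(delta*np.cos(2*th12) - c13sq*Ve/2, delta*np.sin(2*th12))
    base = delta + c13sq * Ve / 2 + Vn
    return base - root, base + root

# Vacuum limit: theta_tilde -> theta_12, E_2 - E_1 -> 2*delta.
print(np.isclose(cos2theta_eff(0.0), np.cos(2 * th12)))          # True
E1, E2 = E12(0.0, 0.0)
print(np.isclose(E2 - E1, 2 * delta, rtol=1e-9, atol=0.0))       # True

# High-density limit: cos(2*theta_tilde) -> -1.
print(cos2theta_eff(1e-9) < -0.99)                               # True
```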
Note that the initial conditions $\nu_{eL}(r_0)=1$, $\nu_{\mu L}(r_0)=\nu_{\tau L}(r_0)=0$ translate, in the primed basis, to $\nu'_{eL}(r_0) = c_{13}$, $\nu'_{\mu L} (r_0) = 0$, $\nu'_{\tau L} (r_0) = s_{13}$.
The evolution equation for the amplitude $\nu_{\tau L}'$ completely decouples from the rest of the system and its solution is
\begin{equation}
\nu'_{\tau L}(r) = s_{13} e^{-i\int_{r_0}^ r\left(2\Delta + s^2_{13}V_e
+ V_n\right)dr'}.
\end{equation}
Now that $\nu_{eL}'$, $\nu_{\mu L}'$ and $\nu_{\tau L}'$ are found, it is straightforward to solve eqs.~(\ref{eqn:numur}) and (\ref{eqn:nutaur}). For the values of the amplitudes $\bar{\nu}'_{\mu R}$ and $\bar{\nu}'_{\tau R}$ at the surface of the Sun we obtain
\begin{subequations}
\begin{align}
\bar{\nu}'_{\mu R} (R_{\odot}) =& \int_{r_0}^{R_\odot} B_{\perp}(r)
\left[c_{13}\mu^*_{e'\mu'} \cos \Tilde{\theta}(r_0) \cos
\Tilde{\theta}(r) e^{-i g_1(r)} \right. \nonumber \\
&+ c_{13}\mu^*_{e'\mu'} \sin \Tilde{\theta}(r_0) \sin \Tilde{\theta}(r)
e^{-i g_2(r)} \left. - s_{13}\mu^*_{\mu'\tau'} e^{-ig_3(r)} \right] dr\,,
\label{eq:numur1}
\\
\bar{\nu}'_{\tau R} (R_{\odot}) =& \int_{r_0}^{R_\odot} B_{\perp}(r)
c_{13} \cos \Tilde{\theta}(r_0) \left(\mu^*_{e'\tau'}\cos \Tilde{\theta}(r)
-\mu^*_{\mu'\tau'}\sin \Tilde{\theta}(r)\right) e^{-ig_4(r)} dr \nonumber \\
&+ \int_{r_0}^{R_\odot} B_{\perp}(r) c_{13} \sin\Tilde{\theta}(r_0)
\left(\mu^*_{e'\tau'}\sin \Tilde{\theta}(r) + \mu^*_{\mu'\tau'}
\cos \Tilde{\theta}(r)\right) e^{-ig_5(r)} dr\,,
\label{eq:nutaur1}
\end{align}
\label{eqn:amplitud}
\end{subequations}
where we have defined
\begin{subequations}
\begin{align}
g_{1,2}(r) & \equiv
\phi + \int_{r_0} ^ r \left[E_{1,2} - \left( 2c^2_{12}
\delta - V_n \right)\right]dr'\,,
\label{eq:g1}
\\
g_{3}(r) &\equiv \phi + \int_{r_0} ^ r \left[\left(2\Delta +s^2_{13}V_e
+V_n \right) - \left( 2c^2_{12}\delta - V_n \right) \right]dr'\,,
\label{eq:g2}
\\
g_{4,5}(r) &\equiv \phi + \int_{r_0} ^ r \left[ E_{1,2}-\left(2\Delta
-s^2_{13}V_e -V_n \right) \right] dr'\,,
\label{eq:g3}
\end{align}
\end{subequations}
and have dropped the irrelevant overall phase factors from the expressions for $\bar{\nu}'_\mu$ and $\bar{\nu}'_\tau$.
Such inconsequential phase factors will also be consistently
omitted in what follows.
\subsubsection{Analytical expressions for the amplitudes}
\label{sec:Amplitude}
The integrals in eq.~\eqref{eqn:amplitud} are of the general form
\begin{equation}
I = \int_{a}^b f(x) e^{-ig(x)}dx\,,
\label{eq:int1}
\end{equation}
where $f(x)$ is a slowly varying function of coordinate and
$|g'(x)|$ is large except possibly in the vicinity of a finite number of points in the interval $(a,b)$. Such integrals get their main contributions from the endpoints of the integration intervals and from the stationary phase points where $g'(x)=0$, if any \cite{Erdelyi} (see also section~\ref{sec:non-tw} below). Let us first check if stationary phase points for the integrals in eqs.~(\ref{eq:numur1}) and (\ref{eq:nutaur1}) exist.
The expression (\ref{eq:numur1}) for the amplitude $\bar{\nu}_{\mu R}'$ depends on the phases $g_1$, $g_2$ and $g_3$. The stationary phase conditions are $\frac{d}{dr}g_{1,2} = 0$ and $\frac{d}{dr}g_{3} = 0$ or, respectively,
\begin{equation}
\frac{d\phi}{dr} =
2c^2_{12}\delta -V_n -E_{1,2}\,,
\label{eq:statpoint1}
\end{equation}
\begin{equation}
\frac{d\phi}{dr} =
2c^2_{12}\delta -2V_n -2\Delta -s^2_{13}V_e\,.
\label{eq:statpoint2}
\end{equation}
The stationary phase conditions for the integrals in eq.~(\ref{eq:nutaur1}) are
$\frac{d}{dr}g_{4,5} = 0$, or
\begin{equation}
\frac{d\phi}{dr} = 2\Delta - s^2_{13}V_e -V_n -E_{1,2}\,.
\label{eq:statpoint3}
\end{equation}
Consider first eq.~(\ref{eq:statpoint3}). The term $2\Delta$ on its right hand side is at least an order of magnitude larger than the other terms (note that $2\Delta \sim 10^{-9} - 10^ {-10}$ eV). For the solution of this equation to exist, $d\phi/dr$ should be of the same order of magnitude, which corresponds to $\sim 1-10$ rad/km. While short-scale stochastic magnetic fields in the Sun may possibly have such rapid twists, it is unlikely that this is possible for large-scale fields relevant for SFP.
This means that no stationary phase points
are expected for the integrals in eq.~(\ref{eq:nutaur1}), and they should receive their main contributions from the endpoints of the integration interval. The same arguments apply to eq.~(\ref{eq:statpoint2}) and the integral containing $g_3$ in eq.~(\ref{eq:numur1}).
Let us now examine eq.~(\ref{eq:statpoint1}). Using eq.~(\ref{eq:E12}), one can reduce it to
\begin{equation}
d\phi/dr +2V_n+c^2_{13}V_e - 2\delta \cos2\theta_{12} = \frac{\delta^2
\sin ^2 2\theta_{12}}{d\phi/dr + 2 V_n} \,,
\end{equation}
which has a solution as long as
\begin{equation}
1 + \sin^2 2\theta_{12} \frac{c^2_{13}V_e}{d\phi/dr +2V_n} \geq 0 \, .
\label{eq:twistcond1}
\end{equation}
This condition is satisfied if
\begin{align}
\frac{d \phi}{dr} > 2|V_n| \quad \text{or} \quad
-\frac{d \phi}{dr} \geq c^ 2_{13}V_e
\left(\sin^2 2\theta_{12}- \frac{1- Y_e}{c^ 2_{13}Y_e}\right),
\label{eq:twistcond2}
\end{align}
where $Y_e$ is the number of electrons per nucleon in the medium. As $Y_e$ varies between 0.67 and 0.88 in the Sun \cite{Serenelli:2009yc}, it is easy to see that the expression in the brackets in the second condition in (\ref{eq:twistcond2}) is positive and on the order of 0.3 -- 0.7; therefore, for non-twisting magnetic fields the stationary phase condition cannot be fulfilled. In fact, it requires $|d\phi/dr|$ to be of the same order of magnitude as $V_e$ and $|V_n|$, which vary from $\sim 7\times 10 ^{-12} \text{ eV}$ near the neutrino production point to zero at the surface of the Sun, where the solar magnetic field nearly vanishes as well.
One can see that the stationary phase condition can be fulfilled, for instance, for magnetic fields with constant twist $|d\phi/dr| \sim 10/R_\odot \sim 3\times 10^{-15} \text{ eV}$ \cite{Akhmedov:1993sh}.
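The conversion behind these numbers can be cross-checked in a few lines; the sketch below is ours, and the constants and helper name are illustrative, not code accompanying this work:

```python
# Convert a magnetic-field twist rate from inverse length to natural units
# (eV) using hbar*c; checks the estimate 10/R_sun ~ 3e-15 eV quoted above.
HBARC_EV_M = 1.973269804e-7   # hbar*c in eV*m
R_SUN_M = 6.957e8             # solar radius in metres

def inverse_length_to_ev(inv_length_per_metre):
    """Convert a quantity of dimension 1/length (per metre) to eV."""
    return inv_length_per_metre * HBARC_EV_M

twist_ev = inverse_length_to_ev(10.0 / R_SUN_M)
print(f"{twist_ev:.1e} eV")  # ~3e-15 eV
```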
We will first focus on the case in which the magnetic fields in the Sun are either non-twisting or they twist slowly enough, so that no stationary phase points exist.
Effects of the possible existence of stationary phase points in the scenario with fast-twisting magnetic fields will be discussed in section~\ref{sec:twist}.
\subsubsection{\label{sec:non-tw}Non-twisting or slowly twisting magnetic fields}
In this case, the integrals in eqs.~(\ref{eq:numur1}) and (\ref{eq:nutaur1}) are dominated by the contributions from the endpoints of the integration intervals. To evaluate such contributions to an integral of the type (\ref{eq:int1}), we integrate it by parts. Integrating two times one finds
\begin{align}
\int_{a}^{b}f(x) e^{-ig(x)} dx = \left[\left(i \frac{f(x)}{g'(x)} +
\frac{f'(x)}{g'(x)^2} - \frac{f(x)g''(x)}{g'(x)^3}\right)e^{-i g(x)}\right]_a^b + \mathcal{O}\left(\frac{1}{g'(x)^3}\right)\, .
\label{eq:appr1}
\end{align}
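As a sanity check of the endpoint formula, one can compare it with brute-force quadrature for a toy oscillatory integral. The sketch below is ours: the amplitude $f$, the linear phase and the parameter values are illustrative choices, not inputs of this work.

```python
import cmath

# Endpoint (integration-by-parts) approximation vs direct quadrature for
# int_0^1 f(x) exp(-i*lam*x) dx with a slowly varying amplitude f.
lam = 200.0                        # plays the role of the large phase slope g'
f = lambda x: 1.0 / (1.0 + x)
fp = lambda x: -1.0 / (1.0 + x) ** 2

def integrand(x):
    return f(x) * cmath.exp(-1j * lam * x)

# composite trapezoidal rule on a fine grid
n = 200_000
h = 1.0 / n
direct = h * (0.5 * (integrand(0.0) + integrand(1.0))
              + sum(integrand(k * h) for k in range(1, n)))

def boundary(x0):
    # first two boundary terms of the expansion (g'' = 0 for a linear phase)
    return (1j * f(x0) / lam + fp(x0) / lam ** 2) * cmath.exp(-1j * lam * x0)

approx = boundary(1.0) - boundary(0.0)
print(abs(direct - approx) / abs(direct))  # small, of order 1/lam^2
```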
It follows from the definitions of the phases $g$ in eqs.~(\ref{eq:g1})-(\ref{eq:g3}) and eqs.~(\ref{eq:pot}) and~(\ref{eq:E12}) that in the case of interest to us the condition
\begin{equation}
|g''(x)|/g'(x)^2 \ll 1
\label{eq:cond1}
\end{equation}
is satisfied for all $g_i(x)$ ($i=1,...,5)$. Therefore, the third term in the brackets in eq.~(\ref{eq:appr1}) can be neglected compared to the first term.
In addition, the first term dominates over the second one provided that
\begin{equation}
\Bigg|\frac{f(x)}{f'(x)}\Bigg| \gg \frac{1}{|g'(x)|}.
\label{eq:cond2}
\end{equation}
Consider the left hand side of this inequality. It is essentially the scale height of the function $f$, i.e.\ the characteristic distance over which it varies significantly.
As follows from (\ref{eq:numur1}) and (\ref{eq:nutaur1}), in the case under discussion $f(r)\propto B_\perp(r)$ times $\sin\tilde{\theta}(r)$ or $\cos\tilde{\theta}(r)$. Because the effective mixing angle $\tilde{\theta}$ is a slowly varying function of coordinate inside the Sun (which actually justifies using the adiabatic approximation for flavour conversions), the scale height of $f(r)$ essentially coincides with the scale height of the solar magnetic field, $L_B\equiv B_\perp(r)/|B_\perp'(r)|$.
Therefore eq.~(\ref{eq:cond2}) reduces to
\begin{equation}
L_B
\gg \frac{1}{|g'_i(r)|} \,
\end{equation}
for each of the five $g_i(r)$ defined above. For the propagation of neutrinos in the Sun, these conditions are satisfied if $L_B \gg 10^{-4} R_\odot$.
Magnetic fields with scale heights as small as $L_B\lesssim 10^{-4}R_\odot$ can only exist over very short distances in the Sun, and so they cannot lead to any sizeable SFP. We therefore only consider large-scale solar magnetic fields, which satisfy (\ref{eq:cond2}).
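An order-of-magnitude check of this threshold can be sketched as follows (ours; the value taken for the smallest $|g'_i|$ is an assumed representative number of a few times $10^{-12}$ eV, the scale of the matter potentials quoted earlier):

```python
# 1/|g'| converted to a length, expressed in units of the solar radius.
HBARC_EV_M = 1.973269804e-7   # hbar*c in eV*m
R_SUN_M = 6.957e8             # solar radius in metres

g_min_ev = 3e-12              # assumed order of magnitude of the smallest |g'_i|
L_threshold = HBARC_EV_M / g_min_ev   # metres
print(L_threshold / R_SUN_M)          # ~1e-4, as quoted above
```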
As a consequence, it is justified to retain only the first term in the brackets in eq.~(\ref{eq:appr1}). In addition, we take into account that the magnetic field strength at the surface of the Sun $B_\perp(R_\odot)$ is very weak and consider only the contribution of the neutrino production point $r=r_0$. Eqs.~(\ref{eq:numur1}) and (\ref{eq:nutaur1}) thus yield
\begin{subequations}
\begin{align}
\bar{\nu}'_{\mu R} (R_\odot) \simeq & \, B_\perp (r_0) \left[c_{13}
\mu^*_{e'\mu'}\left(\frac{\cos^2\tilde{\theta}(r_0)}{g'_1(r_0)} +
\frac{\sin^2\tilde{\theta}(r_0)}{g'_2(r_0)}\right) -
\frac{s_{13}\mu^*_{\mu' \tau'}}{2\Delta}\right],
\label{eq:numur2}
\\
\bar{\nu}'_{\tau R}(R_\odot) \simeq & \, B_\perp (r_0)\frac{c_{13}
\mu^*_{e' \tau'}}{2\Delta} \, ,
\label{eq:nutaur2}
\end{align}
\end{subequations}
where we have taken into account that $g'_3 \sim 2\Delta$, $g'_{4,5}
\sim - 2\Delta$ and that
\begin{equation}
\bigg|\frac{1}{g'_4}-\frac{1}{g'_5} \bigg| \ll \bigg| \frac{1}{g'_{4,5}}
\bigg|.
\end{equation}
Notice that setting $\theta_{13} = 0$ and neglecting $\cos\tilde{\theta}(r_0)$ compared with $\sin\tilde{\theta}(r_0)$ one recovers the expression for the amplitude of $\bar{\nu}'_{\mu R}$ found in \cite{Akhmedov:2002mf}.
\subsubsection{\label{sec:twist}Fast-twisting magnetic fields}
Consider now the case when one of the conditions in \eqref{eq:twistcond2} is satisfied, which requires the solar magnetic field to be sufficiently fast twisting.
Let $r_1$ and $r_2$ be such that $g'_1(r_1) = 0$ and $g'_2(r_2) =0$. The contribution of these stationary phase points to the amplitude $\bar{\nu}_{\mu R}'(R_\odot)$ in eq.~(\ref{eq:numur1}) is
\begin{align}
\bar{\nu}'_{\mu R}(R_\odot) =&\, c_{13}\mu^*_{e'\mu'}
\left[\cos \tilde{\theta} (r_0) \cos \tilde{\theta} (r_1)B_\perp(r_1)
\sqrt{\frac{2\pi}{g''_1(r_1)}}\right. \nonumber \\
& \left.+ \sin \tilde{\theta} (r_0) \sin \tilde{\theta} (r_2)B_\perp(r_2)
\sqrt{\frac{2\pi}{g''_2(r_2)}}\,e^{-i (g_2(r_2) -g_1(r_1))}\right].
\end{align}
From eq.~(\ref{eq:cond1}) it follows that this stationary phase contribution strongly dominates over the contributions (\ref{eq:numur2}) coming from the endpoints of the integration interval, which can therefore be neglected in this case.
As there are no stationary phase point contributions to the amplitude $\bar{\nu}_{\tau R}'$, it is still given by eq.~(\ref{eq:nutaur2}), just as in the cases of non-twisting or slow-twisting magnetic fields.
Note that the validity of this approximation still relies on the assumption that there are no $\nu_e \rightarrow \bar{\nu}_e$ transitions in the Sun. In the presence of fast-twisting magnetic fields, this might not be accurate enough \cite{Akhmedov:1993sh}.
From now on we will restrict ourselves to the case of non-twisting or slowly twisting magnetic fields, in which there are no stationary phase point contributions to the amplitudes $\bar{\nu}_{\mu R}'$ and $\bar{\nu}_{\tau R}'$. As follows from the above discussion, this may only reduce these amplitudes, and therefore makes our upper bound on $\mu B_\perp$ more conservative.
\subsubsection{Solar electron antineutrino flux on the Earth}
\label{sec:Flux}
Once the amplitudes $ \bar{\nu}_{\mu R}'$ and $\bar{\nu}_{\tau R}'$ on the surface of the Sun have been calculated, one can compute the expected flux of $\bar{\nu}_{e R}$ that reaches the Earth.
As the magnetic field in the space between the Sun and the Earth is negligible, neutrino evolution en route to the Earth reduces to pure flavour transformations. Due to coherence loss, solar neutrinos (or antineutrinos) arrive at the Earth as incoherent sums of mass eigenstates \cite{Dighe:1999id}.
The probability that a $\nu_{eL}$ produced in the Sun will reach the Earth as $\bar{\nu}_{eR}$ is therefore
\begin{equation}
P(\nu_{eL} \rightarrow \bar{\nu}_{eR}) = |U_{e1}|^2 |\bar{\nu}_{1\oplus}|^2
+|U_{e2}|^2 |\bar{\nu}_{2\oplus}|^2 + |U_{e3}|^2 |\bar{\nu}_{3\oplus}|^2\,,
\label{eq:P1}
\end{equation}
where $\bar{\nu}_{i\oplus}$ ($i=1,2,3$) are the amplitudes of the antineutrino mass eigenstates reaching the Earth.
These amplitudes are related to those in the primed basis by
$\bar{\nu}'_R = \Tilde{U} \bar{\nu}_R$ with
$\Tilde{U} = \Gamma^{\dagger}_\delta O_{12}$, where
$\bar{\nu}_R' = (\bar{\nu}'_{eR}, \; \bar{\nu}'_{\mu R}, \;
\bar{\nu}'_{\tau R})^T$ and $\bar{\nu}_R = (\bar{\nu}_{1R}, \;
\bar{\nu}_{2R}, \; \bar{\nu}_{3R} )^T$.
Therefore
\begin{equation}
|\bar{\nu}_{i\oplus}|^2 = |\Tilde{U}_{\mu ' i}|^2 |\bar{\nu}'_{\mu R }
(R_\odot)|^2 + |\Tilde{U}_{\tau ' i}|^2 |\bar{\nu}'_{\tau R} (R_\odot)|^2 \,,
\end{equation}
and eq.~(\ref{eq:P1}) for the electron antineutrino appearance probability can be rewritten as
\begin{align}
P (\nu_{eL} \rightarrow \bar{\nu}_{eR}) = \frac{1}{2}c^2_{13}\sin^2
2\theta_{12} |\bar{\nu}_{\mu R}'(R_\odot)|^2 +s^2_{13}|\bar{\nu}_{\tau R}'
(R_\odot)|^2.
\end{align}
Substituting here the approximate analytical expressions for the amplitudes $\bar{\nu}_{\mu R}'$ and $\bar{\nu}_{\tau R}'$ from eqs.~(\ref{eq:numur2}) and (\ref{eq:nutaur2}), we find
\begin{align}
P (\nu_{eL} \rightarrow \bar{\nu}_{eR}) = \frac{1}{2} c^2_{13}\sin^2
2\theta_{12}B^2_\perp (r_0)\left[c^2_{13}|\mu_{e'\mu'}|^2
\Bigg(\frac{\cos^2\tilde{\theta}(r_0)}{g'_1(r_0)} + \frac{\sin^2\tilde{\theta}
(r_0)}{g'_2(r_0)}\Bigg)^2 \right. \nonumber \\ \left. +
\Bigg( \frac{s_{13} |\mu_{\mu' \tau'}|}{2\Delta}\Bigg)^2 -2{\rm Re}
\{ \mu_{e'\mu'}^* \mu_{\mu' \tau'} \}
\left(\frac{\cos^2\tilde{\theta}(r_0)}{g'_1(r_0)} + \frac{\sin^2\tilde{\theta}
(r_0)}{g'_2(r_0)}\right) \frac{s_{13} c_{13}}{2\Delta} \right] \nonumber \\
+ s^2_{13}B^2_\perp (r_0)\Bigg( \frac{c_{13} |\mu_{e'\tau'}|}
{2\Delta}\Bigg)^2\, ,
\label{eqn:peebar}
\end{align}
where the terms containing $|\mu_{\mu'\tau'}|^2$ and $|\mu_{e' \tau '}|^2$ are expected to give very small contributions,%
\footnote{Unless $|\mu_{e'\mu'}|^2$ is anomalously small.}
since they are proportional to $s^2_{13}/\Delta^2$.
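The flavour weights $\frac{1}{2}c^2_{13}\sin^2 2\theta_{12}$ and $s^2_{13}$ multiplying $|\bar{\nu}'_{\mu R}|^2$ and $|\bar{\nu}'_{\tau R}|^2$ in the appearance probability above can be verified by summing the incoherent mass-eigenstate contributions directly; the sketch below is ours, and uses $|\tilde{U}_{\mu' i}|^2 = (s^2_{12},\, c^2_{12},\, 0)$ and $|\tilde{U}_{\tau' i}|^2 = (0,\, 0,\, 1)$:

```python
import math

# Numerical check of the flavour weights: summing |U_ei|^2 |U~_alpha'i|^2 over
# the mass eigenstates gives (1/2) c13^2 sin^2(2 theta12) for alpha' = mu'
# and s13^2 for alpha' = tau'.
s12sq, s13sq = 0.32, 0.022
c12sq, c13sq = 1.0 - s12sq, 1.0 - s13sq

Ue_sq = [c13sq * c12sq, c13sq * s12sq, s13sq]   # |U_ei|^2, i = 1, 2, 3
Umu_sq = [s12sq, c12sq, 0.0]                     # |U~_mu'i|^2 (mu' row of O_12)
Utau_sq = [0.0, 0.0, 1.0]                        # |U~_tau'i|^2

w_mu = sum(a * b for a, b in zip(Ue_sq, Umu_sq))
w_tau = sum(a * b for a, b in zip(Ue_sq, Utau_sq))

print(math.isclose(w_mu, 0.5 * c13sq * 4.0 * s12sq * c12sq))  # True
print(math.isclose(w_tau, s13sq))                              # True
```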
There are three main differences between this result and that obtained in ref.~\cite{Akhmedov:2002mf} in the 2-flavour approach.
First, the main (first) term in (\ref{eqn:peebar}) contains an additional factor $c^4_{13}$. Second, there is a cross-term contribution in (\ref{eqn:peebar}) which is absent from the two-flavour result and which may give rise to a non-negligible correction to the $\bar{\nu}_{eR}$ appearance probability.
Finally, the expression in eq.~(\ref{eqn:peebar}) can be used
for neutrino energies below $\sim 5 - 8$ MeV, for which the analytical two-flavour result of \cite{Akhmedov:2002mf} is not applicable because of simplifying assumptions made.
It is interesting to note that the main term in (\ref{eqn:peebar}) is proportional to $|\mu_{e'\mu'}|^2$, which is equal to $|\mu_{12}|^2$; i.e.\ the electron antineutrino appearance probability is, to a very good approximation, proportional to $|\mu_{12}B_\perp (r_0)|^2$.
It will be shown in section~\ref{sec:compare} that the $\bar{\nu}_{eR}$ appearance probability (\ref{eqn:peebar}) is a relatively slowly varying function of neutrino energy for $E\gtrsim 5 - 8$ MeV, relevant for experiments on detection of solar $^8$B neutrinos. Taking for an estimate its value at $E=12$ MeV and electron and neutron number densities at the neutrino production point $N_e \simeq 89\,N_A$\,cm$^{-3}$ and $N_n \simeq 35\,N_A$\,cm$^{-3}$, where $N_A$ is the Avogadro number \cite{Serenelli:2009yc}, the electron antineutrino appearance probability can be written as
\begin{equation}
P (\nu_{eL} \rightarrow \bar{\nu}_{eR}) \simeq 1.1\times 10^{-10} \left(
\frac{\mu_{12}B_\perp (r_0)}{10^{-12}\mu_B \cdot 10\, {\rm kG}}\right)^2\,,
\label{eq:analit1}
\end{equation}
where $\mu_B$ is the electron Bohr magneton.
The numerical coefficient here is about a factor of 1.4 smaller
than in the two-flavour approach of ref.~\cite{Akhmedov:2002mf}
(see eq.~(25) of that paper). This is partly due to 3-flavour effects and to
using the updated neutrino mixing parameters and solar models, and partly
because of the approximation $\cos\tilde{\theta}(r_0)\ll\sin\tilde{\theta}
(r_0)$ adopted in~\cite{Akhmedov:2002mf} (see section~\ref{sec:compare} below).
Note that eq.~(\ref{eq:analit1}) is not suitable for experiments sensitive to pp, pep or $^7$Be solar neutrinos, for which the electron antineutrino appearance probability is strongly suppressed (at $E\sim 1$ MeV, it is approximately three orders of magnitude smaller than that given by eq.~(\ref{eq:analit1})) and also exhibits a stronger energy dependence. We will discuss this issue in more detail in section~\ref{sec:compare}.
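For quick estimates, the quadratic scaling of eq.~(\ref{eq:analit1}) can be wrapped in a small helper (ours, with the coefficient quoted in the text); it is only meaningful in the high-energy regime for which the estimate was derived:

```python
# Quadratic scaling of the analytic estimate: P is proportional to
# (mu12 * B_perp(r0))^2, normalised to 1.1e-10 at 1e-12 mu_B and 10 kG.
def p_nuebar_estimate(mu12_in_1e12_muB, B_perp_kG):
    """P(nu_eL -> nubar_eR) at E ~ 12 MeV; mu12 in units of 1e-12 mu_B,
    B_perp(r0) in kG."""
    return 1.1e-10 * (mu12_in_1e12_muB * B_perp_kG / 10.0) ** 2

print(p_nuebar_estimate(1.0, 10.0))    # 1.1e-10, the reference value
print(p_nuebar_estimate(1.0, 100.0))   # 100 times larger for a 10x field
```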
\subsection{\label{sec:num}Numerical calculations}
Instead of developing an approximate analytical solution, one can solve the complete set of the six coupled evolution equations (\ref{eq:evol3}) numerically, tracing the evolution of the system from the neutrino production point in the Sun to the Earth.
We have developed a numerical code to calculate the electron antineutrino appearance probability at the surface of the Earth for $^8$B neutrinos, which give the main contribution to the solar $\bar{\nu}_{eR}$ flux for energies above the
threshold of inverse beta decay on protons, i.e. for $E > 1.8$ MeV.%
\footnote{
We are mostly interested in inverse beta decay because the best currently available limits have been obtained from the experiments that used this detection channel.}
The calculations average over the production region of $^8$B neutrinos in the Sun and take into account the electron and neutron number density profiles in the Sun.
We have performed the calculations for two standard solar models (SSM) which differ in the elemental abundances in the Sun and hence have different metallicities.
These are the high-metallicity GS98 model and the low-metallicity AGSS09 model, as discussed in \cite{Serenelli:2009yc}.
If not otherwise specified, in our calculations we will be
using the linearly decreasing magnetic field profile inside the Sun
\begin{equation}
B_\perp(r)=B_0(r) \equiv 52600 (1 - r/R_\odot)\,{\rm kG}\,,
\label{eq:b0}
\end{equation}
which takes the value
$B_\perp \simeq 5 \times 10^{7}$\,G
at $r_0 = 0.05R_\odot$ and vanishes at the surface of the Sun. Any possible twist of the solar magnetic field will be neglected.
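For reference, the linear profile and the boundary values quoted above can be sketched as follows (a helper of ours, with $r$ in units of $R_\odot$):

```python
# The linear model profile B_0(r), with a check of the quoted values.
def b0_kG(r):
    """Linearly decreasing transverse field in kG, vanishing at the surface;
    r is measured in units of the solar radius."""
    return 52600.0 * (1.0 - r)

print(b0_kG(0.05))  # ~5e4 kG = 5e7 G at the reference production point
print(b0_kG(1.0))   # 0 at the surface of the Sun
```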
\section{Results}
\label{sec:results}
\subsection{\label{sec:compare}Comparison between analytical and numerical results}
We have shown that the main contribution to the electron antineutrino appearance probability is proportional to $|\mu_{e'\mu'}|^2=|\mu_{12}|^2$. In this subsection we set $\mu_{13} =\mu_{23}=0$ (which also implies $\mu_{\mu'\tau'}=\mu_{e'\tau'}=0$)
and compare our analytical expressions with the results obtained by numerical solution of the system of the evolution equations (\ref{eq:evol3}).
With only $\mu_{e'\mu'}$ different from zero, our analytical expression (\ref{eqn:peebar}) becomes
\begin{align}
P(\nu_{eL} \rightarrow \bar{\nu}_{eR}) = \frac{1}{2} c^4_{13}\sin^2
2\theta_{12}B^2_\perp (r_0)|\mu_{e'\mu'}|^2 \Bigg(\frac{\cos^2
\tilde{\theta}(r_0)}{g'_1(r_0)} + \frac{\sin^2\tilde{\theta}(r_0)}
{g'_2(r_0)}\Bigg)^2.
\label{eqn:peebar_full}
\end{align}
We will also consider the simplified analytical expression
\begin{align}
P (\nu_{eL} \rightarrow \bar{\nu}_{eR})_{\rm simpl.} =
\frac{1}{2} c^4_{13}\sin^2 2\theta_{12}B^2_\perp (r_0)|\mu_{e'\mu'}|^2
\Bigg( \frac{\sin^2\tilde{\theta}(r_0)}{g'_2(r_0)}\Bigg)^2,
\label{eqn:peebar_simple}
\end{align}
obtained from (\ref{eqn:peebar_full}) by neglecting the first term in the brackets compared to the second one, the approximation similar to the one adopted in \cite{Akhmedov:2002mf}.
This approximation is expected to be valid for relatively high neutrino energies, for which $\cos^2\tilde{\theta}(r_0)\ll \sin^2\tilde{\theta}(r_0)$ (note that $|g_1'(r_0)|$ and $|g_2'(r_0)|$ differ by less than a factor of two for all considered energies).
\begin{figure}[t!]
\centering
\includegraphics[width = 0.48\textwidth]{figures/Peebar-AGSS09-bf2020.pdf}
\includegraphics[width = 0.48\textwidth]{figures/Peebar-GS98-bf2020.pdf}
\caption{Comparison of different calculations for $\bar{\nu}_{eR}$ appearance probability for $^8$B neutrinos. Values of the oscillation parameters ($\sin^2\theta_{12}$, $\Delta m^2_{21}$) = (0.32, 7.5$\times 10^{-5}$ eV$^2$) and $\sin^2\theta_{13} = 0.022$ \cite{deSalas:2020pgw} were chosen. Left and right panels correspond to AGSS09 and GS98 SSM, respectively \cite{Serenelli:2009yc}. Grey curves: numerical calculations assuming that all neutrinos are produced at $r_0 =0.05 R_\odot$.
Green curves: numerical calculation with averaging over the neutrino production region. Orange curves: results based on the full analytical expression \eqref{eqn:peebar_full} (solid) and on the simplified analytical expression \eqref{eqn:peebar_simple} (dashed).
}
\label{fig:Peebar2020}
\end{figure}
In Figure \ref{fig:Peebar2020} we compare our analytical results with those found by numerical solution of the neutrino evolution equations (\ref{eq:evol3}). The grey wiggly curves show the numerical results obtained assuming that all neutrinos are produced at the distance $r_0=0.05 R_\odot$ from the centre of the Sun; the wiggly behaviour gets washed out if one averages over the neutrino production region, as shown by the green curves.
The solid and dashed orange curves correspond to the analytical expressions \eqref{eqn:peebar_full} and \eqref{eqn:peebar_simple}, respectively, assuming that all neutrinos are produced in the Sun at $r_0=0.05R_\odot$.
The left and right panels show the results for AGSS09 and GS98 solar models, respectively. The figure demonstrates a good general agreement between our numerical and analytical results, especially for neutrino energies $E\gtrsim 5$ MeV. The discrepancy between the numerical and analytical results becomes larger for smaller $E$, where the $\bar{\nu}_{eR}$ appearance probability is relatively small.
\subsection{Neutrino evolution inside the Sun}
In order to gain better insight into the process of antineutrino appearance, we consider the evolution of the neutrino system inside the Sun as a function of coordinate. For simplicity, we do so in the effective 2-neutrino approach, which corresponds to setting $s_{13}\to 0$. We have checked that the obtained results give a good approximation to those in the full 3-flavour case, the reason being that the corresponding corrections are of the order of $s_{13}^2$.
In Figure \ref{fig:inside-bases} we show the evolution of the antineutrino appearance probabilities, obtained by numerical solution of the evolution equations, for mass-eigenstate (left panel) and primed (right panel) neutrino states.
\begin{figure}
\includegraphics[width = 0.98\textwidth]{figures/inside_sun_bases.pdf}
\caption{
Two-flavour antineutrino appearance probabilities for the
mass eigenstates (left panel) and for the states in the primed basis (right panel) as functions of the distance to the centre of the Sun. Neutrino energy $E=10$ MeV, transition magnetic moments $\mu_{12}= 10^{-12} \mu_B$, $\mu_{13} = \mu_{23}=0$ and the $\nu_{e}$ production coordinate $r_0= 0.05R_\odot$ are chosen.}
\label{fig:inside-bases}
\end{figure}
From the right panel, one can see that the approximation $\bar{\nu}_e'\sim 0$ is reasonably good inside the Sun, though it is less accurate at its surface.
In the two-flavour approximation, SFP converts $\nu_{e}$ produced in the Sun into $\bar{\nu}_\mu'$. Close to the neutrino production point the composition of $\bar{\nu}_\mu'$ is approximately given by \mbox{ $\bar{\nu}_\mu'\simeq - \sin \tilde{\bar{\theta}}(r_0) \bar{\nu}_{1M} + \cos \tilde{\bar{\theta}}(r_0) \bar{\nu}_{2M}$,} where $\bar{\nu}_{1,2 M}$ are antineutrino matter eigenstates (i.e.\ the states that diagonalize the antineutrino Hamiltonian in matter), and the mixing angle $\tilde{\bar{\theta}}(r)$ is given by
\begin{equation}
\tan 2\tilde{\bar{\theta}}(r) = \frac{\sin 2 \theta_{12}}{\cos 2\theta_{12}
+ c^2_{13}V_e(r)/2\delta}\, .
\end{equation}
Close to the neutrino production point the electron number density is rather large, and for neutrino energies $E \gtrsim 2 \text{ MeV}$ one has $\tilde{\bar{\theta}}(r_0)\ll 1$, so that $\bar{\nu}'_\mu \simeq \bar{\nu}_{2M}$. As there is no level crossing for antineutrinos and, in addition, their evolution is adiabatic in the Sun, $\bar{\nu}_{2M}$ propagates through the Sun without noticeable transformation into $\bar{\nu}_{1M}$.
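The smallness of $\tilde{\bar{\theta}}(r_0)$ at high energies can be checked numerically; in the sketch below (ours) the value $V_e(r_0) \simeq 7\times 10^{-12}$ eV is an assumed representative number, consistent with the potentials quoted earlier, and the oscillation parameters are those used in the figures:

```python
import math

# Effective antineutrino mixing angle at the production point r0.
DM2_21 = 7.5e-5            # eV^2
S12SQ, S13SQ = 0.32, 0.022
VE_R0 = 7.0e-12            # eV, assumed matter potential near r0

def theta_bar(E_MeV):
    """Effective antineutrino mixing angle at r0 for neutrino energy E [MeV]."""
    delta = DM2_21 / (4.0 * E_MeV * 1.0e6)   # eV
    th12 = math.asin(math.sqrt(S12SQ))
    tan2 = math.sin(2.0 * th12) / (
        math.cos(2.0 * th12) + (1.0 - S13SQ) * VE_R0 / (2.0 * delta))
    return 0.5 * math.atan(tan2)

print(theta_bar(10.0))  # ~0.2 rad at E = 10 MeV: nubar'_mu ~ nubar_2M
```

The angle decreases with energy, since the matter term in the denominator grows as $E$ increases.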
Because matter density essentially vanishes at $r=R_\odot$, matter eigenstates become mass eigenstates there, and therefore antineutrinos emerge at the surface of the Sun as $\bar{\nu}_{2}$. This can be seen in Figure \ref{fig:inside-norm}, where we show the appearance probabilities for $\bar{\nu}_1$ and $ \bar{\nu}_2$ (left panel) and for the matter eigenstates $\bar{\nu}_{1M}$ and $ \bar{\nu}_{2M}$ (right panel), normalised to the unit sum.
For $r=0.1R_{\odot}$, which is relatively close to the neutrino production point, most of the antineutrinos are $\bar{\nu}_{2M}$, which is a nontrivial combination of $\bar{\nu}_{1}$ and $\bar{\nu}_2$.
At the surface of the Sun, the antineutrinos emerge as $\bar{\nu}_{2M}$ as well, which coincides there with $\bar{\nu}_2$. This is in accord with left panel of Figure \ref{fig:inside-bases}, which shows that at $r=R_\odot$ we mainly find $\bar{\nu}_2$.
As $\bar{\nu}_2$ is a linear combination of $\bar{\nu}_e'$ and $\bar{\nu}_\mu'$ with weights $\sin^2\theta_{12}\simeq 1/3$ and $\cos^2\theta_{12}\simeq 2/3$ respectively, at the surface of the Sun the appearance probability of $\bar{\nu}_\mu'$ is about twice that of $\bar{\nu}_e'$ (right panel of Figure \ref{fig:inside-bases}).
It should be noted that, unlike for the normalised probabilities shown in Figure \ref{fig:inside-norm}, the sum of the antineutrino appearance probabilities presented in Figure \ref{fig:inside-bases} is not conserved; this is because some of the antineutrinos can precess back to neutrinos in the course of their evolution inside the Sun.
\begin{figure}
\includegraphics[width = 0.98\textwidth]{figures/inside_sun_normalised.pdf}
\caption{Two-flavour evolution of antineutrino appearance probabilities inside the Sun for mass eigenstates (left panel) and for the matter eigenstates (right panel), normalised to the unit total antineutrino appearance probability. Assumptions regarding neutrino energy, magnetic moments and the $\nu_e$ production coordinate are the same as in Figure
\ref{fig:inside-bases}.}
\label{fig:inside-norm}
\end{figure}
\subsection{The roles of various transition magnetic moments and of the magnetic field profile}
Up to this point, in our numerical analysis we have assumed that only one transition magnetic moment, $\mu_{e'\mu'}=\mu_{12}$, is nonzero. This was motivated by our analytical results, which showed that the contributions of $\mu_{e'\tau'}$ and $\mu_{\mu'\tau'}$, which are linear combinations of $\mu_{13}$ and $\mu_{23}$, are strongly suppressed.
To illustrate this point, in Figure \ref{fig:comparison} we present the $\bar{\nu}_e$ appearance probability $P(\nu_e\to \bar{\nu}_e)$ at the Earth when only one magnetic moment at a time is allowed to be nonzero. It clearly demonstrates that, unless $\mu_{13}$ or $\mu_{23}$ is more than three orders of magnitude larger than $\mu_{12}$, the latter completely dominates the $\nu_e\to \bar{\nu}_e$ conversion.
\begin{figure}[b]
\centering
\includegraphics[width = 0.6\textwidth]
{figures/Peebar-AGSS09-2020-comparison-mu.pdf}
\caption{Electron antineutrino appearance probability on the Earth as a function of neutrino energy for nonzero $\mu_{12}$, $\mu_{13}$ and $\mu_{23}$, for the AGSS09 solar model. Evolution equations were solved numerically.}
\label{fig:comparison}
\end{figure}
In the above, all the numerical results were obtained for the simple linear model of the solar magnetic field, $B_0(r)$ of eq.~(\ref{eq:b0}). We shall now study the sensitivity of the $\nu_e\to \bar{\nu}_e$ conversion to the solar magnetic field profile.
To this end, we compare the results obtained for the linear profile we used above with those for two different parabolic profiles, $B_1(r)$ and $B_2(r)$. All the profiles are chosen to have the same strength $5\times 10^4$\,kG at $r=0.05R_\odot$ and to vanish at the surface of the Sun.
The profile
\begin{equation}
B_1(r) = 50000 + 2632 \frac{r}{R_\odot} - 52632 \left(\frac{r}
{R_\odot}\right)^2 \, {\rm kG} \,
\label{eqn:b1}
\end{equation}
is almost flat over the production region; the profile
\begin{equation}
B_2(r) = 55000 - 102368 \frac{r}{R_\odot} + 47368 \left(\frac{r}
{R_\odot}\right)^2 \, {\rm kG}
\label{eqn:b2}
\end{equation}
corresponds to the magnetic field that is smaller than the linear one for $r > 0.05R_\odot$.
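The common normalisation of the three model profiles can be checked directly (our sketch, with $r$ in units of $R_\odot$):

```python
# All three profiles take ~5e4 kG at r = 0.05 R_sun and vanish at the surface.
def b0(r): return 52600.0 * (1.0 - r)
def b1(r): return 50000.0 + 2632.0 * r - 52632.0 * r ** 2
def b2(r): return 55000.0 - 102368.0 * r + 47368.0 * r ** 2

for b in (b0, b1, b2):
    print(round(b(0.05)), round(b(1.0)))   # ~50000 kG and 0 kG, respectively
```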
In Figure \ref{fig:magn_profile} (left panel) we plot the magnetic field profiles we use. In the right panel the corresponding $\bar{\nu}_e$ appearance probabilities are shown. For neutrino energies $E\lesssim 7$\,MeV all the employed magnetic field profiles lead to $\bar{\nu}_e$ appearance probabilities that are quite close to each other. The sensitivity to the magnetic field profile increases with neutrino energy.
The reason for this is twofold. First, neutrinos are not all produced at the same distance from the centre of the Sun (such as e.g.\ $0.05R_\odot$, which we considered as a reference value for our estimates and where all our model magnetic field profiles coincide); their production actually takes place over an extended region, and the $\nu_e\to \bar{\nu}_e$ conversion probability is therefore sensitive to the magnetic field profile in that region.
Second, the $\bar{\nu}_e$ appearance probability depends on the ``mixing'' of the left-handed and right-handed neutrinos at the production point $r_0$, which is proportional to $\mu_{12}B_\perp(r_0)/(\Delta m_{21}^2/2E)$ and therefore increases with neutrino energy.
From Figure \ref{fig:magn_profile} it follows that for $E\sim 8$\,MeV (which is a typical energy of $^8$B neutrinos) one can expect the sensitivity of the $\bar{\nu}_e$ appearance probability to the choice of the magnetic field profile to be of the order of 10 $-$ 15\%.
\begin{figure}
\centering
\includegraphics[width = 0.99\textwidth]
{figures/solar_magnetic_profile.pdf}
\caption{Left panel: model magnetic field profiles inside the Sun,
as defined in eqs.~(\ref{eq:b0}), (\ref{eqn:b1}) and (\ref{eqn:b2}).
Right panel: the corresponding electron antineutrino appearance probabilities on the Earth. Results of the analytical expression are shown by the black dotted curve; for the rest of the curves the evolution equations were solved numerically. }
\label{fig:magn_profile}
\end{figure}
\subsection{Average electron antineutrino appearance probability and expected flux}
A number of experimental collaborations have reported upper limits on the $\bar{\nu}_e$ flux from astrophysical sources. These are obtained for certain energy ranges as
\begin{equation}
\Phi_{\rm C.L.} = \frac{N_{\rm C.L.}}{\epsilon \cdot \langle\sigma\rangle
\cdot T\cdot N_p} \, ,
\end{equation}
where $N_{\rm C.L.}$ is the upper limit on the number of events at a given confidence level, $\epsilon$ is the average detection efficiency in the energy range considered, $\langle\sigma\rangle$ is the averaged cross-section in the same energy range, $T$ is the exposure time and $N_p$ is the number of target particles.
In order to make it easier to use our results for analyses of the existing and future experimental data, we compute, for each energy bin $E \in [E_i-\Delta E/2, E_i+\Delta E/2]$, the averaged $\bar{\nu}_e$ appearance probability
\begin{equation}
\langle P_{i} \rangle = \frac{ \bigintss_{E_i - \Delta E/2}^{E_i
+\Delta E/2} \phi(E) \sigma(E) P(E) dE}{
\bigintss_{E_i - \Delta E/2}^{E_i +\Delta E/2} \phi(E) \sigma(E) dE} \, ,
\label{eq:Pi}
\end{equation}
and the expected $\bar{\nu}_e$ flux
\begin{equation}
\langle\Phi_{i}\rangle = \frac{ \bigintss_{E_i - \Delta E/2}^{E_i +
\Delta E/2} \phi(E) \sigma(E) P(E) dE}{ \bigintss_{E_i -
\Delta E/2}^{E_i +\Delta E/2} \sigma(E) dE} \,.
\label{eq:Phii}
\end{equation}
For simplicity, we have assumed that the detection efficiency $\epsilon$ is nearly energy independent within each bin (though it may vary from bin to bin); it then cancels out in the ratios (\ref{eq:Pi}) and (\ref{eq:Phii}).
We also assumed perfect detector energy resolution; we have checked that for the KamLAND experiment, taking into account the realistic energy resolution of 6.4\%/$\sqrt{E {\rm (MeV)}}$, changes our results by less than 0.5\%. This is related to the fact that the $\bar{\nu}_e$ appearance probability is a rather smooth function of neutrino energy (see the right panel of Figure~\ref{fig:magn_profile}).
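The bin averaging can be sketched in code; the spectrum, cross section and probability below are toy placeholders (not the SSM $^8$B spectrum or the actual inverse beta decay cross section), used only to illustrate how the weighting in eq.~(\ref{eq:Pi}) is carried out:

```python
import math

def bin_average(E_lo, E_hi, phi, sigma, P, n=10_000):
    """Flux-and-cross-section weighted mean of P over an energy bin; on a
    uniform grid the energy step dE cancels in the ratio."""
    h = (E_hi - E_lo) / n
    Es = [E_lo + (k + 0.5) * h for k in range(n)]   # midpoint rule
    num = sum(phi(E) * sigma(E) * P(E) for E in Es)
    den = sum(phi(E) * sigma(E) for E in Es)
    return num / den

phi = lambda E: math.exp(-E / 3.0)     # toy spectrum
sigma = lambda E: (E - 1.8) ** 2       # toy cross section above threshold
P = lambda E: 1.0e-13 * E              # toy, slowly rising probability

print(bin_average(7.8, 8.8, phi, sigma, P))  # close to P at the bin centre
```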
We restrict ourselves to energies above 1.8 MeV, where only $^8$B neutrinos give a significant contribution to the solar neutrino signal, and we consider the inverse beta decay as the $\bar{\nu}_e$ detection process.
We compute the $\bar{\nu}_e$ appearance probability and the expected flux numerically, using both the numerical and analytical expressions for the probabilities $P(\nu_e\to\bar{\nu}_e)$.
In Tables \ref{tab:estimatesAGSS09} and \ref{tab:estimatesGS98} we present these probabilities and the expected $\bar{\nu}_e$ fluxes for the fixed values $\mu_{12} = 10^{-12} \mu_B$ and $B_\perp(r_0) = 1$ kG. As the $\bar{\nu}_e$ appearance probability and the $\bar{\nu}_e$ flux are proportional to $(\mu_{12}B_\perp(r_0))^2$, the values of $\langle P_i\rangle$ and $\langle \Phi_i\rangle$ for different $\mu_{12}B_\perp(r_0)$ can be found by simple rescaling.
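The rescaling amounts to a one-line helper (ours; the example value is the numerical AGSS09 entry for the 7.8 -- 8.8 MeV bin):

```python
# Table entries were computed for mu12 * B_perp(r0) = 1e-12 mu_B * kG and
# scale quadratically in that product.
def rescale(table_value, mu12_B, mu12_B_ref=1.0):
    """Rescale a tabulated <P_i> or <Phi_i> to a new value of
    mu12 * B_perp(r0), both given in units of 1e-12 mu_B * kG."""
    return table_value * (mu12_B / mu12_B_ref) ** 2

# e.g. <P> in the 7.8-8.8 MeV bin for mu12 = 1e-12 mu_B and a 100 kG field:
print(rescale(8.19e-13, 100.0))  # 8.19e-9
```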
\begin{table}
\centering
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{c|c|c|c|c|} \cline{2-5}
& \multicolumn{2}{c|}{Numerical AGSS09} & \multicolumn{2}{c|}
{Analytical AGSS09}
\\ \hline
\multicolumn{1}{|c|}{E [MeV]} & $\langle P_{i} \rangle$ & $\langle\Phi_{i}
\rangle [\text{cm}^{-2} s^{-1} \text{MeV}^{-1}]$ & $\langle P_{i}
\rangle$ & $\langle\Phi_{i}\rangle [\text{cm}^{-2} s^{-1} \text{MeV}^{-1}]$
\\ \hline \hline
\multicolumn{1}{|c|}{1.8 - 2.8} & 1.73$\times 10^{-13}$ &
4.62$\times 10^{-8}$ & 5.08$\times 10^{-14}$ & 1.36$\times
10^{-8}$ \\
\multicolumn{1}{|c|}{2.8 - 3.8} & 2.95$\times 10^{-13}$ &
1.18$\times 10^{-7}$ & 1.45$\times 10^{-13}$ & 5.82$\times
10^{-8}$ \\
\multicolumn{1}{|c|}{3.8 - 4.8} & 4.28$\times 10^{-13}$ &
2.24$\times 10^{-7}$ & 2.97$\times 10^{-13}$ & 1.55$
\times 10^{-7}$ \\
\multicolumn{1}{|c|}{4.8 - 5.8} & 5.52$\times 10^{-13}$ &
3.34$\times 10^{-7}$ & 4.73$\times 10^{-13}$ & 2.86$
\times 10^{-7}$ \\
\multicolumn{1}{|c|}{5.8 - 6.8} & 6.57$\times 10^{-13}$ &
4.19$\times 10^{-7}$ & 6.38$\times 10^{-13}$ & 4.07$
\times 10^{-7}$ \\
\multicolumn{1}{|c|}{6.8 - 7.8} & 7.45$\times 10^{-13}$ &
4.62$\times 10^{-7}$ & 7.74$\times 10^{-13}$ &
4.80$\times 10^{-7}$ \\
\multicolumn{1}{|c|}{7.8 - 8.8} & 8.19$\times 10^{-13}$ &
4.55$\times 10^{-7}$ & 8.78$\times 10^{-13}$ &
4.88$\times 10^{-7}$ \\
\multicolumn{1}{|c|}{8.8 - 9.8} & 8.84$\times 10^{-13}$ &
4.03$\times 10^{-7}$ & 9.53$\times 10^{-13}$ &
4.35$\times 10^{-7}$ \\
\multicolumn{1}{|c|}{9.8 - 10.8} & 9.38$\times 10^{-13}$ &
3.15$\times 10^{-7}$ & 1.01$\times 10^{-12}$ &
3.38$\times 10^{-7}$ \\
\multicolumn{1}{|c|}{10.8 - 11.8} & 9.87$\times 10^{-13}$ &
2.11$\times 10^{-7}$ & 1.04$\times 10^{-12}$ &
2.23$\times 10^{-7}$ \\
\multicolumn{1}{|c|}{11.8 - 12.8} & 1.04$\times 10^{-12}$ &
1.12$\times 10^{-7}$ & 1.07$\times 10^{-12}$ &
1.16$\times 10^{-7}$ \\
\multicolumn{1}{|c|}{12.8 - 13.8} & 1.08$\times 10^{-12}$ &
3.87$\times 10^{-8}$ & 1.08$\times 10^{-12}$ &
3.89$\times 10^{-8}$ \\
\multicolumn{1}{|c|}{13.8 - 14.8} & 1.12$\times 10^{-12}$ &
5.77$\times 10^{-9}$ & 1.09$\times 10^{-12}$ &
5.63$\times 10^{-9}$ \\
\multicolumn{1}{|c|}{14.8 - 15.8} & 1.16$\times 10^{-12}$ &
2.83$\times 10^{-10}$ & 1.10$\times 10^{-12}$ &
2.68$\times 10^{-10}$ \\ \hline
\end{tabular}
\caption{\label{tab:estimatesAGSS09} Averaged $\bar{\nu}_e$ appearance probabilities and expected fluxes of $\bar{\nu}_e$ from the Sun for low-metallicity AGSS09 SSM. Detection through inverse beta decay is assumed; magnetic field profile (\ref{eq:b0}) and $\mu_{12}B_\perp(r_0) =10^{-12}\mu_B
\cdot {\rm kG}$ were chosen.
For rescaling to different values of $\mu_{12}B_\perp(r_0)$ see text.
}
\end{table}
\begin{table}
\centering
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{c|c|c|c|c|} \cline{2-5}
& \multicolumn{2}{c|}{Numerical GS98} & \multicolumn{2}{c|}{Analytical GS98}
\\ \hline
\multicolumn{1}{|c|}{E [MeV]} & $\langle P_{i} \rangle$ & $\langle\Phi_{i}
\rangle [\text{cm}^{-2} s^{-1} \text{MeV}^{-1}]$ & $\langle P_{i} \rangle$
& $\langle\Phi_{i}\rangle [\text{cm}^{-2} s^{-1} \text{MeV}^{-1}]$
\\ \hline \hline
\multicolumn{1}{|c|}{1.8 - 2.8} & 2.03$\times 10^{-13}$ & 6.60$\times 10^{-8}$ & 5.43$\times 10^{-14}$ & 1.76$\times 10^{-8}$ \\
\multicolumn{1}{|c|}{2.8 - 3.8} & 3.43$\times 10^{-13}$ & 1.67$\times 10^{-7}$ & 1.55$\times 10^{-13}$ & 7.56$\times 10^{-8}$ \\
\multicolumn{1}{|c|}{3.8 - 4.8} & 4.92$\times 10^{-13}$ & 3.12$\times 10^{-7}$ & 3.19$\times 10^{-13}$ & 2.02$\times 10^{-7}$ \\
\multicolumn{1}{|c|}{4.8 - 5.8} & 6.27$\times 10^{-13}$ & 4.60$\times 10^{-7}$ & 5.06$\times 10^{-13}$ & 3.72$\times 10^{-7}$ \\
\multicolumn{1}{|c|}{5.8 - 6.8} & 7.41$\times 10^{-13}$ & 5.73$\times 10^{-7}$ & 6.82$\times 10^{-13}$ & 5.28$\times 10^{-7}$ \\
\multicolumn{1}{|c|}{6.8 - 7.8} & 8.39$\times 10^{-13}$ & 6.31$\times 10^{-7}$ & 8.26$\times 10^{-13}$ & 6.21$\times 10^{-7}$ \\
\multicolumn{1}{|c|}{7.8 - 8.8} & 9.23$\times 10^{-13}$ & 6.22$\times 10^{-7}$ & 9.35$\times 10^{-13}$ & 6.30$\times 10^{-7}$ \\
\multicolumn{1}{|c|}{8.8 - 9.8} & 9.96$\times 10^{-13}$ & 5.51$\times 10^{-7}$ & 1.0$\times 10^{-12}$ & 5.60$\times 10^{-7}$ \\
\multicolumn{1}{|c|}{9.8 - 10.8} & 1.06$\times 10^{-12}$ & 4.33$\times 10^{-7}$ & 1.07$\times 10^{-12}$ & 4.35$\times 10^{-7}$ \\
\multicolumn{1}{|c|}{10.8 - 11.8} & 1.12$\times 10^{-12}$ & 2.92$\times 10^{-7}$ & 1.10$\times 10^{-12}$ & 2.87$\times 10^{-7}$ \\
\multicolumn{1}{|c|}{11.8 - 12.8} & 1.18$\times 10^{-12}$ & 1.55$\times 10^{-7}$ & 1.13$\times 10^{-12}$ & 1.48$\times 10^{-7}$ \\
\multicolumn{1}{|c|}{12.8 - 13.8} & 1.24$\times 10^{-12}$ & 5.38$\times 10^{-8}$ & 1.14$\times 10^{-12}$ & 4.99$\times 10^{-8}$ \\
\multicolumn{1}{|c|}{13.8 - 14.8} & 1.29$\times 10^{-12}$ & 8.06$\times 10^{-9}$ & 1.15$\times 10^{-12}$ & 7.21$\times 10^{-9}$ \\
\multicolumn{1}{|c|}{14.8 - 15.8} & 1.34$\times 10^{-12}$ & 3.98$\times 10^{-10}$ & 1.16$\times 10^{-12}$ & 3.43$\times 10^{-10}$ \\ \hline
\end{tabular}
\caption{\label{tab:estimatesGS98} Same as in Table \ref{tab:estimatesAGSS09} but for the high-metallicity GS98 SSM.
}
\end{table}
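As noted in the table captions, the tabulated values can be rescaled to other values of $\mu_{12}B_\perp(r_0)$. Since the appearance probability, and hence the flux, scales quadratically with this product, the rescaling is straightforward; a minimal sketch in Python (the example flux value is taken from Table \ref{tab:estimatesAGSS09}):

```python
# The tabulated probabilities and fluxes were computed for the reference
# value mu_12 * B_perp(r0) = 1e-12 mu_B * kG and scale quadratically with
# this product, so they can be rescaled to any other value.

REF_PRODUCT = 1.0e-12  # mu_B * kG

def rescale(value_ref, product):
    """Rescale a tabulated probability or flux to a new mu_12 * B_perp(r0)."""
    return value_ref * (product / REF_PRODUCT) ** 2

# Example: the AGSS09 numerical flux in the 11.8-12.8 MeV bin,
# rescaled to 2.5e-9 mu_B * kG:
flux = rescale(1.12e-7, 2.5e-9)  # ~0.7 cm^-2 s^-1 MeV^-1
```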
For better illustration, we also compare in Figure \ref{fig:pav_flux} the $\bar{\nu}_e$ appearance probabilities (left panel) and the predicted $\bar{\nu}_e$ fluxes at the Earth (right panel) obtained numerically and analytically for the case of AGSS09 SSM, magnetic field strength of eq.~(\ref{eq:b0}) and $\mu_{12} = 10^{-12} \mu_B$.
It can be seen from the figure that for neutrino energies $E\gtrsim 6$ MeV there is a good agreement between our numerical and analytical results; the agreement worsens towards smaller $E$. Thus, while our simple analytical results can be reliably used at relatively high neutrino energies, numerical results should preferably be used for analysing experiments sensitive to the low-$E$ part of the solar neutrino spectrum, such as Borexino.
\begin{figure}
\centering
\includegraphics[width = 0.98\textwidth]{figures/Pav_flux_AGSS09.pdf}
\caption{Comparison of the numerical and analytical results for $\bar{\nu}_e$ appearance for the AGSS09 SSM. Left panel: appearance probabilities; right panel: expected $\bar{\nu}_e$ fluxes. The magnetic field of eq.~(\ref{eq:b0}) and $\mu_{12} = 10^{-12} \mu_B$ were chosen.}
\label{fig:pav_flux}
\end{figure}
\subsection{Existing limits from astrophysical $\bar{\nu}_e$ fluxes revisited}
We will revisit here the existing limits on neutrino magnetic moments and solar magnetic fields coming from the upper bounds on astrophysical $\bar{\nu}_e$ fluxes and compare them with our results.
At present, the most stringent limits come from the KamLAND experiment \cite{KamLAND:2021gvi}, although Borexino and Super-Kamiokande set comparable bounds \cite{Borexino:2019wln, Super-Kamiokande:2020frs,Super-Kamiokande:2021jaq}. In all these experiments the detection channel was inverse beta decay on protons.
Historically, SNO also put constraints on astrophysical $\bar{\nu}_e$ in the MeV energy range using charged-current interactions with deuterium \cite{SNO:2004eru}, but these limits are not currently competitive.
The model-independent limits on the $\bar{\nu}_e$ flux established by the above-mentioned experiments are shown in Figure \ref{fig:limits}, together with our predicted solar $\bar{\nu}_e$ flux for the AGSS09 SSM and for $\mu_{12}B_\perp(r_0) = 2.5 \times 10^{-9}\mu_B\,$kG. Notice that the experimental bounds come closest to the predicted flux at neutrino energies $E\sim 10$ MeV.
Although the experimental bounds are stronger at energies around 20-30 MeV, the flux of solar neutrinos is extremely low for energies above 16 MeV. High-energy experimental bounds may, however, be relevant for constraining the flux of $\bar{\nu}_e$ from supernovae.
\begin{figure}
\centering
\includegraphics[width = 0.8\textwidth]{figures/limits_AGSS09.pdf}
\caption{Model-independent limits on the $\bar{\nu}_e$ flux of
astrophysical origin, as reported by KamLAND \cite{KamLAND:2021gvi}, Borexino \cite{Borexino:2019wln} and Super-Kamiokande
\cite{Super-Kamiokande:2020frs,Super-Kamiokande:2021jaq}. For comparison, we show the expected solar $\bar{\nu}_e$ flux for $\mu_{12}B_\perp(r_0)= 2.5 \times 10^{-9}\mu_B\,$kG for the AGSS09 SSM, from our analytical and numerical calculations.}
\label{fig:limits}
\end{figure}
The 90$\%$ C.L. upper limits on the product of the neutrino magnetic moment and the solar magnetic field strength that we obtain from the KamLAND upper bound on the astrophysical $\bar{\nu}_e$ flux are, for the two SSMs considered,
\begin{align}
\left(\mu_{12} B_\perp(r_0)\right)_{\rm AGSS09} &< \left( 4.9-5.1\right)
\times 10^{-9} \mu_B\,\text{kG}\,, \nonumber
\\
\left(\mu_{12} B_\perp(r_0)\right)_{\rm GS98} &< \left( 4.7 -4.8\right)
\times 10^{-9} \mu_B\,\text{kG}\,.
\label{eqn:KamLANDlimit}
\end{align}
Here the lower numbers correspond to our analytical approximation and the higher ones, to the full numerical calculation.
A good general agreement between the results of the two approaches can be seen. The obtained results are also consistent with the limits derived in \cite{KamLAND:2021gvi}, $\mu B_\perp(r_0) < 4.9 \times 10^{-9} \mu_B\,{\rm kG}$, where the previous analytical calculation from ref.~\cite{Akhmedov:2002mf} was used.
Similarly, one can derive the 90$\%$ C.L. limits from the Borexino results,
\begin{align}
\left(\mu_{12} B_\perp(r_0)\right)_{\rm AGSS09} &< \left( 1.8-1.9\right)
\times 10^{-8} \mu_B\,\text{kG}\,, \nonumber
\\
\left(\mu_{12} B_\perp(r_0)\right)_{\rm GS98} &< \left( 1.7-1.8\right)\times
10^{-8} \mu_B\,\text{kG}\,,
\label{eq:BorLimitOurs}
\end{align}
whereas from the Super-Kamiokande results we find
\begin{align}
\left(\mu_{12} B_\perp(r_0)\right)_{\rm AGSS09} &< \left( 7.1-7.3\right)
\times 10^{-9} \mu_B \, \text{kG}\,, \nonumber
\\
\left(\mu_{12} B_\perp(r_0)\right)_{\rm GS98} &< \left( 6.8-6.9\right)
\times 10^{-9} \mu_B \, \text{kG} \, .
\label{eq:SK}
\end{align}
The previously obtained Borexino limit, derived in \cite{Borexino:2019wln} for the high-metallicity GS98 SSM, was $\mu B_\perp(r_0) < 6.9 \times 10^{-9} \mu_B \cdot {\rm kG}$. The factor $\sim 2.6$ discrepancy between this result and our limit (\ref{eq:BorLimitOurs}) is presumably related to the fact that in the Borexino analysis the simplified energy-independent formula (25) from \cite{Akhmedov:2002mf}, derived for $E\sim 5-10$ MeV, was used for neutrinos of smaller energies, i.e.\ outside its range of validity. As a result, Borexino arrived at a more stringent limit.
For the Super-Kamiokande experiment, the limit found in
\cite{Super-Kamiokande:2020frs}, $\mu B_\perp(r_0) < 1.5 \times 10^{-8} \mu_B \, {\rm kG}$, is approximately a factor 2 weaker than our limit (\ref{eq:SK}). This difference is probably due to the fact that Super-Kamiokande looked for electron antineutrinos in the energy range 9.3 to 17.3 MeV but used in their analysis the same simplified energy-independent $\bar{\nu}_e$ appearance probability that was derived in \cite{Akhmedov:2002mf} for smaller energies.
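The way such limits follow from an experimental flux bound can be sketched as follows. Owing to the quadratic scaling of the predicted flux with $\mu_{12}B_\perp(r_0)$, each energy bin yields $\mu_{12}B_\perp(r_0) < (\mu B)_{\rm ref}\sqrt{\Phi^{\rm exp}/\Phi^{\rm pred}}$, and the most constraining bin sets the overall limit. The numbers below are placeholders for illustration, not the actual experimental bounds:

```python
import math

REF = 1.0e-12  # mu_B * kG: reference product used for the predicted fluxes

def mu_b_limit(flux_bounds, flux_pred_ref):
    """Limit on mu_12*B_perp(r0) from per-bin experimental flux bounds,
    exploiting the quadratic scaling of the predicted flux; the most
    constraining bin sets the overall limit."""
    return REF * min(math.sqrt(b / p) for b, p in zip(flux_bounds, flux_pred_ref))

# Placeholder numbers for illustration only (cm^-2 s^-1 MeV^-1):
bounds = [5.0e-2, 1.0e-2]   # hypothetical 90% C.L. upper bounds on the flux
pred = [6.3e-7, 4.3e-7]     # predicted fluxes at the reference product
limit = mu_b_limit(bounds, pred)   # ~1.5e-10 mu_B * kG
```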
\section{Other limits on neutrino magnetic moments \label{sec:limits}}
In this section we give an overview of the existing limits on neutrino magnetic moments coming from various experimental searches, paying special attention to the relations between the experimentally accessible quantities and neutrino magnetic moments or their combinations.
Neutrino magnetic moment contributions to the cross-sections are often parametrised in terms of effective magnetic moments. These quantities depend on Dirac versus Majorana nature of neutrinos, on the flavour of the incoming neutrinos, and may also depend on other experimental details; in particular, flavour transitions on the way between the neutrino source and detector may have to be taken into account.
We clarify how these effective quantities are related to each other and to the fundamental neutrino magnetic moments in the mass and flavour bases.
\subsection{Limits from electromagnetic contributions to scattering processes}
Photon exchange processes induced by neutrino magnetic moments can affect neutrino scattering processes, such as e.g.\ elastic neutrino-electron scattering (ES) and coherent elastic neutrino-nucleus scattering (CE$\nu$NS). Since the neutrino magnetic dipole moment interactions flip the neutrino chirality while the Standard Model weak interactions conserve it, these contributions add up incoherently.
Following the formalism in \cite{Grimus:2000tq, Grimus:2002vb}, one can express the effective neutrino magnetic moment as
\begin{equation}
\mu^2_{\nu_\alpha} = \nu^\dagger_{L}\, (\upmu^\dagger \upmu) \, \nu_{L} +
\nu^\dagger_{R}\, (\upmu\upmu^\dagger) \, \nu_{R} \, ,
\label{eqn:effective_mu}
\end{equation}
where $\nu_{L}$ and $\nu_{R}$ denote the vectors of the amplitudes of the incoming neutrinos with left- and right-handed chiralities, respectively, and $\upmu$ is the matrix of neutrino magnetic moments.
For Majorana neutrinos, the transformation between the neutrino amplitudes and magnetic moments in the mass-eigenstate and flavour bases is given by
\begin{equation}
\nu_{m, L} = U^\dagger \nu_{fl, L} \quad \quad \nu_{m, R} = U^T
\nu_{fl, R} \quad \quad \upmu_m = U^T\upmu_{fl} U \,,
\label{eq:MajTransf}
\end{equation}
in an obvious notation. For Dirac neutrinos, the mass matrix is in general diagonalised by a bi-unitary transformation with separate rotations for the left-handed and right-handed fields, so that the amplitudes and the magnetic moment matrix transform as
\begin{equation}
\nu_{m,L} = U_{L}^\dagger \nu_{fl, L}\,, \quad \quad \nu_{m, R} =
U_{R}^\dagger \nu_{fl, R}\,, \quad \quad \upmu_m = U_{R}^\dagger \upmu_{fl} U_{L}\,.
\label{eq:DirTransf}
\end{equation}
{}From eqs.~(\ref{eq:MajTransf}) and (\ref{eq:DirTransf}) it is easy to see that the expression for the effective neutrino magnetic moment in eq.~(\ref{eqn:effective_mu}) is basis-independent, as any observable should be. It is also valid for both the Majorana and Dirac neutrino cases, as long as the final-state neutrinos are not detected in the scattering processes.
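This basis independence can also be verified numerically. A minimal NumPy sketch for the Majorana case (the unitary matrix, moment matrix and amplitude vectors below are random placeholders, not fitted values) checks that eq.~(\ref{eqn:effective_mu}) gives the same value before and after the transformation (\ref{eq:MajTransf}):

```python
import numpy as np

# Numerical check that the effective magnetic moment is invariant under
# the Majorana basis change: nu_L -> U^dag nu_L, nu_R -> U^T nu_R,
# mu -> U^T mu U.
rng = np.random.default_rng(0)
rc = lambda s: rng.normal(size=s) + 1j * rng.normal(size=s)

U, _ = np.linalg.qr(rc((3, 3)))   # random unitary mixing matrix
A = rc((3, 3))
mu_fl = A - A.T                   # Majorana moments: antisymmetric matrix
nuL_fl, nuR_fl = rc(3), rc(3)     # incoming-amplitude vectors, flavour basis

def mu2_eff(mu, nuL, nuR):
    """Effective moment squared: nuL^dag (mu^dag mu) nuL + nuR^dag (mu mu^dag) nuR."""
    return (nuL.conj() @ (mu.conj().T @ mu) @ nuL
            + nuR.conj() @ (mu @ mu.conj().T) @ nuR).real

# Transform everything to the mass basis and compare
mu_m = U.T @ mu_fl @ U
nuL_m, nuR_m = U.conj().T @ nuL_fl, U.T @ nuR_fl

assert np.isclose(mu2_eff(mu_fl, nuL_fl, nuR_fl),
                  mu2_eff(mu_m, nuL_m, nuR_m))
```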
\subsubsection{Effective neutrino magnetic moments for short-baseline experiments}
For short-baseline scattering experiments, where the distance from the source to the detector is much shorter than the oscillation lengths $L_{ij} = 4\pi E/\Delta m^2_{ij}$, the oscillations do not have time to develop. The effective magnetic moment that enters into the cross-section of ES and CE$\nu$NS is then
\begin{align}
\mu^2_{\nu_\alpha{\rm SB}} = (\upmu^\dagger \, \upmu )_{\alpha \alpha}
= \mu^2_{\bar{\nu}_\alpha{\rm SB}}\,.
\end{align}
Expressed in terms of the elements of the neutrino magnetic moment matrix in the mass basis, for Dirac neutrinos this effective magnetic moment takes the form
\begin{flalign}
& \mu^2_{\nu_\alpha{\rm SB}} = |U_{\alpha1}|^2\left(|\mu_{11}|^2 +
|\mu_{21}|^2+ |\mu_{31}|^2\right) + |U_{\alpha2}|^2\left(|\mu_{12}|^2 +
|\mu_{22}|^2+ |\mu_{32}|^2\right)\nonumber \\ &+|U_{\alpha3}|^2
\left(|\mu_{13}|^2 + |\mu_{23}|^2+ |\mu_{33}|^2\right)+2\text{ Re}\lbrace
U_{\alpha1}U^*_{\alpha2} \left(\mu^*_{11} \mu_{12} + \mu_{21}^* \mu_{22} +
\mu^*_{31} \mu_{32}\right) \rbrace \nonumber \\ &+
2\text{ Re}\lbrace U_{\alpha1}U^*_{\alpha3} \left(\mu^*_{11} \mu_{13} +
\mu^*_{21} \mu_{23} + \mu^*_{31} \mu_{33}\right)+ U_{\alpha2}U^*_{\alpha3}
\left(\mu^*_{12} \mu_{13} + \mu^*_{22} \mu_{23} + \mu^*_{32} \mu_{33}\right)
\rbrace \,,
\end{flalign}
whereas for Majorana neutrinos,
\begin{flalign}
\mu^2_{\nu_\alpha{\rm SB}} &= |U_{\alpha1}|^2\left(|\mu_{12}|^2+
|\mu_{13}|^2\right) + |U_{\alpha2}|^2\left(|\mu_{12}|^2 +
|\mu_{23}|^2\right) +|U_{\alpha3}|^2\left(|\mu_{13}|^2 +
|\mu_{23}|^2\right) \nonumber \\ & +2\text{ Re}\lbrace
U^*_{\alpha1}U_{\alpha2} \mu_{13} \mu^*_{23}\rbrace - 2\text{ Re}\lbrace
U^*_{\alpha1}U_{\alpha3} \mu_{12} \mu^*_{23} \rbrace + 2\text{ Re}
\lbrace U^*_{\alpha2}U_{\alpha3}\mu_{12} \mu^*_{13} \rbrace \, .
\end{flalign}
Expressed through the elements of $\upmu$ in the flavour basis, the effective magnetic moment accessible in short-baseline experiments with the incoming neutrino $\nu_\alpha$ looks much simpler:
\begin{align}
\mu^2_{\nu_\alpha{\rm SB}} = \sum_\beta |\mu_{\beta\alpha}|^2 \qquad \quad &
\text{for Dirac or Majorana neutrinos}\,.
\label{eqn:sb_flavour_mu}
\end{align}
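As a cross-check of the algebra above, one can verify numerically that the lengthy mass-basis expansions collapse to this simple flavour-basis form. A sketch for the Dirac case, with random unitary matrices and a random moment matrix as placeholders:

```python
import numpy as np

# Check: sum_beta |mu_fl[beta, alpha]|^2 (flavour basis) equals
# (U_L mu_m^dag mu_m U_L^dag)_{alpha alpha}, i.e. the mass-basis
# expansion for Dirac neutrinos with mu_fl = U_R mu_m U_L^dag.
rng = np.random.default_rng(1)
rc = lambda s: rng.normal(size=s) + 1j * rng.normal(size=s)

UL, _ = np.linalg.qr(rc((3, 3)))  # left-handed mixing matrix
UR, _ = np.linalg.qr(rc((3, 3)))  # right-handed mixing matrix
mu_m = rc((3, 3))                 # generic Dirac moment matrix, mass basis
mu_fl = UR @ mu_m @ UL.conj().T   # invert the Dirac basis transformation

alpha = 0  # nu_e row
flavour_form = np.sum(np.abs(mu_fl[:, alpha]) ** 2)
mass_form = (UL @ (mu_m.conj().T @ mu_m) @ UL.conj().T)[alpha, alpha].real

assert np.isclose(flavour_form, mass_form)
```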
The existing limits reported by the experimental collaborations are summarised in Table \ref{tab:sb_limits}. As can be seen from the Table, the short-baseline accelerator experiments have constrained all three effective magnetic moments, $\mu_{\nu_e}$, $\mu_{\nu_\mu}$ and $\mu_{\nu_\tau}$, whereas reactor experiments have set upper bounds on $\mu_{\nu_e}$ using both ES and CE$\nu$NS.
\begin{table}
\centering
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{|c|c|c|c|}
\hline
Experiment & Limit & Reference & Method \\ \hline \hline
LAMPF & $\mu_{\nu_e} < 1.08\times 10^{-9} \mu_B $ at 90\% C.L. &
\cite{Krakauer:1990cd} & Accelerator $\nu_e e^-$ \\
LSND & $\mu_{\nu_e} < 1.1\times 10^{-9} \mu_B $ at 90\% C.L. &
\cite{LSND:2001akn} & Accelerator $\nu_e e^-$ \\ \hline
Krasnoyarsk & $\mu_{\nu_e} < 1.4\times 10^{-10} \mu_B$ at 90\% C.L. &
\cite{Aleshin:2008zz} & Reactor $\bar{\nu}_e e^-$\\
ROVNO & $\mu_{\nu_e} < 1.9\times 10^{-10} \mu_B$ at 95\% C.L. &
\cite{Derbin:1993wy} & Reactor $\bar{\nu}_e e^-$\\
MUNU & $\mu_{\nu_e} < 9\times 10^{-11} \mu_B$ at 90\% C.L. &
\cite{MUNU:2005xnz} & Reactor $\bar{\nu}_e e^-$\\
TEXONO & $\mu_{\nu_e} < 7.4\times 10^{-11} \mu_B$ at 90\% C.L. &
\cite{TEXONO:2006xds} & Reactor $\bar{\nu}_e e^-$\\
GEMMA & $\mu_{\nu_e} < 2.9\times 10^{-11} \mu_B$ at 90\% C.L. &
\cite{Beda:2012zz} & Reactor $\bar{\nu}_e e^-$\\ \hline
CONUS & $\mu_{\nu_e} < 7.5 \times 10^{-11} \mu_B$ at 90\% C.L. &
\cite{Bonet:2022imz} & Reactor CE$\nu$NS \\
Dresden-II & $\mu_{\nu_e} < 2.2 \times 10^{-11} \mu_B$ at 90\% C.L. & \cite{Coloma:2022avw} & Reactor CE$\nu$NS \\ \hline
LAMPF & $\mu_{\nu_\mu} < 7.4\times 10^{-10} \mu_B$ at 90\% C.L. &
\cite{Krakauer:1990cd} & Accelerator $\nu_\mu e^-$ \\
BNL-E-0734 & $\mu_{\nu_\mu} < 8.5\times 10^{-10} \mu_B$ at 90\% C.L. &
\cite{Ahrens:1990fp} & Accelerator $\nu_\mu e^-$ \\
LSND & $\mu_{\nu_\mu} < 6.8 \times 10^{-10} \mu_B$ at 90\% C.L. &
\cite{LSND:2001akn} & Accelerator $\nu_\mu e^-$ \\ \hline
DONUT & $\mu_{\nu_\tau} < 3.9 \times 10^{-7} \mu_B$ at 90\% C.L. &
\cite{DONUT:2001zvi} & Accelerator $\nu_\tau e^-$ \\ \hline
\end{tabular}
\caption{Current experimental constraints on the effective neutrino magnetic moments from short-baseline accelerator and reactor experiments.}
\label{tab:sb_limits}
\end{table}
\subsubsection{Effective magnetic moments for solar neutrinos}
In this work we focused on the constraints on the product of the neutrino magnetic moments and solar magnetic field strength that can be obtained from non-observation of the solar $\bar{\nu}_e$ flux.
However, solar neutrino experiments can also constrain neutrino electromagnetic interactions through the study of the scattering of solar neutrinos on electrons. The effective magnetic moment probed in such experiments is different from the one accessible in short-baseline experiments, since in this case neutrino flavour transitions play an important role.
We have shown that for realistic values of the solar magnetic fields and neutrino magnetic moments the flux of right-handed solar (anti)neutrinos arriving at the Earth is much smaller than that of the left-handed neutrinos; therefore, their contribution can be safely neglected when considering
ES of solar neutrinos.%
\footnote{We have demonstrated this for Majorana neutrinos. However, from the consistency of the solar neutrino and KamLAND data, it is known that electromagnetic interactions cannot play a major role for solar neutrinos; thus, even for Dirac neutrinos, the amplitudes of the solar $\nu_R$ arriving at the Earth have to be much smaller than those of $\nu_L$.}
The expression for the effective magnetic moment accessible in solar neutrino experiments therefore depends only on the left-chirality amplitudes $\nu_L = (\nu_{eL} \; \nu_{\mu L} \; \nu_{\tau L})^T$, which can be obtained in the standard three-flavour picture. We find
\begin{align}
\mu^2_{\nu{\rm SOLAR}} & = \nu^\dagger _L (\upmu^\dagger \upmu)\nu_L
\nonumber \\
& = (|\mu_{11}|^2 + |\mu_{21}|^2 +|\mu_{31}|^2) |\nu_{1L}|^2 +
(|\mu_{12}|^2 + |\mu_{22}|^2 +|\mu_{32}|^2) |\nu_{2L}|^2 \nonumber \\
& + (|\mu_{13}|^2 + |\mu_{23}|^2 +|\mu_{33}|^2) |\nu_{3L}|^2 +
2\text{ Re}\lbrace (\mu^*_{11}\mu_{12} + \mu^*_{21} \mu_{22} +
\mu^*_{31}\mu_{32}) (\nu_{1L}\nu^*_{2L})\rbrace \nonumber \\ & +
2\text{ Re}\lbrace (\mu^*_{11}\mu_{13} + \mu^*_{21} \mu_{23} +
\mu^*_{31}\mu_{33}) (\nu_{1L}\nu^*_{3L})\rbrace \nonumber \\
&+2\text{ Re}\lbrace (\mu^*_{12}\mu_{13} + \mu^*_{22} \mu_{23} +
\mu^*_{32}\mu_{33}) (\nu_{2L}\nu^*_{3L})\rbrace \, .
\label{eq:munuSol1}
\end{align}
This expression is valid for both Dirac and Majorana neutrinos (it should be remembered that in the latter case the diagonal elements of the matrix $\upmu$ vanish).
Next, we note that the coherence of different neutrino mass eigenstates is lost on the way to the Earth, that is $\nu^*_{iL} \nu_{jL}$ averages to zero for $i \neq j$ \cite{Dighe:1999id,Grimus:2002vb}.
Taking into account that neutrino flavour conversion in the Sun is adiabatic, for the probabilities of finding the mass-eigenstate components of the solar neutrino flux at the Earth we find
\begin{flalign}
|\nu_{1L}|^2 = c^2_{13} \cos^2\tilde{\theta}\,, \quad
|\nu_{2L}|^2 = c^2_{13} \sin^2\tilde{\theta}\,, \quad \text{and} \quad
|\nu_{3L}|^2 = s^2_{13} \,,
\end{flalign}
where the mixing angle $\tilde{\theta}(r)$ was defined in eq.~(\ref{eq:tildetheta}) and the averaging over the coordinate of the neutrino production point in the Sun is implied.
{}From eq.~(\ref{eq:munuSol1}) we then find
\begin{align}
\mu^2_{\nu{\rm SOLAR}} &= (|\mu_{11}|^2 + |\mu_{21}|^2 +|\mu_{31}|^2)
c^2_{13} \cos ^2 \tilde{\theta} & \nonumber \\ & + (|\mu_{12}|^2 +
|\mu_{22}|^2 +|\mu_{32}|^2)c^2_{13} \sin ^2 \tilde{\theta} & \nonumber \\
& + (|\mu_{13}|^2 + |\mu_{23}|^2 +|\mu_{33}|^2)s^2_{13} & \quad
\text{for Dirac neutrinos.}
\label{eq:muDir}
\\
\mu^2_{\nu{\rm SOLAR}} &= |\mu_{12}|^2 c^2_{13} + |\mu_{13}|^2(c^2_{13}
\cos^2 \tilde{\theta} + s^2_{13}) & \nonumber \\ & +
|\mu_{23}|^2(c^2_{13} \sin^2 \tilde{\theta} + s^2_{13}) & \quad
\text{for Majorana neutrinos.}
\label{eq:muMaj}
\end{align}
For neutrino energies $E\lesssim 1$ MeV, solar matter effects can be neglected and $\cos\tilde{\theta} \simeq \cos\theta_{12}$. For $E\gtrsim 5-7$ MeV, $\tilde{\theta} \simeq \pi/2$. In general, to consistently extract the limits on neutrino magnetic moments from solar neutrino scattering measurements, it is important to carefully take the energy dependence of $\tilde{\theta}$ into account.
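For illustration, the Majorana expression of eq.~(\ref{eq:muMaj}) can be evaluated in the two limiting regimes just described. A minimal Python sketch, using approximate best-fit mixing angles and placeholder transition moments:

```python
import math

# Majorana-case effective moment of eq. (muMaj) in the low- and
# high-energy regimes. Mixing angles are approximate best-fit values;
# the transition moments below are placeholders in units of mu_B.
th12, th13 = math.radians(33.4), math.radians(8.6)
c13sq, s13sq = math.cos(th13) ** 2, math.sin(th13) ** 2

def mu2_solar_majorana(mu12, mu13, mu23, theta_tilde):
    ct, st = math.cos(theta_tilde) ** 2, math.sin(theta_tilde) ** 2
    return (abs(mu12) ** 2 * c13sq
            + abs(mu13) ** 2 * (c13sq * ct + s13sq)
            + abs(mu23) ** 2 * (c13sq * st + s13sq))

mu12, mu13, mu23 = 1e-12, 0.5e-12, 2e-12   # placeholder values
low_E = mu2_solar_majorana(mu12, mu13, mu23, th12)          # theta~ ~ theta_12
high_E = mu2_solar_majorana(mu12, mu13, mu23, math.pi / 2)  # theta~ ~ pi/2
```

Note that the $|\mu_{12}|^2$ term is independent of $\tilde{\theta}$, so the energy dependence enters only through $\mu_{13}$ and $\mu_{23}$.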
The limits derived by the Borexino and Super-Kamiokande collaborations are shown in Table \ref{tab:mu_solar}.
\begin{table}
\centering
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{|c|c|c|c|}
\hline
Experiment & Limit at 90\% C.L. & Reference & Energy range
\\ \hline \hline
Borexino & $\mu_{\nu{\rm SOLAR}} < 2.8 \times 10^{-11} \mu_B $ &
\cite{Borexino:2017fbd,Coloma:2022umy} & 0.19 MeV $-$ 2.93 MeV \\
\hline Super-Kamiokande & $\mu_{\nu{\rm SOLAR}} < 1.1 \times 10^{-10}
\mu_B $ & \cite{Super-Kamiokande:2004wqk} & 5 MeV $-$ 20 MeV \\ \hline
\end{tabular}
\caption{Limits on the effective neutrino magnetic moment from elastic scattering of solar neutrinos on electrons.}
\label{tab:mu_solar}
\end{table}
The excess of low-energy electron recoil events reported by the XENON1T Collaboration \cite{XENON:2020rca} can be interpreted as coming from the scattering of solar $pp$ neutrinos on electrons, with an effective neutrino magnetic moment
\begin{equation}
\mu_{\nu{\rm XENON1T}} \in (1.4,\; 2.9) \times 10^ {-11} \mu_B \quad
\text{at 90\% C.L.}
\label{eq:X1T}
\end{equation}
As the energies of $pp$ neutrinos are below 0.5 MeV,
the effective neutrino magnetic moments to which XENON1T is sensitive can be found from (\ref{eq:muDir}) and (\ref{eq:muMaj}) by setting $\tilde{\theta}=\theta_{12}$.
The obtained results are in accord with eq.~(10) of \cite{Miranda:2020kwy}.
It should be noted that it is possible to derive stronger limits on neutrino magnetic moments than those quoted in this subsection by combining the available data on neutrino scattering, see for instance \cite{Canas:2015yoa}, where the Majorana neutrino case was considered. Such analyses can also shed some light on the so-called blind spots in the neutrino parameter space \cite{Canas:2016kfy,Sierra:2021say}.
\subsection{Other limits from astrophysics and cosmology}
\subsubsection{Plasmon decay and related processes in astrophysical environments}
Photons in plasma (plasmons) have a nonzero effective mass and can therefore decay into neutrino-antineutrino pairs. The rate of such processes depends on the effective neutrino magnetic moment given by
\begin{equation}
\mu^2_{\nu{\rm PLASMON}} = \sum_{i,j} |\mu_{ij}|^2 \,.
\label{eqn:plasmon_mu}
\end{equation}
The plasmon decay process leads to increased energy loss in stellar environments.
By studying the impact of this extra energy loss on the luminosity of stars, one can derive bounds on the neutrino magnetic moment, see Table \ref{tab:mu_astro}. In red giants, plasmon decay would be an additional source of cooling, delaying helium ignition. Non-observation of such a delay has also been used to constrain magnetic moments \cite{Raffelt:1992pi}.
Besides that, additional energy losses would lead to a larger core mass at helium ignition and consequently, the tip of the red-giant branch (TRGB) would be brighter than predicted by the standard stellar models \cite{Raffelt:1999tx,Capozzi:2020cbu}.
There are also bounds on the neutrino magnetic moments from observations of the rate of change of the period of pulsating white dwarfs of spectral type DB (which have only helium absorption lines in their spectra) \cite{C_rsico_2014}.
There are other processes contributing to stellar cooling that are sensitive to neutrino magnetic moments: for instance, $\gamma e^- \rightarrow e^- \bar{\nu}\nu$, electron-positron annihilation to neutrinos $e^+ e^- \rightarrow \bar{\nu}\nu$ and bremsstrahlung $e^- (Ze) \rightarrow (Ze)e^- \bar{\nu}\nu$. Note that all these processes probe the same combination of neutrino magnetic moments as that probed by plasmon decay.
It has been shown that they could lead to considerable changes in the evolution of stars with masses between 7\,$M_\odot$ and 18\,$M_\odot$ \cite{Heger:2008er}. The resulting sensitivity to the magnetic moment $\mu_{\nu{\rm PLASMON}}$ is at the level of $(2-4) \times 10^{-11} \mu_B$.
\begin{table}
\centering
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{|c|c|c|}
\hline
Limit & Reference & Method \\ \hline \hline
$\mu_{\nu{\rm PLASMON}} < 1.2 \times 10^{-12} \mu_B $ at 95\% C.L. & \cite{Capozzi:2020cbu} & Tip of red-giant branch \\ \hline
$\mu_{\nu{\rm PLASMON}} < 1.0 \times 10^{-11} \mu_B $ at 95\% C.L. & \cite{C_rsico_2014} & Pulsating white dwarfs \\ \hline
$\mu_{\nu{\rm PLASMON}} < 2.2 \times 10^{-12} \mu_B $ at 95\% C.L. & \cite{Diaz:2019kim} & Luminosity \\ \hline
$\mu_{\nu{\rm PLASMON}} < 2.2 \times 10^{-12} \mu_B $ at 95\% C.L. & \cite{ARCEODIAZ20151} & Luminosity \\ \hline
\end{tabular}
\caption{Limits on effective neutrino magnetic moments from plasmon decays in stars.}
\label{tab:mu_astro}
\end{table}
\subsubsection{Limits from SN1987A}
If neutrinos are Dirac particles, their nonzero magnetic moments could lead to conversion of a significant fraction of supernova (SN) neutrinos and antineutrinos into (practically) sterile $\nu_R$ and $\bar{\nu}_L$.
For sufficiently high conversion efficiency, this would not be compatible with the observed neutrino signal from SN1987A. There are several processes that have been considered in this context and that could lead to a significant outflow of sterile neutrinos. In a hot and dense SN core, sterile neutrinos can be produced via neutrino scattering on electrons ($\nu_{L} e^ - \rightarrow \nu_Re^-$) and protons ($\nu_L p \rightarrow \nu_R p$) mediated by photon exchange, and similarly for $\bar{\nu}_R$ scattering. Once sterile neutrinos are produced, they will easily escape the SN, since their mean free path is much larger than the radius of the core. Limits based on this argument were found to be
\cite{PhysRevLett.61.27}
\begin{equation}
\mu_\nu \leq (0.1-1)\times10^{-12} \mu_B\,.
\end{equation}
A detailed analysis of chirality-flip neutrino scattering processes on electrons and protons in plasma, mediated by virtual plasmons, was carried out in \cite{Ayala:1999xn,Kuznetsov:2009we, Kuznetsov:2009zm}.
The following limits on flavour- and time-averaged Dirac neutrino magnetic moments were found in these papers for a number of SN models:
\begin{equation}
\mu_\nu < (1.1 -2.7)\times 10^{-12} \mu_B\,.
\end{equation}
It is difficult to interpret these results in terms of more fundamental quantities, since they involve weighting the contributions of different neutrino flavours according to their abundances, which vary with time.
The above limits were questioned in ref.~\cite{Bar:2019ifz}. The authors argued that a cooling proto-neutron star is not the only possible source of neutrino emission in core-collapse SN. If the canonical delayed neutrino mechanism failed to explode SN1987A, and if the pre-collapse star was rotating, an accretion disk could form.
Neutrinos from SN1987A could have been emitted from such an accretion disk rather than from the SN core. As the disk should be optically thin to neutrinos, their electromagnetic interactions would play a negligible role, and so would the additional energy loss in the form of sterile neutrinos.
\subsubsection{Conversion of $\nu_e$ from supernova neutronisation burst into $\bar{\nu}_e$}
Similarly to $\nu_e\to \bar{\nu}_e$ conversion of solar neutrinos discussed in this paper, electron neutrinos produced in SN can be converted into electron antineutrinos due to the combined action of neutrino SFP in strong SN magnetic fields and flavour transitions \cite{Akhmedov:1992ea,Akhmedov:2003fu,Ando:2003is,Jana:2022tsa} (note that SFP can be resonantly enhanced in this case).
Such a conversion would have a very clear signature for neutrinos emitted during the prompt neutronisation stage of SN evolution, as the produced neutrino flux consists almost exclusively of $\nu_e$ at this
stage.
The $\bar{\nu}_e$ appearance probability will depend on the product of the effective neutrino magnetic moment $\mu_\nu$ and the SN magnetic field strength $B_0$ at the resonance of SFP.
The expression for $\mu_\nu$ takes the simplest form in a rotated (primed) basis, which differs from our primed basis defined in (\ref{eqn:basis}) by the absence of the 1--3 rotation and $\Gamma_\delta$ transformation.
For normal neutrino mass ordering, $\mu_\nu=\mu'_{e\mu'}$, whereas for the inverse ordering $\mu_\nu=\mu'_{e\tau'}$. These quantities are related to the neutrino magnetic moments in the mass eigenstate basis as
\begin{align}
&\mu'_{e\mu'}=\mu_{12}c_{13} e^{-i\lambda_2}+(\mu_{13}s_{12}-\mu_{23}c_{12}
e^{-i\lambda_2})s_{13} e^{i(\delta_{\rm CP} - \lambda_3)}\,,\\
&\mu'_{e\tau'}=(\mu_{13}c_{12}+\mu_{23}s_{12}e^{-i\lambda_2})e^{-i\lambda_3}\,.
\end{align}
Conversion of SN neutronisation burst $\nu_e$'s into $\bar{\nu}_e$'s can be searched for in future neutrino experiments. For example, the Hyper-Kamiokande experiment is expected to have sensitivity to $\mu_\nu B_0\sim (5\times 10^{-3}$ -- $6\times 10^{-4})\,\mu_B$\,G, depending on the neutrino mass ordering \cite{Jana:2022tsa}.
Assuming $B_0\simeq 10^{10}$\,G, this would imply the sensitivity to $\mu_\nu$ at the level of $(5\times 10^{-13}$ -- $6\times 10^{-14})$\,$\mu_B$.
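The conversion behind these numbers is direct; a one-line sketch under the stated assumption for $B_0$:

```python
# Translating the projected Hyper-Kamiokande sensitivity to mu_nu * B0
# into a sensitivity to mu_nu alone, for an assumed SN field at the SFP
# resonance of B0 = 1e10 G.
B0 = 1.0e10                              # G (assumption)
sens_mu_b0 = (5e-3, 6e-4)                # mu_B * G, quoted sensitivity range
sens_mu = [s / B0 for s in sens_mu_b0]   # about 5e-13 and 6e-14 mu_B
```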
\subsubsection{Cosmology}
Neutrino magnetic moments can also be constrained by cosmology. Nonzero magnetic moments could increase the time during which neutrinos remain in thermal contact with the cosmic plasma.
In \cite{Morgan:1981psa} the impact of this effect on the production of deuterium in Big Bang Nucleosynthesis was addressed, assuming that, due to their electromagnetic scattering on electrons and positrons, neutrinos remained coupled to the plasma until the epoch of electron-positron annihilation.
In \cite{Vassh:2015yza} the impact of transition magnetic moments of Majorana neutrinos on the neutrino decoupling temperatures and the corresponding consequences for Big Bang Nucleosynthesis were studied. Upper limits on the transition magnetic moments in the flavour basis \eqref{eqn:sb_flavour_mu} of the order $\mathcal{O}(10^{-10} \mu_B)$ were obtained.
In a different approach, a number of authors considered the production of sterile $\nu_R$ through neutrino scattering on electrons and positrons $e^{\pm}+ \nu_L \rightarrow e^{\pm} + \nu_R$ and electron-positron annihilation $e^{+} +e^{-} \rightarrow \nu_{L,R} + \bar{\nu}_{L,R}$, mediated by active-to-sterile neutrino transition magnetic moments.
Depending on the mass of the sterile neutrino states, their production can have two important consequences: (i) if sterile neutrinos are sufficiently light, they contribute to the radiation density of the Universe and modify its expansion rate, and (ii) they can also undergo radiative decay $\nu_R \rightarrow \gamma+ \nu_L$, which would increase the photon energy density. Both effects can modify the primordial abundances of light elements, see for instance \cite{Dolgov:2002wy,Brdar:2020quo}. It is difficult to interpret the results of these works in terms of the fundamental magnetic moments, given the information provided on the underlying assumptions. Also, the limits depend strongly on the mass of the right-handed neutrino.
\subsection{Collider}
Bounds on neutrino magnetic moments are also set by collider searches for the process $e^+e^- \rightarrow \bar{\nu}\nu \gamma$ \cite{Grotch:1988ac}, including the searches for anomalous production of energetic single photons in $e^+ e^-$ annihilation at the $Z$ resonance \cite{Gould:1994gq,L3:1997exg}.
In the latter case, the dominant mechanism for the production of single-photon events via the neutrino magnetic moment interaction is radiation of a photon from the final-state neutrino or antineutrino; off the resonance, it is mainly bremsstrahlung from $e^+$ or $e^-$, with the $\bar{\nu}\nu$ pair production being mediated by an $s$-channel exchange of a
virtual photon.
The process $e^+e^- \rightarrow \bar{\nu}\nu \gamma$ is sensitive to the same combination of the neutrino magnetic moments as the plasmon decay, eq.~\eqref{eqn:plasmon_mu}.
The constraints coming from LEP are of the order of $10^{-6} \mu_B$ \cite{Gould:1994gq,L3:1997exg}; they are much weaker than those from astrophysical observations, but on the other hand they are more direct.
Other processes potentially sensitive to neutrino magnetic moments could also be explored, such as e.g.\ $\pi^0 \rightarrow \gamma \bar{\nu}\nu$, but so far the obtained limits are of the same order of magnitude as those from LEP \cite{Grasso:1991qy,Grasso:1993kn}. The combination of magnetic moments constrained in these searches is the same as that in plasmon decay, eq.~\eqref{eqn:plasmon_mu}.
\section{Discussion}
\label{sec:discussion}
Assuming neutrinos to be Majorana particles, we have studied the conversion of solar $\nu_e$ into electron antineutrinos through the combined action of SFP of solar neutrinos, caused by the interaction of their transition magnetic moments with solar magnetic fields, and the ordinary flavour transitions. To this end, we have derived the neutrino evolution equations in the three-flavour framework in a rotated basis convenient for studying solar neutrinos. Making use of the fact that the effect of SFP in the Sun can at most be subleading, we developed a perturbation-theoretic approach and obtained a simple analytical expression for the probability of appearance of solar $\bar{\nu}_e$ on the Earth.
The possibility that the solar magnetic fields may be twisting was taken into account. The obtained expression can be readily employed for the analysis and interpretation of experimental results from searches for astrophysical $\bar{\nu}_e$ fluxes.
To check the validity of our approximations and the accuracy of the obtained analytical solution, we also carried out, for a number of model solar magnetic fields $B_\perp(r)$, a full numerical solution of the system of coupled neutrino evolution equations. We have found a good general agreement between our numerical and analytical results, especially for neutrino energies $E\gtrsim 5$ MeV. The discrepancy between the numerical and analytical results is larger for smaller $E$, where the $\bar{\nu}_e$ appearance probability is, however, relatively small.
We have found that the $\bar{\nu}_e$ appearance probability is to a good accuracy proportional to $[\mu_{12} B_\perp(r_0)]^2$, where $r_0$ is the coordinate of the neutrino production point in the Sun, over which averaging has to be performed.
The contributions of the other two transition magnetic moments, $\mu_{13}$ and $\mu_{23}$, are strongly suppressed unless they exceed $\mu_{12}$ by several orders of magnitude. The shape of the profile of the solar magnetic field turns out to play a relatively minor role, as the flux of the produced $\bar{\nu}_e$ is mostly determined by the average magnetic field in the neutrino production region.
With the aim of facilitating accurate analyses of, and the derivation of constraints from, future experiments searching for solar antineutrinos, we provided the $\bar{\nu}_e$ appearance probabilities as well as the expected fluxes on the Earth in binned form in Tables \ref{tab:estimatesAGSS09} and \ref{tab:estimatesGS98}. The calculations were done for two solar models -- a low-metallicity and a high-metallicity one.
We have also revisited and updated the existing upper bounds on $\mu_{12}B_\perp$ using the 3-flavour formalism developed here. The best current limit on the product of the neutrino magnetic moment and the solar magnetic field comes from the KamLAND upper bound on the astrophysical $\bar{\nu}_e$ flux, from which we have obtained $\mu_{12} B_{\perp} (r_0)\lesssim 5\times 10^{-9} \mu_B$\,kG, with a mild dependence on the solar model considered.
For reference purposes, we have also presented a comprehensive review of the other existing constraints on neutrino magnetic moments. In particular, we discussed, both for Dirac and Majorana neutrinos, how the different effective neutrino magnetic moments probed in a variety of experiments are related to the magnetic moments in the mass and flavour eigenstate bases and to the leptonic mixing parameters.
If the neutrino magnetic moment interpretation of the low-energy event excess in XENON1T data is correct, this would imply that the effective neutrino magnetic moment $\mu_{\nu\rm XENON1T}$, which can be obtained from eq.~(\ref{eq:muDir}) or eq.~(\ref{eq:muMaj}) by setting $\tilde{\theta}=\theta_{12}$, is of the order of $2\times 10^{-11}\mu_B$ \cite{XENON:2020rca}. Assuming that neutrinos are Majorana particles and that $\mu_{\nu\rm XENON1T}$ is dominated by $\mu_{12}$, one could then combine the XENON1T result with the KamLAND constraint on $\mu_{12}B_\perp(r_0)$ discussed above to obtain a limit on the magnetic field strength in the neutrino production region in the Sun, which gives $B_\perp< 250$\,kG.
This is apparently the most stringent constraint on the magnetic field in the solar core currently available. However, it is obviously model dependent: it relies heavily on the assumption of significant contribution of $\mu_{12}$ to $\mu_{\nu\rm XENON1T}$, whereas the latter can be nonzero even if $\mu_{12}$ vanishes.
If the magnetic field strength in the solar core were known, one could use the upper bounds on $\mu_{12}B_\perp$ obtained from non-observation of solar $\bar{\nu}_e$ to derive constraints on $\mu_{12}$ for Majorana neutrinos. Unfortunately, very little (if anything) is known about the magnetic field strength in the core of the Sun. There is a very conservative upper bound $B<10^9$\,G coming from the requirement that the pressure of the magnetic field in the solar core does not exceed the matter pressure \cite{Schramm:1993mv}. With the KamLAND result, this would translate to the limit $\mu_{12}<5\times 10^{-15}\mu_B$. There are some constraints on the magnetic fields in the radiative zone of the Sun. From solar oblateness and the analysis of the splitting of the solar oscillation frequencies, one finds $B \lesssim 7$\,MG \cite{Friedland:2002is}.
If one assumes (rather arbitrarily) that the magnetic field in the solar core, where the neutrinos are produced, is of similar magnitude, this would translate to the limit $\mu_{12} < 7.1\times 10^{-13}\mu_B$.
From the requirement of the stability of toroidal magnetic fields in the radiative zone of the Sun, a much more stringent limit $B\lesssim 600$\,G can be found \cite{Kitchatinov, Bonanno2013}. Assuming that the magnetic field in the core of the Sun is of similar magnitude, one would obtain the constraint $\mu_{12}<8.3 \times 10^{-9} \mu_B$. We stress once again that there is no {\it a priori} reason to believe that the magnetic fields in the core of the Sun are of the same order as those in the radiative zone; we use the latter just as some reference values.
The limits on the product of the neutrino magnetic moment and the solar magnetic field strength are expected to be improved in the near future by current and next-generation neutrino observatories with high potential to detect electron antineutrinos from astrophysical sources, which include Super-Kamiokande loaded with gadolinium, JUNO and Hyper-Kamiokande.
The simple analytical expression for the electron antineutrino appearance probability derived here as well as the calculated expected values of the $\bar{\nu}_e$ flux can facilitate the analyses of forthcoming data.
\acknowledgments
We thank Manfred Lindner and Mariam Tórtola for useful discussions.
PMM is grateful for the hospitality of the Particle and Astroparticle Physics Division of the \mbox{Max-Planck-Institut} f{\"u}r
Kernphysik (Heidelberg) during the development of this project.
The work of PMM is supported by the grants FPU18/04571
(MICIU), PROMETEO/2018/165 (\mbox{Generalitat} Valenciana) and PID2020-113775GB-I00 (AEI/10.13039/501100011033).
\bibliographystyle{JHEP}
\section{Introduction}\label{sec-1}
Different manifolds with additional tensor structures have been classified with respect to the structure $(0,3)$ tensors, generated by the covariant derivative of the
fundamental tensor of type $(1,1)$. For example, such classifications are: the Gray-Hervella classification of almost Hermitian manifolds given in \cite{GH}, the Naveira classification of Riemannian almost product manifolds - in \cite{N}, the Ganchev-Borisov classification of almost complex manifolds with Norden metric - in
\cite{GB}, the Alexiev-Ganchev classification of almost contact metric manifolds - in \cite{AG}, the Ganchev-Mihova-Gribachev classification of almost contact B-metric manifolds - in \cite{GMG}, etc.
\par
In \cite{NZ} we decomposed the vector space of the structure $(0,3)$ tensors on almost paracontact metric manifolds (called almost paracontact manifolds with semi-Riemannian metric of signature $(n+1,n)$) into eleven subspaces which are orthogonal and invariant under the action of the structure group of the considered manifolds. In this paper we show that one of the eleven subspaces can be decomposed into two orthogonal and invariant subspaces and give their characteristic conditions. Then we find the
dimensions of the twelve subspaces and the projections of the structure tensor in the corresponding basic classes of almost paracontact metric manifolds. Also, we obtain the classes of the following types of almost paracontact metric manifolds: $\alpha$-para-Sasakian, $\alpha$-para-Kenmotsu, normal, paracontact metric, para-Sasakian, K-paracontact and quasi-para-Sasakian.
\par
We pay special attention to almost paracontact metric manifolds of dimension 3, which is the lowest dimension for these manifolds. First, we establish that such
manifolds belong only to four basic classes from the considered classification. Then we define an almost paracontact metric structure on a 3-dimensional Lie group. We determine its Lie algebra by commutators such that the Lie group is a manifold belonging to some of the four basic classes of 3-dimensional almost paracontact metric manifolds. The considered Lie groups are characterized geometrically in terms of their curvature properties. Moreover, we find explicit matrix representations of these Lie groups. Let us note that Lie groups as 3-dimensional almost contact B-metric manifolds were studied in \cite{HMDM} and their matrix representations were obtained in
\cite{HM}.
\section{Preliminaries}\label{sec-2}
A $(2n+1)$-dimensional smooth manifold $M^{(2n+1)}$
has an \emph{almost paracontact structure} $(\varphi,\xi,\eta)$ if it admits a tensor field
$\varphi$ of type $(1,1)$, a vector field $\xi$ and a 1-form
$\eta$ satisfying the following conditions:
\begin{eqnarray}
\label{f82}
& &
\begin{array}{cl}
(i) & \varphi^2 = id - \eta \otimes \xi, \quad \eta (\xi)=1, \quad \varphi(\xi)=0,
\\[5pt]
(ii) & \textrm{there exists a distribution $\mathbb {D}: p \in M \longrightarrow \mathbb {D}_p\subset T_pM:$}
\\[1pt]
& \textrm{$\mathbb D_p=Ker \eta=\{x\in T_pM: \eta (x)=0\}$, called {\it paracontact}}
\\[1pt]
& \textrm{{\it distribution} generated by $\eta$.}
\end{array}
\end{eqnarray}
Then the tangent space $T_pM$ at each $p\in M$ is the following orthogonal direct sum
\[
T_pM=\mathbb D_p\oplus span_\mathbb R\{\xi (p)\}
\]
and every vector $x\in T_pM$ can be decomposed uniquely in the manner
\begin{equation}\label{3}
x=hx+vx,
\end{equation}
where $hx=\varphi ^2x \in \mathbb{D}_p$ and $vx=\eta (x)\,\xi (p) \in span_\mathbb R\{\xi (p)\}$. Using the conditions \eqref{f82} we have
\[
h\xi =0, \quad h^2=h, \quad h\circ \varphi =\varphi \circ h=\varphi ,\quad v\circ h=h\circ v =0.
\]
\par
The tensor field $\varphi $ induces an almost paracomplex structure \cite{KW} on each
fibre of $\mathbb D$, and $(\mathbb D, \varphi , g_{\vert \mathbb D})$ is a $2n$-dimensional
almost paracomplex manifold. Since $g$ is a non-degenerate metric on $M$ and $\xi $ is non-isotropic,
the paracontact distribution $\mathbb D$ is non-degenerate.
As immediate consequences of the definition of the almost
paracontact structure we have that the endomorphism $\varphi$ has rank
$2n$,
and $\eta \circ \varphi=0$, (see \cite{B1,B2} for the almost contact case).\\
From now on, we will use $x, y, z$ for arbitrary elements of $\chi (M)$ or vectors in the tangent space $T_pM$ at $p\in M$.\\
If a manifold $M^{(2n+1)}$ with $(\varphi,\xi,\eta)$-structure
admits a pseudo-Riemannian metric $g$ such that
\begin{equation}\label{con}
g(\varphi x,\varphi y)=-g(x,y)+\eta (x)\eta (y),
\end{equation}
then we say that $M^{(2n+1)}$ has an almost paracontact metric structure and
$g$ is called \emph{compatible} metric. Any compatible metric $g$ with a given almost paracontact
structure is necessarily of signature $(n+1,n)$.
Setting $y=\xi$, we have
$\eta(x)=g(x,\xi).$
Any almost paracontact structure admits a compatible metric.
The fundamental 2-form
\begin{equation}\label{fund}
\phi(x,y)=g(\varphi x,y)
\end{equation}
is non-degenerate on the horizontal distribution $\mathbb D$ and
$\eta\wedge {\phi}^n\not=0$.
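All of the above can be realized on the simplest flat model. The following sketch (a toy example of ours, for $n=1$, not taken from the paper) takes $\varphi$ to swap $e_1$ and $e_2$, $\xi = e_3$, $\eta$ its dual and $g=\mathrm{diag}(1,-1,1)$ of signature $(2,1)$, and verifies the defining identities, the compatibility condition, the properties of $h=\varphi^2$ and the antisymmetry of the fundamental 2-form:

```python
# Toy almost paracontact metric structure on R^3 (n = 1), as 3x3 matrices.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

phi = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]   # phi(e1)=e2, phi(e2)=e1, phi(xi)=0
xi  = [0, 0, 1]
eta = [0, 0, 1]                            # eta(x) = g(x, xi)
g   = [[1, 0, 0], [0, -1, 0], [0, 0, 1]]  # signature (2, 1)

I = [[1 if i == j else 0 for j in range(3)] for i in range(3)]
eta_xi = [[xi[i] * eta[j] for j in range(3)] for i in range(3)]

# (i) phi^2 = id - eta (x) xi
assert mat_mul(phi, phi) == [[I[i][j] - eta_xi[i][j] for j in range(3)]
                             for i in range(3)]
# compatibility: g(phi x, phi y) = -g(x,y) + eta(x)eta(y), i.e. phi^T g phi = -g + eta (x) eta
lhs = mat_mul(mat_mul([list(r) for r in zip(*phi)], g), phi)
rhs = [[-g[i][j] + eta[i] * eta[j] for j in range(3)] for i in range(3)]
assert lhs == rhs
# h = phi^2 satisfies h^2 = h and h o phi = phi o h = phi
h = mat_mul(phi, phi)
assert mat_mul(h, h) == h and mat_mul(h, phi) == phi == mat_mul(phi, h)
# fundamental 2-form phi(x,y) = g(phi x, y) is antisymmetric
phi_form = mat_mul([list(r) for r in zip(*phi)], g)
assert all(phi_form[i][j] == -phi_form[j][i] for i in range(3) for j in range(3))
print("all identities verified")
```
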
\par
Let $\phi$ be the fundamental 2-form on $(M,\varphi ,\xi ,\eta ,g)$
and $F$ be the covariant derivative of $\phi$ with respect to the
Levi-Civita connection $\nabla $ of $g$, i.e the
tensor field $F$ of type $(0,3)$ is defined by
\begin{equation}\label{4}
F(x,y,z)=(\nabla \phi )(x,y,z)=(\nabla _x\phi )(y,z)=g((\nabla _x\varphi )y,z).
\end{equation}
Because of \eqref{f82} and \eqref{con} the tensor $F$ has the following properties:
\begin{equation}\label{5}
\begin{array}{ll}
F(x,y,z)=-F(x,z,y), \\
F(x,\varphi y, \varphi z)=F(x,y,z)+\eta(y)F(x,z,\xi)-\eta(z)F(x,y,\xi).
\end{array}
\end{equation}
The following 1-forms are associated with $F$:
\begin{equation}\label{6}
\theta(x)=g^{ij}F(e_i,e_j,x); \,
\theta^*(x)=g^{ij}F(e_i,\varphi e_j,x); \,
\omega(x)=F(\xi,\xi,x),
\end{equation}
where $\{e_i,\xi\}$ $(i=1,\ldots,2n)$ is a basis of $TM$, and $(g^{ij})$ is the inverse matrix of $(g_{ij})$.
We express $\nabla \eta$, $d\eta $, $L_\xi g$ and $d\phi $ in terms of the structure tensor $F$ in the following lemma.
\begin{lem}\label{Lemma 2.1}
For arbitrary $x, y, z$ we have:
\begin{equation}\label{2.2}
(\nabla _x\eta)y=g(\nabla _x\xi ,y)=-F(x,\varphi y,\xi );
\end{equation}
\begin{equation}\label{2.3}
d\eta (x,y)=\frac{1}{2}\left((\nabla _x\eta)y-(\nabla _y\eta)x\right)=\frac{1}{2}(-F(x,\varphi y,\xi )+F(y,\varphi x,\xi ));
\end{equation}
\begin{equation}\label{2.4}
(L_\xi g)(x,y)=(\nabla _x\eta)y+(\nabla _y\eta)x=-F(x,\varphi y,\xi )-F(y,\varphi x,\xi );
\end{equation}
\begin{equation}\label{2.5}
d\phi (x,y,z)=\mathop{\s} \limits_{(x,y,z)}F(x,y,z) ,
\end{equation}
where $\mathop{\s} \limits_{(x,y,z)}$ denotes the cyclic sum over $x, y, z$.
\end{lem}
In \cite{JW} it is proved that a $(2n+1)$-dimensional almost paracontact metric manifold is normal if and only if the following condition holds:
\begin{equation}\label{2.6}
\varphi (\nabla _x\varphi)y-(\nabla _{\varphi x}\varphi)y+(\nabla _x\eta)(y)\xi =0 .
\end{equation}
Moreover, we have that \eqref{2.6} is equivalent to the following equality
\begin{equation}\label{2.7}
F(x, y,\varphi z)+F(\varphi x,y,z)+F(x,\varphi y,\xi )=0 .
\end{equation}
\begin{defn}\label{Definition 2.2}
A $(2n+1)$-dimensional almost paracontact metric manifold is called
\begin{itemize}
\item {\it normal} if $N(x,y)-2d\eta (x,y)\xi = 0$, where
\[
N(x,y)=\varphi ^2[x,y]+[\varphi x,\varphi y]-\varphi [\varphi x,y]-\varphi [x,\varphi y]
\]
is the Nijenhuis torsion tensor of $\varphi $ (see \cite{Z});
\item {\it paracontact metric} if $\phi =d\eta$;
\item {\it $\alpha $-para-Sasakian} if $(\nabla_x\varphi)y=\alpha(g(x,y)\xi-\eta(y)x)$, where $\alpha\neq 0$ is constant; \item {\it para-Sasakian} if it is normal and paracontact metric;
\item {\it $\alpha $-para-Kenmotsu} if $(\nabla_x\varphi)y=-\alpha(g(x,\varphi y)\xi+\eta(y)\varphi x)$, where $\alpha\neq 0$ is constant, in particular, para-Kenmotsu if $\alpha=-1$;
\item {\it K-paracontact} if it is paracontact and $\xi$ is Killing vector field;
\item {\it quasi-para-Sasakian} if it is normal and $d\phi =0$.
\end{itemize}
\end{defn}
\begin{rem}\label{Remark 1.}
In \cite{Z} it was proved that $(M,\varphi ,\xi ,\eta ,g)$ is para-Sasakian if and only if $(\nabla_x\varphi)y=-g(x,y)\xi+\eta(y)x$. This result is obtained for $\phi (x,y)=g(x,\varphi y)$. We note that if
$\phi (x,y)=g(\varphi x,y)$, then $(M,\varphi ,\xi ,\eta ,g)$ is para-Sasakian if and only if $(\nabla_x\varphi)y=g(x,y)\xi-\eta(y)x$. Hence, if $\phi (x,y)=g(x,\varphi y)$ (resp. $\phi (x,y)=g(\varphi x,y)$), then an $\alpha $-para-Sasakian manifold is para-Sasakian if $\alpha=-1$ (resp. $\alpha=1$).
\end{rem}
Let $\mathbb U^ \pi(n)$ be the paraunitary group, i.e. $\mathbb U^ \pi(n)$ consists of paracomplex
matrices $\beta =A+\epsilon B$ \, ($\epsilon ^2=1$; \, $A, B$ are real matrices of type $(n\times n)$) such
that $\beta ^{-1}=\bar \beta ^t$. If $r$ is the real representation of $\mathbb U^ \pi(n)$ then
\[
r(\beta )=\left(\begin {matrix} A & B
\cr B & A\cr \end {matrix}\right) , \quad A^tA-B^tB=I_n, \quad A^tB-B^tA=0 ,
\]
where $\beta \in \mathbb U^ \pi(n)$, $I_n$ denotes the identity matrix of type $(n\times n)$.
We consider the group $\mathbb U^ \pi(n)\times \{1\}$ which consists of matrices $\alpha $ of type
$((n+1)\times (n+1))$ such that
$\alpha =\left(\begin {matrix} & 0 \cr \beta & \vdots \cr & 0 \cr 0 \ldots 0 & 1
\end {matrix}\right) , \beta \in \mathbb U^ \pi(n)$. Then $r(\alpha )=
\left(\begin {matrix}A & B & 0 \cr B & A & \vdots \cr & & 0 \cr 0 & \ldots 0 & 1
\end {matrix}\right)$. \\
For $\alpha \in \mathbb U^ \pi(n)\times \{1\}$ we have $\alpha \xi=\xi , \, \alpha \circ \varphi =
\varphi \circ \alpha $ and $\alpha$ is an isometry with respect to $g$, i.e. the matrices of
$\mathbb U^ \pi(n)\times \{1\}$ preserve the structures $\xi , \varphi , g, \eta$. Hence,
$\mathbb U^ \pi(n)\times \{1\}$ is the structure group of the almost paracontact metric manifolds.
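To make the structure group concrete, the following sketch (our own illustration, for $n=1$ and the flat model with $\varphi$ swapping the first two basis vectors, $\xi=e_3$ and $g=\mathrm{diag}(1,-1,1)$) takes $\beta=\cosh t+\epsilon\sinh t$, which satisfies $A^tA-B^tB=I_1$ and $A^tB-B^tA=0$, and checks that its real representation $r(\alpha)$ is a hyperbolic rotation preserving $\xi$, $\varphi$ and $g$:

```python
# r(alpha) for beta = cosh t + eps*sinh t in U^pi(1), extended by 1.
from math import cosh, sinh

t = 0.7
c, s = cosh(t), sinh(t)
R   = [[c, s, 0], [s, c, 0], [0, 0, 1]]     # r(alpha) with A = (c), B = (s)
phi = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
g   = [[1, 0, 0], [0, -1, 0], [0, 0, 1]]
xi  = [0, 0, 1]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(3) for j in range(3))

assert abs(c * c - s * s - 1) < 1e-12                # A^tA - B^tB = I_1
assert [sum(R[i][k] * xi[k] for k in range(3)) for i in range(3)] == xi
assert close(mul(R, phi), mul(phi, R))               # alpha o phi = phi o alpha
Rt = [list(row) for row in zip(*R)]
assert close(mul(mul(Rt, g), R), g)                  # r(alpha)^t g r(alpha) = g
print("r(alpha) preserves xi, phi and g")
```
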
\par
Let $V$ be a $(2n+1)$-dimensional real vector space with an almost paracontact structure $(\varphi,\xi,\eta)$ and a compatible metric $g$ with this structure.
We denote by $\otimes ^0_3V$ the space of the tensors of type $(0,3)$ over $V$. Let $\mathcal{F}$ be the subspace of $\otimes ^0_3V$ defined by
\begin{equation*}
\begin{array}{lr}
\mathcal{F}=\{F\in \otimes ^0_3V : F(x,y,z)=-F(x,z,y)=F(x,\varphi y,\varphi z)-\eta (y)F(x,z,\xi ) \\
\qquad \qquad \qquad \qquad \qquad \qquad +\eta (z)F(x,y,\xi )\}.
\end{array}
\end{equation*}
The metric $g$ on $V$ induces an inner product $\langle , \rangle$ on $\mathcal{F}$ which is defined by
\[
\langle F_1,F_2\rangle=g^{ip}g^{jq}g^{kr}F_1(f_i,f_j,f_k)F_2(f_p,f_q,f_r) ,
\]
where $F_1, F_2 \in \mathcal{F}$ and $\{f_1,\ldots ,f_{2n+1}\}$ is a basis of $V$.
\par
The standard representation of the structure group $\mathbb U^ \pi(n)\times \{1\}$ in $V$ induces a representation $\lambda $ of $\mathbb U^ \pi(n)\times \{1\}$ in $\mathcal{F}$ in the following manner:
\[
(\lambda (\alpha )F)(x,y,z)=F(\alpha ^{-1}x,\alpha ^{-1}y,\alpha ^{-1}z) ,
\]
for $\alpha \in \mathbb U^ \pi(n)\times \{1\}$ and $F\in \mathcal{F}$. Also, $\lambda (\alpha )$
preserves the inner product $\langle , \rangle$ in $\mathcal{F}$.
\section{On the decomposition of $\mathcal{F}$}\label{sec-7}
In \cite{NZ} we obtained a decomposition of a vector space $\mathcal{F}$ into eleven subspaces
$\mathcal{F}_i$ $(i =1, \ldots ,11 )$, which are mutually orthogonal and
invariant under the action of the structure group $\mathbb {U}^\pi (n)\times \{1\}$. First we found the
following partial decomposition of $\mathcal{F}$ in a direct sum of its subspaces $W_i$ $(i =1,2,3,4 )$, i.e.
\[
\mathcal{F}=W_1\oplus W_2\oplus W_3\oplus W_4 ,
\]
where $W_i$ $(i =1,2,3,4 )$ were defined by
\begin{equation}\label {7.1}
\begin{array}{llll}
W_1=\{F\in \mathcal{F} : F(x,y,z)=F(hx,hy,hz)\} , \\
W_2=\{F\in \mathcal{F} : F(x,y,z)=-\eta (y)F(hx,hz,\xi )+\eta (z)F(hx,hy,\xi )\} , \\
W_3=\{F\in \mathcal{F} : F(x,y,z)=\eta (x)F(\xi ,hy,hz)\} , \\
W_4=\{F\in \mathcal{F} : F(x,y,z)=\eta (x)\{\eta (y)F(\xi ,\xi ,hz)-\eta (z)F(\xi ,\xi ,hy)\}\} ,
\end{array}
\end{equation}
for arbitrary vectors $x,y,z \in V$. The subspaces $W_i$ $(i =1,2,3,4 )$ are mutually orthogonal and invariant under the action of $\mathbb {U}^\pi (n)\times \{1\}$.
\subsection{The subspace $W_1$ of $\mathcal{F}$}\label{subsec-7.1}
In \cite{NZ} we obtained that $W_1=\mathcal{F}_1\oplus \mathcal{F}_2\oplus \mathcal{F}_3$, where the subspaces $\mathcal{F}_i$ $(i=1,2,3)$ of $W_1$ are mutually orthogonal and invariant under the action of $\mathbb {U}^\pi (n)\times \{1\}$. They were characterized by
\begin{equation}\label{7.2}
\begin{array}{ll}
\mathcal{F}_1=\{F\in \mathcal{F} : F(x,y,z)=\frac{1}{2(n-1)}\{g(x,\varphi y)\theta _F(\varphi z)-
g(x,\varphi z)\theta _F(\varphi y)\\ \\
\qquad \qquad \qquad \qquad \qquad -g(\varphi x,\varphi y)\theta _F(hz)+g(\varphi x,\varphi z)\theta _F(hy)\}\} ,
\end{array}
\end{equation}
\begin{equation}\label{7.3}
\begin{array}{l}
\mathcal{F}_2=\{F\in \mathcal{F} : F(\varphi x ,\varphi y,z)=-F(x,y,z) , \quad \theta _F=0\} ,
\end{array}
\end{equation}
\begin{equation}\label{7.4}
\begin{array}{l}
\mathcal{F}_3=\{F\in \mathcal{F} : F(\varphi x ,\varphi y,z)=F(x,y,z)\} .
\end{array}
\end{equation}
Now, we will show that the subspace $\mathcal{F}_3$ could be decomposed in an orthogonal direct sum of two its subspaces. For this purpose we define the following linear map
\[
k : \mathcal{F}_3\longrightarrow \mathcal{F}_3 \quad \text{by} \quad
k(F)(x,y,z)=\frac{1}{3}\{F(x,y,z)+F(y,z,x)+F(z,x,y)\}.
\]
We note that from \eqref{7.4} it follows that $F(\xi ,y,z)=F(x,\xi ,z)=0$ for an arbitrary $F\in \mathcal{F}_3$. Then one can easily verify that $k(F)(\varphi x ,\varphi y,z)=k(F)(x,y,z)$, i.e.
$k(F)$ belongs to $\mathcal{F}_3$. By direct computations we check that $k$ is a projection
(i.e. $k^2=k$) and it commutes with the action of $\mathbb {U}^\pi (n)\times \{1\}$. We put
\begin{equation}\label{7.5}
\begin{array}{l}
\mathbb{G}_3={\rm Im} k=\{F\in \mathcal{F}_3 : F(x,y,z)=\frac{1}{3}\{\mathop{\s} \limits_{(x,y,z)}F(x,y,z)\}\} ,
\end{array}
\end{equation}
\begin{equation}\label{7.6}
\begin{array}{l}
\mathbb{G}_4={\rm Ker} k=\{F\in \mathcal{F}_3 : \mathop{\s} \limits_{(x,y,z)}F(x,y,z)=0\} .
\end{array}
\end{equation}
\begin{pro}\label{Proposition 7.1}
The subspace $\mathcal{F}_3$ is an orthogonal direct sum of the subspaces $\mathbb{G}_3$ and
$\mathbb{G}_4$. These subspaces are invariant under the action of $\mathbb {U}^\pi (n)\times \{1\}$.
\end{pro}
\begin{proof}
Taking into account that $k$ is a projection in $\mathcal{F}_3$, we have $\mathcal{F}_3={\rm Im} k\oplus {\rm Ker} k=\mathbb{G}_3\oplus \mathbb{G}_4$. Further, we will show that $\mathbb{G}_3$ and $\mathbb{G}_4$ are orthogonal.
Because for an arbitrary $F\in \mathcal{F}$ the conditions $F(x,y,z)=-F(y,x,z)$ and $F(x,y,z)=\frac{1}{3}\{F(x,y,z)+F(y,z,x)+F(z,x,y)\}$ are equivalent, \eqref{7.5} becomes
\begin{equation}\label{7.7}
\begin{array}{l}
\mathbb{G}_3={\rm Im} k=\{F\in \mathcal{F}_3 : F(x,y,z)=-F(y,x,z)\} .
\end{array}
\end{equation}
Now, we take $F^\prime \in \mathbb{G}_3$ and $F'' \in \mathbb{G}_4$. Using \eqref{7.6} and
\eqref{7.7} we obtain
\begin{equation*}
\begin{array}{lll}
\langle F^\prime ,F''\rangle=g^{ip}g^{jq}g^{kr}F^\prime (f_i,f_j,f_k)F''(f_p,f_q,f_r)=
-g^{ip}g^{jq}g^{kr}F^\prime (f_i,f_j,f_k)F''(f_q,f_r,f_p) \\ \\
-g^{ip}g^{jq}g^{kr}F^\prime (f_i,f_j,f_k)F''(f_r,f_p,f_q)=
-g^{jq}g^{ip}g^{kr}F^\prime (f_j,f_i,f_k)F''(f_q,f_p,f_r) \\ \\
-g^{kr}g^{ip}g^{jq}F^\prime (f_k,f_i,f_j)F''(f_r,f_p,f_q)=-2\langle F^\prime ,F''\rangle .
\end{array}
\end{equation*}
Thus we find $\langle F^\prime ,F''\rangle=0$. Hence, $\mathbb{G}_3$ and $\mathbb{G}_4$ are orthogonal.
\par
Finally, taking into account that $k$ commutes with the action of $\mathbb {U}^\pi (n)\times \{1\}$, for an arbitrary $F^\prime \in \mathbb{G}_3={\rm Im} k$ we have
\begin{equation}\label{7.8}
\lambda (\alpha )(F^\prime )=
\lambda (\alpha )(k(F^\prime ))=k(\lambda (\alpha )(F^\prime )).
\end{equation}
We note that $\lambda (\alpha )(F^\prime )\in \mathcal{F}_3$ because $\mathcal{F}_3$ is invariant under the action of $\mathbb {U}^\pi (n)\times \{1\}$. Then from \eqref{7.8} it follows that
$\lambda (\alpha )(F^\prime )\in \mathbb{G}_3$ which means that $\mathbb{G}_3$ is invariant
under the action of $\mathbb {U}^\pi (n)\times \{1\}$. Since $\lambda (\alpha )$ is an isometry with
respect to the inner product $\langle , \rangle$ in $\mathcal{F}$, the orthogonal complement
$\mathbb{G}_4$ of the invariant subspace $\mathbb{G}_3$ in $\mathcal{F}_3$ is also invariant.
\end{proof}
From now on, we will denote the subspaces $\mathcal{F}_1$ and $\mathcal{F}_2$ by $\mathbb{G}_1$ and $\mathbb{G}_2$, respectively. In conclusion, we state
\begin{pro}\label{Proposition 7.2}
The decomposition $W_1=\mathbb{G}_1\oplus \mathbb{G}_2\oplus \mathbb{G}_3\oplus \mathbb{G}_4$ is orthogonal and invariant under the action of $\mathbb {U}^\pi (n)\times \{1\}$.
\end{pro}
The characteristic conditions of $\mathbb{G}_1$ and $\mathbb{G}_2$ are \eqref{7.2} and \eqref{7.3},
respectively, which were obtained in \cite{NZ}. According to \eqref{7.7} and \eqref{7.6}, the characteristic conditions of $\mathbb{G}_3$ and $\mathbb{G}_4$ are as follows:
\begin{equation}\label{7.9}
\mathbb{G}_3=\{F\in \mathcal{F} : F(\xi ,y,z)=F(x,\xi ,z)=0, \quad F(x,y,z)=-F(y,x,z)\} ,
\end{equation}
\begin{equation}\label{7.10}
\mathbb{G}_4=\{F\in \mathcal{F} : F(\xi ,y,z)=F(x,\xi ,z)=0, \quad \mathop{\s} \limits_{(x,y,z)}F(x,y,z)=0\} .
\end{equation}
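Since the orthogonality argument above uses only the symmetry properties of the tensors and the symmetry of $g$, it can be sanity-checked numerically. The sketch below (our own; it works with plain $(0,3)$ arrays over a flat diagonal pseudo-metric standing in for the $h$-part, and does not impose the $\varphi$-compatibility condition, which the argument does not use) verifies that the cyclic averaging $k$ is idempotent, that its kernel has vanishing cyclic sums, and that ${\rm Im}\, k$ and ${\rm Ker}\, k$ are $\langle , \rangle$-orthogonal:

```python
# Numerical check of Proposition 7.1 on random tensors antisymmetric
# in the last two slots, over a diagonal pseudo-metric of signature (2,2).
import random

d = 4
G = [1, 1, -1, -1]          # diagonal pseudo-metric; its inverse is itself
idx = range(d)

random.seed(0)
T = [[[random.uniform(-1, 1) for _ in idx] for _ in idx] for _ in idx]
# antisymmetrize in the last two slots, as for every F in the space F
F = [[[0.5 * (T[i][j][l] - T[i][l][j]) for l in idx] for j in idx] for i in idx]

def k(F):   # cyclic averaging k(F)(x,y,z) = (F(x,y,z)+F(y,z,x)+F(z,x,y))/3
    return [[[(F[i][j][l] + F[j][l][i] + F[l][i][j]) / 3.0
              for l in idx] for j in idx] for i in idx]

def inner(A, B):   # <A,B> = g^{ip} g^{jq} g^{kr} A_{ijk} B_{pqr}, g diagonal
    return sum(G[i] * G[j] * G[l] * A[i][j][l] * B[i][j][l]
               for i in idx for j in idx for l in idx)

kF = k(F)
ker = [[[F[i][j][l] - kF[i][j][l] for l in idx] for j in idx] for i in idx]

assert all(abs(k(kF)[i][j][l] - kF[i][j][l]) < 1e-12
           for i in idx for j in idx for l in idx)          # k^2 = k
assert all(abs(ker[i][j][l] + ker[j][l][i] + ker[l][i][j]) < 1e-12
           for i in idx for j in idx for l in idx)          # cyclic sum = 0
assert abs(inner(kF, ker)) < 1e-12                          # Im k  _|_  Ker k
print("k is a projection with orthogonal image and kernel")
```
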
\subsection{The subspace $W_2$ of $\mathcal{F}$}\label{subsec-7.2}
In \cite{NZ} we decomposed $W_2$ into 6 subspaces which are mutually orthogonal and invariant under the action of $\mathbb {U}^\pi (n)\times \{1\}$, i.e.
\[
W_2=\mathcal{F}_4\oplus \mathcal{F}_5\oplus \mathcal{F}_6\oplus \mathcal{F}_7\oplus
\mathcal{F}_8\oplus \mathcal{F}_9 .
\]
Taking into account the characteristic condition of $W_2$ in \eqref{7.1}, we rewrite the conditions of
$\mathcal{F}_6, \mathcal{F}_7, \mathcal{F}_8$ and $\mathcal{F}_9$ in an equivalent form to the one in \cite[Theorem 2.1, p. 124]{NZ}. Moreover, we denote the subspaces $\mathcal{F}_4, \mathcal{F}_5, \mathcal{F}_6, \mathcal{F}_7, \mathcal{F}_8, \mathcal{F}_9$ by $\mathbb{G}_5, \mathbb{G}_6, \mathbb{G}_7, \mathbb{G}_8, \mathbb{G}_9$, $\mathbb{G}_{10}$, respectively. So, we have:
\begin{equation*}
\begin{array}{llllll}
\mathbb{G}_5=\mathcal{F}_4=\left\{F\in \mathcal{F} : F(x,y,z)=\displaystyle{\frac{\theta _F(\xi)}{2n}}\{\eta(y)g(\varphi x,\varphi z)-\eta(z)g(\varphi x,\varphi y) \}\right\} . \\ \\
\text{This is the class of generalized $\alpha$-para-Sasakian manifolds.} \\ \\
\mathbb{G}_6=\mathcal{F}_5=\left\{F\in \mathcal{F} : F(x,y,z)=-\displaystyle{\frac{\theta _F^*(\xi)}{2n}}\{\eta(y)g(x,\varphi z)-\eta(z)g(x,\varphi y)\}\right\} . \\ \\
\text{This is the class of generalized $\alpha$-para-Kenmotsu manifolds.} \\ \\
\mathbb{G}_7=\mathcal{F}_6=\left\{F\in \mathcal{F} : F(x,y,z)=-\eta(y)F(x,z,\xi )+\eta(z)F(x,y,\xi ),
\right .\\ \\
\qquad \qquad \quad \left .F(x,y,\xi )=-F(y,x,\xi )=-F(\varphi x,\varphi y,\xi ), \quad \theta _F^*(\xi)=0\right\} , \\ \\
\mathbb{G}_8=\mathcal{F}_7=\left\{F\in \mathcal{F} : F(x,y,z)=-\eta(y)F(x,z,\xi )+\eta(z)F(x,y,\xi ),
\right .\\ \\
\qquad \qquad \quad \left .F(x,y,\xi )=F(y,x,\xi )=-F(\varphi x,\varphi y,\xi ), \quad \theta _F(\xi)=0\right\} ,
\end{array}
\end{equation*}
\begin{equation*}
\begin{array}{llll}
\mathbb{G}_9=\mathcal{F}_8=\left\{F\in \mathcal{F} : F(x,y,z)=-\eta(y)F(x,z,\xi )+\eta(z)F(x,y,\xi ),
\right .\\ \\
\qquad \qquad \quad \left .F(x,y,\xi )=-F(y,x,\xi )=F(\varphi x,\varphi y,\xi )\right\} , \\ \\
\mathbb{G}_{10}=\mathcal{F}_9=\left\{F\in \mathcal{F} : F(x,y,z)=-\eta(y)F(x,z,\xi )+\eta(z)F(x,y,\xi ),
\right .\\ \\
\qquad \qquad \quad \left .F(x,y,\xi )=F(y,x,\xi )=F(\varphi x,\varphi y,\xi )\right\}.
\end{array}
\end{equation*}
\begin{rem}\label{Remark 7.1}
We call the classes $\mathbb{G}_5$ and $\mathbb{G}_6$ the class of generalized $\alpha$-para-Sasakian and generalized $\alpha$-para-Kenmotsu manifolds, respectively, because $\theta _F(\xi)$ and $\theta _F^*(\xi)$ are functions in general.
\end{rem}
\subsection{The subspaces $W_3$ and $W_4$ of $\mathcal{F}$}\label{subsec-7.3}
As in \cite{NZ} we put $\mathcal{F}_{10}=W_3$ and $\mathcal{F}_{11}=W_4$. Now, we denote
$\mathcal{F}_{10}$ and $\mathcal{F}_{11}$ by $\mathbb{G}_{11}$ and $\mathbb{G}_{12}$, respectively.
These subspaces were represented by
\begin{equation*}
\begin{array}{ll}
\mathbb{G}_{11}=\mathcal{F}_{10}=\left\{F\in \mathcal{F} : F(x,y,z)=\eta(x)F(\xi ,\varphi y,\varphi z)\right\} , \\ \\
\mathbb{G}_{12}=\mathcal{F}_{11}=\left\{F\in \mathcal{F} : F(x,y,z)=\eta(x)\left\{\eta(y)F(\xi ,\xi ,z)-\eta(z)F(\xi ,\xi ,y)\right\}\right\} .
\end{array}
\end{equation*}
\par
Corresponding to the decomposition in \secref{sec-7} of the space $ \mathcal{F}$ into twelve mutually orthogonal and invariant subspaces, we give twelve classes of almost paracontact metric manifolds. An almost paracontact metric manifold $M$ is said to be in the class $\mathbb{G}_i$ $(i=1,\ldots ,12)$ (or a $\mathbb{G}_i$-manifold) if at each $p\in M$ the tensor $F$ of $M$ belongs to the subspace $\mathbb{G}_i$. The special class $\mathbb{G}_0$, determined by the condition $F(x,y,z) = 0$, is the intersection of the twelve basic classes. Hence, $\mathbb{G}_0$ is the class of the almost paracontact metric manifolds with parallel structures, i.e. $\nabla \varphi =\nabla \xi =\nabla \eta =\nabla g=0$.
\par
Finally, by using the characteristic symmetries of the structure tensor $F$ of a $(2n+1)$-dimensional almost paracontact metric manifold in the classes $\mathbb{G}_i$ $(i=1,\ldots ,12)$, we obtain
\begin{thm}\label{Theorem 2.1}
The dimensions of the subspaces in the decomposition of the space ${\F}$ are as follows:
\begin{align*}
\begin{array}{lll}
{\rm dim} \, \mathbb{G}_1=2(n-1), & {\rm dim} \, \mathbb{G}_2=(n-1)(n^2-2), & {\rm dim} \, \mathbb{G}_3=\displaystyle\frac{(n-2)(n-1)n}{3}, \\ \\
{\rm dim} \, \mathbb{G}_4=\displaystyle\frac{2(n-1)n(n+1)}{3}, & {\rm dim} \, \mathbb{G}_5=1, &
{\rm dim} \, \mathbb{G}_6=1, \\
{\rm dim} \, \mathbb{G}_7=n^2-1, & {\rm dim} \, \mathbb{G}_8=n^2-1, &
{\rm dim} \, \mathbb{G}_9=n(n-1), \\
{\rm dim} \, \mathbb{G}_{10}=n(n+1), & {\rm dim} \, \mathbb{G}_{11}=n(n-1),
& {\rm dim} \, \mathbb{G}_{12}=2n .
\end{array}
\end{align*}
\end{thm}
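The dimension count can be cross-checked mechanically. In the sketch below (our own bookkeeping), the closed form $n(n+1)(2n+1)$ for the total dimension is our own summation of the twelve entries; note also that for $n=1$ only $\mathbb{G}_5$, $\mathbb{G}_6$, $\mathbb{G}_{10}$ and $\mathbb{G}_{12}$ are nontrivial, consistent with the four basic classes available in dimension 3 mentioned in the introduction:

```python
# Sum of the twelve dimensions of Theorem 2.1; all entries are exact integers
# since (n-2)(n-1)n and (n-1)n(n+1) are products of consecutive integers.

def dims(n):
    return [2 * (n - 1),                      # G1
            (n - 1) * (n * n - 2),            # G2
            (n - 2) * (n - 1) * n // 3,       # G3
            2 * (n - 1) * n * (n + 1) // 3,   # G4
            1, 1,                             # G5, G6
            n * n - 1, n * n - 1,             # G7, G8
            n * (n - 1), n * (n + 1),         # G9, G10
            n * (n - 1), 2 * n]               # G11, G12

for n in range(1, 8):
    assert sum(dims(n)) == n * (n + 1) * (2 * n + 1)

# classes with nonzero dimension for n = 1 (dimension 3)
print([i + 1 for i, d in enumerate(dims(1)) if d > 0])   # -> [5, 6, 10, 12]
```
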
\section{The projections of the structure tensor $F$ in the twelve basic classes of the classification of the almost paracontact metric manifolds}\label{sec-3}
The decompositions of $\mathcal{F}$ in direct sums of the subspaces $W_j$ $(j=1,2,3,4)$ and
$\mathbb{G}_i$ $(i=1,\ldots ,12)$ imply that every $F\in \mathcal{F}$ has a unique representation in the form $F(x,y,z)=\sum \limits _{j=1}^{4} F^{W_j}(x,y,z)$ and $F(x,y,z)=
\sum \limits _{i=1}^{12} F^i(x,y,z)$, respectively, where $F^{W_j}\in W_j$ and
$F^i\in \mathbb{G}_i$.
Then it is clear that an almost paracontact metric manifold $(M,\varphi ,\xi ,\eta ,g )$ belongs to a direct sum of two or more basic classes, i.e. $M\in \mathbb{G}_i\oplus \mathbb{G}_j\oplus \ldots $, if and only if the structure tensor $F$ on $M$ is the sum of the corresponding projections $F^i$, $F^j$, $\ldots $, i.e. the following condition is satisfied $F =F^i+F^j+\ldots $. \\
Following the operators defined in \cite[Theorem 2.1, p. 124]{NZ} and the operator $k$ defined in this paper in \secref{subsec-7.1}, we find the projections $F^{W_j}$ $(j=1,2,3,4)$ and $F^i$ $(i =1, \ldots ,12 )$ of $F\in {\F}$ in the subspaces $W_j$ and $\mathbb{G}_i$, respectively.
These projections are given below:
\begin{equation}\label{0)}
\begin{array}{l}
F^{W_1}(x,y,z)=F(\varphi ^2x,\varphi ^2y,\varphi ^2z) ,\\
F^{W_2}(x,y,z)=-\eta(y)F(\varphi ^2x,\varphi ^2z,\xi )+\eta(z)F(\varphi ^2x,\varphi ^2y,\xi ) ,\\
F^{W_3}(x,y,z)=\eta(x)F(\xi ,\varphi y,\varphi z) , \\
F^{W_4}(x,y,z)=\eta(x)\{\eta (y)F(\xi ,\xi ,z)-\eta (z)F(\xi ,\xi ,y)\} .
\end{array}
\end{equation}
\begin{equation}\label{1)}
\begin{array}{l}
F^1(x,y,z)=\displaystyle{\frac{1}{2(n-1)}}\left\{g(x,\varphi y)\theta _{F^1}(\varphi z)-g(x,\varphi z)\theta _{F^1}(\varphi y)\right. \\ \\
\left. \qquad \qquad \quad -g(\varphi x,\varphi y)\theta _{F^1}(\varphi ^2z)+g(\varphi x,\varphi z)\theta _{F^1}(\varphi ^2y)\right\};
\end{array}
\end{equation}
\begin{equation}\label{2)}
\begin{array}{l}
F^2(x,y,z)=\displaystyle{\frac{1}{2}}\left\{F(\varphi ^2x,\varphi ^2y,\varphi ^2z)-F(\varphi x,\varphi ^2y,\varphi z)\right\}\\ \\
\qquad \qquad \qquad -\displaystyle{\frac{1}{2(n-1)}}\left\{g(x,\varphi y)\theta _{F^1}(\varphi z)-g(x,\varphi z)\theta _{F^1}(\varphi y)\right. \\ \\
\left. \qquad \qquad \qquad -g(\varphi x,\varphi y)\theta _{F^1}(\varphi ^2z)+g(\varphi x,\varphi z)\theta _{F^1}(\varphi ^2y)\right\};
\end{array}
\end{equation}
\begin{equation}\label{3')}
\begin{array}{lll}
F^3(x,y,z)=\displaystyle{\frac{1}{6}}\left\{F(\varphi ^2x,\varphi ^2y,\varphi ^2z)+F(\varphi x,\varphi ^2y,\varphi z)\right. \\ \\
\left.+F(\varphi ^2y,\varphi ^2z,\varphi ^2x)+F(\varphi y,\varphi ^2z,\varphi x)\right. \\ \\
\left.+F(\varphi ^2z,\varphi ^2x,\varphi ^2y)+F(\varphi z,\varphi ^2x,\varphi y)\right\};
\end{array}
\end{equation}
\begin{equation}\label{3'')}
\begin{array}{llll}
F^4(x,y,z)=\displaystyle{\frac{1}{2}}\left\{F(\varphi ^2x,\varphi ^2y,\varphi ^2z)+F(\varphi x,\varphi ^2y,\varphi z)\right\} \\ \\
-\displaystyle{\frac{1}{6}}\left\{F(\varphi ^2x,\varphi ^2y,\varphi ^2z)+F(\varphi x,\varphi ^2y,\varphi z)\right. \\ \\
\left.+F(\varphi ^2y,\varphi ^2z,\varphi ^2x)+F(\varphi y,\varphi ^2z,\varphi x)\right. \\ \\
\left.+F(\varphi ^2z,\varphi ^2x,\varphi ^2y)+F(\varphi z,\varphi ^2x,\varphi y)\right\};
\end{array}
\end{equation}
\begin{equation}\label{4)}
\begin{array}{l}
F^5(x,y,z)=\displaystyle{\frac{\theta _{F^5}(\xi)}{2n}}\{\eta(y)g(\varphi x,\varphi z)-
\eta(z)g(\varphi x,\varphi y) \};
\end{array}
\end{equation}
\begin{equation}\label{5)}
\begin{array}{l}
F^6(x,y,z)=-\displaystyle{\frac{\theta _{F^6}^*(\xi)}{2n}}\{\eta(y)g(x,\varphi z)-
\eta(z)g(x,\varphi y) \};
\end{array}
\end{equation}
\begin{equation}\label{6)}
\begin{array}{ll}
F^7(x,y,z)=\displaystyle{-\frac{1}{4}}\eta (y)\left\{F(\varphi ^2x,\varphi ^2z,\xi )-F(\varphi x,\varphi z,\xi )\right.\\ \\
\left.-F(\varphi ^2z,\varphi ^2x,\xi )+F(\varphi z,\varphi x,\xi )\right\}+
\displaystyle{\frac{1}{4}}\eta (z)\left\{F(\varphi ^2x,\varphi ^2y,\xi )\right.\\ \\
\left.-F(\varphi x,\varphi y,\xi )-F(\varphi ^2y,\varphi ^2x,\xi )+F(\varphi y,\varphi x,\xi )\right\}\\ \\
\qquad \qquad \qquad+\displaystyle{\frac{\theta _{F^6}^*(\xi)}{2n}}\{\eta(y)g(x,\varphi z)-
\eta(z)g(x,\varphi y) \};
\end{array}
\end{equation}
\begin{equation}\label{7)}
\begin{array}{l}
F^8(x,y,z)=\displaystyle{-\frac{1}{4}}\eta (y)\left\{F(\varphi ^2x,\varphi ^2z,\xi )-F(\varphi x,\varphi z,\xi )\right.\\ \\
\left.+F(\varphi ^2z,\varphi ^2x,\xi )-F(\varphi z,\varphi x,\xi )\right\}+
\displaystyle{\frac{1}{4}}\eta (z)\left\{F(\varphi ^2x,\varphi ^2y,\xi )\right.\\ \\
\left.-F(\varphi x,\varphi y,\xi )+F(\varphi ^2y,\varphi ^2x,\xi )-F(\varphi y,\varphi x,\xi )\right\}\\ \\
\qquad \qquad \qquad-\displaystyle{\frac{\theta _{F^5}(\xi)}{2n}}\{\eta(y)g(\varphi x,\varphi z)-
\eta(z)g(\varphi x,\varphi y) \};
\end{array}
\end{equation}
\begin{equation}\label{8)}
\begin{array}{l}
F^9(x,y,z)=\displaystyle{-\frac{1}{4}}\eta (y)\left\{F(\varphi ^2x,\varphi ^2z,\xi )+F(\varphi x,\varphi z,\xi )\right.\\ \\
\left.-F(\varphi ^2z,\varphi ^2x,\xi )-F(\varphi z,\varphi x,\xi )\right\}+
\displaystyle{\frac{1}{4}}\eta (z)\left\{F(\varphi ^2x,\varphi ^2y,\xi )\right.\\ \\
\left.+F(\varphi x,\varphi y,\xi )-F(\varphi ^2y,\varphi ^2x,\xi )-F(\varphi y,\varphi x,\xi )\right\};
\end{array}
\end{equation}
\begin{equation}\label{9)}
\begin{array}{l}
F^{10}(x,y,z)=\displaystyle{-\frac{1}{4}}\eta (y)\left\{F(\varphi ^2x,\varphi ^2z,\xi )+F(\varphi x,\varphi z,\xi )\right.\\ \\
\left.+F(\varphi ^2z,\varphi ^2x,\xi )+F(\varphi z,\varphi x,\xi )\right\}+
\displaystyle{\frac{1}{4}}\eta (z)\left\{F(\varphi ^2x,\varphi ^2y,\xi )\right.\\ \\
\left.+F(\varphi x,\varphi y,\xi )+F(\varphi ^2y,\varphi ^2x,\xi )+F(\varphi y,\varphi x,\xi )\right\};
\end{array}
\end{equation}
\begin{equation}\label{10)}
\begin{array}{l}
F^{11}(x,y,z)=\eta(x)F(\xi ,\varphi ^2y,\varphi ^2z);
\end{array}
\end{equation}
\begin{equation}\label{11)}
F^{12}(x,y,z)=\eta(x)\left\{\eta(y)F(\xi ,\xi ,\varphi ^2z)-\eta(z)F(\xi ,\xi ,\varphi ^2y)\right\}.
\end{equation}
Further in this section, using the characteristic conditions of the twelve classes of almost paracontact metric manifolds and the projections of the structure tensor $F$, we relate the obtained classes with those studied in the literature.
\begin{thm}\label{Theorem 2.2}
A $(2n+1)$-dimensional almost paracontact metric manifold $M(\varphi ,\xi ,\eta ,g)$ is normal if and only if $M$ belongs to one of the classes $\mathbb{G}_1$, $\mathbb{G}_2$, $\mathbb{G}_5$, $\mathbb{G}_6$, $\mathbb{G}_7$, $\mathbb{G}_8$ or to the classes which are their direct sums.
\end{thm}
\begin{proof}
Let $M$ belong to $\mathbb{G}_i$ $(i=1,2,5,6,7,8)$ or to their direct sums. By direct computations we check that for the structure tensor $F$ of $M$ the condition \eqref{2.7} holds. Hence, $M$ is normal.
\par
Now, let us assume that $M$ is normal. Then \eqref{2.7} is fulfilled. Replacing $x$ and $y$ with $\xi $ in
\eqref{2.7} we have
\begin{equation}\label{401}
F(\xi ,\xi ,z)=0 .
\end{equation}
Replacing $x$ with $\xi $ in \eqref{2.7} and using \eqref{401} we get
\begin{equation}\label{402}
F(\xi ,y,z)=0 .
\end{equation}
Another consequence of \eqref{2.7} is
\begin{equation}\label{403}
F(\varphi x,\varphi y,\xi )=-F(x,y,\xi ) ,
\end{equation}
which we obtain substituting in \eqref{2.7} $y$ and $z$ with $\varphi y$ and $\xi $, respectively. The equalities \eqref{401} and \eqref{402} mean that the projections $F^{W_4}$ and $F^{W_3}$ of $F$ vanish. Hence, $F(x,y,z)=F^{W_1}(x,y,z)+F^{W_2}(x,y,z)$. Taking into account \eqref{403} we conclude that $F^{W_2}=F^i$ or $F^{W_2}$ is a sum of $F^i$ $(i=5,6,7,8)$. \\
Next, we replace $x$, $y$ and $z$ in \eqref{2.7} with $\varphi ^2x$, $\varphi ^2y$ and $\varphi z$, respectively. By using \eqref{5} and \eqref{401} we obtain
\begin{equation}\label{404}
F(\varphi ^2x,\varphi ^2y,\varphi ^2z)+F(\varphi x,\varphi y,\varphi ^2z)+F(x,\varphi y,\xi )=0 .
\end{equation}
We substitute $x$ and $y$ in \eqref{404} with $\varphi x$ and $\varphi y$, respectively. So we get
\begin{equation}\label{405}
F(\varphi x,\varphi y,\varphi ^2z)+F(\varphi ^2x,\varphi ^2y,\varphi ^2z)+F(\varphi x,y,\xi )=0 .
\end{equation}
From \eqref{403} by using \eqref{401} we derive $F(x,\varphi y,\xi )=-F(\varphi x,y,\xi )$. Then \eqref{404} and \eqref{405} imply $F(\varphi ^2x,\varphi ^2y,\varphi ^2z)=-F(\varphi x,\varphi y,\varphi ^2z)$, which shows that $F^{W_1}=F^i$ $(i=1,2)$ or $F^{W_1}=F^1+F^2$. Summarizing the obtained results we conclude that $F=F^i$ $(i=1,2,5,6,7,8)$ or $F=F^1+F^2+F^5+F^6+F^7+F^8$, which completes the proof.
\end{proof}
Now, by using \eqref{2.3} we compute $d\eta $ for a $(2n+1)$-dimensional almost paracontact metric manifold $M$ belonging to each of the basic classes. We obtain
\begin{lem}\label{Lemma 2.2}
(a) If $M\in \mathbb{G}_i$, $i=1,2,3,4,6,7,10,11$, then $d\eta =0$ ; \\
(b) If $M\in \mathbb{G}_5$, then $d\eta (x,y)=\frac{\theta _F(\xi )}{2n}g(\varphi x,y)$ ; \\
(c) If $M\in \mathbb{G}_i$, $i=8,9$, then $d\eta (x,y)=-F^i(x,\varphi y,\xi )$ ; \\
(d) If $M\in \mathbb{G}_{12}$, then $d\eta (x,y)=\frac{1}{2}\left(\eta(x)F(\xi ,\xi ,\varphi y)-
\eta(y)F(\xi ,\xi ,\varphi x)\right)$.
\end{lem}
Let $\overline {\mathbb{G}}_5$ be the subclass of $\mathbb{G}_5$ consisting of all $(2n+1)$-dimensional $\mathbb{G}_5$-manifolds for which $\theta _F(\xi )=2n$ when $\phi (x,y)=g(\varphi x,y)$ (resp. $\theta _F(\xi )=-2n$ when $\phi (x,y)=g(x,\varphi y)$).
Taking into account \lemref{Lemma 2.2}, we establish the truth of the following proposition.
\begin{pro}\label{Proposition 2.1}
Let $M$ be a $(2n+1)$-dimensional $\mathbb{G}_5$-manifold. Then $M$ is paracontact metric if and only if $M$ belongs to $\overline {\mathbb{G}}_5$.
\end{pro}
\begin{thm}\label{Theorem 2.3}
A $(2n+1)$-dimensional almost paracontact metric manifold $M(\varphi ,\xi ,\eta ,g)$ is paracontact metric if and only if $M$ belongs to the class $\overline {\mathbb{G}}_5$ or to the classes which are direct sums of $\overline {\mathbb{G}}_5$ with $\mathbb{G}_4$ and $\mathbb{G}_{10}$.
\end{thm}
\begin{proof}
Let $M$ belong to $\overline {\mathbb{G}}_5$. Then from \propref{Proposition 2.1} it follows that $M$ is paracontact metric. If $M$ belongs to a direct sum of $\overline {\mathbb{G}}_5$ with $\mathbb{G}_4$, $\mathbb{G}_{10}$, by using \lemref{Lemma 2.2} we verify that $M$ is also paracontact metric.
\par
Now, let $M$ be a paracontact metric manifold. Then $d\eta (x,y)=\phi (x,y)=g(\varphi x,y)$ and from \eqref{2.3} we have
\begin{equation}\label{406}
-F(x,\varphi y,\xi )+F(y,\varphi x,\xi )=2g(\varphi x,y) .
\end{equation}
Replacing $x$ and $y$ in \eqref{406} with $\xi $ and $\varphi y$, respectively, we get
\begin{equation}\label{407}
F(\xi ,\xi ,y )=0 .
\end{equation}
Moreover, replacing $y$ in \eqref{406} with $\varphi y$ we derive
\begin{equation}\label{408}
F(x,y,\xi )=F(\varphi y,\varphi x,\xi )-2g(\varphi x,\varphi y) .
\end{equation}
From the condition $d\eta =\phi $ it follows that $d\phi =0$. Now, $d\phi (x,y,\xi )=0$ implies
\begin{equation}\label{409}
F(x,y,\xi )-F(y,x,\xi )+F(\xi ,x,y)=0 .
\end{equation}
Substituting \eqref{408} in \eqref{409} we have $F(\varphi y,\varphi x,\xi )-F(\varphi x,\varphi y,\xi )+
F(\xi ,x,y)=0$. In the last equality we replace $x$ with $\varphi x$, $y$ with $\varphi y$ and by using
\eqref{407} we obtain
\begin{equation}\label{410}
F(y,x,\xi )-F(x,y,\xi )+F(\xi ,x,y)=0 .
\end{equation}
The equalities \eqref{409} and \eqref{410} imply $F(\xi ,x,y)=0$. Also, substituting $F(\xi ,x,y)=0$ in \eqref{409} we get
\begin{equation}\label{411}
F(x,y,\xi )=F(y,x,\xi ) .
\end{equation}
Since $F(\xi ,\xi ,y)=F(\xi ,x,y)=0$, we conclude that $F=F^{W_1}+F^{W_2}$. Hence
\begin{equation}\label{412}
d\eta _F=d\eta _{F^{W_1}}+d\eta _{F^{W_2}} .
\end{equation}
Using \eqref{411} and taking into account the characteristic conditions of the twelve classes we obtain $F^{W_2}=F^5+F^8+F^{10}$. From \lemref{Lemma 2.2} it follows that $d\eta _{F^{W_1}}=0$, $d\eta _{F^5}(x,y)=\frac{\theta _{F^5}(\xi )}{2n}g(\varphi x,y)$,
$d\eta _{F^8}(x,y)=-F^8(x,\varphi y,\xi )$ and $d\eta _{F^{10}}(x,y)=0$.
Then \eqref{412} becomes
\begin{equation}\label{413}
g(\varphi x,y)=\frac{\theta _{F^5}(\xi )}{2n}g(\varphi x,y)-F^8(x,\varphi y,\xi ) .
\end{equation}
The equality \eqref{413} implies that either $F^8=0$ or $F^8=F^5$. The case $F^8=F^5$ leads to a contradiction. Therefore
$F^{W_2}=F^5+F^{10}$. Now, because $g$ is non-degenerate, as an immediate consequence from \eqref{413} we obtain $\theta _{F^5}(\xi )=2n$. This means that $F^5=\overline {F}^5$, where by
$\overline {F}^5$ we denote the projection of $F$ in $\overline {\mathbb{G}}_5$. Thus, for $F^{W_2}$ we obtain $F^{W_2}=\overline {F}^5+F^{10}$ or $F^{W_2}=\overline {F}^5$. The conditions $d\eta _{F^{W_1}}=d\eta _{F^{10}}=0$ and \eqref{412} imply that the case $F^{W_2}=F^{10}$ is impossible. In both cases for $F^{W_2}$ we have $\mathop{\s} \limits_{(x,y,z)}F^{W_2}(x,y,z)=0$.
Then from the equality $\mathop{\s} \limits_{(x,y,z)}F(x,y,z)=\mathop{\s} \limits_{(x,y,z)}F^{W_1}(x,y,z)+\mathop{\s} \limits_{(x,y,z)}F^{W_2}(x,y,z)$ it follows that $\mathop{\s} \limits_{(x,y,z)}F^{W_1}(x,y,z)=0$. Taking into account the characteristic conditions of the classes $\mathbb{G}_i$ $(i=1,2,3,4)$ we conclude that $F^{W_1}=F^4$. Finally, for $F$ we obtain $F=\overline {F}^5$, or
$F=\overline {F}^5+F^4$, or $F=\overline {F}^5+F^{10}$, or $F=\overline {F}^5+F^4+F^{10}$. Since $d\eta _{F^4}=0$, the case $F=F^4$ is impossible.
\end{proof}
As an immediate consequence from \thmref{Theorem 2.2} and \thmref{Theorem 2.3} we obtain
\begin{co}\label{Corollary 2.1}
A $(2n+1)$-dimensional almost paracontact metric manifold $M(\varphi ,\xi ,\eta ,g)$ is para-Sasakian if and only if $M$ belongs to the class $\overline {\mathbb{G}}_5$.
\end{co}
\begin{rem}\label{Remark 2.1}
The result in Corollary \ref{Corollary 2.1} is the same as that obtained in \cite{Z}.
\end{rem}
It is known that $\xi $ is a Killing vector field if $(L_\xi g)(x,y)=0$. By using \eqref{2.4} we get
\begin{pro}\label{Proposition 2.2}
The vector field $\xi $ is Killing only in the classes $\mathbb{G}_1$, $\mathbb{G}_2$, $\mathbb{G}_3$,
$\mathbb{G}_4$, $\mathbb{G}_5$, $\mathbb{G}_8$, $\mathbb{G}_9$, $\mathbb{G}_{11}$ and in the classes which are their direct sums.
\end{pro}
Using \thmref{Theorem 2.3} and \propref{Proposition 2.2} we obtain that among the classes of paracontact metric manifolds the vector field $\xi $ is Killing only in $\overline {\mathbb{G}}_5$ and $\overline {\mathbb{G}}_5\oplus \mathbb{G}_4$. Then we state
\begin{thm}\label{Theorem 2.4}
A $(2n+1)$-dimensional almost paracontact metric manifold $M(\varphi ,\xi ,\eta ,g)$ is K-paracontact metric if and only if $M$ belongs to one of the classes $\overline {\mathbb{G}}_5$ or $\overline {\mathbb{G}}_5\oplus \mathbb{G}_4$.
\end{thm}
Finally, we check that $d\phi$ vanishes only for normal almost paracontact metric manifolds
belonging to the classes $\mathbb{G}_5$, $\mathbb{G}_8$ and $\mathbb{G}_5\oplus \mathbb{G}_8$. Thus we establish the truth of the following theorem
\begin{thm}\label{Theorem 2.5}
A $(2n+1)$-dimensional almost paracontact metric manifold $M(\varphi ,\xi ,\eta ,g)$ is quasi-para-Sasakian if and only if $M$ belongs to one of the classes $\mathbb{G}_5$, $\mathbb{G}_8$ or $\mathbb{G}_5\oplus \mathbb{G}_8$.
\end{thm}
\section{The projections of the structure tensor $F$ for dimension 3}\label{sec-4}
Let $(M,\varphi ,\xi ,\eta ,g )$ be a 3-dimensional almost paracontact metric manifold and $\{e_i\}_{i=1}^3=\{e_1,e_2,e_3\}$ be a $\varphi $-basis of
$T_pM$, which satisfies the following conditions:
\[
\begin{array}{ll}
\varphi e_1=e_2, \quad \varphi e_2=e_1, \quad e_3=\xi , \\
g(e_1,e_1)=g(e_3,e_3)=-g(e_2,e_2)=1, \quad g(e_i,e_j)=0, \quad i\neq j \in \{1,2,3\}.
\end{array}
\]
We denote the components of the structure tensor $F$ with respect to the $\varphi $-basis $\{e_i\}_{i=1}^3$ by $F_{ijk}=F(e_i,e_j,e_k)$. By direct computations for arbitrary $x, y, z$, given by $x=x^ie_i$, $y=y^ie_i$, $z=z^ie_i$ with respect to $\{e_i\}_{i=1}^3$, we obtain
\begin{equation}\label {3.1}
\begin{array}{ll}
F(x,y,z)=x^1\left\{F_{113}(y^1z^3-y^3z^1)+F_{123}(y^2z^3-y^3z^2)\right\} \\
+x^2\left\{F_{213}(y^1z^3-y^3z^1)+F_{223}(y^2z^3-y^3z^2)\right\} \\
+x^3\left\{F_{331}(y^3z^1-y^1z^3)+F_{332}(y^3z^2-y^2z^3)\right\} .
\end{array}
\end{equation}
For the components $\theta _F^i=\theta _F(e_i)$, $\theta _F^{* i}=\theta _F^*(e_i)$, $\omega _F^i=\omega _F(e_i)$ of the Lee forms $\theta _F$, $\theta _F^*$, $\omega _F$ of $F$ we have
\begin{equation}\label {3.2}
\begin{array}{ll}
\theta _F^1=\theta _F^2=\theta _F^{* 1}=\theta _F^{* 2}=0, \quad \theta _F^3=F_{113}-F_{223}, \\
\theta _F^{* 3}=F_{123}-F_{213}, \quad \omega _F^1=F_{331}, \quad \omega _F^2=F_{332}, \quad \omega _F^3=F_{333}=0 .
\end{array}
\end{equation}
\begin{pro}\label {Proposition 3.1}
The structure tensor $F^i$ $(i=1,\ldots ,12)$ of a 3-dimensional almost paracontact metric manifold $(M,\varphi ,\xi ,\eta ,g )$ has the following form in the corresponding basic classes $\mathbb{G}_i$:
\begin{equation}\label {3.3}
\begin{array}{l}
F^1(x,y,z)=F^2(x,y,z)=F^3(x,y,z)=F^4(x,y,z)=0 ; \\ \\
F^5(x,y,z)=\frac{\theta _F^3}{2}\left\{x^1(y^1z^3-y^3z^1)-x^2(y^2z^3-y^3z^2)\right\}, \\
\frac{\theta _F^3}{2}=F_{113}=-F_{131}=-F_{223}=F_{232} ; \\ \\
F^6(x,y,z)=\frac{\theta _F^{* 3}}{2}\left\{x^1(y^2z^3-y^3z^2)-x^2(y^1z^3-y^3z^1)\right\}, \\
\frac{\theta _F^{* 3}}{2}=F_{123}=-F_{132}=-F_{213}=F_{231} ; \\ \\
F^7(x,y,z)=F^8(x,y,z)=F^9(x,y,z)=0 ; \\ \\
F^{10}(x,y,z)=F^{10}_{113}\left\{x^1(y^1z^3-y^3z^1)+x^2(y^2z^3-y^3z^2)\right\} \\
+F^{10}_{123}\left\{x^1(y^2z^3-y^3z^2)+x^2(y^1z^3-y^3z^1)\right\} ,\\
F^{10}_{113}=-F^{10}_{131}=F^{10}_{223}=-F^{10}_{232} , \quad F^{10}_{123}=-F^{10}_{132}=F^{10}_{213}=-F^{10}_{231} ;\\ \\
F^{11}(x,y,z)=0 ; \\ \\
F^{12}(x,y,z)=\omega _F^1x^3(y^3z^1-y^1z^3)+\omega _F^2x^3(y^3z^2-y^2z^3), \\
\omega _F^1=F_{331}=-F_{313}, \quad \omega _F^2=F_{332}=-F_{323} .
\end{array}
\end{equation}
\end{pro}
By using \eqref{3.3} we have
\begin{pro}\label {Proposition 3.2}
The 3-dimensional almost paracontact metric manifolds belong to the classes $\mathbb{G}_5$,
$\mathbb{G}_6$, $\mathbb{G}_{10}$, $\mathbb{G}_{12}$ and to the classes which are their direct sums.
\end{pro}
We note that the assertion in \propref{Proposition 3.2} follows also from \thmref{Theorem 2.1}.
Taking into account \propref{Proposition 3.2}, \thmref{Theorem 2.2}, \thmref{Theorem 2.3}, \corref{Corollary 2.1}, \propref{Proposition 2.2}, \thmref{Theorem 2.4} and \thmref{Theorem 2.5}
we state:
\begin{thm}\label {Theorem 3.1}
(a) The classes of the 3-dimensional normal almost paracontact metric manifolds are $\mathbb{G}_5$, $\mathbb{G}_6$ and $\mathbb{G}_5\oplus \mathbb{G}_6$; \\
(b) The classes of the 3-dimensional paracontact metric manifolds are $\overline {\mathbb{G}}_5$ and
$\overline {\mathbb{G}}_5\oplus \mathbb{G}_{10}$; \\
(c) The class of the 3-dimensional para-Sasakian manifolds is $\overline {\mathbb{G}}_5$; \\
(d) The class of the 3-dimensional K-paracontact metric manifolds is $\overline {\mathbb{G}}_5$; \\
(e) The class of the 3-dimensional quasi-para-Sasakian manifolds is $\mathbb{G}_5$.
\end{thm}
\begin{rem}\label{Remark 3.1}
It is well known that every $(2n+1)$-dimensional para-Sasakian manifold $M$ is a K-paracontact metric manifold, but the converse is true only if $M$ is 3-dimensional. The assertions in
Corollary \ref{Corollary 2.1}, \thmref{Theorem 2.4} and (c), (d) from \thmref{Theorem 3.1} agree with
this result.
\end{rem}
\section{3-dimensional Lie algebras corresponding to Lie groups with almost paracontact metric structure}\label{sec-5}
Let $L$ be a 3-dimensional real connected Lie group and ${\g}$ be its Lie algebra with a basis $\{E_1,E_2,E_3\}$ of left invariant vector fields. We define
an almost paracontact structure $(\varphi ,\xi ,\eta)$ and a semi-Riemannian metric $g$ in the following way:
\[
\begin{array}{llll}
\varphi E_1=E_2 , \quad \varphi E_2=E_1 , \quad \varphi E_3=0 \\
\xi =E_3 , \quad \eta (E_3)=1 , \quad \eta (E_1)=\eta (E_2)=0 , \\
g(E_1,E_1)=g(E_3,E_3)=-g(E_2,E_2)=1 ,\\
\quad g(E_i,E_j)=0, \quad i\neq j \in \{1,2,3\}.
\end{array}
\]
Then $(L,\varphi ,\xi ,\eta ,g)$ is a 3-dimensional almost paracontact metric manifold. Since the metric $g$ is left invariant, the Koszul equality becomes
\begin{equation}\label {4.1}
\begin{array}{l}
2g(\nabla _xy,z)=g([x,y],z)+g([z,x],y)+g([z,y],x) ,
\end{array}
\end{equation}
where $\nabla $ is the Levi-Civita connection of $g$. By using \eqref{4.1} we find the components $F_{ijk}=F(E_i,E_j,E_k)$,
$(i,j,k \in \{1,2,3\})$ of the tensor $F$:
\begin{equation}\label {4.2}
\begin{array}{ll}
2F_{ijk}=g([E_i,\varphi E_j]-\varphi [E_i,E_j],E_k)+g([E_k,\varphi E_j]+[\varphi E_k,E_j],E_i) \\
\,\, \, \quad \quad +g([\varphi E_k,E_i]-\varphi[E_k,E_i],E_j) .
\end{array}
\end{equation}
Let the commutators of ${\g}$ be defined by $[E_i,E_j]=C_{ij}^kE_k$, where the structure constants $C_{ij}^k$ are real numbers and $C_{ij}^k=-C_{ji}^k$.
Then from \eqref{4.2} for the non-zero components $F_{ijk}$ we obtain
\begin{equation}\label {4.3}
\begin{array}{llll}
F_{113}=-F_{131}=\frac{1}{2}(C_{12}^3+C_{13}^2-C_{23}^1), \\ \\
F_{223}=-F_{232}=\frac{1}{2}(C_{13}^2-C_{12}^3-C_{23}^1), \\ \\
F_{123}=-F_{132}=-C_{13}^1 , \quad F_{213}=-F_{231}=C_{23}^2 , \\ \\
F_{331}=-F_{313}=C_{23}^3 , \quad F_{332}=-F_{323}=C_{13}^3 .
\end{array}
\end{equation}
Taking into account \eqref{3.2} and \eqref{4.3}, for the non-zero components of $\theta _F$, $\theta _F^*$, $\omega _F$ we have
\begin{equation}\label {4.4}
\theta _F^3=C_{12}^3, \quad \theta _F^{* 3}=-C_{13}^1-C_{23}^2, \quad \omega _F^1=C_{23}^3, \quad \omega _F^2=C_{13}^3.
\end{equation}
Using \eqref{3.3}, \eqref{4.3}, \eqref{4.4} and applying the Jacobi identity
\[
\mathop{\s} \limits_{E_i,E_j,E_k}\bigl[[E_i,E_j],E_k\bigr]=0
\]
we deduce the following
\begin{thm}\label{Theorem 4.1}
The manifold $(L,\varphi ,\xi ,\eta ,g)$ belongs to the class $\mathbb{G}_i (i\in \{5,6,10,12\})$ if and only if the corresponding Lie algebra ${\g}$ is determined
by the following commutators:
\begin{equation}\label {4.5}
\begin{array}{ll}
{\bf\mathbb{G}_5} : [E_1,E_2]=C_{12}^1E_1+C_{12}^2E_2+C_{12}^3E_3, \quad [E_1,E_3]=C_{13}^2E_2, \\ \\
[E_2,E_3]=C_{13}^2E_1 : \quad \theta _{F_5}^3=C_{12}^3\neq 0, \quad C_{12}^1C_{13}^2=0, \quad C_{12}^2C_{13}^2=0;
\end{array}
\end{equation}
\begin{equation}\label {4.6}
\begin{array}{ll}
{\bf\mathbb{G}_6} : [E_1,E_2]=C_{12}^1E_1+C_{12}^2E_2, \quad [E_1,E_3]=C_{13}^1E_1+C_{13}^2E_2, \\ \\
[E_2,E_3]=C_{13}^2E_1+C_{13}^1E_2 : \theta _{F_6}^{* 3}=-2C_{13}^1\neq 0, \\ \\
C_{12}^2C_{13}^2-C_{13}^1C_{12}^1=0, \quad C_{12}^1C_{13}^2-C_{13}^1C_{12}^2=0;
\end{array}
\end{equation}
\begin{equation}\label {4.7}
\begin{array}{ll}
{\bf\mathbb{G}_{10}} : [E_1,E_2]=C_{12}^1E_1+C_{12}^2E_2, \quad [E_1,E_3]=C_{13}^1E_1+C_{13}^2E_2, \\ \\
[E_2,E_3]=C_{23}^1E_1-C_{13}^1E_2 : C_{13}^2\neq C_{23}^1 \quad {\text {\rm or}} \quad C_{13}^1\neq 0, \\ \\
C_{12}^1C_{13}^1+C_{12}^2C_{23}^1=0, \quad C_{12}^1C_{13}^2-C_{12}^2C_{13}^1=0;
\end{array}
\end{equation}
\begin{equation}\label {4.8}
\begin{array}{ll}
{\bf\mathbb{G}_{12}} : [E_1,E_2]=C_{12}^1E_1+C_{12}^2E_2, \quad [E_1,E_3]=C_{13}^2E_2+C_{13}^3E_3, \\ \\
[E_2,E_3]=C_{13}^2E_1+C_{23}^3E_3 : C_{13}^3\neq 0 \quad {\text {\rm or}} \quad C_{23}^3\neq 0, \\ \\
(C_{12}^1-C_{23}^3)C_{13}^2=0, \quad (C_{12}^2+C_{13}^3)C_{13}^2=0, \\ \\
(C_{12}^1-C_{23}^3)C_{13}^3+(C_{12}^2+C_{13}^3)C_{23}^3=0.
\end{array}
\end{equation}
\end{thm}
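The quadratic constraints on the structure constants in \thmref{Theorem 4.1} arise from the Jacobi identity; for the $\mathbb{G}_5$ family \eqref{4.5} they are exactly the Jacobi conditions. A small Python/NumPy sketch (the helper names \verb|jacobi_defect| and \verb|g5_constants| are ours) checks this mechanically:

```python
import numpy as np

def jacobi_defect(C):
    """Largest violation of the Jacobi identity for structure constants
    C[i, j, k] = coefficient of E_k in [E_i, E_j] (antisymmetric in i, j)."""
    n = C.shape[0]
    worst = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                for l in range(n):
                    # l-th component of the cyclic sum [[E_i,E_j],E_k] + ...
                    s = sum(C[i, j, m] * C[m, k, l]
                            + C[j, k, m] * C[m, i, l]
                            + C[k, i, m] * C[m, j, l] for m in range(n))
                    worst = max(worst, abs(s))
    return worst

def g5_constants(c121, c122, c123, c132):
    """Structure constants of the bracket table (4.5); argument names are ours."""
    C = np.zeros((3, 3, 3))
    C[0, 1, :] = [c121, c122, c123]   # [E_1, E_2]
    C[0, 2, 1] = c132                 # [E_1, E_3] = C_{13}^2 E_2
    C[1, 2, 0] = c132                 # [E_2, E_3] = C_{13}^2 E_1
    C -= np.transpose(C, (1, 0, 2))   # enforce C_{ij}^k = -C_{ji}^k
    return C

# constraints C_{12}^1 C_{13}^2 = C_{12}^2 C_{13}^2 = 0 satisfied -> Jacobi holds
assert jacobi_defect(g5_constants(1.0, 2.0, 3.0, 0.0)) == 0.0
# violating them breaks the Jacobi identity
assert jacobi_defect(g5_constants(1.0, 0.0, 3.0, 2.0)) > 0.0
```

The same routine can be pointed at the bracket tables \eqref{4.6}--\eqref{4.8} with any concrete choice of constants.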
\section{Matrix Lie groups as 3-dimensional almost paracontact metric manifolds}\label{sec-6}
Let $(L,\varphi ,\xi ,\eta ,g)$ be a 3-dimensional almost paracontact metric manifold from \secref{sec-5} belonging to some of the classes $\mathbb{G}_i (i\in \{5,6,10,12\})$.
By $G$ we denote the simply connected Lie group isomorphic to $L$, both having the same Lie algebra ${\g}$. Further, we find the
adjoint representation $\rm Ad$ of $G$, which is the following Lie group homomorphism
\[
\rm {Ad} : G \longrightarrow Aut({\g}).
\]
For $X\in {\g}$, the map ${\rm {ad}}_X : {\g}\longrightarrow {\g}$ is defined by ${\rm {ad}}_X(Y)=[X,Y]$, where ${\rm {ad}}_X$ denotes ${\rm {ad}}(X)$. Due to the Jacobi identity, the map
\[
\rm {ad} : {\g} \longrightarrow End({\g}) : X\longrightarrow ad_X
\]
is a Lie algebra homomorphism, called the adjoint representation of ${\g}$.
Since the set ${\rm End}({\g})$ of all ${\K}$-linear maps from ${\g}$ to ${\g}$ is isomorphic to the set of all $(n\times n)$ matrices ${\rm M}(n,{\K})$ with entries in ${\K}$, $\rm {ad}$ is a matrix representation of ${\g}$. We denote by $M_i$ the matrices of ${\rm ad}_{E_i}$ $(i=1,2,3)$ with respect to the basis $\{E_1,E_2,E_3\}$ of ${\g}$. Then for an arbitrary $X=aE_1+bE_2+cE_3$ ($a, b, c \in {\R}$) in ${\g}$ the matrix $A$ of ${\rm ad}_X$ is $A=aM_1+bM_2+cM_3$.
By using the well-known identity $e^A={\rm {Ad}}\left(e^X\right)$ we find the matrix representation of the Lie group $G$.
\subsection{Matrix Lie groups as manifolds from the class $\mathbb{G}_5$}\label{subsec-6.1}
Let ${\g}_5$ be the Lie algebra obtained from \eqref{4.5} by $C_{12}^1=C_{12}^2=C_{13}^2=0$, i.e.
\begin{equation}\label{5.1.1}
[E_1,E_2]=\alpha E_3, \quad [E_1,E_3]=0, \quad [E_2,E_3]=0,
\end{equation}
where $\alpha =\theta _{F_5}^3=C_{12}^3$.
Then from \thmref{Theorem 4.1} it follows that $(L,\varphi ,\xi ,\eta ,g)$, where $L$ is a Lie group with a Lie algebra ${\g}_5$, belongs to the class $\mathbb{G}_5$.
In particular, the manifold is paracontact metric if and only if $\alpha =2$.
The Levi-Civita connection ${\nabla}$ is given by
\[
\begin{array}{llll}
\nabla_{E_1}E_1=0 , \quad \nabla_{E_1}E_2=\frac{\alpha}{2}E_3 , \quad \nabla_{E_1}E_3=\frac{\alpha}{2}E_2 , \\
\nabla_{E_2}E_1=-\frac{\alpha}{2}E_3 , \quad \nabla_{E_2}E_2=0 , \quad \nabla_{E_2}E_3=\frac{\alpha}{2}E_1 , \\
\nabla_{E_3}E_1=\frac{\alpha}{2}E_2 , \quad \nabla_{E_3}E_2=\frac{\alpha}{2}E_1 , \quad \nabla_{E_3}E_3=0. \\
\end{array}
\]
It is not hard to see that the Ricci tensor $Ric$ is equal to
$$Ric(x,y)=scal\, g(x,y)-2\, scal\, \eta(x)\eta(y),$$
where $scal=\frac{\alpha^2}{2}$ is the scalar curvature. Consequently, $L$ is an $\eta$-Einstein manifold.
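The $\eta$-Einstein property admits a numerical cross-check. In the following Python/NumPy sketch (the frame-wise curvature formula and all variable names are ours; $\alpha=2$ is a sample value) $Ric$ and $scal$ are recomputed from the constant connection coefficients above:

```python
import numpy as np

alpha = 2.0
half = alpha / 2
# Gam[i, j, k] = coefficient of E_k in nabla_{E_i} E_j (constant in this frame)
Gam = np.zeros((3, 3, 3))
Gam[0, 1, 2] = half; Gam[0, 2, 1] = half
Gam[1, 0, 2] = -half; Gam[1, 2, 0] = half
Gam[2, 0, 1] = half; Gam[2, 1, 0] = half
# structure constants of g_5: [E_1, E_2] = alpha E_3
C = np.zeros((3, 3, 3))
C[0, 1, 2] = alpha; C[1, 0, 2] = -alpha

# R(E_i, E_j)E_k = nabla_i nabla_j E_k - nabla_j nabla_i E_k - nabla_{[E_i,E_j]} E_k
R = (np.einsum('jkm,iml->ijkl', Gam, Gam)
     - np.einsum('ikm,jml->ijkl', Gam, Gam)
     - np.einsum('ijm,mkl->ijkl', C, Gam))
Ric = np.einsum('iyzi->yz', R)               # Ric(y, z) = trace of x -> R(x, y)z
g = np.diag([1.0, -1.0, 1.0])
eta = np.array([0.0, 0.0, 1.0])
scal = np.trace(np.linalg.inv(g) @ Ric)      # scal = g^{ij} Ric_{ij}

assert np.isclose(scal, alpha**2 / 2)
assert np.allclose(Ric, scal * g - 2 * scal * np.outer(eta, eta))
```

With $\alpha =2$ this returns $scal=2=\alpha ^2/2$ and $Ric={\rm diag}(2,-2,-2)$, in agreement with the displayed formula.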
For the matrices $M_i$ $(i=1,2,3)$ and $A$ we have:
\[
M_1=\left(\begin{array}{lll}
0 & 0 & 0 \cr
0 & 0 & 0 \cr
0 & \alpha & 0
\end{array}\right) , \quad
M_2=\left(\begin{array}{cll}
0 & 0 & 0 \cr
0 & 0 & 0 \cr
-\alpha & 0 & 0
\end{array}\right) , \quad
M_3=\left(\begin{array}{lll}
0 & 0 & 0 \cr
0 & 0 & 0 \cr
0 & 0 & 0
\end{array}\right) ,
\]
\begin{equation}\label{5.1.2}
A=\left(\begin{array}{ccl}
0 & 0 & 0 \cr
0 & 0 & 0 \cr
-b\alpha & a\alpha & 0
\end{array}\right) .
\end{equation}
\begin{thm}\label{Theorem 5.1.1}
The matrix representation of the Lie group $G_5$ corresponding to the Lie algebra ${\g}_5$, determined by \eqref{5.1.1} and having the matrix representation
\eqref{5.1.2}, is
\begin{equation}\label{5.1.3}
G_5=\left\{e^A=
\left(\begin{array}{ccl}
1 & 0 & 0 \cr
0 & 1 & 0 \cr
-b\alpha & a\alpha & 1
\end{array}\right)
\right\} .
\end{equation}
\end{thm}
\begin{proof}
The matrix $A$ is nilpotent of degree $q=2$. Therefore we compute $e^A$ directly from
\begin{equation}\label {5.1}
\begin{array}{l}
e^A=E+A+\frac{A^2}{2!}+\frac{A^3}{3!}+\ldots +\frac{A^{q-1}}{(q-1)!} \, ,
\end{array}
\end{equation}
i.e. $e^A=E+A$. So we obtain \eqref{5.1.3}.
\end{proof}
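The nilpotency argument can be illustrated numerically. In the following Python/NumPy sketch the helper \verb|mat_exp| (a truncated Taylor series) and the sample values of $\alpha$, $a$, $b$ are ours:

```python
import numpy as np

def mat_exp(M, terms=60):
    # truncated Taylor series of e^M; ample accuracy for these small matrices
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

alpha, a, b = 2.0, 0.7, -1.3                   # sample values
A = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0],
              [-b * alpha, a * alpha, 0.0]])   # the matrix (5.1.2) of ad_X
assert np.allclose(A @ A, 0)                   # A is nilpotent of degree 2
assert np.allclose(mat_exp(A), np.eye(3) + A)  # hence e^A = E + A, as in (5.1.3)
```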
\subsection{Matrix Lie groups as manifolds from the class $\mathbb{G}_6$}\label{subsec-6.2}
We consider the Lie algebra ${\g}_6$ obtained from \eqref{4.6} by
$C_{12}^1=C_{12}^2=0$, i.e.
\begin{equation}\label{5.2.1}
[E_1,E_2]=0, \quad [E_1,E_3]=\alpha E_1+\beta E_2, \quad [E_2,E_3]=\beta E_1+\alpha E_2,
\end{equation}
where $\alpha =-\frac{\theta _{F_6}^{*3}}{2}=C_{13}^1, \, \, \beta =C_{13}^2\neq 0$.
Then from \thmref{Theorem 4.1} it follows that $(L,\varphi ,\xi ,\eta ,g)$, where $L$ is a Lie group with a Lie algebra ${\g}_6$, belongs to the class $\mathbb{G}_6$.
The Levi-Civita connection ${\nabla}$ is given by
\[
\begin{array}{llll}
\nabla_{E_1}E_1=-\alpha E_3 , \quad \nabla_{E_1}E_2=0 , \quad \nabla_{E_1}E_3=\alpha E_1 , \\
\nabla_{E_2}E_1=0 , \quad \nabla_{E_2}E_2=\alpha E_3 , \quad \nabla_{E_2}E_3=\alpha E_2 , \\
\nabla_{E_3}E_1=-\beta E_2 , \quad \nabla_{E_3}E_2=-\beta E_1 , \quad \nabla_{E_3}E_3=0. \\
\end{array}
\]
It is not hard to see that the Ricci tensor $Ric$ is equal to
$$Ric(x,y)=\frac{scal}{3}g(x,y),$$
where $scal=-6\alpha^2$ is the scalar curvature. Consequently, $L$ is an Einstein manifold.
For the matrices $M_i$ $(i=1,2,3)$ and $A$ we get:
\[
M_1=\left(\begin{array}{llc}
0 & 0 & \alpha \cr
0 & 0 & \beta \cr
0 & 0 & 0
\end{array}\right) , \quad
M_2=\left(\begin{array}{llr}
0 & 0 & \beta \cr
0 & 0 & \alpha \cr
0 & 0 & 0
\end{array}\right) , \quad
M_3=\left(\begin{array}{rrl}
-\alpha & -\beta & 0 \cr
-\beta & -\alpha & 0 \cr
0 & 0 & 0
\end{array}\right) ,
\]
\begin{equation}\label{5.2.2}
A=\left(\begin{array}{rrc}
-c\alpha & -c\beta & a\alpha +b\beta \cr
-c\beta & -c\alpha & b\alpha +a\beta\cr
0 & 0 & 0
\end{array}\right) .
\end{equation}
\begin{thm}\label{Theorem 5.2.1}
The matrix representation of the Lie group $G_6$ corresponding to the Lie algebra ${\g}_6$, determined by \eqref{5.2.1} and having the matrix representation
\eqref{5.2.2}, is as follows:
\begin{itemize}
\item If $c\neq 0$, $\beta =\alpha $ and $b=-a$, then
\begin{equation}\label{5.2.3}
G_6=\left\{e^A=
\left(\begin{array}{ccc}
\frac{1+e^{-2c\alpha }}{2} & \frac{-1+e^{-2c\alpha }}{2} & 0 \cr \cr
\frac{-1+e^{-2c\alpha }}{2} & \frac{1+e^{-2c\alpha }}{2} & 0 \cr \cr
0 & 0 & 1
\end{array}\right)
\right\} .
\end{equation}
\item If $c\neq 0$, $\beta =\alpha $ and $b\neq-a$, then
\begin{equation}\label{5.2.4}
G_6=\left\{e^A=
\left(\begin{array}{ccc}
\frac{1+e^{-2c\alpha }}{2} & \frac{-1+e^{-2c\alpha }}{2} & \frac{(a+b)(1-e^{-2c\alpha })}{2c} \cr \cr
\frac{-1+e^{-2c\alpha }}{2} & \frac{1+e^{-2c\alpha }}{2} & \frac{(a+b)(1-e^{-2c\alpha })}{2c}\cr \cr
0 & 0 & 1
\end{array}\right)
\right\} .
\end{equation}
\item If $c\neq 0$, $\beta =-\alpha $ and $b=a$, then
\begin{equation}\label{5.2.5}
G_6=\left\{e^A=
\left(\begin{array}{ccc}
\frac{1+e^{-2c\alpha }}{2} & \frac{1-e^{-2c\alpha }}{2} & 0 \cr \cr
\frac{1-e^{-2c\alpha }}{2} & \frac{1+e^{-2c\alpha }}{2} & 0 \cr \cr
0 & 0 & 1
\end{array}\right)
\right\} .
\end{equation}
\item If $c\neq 0$, $\beta =-\alpha $ and $b\neq a$, then
\begin{equation}\label{5.2.6}
G_6=\left\{e^A=
\left(\begin{array}{ccc}
\frac{1+e^{-2c\alpha }}{2} & \frac{1-e^{-2c\alpha }}{2} & \frac{(a-b)(1-e^{-2c\alpha })}{2c} \cr \cr
\frac{1-e^{-2c\alpha }}{2} & \frac{1+e^{-2c\alpha }}{2} & \frac{(a-b)(-1+e^{-2c\alpha })}{2c}\cr \cr
0 & 0 & 1
\end{array}\right)
\right\} .
\end{equation}
\item If $c\neq 0$, $\beta \neq \pm \alpha $, then
\begin{equation}\label{5.2.7}
\small{G_6=\left\{e^A=
\left(\begin{array}{ccc}
e^{-c\alpha }\cosh c\beta & -e^{-c\alpha }\sinh c\beta & \frac{a(1-e^{-c\alpha }\cosh c\beta)+be^{-c\alpha }\sinh c\beta }{c} \cr \cr
-e^{-c\alpha }\sinh c\beta & e^{-c\alpha }\cosh c\beta & \frac{b(1-e^{-c\alpha }\cosh c\beta)+ae^{-c\alpha }\sinh c\beta }{c} \cr \cr
0 & 0 & 1
\end{array}\right)\right\} .}
\end{equation}
\item If $c=0$, then
\begin{equation}\label{5.2.8}
G_6=\left\{e^A=
\left(\begin{array}{llc}
1 & 0 & a\alpha +b\beta \cr
0 & 1 & b\alpha +a\beta \cr
0 & 0 & 1
\end{array}\right)
\right\} .
\end{equation}
\end{itemize}
\end{thm}
\begin{proof}
The characteristic polynomial of $A$ is
\[
P_A(\lambda )=\lambda(-c\alpha +c\beta -\lambda )(c\alpha +c\beta+\lambda ) =0 .
\]
Hence for the eigenvalues $\lambda _i \, (i = 1, 2, 3)$ of $A$ we have
\[
\lambda _1=0 , \quad \lambda _2=c(\beta -\alpha ) , \quad \lambda _3=-c(\alpha +\beta ) .
\]
First, we assume that $c\neq 0$. If $\beta =\alpha $ \, (resp. $\beta =-\alpha $) we obtain
\[
\lambda _1=\lambda _2=0, \, \lambda _3=-2c\alpha \qquad
(\text {resp.} \, \lambda _1=\lambda _3=0, \, \lambda _2=-2c\alpha ).
\]
Let us consider the case $\beta =\alpha $. Then the eigenvectors
\[
p_1=(-1,1,0), \quad p_2=(a+b,0,c),
\]
corresponding to $\lambda _1=\lambda _2=0$, are linearly independent for arbitrary $a$ and $b$. The coordinates $(x_1,x_2,x_3)$ of the eigenvector $p_3$, corresponding to $\lambda _3=-2c\alpha $,
satisfy the following system:
\[
\left|\begin{array}{ll}
x_1-x_2+\frac{a+b}{c}x_3=0 \cr \cr
-x_1+x_2+\frac{a+b}{c}x_3=0 \cr \cr
x_3=0 .
\end{array}\right.
\]
If we suppose that $b=-a$, then $p_1$, $p_2=(0,0,c)$ and $p_3=(1,1,0)$ are linearly independent and for the change of basis matrix $P$ we get
\[
P=\left(\begin{array}{rll}
-1 & 0 & 1 \cr
1 & 0 & 1 \cr
0 & c & 0
\end{array}\right) .
\]
By using that $e^A = Pe^JP^{-1}$, where $J$ is the diagonal matrix with elements $J_{ii} =\lambda _i$ and $P^{-1}$ is the inverse matrix of $P$,
we obtain the matrix representation \eqref{5.2.3} of $G_6$. When $b\neq -a$ the vectors $p_1, p_2=(a+b,0,c)$ and $p_3=(1,1,0)$ are linearly independent
and the matrix representation of $G_6$ is in the form \eqref{5.2.4}.\\
In the case $\beta =-\alpha$, by analogical computations, we obtain \eqref{5.2.5} and \eqref{5.2.6}.\\
Now, if we take $\beta \neq \pm \alpha $, then the eigenvalues $\lambda _i \, (i = 1, 2, 3)$ of $A$ are different and hence the corresponding eigenvectors
\[
p_1=\left(\frac{a}{c},\frac{b}{c},1\right), \quad p_2=(1,-1,0) \quad \text{and} \quad p_3=(1,1,0)
\]
are linearly independent. Then the matrix representation of $G_6$ is \eqref{5.2.7}.\\
Finally, the assumption $c=0$ implies that $A$ is a nilpotent matrix of degree $q=2$ and by using \eqref{5.1} we obtain \eqref{5.2.8}.
\end{proof}
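The generic case \eqref{5.2.7} admits a quick numerical check. In the following Python/NumPy sketch the helper \verb|mat_exp| (a truncated Taylor series) and the sample parameter values, chosen with $c\neq 0$ and $\beta \neq \pm \alpha$, are ours:

```python
import numpy as np

def mat_exp(M, terms=60):
    # truncated Taylor series of e^M; ample accuracy for these small matrices
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

alpha, beta, a, b, c = 1.0, 2.0, 0.5, -1.5, 0.7   # sample values
A = np.array([[-c * alpha, -c * beta, a * alpha + b * beta],
              [-c * beta, -c * alpha, b * alpha + a * beta],
              [0.0, 0.0, 0.0]])                   # the matrix (5.2.2)
e, ch, sh = np.exp(-c * alpha), np.cosh(c * beta), np.sinh(c * beta)
claimed = np.array([
    [e * ch, -e * sh, (a * (1 - e * ch) + b * e * sh) / c],
    [-e * sh, e * ch, (b * (1 - e * ch) + a * e * sh) / c],
    [0.0, 0.0, 1.0]])                             # right-hand side of (5.2.7)
assert np.allclose(mat_exp(A), claimed)
```

Note that the last row of $A$ is zero, so the last row of $e^A$ is always $(0,0,1)$, as in the formulas above.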
\subsection{Matrix Lie groups as manifolds from the class $\mathbb{G}_{10}$}\label{subsec-6.3}
We consider the Lie algebra ${\g}_{10}$ obtained from \eqref{4.7} by
$C_{12}^1=C_{12}^2=C_{13}^2=C_{23}^1=0$, i.e.
\begin{equation}\label{5.3.1}
[E_1,E_2]=0, \quad [E_1,E_3]=\alpha E_1, \quad [E_2,E_3]=-\alpha E_2,
\end{equation}
where $\alpha =C_{13}^1\neq 0$.
Then from \thmref{Theorem 4.1} it follows that $(L,\varphi ,\xi ,\eta ,g)$, where $L$ is a Lie group with a Lie algebra ${\g}_{10}$, belongs to the class $\mathbb{G}_{10}$.
The Levi-Civita connection ${\nabla}$ is given by
\[
\begin{array}{llll}
\nabla_{E_1}E_1=-\alpha E_3 , \quad \nabla_{E_1}E_2=0 , \quad \nabla_{E_1}E_3=\alpha E_1 , \\
\nabla_{E_2}E_1=0 , \quad \nabla_{E_2}E_2=-\alpha E_3 , \quad \nabla_{E_2}E_3=-\alpha E_2 , \\
\nabla_{E_3}E_1=0 , \quad \nabla_{E_3}E_2=0 , \quad \nabla_{E_3}E_3=0. \\
\end{array}
\]
It is easy to see that the Ricci tensor $Ric$ is equal to
$$Ric(x,y)=scal\, \eta(x)\eta(y),$$
where $scal=-2\alpha^2$ is the scalar curvature.
For the matrices $M_i$ $(i=1,2,3)$ and $A$ we have:
\[
M_1=\left(\begin{array}{llc}
0 & 0 & \alpha \cr
0 & 0 & 0 \cr
0 & 0 & 0
\end{array}\right) , \quad
M_2=\left(\begin{array}{llr}
0 & 0 & 0 \cr
0 & 0 & -\alpha \cr
0 & 0 & 0
\end{array}\right) , \quad
M_3=\left(\begin{array}{rll}
-\alpha & 0 & 0 \cr
0 & \alpha & 0 \cr
0 & 0 & 0
\end{array}\right) ,
\]
\begin{equation}\label{5.3.2}
A=\left(\begin{array}{rcr}
-c\alpha & 0 & a\alpha \cr
0 & c\alpha & -b\alpha \cr
0 & 0 & 0
\end{array}\right) .
\end{equation}
\begin{thm}\label{Theorem 5.3.1}
The matrix representation of the Lie group $G_{10}$ corresponding to the Lie algebra ${\g}_{10}$, determined by \eqref{5.3.1} and having the matrix representation
\eqref{5.3.2}, is as follows:
\begin{itemize}
\item If $c\neq 0$, then
\begin{equation}\label{5.3.3}
G_{10}=\left\{e^A=
\left(\begin{array}{ccc}
e^{-c\alpha } & 0 & \frac{a\left(1-e^{-c\alpha }\right)}{c} \cr \cr
0 & e^{c\alpha } & \frac{b\left(1-e^{c\alpha }\right)}{c} \cr \cr
0 & 0 & 1
\end{array}\right)
\right\} .
\end{equation}
\item If $c=0$, then
\begin{equation}\label{5.3.4}
G_{10}=\left\{e^A=\left(\begin{array}{llc}
1 & 0 & a\alpha \cr
0 & 1 & -b\alpha \cr
0 & 0 & 1
\end{array}\right)\right\} .
\end{equation}
\end{itemize}
\end{thm}
\begin{proof}
From the characteristic polynomial of $A$
\[
P_A(\lambda )=(-c\alpha -\lambda )(c\alpha-\lambda )\lambda =0
\]
we find
\[
\lambda _1=-c\alpha , \quad \lambda _2=c\alpha , \quad \lambda _3=0 .
\]
If $c\neq 0$, then the eigenvalues $\lambda _i \, (i = 1, 2, 3)$ of $A$ are distinct and hence the eigenvectors corresponding to $\lambda _i \, (i = 1, 2, 3)$
\[
p_1=(1,0,0), \quad p_2=(0,1,0), \quad p_3=(a,b,c)
\]
are linearly independent. For the change of basis matrix $P$ we have
\[
P=\left(\begin{array}{lll}
1 & 0 & a \cr
0 & 1 & b \cr
0 & 0 & c
\end{array}\right) .
\]
Then for the matrix representation of $G_{10}$ we obtain \eqref{5.3.3}.\\
In the case when $c=0$ the matrix $A$ is nilpotent of degree $q=2$. Using \eqref{5.1} we establish \eqref{5.3.4}.
\end{proof}
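Theorem~\ref{Theorem 5.3.1} lends itself to a quick numerical check. The sketch below (NumPy; the values of $a$, $b$, $c$, $\alpha$ are illustrative choices of ours, not taken from the text) compares a truncated power series for $e^A$ with the closed forms \eqref{5.3.3} and \eqref{5.3.4}.

```python
import numpy as np

def expm_series(A, terms=80):
    # e^A via the truncated power series sum_k A^k / k!
    E, term = np.eye(3), np.eye(3)
    for k in range(1, terms):
        term = term @ A / k
        E += term
    return E

a, b, c, alpha = 0.7, -1.3, 0.5, 2.0            # sample parameters with c != 0
A = np.array([[-c*alpha, 0.0,      a*alpha],
              [ 0.0,     c*alpha, -b*alpha],
              [ 0.0,     0.0,      0.0]])

# Closed form (5.3.3)
closed = np.array([[np.exp(-c*alpha), 0.0,             a*(1 - np.exp(-c*alpha))/c],
                   [0.0,              np.exp(c*alpha), b*(1 - np.exp(c*alpha))/c],
                   [0.0,              0.0,             1.0]])
assert np.allclose(expm_series(A), closed)

# Case c = 0: A is nilpotent of degree 2 and e^A = I + A, as in (5.3.4)
A0 = np.array([[0.0, 0.0,  a*alpha],
               [0.0, 0.0, -b*alpha],
               [0.0, 0.0,  0.0]])
assert np.allclose(A0 @ A0, np.zeros((3, 3)))
assert np.allclose(expm_series(A0), np.eye(3) + A0)
```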
\subsection{Matrix Lie groups as manifolds from the class $\mathbb{G}_{12}$}\label{subsec-6.4}
Let ${\g}_{12}$ be the Lie algebra obtained from \eqref{4.8} by $C_{13}^2=0$, i.e.
\begin{equation}\label{5.4.1}
[E_1,E_2]=\alpha E_1+\beta E_2, \quad [E_1,E_3]=-\beta E_3, \quad [E_2,E_3]=\alpha E_3,
\end{equation}
where $\alpha =\omega _F^1=C_{23}^3=C_{12}^1\neq 0$, $\beta =-\omega _F^2=-C_{13}^3=C_{12}^2\neq 0$.
Then from \thmref{Theorem 4.1} it follows that $(L,\varphi ,\xi ,\eta ,g)$, where $L$ is a Lie group with a Lie algebra ${\g}_{12}$, belongs to the class $\mathbb{G}_{12}$.
The Levi-Civita connection ${\nabla}$ is given by
\[
\begin{array}{llll}
\nabla_{E_1}E_1=\alpha E_2 , \quad \nabla_{E_1}E_2=\alpha E_1 , \quad \nabla_{E_1}E_3=0 , \\
\nabla_{E_2}E_1=-\beta E_2 , \quad \nabla_{E_2}E_2=-\beta E_1 , \quad \nabla_{E_2}E_3=0 , \\
\nabla_{E_3}E_1=\beta E_3 , \quad \nabla_{E_3}E_2=-\alpha E_3 , \quad \nabla_{E_3}E_3=-\beta E_1-\alpha E_2. \\
\end{array}
\]
Hence the matrices $M_i$ $(i=1,2,3)$ and $A$ are:
\[
M_1=\left(\begin{array}{llr}
0 & \alpha & 0 \cr
0 & \beta & 0\cr
0 & 0 & -\beta
\end{array}\right) , \quad
M_2=\left(\begin{array}{rlr}
-\alpha & 0 & 0 \cr
-\beta & 0 & 0 \cr
0 & 0 & \alpha
\end{array}\right) , \quad
M_3=\left(\begin{array}{rrl}
0 & 0 & 0 \cr
0 & 0 & 0 \cr
\beta & -\alpha & 0
\end{array}\right) ,
\]
\begin{equation}\label{5.4.2}
A=\left(\begin{array}{rrc}
-b\alpha & a\alpha & 0 \cr
-b\beta & a\beta & 0\cr
c\beta & -c\alpha & b\alpha-a\beta
\end{array}\right) .
\end{equation}
\begin{thm}\label{Theorem 5.4.1}
The matrix representation of the Lie group $G_{12}$ corresponding to the Lie algebra ${\g}_{12}$, determined by \eqref{5.4.1} and having the matrix representation
\eqref{5.4.2}, is as follows:
\begin{itemize}
\item If $b\alpha-a\beta\neq 0$, then
\begin{equation}\label{5.4.3}
G_{12}=\left\{e^A=
\left(\begin{array}{llc}
\frac{a\beta -b\alpha e^{a\beta -b\alpha }}{a\beta -b\alpha} & \frac{a\alpha (e^{a\beta -b\alpha}-1)}{a\beta -b\alpha} & 0 \cr \cr
\frac{b\beta (1-e^{a\beta -b\alpha})}{a\beta -b\alpha} & \frac{-b\alpha +a\beta e^{a\beta -b\alpha }}{a\beta -b\alpha} & 0 \cr \cr
\frac{c\beta (1-e^{b\alpha -a\beta })}{a\beta -b\alpha} & \frac{c\alpha (e^{b\alpha -a\beta }-1)}{a\beta -b\alpha} & e^{b\alpha -a\beta }
\end{array}\right)
\right\} ,
\end{equation}
where $(a,b)\neq (0,0)$.
\item If $b\alpha-a\beta =0$, then
\begin{equation}\label{5.4.4}
G_{12}=\left\{e^A=
\left(\begin{array}{ccc}
1-b\alpha & a\alpha & 0 \cr
-b\beta & 1+a\beta & 0 \cr
c\beta & -c\alpha & 1
\end{array}\right)
\right\} ,
\end{equation}
where $a$ and $b$ are either both zero or both non-zero.
\end{itemize}
\end{thm}
\begin{proof}
From the characteristic polynomial of $A$
\[
P_A(\lambda )=\lambda (\lambda +b\alpha -a\beta )(b\alpha -a\beta -\lambda ) =0
\]
we find
\[
\lambda _1=0 , \quad \lambda _2=-b\alpha +a\beta , \quad \lambda _3=b\alpha -a\beta .
\]
First, we assume that $b\alpha-a\beta\neq 0$. From this condition it follows that $(a,b)\neq (0,0)$ and the eigenvalues $\lambda _i \, (i = 1, 2, 3)$ of $A$ are distinct. Then the eigenvectors corresponding to $\lambda _i \, (i = 1, 2, 3)$
\[
p_1=(a,b,c), \quad p_2=(\alpha ,\beta ,0), \quad p_3=(0,0,1)
\]
are linearly independent and the change of basis matrix $P$ is
\[
P=\left(\begin{array}{lll}
a & \alpha & 0 \cr
b & \beta & 0 \cr
c & 0 & 1
\end{array}\right) .
\]
By straightforward computations we obtain that in this case \eqref{5.4.3} is the matrix representation of $G_{12}$.\\
If $b\alpha-a\beta =0$, then $a$ and $b$ are either both zero or both non-zero. In this case the matrix $A$ is nilpotent of degree $q=2$ and the matrix representation of
$G_{12}$ is in the form \eqref{5.4.4}.
\end{proof}
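Both cases of Theorem~\ref{Theorem 5.4.1} can be verified numerically as well. The sketch below (NumPy; the parameter values are our own illustrative choices satisfying $\alpha\neq 0$, $\beta\neq 0$) compares a truncated power series for $e^A$ with \eqref{5.4.3}, and checks the nilpotent case \eqref{5.4.4}.

```python
import numpy as np

def expm_series(A, terms=80):
    # e^A via the truncated power series sum_k A^k / k!
    E, term = np.eye(3), np.eye(3)
    for k in range(1, terms):
        term = term @ A / k
        E += term
    return E

alpha, beta = 1.2, -0.4
a, b, c = 0.8, 0.3, 1.5                  # sample values with b*alpha - a*beta != 0
d = a*beta - b*alpha                     # the nonzero eigenvalue parameter
A = np.array([[-b*alpha, a*alpha,  0.0],
              [-b*beta,  a*beta,   0.0],
              [ c*beta, -c*alpha,  b*alpha - a*beta]])

ed = np.exp(d)                           # e^{a beta - b alpha}; 1/ed = e^{b alpha - a beta}
closed = np.array([[(a*beta - b*alpha*ed)/d, a*alpha*(ed - 1)/d,       0.0],
                   [ b*beta*(1 - ed)/d,      (-b*alpha + a*beta*ed)/d, 0.0],
                   [ c*beta*(1 - 1/ed)/d,    c*alpha*(1/ed - 1)/d,     1/ed]])
assert np.allclose(expm_series(A), closed)          # matches (5.4.3)

# Degenerate case b*alpha = a*beta: A is nilpotent of degree 2, so e^A = I + A
b2 = a*beta/alpha
A2 = np.array([[-b2*alpha, a*alpha,  0.0],
               [-b2*beta,  a*beta,   0.0],
               [ c*beta,  -c*alpha,  0.0]])
assert np.allclose(A2 @ A2, np.zeros((3, 3)))
assert np.allclose(expm_series(A2), np.eye(3) + A2) # matches (5.4.4)
```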
\section*{Acknowledgments}
S.Z. is partially supported by Contract DFNI I02/4/12.12.2014 and Contract 80-10-33/2017 with the Sofia University ``St. Kl. Ohridski''.\\
G.N. is partially supported by Contract FSD-31-653-08/19.06.2017 with the University of Veliko Tarnovo ``St. Cyril and St. Methodius''.
\section{Introduction}
Throughout this paper,
we assume that
\begin{empheq}[box=\mybluebox]{equation}
\text{$X$ is a real Hilbert space}
\end{empheq}
with inner product $\scal{\cdot}{\cdot}$ and induced norm
$\|\cdot\|$.
Let $U$ and $V$ be closed convex subsets of $X$ such that
$U\cap V\neq\varnothing$.
The \emph{convex feasibility problem} is to find a point in
$U\cap V$. This is a basic problem in the natural sciences and
engineering (see, e.g., \cite{bb96}, \cite{CenZen}, and \cite{Comb93}) ---
as such, a plethora of algorithms based on the
nearest point mappings (projectors) $P_U$ and $P_V$ have been
proposed to solve it.
One particularly popular method is the \emph{Douglas--Rachford
splitting algorithm} \cite{DougRach}, which utilizes the
\emph{Douglas--Rachford splitting operator}
\begin{empheq}[box=\mybluebox]{equation}
\label{e:T}
T := P_V(2P_U-\ensuremath{\operatorname{Id}}) + \ensuremath{\operatorname{Id}}- P_U
\end{empheq}
and $x_0\in X$ to generate the sequence $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ by
\begin{equation}
\label{e:DR}
(\forall\ensuremath{{n\in{\mathbb N}}}) \quad x_{n+1} := Tx_n.
\end{equation}
While the sequence $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ may or may not converge to a
point in $U\cap V$, the projected (``shadow'') sequence
\begin{equation}
(P_Ux_n)_\ensuremath{{n\in{\mathbb N}}}
\end{equation}
always converges (weakly) to a point in $U\cap V$
(see \cite{LM}, \cite{a-loch}, \cite{BauschkeJonFest}, \cite{BC2011}).
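As a small illustration, the following sketch (NumPy; the two planes in $\ensuremath{\mathbb R}^3$ and the starting point are our own choices) iterates \eqref{e:DR} for two subspaces with one-dimensional intersection; the shadow sequence converges to $P_{U\cap V}x_0$, in line with the identification established later in this paper.

```python
import numpy as np

def proj(B):
    # Orthogonal projector onto the column space of B
    Q, _ = np.linalg.qr(B)
    return Q @ Q.T

# U = span{e1, e2}, V = span{e1, e2 + e3}; here U and V meet in span{e1}
PU = proj(np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]))
PV = proj(np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]))
I = np.eye(3)
T = PV @ (2*PU - I) + I - PU       # Douglas--Rachford operator

x = np.array([0.4, -1.0, 2.0])     # starting point x0
for _ in range(200):
    x = T @ x
shadow = PU @ x

# The shadow lands at P_{U cap V} x0 = (0.4, 0, 0)
assert np.allclose(shadow, [0.4, 0.0, 0.0], atol=1e-6)
```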
The Douglas--Rachford algorithm has been applied very
successfully to various problems where $U$ and $V$ are not
necessarily convex, even though the supporting formal theory is
far from being complete (see, e.g., \cite{ABT}, \cite{JOSA}, and
\cite{Elser}).
Very recently, Hesse, Luke and Neumann \cite{HLN} (see also
\cite{HL}) considered
projection methods for the (nonconvex) sparse affine feasibility
problem. Their paper highlights the importance of understanding
the Douglas--Rachford algorithm for the case when $U$ and $V$
are closed \emph{subspaces} of $X$; their basic convergence
result is the following.
\begin{fact}[Hesse--Luke--Neumann]
\label{f:HLN}
{\rm (See \cite[Theorem~4.6]{HLN}.)}
Suppose that $X$ is finite-dimensional and
$U$ and $V$ are subspaces of $X$.
Then the sequence $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ generated by \eqref{e:DR} converges to a
point in $\ensuremath{\operatorname{Fix}} T$ with a linear rate\footnote{Recall that
$x_n\to x$ \emph{linearly} or with a \emph{linear rate}
$\gamma\in\ensuremath{\left]0,1\right[}$ if $(\gamma^{-n}\|x_n-x\|)_\ensuremath{{n\in{\mathbb N}}}$ is bounded.}.
\end{fact}
The aim of this paper is three-fold.
We complement Fact~\ref{f:HLN} by providing the following:
\begin{itemize}
\item
We identify the limit of the shadow sequence $(P_Ux_n)_\ensuremath{{n\in{\mathbb N}}}$ as
$P_{U\cap V}x_0$; consequently and somewhat surprisingly,
the Douglas--Rachford method in this
setting not only solves a feasibility problem but actually a
\emph{best approximation problem}.
\item
We quantify the rate of convergence --- it turns out to be the cosine of the
Friedrichs angle between $U$ and $V$; moreover, our estimate is sharp.
\item
Our analysis is carried out in general (possibly
infinite-dimensional) Hilbert space.
\end{itemize}
The paper is organized as follows.
In Sections~\ref{s:aux} and \ref{s:static},
we collect various auxiliary results to
facilitate the proof of the main results (Theorem~\ref{t:main} and Theorem~\ref{t:main2})
in Section~\ref{s:main}.
In Section~\ref{s:plane}, we analyze the Douglas--Rachford algorithm
for two lines in the Euclidean plane.
The results obtained are used in Section~\ref{s:anex} for
an infinite-dimensional construction illustrating the lack of
linear convergence.
In Section~\ref{s:compare}, we compare
the Douglas--Rachford algorithm to the method of alternating
projections.
We report on numerical experiments in Section~\ref{s:numerical},
and conclude the paper in Section~\ref{s:conclusion}.
Notation is standard and follows largely \cite{BC2011}.
We write $U\oplus V$ to indicate that the terms of the Minkowski sum $U+V =
\menge{u+v}{u\in U,v\in V}$ satisfy $U\perp V$.
\section{Auxiliary results}
In this section, we collect various results to ease the derivation
of the main results.
\label{s:aux}
\subsection{Firmly nonexpansive mappings}
It is well known (see \cite{LM}, \cite{EckBer}, or \cite{JOSA})
that the Douglas--Rachford operator $T$ (see
\eqref{e:T}) is \emph{firmly nonexpansive}, i.e.,
\begin{equation}
(\forall x\in X)(\forall y\in X)
\quad
\|Tx-Ty\|^2 + \|(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y\|^2 \leq \|x-y\|^2.
\end{equation}
The following result will be useful in our analysis.
\begin{fact}
\label{f:firmiter}
{\rm (See \cite[Corollary~5.16 and Proposition~5.27]{BC2011}, or
\cite[Theorem~2.2]{BDHP}, \cite{BaiBruRei} and \cite{BruRei}.)}
Let $T\colon X\to X$ be linear and firmly nonexpansive, and let
$x\in X$.
Then $T^nx \to P_{\ensuremath{\operatorname{Fix}} T}x$.
\end{fact}
\subsection{Products of projections and the Friedrichs angle}
Unless otherwise stated, we assume from now on that
\begin{empheq}[box=\mybluebox]{equation}
\text{$U$ and $V$ are closed subspaces of $X$.}
\end{empheq}
The proof of the following useful fact
can be found in \cite[Lemma~9.2]{Deutsch}:
\begin{equation}
\label{e:PP}
U\subseteq V
\;\;\Rightarrow\;\;
P_U(V)\subseteq V
\;\;\Leftrightarrow\;\;
P_VP_U=P_UP_V = P_{U\cap V}.
\end{equation}
Our main results are formulated using the notion of the Friedrichs
angle between $U$ and $V$. Let us review the definition and
provide the key results which are needed in the sequel.
\begin{definition}
The \emph{cosine of the Friedrichs angle}
between $U$ and $V$ is
\begin{equation}
c_F := \sup \menge{\scal{u}{v}}{u\in U\cap (U\cap
V)^\perp,\; v\in V\cap (U\cap V)^\perp,\;
\|u\|\leq 1,\; \|v\|\leq 1}.
\end{equation}
We write $c_F(U,V)$ for $c_F$ if we emphasize the subspaces
utilized.
\end{definition}
\begin{fact}[fundamental properties of the Friedrichs angle]
\label{f:angle}
Let $n\in\{1,2,3,\ldots\}$. Then the following hold:
\begin{enumerate}
\item
\label{f:anglepos}
$U+V$ is closed
$\Leftrightarrow$
$c_F < 1$.
\item
\label{f:angleSolmon}
$c_F(U,V) =c_F(V,U)= c_F(U^\perp,V^\perp)$.
\item
\label{f:angle3}
{$c_F=\|P_VP_U-P_{U\cap V}\|=\|P_{V^\perp}P_{U^\perp}-P_{U^\perp\cap V^\perp}\|$}.
\item
\label{f:angleAKW}
{\rm \textbf{(Aronszajn--Kayalar--Weinert)}}
$\|(P_VP_U)^n-P_{U\cap V}\| = c_F^{2n-1}$.
\end{enumerate}
\end{fact}
\begin{proof}
\ref{f:anglepos}:
See \cite[Theorem~13]{Maratea}.
\ref{f:angleSolmon}:
See \cite[Theorem~16]{Maratea}.
\ref{f:angle3}:
{See \cite[Lemma~9.5(7)]{Deutsch} and \ref{f:angleSolmon} above. }
\ref{f:angleAKW}:
See \cite[Theorem~9.31]{Deutsch} (or the original works
\cite{Aron} and \cite{KW}).
\end{proof}
Note that Fact~\ref{f:angle}\ref{f:angleSolmon}\&\ref{f:angle3} yields
\begin{equation}
\label{e:cF0}
c_F = 0
\quad\Leftrightarrow\quad
P_VP_U = P_UP_V = P_{U\cap V}.
\end{equation}
The classical Fact~\ref{f:angle}\ref{f:angleAKW} deals with
\emph{even} powers of alternating projectors.
We complement this result by deriving the counterpart for
\emph{odd} powers.
\begin{lemma}[odd powers of alternating projections]
\label{l:main}
Let $n\in\{1,2,3,\ldots\}$.
Then
\begin{equation}
\label{l:eq}
\|P_U(P_VP_U)^n-P_{U\cap V}\|=c_F^{2n}.
\end{equation}
\end{lemma}
\begin{proof}
If $c_F=0$, then the conclusion is clear from \eqref{e:cF0}.
We thus assume that $c_F>0$.
By \eqref{e:PP},
\begin{subequations}
\begin{align}
\big((P_UP_V)^n-P_{U\cap V}\big)\big(P_VP_U-P_{U\cap V}\big)&=(P_UP_V)^nP_VP_U-(P_UP_V)^nP_{U\cap V}-P_{U\cap V}P_VP_U+P_{U\cap V}^2\\
&=(P_UP_V)^nP_U-P_{U\cap V}-P_{U\cap V}+P_{U\cap V}\\
&=P_U(P_VP_U)^n-P_{U\cap V}.
\end{align}
\end{subequations}
It thus follows from Fact~\ref{f:angle}\ref{f:angleAKW}\&\ref{f:angle3} that
\begin{equation}\label{lux1}
\|P_U(P_VP_U)^n-P_{U\cap V}\|\le \|(P_UP_V)^n-P_{U\cap V}\|\cdot\|P_VP_U-P_{U\cap V}\|=c_F^{2n}.
\end{equation}
Since
$(P_U(P_VP_U)^n-P_{U\cap V})(P_UP_V-P_{U\cap
V})=(P_UP_V)^{n+1}-P_{U\cap V}$,
we obtain from
Fact~\ref{f:angle}\ref{f:angleAKW}\&\ref{f:angle3} that
\begin{subequations}
\begin{align}
c_F^{2n+1}&=\|(P_UP_V)^{n+1}-P_{U\cap V}\|
\leq \|P_U(P_VP_U)^{n}-P_{U\cap V}\|\cdot\|P_UP_V-P_{U\cap
V}\|\\
&=\|P_U(P_VP_U)^n-P_{U\cap V}\|c_F.
\end{align}
\end{subequations}
Since $c_F>0$, we obtain
$c_F^{2n} \leq \|P_U(P_VP_U)^n-P_{U\cap V}\|$.
Combining with \eqref{lux1}, we deduce \eqref{l:eq}.
\end{proof}
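Both the classical even-power formula of Fact~\ref{f:angle}\ref{f:angleAKW} and Lemma~\ref{l:main} can be confirmed numerically for two lines in the plane (cf.\ Section~\ref{s:plane}), where $P_{U\cap V}=0$ and $c_F=\cos(\theta)$; the angle $\theta$ below is an arbitrary sample value.

```python
import numpy as np

theta = 0.7                        # sample Friedrichs angle between the two lines
cF = np.cos(theta)
c, s = np.cos(theta), np.sin(theta)
PU = np.array([[1.0, 0.0], [0.0, 0.0]])
PV = np.array([[c*c, s*c], [s*c, s*s]])
# here the two lines meet only at 0, so P_{U cap V} = 0

for n in range(1, 6):
    even = np.linalg.matrix_power(PV @ PU, n)
    odd = PU @ even
    assert np.isclose(np.linalg.norm(even, 2), cF**(2*n - 1))  # even powers (AKW)
    assert np.isclose(np.linalg.norm(odd, 2), cF**(2*n))       # odd powers (Lemma)
```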
\section{Basic properties of the Douglas--Rachford splitting operator}
\label{s:static}
We recall that the Douglas--Rachford operator is defined by
\begin{empheq}[box=\mybluebox]{equation}
\label{e:T2}
T = T_{V,U} := P_V(2P_U-\ensuremath{\operatorname{Id}}) + \ensuremath{\operatorname{Id}}- P_U.
\end{empheq}
For future reference, we record the following consequence of
Fact~\ref{f:firmiter}.
\begin{corollary}
\label{c:jammery}
Let $x\in X$.
Then $T^nx \to P_{\ensuremath{\operatorname{Fix}} T}x$.
\end{corollary}
It will be very useful to work with reflectors, which we define
next.
\begin{definition}[reflector]
The \emph{reflector} associated with $U$ is
\begin{empheq}[box=\mybluebox]{equation}
\label{e:defR}
R_U := 2P_U - \ensuremath{\operatorname{Id}} = P_U - P_{U^\perp}.
\end{empheq}
\end{definition}
The following simple yet useful result is easily verified.
\begin{proposition}
\label{p:easyR}
The reflector $R_U$ is a surjective isometry with
\begin{equation}
R_U^* = R_U^{-1} = -R_{U^\perp} = R_U.
\end{equation}
\end{proposition}
We now record several reformulations of $T$ and
$T^*$ which we will use repeatedly in the paper without
explicitly mentioning it.
\begin{proposition}
\label{p:otherT}
The following hold:
\begin{enumerate}
\item
\label{p:otherT1}
$T = \tfrac{1}{2}\ensuremath{\operatorname{Id}} + \tfrac{1}{2}R_VR_U =
P_VR_U + \ensuremath{\operatorname{Id}}-P_U
= P_VP_U + P_{V^\perp}P_{U^\perp}$.
\item
\label{p:otherT2}
$T^* = P_UP_V + P_{U^\perp}P_{V^\perp} = T_{U,V} = T^*_{V,U}$.
\item
\label{p:otherT3}
$T_{V,U} = T_{V^\perp,U^\perp}$.
\end{enumerate}
\end{proposition}
\begin{proof}
\ref{p:otherT1}: Expand and use the linearity of $P_U$ and $P_V$.
\ref{p:otherT2}\&\ref{p:otherT3}:
This follows from \ref{p:otherT1}.
\end{proof}
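These reformulations admit a quick numerical confirmation, again for two lines in the plane with an arbitrary sample angle $\theta$:

```python
import numpy as np

theta = 0.6
c, s = np.cos(theta), np.sin(theta)
PU = np.array([[1.0, 0.0], [0.0, 0.0]])
PV = np.array([[c*c, s*c], [s*c, s*s]])
I = np.eye(2)
RU, RV = 2*PU - I, 2*PV - I        # reflectors
T = PV @ RU + I - PU               # Douglas--Rachford operator

assert np.allclose(T, 0.5*(I + RV @ RU))
assert np.allclose(T, PV @ PU + (I - PV) @ (I - PU))
assert np.allclose(T.T, PU @ PV + (I - PU) @ (I - PV))   # T* = T_{U,V}
```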
The next result highlights the importance of the reflectors
when traversing between $T$ and $T^*$.
Item~\ref{p:union3+} was observed previously in
\cite[Remark~4.1]{BT13jota}.
\begin{proposition}
\label{p:union}
The following hold:
\begin{enumerate}
\item
\label{p:union1}
$R_UT^*=TR_U = T^*R_V=R_VT=P_U+P_V-\ensuremath{\operatorname{Id}}$.
\item
\label{p:union2}
$T^*(R_VR_U)=(R_VR_U)T^*=T$ and
$T(R_UR_V)=(R_UR_V)T=T^*$.
\item
\label{p:union3}
$TT^*=T^*T$, i.e., $T$ is normal.
\item
\label{p:union3+}
$2TT^* = T+T^*$.
\item
\label{p:union3++}
$TT^*$ is firmly nonexpansive and self-adjoint.
\item
\label{p:union4}
$TT^* =
P_VP_UP_V + P_{V^\perp}P_{U^\perp}P_{V^\perp}
=
P_VP_U+P_UP_V-P_U-P_V+\ensuremath{\operatorname{Id}}
=
P_UP_VP_U + P_{U^\perp}P_{V^\perp}P_{U^\perp}$.
\end{enumerate}
\end{proposition}
\begin{proof}
\ref{p:union1}:
Indeed,
using Proposition~\ref{p:otherT}\ref{p:otherT1}\&\ref{p:otherT2},
we see that
\begin{subequations}
\begin{align}
TR_U &=(P_VP_U+P_{V^\perp}P_{U^\perp})(P_U-P_{U^\perp})
= P_VP_U-P_{V^\perp}P_{U^\perp}
= P_VP_U -(\ensuremath{\operatorname{Id}}-P_V)(\ensuremath{\operatorname{Id}}-P_U)\\
&= P_U+P_V-\ensuremath{\operatorname{Id}}\\
&= P_UP_V -(\ensuremath{\operatorname{Id}}-P_U)(\ensuremath{\operatorname{Id}}-P_V)
= P_UP_V - P_{U^\perp}P_{V^\perp}
= (P_UP_V+P_{U^\perp}P_{V^\perp})(P_V-P_{V^\perp})\\
&= T^*R_V
\end{align}
\end{subequations}
is \emph{self-adjoint}.
Hence
$R_UT^* = (TR_U)^* = TR_U = T^*R_V = (R_VT)^* = R_VT$
by Proposition~\ref{p:easyR}.
\ref{p:union2}: Clear from \ref{p:union1}.
\ref{p:union3}: Using \ref{p:union2},
we obtain
$TT^* = T^*(R_VR_U)(R_UR_V)T = T^*T$.
\ref{p:union3+}:
$4TT^* = (\ensuremath{\operatorname{Id}}+R_VR_U)(\ensuremath{\operatorname{Id}}+R_UR_V) = 2\ensuremath{\operatorname{Id}}+R_VR_U+R_UR_V
=2( (\ensuremath{\operatorname{Id}}+R_VR_U)/2 + (\ensuremath{\operatorname{Id}}+R_UR_V)/2) = 2(T+T^*)$.
\ref{p:union3++}:
Since $T$ and $T^*$ are firmly nonexpansive, so is their convex combination
$(T+T^*)/2$, which equals $TT^*$ by \ref{p:union3+}.
It is clear that $TT^*$ is self-adjoint.
\ref{p:union4}:
It follows
from Proposition~\ref{p:otherT}\ref{p:otherT1}\&\ref{p:otherT2} that
$TT^* =
(P_VP_U+P_{V^\perp}P_{U^\perp})(P_UP_V+P_{U^\perp}P_{V^\perp}) = P_VP_UP_V
+ P_{V^\perp}P_{U^\perp}P_{V^\perp}$, which yields the first equality.
Replacing $P_{U^\perp}$ and $P_{V^\perp}$ by
$\ensuremath{\operatorname{Id}}-P_U$ and $\ensuremath{\operatorname{Id}}-P_V$, respectively, followed by expanding and
simplifying results in the second equality. The last equality is
proved analogously.
\end{proof}
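The normality of $T$ and the identity $2TT^*=T+T^*$ are easy to observe numerically; the sketch below uses two lines in the plane with a sample angle $\theta$.

```python
import numpy as np

theta = 0.9
c, s = np.cos(theta), np.sin(theta)
PU = np.array([[1.0, 0.0], [0.0, 0.0]])
PV = np.array([[c*c, s*c], [s*c, s*s]])
I = np.eye(2)
T = PV @ (2*PU - I) + I - PU
TT = T @ T.T

assert np.allclose(TT, T.T @ T)                          # T is normal
assert np.allclose(2*TT, T + T.T)                        # 2TT* = T + T*
assert np.allclose(TT, PV @ PU + PU @ PV - PU - PV + I)  # item (vi)
```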
Parts of our next result were also obtained in \cite{HL} and
\cite{HLN} when $X$ is finite-dimensional.
\begin{proposition}
\label{p:Fix}
Let $\ensuremath{{n\in{\mathbb N}}}$.
Then the following hold:
\begin{enumerate}
\item
\label{p:Fix1}
$\ensuremath{\operatorname{Fix}} T = \ensuremath{\operatorname{Fix}} T^* = \ensuremath{\operatorname{Fix}} T^*T = (U\cap V)\oplus(U^\perp\cap V^\perp)$.
\item
\label{p:Fix1+}
$\ensuremath{\operatorname{Fix}} T = U\cap V$
$\Leftrightarrow$
$\overline{U+V}=X$.
\item
\label{p:Fix2}
$P_{\ensuremath{\operatorname{Fix}} T} = P_{U\cap V}+ P_{U^\perp\cap V^\perp}$.
\item \label{p:Fix4}
$P_{\ensuremath{\operatorname{Fix}} T}T=TP_{\ensuremath{\operatorname{Fix}} T}=P_{\ensuremath{\operatorname{Fix}} T}$.
\item
\label{p:Fix3}
$P_UP_{\ensuremath{\operatorname{Fix}} T} = P_VP_{\ensuremath{\operatorname{Fix}} T}= P_{U\cap V}P_{\ensuremath{\operatorname{Fix}} T} = P_{U\cap V}
= P_{U\cap V}P_{\ensuremath{\operatorname{Fix}} T}T^n = P_{U\cap V}T^n$.
\end{enumerate}
\end{proposition}
\begin{proof}
\ref{p:Fix1}:
Set $A = N_U$ and $B=N_V$. Then $(A+B)^{-1}(0) = U\cap V$.
Combining \cite[Example~2.7 and Corollary~5.5(iii)]{BBHM} yields
$\ensuremath{\operatorname{Fix}} T = (U\cap V)\oplus(U^\perp\cap V^\perp)$.
By \cite[Lemma~2.1]{BDHP}, we have $\ensuremath{\operatorname{Fix}} T = \ensuremath{\operatorname{Fix}} T^*$.
Since $T$ and $T^*$ are firmly nonexpansive, and $0\in \ensuremath{\operatorname{Fix}} T\cap \ensuremath{\operatorname{Fix}}
T^*$, we apply
\cite[Corollary~4.3 and Corollary~4.37]{BC2011}
to deduce that
$\ensuremath{\operatorname{Fix}} T^*T = \ensuremath{\operatorname{Fix}} T \cap \ensuremath{\operatorname{Fix}} T^*$.
\ref{p:Fix1+}:
Using \ref{p:Fix1}, we obtain $\ensuremath{\operatorname{Fix}} T=U\cap V$
$\Leftrightarrow$
$\ensuremath{{{U^\perp}}}\cap\ensuremath{{{V^\perp}}}=\{0\}$
$\Leftrightarrow$
$\overline{U+V} = \overline{U^{\perp\perp} + V^{\perp\perp}} =
(\ensuremath{{{U^\perp}}}\cap\ensuremath{{{V^\perp}}})^\perp =\{0\}^\perp = X$.
\ref{p:Fix2}:
This follows from \ref{p:Fix1}.
\ref{p:Fix4}: Clearly, $TP_{\ensuremath{\operatorname{Fix}} T}=P_{\ensuremath{\operatorname{Fix}} T}$. Furthermore,
$P_{\ensuremath{\operatorname{Fix}} T}T= TP_{\ensuremath{\operatorname{Fix}} T}$ by \cite[Lemma~3.12]{BDHP}.
\ref{p:Fix3}:
First, \ref{p:Fix2} and \eqref{e:PP} imply
$P_UP_{\ensuremath{\operatorname{Fix}} T} = P_VP_{\ensuremath{\operatorname{Fix}} T}= P_{U\cap V}P_{\ensuremath{\operatorname{Fix}} T} = P_{U\cap V}$.
This and \ref{p:Fix4} give the remaining equalities.
\end{proof}
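Proposition~\ref{p:Fix} can be illustrated with coordinate planes in $\ensuremath{\mathbb R}^4$ (our own toy choice): for $U=\operatorname{span}\{e_1,e_2\}$ and $V=\operatorname{span}\{e_1,e_3\}$ we have $U\cap V=\ensuremath{\mathbb R} e_1$ and $U^\perp\cap V^\perp=\ensuremath{\mathbb R} e_4$; the sketch checks that both directions are fixed by $T$ and that $P_UP_{\ensuremath{\operatorname{Fix}} T}=P_{U\cap V}$ as in Proposition~\ref{p:Fix}\ref{p:Fix3}.

```python
import numpy as np

def proj(cols):
    # Orthogonal projector onto the span of the given vectors
    Q, _ = np.linalg.qr(np.array(cols, float).T)
    return Q @ Q.T

e = np.eye(4)
PU = proj([e[0], e[1]])            # U = span{e1, e2}
PV = proj([e[0], e[2]])            # V = span{e1, e3}
I = np.eye(4)
T = PV @ (2*PU - I) + I - PU

PFix = proj([e[0], e[3]])          # projector onto (U cap V) + (Uperp cap Vperp)
assert np.allclose(T @ e[0], e[0])             # e1 in U cap V is fixed
assert np.allclose(T @ e[3], e[3])             # e4 in Uperp cap Vperp is fixed
assert np.allclose(T @ PFix, PFix)
assert np.allclose(PU @ PFix, proj([e[0]]))    # P_U P_{Fix T} = P_{U cap V}
```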
\section{Main result}
\label{s:main}
We now are ready for our main results concerning the dynamical
behaviour of the Douglas--Rachford iteration.
\begin{theorem}[powers of $T$]
\label{t:main}
Let $n\in\{1,2,3,\ldots\}$, and let $x\in X$. Then
\begin{subequations}
\begin{align}
\label{mainline}
\|T^n-P_{\ensuremath{\operatorname{Fix}} T}\|&= c_F^n, \\
\|(TT^*)^n-P_{\ensuremath{\operatorname{Fix}} T}\| &= c_F^{2n},
\label{mainline3}
\end{align}
\end{subequations}
and
\begin{equation}
\label{mainline2}
\|T^nx-P_{\ensuremath{\operatorname{Fix}} T}x\|\le c_F^n\|x-P_{\ensuremath{\operatorname{Fix}} T}x\|\le c_F^n\|x\|.
\end{equation}
\end{theorem}
\begin{proof}
Set
\begin{equation}
c := \|T-P_{\ensuremath{\operatorname{Fix}} T}\| = \|TP_{(\ensuremath{\operatorname{Fix}} T)^\perp}\|,
\end{equation}
and observe that the second equality is justified
since $P_{\ensuremath{\operatorname{Fix}} T} = TP_{\ensuremath{\operatorname{Fix}} T}$.
Since $T$ is (firmly) nonexpansive and normal
(see Proposition~\ref{p:union}\ref{p:union3}),
it follows from \cite[Lemma~3.15(i)]{BDHP} that
\begin{equation}\label{e:c(T)}
\|T^n-P_{\ensuremath{\operatorname{Fix}} T}\|=c^n.
\end{equation}
Proposition~\ref{p:otherT}\ref{p:otherT1},
Proposition~\ref{p:Fix}\ref{p:Fix2},
and \eqref{e:PP} imply
\begin{subequations}
\begin{align}
(T-P_{\ensuremath{\operatorname{Fix}} T})x&=\big(P_VP_U+P_{V^\perp}P_{U^\perp}\big)x-\big(P_{U\cap V}+P_{U^\perp\cap V^\perp}\big)x\\
&=\underbrace{\big(P_VP_U-P_{U\cap V}\big)x}_{\in
V}+\underbrace{\big(P_{V^\perp}P_{U^\perp}-P_{U^\perp\cap
V^\perp}\big)x}_{\in V^\perp}\label{24b}\\
&=\big(P_VP_UP_Ux-P_{U\cap V}P_Ux\big)+\big(P_{V^\perp}P_{U^\perp}P_{U^\perp}x-P_{U^\perp\cap V^\perp}P_{U^\perp}x\big)\\
&=\underbrace{\big(P_VP_U-P_{U\cap V}\big)P_Ux}_{\in V} +
\underbrace{\big(P_{V^\perp}P_{U^\perp}-P_{U^\perp\cap
V^\perp}\big)P_{U^\perp}x}_{\in V^\perp}.
\label{e:24d}
\end{align}
\end{subequations}
Hence, using \eqref{e:24d} and Fact~\ref{f:angle}\ref{f:angle3}, we obtain
\begin{subequations}
\begin{align}
\|(T-P_{\ensuremath{\operatorname{Fix}} T})x\|^2&=\big\|\big(P_VP_U-P_{U\cap V}\big)P_Ux\big\|^2+\big\|\big(P_{V^\perp}P_{U^\perp}-P_{U^\perp\cap V^\perp}\big)P_{U^\perp}x\big\|^2\\
&\le\big\|P_VP_U-P_{U\cap V}\big\|^2 \|P_Ux\|^2+\big\|P_{V^\perp}P_{U^\perp}-P_{U^\perp\cap V^\perp}\big\|^2\|P_{U^\perp}x\|^2\\
&= c_F^2 \|P_Ux\|^2+c_F^2\|P_{U^\perp}x\|^2\\
&=c_F^2\|x\|^2.
\end{align}
\end{subequations}
We deduce that
\begin{equation}
\label{e:1206a}
c=\|T-P_{\ensuremath{\operatorname{Fix}} T}\|\leq c_F.
\end{equation}
Furthermore, \eqref{24b} implies
\begin{subequations}
\begin{align}
\|(T-P_{\ensuremath{\operatorname{Fix}} T})x\|^2&=
\big\|\big(P_VP_U-P_{U\cap V}\big)x\big\|^2+\big\|\big(P_{V^\perp}P_{U^\perp}-P_{U^\perp\cap V^\perp}\big)x\big\|^2\\
&\ge \big\|\big(P_VP_U-P_{U\cap V}\big)x\big\|^2.
\end{align}
\end{subequations}
This and Fact~\ref{f:angle}\ref{f:angle3} yield
\begin{equation}
\label{e:1206b}
c=\|T-P_{\ensuremath{\operatorname{Fix}} T}\|\geq \|P_VP_U-P_{U\cap V}\|=c_F.
\end{equation}
Combining \eqref{e:1206a} and \eqref{e:1206b},
we obtain $c=c_F$.
Consequently, \eqref{e:c(T)} yields \eqref{mainline}
while \eqref{mainline3} follows from \cite[Lemma~3.15(2)]{BDHP}.
Finally, \cite[Lemma~3.14(1)\&(3)]{BDHP} results in
$\|T^nx-P_{\ensuremath{\operatorname{Fix}} T}x\|\le c^n\|x-P_{\ensuremath{\operatorname{Fix}} T}x\|=c_F^n\|x-P_{\ensuremath{\operatorname{Fix}} T}x\|=c_F^n\|P_{(\ensuremath{\operatorname{Fix}} T)^\perp}x\|\le c_F^n\|x\|$.
\end{proof}
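For two lines in the plane (where $\ensuremath{\operatorname{Fix}} T=\{0\}$ and $c_F=\cos(\theta)$), the norm identity \eqref{mainline} can be observed directly; the angle below is a sample value of ours.

```python
import numpy as np

theta = 0.5
cF = np.cos(theta)
c, s = np.cos(theta), np.sin(theta)
PU = np.array([[1.0, 0.0], [0.0, 0.0]])
PV = np.array([[c*c, s*c], [s*c, s*s]])
I = np.eye(2)
T = PV @ (2*PU - I) + I - PU       # here Fix T = {0}, so P_{Fix T} = 0

for n in range(1, 10):
    assert np.isclose(np.linalg.norm(np.linalg.matrix_power(T, n), 2), cF**n)

# in this example the pointwise bound holds with equality for every x
x = np.array([1.0, 0.0])
assert np.isclose(np.linalg.norm(np.linalg.matrix_power(T, 5) @ x), cF**5)
```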
The following result yields further insights in various powers of $T$ and
$T^*$.
\begin{proposition}
\label{p:laura}
Let $\ensuremath{{n\in{\mathbb N}}}$. Then the following hold:
\begin{enumerate}
\item
\label{p:laura1}
$(TT^*)^n = (T^*T)^n = (P_UP_VP_U)^n + (P_{U^\perp}P_{V^\perp}P_{U^\perp})^n
= (P_VP_UP_V)^n + (P_{V^\perp}P_{U^\perp}P_{V^\perp})^n$ if $n\geq 1$.
\item
\label{p:laura2}
$P_U(TT^*)^n=(P_UP_V)^nP_U = (TT^*)^nP_U$
and $P_V(TT^*)^n=(P_VP_U)^nP_V=(TT^*)^nP_V$.
\item
\label{p:laura4}
$T^{2n} = (TT^*)^n(R_VR_U)^n$.
\item
\label{p:laura5}
$T^{2n+1} = (TT^*)^nT(R_VR_U)^n = (TT^*)^nT^*(R_VR_U)^{n+1}$.
\item
\label{p:laura6}
$T^{2n}=
\big((P_UP_V)^nP_U+(P_{U^\perp}P_{V^\perp})^nP_{U^\perp}\big)(R_VR_U)^{n}
=\big((P_VP_U)^nP_V+(P_{V^\perp}P_{U^\perp})^nP_{V^\perp}\big)(R_VR_U)^{n}$.
\item
\label{p:laura7}
$T^{2n+1}=
\big((P_UP_V)^{n+1}
+ (P_{U^\perp}P_{V^\perp})^{n+1}\big)(R_VR_U)^{n+1}
= \big((P_VP_U)^{n+1} +
(P_{V^\perp}P_{U^\perp})^{n+1}\big)(R_VR_U)^{n}$.
\end{enumerate}
\end{proposition}
\begin{proof}
\ref{p:laura1}:
Proposition~\ref{p:union}\ref{p:union3}\&\ref{p:union4} yield
\begin{equation}
(TT^*)^n = (P_UP_VP_U + P_{U^\perp}P_{V^\perp}P_{U^\perp})^n =
(P_UP_VP_U)^n + (P_{U^\perp}P_{V^\perp}P_{U^\perp})^n;
\end{equation}
the last equality follows similarly.
\ref{p:laura2}:
By \ref{p:laura1},
$P_U(TT^*)^n = (TT^*)^nP_U =(P_UP_VP_U)^n = (P_UP_V)^nP_U$.
The proof of the remaining equalities is similar.
\ref{p:laura4}:
Since $T = T^*R_VR_U = R_VR_UT^*$
(see Proposition~\ref{p:union}\ref{p:union2}),
we have $T^n = (T^*)^n(R_VR_U)^n$.
Thus
$T^{2n} = T^n(T^*)^n(R_VR_U)^n$, and therefore
$T^{2n} = (TT^*)^n(R_VR_U)^n$ using
Proposition~\ref{p:union}\ref{p:union3}.
\ref{p:laura5}:
Using \ref{p:laura4} and
Proposition~\ref{p:union}\ref{p:union3}\&\ref{p:union2},
we have
$T^{2n+1} = TT^{2n} = T(TT^*)^n(R_VR_U)^n=
(TT^*)^nT(R_VR_U)^n = (TT^*)^{n}(T^*R_VR_U)(R_VR_U)^n
= (TT^*)^{n}T^*(R_VR_U)^{n+1}$.
\ref{p:laura6}\&\ref{p:laura7}:
Using \ref{p:laura4}, \ref{p:laura1}, \ref{p:laura5},
and Proposition~\ref{p:otherT}\ref{p:otherT2},
we have
\begin{equation}
T^{2n} =
\big((P_UP_VP_U)^n+(P_{U^\perp}P_{V^\perp}P_{U^\perp})^n\big)(R_VR_U)^{n}
= \big((P_UP_V)^nP_U+(P_{U^\perp}P_{V^\perp})^nP_{U^\perp}\big)(R_VR_U)^{n}
\end{equation}
and
\begin{subequations}
\begin{align}
T^{2n+1} &=
\big((P_UP_VP_U)^n+(P_{U^\perp}P_{V^\perp}P_{U^\perp})^n\big)
(P_UP_V+P_{U^\perp}P_{V^\perp})(R_VR_U)^{n+1}\\
&=\big((P_UP_V)^{n+1}
+ (P_{U^\perp}P_{V^\perp})^{n+1}\big)(R_VR_U)^{n+1}.
\end{align}
\end{subequations}
Similarly, using
\ref{p:laura4}, \ref{p:laura1}, \ref{p:laura5},
and Proposition~\ref{p:otherT}\ref{p:otherT2},
we have
\begin{subequations}
\begin{align}
T^{2n}
&= \big((P_VP_UP_V)^n+(P_{V^\perp}P_{U^\perp}P_{V^\perp})^n\big)(R_VR_U)^{n}\\
&= \big((P_VP_U)^nP_V+(P_{V^\perp}P_{U^\perp})^nP_{V^\perp}\big)(R_VR_U)^{n}
\end{align}
\end{subequations}
and
\begin{subequations}
\begin{align}
T^{2n+1} &=
\big((P_VP_UP_V)^n+(P_{V^\perp}P_{U^\perp}P_{V^\perp})^n\big)(P_VP_U+P_{V^\perp}P_{U^\perp})(R_VR_U)^{n}\\
&= \big((P_VP_U)^{n+1}
+ (P_{V^\perp}P_{U^\perp})^{n+1}\big)(R_VR_U)^{n}.
\end{align}
\end{subequations}
The proof is complete.
\end{proof}
We are now ready for our main result.
Note that item~\ref{t:main3} is the counterpart of
Fact~\ref{f:angle}\ref{f:angleAKW} for the Douglas--Rachford algorithm.
\begin{theorem}[shadow powers of $T$]
\label{t:main2}
Let $\ensuremath{{n\in{\mathbb N}}}$, and let $x\in X$.
Then the following hold:
\begin{enumerate}
\item
\label{t:main3}
$\|P_UT^n - P_{U\cap V}\|=\|P_VT^n - P_{U\cap V}\|= c_F^n$.
\item
\label{t:main3+}
$\max\big\{\|P_UT^nx-P_{U\cap V}x\|,\|P_VT^nx-P_{U\cap V}x\|\big\}\leq
c_F^n\|x\|$.
\item
\label{t:main4}
$\|P_VT^{n+1}x - P_{U\cap V}x\| \leq
c_F\|P_UT^nx-P_{U\cap V}x\|\le c_F\|T^nx-P_{\ensuremath{\operatorname{Fix}} T}x\|\le
c_F^{n+1}\|x\|$.
\end{enumerate}
\end{theorem}
\begin{proof}
\ref{t:main3}:
Note that $P_{U\cap V}R_VR_U =
P_{U\cap V}(P_V-P_{V^\perp})R_U
=P_{U\cap V}R_U = P_{U\cap V}(P_U-P_{U^\perp})
= P_{U\cap V}$.
It follows that $P_{U\cap V}=P_{U\cap V}(R_VR_U)^n = P_{U\cap
V}(R_VR_U)^{n+1}$.
Hence, using Proposition~\ref{p:laura}\ref{p:laura6}, we have
$P_UT^{2n}-P_{U\cap V} = (P_UP_V)^nP_U(R_VR_U)^n - P_{U\cap V}(R_VR_U)^n
= ((P_UP_V)^nP_U-P_{U\cap V})(R_VR_U)^n$
and thus
$\|P_UT^{2n}-P_{U\cap V}\| = c_F^{2n}$ by Lemma~\ref{l:main}.
It follows likewise from Proposition~\ref{p:laura}\ref{p:laura7} and
Fact~\ref{f:angle}\ref{f:angleAKW} that
$\|P_UT^{2n+1}-P_{U\cap V}\|=\|((P_UP_V)^{n+1}-P_{U\cap
V})(R_VR_U)^{n+1}\|=\|(P_UP_V)^{n+1}-P_{U\cap V}\|=c_F^{2n+1}$.
Thus, we have $\|P_UT^{n}-P_{U\cap V}\|=c_F^n$.
The proof of $\|P_VT^{n}-P_{U\cap V}\|=c_F^n$ is analogous.
\ref{t:main3+}: Clear from \ref{t:main3}.
\ref{t:main4}:
Using Proposition~\ref{p:otherT}\ref{p:otherT1} and
Proposition~\ref{p:Fix}\ref{p:Fix3},
we have
\begin{subequations}
\begin{align}
P_VT^{n+1}-P_{U\cap V} &=
P_V(P_VP_U+P_{V^\perp}P_{U^\perp})T^n-P_{U\cap V}P_{\ensuremath{\operatorname{Fix}} T}
= P_VP_UT^n - P_{U\cap V}P_{\ensuremath{\operatorname{Fix}} T}T^n\\
&= P_VP_UT^n - P_{U\cap V} + P_{U\cap V} - P_{U\cap V}T^n
=(P_VP_U-P_{U\cap V})(P_UT^n-P_{U\cap V}).
\end{align}
\end{subequations}
Combining this with Fact~\ref{f:angle}\ref{f:angle3} and
\eqref{mainline}, we get
$\|P_VT^{n+1}x-P_{U\cap V}x\| \leq c_F\|P_UT^{n}x-P_{U\cap V}x\|
=c_F\|P_U(T^{n}x-P_{\ensuremath{\operatorname{Fix}} T}x)\|
\leq c_F\|T^{n}x-P_{\ensuremath{\operatorname{Fix}} T}x\| \leq c_F^{n+1}\|x\|$.
\end{proof}
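Item~\ref{t:main3} can likewise be checked numerically for two lines in the plane, where $P_{U\cap V}=0$ and $c_F=\cos(\theta)$; the sample angle is our own choice.

```python
import numpy as np

theta = 0.8
cF = np.cos(theta)
c, s = np.cos(theta), np.sin(theta)
PU = np.array([[1.0, 0.0], [0.0, 0.0]])
PV = np.array([[c*c, s*c], [s*c, s*s]])
I = np.eye(2)
T = PV @ (2*PU - I) + I - PU       # here U cap V = {0}, so P_{U cap V} = 0

for n in range(1, 8):
    Tn = np.linalg.matrix_power(T, n)
    assert np.isclose(np.linalg.norm(PU @ Tn, 2), cF**n)   # shadow on U
    assert np.isclose(np.linalg.norm(PV @ Tn, 2), cF**n)   # shadow on V
```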
\begin{corollary}[linear convergence]
\label{c:main2}
We have
$T^nx\to P_{(U\cap V)+(U^\perp \cap V^\perp)}x$,
$P_UT^nx \to P_{U\cap V}x$, and
$P_VT^nx \to P_{U\cap V}x$.
If $U+V$ is closed,
then convergence of these sequences is linear with rate $c_F<1$.
\end{corollary}
\begin{proof}
Corollary~\ref{c:jammery},
Proposition~\ref{p:Fix}\ref{p:Fix2}\&\ref{p:Fix3} and
\eqref{e:PP} imply
$P_UT^nx\to P_UP_{\ensuremath{\operatorname{Fix}} T}x= P_{U\cap V}x$ and analogously
$P_VT^nx\to P_{U\cap V}x$.
Recall from
Fact~\ref{f:angle}\ref{f:anglepos}
that $U+V$ is closed if and only if $c_F<1$.
The conclusion is thus clear from Theorem~\ref{t:main2}\ref{t:main4}.
\end{proof}
A translation argument gives the following result (see also
\cite[Theorem~3.17]{BCL04} for an earlier related result).
\begin{corollary}[affine subspaces]
Suppose that $U$ and $V$ are closed \emph{affine} subspaces
of $X$ such that $U\cap V\neq\varnothing$, and let $x\in X$.
Then
\begin{equation}
T^nx \to P_{\ensuremath{\operatorname{Fix}} T}x, \quad
P_UT^nx \to P_{U\cap V}x, \quad\text{and}\quad P_VT^nx \to P_{U\cap V}x.
\end{equation}
If $(U-U)+(V-V)$ is closed,
then the convergence is linear with rate $c_F(U-U,V-V)<1$.
\end{corollary}
\section{Two lines in the Euclidean plane}
\label{s:plane}
We present some geometric results concerning the lines in the
plane which will not only be useful later but which also illustrate
the results of the previous sections.
In this section, we assume that $X=\ensuremath{\mathbb R}^2$, and we set
\begin{equation}
e_0 := (1,0),\quad
e_{\pi/2} := (0,1), \quad
\text{and}\quad
\big(\forall \theta\in[0,\pi/2]\big)\;\;
e_\theta := \cos(\theta)e_0 + \sin(\theta)e_{\pi/2}.
\end{equation}
Define the (counter-clockwise) rotator by
\begin{equation}
\big(\forall \theta\in\ensuremath{\mathbb R}_+\big)\quad
R_\theta := \begin{pmatrix}
\cos(\theta) & -\sin(\theta)\\
\sin(\theta) & \cos(\theta)
\end{pmatrix},
\end{equation}
and note that $R_\theta^{-1} = R_\theta^*$.
Now let $\theta\in\left]0,\pi/2\right]$,
and suppose that
\begin{equation}
U=\ensuremath{\mathbb R}\cdot e_0
\quad\text{and}\quad
V=\ensuremath{\mathbb R}\cdot e_\theta = R_\theta(U).
\end{equation}
Then
\begin{equation}
U\cap V = \{0\}
\quad\text{and}\quad
c_F(U,V) = \cos(\theta).
\end{equation}
By, e.g., \cite[Proposition~28.2(ii)]{BC2011},
$P_V = R_\theta P_UR_\theta^*$.
In terms of matrices, we thus have
\begin{equation}
P_U =
\begin{pmatrix}
1 & 0 \\
0 & 0
\end{pmatrix}
\quad\text{and}\quad
P_V =
\begin{pmatrix}
\cos^2(\theta) & \sin(\theta)\cos(\theta)\\
\sin(\theta)\cos(\theta) & \sin^2(\theta)
\end{pmatrix}.
\end{equation}
Consequently,
the corresponding Douglas--Rachford splitting operator
is
\begin{subequations}
\begin{align}
T &= P_V(2P_U-\ensuremath{\operatorname{Id}}) + \ensuremath{\operatorname{Id}} - P_U =
\begin{pmatrix}
\cos^2(\theta) & -\sin(\theta)\cos(\theta)\\
\sin(\theta)\cos(\theta) & 1-\sin^2(\theta)
\end{pmatrix}\\
&=
\cos(\theta)\begin{pmatrix}
\cos(\theta) & -\sin(\theta)\\
\sin(\theta)& \cos(\theta)
\end{pmatrix}
=\cos(\theta) R_\theta.
\end{align}
\end{subequations}
Thus,
$\ensuremath{\operatorname{Fix}} T = \{0\} = P_U(\ensuremath{\operatorname{Fix}} T)$,
\begin{equation}
(\forall\ensuremath{{n\in{\mathbb N}}})\quad
T^n = \cos^n(\theta)R_{n\theta},
\quad
P_UT^n = \cos^n(\theta)\begin{pmatrix}
\cos(n\theta) & -\sin(n\theta)\\
0 & 0
\end{pmatrix},
\end{equation}
and
\begin{equation}\label{e:16}
(\forall x\in X)\quad
\|T^n x\| = \cos^n(\theta)\|x\|,
\;\;
\|P_UT^nx\| = \cos^n(\theta)|\cos(n\theta)x_1-\sin(n\theta)x_2|.
\end{equation}
Furthermore,
\begin{equation}
(\forall n\geq 1)\quad
(P_VP_U)^n =
\cos^{2n-1}(\theta)\begin{pmatrix}
\cos(\theta) & 0 \\
\sin(\theta) & 0
\end{pmatrix}
\end{equation}
and thus
\begin{equation}\label{e:18}
(\forall x\in X)\quad
\|(P_VP_U)^nx\| = \cos^{2n-1}(\theta)|x_1|\leq
\cos^{2n-1}(\theta)\|x\|.
\end{equation}
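The closed-form expressions above are straightforward to check numerically. The following sketch (assuming NumPy is available; the angle and the starting point are arbitrary sample choices) verifies that $T=\cos(\theta)R_\theta$ and tests \eqref{e:16} and \eqref{e:18}.

```python
import numpy as np

theta = np.pi / 5
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s], [s, c]])           # rotator R_theta
P_U = np.array([[1.0, 0.0], [0.0, 0.0]])  # projector onto U = R.e_0
P_V = R @ P_U @ R.T                       # P_V = R_theta P_U R_theta^*
I = np.eye(2)
T = P_V @ (2 * P_U - I) + I - P_U         # Douglas-Rachford operator

# T equals cos(theta) * R_theta
assert np.allclose(T, c * R)

x = np.array([0.7, -1.3])
n = 25
Tn_x = np.linalg.matrix_power(T, n) @ x
# ||T^n x|| = cos^n(theta) ||x||
assert np.isclose(np.linalg.norm(Tn_x), c**n * np.linalg.norm(x))

# ||(P_V P_U)^n x|| = cos^(2n-1)(theta) |x_1|
PVPU_n_x = np.linalg.matrix_power(P_V @ P_U, n) @ x
assert np.isclose(np.linalg.norm(PVPU_n_x), c**(2 * n - 1) * abs(x[0]))
```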
\section{An example without linear rate of convergence}
\label{s:anex}
In this section, let us assume that our underlying Hilbert space is
$\ell^2(\ensuremath{\mathbb N}) = \ensuremath{\mathbb R}^2\oplus \ensuremath{\mathbb R}^2 \oplus \cdots$.
It will be more suggestive to use boldface letters
for vectors lying in, and operators acting on, $\ell^2(\ensuremath{\mathbb N})$.
Thus,
\begin{equation}
\ensuremath{{\mathbf{X}}} = \ell^2(\ensuremath{\mathbb N}),
\end{equation}
and we write $\ensuremath{\mathbf{x}} = (x_n)_\ensuremath{{n\in{\mathbb N}}} =
((x_0,x_1),(x_2,x_3),\ldots)$ for a generic vector in $\ensuremath{{\mathbf{X}}}$.
Suppose that $(\theta_n)_\ensuremath{{n\in{\mathbb N}}}$ is a sequence of angles in
$\left]0,\pi/2\right[$ with $\theta_n\to 0^+$.
We set $(\forall\ensuremath{{n\in{\mathbb N}}})$ $c_n := \cos(\theta_n)\to 1^-$.
We will use notation and results
from Section~\ref{s:plane}.
We assume that
\begin{equation}\label{UU}
\ensuremath{{\mathbf{U}}} = \ensuremath{\mathbb R}\cdot e_0\times \ensuremath{\mathbb R}\cdot e_0 \times \cdots \subseteq \ensuremath{{\mathbf{X}}}
\end{equation}
and that
\begin{equation}\label{VV}
\ensuremath{{\mathbf{V}}} = \ensuremath{\mathbb R}\cdot e_{\theta_0}\times \ensuremath{\mathbb R}\cdot
e_{\theta_1}\times\cdots \subseteq \ensuremath{{\mathbf{X}}}.
\end{equation}
Then
\begin{equation}
\ensuremath{{\mathbf{U}}}\cap\ensuremath{{\mathbf{V}}} = \{\boldsymbol{0}\}
\quad\text{and}\quad
c_F(\ensuremath{{\mathbf{U}}},\ensuremath{{\mathbf{V}}}) = \sup_\ensuremath{{n\in{\mathbb N}}} c_F(\ensuremath{\mathbb R}\cdot e_0,\ensuremath{\mathbb R}\cdot e_{\theta_n}) = \sup_\ensuremath{{n\in{\mathbb N}}}
c_n = 1.
\end{equation}
The Douglas--Rachford splitting operator is
\begin{equation}
\ensuremath{{\mathbf{T}}} = c_0R_{\theta_0}\oplus c_1R_{\theta_1}\oplus\cdots.
\end{equation}
Now let $\ensuremath{\mathbf{x}}=(x_n)_\ensuremath{{n\in{\mathbb N}}} \in\ensuremath{{\mathbf{X}}}$ and $\gamma\in\ensuremath{\left]0,1\right[}$.
Assume further that $\menge{\ensuremath{{n\in{\mathbb N}}}}{x_n\neq 0}$ is infinite.
Then there exists $N\in\ensuremath{\mathbb N}$ such that
$(x_{2N},x_{2N+1})\in\ensuremath{\mathbb R}^2\smallsetminus\{(0,0)\}$ and
$c_N>\gamma$.
Hence
\begin{equation}
\gamma^{-n}\|\ensuremath{{\mathbf{T}}}^n\ensuremath{\mathbf{x}}\| \geq
\gamma^{-n}c_N^n\|(x_{2N},x_{2N+1})\|\to\ensuremath{+\infty};
\end{equation}
consequently,
\begin{equation}
\text{$\ensuremath{{\mathbf{T}}}^n\ensuremath{\mathbf{x}}\to \boldsymbol{0}$,\, but not linearly with
rate $\gamma$.}
\end{equation}
Let us now assume in addition that
$\theta_0=\pi/3$ and $(\forall n\geq 1)$ $\theta_n=\pi/(4n)$ and
$x_n = 1/(n+1)$.
Then there exists $\delta>1$ and
$N\geq 1$ such that $(\forall n\geq N)$ $\gamma^{-1}c_n\geq
\delta$. Hence, for every $n\geq N$, we have
\begin{equation}
\gamma^{-n}\|P_{\ensuremath{{\mathbf{U}}}}\ensuremath{{\mathbf{T}}}^n\ensuremath{\mathbf{x}}\|
\geq \gamma^{-n}c_n^n
|\cos(n\theta_n)x_{2n}-\sin(n\theta_n)x_{2n+1}|
\geq \frac{\delta^n}{2^{3/2}(n+1)(2n+1)}\to\ensuremath{+\infty};
\end{equation}
thus,
\begin{equation}
\text{$P_{\ensuremath{{\mathbf{U}}}}\ensuremath{{\mathbf{T}}}^n\ensuremath{\mathbf{x}}\to \boldsymbol{0}$,\, but not linearly with
rate $\gamma$.}
\end{equation}
In summary, these constructions illustrate that
when the Friedrichs angle is zero,
one cannot expect linear convergence of the
(projected) iterates of the Douglas--Rachford splitting operator.
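A finite truncation of this construction already exhibits the failure of a linear rate. The following sketch (assuming NumPy is available; the truncation size $K$ and the sample indices are arbitrary choices, since the genuine example is infinite-dimensional) takes $\theta_0=\pi/3$, $\theta_n=\pi/(4n)$, $x_n=1/(n+1)$ and observes that the ratios $\gamma^{-n}\|\ensuremath{{\mathbf{T}}}^n\ensuremath{\mathbf{x}}\|$ grow along $n$ once blocks with $c_n>\gamma$ are present.

```python
import numpy as np

K = 200  # number of 2x2 blocks retained in the finite truncation
thetas = np.array([np.pi / 3] + [np.pi / (4 * k) for k in range(1, K)])
cs = np.cos(thetas)

# x_n = 1/(n+1): block k holds the coordinates (x_{2k}, x_{2k+1})
blocks = np.array([(1.0 / (2 * k + 1), 1.0 / (2 * k + 2)) for k in range(K)])
block_norms = np.linalg.norm(blocks, axis=1)

def Tn_norm(n):
    # ||T^n x||: block k is scaled by c_k^n (the rotation preserves norms)
    return np.linalg.norm(cs**n * block_norms)

gamma = 0.99
ratios = [Tn_norm(n) / gamma**n for n in (50, 200, 800)]
# gamma^{-n} ||T^n x|| keeps growing: no linear rate gamma
assert ratios[0] < ratios[1] < ratios[2]
```

Since $c_n\to 1^-$, for any fixed $\gamma<1$ infinitely many blocks satisfy $c_n>\gamma$, which is what drives the growth seen above.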
\section{Comparison with the method of alternating projections}
\label{s:compare}
Let us now compare our main results (Theorems~\ref{t:main} and
\ref{t:main2})
with the \emph{method of alternating projections}, for which
the following fundamental result is well known.
\begin{fact}[Aronszajn]
\label{f:map}
{\rm (See \cite{Aron} or \cite[Theorem~9.8]{Deutsch}.)}
Let $x\in X$.
Then
\begin{equation}
(\forall n\geq 1)\quad
\|(P_VP_U)^nx-P_{U\cap V}x\| \leq
c_F^{2n-1}\|x-P_{U\cap V}x\|.
\end{equation}
\end{fact}
In Fact~\ref{f:map}, the rate $c_F$ is best possible (see
Fact~\ref{f:angle}\ref{f:angleAKW} and the
results and comments in \cite[Chapter~9]{Deutsch}),
and if the Friedrichs
angle is 0, then slow convergence may occur
(see, e.g., \cite{BDH}).
From Theorem~\ref{t:main}, Corollary~\ref{c:main2}, and Fact~\ref{f:map}, we see that
the rate of convergence $c_F$ of $(T^nx)_\ensuremath{{n\in{\mathbb N}}}$ to $P_{\ensuremath{\operatorname{Fix}} T}x$ and,
\emph{a fortiori}, of $(P_UT^nx)_\ensuremath{{n\in{\mathbb N}}}$ to $P_{U\cap V}x$,
is clearly \emph{slower} than the rate of convergence $c_F^2$ of
$((P_VP_U)^nx)_\ensuremath{{n\in{\mathbb N}}}$ to $P_{U\cap V}x$.
In other words, the Douglas--Rachford splitting method
appears to be twice as slow as the method
of alternating projections.
While this is certainly the case for the iterates $(T^nx)_\ensuremath{{n\in{\mathbb N}}}$,
the actual iterates of interest, namely $(P_UT^nx)_\ensuremath{{n\in{\mathbb N}}}$, in
practice often (somewhat paradoxically) make striking
non-monotone progress.
Let us illustrate this using the setup of Section~\ref{s:plane},
whose notation and results we will utilize.
Consider first \eqref{e:16} and \eqref{e:18} with
$x=e_0$ and $\theta={\pi}/{17}$.
In Figure~\ref{f1}, we show the first 100 iterates of
the sequences
$(\|T^n x\|)_\ensuremath{{n\in{\mathbb N}}}$ (red line),
$(\|P_UT^nx\|)_\ensuremath{{n\in{\mathbb N}}}$ (blue line), and $(\|(P_VP_U)^nx\|)_\ensuremath{{n\in{\mathbb N}}}$
(green line). The sequences $(\|T^nx\|)_\ensuremath{{n\in{\mathbb N}}}$ and
$(\|(P_VP_U)^nx\|)_\ensuremath{{n\in{\mathbb N}}}$, which are decreasing,
represent the distance of the iterates to
$0$, the unique solution of the problem.
While $(\|(P_VP_U)^nx\|)_\ensuremath{{n\in{\mathbb N}}}$ decreases
faster than $(\|T^nx\|)_\ensuremath{{n\in{\mathbb N}}}$, the sequence of ``shadows''
$(\|P_UT^nx\|)_\ensuremath{{n\in{\mathbb N}}}$ exhibits a curious non-monotone
``rippling'' behaviour
--- it may be quite close to the solution soon after
the iteration starts!
\begin{figure}[H]
\begin{center}
\includegraphics[height=3.0in]{fig05.pdf}
\end{center}
\caption{
The distance of the first 100 terms of the sequences
$(T^nx)_\ensuremath{{n\in{\mathbb N}}}$ (red), $(P_UT^nx)_\ensuremath{{n\in{\mathbb N}}}$ (blue), and
$((P_VP_U)^nx)_\ensuremath{{n\in{\mathbb N}}}$ (green) to the
unique solution.
\label{f1}}
\end{figure}
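The rippling is easy to reproduce directly from the closed form \eqref{e:16}; the following sketch (assuming NumPy is available) uses $x=e_0$ and $\theta=\pi/17$ as above and confirms that the shadow sequence dips close to the solution within the first few iterations before climbing again.

```python
import numpy as np

theta = np.pi / 17
c = np.cos(theta)
x1, x2 = 1.0, 0.0  # x = e_0

# ||P_U T^n x|| via the closed form for the shadow sequence
shadow = np.array([c**n * abs(np.cos(n * theta) * x1 - np.sin(n * theta) * x2)
                   for n in range(101)])

# non-monotone "rippling": an early dip far below later values
n_min = int(np.argmin(shadow[:17]))
# near n = 8 or 9, n*theta is close to pi/2, so cos(n*theta) is tiny
assert shadow[n_min] < 0.1 < shadow[16]
```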
We next show in
Figure~\ref{f2a} and Figure~\ref{f2b}
the first $100$ terms of
$(\|T^nx\|)_\ensuremath{{n\in{\mathbb N}}}$ and $(\|(P_VP_U)^nx\|)_\ensuremath{{n\in{\mathbb N}}}$, where
the angle is parametrized by
$\theta\colon[0,1]\to[0,\pi/2]\colon t\mapsto (\pi/2)t^3$
to exhibit more clearly the behaviour for small
angles. Clearly, and as predicted, smaller angles correspond to
slower rates of convergence.
\begin{figure}[H]
\begin{center}
\includegraphics[height=3.0in]{fig02Tn.pdf}
\caption{
The distance of the first 100 terms of the sequence
$(T^nx)_\ensuremath{{n\in{\mathbb N}}}$ to the unique solution when the angle ranges
between $0$ and $\pi/2$.
\label{f2a}
}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[height=3.0in]{fig03MAP.pdf}
\caption{
The distance of the first 100 terms of the sequence
$((P_VP_U)^nx)_\ensuremath{{n\in{\mathbb N}}}$ to the unique solution when the angle ranges
between $0$ and $\pi/2$.
\label{f2b}}
\end{center}
\end{figure}
In Figure~\ref{f3}, we depict the ``shadow sequence'' $(\|P_UT^nx\|)_\ensuremath{{n\in{\mathbb N}}}$.
Observe again the ``rippling'' phenomenon. While the situation of two lines
appears at first to be quite special, it turns out that the same
``rippling'' also arises in a quite different setting; see \cite[Figures 4 and 6]{BK13}.
\begin{figure}[H]
\begin{center}
\includegraphics[height=3.0in]{fig01.pdf}
\caption{
The distance of the first 100 terms of the ``shadow'' sequence
$(P_UT^nx)_\ensuremath{{n\in{\mathbb N}}}$ to the unique solution when the angle ranges
between $0$ and $\pi/2$.
\label{f3}
}
\end{center}
\end{figure}
The figures in this section were prepared in \texttt{Maple}\texttrademark
\ (see \cite{Maple}).
\section{Numerical experiments}
\label{s:numerical}
In this section, we compare
the Douglas--Rachford method (DRM) to the method
of alternating projections (MAP)
for finding $P_{U\cap V}x_0$.
Our numerical set up is as follows.
We assume that $X=\ensuremath{\mathbb R}^{50}$, and we randomly
generated $100$ pairs of subspaces $U$ and $V$ of $X$ such that $U\cap
V\neq\{0\}$.
We then chose 10 random starting points, each with Euclidean norm $10$.
This resulted in a total of 1,000 instances for each algorithm.
Note that the sequences to monitor are
\begin{equation}
\big(P_UT^nx_0\big)_\ensuremath{{n\in{\mathbb N}}}
\quad\text{and}\quad
\big((P_VP_U)^nx_0\big)_\ensuremath{{n\in{\mathbb N}}}
\end{equation}
for DRM and for MAP, respectively.
Our stopping criterion tolerance was set to
\begin{equation}
\varepsilon:=10^{-3}.
\end{equation}
We investigated two different stopping criteria, which we detail
and discuss in the following two subsections.
\subsection{Stopping criterion based on the true error}
We terminated the algorithm when the current iterate
of the monitored sequence $(z_n)_\ensuremath{{n\in{\mathbb N}}}$ satisfies
\begin{equation}
d_{U\cap V}(z_n)<\varepsilon
\end{equation}
for the first time.
Note that in applications, we typically would not have access to
this information but here we use it to see the true performance
of the two methods.
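A minimal version of this experiment can be sketched as follows (assuming NumPy is available; the subspace dimensions, random seed, and single starting point are illustrative choices, not the exact data behind the figures). Two subspaces with a guaranteed nontrivial intersection are built by sharing a common block of directions, and each method is run until the true error drops below $\varepsilon$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, extra, eps = 50, 2, 5, 1e-3  # ambient dim, dim(U cap V), extra dims, tolerance

def orth_proj(A):
    # orthogonal projector onto the column space of A
    Q, _ = np.linalg.qr(A)
    return Q @ Q.T

W = rng.standard_normal((d, k))  # shared directions: generically U cap V = span(W)
P_U = orth_proj(np.hstack([W, rng.standard_normal((d, extra))]))
P_V = orth_proj(np.hstack([W, rng.standard_normal((d, extra))]))
P_int = orth_proj(W)             # projector onto U cap V
I = np.eye(d)
T = P_V @ (2 * P_U - I) + I - P_U  # Douglas-Rachford operator

x0 = rng.standard_normal(d)
x0 *= 10 / np.linalg.norm(x0)      # starting point of Euclidean norm 10

def iters_until_true_error(step, monitor):
    # count iterations until d_{U cap V}(monitored iterate) < eps
    z, n = x0.copy(), 0
    while True:
        m = monitor(z)
        if np.linalg.norm(m - P_int @ m) < eps:
            return n
        z, n = step @ z, n + 1

n_drm = iters_until_true_error(T, lambda z: P_U @ z)    # DRM monitors P_U T^n x0
n_map = iters_until_true_error(P_V @ P_U, lambda z: z)  # MAP monitors (P_V P_U)^n x0
```

In finite dimensions $U+V$ is automatically closed, so both loops terminate after finitely many steps.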
\begin{figure}[htb]
\hspace*{0.1in}
\includegraphics[width=5.8in]{scrTAll.pdf}
\caption{True error criterion}
\label{f:TAll}
\end{figure}
\begin{figure}[!h]
\hspace*{0.1in}
\includegraphics[width=5.8in]{scrTMed.pdf}
\caption{True error criterion}
\label{f:TMed}
\end{figure}
In Figure~\ref{f:TAll} and Figure~\ref{f:TMed}, the horizontal axis
represents the Friedrichs angle between the subspaces and the vertical axis
represents the number of iterations. Results for all 1,000 runs are
presented in Figure~\ref{f:TAll}, while we show the {\em median}
in Figure~\ref{f:TMed}.
From the figures, we see that DRM is generally faster
than MAP
when the Friedrichs angle $\theta<0.1$.
In the opposite case, MAP is faster.
This can be interpreted as follows. Since DRM converges with
linear rate $c_F = \cos(\theta)$ while MAP does with rate $c_F^2$, we expect that
MAP performs better when $c_F$ is small, i.e., $\theta$ is large.
But when the Friedrichs angle is small, the ``rippling'' behaviour of DRM
appears to manifest itself (see also Figure~\ref{f1}).
Note that even when MAP is faster, it is not much faster than DRM, which suggests DRM as the better
overall choice.
\subsection{Stopping criterion based on individual distances}
In practice, it is not always possible to obtain the true error.
Thus, we utilized a reasonable alternative stopping criterion, namely when
the monitored sequence $(z_n)_\ensuremath{{n\in{\mathbb N}}}$ satisfies
\begin{equation}
\max\big\{d_U(z_n),d_V(z_n)\big\} < \varepsilon
\end{equation}
for the first time.
\begin{figure}[!h]
\hspace*{0.1in}
\includegraphics[width=5.8in]{scrMAll.pdf}
\caption{Max distance criterion}
\label{f:MAll}
\end{figure}
\begin{figure}[!h]
\hspace*{0.1in}
\includegraphics[width=5.8in]{scrMMed.pdf}
\caption{Max distance criterion}
\label{f:MMed}
\end{figure}
Figures~\ref{f:MAll} and \ref{f:MMed} show the results
when we use the max distance criterion with the same data.
The behaviour is similar to the experiments with the true error criterion.
The figures in this section were computed with the help of
\texttt{Julia} (see \cite{Julia}) and \texttt{Gnuplot} (see
\cite{Gnuplot}).
\section{Conclusion}
\label{s:conclusion}
We completely analyzed the Douglas--Rachford splitting method for the
important case of two subspaces. We determined the limit and the sharp rate
of convergence. Lack of linear convergence was illustrated by an example in
$\ell_2$. Finally, we compared this method to the method of alternating
projections and found the Douglas--Rachford method to be faster when the
Friedrichs angle between the subspaces is small.
\section*{Acknowledgments}
HHB was partially supported by a Discovery Grant and an Accelerator
Supplement of the Natural Sciences and Engineering
Research Council of Canada (NSERC) and by the Canada Research Chair Program.
JYBC was partially supported by CNPq and by projects UNIVERSAL and CAPES-MES-CUBA 226/2012. TTAN was partially supported by a postdoctoral
fellowship of the Pacific Institute for the Mathematical Sciences
and by NSERC grants of HHB and XW.
HMP was partially supported by NSERC grants of HHB and XW. XW was partially supported by a Discovery Grant of NSERC.
\small
Einstein spent the last years of his life trying
to build a geometric field theory, in two main series of trials, to unify gravity and electromagnetism, known in the literature as
\lq\lq Unified Field Theories\rq\rq.\, In the first of these attempts, \lq\lq Einstein's absolute parallelism theory\rq\rq, he used the Absolute Parallelism geometry (AP-geometry). In his second attempt, \lq\lq Einstein's non-symmetric theory\rq\rq,\, he used another type of non-symmetric geometry \cite{AE}. Unfortunately, all these attempts were unsuccessful or incomplete.
This quest for unification preoccupied Einstein in vain during the last decades of his life, as he tried to modify his basic equations of general relativity in an attempt to make additional room within the geometry of space-time for matter and force.
\smallskip
One of the successful attempts providing a unification of gravity and electromagnetism was accomplished by Mikhail and Wanas \cite{Mikhail}. The theory was formulated in the context of AP-geometry. Unlike Riemannian geometry, which has only ten degrees of freedom (in dimension 4), just enough to describe gravity, AP-geometry has sixteen degrees of freedom. These extra degrees of freedom make AP-geometry a suitable mathematical framework for describing gravity and electromagnetism on a geometric basis. It should be noted that this approach may be thought of as an alternative to the idea of
increasing the dimension of the underlying manifold, as in the Kaluza-Klein theory.
\smallskip
In this paper we construct a set of field equations in the context of AP-geometry under the
additional assumption that the canonical (Weitzenb\"{o}ch) connection is semi-symmetric. We refer to this space as SAP space. We will use the same Lagrangian applied in the Generalized Field Theory (GFT) of Mikhail-Wanas. The reason for this choice will be clarified later on. As expected, SAP-space, being subject to more restrictions, reveals physical properties that do not necessarily hold in the general context of AP-space.
Moreover, many relations obtained acquire a much simpler and tangible form than their counterparts in AP-geometry. In particular, our chosen Lagrangian, which is similar in form to its counterpart in the GFT, acquires a much
simpler form. This, in turn, largely simplifies the calculations and gives rise to some interesting and unexpected results.
\smallskip
The paper is organized as follows. Section
$1$ provides a brief account of the basic concepts of AP-geometry. Section $2$ gives a short survey of the notion of a semi-symmetric connection. Section $3$ gives a brief and concise outline of the GFT. In Section $4$, the variational principle is applied to our chosen Lagrangian, under the assumption that the canonical connection is semi-symmetric, and physical consequences of the obtained field equations are discussed. Section $5$ deals with a comparative study between AP-geometry and the GFT field equations, on one side, and SAP-geometry and our field equations, on the other side. The paper ends with some comments and concluding remarks (Section 6).
\section{Absolute Parallelism geometry}
Absolute parallelism geometry has gained more attention in recent years in constructing and applying modified gravity theories (such as
TEGR, cf., \cite{Nashed3})
and $f(T)$-theories (cf., \cite{Nashed1}, \cite{Nashed2}, \cite{Nashed4}).
In this section, we give a short survey of the absolute parallelism geometry or the geometry of parallelizable manifolds. For more details, we refer, for example, to \cite{W}, \cite{YE} and \cite{AMR}.\vspace{-5pt}
\begin{defn} A parallelizable manifold is an $n$-dimensional smooth
manifold $M$ which admits $n$ independent vector fields
$\,\undersym{\lambda}{i}$ $(i = 1, ..., n)$ defined globally on $M$.
\end{defn}
\vspace{-5pt}
This space is also known in the literature as Absolute Parallelism space (AP-space) or teleparallel space.
Let $ \,\undersym{\lambda}{i}^{\mu} \,\,\, (\mu = 1,2,...,n) $ be
the coordinate components of the \emph{i-th} vector field $\,\undersym{\lambda}{i}$. The Einstein summation convention is
applied to both Latin (mesh) and Greek (world) indices, where all
Latin indices are written beneath the symbols. The covariant components\, $\undersym{\lambda}{i}_{\mu}$\, of\, $\undersym{\lambda}{i}$\, are given by the relations
\begin{equation*}\label{covariant components of lambda}
\undersym{\lambda}{i}^{\mu} \,\, \undersym{\lambda}{i}_{\nu} = \delta^{\mu}_{\nu}, \quad \undersym{\lambda}{i}^{\mu} \,\, \undersym{\lambda}{j}_{ \mu} = \delta_{i j}.
\end{equation*}
The canonical (or Weitzenb\"{o}ck) connection $\Gamma^{\alpha}_{\mu\nu}$ is defined by
\begin{equation}\label{canonical}
\Gamma^{\alpha}_{\mu \nu} := \undersym{\lambda}{i}^{\alpha} \,\, \undersym{\lambda}{i}_{ \mu ,\nu},
\end{equation}
where the comma here denotes partial
differentiation with respect to the coordinate function $x^{\nu}$. As easily checked, we have
\begin{equation*}\label{The condition of absolute parallelism}
\lambda_{\mu | \nu} = 0, \quad {\lambda^{\mu}}_{|\nu} = 0,
\end{equation*}
where the stroke \lq \lq $|$" denotes covariant differentiation with respect to the canonical connection\, $\Gamma^{\alpha}_{\mu \nu}$.
The torsion tensor $\Lambda^{\alpha}_{\mu\nu}$ of $\Gamma^{\alpha}_{\mu \nu}$ is given as usual by
\begin{equation}\label{tor}
\Lambda^{\alpha}_{\mu \nu} := \Gamma^{\alpha}_{\mu \nu} - \Gamma^{\alpha}_{\nu \mu}.\end{equation}
On the other hand, the curvature tensor $R^{\alpha}_{\epsilon \mu \nu}$ of the canonical connection $\Gamma^{\alpha}_{\mu \nu}$ vanishes identically.
Hence, the AP-space is flat with respect to the canonical connection. However, there are
three other \emph{natural connections} which are non-flat. Namely, the dual connection
$\widetilde{\Gamma }^{\alpha}_{\mu \nu} := \Gamma^{\alpha}_{\nu \mu},$ the symmetric connection
$\widehat{\Gamma}^{\alpha}_{\mu \nu} := \frac{1}{2} (\Gamma^{\alpha}_{\mu \nu} + \Gamma^{\alpha}_{\nu \mu}) =\Gamma^{\alpha}_{(\mu \nu)}$
and the Levi-Civita connection
\begin{equation}\label{Riemannian AP}
\overcirc{\Gamma}{^{\alpha}_{\mu \nu}} := \frac{1}{2} g^{\alpha \epsilon} (g_{\epsilon \nu ,\mu} + g_{\epsilon \mu , \nu} - g_{\mu \nu , \epsilon})
\end{equation}
associated with the metric structure defined by
\begin{equation}\label{metric AP}
g_{\mu \nu} := \undersym{\lambda}{i}_{\mu} \,\, \undersym{\lambda}{i}_{\nu}.
\end{equation}
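The duality relations and the metric construction above can be checked with a random tetrad (a sketch assuming NumPy is available; the dimension and the data are arbitrary illustrative choices): given the covariant components, the contravariant ones are obtained by matrix inversion.

```python
import numpy as np

n = 4
rng = np.random.default_rng(2)
lam_cov = rng.standard_normal((n, n))  # row i: covariant components lam_i_mu
lam_con = np.linalg.inv(lam_cov).T     # row i: lam_i^mu, fixed by the duality relations

# duality: lam_i^mu lam_i_nu = delta^mu_nu and lam_i^mu lam_j_mu = delta_ij
assert np.allclose(np.einsum('im,in->mn', lam_con, lam_cov), np.eye(n))
assert np.allclose(np.einsum('im,jm->ij', lam_con, lam_cov), np.eye(n))

# the induced metric g_{mu nu} = lam_i_mu lam_i_nu is symmetric and nondegenerate
g = np.einsum('im,in->mn', lam_cov, lam_cov)
assert np.allclose(g, g.T) and abs(np.linalg.det(g)) > 1e-12
```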
The covariant derivatives with respect to the dual, symmetric and Levi-Civita connections will be denoted by $\widetilde{|}$, $\widehat{|}$ and \lq\lq$;$\rq\rq, respectively.
The contortion tensor is defined by anyone of the following equivalent formulae
\begin{equation}\label{cont}
\gamma^{\alpha}_{\mu\nu}:=\, \undersym{\lambda}{i}^{\alpha}\, \undersym{\lambda}{i}_{\mu;\,\nu}, \quad\gamma^{\alpha}_{\mu\nu}= \Gamma^{\alpha}_{\mu\nu}-\, \overcirc{\Gamma}{^{\alpha}_{\mu\nu}}.
\end{equation}
Since $\, \overcirc{\Gamma}{^{\alpha}_{\mu\nu}}$ is symmetric, it follows that
\begin{equation*}
\label{v}\Lambda^{\alpha}_{\mu\nu}= \gamma^{\alpha}_{\mu\nu}- \gamma^{\alpha}_{\nu\mu}.
\end{equation*}
Furthermore, the basic form $C_{\mu}$ is defined by
\begin{equation}
\label{basic} C_{\mu} : = \Lambda^{\alpha}_{\mu\alpha} = \gamma^{\alpha}_{\mu\alpha}.
\end{equation}
\vspace{8pt}
Table 1 summarizes the geometry of the AP-space \cite{AMR}.
\begin{center} Table 1: Geometry of the AP-space\\[0.2cm]
{\begin{tabular}
{|c|c|c|c|c|c|}\hline
\multirow{2}{*} {Connection}&\multirow{2}{*} {Coefficients}&Covariant
&\multirow{2}{*} {Torsion}&\multirow{2}{*} {Curvature}&\multirow{2}{*} {Metricity}\\
&& derivative&&&\\[0.2cm]\hline
Canonical&$\Gamma^{\alpha}_{\mu\nu}$&$|$
&$\Lambda^{\alpha}_{\mu\, \nu}$&0&metric\\[0.2cm]\hline
Dual&$\widetilde{\Gamma}^{\alpha}_{\mu\nu}$&$\begin{array}{cc}\tilde {}\\[-0.3cm]|\end{array}$
&$-\Lambda^{\alpha}_{\mu\, \nu}$&$\widetilde{R}^{\alpha}_{\mu\nu\sigma}$&non-metric
\\[0.2cm]\hline
Symmetric&$\widehat{\Gamma}^{\alpha}_{\mu\nu}$&$\begin{array}{cc}
\hat {}\\[-0.3cm]|\end{array}$&0&$\widehat{R}^{\alpha}_{\mu\nu\sigma}$&non-metric\\[0.2cm]\hline
Levi-Civita&$\overcirc{\Gamma}{^{\alpha}_{\mu\nu}}$&$;$&0&$\overcirc{R}{^{\alpha}_{\mu\nu\sigma}}$&metric\\[0.2cm]\hline
\end{tabular}}
\end{center}
\vspace{0.4 cm}
Table 2 gives a list of the most important second rank tensor fields of AP-geometry which play a key role in physical applications
(cf. \cite{Mikhail}, \cite{W1}, \cite{W2}). Moreover, most second rank tensor fields which have physical
significance in the AP-context can be expressed in terms
of these tensor fields. This table was first constructed by Mikhail \cite{FI}.
\vspace{0.05cm}
{\begin{center}{ Table 2: Fundamental second rank tensor fields of the AP-space}\\[0.2 cm]\
{\begin{tabular}{|c|c|}\hline
Skew-Symmetric& Symmetric
\\[0.2 cm]\hline
$\xi_{\mu\nu}: = \gamma_{\mu\nu} \!^{\alpha} \!\,_{|\alpha}$&
\\[0.2 cm]\hline
$\gamma_{\mu\nu}: = C_{\alpha}\gamma_{\mu\nu} \!^{\alpha}$&
\\[0.2 cm]\hline
$\eta_{\mu\nu} := C_{\alpha}\,\Lambda^{\alpha}_{\mu\nu}$&
$\phi_{\mu\nu} := C_{\alpha}\,(\gamma^{\alpha}_{\mu\nu}+\gamma^{\alpha}_{\nu\mu})$
\\[0.2 cm]\hline
$\chi_{\mu\nu} := \Lambda^{\alpha}_{\mu\nu|\alpha}$& $\psi_{\mu\nu}
:= \gamma^{\alpha}_{\mu\nu|\alpha}+\gamma^{\alpha}_{\nu\mu|\alpha}$\\[0.2 cm]\hline
$\epsilon_{\mu\nu} := C_{\mu|\nu} -
C_{\nu|\mu}$& $\theta_{\mu\nu} := C_{\mu|\nu}
+ C_{\nu|\mu}$\\[0.1 cm]\hline
{$\kappa_{\mu\nu} :=
\gamma^{\sigma}_{\alpha\mu}\gamma^{\alpha}_{\nu\sigma}
- \gamma^{\sigma}_{\mu\alpha}\gamma^{\alpha}_{\sigma\nu}$}&
{$\varpi_{\mu\nu}: =
\gamma^{\sigma}_{\alpha\mu}\gamma^{\alpha}_{\nu\sigma} +
\gamma^{\sigma}_{\mu\alpha}\gamma^{\alpha}_{\sigma\nu}$}
\\[0.2 cm]\hline
$ \ $&$\sigma_{\mu\nu} :=
\gamma^{\sigma}_{\alpha\mu}\gamma^{\alpha}_{\sigma\nu}$
\\[0.2 cm]\hline
$ \ $&$\omega_{\mu\nu} :=
\gamma^{\sigma}_{\mu\alpha}\gamma^{\alpha}_{\nu\sigma}$
\\[0.2 cm]\hline
$ \ $&$\alpha_{\mu\nu} := C_{\mu}C_{\nu}$
\\[0.2 cm]\hline
\end{tabular}}
\end{center}}
\vspace{8pt}
\section{Semi-symmetric canonical connection}
A linear connection on $M$ is said to be semi-symmetric (\cite{r57}, \cite{AMR}, \cite{Ebtsam thesis}) if its torsion tensor $T^{\alpha}_{\mu\nu}$ is written in the form $T^{\alpha}_{\mu\nu} = \delta^{\alpha}_{\mu} \,\omega_{\nu} - \delta^{\alpha}_{\nu}\, \omega_{\mu},$ where $\omega_{\gamma}$ are the components of an arbitrary differential 1-form $\omega$.
Let $M$ be an AP-space with parallelization vector fields $\,\undersym{\lambda}{i}$ and metric $g$ defined by (\ref{metric AP}). Hence, $(M,\,g)$ can be considered as a Riemannian space. \textbf{We assume that the canonical connection $\Gamma^{\alpha}_{\mu\nu}$ of the AP-space, given by (\ref{canonical}), is semi-symmetric}. An AP-space whose canonical connection is semi-symmetric will be referred to as an SAP-space.
From now on, we work in an SAP-space $(M,\,\undersym{\lambda}{i})$. Hence, the torsion tensor (\ref{tor}) is written in the form
\begin{equation}\label{localss}
\Lambda^{\alpha}_{\mu\nu} = \delta^{\alpha}_{\mu} \,\omega_{\nu} - \delta^{\alpha}_{\nu}\, \omega_{\mu},
\end{equation}
where $\omega_{\gamma}$ are the components of an arbitrary $1$-form $\omega$. Consequently, the basic form (\ref{basic}) is given by
\begin{equation}\label{c&w}
C_{\mu} = (1 - n) \, \omega_{\mu}.\end{equation}
Moreover, as the canonical connection is metric, it can be written in the form \cite{r57}
\begin{equation}\label{SScanonical}
\Gamma^{\alpha}_{\mu\nu} = \,\overcirc{\Gamma}{^{\alpha}_{\mu \nu}} + \delta^{\alpha}_{\mu}\, \omega_{\nu} - g_{\mu \nu}\, \omega^{\alpha},
\end{equation}
where $\,\overcirc{\Gamma}{^{\alpha}_{\mu \nu}}$ is the Riemannian connection (\ref{Riemannian AP}) and $\omega^{\alpha} := g^{\alpha \beta}\, \omega_{\beta}.$
Consequently, the contortion tensor (\ref{cont}) is given by
\begin{equation}\label{contortion S.S}
\gamma^{\alpha}_{\mu \nu} = \delta^{\alpha}_{\mu}\, \omega_{\nu} - g_{\mu \nu}\, \omega^{\alpha}.\end{equation}
\par
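The relations (\ref{localss})--(\ref{contortion S.S}) are purely algebraic and can be verified componentwise. The following sketch (assuming NumPy is available; the dimension, the metric and the $1$-form are random illustrative data) checks (\ref{c&w}) and the identity $\gamma^{\alpha}_{\mu\nu}-\gamma^{\alpha}_{\nu\mu}=\Lambda^{\alpha}_{\mu\nu}$.

```python
import numpy as np

n = 4
rng = np.random.default_rng(1)
omega = rng.standard_normal(n)   # components w_mu of the 1-form
delta = np.eye(n)
A = rng.standard_normal((n, n))
g = A @ A.T + n * delta          # a symmetric, invertible sample "metric"
w_up = np.linalg.inv(g) @ omega  # w^alpha = g^{alpha beta} w_beta

# semi-symmetric torsion: Lam^a_{mu nu} = d^a_mu w_nu - d^a_nu w_mu
Lam = (np.einsum('am,n->amn', delta, omega)
       - np.einsum('an,m->amn', delta, omega))

# basic form: C_mu = Lam^a_{mu a} = (1 - n) w_mu
C = np.einsum('ama->m', Lam)
assert np.allclose(C, (1 - n) * omega)

# contortion: gamma^a_{mu nu} = d^a_mu w_nu - g_{mu nu} w^a;
# its skew-symmetric part recovers the torsion
gamma = (np.einsum('am,n->amn', delta, omega)
         - np.einsum('mn,a->amn', g, w_up))
assert np.allclose(gamma - gamma.transpose(0, 2, 1), Lam)
```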
As the curvature tensor of the canonical connection vanishes, using (\ref{SScanonical}), the Riemannian curvature tensor\, $\overcirc{R}{^{\alpha}_{\mu\nu\sigma}}$ of\,\, $ \overcirc{\Gamma}^\alpha_{\mu\nu}$ is given by
$$\overcirc{R}{^{\alpha}_{\mu\nu\sigma}} = \delta^{\alpha}_{\mu}
(\omega_{\nu|\sigma} - \omega_{\sigma|\nu})
+ (g_{\mu\sigma}\, \omega^{\alpha} \ _{|\nu} - g_{\mu\nu} \,\omega^{\alpha} \
_{|\sigma}) + 2\omega^{\alpha}(g_{\mu\sigma}\,\omega_{\nu} -
g_{\mu\nu}\,\omega_{\sigma}).$$
Let $\,\overcirc{R}{_{\mu\nu}}:=\, \overcirc{R}{^{\alpha}_{\mu\nu\alpha}}$ be the Ricci tensor, then
\begin{equation}\label{the Ricci Riemannian curvature} \overcirc{R}{_{\mu\nu}} = \omega_{\nu|\mu} - g_{\mu\nu}\, \omega^{\sigma}\,_{|\sigma} +
2(\omega_{\nu}\,\omega_{\mu} - g_{\mu\nu}\,\omega^{\sigma}\,\omega_{\sigma}).\end{equation}
Consequently, the scalar curvature $\,\overcirc{R} := g^{\mu \nu}\, \overcirc{R}{_{\mu \nu}}$ is given by
\begin{equation}\label{the scalar Riemannian curvature} \overcirc{R} = (1 - n)({\omega^{\mu}\ _{|\mu}} + 2{\omega^{\mu}\omega_{\mu}}).\end{equation}
We have the following simple, interesting and unexpected result.
\begin{thm}\label{The second order covariant tensor}
The second order covariant tensor $\omega_{\nu|\mu}$ is symmetric:
\begin{equation}\label{der.commutes} \omega_{\mu|\nu} = \omega_{\nu|\mu}.\end{equation}
\end{thm}
\noindent The proof follows directly from (\ref{the Ricci Riemannian curvature}) and the fact that $\,\overcirc{R}{_{\mu\nu}}$ is symmetric: antisymmetrizing (\ref{the Ricci Riemannian curvature}) in $\mu$ and $\nu$ yields $0 = \overcirc{R}{_{\mu\nu}} - \overcirc{R}{_{\nu\mu}} = \omega_{\nu|\mu} - \omega_{\mu|\nu}$.
\begin{rem}
\em{In the context of AP-geometry, most of the geometric objects are expressed in terms of the torsion tensor $\Lambda^{\alpha}_{\mu\nu}$. By assuming that the canonical connection is semi-symmetric, it is found that most of the geometric objects are expressed in terms of the basic vector $C_{\mu}$ or the $1$-form $\omega_{\mu}$} (via (\ref{c&w})).
\end{rem}
\section{Generalized Field Theory}
The construction of a purely geometric theory unifying gravity and
electromagnetism, the Generalized Field Theory (GFT), was successfully
established by Mikhail and Wanas in 1977 \cite{Mikhail}. The GFT is formulated in
the context of AP-geometry. The sixteen degrees of freedom
of AP-geometry (in dimension four) make this geometry suitable
for describing the gravitational field, which needs ten degrees of
freedom, in addition to the electromagnetic field, which needs six
degrees of freedom.
\vspace{5pt}
We give here a brief outline of the GFT. For more details, we refer to \cite{Mikhail}. Beginning with the Lagrangian density
\begin{equation*}\label{Lagrangian}\mathcal{L}: = \det(\lambda)\,g^{\mu\nu}L_{\mu\nu} := \det(\lambda)\,g^{\mu\nu}(\Lambda^{\alpha}_{\epsilon\mu}\Lambda^{\epsilon}_{\alpha\nu} - C_{\mu}C_{\nu}),
\end{equation*}
where $\Lambda^{\alpha}_{\mu\nu}$ and $C_{\mu}$ are given by $(\ref{tor})$ and $(\ref{basic})$ and $\det(\lambda)$ denotes the determinant of the matrix $(\,\undersym{\lambda}{i}_{\alpha})$,
Mikhail and Wanas, using a certain variational technique, obtained the differential identity
\begin{equation*}
E^{\mu}\!\,_{\nu\widetilde{|}\mu} = 0.
\end{equation*}
Considering this identity as representing a certain conservation law, the field equations of the GFT are taken to be
\begin{equation}
\begin{split}\label{Vxx} E_{\mu\nu} :&= g_{\mu\nu}L - 2L_{\mu\nu} -
2(C_{\mu}C_{\nu} - C_{\nu|\mu}) + 2g_{\mu\nu}(C^{\epsilon}C_{\epsilon} - C^{\epsilon}\!\,_{|\epsilon})\\&
\ \ \ \, - \, 2(C^{\epsilon}\Lambda_{\mu\epsilon\nu} + g^{\epsilon\alpha}\Lambda_{\mu\nu\alpha|\epsilon})=0.
\end{split}
\end{equation}
The symmetric part of $(\ref{Vxx})$, can be written in the form
\begin{equation}\label{intx} R_{\mu\nu} - \frac{1}{2}\,g_{\mu\nu} R = T_{\mu\nu},\end{equation}
which may be considered as the Einstein field equations, where the energy-momentum tensor $T_{\mu\nu}$ is expressed in terms of the fundamental symmetric tensor fields of Table~2. Moreover, according to $(\ref{intx})$, $T_{\mu\nu}$ satisfies the conservation law
\begin{equation*}
T^{\mu\nu}\!\,_{;\,\mu} = 0.
\end{equation*}
On the other hand, the skew-symmetric part of (\ref{Vxx}) can be written in the form
\begin{equation}\label{curl}F_{\mu\nu} = C_{\mu, \nu} - C_{\nu, \mu}\end{equation}
where $F_{\mu\nu}$ is expressed in terms of the fundamental skew-symmetric tensor fields of Table 2. $F_{\mu\nu}$ may be interpreted as the electromagnetic field expressed as the curl of the basic form $C_{\mu}$.
In view of $(\ref{curl})$, $F_{\mu\nu}$ satisfies the (generalized) second Maxwell's equation
\begin{equation*}
{\mathfrak{S}_{\mu\nu\sigma}}\{ F_{\mu\nu;\,\sigma}\} ={\mathfrak{S}_{\mu\nu\sigma}}\{F_{\mu\nu, \sigma}\} = 0.
\end{equation*}
It should be noted that, in general, the gravitational and electromagnetic fields are not completely split unless we pass to the low-energy (weak field) approximation.
\bigskip
To sum up, the field equations obtained are nonsymmetric. The symmetric
part of the field equations contains a second order tensor
representing the material distribution. This tensor is a pure
geometric, not a phenomenological, object. The skew-symmetric part of the
field equations gives rise to a generalized form of Maxwell's
equations in which the electromagnetic field is, again, purely
geometric. The skew-symmetric part of the theory is gauge invariant. The GFT coincides with both Maxwell's and Newton's
theories in the limits of weak static fields and slowly moving
test particles \cite{MW}. In the GFT, the metric tensor field $g_{\mu\nu}$ plays the role of the gravitational potential, while the basic form $C_{\mu}$ plays the role of the electromagnetic potential. Finally, all physical objects involved are expressed in terms of the fundamental tensor fields of the AP-space (Table 2).
\section{Field equations and Physical consequences}
\hspace{12pt}
We here construct field equations in the context of SAP-geometry. We take for the field equations a Lagrangian similar in form to that used in the GFT (Section $3$). This is done for at least three reasons. First, the form of the chosen Lagrangian is relatively simple (it depends on the vector fields $\lambda_{\beta}$ and their first derivatives $\lambda_{\beta,\gamma}$, {\it which are assumed to be independent}). Second, such a Lagrangian has led to powerful theoretical and experimental results in the context of AP-geometry (cf. \cite{MR}, \cite{**}, \cite{W1}, \cite{W2}). Third, this choice facilitates comparison between our theory and the GFT.
\vspace{5pt}
Let M be an SAP-space with dimension $n \geq 3$\footnote{The reason for taking the dimension greater than two will be clarified later.}.
As is easily checked, using $(\ref{canonical}),\,(\ref{tor})$ and $(\ref{localss})$, the following relation holds:
\begin{equation}\label{yw}
\undersym{\lambda}{i}^{\alpha}\, \big(\,\undersym{\lambda}{i}_{\mu, \nu}-\undersym{\lambda}{i}_{\nu, \mu} \big) =\delta^{\alpha}_{\mu}\,\omega_{\nu}-\delta^{\alpha}_{\nu}\,\omega_{\mu}.
\end{equation}
Consequently, the curl of $\,\undersym{\lambda}{i}_{\mu}$ is given by
$$\,\undersym{\lambda}{i}_{\mu, \nu}-\undersym{\lambda}{i}_{\nu, \mu}=\,\undersym{\lambda}{i}_{\mu}\,\omega_{\nu}-\,\undersym{\lambda}{i}_{\nu}\,\omega_{\mu}.$$
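This follows from $(\ref{yw})$: transvecting $(\ref{yw})$ with $\,\undersym{\lambda}{j}_{\alpha}$ and using the relation $\,\undersym{\lambda}{i}^{\alpha}\,\undersym{\lambda}{j}_{\alpha}=\delta_{ij}$ (assumed here as in the standard AP-setting), we obtain
$$\,\undersym{\lambda}{j}_{\mu, \nu}-\,\undersym{\lambda}{j}_{\nu, \mu}=\,\undersym{\lambda}{j}_{\alpha}\,\big(\delta^{\alpha}_{\mu}\,\omega_{\nu}-\delta^{\alpha}_{\nu}\,\omega_{\mu}\big)=\,\undersym{\lambda}{j}_{\mu}\,\omega_{\nu}-\,\undersym{\lambda}{j}_{\nu}\,\omega_{\mu}.$$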
\bigskip
Now, we start with the following scalar Lagrangian. Let
\begin{equation}\label{scalarl}
\mathcal{H} := \det(\lambda) g^{\mu \nu} H_{\mu \nu},
\end{equation}
where $\det(\lambda)$ denotes the determinant of the matrix $(\,\undersym{\lambda}{i}^{\alpha})$\, and
$$H_{\mu \nu} := \Lambda^{\alpha}_{\epsilon \mu} \,\, \Lambda^{\epsilon}_{\alpha \nu} - C_{\mu} \,\, C_{\nu}.$$
We assume that $\,\undersym{\lambda}{i}_{\beta}$ and $\,\undersym{\lambda}{i}_{\beta,\gamma}$ are independent. The Euler-Lagrange equations corresponding to $(\ref{scalarl})$ are given by
\begin{equation}\label{EL}
\frac{\delta \mathcal{H}}{\delta\,\undersym{\lambda}{i}_{\beta}} := \frac{\partial \mathcal{H}}{\partial\,\undersym{\lambda}{i}_{\beta}} - \frac{\partial}{\partial x^{\gamma}} \left( \frac{\partial \mathcal{H}}{\partial\,\undersym{\lambda}{i}_{\beta,\gamma}}\right) = 0.
\end{equation}
In view of $(\ref{localss})$ and $(\ref{c&w})$, we have
\begin{equation}\label{Homega}
\mathcal{H} = (n-1)(2-n)\, \det(\lambda)\, \omega^{2},
\end{equation}where
$$ \omega^{2} = g^{\mu \nu}\, \omega_{\mu} \,\omega_{\nu} = \omega^{\nu} \,\omega_{\nu}. $$
It is clear from $(\ref{Homega})$ that $n=1$ or $n=2$ give the trivial result that $\mathcal{H}$ vanishes identically. It is for this reason that we take $n\geq3$.
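For completeness, we sketch the computation leading to $(\ref{Homega})$, using $\Lambda^{\alpha}_{\mu\nu}=\delta^{\alpha}_{\mu}\,\omega_{\nu}-\delta^{\alpha}_{\nu}\,\omega_{\mu}$ and $C_{\mu}=(1-n)\,\omega_{\mu}$:
$$\Lambda^{\alpha}_{\epsilon \mu} \, \Lambda^{\epsilon}_{\alpha \nu}=\big(\delta^{\alpha}_{\epsilon}\,\omega_{\mu}-\delta^{\alpha}_{\mu}\,\omega_{\epsilon}\big)\big(\delta^{\epsilon}_{\alpha}\,\omega_{\nu}-\delta^{\epsilon}_{\nu}\,\omega_{\alpha}\big)=(n-1)\,\omega_{\mu}\,\omega_{\nu},$$
while $C_{\mu}\,C_{\nu}=(1-n)^{2}\,\omega_{\mu}\,\omega_{\nu}$. Hence $H_{\mu\nu}=(n-1)(2-n)\,\omega_{\mu}\,\omega_{\nu}$, and contraction with $\det(\lambda)\,g^{\mu\nu}$ yields $(\ref{Homega})$.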
\vspace{5pt}
Using $(\ref{yw})$, $\omega_{\mu}$ can be expressed explicitly in terms of the $\lambda$'s in the form
\begin{equation}\label{omega is fn. on landa}
(1-n)\,\omega_{\mu}= \,\undersym{\lambda}{i}^{\alpha}\, \big(\,\undersym{\lambda}{i}_{\mu, \alpha}-\undersym{\lambda}{i}_{\alpha, \mu} \big).
\end{equation}
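Indeed, contracting $(\ref{yw})$ over $\alpha$ and $\nu$ (setting $\nu=\alpha$ and summing) gives
$$\,\undersym{\lambda}{i}^{\alpha}\, \big(\,\undersym{\lambda}{i}_{\mu, \alpha}-\undersym{\lambda}{i}_{\alpha, \mu} \big)=\delta^{\alpha}_{\mu}\,\omega_{\alpha}-\delta^{\alpha}_{\alpha}\,\omega_{\mu}=\omega_{\mu}-n\,\omega_{\mu}=(1-n)\,\omega_{\mu}.$$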
Relation $(\ref{omega is fn. on landa})$ is interesting as it represents a strong link between the AP-structure and our imposed condition. It should be noted that we started with an arbitrary $\omega$, but the AP-context forces $\omega$ to be a function of the parallelization vector fields $\undersym{\lambda}{i}$.
\vspace{5pt}
We now evaluate the constituents of the Euler-Lagrange equations $(\ref{EL})$. To accomplish this we need the following lemma.
\begin{lem}\label{1}
Let M be an SAP-space with dimension $n \geq 3$. Then the following identities hold:
\begin{description}
\item[(a)] $\displaystyle \frac{\partial\det(\lambda)}{\partial\,\undersym{\lambda}{i}_{\beta}} = \,\undersym{\lambda}{i}^{\beta} \det(\lambda)$.
\item[(b)] $\displaystyle \dd \frac{\partial\det(\lambda)}{\partial\,\undersym{\lambda}{i}_{\beta , \gamma }} = 0$.
\item[(c)] $\displaystyle \dd \frac{\partial\det(\lambda)}{\partial\, x^{\gamma}} = \det(\lambda)\,\, \undersym{\lambda}{k}^{\mu} \,\, \undersym{\lambda}{k}_{\mu , \gamma}$.
\item[(d)] $\displaystyle \frac{\partial \omega^{2}}{\partial\,\undersym{\lambda}{i}_{\beta}}= \frac{2}{1-n}\,\Big[\,\undersym{\lambda}{i}^{\beta}\,\omega^{2}+(n-2)\, \,\undersym{\lambda}{i}^{\alpha}\, \omega_{\alpha}\, \omega^{\beta} \Big]$.
\item[(e)] $\displaystyle \dd \frac{\partial\omega^{2}}{\partial\,\undersym{\lambda}{i}_{\beta , \gamma}} = \frac{2}{(1-n)} \, \Big[\,\undersym{\lambda}{i}^{\gamma}\, \omega^{\beta} - \omega^{\gamma}\, \undersym{\lambda}{i}^{\beta}\Big]$.
\end{description}
\end{lem}
\vspace{8pt}
Using (\ref{Homega}) and Lemma \ref{1}, we get:
\begin{lem}\label{2}
Let M be an SAP-space with dimension $n \geq 3$. Then the following identities hold:
\begin{description}
\item[(a)] $\displaystyle \frac{\partial \mathcal{H}}{\partial\,\undersym{\lambda}{i}_{\beta}} = (n-2) \,\det(\lambda)\, \left[ 2(n-2)\, \undersym{\lambda}{i}^{\alpha}\, \omega_{\alpha} \, \omega^{\beta}-(n-3)\, \undersym{\lambda}{i}^{\beta}\, \omega^{2} \right]$.
\item[(b)] $\displaystyle \frac{\partial \mathcal{H}}{\partial\, \undersym{\lambda}{i}_{\beta , \gamma}} = 2(n-2) \, \det(\lambda) \, \big [\,\undersym{\lambda}{i}^{\gamma}\, \omega^{\beta} - \omega^{\gamma} \, \undersym{\lambda}{i}^{\beta} \big].$
\item[(c)] $\displaystyle \dd \frac{\partial}{\partial x^{\gamma}} \Big( \frac{\partial \mathcal{H}}{\partial\, \undersym{\lambda}{i}_{\beta , \gamma}} \Big) = 2\,(n-2)\, \det(\lambda)
\Big[\,\undersym{\lambda}{k}^{\mu}\, \undersym{\lambda}{k}_{\mu , \gamma}\,(\,\undersym{\lambda}{i}^{\gamma}\, \omega^{\beta} - \,\undersym{\lambda}{i}^{\beta}
\, \omega^{\gamma}) + \,\undersym{{{\lambda}^{\gamma}}_{,\gamma}}{i}\,\omega^{\beta} +\, \undersym{\lambda}{i}^{\gamma}\, {\omega^{\beta}}_{,\gamma} - \undersym{{{\lambda}^{\beta}}_{,\gamma}}{i} \,\omega^{\gamma} - \,\undersym{\lambda}{i}^{\beta}\, {\omega^{\gamma}}_{,\gamma} \Big]$.
\end{description}
\end{lem}
\vspace{8pt}
Now, let us define the geometric object
\begin{equation}\label{E}
E^{\beta}_{\sigma}:= \frac{1}{\det(\lambda)} \left(\frac{\delta \mathcal{H}}{\delta \,\undersym{\lambda}{i}_{\beta}} \right) \,\undersym{\lambda}{i}_{\sigma}.
\end{equation}
Substituting the formulae of Lemma \ref{2} into (\ref{E}), using the definition (\ref{EL}) of $\dd \frac{\delta \mathcal{H}}{\delta\,\undersym{\lambda}{i}_{\beta}}$, we get
\begin{eqnarray*}\label{11075} \dd E_{\sigma}^{\beta}
&=&(n-2) \, \Big[ (n+3)\, \delta^{\beta}_{\sigma}\, \omega^{2}+2\,\delta^{\beta}_{\sigma}\,{\omega^{\gamma}}_{|\gamma}-2{\omega^{\beta}}_{|\sigma} \Big].
\end{eqnarray*}
The tensor character of $E_{\sigma}^{\beta}$ is clear. Lowering the index $\beta$ of $E^{\beta}_{\sigma}$, we obtain the field equations $E_{\mu\nu}=0$, where
\begin{equation}\label{11077} \dd E_{\mu\nu}= (n-2)\, \Big[(n+3)\, g_{\mu\nu}\,\omega^{2}
+2\,g_{\mu\nu}\,{\omega^{\gamma}}_{|\gamma}-2\,\omega_{\mu|\nu}\Big]. \end{equation}
By Theorem \ref{The second order covariant tensor}, $\,\omega_{\mu|\nu}=\omega_{\nu|\mu}$; hence $E_{\mu\nu}$ is symmetric.
Clearly, by $(\ref{the Ricci Riemannian curvature})$ and $(\ref{the scalar Riemannian curvature})$, the Einstein tensor $G_{\mu\nu}:=\,\overcirc{R}{_{\mu\nu}}- \frac{1}{2}\, g_{\mu\nu}\,\overcirc{R}$ can be expressed in terms of $\omega_{\mu}$ in the form
\begin{equation}\label{Gmunu}
G_{\mu\nu}=\omega_{\nu|\mu}+ \frac{n-3}{2}\,g_{\mu\nu}\,{\omega^{\gamma}}_{|\gamma}+ (n-3)\,g_{\mu\nu}\,\omega^{2}+ 2\,\omega_{\mu}\,\omega_{\nu}.\end{equation}
Taking $(\ref{Gmunu})$ into account, $(\ref{11077})$ takes the form
\begin{equation}\label{Ewithwn1}
\frac{1}{n-2}\,E_{\mu\nu}=G_{\mu\nu}-2\,\omega_{\nu}\,\omega_{\mu}+6\,g_{\mu\nu}\,\omega^{2}-3\,\omega_{\mu|\nu}-\frac{n-7}{2}\,g_{\mu\nu}\,
{\omega^{\gamma}}_{|\gamma}.
\end{equation}
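Equation $(\ref{Ewithwn1})$ can be verified directly: subtracting $(\ref{Gmunu})$ from $\frac{1}{n-2}\,E_{\mu\nu}$ as given by $(\ref{11077})$, and using $\omega_{\mu|\nu}=\omega_{\nu|\mu}$, we get
$$\frac{1}{n-2}\,E_{\mu\nu}-G_{\mu\nu}=\big[(n+3)-(n-3)\big]\,g_{\mu\nu}\,\omega^{2}+\Big[2-\frac{n-3}{2}\Big]\,g_{\mu\nu}\,{\omega^{\gamma}}_{|\gamma}-3\,\omega_{\mu|\nu}-2\,\omega_{\mu}\,\omega_{\nu},$$
and the coefficients reduce to $6$ and $-\frac{n-7}{2}$, respectively.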
In view of $(\ref{Ewithwn1})$, the field equations $E_{\mu\nu}=0$ give rise to
\begin{equation}\label{EwithCn2}
\qquad \qquad \qquad\,\overcirc{R}{_{\mu\nu}}- \,\frac{1}{2}\, g_{\mu\nu}\,\overcirc{R}\,=2\,\omega_{\nu}\,\omega_{\mu}-6\,g_{\mu\nu}\,\omega^{2}+3\,\omega_{\mu|\nu}+\frac{n-7}{2}\,g_{\mu\nu}\,
{\omega^{\gamma}}_{|\gamma}. \end{equation}
If we set
\begin{equation}\label{Tmunu}
T_{\mu\nu} :=2\,\omega_{\nu}\,\omega_{\mu}-6\,g_{\mu\nu}\,\omega^{2}+3\,\omega_{\mu|\nu}+\frac{n-7}{2}\,g_{\mu\nu}\,
{\omega^{\gamma}}_{|\gamma}, \end{equation}
then, (\ref{EwithCn2}) takes the form
\begin{equation}\label{Einsteineq}
\,\overcirc{R}{_{\mu\nu}}-\, \frac{1}{2}\, g_{\mu\nu}\,\overcirc{R}=T_{\mu\nu}. \end{equation}
Taking the divergence of $(\ref{Einsteineq})$ and using the (contracted) second Bianchi identity, one can see that $T_{\mu\nu}$ satisfies the equations
\begin{equation}\label{dif.Tmunu}
{T^{\mu\nu}}_{;\,\mu}=0.
\end{equation}
Hence, $T_{\mu\nu}$ can be interpreted as the energy-momentum tensor and (\ref{dif.Tmunu}) represents a conservation law. Unlike Einstein's field equations, the energy-momentum tensor defined by (\ref{Tmunu}) is purely geometric (Equations (\ref{Gmunu}) and (\ref{Tmunu}) show that both the Einstein tensor $G_{\mu\nu}$ and the energy-momentum tensor $T_{\mu\nu}$ are expressed solely in terms of the geometric objects $g_{\mu\nu}$ and $\omega_{\mu}$). Furthermore, the gravitational potential can be attributed to the metric tensor $g_{\mu\nu}$.
\vspace{5pt}
The electromagnetic field strength is given by
\begin{equation*}
\label{emfs} F_{\mu\nu}= C_{\mu,\nu}-C_{\nu,\mu},
\end{equation*}
where $C_{\mu}$ is the electromagnetic potential. In view of (\ref{localss}), (\ref{c&w}) and (\ref{der.commutes}), we obtain
\begin{equation*}
F_{\mu\nu} \equiv 0.
\end{equation*}
This automatically implies that the electromagnetic field, which is represented in the GFT by the skew-symmetric part of the field equations $E_{\mu\nu}=0$, vanishes identically.
\begin{rem}
\em{It is well known that if a linear connection (with covariant derivative~$||$) is symmetric, then $A_{\mu||\nu}-A_{\nu||\mu}=A_{\mu,\nu}-A_{\nu,\mu}$. It is interesting that such a relation holds here ($C_{\mu|\nu}-C_{\nu|\mu}=C_{\mu,\nu}-C_{\nu,\mu}$) though the canonical connection\, $\Gamma^{\alpha}_{\mu\nu}$\, is non-symmetric. This is due to the semi-symmetry condition.}
\end{rem}
We will refer to our theory, studying the GFT-Lagrangian in the SAP-context, as Special Generalized Field Theory (SGFT).
\section{A comparative study}
The fundamental second order tensors listed in Table 2 are necessary for physical applications. They can be used to determine what type of physical phenomena the geometry can describe (cf., for example, \cite{MW}).
\vspace{5pt}
\par
In our SGFT all fundamental skew-symmetric tensors vanish, namely,
$\xi_{\mu\nu},\, \gamma_{\mu\nu},$
$\,\eta_{\mu\nu},\,\chi_{\mu\nu},\,\epsilon_{\mu\nu}$ and $\kappa_{\mu\nu}$. For example, we have just shown that $\epsilon_{\mu\nu}:=C_{\mu|\nu}-C_{\nu|\mu}$ vanishes. Also,
$\eta_{\mu\nu} := C_{\alpha}\,\Lambda^{\alpha}_{\mu\nu}= (1-n)\,\omega_{\alpha}\,(\delta^{\alpha}_{\mu}\,\omega_{\nu}-\delta^{\alpha}_{\nu}\,\omega_{\mu})=(1-n)(\omega_{\mu}\,\omega_{\nu}-\omega_{\nu}\omega_{\mu})
=0,$
and similarly for the other tensors.
\par
The vanishing of the fundamental skew-symmetric tensors guarantees, even before any calculations, that the resulting field equations of the SGFT describe gravity only.
Table 3 compares the fundamental symmetric tensors in AP-geometry and SAP-geometry. These tensors take a much simpler form thanks to $(\ref{c&w})$ and $(\ref{contortion S.S})$.
\begin{center} { Table 3: Fundamental symmetric second order tensors}\\[0.18cm]
{
\begin{tabular}{|c|c|}\hline
{AP-geometry}&{SAP-geometry}\\[0.2cm]\hline
$\sigma_{\mu\nu}=\gamma^{\alpha}_{\sigma\mu}\,\gamma^{\sigma}_{\alpha\nu} $& $\sigma_{\mu\nu}=(n-1)\, \omega_{\mu}\,\omega_{\nu}$\\[0.3cm]\hline
$\omega_{\mu\nu}= \gamma^{\alpha}_{\mu\sigma}\,\gamma^{\sigma}_{\nu\alpha}$ & $\omega_{\mu\nu}=2(\omega_{\mu}\,\omega_{\nu}-g_{\mu\nu}\,\omega^{2})$\\[0.3cm]\hline
$\alpha_{\mu\nu}= C_{\mu}\,C_{\nu}$ & $\alpha_{\mu\nu}= (n-1)\,\sigma_{\mu\nu}$\\[0.2cm]\hline
$\theta_{\mu\nu}=C_{\mu|\nu}+C_{\nu|\mu}$ & $\theta_{\mu\nu}=2\,(n-1)\,\omega_{\mu|\nu}$\\[0.3cm]\hline
$\psi_{\mu\nu}=\gamma^{\alpha}_{\mu\nu|\alpha}+\gamma^{\alpha}_{\nu\mu|\alpha} $ & $\psi_{\mu\nu}= 2\,(\omega_{\mu|\nu}-2\,g_{\mu\nu}\,{\omega^{\alpha}}_{|\alpha})$\\[0.3cm]\hline
$\phi_{\mu\nu}=C_{\alpha}\,(\gamma^{\alpha}_{\mu\nu}+\gamma^{\alpha}_{\nu\mu})$ & $\phi_{\mu\nu}=(n-1)\,\omega_{\mu\nu}$ \\[0.3cm]\hline
$\varpi_{\mu\nu}= \gamma^{\alpha}_{\mu\sigma}\,\gamma^{\sigma}_{\alpha\nu}+\gamma^{\alpha}_{\nu\sigma}\,\gamma^{\sigma}_{\alpha\mu}$ & $\varpi_{\mu\nu}=\omega_{\mu\nu}$ \\[0.3cm]\hline
\end{tabular}}
\end{center}
\smallskip
Using the above table, we can write $(\ref{Tmunu})$ in terms of the fundamental symmetric second order tensors as follows:
\begin{equation}\label{T2}
T_{\mu\nu}=3\,\omega_{\mu\nu}- \frac{n-7}{8}\,\psi_{\mu\nu}- \frac{4}{n-1}\,\sigma_{\mu\nu}+ \frac{n+5}{8(n-1)}\,\theta_{\mu\nu}.
\end{equation}
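As a consistency check, substituting the SAP-forms of Table 3, namely $\omega_{\mu\nu}=2(\omega_{\mu}\,\omega_{\nu}-g_{\mu\nu}\,\omega^{2})$, $\sigma_{\mu\nu}=(n-1)\,\omega_{\mu}\,\omega_{\nu}$, $\theta_{\mu\nu}=2(n-1)\,\omega_{\mu|\nu}$ and $\psi_{\mu\nu}=2(\omega_{\mu|\nu}-2\,g_{\mu\nu}\,{\omega^{\alpha}}_{|\alpha})$, into $(\ref{T2})$ gives
$$T_{\mu\nu}=(6-4)\,\omega_{\mu}\,\omega_{\nu}-6\,g_{\mu\nu}\,\omega^{2}+\Big(\frac{n+5}{4}-\frac{n-7}{4}\Big)\,\omega_{\mu|\nu}+\frac{n-7}{2}\,g_{\mu\nu}\,{\omega^{\gamma}}_{|\gamma},$$
which coincides with $(\ref{Tmunu})$.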
Moreover, the cosmological function defined by
\begin{equation*}
\Lambda:= \frac{1}{2}(\sigma-\omega),
\end{equation*}
where $\sigma:= g^{\mu\nu}\,\sigma_{\mu\nu}$ and $\omega:= g^{\mu\nu}\,\omega_{\mu\nu}$,
has the form
\begin{equation*}\label{cosm}
\Lambda= \frac{3}{2}(n-1)\,\omega^{2}.
\end{equation*}
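Indeed, by Table 3, $\sigma= g^{\mu\nu}\,\sigma_{\mu\nu}=(n-1)\,\omega^{2}$ and $\omega= g^{\mu\nu}\,\omega_{\mu\nu}=2(1-n)\,\omega^{2}$, so that
$$\Lambda=\frac{1}{2}\,\big[(n-1)-2(1-n)\big]\,\omega^{2}=\frac{3}{2}(n-1)\,\omega^{2}.$$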
It should be noted that our energy-momentum tensor $T_{\mu\nu}$ and cosmological function $\Lambda$ take a much simpler form compared with those of the GFT. Moreover, the last equation implies that $\Lambda$ does not vanish.
\vspace{5pt}
Table 4 summarizes the most important results obtained so far.
\newpage
\begin{center} { Table 4: Comparison between SAP-geometry and AP-geometry}\\[0.25cm]
{\begin{tabular}{|c|c|}
\hline
{AP-geometry}&{SAP-geometry}\\[0.25cm]\hline
The most important tensor is $\Lambda^{\alpha}_{\mu\nu}$ & The most important tensor is $C_{\mu}$ \\[0.18cm]\hline
$C_{\mu}=0 \centernot\Longrightarrow \Lambda^{\alpha}_{\mu\nu}=0$& $C_{\mu}=0 \Longrightarrow \Lambda^{\alpha}_{\mu\nu}=0$ \\[0.18cm]\hline
$C_{\epsilon}\,\Lambda^{\epsilon}_{\mu\nu} \neq 0$ &$C_{\epsilon}\,\Lambda^{\epsilon}_{\mu\nu} = 0 $ \\[0.18cm]\hline
$C_{\mu|\nu}- C_{\nu|\mu}=C_{\mu,\nu}- C_{\nu,\mu}+C_{\epsilon}\,\Lambda^{\epsilon}_{\mu\nu}$&$C_{\mu|\nu}- C_{\nu|\mu}=C_{\mu,\nu}- C_{\nu,\mu}=0$ \\[0.18cm] \hline
For $\alpha \neq \mu$ and $\alpha \neq \nu$ simultaneously,&For all $\alpha \neq \mu$ and $\alpha \neq \nu$ simultaneously, \\[0.18cm]
$\Lambda^{\alpha}_{\mu\nu} \neq 0$ in general&$\Lambda^{\alpha}_{\mu\nu} = 0$ \\[0.18cm] \hline
Fundamental skew-symmetric tensors & Fundamental skew-symmetric tensors \\[0.1cm]
do not vanish & vanish \\[0.18cm] \hline
$E_{\mu\nu}=0$ describe & $E_{\mu\nu}=0$ describe\\[0.1cm]
gravity and electromagnetism &gravity only \\[0.18cm] \hline
\end{tabular}}
\end{center}
\vspace{6pt}
We end this section with Table 5 presenting a comparison between GR, GFT and SGFT.
\begin{center} {Table 5: Comparison between GR, GFT and SGFT}\\[0.25cm]
{\begin{tabular}
{|c|c|c|c|c|}\hline
{Field} & { Field} & {No. of} & {Field} & {Differential}\\
{Theory}& { Variables}& {Field} & {Equations}& {Identities}\\
& & {Variables}& & \\ [0.2cm]\hline
{ GR}&$g_{\mu\nu}$& $10$ &$G^{\mu}\,_{\nu} = 0$&$G^{\mu}\,_{\nu;\,\mu} = 0$\\[0.2cm]\hline
{ GFT}&$\,\undersym{\lambda}{i}^{\mu}$&$16$&$E^{\mu}\,_{\nu} = 0$&$E^{\mu}\,_{\nu\widetilde{|}\mu} = 0$\\[0.2cm]\hline
{SGFT}&$\,\undersym{\lambda}{i}^{\mu}$&$16$&$E^{\mu}\,_{\nu} = 0$&$ T^{\mu\nu}\,_{;\,\mu} = 0$\\[0.2cm]\hline
\end{tabular}}
\end{center}
\vspace{0.2cm}
\section{Concluding remarks}
\begin{itemize}
\item[$\bullet$]
In this work, we consider an AP-space in which the canonical connection is semi-symmetric. The field equations are constructed by applying a variational technique to a suitable Lagrangian defined in terms of the torsion and the basic form of the space.
\item[$\bullet$]
In view of (\ref{Homega}), the explicit dependence of the scalar Lagrangian on $n$ trivializes the cases $n = 1$ and $n = 2$.
Consequently, we consider only the values $n \geq 3$. It should be noted that the values $n =1$ and $n = 2$ are not forbidden, but
are excluded on the grounds that they imply the vanishing of $\mathcal{H}$.
Accordingly, without loss of generality, we may (and do) assume that $n\geq 3$.
\item[$\bullet$]
Equation $(\ref{omega is fn. on landa})$ tells us that the $1$-form $\omega_{\mu}$ can be expressed in terms of the vector fields forming the parallelization. This provides a strong link between the semi-symmetry condition and the AP-structure.
\item[$\bullet$]
The vanishing of the basic form is equivalent to the vanishing of the torsion tensor, that is,
$$C_{\mu}=0 \Longleftrightarrow \Lambda^{\alpha}_{\mu\nu}=0.$$
The implication $ \Lambda^{\alpha}_{\mu\nu}=0 \Longrightarrow C_{\mu}=0$ holds trivially. The reverse implication follows directly from the fact that $C_{\mu}= (1-n)\,\omega_{\mu}$ and $\Lambda^{\alpha}_{\mu\nu}= \delta^{\alpha}_{\mu}\, \omega_{\nu}-\delta^{\alpha}_{\nu}\, \omega_{\mu}$. This implication is not true in the general case of an AP-structure.
\item[$\bullet$]
Unlike AP-geometry, in which most of the geometric objects are expressed in terms of the torsion tensor, in the SAP-context, the basic form plays the role of the torsion tensor: most of the geometric objects can be expressed in terms of the basic form.
This is one of the reasons that geometric objects and geometric relations in the SAP-context
acquire a simpler form than their counterparts in the AP-context. This fact is due to the simplicity of the basic form (contracted torsion) compared with the torsion tensor.
\item[$\bullet$]
Table 4 is quite revealing and sheds light upon various properties that differentiate SAP-geometry from AP-geometry. It shows that some geometric objects, unlike their counterparts in AP-geometry,
vanish identically: in particular, the fundamental skew-symmetric tensors, some torsion components and, last but not least, the contraction of the basic form with the torsion tensor. Moreover, it gives an inverse implication that is not true in general.
\item[$\bullet$]
A crucial difference between our field equations and those of the GFT is that the Lagrangian in the GFT depends \emph{implicitly} on the dimension $n$ of the underlying manifold (being defined in terms of the $n$ vector fields forming the parallelization), whereas its counterpart in our field equations depends \emph{explicitly} on $n$.
\item[$\bullet$] Under the additional condition that the canonical connection is semi-symmetric, our chosen Lagrangian acquires a much simpler form compared to Mikhail-Wanas Lagrangian of the GFT. Moreover, this assumption gives rise to the interesting property that the field equations describing the electromagnetism disappear.
This is because, as stated above, all fundamental skew-symmetric tensors vanish, a property which obviously does not hold in the AP-context.
\item[$\bullet$]
Though the field equations under the semi-symmetry condition describe only gravity, they have a significant advantage over Einstein's field equations. Unlike the classical general theory of relativity, the energy-momentum tensor $(\ref{Tmunu})$ (or (\ref{T2})) has a geometric origin. All geometric entities in our field equations are expressed in terms of the metric $g_{\mu\nu}$ and the $1$-form $\omega_{\mu}$. The field equations obtained may be used in physical applications related to inflation and the state of the early universe.
\item[$\bullet$]
One of the possible reasons why our field equations describe only gravity is that our imposed condition of semi-symmetry seems to be too strong, hence too restrictive. We conjecture that, by relaxing the semi-symmetry condition, namely, replacing $$\Lambda^{\alpha}_{\mu\nu}= \delta^{\alpha}_{\mu}\,\omega_{\nu}-\delta^{\alpha}_{\nu}\,\omega_{\mu}$$ by $$\Lambda^{\alpha}_{\mu\nu}= L^{\alpha}_{\mu}\,\omega_{\nu}-L^{\alpha}_{\nu}\,\omega_{\mu},$$
where $L^{\alpha}_{\mu}$ is an arbitrary tensor field of type $(1,1)$, the field equations describing electromagnetism will not disappear. This point may be the subject of future research.
\item[$\bullet$] This work can be continued and extended. The equations of motion in the context of SAP-geometry can be studied. The field equations and the equations of motion under the assumption that the canonical connection is semi-symmetric can be studied in the more general context of GAP-geometry $\cite{GAP}$, EAP-geometry (\cite{amr}, \cite{quant}) or FP-geometry $\cite{Ebtsam}$.
\item[$\bullet$] Finally, in view of Theorem 1 of Yano \cite{r57}, the following result follows \cite{AMR}:
If $(M,\,\undersym{\lambda}{i})$ is an SAP-space, then the associated Riemannian metric $g_{\mu\nu}= \undersym{\lambda}{i}_{\mu} \,\, \undersym{\lambda}{i}_{ \nu}$ is conformally flat. Some physical consequences may arise from such a result. This needs more investigation.
\end{itemize}
\vspace{-1.5cm}
\providecommand{\bysame}{\leavevmode\hbox
to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}}
\providecommand{\href}[2]{#2}